
The Effects of Participatory Propaganda: From Socialization to Internalization of Conflicts

A look at how propaganda has been rewired for the digital age and how this new, “participatory propaganda” mediates conflict, manipulates relationships and creates isolation, both online and offline.

Published on Aug 07, 2019

In this essay, Gregory Asmolov, a Leverhulme Early Career Fellow at King’s College London and a scholar noted for his work on the Russian Internet (Runet), examines a new set of propaganda strategies emerging on social networks in Ukraine and Russia. He takes us on a conceptual journey from understanding how traditional propaganda has been “rewired” for the digital age to examining its methodologies and impact today. This new phenomenon of “participatory propaganda” seeks not only to persuade users to interpret events through a particular lens, but also to manipulate relationships, dividing friends, breaking alliances and leaving individuals isolated and tractable, online and offline.

— Ethan Zuckerman, Editor

Propaganda is no longer just a tool for changing your opinion. Now, in our digitally mediated world, propaganda is a pathway to instantaneous participation in political conflicts from the safety and comfort of your living room chair. It is also, ironically, now a tool for instantaneously breaking connections between friends and relatives whose opinions differ. Participatory propaganda helps to socialize conflicts and make them part of everyday life. This increasing scope of engagement can also lead to an internalization of conflict, which means that instead of encouraging you to filter alternative sources of information, participatory propaganda aims to reshape your cognitive filters as well as the relationship between you and your environment.

Introduction: Back in the USSR

It is October of 1986. I am one of 25 children in the pre-school group of a kindergarten in the Leninsky district of Moscow. It is the “quiet hour” in the middle of the day when children are supposed to nap, but I cannot sleep. I am very worried. Every evening, after the “Spokoynoy Nochi, Malyshi” (Goodnight, Kids) children’s show on television at 8:45 pm, I watch the evening news program “Vremya” (Time). Last evening I heard that our leader, Mikhail Gorbachev, is going to meet the American leader, Ronald Reagan, in Reykjavík. I cannot understand why Gorbachev is going there. I am sure the Americans are going to kill him. I am also sleepless because I am afraid of nuclear war. At this time “Star Wars” for me is not a movie, but a plan for American military aggression against the people of the Soviet Union.

So my parents offered me a new game. They gave me an old radio and taught me how to search for short-wave radio stations. Unlike our TV, which had only six buttons for six channels, the radio offered a range of voices in different languages. The purpose of the game was to scan the short waves and find Russian-speaking stations broadcasting from beyond the borders of the USSR; the so-called “Vrazheskie golosa” (Enemy Voices). It was quite tricky, since the tiniest movements of my fingers would sweep past these stations, and their wavelengths sometimes changed in order to avoid being jammed by the Soviet government.

I learned very quickly how to recognize Radio Liberty, the Voice of Israel, the Voice of America and the BBC. (Who could imagine that 30 years later I would have an office in Bush House, where the BBC Russian Service was broadcasting from at that time, and which is now part of King’s College London!) I really enjoyed my parents’ new game. For the first time in my life, I was actively involved in searching for news. I also started to sleep better during the “quiet hour” at kindergarten. Through that radio game I learned that the same events can be described in very different ways. Although I wasn’t able to understand many things, it highlighted the polyphony of voices and framings. I was lucky to have this experience just then, in 1986. Only a year later, “glasnost,” a new policy of media openness, began to influence Soviet TV, and the “enemy voices” lost their unique value as a window onto an alternative reality.

The image of me as a child sitting in my bedroom in front of the radio and searching for “enemy voices” comes back as I think about how the Internet has changed propaganda. In 1986 that old short-wave radio was a physical mediator between me as a user and the global environment. It brought new meanings directly into my bedroom. I didn’t know that what I heard was called “anti-Soviet propaganda.” Similarly, I didn’t know that the news I watched on TV was propaganda, either. What really mattered was the range of voices brought to me by these various tools of mediation.

Today, with the Internet, it is much easier to find alternative sources. The quality of information is often good, and there is no need for tiny movements of the fingers, although instead of the Soviet-style “glushilki” (jammers), we face new barriers such as packet filtering and state-sponsored censorship. These days, however, it seems that even a huge diversity of voices still does not help to challenge propaganda or increase critical thinking. One could suggest that, in order to address this puzzle, we need to focus not on the content of propaganda, but on its delivery, and to ask how the new technological tools used for the proliferation of propaganda change the relationship between users and their environment.

The Affordances of “Rewired Propaganda”: A Mediational Perspective

The comparison between a television set picking up six broadcast channels and a short-wave radio picking up hundreds highlights the difference between closed and open artifacts mediating the relationship between subjects and their environment. “Closed” artifacts transfer only limited streams of information, both in terms of the number of channels and of the scope of sources that can be covered by those channels. “Open” artifacts offer a window onto a limitless world of sources and an unrestricted number of channels. Propaganda has always been more at home in an isolated environment, where it need not compete with alternative sources and where it has a monopoly over shaping the perception of the audience. Counter-propaganda, in its turn, has tried to break this monopoly and find a way through the “curtain” of isolation, either by distributing printed matter (for example, dropping leaflets from the air) or by using radio, whose signal waves are notoriously unimpressed by national borders.

The emergence of the Internet, however, challenged the capacity of state actors to isolate any environment from external information. Some countries, such as North Korea and Turkmenistan, disconnected their local Internet from the global infrastructure in order to maintain that isolation. Others introduced advanced modes of filtering, such as what we now know as the “Great Firewall of China.” Russia chose a different path. From the outset, the Russian Internet, also known as Runet, developed as an independent space. Its development has been driven by imaginaries of alternative cultural, social, and political environments beyond the control of traditional political institutions (Asmolov & Kolozaridi, 2017). “The online sphere challenges how the Russian state has traditionally dominated the information heights via television” (Oates, 2016). This, however, sparked a new type of propaganda that would be effective despite the lack of state control over the information environment. This new type of propaganda, described by Sarah Oates as “rewired propaganda,” seeks to neutralize the Internet’s capacity to undermine authority and challenge the narratives of the state. In Oates’s formulation, “a commitment to disinformation and manipulation, when coupled with the affordances of the new digital age, gives particular advantages to a repressive regime that can pro-actively shape the media narrative.”

Rewired propaganda uses some traditional tools of Internet control, like filtering and censorship. But its novelty lies in its preference for more innovative models of propaganda, including sophisticated manipulation of information and computational propaganda (Woolley & Howard, 2016; Sanovich, 2017). Computational propaganda, in particular, relies on affordances that allow fake identities to be created by mutually reinforcing human and non-human agents, including disinformation agents and bots. For example, a believable inauthentic voice is created by an individual, then amplified by bots. These actors not only distribute content but also increase the visibility of information. They may also change the structure of discourse and increase its emotional sentiment. Human image synthesis technologies, which rely on AI and machine learning, provide the means to fabricate evidence, including “deep fakes,” where the line between what appears to be genuine and what is not has been eliminated (Edwards & Livingston, 2018). Less sophisticated tools for content editing allow these actors to create “shallow fakes,” in which an image is recontextualized or simply misrepresented (Johnson, 2019).
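The amplification mechanism can be made concrete with a minimal toy simulation. This is an illustrative sketch only: the engagement-ranked feed, the post counts and the probabilities are invented assumptions, not a model of any documented operation. It shows how a single human-authored post, given a small mechanical push by bots, can go on to capture mostly organic engagement.

```python
import random

# Toy model of bot amplification in an engagement-ranked feed.
# All parameters are illustrative assumptions, not empirical values.
random.seed(42)

posts = {f"post_{i}": 0 for i in range(10)}  # post -> share count
SEED = "post_0"    # inauthentic post written by a human operator
N_BOTS = 50        # coordinated bot accounts, one reshare each
N_ORGANIC = 500    # organic users, each resharing from the feed's top

# Phase 1: bots mechanically inflate the seed post's share count.
posts[SEED] += N_BOTS

# Phase 2: organic users tend to reshare whatever is already most
# visible, so the bot-inflated post keeps attracting genuine engagement.
for _ in range(N_ORGANIC):
    top3 = sorted(posts, key=posts.get, reverse=True)[:3]
    posts[random.choice(top3)] += 1

print(sorted(posts.items(), key=lambda kv: kv[1], reverse=True)[:3])
# The seeded post typically ends up far ahead: a small automated push
# translates into a large, mostly organic, visibility advantage.
```

The point is not the specific numbers but the structure: the bots’ contribution is small relative to total engagement, yet it determines which voice the organic crowd converges on.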

Defining Propaganda

Before we go deeper into our discussion of propaganda, we must first define what it is. One of the classical definitions of propaganda is “the management of collective attitudes by the manipulation of significant symbols” (Lasswell, 1927). A more detailed definition states that “Propaganda is the expression of opinions or actions carried out deliberately by individuals or groups with a view to influencing the opinions or actions of other individuals or groups for predetermined ends and through psychological manipulations” (Institute for Propaganda Analysis, 1937/1972). Yet another describes propaganda as “Communication designed to manipulate a target population by affecting its beliefs, attitudes, or preferences in order to obtain behaviour compliant with political goals of the propagandist” (Benkler et al., 2018). One may also argue that propaganda often incorporates the voice of the state and is driven by the interests of institutional hegemonic actors.

Three elements are central to these definitions: propaganda is intentional; it relies on manipulation, specifically through the use of misleading information; and its purpose is to support political goals by drawing out and managing behavior. The challenge, however, is to define which elements within propagandist messaging are misleading or manipulative. Addressing these questions is particularly difficult in the context of conflicts. What is considered propaganda by one side of a conflict would be treated by the other side as the legitimate “presentation of a case in such a way that others may be influenced” (Stuart, 1920) and as the dissemination of information for a justified cause. It is similarly challenging to “coherently distinguish ‘propaganda’ from a variety of other terms that refer to communication to a population that has a similar desired outcome: persuasion, marketing, public relations, and education” (Benkler et al., 2018).

In order to address some of these challenges, I focus my attention in this essay on one particular aspect of propaganda: its role in the mobilization of individuals and groups. Gustave Le Bon in 1903 was among the first to consider propaganda as a way to shape the opinions and beliefs of crowds in order to move those crowds towards specific goals. By 1965 Jacques Ellul was also focused on the link between propaganda and action, defining propaganda as “a set of methods employed by an organized group that wants to bring about the active or passive participation in its actions of a mass of individuals….” More specific models for the interrelationship between propaganda and desired action had already been mapped by George Bruntz (1938). For example, leaflets dropped from the air onto enemy soldiers can be viewed as a “propaganda of despair” intended to “break down the morale of the enemy,” and at the same time as a “propaganda of hope” intended to present to the enemy army and civilians a picture of a promised land they can enter if they will only lay down their arms.

Understanding propaganda as a way to drive a specific mode of action among a target audience highlights the dual role of propaganda. On the one hand, it seeks to shape a particular world view and offer a specific interpretation of something happening in the environment surrounding the subject. On the other hand, by relying on the symbolic dissemination of meanings, it also seeks to support or provoke an action by this subject that will impact and potentially change the environment in a specific way. This duality can be captured and conceptualized if we approach propaganda from a mediational perspective (Kaptelinin, 2014), in other words, as something that shapes the relationship between a subject and their environment. Relying on that approach, I offer a definition of propaganda that relies on a notion of mediation:

Propaganda is an intentional effort to shape the relationship between an individual target of information (the subject) and their environment (the object) by relying on the dissemination of symbolic meaning in order to support a particular course of the subject’s activity in relation to specific objects of activity.

In a nutshell, digital propaganda changes the relationship between users (subjects) and conflict (objects of users’ activity in their environment).

The relationship between subject and object has two directions. The first direction, from the world towards the subject, relies on the mediation of meaning. The second direction, from the subject towards the world, relies on the mediation of activity. Propaganda aims either to support or change an existing relationship to an object, or to construct a new object that requires the subject’s activity. The intentional construction of subject-object relationships may rely on manipulative psychological techniques, as well as on the dissemination of disinformation. The mediational perspective suggests that the discussion of digital affordances should focus on how new digital means of production and the proliferation of propaganda change the relationship between a subject and their environment. The relationship between digital users and conflicts is an example of a subject-object relationship. In that case, the mediational perspective explores not only how propaganda offers new frames and interpretations of different conflict-related events, but also illustrates the range of activities offered to users through digital tools in conflict situations.

I’ll note, however, that propaganda does not necessarily aim to construct an active relationship between subject and object. As pointed out by Ellul, one mode of activity is passivity, which is sometimes the mode that the propagandist desires. This often happens in cases where propaganda seeks to induce disorientation, a situation “in which the target population simply loses the ability to tell truth from falsehood or where to go for help in distinguishing between the two” (Benkler et al., 2018).

To sum up, propaganda is not only a way to change a person’s perception of the environment via symbolic means, but also a way to change the behavior of a target audience in order to change the environment. In this sense, mediation always acts in two directions: One, it aims to change the perceptions of the recipient/ target audience (a group of subjects). Two, it aims to shape the activity of the target audience in relation to the environment (or lack of action, should the activity need to be neutralized). In the past, these two processes were distinguishable from each other. First, a subject received a message via an artifact, either in public spaces (e.g. posters, cinema, newsstands or loudspeakers) or in private spaces (TV or radio receivers). The subject then chose to act in accordance with the message they received.

Digital affordances have now changed the structure of relationships between messaging that tinkers with the subject’s perception of the environment and the subject’s activity in relation to that environment. Digital platforms allow Internet users to not only consume information, but to also choose from a broad range of potential follow-on activities in relation to the objects whose perception is shaped by propaganda. In order to understand the effects of “rewired propaganda,” we need to look specifically at how the design of our digital information environment allows for new kinds of links between how subjects receive information and their activity after they receive it.

The Participatory Affordances of Propaganda

Over the last century, propaganda has gradually moved from open squares and public places to our homes. This process can be associated with the domestication of technologies, where the device that mediates meanings, particularly the TV, has continuously occupied domestic spaces (Silverstone, 1994). The boundaries of spaces in which we consume media have expanded further with the rise of mobile technologies including laptops and handheld devices. Maren Hartmann (2013) describes this trend as a shift from domestication to “mediated mobilism.”

As a consequence, propaganda infiltrates our most intimate spaces, where users interact with their laptops and mobile devices. The location of technological interaction is not simply the household, but the bed or sofa — spaces commonly associated with relaxation and entertainment. Propaganda moves from the living room to the bedroom, follows people as they travel to work on crowded public transport, and remains with them during office hours. We can wake up and fall asleep with propaganda in our hands. It finds us at the university, in the bathroom or on the beach.

Propaganda is also reshaped by the design of the spaces in which content is encountered and shared. Traditional media relied on physical artifacts such as newspapers or TV, so content consumption was mostly a solitary activity rather than a social one. Even when news consumption happened in a public place, for example, people listening to the radio outside in the square or friends or family watching TV news together, the media space and the social interaction space were separate. In contrast, the interactive nature of digital media removes the gap between the space where content is generated and distributed and the space where content is consumed and discussed. Social networking platforms combine news consumption with social interaction, turning social interaction into a mechanism of content proliferation and selective amplification (Zuckerman, 2018).

The integration of content generation/sharing and content discussion creates an immersive effect whereby users are unable to separate content consumption (and its impact on their lives) from their personal communication. In online environments, the consumption of propaganda is deeply embedded in the structure of social relations, which allows the propaganda to further infiltrate our everyday lives. More important are the ways social media and the spread of online content create opportunities for immediate action: spreading propaganda further, or taking other actions directly suggested by the propaganda.

Propaganda has often been linked to a desired mode of action, such as surrender or contributing one’s resources to a specific cause. Historically, however, the means of propaganda distribution and the means of action were separate and distinct. The target (or subject) of propaganda was first exposed to a message (via leaflet, poster, newspaper article, or broadcast message), which they subsequently acted upon. Due to the participatory nature of digital technologies, propaganda distribution, consumption, and participation often share the same platform and are mediated by the same digital devices (such as mobile phones or laptops). The person exposed to propaganda is also offered a selection of actions to carry out (often instantly) in the same virtual environment.

The consequences of these new participatory affordances are particularly visible in the context of conflicts. In his book iSpy: Surveillance and Power in the Interactive Era, Mark Andrejevic points out that “in a disturbing twist to the interactive promise of the Internet, once-passive spectators are urged to become active participants.” In this way, Andrejevic says, Internet users become citizen-soldiers when “we are invited to participate in the war on terrorism from the privacy of our homes and from our offices, or wherever we might happen to be.” David Patrikarakos analyzes a number of cases of digitally mediated citizen involvement in war and comes to the same conclusion: “In the world of Homo Digitalis, anyone with an Internet connection can become an actor in a war” (Patrikarakos, 2017).

Social Media and Propaganda

At least three novel aspects in the relationship between social media and propaganda are worth considering:

  1. Digitally mediated participation in the creation and proliferation of propaganda and various online content-related activities, including various forms of engagement with content (from commenting to complaining).

  2. Digitally mediated participation in online and offline action triggered by propaganda, beyond content-related activities and relying on various forms of crowdsourcing.

  3. The action of disconnection, using digital means to effect the immediate cutting of social ties, including unfollowing, unfriending or blocking.

The participatory nature of propaganda, particularly where propaganda is linked to a call to take part in propaganda efforts, has been well-documented. “Peer-to-peer propaganda” is a situation where “Ordinary people experience the propaganda posts as something shared by their own trusted friends, perhaps with comments or angry reactions, shaping their own opinions and assumptions” (Haigh et al., 2017). The same researchers argue that “States can rely on citizens’ do-it-yourself disinformation campaigns to maintain the status quo.” Mejias and Vokuev (2017) point out that “…social media can also give ordinary citizens the power to generate false and inaccurate information,” while “propaganda is co-produced by regimes and citizens.” Finally, Khaldarova and Pantti (2016) explore the participatory verification of data, where an online initiative such as the StopFake platform “mobilizes ordinary Internet users to engage in detecting and revealing fabricated stories and images on the Ukraine crisis,” and describe this as “Crowdsourced Information Warfare.”

It is important to differentiate between open and transparent calls to participate in the generation, proliferation, and verification of content in order to support your state, and various forms of clandestine or camouflaged online manipulation designed to trigger user participation. An illustration of an open call can be seen in the case of the Ukrainian I-army project launched by the Ukrainian Ministry of Information:

“In one year, we created a powerful army that defends us in the Donbas area. Now, it’s time to resist Russian invaders on the information front. Every Ukrainian who has access to the Internet can contribute to the struggle. Every message is a bullet in the enemy’s mind.”

A similar type of initiative could be seen on the Russian side. A website, “Internet Militia,” called on Internet users to take part in the defense of the Motherland:

“Even in five minutes you can do a lot. ‘Internet Militia’ is a news feed where links are accompanied by suggestions for direct action. For example, follow the link and leave a comment. What could be easier? Today, the participation of everyone who loves his Motherland is important.”

In many cases, however, user participation is driven not by open, direct calls, but by various forms of psychological manipulation. We can see forms of propaganda that drive user engagement through the sharing of emotional and imaginative content and so-called “fake news,” and through the activity of state-sponsored trolls and computational propaganda. We can also differentiate between volunteer and paid forms of user participation. Paid forms of participation (as in the case of the Chinese “50 Cent Party”) limit the scope of participants and usually operate in secret. In other cases, user-generated propaganda transforms from crowd participation into the targeted engagement of selected users who develop specific skills, as in the case of the Russian troll factory in Ol’gino, where “More than 1,000 paid bloggers and commenters reportedly worked only in a single building at Savushkina Street in 2015.” This illustrates the shift from crowdsourcing to the outsourcing of propaganda.

Crowdsourcing Conflict

The notion of crowdsourcing is particularly useful when analyzing participatory propaganda, as mobile devices are not only good tools for recirculating content, but also for mobilizing resources. When combined with crowdsourcing, propaganda offers a double effect. It not only builds awareness of the propaganda messaging, but also allows users to respond to propaganda issues at the same time and through the same channel. The range of user resources that can be mobilized by relying on digital mediation of propaganda is astounding and includes: sensor resources (for data collection); analytical resources (for data classification); intellectual resources (to build knowledge and skills); social resources (to engage more people around a specific goal); financial resources (also known as crowdfunding), and physical resources. Crowdsourcing also allows us to highlight how propaganda creates an emotional condition in the user, which in turn supports the mobilization of resources and reshapes the user’s priorities for future resource allocation.

Content-related activities, such as sharing, liking, commenting and complaining, can also be viewed as a form of crowdsourcing since the generation and proliferation of content also relies on the mobilization of user activity. Crowdsourcing as a concept is particularly helpful in showing how propaganda-driven digitally mediated activity goes beyond the usual content-related actions that take place online. The Russia-Georgia and Russia-Ukraine conflicts illustrate the range of potential activities in this context (Deibert et al., 2012; Hunter, 2018). This includes data-gathering for intelligence purposes, diverse forms of open-source intelligence analysis (OSINT), various forms of hacktivism, logistical support for different sides of a conflict, including the purchasing of military equipment through crowdfunding, and various forms of offline volunteering.

Some forms of participation are afforded by the increasing role of big data. For example, modern conflicts take place in an environment where all sides of the conflict, as well as the local population in the areas of conflict, generate conflict-related data. These data create new opportunities for gathering valuable intelligence, for informational as well as ground warfare. In that way, users have an opportunity to participate in data generation, collection, and analysis. Some users develop skills for open-source intelligence and create online data analytics communities. Examples include groups like the Ukrainian InformNapalm, the Russian Conflict Intelligence Team (CIT) and the UK-based Bellingcat (Toler, 2018). Members of these communities also teach others how to analyze conflict-related data. These community groups played a major role in confirming the presence of Russian soldiers in Ukraine despite denials by Russian leaders, in exposing the scale of casualties among Russian soldiers, and in investigating the downing of Malaysia Airlines flight 17.

Some data related to the Russia-Ukraine conflict are not available in open sources, but are still obtained by hackers on both sides of the conflict. Various forms of hacker activity include accessing restricted data or attacking websites that are considered enemy targets. Most aspects of hacktivism require some degree of advanced skill, though a broad range of Internet users can carry out hacking-related tasks using standard computing resources and tools that simplify participation. Members of “the crowd” successfully helped analyze hacked emails and other types of internal communication from the rival side of the conflict. That analysis fed into propaganda and counter-propaganda efforts by both sides, while also providing valuable intelligence.

Various crowdfunding initiatives sprang up on both sides of the conflict, and relied on social networks and blogs as well as dedicated websites. These crowdfunding efforts supported both traditional military units (particularly on the Ukrainian side) as well as volunteer units, with most of the funds collected being used to purchase military equipment and ammunition. Other crowdfunding efforts enabled offline engagement of Internet users. For example, by using the funds to purchase drones, some Ukrainian users were able to self-organize and establish volunteer groups for air reconnaissance (drone-based surveillance) in order to gather real-time intelligence.

Digital platforms also played a major role in the engagement and coordination of various types of warfare-related offline activities. A variety of Ukrainian groups relied on social networks, messaging apps, and crowdsourcing platforms to coordinate logistical support for volunteer battalions and military units. On the Russian side, dedicated Vkontakte groups as well as the website Dobrovolec.org (“volunteers”) helped coordinate opportunities for volunteers to join pro-Russian paramilitary units in eastern Ukraine. And social media on both sides of the conflict allowed users to provide humanitarian assistance to people displaced by the conflict.

These examples of digitally mediated user resource mobilization illustrate the increasing scope of users’ participation in conflict. These forms of participation were shaped by the perception of the conflict as it was communicated via digital media on both the Russian and Ukrainian sides. I’ll also note that the scope of participation on the Ukrainian side was broader, due to a shared understanding that the state was under threat of Russian aggression, and because of the limited capability of the Ukrainian traditional military to provide an adequate response during the initial phases of the conflict. To some extent, Ukrainian users formed a digitally mediated ecosystem of participation where various forms of conflict-related activity supported one another.

The Ukrainian case demonstrates that digital platforms were effective in supporting users’ participation in conflict, not only due to the connection between the calls to action and the affordance of participation, but also because digital networks exposed the inability of traditional institutions to offer an adequate response to such an external threat. Therefore, one may argue that users’ participation was driven not only by the state’s propaganda but also by narratives related to the absence of the state. One may also argue that propaganda as a strategy to shape the relationship between people and conflict aims not only to support people’s engagement, but also to control the scope of participation. On the Russian side of the conflict, the scope of users’ participation was mostly limited to online content-related activities (such as commenting, liking, sharing, etc.) and crowdfunding, while on the Ukrainian side, the scope of participation was substantially broader and went beyond the state’s control.

While participatory propaganda and crowdsourced participation leverage the non-geographic nature of digital content to place production and action in the same channel — a channel that pervades all physical and social spaces in human life — they are not the truly disruptive faces of this phenomenon. More disturbing is propaganda that seeks disconnection. Bruntz (1938) argued that one type of propaganda is “particularist propaganda,” which seeks to divide the members of a target audience. Christopher Wylie, a whistleblower who revealed information about Cambridge Analytica’s operations, points out that disconnection is one of the main elements of the Breitbart doctrine that was shaped by Steve Bannon. Wylie says, “If you want to fundamentally change society, you first have to break it. And it’s only when you break it is when you can remold the pieces into your vision of a new society.” (Source: the documentary “The Great Hack”).

Digital Disconnection

Disconnection shapes the boundaries of social networks and consequently their social structure. It is easy to forget that before the digital age, disconnection from a friend required either face-to-face action, such as a refusal to shake hands, or time-consuming mediated action such as sending a letter. Online social networking sites (SNSs) offer not only easier ways to make friends, but also easier ways to unmake them. The affordance of disconnection depends on the particular design of a social networking site. For example, on Twitter a user can be unfollowed, muted, blocked and/or reported. On Facebook, one of the most common acts of disconnection is unfriending.

It is very easy to cut social ties online, as most of us know by now. And, like other types of digitally mediated activities, the disconnection takes place in the same domain where the messages are distributed. Because of this, when political messages including propaganda are pushed out, they can be followed by an immediate act of disconnection, particularly since other users take an active role in the generation and proliferation of the content. So propaganda can not only influence users’ perception of a situation and trigger activity around it; it also shapes how we perceive other users within the situation. When we receive propaganda via social networks, we are forced to decide whether the sender should remain part of our social network.

Facebook’s design offers a fruitful environment for disconnection since it enables the “sharing [of] the same conversations with highly different audiences” (Schwarz and Shani, 2016). And because people are exposed to the political opinions of their Facebook friends, as well as other bits of information they may not have been privy to otherwise, propaganda becomes an effective tool for disconnection and polarization. Nicholas John and Shira Dvir-Gvirsman (2015) argue that Facebook unfriending can be considered “a mechanism of disconnectivity that contributes to the formation of homogeneous networks.” The constant production of categories used to divide social groups into “us” and “them,” as well as disconnection between members of these groups, can be viewed as a long-term impact of propaganda. That is, the impact of messages can be seen in changes to social structure and goes beyond the specific context of the situation that triggers unfriending. In the case of the Russia-Ukraine conflict, considerable evidence suggests that the conflict had a deeply destructive impact on strong ties, including those between relatives, close friends and classmates. It mainly affected relationships that had been developed long before the conflict (Asmolov, 2018).
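The homogenizing effect of unfriending that John and Dvir-Gvirsman describe can be illustrated with a toy simulation. All the numbers here are invented assumptions for illustration, not data from their study: start with a random friendship network split between two positions on a conflict, then repeatedly sever a fraction of the ties that cross the divide.

```python
import random

# Toy model of opinion-driven unfriending producing a homogeneous network.
# Population size, tie density and unfriending rate are illustrative only.
random.seed(1)
N, P_TIE, P_UNFRIEND, ROUNDS = 100, 0.1, 0.2, 10

opinion = [random.choice(["pro", "contra"]) for _ in range(N)]
edges = {(i, j) for i in range(N) for j in range(i + 1, N)
         if random.random() < P_TIE}

def cross_share(es):
    """Fraction of ties connecting users with opposing positions."""
    return sum(opinion[i] != opinion[j] for i, j in es) / len(es)

print(f"cross-opinion ties before: {cross_share(edges):.0%}")
for _ in range(ROUNDS):
    # Each round, every tie across the opinion divide risks being cut.
    edges = {(i, j) for i, j in edges
             if opinion[i] == opinion[j] or random.random() > P_UNFRIEND}
print(f"cross-opinion ties after:  {cross_share(edges):.0%}")
# Only same-opinion ties reliably survive, leaving the homophilous
# network structure that disconnective propaganda helps to produce.
```

No individual act in this sketch looks dramatic, yet repeated small acts of disconnection are enough to hollow out the cross-cutting ties on which a heterogeneous public depends.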

The type of social relationship most affected by disconnective practice was that between former classmates. Many platforms and groups support relationships between classmates, including Facebook and a social network called Odnoklassniki (“classmates”) that is popular among users aged 40 and over. Through these platforms, many people who shared the same schoolroom decades ago found themselves on different sides of the Russia-Ukraine conflict. One Facebook user reported that she unfriended two of her classmates because of their position on the situation in Crimea. Another user from Ukraine described on Facebook an experience of chatting with classmates from Russia on WhatsApp. When his classmates discovered that he lives in Ukraine, they began discussing the conflict and eventually tried to ban him from the chat. A Ukrainian user, Irina Anilovskaya, published a book in 2014 describing the experience of conflict-driven disconnection between people who were once close friends. In the book, Irina describes a two-day exchange of online messages between herself and her classmate Alexander, who lives in Russia. The story begins with a friendly discussion of events and ends when Alexander and Irina accuse each other of being “zombies” and “people who are afraid of the truth.” They bid each other farewell forever.

The Effects of Participatory Propaganda

What do these new digital affordances actually do to us as individuals? And what are the effects of participatory propaganda on our individual and collective psyches? Propaganda that relies on the participatory design of digital networks is best explained by looking at the link between two interrelated processes: the socialization of political conflicts and the internalization of political conflicts.

The notion of “conflict socialization” was introduced by E. E. Schattschneider (1975), who argues that “the outcome of all conflict is determined by the scope of its contagion,” while “the number of people involved in any conflict determines what happens.... every increase or reduction in the number of participants, affects the result.” The notion of the scope of contagion highlights the role of the crowd in the context of political conflicts. Schattschneider notes that, “Nothing attracts a crowd as quickly as a fight. Nothing is so contagious.” Schattschneider and others (Coser, 1956) also highlighted many years ago how political actors can control and manipulate a conflict for their own purposes.

Today the digital public sphere offers a new set of tools for the manipulation and control of citizen engagement in conflicts. The socialization of conflict is now driven by the content proliferated through social networks, as well as through the digital affordances of online platforms that offer a range of responses to conflict. The role of content in the socialization of conflicts relies on the distinctive nature of social networking platforms that combine the consumption of news with social interaction, and makes social interaction a mechanism of content proliferation. New information technologies — social networks and crowdsourcing practices — also enable the geographically unrestricted “socialization of conflict.” They provide an option not only of “watching together” but also of “acting together.” In other words, users can participate in a conflict a continent away without ever leaving the safety and comfort of their bedrooms.

As a result, one may argue that propaganda has become less interested in changing people’s opinion about a specific object, or in convincing people that a particular account is true or false. The main purpose of 21st-century propaganda is to increase the scope of participation in relation to the object of propaganda. In a digital environment that relies on user participation, propaganda is a technology of power that drives the socialization of conflicts and a tool for increasing the scope of contagion. While participation in political debates is often considered an important feature of democracy, propaganda allows those who generate it to define the structure and form of participation in a way that serves only their interests, minimizing the constructive outcomes of participation. In addition, the focus on propaganda as a driver of participation could be considered a meeting point between political and commercial interests, since increasing engagement with a given object of content is a path towards more pageviews and more surrender of personal data. In that sense, propaganda serves not only political actors, but also platform owners.

Increased participation in political conflicts also has effects on both the individual and the collective psyche. This is highlighted by the notion of internalization, developed originally as part of developmental psychology (Vygotsky, 1978). “Internalization of mediated external processes results in mediated internal processes. Externally mediated functions become internally mediated” (Kaptelinin and Nardi, 2006). Through internalization, external cultural artifacts are integrated into the cognitive process and help to define our human relationship with reality. For example, using maps gradually transforms the way we think about our environment and how we navigate it. In other cases, “likes” and emojis have been internalized and have become ingrained in our attitude toward a specific object when we think about it. The way we see things translates into our activities in relation to our environment, but the reverse is also true: Our relationship with our environment is shaped by participatory affordances and by the design of digital networks. In that way, digitally mediated participation in conflict is linked to the development of cognitive filters that shape the way we perceive social reality.

The role of internalization can be seen in the ways in which we think about conflicts and how we consider various objects in the context of a conflict. This suggests that participatory technologies that offer a broad range of ways to participate in conflict not only increase the socialization of conflict (meaning an increase in the scope of participation), but also create a psychological change in users. The latter process is conceptualized here as the internalization of conflict. This internalization means that the participatory design of social networks shapes not only our views on a specific issue, but our perception of our environment in general. Aleksandr Shkurko tells us that “social cognition is fundamentally categorical” (2014). According to Shkurko, “we perceive others and regulate our behaviour according to how we position ourselves and others in the social world by ascribing them to a particular social label.” In that light, internalization means that digitally-native propaganda is able to shape the structure of categorization.

As a result, digital participatory propaganda shapes our relationship with our environment beyond any specific topic (object); it changes the apparatus of cognitive optics that structures our perception of everyday life. A conflict encountered through digital propaganda becomes a point of reference for the classification of a broad spectrum of events and social interactions. It shapes interpretative frameworks in a variety of situations that are not related directly to the conflict.

Nobel Laureate Joseph Brodsky noted that a humanistic classification of others should rely not on abstract categories of a person’s nationality, culture, religion or political beliefs, but primarily on very specific categories that are related to their deeds; i.e. whether they are greedy or not, kind or not, cowardly or not. Conflict-driven social classification diminishes the role of individual deeds in shaping the structure of social relations and allows institutional actors and state-sponsored media to impose a dominant structure of classification. For example, friends, relatives, former classmates, and co-workers come to be judged not on previous interactions, common experiences, their professionalism or their values, but on their positions with regard to the conflict. The activities of everyday life, whether related to work or just a common experience on the street, as well as personal frustrations and joys, are examined through the lens of a conflict. A birthday party or family meeting turns into a discussion of the conflict, which either concludes satisfactorily because everyone agrees about the conflict, or transforms into an unpleasant and even hostile encounter if one or more individuals disagree.

One outcome of internalization is the destruction of social ties between friends by means of disconnection. It is not so much that the shape of social categories shifts, but that certain categories become increasingly significant when it comes to classification of everyday life events and social relationships. Individuals begin to view everything through a conflict-oriented cognitive filter, including issues not at all related to the conflict. Internalizing the conflict — allowing it to reshuffle the relevance of one’s social categories — supports the socialization of the conflict, through recirculating propaganda and mobilizing resources towards crowdsourced warfare projects. In that way, the internalization and socialization of conflict mutually support and reinforce each other.

Figure 1 illustrates how these processes are interrelated. Digital platforms mediate a relationship between a user in Russia or Ukraine (the subject of propaganda) and the various aspects related to perception of the conflict (the object of propaganda), e.g. the Russian annexation of Crimea. The tools that mediate these relationships offer the user a broad range of conflict-related forms of participation, from proliferation of conflict-related content to crowdfunding, hacktivism, and online volunteering. This is conflict socialization. In turn, the participation of a user also contributes to an increase in the prominence of conflict in the user’s everyday life, and specifically the way conflict-related judgments shape the user’s perception of their social circles and the environment beyond the conflict. The outcome, as we discussed earlier, is that former classmates, friends, and relatives begin to be identified primarily by their position vis-à-vis the conflict. The categories of that position are imposed by propaganda embedded in the news feeds of social networks, whose effect is multiplied by commenting, sharing, and generating additional propaganda-related content. Those who hold an opposing opinion are excluded from social circles. This is an outcome of conflict internalization.

Figure 1: The mechanism of digitally mediated participatory propaganda

Internalization explains the most insidious aspect of digital propaganda: the transformation in users’ cognitive structure that manifests as a shift in their classification structure. This shift, usually towards binary thinking — seeing the world in terms of whether you support or oppose the Russian statement “Crimea is ours” — affects all spheres of the user’s social relations and perceptions of the world far beyond the specific topic of propaganda. The collective and the individual psyche are interrelated. One may suggest that the more propaganda has been socialized, the more it is internalized by the subject and reproduced within the subject. Digitally mediated participation in propaganda-related activities makes propaganda a part of our “inner space” and allows it to define our perception of reality from within.

Conclusion: Beyond the USSR

Back in the USSR, propaganda sought to ensure that the state controlled the way its citizens perceived reality and mobilized their resources. This control was achieved by relying on a monopoly over informational sources. The purpose of Western “counter-propaganda” was to break the walls of informational isolation. The radio that I used as a child to search for “enemy voices” was actually my Internet — an opportunity to look for information in an open environment beyond the walls.

More than 30 years later we live in a significantly different information environment. Thanks to the proliferation of the Internet, states like Russia are not able to control the information environment by limiting the range of sources. Despite infrastructural support and major financial investment, state-sponsored TV channels have become less popular than YouTube (Ostrovsky, 2019). In addition, thanks to social networks and messenger services, personal communication relies on horizontal networks and is not limited by any physical borders. In the “space of flows,” as conceptualized by Manuel Castells (1999), information technologies challenge the state’s sovereignty not only over its territory but also, and significantly, over its citizens. In the multicultural and global information environment, state actors have no effective tools that allow total isolation of their citizens from a broad range of sources (with the exception of North Korea and Turkmenistan). Complete control over the information space through filtering and blocking is very hard to achieve.

The threat that comes from new information technologies was identified by some states at a very early stage. The first document signed by Russian President Vladimir Putin in 2000 was the Information Security Doctrine, which addressed new information technologies as a potential threat to political and social stability. Concern over the loss of control in the new media environment is manifest in the way the Russian authorities try to regulate the Internet. The concept of a “sovereign Internet” seeks to equalize the scale of control over cyberspace with the scale of control over offline space. But it seems that most traditional approaches to re-creating various forms of isolation, at least within the Runet, are failing.

The need to compensate for the loss of control over the media environment and social interactions between people has required new approaches. These seek not to restrict new information technologies, but to build on new digital affordances, which offer a direct link between propaganda and the mobilization of the resources of digital crowds. New forms of propaganda harness the participatory design of social networks, crowdsourcing and the affordances of disconnectivity. They flourish in an environment where news cannot be separated from interpersonal communication.

The purpose of the new propaganda is neither the production of reality nor of unreality. The new propaganda seeks to offer a new way of restoring the state’s sovereignty over people in the new information environment and to rebuild walls that have been demolished by global horizontal networks of communication. It aims to mitigate the capacity of these networks to challenge the state’s sovereignty. If the state is not able to control the flow of information and communication, it targets the way this information is interpreted and analyzed. Conflict-based cognitive filters ensure that horizontal networks and uncontrolled flows of information do not threaten a state’s control over its citizens, and they can even extend a state’s control over individuals beyond its borders.

I’ll note that this essay doesn’t present an argument against digitally mediated participation in conflict. People retain the right to disconnect, online as well as offline, from people they don’t agree with. The question addressed here, however, is whether and how these participatory and disconnective affordances can be harnessed by state actors relying on propaganda in order to achieve their political goals. One may argue, for instance, that the massive digitally mediated participation of Ukrainian users in the conflict was essential to protecting their country from a potential security threat. It’s not my purpose to draw a line between what type of participation is genuine and what type can be considered an outcome of political manipulation. I might argue, however, that participation that is driven by non-genuine actors and information from non-transparent sources, participation that relies on fakes, and participation that harnesses emotions is likely to be part of participatory propaganda. The analysis of disconnective action should also focus on whether that type of action was driven by the manipulative efforts of institutional actors shaping our relation with the environment. In that light, I argue that it is essential to understand the political goals of participatory propaganda.

Participatory propaganda restores state sovereignty from within. It aims to build walls in the inner spaces of the subject by shaping categories of perception of the environment. First, it constructs the object of a conflict that can potentially divide people. Second, relying on the design of social networks that combine information proliferation with personal interaction as well as the mediated mobility of devices, it makes this conflict an omnipresent and integral part of everyday life. Third, it offers a range of simple and immediate opportunities for participation in conflict-related activity. Fourth, it increases the importance of conflict in shaping the structure of people’s social categorizations. Finally, it relies on the affordance of disconnectivity to mitigate the capacity of horizontal networks to cross borders and challenge a state’s sovereignty.

What does this sort of propaganda do to us as a society? It is designed to implement new forms of sovereignty. It is designed to replace networked structures of society with fragmentation and polarization. It helps to pull people apart by forcing them into the role of combatants rather than citizens. It is designed to destroy horizontal relationships that offer alternative sources of information and that can potentially be transformed into independent collective action and a broad opposition to institutional actors. It is designed to divide and rule. It produces a reality with new walls and borders that can sever personal relationships and weaken critical thinking capabilities.

Participatory digital propaganda enables the private, everyday identity of users to be occupied and taken over by the institutional actors that propagate it. Addressing these effects of propaganda requires that we lessen the significance of conflict-related categorization for the interpretation of everyday life and offer alternative forms of subject-object and subject-subject relationships that are not driven by conflict. The protection of identity in a conflict-prone digital environment may rely on the user’s capacity to control the scale of their engagement in the conflict and may mitigate the role of conflict-related classification in the interpretation of social relations and everyday life. It also requires that counter-propaganda offer not an alternative view of specific events, but alternative classification structures that protect the autonomy of the subject, horizontal networks, and independent forms of collaboration.

In 2014 and 2015, something strange happened in a place apparently quite far from any conflict: the Russian-speaking segment of Tinder. One could see that an increasing number of users wrote as a part of their personal description either “Crimea ours” or “Crimea not ours.” The relationship with a conflict became not only a signifier for evaluating existing relationships, but also a driver for forming new romantic relationships and friendships. The Crimea conflict found its way into one of the most intimate aspects of life. I argue that the way to counter propaganda is not to convince others whose Crimea it is, but to weaken the role of propaganda in shaping our relations and follow Brodsky’s vision of humanistic social classification. That means we judge and love one another not on the basis of political categories that are created to divide us, but on our everyday deeds and actions.

References

Andrejevic, M. (2007). iSpy: Surveillance and Power in the Interactive Era. Lawrence, KS.: University Press of Kansas.

Anilovskaya, I. (2014) The War: the correspondence between classmates. Kiev: Alfa Reklama.

Asmolov, G. (2018). The Disconnective Power of Disinformation Campaigns. Journal of International Affairs, Special Issue 71(1.5): 69-76.

Asmolov, G. & Kolozaridi, P. (2017). The Imaginaries of RuNet: The Change of the Elites and the Construction of Online Space. Russian Politics, 2: 54-79.

Benkler, Y., Faris, R., & Roberts, H. (2018). Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. Oxford: Oxford University Press.

Bruntz G. C. (1938). Allied Propaganda and the Collapse of the German Empire in 1918. Stanford, CA.: Stanford University Press.

Castells, M. (1999). Grassrooting the Space of Flows. Urban Geography, 20(4):294-302.

Coser, L. A. (1956). The Functions of Social Conflict. New York: The Free Press.

Deibert, R. J., Rohozinski, R., & Crete-Nishihata, M. (2012). Cyclones in Cyberspace: Information Shaping and Denial in the 2008 Russia-Georgia War. Security Dialogue, 43(1).

Edwards, S. & Livingston, S. (2018). Fake News Is About to Get a Lot Worse. That Will Make It Easier to Violate Human Rights — and Get Away with It. The Washington Post, April 3, 2018. https://www.washingtonpost.com/news/monkey-cage/wp/2018/04/03/fake-news-is-about-to-get-a-lot-worse-that-will-make-it-easier-to-violate-human-rights-and-get-away-with-it/

Ellul, J. (1965). Propaganda: The Formation of Men's Attitudes. New York: Alfred A. Knopf.

Habermas, J. (1989). The Structural Transformation of the Public Sphere. An Inquiry into a Category of Bourgeois Society. Cambridge MA.: MIT Press.

Haigh, M., Haigh, T., & Kozak, N. I. (2017). Stopping Fake News. Journalism Studies, 1-26.

Hartmann, M. (2013). From Domestication to Mediated Mobilism. Mobile Media & Communication, 1(1): 42–49.

Hunter, M. (2018). Crowdsourced War: The Political and Military Implications of Ukraine’s Volunteer Battalions 2014-2015. Journal of Military and Strategic Studies, Volume 18, Issue 3, 78-124.

Institute for Propaganda Analysis. A. McClung Lee & E. Briant Lee (Eds.) (1972). The Fine Art of Propaganda. New York: Octagon Books.

John, N. A. & Dvir-Gvirsman, S. (2015). ‘I Don’t Like You Any More’: Facebook Unfriending by Israelis During the Israel-Gaza Conflict of 2014. Journal of Communication, 65(6): 953-974.

John, N. A. & Gal, N. (2018). “He’s Got His Own Sea”: Political Facebook Unfriending in the Personal Public Sphere. International Journal of Communication, 12: 2971–2988.

Johnson, B. (2019) Deepfakes are solvable — but don’t forget that “shallowfakes” are already pervasive, MIT Technology Review, Mar 25, 2019 https://www.technologyreview.com/s/613172/deepfakes-shallowfakes-human-rights/

Kaptelinin, V. (2014). The Mediational Perspective on Digital Technology: Understanding the Interplay between Technology, Mind and Action. In S. Price, C. Jewitt & B. Brown (Eds.), The Sage Handbook of Digital Technology Research (pp. 203-217). London: Sage.

Kaptelinin, V. & Nardi, B. A. (2006). Acting with Technology: Activity Theory and Interaction Design. Cambridge, MA.: MIT Press.

Khaldarova, I., & Pantti, M. (2016). Fake News. Journalism Practice, 10(7): 891-901. DOI: 10.1080/17512786.2016.1163237

Lasswell, H. (1927). Propaganda Technique in the World War. New York: Alfred A. Knopf.

Le Bon, G. (1903). The Crowd. London: Unwin.

Light, B. (2014). Disconnecting with Social Networking Sites. Basingstoke, UK: Palgrave Macmillan.

Mejias, U. A., & Vokuev, N. E. (2017). Disinformation and the Media: The Case of Russia and Ukraine. Media, Culture & Society, 39(7): 1027–1042.

Oates, S. (2016). Russian Media in the Digital Age: Propaganda Rewired. Russian Politics, 1: 398-417.

Ostrovsky, A. (2019). Russians are Shunning State-Controlled TV for YouTube. The Economist, March 7th. https://www.economist.com/europe/2019/03/09/russians-are-shunning-statecontrolled-tv-for-youtube

Patrikarakos, D. (2017). War in 140 Characters. How Social Media is Reshaping Conflict in the Twenty-First Century. New York: Basic Books.

Sanovich, S. (2017). Computational Propaganda in Russia: The Origins of Digital Disinformation. In S. Woolley & P. N. Howard (Eds.), Working Paper 2017.3. Oxford: Project on Computational Propaganda.

Schattschneider, E. E. (1975). The Semisovereign People: A Realist's View of Democracy in America. Harcourt Brace College Publishers.

Schwarz, O. & Shani, G. (2016). Culture in Mediated Interaction: Political Defriending on Facebook and the Limits of Networked Individualism. American Journal of Cultural Sociology, 4: 385–421.

Shkurko, A. V. (2014). Cognitive Mechanisms of Ingroup/Outgroup Distinction. Journal for the Theory of Social Behaviour, 45(2): 188-213.

Silverstone, R. (1994). Television and Everyday Life. London: Routledge.

Silverstone, R. (2002). Complicity and Collusion in the Mediation of Everyday Life. New Literary History, 33(4): 761-780.

Stuart, C. (1920). Secrets of Crewe House: The Story of a Famous Campaign. London, New York, and Toronto: Hodder and Stoughton.

Toler, A. (2018). Crowdsourced and Patriotic Digital Forensics in the Ukrainian Conflict. In O. Hahn & F. Stalph (Eds.), Digital Investigative Journalism. Cham: Palgrave Macmillan.

Vygotsky, L. S. (1978), Mind in Society: The Development of Higher Psychological Processes. Cambridge, MA.: Harvard University Press.

Woolley, S. C., & Howard, P. N. (2016). Political Communication, Computational Propaganda, and Autonomous Agents. International Journal of Communication, 10: 4882–4890.

Zuckerman, E. (2018) Four problems for news and democracy, Medium.com. https://medium.com/trust-media-and-democracy/we-know-the-news-is-in-crisis-5d1c4fbf7691
