
On the Internet, Nobody Knows You’re a Bot: Pseudoanonymous Influence Operations and Networked Social Movements

An exploration of what happens when politically motivated humans impersonate vulnerable people or populations online to exploit their voices, positionality and power.

Published on Aug 07, 2019

Brian Friedberg is an investigative ethnographer whose work focuses on the impacts that alternative media, anonymous communities and popular cultures have on political communication and organization. Brian works with Dr. Joan Donovan, who heads one of the world’s leading teams focused on understanding and combating online disinformation and extremism, based at Harvard’s Shorenstein Center on Media, Politics and Public Policy. In this essay, Brian and Joan take up a challenge the Unreal has posed for the study of activism online, the question of whether an online actor is real or synthetic, and explore what happens when politically motivated humans impersonate vulnerable people or populations online to exploit their voices, positionality and power.

— Ethan Zuckerman, Editor

It is not true that in order to live one has to believe in one's own existence. There is no necessity to that. No matter what, our consciousness is never the echo of our own reality, of an existence set in "real time." But rather it is its echo in "delayed time," the screen of the dispersion of the subject and of its identity — only in our sleep, our unconscious, and our death are we identical to ourselves.

— Jean Baudrillard

Introduction

When facts become hard to establish, distortion takes over. Social media has given rise to countless operators who assume false identities, infiltrating political networks and using them to influence public opinion. And while social media platforms say they are working to stop this malfeasance, they remain open to all manner of abuse by bad actors. The outcomes are varied, but in almost every instance, a con game is operating — one that manipulates other social media users and online social movements with reams of posts, a firehose of unprovable “facts” that in measurable ways influences public debate on key social issues.

For example, with each election cycle, a tremendous amount of online content is generated: Users digest and analyze candidates’ words and images, and politicians themselves produce a waterfall of social media posts as they seek to connect with the electorate. From petitions and polling to meme campaigns and the interventions of marketing firms like Cambridge Analytica, social media and networked communication have created enormous opportunity for professionals and amateurs alike to attempt to influence public opinion. At the same time, a robust practice of punditry provides exhaustive analysis of these campaigns. The prevalence and visibility of these influence operations has also intensified American tendencies toward skepticism and conspiracism. A tacit coalition of journalists, activists, and academics now tries to make sense of these rival perceptions of polarization, with pessimistic analysis suggesting discourse as we know it is dead, or perhaps has never existed. The cacophonous debate about the role social media played in election outcomes leaves most of us with two options: belief or indifference.

Since the 2016 U.S. presidential election, we have placed tremendous importance on the influencing of political participation and attitudes, as well as on measuring this influence. What we see as the study of disinformation may itself be an exercise of power: the social and material harm done to marginalized groups is often deprioritized in these analyses in favor of party politics and the interests of big data. Algorithmic anti-blackness (Noble, 2018), relentless gendertrolling (Mantilla, 2015), digital redlining (Gilliard & Culik, 2016), the exploitation of gig economy workers (Rosenblat, 2018; Gray, 2019) and a resurgence of white nationalist organizing in the new public commons present immediate threats to minoritized groups, many of which have been mounting their own resistance to attacks that benefit from the cloak of online pseudoanonymity. Our understanding of disinformation, however, must also include an understanding of these harms, not just their effects on party politics.

Historically, activists have adopted anonymization techniques to ensure operational security when organizing on open or closed networks. Despite Facebook leaders’ protestations to the contrary, anyone may at any time create a new Facebook account not linked to their verified legal identity. And while a user’s reach depends on access to amplification, users with strong networks, or those assisted by artificial tools that add automated followers, are favored by algorithmic recommendation mechanisms. Leveraging these mechanisms creates a newfound hypervisibility for individuals who regularly create content with high rates of interaction. It’s a double-edged sword, however: political figures, pundits, and social movements all enjoy newfound reach while simultaneously facing intense scrutiny. Others, once outed as activists online, have lost their jobs or their standing in organizations.

Alongside this hypervisibility of celebrities, candidates, and elected officials arise new opportunities and incentives to influence online conversations anonymously. We have seen foreign and domestic actors leverage amplification opportunities and algorithmic vulnerabilities using political wedge issues, such as immigration, LGBT rights, anti-Black racism, religious discrimination, and more. It is these divisive politicized positions concerning identity, authority, and justice that fuel news cycles and electoral campaigns. Politicians deploy these issues during campaigns, amplifying and iterating them across mainstream and independent media alike. These issues are frequently at the heart of operations designed to influence elections, culture, and policy, with operators accomplishing these goals through social media activity and the manipulation of media coverage. By artificially creating the impression of broad support and legitimacy, inauthentic actors can wield disproportionate influence, giving their fabrications a solid platform on social media and making it appear that they have large numbers of supporters.

Platforms such as Facebook call these operations “coordinated inauthentic behavior” (CIB), though they deem the same practices authentic when “legitimate” users, rather than political operatives, paid partisans, or bots, perform them. For example, when instances of artificial amplification (bot networks) or misrepresentation (sockpuppet accounts) were covered in the news or discussed by social media users after the 2016 US election, platform companies labelled this activity as CIB based on their own internal data analysis. Because of public outcry and political pressure, major platforms are now removing detectable automated accounts and reporting these removals publicly. At the same time, the accusation of being a “Russian bot” became a common dismissive reply online, especially to those calling attention to trolling campaigns or online harassment.

Instead of focusing on the notion of authenticity, our analysis centers on “pseudoanonymous influence operations” (PIO), wherein politically motivated actors impersonate marginalized, underrepresented, and vulnerable groups to either malign, disrupt, or exaggerate their causes. Because PIOs travel across platforms, researchers cannot rely on one platform’s designation of CIB in order to assess the complexity of an operation. PIOs are more subtle than botnets and their data traces can be difficult to track. Rather than relying on volume for amplification, as bots do, their inauthenticity comes from the adoption of a minority persona by a human seeking political influence. PIOs play with and upon political polarization, confirming our most deeply held beliefs with their messaging, either reinforcing preexisting stereotypes or creating a convenient straw man of the (approximated) positions of political opponents.

In order to investigate the phenomenon of PIO, we ask: “What happens when a bot is not a bot?” We compare the residual artifacts, impacts, and press surrounding four pseudonymous (using a pen name) or pseudoanonymous political actors that were charged with inauthentic behavior online. Our analysis reveals how a tradition of anonymity online became a tactic for appropriating the style and techniques of networked social movements, groups of loosely affiliated individuals who use the Internet to coordinate action (Donovan 2018). For example, the Occupy Movement in 2011 used social platforms, conference calls, and the Internet to coordinate protests in hundreds of public parks around the world (Donovan 2016). And it’s not only that networked social movements used social media to coordinate public protests; by repurposing search engine optimization techniques for movement goals they also learned to “play the algorithms” in order to draw online attention to specific issues (Monterde and Postill 2014).

In contrast to the use of social media by previous networked social movements, PIO is not just a new flavor of astroturfing, infiltration, or co-option. PIO accounts are linked to individual, foreign, domestic, and other unknown interests. Posts from PIOs have had both measurable and immeasurable impact on political conversation, employing rhetoric that exploits wedge issues, identity-based movements, and critiques of state power with unclear or even malicious intention. It doesn’t matter if the movements are associated with the right or the left. Rather, PIOs focus on the wedge itself in order to drive polarization. The four accounts we examine highlight different ways anonymity is being deployed as a means of weakening the visibility and political communication of social movements for short-term political gain, often at the expense of groups seeking civil rights protections.

While these accounts differ in affiliation and tone, they all relied on anonymity to advance identifiable agendas and to incite action. Because imposter accounts and networks share text, images, and video to frame contemporary political issues, we can assess the spread of these messages and see how different interests and groups are aligned. All, in their way, take a position as activists. Blacktivist and LGBTUnited, accounts most prominent on Facebook and Twitter, were civil rights-oriented social media personas that were eventually revealed to be operations of the Internet Research Agency’s troll farm. Gay Girl in Damascus (GGID) was an online persona depicting a young Syrian lesbian, which in reality was created and run by Tom MacMaster, a white male from the United States. Amy Mek is a pseudonym for Amy Mekelburg, a far-right political commentator who evaded identification for several years while amassing a substantial following on Twitter.

The motivations of these actors vary, but they all play into a polarized, and increasingly unreal, media environment. Some of these accounts have been unmasked as the result of deep journalistic investigations or through the findings of Senate hearings on Russian manipulation in the 2016 presidential election. While the GGID and IRA-orchestrated accounts have been removed from Blogspot, Facebook and Twitter, Amy Mek still retains her Twitter account.

In our case study section below, we examine the available data traces of these four accounts from 2011–2018, specifically the archives of their social media activities and the repositories of posts shared by these actors. We pay special attention to the amateur and professional journalism that helped contextualize and frame their social importance, and reactions to this journalism by sympathetic and hostile audiences. Additionally, we describe the threat to the integrity of political institutions and journalism that leads to a fog of unreality, where belief and indifference become logical reactions to our new media ecology.

Beyond the Pen: Between Pseudonymity and Pseudoanonymity

Anonymity is an “absence of information” produced through social, discursive, technical or legal processes (Bachmann, Knecht and Wittel 247); “a context-dependent identity performance expressing private sentiments in the public sphere by negating some aspects of the legally identified and/or physically embodied persona” (Asenbaum 1). In democratic politics, anonymity is of central importance to the electoral process, both in the secretly cast “Australian ballot” adopted around the turn of the 20th century and in the ability to contribute financially to political campaigns without repercussion. The right to anonymity in political speech is also foundational to both online and offline communication in the United States, reinforced in McIntyre v. Ohio Elections Commission (1995), which at the time specifically concerned the distribution of unattributed political flyers. This interpretation is now broadly applied to the protection of political speech online under the First Amendment.

The intentional or non-intentional withholding of identifying information about a subject in a public commons is a sociotechnical process in online communication that developed out of the public use of avatars and flexibility in email naming conventions during the early days of Usenet and bulletin board systems. In the early Internet, anonymity was common, but not generally for the purpose of evading sanctions — you could post on Usenet using whatever identity you wanted, but the assumption was that your local sysadmin could find you and shut you down if you were disruptive (Tepper 1997). When the Internet became the web, anonymity became technically and operationally easier to maintain across sites — often the only identifier associated with an account was an IP address. And due to Network Address Translation, it remains very hard to narrow an IP address down to a specific user. In our research, we specifically detail how social media and self-publishing afford specific kinds of anonymity, while also making certain anonymization techniques more difficult to uphold across different platforms. True anonymity forecloses any possibility of identification, whether through IP address, geolocation, or the reidentification of supposedly deidentified data.
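To make concrete how thin this layer of pseudonymity was, consider a minimal sketch (our own illustration; the handles, addresses, and log entries are invented) of the kind of analysis a server operator could run: grouping the handles seen behind each recorded IP address. A shared address might expose one person running several accounts, or, because of Network Address Translation, merely lump together unrelated users behind the same router.

```python
from collections import defaultdict

# Invented example of (handle, ip_address) pairs, as an early web forum's
# access log might have recorded them. All names and addresses are fictional.
access_log = [
    ("rose_speaks", "203.0.113.7"),
    ("freedom_now", "203.0.113.7"),    # same recorded address as rose_speaks
    ("quiet_reader", "198.51.100.22"),
    ("rose_speaks", "203.0.113.7"),
]

# Group the handles observed behind each IP address.
handles_by_ip = defaultdict(set)
for handle, ip in access_log:
    handles_by_ip[ip].add(handle)

for ip, handles in sorted(handles_by_ip.items()):
    if len(handles) > 1:
        # A shared IP is only a weak signal: it may be one operator with
        # several pseudonyms, or many unrelated users behind a single NAT.
        print(f"{ip}: {sorted(handles)} appear behind one address")
```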

While users may act in a fashion approximating anonymity online, they are most commonly using a placeholder name — a pseudonym — rather than engaging in legitimately anonymous speech. While true anonymity, completely escaping any trace of verifiability in identity, is difficult for the average user to achieve, the adoption of pseudoanonymous accounts is possible for anyone willing to put in the effort. What does it mean when technological access barriers are effectively removed from the manipulation process, and the creation of falsified accounts becomes trivial? (Coleman, 109) The degradation of truth in networked social spaces is the very soil in which PIOs take root and grow. According to Judith Donath, “If the rate of deception becomes very high, the signal becomes meaningless.” This paranoid unreality plagues a significant amount of social and political communication online.

Pseudoanonymity is different from pseudonymity. We draw this distinction, in part, to address how pseudoanonymous influence operations attempt to blend in with the networked public. Whereas pseudonymity often ascribes a separate or unique identity to an individual or group as a signature or pen name, pseudoanonymity intentionally cloaks identity within the network of actors in order to remain within a liminal space of recognition. For example, press and commentators covering the American Constitutional debates in 1787–1788 used pen names for fear of reprisal (Main, 1961). Avoiding political or social backlash for one’s beliefs or values is only one rationale for adopting a pen name. For others, it might be useful simply to shorten a name or pay homage to someone else. Anonymous political communication is a necessary protection for those who otherwise would be unable to speak openly due to fear for their own safety.

Accounts that use pseudoanonymous techniques, however, are not simply trying to influence political outcomes while remaining anonymous, but are rather active participants in shaping the field of political engagement, where their constructed identity provides legitimation for their claims to truth. Choosing a pseudonym that signals a marginalized identity with precarious social protections (a young queer woman or a radical black activist) grants some reprieve from the demands of self-identification: one would expect a queer female activist in Syria to mask her true identity. This situation suggests that political wedge issues that depend on claims from a particular group, and the careful self-representation of those at social risk, may be more likely to draw PIOs than campaigns that are led by an identifiable politician or social movement organization.

Anonymity in a Sociotechnical System

In the context of political organizing, anonymity is a key technique for masking one’s personal identity in order to criticize those in power. In his studies of speech in ancient Greece, Foucault (2001) showed how “fearless speech” was available only to certain status groups as defined by their relationship to their rulers. Throughout history, female authors used male and gender-neutral pen names to avoid the stigma associated with gender, while others have used pseudonyms to hide from authorities or to publish manifestos. Prior to the Internet, different forms of speech, taken up by different groups of people, required different kinds of anonymization to protect real-world identity. In the case of the Internet, however, identity is not what is secured; systems are.

Online speech is nothing like the ancient Greek agora, despite numerous attempts to theorize the Internet as the public square. Anyone who wants to be anonymous online must assume pervasive surveillance, where the digital street records every click, every keystroke, every attempt to mask activity, in perpetuity. As Bruce Schneier, a security researcher, wrote in Wired in 2006, “Anonymous systems are inherently easier to abuse and harder to secure.”1 In this sense, platform companies allow a “limited anonymity” to all users, where security refers to protecting one’s information from unauthorized use, while assuming that all users will be thoroughly tracked, even when employing anonymization techniques.

The securitization of technical systems has led to rapid advances in data markets where, from a corporation's point of view, data harvesting through surveillance technologies is rational and profitable. As a result, political organizers have taken a two-pronged approach: fighting for data privacy regulations while also building anonymizing software and circumvention technologies to enhance, but not ensure, the capacity for anonymity online. Schneier points out that when using technical systems to enhance anonymity, the promise is only one of pseudoanonymity, where information is ported through “trusted third parties.” In other words, the risk of reidentification is still very real and ongoing (Chuang et al., 2014).

The most famous use of anonymity in political organizing online involved the communication infrastructure used by an ensemble of hackers who chose the collective name of “Anonymous” to carry out large-scale public interest hacks (Coleman, 2017). In Hacker, Hoaxer, Whistleblower, Spy: The Many Faces of Anonymous, Coleman analyzes two distinct periods of Anonymous — trolling and direct political action. Organizing against Scientology and in support of Occupy Wall Street, and sending signals of progressive social values, Anonymous initially organized to fight injustice and to expose secrets harmful to the popular interest. Then Anonymous and affiliated organizations changed, and allegiances shifted. On the same anonymous message boards on which previous justice movements organized, newfound reactionary groups espousing white supremacist and antifeminist positions now thrive.

The type of anonymity allowed by text-based communications, wherein “users can separate their online actions from their real-world selves” (Gray and Kappeler, 40), avoids direct responsibility for racist, sexist, LGBTQ-phobic, anti-religious, or other harmful kinds of speech. Increases in anonymous trolling, from Usenet to private forums to 4chan, spill over into activity on social media, where posting is not just a speech act, but can also be a coordinating function of a call to action.

If a networked social movement has a public-facing communication strategy, wherein identity and political positions are clearly articulated, it may be mimicked and reproduced by PIOs who augment the messages as an act of trolling or sabotage. Trolling language is often utilized in the subtext of online posts (Donath, 15), something researchers can trace in the accounts that impersonate activist movements. While platforms like Facebook have toyed with the idea of mandatory identity checks for new users, such checks can be problematic both for at-risk activists and for individuals seeking validation from new identities (boyd, 2011). In this situation, it is impossible to eliminate both anonymity and the opportunity for pseudoanonymity from online spaces, which enables further exploitation of networked social movements through platform abuse.

Another well-worn form of platform abuse is sockpuppetry, a method for manipulating conversations online often employed by PIOs. The use of the term “sockpuppet” to describe deceptive online accounts goes as far back as 1993, when it was loosely coined by Dana Rollins commenting on listserv opinion manipulation.2 Since then, the term has become popular amongst Usenet and social media users, and was later adopted as a means of classification by journalists and researchers. In the hands of a skilled operator, a collection of sockpuppets is a hydra of pseudoanonymous influence. These sockpuppets are used to create fake product reviews, evade previous account suspensions, stealth-edit Wikipedia in bad faith, and execute harassment campaigns. Communities subjected to attacks and interference from sockpuppet networks can fight back by attempting to systematically eliminate inauthentic accounts, discredit them, and link them back to a named operator. Finding technical solutions for sockpuppet armies that disrupt closed communication platforms has long been a project of computer science researchers who seek automated solutions for human problems. When accounts behave in a way that suggests automation, automatic takedown becomes achievable through algorithmic means of detection. In our cases, however, the PIOs all employ some form of inauthentic communication outside the traditional understanding of how and why sockpuppets are deployed — specifically, the adoption of identifiers that signal group affiliation, organized around and against marginalized groups.
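To illustrate why this kind of detection falls short of the PIO problem, here is a minimal sketch (our own, with invented timestamps and an arbitrary threshold) of the posting-rate heuristic that much automated takedown rests on. Rate alone cannot separate a machine from a hyperactive human poster, which is one reason human-run PIOs go undetected even as hyperactive humans are misread as bots.

```python
from datetime import datetime, timedelta

def looks_automated(timestamps, max_posts=20, window=timedelta(hours=1)):
    """Flag an account whose posting rate exceeds a plausible human pace.

    A crude sliding-window check: if more than `max_posts` posts fall
    inside any span of length `window`, the cadence suggests automation.
    """
    stamps = sorted(timestamps)
    for i, start in enumerate(stamps):
        if len([t for t in stamps[i:] if t - start <= window]) > max_posts:
            return True
    return False

# Invented example: a burst of 30 posts spaced two minutes apart trips the check.
burst = [datetime(2016, 10, 1, 9, 0) + timedelta(minutes=2 * i) for i in range(30)]
print(looks_automated(burst))   # True

# A persona posting once an hour, around the clock, slips under the threshold:
# rate alone cannot distinguish a hyperactive human from a machine.
steady = [datetime(2016, 10, 1) + timedelta(hours=i) for i in range(30)]
print(looks_automated(steady))  # False
```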

Case Studies

“A Gay Girl in Damascus” — A Pseudonymous Influencer

Our first case, a pseudonymous influence operation, is a notorious blog simply titled “A Gay Girl in Damascus” (GGID). The creation of a 40-year-old white male graduate student who published and privately communicated as a young Syrian-American woman using the name Amina Arraf, the GGID project started as an alias in 2006, began blogging in February 2011, and operated until the author was exposed in June 2011. While Amina had traces of web activity previously, the project began publishing on a now-defunct queer news site, “Lez Get Real,” before moving on to a self-managed Blogspot account. GGID captivated the attention of fellow LGBT bloggers, online activists and mainstream press before the devastating revelation of its inauthenticity. Exposed through critical journalism and the investigation of actual Syrian activists, the GGID hoax generated an enormous amount of press, first in support of the fictitious character and later in critical reflection upon the revelation of its true creator.

The GGID phenomenon was situated within the Western media spectacle of the Arab Spring, particularly the Syrian revolution of 2011. Approximating the position of a marginalized political dissident fluent in both English and progressive social rhetoric, the operator, Tom MacMaster, published regular blog posts establishing the character of Amina, her political position, and her analysis of the Syrian conflict. Amina was in opposition to President Bashar al-Assad, supportive of revolutionary movements, and situated in a global discourse of LGBT rights and protections. These positions mirrored the self-reported liberal democratic beliefs of MacMaster, and of his assumed audience of progressive Western readers. Unlike later influence operations that approximated the political messaging of marginalized people, MacMaster claims his hoax was not meant to deceive or sow division but to amplify the reality of the Syrian revolution: “While the narrative voice may have been fictional, the facts on this blog are true and not misleading as to the situation on the ground” (Bell and Flock, 2011). In several interviews, MacMaster relishes the legitimacy that adopting the Amina moniker granted him, allowing him both access to and authority within political conversations about Syria, a topic he was obsessed with. His frequent blog posts in fluent English were the perfect bait for Western journalists looking to amplify an accessible, personalized frame of a conflict that was difficult for outsiders to decipher. Outside journalists had great difficulty gaining access to Syria at this time, making them reliant on social media posting and on-the-ground updates from protesters, in this case the falsified accounting of GGID.

As the attention on Amina grew, the daily blog posts and private communications reportedly became too much for MacMaster to keep up with, so in June 2011 he had her kidnapped. This dramatic turn was announced under a new identity, that of Amina’s cousin, writing on the GGID blog that the young activist had been detained by Syrian authorities and was now subject to abuse and torture. Soon after this announcement, online friends of Amina and other activists launched the #FreeAmina hashtag, created several Facebook pages in support, and sent appeals to the US State Department to investigate her imprisonment. The urgency of Amina’s story was amplified by press coverage, leading many journalists to attempt contact with Amina’s friends, family and fellow activists. It was this surge of attention that led to the collapse of the operation, and the identification of MacMaster as the blog’s true author. Journalists and activists scoured the Internet for any identifying information, and conversations between these investigators determined that not only was Amina not real, but that MacMaster had been manipulating them for several months.

The deceptive legitimacy of the Amina character was dependent on the author’s words and expressed persona and made possible by cross-platform posting, using Facebook for networking and Blogspot for publishing. After proving herself a part of the LGBT online community through frequent comments on articles, interaction, and supportive feedback, Amina was invited to contribute to the Lez Get Real news site. Interpersonal trust was built over time, and Amina gained legitimacy as a persona from a wide set of online interactions. As the Amina performance expanded onto other platforms, MacMaster stole photos from Jelena Lecic, an unrelated woman in London, and posted them as selfies. He also created a network of accounts centered around Amina and her close friends and family. As detailed in the 2015 documentary “The Amina Profile,” MacMaster engaged in a long-distance romantic relationship with Sandra Bagaria, and corresponded with other Westerners who were following the conflict from abroad. For several well-intentioned individuals, regular correspondence with Amina was a daily occurrence over a period of months, and it was this communication across multiple platforms that motivated the activists who launched the #FreeAmina campaign and the open-sourced investigation, published by The Electronic Intifada (Abunimah, 2011), that led to MacMaster’s identification as the author. It was subsequently revealed that the site owner of Lez Get Real was also a heterosexual man using a falsified lesbian persona (Bell and Flock, 2011).

Many news outlets, including The Guardian, were subsequently forced to acknowledge the hoax as well as the inaccuracies of their previous coverage. The press was instrumental in both amplifying Amina’s fictional life, and exposing the illegitimacy of the operation. Some press highlighted the potential damage this PIO did to the safety of Syrian activists, the reality of the confirmation biases that led many to easily accept this hoax, and the fragility of identity online. Other commentary suggested, despite the inauthenticity, that GGID somehow increased Western awareness of the plight of dissidents and the LGBT community in Syria (McCrum, 2011). Many activists and journalists condemned the disproportionate attention this unverified persona received in Western press. In both the political and personal performance of Amina, MacMaster used a studied veneer of queer femininity as a means to access press and relationships not available to him otherwise. Ultimately, the hoax did significant damage to those truly affected by these conditions as it lessened trust in communications between journalists and queer Arabs using pseudonyms.

This switch, where pen names are feminized, is critical for understanding how claims of situated identity online become fodder for believability. The entire project trades on identity: the exploitation of the goodwill of intersectional activists, and of progressive journalists’ desire to highlight and amplify the voices of marginalized people in their reporting. MacMaster also exploited the communication channels and platforms used by networked social movements, mimicking the tone of journalist/activists and the channels they use both to build movement solidarity and to organize action. The identities and bodies of marginalized people are an exploitable wedge issue for those seeking to manipulate media and gather online influence, as we will see in the following cases. MacMaster’s deception gained press traction specifically because of the impending physical threats surrounding Amina, making GGID a particularly valuable identity to impersonate, as we would expect a vulnerable young woman to make herself hard to find offline (Zuckerman, 2011).

Internet Research Agency as Pseudoanonymous Influence Operations

On April 3, 2018, Facebook announced the discovery, and removal, of 70 Facebook accounts and 138 pages run by a Russian company called the Internet Research Agency (IRA),3 an organization identified by Adrian Chen of the New York Times in 2015. Facebook revealed that up to a million Facebook users could have been served ads purchased by the IRA to grow the visibility of these pages. Some ads were crafted to exploit wedge issues during the 2016 election, while others were designed to draw traffic to community pages built around American social issues (Shinal, 2017). The IRA mimicked a number of activism-oriented Facebook communities and was thus able to draw users interested in certain communities and issues to their pages, cultivating sympathetic followers over time by paying for targeted ads and promotion. The IRA operated pseudoanonymously, identifying and amplifying content consistent with the themes of the disparate pages they controlled.

The IRA pages showcased a variety of political and religious affiliations, including patriotism, Black activism, conservatism, and LGBT issues (Nadler et al., 2018). They created YouTube videos, squatted on domains relating to various identities, and seeded hashtags encouraging secessionism. Of note here are two pseudoanonymous IRA accounts: Blacktivist and LGBTUnited. The IRA’s presence was not limited to Facebook: they had multiple parallel accounts on Instagram and Twitter, some of which were created as far back as 2008. Content created by these pages was further shared on Instagram and was the subject of a number of YouTube videos. In an exclusive report by CNN, a group of IRA-run pages was exposed, and subsequently removed by the platforms on which they existed (O'Sullivan and Byers, 2017). What remains online are traces of interactions on Twitter and archives of the screenshots and memes these pages distributed. Both Blacktivist and LGBTUnited had operated largely undetected for several years.

In the case of Blacktivist, its IRA-created social media accounts appear to have been active as early as 2015. These accounts shared content in the style of the Black Lives Matter movement, including videos of police brutality, and empowering memes on topics like black pride, sisterhood, and political reform. Aside from posting and reposting pro-black content, the page promoted rallies, sold merchandise, and in one noted case privately interacted with a black activist when challenged.4 As the 2016 election approached, Blacktivist accounts encouraged viewers to reject Hillary Clinton in favor of third-party candidate Jill Stein. Researchers, journalists, and activists struggled to make sense of these revelations, and mass media narratives largely centered the conversation around the possibility of election interference by Russian interests.

When major press outlets interviewed those who had direct communication with Blacktivist accounts, many re-centered the conversation to address historic and contemporary instances of the suppression of black voices for political gain. Malkia Cyril of the Center for Media Justice suggested this pseudoanonymous manipulation was an “old tactic” and a “global phenomenon” (Entous et al., 2017), calling on platforms to systematically address how they are consistently being used to amplify racism. Indeed, the inauthenticity of the Blacktivist pages was first suspected by black feminists, many of whom had been involved in previous anti-troll operations like exposing #EndFathersDay as a PIO involving an array of sockpuppets coordinated on an anonymous message board (Hampton, 2019). Some writers described the tactics at play as reminiscent of COINTELPRO’s disruption activities within New Left movements of the 1970s, particularly the Black Panther Party (Mock, 2017).

Like the networked social movements it was seeking to mimic, Blacktivist communicated in text, memes and reposts, sharing content from official Black Lives Matter chapters. The most significant collection of Blacktivist content exists as a crowd-sourced Medium5 archive, wherein users were encouraged to submit screenshots of activity and images attributed to the disingenuous account. The Blacktivist accounts used a logo, a permutation of the black power fist popularized by the Panthers in the 1960s. The use of this logo as a watermark signals solo authorship — these pages did not simply aggregate the content of others, but crafted each image, sourced from news articles and Google image results. On Facebook, Twitter and Instagram, Blacktivist liberally employed hashtags to expand their audience, linking the terms #BlackLivesMatter, #BlackAndProud, #BlackPower and #BlackUnity in the image descriptions. Legitimate social movements use hashtags to organize, mobilize and increase visibility (Donovan 2018). While the exploitation of frames set by Black Lives Matter was perhaps the IRA’s most significant impact, Blacktivist was not the only faux-social movement in the group’s repertoire.
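Tracing such hashtag use is one of the few analyses the surviving archives still permit. The sketch below (our own illustration; the post texts are invented placeholders, not actual Blacktivist content) shows how a researcher might tally the hashtags in an archive of transcribed posts to see which movement conversations an account attached itself to.

```python
import re
from collections import Counter

# Invented stand-ins for archived post texts; a real analysis would run over
# the crowd-sourced archive of screenshots, transcribed to text.
archived_posts = [
    "Stand together. #BlackLivesMatter #BlackPower",
    "Our history, our pride. #BlackAndProud #BlackUnity",
    "Justice for our communities. #BlackLivesMatter",
]

hashtag = re.compile(r"#\w+")

# Tally how often each hashtag appears across the archive: a first step toward
# mapping which movement conversations an account attached itself to.
counts = Counter(tag for post in archived_posts for tag in hashtag.findall(post))

for tag, n in counts.most_common():
    print(tag, n)
```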

Another notable IRA attempt to replicate the presence of an online social movement through pseudoanonymous operation and targeted advertisement was LGBTUnited. Written in the tone of a young queer woman coming to terms with her sexuality through online expression, LGBTUnited was active from July 2015 to August 2017 with several accounts on Facebook, Twitter, and Instagram. The political content on these accounts included critical commentary on Republican politicians’ stances on gay rights and several calls to support Bernie Sanders (with some distrust of Hillary Clinton). But the majority of messages called for community and inclusivity, including several posts highlighting LGBT veterans. There are many mentions of family acceptance, best practices for parenting LGBT teens, and celebrations of notable out queer actors and public figures.

LGBTUnited dealt primarily in memes. Their original content, built from stock images and basic text overlays, paired images of models, queer celebrities and public figures with inspirational quotes affirming sexual identity,6 often sourcing images from Tumblr posts. LGBTUnited used a rainbow-tinted logo, signaling solo authorship much like the branding of the Blacktivist account. The operators engaged with hashtags like #LGBTPride, #Equality and #LoveisLove, and their content was reposted by other queer advocacy accounts. The reach of and interactions with this account were significantly less than those of Blacktivist and some other IRA operations, and it subsequently received less critical press. While LGBTUnited actively supported queer visibility and pride parades, there is no direct evidence this account had any significant role in promoting or organizing public demonstrations or civil actions. Instead, it spoke an approximation of the language of the queer Internet, of a young woman grappling with the realities of LGBT life and the emancipatory possibilities of self-expression.

The operators of LGBTUnited spent time and effort creating an identifiable cultural position, that of a young queer woman, and situating their posts within popular hashtags of that community. Still, almost two years after the revelation of this account’s inauthenticity, it remains unclear what the operators were trying to accomplish. The social media posts and memes left behind show some distrust of social institutions and a few statements of support for Bernie Sanders, though the majority of LGBTUnited’s activity concerned social justice rather than direct political action. Is it possible that this was an account created for use in wedge-issue campaigns in the event that such issues came to the fore in 2016? Maybe what we saw was a disinformation campaign that was prepared but never executed? How do we deal with a world in which accounts are sleepers, adopting an identity with the hope of exploiting it, but not necessarily pulling the trigger?


Amy Mek — The Bot That Wasn’t

Amy Mek joined Twitter in November 2012. Described as a “major cog in the Islamophobia machine,” a willing participant in the “Islamophobia Industry” (Lean, 2012), her Twitter presence predates the MAGA movement that catapulted her popularity to the level of infamy. She devoted much of her earlier attention to Barack Obama, echoing conspiratorial narratives popular among the Tea Party and Birther communities online. Mek celebrated European nationalists like Viktor Orbán and Alt-lite provocateurs Jack Posobiec, Mike Cernovich and Milo Yiannopoulos, and was followed by conservative celebrities like Roseanne Barr and Fox News host Sean Hannity. A direct mention by then-presidential candidate Donald Trump exposed her to a rapidly expanding audience: MAGA Twitter. Her tweets leading up to the election were infamous, brutal and constant; she sometimes posted several times an hour for days on end.

Mek shared edited videos, reuploaded images, pro-Trump memes, and anti-Democrat slogans, with a particularly virulent focus on Hillary Clinton and other Democratic female politicians. She sourced her tirades from fringe anti-immigration news, particularly publications catering to an ultranationalist, anti-immigration European position. She adopted the far-right fixation on George Soros as a globalist boogeyman and the ‘cultural Marxism’ terminology popularized by white nationalists. Despite her Jewish heritage and her staunch Zionist positionality, Mek’s racist and nationalist messaging was complementary to the Alt-Right, and she openly supported extremist groups like the Proud Boys and Generation Identity. After she was uncritically interviewed for a New York Times article on women who support Trump (Roller, 2016), her platform expanded, gaining followers and becoming a fixture of the Twitter sphere that helped put Trump in office. She started a small organization called Resistance Against Islamic Radicals (RAIR) with the assistance of a network of Islamophobic pundits.

Many journalists and independent researchers questioned Mek’s authenticity, speculating that she was a bot, a cyborg, or a sockpuppet account designed to sow discord (Erwin, 2017). Only through critical investigative journalism did this presumed fake account become associated with a real person — Amy Mekelburg of Fishkill, New York. She was exposed by Luke O’Brien, an investigative journalist covering far-right actors for the Huffington Post (O’Brien, 2018). The account, which many assumed to be wholly artificial, was in fact a pseudonym, but one clearly connected to a verifiable identity. The views it shared were not the hyperbolic ramblings of a foreign agent attempting to interfere in American elections, but the product of the toxic normalization of racist and xenophobic speech online.

Upon the release of O’Brien’s exposé for the Huffington Post, Amy Mek took to Twitter, citing harassment by the publication and calling on her followers for support. Numerous conservative and alt-right publications posted articles in defense of Mek, calling the critical reporting on her an attempted doxing intended to ruin her life. O’Brien was swarmed by trolls and was himself doxed, illustrating the power and commitment of the networks in which Mek was embedded. As of May 2019, Amy Mek has amassed 248K Twitter followers and operates a YouTube channel, where she interviews critics of Islam and reuploads real and spurious videos exposing the ‘truth’ about Islam. While her accounts are banned as violations of hate speech codes in several European countries, she remains active on Twitter.

The feminization of pen names is critical for understanding how claims of situated identity online become fodder for believability: in the case of Amy Mek, the female Trump supporter and hate merchant operates in a territory visibly dominated by men. Unlike the IRA-crafted Blacktivist and LGBTUnited accounts, Mek’s online presence remains largely intact, and the media and text she shares is still widely circulating on Twitter. While she did not use a logo, she employed loose branding for the (seemingly defunct) RAIR Foundation, and used popular conservative hashtags like #tcot, #PJNet, #MAGA, #LockHerUP and #JewsForTrump. Mek shared memes and in a way became one herself, with screenshots of her content being passed around Reddit and other social media sites.

Despite activity commonly associated with artificial accounts, and a veneer of anonymity, Amy Mek was very much who she claimed to be, disrupting many journalists’ assumptions about authenticity.

Conclusion

Anonymity as an ideal functions very differently from pseudonymity as a practice, as these examples demonstrate. In instances where accounts were thought to be automated due to hyperactive posting, investigations reveal the human in the loop. Taking the last decade of networked social movements into account, we bear witness to the continued astroturfing of social issues as they become important vectors of attack against political institutions. In effect, what was learned by one movement became a tactic available to all networked social movements, but not without its own strife and potentials for co-option. And yet, the major political disruptions in 2016 of Brexit and Trump’s ascension revealed more than a war between unidentifiable factions. While many perceived these social changes as chaotic, researchers and journalists were determined to understand how such events could happen and zeroed in on the capacity of social media to organize networked publics. While anonymity was a tactic for activists and protesters seeking redress of grievances, pseudoanonymity became tradecraft that allowed for manipulation of networked social movements.

All of the PIOs we examine intentionally or unintentionally reproduce social harm by degrading trust in the authenticity of marginalized people. GGID is now an important part of the history of the global uprisings of 2011, when the Internet was often considered a liberatory technology giving a voice to the voiceless; yet this case foreshadows the problems ahead. Amy Mek represents the outcome of rampant Islamophobia and the far right’s manipulation of anonymity in order to spread hate, using xenophobia as a threat around which to mobilize politically. Blacktivist exploited the precariousness of black representation online, showing how it can be mimicked using language and iconography associated with liberation movements. LGBTUnited was an attempt to approximate an inclusive meme sphere, the voice of a young woman using anonymity to protect herself from homophobia, an artifice that eroded over time to reveal the intentions of its operator, the Internet Research Agency.

The use of hashtags by the IRA concealed as much as it revealed about their place within the networked social movement of Black Lives Matter, in particular. By seeding hashtags with memes constructed to affirm preexisting worldviews, the IRA mimicked communication styles specific to marginalized communities, including black activists and LGBT youth, amongst many others not analyzed here. The damage done by PIOs is not measured by how many individuals from these communities were ‘tricked’ by these accounts, but rather by the operators’ ability to gain followers and interactions from preexisting networked conversations enacted and linked via hashtags. Whereas anonymous accounts seek to frame a social issue and influence different publics, PIO as a strategy only becomes available when sociotechnical design fails to protect those who are vulnerable to platform abuse. While Black Lives Matter activists outed these accounts as sockpuppets, it was only confirmation from within platform companies’ own data storehouses that corroborated their stories. Networked social movements by design must be public. As a result, abuse of their “open” membership, including the ability for anyone to create an account affiliated with the movement, will continue. As well, responsibility for account verification, especially when money changes hands for advertising, lies within the purview of platform companies. In this way, a PIO’s fatal flaw lies in its inability to gain traction and membership through face-to-face or trusted communication; instead, PIOs must always parasitically latch onto the predictability of social media platforms to gain attention and influence.

Platform companies have begun to address the issues arising from pseudoanonymous influence operations as “coordinated inauthentic behavior,” but this terminology is fraught because authenticity is a highly contestable category. Further, coordinated behavior as a tactic of networked social movements only becomes an issue after movements learn to speak to and organize through algorithms. Prior to social media, activists relied heavily on face-to-face interaction, email, and message boards (Juris 2008). Astroturfing, infiltration, and co-option existed, but did not scale in the same way that pseudoanonymous influence operations have via platforms such as Twitter and Facebook, especially when aided by targeted advertising. Under this new content moderation regime, coordinated activity of all kinds will be suspicious, but those who are willing to traffic in pseudoanonymous influence operations will adapt to the new environment.

Alongside sociotechnical manipulation, PIOs used prescriptive political positions to expand within preexisting online networks, growing and shaping audiences for different ends. This is not to problematize anonymity, but to question the technical and social foundations that allow PIOs to adapt to platform affordances and leverage social movements against one another. Black Lives Matter, queer activists, and many other groups need anonymity to protect themselves from personal or institutional violence. This real need raises the question: how do we value anonymity in contemporary political communication when it is clearly being used to manipulate and divide along such vital social issues? Do vulnerable populations suffer the most when bad-faith actors appropriate their positions to undermine democratic stability? Is distrust so deeply embedded in political communication that bad-faith pseudoanonymous operators will be free to disproportionately influence legitimate social movements? We can offer no definitive answers today, but instead grapple with the situation at hand.

It is clear to us that platform companies have more power to define (and address) the situation than any government, civil society group, researcher, journalist, or user. In this way, past political considerations that warranted strong protections for privacy and free speech are now limited by the sociotechnical affordances of platform design and the money to be made through data surveillance, a business model that undermines democratic participation from the outset. By continuing to allow imposter accounts to “participate” in political discussion, platform companies are degrading the public’s trust in social institutions.

In his essay “Radical Thought,” Baudrillard concludes that when facts become hard to establish, an “illusion of the factual” becomes our reality. If only platform companies have access to data and evidence of the real relationships that can unmask imposter accounts, then we are left only with belief or indifference. As researchers of the sociotechnical, we seek a third option: knowledge. It is not only that knowledge is power, but it is also how we as a society arrive at justice, fairness, and accountability in an age of the unreal.


References

Abunimah, Ali. “New Evidence about Amina, the ‘Gay Girl in Damascus’ Hoax.” The Electronic Intifada, June 12, 2011. https://electronicintifada.net/blogs/ali-abunimah/new-evidence-about-amina-gay-girl-damascus-hoax.

Asenbaum, Hans. “Anonymity and Democracy: Absence as Presence in the Public Sphere.” American Political Science Review 112, no. 3 (August 2018): 459–72. https://doi.org/10.1017/S0003055418000163.

Bachmann, Götz, Michi Knecht, and Andreas Wittel. “The Social Productivity of Anonymity.” ephemera 17, no. 2 (2017).

Baudrillard, Jean. “Radical Thought.” Parallax, 2009. https://doi.org/10.1080/13534649509361992.

Flock, Elizabeth, and Melissa Bell. “‘Paula Brooks,’ Editor of ‘Lez Get Real,’ Also a Man.” Washington Post, June 13, 2011, sec. National. https://www.washingtonpost.com/blogs/blogpost/post/paula-brooks-editor-of-lez-get-real-also-a-man/2011/06/13/AGld2ZTH_blog.html.

Bell, Melissa, and Elizabeth Flock. “‘A Gay Girl in Damascus’ Comes Clean.” Washington Post, June 12, 2011, sec. Style. https://www.washingtonpost.com/lifestyle/style/a-gay-girl-in-damascus-comes-clean/2011/06/12/AGkyH0RH_story.html.

O’Sullivan, Donie, and Dylan Byers. “Exclusive: Fake Black Activist Social Media Accounts Linked to Russian Government.” CNNMoney, September 28, 2017. https://money.cnn.com/2017/09/28/media/blacktivist-russia-facebook-twitter/index.html.

Chen, Adrian. “The Agency.” The New York Times Magazine, June 7, 2015. Accessed July 3, 2019. https://www.nytimes.com/2015/06/07/magazine/the-agency.html.

Daries, Jon P., Justin Reich, Jim Waldo, Elise M. Young, Jonathan Whittinghill, Andrew Dean Ho, Daniel Thomas Seaton, and Isaac Chuang. “Privacy, Anonymity, and Big Data in the Social Sciences.” Communications of the ACM 57, no. 9 (2014). Accessed July 26, 2019. https://cacm.acm.org/magazines/2014/9/177926-privacy-anonymity-and-big-data-in-the-social-sciences/fulltext.

Coleman, Gabriella. “From Internet Farming to Weapons of the Geek.” Current Anthropology 58, no. S15 (November 22, 2016): S91–102. https://doi.org/10.1086/688697.

boyd, danah. “‘Real Names’ Policies Are an Abuse of Power.” Apophenia (blog), August 4, 2011. Accessed July 3, 2019. https://www.zephoria.org/thoughts/archives/2011/08/04/real-names.html.

Donath, Judith. “Identity and Deception in the Virtual Community.” Communities in Cyberspace, August 26, 1996.

Donovan, Joan. “‘Can You Hear Me Now?’ Phreaking the Party Line from Operators to Occupy.” Information, Communication & Society 19, no. 5 (May 3, 2016): 601–17. https://doi.org/10.1080/1369118X.2016.1139610.

Entous, Adam, Craig Timberg, and Elizabeth Dwoskin. “Russian Operatives Used Facebook Ads to Exploit America’s Racial and Religious Divisions.” Washington Post, September 25, 2017, sec. Technology. https://www.washingtonpost.com/business/technology/russian-operatives-used-facebook-ads-to-exploit-divisions-over-black-political-activism-and-muslims/2017/09/25/4a011242-a21b-11e7-ade1-76d061d56efa_story.html.

Foucault, Michel. Fearless Speech. Zone Books, 2001.

Gilliard, Chris, and Hugh Culik. “Digital Redlining, Access, and Privacy.” Common Sense Education, May 24, 2016. https://www.commonsense.org/education/articles/digital-redlining-access-and-privacy.

Gray, K.L., and V.E. Kappeler. Race, Gender, and Deviance in Xbox Live: Theoretical Perspectives from the Virtual Margins, 2014.

Gray, Mary. Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Eamon Dolan/Houghton Mifflin Harcourt, 2019.

Hampton, Rachelle. “Years Ago, Black Feminists Worked Together to Unmask Twitter Trolls Posing as Women of Color. If Only More People Paid Attention.” Slate Magazine, April 23, 2019. https://slate.com/technology/2019/04/black-feminists-alt-right-twitter-gamergate.html.

Donovan, Joan. “After the #Keyword: Eliciting, Sustaining, and Coordinating Participation Across the Occupy Movement.” Social Media + Society 4, no. 1 (2018). Accessed July 3, 2019. https://journals.sagepub.com/doi/full/10.1177/2056305117750720.

Lean, Nathan. The Islamophobia Industry - Second Edition: How the Right Manufactures Hatred of Muslims. 2nd ed. Pluto Press, 2017. https://doi.org/10.2307/j.ctt1v2xvxq.

Mantilla, Karla. Gendertrolling: How Misogyny Went Viral, 2015. http://ebooks.abc-clio.com/?isbn=9781440833182.

Erwin, Maureen. “Searching for Proof of Amy.” The San Francisco Examiner, March 30, 2017. https://www.sfexaminer.com/news/searching-for-proof-of-amy/.

McCrum, Robert. “Lessons Learned from A Gay Girl in Damascus.” The Guardian, June 15, 2011, sec. Books. https://www.theguardian.com/books/booksblog/2011/jun/15/lessons-learned-gay-girl-damascus.

Mock, Brentin. “Today’s Russian ‘Blacktivism’ Is Just Yesterday’s FBI COINTELPRO.” CityLab. Accessed July 3, 2019. https://www.citylab.com/equity/2017/09/theres-something-familiar-about-the-russian-blacktivst-campaign/541560/.

Monterde, Arnau, and John Postill. “Mobile ensembles: The uses of mobile phones for social protest by Spanish indignados,” April 24, 2014. http://openaccess.uoc.edu/webapps/o2/handle/10609/39161.

Nadler, Anthony, Matthew Crain, and Joan Donovan. “Weaponizing the Digital Influence Machine.” Data & Society, 2018. https://datasociety.net/output/weaponizing-the-digital-influence-machine/.

Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press, 2018. https://www.jstor.org/stable/j.ctt1pwt9w5.

O’Brien, Luke. “Trump’s Loudest Anti-Muslim Twitter Troll Is a Shady Vegan Married to an (Ousted) WWE Exec.” HuffPost. Accessed July 3, 2019. https://www.huffpost.com/entry/anti-muslim-twitter-troll-amy-mek-mekelburg_n_5b0d9e40e4b0802d69cf0264.

Postill, John. “Hacker, Hoaxer, Whistleblower, Spy: The Many Faces of Anonymous by Gabriella Coleman Brooklyn: Verso, 2014. 464 Pp.” American Anthropologist 117, no. 4 (2015): 823–823. https://doi.org/10.1111/aman.12389.

Roche, John P. Review of The Antifederalists: Critics of the Constitution, 1781-88, by Jackson Turner Main. Political Science Quarterly 77, no. 2 (1962): 271–73. https://doi.org/10.2307/2145879.

Roller, Emma. “Opinion | The Women Who Like Donald Trump.” The New York Times, May 10, 2016, sec. Opinion. https://www.nytimes.com/2016/05/10/opinion/campaign-stops/the-women-who-like-donald-trump.html.

Rosenblat, Alex. Uberland: How Algorithms Are Rewriting the Rules of Work. 1st ed. University of California Press, 2018. https://www.jstor.org/stable/10.1525/j.ctv5cgbm3.

Shinal, John. “Facebook Says 10 Million People Saw Russian-Bought Political Ads.” CNBC, October 2, 2017. https://www.cnbc.com/2017/10/02/facebook-says-10-million-people-saw-russian-bought-political-ads.html.

Tepper, Michele. “Usenet Communities and the Cultural Politics of Information.” In Internet Culture, edited by David Porter. Routledge, 1997. https://doi.org/10.4324/9780203948873-3.

Zuckerman, Ethan. “Understanding #Amina.” My Heart’s in Accra (blog), June 13, 2011. http://www.ethanzuckerman.com/blog/2011/06/13/understanding-amina/.


JoDS Editor:

Read a response to this article by Brandi Collins-Dexter:

https://jods.mitpress.mit.edu/pub/273294u8