Public Information on Russian Information Operations in the US, 2014–2016
“The real Facebook revolution is global,” a 2011 CNN editorial declared, “and it’s only just getting geared up.” The Christian Science Monitor described the prevailing opinion of the time: “This was the global revolution that Twitter built.”
Protesters had just unseated dictators in Egypt and Tunisia. The activists had organized, connected, and gained fame on social networks. Behind the scenes, Facebook and Twitter had been busy finding what kept users most engaged with their platforms. They looked at users’ browsing habits, and those of similar users, to offer a stream of content each user would like, favorite, and share. When Egyptian or Tunisian followers of the BBC, Voice of America, and dissidents went online, they found themselves surrounded and supported by pro-democracy friends and reports.
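The curation machinery behind those feeds can be sketched in a few lines: surface the items that users with overlapping histories engaged with most. The snippet below is a minimal sketch of that idea, not Facebook’s or Twitter’s actual system; the data and the overlap-based similarity are illustrative assumptions.

```python
# A minimal sketch of engagement-driven recommendation: score unseen items
# by how often users with overlapping histories engaged with them. This is
# an illustration of the general idea, not any platform's actual algorithm.
def recommend(user, histories, top_n=3):
    """histories maps each user to the set of items they engaged with."""
    seen = histories[user]
    scores = {}
    for other, items in histories.items():
        if other == user:
            continue
        overlap = len(seen & items)      # crude similarity: shared items
        if overlap == 0:
            continue
        for item in items - seen:        # only recommend items the user hasn't seen
            scores[item] = scores.get(item, 0) + overlap
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Hypothetical engagement histories.
histories = {
    "user_a": {"bbc_report", "protest_event"},
    "user_b": {"bbc_report", "protest_event", "voa_report"},
    "user_c": {"cat_video"},
}
print(recommend("user_a", histories))  # ['voa_report']
```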
With a billion people on social networks, the effect rippled across the world. From Burkina Faso to Azerbaijan to China, democratic protesters filled the streets. As headlines trumpeted a global revolution and a new democratic world order, the Kremlin was learning a different lesson: A change in the information environment — the way people find, process, and act on information — can lead to a change in regime.
Over the next five years, Russian generals, academics, and influencers would write volumes on, and invest millions in, command and control of the information environment. Investigative journalists and ex-operators would disclose how the Kremlin achieved those goals. Simultaneously, technology to predict, imitate, and alter human behavior came into its own. A new era, in which hostile powers could affect civilian populations in ways still unimaginable to many today, would begin.
A Gray Building Outside St. Petersburg
In late 2011, a protest organized through a Facebook event became the largest Russia had seen since the fall of communism. Two days before, the security firm Trend Micro blogged about hashtags in support of the event. All the tweets on those hashtags were nonsense. Not anti-protest, not pro-Putin, just nonsense. Bots were tweeting hundreds of times a minute to drown out supporters. A few months later Kommersant, the leading independent newspaper in Russia, reported that the Kremlin’s Foreign Intelligence Service had invested $1M in social media control operations.
Over the next year, Russians would talk about a change in the tone of online comments. That’s according to Alexandra Garmazhapova of Novaya Gazeta, an investigative newspaper. In 2013 she responded to a job posting at the Internet Research Agency (IRA) in St. Petersburg. The hiring manager there told her that people held a negative view of Russia and spread those views online, leading to more negativity. IRA’s job was to right that wrong. Garmazhapova would be posting 100 comments a day to that end. They’d tried robots for the job, but robots couldn’t get past Twitter’s and Facebook’s detectors. She aced the interview, but when IRA ran a background check and found she was a journalist, her undercover investigation ended.
IRA fit neatly into the Russian conception of information warfare. “A new type of war has emerged,” explained Vladimir Kvachkov, a former officer with Russia’s foreign intelligence service, in a published piece on the role of special forces, “armed warfare has given up its decisive place in the achievement of the military and political objectives of war to another kind of warfare — information warfare.”
Anonymous Collective, a hacker group, sent BuzzFeed hundreds of files taken from IRA showing how to put those directives into practice. The documents instructed employees to create and maintain six Facebook profiles, post three times a day, post selected stories fifty times, and gain 500 subscribers in their first month. They were to post on right-wing sites like Fox News, left-wing sites like the Huffington Post, and centrist ones like Politico. The documents show meticulous records of what worked, what didn’t, and how to adjust. “We are not hackers in the traditional sense,” an IRA spokesman explained. “We are trying to change reality.”
Three months later, hundreds of phones in Centerville, LA buzzed in unison. Their screens flashed with the same text: “Toxic fume hazard warning in this area until 1:30 PM. Take Shelter. Check Local Media and columbiachemical.com.” A search for #Centerville brought up images of CNN and the Times-Picayune reporting on an explosion at the town’s massive chemical plant. People began fearing the worst. One Twitter user asked Senator Jeff Merkley, “@SenJeffMerkley Jeff, Hope it wasn’t a terrorist attack in Louisiana, was it? #ColumbianChemicals”. Another asked Karl Rove, “@KarlRove Karl, Is this really ISIS who is responsible for #ColumbianChemicals?” Then a video surfaced of an outraged local watching footage of men in all black and ski masks, overlaid with the Al Jazeera logo and news crawl. ISIS had taken credit for the attack.
But there was no attack. There was no explosion. The CNN and Times-Picayune screenshots were doctored. The Arabic words in the ISIS video were all spelled backwards. The man in the video sounded more Australian than Cajun but didn’t really sound like either. The tweets, the Facebook posts, were all from bots.
“[F]alsifying events and imposing restrictions on the activity of the mass media are among the most effective asymmetric means of warfare,” explains the Russian Special Forces manual. By the time of the Centerville hoax, bots had become so good at spoofing real accounts that a third of people could not tell the two apart. Researchers at the University of Washington had found pro-Assad bot tweets among the most retweeted in all of Syria and automated accounts influencing entire networks.
Following two more coordinated, bot-driven hoaxes, one claiming an Ebola outbreak and one claiming a police shooting, Adrian Chen, an investigative journalist writing for The New York Times Magazine, began digging for answers. Everything led back to St. Petersburg. He traveled there, interviewed employees, tracked accounts, and tied the videos, screenshots, and bots back to IRA.
A consensus around the importance of information warfare had emerged in the Kremlin. “Wars will be resolved… primarily through informational superiority,” declared Sergey Chekinov of the Russian General Staff Academy. According to a field manual compiled by Kvachkov, information operations should not be limited to wartime but instead should be “conducted under conditions of natural competition, i.e. permanently.”
As 2016 approached, IRA had no fewer than 800 employees working Facebook alone, putting its minimum budget for a single social network at $6.5M. Employees had a quota of 135 comments every twelve hours. Human operators alone were responsible for 216,000 Facebook comments a day.
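Those figures fit together under one assumption: that the 800 Facebook positions were staffed around the clock in two twelve-hour shifts. A back-of-the-envelope sketch of that arithmetic, with the shift pattern treated as an assumption rather than a reported fact:

```python
# Back-of-the-envelope check on the figures above. The two-shift,
# around-the-clock staffing is an assumption, not a documented fact.
employees = 800          # reported Facebook headcount
quota_per_shift = 135    # comments required per twelve-hour shift
shifts_per_day = 2       # assumed 24-hour operation

comments_per_day = employees * quota_per_shift * shifts_per_day
print(comments_per_day)  # 216000
```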
Hacking the Electorate
When IRA bots drowned out tweets for the 2011 Moscow protests, the world was producing 4 zettabytes of data a year. By early 2016, the world generated that much data every two weeks. Companies took note, and the number of public firms mentioning “Artificial Intelligence” in earnings reports went from five in 2011 to two hundred in 2016. In 2011, 108 studies with “Big Data” in their titles were published. In 2016, 7,000 were.
Social networks put this data to work to incredible effect. They doubled their user base between 2011 and 2016. They increased their penetration of the American market from 50% to 69%. How the platforms were used changed, too. In 2013, 47% of Facebook users got their news there. In 2016, 66% did. Add in Twitter, Reddit, and others, and 62% of Americans in 2016 got their news from social media.
Few were prepared to distinguish real news in their feeds from Russian disinformation. When they liked fake news stories, the same algorithms that connected protesters in Egypt connected them to conspiracy theorists. When they retweeted fake news, the same algorithms that showed Tunisian protesters the BBC showed them more fake news. When they looked up hashtags, they found hundreds of bots that looked like them, tweeted like them, used the same hashtags, and all supported the same telling of events. The stories, posts, and tweets hewed closely to the guidelines for achieving “information-psychological confrontation objectives” in a Russian military manual, including “Direct lies for the purpose of disinformation, concealing critically important information, burying valuable information in a mass of information dross, simplification, confirmation and repetition and providing negative information, which is more readily accepted by the audience than positive.”
The bots and trolls together could bring about real-world action, like when bot-backed stories of a shooter at JFK Airport resulted in its closure. They could mobilize real Americans, as they did with the anti-Muslim protests in Houston in May 2016. They could influence the highest rungs of power, like when Paul Manafort, the Trump campaign chairman, publicly fell for a Russian hoax claiming an attack on a NATO base. He was far from alone: bots linked to Russia had sent more than 4,000 tweets in 80 minutes about the supposed attack. Kremlin-backed RT and Sputnik promoted the story. As a result, searches for the name of the base increased twentyfold.
In the months leading up to the election, a full fifth of political tweets had come from bots. Fake news tweets outnumbered tweets from official political accounts two to one, and four of the five most shared, liked, and commented-on news stories of the year were fake. A third of human users retweeted bots, and a tenth of the most influential accounts were automated. According to the Oxford Internet Institute, “bots reached positions of measurable influence during the 2016 US election.”
Reconquering Our Information Environment
The most common words in bots’ profiles were “God, Military, Trump, Family, Country, Conservative, Christian, America, and Constitution.” Those who would use that fact as a political bludgeon against Republicans further Russia’s goal of dividing Americans and ignore their own vulnerability. Russia has already begun targeting leftist groups, and Americans of all political parties perform abysmally when trying to spot fake news. Russian military officials view information operations as a means to “deprive [a state] of its actual sovereignty without the state’s territory being seized,” and the United States should treat them the same way.
Big data has given us insights into human behavior unimaginable a generation ago. Social networks have used those insights to build worlds that cater to users’ desires. Bad actors have used those insights to build worlds that cater to users’ worst instincts. An algorithmic match can ignite a fire of paranoia and distrust from a world away.
That information operations can wrest away control of how civilians perceive reality is an existential threat to every society that guarantees free speech. To build immunity, we must better understand what we are fighting. Measuring the information environment — how bad actors spread their propaganda, how effective it is, and why — is a long-overdue first step. We can only build strong defenses once we can measure how strong the existing ones are. Only once we know how to respond can we do so effectively. The future of democracy depends on it.
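As a minimal illustration of what one such measurement might look like, the sketch below computes the share of activity on a topic that comes from accounts already flagged as likely automated. The data, field names, and the idea of a pre-existing flag list are assumptions for illustration, not an existing tool or platform API.

```python
# Minimal sketch of one information-environment metric: what fraction of
# posts on a topic come from accounts flagged as likely automated. The
# posts and the flag list here are hypothetical.
def automated_share(posts, flagged_accounts):
    """Fraction of posts authored by accounts flagged as likely bots."""
    if not posts:
        return 0.0
    automated = sum(1 for post in posts if post["author"] in flagged_accounts)
    return automated / len(posts)

# Hypothetical sample: three posts on one hashtag, one account flagged as a bot.
posts = [
    {"author": "@bot_1234", "text": "Explosion at the plant! #ColumbianChemicals"},
    {"author": "@bot_1234", "text": "ISIS claims #ColumbianChemicals attack"},
    {"author": "@local_reporter", "text": "No incident reported at the plant."},
]
print(automated_share(posts, flagged_accounts={"@bot_1234"}))  # ~0.67
```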