In October 2017, Twitter general counsel Sean Edgett faced difficult questions from the Senate Judiciary Committee about foreign interference in the 2016 election. Flanked by representatives from Facebook and Google, Edgett explained how Russia’s Internet Research Agency (IRA) had systematically spread fake news and stoked partisan sentiment through a carefully coordinated, years-long social media campaign.

A year later, Twitter released an archive of more than 10 million tweets from 3,841 accounts it said were affiliated with the IRA, hoping to encourage “open research and investigation of these behaviors from researchers and academics.” The company has followed up with additional data dumps, most recently last month, when it released details of accounts linked to Russia, Iran, Venezuela, and the Catalan independence movement in Spain. All told, Twitter has shared more than 30 million tweets from accounts it says were “actively working to undermine” healthy discourse.

Researchers say the trove has been invaluable in learning about state-sponsored disinformation campaigns and how to combat them. Patrick Warren and Darren Linvill of Clemson University used the data to identify different kinds of troll behavior and examine how each contributed to the IRA campaign. “A lot of people have been using the data to try to come up with strategies to make our political conversation more robust,” Warren says. He points to a recent Stanford report that recommends regulating political ads, strengthening internal monitoring at social media companies, and standardizing labels for content linked to disinformation campaigns.

Still, much is missing from Twitter’s data dumps, and many questions remain about how much impact these accounts really had, how they operated, and how successful Twitter is at finding and shutting them down.

The data releases include the text of each tweet, the account names, the number of accounts each profile followed, the number of followers it had, and how many times each tweet was liked and retweeted. But Twitter doesn’t release the names of accounts that followed or were followed by these state-sponsored profiles, in order to protect the privacy of those users. “The real thing that we don’t know is who saw these tweets?” says Cody Buntain, a postdoctoral researcher at NYU’s Social Media and Political Participation Lab. “That’s the critical piece of information that Twitter does not provide.”
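For a sense of what researchers can and can’t do with these files, here is a minimal sketch of how one might summarize a data dump. The file name and column names are assumptions modeled on the fields described above (tweet text, follower counts, likes, retweets); the actual releases may label them differently.

```python
# A hedged sketch: summarize per-account engagement in a released archive.
# "ira_tweets.csv" and all column names here are assumptions, not the
# dump's confirmed schema.
import pandas as pd

tweets = pd.read_csv("ira_tweets.csv")

# For each account: how much it posted, how large its audience was,
# and how often its content was liked or retweeted.
summary = (
    tweets.groupby("user_screen_name")
    .agg(
        tweet_count=("tweet_text", "size"),
        followers=("follower_count", "max"),
        total_likes=("like_count", "sum"),
        total_retweets=("retweet_count", "sum"),
    )
    .sort_values("total_retweets", ascending=False)
)
print(summary.head(10))  # the ten most-retweeted accounts
```

Everything in that sketch relies only on fields Twitter does publish; what it cannot compute is anything about who actually saw or followed these accounts.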

Without those follower networks, Buntain and others say it’s hard to assess the impact of the accounts and how they grew and evolved over time. Did a bunch of fake accounts start following each other to give themselves the appearance of normalcy? Or did they start following specific people and grow their following organically? Researchers can’t say. With that information, “we could see what kind of content was the most engaging,” says Buntain. He says that information would also help us understand which niches of Twitter were targeted and how.
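The missing follower graph is what would answer those questions. As a purely hypothetical illustration, the sketch below builds the kind of directed follow graph researchers would want to study; every account and edge in it is invented, because the releases contain nothing from which real edges could be drawn.

```python
# Illustrative only: the releases don't include who followed whom, so this
# graph cannot be built from real data. Accounts and edges are made up.
import networkx as nx

# Hypothetical directed edges: (follower, followed).
edges = [
    ("troll_a", "troll_b"), ("troll_b", "troll_a"),  # mutual follows
    ("troll_a", "troll_c"),
    ("real_user_1", "troll_a"),
    ("real_user_2", "troll_a"), ("real_user_2", "troll_c"),
]
graph = nx.DiGraph(edges)

# High reciprocity among troll accounts would suggest fake accounts
# following one another for the appearance of normalcy; in-degree from
# outside accounts would suggest organic growth among real users.
print("reciprocity:", nx.reciprocity(graph))
print("in-degree:", dict(graph.in_degree()))
```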

The follower networks are public while an account is active, but they disappear once Twitter shuts it down. Exposing those followers could subject users to abuse or harassment. “I can see why the platforms would be hesitant,” says Ben Nimmo, a senior fellow at the Atlantic Council’s Digital Forensic Research Lab. People who followed IRA or other state-sponsored accounts may have been manipulated, but they weren’t breaking the law or even violating Twitter’s terms of service.

“We're committed to publishing every tweet, video, and image that we can reliably attribute to a state-backed information operation,” a Twitter spokesperson says via email. “We have an obligation to balance these important public disclosures with our commitment to protecting people's reasonable expectation of privacy, and we conduct thorough impact assessments before each.”

Twitter and other social media companies are trying to find a balance among transparency, user privacy, and a timely response to state-sponsored activity. Facebook, which was also targeted by the IRA and other groups before and after the 2016 election, has taken a different approach with its data. Instead of releasing troves of information to the public, Facebook partners with researchers it trusts, including the Digital Forensic Research Lab where Nimmo works. Facebook also shares data through an independent research commission called Social Science One that vets the information and the researchers who get access to it, hoping to prevent another Cambridge Analytica-style privacy breach.

Google, which owns YouTube, says it has taken steps to counter state-sponsored activity and to prevent phishing and hacking campaigns. The company shares information with law enforcement and with other social media companies, but it doesn’t usually release that information to the public. Google, along with Facebook and Twitter, released some information to researchers at Oxford’s Computational Propaganda Project, which issued a comprehensive report on the IRA’s impact on American politics from 2012 through 2018. That report noted that Google’s contribution was “by far the most limited in context and least comprehensive of the three.”

For all of Twitter’s openness, much remains unknown about its data releases. No one outside the company knows how Twitter finds suspicious accounts, how it defines “state-sponsored,” or how it distinguishes between acceptable and “malicious” content. Twitter doesn’t discuss how it chooses which countries and networks to focus on. As a result, it’s difficult to assess how successful the company is at ferreting out disinformation.

Twitter would not reveal any specifics about its process for this article. “We seek to protect the integrity of our efforts and avoid giving bad actors too much information, but in general, we focus on conduct, rather than content,” the Twitter spokesperson wrote in an emailed statement. “This means we look at the behavioral signals behind networks of accounts to intricately understand how they interact across the service,” the statement continued, adding that Twitter works with governments, law enforcement, and other tech companies to better understand such operations.
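Published research on coordinated campaigns offers hints of what a conduct-based signal can look like, even though Twitter’s own signals are secret. The toy example below sketches one such signal under that assumption: flagging distinct accounts that post identical text within seconds of one another. Nothing here reflects Twitter’s actual detection pipeline.

```python
# A toy behavioral signal: identical text posted by different accounts
# within a short window. Purely illustrative; Twitter has not disclosed
# which signals it actually uses.
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical (account, timestamp, text) records.
posts = [
    ("acct_1", datetime(2016, 10, 1, 12, 0, 5), "Vote early!"),
    ("acct_2", datetime(2016, 10, 1, 12, 0, 9), "Vote early!"),
    ("acct_3", datetime(2016, 10, 1, 18, 30, 0), "Nice weather today."),
]

window = timedelta(seconds=30)
by_text = defaultdict(list)
for account, when, text in posts:
    by_text[text].append((when, account))

for text, hits in by_text.items():
    hits.sort()  # order each text's posts by time
    for (t1, a1), (t2, a2) in zip(hits, hits[1:]):
        if a1 != a2 and t2 - t1 <= window:
            print(f"possible coordination: {a1} and {a2} posted {text!r} "
                  f"{(t2 - t1).seconds}s apart")
```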

But in keeping those specifics secret, Twitter and other social media companies make oversight impossible and make themselves the sole arbiters of what kinds of speech are authentic and legitimate, says Danny O’Brien, director of strategy at the Electronic Frontier Foundation. The platforms decide who is normal, who is newsworthy, and who is dangerous, without revealing how they make those judgment calls. “From a social standpoint this puts a huge amount of faith and trust and responsibility in the platforms,” says Buntain.

In some ways, the operations Twitter has identified in Russia, Iran, and elsewhere are low-hanging fruit. It’s against Twitter’s rules to impersonate someone in order to intentionally “mislead, confuse, or deceive others.” It’s also straightforward to say one country shouldn’t mount a massive, covert disinformation campaign to manipulate another country’s voters. But the issues get more complex when you look at domestic social media campaigns. Is it wrong for a political action committee to hire marketing and PR firms to promote specific ideas on social media? Or for a private citizen to set up a web of blogs and posts that promote particular candidates or disparage others? “Is the problem that people are trying to influence one another? Because if it is, then you’re probably going to have to ban elections, because that’s the whole point of elections,” O’Brien says.

Erin Gallagher, a social media researcher, says the market for this persuasive online activity is growing, getting more complex, and harder to categorize. “Globally we're looking at a smorgasbord of actors and methods in a cottage industry that no one really knows much about,” she wrote in an email.

In his 1970 book Culture Is Our Business, Marshall McLuhan examined American civilization through advertising. Part collage, part social commentary, it smashes McLuhan’s own frighteningly prescient observations against articles about smoking, quotes from Finnegans Wake, and ads for Hertz, Western Electric, Karmann Ghia, and TWA. “World War III is a guerrilla information war with no division between military and civilian participation,” he wrote.

That description mirrors the world some researchers describe: one in which personal political views and state-sponsored propaganda easily intermingle and are difficult to disentangle. “Basically this is where we are right now, and it’s a total clusterfuck,” wrote Gallagher. The line between a bad actor who intentionally posts misleading information and an individual promoting persuasive posts is muddy and hard to define.

As disinformation tactics spread, such ethical questions get even more complicated. Recent elections in Brazil and India were plagued by disinformation campaigns launched on WhatsApp, a Facebook-owned secure messaging service that uses end-to-end encryption. That encryption gives users an added expectation of privacy, but it makes it harder for researchers to monitor the platform. “Is it worth the risk of invading people’s privacy to collect the data that academics would need in order to understand how these platforms are being used?” asks Buntain. “I just don’t know the answer to that question.”

