Brexit, however, soon began to look like just a dry run for Russian-linked trolls and political ads. As part of the US Senate intelligence committee’s ongoing investigation into how Russia used social media to influence the result of the election, representatives from Facebook, Google and Twitter have been obliged to submit evidence about relevant activity on their platforms. Twitter provided a list, 65 pages long, with the handles of some 36,746 Russian-linked bots that tweeted a total of 1.4m times. The company estimates these tweets were viewed 288m times.
Facebook also admitted to lawmakers that between June 2015 and August 2017, 11.4 million Americans saw advertisements placed by the Internet Research Agency (IRA), the Russian troll farm. These ranged from “like and share if you want Burqa banned in America” to claims that “Hillary is Satan, and her crimes and lies have proved just how evil she is. And even though Donald Trump is not a saint by any means, at least he is an honest man, and he cares deeply for his country.” At the bottom of the ad was a prompt to press “like” to help Jesus (Trump) win. The most successful ads were clicked on and shared by almost a quarter of the people who saw them. This means, according to Facebook, that up to 126 million residents (almost half of the US population) were likely to have seen a Russian-linked post.
A study by BuzzFeed found that the top five fake news items in the final weeks of the election were all damaging to the Clinton campaign. In other words, the Facebook algorithm picked a side – it’s not neutral.
“Because of algorithms, social media has never really been a forum for the best ideas to rise organically to the top,” says Goff. “What’s been shown in the past couple of years is not so much that people are inherently racist, sexist or hard right-wing, but that these platforms are engineered to be vulnerable to propaganda campaigns, among other interventions.” Bot accounts shared messages so furiously, and on such a grand scale, that it’s hard to believe the platforms didn’t notice. Algorithms “left to their own devices” mean that content generated by any random individual – with no journalistic track record, no fact-checking and no significant third-party filtering – can reach as many readers as, say, the BBC. And that’s a critical problem.
I’m a huge advocate of free speech, open democracy and online dialogue that mirrors the exchange of opinions that happens offline – at work, at home and even in classrooms. The issue is that it has become a free-for-all, a corruptible beast that we can’t control, or haven’t yet learned to. We don’t have the tools to deal with the scale of fraught challenges created by a new world of self-reference and “hands-off” proprietors.
Platforms are trying to figure out their role in it all – are they mere facilitators in bringing people together or something more? Who’s in charge? And who’s responsible when things go wrong?
Facebook insists it is not a media company but merely a “neutral technology pathway” facilitating connections between people. This is a misconceived and dangerous position. It is a media company with enormous influence in shaping people’s worldviews and their sense of whom to trust. And it is profit-driven. “Facebook makes money if the advertiser pays, regardless of whether people’s lives are being improved,” says Heiferman. In May 2017, Facebook reported that 98% of its quarterly revenue came from advertising, up from 85% in 2012. In other words, it’s in the company’s interests to keep our eyes glued to the screen, no matter what the content.
The tech leaders, rightly, are under fire for not doing nearly enough to detect and stem the flow of false information. Stung into action, in October 2017, Facebook made some initial “transparency and authenticity efforts” to force advertisers to verify their identity and label their ads more clearly.
“We’re making it possible to visit an advertiser’s page and see the ads they’re currently running,” says Samidh Chakrabarti, Facebook’s product manager for civic engagement. “We’ll soon also require organisations running election-related ads to confirm their identities so we can show viewers who exactly paid for them.”
In February 2017, in the run-up to the French presidential election, Facebook and Google News announced they were part of CrossCheck, an industry coalition of local media companies, including AFP, Le Monde and 15 others, to identify and fact-check dubious content. Stories flagged by two fact-checking companies as making false claims or pushing myths, such as Emmanuel Macron plotting a new tax on homeowners, were labelled as “contested”. Facebook also cracked down on more than 30,000 fake accounts in France, including those created by Russian intelligence agencies attempting to spy on Macron’s election campaign by posing as friends.
Many critics, however, and even some business leaders, believe that social media companies should be doing more, a lot more, regardless of the cost. “Protecting our community is more important than maximising our profits,” Zuckerberg said after the criticism. It won’t come cheap. According to David Wehner, the company’s chief financial officer, Facebook’s operating expenses could increase by 45% to 60% if the platform were to invest significantly in security features, or – horror of horrors – hire a lot more human beings to keep an eye on the algorithms.
Another, more insidious problem than even straight-out fake news, says Goff, is factual news presented out of context and designed to play on our brain’s tendency to jump to conclusions. Imagine, he says, a true story breaking about a Syrian refugee committing a murder.
“I can guarantee that Breitbart and Fox will spend the next 96 hours talking about nothing but this,” says Goff, “and it’s not fake news. In this scenario, it’s true. What they won’t say is that it’s the first murder committed by a Syrian refugee and that, statistically, murder by refugees is far [less frequent] than murder by those born in the United States and so this isn’t really a problem at all. The brain is vulnerable to these kinds of stories, stories that may not be part of a Russian misinformation campaign.”
Technology is only the means. We also need to ask why our political ideologies have become so polarised, and take a hard look at our own behaviour, as well as that of the politicians themselves and the partisan media outlets who use these platforms, with their vast reach, to sow the seeds of distrust. Why are we so easily duped? Are we unwilling or unable to discern what’s true and what isn’t, or to look for the boundaries between opinion, fact and misinformation? And what part are our own prejudices playing?
Luciano Floridi, of the Digital Ethics Lab at Oxford University, points out that technology alone can’t save us from ourselves. “The potential of technology to be a powerful positive force for democracy is huge and is still there. The problems arise when we ignore how technology can accentuate or highlight less attractive sides of human nature,” he says. “Prejudice. Jealousy. Intolerance of different views. Our tendency to play zero sum games. We against them. Saying technology is a threat to democracy is like saying food is bad for you because it causes obesity.”
It’s not enough to blame the messenger. Social media merely amplifies human intent – both good and bad. We need to be honest about our own, age-old appetite for ugly gossip and spreading half-baked information, and about our own blind spots.
Is there a solution to it all? Plenty of smart people are working on technical fixes, if for no other reason than the tech companies know it’s in their own best interests to stem the haemorrhaging of trust. Whether they’ll go far enough remains to be seen.
We sometimes forget how uncharted this new digital world remains – it’s a work in progress. We forget that social media, for all its flaws, still brings people together, gives a voice to the voiceless, opens vast wells of information, exposes wrongdoing, sparks activism, allows us to meet up with unexpected strangers. The list goes on. It’s inevitable that there will be falls along the way, deviousness we didn’t foresee. Perhaps the present danger is that in our rush to condemn the corruption of digital technologies, we will unfairly condemn the technologies themselves.