Study discovered several accounts, now known to belong to the same Russian trolls who interfered in the US election, tweeting about vaccines
Bots and Russian trolls spread misinformation about vaccines on Twitter to sow division and distribute malicious content before and during the American presidential election, according to a new study.
Scientists at George Washington University, in Washington DC, made the discovery while trying to improve social media communications for public health workers. Instead, they found trolls and bots skewing online debate and upending consensus about vaccine safety.
The study discovered several accounts, now known to belong to the same Russian trolls who interfered in the US election, as well as marketing and malware bots, tweeting about vaccines.
Russian trolls played both sides, the researchers said, tweeting pro- and anti-vaccine content in a politically charged context.
“These trolls seem to be using vaccination as a wedge issue, promoting discord in American society,” Mark Dredze, a team member and professor of computer science at Johns Hopkins, which was also involved in the study, said.
“By playing both sides, they erode public trust in vaccination, exposing us all to the risk of infectious diseases. Viruses don’t respect national boundaries.”
The study, published in the American Journal of Public Health, comes as Europe faces one of the largest measles outbreaks in decades, one which has been partly attributed to falling vaccination rates. In the first six months of 2018, there were 41,000 cases of measles across the continent, more than in the entirety of 2017. Meanwhile, the rate of children not receiving vaccines for non-medical reasons is climbing in the US.
“The vast majority of Americans believe vaccines are safe and effective, but looking at Twitter gives the impression that there is a lot of debate,” said David Broniatowski, an assistant professor in George Washington’s School of Engineering and Applied Science.
“It turns out that many anti-vaccine tweets come from accounts whose provenance is unclear. These might be bots, human users or ‘cyborgs’ – hacked accounts that are sometimes taken over by bots. Although it’s impossible to know exactly how many tweets were generated by bots and trolls, our findings suggest that a significant portion of the online discourse about vaccines may be generated by malicious actors with a range of hidden agendas.”
Russian trolls appeared to link vaccination to controversial issues in the US. Their vaccine-related content made appeals to God, or argued about race, class and animal welfare, researchers said. Often, the tweets targeted the legitimacy of the US government.
“Did you know there was secret government database of #Vaccine-damaged child? #VaccinateUS,” read one Russian troll tweet. Another said: “#VaccinateUS You can’t fix stupidity. Let them die from measles, and I’m for #vaccination!”
“Whereas bots that spread malware and unsolicited content disseminated anti-vaccine messages, Russian trolls promoted discord,” researchers concluded. “Accounts masquerading as legitimate users create false equivalency, eroding public consensus on vaccination.”
Researchers examined a random sample of 1.7m tweets collected between July 2014 and September 2017 – a period spanning the American presidential campaign that led to Donald Trump’s victory. To identify bots, researchers compared the rate at which normal users tweeted about vaccines with the rate at which bots and trolls did so.
“We started looking at the Russian trolls, because that data set became available in January,” said Broniatowski. “One of the first things that came out was they tweet about vaccines way more often than the average Twitter user.”
Broniatowski said trolls tweeted about vaccines roughly 22 times more often than regular Twitter users – about once every 550 tweets, versus once every 12,000 tweets for human accounts.
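The "22 times more often" figure follows directly from the two per-tweet rates quoted above. A minimal sketch of that arithmetic, using only the article's numbers (the variable names are ours, for illustration):

```python
# Rates quoted in the article: trolls posted about vaccines roughly once
# every 550 tweets; typical human accounts, roughly once every 12,000.
troll_vaccine_rate = 1 / 550
human_vaccine_rate = 1 / 12_000

# Ratio of the two rates gives how many times more often trolls
# tweeted about vaccines than regular users.
ratio = troll_vaccine_rate / human_vaccine_rate
print(round(ratio))  # prints 22, matching the figure in the article
```

This is only the headline comparison; the study itself relied on classifying accounts as bots, trolls or humans before the rates could be compared.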
Researchers found different kinds of bots spread different kinds of misinformation. So-called “content polluters” used anti-vaccine messages as bait to entice their followers to click on advertisements and links to malicious websites.
The study comes as social media companies struggle to clean their houses of misinformation. In February, Twitter deleted 3,800 accounts linked to the Russian government-backed Internet Research Agency, the same group researchers at George Washington examined. In April, Facebook removed 135 accounts linked to the same organization.
This week, Facebook removed another 650 fake accounts linked to Russia and Iran meant to spread misinformation. Researchers did not study Facebook, though it remains a hub of anti-vaccination activity.
“To me it’s actually impressive how well-organized and sophisticated the anti-vax movement has become,” said Dr Peter Hotez, the director of the Texas Children’s Hospital Center for Vaccine Development at Baylor College of Medicine, and the father of an autistic child. Hotez, who maintains an active Twitter presence, said he struggled to identify whether Twitter accounts were human or bots.
“There are clearly some well-known anti-vax activists that I know to look out for and I know to block or to mute, but that’s a minority,” said Hotez. “A lot of it just seems to come out of nowhere, and I’m always surprised by that.”
One of the most striking findings, Broniatowski said, was an apparent attempt by Russian trolls to astroturf a vaccine debate using the hashtag #VaccinateUS. Accounts identified as controlled by the Internet Research Agency, a troll farm backed by the Russian government, were almost exclusively responsible for content emerging under #VaccinateUS.
Some of the Russian trolls even used hashtags such as #Vaxxed and #CDCWhistleblower, which are associated with Andrew Wakefield, the discredited former physician who published fraudulent papers linking vaccines with autism.
The Guardian requested comment from Twitter and was referred to a blogpost in which the company said its “focus is increasingly on proactively identifying problematic accounts”, and that its system “identified and challenged” more than 9.9m potential spam accounts a week in May 2018.