On paper, they would seem to have little in common. Tun Khin is a human rights activist who advocates for the persecuted Rohingya Muslims in his home country of Myanmar. Jessikka Aro is a Finnish journalist who exposed the international influence of Russian propagandists at the Internet Research Agency long before the rest of the world had ever heard of them. Lenny Pozner is an American father who lost his 6-year-old son, Noah, in the shooting at Sandy Hook Elementary in 2012. Ethan Lindenberger is almost a kid himself, a high school student who’s become a vaccination proponent despite his parents’ anti-vaccination beliefs.

But all four of them are bound by one unfortunate and common thread: They’ve all seen firsthand just how ugly—and downright dangerous—the spread of fake news and disinformation online can be. Which is why this week, they gathered in Silicon Valley to talk with tech executives about what they’ve been through and what they want tech companies to do about it. The group met with Twitter on Tuesday, and another meeting was planned at Facebook Wednesday afternoon.

The meetings, which were organized by a nonprofit advocacy group called Avaaz, come at a time of fierce debate over what responsibility tech companies have to limit the spread of toxic content on their platforms. Just last week, Facebook announced it was banning seven people, including Infowars conspiracy theorists Alex Jones and Paul Joseph Watson, under a policy that prohibits “dangerous individuals” from having any presence on Facebook. The bans prompted President Trump to lash out against tech companies over the weekend, ramping up accusations of censorship that have become a constant drumbeat on the right.

The discussions organized by Avaaz served as a counterpoint to all that pressure, as individual victims of online harassment campaigns came forward to tell tech companies exactly how they’ve been hurt by the hate and hoaxes that have festered on their platforms. “Our job as advocates is to make them stop for a minute and think about the implications of not acting fast enough,” says Oscar Soria, a senior campaigner with Avaaz.

During Tuesday’s meeting with Twitter, the attendees took turns telling their stories. Aro shared the details of the global smear campaign waged against her after her reporting outed the Internet Research Agency. She described the threats that have been made against her life and read a recent direct message she received while traveling in the Czech Republic, in which a stranger threatened to “castrate” her if she ever came back to the country.

Aro says the harassment she’s received violates Finnish defamation laws, and she is in the process of pursuing cases against some of her harassers in court. And yet, she says, the complaints she’s filed to Twitter and Facebook often go unanswered, leaving local investigators to do the work the American companies won't. “I'm basically here, to put it simply, to give a user report live, because they haven't reacted to the ones that I have made online,” Aro says.

Khin described the trauma he’s seen in Rohingya refugee camps and pressed Twitter about why it continues to provide safe haven for Senior General Min Aung Hlaing, the commander-in-chief of the Myanmar military. The military was behind some of the accounts that notoriously flooded Facebook with anti-Islam rhetoric, and the United Nations called for its leaders to face genocide charges last year. Facebook has since banned Min Aung Hlaing and other accounts and pages that the UN linked to human rights abuses in the country. While the general's Twitter account hasn’t been active since last year, it remains up on the platform today.

“He was the mastermind of the Rohingya genocide. The UN has said he was personally responsible. And Facebook has already banned him. What more evidence do they need?” Khin wrote in a tweet following the meeting.

Lindenberger, meanwhile, discussed how his parents came to believe anti-vaccination propaganda on social media, leaving him and his siblings exposed to potentially deadly diseases like measles. According to Soria, Lindenberger told Twitter executives that after he testified about this issue before the Senate, he himself became the subject of a disinformation campaign. Recently, he said, his pastor told him to avoid church for his own protection. (WIRED wasn't able to reach Lindenberger.)

Pozner, for his part, has faced such violent threats that he is participating in the meetings remotely. Ever since the Sandy Hook tragedy took his son's life, Pozner and his family have been forced to live in hiding, hounded by online death threats from people who believe that the shooting was a hoax. The conspiracy theory, propagated by figures like Alex Jones, has no basis in reality.

Now, Pozner runs a nonprofit called HONR Network aimed at ending online harassment campaigns, helping their victims, and working with tech companies to change their policies. Of all the tech platforms, Pozner says, Twitter has the farthest to go in terms of cracking down on hoaxes and harassment. "Twitter has allowed their platform to be used as a weapon of mass destruction for which they must take accountability," he says.

Twitter spokesperson Liz Kelley told WIRED that the conversation on Tuesday centered on how Twitter can prohibit the “manipulation of the conversation, not serving as the arbiters of truth,” and how Twitter is enforcing the policies against hate speech and violent threats that are already in place. “Hearing these stories is a valuable way for us to inform our decisions and product investments going forward,” Kelley said.

Facebook confirmed its executives met with the group, but declined to offer further comment. Avaaz's organizers also hoped to meet with executives from Google, whose video platform YouTube has helped promote some of the internet's worst conspiracies. As of Wednesday afternoon, a meeting with Google had not yet been scheduled.

In addition to giving the group a chance to share their stories, Avaaz encouraged Facebook and Twitter to adopt a policy that would alert people when they've been exposed to information marked false by third-party fact-checkers. Facebook has taken steps to expand fact-checking on its platform, recently announcing that it will limit the visibility of groups that repeatedly share content marked as false by fact-checkers. And just this week the company announced that fact-checkers will also begin vetting information on Instagram. Avaaz wants Twitter to adopt its own fact-checking policy and Facebook to build on the one that's already in place.

"This is a necessary step to restore public trust," Soria says.

Social media companies have been historically reluctant to make such editorial decisions on their platforms. And, given the recent heightened accusations of liberal bias in Silicon Valley, including from the President of the United States, making decisions about who is right and wrong on the internet comes with risks for these companies. Pozner just hopes these meetings will underscore the fact that the risks he and other victims have faced are so much greater.

"I am a strong proponent of the First Amendment, and free speech is an essential aspect of American society. However, there is a fundamental misunderstanding of people's rights and responsibilities online," Pozner says. "A person cannot violate my civil rights to be free of harassment, bullying, or to have my likeness manipulated and my family targeted with death threats and intimidation and then simply attempt to hide behind 'free speech.'"

Update: 9:27 AM ET 5/9/2019 This story has been updated to include confirmation from Facebook about its meeting with Avaaz and to clarify the nature of Aro's reporting on the Internet Research Agency.
