Facebook has problems. Fake news. Terrorism. Russian propaganda. And maybe soon regulation. The company’s solution: Turn them into artificial-intelligence problems. The strategy will require Facebook to make progress on some of the biggest challenges in computing.

During two days of congressional hearings last month, CEO Mark Zuckerberg referenced AI more than 30 times in explaining how the company would better police activity on its platform. The man tasked with delivering on those promises, CTO Mike Schroepfer, picked up that theme in a keynote and interview at Facebook’s annual developer conference Wednesday.

Schroepfer told thousands of developers and journalists that “AI is the best tool we have to keep our community safe at scale.” After the congressional hearings, critics accused Zuckerberg of invoking AI to mislead people into thinking the company’s challenges are simply technological. Schroepfer told WIRED Wednesday that the company had made mistakes. But he said that for Facebook—with more than 2 billion people on its service each month—AI is the only way to address those challenges.

Even if the company could afford to have humans check every post, it wouldn’t want to. “If I told you that there was a human reading every single one of your posts before it went up it would change what you would post,” Schroepfer says.

Facebook already uses automation to police its platform, with some success. Since 2011, for example, the company has used PhotoDNA, a tool originally developed by Microsoft, to detect child pornography. Schroepfer says the company’s algorithms have steadily improved and can now flag other kinds of images it wants to keep off its platform.
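PhotoDNA itself is proprietary, but the underlying approach is simple to picture: fingerprint known illegal images, then compare every upload against the fingerprint database. The sketch below illustrates that general idea with the open-source imagehash library standing in for PhotoDNA's algorithm; the file names and distance threshold are hypothetical.

```python
# Illustrative sketch of hash-based image matching, the general idea behind
# tools like PhotoDNA (whose actual algorithm is proprietary). A perceptual
# hash from the open-source "imagehash" package stands in for PhotoDNA's
# fingerprint; file names and the threshold are hypothetical.
from PIL import Image
import imagehash

# Hashes of known banned images, as they might be stored in a blocklist.
known_bad_hashes = [
    imagehash.phash(Image.open("known_banned_image.png")),
]

def is_banned(path, max_distance=4):
    """Flag an upload whose perceptual hash is near a known banned hash."""
    upload_hash = imagehash.phash(Image.open(path))
    # Subtracting two ImageHash objects gives their Hamming distance, so
    # visually similar images score low even after resizing or re-encoding.
    return any(upload_hash - bad < max_distance for bad in known_bad_hashes)

print(is_banned("new_upload.jpg"))
```

Because the comparison is against a fixed list of fingerprints, this kind of matching only catches copies of images that have already been identified, not new material.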

First came nudity and pornography, which Schroepfer describes as “on the easier side of the spectrum to identify.” Next came photos and videos that depict “gore and graphic violence”—think Isis beheading videos—which at a pixel-by-pixel level are difficult to distinguish from more benign imagery. “We're now fairly effective at that,” Schroepfer says.

But tough problems remain. Schroepfer says Facebook in recent months has been investing “a whole heck of a lot more” into the teams working on problems like election integrity, bad ads, and fake news. “It's fair to say we've pivoted a whole lot of the energy of the company over the last number of months towards all of these issues,” he says. Zuckerberg said earlier this week that he expected to spend three years building up better systems to catch unwanted content.

Facebook’s plan for an AI safety net faces larger challenges on problems that require machines to read, not see. For software to help fight fake news, online harassment, and propaganda campaigns like that mounted by Russia during the 2016 election, it needs to understand what people are saying.

Despite the success of web search and automated translation, software is still not very good at understanding the nuance and context of language. Facebook’s director of AI and machine learning, Srinivas Narayanan, illustrated the challenge in Wednesday’s keynote using the phrase “Look at that pig!” It might be welcome to someone sharing a snap of their porcine pet, less so as a comment on a wedding photo.
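One way to picture the problem: a classifier has to score the comment together with the post it sits under, not the phrase in isolation. Here is a toy sketch of that idea with invented training examples; it does not reflect Facebook's actual models.

```python
# Toy sketch (not Facebook's system) of context-aware comment classification:
# the model sees the comment and the post it was left on as one input, so the
# same phrase carries different features in different settings. The training
# examples are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "post: my pet pig at the farm || comment: look at that pig!",
    "post: our wedding day photos || comment: look at that pig!",
    "post: new puppy pics || comment: what a cutie",
    "post: graduation ceremony || comment: you look ridiculous, loser",
]
train_labels = ["ok", "harassment", "ok", "harassment"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# The same comment, scored under two different posts.
print(model.predict(["post: my pet pig at the farm || comment: look at that pig!"]))
print(model.predict(["post: our wedding day photos || comment: look at that pig!"]))
```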

Facebook has shown some progress with algorithms that read. On Wednesday, the company said that a system that looks for signs a person may harm themselves had prompted more than 1,000 calls to first responders since it was deployed late last year. Language algorithms helped Facebook remove almost 2 million pieces of terrorist-related content in the first quarter of this year.

Schroepfer says Facebook has improved its systems for detecting bullying by training them on fake data from software taught to generate insults. In a process called adversarial training, both the abuse hurler and blocker become more effective over time. That places Facebook among a growing number of companies using synthetic, or fake, data to train machine learning systems.
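Facebook hasn't published the details of that setup, but the general loop can be sketched in miniature: one piece of software generates insult-like text, a classifier learns to block it, and the blocker's misses feed the next round of training. In the hypothetical sketch below, a simple template-based generator stands in for the learned "hurler."

```python
# Simplified, hypothetical sketch of training a bullying detector on synthetic
# insults, loosely in the spirit of the setup the article describes. A real
# system would use learned models on both sides; here a template generator
# plays the role of the "abuse hurler."
import random
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

INSULT_TEMPLATES = ["you are such a {}", "what a {} you are", "nobody likes a {}"]
INSULT_WORDS = ["loser", "idiot", "clown"]
BENIGN_POSTS = ["great to see you", "congrats on the new job", "what a lovely photo"]

def generate_insults(n):
    """The stand-in 'hurler': produce n synthetic insult strings."""
    return [random.choice(INSULT_TEMPLATES).format(random.choice(INSULT_WORDS))
            for _ in range(n)]

def train_blocker(rounds=3, batch=50):
    texts, labels = list(BENIGN_POSTS), ["ok"] * len(BENIGN_POSTS)
    blocker = None
    for _ in range(rounds):
        fresh_insults = generate_insults(batch)
        # Keep only the insults the current blocker fails to catch, so each
        # new round of training focuses on the blocker's mistakes.
        if blocker is not None:
            fresh_insults = [t for t in fresh_insults
                             if blocker.predict([t])[0] != "abuse"]
        texts += fresh_insults
        labels += ["abuse"] * len(fresh_insults)
        blocker = make_pipeline(TfidfVectorizer(), LogisticRegression())
        blocker.fit(texts, labels)
    return blocker

blocker = train_blocker()
print(blocker.predict(["what a clown you are", "great to see you"]))
```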

Another hurdle: other languages. Facebook’s language technology works best in English, not just because the company is American, but because the technology is typically trained using text taken from the internet, where English dominates. Facebook’s figures indicate that more than half of its users don’t speak English. “That's a huge problem,” Schroepfer says.

Facebook is so dominant in some parts of the world that its language skills could even be a matter of life and death. UN investigators examining claims of genocide in Myanmar after the deaths of Rohingya Muslims said the company’s services had played a role in spreading hate speech against the group. Facebook has admitted that the crisis caught it without enough Burmese-language content reviewers.

Facebook is working on a project called MUSE that could one day make technology developed for one language work in a different language, without needing piles of new training data. Until that is practical, expanding Facebook’s AI systems to new languages depends on gathering fresh data to bring each one up to speed.
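The promise of MUSE-style aligned embeddings is that words from different languages land near each other in a single shared vector space, so a classifier trained only on English examples can score text in another language. The toy sketch below uses invented two-dimensional vectors to show that transfer; real aligned embeddings are high-dimensional and learned from large corpora.

```python
# Minimal sketch of the idea behind cross-lingual embeddings like MUSE.
# These toy vectors and words are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Pretend these English and Spanish words were aligned into one 2-D space.
aligned_vectors = {
    "hate":   np.array([0.90, 0.10]), "odio":     np.array([0.88, 0.12]),
    "stupid": np.array([0.80, 0.20]), "estupido": np.array([0.82, 0.18]),
    "love":   np.array([0.10, 0.90]), "amo":      np.array([0.12, 0.88]),
    "friend": np.array([0.20, 0.80]), "amigo":    np.array([0.22, 0.78]),
}

def embed(text):
    """Average the vectors of the words we know; ignore the rest."""
    vecs = [aligned_vectors[w] for w in text.lower().split() if w in aligned_vectors]
    return np.mean(vecs, axis=0)

# Train on English examples only.
english_texts = ["hate stupid", "stupid hate", "love friend", "friend love"]
english_labels = [1, 1, 0, 0]  # 1 = abusive, 0 = benign
clf = LogisticRegression().fit([embed(t) for t in english_texts], english_labels)

# Because the Spanish words sit near their English counterparts in the shared
# space, the English-trained classifier transfers to the Spanish text.
print(clf.predict([embed("odio estupido"), embed("amo amigo")]))
```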

In some cases—and places—that data could be slow to arrive. As the Myanmar problems showed, Facebook hasn’t chosen to build up the same language resources everywhere. In a conference session Tuesday on Facebook’s efforts to slow the spread of fake news, executive Tessa Lyons-Laing said machine learning software was learning to flag misinformation from the work of fact checkers at organizations like AP, who manually mark fake stories for Facebook. But she said the technology would work only where Facebook has established relationships with local fact-checking groups and built up a good collection of their data.

Schroepfer says that finding ways to move forward without depending on fresh human input is one of his main strategies for advancing AI. On Wednesday, Facebook researchers showed how billions of Instagram hashtags provided a free data source to set a new record in image recognition. Still, on many of Facebook’s trickiest problems, there’s no way to cut human judgment out of the loop. “AI is not a substitute for people when it comes to deciding what's okay and what's not okay up-front,” says Schroepfer. “AI is a great implementation tool to implement the rules once people have decided them.”
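The hashtag result follows a recipe that can be sketched in a few lines: pretrain a network on the noisy hashtag labels that come free with the photos, then swap in a new classifier head and fine-tune on the smaller, hand-labeled target task. The sketch below is a rough, hypothetical illustration using random tensors in place of images, not Facebook's actual pipeline.

```python
# Rough sketch of weakly supervised pretraining in the spirit of the hashtag
# work: pretrain on noisy hashtag labels, then replace the classifier head
# and fine-tune on the real task. Random tensors stand in for images, and the
# tiny network and label counts are invented for illustration.
import torch
import torch.nn as nn

NUM_HASHTAGS, NUM_TARGET_CLASSES = 1000, 10

backbone = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
hashtag_head = nn.Linear(16, NUM_HASHTAGS)

def train(model, head, images, labels, steps=5):
    params = list(model.parameters()) + list(head.parameters())
    opt = torch.optim.SGD(params, lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(head(model(images)), labels)
        loss.backward()
        opt.step()

# Stage 1: "pretraining" on hashtag labels scraped alongside the photos.
fake_instagram_images = torch.randn(32, 3, 32, 32)
fake_hashtag_labels = torch.randint(0, NUM_HASHTAGS, (32,))
train(backbone, hashtag_head, fake_instagram_images, fake_hashtag_labels)

# Stage 2: swap the head and fine-tune on the smaller, curated target task.
target_head = nn.Linear(16, NUM_TARGET_CLASSES)
fake_labeled_images = torch.randn(8, 3, 32, 32)
fake_target_labels = torch.randint(0, NUM_TARGET_CLASSES, (8,))
train(backbone, target_head, fake_labeled_images, fake_target_labels)
print("fine-tuned logits shape:", target_head(backbone(fake_labeled_images)).shape)
```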
