OpenAI, the Elon Musk-backed startup that wants to give away its artificial intelligence research, also aims to make sure AI isn’t used for nefarious purposes. That’s why it’s proposing a new kind of police force: call them the AI cops.

As its team of top researchers helps to hasten the spread of AI technology, this rather unusual startup is worried that such tech could spread too far—that someone else will make a breakthrough in secret and use it “for potentially malicious ends.” So it’s calling for other top researchers to join its ever-expanding operation and develop new technologies that can somehow detect these breakthroughs as they’re deployed in the real world.

The company’s founders believe that AI can make the world a much better place, but they also worry it could cause serious damage. “As AI systems become more and more capable and powerful, we’re going to see them deployed in a variety of ways, some more nefarious than others,” says Greg Brockman, the former chief technology officer of big-name payments startup Stripe who now oversees OpenAI. “The more the world is aware of what’s going on—the more there is scrutiny—the better.”

The Openness Paradox

Tesla CEO Elon Musk and Y Combinator President Sam Altman founded OpenAI along with some of the brightest researchers in the field, including several poached from Google and Facebook. Their aim is to rapidly accelerate the progress of AI, while also protecting the world from the rapidly accelerating progress of AI. When Musk and Altman unveiled the company last December, they said that they would freely share their AI with anyone who wants it—and that this would help stop miscreants from misusing artificial intelligence.

That’s not as paradoxical as it might seem. Musk and Altman believe that if you put AI in the hands of everyone, then more people will be prepared to combat bad actors. Instead of putting all our trust in AI tech built by one or two big companies, the thinking goes, we will have the power needed to stop code gone rogue. Though the idea has limits—you can’t share everything, lest it fall into the wrong hands—many believe OpenAI can act as an important check on AI superpowers like Google and Facebook by reducing “the probability that super-intelligence would be monopolized,” in the words of AI philosopher Nick Bostrom. Openly shared AI, Bostrom told me this spring, “can remove one possible reason why some entity or group would have radically better AI than everyone else.”

The call for an AI police force shows the beginnings of this philosophy put into practice. OpenAI doesn’t just want to build new AI and share it with the world. It wants to actively track down the bad guys. With the call—made late last week on its website—the startup is trying to figure out exactly how that would work.

“We don’t have a great idea yet,” says Ian Goodfellow, who recently joined OpenAI from Google. “That is part of why we put out the call for special research.” But the company does posit a few starting points.

One option is to home in on financial markets, online games, and online news services—areas where AI is already in use. Tracking these uses—maybe through more traditional research methods, maybe by building AIs to track other AIs—would allow the cops to see how far the technology has advanced in the real world and whether these uses seem less than wholesome. Goodfellow suspects that financial firms, for instance, are already using subtle tricks to fool AI systems used by competitors—and boost their profits in the process.
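Goodfellow’s hunch isn’t idle: his own research introduced what are known as adversarial examples, inputs nudged by tiny, carefully chosen amounts so that a machine-learning model misreads them. Here’s a minimal sketch of that trick, using the fast gradient sign method from research Goodfellow co-authored. The toy logistic-regression “competitor model,” its weights, and the market features are all invented for illustration, not anything OpenAI or any trading firm has described.

```python
# Toy demo of an adversarial perturbation (fast gradient sign method).
# The "competitor model" below is a made-up logistic regression, not a
# real trading system; epsilon bounds how far each feature may move.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical competitor model: maps four market features to a
# probability of "buy" (1) versus "sell" (0).
w = np.array([0.9, -1.3, 0.4, 2.1])
b = -0.2

def predict(x):
    return sigmoid(np.dot(w, x) + b)

x = np.array([0.5, 0.1, -0.3, 0.2])   # original feature vector
y = 1.0                               # the label the model should give

# For logistic regression with cross-entropy loss, the gradient of the
# loss with respect to the input x works out to (p - y) * w.
grad_x = (predict(x) - y) * w

# Nudge each feature a small step (epsilon) in whichever direction
# increases the loss most: x_adv = x + epsilon * sign(grad).
epsilon = 0.25
x_adv = x + epsilon * np.sign(grad_x)

print(f"original prediction:  {predict(x):.2f}")      # ~0.60: "buy"
print(f"perturbed prediction: {predict(x_adv):.2f}")  # ~0.32: "sell"
```

In this toy run, no feature moves by more than 0.25, yet the model’s confidence flips from favoring “buy” to favoring “sell.” An AI cop watching the markets might look for exactly this kind of implausible sensitivity in a rival’s behavior.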

Brockman, meanwhile, worries about the evolution of news. “Think about manipulating public opinion through social media. As you get more powerful automated systems, the capability there really increases,” he says. Brockman points to recent debates over how Facebook’s secret algorithms affect what people see in their News Feeds. “Studying systems like that—systems that already exist—is a good starting point.” He acknowledges that detecting malicious behavior is difficult, but that’s why OpenAI is calling on researchers to start exploring the possibilities. “It’s important to think about where you want to be—the problems that are going to be important,” he says.

At the same time, Brockman and company are calling on researchers to bootstrap new projects with a more positive vibe. Like many others, they want to build AI that can handle cybersecurity on its own. And they want to build an AI that could win an online programming contest, much as Google built a system that could beat the world’s top players at the ancient game of Go. “A program that can write other programs would be, for obvious reasons, very powerful,” they say.

Indeed it would. But it could also be very dangerous. OpenAI is seeking to strike a balance: making the benefits of such a system accessible while doing as much as possible to ensure the technology doesn’t go bad.
