Over the past year, Silicon Valley has been grappling with the way it handles our data, our elections, and our speech. Now it's got a new concern: our faces. In just the past few weeks, critics assailed Amazon for selling facial recognition technology to local police departments, and Facebook for how it gained consent from Europeans to identify people in their photos.

Microsoft has endured its own share of criticism lately around the ethical uses of its technology, as employees protested a contract under which US Immigration and Customs Enforcement uses Microsoft’s cloud-computing service. Microsoft says that contract did not involve facial recognition. Separately, a Microsoft facial-analysis service used by other companies has been shown to be far more accurate for white men than for women or people of color.

In an effort to help society keep pace with the rapid development of the technology, Microsoft President Brad Smith today is publishing a blog post calling for government regulation of facial recognition. Smith doesn’t identify specific rules; rather, he suggests, among other things, that the government create a “bipartisan and expert commission” to study the issue and make recommendations.

Smith poses a series of questions such a commission should consider, including potential restrictions on law-enforcement or national-security uses of the technology; standards to prevent racial profiling; requirements that people be notified when the technology is being used, particularly in public spaces; and legal protections for people who may be misidentified. But he doesn’t detail Microsoft’s view of the answers to those questions.

“In a democratic republic, there is no substitute for decision making by our elected representatives regarding the issues that require the balancing of public safety with the essence of our democratic freedoms,” Smith writes. “Facial recognition will require the public and private sectors alike to step up – and to act.”

Like many technologies, facial recognition can be useful or harmful. Internet users tap services from Google, Facebook, and others to identify people in photos. Apple allows users to unlock the iPhone X with their faces. Microsoft offers a similar service through Windows Hello to unlock personal computers. Uber uses Microsoft’s facial-recognition technology to confirm the identity of drivers using its app. Facial analysis can be a form of identification in offices, airports, and hotels.

But there are few rules governing use of the technology, either by police or by private companies. In the blog post, Smith raises the specter of a government database of attendees at a political rally, or of stores monitoring every item you browse, even those you don’t buy. Given the political gridlock in Washington, an expert commission may be a convenient way for Microsoft to appear responsible with little risk that the government will actually restrict its, or any other company’s, use of facial-recognition technology. But Smith says such commissions have been used widely (28 times in the past decade) with some success; he points to the 9/11 commission and the subsequent changes to the nation’s security agencies.

Outside the US, facial-recognition technology is used extensively in China, often by the government and with few constraints. Suspected criminals have been identified in crowds using the technology, which is widely deployed in public places.

Beyond government regulation, Smith says Microsoft and other tech companies should take more responsibility for their use of the technology. That includes efforts to act transparently, reduce bias, and deploy the technology slowly and cautiously. “If we move too fast with facial recognition, we may find that people’s fundamental rights are being broken,” he writes. Smith says Microsoft is working to reduce the racial disparities in its facial-analysis software.

Concern about the ethical uses of technology is not new. But the increasing power of artificial intelligence to scan faces, drive cars, and predict crime, among other things, has given rise to research institutes, industry groups, and philanthropic programs. Microsoft in 2016 created an internal advisory committee, cosponsored by Smith, on its use of artificial intelligence more broadly. In the post, Smith says the company has turned down customer requests to deploy its technology “where we’ve concluded there are greater human rights risks.” Microsoft declined to discuss specifics of any work it has turned down.

Microsoft’s approach wins praise from Eileen Donahoe, an adjunct professor at Stanford’s Center for Democracy, Development, and the Rule of Law. “Microsoft is way ahead of the curve in thinking seriously about the ethical implications of the technology they’re developing and the human rights implications of the technology they’re developing,” she says. Donahoe says she expects the post to spark conversations at other technology companies.

Some critics have suggested that tech companies halt research on artificial intelligence, including facial recognition. But Donahoe says that’s not realistic, because others will develop the technology. “I would rather have those actors engaging with their employees, their consumers and the US government in trying to think about the possible uses of the technology, as well as the risks that come from the use of the technology,” she says.

Michael Posner, director of the NYU Stern Center for Business and Human Rights, says he welcomes Microsoft’s statement. But Posner cautions that governments themselves sometimes misuse facial-recognition technologies, and urges companies to ensure that “those who develop these technology systems are as diverse as the populations they serve.” He also urges companies to develop “clear industry standards and metrics” for use of the technology.

