On October 27, 2012, Facebook CEO Mark Zuckerberg wrote an email to his then-director of product development. For years, Facebook had allowed third-party apps to access data on its users’ unwitting friends, and Zuckerberg was considering whether giving away all that information was risky. In his email, he suggested it was not: “I’m generally skeptical that there is as much data leak strategic risk as you think,” he wrote at the time. “I just can’t think of any instances where that data has leaked from developer to developer and caused a real issue for us.”

If Zuckerberg had a time machine, he might have used it to go back to that moment. Who knows what would have happened if, back in 2012, the young CEO could have envisioned how it might all go wrong? At the very least, he might have saved Facebook from the devastating year it just had.

Facebook CEO Mark Zuckerberg was called to testify before Congress in the wake of the Cambridge Analytica scandal.

But Zuckerberg couldn't see what was right in front of him—and neither could the rest of the world, really—until March 17, 2018, when a pink-haired whistleblower named Christopher Wylie told The New York Times and The Guardian/Observer about a firm called Cambridge Analytica.

Cambridge Analytica had purchased Facebook data on tens of millions of Americans without their knowledge to build a “psychological warfare tool,” which it unleashed on US voters to help elect Donald Trump as president. Just before the news broke, Facebook banned Wylie, Cambridge Analytica, its parent company SCL, and Aleksandr Kogan, the researcher who collected the data, from the platform. But those moves came years too late and couldn't stem the outrage of users, lawmakers, privacy advocates, and media pundits. Immediately, Facebook’s stock price fell and boycotts began. Zuckerberg was called to testify before Congress, and a year of contentious international debates about the privacy rights of consumers online commenced. On Friday, Kogan filed a defamation lawsuit against Facebook.

Wylie’s words caught fire, even though much of what he said was already a matter of public record. In 2013, two University of Cambridge researchers published a paper explaining how they could predict people’s personalities and other sensitive details from their freely accessible Facebook likes. These predictions, the researchers warned, could “pose a threat to an individual’s well-being, freedom, or even life.” Cambridge Analytica's predictions were based largely on this research. Two years later, in 2015, a Guardian writer named Harry Davies reported that Cambridge Analytica had collected data on millions of American Facebook users without their permission, and used their likes to create personality profiles for the 2016 US election. However, in the heat of the primaries, with so many polls, news stories, and tweets to dissect, most of America paid no attention.

The difference was that when Wylie told this story in 2018, people knew how it ended—with the election of Donald J. Trump.

This is not to say that the backlash was, as Cambridge Analytica's former CEO Alexander Nix has claimed, some bad-faith plot by anti-Trumpers unhappy with the election outcome. There’s more than enough evidence of the company's unscrupulous business practices to warrant all the scrutiny it’s received. But it is also true that politics can be destabilizing, the way transport destabilizes nitroglycerin. Despite the theories and suppositions that had been floating around about how data could be misused, it took Trump’s election, Cambridge Analytica’s loose ties to it, and Facebook’s role in it for a lot of people to see that this squishy, intangible thing called privacy has real-world consequences.

Cambridge Analytica may have been the perfect poster child for how data can be misused. But the Cambridge Analytica scandal, as it's been called, was never just about the firm and its work. In fact, the Trump campaign repeatedly has insisted that it didn't use Cambridge Analytica's information, just its data scientists. And some academics and political practitioners doubt that personality profiling is anything more than snake oil. Instead, the scandal and backlash grew to encompass the ways that businesses, including but certainly not limited to Facebook, take more data from people than they need, and give away more than they should, often only asking permission in the fine print—if they even ask at all.

One year since it became front-page news, Cambridge Analytica executives are still being called to Congress to answer for their actions over the 2016 election. Yet the conversation about privacy largely has moved on from the now-defunct firm, which shut down its offices last May. That's a good thing. As Cambridge Analytica faded to the background, other important questions emerged, like how Facebook may have given special data deals to device makers, or why Google tracks people's location even after they've turned location tracking off.

Alexander Nix and other former Cambridge Analytica executives are still being called to Congress over the 2016 election.

There has been a growing recognition that companies can no longer be left to regulate themselves, and some states have begun to act on it. Vermont implemented a new law that requires data brokers that buy and sell data from third parties to register with the state. In California, a law set to go into effect in January would, among other things, give residents the ability to opt out of having their data sold. Multiple states have introduced similar bills in the past few months alone. On Capitol Hill, Congress is considering the contours of a federal data protection law—though progress is, as always in Washington, slow-going.

These scandals and blowbacks have badly bruised Facebook and arguably the entire tech industry. If Zuckerberg had trouble seeing the "risk" associated with sloppy privacy protections back in 2012, that risk should be all too familiar to him now. Facebook faces a potential record fine from the Federal Trade Commission, and just this week news broke that the company is under criminal investigation for its data-sharing policies.

At the same time, the fallout from the Cambridge Analytica flap has prompted Facebook to—at least in some respects—change its ways. Last week, in a hotly debated blog post, Zuckerberg claimed that Facebook’s future hinges on privacy. He said that Facebook will add end-to-end encryption to both Facebook Messenger and Instagram Direct as part of a grand plan to create a new social network for private communications.

Critics have debated whether Zuckerberg has finally seen the light or is actually motivated by more mercenary interests. Still, encrypting those chats would instantly enhance the privacy of billions of people's personal messages worldwide. Of course, it could also do plenty of damage, creating even more dark spaces on the internet for misinformation to spread and for criminal activity to fester. Just this past week, one of Zuckerberg's most trusted allies, Facebook's chief product officer Chris Cox, announced he was leaving the company, a decision that reportedly has a lot to do with these concerns.

A year after the Cambridge Analytica story broke, none of these questions about privacy has yielded easy answers for companies, regulators, or consumers who want the internet to stay convenient and free, and also want control over their information. But the ordeal at least has forced these conversations, once purely the domain of academics and privacy nerds, into the mainstream.

If only the world had seen it coming sooner.

