Chris Cox has long been the chief product officer for Facebook. He has also recently been promoted to run product at WhatsApp, Messenger, and Instagram, which means he is effectively in charge of product for four of the six largest social media platforms in the world. He recently sat down with WIRED editor in chief Nicholas Thompson at the Aspen Ideas Festival to talk about the responsibilities and plans of the platforms he helps run.

Nicholas Thompson: I'm going to start with a broad question. There are a lot of trade-offs that you talk about. There’s a trade-off between privacy and utility, right: The tougher your privacy settings are, the harder it is to build things and the harder it is for users to add apps. There’s a trade-off between free speech and having a safe community. There’s a trade-off between a totally neutral platform and making sure the highest-quality content thrives. So: Over the last year, as you’ve gone through this and as you think about the future, how has your thinking shifted on where the balance lies?

Chris Cox: It’s shifted immensely, on each of those dimensions. I started at the company 13 years ago; I joined when Facebook was 5 million American college students. It was called “The Facebook.” It was a directory only. There was really no tool for communication. People were using their real names and so it had the promise of being a place you could find each other, find a roommate, find a high school best friend, find your cousin’s new boyfriend, and you could learn about the people around you. The lesson we learned very early on was that these tools could be forces for people to come together around ideas. The first time we had a group with over 1 million people in it was a few days after we launched News Feed. There were 10 million people using the service and 1 million of them joined a group called Students Against News Feed. It was a huge misunderstanding. We did a bad job of explaining how the product worked. We worked through it, but the second and the third largest groups were groups raising awareness about humanitarian issues. The second largest group was a group about Darfur, which at the time was an under-reported humanitarian issue that a lot of college students cared about.

And so we had this feeling from the early days that this platform generally wanted to be a force for good, that people wanted to come together around ideas, and we should let that happen. And so the disposition was a lot more open than it is now. If you look at today, we have hundreds of our best people now working on protecting elections. And that’s the right thing for us to do—looking at over 40 countries, working with electoral commissions, data scientists, researchers, understanding the playbook of the Internet Research Agency, but also the playbooks of financially motivated spammers, who use the excitement around elections to try and make money from ad farms. There’s a whole list of things we have done over the past year and a half. We really said we need to be experts at this. We need to be working with world experts in each of these areas and each of these countries. And that is a big change in disposition that’s happened inside of the company.

NT: Back to those general dimensions that I mentioned. I'll just give my outsider’s guess on how you’ve shifted on all of them. So privacy versus utility: you guys have massively shifted toward privacy. And, in fact, I bet there are people inside the company who worry you’ve been pushed too far by the Cambridge Analytica outrage, and it’s kind of too hard to build things now, but you had to move really far on privacy. On free speech and community, you’re moving much more toward making a safe community and away from the initial Arab Spring-era idea of social media platforms as engines of free speech. Neutral platform versus high-quality content: you’re definitely moving toward high-quality content, much more of a publisher, less of a neutral platform. Am I right or wrong on those three?

CC: You’re right on all of it. And I think we’re trying to do this in a way where we’re putting decision-making in the hands of institutions that have a history, like fact-checkers. The way we’re combating the fake news problem is to identify when something’s going viral, then get it quickly to fact-checkers—we’re in 15 countries now, we want to be in more—and help the fact-checkers prioritize their work, so that rather than fact-checking whichever story may have come across their desk, they’re looking at the ones that are about to get traction on social media. Then you use that to reduce the distribution of the story and also to educate folks who are about to share it or those who are coming across the story on social media. The partnership with fact-checkers means that we can rely on institutions that have standards, that show their work, and that allow us not to be in a situation where we feel like we need to be making these really difficult calls. And they are difficult calls. I mean, the cover of Time magazine is a difficult call.

NT: The cover of Time magazine is a difficult call because it’s got a picture of a girl crying. It says, “Welcome to America,” but the girl wasn’t actually crying because she was separated from her parents, right?

CC: It was part of the debate in the fact-checking community this week.

NT: That’s a great example. Let’s talk about this disinformation stuff. You just laid out some of the ways you’re dealing with it in a text-based world, or text and image-based world. But the internet’s going to be mostly pictures and videos soon, and then we’re going to move to virtual reality, and then we’re going to move to like neural interfaces, where we’re all going to be connecting our brains. How are you going to fight and counter disinformation at these different levels? I kind of know how you’re doing it on text, I don't know how you’re doing it on images, I really don't know how you're doing it in VR.

'The way we’re combating the fake news problem is to identify when something’s going viral, then get it quickly to fact-checkers'

CC: So it’ll be the same playbook. We’ll be finding things that start to go viral, we’ll be sending them to fact-checkers. The two most interesting [things] for photos are things that are doctored and things that are taken out of context. Those are the two categories where we see the most activity on social media and on the internet. And we want to use the same principles, which is: we’re going to find what’s starting to move across Facebook and Instagram, we’re going to get it in front of fact-checkers, we’re going to let fact-checkers decide, and then we’re going to educate people when they see it and reduce its distribution. And then we’ll use artificial intelligence tools and classifiers to basically take what the fact-checkers have said, if it is a false story, and find other things that look like it.

NT: Wait, so stuff will start to go viral, and it will be controversial, and you’ll send it to humans, and then you’ll use AI? Won’t it be the other way around? Won’t it start to go viral, you’ll use AI, if the AI can't solve it, then it will go to humans?

CC: So you’ll find things that are going viral; that’s just counting. Then you’ll send them to fact-checkers. Then you’ll use fuzzy matching, as it’s called. It’s just finding things that are saying the same thing but are slightly different. This is important for photos, it’s important for links. We recently had a story in France—a health hoax—that said if you’re having a stroke, you should prick your fingers and your stroke will subside. You know, health hoaxes are as old as time. They're part of the rumor mill, they’re a part of gossip, they’re a part of conversation. But it’s really important to help people get educated about them. And in this instance, there were more than 1,000 stories that were all about this one hoax. And so rather than sending 1,000-plus stories to fact-checkers, we want to send one, and just have a tool that says these two things are the same.
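(For readers curious what "fuzzy matching" can look like in practice, here is a minimal sketch assuming a TF-IDF cosine-similarity approach. Facebook has not published its actual system, so the method, the function names, and the threshold below are illustrative assumptions only.)

```python
# A minimal sketch of "fuzzy matching" via TF-IDF cosine similarity.
# Facebook's real system is not public; this is an illustrative stand-in.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def group_near_duplicates(headlines, threshold):
    """Group headlines whose TF-IDF vectors are similar enough to be
    treated as one story for fact-checking purposes."""
    sims = cosine_similarity(TfidfVectorizer().fit_transform(headlines))
    groups, assigned = [], set()
    for i in range(len(headlines)):
        if i in assigned:
            continue
        group = [j for j in range(len(headlines))
                 if j not in assigned and sims[i][j] >= threshold]
        assigned.update(group)
        groups.append([headlines[j] for j in group])
    return groups

# One fact-check of the stroke hoax can then cover every variant in its
# group instead of 1,000-plus separate reviews. The threshold must be
# tuned; on a corpus this tiny, raw similarities run low.
stories = [
    "If you're having a stroke, prick your fingers",
    "Stroke victims: prick your fingers and the stroke will subside",
    "Local bakery wins award",
]
print(group_near_duplicates(stories, threshold=0.3))
```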

NT: What is your confidence level? In the 2016 election, there were bad guys putting out disinformation, there were good guys trying to stop it, good algorithms, and the bad guys won, right. What is your confidence level that in the 2018 election you’ve gotten good enough at this that you can prevent someone from hijacking an election?

CC: Well, we feel very good about every election we’ve had since we’ve put this team together. We’ve been working with electoral commissions ahead of time so we have a sense of how we’re doing in their eyes, which is really important. We’ve been doing that in Mexico [for Sunday’s election] for months now. We recently announced the takedown of 10,000 Pages, Groups, and accounts in Mexico and across Latin America because they violated our community standards, as well as the removal of 200,000 fake Likes, which could help artificially prop up political candidates. So, we’re not going to get 100 percent of everything, but I feel a lot more confident that we’ve developed our best teams with tools that are working. In the Alabama special election we saw thousands of economically motivated actors—meaning they’re just using spam to get people riled up—and each time we find one of these patterns we’re getting more competent at having the right antibodies to each type of problem. So: a lot more confident, but I can't be 100 percent sure there’s not going to be anything.

NT: So you feel the immune system is evolving more rapidly than the virus.

'We feel very good about every election we’ve had since we’ve put this team together'

CC: I do.

NT: That’s good to hear. Let’s talk about other viruses. One of the most interesting and complicated products in this suite of platforms you run is the toxic-comments filter on Instagram. Instagram built a system: They hired a bunch of humans to evaluate comments and say “this one is racist,” “this one is sexist.” They used that to train an algorithm, and now there’s an algorithm that will go through comments on Instagram and basically vaporize anything super mean. When is that product going to be fully deployed on Facebook?
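(The pipeline Thompson describes, where humans label examples, a model learns from the labels, and the model then screens new comments, is standard supervised learning. The sketch below is a deliberately tiny stand-in; Instagram's production system runs on far larger models, and the sample data, names, and threshold here are hypothetical.)

```python
# A minimal sketch of the humans-label-then-train pipeline described above.
# This logistic-regression toy and its data are illustrative only; it is
# not Instagram's actual filter.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical human-labeled training data: 1 = toxic, 0 = acceptable.
comments = ["you are garbage", "great photo!", "nobody likes you", "love this"]
labels = [1, 0, 1, 0]

# The human labels train a text classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, labels)

def visible_comments(new_comments, threshold=0.9):
    """Keep only comments the model does not confidently score as toxic."""
    toxic_probability = model.predict_proba(new_comments)[:, 1]
    return [c for c, p in zip(new_comments, toxic_probability) if p < threshold]
```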

CC: Again, you're in this balance of a platform for letting people say what they want and a platform that’s keeping people safe and helping people have constructive conversations. If it’s hateful, we’re going to take it down.

NT: Will you automatically take it down?

CC: We rely upon reporting, and then we build tools to help find language that is similar to the stuff that’s been reported as hateful. But it’s an area where people need to be involved because there are so many judgment calls around hate speech.

NT: But the filter will knock away stuff without any humans reviewing it or anybody flagging it.

CC: Based on language that’s being used on Instagram. One of the things we’re looking at, especially in my new role, is finding more places where we can reuse tools. We’re doing this in a bunch of places across Facebook and Instagram, for example taking down photos that violate our standards. The comments stuff isn’t as unified yet; we have different approaches. But on Facebook, to your question, the most interesting tool we’ve found is upvoting and downvoting. Good old-fashioned upvoting and downvoting, which is separate from liking, but just lets people surface comments that are helpful and push down comments that are unhelpful.

NT: Reddit, right? That’s the foundation of Reddit.

'On Facebook, the most interesting tool we’ve found is upvoting and downvoting.'

CC: Yeah, that’s Reddit. But it’s really effective at collapsing things that aren't helpful. It doesn't hide them, but it helps keep the conversation constructive, it helps create cross-cutting positive discourse, which is what you really want here. And that’s the direction we’re heading.
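(For the curious: one common way "good old-fashioned upvoting and downvoting" turns into a collapse decision, popularized by Reddit, is to rank comments by the lower bound of a confidence interval on their helpful-vote ratio. The sketch below uses that method as a stand-in; Facebook has not published its actual formula, and the collapse threshold here is invented.)

```python
# A sketch of confidence-adjusted vote ranking (Reddit-style Wilson score
# lower bound). Facebook's real comment-ranking formula is not public.
from math import sqrt

def wilson_lower_bound(upvotes, downvotes, z=1.96):
    """Lower bound of the 95% confidence interval on the helpful-vote ratio.

    Penalizes comments with few votes, so a brand-new 2-0 comment doesn't
    outrank a well-tested 90-10 one.
    """
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    p = upvotes / n
    return (p + z * z / (2 * n)
            - z * sqrt((p * (1 - p) + z * z / (4 * n)) / n)) / (1 + z * z / n)

def rank_comments(comments, collapse_below=0.2):
    """Sort helpful comments to the top; mark low scorers collapsed, not hidden."""
    for c in comments:
        c["score"] = wilson_lower_bound(c["up"], c["down"])
        c["collapsed"] = c["score"] < collapse_below
    return sorted(comments, key=lambda c: c["score"], reverse=True)
```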

NT: So, to summarize, on Instagram, if somebody writes something nasty about me on my feed it will be vaporized automatically. On Facebook, somebody writes something nasty about me, somebody will flag it and it may be vaporized the next day.

CC: With a little more detail underneath it, yes.

NT: When does it become a free speech issue? Is it just when you delete it or don't delete it? Is it also a complicated free speech issue when you’re shrinking the image size or collapsing comments?

CC: They’re all on the continuum of free speech and safety. We published in April, for those of you who are interested in reading the 64-page guide, exactly how we decide. We also have the two-page version, which is our community standards, which is just: these are the things we don’t allow on the platform. Then we have the long version, which is: here’s exactly how we think about a hate speech issue, how we understand what is a contextualized slur (which is a whole thing) versus a reclaimed slur, which can be part of a group expressing identity in solidarity. And so these are all hard calls. We work with world experts in these areas to arrive at our policies. We publish the policies so that they can be debated, and that’s kind of where we stand. For the things we don’t remove, like misinformation, we want folks to be able to see the content as well as the education around it, so we inform people. That’s where we say this has been disputed, we surface related articles that link to the fact-checkers, and we reduce distribution so that these stories don't go viral.

NT: I want to ask one more question about this. When I was looking at the Instagram filter and I was asking why won’t this be implemented on Facebook, one person told me, “Well, it will never be implemented on Facebook, because as soon as you show that you can build a hate speech filter on Facebook, the German government will mandate that you use it, and it will become an impossible situation because every government will say we want to use your filter.” Is one reason you’re not deploying the tool because of the requests that would come if you deployed it?

CC: No. We published our transparency report. Every six months we release a report where we go through each of the categories of content, like fake accounts, terrorist content, hate speech, and we publish how many pieces we reviewed and how many we took down. The goal is just to have this stuff out in the open so that we can have a conversation about how we’re doing. And we can have scrutiny from people who study each of these areas, scrutiny from journalists, and scrutiny from people in each country to understand how we can do better. We like having this stuff out in the open in general. One of the things you’ll see in there is which things we’re able to take down proactively. With terrorist content, we’re able to take the vast majority of it down before it even shows up on the platform. This is stuff like ISIS. Hate speech is the really, really hard one. Because it is such a human judgment. And it is such a contextual judgment. And it's one where we’re relying on policies written by people who study this for their entire lives. And we’re very committed to it because it [creates] a really bad experience, especially where it can lead to real-world harm, and that’s going to be the driving principle for how we think about the work.

NT: Alright, let’s talk about the algorithm. At Facebook, one of the most important things is the algorithm that determines News Feed. And my critique of the algorithm has always been that the factors that go into it favor Cheetos over kale. They favor likes and immediate shares. The factors that favor kale, like the ratio of shares after reading to shares before reading, or time spent reading an article, matter less, and the impulse stuff matters more. Obviously, the algorithm has been evolving. You made a whole bunch of changes to it this year, but let’s start with the different things that you can measure on the Cheetos-versus-kale continuum, how you think about the different measurements, and what new tools you have for measuring this stuff.

'Hate speech is the really, really hard one. Because it is such a human judgment. And it is such a contextual judgment.'

CC: The most important tool is what people tell us. We’ll show people side-by-side, thousands of people every day: Which of these things do you want to read? Why? We hear back the same thing: I care about friends and family more than anything. That is why we announced this ranking change in January; there had been a huge influx of video and other content from Pages, which is often great, but it had drowned out a lot of the friends-and-family stuff. So the most important quality change we made is to make sure that people don't miss stuff from their friends and family. That’s number one. The second, from what we’re able to discern, is that people want to have conversations around stuff on Facebook. They don't want to be passively consuming content. This is connected with the research on well-being, which says that if you go somewhere and you just sit there and watch and you don't talk to anybody, it can be sad. If you go to the same place and you have five or six good conversations about what’s going on in the world, what you care about, you feel better. You learn something. There’s a sense of social support. And that is exactly how we should think about digital and social media, which is: To what extent are they building relationships versus being places that are passive experiences? And so the ranking change we announced in January was helping to prioritize friends and family, but then beyond that, things that were creating conversations between people, because we heard from people: That’s why I'm here. The third area is focusing on quality. And that’s really about the news that gets distributed on Facebook. This isn’t why people come to Facebook primarily, but it is an important part.

NT: It’s a very important part.

'That is exactly how we should think about digital and social media, to what extent are they building relationships versus being places that are passive experiences.'

CC: Exactly. For people who are coming to the platform, for democracy, for your paper. And what we’ve tried to do there is reduce clickbait, sensationalism, the things that people may click on in the moment because there’s an alluring headline, but then be disappointed by. And that’s where we’ve done an immense amount of work. We’ve been doing this work for a long time but we’ve doubled down on the work over the last two years.
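(As a rough illustration of the reweighting Cox describes: friends-and-family content prioritized, conversation-generating posts boosted, passive Page content and clickbait demoted. Every signal and weight in the sketch below is invented for illustration; Facebook's actual ranking model is not public.)

```python
# A minimal sketch of "meaningful interactions" style feed reweighting.
# Signals and weights are invented; this is not Facebook's real model.
def feed_score(post):
    score = 0.0
    if post["from_friend_or_family"]:
        score += 3.0                         # January change: friends first
    score += 2.0 * post["comment_count"]     # conversations between people
    score += 0.5 * post["like_count"]        # passive signals count for less
    if post["is_clickbait"]:
        score -= 5.0                         # demote alluring-but-disappointing links
    return score

def rank_feed(posts):
    """Order a feed so relationship-building content outranks passive content."""
    return sorted(posts, key=feed_score, reverse=True)
```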

NT: So let’s say I leave this room, I get to my laptop, and I write two articles. One has the headline: “I had this really profoundly interesting conversation with Chris Cox, here’s a transcript of it, here are the seven smartest things he said”, and I post that on Facebook. And then I take something you say and I kind of take it out of context and say, “Chris Cox says we should shut down Time.” Or let’s take something that you say a little bit out of context and make it salacious. The second one is still going to get a lot more likes and shares, right?

CC: To use my intuition, probably. Yeah.

NT: And so how do you stop that? Or how do you change that?

CC: Well, I think the most important thing there is whether over the long run that is building a good relationship with your readers or not. That is why I think the work on digital subscriptions is so important. A digital subscription is a business model that helps somebody have a long term relationship with a newspaper. Which is different from a one-at-a-time relationship.

NT: It’s a marriage versus a one-night stand.

CC: I wasn’t going to say that, but yeah, it’s a longer-term relationship. And you’re seeing, for older institutions and newer ones, digital subscriptions as a growing business model on the internet. And it’s one that we’re committed to helping out on, because we like the property that it helps create a relationship between a person and an institution. We actually just announced, this week, a really interesting result on a digital-subscription product we’re building to help publishers take readers and convert them to subscribers on our platform. They get to set the meter, which is how many free reads you get, and they keep the revenue. It looks like it’s performing better than the mobile web, which is what we hoped: that we can offer them something that improves their business. But it gets to what I think is the heart of the matter, when we start to talk about being in a headline culture, which, by the way, is not unique to social media. And that’s: How do we think about business models that are about long relationships? I think that’s a fascinating conversation, and to me it’s a really important area for us to go as an industry.
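(The "meter" Cox mentions is the standard metered-paywall pattern: the publisher picks how many free reads a visitor gets before a subscription offer appears. The class below is a hypothetical sketch of that pattern only, with invented names; it is not Facebook's actual subscriptions API.)

```python
# A hypothetical sketch of a publisher-controlled metered paywall.
# Class and method names are invented to illustrate the pattern.
class PaywallMeter:
    def __init__(self, free_reads_per_month):
        self.free_reads = free_reads_per_month  # set by the publisher
        self.read_counts = {}                   # reader_id -> articles read

    def on_article_open(self, reader_id):
        """Return which view to show when a reader opens an article."""
        count = self.read_counts.get(reader_id, 0) + 1
        self.read_counts[reader_id] = count
        if count > self.free_reads:
            return "subscription_offer"  # convert the reader to a subscriber
        return "full_article"

# Example: a publisher allows five free reads per month.
meter = PaywallMeter(free_reads_per_month=5)
for _ in range(6):
    view = meter.on_article_open("reader-123")
print(view)  # "subscription_offer" on the sixth read
```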

NT: And as someone who’s just launched a paywall and subscription model at WIRED, that is all music to my ears. Journalists and news organizations have been worried, fretful, since your changes were introduced in January, maybe even going back to when they were being beta-tested, because traffic is going down. We’re talking about it at WIRED. When you see drops of 20 percent, 25 percent in your Facebook referral traffic, there’s some concern that Facebook is getting out of the news. Is it?

CC: No. What we’ve done here is rebalance; this goes back to the ranking change I just talked about from January, where we’re trying to rebalance based on what people tell us. Which is: They want to have conversations with people they care about on Facebook, primarily. And among the news they get, they want it to be good. They want it to be informative. They don't want to be fooled, they don't want to be deceived, they don't want to look back on it and feel like they were hoodwinked. That’s all the work we’re doing on clickbait, on quality, on working with fact-checkers, and so on. And I think we do have immense responsibility on both of those.

NT: Let’s talk about regulation. You were just in Washington, your boss was also just in Washington, we all watched him on TV, probably there’s going to be some kind of regulation. The spectrum basically goes from, we’re going to ask for citizen education, to we’re going to have tough privacy regulation and tough hate speech regulation, all the way to antitrust. What is your sense of the way to make regulation work in a way that allows you to continue to innovate?

CC: I was in Washington last week, meeting with senators and civil society groups. We do a product road show just to help folks understand the work we’re doing on elections. It was a fascinating week to be in Washington; we had all the immigration stuff going on. Whether this takes the form of regulation or not is an important point, but to me the conversation is just that we need to be spending more time understanding, from people whose job it is to represent the opinions of the state, what their big issues are and what tech can do about them. I think that is so productive. To me, the positive version of this is just a lot more dialogue in each of these arenas: How should we think about data use? How do we communicate about data use? It’s a very difficult problem, a problem for the next decade: How is a person to think about their data? Where is it, what can they do about it, how can they control it, how should they feel? I'm hopeful that what all of this is leading to is just a lot more clarity in each of these arenas.

NT: So you want more clarity, but let me just go through how you feel about some regulations. Again, I'll just take the approach of guessing what Facebook’s position is. So, antitrust, clearly you are against that. The German hate speech law, my guess would be, you think it was an overreach because it puts the burden of identifying hate speech on you, meaning you have to hire tons of people, and also, the easy way out of it is just to delete everything from the platform.

CC: I'm not even sure if Germany feels like that was a good policy.

'How is a person to think about their data? Where is it, what can they do about it, how can they control it, how should they feel?'

NT: GDPR [Europe’s new data-protection law], it seems like you’re conflicted about it. You rolled out a whole bunch of new stuff here that seems like you’re kind of in favor of a lot of what GDPR did.

CC: Yep, absolutely.

NT: And then on the sort of the easy spectrum, like the Honest Ads Act, it seems like you’re actively [supporting] it.1 So on that end of the spectrum, you’re good with it.

CC: You know, one of the things we did with GDPR is work with the folks who were writing the laws, in addition to the usual research process, where you’re sitting down with privacy experts, you’re sitting down in user research, you’re asking about comprehensibility and understanding: What is the design of the thing that the most people emerge from understanding and feeling good about? It can't be 100 pages long. If you make it one page long, everybody says you don't share enough; if you make it 10 pages long, no one’s going to read it. It’s a hard one. But it’s nice when you can do it and say, “And this is something we did in cooperation with the government.” So it helped to have a body of people who were saying the thing is certified.

NT: My theory of government regulation is that it’s very hard for governments to regulate tech companies, because by the time a bill is passed, everything has evolved past what they were thinking about. So my dream regulation would be for government to get you together, to talk a lot, and to threaten you really aggressively, but then not do anything. And then you would self-regulate really closely.

CC: That’s happening right now. I mean, each one of these arenas is something where we need to be really dialed in, on both exactly how the product works and the research we’ve done to support it. I'm personally really proud of the work we’ve done in each of these areas, and my biggest takeaway from Washington is that once we explain the work, they’re pretty excited about it. And the biggest thing happening is misunderstanding. Not understanding the election stuff we’ve done already, not understanding the way we’ve done research to design for GDPR…

NT: Not understanding that you sell ads.

CC: Well, I don't mean it like that; it’s on us. You know, these are really brilliant people, who do study and read the literature.

NT: I interviewed Zuckerberg after the Cambridge Analytica scandal hit, and we were talking a little bit about regulation. He said one reason regulation is hard is that AI is going to be the most important tool for solving the problems on the platform, and regulation will be put in place before all this AI gets implemented. I agree with that. And I agree with using AI to solve all kinds of problems, even problems we haven’t imagined. But the people who cause problems will also have AI, right. And AI will also create amazing opportunities for hacking; you can hack into the training data, for example. Explain to me, sort of conceptually, how you think about the arms race between AI in the service of making Facebook a better platform and AI in the service of using Facebook to try and destroy the world.

CC: First of all, AI should be thought of as a general technology, like electricity. It can be used in a lot of different ways. It’s being talked about in a lot of different timeframes; it’s the buzzword of the festival this year, which is good. It’s tied up in the future of jobs, it’s tied up in the future of medicine, it’s tied up in a lot of the important conversations about how we’re going to make the world a better place by taking advantage of the power of this technology. Take the French medical hoax example: if we didn't have a classifier that could quickly find all the stories that looked like it, that hoax probably would have gone viral. And the most important application of this work for us right now is in that kind of stuff, safety and security. I am not aware of seeing that sort of sophistication on the other side of the arms race in this arena so far. We’re obviously going to pay attention to it, but if you look at the score right now, I think it’s massively in favor of security and safety.

NT: The most important thing you do financially is sell ads. And the best product you’ve built is this tool that can identify whom I should target. When I worked at The New Yorker, it was an amazing tool, because we used it to sell subscriptions to people who, based on their habits as measured by Facebook, were likely to buy New Yorker subscriptions. So you built this incredible ad tool based on slicing and dicing populations. The biggest problem with Facebook is filter bubbles and groups where misinformation becomes disinformation and people become radicalized, which, again, is based on slicing and dicing. My presumption would be that one of the reasons filter bubbles exist is because you can get into a small group of like-minded people. And sometimes in that small group of like-minded people, you get more and more radicalized, whether it’s into a political view or a view about vaccines causing autism. And so the question is whether the business model is tied to the problematic elements of filter bubbles and radicalization within groups.

CC: I don’t think it is. And I'll tell you why. I think one of the most important misunderstandings, based on the academic research, concerns the literature around polarization: how social media changes a media diet, which is really the underlying issue. Are you exposed to a broader set of information or a narrower set of information? And the literature says it’s complicated. It’s complicated because a world without social media as a primary source of information in the US is going to be cable news, which, according to the researchers, is a massively polarizing thing.

NT: Oh, definitely.

CC: So what’s interesting—and this is what the empirical research says—is that social media exposes you to a broader media diet, because it connects you with the friends around you, “weak ties” it’s called in the literature. This is the person you went to high school with, it’s the person you used to work with, it’s people you’d never message otherwise, people who you wouldn’t necessarily keep in touch with without Facebook and Instagram. They tend to read something different from you, and you tend to trust them. And it’s where you tend to get the most cross-cutting discourse, which is to say people bonding over an issue that isn’t politics, and then listening to one another on an issue that is politics. The vast majority of Groups on Facebook are not political. They are a mothers’ group, a group of locksmiths, a group of people who play Quidditch together in London (actual Quidditch!). What we’ve heard, and this is the vast majority of the Groups on the platform, is that these are places where bonding happens and bridging happens. Which, in the literature of community leadership, in the literature of polarization, is an incredibly important thing.

NT: I'm not going to counter that, but I'll say this: You can believe that Facebook is less polarizing than cable news, and that Groups are generally good, and also believe that Facebook should be working hard to counter the polarization that does exist, both within Groups and within the regular feed.

CC: Which I agree with.

NT: So then, how do you counter it more?

CC: I think the key things to look for there are sensationalism, hate, and misinformation. These are the things we’ve seen on the platform, and we need to find them through a combination of reporting and detection, and then we need to deal with them.

NT: You mentioned earlier that changing the business model of journalism toward subscriptions and away from views has a beneficial effect on the industry. What about changing the way ads work within that context and saying: You can't slice and dice on political content, you can't use custom audiences for a campaign?

CC: The tricky one here is that there’s a massive amount of good that’s done when you let a very small business (say, a barber shop in London that has zero customers, has $10, and wants to start advertising) speak to people in a particular age group, because they know who their customers are; they just need a way to reach them. And on the ledger of the good that is enabled when you allow people to reach small audiences, we think it comes out vastly positive, because small entrepreneurs, small businesses, a small news magazine that wants to reach a particular type of person couldn't afford to reach people the way advertising worked prior to the internet. I believe in that. If you go out and talk to small business owners in the US, somewhere between 50 and 60 percent say our platform was responsible for helping them grow their business meaningfully, and that translates to more successful small entrepreneurs out there. Then the question is: What about political and issue advertising? There, on the one hand, you have people trying to raise money for important causes. You have nonprofits in Texas trying to raise money to help reunite children with their parents. And to say you can’t do this on our platform, we think, would be wrong. So what we’ve done is to release an archive, to label every single ad with exactly where it is coming from, and to let people—journalists, civil society, watchdog groups, experts—study the way the advertising is being used, so that we can have it out in the open. We can have a conversation out in the open and, frankly, we can have help from people who are studying very specifically this one group of people in Ohio, helping us spot when there’s misuse, and we’re going to go after it.

NT: And then you can use your other tools to help promote the people who are helping find lost children and knock away the ones who are using it to spread Russian propaganda.

CC: It’s interesting. Did anybody here hear about this fundraiser last week? This was one of the more interesting things that happened on our platform last week: a fundraiser for a Texas nonprofit raising money to reunite children with their parents after they were separated at the border. It raised $20 million in six days. It was started by a couple, Dave and Charlotte Willner in California, whose ambition was to raise $1,500. And it created a copycat phenomenon. And it’s powerful because it’s letting people do something. It's a release. And it’s a contribution to what the national conversation was last week.

The interview then turned to audience questions.

1 CLARIFICATION, July 6, 3:55PM: Facebook supports the Honest Ads Act. An earlier version of this story said Facebook is actively lobbying for the bill.

