Image: LUKAS SCHULZE/PICTURE-ALLIANCE/DPA/AP IMAGES

Facebook actually stopped harmful content from spreading. 

The company said in a press release Monday that it removed or added a content warning to 1.9 million pieces of “ISIS and al-Qaeda” content from January through March — twice as much as in the previous three months. 

Supposedly, 99 percent of that content was removed because Facebook’s technology and employees found it, not because users reported it. 

“In most cases, we found this material due to advances in our technology, but this also includes detection by our internal reviewers,” wrote Monika Bickert, Facebook’s vice president of global policy management, and Brian Fishman, its global head of counterterrorism policy. 

Facebook’s counterterrorism team has grown from 150 people last June to 200, the company said. Overall, terrorist material posted to Facebook was typically removed within one minute, according to the press release.

Facebook also made the unusual (dare we say editorial?) decision to define terrorism on its platform: “Any non-governmental organization that engages in premeditated acts of violence against persons or property to intimidate a civilian population, government, or international organization in order to achieve a political, religious, or ideological aim.”

No doubt cognizant that critics are sensitive about the social network’s political leanings — in a congressional hearing with Mark Zuckerberg earlier this month, Sen. Ted Cruz (R-Texas) trotted out familiar, paranoid concerns about its supposed anti-conservative bias — Facebook also went out of its way to say its “definition is agnostic to the ideology or political goals of a group.”

Terrorist organizations have used Facebook in the past to recruit new members, boast about attacks, and even share gruesome images of acts of violence, such as beheadings. The U.S. Department of Justice has claimed that ISIS uses Facebook, Twitter, and YouTube to target isolated young people in Europe, the United States, and Canada with recruitment messages.

Meanwhile, Facebook is still under attack for allowing the spread of propaganda and misinformation on its platform. It makes sense that it would want to show that technology and minor tweaks to staffing — as opposed to government regulation and changes to its business model — can prevent harmful messages, photos, and videos from going viral. Perhaps not coincidentally, Facebook will report its latest earnings this Wednesday.

“We’re under no illusion that the job is done or that the progress we have made is enough,” the company wrote. “Terrorist groups are always trying to circumvent our systems, so we must constantly improve. Researchers and our own teams of reviewers regularly find material that our technology misses. But we learn from every misstep, experiment with new detection methods and work to expand what terrorist groups we target.”
