
With 1.3 million new posts every minute, it's impossible for the company's moderators to filter out all the nasty stuff

"Move fast and break things" was the exhortation that Facebook's founder, Mark Zuckerberg, originally issued to his developers. It's a typical hacker's mantra: the tools and features they developed for his platform might not be perfect, but speed was the key aspiration, even if there were some screw-ups along the way.

In 2016, we began to realise that one of the things that might get broken in Mr Zuckerberg's quest for speed is democracy. Facebook became one of the favourite platforms for disseminating fake news and was the tool of choice for micro-targeting voters with personalised political messages. It also became a live broadcasting medium for those engaging in bullying, rape, inflicting grievous bodily harm and, in one case, murder.

One way of thinking about the internet is that it holds up a mirror to human nature. All human life is there, and much of what we see reflected in it is banal (Lolcats, for example), harmless, charming, enlightening and life-enhancing. But some of what we see is horrifying: it is violent, racist, hateful, spiteful, cruel, misogynistic and worse.

There are about 3.4bn users of the internet worldwide. Facebook now has nearly 2bn users, which comes to around 58% of all the people in the world who use the internet. It was inevitable, therefore, that it too would become a mirror for human nature and that people would use it not just for good purposes, but also for bad. And so they have.

Zuckerberg and co were slow to realise that they had a problem. And when it finally dawned on them, their initial responses were robotically inept. The first line of defence was that Facebook is merely a conduit, a channel, an enabler of free speech and community building, and so has no editorial responsibility for what people post on it. The next tactic was to shift responsibility (and work) on to Facebook users: if anyone spotted objectionable content, then all they had to do was flag it and the company would deal with it.

But that didn't work either, so the next response was an announcement that Facebook was working on a technological fix for the problem: AI programs would find the objectionable stuff and snuff it out. This, however, turns out to be beyond the capabilities of any existing AI, so the company has now resorted to employing a small army (3,000) of human monitors who will examine all the nasty stuff and decide what to do with it.

In a spectacular scoop, the Guardian has obtained copies of the guidelines these censors will apply. They make for sobering reading. Moderators have only about 10 seconds to make a decision. Should something like "someone shoot Trump" be deleted? (Yes, because he's a head of state.) But what about "to snap a bitch's neck, make sure to apply all your pressure to the middle of her throat"? (Apparently that's OK, because it's not a credible threat.) "Let's beat up fat kids" is also OK, it seems. Videos of violent deaths, while marked as "disturbing", do not always have to be deleted, because they can help create awareness of issues such as mental illness. And so on.

As one digs into these training manuals, guidelines and slide-decks, the inescapable thought is that this approach looks doomed to fail, for two reasons. One is the sheer scale of the problem: 1.3m new posts every minute, 4,000 new photographs uploaded every second and God knows how many video clips. Even if only a fraction of the resulting content is unacceptable, dealing with it is a sisyphean task, way beyond the capacity of 3,000 people. (The Chinese government employs tens of thousands to monitor its social media.) The second reason is that Facebook's prosperity depends on this user engagement, so radical measures that might rein it in would undermine its business model. If Zuckerberg continues down this path, he's on track to be remembered as Canute 2.0.
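The scale argument can be sanity-checked with some back-of-envelope arithmetic. The post volume, headcount and 10-second decision budget are figures from the article; the assumption that only 1% of posts ever need a moderator's attention is purely illustrative:

```python
# Back-of-envelope check of Facebook's moderation workload.
# From the article: 1.3m new posts per minute, 3,000 moderators,
# roughly 10 seconds per decision.
POSTS_PER_MINUTE = 1_300_000
MODERATORS = 3_000
SECONDS_PER_DECISION = 10

# Illustrative assumption (not from the article): 1% of posts get flagged.
FLAGGED_FRACTION = 0.01

flagged_per_minute = POSTS_PER_MINUTE * FLAGGED_FRACTION
per_moderator_per_minute = flagged_per_minute / MODERATORS
seconds_available = 60 / per_moderator_per_minute

print(f"{flagged_per_minute:,.0f} flagged posts per minute")
print(f"{per_moderator_per_minute:.2f} items per moderator per minute")
print(f"{seconds_available:.1f}s available per item "
      f"(budget: {SECONDS_PER_DECISION}s)")
```

Even on this generous 1% assumption, the sums only work if all 3,000 moderators are reviewing simultaneously, around the clock, with under 14 seconds per item; allow for shifts, breaks and a higher flag rate, and the 10-second budget collapses.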

This is Facebook's problem, but it's also ours, because so much public discourse now happens on that platform. And a polluted public sphere is very bad for democracy. What we've learned from the Guardian's scoop is that Facebook's baroque, unworkable, ad hoc content-moderation system is unfit for purpose. If we discovered that the output of an ice-cream factory included a small but measurable quantity of raw sewage, we'd close it in an instant. Message to Zuckerberg: move quickly and fix things. Or else.
