The algorithms Facebook and other tech companies use to boost engagement and increase profits have led to spectacular failures of sensitivity and worse

Facebook is asking users to send them their nude photographs in a project to combat ‘revenge porn’. Photograph: Alamy Stock

Earlier this month, Facebook announced a new pilot programme in Australia aimed at stopping “revenge porn” – the non-consensual sharing of nude or otherwise explicit photos – on its platform. Their answer? Just send Facebook your nudes.

Yes, that’s right: if you’re worried about someone spreading explicit images of you on Facebook, you’re supposed to send those images to Facebook yourself.

If this sounds to you like some kind of sick joke, you’re not alone. Pretty much everyone I talked to about it did a spit-take at the entire premise. But in addition to being ridiculous, it’s a perfect example of the way today’s tech companies are in over their heads, attempting to engineer their way out of complex social problems – without ever questioning whether their very business models have, in fact, created those problems.

To see what I mean, let’s look at how Facebook’s new scheme is meant to work: if you’re concerned about revenge porn, you complete an online form with the Australian eSafety Commissioner’s office. That office then notifies Facebook that you have submitted a request. From there, you send the image in question to yourself using Facebook Messenger. A team at Facebook retrieves your image, reviews it, then creates a numerical fingerprint of it known as a “hash”. Facebook then stores your photo’s hash, but not the photo itself, and notifies you to delete your photo from Messenger. After you’ve done so, Facebook says it will also delete the photo from its servers. Then, whenever anyone uploads a photo to the platform, an algorithm hashes it and checks the result against the database of reported hashes. If it finds a match, the upload is blocked.
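Facebook has not published the details of its matching technology, but the general idea is a perceptual hash: a short fingerprint that changes little when an image is resized or recompressed, so near-duplicates can be caught without storing the photo itself. Here is a minimal sketch of that idea in Python, using a toy “average hash” rather than whatever Facebook actually uses; the file names, threshold and reported_hashes set are purely illustrative.

# Toy illustration of hash-based image matching -- not Facebook's actual system.
# The "average hash" below shrinks an image, compares each pixel to the mean
# brightness, and packs the results into a 64-bit fingerprint.
from PIL import Image  # pip install Pillow

def average_hash(path, size=8):
    """Return a 64-bit perceptual fingerprint of the image at `path`."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits

def hamming_distance(a, b):
    """Count the bits that differ between two fingerprints."""
    return bin(a ^ b).count("1")

def is_reported(upload_path, reported_hashes, threshold=5):
    """Block an upload whose fingerprint is close to any reported one."""
    upload_hash = average_hash(upload_path)
    return any(hamming_distance(upload_hash, stored) <= threshold
               for stored in reported_hashes)

# Hypothetical usage: the platform keeps only fingerprints, never the photos.
reported_hashes = {average_hash("reported_photo.jpg")}
if is_reported("new_upload.jpg", reported_hashes):
    print("Upload blocked: matches a reported image.")

Even in this toy version, the fragility is visible: the match only works if the exact image (or something very close to it) was hashed in advance, and everything upstream – the upload, the human review, the deletion – still has to be handled by people and servers you have no choice but to trust.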

Mark Zuckerberg talks about protecting the integrity of the democratic process, after a Russia-based group paid for political ads on Facebook during the US election. Photograph: Facebook via YouTube

Just think for a moment about all the ways this could go wildly wrong. First off, to make the system work at all, you must not only have digital copies of all the images that might be spread, but also be comfortable with a bunch of strangers at Facebook poring over them, and trust that the hashing system will actually catch future attempts to upload the image. And that’s assuming everything works as planned: that you won’t screw up the upload process and accidentally send them to someone else, that Facebook staff won’t misuse your photos, that your Messenger account won’t be hacked, that Facebook will actually delete the image from its servers when it says it will. In short, you have to trust that Facebook’s design, backend databases and employees are all capable of seamlessly handling extremely personal information. If any one of those things doesn’t work quite right, the user is at risk of humiliation – or worse.

Given Facebook’s track record of dealing with sensitive subjects, that’s not a risk any of us should take. After all, this is the company that let Russian-backed organisations buy ads intended to undermine democracy during the 2016 election (ads which, the company now admits, millions of people saw). This is the company that built an ad-targeting platform that allowed advertisers to target people using antisemitic audience categories, including “Jew haters” and “How to burn Jews”. And this is the company that scooped up a screenshot of a graphic rape threat a journalist had received and posted on her Instagram account, and turned it into a peppy ad for Instagram (which it owns) that was then inserted on her friends’ Facebook pages.

And that’s just from the past few months. Looking further back, we can find lots more distressing stories – like the time in 2012, when Facebook outed two gay students from the University of Texas, Bobbi Duncan and Taylor McCormick. The students had used Facebook’s privacy settings to conceal their orientation from their families, but Facebook posted an update to their profiles saying they had joined the Queer Chorus.

Or how about the launch in 2014 of Facebook’s Year In Review feature, which collected your most popular content from the year and packaged it up for you to relive? My friend Eric Meyer had been avoiding that feature, but Facebook created one for him anyway, and inserted it into his feed. On its cover was the face of his six-year-old daughter, Rebecca, flanked by illustrations of balloons, streamers and people dancing at a party. Rebecca had died earlier that year. But Facebook’s algorithm didn’t know whether that was a good or bad image to surface. It only knew it was popular.

Since Year In Review, Facebook has only amped up this kind of algorithmically generated celebratory reminder. Now there’s On This Day, which, despite my telling Facebook that I don’t want to see these posts, still pops into my feed at least once a week. There’s also Friends Day, a fake holiday for which Facebook sends out algorithmically generated photo montages of users with their friends – resulting in one man receiving a video, set to jazzy music, showcasing his car accident and subsequent trip to the hospital.

Yet Facebook keeps layering celebratory messages and designs over users’ memories. Just last week, my sister-in-law received a notification covered in balloons and thumbs-up signs telling her how many people had liked her posts. The image Facebook chose to go with it? A picture of her broken foot in a cast. I can assure you, she didn’t feel particularly thumbs-up about falling down a flight of stairs.

Peppa and George about to be cooked by a witch in a YouTube spin-off of the Peppa Pig series. Source: YouTube

What all these failures have in common is that none of them had to happen. They happen because Facebook invests far more time and energy in building algorithmically controlled features meant to drive user engagement, or to give advertisers more control, than it does in thinking about the social and cultural implications of making it easy for 2 billion people to share content.

It’s not just Facebook that’s turned to algorithms to bump up engagement over the past few years, of course – it’s most of the tech industry, particularly the parts reliant on ad revenue. Earlier this month, writer James Bridle published an in-depth look at the underbelly of creepy, violent content targeted at kids on YouTube – from knock-off Peppa Pig cartoons, such as one where a trip to the dentist morphs into a graphic torture scene, to live-action “gross-out” videos, which show real kids vomiting and in pain.

These videos are being produced and added to YouTube by the thousand, then tagged with what Bridle calls “keyword salad” – long lists of popular search terms packed into their titles. These keywords are designed to game or manipulate the algorithm that sorts, ranks and selects content for users to see. And thanks to a business model aimed at maximising views (and therefore ad revenue), these videos are being auto-played and promoted to kids based on their “similarity” – at least in terms of keywords used – to content that the kids have already seen. That means a child might start out watching a normal Peppa Pig episode on the official channel, finish it, then be automatically immersed in a dark, violent and unauthorised episode – without their parent realising it.
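YouTube’s actual recommendation system is proprietary and far more sophisticated than anything shown here, but a toy keyword-overlap recommender is enough to show why “keyword salad” pays off. In this purely illustrative Python sketch, the knock-off wins simply because its stuffed title shares the most words with what the child just watched; the titles are invented for the example.

# Purely illustrative: rank "related" videos by how many title words they share.
def keywords(title):
    return set(title.lower().split())

def related(current_title, candidate_titles):
    """Sort candidates so the ones sharing the most keywords come first."""
    current = keywords(current_title)
    return sorted(candidate_titles,
                  key=lambda title: len(current & keywords(title)),
                  reverse=True)

watched = "Peppa Pig Official Episode Dentist Visit"
uploads = [
    "Peppa Pig Dentist Episode Official Kids Fun Learn Colours Surprise",  # keyword salad
    "Peppa Pig official channel: new episode",
    "Unrelated cooking tutorial",
]
print(related(watched, uploads)[0])  # the keyword-stuffed knock-off ranks first

Add autoplay on top of a ranking like that and the child never even has to choose the knock-off; it simply arrives.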

YouTube’s response to the problem has been to hand responsibility to its users, asking them to flag videos as inappropriate. From there, the videos go to a review team that YouTube says comprises thousands of people working 24 hours a day. If a video is found to be inappropriate for children, it is age-restricted and no longer appears in the YouTube Kids app. It still appears on YouTube proper, however – a platform that officially requires users to be at least 13 years old, but that countless kids use anyway (just think about how often antsy kids are handed a phone or tablet to keep them occupied in a public space).

Like Facebook’s scheme, this approach has several flaws. Because the inappropriate videos are mixed in with children’s content, the people most likely to encounter them – and therefore to be in a position to flag them – are kids themselves, and I don’t expect a lot of six-year-olds to become aggressive content moderators any time soon. And once content is flagged, it still has to be reviewed by humans, which, as YouTube has already acknowledged, takes “round the clock” monitoring.

When we talk about this kind of challenge, the tech companies’ response is often that it’s simply the inevitability of scale – there’s no way to serve billions of users endless streams of engaging content without getting it wrong or allowing abuse to slip by some of the time. But of course, these companies don’t have to do any of this. Auto-playing an endless stream of algorithmically selected videos to kids isn’t some sort of mandate. The internet didn’t have to become a smorgasbord of “suggested content”. It’s a choice that YouTube made, because ad views are ad views. You’ve got to break a few eggs to make an omelette, and you’ve got to traumatise a few kids to build a global behemoth worth $600bn.

And that’s the issue: in their unblinking pursuit of growth over the past decade, these companies have built their platforms around features that aren’t just vulnerable to abuse, but literally optimised for it. Take a system that’s easy to game, profitable to misuse, intertwined with our most vulnerable people and our most intimate moments, and operating at a scale that’s impossible to control or even monitor, and this is what you get.

The question now is, when will we force tech companies to reckon with what they’ve wrought? We’ve long decided that we won’t let companies sell cigarettes to children or put asbestos into their building materials. If we want, we can decide that there are limits to what tech can do to “engage” us, too, rather than watching these platforms spin further and further away from the utopian dreams they were sold to us on.

Technically Wrong: Sexist Apps, Biased Algorithms and Other Threats of Toxic Tech by Sara Wachter-Boettcher is published by Norton. To order a copy for £20 go to bookshop.theguardian.com or call 0330 333 6846. Free UK p&p over £10, online orders only. Phone orders min p&p of £1.99
