Goodhart’s Law, as phrased by Marilyn Strathern: “When a measure becomes a target, it ceases to be a good measure.” Honored more in the breach than the observance, alas. Our algorithmic world has turned so many measures into targets, and by doing so ruined them. Let’s talk about just one example: let’s talk about books.

“#1 Bestseller!” That’s a mark of perceived quality; thanks, “wisdom of the crowd.” That’s a measure. So that’s a target. What does “#1 Bestseller” mean these days? Well, whether we’re talking about the Kindle Store…

…or the New York Times bestseller list…

A young adult novel has been removed from the No 1 position on the New York Times bestseller lists, after detective work worthy of Nancy Drew by YA writers on Twitter uncovered a trail of strategic preorders being placed in particular US bookshops.

…it may not mean as much as one would hope. It turns out you can pretty much buy your way onto both. (True, the NYT yanked the faked bestseller from its list, but the miscreants would have gotten away with it if it weren’t for those meddling YA writers.)

I suppose this is no surprise in this post-truth age. And of course awards and rankings have always been manipulated to some extent. But now that ranking is so often algorithmic and uncurated, the system can be more easily — and, similarly, algorithmically — gamed. Which in turn, of course, becomes a political issue like everything else in the world, or at least in America, these days.

And so prideful authors try to buy their way onto the NYT and WSJ bestseller lists, and apparently sometimes succeed. They buy their way into becoming “Amazon Bestsellers.”

Meanwhile, Amazon tries to crack down on fake reviews, and third parties provide plugins to do the same. Meanwhile, others try to target books they disapprove of with “review abuse,” i.e. fake negative reviews. Just like fake news, it’s an arms race between the genuine and the fake, and it is far from apparent who is in the ascendancy.

I take this kind of personally because I’m the author of a clutch of novels myself. I decline, with some disgust, acquaintances’ offers to e.g. trade a fake five-star review of my books for a fake five-star review of their album. I never encourage my friends to post positive reviews. I sigh at the one-star rave reviews written by people who have apparently confused the Amazon system with that of Michelin. I tell myself that people can catch the scent of fake acclaim … but I fear that in truth many people often can’t.

Of course this doesn’t matter so much except to us weirdos who write books; but it’s an all too real example of a problem growing everywhere you turn. Fake news. Fake science. Fake credentials. Fake skills. People game, fake, or outright invent measures of all kinds, whether they be verifiable facts and statistics, social media connections, degrees, or accomplishments — and we suffer from so much information overload these days that we tend to rely on crude algorithms to do our first round of filtering, before we start spending our precious attention. How many babies are thrown out with that bathwater, and how much poison is allowed in?

There is hope, and it goes by the name of artificial intelligence. This kind of subtle pattern recognition, replacing crude measures-as-targets, is exactly the kind of thing that AI is good at. AI companies like Aspectiva are already working on spotting fake reviews, and others are parsing more useful data out of real ones.
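
For the curious, here is what that kind of pattern recognition might look like in miniature. This is a toy sketch in Python, not anything Amazon or Aspectiva actually runs: the sample reviews, the labels, and the model choice are all invented for illustration, and a real detector would lean on reviewer history, purchase verification, timing patterns, and far more signal than the text alone.

```python
# Illustrative sketch only: a toy fake-review classifier.
# The training data and threshold are hypothetical; real systems use
# much richer signals than the review text itself.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: review text paired with a fake/genuine label.
reviews = [
    "Absolutely amazing, best product ever, five stars, buy now!!!",
    "Solid read, though the pacing drags in the middle chapters.",
    "Incredible!!! Life changing!!! Everyone must own this!!!",
    "Enjoyed it overall; the ending felt rushed but the characters were strong.",
]
labels = [1, 0, 1, 0]  # 1 = fake, 0 = genuine (toy labels)

# Bag-of-words features plus a linear classifier: crude, but it shows the
# shape of the approach — learn patterns instead of trusting a raw star count.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reviews, labels)

# Score a new review; anything above an (arbitrary) threshold gets flagged.
suspect = "Best book in history!!! Perfect!!! Five stars!!!"
prob_fake = model.predict_proba([suspect])[0][1]
print(f"Estimated probability this review is fake: {prob_fake:.2f}")
```

The point of the sketch is the shift in approach: rather than counting stars, a learned model weighs many weak cues at once, which is exactly the sort of subtle judgment a crude measure-as-target can’t make.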

Of course, AI comes with its own set of bias, overfitting, and black-box problems … but they’re better ones to have than the problems we face now. Let’s hope that gamed ratings, fake reviews, fake news, and fake people will all be found out as such by tomorrow’s neural networks — for a window, at least, before other AIs start writing fake reviews that the first set of AIs cannot detect. The arms race continues.
