Google has uncovered evidence that Russian operatives exploited its platforms in an attempt to interfere in the 2016 U.S. election, according to the Washington Post.

The Post reports that tens of thousands of dollars were spent on ads by Russian agents aiming to spread disinformation across Google’s products — including its video platform YouTube, as well as via advertising associated with Google search, Gmail, and the company’s DoubleClick ad network.

The newspaper says its report is based on information provided by people familiar with Google’s investigation into whether Kremlin-affiliated entities sought to use its platforms to spread disinformation online.

Asked for confirmation of the report, a Google spokesman told us: “We have a set of strict ads policies including limits on political ad targeting and prohibitions on targeting based on race and religion. We are taking a deeper look to investigate attempts to abuse our systems, working with researchers and other companies, and will provide assistance to ongoing inquiries.”

So it’s telling that Google is not out-and-out denying the report — suggesting the company has indeed found something via its internal investigation, though it isn’t ready to go public with whatever it has unearthed as yet.

Google, Facebook, and Twitter have all been called to testify before the Senate Intelligence Committee on November 1, which is examining how social media platforms may have been used by foreign actors to influence the 2016 U.S. election.

Last month Facebook confirmed Russian agents had utilized its platform in an apparent attempt to sow social division across the U.S. by purchasing $100,000 of targeted advertising (some 3,000+ ads). The more pertinent question, though, is how far Facebook’s platform organically spread the malicious content; Facebook has claimed only around 10M users saw the Russian ads, while others believe the actual figure is likely far higher.

CEO Mark Zuckerberg has tried to get out ahead of the incoming political and regulatory tide by announcing, at the start of this month, that the company will make ad buys more transparent — even as the U.S. Federal Election Commission is running a public consultation on whether to extend political ad disclosure rules to digital platforms.

(And, lest we forget, late last year he entirely dismissed the notion of Facebook influencing the election as “a pretty crazy idea” — words he’s since said he regrets.)

Safe to say, tech’s platform giants are now facing the political grilling of their lives, and on home soil, as well as the prospect of the kind of regulation they’ve always argued against finally being looped around them.

But perhaps the greatest danger they face is huge reputational damage, should users learn to mistrust the information being algorithmically pushed at them — coming to see it as dubious, or even actively malicious.

While much of the commentary around the U.S. election social media probe has, thus far, focused on Facebook, all major tech platforms could well be implicated as paid conduits for foreign entities trying to influence U.S. public opinion — or at least any whose business entails applying algorithms to order and distribute third-party content at scale.

Just a few days ago, for instance, Facebook said it had found Russian ads on its photo sharing platform Instagram, too.

In Google’s case, the company controls vastly powerful search ranking algorithms, as well as the ordering of user-generated content on its massively popular video platform YouTube.

And late last year The Guardian suggested Google’s algorithmic search suggestions had been weaponized by an organized far-right campaign — highlighting how its algorithms appeared to be promoting racist, Nazi ideologies and misogyny in search results.

(Though criticism of tech platform algorithms being weaponized by fringe groups to drive skewed narratives into the mainstream dates back further still — such as to the #Gamergate fallout, in 2014, when we warned that popular online channels were being gamed to drive misogyny into the mainstream media and all over social media.)

Responding to The Guardian’s criticism of its algorithms last year, Google claimed: “Our search results are a reflection of the content across the web. This means that sometimes unpleasant portrayals of sensitive subject matter online can affect what search results appear for a given query. These results don’t reflect Google’s own opinions or beliefs — as a company, we strongly value a diversity of perspectives, ideas and cultures.”

But it looks like the ability of tech giants to shrug off questions and concerns about their algorithmic operations — and how those operations may be subverted by hostile entities — has drastically shrunk.

According to the Washington Post, the Russian buyers of Google ads do not appear to be from the same Kremlin-affiliated troll farm which bought ads on Facebook — which it suggests is a sign that the disinformation campaign could be “a much broader problem than Silicon Valley companies have unearthed so far”.

Late last month Twitter also said it had found hundreds of accounts linked to Russian operatives. And the newspaper’s sources claim that Google used developer access to Twitter’s firehose of historical tweet data to triangulate its own internal investigation into Kremlin ad buys — linking Russian Twitter accounts to accounts buying ads on its platform in order to identify malicious spend trickling into its own coffers.

A spokesman for Twitter declined to comment on this specific claim but pointed to a lengthy blog post it penned late last month on “Russian Interference in 2016 US Election, Bots, & Misinformation”. In that post, Twitter disclosed that the RT (formerly Russia Today) news network spent almost $275,000 on U.S. ads on Twitter in 2016.

It also said that, of the 450 accounts Facebook had shared as part of its review into Russian election interference, it had “concluded” that 22 had “corresponding accounts on Twitter”, most of which had already been suspended for spam, with the rest suspended after being identified.

“Over the coming weeks and months, we’ll be rolling out several changes to the actions we take when we detect spammy or suspicious activity, including introducing new and escalating enforcements for suspicious logins, Tweets, and engagements, and shortening the amount of time suspicious accounts remain visible on Twitter while pending confirmation. These are not meant to be definitive solutions. We’ve been fighting against these issues for years, and as long as there are people trying to manipulate Twitter, we will be working hard to stop them,” Twitter added.

As with the political (and sometimes commercial) pressure also being applied to tech platforms to speed up takedowns of online extremism, it seems logical that the platforms could improve internal efforts to thwart malicious use of their tools by sharing more information with each other.

In June Facebook, Microsoft, Google and Twitter collectively announced a new partnership aimed at reducing the accessibility of internet services to terrorists, for instance — dubbing it the Global Internet Forum to Counter Terrorism — and aiming to build on an earlier announcement of an industry database for sharing unique digital fingerprints to identify terrorist content.

But whether a similar kind of collaboration could emerge in future to collectively police political spending remains to be seen. Joining forces to tackle the spread of terrorist propaganda online may prove trivially easy compared with accurately identifying and publicly disclosing what is clearly a much broader spectrum of politicized content that has, nonetheless, also been created with malicious intent.

According to the New York Times, Russia-bought ads that Facebook has so far handed over to Congress apparently included a diverse spectrum of politicized content, from pages for gun-rights supporters, to those supporting gay rights, to anti-immigrant pages, to pages that aimed to appeal to the African-American community — and even pages for animal lovers.

One thing is clear: Tech giants will not be able to get away with playing down the power of their platforms in public.

Not at the congressional hearing next month. And likely not for the foreseeable future.
