Twitter has announced more changes to its rules to try to make it harder for people to use its platform to spread politically charged disinformation and thereby erode democratic processes.
In an update on its “elections integrity work” yesterday, the company flagged several new changes to the Twitter Rules which it said are intended to provide “clearer guidance” on behaviors it’s cracking down on.
In the problem area of “spam and fake accounts”, Twitter says it’s responding to feedback that, to date, it’s been too conservative in how it thinks about spammers on its platform, and only taking account of “common spam tactics like selling fake goods”. So it’s expanding its net to try to catch more types of “inauthentic activity” — by taking into account more factors when determining whether an account is fake.
“As platform manipulation tactics continue to evolve, we are updating and expanding our rules to better reflect how we identify fake accounts, and what types of inauthentic activity violate our guidelines,” Twitter writes. “We now may remove fake accounts engaged in a variety of emergent, malicious behaviors.”
Some of the factors it says it will now also take into account when making a ‘spammer or not’ judgement are:
- Use of stock or stolen avatar photos
- Use of stolen or copied profile bios
- Use of intentionally misleading profile information, including profile location
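Profile-level signals like these could, in principle, be combined into a simple weighted score. Here's a minimal sketch of that idea — the signal names, weights and threshold are all invented for illustration and don't reflect Twitter's actual internal systems:

```python
# Illustrative only: a toy weighted-signal "spammer or not" score.
# Signal names, weights and the threshold are invented for this sketch.

SIGNAL_WEIGHTS = {
    "stock_or_stolen_avatar": 0.4,
    "copied_profile_bio": 0.3,
    "misleading_location": 0.3,
}

def spam_score(signals: dict) -> float:
    """Sum the weights of whichever suspicious signals fired for an account."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

def looks_fake(signals: dict, threshold: float = 0.5) -> bool:
    """Flag an account when its combined signal weight crosses the threshold."""
    return spam_score(signals) >= threshold
```

With weights like these, an account using both a stolen avatar and a misleading location (0.4 + 0.3 = 0.7) would be flagged, while a copied bio alone (0.3) would not — any real system would of course use far richer signals and behavioral data.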
Kremlin-backed online disinformation agents have been known to use stolen photos for avatars and also to claim accounts are US based, despite spambots being operated out of Russia. So it's pretty clear why Twitter is cracking down on fake profile pics and location claims.
Less clear: Why it took so long for Twitter’s spam detection systems to be able to take account of these suspicious signals. But, well, progress is still progress.
(Intentionally satirical ‘Twitter fakes’ (aka parody accounts) should not be caught in this net, as Twitter has had a longstanding policy of requiring parody and fan accounts to be directly labeled as such in their Twitter bios.)
Pulling the threads of spambots
In another major-sounding policy change, the company says it’s targeting what it dubs “attributed activity” — so that when/if it “reliably” identifies an entity behind a rule-breaking account it can apply the same penalty actions against any additional accounts associated with that entity, regardless of whether the accounts themselves were breaking its rules or not.
This is potentially a very important change, given that spambot operators often create accounts long before they make active malicious use of them, leaving these spammer-in-waiting accounts entirely dormant, or doing something totally innocuous, sometimes for years before they get deployed for an active spam or disinformation operation.
So if Twitter is able to link an active disinformation campaign with spambots lurking in waiting to carry out the next operation it could successfully disrupt the long term planning of election fiddlers. Which would be great news.
Albeit, the devil will be in the detail of how Twitter enforces this new policy — such as how high a bar it’s setting itself with the word “reliably”.
Obviously there's a risk that, if defined too loosely, Twitter could lock innocent newcomers out of its platform by incorrectly connecting them to a previously identified bad actor. Which it clearly won't want to do.
The hope is that behind the scenes Twitter has got better at spotting patterns of behavior it can reliably associate with spammers — and will thus be able to put this new policy to good use.
There’s certainly good external research being done in this area. For example, recent work by Duo Security has yielded an open source methodology for identifying account automation on Twitter.
The team also dug into botnet architectures — and were able to spot a cryptocurrency scam botnet which Twitter had previously been recommending other users follow. So, again hopefully, the company has been taking close note of such research, and better botnet analysis underpins this policy change.
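One of the simplest automation tells discussed in public research of this kind is posting cadence: humans tweet at irregular intervals, while bots often post on a clock-like schedule. The sketch below is a toy heuristic in that spirit — the cutoff value is invented for illustration and this is not Duo Security's actual methodology:

```python
# Toy automation heuristic: very regular posting intervals can hint at a bot.
# The max_cv cutoff is invented for illustration, not taken from real research.
from statistics import mean, pstdev

def interval_regularity(timestamps: list[float]) -> float:
    """Coefficient of variation of gaps between posts (lower = more clock-like)."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(gaps) / mean(gaps)

def likely_automated(timestamps: list[float], max_cv: float = 0.1) -> bool:
    """Flag accounts whose posting cadence is suspiciously regular."""
    return len(timestamps) >= 3 and interval_regularity(timestamps) <= max_cv
```

An account posting exactly once a minute would be flagged, while one with human-looking, bursty gaps would not; real detection methodologies combine many such features rather than relying on any single one.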
There’s also more on this front: “We are expanding our enforcement approach to include accounts that deliberately mimic or are intended to replace accounts we have previously suspended for violating our rules,” Twitter also writes.
This additional element is also notable. It essentially means Twitter has given itself a policy allowing it to act against entire malicious ideologies — i.e. against groups of people trying to spread the same sort of disinformation, not just a single identified bad actor connected to a number of accounts.
Take the example of InfoWars founder Alex Jones, the fake news peddler Twitter finally permanently banned last month. Under the new policy, any attempt by Jones' followers to create 'in the style of' copycat InfoWars accounts on the platform, i.e. to indirectly return his disinformation to Twitter, would (or, well, could) face the same enforcement action Twitter has already meted out to Jones' own accounts.
Though Twitter does have a reputation for inconsistently applying its own policies. So it remains to be seen how it will, in fact, act.
And how enthusiastic it will be about slapping down disinformation ideologies — given its longstanding position as a free speech champion, and in the face of criticism that it is ‘censoring’ certain viewpoints.
Hacked materials
Another change being announced by Twitter now is a clampdown on the distribution of hacked materials via its platform.
Leaking hacked emails of political officials at key moments during an election cycle has been a key tactic for democracy fiddlers in recent years — such as the leak of emails sent by top officials in the Democratic National Committee during the 2016 US presidential election.
Or the last-minute email leak in France during last year's presidential election.
Twitter notes that its rules already prohibit the distribution of hacked material which contains “private information or trade secrets, or could put people in harm’s way” — but says it’s now expanding “the criteria for when we will take action on accounts which claim responsibility for a hack, which includes threats and public incentives to hack specific people and accounts”.
So it seems, generally, to be broadening its policy to cover a wider support ecosystem around election hackers — or hacking more generally.
Twitter’s platform does frequently host hackers — who use anonymous Twitter accounts to crow about their hacks and/or direct attack threats at other users…