Twitter today published a new version of its rules, in an effort to further clarify its policies about abuse, spam, self-harm and other topics, as well as to better explain how it determines the appropriate action – like suspending an abuser’s account, for example. The company says the updated documentation doesn’t represent changes to the “fundamentals” of its policies; it instead aims to explain the rules in more detail, and include examples.
The publication of the updated rules follows a series of revamps to Twitter’s policies surrounding online abuse, in the wake of extensive criticism that Twitter has become a haven for hate, violence and harassment.
In October, Twitter CEO Jack Dorsey promised the company would take a more aggressive stance in its rules and its enforcement of them. In his announcement, a response to the #WomenBoycottTwitter protest, Dorsey said that Twitter would develop new rules for things like unwanted sexual advances, non-consensual nudity, hate symbols, violent groups, and tweets that glorify violence.
In the days since, Twitter has followed through by announcing crackdowns on hate symbols and violent groups, revenge porn, and hateful display names. It also released its safety roadmap calendar, which promises specific actions on certain dates.
According to that calendar, Twitter was due to release an updated version of the Twitter rules on Friday, November 3rd.
In a blog post today, Twitter says that some of the bigger changes to the rules include updated sections on abusive behavior, self-harm, and graphic violence and adult content.
For example, Twitter says it’s now making it clearer that context is important when analyzing abusive behavior and choosing to take action. This, in part, is in response to earlier complaints that a provocative tweet by President Trump against North Korea wasn’t taken down by Twitter. The company then explained the tweet had newsworthy value, which is why it remained posted.
It also said it would soon update its rules to better reflect this policy.
That has now happened.
As Twitter says today, it will consider a tweet’s context before taking action, including “if the behavior is targeted, if a report has been filed and by whom, and if the Tweet itself is newsworthy and in the legitimate public interest.”
The newly updated rules additionally clarify Twitter’s policies around its enforcement related to tweets about self-harm and suicide, and they more clearly define spam. In the latter case, Twitter says the rules now explain that when it reviews accounts for spam-like behavior, it focuses on behavioral signals, not the accuracy of the spam account’s tweets.
But again, the updated documentation isn’t really putting forth any new policies today; it’s more about offering more detailed, clearer explanations on all these topics. There are expanded sections detailing how Twitter handles selling and squatting on usernames, for example, as well as rewrites of how it defines threats of violence, harassment, abuse, hateful conduct, exposure of private information, and other topics.
The company says it has worked on this clarified version of its rules for the past few months, incorporating feedback from its global Trust and Safety Council.
On November 22, Twitter promises to release another version of the rules that will include new policies around violent groups, hateful imagery, and abusive usernames. (These changes were originally listed on the Safety roadmap for today, however.)
Of course, the issue with Twitter historically hasn’t necessarily been that it was lacking rules, but rather that it seemed unable or unwilling to enforce them. That has allowed a culture of harassment and abuse to arise on its platform. It has even seen its own rules used by the harassers themselves to get their victims’ accounts banned.
In other words, writing up a clearer version of its user agreement is only one part of the equation here. To make a real impact on the problem of online abuse, Twitter will need to take action, too.