Twitter, which is constantly criticized for not doing enough to prevent harassment, has updated its guidelines with more information on how it handles tweets or accounts that encourage other people to hurt themselves or commit suicide.
The update follows an announcement by Twitter Safety last week that users can now report profiles, tweets and direct messages that encourage self-harm and suicide.
While we continue to provide resources to people who are experiencing thoughts of self-harm, it is against our rules to encourage others to harm themselves. Starting today, you can report a profile, Tweet, or Direct Message for this type of content.
— Twitter Safety (@TwitterSafety) February 13, 2018
In a new section on its Help Center titled “Glorifying self-harm and suicide,” Twitter outlined its approach to tweets or accounts that promote or encourage self-harm and suicide. The company says its policy against encouraging other people to hurt themselves is meant to work in tandem with its self-harm prevention measures as part of a “two-pronged approach” that involves “supporting people who are undergoing experiences with self-harm or suicidal thoughts, but prohibiting the promotion or encouragement of self-harming behaviors.” Twitter already has a form that lets users report threats of self-harm or suicide and a team that assesses tweets and reaches out to users they believe are at risk.
Twitter says first-time offenders may be temporarily locked out of their accounts and have the offending tweets encouraging self-harm or suicide removed. Repeat offenders may have their accounts suspended.
Last fall, Twitter published a new version of its policies covering abuse, spam, self-harm and other issues, following a promise by chief executive officer Jack Dorsey that the company would be more aggressive about preventing harassment. Publishing stricter guidelines and putting them into practice, however, are two different things. Many of Twitter’s critics still believe the platform doesn’t do enough to enforce its anti-harassment measures and must provide more information about exactly what kind of content results in a suspension. For example, telling someone to “kill yourself” arguably violates its guidelines, but a quick search of #killyourself returns many recent results, including tweets aimed at specific people.