Twitter announced Tuesday that it will begin to hide all tweets from some accounts in conversations and search results. The goal is to identify and filter trolls and harmful users, based not on any specific tweet, but on how they use the social network holistically.

The new effort is part of Twitter's two-month-old initiative to discern what it means for the platform to be "healthy." Previously, Twitter mostly looked at the content of individual tweets to decide how to moderate them. Now, it's going to consider many more behavioral signals, like whether an account tweets frequently at others who don't follow it. The fresh filtering strategy may be a step toward a healthier Twitter, but it's already helping to fuel conspiracy theories—especially because the social network isn't yet alerting users who get swept up into the new system.

Tweets deemed "disruptive," but that don't violate Twitter's policies outright, will be sequestered at the bottom of a conversation thread or search results, to make room for more productive and respectful conversations. Some of the new signals Twitter will consider include whether you've confirmed your email address, whether you've created multiple accounts from the same IP address, and whether you're frequently blocked by accounts you interact with. Tweets that get filtered this way—as long as they don't violate Twitter's policies—won't be removed, but you will have to click "Show more replies," or elect to "show everything" in your search settings in order to view them.
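To make the idea concrete, here is a minimal, purely illustrative sketch of how behavioral signals like these might be combined into a score that decides whether a reply gets collapsed. It is not Twitter's actual system; the signal names, weights, and threshold are all hypothetical.

```python
# Illustrative sketch only -- not Twitter's real filtering logic.
# It combines a few hypothetical behavioral signals into a score and
# collapses a reply behind "Show more replies" if the score is too high.
from dataclasses import dataclass


@dataclass
class AccountSignals:
    email_confirmed: bool      # has the account confirmed its email address?
    accounts_on_same_ip: int   # other accounts created from the same IP address
    block_rate: float          # fraction of interactions that end in a block


def is_deprioritized(signals: AccountSignals, threshold: float = 1.0) -> bool:
    """Return True if a reply should be collapsed, based on a weighted score.

    The weights and threshold here are invented for illustration.
    """
    score = 0.0
    if not signals.email_confirmed:
        score += 0.5
    if signals.accounts_on_same_ip > 3:
        score += 0.75
    score += signals.block_rate * 2.0
    return score >= threshold


# Example: an unconfirmed account that is often blocked would be collapsed.
print(is_deprioritized(AccountSignals(email_confirmed=False,
                                      accounts_on_same_ip=5,
                                      block_rate=0.4)))
```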

For now, it's unclear whether users will even know if they've been flagged under the new system. A Twitter spokesperson says the company is developing ways for users to appeal or flag mistakes. These new changes will likely affect a very small fraction of users; Twitter says that less than one percent of total accounts make up the majority of those reported for abuse. Twitter also notes that in early tests, the new signals resulted in fewer abuse reports being filed to begin with. The social network saw a 4 percent drop in reports from search, and an 8 percent drop in reports from conversations, according to the company.

Twitter, like other social networks, has long curated replies and search results. In 2015, Twitter began displaying replies to tweets algorithmically on desktop, based on signals like their content and whether the original account replied to them. The social network brought the same changes to mobile in 2016. Those tweaks changed Twitter for the better, and are why you don't see hordes of bots in the replies to President Trump's tweets anymore, for example.

The new moderation tactics will likely make the service better, and more useful for the average user. If you want to know how Twitter is digesting the president's latest tweet, it's significantly more helpful to have thoughtful replies at the top rather than bots trying to sell "liberal tears" mugs. But these moderation methods also remain opaque; unlike an outright Terms of Service violation, users aren't notified when their tweets—and now their entire accounts—are simply de-prioritized in the system.

Twitter has also long "shadowbanned" users, meaning it limits how visible their tweets are to others when the social network detects they may be behaving in an abusive or spammy manner. Shadowbanning is often portrayed as an unsanctioned last resort, but the practice is clearly detailed in Twitter's Help Center (though it doesn't call it that). The problem, though, is that users can only guess if they're impacted based on factors like a sudden drop in tweet impressions, a metric Twitter displays on the right-hand side of a user's account page. Twitter's new filtering system is similar to shadowbanning, but it incorporates far more behavioral signals, specifically designed to detect when someone is disrupting the conversation.

Of course, Twitter has good reason not to alert troublesome users that their messages are being filtered. If you're running a scam on the social network, any indication that you've been shadowbanned is a signal to give up, make a new account, and try again. Likewise, if you're trying to harass someone and suspect you've been filtered, you might ask your followers to attack them instead.

Twitter's lack of transparency around its filtering mechanisms might be justified, but it also helps to fuel conspiracy theorists who believe they're being unjustly censored. After Twitter announced its new moderation policies Tuesday, a number of its users (predictably) latched onto the news as evidence of the social network's sinister efforts to silence free speech. But Twitter likely isn't bothered. The platform—which has been plagued by spam, abuse, and misinformation problems since it launched—is attempting to execute a strategy its competitors have not. Instead of striving for neutrality, Twitter is again prioritizing better conversations, as it continues to examine what a healthy conversation looks like online.
