Twitter tests new review tool telling users to think twice before replying with offensive language

Twitter is testing a new review tool that warns users before they post a reply containing what the company calls “harmful” language, The Verge reported. Twitter describes it as a limited experiment available only to iOS users. A message from Twitter’s official support account said a prompt will now pop up in some cases: “If you use potentially harmful language in your response, you can choose to modify your response before posting.”


This approach is not entirely new; other social platforms have used it before, most notably Instagram. The Facebook-owned app now warns users before they post a caption, with a message saying it “looks similar to others that have been reported.” That change followed Instagram’s rollout last summer of similar warnings on comments.

It’s unclear exactly how Twitter defines harmful language, but the company does have a hateful conduct policy and a broader set of Twitter Rules that lays out its position on everything from violent threats and terrorism to abuse and harassment. Twitter says it won’t remove content simply for being offensive: “People can post content, including potentially inflammatory content, as long as they don’t violate the Twitter Rules.” But the company does have rules that allow it to make exceptions to its broad speech policy.

Still, the new experiment doesn’t appear to be aimed at curbing the more extreme forms of content, which Twitter typically handles by removing posts, suspending accounts, or banning users outright. Instead, it seems geared toward nudging users away from needlessly inflammatory language that escalates conflicts and can lead to account closures.