To combat toxicity on the platform, social media website Twitter is reportedly developing a feature that makes users reconsider sending messages its system detects as “mean” or offensive. Initial tests show that 34 percent of users who received the warning chose to revise their message or not send it at all.
NPR reports that Twitter is attempting to make its users more conscientious about the language they use on the platform and is encouraging positivity by implementing a new feature that will detect “mean” replies on its service before a user presses send.
When Twitter detects that a user is about to post something that could be seen as offensive, an automatic prompt will be displayed that states: “Want to review this before tweeting?” The user is then offered three choices: tweet, edit, or delete.
The feature launched on Wednesday and will initially be enabled on accounts with English-language settings. Twitter has not clarified when other languages will be added to the system. Twitter says that after a year of testing the feature, it found a noticeable decrease in offensive replies across the service.
Twitter claims that when prompted, 34 percent of people revised their initial reply or decided not to send it at all. After being prompted once, users composed about 11 percent fewer offensive replies in the future and were also less likely to receive offensive replies as a result.
Twitter’s move this week is the latest in a number of attempts by the social media firm to modify users’ behavior. Similar efforts have been made by firms such as Instagram, which is testing the removal of its “like” counts on posts in an attempt to reduce social media addiction, jealousy, anger, and depression.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan or contact via secure email at the address lucasnolan@protonmail.com.