Twitter is experimenting with a new feature that will warn users before they post abusive or harmful content that may get reported.

“When things get heated, you may say things you don't mean. To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful,” Twitter’s official Twitter Support account posted on Tuesday.

The feature will prompt users to revise their replies if they contain language that could be deemed “harmful.”

Other social media platforms have similar features that prompt users to reconsider abusive replies. Facebook-owned photo and video sharing platform Instagram, for instance, warns users with a message that their content “looks similar to others that have been reported” if they type out abusive comments.

Twitter has also been working to curb abusive content on its platform. The micro-blogging platform earlier introduced a feature that lets users limit who can reply to their tweets in order to prevent abuse.

In another update, the platform said it was also experimenting with new layouts for replies to make its interface easier for Twitterati to navigate.

“We’re also experimenting with placing like, Retweet, and reply icons behind a tap for replies. We’re trying this with a small group on iOS and web to see how it affects following and engaging with a convo,” it said.

The social media platform is rolling out these experiments to select Twitter users on iOS and the web.