Instagram will hide comments that could be considered offensive

FILE - In this Nov. 29, 2018 file photo, the Instagram app logo is displayed on a mobile screen in Los Angeles. Though Black Out Tuesday was originally organized by the music community, the social media world went dark on Tuesday in support of the Black Lives Matter movement and the many killings of black people around the world that have caused outrage and protests. Instagram accounts, from top record labels to everyday people, were full of black squares posted in response to the deaths of George Floyd, Ahmaud Arbery and Breonna Taylor. (AP Photo/Damian Dovarganes, File)

(CNN) — Instagram will begin automatically hiding potentially offensive comments as part of its ongoing attempt to address online bullying.

The company said the comments it hides will be similar to those users have reported in the past. Instagram said it's using its existing artificial intelligence systems to identify bullying or harassing language in comments.

Instagram announced on Tuesday that it would begin testing the feature. The day also marks the app's tenth birthday.

Users will still be able to tap “View Hidden Comments” to see those remarks.

Adam Mosseri, who took the helm of Instagram two years ago, has pledged to fight online bullying. Last year, the Facebook-owned company rolled out a tool called "Restrict," which lets you restrict another user so that the person's comments on your posts are visible only to them, not to anyone else. Instagram also previously added a feature that warns people when a comment they are about to post may be considered offensive, giving them a chance to pause and reflect before publishing.

Instagram said that since introducing comment warnings, it has seen "significant improvement" in people editing or not posting flagged comments, although it did not elaborate further.

On Tuesday, Instagram also said it's adding an additional warning for people who have posted several potentially offensive comments. The notification prompts them to go back and edit their comment, warning that they otherwise risk consequences such as the comment being hidden or even their account being deleted.

Twitter has conducted similar tests. Earlier this year, it began prompting users to consider rewriting a reply before publishing it if it contained potentially harmful language.