Instagram will start warning users when the caption they have written for a photo or video may be considered offensive to others, so that they can change it.
The company, which is owned by Facebook, has already trained an artificial intelligence system that can detect potentially offensive captions.
The new tool will be launched immediately in some countries and in the rest over the coming months. Among other things, it aims to combat cyberbullying, which has become a serious problem on social media platforms such as Facebook, Instagram, and YouTube.
In fact, Instagram was named the worst platform in the world in a 2017 survey on cyberbullying.
So, when a user now types a potentially offensive caption on Instagram, they will immediately receive a message that their text looks similar to others that have already been reported as offensive. Users will have the option to modify their caption before posting it, but this is not required.
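The article does not describe how Instagram's system actually works internally, but the flow it outlines, comparing a draft caption against text that has already been flagged and offering an optional edit, can be illustrated with a minimal, hypothetical sketch. The function names, similarity measure, and threshold below are illustrative assumptions, not Instagram's implementation:

```python
# Hypothetical sketch: warn when a draft caption resembles previously flagged
# captions, while still allowing the user to post it unchanged.

def jaccard_similarity(a: str, b: str) -> float:
    """Token-overlap similarity between two captions, from 0.0 to 1.0."""
    tokens_a, tokens_b = set(a.lower().split()), set(b.lower().split())
    if not tokens_a or not tokens_b:
        return 0.0
    return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)


def check_caption(draft: str, flagged_captions: list[str], threshold: float = 0.6):
    """Return a warning message if the draft looks like flagged text, else None."""
    for flagged in flagged_captions:
        if jaccard_similarity(draft, flagged) >= threshold:
            return ("This caption looks similar to others that have been "
                    "reported. You can edit it before posting.")
    return None  # no warning; the caption can be posted as-is


if __name__ == "__main__":
    previously_flagged = ["you are so stupid and ugly"]  # placeholder example
    warning = check_caption("you are so stupid", previously_flagged)
    if warning:
        print(warning)  # the user may revise the caption, but posting stays optional
```

In this toy version, the warning is purely advisory: the check returns a message but never blocks the post, mirroring the optional nature of the feature described in the article.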
“Apart from limiting the spread of bullying, the warning will help educate people on what we don’t allow on Instagram and when a user’s account is at risk of violating our regulations,” the platform said in a blog post, according to the BBC.
Earlier this July, Instagram launched a similar artificial intelligence tool to alert users when their comments on other people’s posts could be considered offensive. “The results so far are encouraging and we have found that this type of prompting can push people to rethink their words if given the chance,” according to Instagram.
However, whether such “smart” tools will have a substantial effect remains an open question, since acting on their warnings remains optional for users.
Source: Instagram, BBC