Instagram has announced that its platform will start warning users when it detects that they're about to post a potentially offensive caption on a photo or video. This new feature marks the expansion of the anti-bullying system Instagram introduced earlier this year.

In July, Instagram rolled out an AI-powered system that warns users when they attempt to publish a 'harmful' comment. The same technology is now being used to flag potentially offensive captions as well, Instagram announced on Monday.

The system works by identifying captions that are similar to ones previously reported by users. When it is triggered, a prompt appears within the Instagram app that reads, 'This caption looks similar to others that have been reported.' Users can then either share the caption anyway or edit it before publishing.
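Instagram has not published details of its classifier, but the behavior described above can be sketched with a toy similarity check. Everything here is an assumption for illustration: the reported-caption list, the threshold, and the use of plain string similarity in place of the real AI model.

```python
from difflib import SequenceMatcher

# Hypothetical examples of captions users previously reported;
# the real system draws on actual user reports and an AI model.
REPORTED_CAPTIONS = [
    "you are so stupid and ugly",
    "nobody likes you, loser",
]

# Assumed cutoff, not a documented value.
SIMILARITY_THRESHOLD = 0.6


def looks_reportable(caption: str) -> bool:
    """Return True if the caption resembles a previously reported one."""
    caption = caption.lower()
    return any(
        SequenceMatcher(None, caption, reported).ratio() >= SIMILARITY_THRESHOLD
        for reported in REPORTED_CAPTIONS
    )


def prompt_for(caption: str):
    """Return the in-app warning text if the caption triggers the check."""
    if looks_reportable(caption):
        return "This caption looks similar to others that have been reported."
    return None  # caption publishes without a prompt
```

In this sketch, as in the feature itself, a flagged caption is not blocked: the prompt is returned alongside the user's choice to edit or post anyway.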

The feature is rolling out in 'select' countries for now, but will be available globally in the 'coming months.'