Instagram will show ‘potential’ hate speech lower in your feed, stories

Instagram has announced new measures to tackle hate speech and “potentially disturbing posts”. It will now show posts that may contain such content lower in a user’s feed and Stories, and that decision will be based partly on the user’s content reporting history. The Meta-owned app says it’s part of its effort to take stronger action against “posts which may contain bullying or hate speech, or which may encourage violence”.

The announcement adds that these new actions will only affect individual posts, not accounts as a whole: an account will not be penalized overall, but an individual post will be if it falls into this grey zone. Instagram already removes posts that outright violate its rules; this approach is for borderline posts that don’t. The move will likely reduce the reach and engagement of such content, although past evidence suggests that demotion on its own has generally not worked well.

Instagram has previously shown posts lower in Feed and Stories if they contain misinformation identified by independent fact-checkers, or if they are shared by accounts that have repeatedly posted false information in the past. The new effort extends this approach further.

To detect whether a post may contain “bullying, hate speech or a call for violence”, Instagram’s systems will look more closely at elements such as the post’s caption, which will be compared against captions that have previously broken the rules on Instagram. The systems will also rely on user reports, and on the history of those reports, to decide which content is likely to offend a given user.
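To make the caption comparison concrete, here is a minimal illustrative sketch in Python of how a new caption could be matched against captions that previously broke the rules. The example captions, the similarity measure (Python’s standard-library SequenceMatcher), and the threshold are hypothetical stand-ins; Instagram has not described its actual classifiers.

```python
# Illustrative sketch only: comparing a caption against captions that
# previously broke the rules. The data, similarity measure, and threshold
# are hypothetical, not Instagram's actual implementation.
from difflib import SequenceMatcher

PREVIOUSLY_VIOLATING_CAPTIONS = [
    "example of a caption that was removed for bullying",
    "example of a caption that was removed for hate speech",
]

def caption_similarity(caption: str, known_violation: str) -> float:
    """Return a rough 0-1 similarity score between two captions."""
    return SequenceMatcher(None, caption.lower(), known_violation.lower()).ratio()

def may_violate(caption: str, threshold: float = 0.8) -> bool:
    """Flag a caption if it closely resembles one that previously broke the rules."""
    return any(
        caption_similarity(caption, known) >= threshold
        for known in PREVIOUSLY_VIOLATING_CAPTIONS
    )
```

A real system would use trained text classifiers rather than string similarity, but the idea of matching new captions against known violations is the same.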

Remember that posts in your Instagram feed are ranked by how likely you are to interact with them and what the algorithm thinks you might want to see more of. A similar approach will now be taken for problematic content. The app will also “take into account the likelihood that we think you are flagging a post as one of the signals we use to personalize your feed. If our systems predict that you are likely to flag a post based on your reporting content, we’ll show the post lower in your feed,” according to the post.
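As a rough illustration of how a predicted “likely to report” signal could push a post lower in a ranked feed, the sketch below subtracts a weighted report probability from a base engagement score. The Post fields, the penalty weight, and the scoring formula are assumptions made for illustration, not Instagram’s actual ranking system.

```python
# Illustrative sketch only: demoting a post in a ranked feed when the system
# predicts the viewer is likely to report it. All names, fields, and weights
# are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    engagement_score: float       # base ranking score from interest/engagement signals
    predicted_report_prob: float  # estimated probability this viewer would report it

REPORT_PENALTY = 5.0  # hypothetical weight for the demotion signal

def ranking_score(post: Post) -> float:
    """Lower the score as the predicted probability of a report rises."""
    return post.engagement_score - REPORT_PENALTY * post.predicted_report_prob

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order the feed so that likely-to-be-reported posts appear lower."""
    return sorted(posts, key=ranking_score, reverse=True)
```

The key design point the announcement describes is that the post is demoted rather than removed: it stays in the feed, just further down.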

In the past, Instagram has tried to explain how and why users end up seeing what they do in their feed and Stories. In a previous post, Instagram head Adam Mosseri clarified that the company doesn’t have a single “algorithm” deciding what people do and don’t see on the app. “We use a variety of algorithms, classifiers, and processes, each with their own purpose,” he wrote.

According to that post, Feed, Explore, and Reels each use their “own algorithm tailored to how people use it,” so different parts of the app are ranked differently. Instagram also draws on the information it has about what’s been posted, who posted it, and your preferences, which it calls “signals,” to decide the final look of your Feed or Stories.

The most important signals in Feed and Stories are information about the post, the user’s activity, and the user’s history of interacting with the person who posted it. Predictions about where content appears in Feed and Stories are based on these signals. Now, the same signals will also be used to decide which content should end up lower in the feed because it likely contains hate speech or bullying, or because the user is likely to report it.
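The sketch below shows, using purely hypothetical feature names and weights, how those same three signal groups could feed both an interest prediction and a likely-to-report prediction for the same post. Instagram has not published the details of its models; this is only an illustration of signals being reused across predictions.

```python
# Illustrative sketch only: the same per-post "signals" (post information,
# the viewer's activity, and their history with the author) feeding both an
# interest prediction and a likely-to-report prediction. Feature names and
# weights are hypothetical, not Instagram's.
def build_signals(post_info: dict, viewer_activity: dict, author_history: dict) -> dict:
    """Collapse the three signal groups into one flat feature dict."""
    return {
        "post_recency": post_info.get("recency", 0.0),
        "post_popularity": post_info.get("like_rate", 0.0),
        "viewer_topic_affinity": viewer_activity.get("topic_affinity", 0.0),
        "viewer_report_rate": viewer_activity.get("report_rate", 0.0),
        "author_interaction_rate": author_history.get("interaction_rate", 0.0),
    }

def predict_interest(features: dict) -> float:
    """Hypothetical linear model for how likely the viewer is to engage."""
    return (0.3 * features["post_recency"]
            + 0.4 * features["post_popularity"]
            + 0.2 * features["viewer_topic_affinity"]
            + 0.1 * features["author_interaction_rate"])

def predict_report(features: dict) -> float:
    """Hypothetical linear model for how likely the viewer is to report the post."""
    return min(1.0, 0.7 * features["viewer_report_rate"]
               + 0.3 * (1.0 - features["author_interaction_rate"]))
```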

Instagram also does not delete posts containing false information outright. This mirrors the Facebook approach, where a post is reviewed by third-party fact-checkers and labeled as either completely or partially misleading. Instagram says that if someone has “posted false information multiple times”, it can make all of that account’s content harder to find.
