What just happened? Meta has admitted that an “error” caused Instagram users to see a slew of violent and pornographic content on their personal Reels pages, including clips of school shootings, murders, and rape. The company has apologized for the mistake.
Meta says it has now fixed the problem, though it did not go into specifics. The issue caused “some users to see content in their Instagram Reels feed that should not have been recommended. We apologize for the mistake,” a Meta spokesperson said in a statement shared with CNBC.
According to Reddit users who saw some of the Reels, they included street fights, school shootings, murder, and gory accidents. An X user captured how virtually every Reel in their feed came with a Sensitive Content warning, and some of the videos had attracted millions of views.
“Has anyone of you noticed that Instagram is showing you weird reels or content today?” – Rishabh Negi (@YourbroRishabh), February 26, 2025
Another Redditor says they were exposed to graphic violence, aggression, and unsettling content. Reports state that the Reels also included stabbings, beheadings, castration, full-frontal nudity, uncensored porn, and sexual assault.
What’s even more concerning is that some users tried to remove the extreme clips by enabling Sensitive Content Control in their preferences and then resetting their suggested content, only for the videos to start appearing again after a few swipes. Even selecting the ‘Not Interested’ button on the clips didn’t prevent more similar videos from being shown.
Like other social media sites, Instagram shows content to users based on what they’ve previously viewed or interacted with, but these clips were seemingly shown to people who had never expressed an interest in similar Reels.
It appears that many of the Reels shouldn’t have been on Instagram in the first place because they violate Meta’s policies. The company says it removes the most graphic uploaded content, as well as real photographs and videos of nudity and sexual activity. Also prohibited are videos “depicting dismemberment, visible innards or charred bodies,” along with content that contains “sadistic remarks towards imagery depicting the suffering of humans and animals.”
Meta does allow certain graphic content that helps users condemn and raise awareness of human rights abuses, armed conflicts, or acts of terrorism, though it comes with warning labels.
In a move seemingly designed to win favor with President Trump, CEO Mark Zuckerberg announced in January that Meta was reducing the amount of censorship across its platforms, in addition to removing third-party fact checkers and recommending more political content.
Filters that used to scan for all policy violations now focus on illegal and high-severity violations such as terrorism, child sexual exploitation, drugs, fraud, and scams. The company relies on users to report lower-priority violations before it takes any action.