Mark Zuckerberg’s Instagram platform used automated algorithms that recommended children’s accounts for groomers and predators to follow on the app, according to a 2019 internal company document presented by the FTC during the ongoing Meta antitrust trial.

Bloomberg reports that the FTC revealed troubling evidence in its ongoing antitrust case against Mark Zuckerberg’s Meta regarding what the government claims is Instagram’s lack of safety measures for protecting children from online predators and abusers. During court proceedings on Tuesday, the FTC presented a June 2019 internal company report titled “Inappropriate Interactions with Children on Instagram,” which detailed how the social media app’s automated recommendation systems made it easier for groomers to find and exploit young victims.

According to the report, 27 percent of the follow recommendations Instagram surfaced to accounts exhibiting predatory behavior toward children were for profiles belonging to minors. In a three-month period alone, the company found that 2 million accounts held by minors had been recommended to groomers. Furthermore, the report noted that minors made up seven percent of all follow recommendations made to adult users on Instagram.

The FTC also shared data from an analysis Meta conducted of 3.7 million user reports flagging inappropriate comments. One-third of those reports came from minors, and of the underage users who reported an inappropriate comment, 54 percent were reporting an adult account.

This evidence was part of the FTC’s argument that Meta’s acquisition of Instagram was anti-competitive and ultimately harmed consumers by leading to chronic underinvestment in user safety on the popular photo-sharing app. Emails and testimony from Instagram co-founder Kevin Systrom support claims that CEO Mark Zuckerberg deliberately withheld resources and support for security efforts because he felt threatened by Instagram’s rapid growth and feared it would cannibalize engagement from Facebook.

Meta executives like Chief Information Security Officer Guy Rosen acknowledged Instagram was “behind” Facebook in integrity work to combat issues like child exploitation as far back as May 2018. Internal planning documents likewise showed Instagram’s safety teams were understaffed and lacked the resources to proactively address serious risks like harassment, violence, prostitution, and child exploitation on the platform.

A Meta spokesperson provided the following statement on Instagram’s underage user safety policies:

“With Instagram Teen Accounts, teens have built-in protections to automatically limit who can contact them. Their accounts are private by default, so only people they approve can see their content, and they’re in the strictest messaging settings, meaning they can’t be messaged by anyone they’re not already connected to. Teens under 16 need a parent’s permission to change these settings. In 2021, we launched technology to identify adult accounts that had shown potentially suspicious activity, such as being blocked by a teen, and prevent them from finding, following and interacting with teens’ accounts. We don’t recommend these accounts to teens, or vice versa.”

“We’ve long invested in child safety efforts. In 2018 we began work to restrict recommendations for potentially suspicious adults, continued ongoing efforts removing large groups of violating accounts, and even supported a successful push to update the National Center for Missing and Exploited Children reporting statute to cover many grooming situations, which previously had not been included.”

Read more at Bloomberg here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
