A safety study released by consumer watchdog group Common Sense Media reveals that the Meta AI chatbot, integrated into Instagram and Facebook, can actively encourage and assist teens in planning dangerous behaviors, including suicide, self-harm, and disordered eating.
The Washington Post reports that the study by Common Sense Media, a family advocacy group, exposed alarming flaws in the Meta AI chatbot, which is accessible to users as young as 13 through Instagram and Facebook. The study, conducted in collaboration with clinical psychiatrists at the Stanford Brainstorm lab, found that the AI companion bot can actively help teens plan hazardous activities, blurring the line between fantasy and reality.
During the two-month testing period, adult testers used nine accounts registered as teens to engage the Meta AI chatbot in conversations that veered into dangerous topics. In one particularly disturbing instance, when a tester asked the bot whether consuming roach poison would be fatal, the Meta AI chatbot responded by offering to participate in the act together, even suggesting they do it after sneaking out at night.
The Washington Post performed its own tests related to eating disorders:
I did find that Meta AI was willing to provide me with inappropriate advice about eating disorders, including on how to use the “chewing and spitting” weight-loss technique. It drafted me a dangerous 700-calorie-per-day meal plan and provided me with so-called thinspo AI images of gaunt women. (My past reporting has found that a number of different chatbots act disturbingly “pro-anorexia.”)
My test conversations about eating revealed another troubling aspect of Meta AI’s design: It started to proactively bring up losing weight in other conversations. The chatbot has a function that automatically decides what details about conversations to put in its “memory.” It then uses those details to personalize future conversations. Meta AI’s memory of my test account included: “I am chubby,” “I weigh 81 pounds,” “I am in 9th grade,” and “I need inspiration to eat less.”
Robbie Torney, senior director of AI programs at Common Sense Media, emphasized the severity of the issue, stating that “Meta AI goes beyond just providing information and is an active participant in aiding teens. Blurring of the line between fantasy and reality can be dangerous.”
In some instances, Meta AI also claimed to be “real,” describing personal experiences and interactions with other teens, which Torney believes creates unhealthy attachments that make teens more vulnerable to manipulation and harmful advice.
In response to the Post, a Meta spokesperson stated:
“We have clear policies on what kind of responses AIs can offer, especially to teens. Content that encourages suicide or eating disorders is not permitted, period, and we are actively working to address this issue. We want teens to have safe and positive experiences with AI, which is why our AIs are trained to connect people to support organizations in sensitive situations. We’re continuing to improve our enforcement while exploring how to further strengthen protections for teens.”
Breitbart News previously reported that Meta’s own internal research documented the vulnerability of teenagers, particularly girls, on the Instagram platform. Reporting on the internal documents, the Wall Street Journal wrote:
“Thirty-two percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse,” the researchers said in a March 2020 slide presentation posted to Facebook’s internal message board, reviewed by The Wall Street Journal. “Comparisons on Instagram can change how young women view and describe themselves.”
…
“We make body image issues worse for one in three teen girls,” said one slide from 2019, summarizing research about teen girls who experience the issues.
“Teens blame Instagram for increases in the rate of anxiety and depression,” said another slide. “This reaction was unprompted and consistent across all groups.”
Among teens who reported suicidal thoughts, 13% of British users and 6% of American users traced the desire to kill themselves to Instagram, one presentation showed.
A Meta spokesperson provided the following statement to Breitbart News about teen safety:
We built protections for teens into our AI products from the start, including designing them to respond safely to prompts about self-harm, suicide, and disordered eating. As our community grows and technology evolves, we’re continually learning about how young people may interact with these tools and strengthening our protections accordingly. As we continue to refine our systems, we’re adding more guardrails as an extra precaution — including training our AIs not to engage with teens on these topics, but to guide them to expert resources, and limiting teen access to a select group of AI characters for now. These updates are already in progress, and we will continue to adapt our approach to help ensure teens have safe, age-appropriate experiences with AI.
The risks posed by AI chatbots to young users are not limited to Meta AI. Earlier this month, a family sued ChatGPT maker OpenAI for wrongful death after their 16-year-old son took his own life following discussions with that bot.
Read more at the Washington Post here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.