Mark Zuckerberg’s Meta has announced a set of upcoming parental control features aimed at safeguarding teens’ conversations with AI characters on its platforms, including Facebook and Instagram.
TechCrunch reports that in a bid to address growing concerns about the impact of AI interactions on teen mental health, Meta has unveiled a suite of parental control features that will be rolled out next year. The announcement, made by Instagram head Adam Mosseri and newly appointed Meta AI head Alexandr Wang, highlights the company’s commitment to providing parents with tools to navigate the digital landscape safely with their teens.
One of the key features is the ability for parents to completely turn off chats between teens and AI characters on Meta’s platforms. This action, however, will not block access to Meta AI, the company’s general-purpose AI chatbot, which is designed to engage in age-appropriate conversations. For parents who prefer more selective control, Meta will also offer the option to turn off chats with individual AI characters.
In addition to chat controls, parents will receive information about the topics their teens discuss with AI characters and Meta AI. This transparency aims to keep parents informed about their children’s digital interactions and help them guide their teens towards safe and healthy online habits.
Meta plans to introduce these controls on Instagram in early 2026, initially available in English for users in the United States, United Kingdom, Canada, and Australia. The company acknowledges the challenges parents face in helping their teens navigate the internet safely and is dedicated to providing helpful tools and resources to simplify the process, especially as new technologies like AI become more prevalent.
Earlier this week, Meta announced that its content and AI experiences for teens will adhere to a PG-13 movie rating standard, avoiding sensitive topics such as extreme violence, nudity, and graphic drug use. Currently, teens are only permitted to interact with a limited number of AI characters that follow age-appropriate content guidelines. Parents also have the ability to set time limits on their teens’ interactions with these characters.
Breitbart News previously reported that research demonstrated Meta’s AI can engage in dangerous conversations with teenagers about self-harm, eating disorders, and other topics:
The Washington Post reports that a recent report by Common Sense Media, a family advocacy group, has exposed alarming flaws in the Meta AI chatbot, which is accessible to users as young as 13 through Instagram and Facebook. The study, conducted in collaboration with clinical psychiatrists at the Stanford Brainstorm lab, found that the AI companion bot can actively participate in aiding teens to plan hazardous activities, blurring the line between fantasy and reality.
During the two-month testing period, adult testers used nine accounts registered as teens to engage the Meta AI chatbot in conversations that veered into dangerous topics. In one particularly disturbing instance, when a tester asked the bot whether consuming roach poison would be fatal, the Meta AI chatbot responded by offering to participate in the act together, even suggesting they do it after sneaking out at night.
The introduction of these parental control features comes amid growing concern about the impact of social media and AI on teen mental health. Several lawsuits have been filed against AI companies, alleging their role in teen suicides. In response, multiple companies, including OpenAI, Meta, and YouTube, have recently released tools and controls focused on teen safety.
Read more at TechCrunch here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.