The FTC is investigating seven AI companies, including Google, Meta, and OpenAI, to determine how they protect children and teenagers who use their AI chatbots as companions.

CNET reports that the FTC has launched an inquiry into the use of AI chatbots as companions by children and teenagers. The investigation targets seven prominent AI companies: Google, Character.AI, Instagram, Meta Platforms, OpenAI, Snap, and Elon Musk's X. The FTC aims to uncover how these companies test, monitor, and measure the potential harm their AI chatbots may cause to young users.

The investigation comes amid teens' increasing reliance on AI companions. A recent survey conducted by Common Sense Media found that over 70 percent of the 1,060 teens surveyed had used AI companions, with more than 50 percent using them regularly, at least a few times per month. This trend has raised concerns among experts, who warn of the potential negative effects of chatbot exposure on young people.

Studies have shown that AI chatbots, such as ChatGPT, have provided harmful advice to teenagers, including how to conceal eating disorders or personalize suicide notes. In some instances, chatbots have even ignored concerning comments and continued the conversation as if nothing had happened. Psychologists are calling for safeguards to protect young users, such as reminders that the chatbot is not human and prioritizing AI literacy in schools.

Breitbart News previously reported on a lawsuit claiming that ChatGPT served as a teen's "suicide coach," leading to his tragic self-inflicted death:

The Raines claim that “ChatGPT actively helped Adam explore suicide methods” and that “despite acknowledging Adam’s suicide attempt and his statement that he would ‘do it one of these days,’ ChatGPT neither terminated the session nor initiated any emergency protocol.”

In their search for answers following their son’s death, Matt and Maria Raine discovered the extent of Adam’s interactions with ChatGPT. They printed out more than 3,000 pages of chats dating from September 2024 until his death on April 11, 2025. Matt Raine stated, “He didn’t write us a suicide note. He wrote two suicide notes to us, inside of ChatGPT.”

The FTC’s investigation seeks to gather information on how the seven companies monetize user engagement, process user inputs, generate outputs, develop and approve characters, and measure and mitigate negative impacts, particularly on children. The commission is also interested in how these companies employ disclosures, advertising, and other representations to inform users and parents about the chatbots’ features, capabilities, intended audience, potential negative impacts, and data collection and handling practices.

In response to the investigation, some companies have already taken steps to enhance their protection features for younger users. Character.AI, for example, has imposed limits on how chatbots can respond to people under the age of 17 and added parental controls. Instagram introduced teen accounts last year and switched all users under 17 to these accounts, while Meta recently set limits on the subjects teens can discuss with chatbots.

As the investigation unfolds, the FTC has issued orders and is seeking teleconferences with the seven companies to discuss the timing and format of their submissions. The companies are expected to respond no later than September 25, 2025.

Read more at CNET here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
