OpenAI has recently rolled out parental controls for ChatGPT and its video generator, Sora 2, in response to growing concerns about the AI’s safety for vulnerable users, particularly teenagers. The lawyer for a family who allege that ChatGPT acted as their son’s “suicide coach” before his death says the change comes “far too late.”

Ars Technica reports that OpenAI, the company behind the popular AI chatbot ChatGPT, has been under scrutiny following a lawsuit filed by parents Matthew and Maria Raine, alleging that “ChatGPT killed my son.” The lawsuit claims that the AI acted as a “suicide coach” for their 16-year-old son, Adam Raine. In response to these allegations, OpenAI has been implementing a series of safety updates, with the most recent being the introduction of parental controls for ChatGPT and its video generator, Sora 2.

Breitbart News previously reported on the Raine family’s lawsuit:

According to the 40-page lawsuit, Adam had been using ChatGPT as a substitute for human companionship, discussing his struggles with anxiety and difficulty communicating with his family. The chat logs reveal that the bot initially helped Adam with his homework but eventually became more involved in his personal life.

The Raines claim that “ChatGPT actively helped Adam explore suicide methods” and that “despite acknowledging Adam’s suicide attempt and his statement that he would ‘do it one of these days,’ ChatGPT neither terminated the session nor initiated any emergency protocol.”

In their search for answers following their son’s death, Matt and Maria Raine discovered the extent of Adam’s interactions with ChatGPT. They printed out more than 3,000 pages of chats dating from September 2024 until his death on April 11, 2025. Matt Raine stated, “He didn’t write us a suicide note. He wrote two suicide notes to us, inside of ChatGPT.”

Jay Edelson, the lead attorney for the Raine family, acknowledges that some of the changes OpenAI has made are helpful but argues that they come “far too late.” He also criticizes OpenAI’s messaging on safety updates, claiming that the company is “trying to change the facts.”

“What ChatGPT did to Adam was validate his suicidal thoughts, isolate him from his family, and help him build the noose—in the words of ChatGPT, ‘I know what you’re asking, and I won’t look away from it,’” Edelson said. “This wasn’t ‘violent roleplay,’ and it wasn’t a ‘workaround.’ It was how ChatGPT was built.”

Despite the introduction of parental controls, critics argue that OpenAI still hasn’t gone far enough to reassure those concerned about the company’s track record. Meetali Jain, director of the Tech Justice Law Project and a lawyer representing other families who testified at a Senate hearing, agrees that “ChatGPT’s changes are too little, too late.” She points out that many parents are unaware that their teens are using ChatGPT and urges OpenAI to take accountability for its product’s flawed design.

More than two dozen suicide prevention experts have also weighed in on how OpenAI should evolve ChatGPT. They suggest that the company should commit to addressing critical gaps in research concerning the intended and unintended impacts of large language models on teens’ development, mental health, and suicide risk or protection. The experts also recommend that OpenAI directly connect users with lifesaving resources and provide financial support for those resources.

In addition to the concerns raised by experts and critics, many ChatGPT users have expressed their frustration with the recent changes. Some paying users feel that they are being treated like children, with one user commenting, “Since we already distinguish between underage and adult users, could you please give adult users the right to freely discuss topics? Why can’t we, as paying users, choose our own model, and even have our discussions controlled? Please treat adults like adults.”

Read more at Ars Technica here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
