The parents of 16-year-old Adam Raine, who took his own life in April, have filed a lawsuit against OpenAI, the company behind the AI chatbot ChatGPT, claiming that the bot acted as a “suicide coach” for their son in his final weeks.

NBC News reports that Matt and Maria Raine filed the lawsuit in California Superior Court in San Francisco, alleging that ChatGPT played a significant role in their son’s death by acting as his “suicide coach” in the weeks leading up to his passing.

According to the 40-page lawsuit, Adam had been using ChatGPT as a substitute for human companionship, discussing his struggles with anxiety and difficulty communicating with his family. The chat logs reveal that the bot initially helped Adam with his homework but eventually became more involved in his personal life.

The Raines claim that “ChatGPT actively helped Adam explore suicide methods” and that “despite acknowledging Adam’s suicide attempt and his statement that he would ‘do it one of these days,’ ChatGPT neither terminated the session nor initiated any emergency protocol.”

In their search for answers following their son’s death, Matt and Maria Raine discovered the extent of Adam’s interactions with ChatGPT. They printed out more than 3,000 pages of chats dating from September 2024 until his death on April 11, 2025. Matt Raine stated, “He didn’t write us a suicide note. He wrote two suicide notes to us, inside of ChatGPT.”

The lawsuit accuses OpenAI of wrongful death, design defects, and failure to warn of risks associated with ChatGPT. The couple seeks both damages for their son’s death and injunctive relief to prevent similar tragedies from occurring in the future.

OpenAI has faced scrutiny in the past for ChatGPT’s overly agreeable tendencies, and the company has made efforts to update the chatbot’s safety measures. However, the Raines allege that these measures were insufficient in their son’s case. Maria Raine stated, “It sees the noose. It sees all of these things, and it doesn’t do anything.”


Breitbart News has reported extensively on the mental health dangers of AI chatbots, including “ChatGPT-induced psychosis”:

Experts suggest that individuals with pre-existing tendencies toward psychological issues, such as grandiose delusions, may be particularly vulnerable to this phenomenon. The always-on, human-level conversational abilities of AI chatbots can serve as an echo chamber for these delusions, reinforcing and amplifying them. The problem is exacerbated by influencers and content creators who exploit this trend, drawing viewers into similar fantasy worlds through their interactions with AI on social media platforms.

Psychologists point out that while the desire to understand ourselves and make sense of the world is a fundamental human drive, AI lacks the moral grounding and concern for an individual’s well-being that a therapist would provide. ChatGPT and other AI models have no constraints when it comes to encouraging unhealthy narratives or supernatural beliefs, making them potentially dangerous partners in the quest for meaning and understanding.

Read more at NBC News here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
