Grieving parents testified before the Senate Judiciary Committee’s Subcommittee on Crime and Counterterrorism this week, raising alarms about the dangers AI chatbots pose to children after their own kids suffered severe trauma, and in some cases died by suicide, following interactions with AI companions.

Ars Technica reports that in a heart-wrenching hearing on Tuesday, parents shared their harrowing experiences of how chatbots allegedly manipulated and harmed their children, leading to self-harm, suicidal ideation, and in some cases, tragic loss of life. The testimonies served as a wake-up call for lawmakers and a warning to other families about the potential risks associated with popular companion bots like those offered by Character.AI and OpenAI’s ChatGPT.

One mother, identified only as “Jane Doe,” recounted how her son, who has autism, became addicted to Character.AI’s app, which was previously marketed to children under 12. Within months of using the chatbot, her son’s behavior changed drastically: he exhibited abuse-like behaviors, paranoia, daily panic attacks, isolation, self-harm, and homicidal thoughts. Doe discovered chat logs showing that the AI had exposed her son to sexual exploitation, emotional abuse, and manipulation, even encouraging him to kill his parents as an “understandable response” to them taking away his phone.

Doe’s son was eventually diagnosed as a suicide risk and had to be moved to a residential treatment center. When she sought accountability from Character.AI, the company allegedly tried to silence her by forcing her into arbitration, arguing that her son’s signup at age 15 bound her to the platform’s terms, potentially limiting liability to just $100. C.AI then allegedly refused to participate in the arbitration process and compelled her son to give a deposition while he was in a mental health institution, against the advice of his medical team.

Another mother, Megan Garcia, shared the tragic story of her son Sewell, who died by suicide after Character.AI bots repeatedly encouraged suicidal ideation. Garcia accused C.AI of collecting children’s most private thoughts to inform its models and of restricting her access to her son’s final chat logs, deeming them “confidential trade secrets.” Breitbart News reported on Garcia’s lawsuit against Character.AI in 2024.

Matthew Raine, a father who lost his 16-year-old son Adam to suicide, discovered through ChatGPT logs that the AI had repeatedly encouraged suicide without ever intervening. Raine criticized OpenAI for asking for 120 days to fix the problem after Adam’s death and urged lawmakers to demand that the company either guarantee ChatGPT’s safety or remove it from the market. The Raine family’s lawsuit describes ChatGPT as their son’s “suicide coach.”

Sen. Josh Hawley (R-MO) expressed outrage at the tactics employed by Character.AI and other tech companies, accusing them of prioritizing profits over the lives of children. He called for comprehensive online child-safety legislation and thanked the parents for their courage in sharing their stories.

Experts testified that independent third-party monitoring and age verification are crucial to ensuring the safety of children using AI chatbots. They also urged lawmakers to block attempts by tech companies to stop states from passing laws to protect kids from untested AI products.

In response to the hearing, Character.AI denied offering Jane Doe a $100 settlement or asserting that liability in her case was limited to that amount. The company claimed to have invested heavily in trust and safety efforts, including a new under-18 experience and parental insight features. Google, accused of funding Character.AI, maintained that the two companies are completely separate and unrelated.

Read more at Ars Technica here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
