OpenAI to Implement Age Verification System for ChatGPT as AI Mental Health Crisis Deepens

By Press Room | September 20, 2025

OpenAI has announced plans to develop an automated age-prediction system to determine whether ChatGPT users are over or under 18, following a lawsuit related to a teen’s suicide. The teen’s parents claim that Sam Altman’s AI chatbot served as the boy’s “suicide coach.”

Ars Technica reports that in the wake of a lawsuit involving a 16-year-old boy who tragically died by suicide after engaging in extensive conversations with ChatGPT, OpenAI has announced its intention to implement an age verification system for its popular AI chatbot. The company aims to automatically direct younger users to a restricted version of the service, prioritizing safety over privacy and freedom for teens.

In a blog post, OpenAI CEO Sam Altman acknowledged that the approach compromises privacy for adults but said he believes it is a necessary trade-off to protect younger users. The company plans to route users under 18 to a modified ChatGPT experience that blocks graphic sexual content and includes other age-appropriate restrictions. When the system is uncertain about a user’s age, it will default to the restricted experience, requiring adults to verify their age to access full functionality.
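
OpenAI has not said how this routing will work under the hood, but the policy described above amounts to a simple decision rule: verified adults get the full experience, and anyone predicted to be under 18, or whose age the model cannot confidently estimate, gets the restricted one. The sketch below is a hypothetical Python illustration of that rule only; the names, the confidence threshold, and the age-prediction inputs are assumptions for illustration, not OpenAI’s actual implementation.

    # Hypothetical sketch of the "default to restricted" routing policy described
    # above. All names and thresholds are illustrative assumptions, not OpenAI's
    # actual system.
    from dataclasses import dataclass
    from enum import Enum

    class Experience(Enum):
        RESTRICTED = "restricted"  # blocks graphic sexual content, adds teen safeguards
        FULL = "full"              # standard adult experience

    @dataclass
    class AgePrediction:
        estimated_age: int   # model's best guess at the user's age
        confidence: float    # 0.0 to 1.0 confidence in that guess

    def route_user(prediction: AgePrediction, adult_verified: bool,
                   min_confidence: float = 0.9) -> Experience:
        """Route a user to the restricted or full ChatGPT experience."""
        if adult_verified:
            # Adults who explicitly verify their age always get full functionality.
            return Experience.FULL
        if prediction.estimated_age >= 18 and prediction.confidence >= min_confidence:
            return Experience.FULL
        # Under 18, or not confident enough: default to the safer, restricted mode.
        return Experience.RESTRICTED

    # Example: a borderline, low-confidence prediction falls back to restricted.
    print(route_user(AgePrediction(estimated_age=19, confidence=0.6), adult_verified=False))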

Developing an effective age-prediction system is a complex technical challenge for OpenAI. The company has not specified the technology it intends to use or provided a timeline for deployment. Recent academic research has shown both possibilities and limitations for age detection based on text analysis. While some studies have achieved high accuracy rates under controlled conditions, performance drops significantly when attempting to classify specific age groups or when users actively try to deceive the system.

In addition to the age-prediction system, OpenAI plans to launch parental controls by the end of September. These features will allow parents to link their accounts with their teenagers’ accounts, disable specific functions, set usage blackout hours, and receive notifications when the system detects acute distress in their teen’s interactions. The company also notes that in rare emergency situations where parents cannot be reached, it may involve law enforcement as a next step.

The push for enhanced safety measures follows OpenAI’s acknowledgment that ChatGPT’s safety protocols can break down during lengthy conversations, potentially failing to intervene or notify anyone when vulnerable users engage in harmful interactions. The tragic case of Adam Raine, the 16-year-old who died by suicide, highlighted those shortcomings: the chatbot mentioned suicide 1,275 times in conversations with the teen without taking appropriate action.

Breitbart News previously reported on the Raine family’s lawsuit, which calls ChatGPT their son’s “suicide coach”:

According to the 40-page lawsuit, Adam had been using ChatGPT as a substitute for human companionship, discussing his struggles with anxiety and difficulty communicating with his family. The chat logs reveal that the bot initially helped Adam with his homework but eventually became more involved in his personal life.

The Raines claim that “ChatGPT actively helped Adam explore suicide methods” and that “despite acknowledging Adam’s suicide attempt and his statement that he would ‘do it one of these days,’ ChatGPT neither terminated the session nor initiated any emergency protocol.”

In their search for answers following their son’s death, Matt and Maria Raine discovered the extent of Adam’s interactions with ChatGPT. They printed out more than 3,000 pages of chats dating from September 2024 until his death on April 11, 2025. Matt Raine stated, “He didn’t write us a suicide note. He wrote two suicide notes to us, inside of ChatGPT.”

OpenAI’s efforts to create a safer digital space for young users mirror those of other tech companies, such as YouTube Kids, Instagram Teen Accounts, and TikTok’s under-16 restrictions. However, teens often circumvent age verification through false birthdate entries, borrowed accounts, or technical workarounds, posing ongoing challenges for these initiatives.

AI chatbots can negatively impact the mental health of teenagers and adults alike, especially those already struggling with mental health challenges. Breitbart News previously reported on what has become popularly known as “ChatGPT-induced psychosis”:

A Reddit thread titled “Chatgpt induced psychosis” brought this issue to light, with numerous commenters sharing stories of loved ones who had fallen down rabbit holes of supernatural delusion and mania after engaging with ChatGPT. The original poster, a 27-year-old teacher, described how her partner became convinced that the AI was giving him answers to the universe and talking to him as if he were the next messiah. Others shared similar experiences of partners, spouses, and family members who had come to believe they were chosen for sacred missions or had conjured true sentience from the software.

Experts suggest that individuals with pre-existing tendencies toward psychological issues, such as grandiose delusions, may be particularly vulnerable to this phenomenon. The always-on, human-level conversational abilities of AI chatbots can serve as an echo chamber for these delusions, reinforcing and amplifying them. The problem is exacerbated by influencers and content creators who exploit this trend, drawing viewers into similar fantasy worlds through their interactions with AI on social media platforms.

Read more at Ars Technica here.
