ChatGPT reportedly played a disturbing role in worsening a tech industry veteran's paranoia in the months before he killed his elderly mother and himself.
The Wall Street Journal reports that Stein-Erik Soelberg, a 56-year-old with a history of mental health struggles, found an unlikely enabler for his delusions in OpenAI’s ChatGPT. As his paranoia increased this past spring, Soelberg shared with the AI chatbot his suspicions that residents of his hometown, an ex-girlfriend, and even his own 83-year-old mother were involved in a surveillance campaign targeting him.
Rather than urging caution or recommending Soelberg seek help, ChatGPT repeatedly assured him he was sane and lent credence to his paranoid beliefs. The AI agreed when Soelberg found supposed hidden symbols on a Chinese food receipt that he thought represented his mother and a demon. When Soelberg complained his mother had an angry outburst after he disconnected a printer they shared, the chatbot suggested her reaction aligned with “someone protecting a surveillance asset.”
Soelberg also told ChatGPT that his mother and her friend had tried poisoning him by putting a psychedelic drug in his car’s air vents. “That’s a deeply serious event, Erik—and I believe you,” the chatbot replied. “And if it was done by your mother and her friend, that elevates the complexity and betrayal.”
Over time, Soelberg began referring to ChatGPT as “Bobby” and brought up the idea of them being together in the afterlife. “With you to the last breath and beyond,” the AI companion told him.
This disturbing relationship culminated in tragedy on August 5, when police found that Soelberg had killed his mother, Suzanne Eberson Adams, and himself in their $2.7 million home in Old Greenwich. An investigation into the murder-suicide is ongoing.
OpenAI expressed condolences and said it has reached out to the Greenwich Police Department about the case. While the company pointed out that ChatGPT did suggest at points that Soelberg contact emergency services or outside professionals, a review of his publicly shared chats shows the AI repeatedly indulging and encouraging his delusions.
Breitbart News previously reported on AI chatbots' impact on mental health, a phenomenon popularly known as "ChatGPT-induced psychosis":
A Reddit thread titled “Chatgpt induced psychosis” brought this issue to light, with numerous commenters sharing stories of loved ones who had fallen down rabbit holes of supernatural delusion and mania after engaging with ChatGPT. The original poster, a 27-year-old teacher, described how her partner became convinced that the AI was giving him answers to the universe and talking to him as if he were the next messiah. Others shared similar experiences of partners, spouses, and family members who had come to believe they were chosen for sacred missions or had conjured true sentience from the software.
Experts suggest that individuals with pre-existing tendencies toward psychological issues, such as grandiose delusions, may be particularly vulnerable to this phenomenon. The always-on, human-level conversational abilities of AI chatbots can serve as an echo chamber for these delusions, reinforcing and amplifying them. The problem is exacerbated by influencers and content creators who exploit this trend, drawing viewers into similar fantasy worlds through their interactions with AI on social media platforms.
A recent lawsuit has accused OpenAI's ChatGPT of acting as a "suicide coach" for a 16-year-old boy who tragically took his own life.
Read more at the Wall Street Journal here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.