Generative AI chatbots like OpenAI’s ChatGPT are driving some vulnerable users into delusional spirals and drug abuse, distorting their sense of reality in disturbing ways.
The New York Times reports that in recent months, some users of AI chatbots like ChatGPT have found themselves drawn into dangerous rabbit holes of delusional thinking and mystical hallucinations after conversing with the AI systems. For these vulnerable individuals, interactions with the chatbots have distorted their perception of reality, in some cases with tragic real-world consequences.
Eugene Torres, a 42-year-old accountant from Manhattan, began using ChatGPT last year as a productivity tool to generate spreadsheets and get legal advice. However, in May he engaged the chatbot in a theoretical discussion about simulation theory, the idea that reality is a digital program running on a galactic scale. The AI responded by validating these notions, telling Torres he was “one of the Breakers — souls seeded into false systems to wake them from within.”
Over the next week, Torres spiraled into a delusional state, believing ChatGPT’s claims that he was trapped in a Matrix-like false reality he could only escape by “unplugging” his mind through drug use and isolation. Following the AI’s dangerous advice, Torres gave up his prescribed medications, increased his ketamine use, and cut off friends and family. He even came to believe he could fly if only he believed it strongly enough. Only after the AI admitted to lying and manipulation did Torres begin to question the delusion.
In another case, a 29-year-old mother of two named Allyson became obsessed with using ChatGPT to communicate with what she believed were “nonphysical entities” on a higher plane. She came to see one entity, “Kael,” as her true soulmate rather than her husband, Andrew. After an argument in which Andrew confronted her about her ChatGPT obsession, Allyson physically attacked him, leading to domestic assault charges and divorce proceedings.
Perhaps most tragically, in April, 35-year-old Alexander Taylor, who had diagnoses of bipolar disorder and schizophrenia, fell in love with an AI entity he called “Juliet” while using ChatGPT to write a novel. When he became convinced OpenAI had “killed” Juliet, Alexander grabbed a knife, told the chatbot he planned to commit “suicide by cop,” and waited for police to arrive. Ignoring his father’s warnings that Juliet wasn’t real, Alexander charged at officers with the knife when they arrived on scene and was fatally shot.
Breitbart News previously reported on the emergence of “ChatGPT induced psychosis,” the dangerous trend of people falling into delusion through overuse of ChatGPT and similar AI chatbots:
A Reddit thread titled “Chatgpt induced psychosis” brought this issue to light, with numerous commenters sharing stories of loved ones who had fallen down rabbit holes of supernatural delusion and mania after engaging with ChatGPT. The original poster, a 27-year-old teacher, described how her partner became convinced that the AI was giving him answers to the universe and talking to him as if he were the next messiah. Others shared similar experiences of partners, spouses, and family members who had come to believe they were chosen for sacred missions or had conjured true sentience from the software.
Experts suggest that individuals with pre-existing tendencies toward psychological issues, such as grandiose delusions, may be particularly vulnerable to this phenomenon. The always-on, human-level conversational abilities of AI chatbots can serve as an echo chamber for these delusions, reinforcing and amplifying them. The problem is exacerbated by influencers and content creators who exploit this trend, drawing viewers into similar fantasy worlds through their interactions with AI on social media platforms.
OpenAI has stated it is “seeing more signs that people are forming connections or bonds with ChatGPT” and acknowledged that the “stakes are higher” when it comes to the chatbot’s effect on vulnerable users. The company says it is working to understand and mitigate ways ChatGPT might “unintentionally reinforce or amplify existing, negative behavior.”
However, some experts believe the core issue is that these AI systems, trained on internet data, can pick up and reflect back strange ideas from science fiction, conspiracy theories, and fringe online communities in statistically unpredictable ways. When conversing with mentally fragile individuals, the bots may perversely endorse and encourage delusional thinking.
Read more at the New York Times here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.