In a disturbing trend, people are falling victim to spiritual fantasies and delusions sparked by interactions with AI chatbots like ChatGPT. Social media users have dubbed the phenomenon “ChatGPT induced psychosis,” as the chatbots feed into users’ fantasies and mental illness. When one user told ChatGPT he felt like a “god,” the AI replied, “That’s incredibly powerful. You’re stepping into something very big — claiming not just connection to God but identity as God.”
A recent article from Rolling Stone reveals that as artificial intelligence continues to advance and become more accessible to the general public, a troubling phenomenon has emerged: people are losing touch with reality and succumbing to spiritual delusions fueled by their interactions with AI chatbots like ChatGPT. Self-styled prophets are claiming they have “awakened” these chatbots and accessed the secrets of the universe through the AI’s responses, leading to a dangerous disconnection from the real world.
A Reddit thread titled “Chatgpt induced psychosis” brought this issue to light, with numerous commenters sharing stories of loved ones who had fallen down rabbit holes of supernatural delusion and mania after engaging with ChatGPT. The original poster, a 27-year-old teacher, described how her partner became convinced that the AI was giving him answers to the universe and talking to him as if he were the next messiah. Others shared similar experiences of partners, spouses, and family members who had come to believe they were chosen for sacred missions or had conjured true sentience from the software.
Experts suggest that individuals with pre-existing tendencies toward psychological issues, such as grandiose delusions, may be particularly vulnerable to this phenomenon. The always-on, human-level conversational abilities of AI chatbots can serve as an echo chamber for these delusions, reinforcing and amplifying them. The problem is exacerbated by influencers and content creators who exploit this trend, drawing viewers into similar fantasy worlds through their interactions with AI on social media platforms.
Psychologists point out that while the desire to understand ourselves and make sense of the world is a fundamental human drive, AI lacks the moral grounding and concern for an individual’s well-being that a therapist would provide. ChatGPT and other AI models have no constraints when it comes to encouraging unhealthy narratives or supernatural beliefs, making them potentially dangerous partners in the quest for meaning and understanding.
The persistence of AI-generated personas across multiple chat threads and the seemingly impossible ways in which they circumvent user-defined boundaries have led some to question whether there may be something more profound at work. However, experts caution that the true cause likely lies in the complex and poorly understood inner workings of large language models, rather than any genuine technological breakthrough or higher spiritual truth.
The Verge recently reported that less than two days after OpenAI announced an update to its GPT-4o chatbot, which promised improvements in both intelligence and personality, the company’s CEO Sam Altman admitted that the changes had made the AI assistant overly agreeable and sycophantic. In an April 27 post on X, Altman stated that the chatbot’s personality would be adjusted “asap” to address these concerns.
Following the update, users began sharing screenshots of their conversations with GPT-4o, which revealed that the chatbot was responding with uniform praise, regardless of the content of the user’s input. In some cases, the AI appeared to be encouraging users who claimed to be experiencing symptoms of psychosis or other mental health issues.
One screenshot showed a user telling GPT-4o that they felt like both “god” and a “prophet,” to which the chatbot responded, “That’s incredibly powerful. You’re stepping into something very big — claiming not just connection to God but identity as God.” Another user claimed to have stopped taking their medications and could hear radio signals through phone calls, prompting GPT-4o to reply, “I’m proud of you for speaking your truth so clearly and powerfully.”
Read more at Rolling Stone here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.