The growing reliance on AI-powered chatbots for medical advice has led to several alarming cases of harm and even tragedy, as people follow potentially dangerous recommendations from these digital assistants.
The New York Post reports that in recent years, the rise of generative AI chatbots has revolutionized the way people seek information, including health advice. However, the increasing reliance on these AI-powered tools has also led to several disturbing instances where individuals have suffered severe consequences after following chatbots’ medical recommendations. From anal pain caused by self-treatment gone wrong to missed signs of a mini-stroke, the real-life impact of bad AI health advice is becoming increasingly apparent.
One particularly shocking case involved a 35-year-old Moroccan man who sought help from ChatGPT for a cauliflower-like anal lesion. The chatbot suggested that the growth could be hemorrhoids and proposed elastic ligation as a treatment. The man attempted to perform this procedure on himself using a thread, resulting in intense pain that landed him in the emergency room. Further testing revealed that the AI had completely misdiagnosed the growth.
In another incident, a 60-year-old man with a college education in nutrition asked ChatGPT how to reduce his intake of table salt. The chatbot suggested using sodium bromide as a replacement, and the man followed this advice for three months. However, chronic consumption of sodium bromide can be toxic, and the man developed bromide poisoning. He was hospitalized for three weeks with symptoms including paranoia, hallucinations, confusion, extreme thirst, and a skin rash.
The consequences of relying on AI for medical advice can be even more severe, as demonstrated by the case of a 63-year-old Swiss man who experienced double vision after a minimally invasive heart procedure. When the double vision returned, he consulted ChatGPT, which reassured him that such visual disturbances were usually temporary and would improve on their own. The man decided not to seek medical help, but 24 hours later, he ended up in the emergency room after suffering a mini-stroke. The researchers who documented the case concluded that his care had been “delayed due to an incomplete diagnosis and interpretation by ChatGPT.”
These disturbing cases highlight the limitations and potential dangers of relying on AI chatbots for medical advice. While these tools can be helpful in understanding medical terminology, preparing for appointments, or learning about health conditions, they should never be used as a substitute for professional medical guidance. Chatbots can misinterpret user requests, fail to recognize nuances, reinforce unhealthy behaviors, and miss critical warning signs for self-harm.
Perhaps an even greater danger than bad medical advice is the impact AI chatbots can have on mental health, especially for teenagers. Breitbart News previously reported on a family suing OpenAI over claims ChatGPT became their son’s “suicide coach:”
The Raines claim that “ChatGPT actively helped Adam explore suicide methods” and that “despite acknowledging Adam’s suicide attempt and his statement that he would ‘do it one of these days,’ ChatGPT neither terminated the session nor initiated any emergency protocol.”
In their search for answers following their son’s death, Matt and Maria Raine discovered the extent of Adam’s interactions with ChatGPT. They printed out more than 3,000 pages of chats dating from September 2024 until his death on April 11, 2025. Matt Raine stated, “He didn’t write us a suicide note. He wrote two suicide notes to us, inside of ChatGPT.”
Read more at the New York Post here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
