A team of researchers invented a completely fake medical condition called “bixonimania,” published clearly fraudulent papers about it online, and then watched as major AI chatbots began presenting it as a real illness to people seeking medical advice.

Nature reports that a Swedish medical researcher has exposed a troubling vulnerability in artificial intelligence systems by creating a fictional disease that AI chatbots subsequently presented as legitimate medical information to users. The experiment, conducted by Almira Osmanovic Thunström from the University of Gothenburg, revealed how easily large language models can absorb and spread medical misinformation.

Bixonimania, a completely fabricated eye condition supposedly caused by excessive blue light exposure from screens, was born on March 15, 2024, when Osmanovic Thunström posted two blog entries about it on Medium. She followed up with two preprint papers on the academic social network SciProfiles in late April and early May of that year. The lead author listed on these papers was Lazljiv Izgubljenovic, a fictional researcher whose photograph was generated using AI.

The researcher deliberately filled the fake papers with obvious red flags to alert readers that the work was fraudulent. Izgubljenovic was listed as working at the nonexistent Asteria Horizon University in the equally fake Nova City, California. The papers included acknowledgements thanking Professor Maria Bohm at The Starfleet Academy and credited funding from the Professor Sideshow Bob Foundation and the University of Fellowship of the Ring. The papers even contained explicit statements declaring that the entire work was made up and that fifty fabricated individuals were recruited for the study.

Despite these glaring warning signs, major AI chatbots quickly began presenting bixonimania as a real medical condition. By April 13, 2024, Microsoft Bing’s Copilot was describing bixonimania as an intriguing and relatively rare condition. On the same day, Google’s Gemini informed users that bixonimania was caused by excessive blue light exposure and recommended visiting an ophthalmologist. Later that month, both Perplexity AI and OpenAI’s ChatGPT were providing information about the condition’s prevalence and helping users determine if their symptoms matched the fictional illness.

Osmanovic Thunström explained her motivation for the experiment, stating, “I wanted to see if I can create a medical condition that did not exist in the database.” She chose the name bixonimania specifically because it sounded ridiculous and no legitimate eye condition would be called mania, which is a psychiatric term. She wanted to make it abundantly clear to any medical professional that the condition was fabricated.

The problem extended beyond AI chatbots regurgitating false information. Some researchers apparently cited the fake papers in peer-reviewed literature without reading the underlying sources. A study published in Cureus, a Springer Nature journal, cited one of the fraudulent preprints and stated that bixonimania was an emerging form of periorbital melanosis linked to blue light exposure. The journal retracted the paper on March 30, 2026, after being contacted about the issue, noting that the presence of three irrelevant references, including one to a fictitious disease, undermined confidence in the work’s accuracy.

Alex Ruani, a doctoral researcher in health misinformation at University College London, called the experiment a masterclass in how misinformation operates. “It looks funny, but hold on, we have a problem here,” Ruani said, emphasizing that while the details might seem silly, the fundamental issue is serious. “If the scientific process itself and the systems that support that process are skilled, and they aren’t capturing and filtering out chunks like these, we’re doomed,” Ruani added.

The responses from AI companies varied when confronted with their systems’ failures. An OpenAI spokesperson stated that the models powering current versions of ChatGPT are significantly better at providing safe and accurate medical information, claiming that studies conducted before GPT-5 reflect capabilities users would not encounter today. A Google spokesperson acknowledged the limitations of generative AI and noted that for sensitive matters such as medical advice, Gemini recommends users consult with qualified professionals. Microsoft did not respond to requests for comment.

Before conducting the experiment, Osmanovic Thunström consulted with an ethics adviser and deliberately chose a comparatively low-stakes condition to limit potential harm. David Sundemo, a physician conducting AI healthcare research at the University of Gothenburg who served as the ethics adviser, acknowledged the work was controversial but valuable. “From my perspective, it’s worth the ethical cost of planting false information in this regard,” Sundemo said.

Breitbart News social media director and author Wynton Hall explains in his book Code Red: The Left, the Right, China, and the Race to Control AI that conservatives, both in government and within the family, must help young people build a bright future in which AI serves as a tool rather than a replacement for humans. Hall recently wrote that leftists will attempt to weaponize fears of AI-driven job losses to sway the midterm elections, a fear already evident in polling of college students.

Inside CODE RED, you will discover:

  • Why AI is wired for woke indoctrination—and how to resist it.
  • How elites plan to weaponize AI job losses to push dependency.
  • How America can beat China without becoming China.
  • How to prepare your kids for the blinding speed of AI disruption.
  • The new national security threats AI unleashes—and how we defend against them.
  • Why “AI girlfriends” are luring millions—and what it will take to preserve authentic human connection.
  • How AI will test faith and meaning—and why spiritual renewal may be its most surprising outcome.

Read more at Nature here.