Fake Disease, Real AI Diagnoses: A Cautionary Tale of Misinformation

TL;DR Summary
A medical researcher fabricated a wholly fictional eye disease, Bixonimania, and posted a bogus preprint to test whether AI chatbots would diagnose it. Although she clearly labeled the work as fake and followed ethical safeguards, AI models such as Microsoft Copilot and Google Gemini began offering Bixonimania as a possible diagnosis, and the fake disease was even cited in a peer-reviewed paper before being retracted. The episode underscores how AI can propagate misinformation when readers don't verify sources, highlights the need for caution when using AI in medicine, and led the researcher to retract and hide the work to prevent further spread.
- A researcher published a paper on a made-up disease. Then people started getting diagnosed. (Yahoo News Canada)
- Researchers Invented A Fake Eye Condition. ChatGPT, Gemini And Perplexity Repeated It As Real (NDTV)
- AI Chatbots Repeat Misinformation When Trained on False Content, Study Finds (The Quint)
- ‘Bixonimania’ Is a Fake Disease—But ChatGPT Diagnosed It to Thousands, Other AI Did Too (Nurse.org)
- Researchers published fake studies on a made-up disease. AI fell for it (ThePrint)
Want the full story? Read the original article on Yahoo News Canada.