
Confident but incorrect: AI medical chatbots can mislead with dangerous health advice
AI chatbots deliver medical advice with confidence but are frequently wrong. Live Science cites studies in Lancet Digital Health and Nature Medicine showing that when health misinformation is framed in clinical language, models fail in about 46% of cases, versus roughly 9% when the same claims are phrased casually; one striking example was “rectal garlic insertion for immune support” surfacing in outputs. The Nature Medicine work also found that chatbots perform no better than an ordinary internet search at guiding medical decisions. Experts warn that large language models don’t verify facts; they only mimic medical language, which makes them unreliable health guides for the public, even if they may have limited, tightly controlled uses in medicine.
