Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn’t ready to take on the role of the physician.”
“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”
Dude, I hate AI. I'm not an AI person, so don't fucking classify me as one. You're the one who hasn't read the article, or the study behind it. Nowhere does it say the study included the doctors' diagnostic work. The study wasn't about whether LLMs are accurate in the hands of doctors; that's already been studied, and the study this article covers literally says so. Apparently LLMs pass medical licensing exams nearly 100% of the time, so this clearly has nothing to do with diagnostic notes. This study was about people using LLMs to diagnose themselves. That's it. That's the study. Don't spread bullshit. It's tiring debunking claims that are contradicted literally two sentences in.
https://www.nature.com/articles/s41591-025-04074-y