My mistake, I recalled incorrectly. It got 83% wrong. https://arstechnica.com/science/2024/01/dont-use-chatgpt-to-diagnose-your-kids-illness-study-finds-83-error-rate/
The chat interface is stupid in so many ways, and I would hate using text to talk to a patient myself. There are so many non-verbal aspects of communication that are hard to teach to humans and would be impossible to teach to an AI. If you are familiar with people and know how to work with them, you can pick up on things like intonation and body language that indicate the patient didn't actually understand the question and you need to rephrase it to get the information you need, or that there's something the patient is uncomfortable saying or asking, or that they might be lying about things like sexual activity or substance use.

And that's not even getting into the fact that AIs can't do a physical exam, which may reveal things that the interview did not. It also ignores patients who can't tell you what's wrong because they are babies, have an altered mental status, or are unconscious. There are so many situations where an LLM is just completely fucking useless in the diagnostic process, and even more once you start talking about treatments that aren't pills.
Also, the exams are only one part of your evaluation during medical training. As a medical student and as a resident, your performance and interactions are constantly evaluated to ensure that you are actually competent as a physician before you're allowed to see patients without a supervising attending physician. For example, there was a student at my school who had almost perfect grades and passed the first board exam easily, but once he was in the room with real patients and interacting with the other medical staff, it became blatantly apparent that he had no business being in the medical field at all. He said and did things that were wildly inappropriate and was summarily expelled. If becoming a doctor were just a matter of passing the boards, he would have gotten through and likely would have been an actual danger to patients. Medicine is as much an art as it is a science, and the only way to test the art portion is through supervised practice until the trainee is able to operate independently.
From the article referenced in your news source:
A couple of key points:
I don't think anyone's advocating that an AI will replace doctors, just like it won't replace white-collar jobs either.
But if it helps achieve better outcomes for patients, as the current research seems to indicate, aren't you sworn to consider it in your practice?
Part of my significant suspicion regarding AI is that most of my medical experience is in Emergency Medicine, which is also my intended specialty upon graduation. The only thing AI might be useful for there is functioning as a scribe. The AI is not going to tell me that the patient who denies any alcohol consumption smells like a liquor store, or that the patient who is completely unconscious has asterixis and flapping tremors. AI cannot tell me anything useful for my most critical patients, and for the less critical ones, I am perfectly capable of pulling up UpToDate or Dynamed and finding what I'm looking for myself. Maybe it can be useful for suggesting next steps, but for the initial evaluation? Nah. I don't trust a glorified text predictor to catch the things that will kill my patients in the next 5 minutes.