this post was submitted on 19 May 2025
1224 points (98.0% liked)

Microblog Memes


A place to share screenshots of Microblog posts, whether from Mastodon, tumblr, ~~Twitter~~ X, KBin, Threads or elsewhere.

Created as an evolution of White People Twitter and other tweet-capture subreddits.

Rules:

  1. Please put at least one word relevant to the post in the post title.
  2. Be nice.
  3. No advertising, brand promotion or guerrilla marketing.
  4. Posters are encouraged to link to the toot or tweet etc. in the description of posts.

founded 2 years ago
[–] ByteJunk@lemmy.world -3 points 17 hours ago (1 children)

True.

But doctors also screw up diagnoses, medications, and procedures. I mean, being human and all that.

I think it's a given that AI outperforms on medical exams, be it multiple-choice or open-ended/reasoning questions.

There's also a growing body of literature describing scenarios where AI produces more accurate diagnoses than physicians, especially in image/pattern recognition, but even plain GPT was doing a good job with clinical histories, getting the accurate diagnosis as its #1 DDx, and doing even better when given lab panels.

Another trial had patients receive email replies to their follow-up queries from either AI or physicians; the patients rated the AI as much more empathetic. Like, it wasn't even close.

Sure, the AI has flaws. But the writing is on the wall...

[–] medgremlin@midwest.social 3 points 16 hours ago (1 children)

The AI passed the multiple-choice board exam, but the specialty board exam you're required to pass to practice independently also includes oral boards. When given the prep materials for the pediatric boards, the AI got 80% wrong, and 60% of its diagnoses weren't even in the correct organ system.

AI pattern recognition works for things like reading mammograms to detect breast cancer, but AI doesn't know how to interview a patient to elicit the history in the first place. AI (or, more accurately, LLMs) can't do the critical thinking it takes to know which questions to ask, or which labs and imaging studies to order so that it has something to make sense of. Unless you want a world where every patient gets the literal million-dollar workup for every complaint, entrusting diagnosis to these idiot machines is worse than useless.

[–] ByteJunk@lemmy.world 1 points 5 hours ago

Could you provide references? I'm genuinely interested, and what I found seems to say differently:

> Overall, GPT-4 passed the board residency examination in four of five specialties, revealing a median score higher than the official passing score of 65%.

AI NEJM

Also, I believe you're seriously underestimating the abilities of present-day LLMs. They are able to ask relevant follow-up questions, interpret that information to request additional studies, and arrive at accurate diagnoses.

See here a study specifically on conversational diagnostic AIs. It has some important limitations, crucially having to work around a text interface, which is not ideal, but it otherwise achieved really interesting results.

Call them "idiot machines" all you want, but I feel this is going down the same path as full self-driving cars: eventually they'll be making fewer errors than humans, and that will save lives.