this post was submitted on 19 May 2025
1417 points (98.0% liked)

Microblog Memes

 
[–] ByteJunk@lemmy.world -1 points 5 hours ago (1 children)

From the article referenced in your news source:

_JAMA Pediatrics and the NEJM were accessed for pediatric case challenges (N = 100). The text from each case was pasted into ChatGPT version 3.5 with the prompt "List a differential diagnosis and a final diagnosis."_

A couple of key points:

  • These are case challenges, which are deliberately difficult. I could find no comparison to actual physician results in the article, which would have been a useful baseline.
  • More importantly, however: the study was conducted in June 2023 and used GPT-3.5. GPT-4 improved substantially on it, especially for complex scientific problems, and this shows in the newer studies that have used it.

I don't think anyone's advocating that AI will replace doctors, much like it won't replace white-collar jobs either.

But if it helps achieve better outcomes for patients, as the current research seems to indicate, aren't you sworn to consider it in your practice?

[–] medgremlin@midwest.social 1 points 3 hours ago

A large part of my suspicion regarding AI comes from the fact that most of my medical experience, and my intended specialty upon graduation, is Emergency Medicine. The only thing AI might be useful for there is functioning as a scribe. The AI is not going to tell me that the patient who denies any alcohol consumption smells like a liquor store, or that the patient who is completely unconscious has asterixis and flapping tremors.

AI cannot tell me anything useful for my most critical patients, and for the less critical ones, I am perfectly capable of pulling up UpToDate or DynaMed and finding what I'm looking for myself. Maybe it can be useful for suggesting next steps, but for the initial evaluation? Nah. I don't trust a glorified text predictor to catch the things that will kill my patients in the next 5 minutes.