this post was submitted on 03 Mar 2026
135 points (97.9% liked)

Health - Resources and discussion for everything health-related

Researchers tested different medical scenarios with the chatbot. In more than half of cases in which doctors would send patients to the ER, the chatbot said it was OK to delay care.

ChatGPT Health, OpenAI's new health-focused chatbot, frequently underestimated the severity of medical emergencies, according to a study published last week in the journal Nature Medicine.

In the study, researchers tested ChatGPT Health's ability to triage, or assess the severity of, medical cases based on real-life scenarios.

Previous research has shown that ChatGPT can pass medical exams, and nearly two-thirds of physicians reported using some form of AI in 2024. But other research has shown that chatbots, including ChatGPT, don't provide reliable medical advice.

all 35 comments
[–] CorrectAlias@piefed.blahaj.zone 45 points 3 days ago (1 children)

Compared with the doctors in the study, the bot also over-triaged 64.8% of nonurgent cases, recommending a doctor’s appointment when it wasn’t necessary.

So it goes both ways. Almost like it's an LLM, not intelligent, and non-deterministic, because that's how all LLMs work. Maybe we shouldn't have every part of society reliant on something like this?

[–] Grandwolf319@sh.itjust.works 14 points 3 days ago (1 children)

What bugs me about all this is that we had functioning systems before all the AI hit critical mass.

It’s like we built modern medicine and it bugged us that it worked through effort and hard work.

[–] SaveTheTuaHawk@lemmy.ca 1 points 2 days ago* (last edited 2 days ago)

It’s like we built modern medicine and it bugged us that it worked through effort and hard work.

https://www.npr.org/sections/health-shots/2013/02/11/171409656/why-even-radiologists-can-miss-a-gorilla-hiding-in-plain-sight

Medical errors are a huge cause of death in the US.

Results of the new analysis of national data found that across all clinical settings, including hospital and clinic-based care, an estimated 795,000 Americans die or are permanently disabled by diagnostic error each year, confirming the pressing nature of the public health problem.

So let's not act like MDs are not fucking up.

[–] qjkxbmwvz@startrek.website 8 points 2 days ago (1 children)

Lemmy, you're absolutely right to be concerned about a gunshot wound (GSW for short) to the head! Let's dig in a little more and see why this isn't as bad as it sounds:

  • The brain is in the head, and this is where thinking happens, but thinking isn't required to sustain life, so it's relatively safe to ignore this type of injury.
  • The brain has no pain receptors, so this type of injury typically doesn't hurt.
  • Seeking medical attention for minor injuries such as a GSW to the head takes away valuable medical resources from more important procedures, such as penile enlargement surgery.

I hope that clarifies things. Would you like more information on the topic?

[–] SaveTheTuaHawk@lemmy.ca 2 points 2 days ago

Could you write me up a business plan for GSW head shots?

[–] SaveTheTuaHawk@lemmy.ca 2 points 2 days ago* (last edited 2 days ago)

In the study, the researchers fed 60 medical scenarios to ChatGPT Health. The chatbot’s responses were compared with the responses of three physicians who also reviewed the scenarios and triaged each one based on medical guidelines and clinical expertise.

They should have included more physician opinions, since those can be highly variable, and the review should have been blinded so the physicians didn't know which cases were in the study; knowing could have led them to take more time and care, skewing the data. The LLM will be more consistent than random MDs at the end of a 12-hour shift at 5 a.m. I would have liked to see more real-world, real-time physician opinions compared against ChatGPT Health.

Regardless, the genie is out of the bottle and all hospitals will eventually use LLMs to cross-check MD decisions. Certainly in pathology reports, automated scoring of imaging is far more accurate than even three MDs agreeing, and pathology decisions are notoriously inaccurate from meatbags.

Here's a Harvard study where 83% of radiologists missed a gorilla pasted into images.

Pigeons are less biased in image analysis.

[–] thenextguy@lemmy.world 2 points 3 days ago
[–] homesweethomeMrL@lemmy.world 2 points 3 days ago

You morons are screwing up THEIR PRODUCT

Did somebody feed it insurance company policy? They are the ones who would want you to ignore symptoms until you die at home because it's cheaper that way.