this post was submitted on 13 May 2026
547 points (97.2% liked)
LinkedinLunatics
A place to post ridiculous posts from linkedIn.com
(Full transparency.. a mod for this sub happens to work there.. but that doesn't influence his moderation or laughter at a lot of posts.)
founded 2 years ago
you are viewing a single comment's thread
If LLMs didn't hallucinate I'd fully agree with you
You can tune an AI setup to reduce hallucination. Like, you can constrain it so it only returns actual verbatim article snippets.
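The "verbatim snippets" idea above is essentially retrieval without generation: the system ranks trusted documents against the query and returns them unmodified, so it cannot fabricate text. A minimal sketch in Python (the corpus and overlap scoring are made up for illustration, not any real medical system):

```python
import re

def tokenize(text):
    """Lowercase and split into alphanumeric word tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve_verbatim(query, corpus, top_k=1):
    """Rank documents by word overlap with the query; return them unmodified."""
    q = tokenize(query)
    scored = sorted(corpus, key=lambda doc: len(q & tokenize(doc)), reverse=True)
    return scored[:top_k]

# Toy stand-in for a vetted article database.
corpus = [
    "Aspirin inhibits platelet aggregation and is used in secondary prevention.",
    "Metformin is a first-line therapy for type 2 diabetes.",
]
print(retrieve_verbatim("first line therapy for type 2 diabetes", corpus))
```

The point of the design is that every string in the output is an exact member of the input corpus; whether that corpus is any good is a separate (human) problem.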
Which is why your doctor should use it as a tool and validate the results. You know, do their job.
Y'all are just fucking binary. How do you think medical community members work now? They use a shitty search engine or portal to look up material, and yes, some of it will be garbage they need to wade through.
But God forbid they have a tool that puts that information into a cited overview to supplement a tricky diagnosis. The prejudice and fake workflows that y'all invent are crazy, looking for little edge cases everywhere trying to catch the AI in a mistake.
I have no problem with them using search engines. They can vet and choose answers from reliable sources. From an LLM, it's anybody's guess if anything it pulled up is correct, and a less experienced doctor could be misled into making a dangerous mistake.
Riiiiiiiiiight, LLMs don't cite sources and the portals written in the 90s for journals solve all of that.
It's so amazing to watch you all invent these crazy scenarios, where you've chosen the absolute lowest bar you can find. As if some layman who has no clue how to use this tool is working on some free Claude account because you read about one shitty doctor or lawyer fucking up. It's honestly sad seeing these hoops to jump through.
Professional tools, run by some of the most educated type-A professionals on the planet, minimize these risks by providing defaults and interfaces along with education.
FFS they can (and will) kill you accidentally with far simpler shit that can't be mostly mitigated away. But yeah, because LLMs can be used poorly by morons, they're worthless 🙄.
Do you know what happens when doctors fuck up?
Like, I don't think this is the slam dunk reply that you think it is.
I'm pretty sure I'd care if I thought you could read. I addressed your question.
When was the last time you used them? They can provide sources for pretty much everything they say, and that source usually actually contains said thing.
But even if not, even back 2 years ago, it was already good because you had a second look, a different perspective. A medical professional can either know little about everything or much about next to nothing. It should be really obvious how such a tool can help, even if it cannot reach expert level.
"Don't worry, when you ask it for sources it gives you some. Sometimes they are even real! And sometimes the real ones even say the thing they were supposed to have said from the AI!"
Fucking lunacy.
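For what it's worth, the specific failure being mocked here, a real citation that doesn't actually contain the claimed statement, is mechanically checkable before the output ever reaches a doctor. A minimal sketch (the function name and sample text are hypothetical):

```python
# Basic citation sanity check: does the quote the model attributed to a
# source actually appear in that source's text? Catches fabricated quotes,
# not subtle misinterpretation of a real passage.

def quote_appears_in_source(quote, source_text):
    """Case-insensitive, whitespace-normalized substring check."""
    norm = lambda s: " ".join(s.lower().split())
    return norm(quote) in norm(source_text)

# Hypothetical retrieved source text.
source = "Metformin is recommended as first-line pharmacologic therapy."
print(quote_appears_in_source("first-line pharmacologic therapy", source))  # True
print(quote_appears_in_source("contraindicated in all patients", source))   # False
```

A check like this only verifies the quote exists verbatim; it says nothing about whether the model's *interpretation* of the source is right, which is the harder problem both sides of this thread are arguing about.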
I ask LLMs medical questions almost daily and they get it wrong most of the time. Do I have data or a study to back this claim up? No, but if you check the sources (Perplexity) you can just see the wrong interpretation of studies or disreputable web content sources (county fair science projects, Reddit, quack websites).
It’s sometimes useful for general knowledge if you really don’t care that it might be wrong, but I’m not sure when this would be the case with medical advice
The confidence with which LLMs misstate facts and my inability to know which of my healthcare workers is blindly trusting of these tools makes it dangerous
Well I'm sure free chatgpt is the best we can do 😂
Where did I say chatgpt?
Using their institutional knowledge, and critical thinking to wade through unrelated search results is not the same as asking a sycophantic chat bot that's programmed to always answer with complete confidence whether it's right or wrong
I absolutely love that you all think that there are only these models designed to make you like the product. It's like watching Republicans scream to the world how ignorant you are. Everything is a chatbot to you all 😂
I love that you keep saying that with no proof to back up a claim. Which chatbot do you use then? And if you don't say exactly which chatbot you use everyone will know you're a stupid liar.
Microsoft MAI-DxO is currently winning in our metrics but not widely available. At this point in time it is the obvious winner, although not in GA. DxGPT is second, followed by PathAI and Aidoc. All of these have completed FDA requirements, been studied, and enjoyed a success rate near or better than doctors' when used in parallel for diagnosis.
Using AI as a tool to find additional information? Sure, could be doable maybe.
Asking sycophantic "you're absolutely correct" machines for a second opinion? Absolutely not!
Hoffman is advocating for the latter.
The thing is, the AIs that doctors are using aren't just the commercially available ChatGPT or Gemini; they're specialized, tuned for accuracy, and trained only on medical articles.
Imagine believing that they'll use general-purpose free ChatGPT. Just amazing, these scenarios you all invent. I can't tell if it's just straight blind prejudice or you all really don't understand how it can integrate into tooling with very specific models.
Just wild what people have cooked up in their mental model.
They literally do. ChatGPT and Copilot. I work medical field adjacent, and these are what the providers use. Not cooked up in a mental model, witnessed directly.
Take note that I'm only replying to refute your ignorance. I'm not going to engage with whatever AI generated ragebait vitriol you spew at me.
Well, those are the idiots who were already lazy and going to kill you 🤷♂️
There is proper tooling, and they should be learning to use it.
Further proof you can't read