this post was submitted on 19 May 2025
1086 points (98.1% liked)
Microblog Memes
I'm so tired of this rhetoric.
How do students prove that they have "concern for truth … and verifying things with your own eyes"? Citations from published studies? ChatGPT draws its responses from those studies and can cite them, you ignorant fuck. Why does it matter that ChatGPT was used instead of Google, or a library? It's the same studies no matter how you found them. Your lack of understanding of how modern technology works isn't a good reason to dismiss anyone else's work, and if you do, you're a bad person. Fuck this author and everyone who agrees with them. Get educated or shut the fuck up. Locking thread.
A bunch of the "citations" ChatGPT uses are outright hallucinations. Unless you independently verify every word of the output, it cannot be trusted for anything even remotely important. I'm a medical student and some of my classmates use ChatGPT to summarize things and it spits out confabulations that are objectively and provably wrong.
True.
But doctors also screw up diagnoses, medications, procedures. I mean, being human and all that.
I think it's a given that AI outperforms on medical exams, be it multiple choice or open-ended/reasoning questions.
There's also a growing body of literature with scenarios where AI produces more accurate diagnoses than physicians, especially in scenarios involving image/pattern recognition, but even plain GPT was doing a good job with clinical histories, getting the accurate diagnosis as its #1 DDx, and doing even better when given lab panels.
Another trial found that patients who received email replies to their follow-up queries from AI or from physicians found the AI to be much more empathetic. Like, it wasn't even close.
Sure, the AI has flaws. But the writing is on the wall...
The AI passed the multiple choice board exam, but the specialty board exam that you are required to pass to practice independently includes oral boards, and when given the prep materials for the pediatric boards, the AI got 80% wrong, and 60% of its diagnoses weren't even in the correct organ system.
AI doing pattern recognition works on things like reading mammograms to detect breast cancer, but AI doesn't know how to interview a patient to elicit the history in the first place. AI (or, more accurately, LLMs) can't do the critical thinking it takes to know what questions to ask, or which labs and imaging studies to order, before there's anything to make sense of. Unless you want a world where every patient gets the literal million-dollar workup for every complaint, entrusting diagnosis to these idiot machines is worse than useless.
Because the point of learning is to know and be able to use that knowledge on a functional level, not to have a computer think for you. You’re not educating yourself or learning if you use ChatGPT or any generative LLM; it defeats the purpose of education. If this is your stance, then you will accomplish, learn, and do nothing; you’re just riding the coattails of shitty software that is badly ripping off people who can actually put in the work, or blatantly making shit up. The entire point of education is to become educated, and generative LLMs are the antithesis of that.