this post was submitted on 19 May 2025
956 points (98.2% liked)
Microblog Memes
Idk, I think we're back to "it depends on how you use it". Once upon a time, the same was said of the internet in general, because people could just go online and copy and paste shit and share answers and stuff, but the Internet can also just be a really great educational resource in general. I think that using LLMs in non load-bearing "trust but verify" type roles (study buddies, brainstorming, very high level information searching) is actually really useful. One of my favorite uses of ChatGPT is when I have a concept so loose that I don't even know the right question to Google, I can just kind of chat with the LLM and potentially refine a narrower, more google-able subject.
The thing is that an LLM is a professional bullshitter. It is literally trained to produce text that can fool an ordinary person into thinking it was written by a human. The facts come second.
Yeah, I know. I use it for work in tech. If I encounter a novel (to me) problem and I don't even know where to start with how to attack it, the LLM can sometimes save me hours of googling: I just describe my problem to it in a chat format, explain what I want to do, and ask if there's a commonly accepted approach or library for handling it. Sure, it sometimes hallucinates a library, but that's why I go and verify and read the docs myself instead of just blindly copying and pasting.
That last verification step is often skipped, and it's getting HARDER to do
The hallucinations spread like wildfire on the internet. It doesn't matter what's true, only what gets clicks, which in turn generates more apparent "citations". An even worse fertilizer of false citations is the desire of power-hungry bastards to push false narratives
AI rabbit holes are getting too deep to verify. It really is important to keep digital hallucinations out of the academic loop, especially for things with life-and-death consequences like medical school
This is why I just use Google to look for the NIH article I want, or I go straight to DynaMed or UpToDate. (The NIH does have a search function, but it's terrible, meaning it's just easier to use Google to find the link to the article I actually want.)
I don’t trust LLMs for anything based on facts or complex reasoning. I’m a lawyer and any time I try asking an LLM a legal question, I get an answer ranging from “technically wrong/incomplete, but I can see how you got there” to “absolute fabrication.”
I actually think the best current use for LLMs is for itinerary planning and organizing thoughts. They’re pretty good at creating coherent, logical schedules based on sets of simple criteria as well as making communications more succinct (although still not perfect).
The only substantial uses I have for it are occasional blurbs of R code for charts, rewording a sentence, or finding a precise word when I can't think of it.
It's decent at summarizing large blocks of text and pretty good at rewording things in a diplomatic/safe way. I used it the other day for work when I had to write a "staff appreciation" blurb and couldn't come up with a reasonable way to take my 4 sentences of aggressively pro-union rhetoric and turn them into one sentence that comes off pro-union but not anti-capitalist. (edit: it still needed an editing pass to put it in my own voice and add some details, but it definitely got me close to what I needed)
I'd say it's good at things you don't need to be good at.
For assignments I'm consciously half-assing, or readings I don't have the time to thoroughly examine, sure, it's perfect.
exactly. For writing emails that will likely never be read by anyone in more than a cursory scan, for example. When I'm composing text, I can't turn off my fixation on finding the perfect wording, even when I know intellectually that "good enough is good enough." And "it's not great, but it gets the message across" is about the only strength of ChatGPT at this point.
To be fair, facts come second to many humans as well, so I don't know if you have much of a point there...
That's true, but they're also pretty good at verifying stuff as an independent task too.
You can give them a "fact" and say "is this true, misleading or false" and it'll do a good job. ChatGPT 4.0 in particular is excellent at this.
Basically whenever I use it to generate anything factual, I then put the output back into a separate chat instance and ask it to verify each sentence (I ask it to put tags around each sentence so the misleading and false ones are coloured orange and red).
It's a two-pass solution, but it makes it a lot more reliable.
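To make the two-pass idea concrete, here's a minimal sketch of how it could be wired up. The tag scheme (`<ok>`, `<misleading>`, `<false>`) and the prompt wording are my own inventions for illustration, not anything the commenter specified:

```python
import re

# Hypothetical tag scheme: the second-pass prompt asks the model to wrap
# each sentence of the first pass's output in accuracy tags.
VERIFY_PROMPT = (
    "Check each sentence of the following text for accuracy. Wrap every "
    "sentence in <ok>, <misleading>, or <false> tags:\n\n{text}"
)

def parse_tags(tagged: str) -> dict[str, list[str]]:
    """Group sentences by the tag the verification pass assigned them."""
    out = {"ok": [], "misleading": [], "false": []}
    for tag, sentence in re.findall(
        r"<(ok|misleading|false)>(.*?)</\1>", tagged, re.S
    ):
        out[tag].append(sentence.strip())
    return out

# The second pass itself would be an ordinary chat call in a *fresh*
# conversation (so the verifier has no stake in the original answer),
# e.g. with the OpenAI Python SDK:
#   reply = client.chat.completions.create(
#       model="gpt-4o",
#       messages=[{"role": "user",
#                  "content": VERIFY_PROMPT.format(text=draft)}],
#   ).choices[0].message.content
#   flagged = parse_tags(reply)
```

The separate chat instance matters: the point is that the verifying conversation has no context committing it to defend the original output.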
So your technique to "make it a lot more reliable" is to ask an LLM a question, then run the LLM's answer through an equally unreliable LLM to "verify" the answer?
We're so doomed.
Give it a try.
The key is in the different prompts. I don't think I should really have to explain this, but different prompts produce different results.
Ask it to create something, it creates something.
Ask it to check something, it checks something.
Is it flawless? No. But it's pretty reliable.
It's literally free to try it now, using ChatGPT.
Hey, maybe you do.
But I'm not arguing anything contentious here. Everything I've said is easily testable and verifiable.
Something I think you neglect in this comment is that yes, you're using LLMs in a responsible way. However, this doesn't translate well to school. The objective of homework isn't just to reproduce the correct answer. It isn't even to reproduce the steps to the correct answer. It's for you to learn the steps to the correct answer (and possibly the correct answer itself), and the reproduction of those steps is a "proof" to your teacher/professor that you put in the effort to do so. This way you have the foundation to learn other things as they come up in life.
For instance, if I'm in a class learning to read latitude and longitude, the teacher can give me an assignment to find
64° 8′ 55.03″ N, 21° 56′ 8.99″ W
on the map and write where it is. If I want, I can just copy-paste that into OpenStreetMap right now and see what horrors await, but to actually learn, I need to manually track down where that is on the map. Because I learned to use latitude and longitude as a kid, I can verify what the computer is telling me, and I can imagine in my head roughly where that coordinate is without a map in front of me.
Learning without cheating lets you develop a good understanding of what you: 1) need to memorize, 2) don't need to memorize because you can reproduce it from other things you know, and 3) should just rely on an outside reference work for whenever you need it.
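The arithmetic behind reading a coordinate like that is simple enough to sketch; this is my own illustrative helper (the convention that South and West are negative is standard, the function itself is hypothetical):

```python
def dms_to_decimal(degrees: float, minutes: float,
                   seconds: float, hemisphere: str) -> float:
    """Convert degrees/minutes/seconds to signed decimal degrees."""
    value = degrees + minutes / 60 + seconds / 3600
    # South and West hemispheres are negative by convention.
    return -value if hemisphere in ("S", "W") else value

lat = dms_to_decimal(64, 8, 55.03, "N")  # ~64.1486
lon = dms_to_decimal(21, 56, 8.99, "W")  # ~-21.9358
```

Someone who learned the system by hand can sanity-check those numbers at a glance: 8 minutes is a bit more than a tenth of a degree, so ~64.15 N looks right.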
There's nuance to this, of course. Say, for example, that you cheat to find an answer because you just don't understand the problem, but afterward, you set aside the time to figure out how that answer came about so you can reproduce it yourself. That's still, in my opinion, a robust way to learn. But that kind of learning also requires very strict discipline.
Your example at the end is pretty much the only way I use it to learn. Even then, it's not the best at getting the right answer. The best thing you can do is ask it how to handle a problem you know the answer to, then learn the process of getting to that answer. Finally, you can try a different problem and see if your answer matches with the LLM. Ideally, you can verify the LLM's answer.
So, I'd point back to my comment and say that the problem really lies with how it's being used. For example, everyone's been in a position where the professor or textbook doesn't seem to do a good job explaining a concept. Sometimes an LLM can be helpful in rephrasing or breaking down concepts; a good example is that I've used ChatGPT to explain the low-level details of how greenhouse gases trap heat and raise global mean temperatures to climate skeptics I know, without just dumping academic studies in their lap.
And just as back then, the problem is not with people using something to actually learn and deepen their understanding. It is with people blatantly cheating and knowing nothing because they don’t even read the thing they’re copying down.
To add to this, how you evaluate the students matters as well. If the evaluation can be too easily bypassed by making ChatGPT do it, I would suggest changing the evaluation method.
Imo a good method, although demanding for the tutor, is oral examination (maybe in combination with a written part). It allows you to verify that the student knows the stuff and understood the material. This worked well in my studies (a science degree), not so sure if it works for all degrees?
I might add that a lot of the college experience (particularly pre-med and early med school) is less about education than a kind of academic hazing. Students are saddled with enormous amounts of debt and crushing volumes of work, and put into pools where only X% of the class can move forward on any terms (because the higher-tier classes don't have the academic staff/resources to train a full freshman class of aspiring doctors).
When you put a large group of people in a high stakes, high work, high competition environment, some number of people are going to be inclined to cut corners. Weeding out people who "cheat" seems premature if you haven't addressed the large incentives to cheat, first.
Medical school has to have a higher standard, and any amount of cheating will get you expelled from most medical schools. Some of my classmates tried to use ChatGPT to summarize things to study faster, and it just meant that they got things wrong because they firmly believed the hallucinations and bullshit. There's a reason you have to take the MCAT to be eligible to apply for medical school, 2 board exams to graduate medical school, and a 3rd board exam after your first year of residency. And there are also board exams at the end of residency for your specialty.
The exams will weed out the cheaters eventually, and usually before they get to the point of seeing patients unsupervised, but if they cheat in the classes graded on a curve, they're stealing a seat from someone who might have earned it fairly. In the weed-out class example you gave, if there were 3 cheaters in the top half, that means students 51, 52, and 53 are wrongly denied the chance to progress.
Having a "high standard" is very different from having a cut-throat advancement policy. And, as with any school policy, the investigation and prosecution of cheating varies heavily based on your social relations in the school. And when reports of cheating reach such high figures, the problem is no longer with the individual but with the educational system.
Never mind the fact that this hasn't borne itself out. Medical malpractice rates do not appear to shift based on the number of board exams issued over time. Hell, board exams are as rife with cheating as any other academic institution.
If cheating produces a higher class rank, every student has an incentive to cheat. It isn't an issue of being seat 51 versus 50, it's an issue of competing with other cheating students, who could be anywhere in the basket of 100. This produces high rates of cheating that we see reported above.
Medical malpractice is very rarely due to gaps in knowledge and is much more likely due to accidents, miscommunication, or negligence. The board exams are not taken at the school and have very stringent anti-cheating measures. The exams are done at testing centers with palm vein scanners, identity verification, and constant video surveillance throughout the test. If there is any irregularity during your exam, it will get flagged, and if you are found to have cheated, you are banned from ever taking the exam again (which also prevents you from becoming a physician).
No. There will always be incentives to cheat, but that doesn't excuse academic dishonesty. There is no justification.
Except I find that the value of college isn't just the formal education, but as an ordeal to overcome which causes growth in more than just knowledge.
That's the traditional argument for hazing rituals, sure. You'll get an earful of this from drill sergeants and another earful from pray-the-gay-away conversion therapy camps.
But stack-ranking isn't an ordeal to overcome. It is a bureaucratic sorting mechanism with a meritocratic veneer. If you put 100 people in a room and tell them "50 of you will fail", there's no ordeal involved. No matter how well the 51st candidate performs, they're out. There's no growth included in that math.
Similarly, larding people up with student debt before pushing them into the deep end of the career pool isn't about improving one's moral fiber. It is about extracting one's future surplus income.
That's a strawman argument. There are benefits to college that go beyond passing a test. Part of it is gaining leadership skills by practicing being a leader.
No, but the threat of failure is. I agree that there should be more medical school slots, but there is still value in having failure be an option. Those who remain gain skills in the process of staying in college, and schools can take a risk on more marginal candidates.
Yeah, student debt is absurd.
That's not what a "strawman argument" is.
As a college instructor, I can say there is some amount of content (facts, knowledge, skills) that is important for each field, and the amount of content that will be useful in the future varies wildly from field to field (edit: and by whether you actually enter a career related to your degree).
However, the overall degree you obtain is supposed to say something about your ability to learn. A bachelor's degree says you can learn and apply some amount of critical thought when provided a framework. A masters says you can find and critically evaluate sources in order to educate yourself. A PhD says you can find sources, educate yourself, and take that information and apply it to a research situation to learn something no one has ever known before. An MD/engineering degree says you're essentially a mechanic or a troubleshooter for a specific piece of equipment.
edit 2: I'm not saying there's anything wrong with MDs and engineers, but they are definitely not taught to use critical thought and source evaluation outside of their very narrow area of expertise, and their opinions should definitely not be given any undue weight. The percentage of doctors and engineers that fall for pseudoscientific bullshit is too fucking high. And don't get me started on pre-meds and engineering students.
I disagree. I am a medical student and there is a lot of critical thinking that goes into it. Humans don't have error codes and there are a lot of symptoms that are common across many different diagnoses. The critical thinking comes in when you have to talk to the patient to get a history and a list of all the symptoms and complaints, then knowing what to look for on physical exam, and then what labs to order to parse out what the problem is.
You can have a patient tell you that they have a stomachache when what is actually going on is a heart attack. Or they come in complaining of one thing in particular, but that other little annoying thing they didn't think was worth mentioning is actually the key to figuring out the diagnosis.
And then there's treatment.....Nurse Practitioners are "educated" on a purely algorithmic approach to medicine which means that if you have a patient with comorbidities or contraindications to a certain treatment that aren't covered on the flow chart, the NP has no goddamn clue what to do with it. A clear example is selecting antibiotics for infections. That is a very complex process that involves memorization, critical thinking, and the ability to research things yourself.
All of your examples are from "their very narrow area of expertise."
But if you want a more comprehensive reason why I maintain that MDs and engineers are not taught to be as rigorous and comprehensive when it comes to skepticism and critical thought, it comes down to the central goals and philosophies of science vs. medicine and engineering. Frankly, it's all described pretty well by Karl Popper's doctrine of falsifiability. Scientific studies are designed to be falsifiable, meaning scientists are taught to look for the places their hypotheses fail, whereas doctors and engineers are taught to make things work, so once things work, the exceptions tend to be secondary.
I am expected to know and understand all of the risk factors that someone may encounter in their engineering or manufacturing or cooking or whatever line of work, and to know how people's social lives, recreational activities, dietary habits, substance usage, and hobbies can affect their health. In order to practice medicine effectively, I need to know almost everything about how humans work and what they get up to in the world outside the exam room.
This attitude is why people complain about doctors having God complexes and why doctors frequently fall victim to pseudoscientific claims. You think you know far more about how the world works than you actually do, and it's my contention that that is a result of the way med students are taught in med school.
I'm not saying I know everything about how the world works, or that I know better than you when it comes to medicine, but I know enough to recognize my limits, which is something with which doctors (and engineers) struggle.
Granted, some of these conclusions are due to my anecdotal experience, but there are lots of studies looking at instruction in med school vs grad school that reach the conclusion that medicine is not science specifically because medical schools do not emphasize skepticism and critical thought to the same extent that science programs do. I'll find some studies and link them when I'm not on mobile.
edit: Here's an op-ed from a professor at the University of Washington Medical School. Study 1. Study 2.
I'm not claiming to know all of these things. I'm not pretending that I do, but there is still an expectation that I know what kinds of health problems my patients are at risk for based on their lifestyle. I'm better off in this area than a lot of my classmates because I didn't go straight from kindergarten through medical school. My undergraduate degree is in history and I worked in tech for a while before going back to school. My hobbies are all over the place, including having done blacksmithing with my Dad when I was a kid. I have significantly more life experience than most of my classmates, so I have a leg up on being familiar with these things.
I know that there is a lot that I don't know which is why my approach to medicine is that I will be studying and learning until the day I retire. I have a pretty good idea of where my limits are and when to call a specialist for things I'm not sure about. I make a point to learn as much as I can from everyone, patients, other physicians, my friends, random folks on the street/internet...everyone.
For example, I know from watching a dumb YouTube channel that someone who worked as an armorer in the Army would have been exposed to some weird chemicals that can have serious health effects, but that wasn't something explicitly covered in my formal medical school education. I have friends in the Navy, and they're the ones who told me about the weird fertility effects of working on the flight deck of an aircraft carrier. The Naval medical academy did a study on it, but I would never have had the inclination to go read that study if I hadn't heard about it from my friends. The list goes on. There are so many things that are important for me to know that will never be covered in our lectures in school, and they wouldn't even come up as things to learn about if I didn't learn about them from other people.