Having an LLM therapy chatbot psychologically help people is like having them play Russian roulette as a way to keep themselves stimulated.
Addiction recovery is a different animal entirely, too. Don't get me wrong, it's unethical to call any chatbot a therapist, counselor, whatever, but addiction recovery is not typical therapy.
You absolutely cannot let patients bullshit you. You have to have a keen sense for when patients are looking for any justification to continue using. Even those patients that sought you out for help. They're generally very skilled manipulators by the time they get to recovery treatment, because they've been trying to hide or excuse their addiction for so long by that point. You have to be able to get them to talk to you, and take a pretty firm hand on the conversation at the same time.
With how horrifically easy it is to convince even the most robust LLMs of your bullshit, this is not only an unethical practice by whoever said it was capable of doing this, it's enabling to the point of bordering on aiding and abetting.
Well, that's the thing: LLMs don't reason - they're basically probability engines for words - so they can't even do the most basic logical checks (such as "you don't advise an addict to take drugs"), much less the far more complex and subtle work of interpreting a patient's desires and motivations so as to guide them through a minefield in their own mind and emotions.
So the problem is twofold, and more general than just therapy/advice:
- LLMs can put out extreme things with giant consequences that a reasoning being would not (the "bullet in the chamber" of Russian roulette).
- They can't really do the subtle, multi-layered elements of analysis - the stuff beyond "if A then B" and into "why A", "what makes a person choose A, and can they find a way to avoid B by not choosing A", "what's the point of B", and so on - though granted, most people also seem to have trouble doing that last part naturally beyond maybe the first level of depth.
PS: I find it hard to explain multi-level logic. I suppose we could think of it as looking at the possible causes, of the causes, of the causes of a certain outcome, and then trying to figure out what can be changed at a higher level so that the last level - "the causes of a certain outcome" - can't even happen. Individual situations of such multi-level logic can get so complex and unique that they'll never appear in an LLM's training dataset, because that specific combination is so rare, even though they might be pretty logical and easy to work out for a reasoning entity - say, "I need to speak to my brother, because yesterday I went out in the rain and got drenched since I don't have an umbrella, and I know my brother has a couple of extra ones, so maybe he can give one of them to me".
AI is great for advice. It's like asking your narcissist neighbor for advice. He might be right. He might have the best answer possible, or he might be just trying to make you feel good about your interaction so you'll come closer to his inner circle.
You don't ask Steve for therapy or ideas on self-help. And if you did, you'd know to do due diligence on any fucking thing out of his mouth.
I'm still not sure what it's "great" at other than a few minutes of hilarious entertainment until you realize it's just predictive text with an eerie amount of data behind it.
Yuuuuup. It's like taking nearly the entirety of the public Internet, shoving it into a fancy autocorrect machine, having it spit out responses to whatever you say, then sending them along with no human involvement whatsoever in what reply gets sent to you.
It operates at a massive scale compared to what autocorrect does, but it's the same idea, just bigger and more complex.
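Just to make the "fancy autocorrect" idea concrete, here's a toy sketch of the core loop: count which word tends to follow which, then keep emitting the most likely next word. Real models do this over subword tokens with a huge neural network instead of a frequency table, but the "predict the next token" shape is the same. The corpus and names here are made up purely for illustration.

```python
# Toy "autocorrect at scale" sketch (my own illustration, not how any real LLM is built).
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count next-word frequencies for each word.
next_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_counts[current][nxt] += 1

def continue_text(prompt_word, length=5):
    """Greedily append the most probable next word, one word at a time."""
    out = [prompt_word]
    for _ in range(length):
        options = next_counts.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])  # no reasoning, just frequency
    return " ".join(out)

print(continue_text("the"))  # e.g. "the cat sat on the cat"
```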
Ask it to give you a shell.nix and a bash script that uses jq to stitch 30,000 JSON files together, de-dupe them, and drop it all into a SQLite db.
30 seconds, paste and run.
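For what it's worth, here's a rough sketch of that kind of job in Python rather than the jq/bash version it would actually hand back - the directory, the "id" field, and the table layout are all assumptions on my part, just to show how bounded the task is.

```python
# Rough sketch: merge many JSON files, de-dupe by id, load into SQLite.
# Paths and column names are made up for illustration.
import glob
import json
import sqlite3

records = {}
for path in glob.glob("data/*.json"):          # the 30,000 input files
    with open(path, encoding="utf-8") as f:
        obj = json.load(f)
    records[obj["id"]] = obj                   # de-dupe: last record with a given id wins

conn = sqlite3.connect("merged.db")
conn.execute("CREATE TABLE IF NOT EXISTS records (id TEXT PRIMARY KEY, body TEXT)")
conn.executemany(
    "INSERT OR REPLACE INTO records (id, body) VALUES (?, ?)",
    [(key, json.dumps(obj)) for key, obj in records.items()],
)
conn.commit()
conn.close()
```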
Give it the full script of an app you wrote where you're having a regex problem, and it's a particularly nasty regex.
No thought, boom, done. It'll even tell you what you did wrong so you won't make the same mistake next time.
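Purely hypothetical example of the kind of nasty regex mistake it's good at spotting - not from any real app - a greedy .* that swallows everything between the first and last quote instead of matching each field:

```python
import re

line = 'name="alice" role="admin"'

greedy = re.findall(r'"(.*)"', line)    # buggy: ['alice" role="admin']
lazy = re.findall(r'"(.*?)"', line)     # fixed: ['alice', 'admin']
print(greedy, lazy)
```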
I've been doing coding and scripting for 25 years. If you know what you want it to do and you know what it should look like when it's done, there's a tremendous amount of advantage there.
Add a function to this Flask application to use fuzzywuzzy to delete a name out of the text file, add a confirmation step. It's the crap that I only need to do once every two or three years, where I'd otherwise have to go and look up all of the documentation. And you know what, if something doesn't work and it doesn't know exactly how to fix it, I'm more than capable of debugging what it just did, because for the most part it documents pretty well and it uses best practices most of the time. It also helps to know where it's weak and what not to ask it to do.
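Something like this is roughly what that prompt gets you back - the route name, file path, and confirmation style here are my own assumptions for the sketch, not anything specific.

```python
# Sketch of a Flask route that fuzzy-matches a name in a text file and deletes it
# after a confirmation step. File path and endpoint are made up for illustration.
from flask import Flask, request, jsonify
from fuzzywuzzy import process

app = Flask(__name__)
NAMES_FILE = "names.txt"

@app.route("/delete-name", methods=["POST"])
def delete_name():
    query = request.form["name"]
    with open(NAMES_FILE, encoding="utf-8") as f:
        names = [line.strip() for line in f if line.strip()]
    if not names:
        return jsonify({"error": "no names in file"}), 404

    match, score = process.extractOne(query, names)  # closest fuzzy match and its score

    # Confirmation step: first call reports the match, a second call with confirm=yes deletes it.
    if request.form.get("confirm") != "yes":
        return jsonify({"found": match, "score": score, "hint": "resend with confirm=yes"})

    with open(NAMES_FILE, "w", encoding="utf-8") as f:
        f.writelines(name + "\n" for name in names if name != match)
    return jsonify({"deleted": match})

if __name__ == "__main__":
    app.run(debug=True)
```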
I'm happy it helps you and the things you do.