this post was submitted on 29 Jun 2025
488 points (95.9% liked)

Technology


A profound relational revolution is underway, not orchestrated by tech developers but driven by users themselves. Many of the 400 million weekly users of ChatGPT are seeking more than just assistance with emails or information on food safety; they are looking for emotional support.

“Therapy and companionship” have emerged as two of the most frequent applications for generative AI globally, according to the Harvard Business Review. This trend marks a significant, unplanned pivot in how people interact with technology.

50 comments
[–] poopkins@lemmy.world 10 points 4 days ago (3 children)

Funny, I was just reading comments in another thread about people with mental health problems proclaiming how terrific it is. Especially concerning is how they had found value in the recommendations LLMs make and were "trying those out." One of the commenters described themselves as "neuro diverse" and was acting on "advice" from generated LLM responses.

And for something like depression, that's deeply bad advice. I feel somewhat qualified to weigh in as somebody who has struggled severely with depression and managed to get through it with the support of a very capable therapist. There's a tremendous amount of depth and context to somebody's mental condition, and understanding it takes far more deliberate probing than stringing together words until they form sentences that mimic human interaction.

Let's not forget that an LLM will not be able to raise alarm bells, read medical records, write prescriptions or work with other medical professionals. Another thing people often forget is that LLMs have maximum token lengths and cannot, by definition, keep a detailed "memory" of everything that's been discussed.
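To illustrate that last point: a chatbot doesn't accumulate a patient file; each request carries only as much recent conversation as fits in the context window, and anything older is simply dropped. A minimal, hypothetical sketch (not any particular product's code; the token budget and per-word estimate are made-up numbers):

```python
# Hypothetical sketch of why a chatbot "forgets": only the newest messages
# that fit the model's context window get sent with each request; anything
# older is silently dropped rather than remembered.

def truncate_history(messages, max_tokens=4096, tokens_per_word=1.3):
    """Keep the newest messages that fit a fixed token budget (crude estimate)."""
    kept, used = [], 0
    for msg in reversed(messages):              # walk from newest to oldest
        cost = int(len(msg.split()) * tokens_per_word)
        if used + cost > max_tokens:
            break                               # older turns fall out of "memory"
        kept.append(msg)
        used += cost
    return list(reversed(kept))                 # restore chronological order
```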

It's effectively self-treatment with more steps.

[–] whalebiologist@lemmy.world 7 points 4 days ago (1 children)

LLM will not be able to raise alarm bells

That's basically the "benefit" LLM therapy would provide if it worked. The reality is that it doesn't, but it serves as a proof of concept that there is a need for anonymous therapy. Therapy in the USA is only for people with socially acceptable illnesses. People rightfully live in fear of being labeled untreatable or a danger to themselves and others, and then at best being dropped from therapy and at worst institutionalized.

[–] TubularTittyFrog@lemmy.world 7 points 4 days ago* (last edited 4 days ago) (2 children)

It's effectively self-treatment with more steps.

And for many people it's better than nothing, and likely the best they can do. Waiting lists for a basic therapist in my area are months long. They're shorter if you pay out of pocket, but that isn't affordable for average people because it's something like $300-400 for a one-hour session.

[–] Geodad@lemmy.world 22 points 5 days ago

Some people would rather talk to something they know is fake than to a person who may or may not be.

[–] Blackmist@feddit.uk 13 points 5 days ago (5 children)

This thing has been trained on social media. Is that really wise?

[–] skrlet13@feddit.cl 14 points 5 days ago (2 children)

genAI chatbots are so predatory
