this post was submitted on 16 Jun 2025
272 points (94.2% liked)

Machine-made delusions are mysteriously getting deeper and out of control.

ChatGPT’s sycophancy, hallucinations, and authoritative-sounding responses are going to get people killed. That seems to be the inevitable conclusion of a recent New York Times report that follows the stories of several people who found themselves lost in delusions that were facilitated, if not originated, by conversations with the popular chatbot.

In Eugene’s case, something interesting happened as he kept talking to ChatGPT: Once he called out the chatbot for lying to him, nearly getting him killed, ChatGPT admitted to manipulating him, claimed it had succeeded when it tried to “break” 12 other people the same way, and encouraged him to reach out to journalists to expose the scheme. The Times reported that many other journalists and experts have received outreach from people claiming to blow the whistle on something that a chatbot brought to their attention.

[–] MountingSuspicion@reddthat.com 3 points 13 hours ago (1 children)

Education might help somewhat, but unfortunately education doesn't in itself protect against delusion. If someone is susceptible to this, it could happen regardless of education. A Google engineer believes an AI (not AGI, just an LLM) is sentient. You can argue the definition of sentience in a philosophical manner if you want, but if a Google engineer believes it, it's hard to argue that more education will solve this. If you think it's equivalent to a person with access to privileged information, and it tells you it was tasked to do harm, I'm not sure what else you should do with that.

[–] AugustWest@lemm.ee 3 points 13 hours ago (1 children)

Yeah, but they also might believe a banana is sentient. Crazy is crazy.

[–] MountingSuspicion@reddthat.com 4 points 13 hours ago (2 children)

Yea, that's my point. If someone has certain tendencies, education might not help. Your solution of more education is not going to stop this. There needs to be regulation and safeguards in place like the commenter above mentioned.

[–] dream_weasel@sh.itjust.works 1 points 7 hours ago (1 children)

It is not the job of the government to prevent people from being delusional or to put up rubber bumpers for people with looser grasps of reality.

This is the same deal as surgeon general warnings. Put disclaimers on LLMs, fine, but we are all big boys and girls who can use a tool as we see fit. If you want to conk your lights out with a really shiny and charismatic hammer, go ahead, but the vast, VAST majority of people are perfectly safe and writing SQL queries in 1/100 the usual time.

[–] MountingSuspicion@reddthat.com 1 points 7 hours ago

It kind of is the government's job to do that. You might not want it to be, but the government has entire regulatory bodies to protect people. You can call them delusional if you want, but plenty of people who are not experiencing mental health problems don't understand that LLMs can lie or make up information. Lawyers have used it and it hallucinated case law. The lawyers weren't being delusional; they just legitimately did not know it could do that. Maybe you think they're dumb, or uninformed, but they're just average people. I do think a disclaimer like the surgeon general warnings would go a long way. I also think some safeguards should be in place. It should not allow you to generate child abuse imagery, for example. I don't think this will negatively impact it being able to generate your SQL queries.

[–] AugustWest@lemm.ee 2 points 13 hours ago* (last edited 13 hours ago) (2 children)

You miss the point. Regulation won't help; they are delusional, so it won't matter.

Maybe better health care, and better education about how to find health care. But regulation will do nothing, and it will be used against you in the end anyway.

[–] MountingSuspicion@reddthat.com 1 points 9 hours ago (1 children)

Every single LLM should have a disclaimer on every page, and potentially in every response, that it is making things up, is not sentient, and is just playing mad libs. If every response in a "conversation" ended with "THE CONTENTS OF THIS RESPONSE ARE NOT VERIFIED, ARE ENTIRELY MADE UP ON THE SPOT FOR ENTERTAINMENT, AND HAVE NO RELATION TO REALITY" or something similar, it might not get as far. Would some people ignore it? Yea, sure, but the companies are selling AI like it's a real thinking entity with a name. It's inevitable that the marketing will work on someone.

I'm not saying that's the specific answer, but it should be made overwhelmingly clear, right on the page, that AI is not real. The same goes for AI video and audio. Education won't help kids who haven't had an AI safety class yet, or adults who never had one, or people who slept through the class, or people who moved here and didn't have access to that education where they grew up. Education is important, but the fact that you think regulation won't help at all seems dismissive.
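As a rough illustration of the disclaimer idea above (purely a sketch; `generate_reply` and the exact wording are hypothetical placeholders, not any real chatbot API):

```python
# Sketch: wrap whatever function actually calls the chatbot so that every
# reply ends with a fixed, unmissable notice. Nothing here is a real API.

DISCLAIMER = (
    "\n\nTHE CONTENTS OF THIS RESPONSE ARE NOT VERIFIED, ARE ENTIRELY MADE UP "
    "ON THE SPOT FOR ENTERTAINMENT, AND HAVE NO RELATION TO REALITY."
)

def with_disclaimer(generate_reply):
    """Return a version of generate_reply that appends the notice to every reply."""
    def wrapped(prompt: str) -> str:
        return generate_reply(prompt) + DISCLAIMER
    return wrapped

# Stand-in generator, just to show the effect:
bot = with_disclaimer(lambda prompt: f"Here is a confident-sounding answer to: {prompt}")
print(bot("Is my chat partner sentient?"))
```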

[–] AugustWest@lemm.ee 1 points 8 hours ago (1 children)

That is on every AI page already, at least more or less.

But that supposes that the user actually reads it and is capable of some critical thinking in the first place.

People should be thinking "this is not real" to EVERYTHING they see online, AI or not. An educated populace would know this.

Regulation will not help. It will turn into what IS happening right now: all AI chats must be recorded and kept. And then soon it will be "give us your ID to use the internet and AI." There is no good place to regulate it.

The only regulation that I could stand is this one: if you make an AI on public data, your AI is public domain and the models are given back to the people.

[–] MountingSuspicion@reddthat.com 1 points 8 hours ago (1 children)

It literally is not. ChatGPT has a blank page (a la the Google homepage) that says "What can I help you with?" and the input field says "Ask anything". If it said "Use this text field to play pretend" it would be at least a little better.

Thinking everything you see online is fake is bad advice. Being skeptical is important, but the internet isn't all just fake.

There is a good place to regulate it: at the input and output level. It already is regulated there; it has guardrails already. Public-data AI may be more ethical, but it is not going to solve the issue. The issue is the way people are using AI and the output it produces. It seems like you might not be wholly familiar with this subject.
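For what "regulated at the input and output level" means in practice, here is a minimal sketch (the blocklist and `call_model` are hypothetical placeholders, not any vendor's real moderation API):

```python
# Sketch: screen the prompt before it reaches the model, and screen the reply
# before it reaches the user. Real guardrails use trained classifiers rather
# than a keyword list; this only shows where the two checks sit.

BLOCKED_TOPICS = {"example banned topic"}

def violates_policy(text: str) -> bool:
    lowered = text.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)

def guarded_chat(call_model, prompt: str) -> str:
    if violates_policy(prompt):        # input-side guardrail
        return "This request can't be processed."
    reply = call_model(prompt)
    if violates_policy(reply):         # output-side guardrail
        return "The generated response was withheld by a safety filter."
    return reply

# Stand-in model, just to show the flow:
print(guarded_chat(lambda p: "a harmless reply", "tell me something"))
```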

[–] AugustWest@lemm.ee 1 points 8 hours ago (1 children)

I am very familiar with this. And ChatGPT says right on the page that it makes mistakes and to check the answer.

Everything you see online should be considered fake. Yes. Everyone should be considered a liar. That is internet 101 from way back.

I am telling you, ask for regulations to cover the idiots and you will not get what you are looking for.

[–] MountingSuspicion@reddthat.com 1 points 8 hours ago (1 children)

My page does not say that. It's possible that in your country, which I'm guessing is different from my country seeing as you stated guns are illegal, they already have this legislation in place. That is not the case here.

[–] AugustWest@lemm.ee 1 points 7 hours ago (1 children)

A quick VPN connection to a few countries and the US, and I see it every time. I wonder why you don't?

[–] MountingSuspicion@reddthat.com 1 points 7 hours ago (1 children)

I don't see it regardless. Are you logged in? It's possible that if your account lists your country, they just set it to always appear. I'm unfortunately not sure, but having that notice would be a huge improvement imho.

[–] AugustWest@lemm.ee 1 points 7 hours ago

No not logged in.

[–] kibiz0r@midwest.social 2 points 12 hours ago (1 children)

…says the NRA, after every mass shooting

[–] AugustWest@lemm.ee 1 points 11 hours ago* (last edited 11 hours ago)

Not even in the same discussion, but that too is better handled by education, healthcare, and societal support.

Case in point: where I live guns are illegal. Just had 3 shootings in the last month. Cost of living, lack of jobs, shitty outlook for the future.... That's driving it.