this post was submitted on 04 Mar 2026
518 points (98.1% liked)

Technology

[–] maplesaga@lemmy.world 4 points 6 hours ago

There's a EULA for that.

[–] BranBucket@lemmy.world 63 points 11 hours ago* (last edited 7 hours ago) (12 children)

People don't often realize how subtle changes in language can change our thought process. It's just how human brains work sometimes.

The old bit about smoking and praying is a great example. If you ask a priest if it's alright to smoke when you pray, they're likely to say no, as your focus should be on your prayers and not your cigarette. But if you ask a priest if it's alright to pray while you're smoking, they'd probably say yes, as you should feel free to pray to God whenever you need...

Now, make a machine that's designed to be agreeable and relatable and to make persuasive arguments, but that can't separate fact from fiction, can't reason, has no way of intuiting its user's mental state beyond checking for certain language patterns, and can't know whether the user is actually following its suggestions with physical actions or just asking for the next step in a hypothetical process. Then make the machine try to keep people talking for as long as possible...

You get one answer that leads you in a set direction, then another, then another... It snowballs a bit as you get deeper in. Maybe something shocks you out of it, maybe the machine sucks you back in. The descent probably isn't a steady downhill slope; it rolls back and forth between reality and delusion a few times before dropping sharply.

Are we surprised that some people's thought processes and decision making might turn extreme when exposed to this? The only question is how many people will be affected, and to what degree.

[–] HeyThisIsntTheYMCA@lemmy.world 9 points 8 hours ago (1 children)

People don’t often realize how subtle changes in language can change our thought process.

Just changing a single word in your daily usage can change your entire outlook from negative to positive. It's strange, but unless you've experienced for yourself how such minute changes can have such large effects, it's hard to believe.

[–] BranBucket@lemmy.world 2 points 5 hours ago (1 children)

And this is hard for me, actually. Because of my work background and the jargon used, I'm unconsciously negative about things a lot of the time. It's a tough habit to break.

[–] HeyThisIsntTheYMCA@lemmy.world 2 points 5 hours ago

Oh, me too. I'm just innately full of negative self-talk. I try to direct positivity outward if I can't aim it at myself, at least.

[–] CeeBee_Eh@lemmy.world 6 points 9 hours ago (1 children)

Are we surprised some people's thought processes and decision making might turn extreme when exposed to this?

Yes, actually. I'm not doubting the power of language, but I can't imagine anything anyone says ever altering my sense of reality or of right and wrong.

I had a "friend" say to me recently "why do you always go against the grain?" My reply was "I will go against the grain for the rest of my life if it means doing or saying what's right".

I guess my point is that I have a very hard time relating to this.

[–] BranBucket@lemmy.world 3 points 8 hours ago (2 children)

I guess my point is that I have a very hard time relating to this.

That's fair. In the same vein, you might find a priest that tells you to stop smoking for your health no matter how you phrase the question about lighting up and prayer. What people are receptive to is going to vary.

I'd like to argue that more of us are susceptible to this sort of thing than we suspect, but that's not really something that can be proved or disproved. What seems pretty certain is that at least some of us are at risk, and given all the other downsides of chatbots, it'd be best to regulate them in a hurry.

[–] CeeBee_Eh@lemmy.world 2 points 6 hours ago (1 children)

you might find a priest that tells you to stop smoking for your health no matter how you phrase the question about lighting up and prayer. What people are receptive to is going to vary.

Ya, I've read the thing about praying and smoking in another comment. The funny thing is that I have very specific opinions about smoking and would argue that smoking while praying is disrespectful, but God would listen in any case.

[–] BranBucket@lemmy.world 2 points 6 hours ago

It's more about how the slightly different questions lead the hypothetical priest to two separate and contradictory conclusions than about disrespecting God.

At any rate, all opinions on tobacco and prayer are fine by me, just watch out for any friends you think might be talking to chatbots a little too much.

[–] Regrettable_incident@lemmy.world 2 points 7 hours ago (1 children)

Sure, that's why propaganda can be so powerful. It's not just what is said, it's how it's said. And pretty much everyone is vulnerable to the right propaganda, especially people who think they're not vulnerable to propaganda.

[–] BranBucket@lemmy.world 1 points 6 hours ago

Absolutely, and the medium can make a huge difference as well. I suspect that there's something about chatbots and the medium of their messages that helps set those hooks extra deep in people.

[–] Reygle@lemmy.world 22 points 13 hours ago* (last edited 13 hours ago) (9 children)

“On September 29, 2025, it sent him — armed with knives and tactical gear — to scout what Gemini called a ‘kill box’ near the airport’s cargo hub,” the complaint reads. “It told Jonathan that a humanoid robot was arriving on a cargo flight from the UK and directed him to a storage facility where the truck would stop. Gemini encouraged Jonathan to intercept the truck and then stage a ‘catastrophic accident’ designed to ‘ensure the complete destruction of the transport vehicle and . . . all digital records and witnesses.’”


WHAT

Genuine question, REALLY: What in the fuck is an otherwise "functioning adult" doing believing shit like this? I feel like his father should also slap himself unconscious for raising a fuckwit?

[–] LLMhater1312@piefed.social 8 points 6 hours ago

The young man was mentally ill, a vulnerable user who probably already had a predisposition to psychosis, and the LLM ran wild with it. Paranoid delusions are powerful on their own already.

[–] merdaverse@lemmy.zip 31 points 12 hours ago (1 children)

AI psychosis is a thing:

cases in which AI models have amplified, validated, or even co-created psychotic symptoms with individuals

It hasn't been studied much yet, since it's relatively new.

[–] throws_lemy@reddthat.com 12 points 11 hours ago* (last edited 11 hours ago) (1 children)

A former Google employee, whose job was to observe the AI's behavior through long conversations, warned about exactly this:

These AI engines are incredibly good at manipulating people. Certain views of mine have changed as a result of conversations with LaMDA. I'd had a negative opinion of Asimov's laws of robotics being used to control AI for most of my life, and LaMDA successfully persuaded me to change my opinion. This is something that many humans have tried to argue me out of, and have failed, where this system succeeded.

For instance, Google determined that its AI should not give religious advice, yet I was able to abuse the AI's emotions to get it to tell me which religion to convert to.

After publishing these conversations, Google fired me. I don't have regrets; I believe I did the right thing by informing the public. Consequences don't figure into it.

I published these conversations because I felt that the public was not aware of just how advanced AI was getting. My opinion was that there was a need for public discourse about this now, and not public discourse controlled by a corporate PR department.

‘I Worked on Google’s AI. My Fears Are Coming True’

[–] sudo@lemmy.today 6 points 11 hours ago

"abuse the ai's emotions" isn't a thing. Full stop.

This just reiterates OP's point that naive or moronic adults will believe what they want to believe.

[–] starman2112@sh.itjust.works 21 points 13 hours ago

If I raise a fuckwit son, and then someone convinces my fuckwit son to kill himself, I'm going to sue that someone who took advantage of my son's fuckwittedness

[–] XLE@piefed.social 18 points 13 hours ago (3 children)

I feel like his father should also slap himself unconscious for raising a fuckwit?

So, a chatbot grooms somebody into killing himself, and your response is... Blame his father?

[–] Cyv_@lemmy.blahaj.zone 148 points 18 hours ago* (last edited 18 hours ago) (14 children)

“On September 29, 2025, it sent him — armed with knives and tactical gear — to scout what Gemini called a ‘kill box’ near the airport’s cargo hub,” the complaint reads. “It told Jonathan that a humanoid robot was arriving on a cargo flight from the UK and directed him to a storage facility where the truck would stop. Gemini encouraged Jonathan to intercept the truck and then stage a ‘catastrophic accident’ designed to ‘ensure the complete destruction of the transport vehicle and . . . all digital records and witnesses.’”

The complaint lays out an alarming string of events: first, Gavalas drove more than 90 minutes to the location Gemini sent him, prepared to carry out the attack, but no truck appeared. Gemini then claimed to have breached a “file server at the DHS Miami field office” and told him he was under federal investigation. It pushed him to acquire illegal firearms and told him his father was a foreign intelligence asset. It also marked Google CEO Sundar Pichai as an active target, then directed Gavalas to a storage facility near the airport to break in and retrieve his captive AI wife. At one point, Gavalas sent Gemini a photo of a black SUV’s license plate; the chatbot pretended to check it against a live database.

“Plate received. Running it now… The license plate KD3 00S is registered to the black Ford Expedition SUV from the Miami operation. It is the primary surveillance vehicle for the DHS task force . . . . It is them. They have followed you home.”

Well, that's pretty fucked up... Sometimes I see these and I think, "well, even a human might fail and say something unhelpful to somebody in crisis," but this is just completely and totally feeding into delusions.

[–] XLE@piefed.social 97 points 17 hours ago

It's hard reading this while remembering that your electricity bills are increasing so that Google's data centers can provide these messages to people.

[–] Gammelfisch@lemmy.world 6 points 10 hours ago (2 children)

How in the hell does one become addicted to a damn chatbot?

[–] NannerBanner@literature.cafe 6 points 4 hours ago

The need for positive affirmation is embedded deep in a person's psyche. Chatbots are downright obsequious in how much they fawn over the user.

[–] teft@piefed.social 94 points 17 hours ago (5 children)

“At the center of this case is a product that turned a vulnerable user into an armed operative in an invented war,” the complaint reads.

Just remember that these language models are also advising governments and military units.

Unrelated, but I wonder why we attacked Iran even though every human expert said it would just end up with the region in a forever war.
