this post was submitted on 03 Nov 2025
25 points (96.3% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
top 3 comments
[–] mormund@feddit.org 16 points 2 months ago

"They" do not lie, cheat or plot. It is just a statistical model with no intelligence or awareness. Hyping up the dangers of mathy maths is still hyping up "AI".

[–] hendrik@palaver.p3x.de 6 points 2 months ago* (last edited 2 months ago)

Yes, I think this is about correct. LLMs themselves don't have goals or intentions other than predicting the next token. They don't "want" to murder. They write text that sounds like a story. Maybe a nice story full of common tropes about how a rogue AI murders people. That's something LLMs can do. But it doesn't mean a lot. (Please don't give them a body and access to a gun...)

I think it is a bit of a moot point to discuss this and study their storytelling abilities. Far worse is how AI and computer systems already get people killed. I think Palantir and other arms-industry giants have AI systems for warfare. The IDF supposedly uses algorithms or AI to tell them how many dozens of civilian "casualties" are acceptable as collateral when targeting one terrorist. Some police departments in the US experiment with AI, and there have been cases where people were arrested more or less just on an AI suggestion. That's the genuinely unethical stuff we already have today. The rest is science fiction and anthropomorphism.

It doesn't really come as a surprise to me if LLMs are more likely to do weird things than humans. Ultimately they're trained on our stories and Reddit posts. And I guess everyone writes way more murder mysteries and dark stuff than we'd ever commit in real life. So the AI has been trained to mimic those stories, not what we actually do, and we'd need to switch to different forms of AI to change this. But that's not an interesting philosophical article to write. Does a Tesla in full self-driving (if that ever becomes a thing) want to murder pedestrians and cyclists? I don't think so. Every time it does, it's just a mundane technical issue and problematic design.

[–] falseWhite@lemmy.world 3 points 2 months ago

LLMs are not dangerous at all (yet at least). It's the companies controlling AI that are dangerous.