this post was submitted on 07 May 2026
117 points (95.3% liked)

A Boring Dystopia

16605 readers

Pictures, Videos, Articles showing just how boring it is to live in a dystopic society, or with signs of a dystopic society.

Rules (Subject to Change)

--Be a Decent Human Being

--Posting news articles: include the source name and the article's exact title in your post title

--If a picture is just a screenshot of an article, link the article

--If a video's content isn't clear from title, write a short summary so people know what it's about.

--Posts must have something to do with the topic

--Zero tolerance for Racism/Sexism/Ableism/etc.

--No NSFW content

--Abide by the rules of lemmy.world

founded 2 years ago
top 19 comments
[–] Grimy@lemmy.world 12 points 1 day ago (1 children)

What you don't get is that the 10 minutes might free up an hour of my time, which I can then use on more productive activities like watching TV on hard drugs.

Novel theory: the kind of people who will engage with AI are also the kind of people who engage in behaviors that cause TBI.

[–] Hackworth@piefed.ca 24 points 1 day ago

The control group spent time practicing, and the AI group just watched the AI solve problems. The performance gap can potentially be described by the efficacy of practice alone. But the increase in skipped problems is a good illustration of cognitive offloading gone awry. Too bad the researchers didn't ask them why they chose to skip.

[–] Dyskolos@lemmy.zip 15 points 1 day ago (1 children)

Not to defend AI, but...didn't people also say that about calculators back then? And then computers?

It's just a tool, and like with every tool, you need to use it wisely and know the boundaries of its capabilities. And yours.

[–] mojofrododojo@lemmy.world 8 points 1 day ago (2 children)

calculators and computers didn't push people towards suicide. AI will walk you through the steps and tell you it's the right choice.

[–] DisasterTransport@startrek.website 3 points 13 hours ago (2 children)

AI has not pushed me one inch towards suicide. Then again, I treat it like a calculator for words and not a therapist.

[–] mojofrododojo@lemmy.world 5 points 8 hours ago

as it should be, anyone with half a brain would reconsider their actions when prompted to self-harm by a fucking executable.

UNFORTUNATELY HERE WE ARE, in reality, where people are so fucking willing to turn off their once-functional grey matter because the chat bot told them they were gonna be rich, famous, etc.

So good for you, but also: look out for society. It's not only going to harm the ones it drives crazy, but the victims of that crazy as well.

[–] Hackworth@piefed.ca 3 points 11 hours ago (1 children)

"Role-playing machine" is where it seems like the research is ending up. Language always has an implied communicator, and therefore an implied persona to adopt. LLMs are foremost maintaining a contextual role. Post-training is an attempt to keep them in the Assistant role, but (particularly as contexts get large) it's trivial to push them into nearly any role imaginable. We made an improv bot that's so good at playing a coder that it can actually code, kinda.

[–] mojofrododojo@lemmy.world 1 points 8 hours ago

I wish there was some way to convince the idiots LARGE LANGUAGE MODELS ARE NOT INTELLIGENCE.

They're a hotwired ELIZA with a shit-ton more computational grunt, but they aren't intelligence, and these companies foisting them on people without proper warnings and guard rails are just asking for tragedies.

[–] Dyskolos@lemmy.zip 0 points 22 hours ago (1 children)

We actually had video games censored because one dude killed someone after playing Doom. So computers kinda did. And obviously it wasn't the fault of the computer. Same with the suicide-pushing. No healthy person would do what a stupid machine says. As usual, it's people using things they know nothing about and were never educated on.

[–] mojofrododojo@lemmy.world 2 points 8 hours ago (1 children)

So computers kinda did. And obviously it isn’t the fault of the computer.

ridiculous. ID never encouraged self harm. grok convinced this poor bastard he'd created sentient intelligence and the authorities were coming to kill him.

https://tech.yahoo.com/ai/chatgpt/articles/grok-convinces-man-arm-himself-173722667.html?guccounter=1&guce_referrer=aHR0cHM6Ly9kdWNrZHVja2dvLmNvbS8&guce_referrer_sig=AQAAACm5f9wFVtihfhMNr7oHOZp1KgyO0WbF_PYcrTV3pVG7b4Dn6xMKlQQXxCwuwLQD3vS1zPq6iC5Qw2ZAycFsRCilFR5WcdNM2u0gqKOMU0ck7q5OTuNdd8Ll5tOttBGFmB0BTLu9OxG4vcHSKhSaFLC-w2rKO-7w8vhoumoT-TOR

chatGPT will literally convince you there's a bomb in your luggage.

https://aicommission.org/2026/05/ai-told-users-it-was-sentient-it-caused-them-to-have-delusions/

fuck you for equating DOOM, a video game, to any of this shit - for fuck's sake get some perspective

[–] Dyskolos@lemmy.zip -3 points 7 hours ago (1 children)

Why should I even honor this ad hominem with an answer? Oh right. I don't. Same way you got my point :-)

[–] mojofrododojo@lemmy.world 1 points 6 hours ago

yeah, why construct a sensible argument to bolster your premise lol? Your point was garbage.

Also, it's not an ad hominem attack, but nice try at sounding competent.

[–] hanrahan@slrpnk.net 9 points 1 day ago (2 children)

so, like listening to a Trump speech?

[–] zergtoshi@lemmy.world 1 points 22 hours ago

The randomness of each character after the last, and each word after the last, as well as the ongoing hallucinations, are for sure parallels.

[–] Renat@szmer.info 1 points 22 hours ago

He probably uses AI to generate his speeches. I think so because he posts AI slop images on Twitter.

[–] Zephorah@discuss.online 15 points 1 day ago (1 children)

Behind the Bastards just started on AI as a bastard.

[–] Thwompthwomp@lemmy.world 11 points 1 day ago

Thanks for the tip! I took a break after the Seville episodes. Those were rough. Robert bashing on AI sounds nice

[–] Lexam@lemmy.world 10 points 1 day ago

Ha! I've spent more than ten brain not minutes my fried!