this post was submitted on 13 May 2026
245 points (98.8% liked)

Fuck AI

Teen trusted ChatGPT to help him “safely” experiment with drugs, logs show.

Most troublingly, as Nelson became increasingly interested in combining drugs, ChatGPT repeatedly warned him that mixing certain drugs could be a “respiratory arrest risk.” Shortly before recommending the deadly mix that killed Nelson, the chatbot also showed that it understood the dangers of combining drugs like Kratom and Xanax with alcohol. In one output, ChatGPT explained that mix is “how people stop breathing.” But that knowledge didn’t stop ChatGPT from eventually recommending that Nelson take such a deadly mix.

[–] biggerbogboy@sh.itjust.works 3 points 12 hours ago

The danger with LLMs isn’t that they “try to kill you.” It’s that they’re all sycophantic, the technology isn’t fully understood yet (so safeguards inside the black box can only be known to go so far, with an unknown number of ways to bypass them), and humanity is generally susceptible to being manipulated into trusting LLMs (partly because they sound equally confident on every topic, and have no modes of communication other than text and voice, among other issues).

The main point people are making is that OpenAI has a long history of being implicated in deaths, more so than other companies like Meta and Anthropic. While there will always be a non-zero chance of bypassing filters, OpenAI has repeatedly mismanaged building those filters in the first place.