this post was submitted on 11 Feb 2026
173 points (95.3% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

all 33 comments
[–] curiousaur@reddthat.com 10 points 22 hours ago

“I have a psychiatrist I see for other purposes.”

[–] ntd_quiet@lemmy.dbzer0.com 49 points 1 day ago (1 children)

This article is largely quoting from Tanya Chen's Slate article: https://slate.com/technology/2026/02/ai-psychosis-support-groups-discord.html

The Futurism author credits Slate but adds very little.

[–] phoenixz@lemmy.ca 3 points 20 hours ago

futurism.com is a horrible source

[–] TheReturnOfPEB@reddthat.com 70 points 1 day ago* (last edited 1 day ago) (2 children)

That guy got speed-run-ed out of his life in four months using A.I. like a walkthrough guide in a CRPG.

I'm grateful for many things. A youth without regular school shootings. Knowing how to occupy my life without the internet, thanks to a childhood with room to roam. Paved streets and vaccines.

But I'm also grateful for the fact that A.I. became prevalent after I had learned to wrestle my fears of missing out and my addictions.

[–] corsicanguppy@lemmy.ca 1 points 20 hours ago

got speed-run-ed

It's still just the 'run' root, so it'd be "got speed-run" the same as it'd be "got run out of town".

[–] sem@piefed.blahaj.zone 5 points 1 day ago (2 children)

CRPG? Computer role-playing game, like Baldur's Gate?

[–] funkless_eck@sh.itjust.works 2 points 21 hours ago

I guess they meant as opposed to IRL games like D&D, Monster of the Week, Pathfinder, Cthulhu

Craig's Raging Penis Gun

[–] stoy@lemmy.zip 55 points 1 day ago (2 children)

I am glad I realized early on just how bad AI is. I have sometimes had it help me write some simple HTML/CSS code, but it is mostly annoying to use.

It makes me lose track of what does what in my code, and it also takes away my initiative to change the code myself.

When it comes to general information, it mostly generates decent responses, but it keeps getting enough things wrong that you just can't trust it.

Combine that with the fact that AIs are trained to always accommodate the user and almost never tell the user a straight-up "No": they keep engaging the user, they are never angry, and they focus on reinforcement and validation of whatever arguments are given to them.

I feel dumber when I have used an AI

[–] Jankatarch@lemmy.world 19 points 1 day ago (1 children)

I am starting to appreciate all the times Stack Overflow people told me my question itself was wrong and I was stupid.

Well, the first part mainly.

[–] Rekorse@sh.itjust.works 8 points 1 day ago

Human feedback is very important; it's a social thing and we depend on it.

[–] Jesus_666@lemmy.world 6 points 1 day ago (1 children)

There are things LLMs are genuinely useful for.

Transforming text is one. To give examples, a friend of mine works in advertising and they routinely ask an LLM to turn a spec sheet into a draft for ad copy; another person I know works as a translator and also uses DeepL as a first pass to take care of routine work. Yeah, you can get mentally lazy doing that, but it can be useful for taking care of boilerplate stuff.

Another one is fuzzy data lookup. I occasionally use LLMs to search for things where I don't know how to turn them into concise search terms. A vague description can be enough to get an LLM onto the right track and I can continue from there using traditional means.

Mind you, all of that should be done sparingly and with the awareness that the LLM can convincingly lie to you at any time. Nothing it returns is useful as anything but a draft that needs revision and any information must be verified. If you simply rely on its answer you will get something reasonably useful much of the time, you will get mentally lazy, and sometimes you will act on complete bullshit without knowing it.

[–] OneWomanCreamTeam@sh.itjust.works 10 points 23 hours ago (1 children)

This is a little beside the point, but even in those use cases LLMs have the fatal flaw of being obscenely resource-intensive. They require huge amounts of electricity and cooling to keep operating. Not to mention most of them are trained on stolen data.

Even when they're an effective tool for a given task, they're still not an ethical one.

[–] Jesus_666@lemmy.world 3 points 22 hours ago

That's true; I didn't touch on those points but I very much agree. (Yes, even though I occasionally use it. It's easy to ignore the implications of what you're doing for a moment.)

[–] can@sh.itjust.works 33 points 1 day ago (1 children)

That wasn’t the worst of it. At that point he had blown nearly $12,000 trying to create world-changing code. He became manic, and his concerned therapist called the cops to check in on him. He was institutionalized for nearly two weeks, and even got tangled with an investor who threatened to kill him if he didn’t come up with the goods.

What a microcosm of our current situation.

[–] Etterra@discuss.online 6 points 1 day ago (1 children)

I can't even comprehend where that much money would go in a situation like this.

[–] can@sh.itjust.works 3 points 12 hours ago

I think thousands can be lost pretty easily in a manic state.

[–] Etterra@discuss.online 8 points 1 day ago

I cannot imagine the mind of a grown-ass adult who paid for this shit.

[–] Mac@mander.xyz 12 points 1 day ago (1 children)

Is the thumbnail a picture of Kash Patel?

[–] darkdemize@sh.itjust.works 19 points 1 day ago (1 children)

Can't be. Both eyes are looking in the same direction.

[–] protist@mander.xyz 3 points 1 day ago (3 children)

First guy: “I’ve never been manic in my life. I’m not bipolar."

I'm just highly skeptical of this.

Second guy: That wasn’t the worst of it. At that point he had blown nearly $12,000 trying to create world-changing code. He became manic, and his concerned therapist called the cops to check in on him. He was institutionalized for nearly two weeks

Yeah, that sounds right. Regarding "AI psychosis," everything I've read indicates it exacerbates existing psychoses; it doesn't create them. That's not to say it can't mess with people's psychology, especially the stuff around suicide, but I think the "AI psychosis" the media portrays is not real.

[–] Mohamed@lemmy.ca 22 points 1 day ago

If it can exacerbate psychotic tendencies, then it can cause psychosis. Whether exacerbating an existing tendency counts as causing the condition is an interesting area for debate, but it's just semantics. Of course, I am also arguing semantics here.

I think the more interesting psychological question is just how much AI exacerbates psychotic tendencies, and whether AI-induced psychosis is temporary (like drug-induced psychosis often is) or permanent. I don't know anything about this topic, but I hope to hear from someone who does.

[–] cecilkorik@piefed.ca 17 points 1 day ago

I mean, I kind of agree that there's a lot of undiagnosed and underreported mental health issues in our society, and it's not surprising that highly functional people can turn out to have serious mental issues lurking just below the surface.

But there's also a sort of gatekeeping going on here: suggesting "well, as long as you're not already sort of psychotic, you don't have anything to fear from AI psychosis" is like throwing the low-key psychotic people to the wolves and basically saying they don't really matter to us because most of us aren't them. At least, we assume we aren't them. And we don't even know that for sure. We could be them.

Lots of smug people with 20/20 hindsight love to believe there are always signs, but signs aren't proof, and you don't have proof there are always signs.

[–] nimble@lemmy.blahaj.zone 11 points 1 day ago* (last edited 1 day ago)

"AI induced psychosis" is new and relatively unstudied but it has been compared to mono mania which was before the current "kaleidoscope" of modern mania. Under mono mania there is one central focus which in this case is AI. This is to say its not a completely new phenomenon.

But as far as whether AI causes psychosis or exacerbates underlying conditions, I'm not sure the distinction matters. There are more risk factors than simply being part of a "vulnerable population": lack of reality testing, missed crisis escalation, intensive use, and limited context windows compounding escalations over time.

Whatever we want to call it, there is harm being done. That's real to me.

[–] KeenFlame@feddit.nu 1 points 1 day ago

Yeah cuz he was slippin