this post was submitted on 15 Feb 2026
683 points (99.9% liked)

Fuck AI

5765 readers
1883 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago

link to archived Reddit thread; original post removed/deleted

[–] excral@feddit.org 142 points 5 hours ago (7 children)

I've said it time and time again: AIs aren't trained to produce correct answers, but seemingly correct answers. That's an important distinction and exactly what makes AIs so dangerous to use. You will typically ask the AI about something you yourself are not an expert on, so you can't easily verify the answer. But it seems plausible so you assume it to be correct.

[–] cecilkorik@piefed.ca 1 points 1 minute ago

They are designed to convince people. That's all they do. True or false, real or fake, doesn't matter, as long as it's convincing. They're like the ultimate, idealized sociopath and con artist. We are being conned by software designed to con people.

[–] glance@lemmy.world 2 points 28 minutes ago

Even worse is that over time, the seemingly correct answers will drift further away from actually correct answers. In the best case, it's because people come to expect the wrong answers, as that's all they've been exposed to. Worse cases would be the answers skewing toward whatever specific end the AI maker wants people to believe.

[–] resipsaloquitur@lemmy.world 1 points 16 minutes ago

Plausible confabulation machines.

[–] 0x0f@piefed.social 9 points 1 hour ago* (last edited 1 hour ago) (2 children)

My own advice for people starting to use AI is to use it for things you know very well. Using it for things you do not know well will always be problematic.

[–] jj4211@lemmy.world 2 points 20 minutes ago

The problem is that we've had a culture in which people who don't know things very well control the purse strings relevant to those things.

So we have executives who don't know their work or customers at all and just try to bullshit, while their people frantically repair the damage the executive does in order to preserve their own jobs. Then the executives see bullshit-generating platforms, recognize a kindred spirit, and set a goal of replacing those dumb employees with a more "executive"-like entity that can also generate reports and code directly. No talking back, no explaining that the request needs clarification or that the data doesn't support their decision, just a "yes, and..." result agreeing with whatever dumbass request they thought would be correct and simple.

Finally, no one talking back to them and making their life difficult and casting doubt on their competency. With the biggest billionaires telling them this is the right way to go, as long as they keep sending money their way.

[–] resipsaloquitur@lemmy.world 1 points 13 minutes ago

The problem is, every time you use it, you become more passive. More passive means less alert to problems.

Look at all the accidents involving "safety attendants" in self-driving cars. Every minute they let AI take the wheel, they become more complacent. Maaaybe I'll sneak a peek at my phone. Well, haven't gotten into an accident in a month, I'll watch a video. In the corner of my vision. Hah, that was good, gotta leave a commen — BANG!

[–] dkppunk@piefed.social 3 points 1 hour ago

AIs aren't trained to produce correct answers, but seemingly correct answers

I prefer to say “algorithmically common” instead of “seemingly correct” but otherwise agree with you.

[–] pankuleczkapl@lemmy.dbzer0.com 21 points 3 hours ago (1 children)

Thankfully, AI is bad at maths for exactly this reason. You don't have to be an expert on a very specific topic to be able to verify a proof and - spoiler alert - most of the proofs ChatGPT 5 has given me are plain incorrect, despite OpenSlop's claims that it is vastly superior to previous models.

[–] jj4211@lemmy.world 1 points 12 minutes ago

I've been through the cycle of the AI companies repeatedly saying "now it's perfect," only admitting it was complete trash when they release the next iteration and claim "yeah, it was broken, we admit, but now it's perfect," so many times now...

Problem being, there's a massive marketing effort to gaslight everyone, so if I point it out in any vaguely significant context, I'm "just not keeping up" and must only have dealt with the shitty ChatGPT 5.1, not the more perfect 5.2. In my company they are all about the Anthropic models, so there it is instead Opus 4.5 versus 4.6. Even demonstrating the limitations by trying to work with 4.6 gives Anthropic money, and at best I earn an "oh, those will probably be fixed in 4.7 or 5 or whatever."

Outsiders are used to traditional software that has mistakes, but those are straightforward to address, so close-but-imperfect software can hit the mark in updates. That LLMs don't work that way doesn't make sense to them. They use the same version-number scheme, after all, so expectations should be similar.

[–] TrackShovel@lemmy.today 6 points 2 hours ago (1 children)

I use it to summarize stuff sometimes, and I honestly spend almost as much time checking that it's accurate as I would if I had just read and summarized it myself.

It is useful for 'What does this contain?' so I can see if I need to read something. Or rewording something I have made a pig's ear out of.

I wouldn't trust it for anything important.

The most important thing, if you do use AI, is to not ask leading questions. Keep them simple and direct.

[–] snooggums@piefed.world 3 points 1 hour ago (2 children)

It is useful for ‘What does this contain?’ so I can see if I need to read something. Or rewording something I have made a pig’s ear out of.

Skimming and scanning texts is a skill that achieves the same goal more quickly than using an unreliable bullshit generator.

[–] jj4211@lemmy.world 1 points 5 minutes ago

Depending on the material, the LLM can be faster. I have used an LLM to extract viable search terms to then go and read the material myself.

I never trust the summary, but it frequently gives me clues as to which keywords could take me to the right area of a source material: Internet articles that stretch brief content into a tedious mess, or documentation that is 99% things I already know, when what I need is buried in the 1%.

I was searching for a certain type of utility, and traditional Internet searches were flooded with shitware that didn't meet my criteria; the LLM successfully zeroed in on just the perfect GitHub project.

Then, as a reminder to never trust the results, I queried how to make it do a certain thing, and it mentioned a command option with an odd name that, had it worked, would have done the opposite of what I asked for. And not only would it have been the opposite, no such option existed.

[–] TrackShovel@lemmy.today 1 points 41 minutes ago

Lol. Your advice: learn to read, noob

My work is technically dense and I read all day. When I'm mentally exhausted, it's sometimes nice to see, with a 10-second upload, whether it's worth the effort to dig deeper. That's all I'm getting at.