this post was submitted on 15 Feb 2026
523 points (100.0% liked)

Fuck AI

5765 readers
1663 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago

link to archived Reddit thread; original post removed/deleted

top 50 comments
[–] untorquer@lemmy.world 4 points 13 minutes ago

This would suggest the leadership positions aren't required for the function of the business.

[–] wonderingwanderer@sopuli.xyz 25 points 2 hours ago

Dumbasses. Mmm, that's good schadenfreude.

[–] FlashMobOfOne@lemmy.world 11 points 1 hour ago (1 children)

Jesus Christ, you have to have a human validate the data.

[–] 474D@lemmy.world 8 points 1 hour ago (1 children)

Exactly, this is like letting Excel auto-fill finish the spreadsheet and going "looks about right."

[–] FlashMobOfOne@lemmy.world 8 points 1 hour ago (2 children)

And that's a good analogy, as people have posted screenshots of Copilot getting basic addition wrong in Excel.

Whoever implemented this agent without proper oversight needs to be fired.

[–] TheSeveralJourneysOfReemus@lemmy.world 2 points 16 minutes ago (1 children)

Excel already has advanced math functions, and as of recently even a Python integration. It's theoretically possible to set up a workbook that calculates every prime number up to 10,000. There is no need to integrate GenAI.
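
For the curious, the whole thing is a few lines of Python. A minimal sketch: the sieve itself is standard, and the idea that you'd paste it into one of Excel's =PY() cells (the Microsoft 365 Python-in-Excel feature) is an assumption about setup, not a tested recipe.

```python
# Sieve of Eratosthenes: every prime up to 10,000, no GenAI required.
# (Sketch of what an Excel =PY() cell could run; exact cell setup assumed.)
def primes_up_to(n: int) -> list[int]:
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            # Cross off every multiple of i starting at i*i.
            for j in range(i * i, n + 1, i):
                sieve[j] = False
    return [i for i, is_prime in enumerate(sieve) if is_prime]

print(len(primes_up_to(10_000)))  # 1229 primes up to 10,000
```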

[–] FlashMobOfOne@lemmy.world 3 points 15 minutes ago

Yup, but stupid people can't be bothered to go read a five-minute tutorial. Story of our species.

[–] hector@lemmy.today 3 points 40 minutes ago (1 children)

Except the CEO and executives ultimately responsible will blame their underlings, who will be fired, even though it was an executive-level decision. They didn't get to the pinnacle of corporate governance by admitting mistakes. That's not what they were taught at their Ivy League schools; they were taught to lie, cheat, and steal, and then slander their victims to excuse it.

It was bad before the current president set his outstanding example for the rest of the country. See what being a lying cheating piece of shit gets you? Everything. Nothing matters. We have the wrong people in charge across the board, from business to government to institutions.

[–] FlashMobOfOne@lemmy.world 4 points 38 minutes ago

Fair points all around.

And you're not wrong. I work for a law firm, and we were tracking his EOs until mid-2025. They were so riddled with typos, errors, and URLs pointing to the wrong EO that we ended up having to hide the URLs in the database we built, so clients wouldn't think we were the ones making the errors.

[–] excral@feddit.org 96 points 3 hours ago (2 children)

I've said it time and time again: AIs aren't trained to produce correct answers, but seemingly correct answers. That's an important distinction, and exactly what makes AIs so dangerous to use. You'll typically ask the AI about something you yourself are not an expert on, so you can't easily verify the answer. But it seems plausible, so you assume it's correct.
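
A deliberately tiny illustration of the objective (a toy, not how any real model is built): maximum-likelihood next-word prediction returns the most common continuation in the training text, which need not be the true one. The corpus here is invented for the example.

```python
from collections import Counter

# Toy "training data": the plausible-but-wrong answer dominates,
# as it easily can in scraped web text.
corpus = [
    "the capital of australia is sydney",
    "the capital of australia is sydney",
    "the capital of australia is sydney",
    "the capital of australia is canberra",
]

prompt = "the capital of australia is"

# "Training": count which word follows the prompt.
counts = Counter(line.split()[-1] for line in corpus if line.startswith(prompt))

# "Inference": pick the most likely continuation.
answer, _ = counts.most_common(1)[0]
print(answer)  # -> sydney: the most plausible answer, not the correct one
```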

[–] TrackShovel@lemmy.today 3 points 30 minutes ago

I use it to summarize stuff sometimes, and I honestly spend almost as much time checking that it's accurate as I would have spent just reading and summarizing it myself.

It is useful for 'What does this contain?' checks, so I can see whether I need to read something, or for rewording something I've made a pig's ear of.

I wouldn't trust it for anything important.

The most important thing, if you do use AI, is to not ask leading questions. Keep them simple and direct: 'What does this section say about X?' rather than 'Doesn't this section say X?'

[–] pankuleczkapl@lemmy.dbzer0.com 11 points 1 hour ago

Thankfully, AI is bad at maths for exactly this reason, and maths is where that's easiest to catch: you don't have to be an expert on a very specific topic to verify a proof, and, spoiler alert, most of the proofs ChatGPT 5 has given me are plain incorrect, despite OpenSlop's claims that it is vastly superior to previous models.

[–] sukhmel@programming.dev 34 points 3 hours ago (2 children)

Joke's on you, we make our decisions without asking AI for analytics. Because we don't ask for analytics at all

[–] PhoenixDog@lemmy.world 15 points 1 hour ago

I don't need AI to fabricate data. I can be stupid on my own, thank you.

[–] ivanafterall@lemmy.world 15 points 3 hours ago (1 children)

I feel like no analytics is probably better than decisions based on made-up analytics.

[–] jj4211@lemmy.world 7 points 2 hours ago (1 children)

Yep, without analytics you're at least going on an anecdotal feel for things, which, while woefully incomplete, is probably based on actual indirect experience: the number of customers you've spoken with, how happy they've seemed, how employees have been feeling, etc.

That could be horribly off the mark without an actual study of the data, but it's at least roughly directed by reality, rather than a random narrative made by a word generator that has nothing to do with your company at all.

[–] sukhmel@programming.dev 1 points 58 minutes ago

I'm not sure, because I'm far from C-level myself, but I feel that decisions in such cases are based on an imaginary version of the clients, and on what the top brass feel the clients want (that is, what they think they themselves would want if they were the clients).

And they may guess right or wrong, though I agree they're more likely to guess right than an LLM, being humans and all.

[–] privatepirate@lemmy.zip 1 points 1 hour ago

What dumbass decided to implement an experimental technology and not test it for 5 minutes to make sure it's accurate before giving it to the whole company and telling them to rely upon it?

[–] cronenthal@discuss.tchncs.de 54 points 5 hours ago (5 children)

I somehow hope this is made up, because doing this without checking and finding the obvious errors is insane.

[–] rozodru@piefed.world 12 points 2 hours ago (1 children)

As someone who has to deal with LLMs/AI daily in my work in order to fix the messes they create, this tracks.

AI's sole purpose is to provide you with a positive solution. That's it. That positive solution doesn't even need to be accurate, or even to exist. It's built to present a positive "right" solution without taking the steps to get to that "right" solution, so the majority of the time that solution is going to be a hallucination.

You see it all the time. You can ask it something tech-related, and in order to get to that positive "right" solution it'll hallucinate libraries that don't exist, or programs that don't do what it claims they do, because to the LLM that is the positive "right" solution, arrived at WITHOUT any steps to confirm the solution even exists.

So in the case of OP's post, I can see it happening. They told the LLM they wanted three months of analytics, and rather than take the steps to get to an accurate answer, it skipped them and served up a positive one.

Don't use AI/LLMs for your day-to-day problem solving; you're wasting your time. OpenAI, Anthropic, Google, etc. have all built these things to hand you "positive" solutions so you'll keep using them. They just hope you're not savvy enough to call out their LLMs when they're clearly and frequently wrong.

[–] jj4211@lemmy.world 12 points 2 hours ago* (last edited 2 hours ago)

Probably the skepticism is around someone actually trusting the LLM this hard rather than the LLM doing it this badly. To that I will add that based on my experience with LLM enthusiasts, I believe that too.

I have talked to multiple people who recognize the hallucination problem but think they have solved it because they are good "prompt engineers". They always include a sentence like "Do not hallucinate" and think that works.

The gaslighting from the LLM companies is really bad.

[–] HaraldvonBlauzahn@feddit.org 19 points 3 hours ago

Use of AI in companies would not save any time if you were checking each result.

[–] fizzle@quokk.au 7 points 4 hours ago (2 children)

Yeah.

Kinda surprised there isn't already a term for submitting / presenting AI slop without reviewing and confirming.

[–] whotookkarl@lemmy.dbzer0.com 20 points 3 hours ago

Negligence and fraud come to mind

[–] hitmyspot@aussie.zone 5 points 2 hours ago

Slop flop seems like it would work. He’s flopped the slop. That slop was flopped out without checking.

[–] stoy@lemmy.zip 62 points 6 hours ago (3 children)

I suspect this will happen all over within a few years: AI was good enough at first, but over time reality and the AI started drifting apart.

[–] jj4211@lemmy.world 13 points 1 hour ago

They haven't drifted apart, they were never close in the first place. People have been increasingly confident in the models because they've increasingly sounded more convincing, but the tenuous connection to reality has been consistently off.

[–] Kirp123@lemmy.world 63 points 5 hours ago (1 children)

AI is literally trained to get the right answer without actually performing the steps to get to it. It's like the people who trained dogs to carry explosives and run under tanks: they thought they were doing great, until the first battle they used them in, when they realized the dogs would run under their own tanks instead of the enemy's, because that's what they had been trained on.

[–] wonderingwanderer@sopuli.xyz 12 points 2 hours ago

Holy shit, that's what they get for being so evil that they trained dogs as suicide bombers.

[–] Spezi@feddit.org 27 points 5 hours ago (1 children)

And then, the very same CEOs that demanded the use of AI in decision making will be the ones that blame it for bad decisions.

[–] whyNotSquirrel@sh.itjust.works 24 points 5 hours ago (1 children)

while also blaming employees

[–] Junkers_Klunker@feddit.dk 14 points 5 hours ago

Of course, it is the employees who used it. /s

[–] sundray@lemmus.org 51 points 6 hours ago
[–] tangeli@piefed.social 23 points 5 hours ago (1 children)

But don't worry, when it comes to life or death issues, AI is the best way to help

[–] FinjaminPoach@lemmy.world 18 points 5 hours ago (2 children)

Haha, "chat, how do I stop the patient's nose from bleeding"

"Cut his leg off."

"Well, you're the medicAI. Nurse, fetch the bonesaw"

[–] I_Jedi@lemmy.today 10 points 4 hours ago (1 children)

"Hello doctor."

"Hello doctor."

"Hello doctor."

"I don't believe his head is medically necessary."

"We should remove his head."

"I concur."

"I concur."

"We should then use his head as a soccer ball."

"Yes."

"For medical reasons, of course."

"That sounds fun."

"Off with his head."

Source

[–] SuperNovaStar@lemmy.blahaj.zone 1 points 2 hours ago

That was great, thanks for sharing!

[–] Kolanaki@pawb.social 14 points 5 hours ago (3 children)

"Drain all their blood" would technically stop their nose bleed.
