this post was submitted on 15 Feb 2026
931 points (99.9% liked)

Fuck AI

5765 readers
2093 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
link to archived Reddit thread; original post removed/deleted

[–] cronenthal@discuss.tchncs.de 69 points 10 hours ago (6 children)

I somehow hope this is made up, because doing this without checking and finding the obvious errors is insane.

[–] joostjakob@lemmy.world 5 points 3 hours ago (1 children)

Having worked in departments providing data all my career, I'm not surprised in the slightest that people do not care in any way about where the numbers they got come from.

[–] wizardbeard@lemmy.dbzer0.com 2 points 2 hours ago

Baseline level of trust in co-worker competence, combined with either too much workload to go over everything with a fine-toothed comb, or too much laziness to bother.

Presented by F, slide deck created by E, based on conclusions made by D, from data formatted to look good to them by C, from work that they asked B to do, which was ultimately done by A, the low man on the totem pole.

All it takes is for one person in that chain to be considered trustworthy for every level above to treat the info as trustworthy by default.

[–] rozodru@piefed.world 26 points 7 hours ago (1 children)

As someone who has to deal with LLMs/AI daily in my work in order to fix the messes they create, this tracks.

AI's sole purpose is to provide you with a positive solution. That's it. That positive solution doesn't even need to be accurate, or even exist. It's built to present a positive "right" solution without taking the steps to get to that "right" solution, so the majority of the time that solution is going to be a hallucination.

You see it all the time. You can ask it something tech-related and, to get to that positive right solution, it'll hallucinate libraries that don't exist, or programs that don't do what it claims they do. Because logically, to the LLM, this is the positive right solution, WITHOUT taking any steps to confirm that the solution even exists.

So in the case of OP's post, I can see it happening. They told the LLM they wanted analytics for 3 months, and rather than take the steps to get to an accurate solution, it skipped those steps and decided to provide a positive one.

Don't use AI/LLMs for your day-to-day problem solving; you're wasting your time. OpenAI, Anthropic, Google, etc. have all built these things to provide you with "positive" solutions so you'll keep using them. They just hope you're not savvy enough to call out their LLMs when they're clearly and frequently wrong.

[–] jj4211@lemmy.world 19 points 6 hours ago* (last edited 6 hours ago) (1 children)

Probably the skepticism is around someone actually trusting the LLM this hard rather than the LLM doing it this badly. To that I will add that based on my experience with LLM enthusiasts, I believe that too.

I have talked to multiple people who recognize the hallucination problem, but think they have solved it because they are good "prompt engineers". They always include a sentence like "Do not hallucinate" and think that works.

The gaslighting from the LLM companies is really bad.

[–] cronenthal@discuss.tchncs.de 5 points 4 hours ago (1 children)

"Prompt engineering" is the astrology of the LLM world.

[–] wizardbeard@lemmy.dbzer0.com 0 points 2 hours ago

There are ways to get more relevant info (when using terms that have different meanings depending on context), to reduce the needless ass-kissing, and to help ensure you get responses in formats more useful to you. But being able to provide it context is not some magic fix for the underlying problems of how this tech is constructed and its limitations. It will never be trustworthy.

[–] Quacksalber@sh.itjust.works 54 points 9 hours ago (1 children)
[–] Rothe@piefed.social 1 points 3 hours ago

It is a thing that happens, but this particular instance probably didn't, since it is just a Reddit post.

[–] HaraldvonBlauzahn@feddit.org 24 points 8 hours ago

Use of AI in companies would not save any time if you were checking each result.

[–] fizzle@quokk.au 9 points 9 hours ago (2 children)

Yeah.

Kinda surprised there isn't already a term for submitting / presenting AI slop without reviewing and confirming it.

[–] whotookkarl@lemmy.dbzer0.com 27 points 8 hours ago

Negligence and fraud come to mind

[–] hitmyspot@aussie.zone 7 points 7 hours ago

Slop flop seems like it would work. He’s flopped the slop. That slop was flopped out without checking.