this post was submitted on 16 Jun 2025
335 points (98.0% liked)

Fuck AI

3114 readers
907 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 1 year ago
Source (Via Xcancel)

[–] groucho@lemmy.sdf.org 7 points 10 hours ago

Maybe we don't need 30 remedial IQ points from a magic hallucination box?

you read books and eat vegetables like a loser

my daddy lets me play nintendo 64 and eat cotton candy

we are not the same

[–] RememberTheApollo_@lemmy.world 22 points 14 hours ago (1 children)

“I used many words to ask the AI to tell me a story using unverified sources to give me the answer I want and have no desire to fact check.”

GIGO.

[–] stabby_cicada@slrpnk.net 1 points 3 hours ago* (last edited 2 hours ago) (1 children)

I mean, how many people fact check a book? Even at the most basic level of reading the citations, finding the sources the book cited, and making sure they say what the book claims they say?

In the vast majority of cases, when we read a book, we trust the editors to fact check.

AI has no editors and generates false statements all the time because it has no ability to tell true statements from false. Which is why letting an AI summarize sources, instead of reading those sources for yourself, introduces one very large procedurally generated point of failure.

But let's not pretend the average person fact checks anything. The average person decides who they trust and relies on their trust in that person or source rather than fact checking themselves.

Which is one of the many reasons why Trump won.

[–] RememberTheApollo_@lemmy.world 1 points 2 hours ago

This is a two-part problem. The first is that LLMs are going to give you shoddy results riddled with errors. This is known. Would you pick up a book and take it as the truth if analysis of the author's work said 50% of their facts are wrong? The second part is that the asker has no intent to verify the LLM's output; they likely just want the output and be done with it. No critical thinking required. The recipient is only interested in a copy-paste way of transferring info.

If someone takes the time to actually read and process a book with the intent of absorbing and adding to their knowledge, mentally they take the time to balance what they read with what they know and hopefully cross referencing that information internally and gauging it with “that sounds right” at least, but hopefully by reading more.

These are not the same thing. Books and LLMs are not the same. Anyone can read the exact same book and offer a critical analysis. Anyone asking an LLM a question might get an entirely different response depending on minor differences in asking.

Sure, you can copy-paste from a book, but if you haven’t read it, then yeah…that’s like copy-pasting an LLM response. No intent of learning, no critical thought, etc.

[–] kryptonianCodeMonkey@lemmy.world 24 points 14 hours ago* (last edited 13 hours ago) (1 children)

Imagine thinking "I outsource all of my thinking to machines, machines that are infamous for completely hallucinating information out of the aether or pulling from sources that are blatantly fabrications. And due to this veil of technology, this black box that just spits out data with no way to tell where it came from, and my unwillingness to put in my own research efforts to verify anything, I will never have any way to tell if the information is just completely wrong. And yet I will claim this to be my personal knowledge, regurgitate this information with full confidence and attach my personal name and reputation to its veracity regardless, and be subject to the consequences when someone with actual knowledge fact checks me," is a clever take. Imagine thinking that taking the easy way out, the lazy way, the manipulative way that gets others to do your work for you, is the virtuous path. Modern day Tom Sawyers, I swear. Sorry, AI bros, have an AI tell you who Tom Sawyer is so you can understand the insult.

[–] joyjoy@lemmy.zip 5 points 13 hours ago

Obviously it's the fact checkers who are wrong /s

[–] Bravo@eviltoast.org 12 points 13 hours ago
[–] nthavoc@lemmy.today 18 points 15 hours ago

After all that long description, AI tells you eating rocks is ok.

[–] lowered_lifted@lemmy.blahaj.zone 23 points 17 hours ago (1 children)

while you were studying books, he studied a cup of coffee. TBH I can spend an hour both reading and drinking coffee at the same time idk why it's got to be its own thing.

[–] ironhydroxide@sh.itjust.works 2 points 12 hours ago* (last edited 12 hours ago)

Look at this guy over here, bragging about multitasking. Next he'll tell us he can drink coffee and write multiple prompts in an hour. /s

[–] leraje@lemmy.blahaj.zone 25 points 18 hours ago (1 children)

You're right OOP, we are not the same. I have the full context, processing time, an enjoyable reading experience and a framework to understand the book in question and its wider relevance. You have a set of bullet points, a lot of which will be wrong anyway, that you can't actually talk about when it comes up on the mind-numbing men's rights/crypto podcast you no doubt have.

[–] supersquirrel@sopuli.xyz 6 points 14 hours ago* (last edited 14 hours ago) (1 children)

spittakes coffee all over keyboard

I just spent the last 57 minutes drinking that coffee, I was almost done too, thanks a lot.

[–] Deathray5@lemmynsfw.com 2 points 11 hours ago

Did you know that botanically speaking coffee beans are the same as milk and apples and you shouldn't cry over spilt milk

[–] NigelFrobisher@aussie.zone 20 points 17 hours ago

This is the most Butlerian Jihad thing I’ve ever read. They should replace whatever Terminator-lite slop Brian Herbert wrote with this screengrab and call it Dune Book Zero.

[–] karashta@fedia.io 37 points 19 hours ago (1 children)

Imagine being proud of wasting the time drinking coffee instead of reading and understanding for yourself...

Then posting that you are proud of relying on hallucinating, made up slop.

Lmfao.

[–] TonyTonyChopper@mander.xyz 5 points 13 hours ago

They also imply that 2+58 minutes is equal to 2 hours

[–] iAvicenna@lemmy.world 11 points 15 hours ago

Oh no not the reading! Great thing we had AI to create AI and we did not have to depend on all those computer scientists and engineers whose only skill is to read stuff.

[–] ZombiFrancis@sh.itjust.works 2 points 10 hours ago

my overseer agent

Welp. That's all I need!

[–] SpaceNoodle@lemmy.world 106 points 23 hours ago (3 children)

2 minutes + 58 minutes = 2 hours

Bro must have asked the LLM to do the math for him

[–] pulsewidth@lemmy.world 14 points 18 hours ago (1 children)

The additional hour might be the time they have to work so that they can pay for the LLM access.

Because that is another aspect of what LLMs really are, another Silicon Valley rapid-scale venture capital money-pit service hoping that by the time they've dominated the market and spent trillions they can turn around and squeeze their users hard.

The only trouble with fighting this using logic is that the market they're attempting to wipe out is people's ability to assess data and think critically.

[–] PP_BOY_@lemmy.world 4 points 15 hours ago

Indeed. Folks right now don't understand that their queries are being 99.9% subsidized by trillions in VC hoping to dominate a market. A tech tale as old as time, and people are falling for it hook, line, and sinker.

[–] d00ery@lemmy.world 3 points 13 hours ago

Impressed that he can think of the information he needs in 2 minutes - why even bother researching if you already know what you need ...

Seriously though, reading and understanding generally just leaves me with more, very relevant, questions and some answers.

[–] Brainsploosh@lemmy.world 28 points 23 hours ago

Might be that it takes them an hour to read the summary

[–] Gullible@sh.itjust.works 116 points 1 day ago* (last edited 1 day ago) (4 children)

Two hours to read a book? How long has it been since he touched a piece of adult physical literature?

[–] Wrufieotnak@feddit.org 8 points 16 hours ago

And not THAT kind of adult literature.

[–] HenryBenry@piefed.social 37 points 23 hours ago

ChatGPT please tell me if spot does indeed run.

[–] Tartas1995@discuss.tchncs.de 5 points 14 hours ago

I have read books in which certain words are redefined to be more precise and clear, making the communication less verbose. I don't think an AI summary will reliably introduce me to the definition on page 100 of a book that took the previous 99 pages to set up the definitions required to understand it.

But I could be wrong.

[–] ideonek@piefed.social 31 points 20 hours ago* (last edited 11 hours ago) (1 children)

Without the knowledge, you don't even know what precise information you need.

[–] shalafi@lemmy.world 3 points 11 hours ago

When I started learning SQL Server, I was so ignorant I couldn't even search for what I needed.

[–] some_guy@lemmy.sdf.org 61 points 23 hours ago (3 children)

They think this is impressive.

I read books because I want knowledge and understanding. You get bite-sized bits of information. We are not the same.

[–] Rancor_Tangerine@lemmy.world 9 points 16 hours ago (2 children)

They don't value intelligence and think everyone is just as likely to be accurate as the LLM. Their distrust of academics and research makes them think that their first assumptions or guesses are more correct than anything established. That's how they shrug off vaccine evidence and believe news without verifying anything.

Whatever makes their ego feel better must be the truth.

[–] some_guy@lemmy.sdf.org 3 points 14 hours ago

You really nailed it here.

[–] tarknassus@lemmy.world 5 points 16 hours ago

They're the next generation of that guy who is 'always right' and 'knows everything', yet in reality they are often wrong and won't admit it, and they really only know the most superficial things about any given subject.

[–] brendansimms@lemmy.world 1 points 11 hours ago

for a large portion of the population, "if it doesn't make money, then it is worthless" applies to EVERYTHING.

[–] TwitchingCheese@lemmy.world 22 points 22 hours ago (1 children)
[–] LogicalFallacy@lemm.ee 19 points 22 hours ago

"hallucinations"

Orwell's Animal Farm is a novella about animal husbandry . . .

[–] ech@lemm.ee 76 points 1 day ago* (last edited 1 day ago) (3 children)

Did they ask an LLM how LLMs work? Because that shit's fucking farcical. They're not "traversing" anything, bud. You get 17 different versions because each model is making that shit up on the fly.

[–] LeninOnAPrayer@lemm.ee 26 points 1 day ago* (last edited 1 day ago) (1 children)

Nah see they read thousands of pages in like an hour. That's why. They just don't need to anymore because they're so intelligent and do it the smart way with like models and shit to compress it into a half a page summary that is clearly just as useful.

Seriously, that's what they would say.

They don't actually understand what LLMs do either. They just think people that do are smart so they press buttons and type prompts and think that's as good as the software engineer that actually developed the LLMs.

Seriously. They think they are the same as the people that develop the source code for their webui prompt. And most of society doesn't understand that difference so they get away with it.

It's the equivalent of the dude who trades shitcoins thinking he understands crypto like the guy committing all of the code to actually run it.

(Or worse they clone a repo and follow a tutorial to change a config file and make their own shitcoins)

I really think some parts of our tech world need to be made LESS user friendly. Not more.

[–] Aceticon@lemmy.dbzer0.com 3 points 14 hours ago

It's people at the peak of the Dunning-Kruger curve sharing their "wisdom" with the rest of us.

[–] Jesus_666@lemmy.world 9 points 20 hours ago

There are models designed to read documents and provide summaries; that part is actually realistic. And transforming text (such as by summarizing it) is actually something LLMs are better at than the conversational question answering that's getting all the hype these days.

Of course stuffing an entire book in there is going to require a massive context length and would be damn expensive, especially if multiplied by 17. And I doubt it'd be done in a minute.

And there's still the hallucination issue, especially with everything then getting filtered through another LLM.

So that guy is full of shit but at least he managed to mention one reasonable capability of neural nets. Surely that must be because of the 30+ IQ points ChatGPT has added to his brain...

[–] phoenixz@lemmy.ca 3 points 13 hours ago

Ignoring all the obvious problems with AI, this shows another issue as well.

Reading books is beautiful. A book makes you disappear into a world, immerses you, makes your head fantasise about what that world looks like; you go on a long vacation.

You lose all that when you stop using your own brain and outsource all that beauty to a datacenter.

I've seen this at work.

We installed a new water sampler, and the company sent an official installer to set up and commission the device. The guy couldn't answer a damn question about the product without ChatGPT. When I asked a relatively complex question that the bot couldn't answer (that was by the third question), I decided I'd had enough and spent an hour reading the manual. Turns out the bot had been making up the answers, and I learned how to commission the device without the "official support".

[–] PP_BOY_@lemmy.world 46 points 1 day ago* (last edited 1 day ago)

This is the same "I'll do my own research, thanks" crowd btw

spoonfeed me harder Silicon Valley VC daddy

[–] lath@lemmy.world 14 points 20 hours ago

"I ran this Convo through an LLM and it said i should fire and replace you with an LLM for increased productivity and efficiency.

Oh wait, hold on. I read that wrong, it said I should set you on fire...

Well, LLMs can't be wrong so.."

[–] supersquirrel@sopuli.xyz 24 points 1 day ago

2 mins? Sam Altman can spiritually ascend at least 10 divorced dads in that epoch of time.

This is business baby.

[–] rem26_art@fedia.io 14 points 1 day ago

bro needs 58 minutes to drink coffee
