this post was submitted on 16 Jun 2025
322 points (97.9% liked)

Fuck AI

3114 readers

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 1 year ago

Source (Via Xcancel)

[–] groucho@lemmy.sdf.org 5 points 1 hour ago

Maybe we don't need 30 remedial IQ points from a magic hallucination box?

[–] ZombiFrancis@sh.itjust.works 1 points 1 hour ago

my overseer agent

Welp. That's all I need!

[–] Bravo@eviltoast.org 12 points 4 hours ago

you read books and eat vegetables like a loser

my daddy lets me play nintendo 64 and eat cotton candy

we are not the same

[–] RememberTheApollo_@lemmy.world 20 points 5 hours ago

“I used many words to ask the AI to tell me a story using unverified sources to give me the answer I want and have no desire to fact check.”

GIGO.

[–] kryptonianCodeMonkey@lemmy.world 21 points 5 hours ago* (last edited 4 hours ago) (1 children)

Imagine thinking "I outsource all of my thinking to machines, machines that are infamous for completely hallucinating information out of the aether or pulling from sources that are blatantly fabrications. And due to this veil of technology, this black box that just spits out data with no way to tell where it came from, and my unwillingness to put in my own research efforts to verify anything, I will never have any way to tell if the information is just completely wrong. And yet I will claim this to be my personal knowledge, regurgitate this information with full confidence and attach my personal name and reputation to its veracity regardless, and be subject to the consequences when someone with actual knowledge fact checks me," is a clever take. Imagine thinking that taking the easy way out, the lazy way, the manipulative way that gets others to do your work for you, is the virtuous path. Modern day Tom Sawyers, I swear. Sorry, AI bros, have an AI tell you who Tom Sawyer is so you can understand the insult.

[–] joyjoy@lemmy.zip 5 points 4 hours ago

Obviously it's the fact checkers who are wrong /s

[–] nthavoc@lemmy.today 17 points 6 hours ago

After all that long description, AI tells you eating rocks is ok.

[–] Tartas1995@discuss.tchncs.de 5 points 5 hours ago

I have read books in which certain words get redefined to make the communication more precise and clear while keeping it less verbose. I don't think an AI summary will reliably introduce me to the definition on page 100 of a book that spent the previous 99 pages setting up the definitions required to understand it.

But I could be wrong.

[–] iAvicenna@lemmy.world 11 points 6 hours ago

Oh no, not the reading! Good thing we had AI to create AI and didn't have to depend on all those computer scientists and engineers whose only skill is reading stuff.

[–] lowered_lifted@lemmy.blahaj.zone 22 points 8 hours ago (1 children)

while you were studying books, he studied a cup of coffee. TBH I can spend an hour both reading and drinking coffee at the same time idk why it's got to be its own thing.

[–] ironhydroxide@sh.itjust.works 2 points 4 hours ago* (last edited 4 hours ago)

Look at this guy over here, bragging about multitasking. Next he'll tell us he can drink coffee and write multiple prompts in an hour. /s

[–] phoenixz@lemmy.ca 2 points 4 hours ago

Ignoring all the obvious problems with AI, this shows another issue as well.

Reading books is beautiful. It makes you disappear into a world, immerses you, lets your head fantasise about what that world looks like; you go on a long vacation.

You lose all that when you stop using your own brain and outsource all that beauty to a datacenter.

[–] NigelFrobisher@aussie.zone 19 points 8 hours ago

This is the most Butlerian Jihad thing I've ever read. They should replace whatever Terminator-lite slop Brian Herbert wrote with this screengrab and call it Dune Book Zero.

[–] leraje@lemmy.blahaj.zone 24 points 9 hours ago (1 children)

You're right OOP, we are not the same. I have the full context, processing time, an enjoyable reading experience and a framework to understand the book in question and its wider relevance. You have a set of bullet points, a lot of which will be wrong anyway, that you won't be able to talk about when asked on the mind-numbing men's rights/crypto podcast you no doubt have.

[–] supersquirrel@sopuli.xyz 5 points 5 hours ago* (last edited 5 hours ago) (1 children)

spittakes coffee all over keyboard

I just spent the last 57 minutes drinking that coffee, I was almost done too, thanks a lot.

[–] Deathray5@lemmynsfw.com 1 points 2 hours ago

Did you know that botanically speaking coffee beans are the same as milk and apples and you shouldn't cry over spilt milk

[–] karashta@fedia.io 36 points 10 hours ago (1 children)

Imagine being proud of wasting that time drinking coffee instead of reading and understanding for yourself...

Then posting that you are proud of relying on hallucinating, made up slop.

Lmfao.

[–] TonyTonyChopper@mander.xyz 5 points 5 hours ago

They also imply that 2+58 minutes is equal to 2 hours

[–] ideonek@piefed.social 30 points 12 hours ago* (last edited 2 hours ago) (1 children)

Without the knowledge, you don't even know what precise information you need.

[–] shalafi@lemmy.world 3 points 2 hours ago

When I started learning SQL Server, I was so ignorant I couldn't even search for what I needed.

[–] SpaceNoodle@lemmy.world 103 points 15 hours ago (3 children)

2 minutes + 58 minutes = 2 hours

Bro must have asked the LLM to do the math for him

[–] d00ery@lemmy.world 3 points 4 hours ago

Impressed that he can think of the information he needs in 2 minutes - why even bother researching if you already know what you need ...

Seriously though, reading and understanding generally just leaves me with more, very relevant, questions and some answers.

[–] pulsewidth@lemmy.world 14 points 9 hours ago (1 children)

The additional hour might be the time they have to work so that they can pay for the LLM access.

Because that is another aspect of what LLMs really are, another Silicon Valley rapid-scale venture capital money-pit service hoping that by the time they've dominated the market and spent trillions they can turn around and squeeze their users hard.

The only trouble with fighting this with logic is that the market they're attempting to wipe out is people's ability to assess data and think critically.

[–] PP_BOY_@lemmy.world 4 points 6 hours ago

Indeed. Folks right now don't understand that their queries are being 99.9% subsidized by trillions in VC money hoping to dominate a market. Tech tale as old as time, and people are falling for it hook, line, and sinker.

[–] Brainsploosh@lemmy.world 28 points 14 hours ago

Might be that it takes them an hour to read the summary

[–] Gullible@sh.itjust.works 115 points 15 hours ago* (last edited 15 hours ago) (3 children)

Two hours to read a book? How long has it been since he touched a piece of adult physical literature?

[–] Wrufieotnak@feddit.org 8 points 7 hours ago

And not THAT kind of adult literature.

[–] HenryBenry@piefed.social 36 points 14 hours ago

ChatGPT, please tell me if Spot does indeed run.

[–] TheBat@lemmy.world 6 points 12 hours ago (1 children)
[–] Almacca@aussie.zone 6 points 7 hours ago

Welp, that's gonna fuck up my search algorithm for a while.

"Chuck Tingle". :D

[–] some_guy@lemmy.sdf.org 60 points 14 hours ago (3 children)

They think this is impressive.

I read books because I want knowledge and understanding. You get bite-sized bits of information. We are not the same.

[–] brendansimms@lemmy.world 1 points 2 hours ago

for a large portion of the population, "if it doesn't make money, then it is worthless" applies to EVERYTHING.

[–] Rancor_Tangerine@lemmy.world 9 points 7 hours ago (2 children)

They don't value intelligence and think everyone is just as likely to be accurate as the LLM. Their distrust of academics and research makes them think that their first assumptions or guesses are more correct than anything established. That's how they shrug off vaccine evidence and believe news without verifying anything.

Whatever makes their ego feel better must be the truth.

[–] some_guy@lemmy.sdf.org 3 points 5 hours ago

You really nailed it here.

[–] tarknassus@lemmy.world 5 points 7 hours ago

They're the next generation of that guy who is 'always right' and 'knows everything', yet in reality they are often wrong and won't admit it, and they really only know the most superficial things about any given subject.

[–] TwitchingCheese@lemmy.world 22 points 14 hours ago (1 children)
[–] LogicalFallacy@lemm.ee 19 points 13 hours ago

"hallucinations"

Orwell's Animal Farm is a novella about animal husbandry . . .

[–] ech@lemm.ee 75 points 15 hours ago* (last edited 15 hours ago) (3 children)

Did they ask an LLM how LLMs work? Because that shit's fucking farcical. They're not "traversing" anything, bud. You get 17 different versions because each model is making that shit up on the fly.

[–] AlexanderTheDead@lemmy.world 1 points 4 hours ago

I assumed this was a given.

[–] Jesus_666@lemmy.world 9 points 12 hours ago

There are models designed to read documents and provide summaries; that part is actually realistic. And transforming text (such as by providing a summary) is actually something LLMs are better at than the conversational question answering that's getting all the hype these days.

Of course stuffing an entire book in there is going to require a massive context length and would be damn expensive, especially if multiplied by 17. And I doubt it'd be done in a minute.

And there's still the hallucination issue, especially with everything then getting filtered through another LLM.

So that guy is full of shit but at least he managed to mention one reasonable capability of neural nets. Surely that must be because of the 30+ IQ points ChatGPT has added to his brain...

[–] LeninOnAPrayer@lemm.ee 26 points 15 hours ago* (last edited 15 hours ago) (1 children)

Nah, see, they read thousands of pages in like an hour. That's why. They just don't need to anymore because they're so intelligent and do it the smart way, with like models and shit to compress it into a half-page summary that is clearly just as useful.

Seriously, that's what they would say.

They don't actually understand what LLMs do either. They just think people that do are smart so they press buttons and type prompts and think that's as good as the software engineer that actually developed the LLMs.

Seriously. They think they are the same as the people that develop the source code for their webui prompt. And most of society doesn't understand that difference so they get away with it.

It's the equivalent of the dude that trades shitcoins thinking he understands crypto like the guy committing all of the code to actually run it.

(Or worse they clone a repo and follow a tutorial to change a config file and make their own shitcoins)

I really think some parts of our tech world need to be made LESS user friendly. Not more.

[–] Aceticon@lemmy.dbzer0.com 3 points 6 hours ago

It's people at the peak of the Dunning-Kruger curve sharing their "wisdom" with the rest of us.

I've seen this at work.

We installed a new water sampler and they sent an official installer to set up and commission the device. The guy couldn't answer a damn question about the product without ChatGPT. When I asked a relatively complex question that the bot couldn't answer (that was only the third question), I decided I'd had enough and spent an hour reading the manual of the thing. Turns out the bot was making up the answers, and I learned how to commission the device without the "official support".

[–] towelie@lemm.ee -5 points 4 hours ago* (last edited 4 hours ago) (1 children)

This community is hilarious. Hating on tech oligarchs and the tech hype bro archetype is one thing (like this post), but it's another thing entirely to oppose a technological advancement like machine learning so staunchly that you revel in a community built around hating it. If your anger is directed at your fellow man for using LLMs to summarize their book reports, then fine, join the club. Otherwise, enjoy angrily shaking your fist at the world. I think it was Aurelius who spoke on the ignorance of opposing the nature of things.

[–] leftzero@lemmynsfw.com 4 points 3 hours ago

Machine learning would be an interesting advancement indeed.

Sadly all resources are focused on LLMs, which are incapable of learning once trained (and don't really learn anything during training that WinRAR wouldn't "learn" when compressing a file... much less, in fact, since LLMs are extremely lossy compressors), and which are an evident dead end when it comes to AI.

LLMs are barely better than Eliza at producing text (and equally useless at producing information), and several orders of magnitude costlier.

They are extremely harmful to society, culture, human rights, research (especially AI research) and, in the mid to long term, the economy.

They are a scam, a criminal misuse of money, time, and resources, and the sooner the bubble bursts the sooner we can start recovering from the damage they've caused and the sooner we can get back to researching proper AI (though at this point I'm fairly certain it's already too late; they've caused too much damage and global warming will kill us before we have a chance to invent something that might actually help).

[–] lath@lemmy.world 14 points 12 hours ago

"I ran this Convo through an LLM and it said i should fire and replace you with an LLM for increased productivity and efficiency.

Oh wait, hold on. I read that wrong, it said I should set you on fire...

Well, LLMs can't be wrong so.."

[–] PP_BOY_@lemmy.world 46 points 15 hours ago* (last edited 15 hours ago)

This is the same "I'll do my own research, thanks" crowd btw

spoonfeed me harder Silicon Valley VC daddy

[–] supersquirrel@sopuli.xyz 24 points 15 hours ago

2 mins? Sam Altman can spiritually ascend at least 10 divorced dads in that epoch of time.

This is business baby.
