this post was submitted on 16 Jun 2025
330 points (98.0% liked)
Fuck AI
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
founded 1 year ago
you are viewing a single comment's thread
Did they ask an LLM how LLMs work? Because that shit's fucking farcical. They're not "traversing" anything, bud. You get 17 different versions because each model is making that shit up on the fly.
Nah see, they read thousands of pages in like an hour. That's why. They just don't need to anymore because they're so intelligent and do it the smart way with like models and shit to compress it into a half-page summary that is clearly just as useful.
Seriously, that's what they would say.
They don't actually understand what LLMs do either. They just think people that do are smart so they press buttons and type prompts and think that's as good as the software engineer that actually developed the LLMs.
Seriously. They think they are the same as the people that develop the source code for their webui prompt. And most of society doesn't understand that difference so they get away with it.
It's the equivalent of the dude who trades shitcoins thinking he understands crypto like the guy committing all of the code to actually run it.
(Or worse they clone a repo and follow a tutorial to change a config file and make their own shitcoins)
I really think some parts of our tech world need to be made LESS user friendly. Not more.
It's people at the peak of the Dunning-Kruger curve sharing their "wisdom" with the rest of us.
I assumed this was a given.
There are models designed to read documents and provide summaries; that part is actually realistic. And transforming text (such as by producing a summary) is actually something LLMs are better at than the conversational question answering that's getting all the hype these days.
Of course stuffing an entire book in there is going to require a massive context length and would be damn expensive, especially if multiplied by 17. And I doubt it'd be done in a minute.
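To put the "damn expensive" part in rough numbers, here's a back-of-envelope sketch. Every figure in it is an illustrative assumption (book length, tokens-per-word ratio, per-token price), not a real provider's quote:

```python
# Back-of-envelope cost of stuffing an entire book into an LLM's context
# and doing it 17 times, once per model. All numbers are assumptions.

BOOK_WORDS = 100_000                 # assumed length of a typical novel
TOKENS_PER_WORD = 1.3                # rough English tokenization ratio
PRICE_PER_1M_INPUT_TOKENS = 3.00     # assumed USD price; varies widely by provider
NUM_MODELS = 17                      # the "17 different versions" from the thread

input_tokens = BOOK_WORDS * TOKENS_PER_WORD
cost_one_pass = input_tokens / 1_000_000 * PRICE_PER_1M_INPUT_TOKENS
total_cost = cost_one_pass * NUM_MODELS

print(f"~{input_tokens:,.0f} input tokens per pass")
print(f"~${cost_one_pass:.2f} per model, ~${total_cost:.2f} across {NUM_MODELS} models")
```

Even with these made-up numbers the point stands: a whole book is on the order of a hundred thousand tokens, which already strains or exceeds many models' context windows, and multiplying every pass by 17 scales the bill accordingly.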
And there's still the hallucination issue, especially with everything then getting filtered through another LLM.
So that guy is full of shit but at least he managed to mention one reasonable capability of neural nets. Surely that must be because of the 30+ IQ points ChatGPT has added to his brain...