this post was submitted on 08 Jun 2025
797 points (95.8% liked)

Technology


LOOK MAA I AM ON FRONT PAGE

(page 4) 50 comments
[–] brsrklf@jlai.lu 46 points 1 day ago (2 children)

You know, despite not really believing LLM "intelligence" works anything like real intelligence, I kind of thought maybe being good at recognizing patterns was a way to emulate it to a point...

But that study seems to prove they're still not even good at that. At first I was wondering how hard the puzzles must have been, and then there's a bit about LLMs finishing 100-move Tower of Hanoi solutions (which they were trained on) while failing 4-move river crossings, even though the two problems are logically very similar. They also fail to apply a step-by-step solution even when it's handed to them.
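
For scale: the optimal Tower of Hanoi solution is a tiny mechanical recursion taking 2^n - 1 moves for n disks, so a ~100-move solve is only about 7 disks. A minimal Python sketch of that textbook procedure (not the paper's actual test harness):

```python
def hanoi(n, source, target, spare, moves):
    """Append the optimal move sequence for n disks onto `moves`."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear the n-1 smaller disks
    moves.append((source, target))              # move the largest disk
    hanoi(n - 1, spare, target, source, moves)  # restack the smaller disks

moves = []
hanoi(7, "A", "C", "B", moves)
print(len(moves))  # 2**7 - 1 = 127: a ~100-move solve is roughly 7 disks
```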

[–] auraithx@lemmy.dbzer0.com 38 points 1 day ago

This paper doesn't prove that LLMs aren't good at pattern recognition; it demonstrates the limits of what pattern recognition alone can achieve, especially for compositional, symbolic reasoning.

[–] technocrit@lemmy.dbzer0.com 16 points 1 day ago* (last edited 1 day ago)

Computers are awesome at "recognizing patterns" as long as the pattern is a statistical average of some possibly worthless data set. And it really helps if the computer is set up ahead of time to recognize pre-determined patterns.

[–] sev@nullterra.org 49 points 1 day ago (30 children)

They're just fancy Markov chains with the ability to link bigger and bigger token sets. They can only ever kick off processing as a response and can never initiate any line of reasoning. This, along with the fact that their working set of data can never be updated moment-to-moment, means it would be a physical impossibility for any LLM to achieve any real "reasoning" process.
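
To make the analogy concrete, here's a toy word-level Markov chain in Python. Raising `order` is the "bigger and bigger token sets" part; an LLM is, very loosely, this with learned weights instead of raw counts. A sketch of the analogy, not of how transformers actually work:

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each `order`-word context to the words observed right after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, order=2, length=20):
    """Sample a continuation. Note it can only ever respond to a seed;
    it never initiates anything, and its counts never update afterwards."""
    context = random.choice(list(chain))
    out = list(context)
    for _ in range(length):
        followers = chain.get(tuple(out[-order:]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran off the mat"
print(generate(build_chain(corpus)))
```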

[–] kescusay@lemmy.world 18 points 1 day ago (3 children)

I can envision a system where an LLM becomes one part of a reasoning AI, acting as a kind of fuzzy "dataset" that a proper neural network incorporates and reasons with, and where the LLM is kept (sort of) updated in real time via MCP servers that feed in anything new it learns.

But I don't think we're anywhere near there yet.
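
Something in this hypothetical shape, maybe. A minimal sketch where every class and function name is invented for illustration; this is not the real MCP API, just a stand-in for live tool feeds:

```python
# Hypothetical sketch of the architecture described above. Illustrative only.

class FuzzyStore:
    """An LLM treated as a soft, lossy knowledge base, not as the reasoner."""
    def __init__(self, llm, tools):
        self.llm = llm      # frozen model: recall via completion
        self.tools = tools  # MCP-style sources keeping context current

    def recall(self, query: str) -> str:
        fresh = " | ".join(tool(query) for tool in self.tools)  # live data
        return self.llm(f"{query}\nContext: {fresh}")           # fuzzy lookup

def reason(goal: str, store: FuzzyStore) -> list[str]:
    """Placeholder for the 'proper' reasoning component: it plans, then
    uses the LLM only as a lookup, and would verify results before acting."""
    plan = [f"step {i}: {goal}" for i in (1, 2)]   # toy stand-in for planning
    return [store.recall(step) for step in plan]

# Toy wiring with fakes standing in for a real model and a real tool:
fake_llm = lambda prompt: f"completion for {prompt.splitlines()[0]!r}"
fake_tool = lambda query: "fetched fact"
print(reason("book a trip", FuzzyStore(fake_llm, [fake_tool])))
```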

[–] ZILtoid1991@lemmy.world 11 points 1 day ago (1 children)

Thank you, Captain Obvious! Only those who think LLMs are like "little people in the computer" didn't know this already.

[–] TheFriar@lemm.ee 6 points 1 day ago (2 children)

Yeah, well, there are a ton of people literally falling into psychosis, led on by LLMs. So unfortunately there aren't that many people who already knew it.

[–] joel_feila@lemmy.world 3 points 22 hours ago

Dude, they made ChatGPT a little more boot-licky and now many people are convinced they're literal messiahs. All it took for them was a chatbot and a few hours of talking.

[–] NostraDavid@programming.dev -2 points 13 hours ago (3 children)

OK, and? A car doesn't run like a horse either, yet it's still very useful.

I'm fine with the distinction between human reasoning and LLM "reasoning".

[–] MangoCats@feddit.it 0 points 14 hours ago (2 children)

It's not just the memorization of patterns that matters, it's the recall of appropriate patterns on demand. Call it what you will; even if AI is just a better librarian for search work, that's value - that's the new Google.
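
The "better librarian" framing is basically ranked retrieval. A bare-bones sketch using bag-of-words cosine similarity, purely illustrative (real systems use learned embeddings, but the recall-on-demand idea is the same):

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(count * b[word] for word, count in a.items())
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def librarian(query: str, docs: list[str]) -> list[str]:
    """Rank documents by similarity to the query: recall on demand."""
    q = Counter(query.lower().split())
    return sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())),
                  reverse=True)

docs = ["tower of hanoi solver", "river crossing puzzle", "cat photos"]
print(librarian("how to solve hanoi", docs))
```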

[–] technocrit@lemmy.dbzer0.com 23 points 1 day ago* (last edited 1 day ago) (6 children)

Why would they "prove" something that's completely obvious?

The burden of proof is on the grifters who have overwhelmingly been making false claims and distorting language for decades.

[–] tauonite@lemmy.world 15 points 1 day ago

That's called science

[–] yeahiknow3@lemmings.world 23 points 1 day ago* (last edited 1 day ago) (1 children)

They’re just using the terminology that’s widespread in the field. In a sense, the paper’s purpose is to prove that this terminology is unsuitable.

[–] Mbourgon@lemmy.world 10 points 1 day ago (1 children)

Not when large swaths of people are being told to use it every day. Upper management has bought in on it.

[–] limelight79@lemmy.world 4 points 1 day ago* (last edited 1 day ago)

Yep. I'm retired now, but before retirement a month or so ago, I was working on a project that back in 2020 relied on several hundred people. The question kept coming: "Why can't AI do it?"

The people I worked with are continuing the research and putting it up against the human coders, but... there was definitely an element of "AI can do that, we won't need people next time." I sincerely hope management listens to reason. Our decisions could potentially lead to people being fired, so I think we were able to push back on "AI can make all of these decisions"... for now.

The AI people were all in; they were ready to build an interface that told the human what the AI would recommend for each item. Errrm, no, that's not how an independent test works. We had to reel them back in.

[–] reksas@sopuli.xyz 37 points 1 day ago (4 children)

does ANY model reason at all?

[–] 4am@lemm.ee 34 points 1 day ago (3 children)

No, and to make that work with the current structures we use for building AI models, we'd probably need all the collective computing power on Earth at once.

[–] BlaueHeiligenBlume@feddit.org 8 points 1 day ago (1 children)

Of course; that's obvious to anyone with basic knowledge of neural networks, no?

[–] LonstedBrowryBased@lemm.ee 12 points 1 day ago (2 children)

Yeah, of course they do, they're computers.

[–] finitebanjo@lemmy.world 20 points 1 day ago (4 children)

That's not really a valid argument for why, but yes, the models that use training data to assemble statistical models are all bullshitting. TBH idk how people can convince themselves otherwise.

[–] EncryptKeeper@lemmy.world 15 points 1 day ago (3 children)

> TBH idk how people can convince themselves otherwise.

They don't convince themselves. They're convinced by the multi-billion-dollar corporations pouring unholy amounts of money into not only the development of AI, but its marketing. Marketing designed to convince them not only that AI is something it's not, but also that anyone who says otherwise (like you) is just a luddite who's going to be "left behind".

[–] turmacar@lemmy.world 13 points 1 day ago* (last edited 1 day ago) (2 children)

I think it's because it's language.

There's a famous quote from Charles Babbage, from when he presented his difference engine (a gear-based calculator): someone asked "if you put in the wrong figures, will the correct ones be output?" and Babbage couldn't comprehend how someone could so thoroughly misunderstand that the machine is just a machine.

People are people; the main thing that's changed since the cuneiform copper customer complaint is our materials science and networking ability. For most things people interact with every day, most people just assume they work the way they appear to on the surface.

And nothing other than a person can do math problems or talk back to you. So people assume that means intelligence.

[–] leftzero@lemmynsfw.com 3 points 22 hours ago (1 children)

"if you put in the wrong figures, will the correct ones be output"

To be fair, an 1840 “computer” might be able to tell there was something wrong with the figures and ask about it or even correct them herself.

Babbage was being a bit obtuse there; people weren't familiar with computing machines yet. Computer was a job, and computers were expected to be fairly intelligent.

In fact I'd say that if anything this question shows that the questioner understood enough about the new machine to realise it was not the same as they understood a computer to be, and lacked many of their abilities, and was just looking for Babbage to confirm their suspicions.

[–] turmacar@lemmy.world 2 points 22 hours ago (1 children)

"Computer" meaning a mechanical/electro-mechanical/electrical machine wasn't used until around after WWII.

Babbag's difference/analytical engines weren't confusing because people called them a computer, they didn't.

"On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question."

  • Charles Babbage

If you give any computer, human or machine, random numbers, it will not give you "correct answers".

It's possible Babbage lacked the social skills to detect sarcasm. We also have several high-profile cases of people just trusting LLMs to file legal briefs and official government "studies" because the LLM "said it was real".

[–] AppleTea@lemmy.zip 1 points 18 hours ago (1 children)

What they mean is that before Turing, "computer" was literally a person's job description. You hand a professional a stack of calculations with some typos, and part of the job is correcting those out. When a newfangled machine comes along with the same name as the job, among the first things people are gonna ask about is where it falls short.

Like, if I made a machine called "assistant", it'd be natural for people to point out and ask about all the things a person can do that a machine just never could.

[–] finitebanjo@lemmy.world 9 points 1 day ago

I often feel like I'm surrounded by idiots, but even I can't begin to imagine what it must have felt like to be Charles Babbage explaining computers to people in 1840.

[–] surph_ninja@lemmy.world 8 points 1 day ago (38 children)

You assume humans do the opposite? We literally institutionalize humans who don't follow set patterns.

We also reward people who can memorize and regurgitate even if they don't understand what they are doing.

[–] sp3ctr4l@lemmy.dbzer0.com 17 points 1 day ago* (last edited 1 day ago) (2 children)

This has been known for years; it's the default assumption of how these models work.

You would have to prove that some kind of actual reasoning capacity has arisen as... some kind of emergent complexity phenomenon... not the other way around.

Corpos have just marketed/gaslit us/themselves so hard that they apparently forgot this.
