this post was submitted on 08 Jun 2025
647 points (95.9% liked)

Technology

[–] melsaskca@lemmy.ca 1 points 6 minutes ago

It's all "one instruction at a time" regardless of high processor speeds and words like "intelligent" being bandied about. "Reason" discussions should fall into the same query bucket as "sentience".

[–] Harbinger01173430@lemmy.world 4 points 1 hour ago

XD so, like a regular school/university student that just wants to get passing grades?

[–] minoscopede@lemmy.world 32 points 4 hours ago* (last edited 4 hours ago) (5 children)

I see a lot of misunderstandings in the comments 🫤

This is a pretty important finding for researchers, and it's not obvious by any means. This finding is not showing a problem with LLMs' abilities in general. The issue they discovered is specifically for so-called "reasoning models" that iterate on their answer before replying. It might indicate that the training process is not sufficient for true reasoning.

Most reasoning models are not incentivized to think correctly, and are only rewarded based on their final answer. This research might indicate that's a flaw that needs to be corrected before models can actually reason.

When given explicit instructions to follow, the models still failed, because they had not seen similar instructions before.

This paper shows that there is no reasoning in LLMs at all, just extended pattern matching.
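The incentive problem described above (rewarding only the final answer, never the intermediate steps) can be sketched in a few lines. This is a toy illustration, not anything from the paper; the function and inputs are made up.

```python
# Hypothetical sketch of outcome-only reward: the model's intermediate
# reasoning is ignored; only the final answer is scored.
def outcome_reward(chain_of_thought: list[str], final_answer: str,
                   correct_answer: str) -> float:
    # The steps in chain_of_thought receive no signal at all.
    return 1.0 if final_answer == correct_answer else 0.0

# A nonsense derivation that lands on the right answer scores exactly
# the same as a sound one:
good = outcome_reward(["2+2=4", "4*3=12"], "12", "12")
bad = outcome_reward(["2+2=5", "vibes"], "12", "12")
assert good == bad == 1.0
```

Under such a reward, memorizing answer patterns and genuinely reasoning toward them are indistinguishable to the training process.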

[–] REDACTED@infosec.pub 3 points 1 hour ago* (last edited 1 hour ago)

What confuses me is that we seemingly keep pushing back what counts as reasoning. Not too long ago, some smart algorithms or a bunch of instructions for software (if/then) was officially, by definition, software/computer reasoning. Logically, CPUs do it all the time. Suddenly, when AI is doing that with pattern recognition, memory and even more advanced algorithms, it's no longer reasoning? I feel like at this point a more relevant question is "What exactly is reasoning?". Before you answer, understand that most humans seemingly live by pattern recognition, not reasoning.

https://en.wikipedia.org/wiki/Reasoning_system
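The classic "if/then" reasoning systems the comment refers to are typically forward-chaining rule engines. A minimal sketch, with illustrative facts and rules of my own invention:

```python
# Forward chaining: repeatedly fire any rule whose conditions are all
# known facts, until no new conclusions can be derived.
rules = [
    ({"rainy"}, "wet_ground"),
    ({"wet_ground"}, "slippery"),
]

def forward_chain(facts: set[str]) -> set[str]:
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain({"rainy"}))  # derives wet_ground, then slippery
```

By the older definition, this chaining of rules already counted as machine reasoning, which is the commenter's point.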

[–] Tobberone@lemm.ee 1 points 1 hour ago

What statistical method do you base that claim on? The results presented match expectations given that Markov chains are still the basis of inference. What magic juice is added to "reasoning models" that allow them to break free of the inherent boundaries of the statistical methods they are based on?
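The statistical machinery this comment alludes to can be shown at toy scale: a bigram Markov model whose "predictions" are nothing but co-occurrence counts. The corpus is made up; real LLMs condition on far longer contexts with learned weights, but the next-token framing is the same.

```python
from collections import Counter, defaultdict

# Toy bigram model: next-word probabilities are just counts from a
# corpus; there is no mechanism here for reasoning, only statistics.
corpus = "the cat sat on the mat the cat ran".split()
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    return counts[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" — it follows "the" twice, "mat" once
```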

[–] theherk@lemmy.world 9 points 4 hours ago

Yeah these comments have the three hallmarks of Lemmy:

  • "AI is just autocomplete" mantras.
  • Apple is always synonymous with bad and dumb.
  • Rare pockets of really thoughtful comments.

Thanks for being at least the latter.

[–] Zacryon@feddit.org 4 points 4 hours ago (1 children)

Some AI researchers found it obvious as well, in the sense that they've suspected it and had some indications. But it's good to see more data on this to affirm this assessment.

[–] kreskin@lemmy.world 0 points 2 hours ago* (last edited 2 hours ago)

Lots of us who have done some time in search and relevancy early on knew ML was always largely breathless, overhyped marketing. It was endless buzzwords and misframing from the start, but it raised our salaries. Anything that execs don't understand is profitable and worth doing.

[–] Xatolos@reddthat.com 5 points 4 hours ago

So, what you're saying here is that the A in AI actually stands for artificial, and it's not really intelligent or reasoning.

Huh.

[–] FreakinSteve@lemmy.world 21 points 7 hours ago (1 children)

NOOOOOOOOO

SHIIIIIIIIIITT

SHEEERRRLOOOOOOCK

[–] 800XL@lemmy.world 1 points 6 hours ago (1 children)

Except for Siri, right? Lol

[–] Threeme2189@lemmy.world 2 points 6 hours ago

Apple Intelligence

[–] skisnow@lemmy.ca 18 points 7 hours ago

What's hilarious/sad is the response to this article over on reddit's "singularity" sub, in which all the top comments are people who've obviously never got all the way through a research paper in their lives, all trashing Apple and claiming its researchers don't understand AI or "reasoning". It's a weird cult.

[–] RampantParanoia2365@lemmy.world 19 points 8 hours ago* (last edited 5 hours ago) (1 children)

Fucking obviously. Until Data's positronic brain becomes reality, AI is not actual intelligence.

AI is not A.I. I should make that a t-shirt.

[–] JDPoZ@lemmy.world 10 points 7 hours ago (1 children)

It’s an expensive carbon spewing parrot.

[–] Threeme2189@lemmy.world 4 points 5 hours ago

It's a very resource-intensive autocomplete.

[–] communist@lemmy.frozeninferno.xyz 8 points 7 hours ago* (last edited 7 hours ago) (1 children)

I think it's important to note (I'm not an LLM, I know that phrase triggers you to assume I am) that they haven't proven this as an inherent architectural issue, which I think would be the next step to the assertion.

Do we know that they don't and are incapable of reasoning, or do we just know that for x problems they jump to memorized solutions? Is it possible to create an arrangement of weights that can genuinely reason, even if the current models don't? That's the big question that needs to be answered. It's still possible that we just haven't properly incentivized reason over memorization during training.

if someone can objectively answer "no" to that, the bubble collapses.

do we know that they don't and are incapable of reasoning.

"even when we provide the algorithm in the prompt—so that the model only needs to execute the prescribed steps—performance does not improve"

[–] Auli@lemmy.ca 15 points 11 hours ago

No shit. This isn't new.

[–] mavu@discuss.tchncs.de 54 points 15 hours ago

No way!

Statistical Language models don't reason?

But OpenAI, robots taking over!

[–] GaMEChld@lemmy.world 18 points 13 hours ago (7 children)

Most humans don't reason. They just parrot shit too. The design is very human.

[–] skisnow@lemmy.ca 6 points 7 hours ago

I hate this analogy. As a throwaway whimsical quip it'd be fine, but it's specious enough that I keep seeing it used earnestly by people who think that LLMs are in any way sentient or conscious, so it's lowered my tolerance for it as a topic even if you did intend it flippantly.

[–] joel_feila@lemmy.world 5 points 8 hours ago

That's why CEOs love them. When your job is 90% spewing BS, a machine that does that is impressive.

[–] elbarto777@lemmy.world 22 points 11 hours ago

LLMs deal with tokens. Essentially, they predict the next token in a series.

Humans do much, much, much, much, much, much, much more than that.
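The "predicting a series of tokens" loop can be sketched concretely. The vocabulary and the score table standing in for the network are entirely made up; a real model computes the scores with billions of learned weights, but the decode loop is the same shape.

```python
vocab = ["the", "cat", "sat", "on", "mat"]

# Stand-in for the neural network: given a context, return one score
# per vocabulary entry (here a hand-written lookup table).
def fake_logits(context: list[str]) -> list[float]:
    table = {
        "the": [0, 5, 0, 0, 3],
        "cat": [0, 0, 5, 0, 0],
        "sat": [0, 0, 0, 5, 0],
        "on":  [4, 0, 0, 0, 1],
    }
    return table.get(context[-1], [1, 1, 1, 1, 1])

# Greedy decoding: append the highest-scoring token, one step at a time.
def generate(context: list[str], steps: int) -> list[str]:
    out = list(context)
    for _ in range(steps):
        scores = fake_logits(out)
        out.append(vocab[scores.index(max(scores))])
    return out

print(generate(["the"], 4))  # ['the', 'cat', 'sat', 'on', 'the']
```

Whether stacking enough of these one-token-at-a-time predictions amounts to "reasoning" is exactly what the thread is arguing about.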
