this post was submitted on 08 Jun 2025
824 points (95.4% liked)

Technology

LOOK MAA I AM ON FRONT PAGE

(page 6) 50 comments
[–] crystalmerchant@lemmy.world 3 points 2 days ago (1 children)

I mean... Is that not reasoning, I guess? It's what my brain does: recognizes patterns and makes split-second decisions.

[–] NostraDavid@programming.dev -2 points 1 day ago (3 children)

OK, and? A car doesn't run like a horse either, yet they are still very useful.

I'm fine with the distinction between human reasoning and LLM "reasoning".

[–] Naich@lemmings.world 3 points 2 days ago

So they have worked out that LLMs do what they were programmed to do in the way that they were programmed? Shocking.

[–] MangoCats@feddit.it 0 points 1 day ago (2 children)

It's not just the memorization of patterns that matters, it's the recall of appropriate patterns on demand. Call it what you will, even if AI is just a better librarian for search work, that's value - that's the new Google.
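The "recall of appropriate patterns on demand" idea can be illustrated with the simplest possible librarian: an inverted index that maps words to the documents containing them. This is a toy sketch, not how any LLM actually works; the documents and function names are made up for illustration.

```python
from collections import defaultdict

# Toy corpus standing in for "memorized patterns" (illustrative data).
docs = {
    1: "transformers predict the next token",
    2: "horses and cars both provide transport",
    3: "search engines rank documents by relevance",
}

# Build the inverted index: each word maps to the set of doc ids containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        index[word].add(doc_id)

def recall(query: str) -> set[int]:
    # "Recall on demand": return ids of docs containing every query word.
    word_sets = [index[w] for w in query.split()]
    return set.intersection(*word_sets) if word_sets else set()

print(recall("search documents"))  # → {3}
```

Real retrieval systems replace the exact-word match with embeddings and ranking, but the value proposition in the comment is the same: not the stored patterns themselves, but fetching the right one for the query.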

[–] 1rre@discuss.tchncs.de 3 points 2 days ago (5 children)

The difference between reasoning models and normal models is that reasoning models run in two steps. To oversimplify it a little: they first prompt "how would you go about responding to this?", then prompt "write the response".

It's still predicting the most likely thing to come next, but the difference is that it gives the chance for the model to write the most likely instructions to follow for the task, then the most likely result of following the instructions - both of which are much more conformant to patterns than a single jump from prompt to response.
