this post was submitted on 08 Jun 2025
104 points (92.6% liked)
Apple
you are viewing a single comment's thread
I'm not sure what's novel here. No one thought that modern AI could solve arbitrarily complex logic problems, or even that modern AI was particularly good at formal reasoning. I would call myself an AI optimist but I would have been surprised if the article found any result other than the one it did. (Where exactly the models fail is interesting, but the fact that they do at all isn't.) Furthermore, the distinction between reasoning and memorizing patterns in the title of this post is artificial - reasoning itself involves a great deal of pattern recognition.
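To make "arbitrarily complex logic problems" concrete: work in this vein typically uses a fixed-rule puzzle whose difficulty can be dialed up until the model breaks. Tower of Hanoi is the classic example, since the optimal solution for n disks is always 2^n - 1 moves, so each added disk doubles the length of the required move sequence. A minimal Python sketch (the choice of puzzle here is illustrative, not a claim about what the study actually tested):

```python
# Minimal Tower of Hanoi solver. The point: the rules fit in a few lines,
# yet the optimal solution length is 2**n - 1, so the difficulty of
# producing a correct move sequence scales exponentially with disk count.
def hanoi(n: int, source: str = "A", target: str = "C", spare: str = "B") -> list[tuple[str, str]]:
    if n == 0:
        return []
    return (
        hanoi(n - 1, source, spare, target)    # move n-1 disks out of the way
        + [(source, target)]                   # move the largest disk
        + hanoi(n - 1, spare, target, source)  # restack the n-1 disks on top
    )

print(len(hanoi(3)), len(hanoi(10)), len(hanoi(15)))  # 7, 1023, 32767
```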
I just find it shockingly good at producing bits of code that work perfectly, with all the variables and functions/methods aptly named and such. It's very curious.
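For illustration, here is a hypothetical snippet of the kind I mean: the logic is trivial, but every identifier reads like a human chose it. The function and all of its names are invented for this example, not taken from any actual model output.

```python
# Hypothetical example of the kind of output in question: trivial logic,
# but the function, parameters, and variables are all descriptively named.
def average_order_value(orders: list[dict]) -> float:
    """Return the mean order total, or 0.0 for an empty list."""
    if not orders:
        return 0.0
    total_revenue = sum(order["total"] for order in orders)
    return total_revenue / len(orders)
```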
Most CEOs and business grads think LLMs are a universal cure-all.
There were studies out last week indicating that most of Gen Alpha thinks LLMs are AGI. The marketing is working.
Haha, except pretty much everyone in the C-suite at the company I work for.
Except half the threads on Hacker News and Lobsters and LinkedIn.
I don’t think the study was meant to be novel. It looks like it was only intended to provide scientific evidence about exactly where current AIs fail.
What's novel is that a major tech company is officially saying what they all know is true.
That Apple finds itself the only major tech player without its own LLM likely plays heavily into why it's throwing water on the LLM fire, but it's still nice to see one of them admit the truth.
Also, reasoning is pattern recognition with context. None of the "AI" models have contextual capability. For Claude, I refer you to Claude Plays Pokemon on Twitch. It is a dumpster fire.