[–] WolfLink@sh.itjust.works 3 points 2 hours ago
[–] a1studmuffin@aussie.zone 2 points 1 hour ago

I'd love to see these complexity results compared against humans for a laugh.

[–] dinckelman@lemmy.world 10 points 6 hours ago (1 children)

Was this not already common sense?

[–] TheLowestStone@lemmy.world 7 points 5 hours ago

It is important to always remember that the vast majority of people are stupid.

[–] dataprolet@lemmy.dbzer0.com 65 points 10 hours ago (1 children)

No shit, that's how LLMs work.

[–] MudMan@fedia.io 25 points 10 hours ago

This gets me often. You keep finding papers and studies claiming things I thought were well understood, which ends up revealing corporate hype that had passed me by.

So it turns out that letting an LLM self-prompt for a while before responding makes it a bit tighter in some ways, but not self-aware, huh? All I've learned is that this was something people were unclear about, and nothing else.
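For context, the "self-prompting" described above is roughly a chain-of-thought loop: the model is asked to write out intermediate reasoning, and that text is fed back in before the final answer is requested. A minimal sketch of the idea, assuming a hypothetical generate() wrapper around whatever completion API is in use (not any specific vendor's client):

```python
# Minimal "think before answering" sketch. `generate(prompt)` is a
# hypothetical stand-in for any text-completion call; swap in a real
# client to try it.

def generate(prompt: str) -> str:
    return "[model output for: " + prompt[:40] + "...]"  # placeholder

def answer_with_reasoning(question: str, steps: int = 3) -> str:
    scratchpad = ""
    for _ in range(steps):
        # Ask the model to extend its own reasoning before answering.
        scratchpad += generate(
            f"Question: {question}\nReasoning so far:\n{scratchpad}\n"
            "Continue the reasoning with one short step:"
        ) + "\n"
    # The final answer is still plain next-token prediction over the
    # accumulated text; the loop adds no awareness or verification.
    return generate(
        f"Question: {question}\nReasoning:\n{scratchpad}\nFinal answer:"
    )

print(answer_with_reasoning("How many r's are in 'strawberry'?"))
```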

[–] YtA4QCam2A9j7EfTgHrH@infosec.pub 10 points 8 hours ago

But Sam Altman called it a “reasoning model.” Would he lie?!

[–] givesomefucks@lemmy.world 20 points 10 hours ago (2 children)

I really don't want to give a billion dollar corporation credit for "proving" something a shit ton of people have been saying this whole time.

The only people saying this was true AI were the people who work for these companies and the investors who fell for it.

Most of the "big uses" have been literal mechanical Turks with a human pretending to be a program.

It's just that when capitalism drives science, only what the wealthy say matters, and Apple is very, very wealthy.

[–] pennomi@lemmy.world 29 points 10 hours ago (3 children)

There’s nothing wrong with scientifically proving something that’s commonly known. In fact, that’s an important duty of science, even if it’s not a glamorous one.

[–] magnetosphere@fedia.io 10 points 10 hours ago

Exactly. “Conventional wisdom” is often inaccurate, or outright incorrect.

[–] QuarterSwede@lemmy.world 0 points 5 hours ago (1 children)

You aren't wrong, but in this case nothing needs to be proven by a third party, since anyone who's been in programming recently knows how LLMs work. It's factual.

[–] pennomi@lemmy.world 3 points 5 hours ago (1 children)

LLMs are famously NOT understood, even by the scientists creating them. We’re still learning how they process information.

Moreover, we most definitely don’t know how human intelligence works, or how close/far we are to replicating it. I suspect we’ll be really disappointed by the human mind once we figure out what the fundamentals of intelligence are.

[–] QuarterSwede@lemmy.world 0 points 3 hours ago

They most definitely are understood. The basics of what they're doing don't change. Garbage in, garbage out.

[–] givesomefucks@lemmy.world -1 points 10 hours ago (1 children)

But others have been showing this for years...

You don't often hear about the 17th time an experiment reaches the same conclusion.

But like I said, people will care about this one, because capitalism drives science, so it matters more when a billion-dollar corporation says it than when countless subject matter experts do.

Investors don't listen to them, but they'll listen to Apple.

[–] floo@retrolemmy.com 10 points 10 hours ago* (last edited 10 hours ago) (1 children)

OK, then, when was the last time this was scientifically proven? By whom? Please provide citations and references.

[–] givesomefucks@lemmy.world -1 points 8 hours ago (1 children)

Just to be clear...

You want me to show you a study that shows AI needs to be trained to do something?

Because I can do that. I just realized this is the Apple community and I don't want to get into something that never ends with a fanboy.

But what would make you happy is something that shows that what AI developers spend billions of dollars, and violate all kinds of laws, in pursuit of isn't just some optional step they can skip while it still does what it does now.

Cuz that's what it sounds like you're asking for; it's just a little hard to believe.

[–] floo@retrolemmy.com 1 points 5 hours ago* (last edited 5 hours ago)

Exactly as I thought: you’re full of shit.

That explains a lot

[–] MadMadBunny@lemmy.ca 6 points 10 hours ago

People need to be told, as too many have no judgment or critical thinking anymore.

This is important. And it will help them get back to reality.

[–] ArbitraryValue@sh.itjust.works 9 points 10 hours ago* (last edited 10 hours ago) (6 children)

I'm not sure what's novel here. No one thought that modern AI could solve arbitrarily complex logic problems, or even that modern AI was particularly good at formal reasoning. I would call myself an AI optimist but I would have been surprised if the article found any result other than the one it did. (Where exactly the models fail is interesting, but the fact that they do at all isn't.) Furthermore, the distinction between reasoning and memorizing patterns in the title of this post is artificial - reasoning itself involves a great deal of pattern recognition.

[–] cheese_greater@lemmy.world 1 points 1 hour ago

I just find it shockingly good at producing bits of code that work perfectly, with all the variables and functions/methods seemingly aptly named. It's very curious.

Most CEOs and business grads think LLMs are a universal cure-all.

There were studies out last week that indicate that most Gen Alpha think LLMs are AGI. The marketing is working.

[–] Jimbabwe@lemmy.world 8 points 9 hours ago

No one thought that modern AI could solve arbitrarily complex logic problems, or even that modern AI was particularly good at formal reasoning.

Haha, except pretty much everyone in the C-suite at the company I work for.

[–] 6nk06@sh.itjust.works 5 points 9 hours ago

No one thought that modern AI could solve arbitrarily complex logic problems

Except half the threads on Hacker News and Lobsters and LinkedIn.

What's novel is that a major tech company is officially saying what they all know is true.

That Apple finds itself the only major tech player without its own LLM likely plays heavily into why it's throwing water on the LLM fire, but it's still nice to see one of them admitting the truth.

Also, reasoning is pattern recognition with context, and none of the "AI" models have contextual capability. For Claude, I refer you to Claude Plays Pokémon on Twitch. It is a dumpster fire.

[–] magnetosphere@fedia.io 6 points 10 hours ago

I don’t think the study was meant to be novel. It looks like it was only intended to provide scientific evidence about exactly where current AIs fail.

[–] xxd@discuss.tchncs.de 6 points 10 hours ago (1 children)

I'm a bit torn on this. On one hand: obviously LLMs do this, since they're essentially just huge pattern recognition and prediction machines, and basically anyone probing them with new complex problems has made that exact observation already. On the other hand: a lot of the everyday things we humans do are not that dissimilar from recognizing patterns and remembering a solution, and it feels like doing this step well is a reasonable intermediate step towards AGI, and not as hugely far off as this article makes it out to be.
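To make the "prediction machine" point concrete, here is a toy next-token generator. The bigram counts are purely illustrative and nothing like how a real LLM is trained, but the generation loop has the same shape: predict the next token from what came before, append it, repeat.

```python
import random
from collections import defaultdict

# Toy next-token predictor: count which word follows which in a tiny
# corpus, then generate by repeatedly sampling a likely continuation.
# Real LLMs use neural networks over vast corpora, but generation is
# the same loop: predict next token, append, repeat.

corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start: str, length: int = 6) -> str:
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # no known continuation; stop generating
        words, weights = zip(*followers.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the mat" (output varies)
```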

[–] ignirtoq@fedia.io 9 points 10 hours ago

The human brain is not an ordered, carefully engineered thinking machine; it's a massive hodge-podge of heuristic systems to solve a lot of different classes of problems, which makes sense when you remember it evolved over millions of years as our very distant ancestors were exposed to radically different environments and challenges.

Likewise, however AGI is built, in order to communicate with humans and solve most of the same problems, it's probably going to take an amalgamation of different algorithms, just like brains.

All of this to say, I agree memorization will probably be an integral part of that system, but it's also going to be a small part of the final system. So I also agree with the article that we're way off from AGI.