this post was submitted on 22 Jun 2025
108 points (74.3% liked)

[–] Blue_Morpho@lemmy.world 3 points 3 weeks ago (1 children)

4o got wrecked. My AI fan friend said o3 is their reasoning model, so it means nothing. I don't agree, but I can't find proof.

Has anyone done this with o3?

[–] otacon239@lemmy.world 17 points 3 weeks ago (3 children)

It’s a fundamental limitation of how LLMs work. They simply don’t follow a set of rules the way a traditionally programmed computer/game does.

Imagine you have only long-term memory that you can’t add to. You might get a few sentences of short-term memory before you’ve forgotten the context from the beginning of the conversation.

Then add on the fact that chess is very much a forward-thinking game, and LLMs don’t stand a chance against methods that actually search ahead. It’s the classic case of “When all you have is a hammer, everything looks like a nail.” LLMs can be a great tool, but they can’t be your only tool.
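For a sense of what “searching ahead” means in practice: even a toy engine literally plays out future positions before committing to a move. A hypothetical sketch (assumes the python-chess library, `pip install chess`); real engines add pruning and far better evaluation, but the shape is the same:

```python
# Toy "forward-thinking" chess engine: plain negamax search over a
# material count. Hypothetical sketch, not a real engine.
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def evaluate(board):
    """Material balance from the side-to-move's point of view."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == board.turn else -value
    return score

def negamax(board, depth):
    """Best achievable score looking `depth` half-moves ahead."""
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    best = -10**9
    for move in board.legal_moves:
        board.push(move)                  # actually play the move...
        best = max(best, -negamax(board, depth - 1))
        board.pop()                       # ...then take it back
    return best

def best_move(board, depth=3):
    best_score, best = -10**9, None
    for move in board.legal_moves:
        board.push(move)
        score = -negamax(board, depth - 1)
        board.pop()
        if score > best_score:
            best_score, best = score, move
    return best

print(best_move(chess.Board()))  # always a *legal* move, by construction
```

That legality-by-construction is exactly what an LLM, predicting one token at a time, can’t guarantee.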

[–] 30p87@feddit.org 5 points 3 weeks ago (1 children)

Or: if it's possible to create a simple algorithm, it will always be infinitely more accurate than ML.

[–] spankmonkey@lemmy.world 9 points 3 weeks ago

That is because the algorithm has an expected output that can be tested and verified for accuracy since it works consistently every time. If there appears to be inconsistency, it is a design flaw in the algorithm.
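A hypothetical example of what I mean: for a hand-written rule the expected output is pinned down exactly, so a couple of asserts settle correctness once and for all.

```python
def knight_moves(square):
    """All squares a knight on `square` ((file, rank), each 0-7) can reach."""
    f, r = square
    jumps = [(1, 2), (2, 1), (2, -1), (1, -2),
             (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    return {(f + df, r + dr) for df, dr in jumps
            if 0 <= f + df < 8 and 0 <= r + dr < 8}

# The expected output is fixed, so verification is a plain assertion:
# the function either passes every time, or the algorithm itself is wrong.
assert knight_moves((0, 0)) == {(1, 2), (2, 1)}   # corner: only two squares
assert len(knight_moves((4, 4))) == 8             # center: all eight
```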

[–] spankmonkey@lemmy.world 2 points 3 weeks ago (2 children)

My biggest disappointment with how AI is being implemented is the inability to incorporate context-specific execution of small programs to emulate things like calculators and chess engines. Like, why does it take the hard-mode approach to literally everything? When asked to do math, why doesn't it execute something that emulates a calculator?
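Something like this would already cover the calculator case — a hypothetical sketch that routes plain arithmetic to an exact evaluator (Python's `ast` module) instead of letting the model predict digits:

```python
# Hypothetical "calculator tool": evaluate arithmetic exactly instead of
# having the model guess the digits. Only plain arithmetic is allowed.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.Pow: operator.pow, ast.USub: operator.neg}

def calculate(expression):
    """Safely evaluate an expression like '1234 * 5678'."""
    def walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp):
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("not plain arithmetic")
    return walk(ast.parse(expression, mode="eval").body)

print(calculate("1234 * 5678"))  # 7006652 -- exact, every single time
```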

[–] otacon239@lemmy.world 3 points 3 weeks ago

I’ve been waiting for them to make this improvement since they were first introduced. Any day now…

[–] Zos_Kia@lemmynsfw.com 1 points 3 weeks ago

ChatGPT definitely does that. It can write small Python programs and execute them, but it doesn't do it systematically; you have to prompt for it. It can even use chart libs to display data.
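E.g. ask it to chart some numbers and behind the scenes it writes and runs a throwaway script roughly like this (hypothetical example; assumes matplotlib):

```python
# The kind of disposable script the code-execution tool generates when
# asked to chart data. Hypothetical example with made-up sample numbers.
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
revenue = [12.1, 13.4, 11.8, 15.2, 16.9, 18.3]  # made-up sample data

plt.bar(months, revenue)
plt.ylabel("Revenue ($k)")
plt.title("Monthly revenue")
plt.savefig("revenue.png")  # the assistant then displays the saved image
```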

[–] Blue_Morpho@lemmy.world 0 points 3 weeks ago

> It’s a fundamental limitation of how LLMs work.

LLMs have been getting reasoning front ends added to them, like o3 and DeepSeek. That's why they can solve problems that plain LLMs failed at.

I found one reference to o3 being rated around 800 at chess, but I'd really like to see Atari chess vs. o3. My telling my friend how I think it would fail isn't convincing.