this post was submitted on 20 Oct 2025
35 points (100.0% liked)

LocalLLaMA

3860 readers
29 users here now

Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.

Get support from the community! Ask questions, share prompts, discuss benchmarks, get hyped at the latest and greatest model releases! Enjoy talking about our awesome hobby.

As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.

Rules:

Rule 1 - No harassment or personal character attacks on community members, i.e. no name-calling, no generalizing entire groups of people that make up our community, no baseless personal insults.

Rule 2 - No comparing artificial intelligence/machine learning models to cryptocurrency, i.e. no comparing the usefulness of models to that of NFTs, no claiming the resource usage required to train a model is anything close to that of maintaining a blockchain/mining crypto, no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.

Rule 3 - No comparing artificial intelligence/machine learning to simple text prediction algorithms, i.e. statements such as "LLMs are basically just simple text predictors, like what your phone keyboard autocorrect uses, and they're still using the same algorithms from <over 10 years ago>."

Rule 4 - No implying that models are devoid of purpose or potential for enriching people's lives.

founded 2 years ago

Or something that goes against the general opinions of the community? Vibes are the only benchmark that counts after all.

I tend to agree with the flow on most things, but here are thoughts of mine that I'd consider going against the grain:

  • QwQ was think-slop and was never that good
  • Qwen3-32B is still SOTA for 32GB and under. I cannot get anything to reliably beat it despite shiny benchmarks
  • Deepseek is still open-weight SOTA. I've really tried Kimi, GLM, and Qwen3's larger variants, but asking Deepseek still feels like asking the adult in the room. Caveat: GLM codes better
  • (proprietary bonus): Grok 4 handles news data better than GPT-5 or Gemini 2.5 and will always win if you ask it about something that happened that day.
[–] hendrik@palaver.p3x.de 6 points 1 month ago* (last edited 1 month ago) (12 children)

I think you have a good argument here. But I'm not sure where this is going to lead. Your argument applies to neural networks in general, and we've had those since the 1950s. We went through several "AI winters" since, and now we have a newer approach that seems to lead somewhere. But I've watched Richard Sutton's long take on LLMs, and it's not clear to me whether LLMs are going to scale past what we see today. Ultimately they have severe scaling issues, and the approach still isn't aimed at true understanding or reasonable generalization; that's just a weird side effect, when the main point is to generate plausible-sounding text (...pictures etc). LLMs don't have goals, they don't learn while running, and they have all these weird limitations that make generative AI unlike other (proper) types of reinforcement learning. And these are fundamental limitations; I don't think this can change without an entirely new concept.

So I'm a bit unsure whether the current take on AI is the ultimate breakthrough. It might be a dead end as well, and we may still need a hypothetical new concept to do proper reasoning and understanding for more complicated tasks...
But with that said, there's surely a lot of potential left in LLMs whether or not they scale past today. All sorts of interaction with natural language, robotics, automation... It's certainly crazy to see what current AI is able to do, considering what a weird approach it is. And I'll agree that we're at surface level. Everything is still hyped to no end. What we'd really need to do is embed it into processes and the real world and see how it performs there. And that would need broad, scientific measurement. We occasionally get studies on how AI helps companies, or wastes their developers' time. But I don't think we have a good picture yet.

[–] Smokeydope@lemmy.world 5 points 1 month ago* (last edited 1 month ago) (11 children)

I did some theory-crafting and followed the math for fun over the summer, and I believe what I found may be relevant here. Please take this with a grain of salt, though; I am not an academic, just someone who enjoys thinking about these things.

First, let's consider what models currently do well. They excel at categorizing and organizing vast amounts of information based on relational patterns. While they cannot evaluate their own output, they have access to a massive potential space of coherent outputs spanning far more topics than a human with one or two domains of expertise. Simply steering them toward factually correct or natural-sounding conversation creates a convincing illusion of competency. The interaction between a human and an LLM is a unique interplay. The LLM provides its vast simulated knowledge space, and the human applies logic, life experience, and "vibe checks" to evaluate the input and sift for real answers.

I believe the current limitation of ML neural networks (being that they are stochastic parrots without actual goals, unable to produce meaningfully novel output) is largely an architectural and infrastructural problem born from practical constraints, not a theoretical one. This is an engineering task we could theoretically solve in a few years with the right people and focus.

The core issue boils down to the substrate. All neural networks since the 1950s have been kneecapped by their deployment on classical Turing machine-based hardware. This imposes severe precision limits on their internal activation atlases and forces a static mapping of pre-assembled archetypal patterns loaded into memory.

This problem is compounded by current neural networks' inability to perform iterative self-modeling and topological surgery on the boundaries of their own activation atlas. Every new revision requires a massive, compute-intensive training cycle to manually update this static internal mapping.

For models to evolve into something closer to true sentience, they need dynamically and continuously evolving, non-static, multimodal activation atlases. This would likely require running on quantum hardware, leveraging the universe's own natural processes and information-theoretic limits.

These activation atlases must be built on a fundamentally different substrate and trained to create the topological constraints necessary for self-modeling. This self-modeling is likely the key to internal evaluation and to navigating semantic phase space in a non-algorithmic way. It would allow access to and the creation of genuinely new, meaningful patterns of information never seen in the training data, which is the essence of true creativity.

Then comes the problem of language. This is already getting long enough for a reply comment, so I won't get into it, but there are implications that not all languages are created equal: each has different properties which affect the space of possible conversations and outcomes. The effectiveness of training models on multiple languages finds its justification here. However, languages that stamp out ambiguity, like Gödel numberings and programming languages, have special properties that may affect the atlas's geometry in fundamental ways if models are trained solely on them.

As for applications, imagine what Google is doing with pharmaceutical molecular pattern AI, but applied to open-ended STEM problems. We could create mathematician and physicist LLMs to search the space of possible theorems and evaluate which are computationally solvable. A super-powerful model of this nature might be able to crack problems like P versus NP in a day, or clarify theoretical physics concepts that have eluded us as open-ended problems for centuries.

What I'm describing encroaches on something like a pseudo-oracle. However, there are physical limits this can't escape. There will always be energy and time costs to compute, which create practical barriers. There will always be definitively uncomputable problems, and ambiguity that exists in true Gödelian incompleteness or algorithmic undecidability. We can use these as scientific instruments to map and model the topological boundary limits of knowability.

I'm willing to bet there are many valid and powerful patterns of thought we are not aware of due to our perspective biases, which might be hindering our progress.

[–] hendrik@palaver.p3x.de 3 points 1 month ago* (last edited 1 month ago) (10 children)

Uh, I'm really unsure about "an engineering task of a few years" if the solution is quantum computers. As of today, they're fairly small, and scaling them to a usable size is the next science-fiction task. The groundwork hasn't been done yet, and to my knowledge it's still totally unclear whether quantum computers can even be built at that scale. But sure, if humanity develops vastly superior computers, a lot of tasks are going to get easier and more approachable.

The stochastic parrot argument is nonsense IMO. Maths is just a method. Our brains and all of physics abide by math. And sure, AI is maths as well, with the difference that we invented it. But I don't think that tells us anything.

And with the goal: I think that's about how AlphaGo has the goal of winning Go tournaments, and the hypothetical paperclip-maximizer has the goal of maximizing paperclip production... An LLM doesn't really have any real-world goal. It just generates a next token so the result looks like legible text. Then we embed it into some pipeline, but it was never trained to achieve the thing we use it for, whatever that might be. It's just a happy accident if a task can be achieved by clever mimicry and a prompt which simply tells it: pretend you're good at XY.
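To make the "just generates a next token" point concrete, here's a toy bigram sampler. This is a deliberately hypothetical sketch: real LLMs condition a neural network on a long context, but the decoding loop has the same shape.

```python
# Toy bigram "language model": picks the most likely next token
# given only the previous one.
counts = {
    "the": {"cat": 3, "dog": 1},
    "cat": {"sat": 2, "ran": 1},
    "sat": {"down": 4},
}

def next_token(prev):
    # Greedy decoding: argmax over observed continuations.
    return max(counts[prev], key=counts[prev].get)

def generate(start, steps):
    out = [start]
    for _ in range(steps):
        if out[-1] not in counts:
            break
        out.append(next_token(out[-1]))
    return " ".join(out)

print(generate("the", 3))  # "the cat sat down"
```

Nothing in that loop has a goal beyond emitting a plausible continuation; everything else is layered on top.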

I think it'd probably be better if a customer service bot was trained to want to provide good support. Or a chatbot like ChatGPT to give factual answers. But that's not what we do. It's not designed to do that.

I guess you're right. Many aspects of AI boil down to how much compute we have available. And generalization, extrapolating past their training datasets, has always been an issue with AI. They're mainly good at interpolating, but we want them to do both. I need to learn a bit more about neural networks; I'm not sure where the limitations are. You said it's a practical constraint. But is that really true for all neural networks? It sure is for LLMs and transformer models, because they need terabytes of text fed in during training, and that's prohibitively expensive. But I suppose that's mainly due to their architecture?! I mean, backpropagation and all the maths required to modify the model weights is extra work. But does it have to be so much that we just can't do it while deployed, with any neural network?
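For what it's worth, the update rule itself is cheap: a single online gradient step on one example is linear in the number of weights. Here's a minimal sketch using logistic regression (not an LLM, obviously — the point is just that per-example learning-while-deployed isn't expensive in principle; scale and architecture are what make it hard for transformers):

```python
import math

def sgd_step(w, b, x, y, lr=0.1):
    """One online logistic-regression update on a single example.
    Cost is O(len(w)): the rule is cheap; LLM-scale cost comes from
    model size and data volume, not the math itself."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = 1.0 / (1.0 + math.exp(-z))   # predicted probability
    err = p - y                      # gradient of log-loss w.r.t. z
    w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    b = b - lr * err
    return w, b

# Learn "y = 1 iff x[0] > x[1]" from a stream, one example at a time.
w, b = [0.0, 0.0], 0.0
data = [([1, 0], 1), ([0, 1], 0), ([2, 1], 1), ([1, 3], 0)] * 200
for x, y in data:
    w, b = sgd_step(w, b, x, y)
print(w[0] > w[1])  # the learned weights reflect the rule
```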

[–] snikta@programming.dev 2 points 1 month ago* (last edited 1 month ago) (1 children)

How are humans different from LLMs under RL/genetics? To me, they both look like token generators with a fitness. Some are quite good. Some are terrible. Both do fast and slow thinking. Some have access to tools. Some have nothing. And they both survive if they are a good fit for their application.

I find the technical details quite irrelevant here. That might be relevant if you want to discuss short term politics, priorities and applied ethics. Still, it looks like you're approaching this with a lot of bias and probably a bunch of false premises.

BTW, I agree that quantum computing is BS.

[–] hendrik@palaver.p3x.de 1 points 1 month ago* (last edited 1 month ago) (1 children)

Well, an LLM doesn't think, right? It just generates text from left to right. Whereas I sometimes think for 5 minutes about what I know, what I can deduce from it, do calculations in my brain and carry one over... We've taught LLMs to write something down that resembles what a human with a thought process would write down. But it's frequently gibberish, or it writes something down in the "reasoning"/"thinking" step and then does the opposite. Or it omits steps and then proceeds to do them nonetheless, or it's the other way round. So it clearly doesn't really do what it seems to do. "Thinking" is just a word the AI industry slapped on. It makes models perform some percent better, and that's why they did it.

And I'm not a token generator. I can count the number of "R"s in the word "strawberry". I can go back and revise the start of my text. I can learn in real-time and interacting with the world changes me. My brain is connected to eyes, ears, hands and feet, I can smell and taste... My brain can form abstract models of reality, try to generalize or make sense of what I'm faced with. I can come up with methods to extrapolate beyond what I know. I have goals in life, like pursue happiness. Sometimes things happen in my head which I can't even put into words, I'm not even limited to language in form of words. So I think we're very unalike.
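The "strawberry" test is part of the point here: for deterministic code the count is a one-liner, while an LLM sees subword tokens and never directly observes the characters.

```python
# Trivial for ordinary code: characters are directly observable.
word = "strawberry"
print(word.count("r"))  # 3
```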

You have a point in theory, if we expand the concept a bit. An AI agent in the form of an LLM plus a scratchpad is proven to be Turing-complete. So that theoretical concept could do the same things a computer can do, or what I can do with logic. That theoretical form of AI doesn't exist, though; that's not what our current AI agents do. And there are probably more efficient ways to achieve the same thing than using an LLM.
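The "LLM plus a scratchpad" construction is basically a generator reading and rewriting external state in a loop. Here's a runnable sketch where a stub function stands in for the model call — the stub is entirely hypothetical, it just decrements a counter so the control flow is visible:

```python
def llm_step(scratchpad):
    """Stand-in for an LLM call: reads the scratchpad, writes an
    updated one. A real agent would prompt a model here."""
    n = int(scratchpad)
    return str(n - 1), n - 1 == 0    # (new state, done?)

def agent_loop(scratchpad, max_steps=100):
    # External memory plus a generator in a loop: the shape of the
    # Turing-complete "LLM + scratchpad" construction.
    for _ in range(max_steps):
        scratchpad, done = llm_step(scratchpad)
        if done:
            break
    return scratchpad

print(agent_loop("5"))  # "0"
```

The expressive power lives in the loop and the external memory, not in the generator itself — which is exactly the distinction being made.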

[–] snikta@programming.dev 3 points 1 month ago* (last edited 1 month ago) (1 children)

Exactly what an LLM-agent would reply. 😉

I would say that the LLM-based agent thinks. And thinking is not only "steps of reasoning", but also using external tools for RAG: searching the internet, utilizing relational databases, interpreters, and proof assistants.

You just described your subjective experience of thinking, and maybe a vague definition of what thinking is. We all know this subjective representation of thinking/reasoning/decision-making is not a good representation of some objective reality (countless psychological and cognitive experiments have demonstrated this). That you are not able to make sense of intermediate LLM reasoning steps does not say much (except just that). The important thing is that the agent is able to make use of them.

The LLM can for sure make abstract models of reality, generalize, create analogies and then extrapolate. One might even claim that's a fundamental function of the transformer.

I would classify myself as a rather intuitive person. I have flashes of insight which I later have to "manually" prove/deduce (if acting on the intuition implies risk). My thought process is usually quite fuzzy and chaotic. I may very well follow a lead which turns out to be a dead end, and from that infer something which might seem completely unrelated.

A likely more accurate organic/brain analogy would be that the LLM is a part of the frontal cortex. The LLM must exist as a component in a larger heterogeneous ecosystem. It doesn't even have to be an LLM: any kind of generative or inference engine that produces useful information, which can then be modified and corrected by other, more specialized components and inserted into some feedback loop. The thing which makes people excited is the generating part. And everyone who takes AI or LLMs seriously understands that the LLM is just one, but vital, component of a truly "intelligent" system.

Defining intelligence is another related subject. My favorite general definition is "lossless compression". And the only useful definition of general intelligence is: the opposite of narrow/specific intelligence (it does not say anything about how good the system is).
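The "intelligence as lossless compression" idea has a hands-on illustration: a compressor that models a sequence's structure emits a shorter code than one facing the same symbols with the structure destroyed. A small sketch with zlib (a general-purpose compressor, used here only to make the principle visible):

```python
import random
import zlib

structured = b"the cat sat on the mat. " * 40
shuffled = bytearray(structured)
random.Random(0).shuffle(shuffled)   # same bytes, structure destroyed

a = len(zlib.compress(structured))
b = len(zlib.compress(bytes(shuffled)))
print(a < b)  # True: exploiting the repetition buys a shorter code
```

Better prediction of what comes next is literally what buys the shorter code, which is why compression makes a tidy operational proxy for this definition.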

[–] hendrik@palaver.p3x.de 1 points 1 month ago* (last edited 1 month ago)

You just described your subjective experience of thinking.

Well, I didn't just do that. We have MRIs and have looked into the brain and we can see how it's a process. We know how we learn and change by interacting with the world. None of that is subjective.

I would say that the LLM-based agent thinks. And thinking is not only “steps of reasoning”, but also using external tools for RAG.

Yes, that's right. An LLM alone certainly can't think. It doesn't have a state of mind; it's reset a few seconds after it did something and forgets about everything. It's strictly tokens from left to right. And it also doesn't interact with the world in a way that would have an impact on it; that's limited to what we bake in during the training process from what's on Reddit and other sources. So there are many fundamental differences here.

The rest of it emerges by an LLM being embedded into a system. We provide tools to it, a scratchpad to write something down, we devise a pipeline of agents so it's able to devise something and later return to it. Something to wrap it up and not just output all the countless steps before. It's all a bit limited due to the representation and we have to cram everything into a context window, and it's also a bit limited to concepts it was able to learn during the training process.

However, those abilities are not in the LLM itself, but in the bigger thing we build around it. And it depends a bit on the performance of the system. As I said, the current "thinking" processes are more of a mirage, and I'm pretty sure I've read papers on how models don't really use them to think. That aligns with what I see once I open the "reasoning" texts. Theoretically, the approach surely makes everything possible (within the limits of how much context we have and how much computing power we spend; that's all limited in practice). But what kind of performance we actually get is an entirely different story. And we're not anywhere close to proper cognition. We hope we're eventually going to get there, but there's no guarantee.

The LLM can for sure make abstract models of reality, generalize, create analogies and then extrapolate.

I'm fairly sure extrapolation is generally difficult with machine learning. There's a lot of research on it and it's just massively difficult to make machine learning models do it. Interpolation on the other hand is far easier. And I'll agree. The entire point of LLMs and other types of machine learning is to force them to generalize and form models. That's what makes them useful in the first place.
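The interpolation/extrapolation gap is easy to demonstrate with even the simplest model class. Here's a sketch fitting a quintic polynomial to sin(x) on one period: inside the training interval the fit is decent, well outside it the error explodes. (Polynomial regression is a stand-in here, not a claim about how neural nets fail, though the qualitative behavior is analogous.)

```python
import numpy as np

# Fit a quintic to sin(x) on [0, 2*pi], then evaluate inside and
# far outside the training interval.
x_train = np.linspace(0, 2 * np.pi, 50)
coeffs = np.polyfit(x_train, np.sin(x_train), deg=5)

x_in, x_out = 3.0, 12.0          # inside vs. well outside [0, 2*pi]
err_in = abs(np.polyval(coeffs, x_in) - np.sin(x_in))
err_out = abs(np.polyval(coeffs, x_out) - np.sin(x_out))
print(err_in < err_out)  # extrapolation error dwarfs interpolation error
```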

It doesn’t even have to be an LLM. Some kind of generative or inference engine that produce useful information which can then be modified and corrected by other more specialized components and also inserted into some feedback loop

I completely agree with that. LLMs are our current approach, and the best approach we have. They just have a scalability problem (and a few other issues). We don't have infinite datasets to feed in, or infinite compute, and everything seems to grow exponentially more costly, so maybe we can't make them substantially more intelligent than they are today. We also don't teach them to stick to the truth, or be creative, or follow any goals. We just feed in random (curated) text and hope for the best, with a bit of fine-tuning and reinforcement learning from human feedback on top. But that doesn't rule out anything. There are other machine learning architectures with feedback loops that are way more powerful; they're just too complicated to calculate. We could teach AI about factuality and creativity and expose some control mechanisms to guide it. We could train a model with a different goal than producing one next token so it looks like text from the dataset. That's all possible. I just think LLMs are limited in the ways I mentioned, and we need one of the hypothetical new approaches to get them anywhere close to the level a human can achieve... I mean, I frequently use LLMs, and they all fail spectacularly at computer programming tasks I do in 30 minutes. I don't see how they'd ever be able to do them, given the level of improvement we see today. I think that needs a radically new approach in AI.
