Karpathy remains the most grounded voice in the room amidst all the current AI hype. One of his biggest technical critiques is aimed at reinforcement learning, which he described as "sucking supervision through a straw": you do a long, complex task, get a single bit of feedback at the end, right or wrong, and use that one bit to upweight or downweight the entire trajectory. It's incredibly noisy and inefficient, which suggests we need a paradigm shift toward something like process supervision. A human would never learn that way; we'd review our work, figure out which parts were good and which were bad, and learn in a much more nuanced way. Papers are starting to tackle this, but it's a hard problem.
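To make the contrast concrete, here's a tiny illustrative sketch (my own, not from the talk) of how a single outcome reward gets smeared across every step of a trajectory, versus a hypothetical per-step process reward. All the numbers, including the `step_rewards` array, are made up.

```python
# Illustrative sketch: outcome-only RL broadcasts one reward over a whole
# trajectory, while process supervision gives each step its own signal.
# Everything here is a stand-in, not anyone's actual training code.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the log-probabilities of the 8 actions/tokens the policy took.
logprobs = rng.normal(loc=-1.0, scale=0.3, size=8)

# Outcome supervision: one bit of feedback (task solved or not) is applied
# uniformly to every step, so good and bad intermediate steps get the same credit.
final_reward = 1.0
outcome_loss = -(final_reward * logprobs).sum()

# Process supervision: a hypothetical per-step reward model scores each
# intermediate step, so credit assignment is far less noisy.
step_rewards = np.array([0.9, 0.8, 0.1, -0.5, 0.7, 0.9, 0.6, 1.0])
process_loss = -(step_rewards * logprobs).sum()

print(f"outcome-only loss:       {outcome_loss:.3f}")
print(f"process-supervised loss: {process_loss:.3f}")
```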
He also pushed back on the idea that we're recreating evolution or building digital animals. Because we train on the static artifacts of human thought, in the form of internet text, rather than on biological survival imperatives, Karpathy argues we aren't building organisms. Animals come from evolution, which bakes a huge amount of hardware and instinct directly into their DNA; a zebra can run minutes after it's born. We're building something closer to ghosts: fully digital, born from imitating the vast corpus of human data on the internet. It's a different kind of intelligence, starting from a different point in the space of possible minds.
This leads into his fairly conservative timeline on agents. He sees the wild predictions of imminent AGI as largely fundraising hype. The path to capable AI agents is tractable, but it will take about a decade, not a single year. Today's agents are still missing too much: true continual learning, robust multimodality, and the general cognitive depth you'd need to hire one as a reliable intern. They just don't work well enough yet.
Drawing from his time leading Autopilot at Tesla, he views coding agents through the lens of the "march of nines." Just like self-driving, getting the demo to work is easy, but grinding out reliability to 99.9999% takes ten years. Right now, agents are basically just interns that lack the cognitive maturity to be left alone.
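As a rough back-of-the-envelope illustration of that march (my numbers, not his): each additional nine cuts the tolerable failure rate by a factor of ten, so the gap between a 99% demo and 99.9999% reliability is four orders of magnitude.

```python
# Back-of-the-envelope sketch of the "march of nines": each extra nine of
# reliability shrinks the allowed failure rate by 10x. Figures are illustrative.
for reliability in [0.90, 0.99, 0.999, 0.9999, 0.99999, 0.999999]:
    failures_per_million = (1 - reliability) * 1_000_000
    print(f"{reliability:.6f} -> {failures_per_million:,.0f} failures per million tasks")
```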
Finally, he offered some interesting thoughts on architecture and the future. He wants to move away from massive models that memorize the internet via lossy compression, advocating instead for a small, 1-billion-parameter "cognitive core" that focuses purely on reasoning and looks up facts as needed. He sees AI as just a continuation of the automation curve we've been on for centuries.
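Here's a loose sketch of how I read the cognitive core idea: a small model spends its parameters on reasoning and tool use, and fetches facts from outside rather than recalling them from weights. The `lookup` function and the interfaces below are assumptions for illustration, not any real API.

```python
# Rough sketch of a "cognitive core" loop: reason, fetch facts via a tool,
# then answer. Names and interfaces are hypothetical stand-ins.
from dataclasses import dataclass

@dataclass
class RetrievedFact:
    source: str
    text: str

def lookup(query: str) -> RetrievedFact:
    """Stand-in for an external knowledge source (search index, database, wiki)."""
    return RetrievedFact(source="knowledge_base", text=f"(facts relevant to: {query})")

def cognitive_core(question: str) -> str:
    """Hypothetical small reasoning model: plans, calls a tool for facts, then answers.

    The point of the architecture is that parameters go toward reasoning and
    tool use, not toward lossily compressing the internet into weights.
    """
    plan = f"Decide what facts are needed to answer: {question}"
    evidence = lookup(question)  # fetch knowledge instead of recalling it from weights
    return f"{plan}\n  using {evidence.source}: {evidence.text}\n  -> synthesized answer"

print(cognitive_core("When was the transistor invented?"))
```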