lagrangeinterpolator

joined 9 months ago
[–] lagrangeinterpolator@awful.systems 11 points 15 hours ago (3 children)

“California is, I believe, the only state to give health insurance to people who come into the country illegally,” Kauffman said nervously. “I think we probably should not be providing that.”

“So you’d rather everyone just be sick, and get everyone else sick?” another reporter asked.

“That’s not what I’m saying,” said Kauffman.

“Isn’t that effectively what happens?” the reporter countered. “They don’t have access to health care and they just have to get sick, right?”

Kauffman contemplated that one for a moment. “Then they have to just get sick,” he said. “I mean, it’s unfortunate, but I think that it’s sort of impossible to have both liberal immigration laws and generous government benefits.”

Do I need to comment on this one?

[–] lagrangeinterpolator@awful.systems 7 points 3 days ago* (last edited 3 days ago) (2 children)

I don't even think many AI developers realize that we're in a hype bubble. From what I see, they genuinely believe that the Models Will Improve and that These Issues Will Get Fixed. (I see a lot of faculty in my department who still have these beliefs.)

What these people do see, however, are a lot of haters who just cannot accept this wonderful new technology for some reason. AI is so magical that they don't need to listen to the criticisms; surely those are trivial compared to magic, and whatever they are, These Issues Will Get Fixed. But lately they have realized that, with the constant embarrassing AI failures (and surely AI doesn't have horrible ethics on top of that), there are a lot of haters who will swarm the announcement of any AI project now. The haters also tend to be people who actually know stuff and check things (tech journalists are incentivized not to), but it doesn't matter, because they're just random internet commenters, not big news outlets.

My theory is that now they add a ton of caveats and disclaimers to their announcements in a vain attempt to reduce the backlash. Also if you criticize them, it's actually your fault that it doesn't work. It's Still Early Days. These Issues Will Get Fixed.

[–] lagrangeinterpolator@awful.systems 6 points 3 days ago (1 children)

I knew the Anthropic blog post was bullshit, but every single time the reality is 10x worse than I anticipated.

[–] lagrangeinterpolator@awful.systems 10 points 4 days ago* (last edited 4 days ago) (10 children)

I wonder what actual experts in compilers think of this. There were some similar claims about vibe coding a browser from scratch that turned out to be a little overheated: https://pivot-to-ai.com/2026/01/27/cursor-lies-about-vibe-coding-a-web-browser-with-ai/

I do not believe this demonstrates anything other than that they kept making the AI brute force random shit until it happened to pass all the test cases. The only innovation is that they spent even more money than before. Also, it certainly doesn't help that GCC is open source, and they have almost certainly trained the model on the GCC source code (which the model can then regurgitate, badly, into Rust). Hell, even their blog post admits that half their shit doesn't work and just calls GCC instead!

It lacks the 16-bit x86 compiler that is necessary to boot Linux out of real mode. For this, it calls out to GCC (the x86_32 and x86_64 compilers are its own).

It does not have its own assembler and linker; these are the very last bits that Claude started automating and are still somewhat buggy. The demo video was produced with a GCC assembler and linker.

I wonder why this blog post was brazen enough to talk about these problems. Perhaps by throwing in a little humility, they can make the hype pill that much easier to swallow.

Sidenote: Rust seems to be the language of choice for a lot of these vibe coded "projects", perhaps because they don't want people immediately accusing them of plagiarism. But Rust syntax still follows C-family languages reasonably closely, and in most cases blindly translating C code into Rust kinda works. Now, Rust does have the borrow checker, which takes real thought to deal with, but I think that's not actually a disadvantage for the AI. Borrow checking is enforced by the compiler, so if you screw up in that department, your code won't even compile. This is great for an AI that is just brute forcing random shit until it "works".
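
For illustration only (a toy of my own, nothing from their codebase), this is the kind of mistake rustc flatly refuses to compile, which is exactly the kind of binary pass/fail signal a brute-force retry loop can lean on:

```rust
fn main() {
    let mut names = vec![String::from("gcc"), String::from("as")];

    // Take an immutable borrow into the vector ...
    let first = &names[0];

    // ... then try to mutate the vector while that borrow is still alive.
    // rustc rejects this outright (error E0502: cannot borrow `names` as
    // mutable because it is also borrowed as immutable), so the bug never
    // even reaches the test suite.
    names.push(String::from("ld"));

    println!("{first}");
}
```

The point is just that a whole class of memory bugs gets converted into compile errors, i.e. into instant feedback for the "try again" loop.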

[–] lagrangeinterpolator@awful.systems 12 points 1 week ago (1 children)

I guess I can check back in six months to see how they're doing ... wait a minute, they were saying the same things six months ago, weren't they? That's a bummer.

[–] lagrangeinterpolator@awful.systems 14 points 1 week ago* (last edited 1 week ago) (7 children)

$1000 a week?? Even putting aside literally all of the other issues of AI, it is quite damning that AI cannot even beat humans on cost. AI somehow manages to screw up the one undeniable advantage of software. How do these people delude themselves into thinking that the dogshit they're eating is good?

As a sidenote, I think after the bubble collapses, the people who predict that there will still be some uses for genAI are mostly wrong. In large part, this is because they do not realize just how ruinously expensive it is to run these models, let alone scrape data and train them. Right now, these costs are being subsidized by venture capitalists putting their money into a furnace.

[–] lagrangeinterpolator@awful.systems 7 points 1 week ago (4 children)

I admire how persistent the AI folks are at failing to do the same thing over and over again, but each time coming up with an even more stupid name. Vibe coding? Gas Town? Clawdbot, I mean Moltbook, I mean OpenClaw? It's probably gonna be something different tomorrow, isn't it?

[–] lagrangeinterpolator@awful.systems 8 points 1 week ago (1 children)

It's a big club and you ain't in it!

[–] lagrangeinterpolator@awful.systems 9 points 1 week ago* (last edited 1 week ago)

Holy shit, I didn't even read that part while skimming the later parts of that post. I am going to need formal mathematical definitions for "entangled limit", "all possible computations", "everything machine", "maximally nondeterministic", and "eye wash" because I really need to wash out my eyes. Coming up with technical jargon that isn't even properly defined is a major sign of math crankery. It's one thing to have high abstractions, but it is something else to say fancy words for the sake of making your prose sound more profound.

[–] lagrangeinterpolator@awful.systems 15 points 1 week ago* (last edited 1 week ago) (5 children)

I study complexity theory so this is precisely my wheelhouse. I confess I did not read most of it in detail, because it does spend a ton of space working through tedious examples. This is a huge red flag for math (theoretical computer science is basically a branch of math), because if you truly have a result or idea, you need a precise statement and a mathematical proof. If you're muddling through examples, that generally means you either don't know what your precise statement is or you don't have a proof. I'd say not having a precise statement is much worse, and that is what is happening here.

Wolfram here believes that he can make big progress on stuff like P vs NP by literally just going through all the Turing machines and seeing what they do. It's the equivalent of someone saying, "Hey, I have some ideas about the Collatz conjecture! I worked out all the numbers from 1 to 30 and they all worked." This analogy is still too generous; integers are much easier to work with than Turing machines. After all, not all Turing machines halt, and there is literally no way to decide which ones do. Even the ones that halt can take an absurd amount of time to halt (and again, how much time is literally impossible to decide). Wolfram does reference the halting problem on occasion, but quickly waves it away by saying, "in lots of particular cases ... it may be easy enough to tell what’s going to happen." That is not reassuring.
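
To make the analogy concrete, here's a throwaway Rust sketch of that "proof technique" (purely illustrative, obviously not anything Wolfram wrote): check the first 30 integers and call it progress.

```rust
// Toy version of "I checked some cases, therefore I'm onto something":
// verify the Collatz conjecture for n = 1..=30 and nothing else.
fn reaches_one(mut n: u64) -> bool {
    while n != 1 {
        n = if n % 2 == 0 { n / 2 } else { 3 * n + 1 };
    }
    true
}

fn main() {
    let all_ok = (1u64..=30).all(reaches_one);
    // Prints `true`, which proves exactly nothing about the remaining
    // infinitely many integers. And unlike this loop, an arbitrary Turing
    // machine isn't even guaranteed to halt while you're "checking" it.
    println!("Collatz holds up to 30: {all_ok}");
}
```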

I am also doubtful that he fully understands what P and NP really are. Complexity classes like P and NP are ultimately about problems, like "find me a solution to this set of linear equations" or "figure out how to pack these boxes in a bin." (The second one is much harder.) Only then do you consider which problems can be solved efficiently by Turing machines. Wolfram focuses on the complexity of Turing machines, but P vs NP is about the complexity of problems. We don't care about the "arbitrary Turing machines 'in the wild'" that have absurd runtimes, because, again, we only care about the machines that solve the problems we want to solve.
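
For the record, these are the bog-standard textbook definitions (nothing original here), and they quantify over problems/languages, with machines appearing only as the things that decide or verify them:

```latex
\mathsf{P}  = \{\, L \subseteq \{0,1\}^* : \exists\ \text{TM } M \text{ and constant } c
               \text{ s.t. } M \text{ decides } L \text{ in time } O(n^c) \,\}
\mathsf{NP} = \{\, L \subseteq \{0,1\}^* : \exists\ \text{poly-time verifier } V
               \text{ s.t. } x \in L \iff \exists w,\ |w| \le \mathrm{poly}(|x|),\ V(x,w)=1 \,\}
```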

Also, for a machine to solve problems, it needs to take input. After all, a linear equation solving machine should work no matter what linear equations I give it. To have some understanding of even a single machine, Wolfram would need to analyze the behavior of the machine on all (infinitely many) inputs. He doesn't even seem to grasp the concept that a machine needs to take input; none of his examples even consider that.
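
Purely as an illustration of that distinction (a made-up toy, nothing to do with any of Wolfram's examples): the object complexity theory cares about is a procedure that has to be correct on every input, not a single fixed run you can stare at.

```rust
/// Solve a*x + b = 0 for x, for *arbitrary* coefficients.
/// This is the kind of object complexity statements are about: its behavior
/// is quantified over the whole (infinite) input space.
fn solve_linear(a: f64, b: f64) -> Option<f64> {
    if a == 0.0 { None } else { Some(-b / a) }
}

fn main() {
    // Watching one particular run tells you about that run and nothing more.
    println!("{:?}", solve_linear(2.0, -6.0)); // Some(3.0)
    println!("{:?}", solve_linear(0.0, 5.0));  // None: no x satisfies 0*x + 5 = 0
}
```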

Finally, here are some quibbles about some of the strange terminology he uses. He talks about "ruliology" as if it were a field of science or math; it seems to mean something like the study of how systems evolve under simple rules. Any field can be summarized in that kind of way, but in the end a field of study needs theories in the scientific sense or theorems in the mathematical sense, not just observations. He also talks about "computational irreducibility", which is apparently the concept of asking for the smallest Turing machine that computes a given function. Not only does that not really help him with his project, there is already a legitimate subfield of complexity theory, called meta-complexity, that is productively investigating this idea!

If I considered this in the context of solving P vs NP, I would not disagree if someone called this crank work. I think Wolfram greatly overestimates the effectiveness of just working through a bunch of examples in comparison to having a deeper understanding of the theory. (I could make a joke about LLMs here, but I digress.)

Surely this is a suitable reference for a math article!
