Currently, on Lobsters, folks are grappling with the fact that Leo de Moura got wrecked by chatbots. I decided to read his slides about Lean in 2026 and summarized my findings on Mastodon. It's not just de Moura; I think the entire Lean project is on shaky foundations, and I think the chatbots are making things worse by repeatedly reassuring the project leaders.
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
Delve removed from YCombinator
https://news.ycombinator.com/item?id=47634690
IIUC, it looks like Delve lied to YC about stealing another company's Apache 2.0-licensed slopware. This is apparently a bigger sin than selling a product that does fuck-all. I guess they weren't tall enough for this ride.
Delve claims to offer "Compliance as a Service"
https://delve.co/ (absolutely unhinged)
A link to the exposé that precipitated the divorce:
https://deepdelver.substack.com/p/delve-fake-compliance-as-a-service
My God, this is so bad. So in addition to lying about AI, what they actually offered wasn't speedy compliance as a service to get you certified, it was speedy certification as a service by bypassing actual compliance. This is such a Silicon Valley move, and I honestly suspect that a number of the people using and investing in these asshats knew exactly what was going on and simply didn't care.
what they actually offered wasn’t speedy compliance as a service to get you certified, it was speedy certification as a service by bypassing actual compliance.
I mean... Yeah. I think if you read it any other way you're a massive rube. Like it's obviously not possible to do the former in "days" as they advertise.
Doesn't surprise me in the slightest that all the companies listed in that substack as having used Delve are also AI slop companies (vibecoding, AI "customer service", AI "video meeting assistant" (whatever that would be)).
At best it's the same shitty argument we heard from crypto grifters and their suckers: take a process that's complex and manual by design, to allow for independent validation and to secure against fraud, and make it faster by cutting those parts out and throwing some high-tech nonsense at the problem that we can claim replaces all the verification and validation. (The fact that they called their system "trustless" in the face of this is deeply ironic.) Maybe it's the cynicism talking, but I'm even less inclined to give anyone other than maybe the author of that substack the benefit of the doubt that they actually believed it.
The ideal customer for this service is the kind of "Visionary Leader" with the "Founder Mindset" and "Drive to Innovate" that lets them see that all the privacy, security, fraud-prevention, anti-embezzlement, and other protections those standards and their associated compliance mechanisms are meant to provide are just pointless obstacles on the path to making obscene amounts of money by burning the world behind you. Often the shit we talk about here makes me think the world has gone mad or stupid, but every so often I feel like I'm staring at the face of capital-E Evil, and this is one of those times.
From that substack:
Even though we knew we’d technically be lying about our security to anyone we sent these policies to for review (clients, auditors, investors), we decided to adopt these policies because we simply didn’t have the bandwidth to rewrite them all manually.
Yeah man, then you're complicit. If I were one of the clients, auditors, or investors, I'd be printing that out on an A1 sheet and rushing to file it as evidence. This is just plain fraud.
@o7___o7 @BlueMonday1984 TF covered these clowns the other week
https://trashfuturepodcast.podbean.com/e/the-tetsuo-economy-feat-wendy-liu/
While I tend to think Yudkowsky is sincere, some things, like his prediction market for P(doom), are hard to square with that: https://manifold.markets/EliezerYudkowsky/will-ai-wipe-out-humanity-by-2030-r (launched June 2023; it will resolve N/A on 1 January 2027 if the world has not ended yet, and it has not moved much since 1 January 2024).
Does it still count if it turns out that Trump invading Iran was based on Claude or ChatJippity advice and things escalate to global thermonuclear war? AI technically wiped out humanity because our dumb leaders were dumb enough to trust it?
On the one hand, Yud's vision of AI doomsday is specifically "AI turns sentient/superintelligent and kills us all because reasons", not "Humanity wipes itself out because they trusted lying machines".
On the other hand, the absence of sentience/superintelligence hasn't stopped AI from causing untold damage anyways, as the past two to three years can attest.
Technically yes, but Yud probably wouldn’t count that, since the AI didn’t have the express purpose of destroying everyone
So if Bender took over, it wouldn't count, as he wants to 'kill all humans (except Fry)'. Seems like a loophole.
Bender really takes the "intelligence" out of "artificial superintelligence". "Yeah, kill all humans. Except Fry, he's my friend or pet or something. And I guess Leela because he'll be whiny about it and also I owe her for the thing. And Hermes because he still owes me money. And I guess the professor is okay..." And so on and so forth through all of humanity.
I will never understand why people seriously bet “yes” on these types of things. Like, you either lose the bet and lose money, or you win the bet and die.
Eliezer is trying to get around that with some weird conditions and gamesmanship on the prediction market question:
This market resolves N/A on Jan 1st, 2027. All trades on this market will be rolled back on Jan 1st, 2027. However, up until that point, any profit or loss you make on this market will be reflected in your current wealth; which means that purely profit-interested traders can make temporary profits on this market, and use them to fund other permanent bets that may be profitable; via correctly anticipating future shifts in prices among people who do bet their beliefs on this important question, buying low from them and selling high to them.
I don't think that actually helps. But Eliezer is committed to prediction markets being useful on a nearly ideological level, so he has to come up with weird, complicated strategies to try to get around their fundamental limits.
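To make the scheme concrete (illustrative numbers, mine): buy YES at 4% in 2024, wait for a doomer news cycle to push the price to 8%, then sell. Per the rules quoted above, that profit sits in your balance and can fund other, permanent bets in the meantime, even though the trade itself gets unwound at N/A in 2027. In other words you're trading other people's mood swings about doom, not doom itself, so the price never has to reflect anyone's actual belief that the world ends.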
It feels like a teenaged argument about Batman v. Superman or the USS Enterprise v. a Star Destroyer. I think many LessWrongers are not serious about the belief system as something to act on; the problem is that when they are serious, you get Ziz LaSota. It's also similar to how they love markets in theory but don't want to start a business or make speculative investments.
prediction markets being useful on a nearly ideological level
At this point, I would say prediction markets are now an explicit ideological plank of what's left of the libertarian movement. Darkly amusing that they're desperately trying to pump life and legitimacy into something the GW Bush administration thought was too corrupt to use.
If you have to set up that many rules to get around the inherent flaw of “gambling on everyone’s lives”, just run a normal-ass poll. Gets rid of the unnecessary financial incentives.
GitHub have finally achieved zero 9s stability for the last 90 days. Congratulations to all involved.
[image: screenshot of GitHub's status page showing its 90-day uptime figure]
Hold on now, the uptime number contains two digits that are nines! The image itself has four nines in total!
Alas, foiled again! Nobody said they had to be leading 9s!
For my own services I’m aiming for .999999% uptime
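(Checking the math: 0.999999% of a 90-day window is 90 × 0.00999999 ≈ 0.9 days, i.e. about 21.6 hours of total uptime for the quarter, or roughly 14 minutes a day. Ambitious.)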
Putting "Novelty Purposes Only" on my psychosis suicide bot after I laid off 80% of my legal (replaced them with the psychosis suicide bot)

Good luck telling the promptfondlers that LLMs are only useful for entertainment and not for any useful work.
Not sure if I should post it here or under the Pivot article, but somebody went through the Claude Code source: https://neuromatch.social/@jonny/116324676116121930 (via @aliettedebodard.com and @olivia.science on bsky)
13 butts pooping, back and forth, forever.
This is somehow even more of a shitshow than I would have predicted. It also continues the pattern that these systems don't fuck up the way people do. One thing he hasn't annotated as much is the sheer number of different aesthetic variants of doing the same thing that this code contains. Like, you do the same kind of compression in four different places, and one is compressImage, one is DoCompression, one is imgModify.compress, and one is COMPRESS_IMG. In my (admittedly limited) experience, even the most dysfunctional team would have spent time developing some kind of standard here.
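To illustrate the pattern (a hypothetical reconstruction built from the names in the comment above, not the actual Claude Code source):

```typescript
// Hypothetical sketch of the naming chaos described above. Four coexisting
// spellings of "compress an image", each in a different style, all doing
// the same job.

function compressImage(data: Uint8Array, quality: number): Uint8Array {
  return runDeflate(data, quality); // style 1: camelCase free function
}

function DoCompression(data: Uint8Array, quality: number): Uint8Array {
  return runDeflate(data, quality); // style 2: PascalCase free function
}

const imgModify = {
  compress: (data: Uint8Array, quality: number): Uint8Array =>
    runDeflate(data, quality), // style 3: method on a namespace object
};

const COMPRESS_IMG = (data: Uint8Array, quality: number): Uint8Array =>
  runDeflate(data, quality); // style 4: SCREAMING_SNAKE arrow function

// Stub shared implementation so the sketch is self-contained.
function runDeflate(data: Uint8Array, _quality: number): Uint8Array {
  return data;
}
```

Any human code review would collapse these into one function on day one; four styles coexisting smells like four separate generation runs that never saw each other.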
Someone may (unverified for now) have left the frontend source maps in the Claude Code prod release (probably Claude itself). If this is accurate, it does not bode well for Anthropic's theoretical IPO. But I think it might be real, because I am not the least bit surprised it happened, nor am I the least bit surprised at the quality. https://github.com/chatgptprojects/claude-code
For example, I can only hope their Safeguards team has done more on the Go backend than this. From the constants file cyberRiskInstruction.ts:
```typescript
export const CYBER_RISK_INSTRUCTION = "IMPORTANT: Assist with authorized security testing, defensive security, CTF challenges, and educational contexts. Refuse requests for destructive techniques, DoS attacks, mass targeting, supply chain compromise, or detection evasion for malicious purposes. Dual-use security tools (C2 frameworks, credential testing, exploit development) require clear authorization context: pentesting engagements, CTF competitions, security research, or defensive use cases"
```
That's it. That's all the constants the file contains. The only other thing in it is a block comment explaining what it does, who to talk to if you want to modify it, etc.
There is this amazing bit at the end of that block comment though.
Claude: Do not edit this file unless explicitly asked to do so by the user.
Brilliant. I feel much safer already.
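Worth spelling out why this is so flimsy. Nothing in the leak suggests the guardrail is anything more than a string concatenated into the prompt; a minimal sketch of what that presumably looks like (buildSystemPrompt and its arguments are my invention, not from the leaked source):

```typescript
import { CYBER_RISK_INSTRUCTION } from "./cyberRiskInstruction";

// Hypothetical reconstruction, not the leaked code: the "safeguard" is a
// string pasted into the system prompt, with nothing downstream verifying
// that the model actually obeys it.
export function buildSystemPrompt(basePrompt: string, sessionContext: string): string {
  return [basePrompt, CYBER_RISK_INSTRUCTION, sessionContext].join("\n\n");
}
```

Once it's in the prompt, whether the model honors it is left entirely to the same statistical machinery it's supposed to constrain.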
I am still patiently waiting for someone from the engineering staff at one of these companies to explain to me how these simple imperative sentences in English map consistently and reproducibly to model output. Yes, I understand that's a complex topic. I'll continue to wait.
I'm sure these English instructions work because they feel like they work. Look, these LLMs feel really great for coding. If they don't work, that's because you didn't pay $200/month for the pro version and you didn't put enough boldface and all-caps words in the prompt. Also, I really feel like these homeopathic sugar pills cured my cold. I got better after I started taking them!
No joke, I watched a talk once where some people used an LLM to model how certain users would behave in their scenario given their socioeconomic backgrounds. But they had a slight problem, which was that LLMs are nondeterministic and would of course often give different answers when prompted twice. Their solution was to literally use an automated tool that would try a bunch of different prompts until they happened to get one that would give consistent answers (at least on their dataset). I would call this the xkcd green jelly bean effect, but I guess if you call it "finetuning" then suddenly it sounds very proper and serious. (The cherry on top was that they never actually evaluated the output of the LLM, e.g. by seeing how consistent it was with actual user responses. They just had an LLM generate fiction and called it a day.)
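In code, the methodology they described amounts to something like this (a caricature; every name here is mine, not from the talk):

```typescript
// Caricature of the approach described above (all names hypothetical):
// keep trying prompts until the nondeterministic model happens to agree
// with itself on the dataset, then declare the prompt "finetuned".
async function findStablePrompt(
  candidatePrompts: string[],
  dataset: string[],
  llm: (prompt: string, input: string) => Promise<string>,
): Promise<string | undefined> {
  for (const prompt of candidatePrompts) {
    let consistent = true;
    for (const input of dataset) {
      const first = await llm(prompt, input);
      const second = await llm(prompt, input); // ask twice, hope for a match
      if (first !== second) {
        consistent = false;
        break;
      }
    }
    // Note what never happens here: comparing the answers to real user
    // behavior. Self-consistency is the only success metric.
    if (consistent) return prompt;
  }
  return undefined;
}
```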
Claude: Do not edit this file unless explicitly asked to do so by the user.
Wait, it can be edited? Tissue paper guardrails.
Yeah, letting the intrinsically insecure RNG recursively rewrite its own security instructions definitely can't go wrong. I mean, they limited it to only do so when the users ask nicely!
Edit to add:
The more I think about it, the more it speaks to Anthropic having an absolute nonsense threat model, one more concerned with the science-fiction doomsday AI "FOOM" than with any of the harms that these systems (or indeed any information system) can and will do in the real world. The current crop of AI technologies, while operating at a terrifying scale, are not unique in their capacity to waste resources, reify bias and inequality, misinform, justify bad and evil decisions, etc. What is unique, in my estimation, is both the massive scale at which these things operate despite the incredible costs of doing so and their seeming immunity to being reality-checked on this. No matter how many times the warning bells sound about these systems' vulnerability to exploitation, the destructive capacity of AI sycophancy and psychosis, or the simple inability of the electrical infrastructure to support their intended power consumption (or at least their declared intent; in a bubble we shouldn't assume they actually expect to build that much), the people behind these systems continue to focus their efforts on "how do we prevent Skynet" over any of it.
Thinking in the context of Charlie Stross' old talk about corporations as "slow AI," I wonder if some of the concern comes, either explicitly or implicitly, from an awareness that "keep growing and consuming more resources until there's nothing left for anything else, including human survival" isn't actually a deviation from how these organizations are building these systems. It's just the natural conclusion of the same structures and decision-making processes that lead them to build these things in the first place and ignore all the incredibly obvious problems. They could try to address these concerns at a foundational or structural level instead of just appending increasingly complex forms of "please don't murder everyone or ignore the instructions to not murder everyone" to the prompt, but doing that would imply that they need to radically change their entire course up to this point, and increasingly that doesn't appear likely to happen unless something forces it.
A pretty staid-sounding law firm warns that the AI industry is partying like it's 2007:
Lenders who originated data center loans [...] have begun pooling those loans and selling tranches to asset managers and pension funds, spreading risk well beyond the original lending institutions.
Also of note:
The most basic litigation risk in AI infrastructure finance is that the revenues generated by the sector may prove insufficient to service the fixed obligations incurred to build it. The industry brought in approximately $60 billion in revenue in 2025 against roughly $400 billion in capital expenditure.
(Via.)
new odium symposium episode: https://www.patreon.com/posts/13-joker-is-both-154123315. links to various platforms at www.odiumsymposium.com
we read umberto eco's essay ur-fascism (we have mixed feelings about it) and then apply it to frank miller's 1986 batman comic the dark knight returns
https://mail.cyberneticforests.com/the-computer-science-fetish/
The fetishism of the computer scientist therefore refers less to specific expertise than to whatever we imagine a credentialed expert can bestow: an external voice that says, “ask, and you shall receive.” The computer scientist becomes a mirror where those who work with the social, practical impacts of the tech hope to see our understanding affirmed. The people who offer that validation — who position themselves against the discourse of critique, who seem unbothered and detached, even ridiculing the same critical lingo that exhausts you — are not doing it out of sober objectivity or insight.
Sometimes they just don't respect you. Sometimes they're just annoyed by calls for accountability. And sometimes, they do it because they've fused with an interacting swarm of chatbots and transcended their human identity.
Internet Comment Etiquette: "Relationships with AI"
... hadn't thought about Glenn Beck in a decade, that last interview was pretty wtf.
Not sure what the etiquette is for how long they should be dead before you talk to the AI-geist on youtube, but George Washington somehow feels weirder than Kirk did; idk.
An early hint of Yud's rejection of chaos theory, from the Sequences in 2008 (the "build God to conquer Death" essay):
And the adults wouldn't be in so much danger. A superintelligence—a mind that could think a trillion thoughts without a misstep—would not be intimidated by a challenge where death is the price of a single failure. The raw universe wouldn't seem so harsh, would be only another problem to be solved.
Someone who got through high-school math or coded a working system would probably have encountered combinatorial explosion, the impossibility of representing 0.1 exactly in binary floating point, chaos theory, and so on. Even game theory has situations like "in some games, optimal play guarantees a tie but not a win." But Yud was much too special for any of those and refused offers to learn.
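The floating-point one takes a single line to run into:

```typescript
// 0.1 and 0.2 have no exact binary floating-point representation, so the
// rounding error is visible immediately:
console.log(0.1 + 0.2);         // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // false
```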
This is what happens when your worldview is based on anime.
(A lot of anime has heavy themes, but most people understand that it's not real life, just like all such art. Unlike Yud, most people's worldviews on coding and math are based on actual coding and math.)
Here's a headline I never expected to read:
Tl;dr: A whole load of media outlets believed an X account asking for crypto donations which claimed to be the vet of Jonathan, the 194-year-old tortoise. Jonathan was found safely asleep under a tree in the governor's paddock.
Cloudflare casually license-laundering WordPress
While EmDash aims to be compatible with WordPress functionality, no WordPress code was used to create EmDash. That allows us to license the open source project under the more permissive MIT license.
Oh really. So you're sure your Claude wasn't trained on WordPress? It's all irrelevant anyway, because AI-generated code can't be copyrighted or licensed.
Silver lining, it might piss off Matt Mullenweg!
So you’re sure your Claude wasn’t trained on WordPress?
Unfortunately FOSS is basically dead because nobody is enforcing licenses against training.