
TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Last Stubsack of 2025 - may 2026 bring better tidings. Credit and/or blame to David Gerard for starting this.)

[–] sailor_sega_saturn@awful.systems 9 points 2 weeks ago* (last edited 2 weeks ago) (17 children)

https://github.com/leanprover/lean4/blob/master/.claude/CLAUDE.md

Imagine if you had to tell people "now remember to actually look at the code before changing it." -- but I'm sure LLMs will replace us any day now.

Also lol this sounds frustrating:

Update prompting when the user is frustrated: If the user expresses frustration with you, stop and ask them to help update this .claude/CLAUDE.md file with missing guidance.

Edit: I might be misreading this, but is this a sign of someone working on an LLM-driven release process? https://github.com/leanprover/lean4/blob/master/.claude/commands/release.md

Important Notes: NEVER merge PRs autonomously - always wait for the user to merge PRs themselves

[–] lagrangeinterpolator@awful.systems 13 points 2 weeks ago* (last edited 2 weeks ago) (7 children)

So many CRITICAL and MANDATORY steps in the release instruction file. As it always is with AI, if it doesn't work, just use more forceful language and capital letters. One more CRITICAL bullet point bro, that'll fix everything.

Sadly, I am not too surprised by the developers of Lean turning towards AI. The AI people have been quite interested in Lean for a while now, since they think it is a useful tool for getting AIs to do math (and math = smart, you know).
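
(For anyone who hasn’t touched it: Lean is a proof assistant, so a proof either gets accepted by the kernel or it doesn’t. That’s exactly why the AI crowd wants it as a target: “did the proof compile” is a reward signal a machine can compute, unlike “was this essay any good”. A toy illustration of the kind of machine-checked statement involved; this particular theorem is standard-library stuff, nothing to do with their release process:)

```lean
-- A trivial machine-checked proof in Lean 4. The kernel either
-- accepts this or rejects it; there is no arguing with it.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```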

[–] istewart@awful.systems 12 points 2 weeks ago (2 children)

The whole culture of writing “system prompts” seems like an utter cargo cult to me. Like if the ST: Voyager episode “Tuvix” were instead about Lt. Barclay and Picard accidentally getting combined in the transporter, and the resulting sadboy Barcard spent the rest of his existence neurotically shouting his intricately detailed demands at the holodeck in an authoritative British tone.

If inference is all about taking derivatives in a vector space, surely there should be some marginally more deterministic method for constraining those vectors that could be readily proceduralized, instead of apparent subject-matter experts being reduced to wheedling with an imaginary friend. But I have been repeatedly assured by sane, sober experts that it simply is not so.
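
(The closest thing that does exist is constrained decoding: mask the model’s output logits so that only tokens a grammar allows can ever be sampled. A minimal sketch of the idea, with a toy vocabulary and a hand-waved `allowed` set standing in for a real grammar; this is not any particular library’s API:)

```python
import math
import random

def constrained_sample(logits, vocab, allowed):
    """Sample one token, but only from the grammar-allowed subset.

    logits: one float per vocab entry (stand-in for model output)
    vocab: list of token strings
    allowed: set of tokens the grammar permits next (assumed non-empty)
    """
    # Mask disallowed tokens to -inf so softmax assigns them zero probability.
    masked = [l if t in allowed else float("-inf")
              for l, t in zip(logits, vocab)]
    # Softmax over the survivors (shift by max for numerical stability).
    m = max(masked)
    exps = [math.exp(l - m) for l in masked]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(vocab, weights=probs, k=1)[0]

# The model "wants" to say "delete", but the grammar only allows digits,
# so "delete" has probability zero no matter how large its logit is.
vocab = ["delete", "7", "9"]
logits = [5.0, 0.1, 0.2]
print(constrained_sample(logits, vocab, allowed={"7", "9"}))
```

Which constrains syntax, not intent, so everyone goes back to wheedling.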

[–] mirrorwitch@awful.systems 5 points 2 weeks ago* (last edited 2 weeks ago)

When I first learned that you could program a chatbot merely by giving it instructions in English sentences, as if it were a human being, I admit I was impressed. I’m a linguist; natural language processing is really hard. There was a certain crossing of levels in the idea that you could tell it something at the chatbot level, e.g. “and you will never delete files outside this directory”, and that this “system prompt” would actually shape the behaviour of the chatbot. I don’t have much interest in programming anymore, but I wondered how this crossing of levels was implemented.

The answer of course is that it's not. Programming a chatbot by talking to it doesn't actually work.
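
(How it is actually implemented: a chat template just glues the “system prompt” onto the front of the same token stream as everything else, and the model predicts what comes next. Nothing privileges it. A sketch of the flattening, with made-up delimiters rather than any real model’s format:)

```python
# Illustration only: the <|...|> delimiters are invented here, not any
# real model's chat format. The point is that the "system prompt" is
# just more text in the same flat string.
def build_prompt(system: str, user: str) -> str:
    return (
        f"<|system|>\n{system}\n"
        f"<|user|>\n{user}\n"
        f"<|assistant|>\n"
    )

print(build_prompt(
    system="You will never delete files outside this directory.",
    user="Please delete everything outside this directory.",
))
# The model sees one flat string. Nothing enforces the "rule" above
# except the statistics of which tokens tend to follow which.
```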
