this post was submitted on 23 Feb 2026
24 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this. If you're wondering why this went up late, I was doing other shit)

(EDIT: Changed "29th February" to "1st March" - it's not a leap year)

[–] BurgersMcSlopshot@awful.systems 15 points 1 week ago (4 children)

I just had one of those "brain-doing-brain-stuff-good" moments (I think normal people call them delusions?) pondering why AI code extruders are seeing widening adoption.

tl;dr - there's a bunch of people uncurious about the nature of the abstractions they use and it's a tragedy.

First a moment of background: My first software dev position was using Lisp and one of the most powerful concepts built into the language runtime was the macro facility, the ability to write code that writes code. The main downsides of Lisp are obsequious Lisp developers and hard-to-master C foreign function interfaces, so what you have is a toolchain of abandoned dependencies made by some real annoying characters, but I digress. The ability to write code that writes code is a powerful concept.
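To make "code that writes code" concrete, here's a toy sketch in Python (the names `make_record` and `Point` are invented for illustration; real Lisp macros operate on syntax trees at compile time, which string-pasting only crudely imitates):

```python
# A rough analogue of "code that writes code": generate the source
# of a small class from a field list, then evaluate it. This is the
# string-based, runtime version of what a Lisp macro does properly
# on syntax trees at compile time.

def make_record(name, fields):
    """Generate and evaluate a simple record-class definition."""
    lines = [f"class {name}:"]
    lines.append(f"    def __init__(self, {', '.join(fields)}):")
    for f in fields:
        lines.append(f"        self.{f} = {f}")
    source = "\n".join(lines)
    namespace = {}
    exec(source, namespace)  # deterministic: same input, same class
    return namespace[name]

Point = make_record("Point", ["x", "y"])
p = Point(3, 4)
print(p.x, p.y)  # 3 4
```

The key property, which comes up again below: running this twice on the same input produces exactly the same class, every time.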

I moved on to working with .NET, which, sometime around the 4.6 release, gained enhancements to its built-in language utilities. This led to better code generators for numerous purposes (certain DI containers started doing dependency resolution at build time, for example).

I did Scala for a time, which had a macro facility that was hot garbage and was rewritten between Scala 2 and 3, so I never bothered to learn it. Around this time the orgs I worked for were placing an emphasis on OpenAPI/Swagger specs, for reasons I don't know, because while there was tooling that could generate both the entire HTTP client and the set of interfaces for the API surface, we did neither (where I am right now, we still do neither form of code gen).

Anyways, code generation, whether via external tooling or internal language facilities, is magical, but it is deterministic magic: identical input should yield identical output. It is also hard to use well. The ergonomics of the OpenAPI/Swagger codegen tooling are pretty bad, though not impossible, and under the hood the whole thing is powered by mustache templates. The .NET stuff is still there and works well, but I don't think many workplaces want to invest in really understanding that tooling and how it can be employed. Lisp will always be Lisp; good job, Lisp. There are other examples of code generation used for practical ends, I'm sure.
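The "deterministic magic" of spec-driven codegen can be sketched in miniature. The real OpenAPI toolchains render mustache templates over a parsed spec; here Python's `string.Template` stands in for mustache, and the spec dict and template are invented for illustration:

```python
from string import Template

# Template-driven client generation in the spirit of the
# OpenAPI/Swagger toolchains: a spec goes in, client source
# comes out, and the mapping is a pure function of the input.
CLIENT_TEMPLATE = Template(
    "def ${op_id}(client, ${params}):\n"
    "    return client.request('${method}', '${path}')\n"
)

def generate_client(spec):
    """Render one client function per operation in the spec."""
    chunks = []
    for op in spec["operations"]:
        chunks.append(CLIENT_TEMPLATE.substitute(
            op_id=op["operationId"],
            params=", ".join(op["params"]),
            method=op["method"],
            path=op["path"],
        ))
    return "\n".join(chunks)

spec = {"operations": [
    {"operationId": "get_user", "params": ["user_id"],
     "method": "GET", "path": "/users/{user_id}"},
]}

# Identical input yields byte-identical output, run after run.
assert generate_client(spec) == generate_client(spec)
```

The abstraction layer (the template, the spec schema) is fully visible and fully inspectable, which is exactly the property the next paragraphs contrast against.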

The point is that code generation requires being able to think about and define certain forms of abstraction outside the target functionality of a single program, and while that thinking isn't hard, it's just high enough a bar that your typical enterprise engineer won't engage with it (but will always be amazed by the results!).

AI Code Extruders change the cognitive burden that would be required for code generation into something that I guess appeals to engineers. You can specify something in the abstract and a Do-What-I-Mean machine may churn up something minimally useful, determinism be damned. Not only would an engineer not need to consider the abstraction layer between their input and the code but they would be unable to fully interrogate that abstraction because the code extruder does not need to show its work.
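The "determinism be damned" point is mechanically checkable, which is part of why it matters. A hypothetical sketch (the `add` snippets here are invented): fingerprint generated output and compare across runs.

```python
import hashlib

def fingerprint(source_text):
    """Hash generated code so regeneration can be verified byte-for-byte."""
    return hashlib.sha256(source_text.encode("utf-8")).hexdigest()

# A deterministic generator reproduces the same fingerprint on every
# run, so its abstraction layer can be interrogated: diff the template,
# diff the spec, rerun, compare. An LLM sampled at nonzero temperature
# generally won't reproduce the fingerprint, and there is no template
# to diff.
run1 = "def add(a, b):\n    return a + b\n"
run2 = "def add(a, b):\n    return a + b\n"
assert fingerprint(run1) == fingerprint(run2)
```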

Just a thought. Probably a very silly thought.

[–] istewart@awful.systems 7 points 1 week ago

Not only would an engineer not need to consider the abstraction layer between their input and the code but they would be unable to fully interrogate that abstraction because the code extruder does not need to show its work.

I think you're actually right on the money here, nowhere near delusional, especially since you come from a Lisp background. I really appreciate Lisp (and Smalltalk) for the "live-coding" and universal inspectability/debuggability aspects in the tooling. I appreciate test-driven development as I've seen it presented in the Smalltalk context, as it essentially encourages you to "program in the debugger" and be aware of where the blank spots in your program specification are. (Although I'm aware that putting TDD into practice on an industrial scale is an entirely different proposition, especially for toolchains that aren't explicitly built around the concept.)

However, LLM coding assistants are, if not the exact opposite of this sort of tooling, something so far removed as to be in a different and more confusing realm. Since it's usually a cloud service, you have no access to begin debugging, and it's drawing from a black box of vector weights even if you do have access. If you manage to figure out how to poke at that, you're then faced with a non-trivial process of incremental training (further lossy compression) or possibly a rerun of the training process entirely. The lack of legibility and forthright adaptability is an inescapable consequence of the design decision that the computer is now a separate entity from the user, rather than a tool that the user is using.

I've posed the question in another, slightly less skeptical forum: what advantage do we gain from now having two intermediate representations of a program (the original, fully specified programming language, plus the compiler IR/runtime bytecode) sitting between the prompt and the machine? I have yet to receive a satisfactory answer.
