this post was submitted on 22 Dec 2025
17 points (100.0% liked)

TechTakes

2357 readers
51 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this. Merry Christmas, happy Hanukkah, and happy holidays in general!)

top 50 comments
[–] Seminar2250@awful.systems 19 points 2 weeks ago* (last edited 2 weeks ago) (9 children)

https://www.windowscentral.com/microsoft/windows-11/my-goal-is-to-eliminate-every-line-of-c-and-c-from-microsoft-by-2030-microsoft-bets-on-ai-to-finally-modernize-windows

My goal is to eliminate every line of C and C++ from Microsoft by 2030. Our strategy is to combine AI *and* Algorithms to rewrite Microsoft’s largest codebases. Our North Star is “1 engineer, 1 month, 1 million lines of code”. To accomplish this previously unimaginable task, we’ve built a powerful code processing infrastructure. Our algorithmic infrastructure creates a scalable graph over source code at scale. Our AI processing infrastructure then enables us to apply AI agents, guided by algorithms, to make code modifications at scale. The core of this infrastructure is already operating at scale on problems such as code understanding.

wow, *and* algorithms? i didn't think anyone had gotten that far

[–] swlabr@awful.systems 28 points 2 weeks ago (2 children)

Q: what kind of algorithms does an AI produce

A: the bubble sort

[–] blakestacey@awful.systems 13 points 2 weeks ago

God damn that's good.

load more comments (1 reply)
[–] rook@awful.systems 16 points 2 weeks ago

I suppose it was inevitable that the insufferable idiocy that software folk inflict on other fields would eventually be turned against their own kind.

https://xkcd.com/1831/

Alt text: an xkcd comic.

Long-haired woman: Our field has been struggling with this problem for years!

Laptop-wielding techbro: Struggle no more! I’m here to solve it with algorithms.

6 months later:

Techbro: This is really hard.

Woman: You don’t say.

[–] V0ldek@awful.systems 14 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Ah yes, I want to see how they eliminate C++ from the Windows Kernel – code notoriously so horrific it breaks and reshapes the minds of all who gaze upon it – with fucking "AI". I'm sure autoplag will do just fine among the skulls and bones of Those Who Came Before

load more comments (1 reply)
[–] o7___o7@awful.systems 11 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Throw in the rust evangelism and you have a techtakes turducken

load more comments (1 reply)
[–] Soyweiser@awful.systems 10 points 2 weeks ago (2 children)

They've now updated this to say it's just a research project and none of it will be going live. Pinky promise (ok, I added the pinky promise bit).

load more comments (2 replies)
[–] YourNetworkIsHaunted@awful.systems 10 points 2 weeks ago (2 children)

So maybe I'm just showing my lack of actual dev experience here, but isn't "making code modifications algorithmically at scale" kind of definitionally the opposite of good software engineering? Like, I'll grant that stuff is complicated but if you're making the same or similar changes at some massive scale doesn't that suggest that you could save time, energy and mental effort by deduplicating somewhere?

[–] sailor_sega_saturn@awful.systems 15 points 2 weeks ago (2 children)

This doesn't directly answer your question but I guess I had a rant in me so I might as well post it. Oops.


It's possible to write tools that make targeted, incremental changes in a well-understood problem space: changes that are safe or probably safe, and that get reviewed by humans.

Stuff like turning pointers into smart pointers, reducing string copying, reducing certain classes of runtime crashes, etc. You can do a lot of stuff if you hand-code C++ AST transformations using the clang / llvm tools.
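
For a sense of what hand-coded AST tooling looks like in practice, here's a minimal sketch of a LibTooling pass that only *finds* raw-pointer variables initialized with `new`, i.e. candidates for a std::unique_ptr rewrite. The clang matcher APIs are real; the check itself is invented for illustration and is nobody's actual tooling:

```cpp
// Hedged sketch: flag `T *p = new ...;` declarations as candidates for
// std::unique_ptr. Build against clang's LibTooling libraries.
#include "clang/ASTMatchers/ASTMatchFinder.h"
#include "clang/ASTMatchers/ASTMatchers.h"
#include "clang/Tooling/CommonOptionsParser.h"
#include "clang/Tooling/Tooling.h"
#include "llvm/Support/CommandLine.h"
#include "llvm/Support/raw_ostream.h"

using namespace clang;
using namespace clang::ast_matchers;
using namespace clang::tooling;

namespace {
class RawNewReporter : public MatchFinder::MatchCallback {
public:
  void run(const MatchFinder::MatchResult &Result) override {
    if (const auto *New = Result.Nodes.getNodeAs<CXXNewExpr>("rawNew")) {
      // Print "file:line:col" followed by the suggestion.
      New->getBeginLoc().print(llvm::outs(), *Result.SourceManager);
      llvm::outs() << ": raw `new` stored in a raw pointer; "
                      "candidate for std::unique_ptr\n";
    }
  }
};
} // namespace

static llvm::cl::OptionCategory Cat("raw-new-finder");

int main(int argc, const char **argv) {
  auto Parser = CommonOptionsParser::create(argc, argv, Cat);
  if (!Parser) {
    llvm::errs() << Parser.takeError();
    return 1;
  }
  ClangTool Tool(Parser->getCompilations(), Parser->getSourcePathList());

  // Match: variable of pointer type whose initializer is a new-expression.
  auto Matcher = varDecl(
      hasType(isAnyPointer()),
      hasInitializer(ignoringParenImpCasts(cxxNewExpr().bind("rawNew"))));

  RawNewReporter Reporter;
  MatchFinder Finder;
  Finder.addMatcher(Matcher, &Reporter);
  return Tool.run(newFrontendActionFactory(&Finder).get());
}
```

A real rewrite would then emit clang::tooling::Replacement edits to change the declaration and wrap the initializer: exactly the kind of narrow, reviewable transformation I mean, and nothing like "a chatbot rewrites the kernel".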


Of course "let's eliminate 100% of our C code with a chatbot" is... a whole other ballgame and sounds completely infeasible except in the happiest of happy paths.

In my experience even simple LLM changes are wrong somewhere around half the time, often in disturbingly subtle ways that take an expert to spot. Also, in my experience, people who review LLM code tend to just rubber-stamp it. So multiply that across thousands of changes and it's a recipe for disaster.

And what about third party libraries? Corporate code bases are built on mountains of MIT-licensed C and C++ code, but surely those won't all switch languages. Which means they'll have a bunch of leaf code in C++ and will either need a C++-compatible target language, or have to call all the C++ code via subprocess, C ABI, or cross-language wrappers. The former is fine in theory, but I'm not aware of any suitable languages today. The latter can have a huge impact on performance if too much data needs to be serialized and deserialized across that boundary.
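
To make the C ABI option concrete, the usual pattern is an opaque-handle shim like this sketch (all the names here are invented for illustration):

```cpp
// Sketch of the opaque-handle pattern for exposing C++ across a C ABI.
#include <cstddef>
#include <string>

class TextIndex {
public:
  void add(const std::string &s) { data_ += s; }
  std::size_t size() const { return data_.size(); }
private:
  std::string data_;
};

extern "C" {
// Foreign callers only ever see an opaque pointer.
typedef void *text_index_t;

text_index_t text_index_create(void) { return new TextIndex(); }

void text_index_add(text_index_t h, const char *s) {
  // Every call across this boundary copies its argument into a
  // std::string; this is the serialization cost that adds up fast
  // if the boundary is chatty.
  static_cast<TextIndex *>(h)->add(s);
}

std::size_t text_index_size(text_index_t h) {
  return static_cast<TextIndex *>(h)->size();
}

void text_index_destroy(text_index_t h) {
  delete static_cast<TextIndex *>(h);
}
} // extern "C"
```

Everything crossing that boundary has to be flattened into C types and copied, which is exactly where the performance hit comes from.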

Windows in particular also has decades of baked-in behavior that programs depend on. Any change in those assumptions and whoops, some of your favorite retro Windows games don't work anymore!


In the worst case they'd end up with a big pile of spaghetti that mostly works as it does today but that introduces some extra bugs, is full of code that no one understands, and is completely impossible to change or maintain.

In the best case they're mainly using "AI" for marketing purposes, will try to achieve their goals using more or less conventional means, will ultimately fall short (hopefully not wreaking too much havoc in the process), and will give up halfway and declare the whole thing a glorious success.

Either way, any kind of large-scale rearchitecting that isn't seen through to the end will leave the codebase with layers. There's the shiny new approach (never finished), the horrors that lie just beneath (also never finished), and the horrors that lie just beneath the horrors (probably written circa 2003). Any new employees start by being told about the shiny new parts. The company will keep a dwindling cohort of people in some dusty corner who have been around long enough to know how the decades of failed code architecture attempts are duct-taped together.

load more comments (2 replies)
[–] swlabr@awful.systems 11 points 2 weeks ago* (last edited 2 weeks ago) (5 children)

The short answer is no. Outside of this context, I'd say "code modifications algorithmically at scale" describes the intersection of code generation and code analysis, both of which are integral parts of modern development. That being said, using LLMs to perform large-scale refactors is stupid.

[–] V0ldek@awful.systems 12 points 2 weeks ago (2 children)

This is like the entire fucking genAI-for-coding discourse. Every time someone talks about LLMs in lieu of proper static analysis I'm just like... Yes, the things you say are of the shape of something real and useful. No, LLMs can't do it. Have you tried applying your efforts to something that isn't stupid?

load more comments (2 replies)
load more comments (4 replies)
load more comments (3 replies)
[–] o7___o7@awful.systems 19 points 2 weeks ago (2 children)

In the future, I'm going to add "at scale" to the end of all my fortune cookies.

[–] istewart@awful.systems 13 points 2 weeks ago (4 children)

I am only mildly concerned that rapidly scaling this particular posting gimmick will cause our usually benevolent and forbearing mods to become fed up at scale

load more comments (4 replies)
[–] fullsquare@awful.systems 12 points 2 weeks ago

you will experience police brutality very soon, at scale

[–] sc_griffith@awful.systems 18 points 2 weeks ago (3 children)

idea: end of year worst of ai awards. "the sloppies"

[–] blakestacey@awful.systems 12 points 2 weeks ago (4 children)

On a related theme:

man wearing humanoid mocap suit kicks himself in the balls

https://bsky.app/profile/jjvincent.bsky.social/post/3mayddynhas2l

load more comments (4 replies)
[–] macroplastic@sh.itjust.works 11 points 2 weeks ago* (last edited 2 weeks ago)

"Top of the Slops"

load more comments (1 reply)
[–] e8d79@discuss.tchncs.de 16 points 2 weeks ago* (last edited 2 weeks ago) (6 children)

I will never forgive Rob Pike for the creation of the shittiest widely adopted programming language since C++, but I very much enjoy this recent thread where he rages about Anthropic.

[–] mlen@awful.systems 10 points 2 weeks ago (2 children)

Digressing: the irony is that it's a language with one of the best standard libraries out there. Wanna run an HTTP reverse proxy with TLS, cross-compiled for a different OS? No problem!

Many times I've used it only because of that, despite it being a worse language.

load more comments (2 replies)
[–] V0ldek@awful.systems 10 points 2 weeks ago (5 children)

for the creation of the shittiest widely adopted programming language since C++

Hey! JavaScript is objectively worse, thank you very much

load more comments (5 replies)
load more comments (4 replies)
[–] dgerard@awful.systems 16 points 2 weeks ago* (last edited 2 weeks ago) (4 children)

lol, Oliver Habryka at Lightcone is sending out begging emails, i found it in my spam folder

(This email is going out to approximately everyone who has ever had an account on LessWrong. Don't worry, we will send an email like this at most once a year, and you can permanently unsubscribe from all LessWrong emails here)

declared Lightcone Enemy #1 thanks you for your attention in sending me this missive, Mr Habryka

In 2024, FTX sued us to claw back their donations, and around the same time Open Philanthropy's biggest donor asked them to exit our funding area. We almost went bankrupt.

yes that's because you first tried ignoring FTX instead of talking to them and cutting a deal

that second part means Dustin Moskovitz (the $ behind OpenPhil) is sick of Habryka's shit too

If you want to learn more, I wrote a 13,000-word retrospective over on LessWrong.

no no that's fine thanks

We need to raise $2M this year to continue our operations without major cuts, and at least $1.4M to avoid shutting down. We have so far raised ~$720k.

and you can’t even tap into Moskovitz any more? wow sucks dude. guess you’re just not that effective as altruism goes

And to everyone who donated last year: Thank you so much. I do think humanity's future would be in a non-trivially worse position if we had shut down.

you run an overpriced web hosting company and run conferences for race scientists. my bayesian intuition tells me humanity will probably be fine, or perhaps better off.

load more comments (4 replies)
[–] blakestacey@awful.systems 16 points 2 weeks ago (11 children)
[–] misterbngo@awful.systems 11 points 2 weeks ago (1 children)
[–] BurgersMcSlopshot@awful.systems 11 points 2 weeks ago

randomly placed and statistically average, just like real rivers!

load more comments (10 replies)
[–] Soyweiser@awful.systems 13 points 2 weeks ago (3 children)

Remember how slatestarcodex argues that non-violence works better as a method of protest? Turns out the research pointing to that is a bit flawed: https://roarmag.org/essays/chenoweth-stephan-nonviolence-myth/

[–] swlabr@awful.systems 13 points 2 weeks ago (6 children)

realising that preaching nonviolence is actually fascist propaganda is one of those consequences of getting radicalised/deprogramming from being a liberal. You can’t liberate the camps with a sit-in, for example.

load more comments (6 replies)
load more comments (2 replies)
[–] sinedpick@awful.systems 12 points 2 weeks ago

Sean Munger, my favorite history YouTuber, has released a 3-hour-long video on technology cultists, from railroads all the way to LLMs. I have not watched it yet, but it is probably full of delicious sneers.

[–] lagrangeinterpolator@awful.systems 12 points 2 weeks ago (15 children)

AI researchers are rapidly embracing AI reviews, with the new Stanford Agentic Reviewer. Surely nothing could possibly go wrong!

Here's the "tech overview" for their website.

Our agentic reviewer provides rapid feedback to researchers on their work to help them to rapidly iterate and improve their research.

The inspiration for this project was a conversation that one of us had with a student (not from Stanford) that had their research paper rejected 6 times over 3 years. They got a round of feedback roughly every 6 months from the peer review process, and this commentary formed the basis for their next round of revisions. The 6 month iteration cycle was painfully slow, and the noisy reviews — which were more focused on judging a paper's worth than providing constructive feedback — gave only a weak signal for where to go next.

How is it that, whenever people try to argue for the magical benefits of AI on a task, it always comes down to "well actually, humans suck at the task too! Look, humans make mistakes!" That seems to be the only way they can justify the fact that AI sucks. At least it spews garbage fast!

(Also, this is a little mean, but if someone's paper got rejected 6 times in a row, perhaps it's time to throw in the towel, accept that the project was never that good in the first place, and try better ideas. Not every idea works out, especially in research.)

When modified to output a 1-10 score by training to mimic ICLR 2025 reviews (which are public), we found that the Spearman correlation (higher is better) between one human reviewer and another is 0.41, whereas the correlation between AI and one human reviewer is 0.42. This suggests the agentic reviewer is approaching human-level performance.

Actually, all my concerns are now completely gone. They found that one number is bigger than another number, so I take back all of my counterarguments. I now have full faith that this is going to work out.
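
(For anyone who wants to poke at numbers like these themselves: Spearman's rho is just Pearson correlation computed on ranks. A minimal sketch, with tie handling omitted and the toy scores made up:)

```cpp
// Spearman rank correlation: rank both series, then take the Pearson
// correlation of the rank vectors. No tie handling in this sketch.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <vector>

std::vector<double> ranks(const std::vector<double> &v) {
  std::vector<std::size_t> idx(v.size());
  std::iota(idx.begin(), idx.end(), std::size_t{0});
  std::sort(idx.begin(), idx.end(),
            [&](std::size_t a, std::size_t b) { return v[a] < v[b]; });
  std::vector<double> r(v.size());
  for (std::size_t i = 0; i < idx.size(); ++i)
    r[idx[i]] = double(i + 1); // rank 1 = smallest value
  return r;
}

double spearman(const std::vector<double> &x, const std::vector<double> &y) {
  auto rx = ranks(x), ry = ranks(y);
  const double n = double(rx.size());
  const double mean = (n + 1.0) / 2.0; // mean of ranks 1..n
  double num = 0.0, dx = 0.0, dy = 0.0;
  for (std::size_t i = 0; i < rx.size(); ++i) {
    num += (rx[i] - mean) * (ry[i] - mean);
    dx += (rx[i] - mean) * (rx[i] - mean);
    dy += (ry[i] - mean) * (ry[i] - mean);
  }
  return num / std::sqrt(dx * dy);
}

int main() {
  // Toy example: two "reviewers" scoring five papers 1-10.
  std::vector<double> a{3, 7, 5, 9, 2}, b{4, 6, 5, 10, 1};
  std::cout << "rho = " << spearman(a, b) << "\n"; // prints rho = 1
}
```

Whether 0.42 versus 0.41 means anything at all would depend on confidence intervals they don't report.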

Reviews are AI generated, and may contain errors.

We had built this for researchers seeking feedback on their work. If you are a reviewer for a conference, we discourage using this in any way that violates the policies of that conference.

Of course, we need the mandatory disclaimers that will definitely be enforced. No reviewer will ever be a lazy bum and use this AI for their actual conference reviews.

load more comments (15 replies)
[–] scruiser@awful.systems 12 points 2 weeks ago* (last edited 2 weeks ago) (6 children)

Last week I posted about Eliezer hating on OpenPhil for having AGI timelines that are too long. He has continued to rage in the comments and replies to his call-out post. It turns out he also hates AI 2027!

https://www.lesswrong.com/posts/ZpguaocJ4y7E3ccuw/contradict-my-take-on-openphil-s-past-ai-beliefs?commentId=3GhNaRbdGto7JrzFT

I looked at "AI 2027" as a title and shook my head about how that was sacrificing credibility come 2027 on the altar of pretending to be a prophet and picking up some short-term gains at the expense of more cooperative actors. I didn't bother pushing back because I didn't expect that to have any effect. I have been yelling at people to shut up about trading their stupid little timelines as if they were astrological signs for as long as that's been a practice (it has now been replaced by trading made-up numbers for p(doom)).

When we say it, we are sneering, but when Eliezer calls them stupid little timelines and compares them to astrological signs it is a top-quality lesswrong comment! Also a reminder for everyone that I don't think we need: Eliezer is a major contributor to the rationalist attitude of venerating super-forecasters and super-predictors, and to promoting the idea that rational, smart, well-informed people should be able to put together super accurate predictions!

So to recap: long timelines are bad and mean you are a stuffy bureaucracy obsessed with credibility, but short timelines are also bad and going to expend the doomer's credibility; you should clearly just agree with Eliezer's views, which don't include any hard timelines or P(doom)s! (As cringey as they are, at least they are committing to predictions in a way that can be falsified.)

Also, the mention of sacrificing credibility makes me think Eliezer is willfully playing the game of avoiding hard predictions to keep the grift going (as opposed to self-deluding about reasons not to commit to a hard timeline or at least put out some firm P()s).

[–] V0ldek@awful.systems 12 points 2 weeks ago (1 children)

it has now been replaced by trading made-up numbers for p(doom)

Was he wearing a hot-dog costume while typing this wtf

[–] scruiser@awful.systems 10 points 2 weeks ago

I really don't know how he can fail to see the irony or hypocrisy in complaining about people trading made-up probabilities, but apparently he has had that complaint about P(doom) for a while. Maybe he failed to write a call-out post about it because any criticism of P(doom) could also be leveled against the entire rationalist project of trying to assign probabilities to everything with poor justification.

[–] o7___o7@awful.systems 11 points 2 weeks ago

Watching this guy fall apart as he's been left behind has sure been something.

[–] Evinceo@awful.systems 11 points 2 weeks ago (1 children)

Eliezer is a major contributor to the rationalist attitude of venerating super-forecasters and super-predictors and promoting the idea that rational smart well informed people should be able to put together super accurate predictions!

This is a necessary component of his imagined AGI monster. Good thing it's bullshit.

[–] blakestacey@awful.systems 13 points 2 weeks ago (1 children)

Super-prediction is difficult, especially about the super-future. —old Danish proverb

[–] blakestacey@awful.systems 11 points 2 weeks ago

And looking that up led me to this passage from Bertrand Russell:

The more tired a man becomes, the more impossible he finds it to stop. One of the symptoms of approaching nervous breakdown is the belief that one’s work is terribly important and that to take a holiday would bring all kinds of disaster. If I were a medical man, I should prescribe a holiday to any patient who considered his work important.

load more comments (3 replies)
[–] BlueMonday1984@awful.systems 11 points 2 weeks ago (18 children)
load more comments (18 replies)
[–] sc_griffith@awful.systems 10 points 2 weeks ago (1 children)

odium symposium christmas bonus episode: we watched and reviewed Sean Hannity's straight-to-Rumble 2023 Christmas comedy "Jingle Smells."

https://www.patreon.com/posts/146610014

load more comments (1 reply)
[–] o7___o7@awful.systems 10 points 2 weeks ago* (last edited 2 weeks ago)
[–] o7___o7@awful.systems 10 points 2 weeks ago* (last edited 2 weeks ago) (5 children)
[–] pikesley@mastodon.me.uk 13 points 2 weeks ago (2 children)

@o7___o7 @BlueMonday1984

> What guardrails work that don’t depend on constant manual billing checks?

Have you considered not blindly trusting the god damn confabulation machine?

[–] pikesley@mastodon.me.uk 11 points 2 weeks ago (1 children)

@o7___o7 @BlueMonday1984

> AI is going to democratize the way people don't know what they're doing

Ok, sometimes you do got to hand it to them

load more comments (1 reply)
load more comments (1 reply)
load more comments (4 replies)
[–] fullsquare@awful.systems 10 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

elsewhere on lemmy, a piece from the atlantic (be warned: they quote lasker/cremieux for some reason) on a shiny new glp-1 agonist that you can order off telegram from some random-ass chinese lab:

The tests, insofar as they are reliable, do flag problems. According to Finnrick Analytics, a start-up that provides free peptide tests and publicly shares the results, 10 percent of the retatrutide samples it has tested in the past 60 days had issues of sterility, purity, or incorrect dosing. Two other peptide-testing labs, Trustpointe and Janoshik, have said in interviews with Rory Hester, a.k.a. PepTok on YouTube, that they see, respectively, an overall fail rate of 20 percent and a 3 to 5 percent fail rate for sterility alone across all peptides.

isn't dear leader EY taking this? it's not approved yet, so it's not available on the normal market, and because it's a peptide it's i.m. only. also, the side effects not just for this one but for the entire class include anhedonia, which must be a very rational thing to risk without medical need. chat, what's your p(infected sore on EY's ass)

[–] saucerwizard@awful.systems 11 points 2 weeks ago* (last edited 2 weeks ago) (5 children)

I’m running ozempic and I haven’t noticed any anhedonia tbf. I think Yud claimed he had tried them and that they failed to work or something.

(fun fact: ozempic's going generic up here in a few months because Novo fucked up the patent application. The peptide market thing gives me the willies.)

[–] fullsquare@awful.systems 11 points 2 weeks ago

good for you ig. ozempic is actually small enough (and profitable enough) to make synthetically, but novo's process is to make the linear precursor by fermentation, purify that, then tack on the side chain and the N-terminal H-His-Aib- using regular peptide chemistry methods. no such luck with retatrutide tho, it has to be made entirely synthetically. the real big deal however will be a small-molecule drug that targets this receptor, because that means pills instead of injections from day 1

load more comments (4 replies)
load more comments (1 reply)
[–] Soyweiser@awful.systems 10 points 2 weeks ago (6 children)

Sunday rant post. I really dislike that so many people are now adopting 'electrons' when they mean power (it is good as a 'this person drinks the kool-aid' shibboleth, however).

And I was amused to hear people go 'AI (by which they meant the recent llm stuff) malware creation will be a risk in the future, look at the drug discovery that AI is already doing'. I wonder if drug discovery people have said 'look how great drug discovery will be in the future, look at all the malware development AI is already doing'.

load more comments (6 replies)