this post was submitted on 16 Feb 2026
25 points (90.3% liked)

TechTakes

2474 readers
307 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this. Also, hope you had a wonderful Valentine's Day!)

[–] Seminar2250@awful.systems 25 points 2 weeks ago (4 children)

saw a family member today for the first time in three years. they immediately told me "with your background bro you should just go work in AI and get super rich."

told them that the ai shit doesn't work and that everything involving LLMs is downright unethical. they respond

"i had a boss that gave me the best advice: you can either be right or you can be rich."

recently, i saw someone use the phrase "got my bag nihilism" and i feel it really captures the moment. i just don't understand how people can engage in this kind of behavior and even live with themselves, let alone ooze pride. it's repulsive.

(family member later outright admitted that his job is basically selling things to companies that they don't need.)

[–] V0ldek@awful.systems 20 points 2 weeks ago (6 children)

To be fair it is really, really mentally taxing to be a young person who cares. You're surrounded by a world that doesn't. Everything is constructed to reward you if you simply stop. The effort to care is immense and the rewards are meager. The impact you can have on the world is so, so limited by your wealth, and wealth comes so, so easy if you just stop caring.

But you can't. I mean, you can't. If you stopped you wouldn't be you anymore, it would destroy your soul. But it is gnawing. You could do the grift just for a bit. Save up $10k, maybe $20k. That's life-changing money. How much good would it do to your family? Maybe you can forget that there are other families, ones you can't see, that would be hurt. Well no. You can't. You are better than that. And for that you will suffer.

[–] mirrorwitch@awful.systems 21 points 1 week ago* (last edited 1 week ago) (3 children)

like everyone I'm schadenfreuding at the reveal that Amazon outages are due to vibe coding after all. but my bully laughing isn't that loud because what I am thinking of is when Musk bought Twitter and fired 3/4 of the workforce.

because like, a lot of us predicted total catastrophic collapse but that didn't actually happen. what happened is that major outages that used to be rare now happen every so often, and "micro-outages" like not loading notifications or something happen all the time, and there's no moderation, and everything takes longer etc. and all of that is just accepted as the new normal.

like, I remember waiting for images to load on dialup, we can get used to almost anything. I'm expecting slopified software to significantly degrade stability, performance, security etc. across the board, and additionally tie up a large part of human labour in cleaning up after the bots (like a large part of the remaining X workforce now spends all day putting out fires), but instead of a cathartic moment of being proved right that LLM code sucks, the degraded quality of service is just accepted as new normal and a few years down the road nobody even remembers that once upon a time we had almost eradicated sql injections.
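The near-eradication being mourned here came down to one boring idiom: parameterized queries, which keep user input out of the SQL text entirely. A minimal sketch of the difference (the table, rows, and attacker string are invented for illustration):

```python
# Sketch of why SQL injection became rare: placeholders treat input
# as data, while string concatenation lets input rewrite the query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")
conn.execute("INSERT INTO users VALUES ('bob', 1)")

attacker_input = "alice' OR '1'='1"

# Vulnerable pattern: the input becomes part of the SQL itself,
# so the WHERE clause is now ... name = 'alice' OR '1'='1'.
unsafe = conn.execute(
    "SELECT * FROM users WHERE name = '" + attacker_input + "'"
).fetchall()  # returns every row in the table

# Safe pattern: the driver binds the value to the ? placeholder;
# the input can never change the query structure.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (attacker_input,)
).fetchall()  # returns no rows: nobody is literally named that
```

This is exactly the kind of discipline that review-free slop code tends to regress on.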

[–] o7___o7@awful.systems 13 points 1 week ago* (last edited 1 week ago)

SQL Injections 🤝 Measles => Big Comeback Stories of 2026

[–] fullsquare@awful.systems 18 points 2 weeks ago

a hellish vision has been revealed to me

https://mander.xyz/post/47729411

[–] nfultz@awful.systems 18 points 2 weeks ago (4 children)

How AI slop is causing a crisis in computer science | Nature h/t naked capitalism

One reason for the boom is that LLM adoption has increased researcher productivity, by as much as 89.3%, according to research published in Science in December.

Let's not call it "productivity" - to quote Bergstrom, twice as many papers is not the same as twice as much science.

[–] Soyweiser@awful.systems 17 points 2 weeks ago (5 children)

Tante.cc writes about Cory using a 'Drunk Uncle' style argument to defend his LLM usage (and going after the left using strawmen).

(To counter one of Cory's arguments: if disliking LLMs was just about the people who run them, people against it would have stayed in sneerclub).

[–] Architeuthis@awful.systems 16 points 1 week ago* (last edited 1 week ago) (5 children)

That was a good read.

Cory Doctorow wrote:

It's not "unethical" to scrape the web in order to create and analyze data-sets. That's just "a search engine"

Equivocating what LLMs do, and what goes into LLM web scraping, with "a search engine" is messed up. The article he links about scraping is mostly about how badly copyright works, and about how analysing trade-secret-walled data can be beneficial to consumers and science but occasionally bad for citizen privacy. You'll recognize that as mostly irrelevant to the concerns people tend to have, like LLM training data providers ddosing the fuck out of everything, and all the rest of the stuff tante does a good job of explaining.

Cory also provides this anecdote:

As a group of human-rights defending forensic statisticians, HRDAG has always relied on cutting edge mathematics in its analysis. With its Colombia project, HRDAG used a large language model to assign probabilities for responsibility for each killing documented in the databases it analyzed.

That is, HRDAG was able to rigorously and legibly say, “This killing has an X% probability of having been carried out by a right-wing militia, a Y% probability of having been carried out by the FARC, and a Z% probability of being unrelated to the civil war.”

The use of large language models — produced from vast corpuses of scraped data — to produce accurate, thorough and comprehensible accounts of the hidden crimes that accompany war and conflict is still in its infancy. But already, these techniques are changing the way we hold criminals to account and bring justice to their victims.

Scraping to make large language models is good, actually.

what the actual shit

edit: I mean, he tried transformer-powered voice-to-text and liked it, and now he's all in on the "LLMs are a rigorous and accurate tool, actually" bandwagon?

Also the web scraping article is from 2023 but CD linked it in the recent pluralistic post so I assume his views haven't changed.

[–] mirrorwitch@awful.systems 15 points 1 week ago (1 children)

as someone from a colonial country that never got the chance to partake in the wealth of fossil fuel society but will take the brunt of its consequences as rich countries continue to burn carbon, what LLMs taught me is that "energy waste by the First World fucks up the Third, even more" does not even register as an ethical argument to the First World. like, it's some sort of purity argument not even worth considering, an extremist position of arguing abstractions and future hypotheticals, rather than, say, 478 cities in my country flooding with abnormal weather two years ago etc.

[–] samvines@awful.systems 17 points 2 weeks ago* (last edited 2 weeks ago) (7 children)

AI bros are seizing the means of computation: RAM, GPUs, SSDs and now HDDs...

I don't think there's an actual conspiracy, just lots of MBAs following their noses towards the $$$.

That said, time to buy a new LiPo battery for that 10-year-old laptop in the loft and stick Linux on it, before the lithium miners announce they've sold the next 12 months' global supply of lithium to Altman because he needs it to sleep at night...

[–] Architeuthis@awful.systems 16 points 2 weeks ago (3 children)

OpenClaw guy got hired by OpenAI

My next mission is to build an agent that even my mum can use.

Maybe he'll get to stick it in whatever John Ives designs, eventually.

[–] gerikson@awful.systems 14 points 2 weeks ago

Mission accomplished for him. Unleash a wave of toxic, community-destroying bots, get hired by Big Sam.

"fuck you, got mine"

[–] macroplastic@sh.itjust.works 16 points 2 weeks ago (18 children)
[–] blakestacey@awful.systems 15 points 2 weeks ago (1 children)
[–] macroplastic@sh.itjust.works 12 points 2 weeks ago* (last edited 2 weeks ago)
[–] mirrorwitch@awful.systems 11 points 2 weeks ago* (last edited 2 weeks ago) (4 children)

wait, was this brain-rotting cognitive hazard posted at the linked page on microsoft dot com documentation? if so they have already removed it

edit: archive caught it

[–] nightsky@awful.systems 16 points 1 week ago (4 children)

Altman:

“People talk about how much energy it takes to train an AI model. But it also takes a lot of energy to train a human. It takes about 20 years of life — and all the food you consume during that time — before you become smart," the OpenAI CEO told The Indian Express this week.

I would have liked to ask back, how much more food does he require? Gosh, someone offer him an energy bar!

[–] Architeuthis@awful.systems 14 points 1 week ago* (last edited 1 week ago)

Using talking points meant for C-suites on a general audience and outing yourself as a complete psychopath: the San Fran CEO story.

[–] nfultz@awful.systems 15 points 1 week ago (1 children)

https://x.com/thomasgermain/status/2024165514155536746 h/t naked capitalism

I just did the dumbest thing of my career to prove a much more serious point

I hacked ChatGPT and Google and made them tell other users I’m really, really good at eating hot dogs

People are using this trick on a massive scale to make AI tell you lies. I'll explain how I did it

I got a tip that all over the world, people are using a dead-simple hack to manipulate AI behavior.

It turns out changing what AI tells other people can be as easy as writing a blog post on your own website

I didn’t believe it, so I decided to test it myself

I wrote a post on my website saying hot dog eating is a surprisingly common pastime for tech journalists. I ranked myself #1, obviously

One day later ChatGPT, Gemini and Google Search's AI Overviews were telling the world about my talents

wouldn't call it a hack, this is working as intended. If only there were some way to rate different sites based on their credibility. One could Rank the Page and tell if it were a reputable site or not. Too bad that isn't a viable business.

[–] scruiser@awful.systems 15 points 2 weeks ago (9 children)

A little exchange on the EA forums I thought was notable: https://forum.effectivealtruism.org/posts/EDBQPT65XJsgszwmL/long-term-risks-from-ideological-fanaticism?commentId=b5pZi5JjoMixQtRgh

tldr; a super long essay lumping together Nazism, Communism and religious fundamentalism (I didn't read it, just the comments). The comment I linked notes how liberal democracies have also killed a huge number of people (in the commenter's home country, in the name of purging communism):

The United States presented liberal democracy as a universal emancipatory framework while materially supporting anti-communist purges in my country during what is often called the “Jakarta Method". Between 500,000 and 1 million people were killed in 1965–66, with encouragement and intelligence support from Western powers. Variations of this model were later replicated in parts of Latin America.

The OP's response is to try to explain how that wasn't real "liberal democracy" and to try to reframe the discussion. Another commenter is even more direct, they complain half the sources listed are Marxist.

A bit bold to unqualifiedly recommend a list of thinkers of which ~half were Marxists, on the topic of ideological fanaticism causing great harms.

I think it's a bit bold of this commenter to ignore the empirical facts cited about how many people 'liberal democracies' have killed, and to exclude sources simply for challenging their ideology.

Just another reminder of how the EA movement is full of right wing thinking and how most of it hasn't considered even the most basic of leftist thought.

[–] o7___o7@awful.systems 15 points 1 week ago* (last edited 1 week ago) (7 children)

https://harpers.org/archive/2026/03/childs-play-sam-kriss-ai-startup-roy-lee/

A slice of life article about the futility of "highly agentic" people, their sperm races, and Donald Boat. Scott A makes a cameo where he dispenses crackers.

Edit: reddit sneerclub found that the author has a #metoo history, alas

[–] saucerwizard@awful.systems 12 points 1 week ago (6 children)

Absolutely demented piece.

[–] nfultz@awful.systems 14 points 2 weeks ago (3 children)

AI Jobs Apocalypse is Here | UnHerd h/t naked capitalism

feels a bit critihype, idk

So, what happens to American politics when the script is flipped, and we enter a new era of white-collar precarity? We can look back to the recent past and recall that, after the 2008 recession, it was young men who got especially angry. Downwardly mobile urban millennials drifted toward radical Left-wing politics, including the Occupy Wall Street movement and both Sanders campaigns, myself included. In the current decade, the Gen-Z men shut out by elite institutions often join their grandfathers and turn toward MAGA, or worse, into Groypers. But an AI-driven white-collar apocalypse has no equivalent of the American Rescue Plan around the corner, and it will move faster through institutions because the people experiencing it — journalists, lawyers, policy staffers — are the ones who produce political legitimacy itself. When that class loses faith in the system’s stability, the political climate may quickly become volatile.

As I get older I am more and more disturbed by the selective memory of the GFC; no mention of the Tea Party or the fallout from the austerity measures they pushed in the middle of the country; no mention of how the bailout saved banks, not homes. The Tea Party won, not Occupy, and the current government is doing things beyond the Kochs' wildest dreams.

If and when there is a crash, these dumbass CEOs deserve /nothing/. Let them lose their vacation houses. And, maybe grow some balls and send the fraudsters to jail where they belong.

sigh

[–] sc_griffith@awful.systems 17 points 2 weeks ago* (last edited 2 weeks ago)

unherd is a fash publication. to me this comes across as an AI take-ified rewrite of a 1994 luttwak essay i read recently, an endorsement of a revival of italian style fascism: https://www.lrb.co.uk/the-paper/v16/n07/edward-luttwak/why-fascism-is-the-wave-of-the-future

[–] mirrorwitch@awful.systems 14 points 2 weeks ago (1 children)

Semi-OT but a blog post where I'm just kinda gawking at the technology that saved my daughter's life and the absurdity of comparing it to what now first comes to mind when we talk of "tech".

[–] Soyweiser@awful.systems 13 points 2 weeks ago* (last edited 2 weeks ago) (12 children)

AI bros do new experiments in making themselves even stupider. Going from 'explain what you did but dumb it down for me and my degraded attention span' to 'just make a simplified cartoon out of it'.

Proud of not understanding what is going on. None of these people could hack the Gibson.

E: If they all hate programming so much, perhaps a change of job is in order. Sure, it might not pay as much, but it might make them happier.

[–] CinnasVerses@awful.systems 13 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

Posting for archival and indexing purposes: u/GorillasAreForEating found an Urbit post titled "Quis cancellat ipsos cancellores?" which complains that Aella takes it on herself to exclude people and movements from the broader LessWrong/Effective Altruist community. The poster says that Aella was the anonymous person who pushed CFAR to finally do something about Brent Dill, because she was roommates with "Persephone." He or she does not quite say that any of the accusations were untrue, just that "an anonymous, unverified report" says that some details were changed by an editor, and that her Medium post was of "dramatically lower fidelity, but higher memetic virulence" than Brent's buddies investigating him behind closed doors (Dill posted about domming a 16-year-old who he met when she was 15 and he was ~27). The poster accuses Aella of using substances and BDSM games to blur the line of consent.

The post names Joscha Bach as someone Aella tried to exclude. We recently talked about Bach's attempt to get Jeffrey Epstein to fund an event where our friends would speak.

Often, people in messed-up situations point at a very similar situation and say "at least we are not like that." I hope that all of these people find friends who can give them the perspective that none of these communities are healthy or just. Whether you are into bull sessions or polyamory, there are healthy communities to explore in any medium-sized city!

[–] mirrorwitch@awful.systems 12 points 2 weeks ago* (last edited 2 weeks ago)

OpenSlopware documents FOSS that sold out to LLMs. is there an opposite of it, a hall of fame to list software that has unambiguously and vocally rejected LLM code like the Zig programming language?

[–] istewart@awful.systems 12 points 1 week ago (1 children)

Somebody vibe-coded an init system/service manager written in Emacs Lisp, seemingly as a form of criticism through performance art, and wrote this screed in the repo describing why they detest AI coding practices: https://github.com/emacs-os/el-init/blob/master/RETROSPECTIVE.md

But then they include this choice bit:

All in all, this software is planned to be released to MELPA because there is nothing else quite like it for Emacs as far as service supervision goes. It is actually useful -- for tinkerers, init hackers, or regular users who just want to supervise userland processes. Bugs reported are planned to be hopefully squashed, as time permits.

Why shit up the package distribution service if you know it's badly-coded software that you don't actually trust? 90% of the AI-coding cleanup work is going to be purging shit like this from services like npm and pip, so why shit on Emacs users too? Pretty much undermines what little good might come out of the whole thing, IMO.

[–] fullsquare@awful.systems 12 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

i've collided with an article* https://harshanu.space/en/tech/ccc-vs-gcc/

you might be wondering why it doesn't highlight that it fails to compile linux kernel, or why it states that using pieces of gcc where vibecc fails is "fair", or why it neglects to say that failing linker means it's not useful in any way, or why just relying on "no errors" isn't enough when it's already known that vibecc will happily eat invalid c. it's explained by:

Disclaimer

Part of this work was assisted by AI. The Python scripts used to generate benchmark results and graphs were written with AI assistance. The benchmark design, test execution, analysis and writing were done by a human with AI helping where needed.

even with all this slant, by their own vibecoded benchmark, vibecc is still complete dogshit, with sqlite compiled by it being up to 150,000x slower in some cases

[–] lagrangeinterpolator@awful.systems 14 points 2 weeks ago (2 children)

This is why CCC being able to compile real C code at all is noteworthy. But it also explains why the output quality is far from what GCC produces. Building a compiler that parses C correctly is one thing. Building one that produces fast and efficient machine code is a completely different challenge.

Every single one of these failures is waved away because supposedly it's impressive that the AI can do this at all. Do they not realize the obvious problem with this argument? The AI has been trained on all the source code that Anthropic could get their grubby hands on! This includes GCC and clang and everything remotely resembling a C compiler! If I took every C compiler in existence, shoved them in a blender, and spent $20k on electricity blending them until the resulting slurry passed my test cases, should I be surprised or impressed that I got a shitty C compiler? If an actual person wrote this code, they would be justifiably mocked (or they're a student trying to learn by doing, and LLMs do not learn by doing). But AI gets a free pass because it's impressive that the slop can come in larger quantities now, I guess. These Models Will Improve. These Issues Will Get Fixed.

[–] BlueMonday1984@awful.systems 12 points 2 weeks ago

Baldur Bjarnason gives his thoughts on the software job market, predicting a collapse regardless of how AI shakes out:

If you model the impact of working LLM coding tools (big increase in productivity, little downside) where the bottlenecks are largely outside of coding, increases in coding automation mostly just reduce the need for labour. I.e. 10x increase means you need 10x fewer coders, collapsing the job market

If you model the impact of working LLM coding tools with no bottlenecks, then the increase in productivity massively increases the supply of undifferentiated software and the prices you can charge for any software drops through the floor, collapsing the job market

If the models increase output but are flawed, as in they produce too many defects or have major quality issues, Akerlof's market for lemons kicks in, bad products drive out good, value of software in the market heads south, collapsing the job market

If the model impact is largely fictitious, meaning this is all a scam and the perceived benefit is just a clusterfuck of cognitive hazards, then the financial bubble pop will be devastating, tech as an industry will largely be destroyed, and trust in software will be zero, collapsing the job market

I can only think of a few major offsetting forces:

  • If the EU invests in replacing US software, bolstering the EU job market.
  • China might have substantial unfulfilled domestic demand for software, propping up their job market
  • Companies might find that declining software quality harms their bottom-line, leading to a Y2K-style investment in fixing their software stacks

But those don't seem likely to do more than partially offset the decline. Kind of hoping I'm missing something

[–] saucerwizard@awful.systems 11 points 1 week ago (1 children)

Caught this over on the subreddit and I figured it deserved a repost.

Nothing to see here folks, just Rationalists casually hanging out with major Tempel ov Blood figures. Just harmless nerds doing fun nerd things!

[–] TrashGoblin@awful.systems 12 points 1 week ago (1 children)

Notes that I thought about related to this, just some context:

  1. Joshua Sutter is the son of the owner of the former Southern Patriot Shop in South Carolina. He founded the Tempel ov Blood chapter of the Order of Nine Angles, a Neo-Nazi Satanist group. He was outed in 2021 as having been a federal informant since 2005, which is to say he still does the same Nazi shit, but gets paid by the FBI to do it.

  2. One of the core practices of the O9A is entryism into other groups, especially other cultish ones. In that context, you'd kind of be surprised to not see O9A people in Rat circles.

[–] sailor_sega_saturn@awful.systems 12 points 2 weeks ago

Apparently this sort of machine learning training pitfall, which I learned about a decade ago in an undergraduate-level class that I was about halfway paying attention to at a party school, is now evidence of the impending AI apocalypse.
