this post was submitted on 23 Feb 2026
9 points (100.0% liked)

TechTakes

2454 readers

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this. If you're wondering why this went up late, I was doing other shit)

all 42 comments
[–] nightsky@awful.systems 16 points 5 hours ago (4 children)

404 Media: Meta Director of AI Safety Allows AI Agent to Accidentally Delete Her Inbox

Yue also shared screenshots of her WhatsApp chat with the OpenClaw agent, where she implores it to “not do that,” “stop, don’t do anything,” and “STOP OPENCLAW.”

This is very serious computing and we must all take it very seriously.

[–] lurker@awful.systems 6 points 3 hours ago

this is like the fourth time an AI agent has completely deleted something important (I remember an article about an AI deleting all of a scientist’s research). How many more times does it have to happen before people stop using AI to look after something important???

[–] BlueMonday1984@awful.systems 7 points 5 hours ago

The promptfondlers did it, they made a computer which doesn't do what you tell it to do

[–] lagrangeinterpolator@awful.systems 5 points 4 hours ago* (last edited 4 hours ago)

Maybe I should apply to be a director of AI safety at Meta. I know one safety measure that works: don't use AI.

[–] istewart@awful.systems 4 points 4 hours ago

What, Ctrl-C wouldn't work? kill -9?

[–] Architeuthis@awful.systems 8 points 5 hours ago* (last edited 5 hours ago) (1 children)
[–] CinnasVerses@awful.systems 4 points 2 hours ago* (last edited 2 hours ago)

Usually AI boosters are claiming that soon most humans will be economically useless, not that it would be terrible if there were fewer white people. One reason people avoid having children is that they feel economically insecure and doubt there will be respected places in society for their offspring.

Dwarkesh Patel is the only other Indian American I have seen who is friends with our friends.

[–] lurker@awful.systems 3 points 4 hours ago* (last edited 3 hours ago)

sharing this channel’s posts is the equivalent of shooting fish in a barrel, but http://youtube.com/post/UgkxoSpDpLNEr9WawVXnl5Mlw4NeQ6-XsLjl this really just feels like an excuse to repost that METR graph. also wtf is the graph on top

[–] nfultz@awful.systems 9 points 8 hours ago (2 children)

From fellow traveler stats consultant John Mount:

https://johnmount.github.io/mzlabs/JMWriting/WeAreCookedLLMs.html

Somehow he manages to touch on so many different subplots, a shotgun sneer instead of snipe

if “tech-bro” plus a LLM is a “100x engineer”, then “bro” isn’t needed for much longer as the LLM alone must be a “99x engineer.” However, I don’t think “bro plus” is often really a 100x engineer, and the LLM alone isn’t a 99x engineer. However, “bro plus” may outlast their peers who make the mistake of trying to do the actual work in place of talking LLMs up.

The above may or may not be the case. But if it is, then it is the LLM-bros (which include non-technologists, con artists, financiers, men and women) that are destroying everything - not the LLMs.

The problem with this iteration is the full court press of finance and technology. The major players are using financing to dump results at a price way below production costs. This isn't charity, it is to demoralize and kill competition.

claiming "after we take over the world we will consider adding Universal Basic Income (UBI)". The LLM bros already have a lot of the money, and they are not even rehearsing diverting it into basic income now. Why does one believe they would do that when they also have all of the power?

You don't have to hand it to Altman, but he did fund the largest UBI experiment through Open Research with his ill-gotten gains. OTOH, one interpretation of that data was that UBI "decreases the labor supply", which was then used directly as an argument against it.

Any worry about scope or power of LLMs is fed back as an alignment threat so dire that only the current LLM leaders should be allowed to continue work (inviting regulatory capture). Any claim the LLMs don't work is fed back as "you are prompting it wrong"

Orbital deployment makes all of radiation tolerance, connectivity, power, maintenance, and heat dissipation much harder and much more expensive. We are still at a time where putting an oven or air-frier in space is considered noteworthy (China 2025, NASA 2019 ref).

air friers IN SPACE ha

I am more worried about the LLM-bros and their auto-catalytic money doomsday machine than about the LLMs themselves.

100% - ACMDM is a nice turn of phrase as well.

[–] istewart@awful.systems 1 points 4 minutes ago

if a Franciscan priest gets really good at basketball, is he considered an air friar

[–] nfultz@awful.systems 7 points 8 hours ago

https://www.adexchanger.com/ai/one-chatbots-journey-to-introducing-ads-that-dont-suck/

Often, the ad loads before the chatbot’s query response, said Baird, and Koah’s goal is to “deliver such a relevant result to the user that they just click on the ad before the result loads.”

LLMs' bad performance and inefficiency is a feature to /someone/. And chatbots are themselves not immune to enshittification.

[–] o7___o7@awful.systems 4 points 7 hours ago* (last edited 7 hours ago)

Looks like they're gonna ruin BattleBots with AI somehow. Bright Data appear to be web scraping bastards as a service.

I'll never forgive them for what they did to the 80 lb slab of rotating steel.

[–] V0ldek@awful.systems 7 points 9 hours ago (2 children)
[–] BurgersMcSlopshot@awful.systems 3 points 6 hours ago (2 children)

stupid question I probably asked already in the past: dafuq is a ladybird?

[–] V0ldek@awful.systems 3 points 1 hour ago

Imagine if a browser was fascist

[–] irelephant@lemmy.dbzer0.com 3 points 5 hours ago

A WIP browser implementation.

[–] o7___o7@awful.systems 4 points 9 hours ago

Imagine shaving a racist yak

[–] Soyweiser@awful.systems 4 points 14 hours ago* (last edited 13 hours ago) (4 children)

Article on the Ick generated by AI shit, from the perspective of a woman: "They Built Stepford AI and Called It “Agentic”". It talks about how women adopt it less, and gives a reason why this might be so.

On a personal note (I'm a man for the record), while I normally get the uncanny valley effect a lot less than normal people, I do notice it a lot with AI generated people, really odd experience that.

(Author does seem to be a pro AI person however).

[–] jaschop@awful.systems 4 points 6 hours ago

I started to raise my eyebrows when the Second Brain got lumped into the AI wife pile.

Bro, I just write shit down. I am in fact taking responsibility for my schedule and handling my emotions without relying on external support. Am I turning to (checks notes...) the notebook industry for a technological replacement wife?

I mean some valid points, and some of it might explain the gendered AI adoption gap, but too much generalization.

[–] ebu@awful.systems 8 points 9 hours ago* (last edited 9 hours ago)

some parts intriguing, but mostly disappointing. several chunks of the text felt AI-generated. no fewer than 34 "it's not X but Y"'s, by my count, and the out-of-nowhere typographies / tables definitely smell of slop. and obviously, the images definitely were. (can't even be bothered to fix the typos in photoshop? why make a fake poster for The Stepford Wives??)

some notes:

  • i'm not entirely convinced the revulsion response in women can be explained entirely as a reflective recognition of the subjected female self. maybe it's also because AI art is entirely bland and/or fuck ugly

  • some reproductive labors, in the Marxist-feminist sense, are getting subsumed by AI, sure, but they're largely the ones that already got subsumed by the computer. we had pagers with scheduling and appointment reminders in the 80's. about the only thing an LLM can do that our previous tech couldn't is the customer service / "emotional labor" part, albeit poorly. and the other labors are non-optional -- my laundry actually does have to go in the dryer, and no matter how many plastic pictures of clean clothes i generate, they can't actually go in my closet.

  • speaking of, the article appears to use a mangled paraphrase of that Joanna Maciejewska tweet ("I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes"), and then attributes it to "AI enthusiasts" (ew).

  • the article notes that reproductive labor is coded feminine and that the assistants that (attempt to) do this labor are designed female, with feminine voices and affects, despite being, y'know, robots. and not women. the next step to me would be to note that this isn't just reflecting the subjectification of the female and the designation of women to a particular labor class, but actually aiding to construct and reproduce the subject of "female" itself too. maybe throw some Butler in there. but we just breeze right past this. no third-wave? i don't see any feminist arguments past the 80's in here

  • the typography of wives is total bullshit. "The Open-Source Wife" fuuuuucccckk offfff. but. BUT. i do think there is something correct in there about xAI/Grok/Ani basically being the modern adaptation of Vivian James

  • there's an argument that obviously used to be about AI art, and got transmogrified into a nonsense concept, bordering on colorless green ideas.

Women’s labor is being extracted, automated, and sold back without credit.

  • the nonsense below it about "alignment" clearly intends to imply that the machines are only faking being our friends / submissive wives(!!1!).

  • but this is okay because women are uniquely suited to interface with AI! this is because (all) women (innately) communicate with the goal of building relationships (female) instead of the utilitarian (manly) execution of transactions (male). there's an odd essentialist undercurrent that's not really being challenged here, despite the fact that that would render "female robots" impossible

  • "outsource-maxxing" fuuuuuucuk youuuuuuu

  • the conclusion of the article is basically "women are uniquely capable of interacting with (female) AI because they've BEEN the female AI", with a call-to-action for women to basically... well. resume that role, except now using the AI as your girlbestfriend.

[–] corbin@awful.systems 10 points 10 hours ago

This is ahistorical slop. Previously, on Lobsters, I explained the biggest tell here: the overuse and misuse of em-dashes. There's also some bad sentence structure and possibly-confabulated citations to unnamed papers. The images can't be trusted.

The worst problem here is that the article believes that history starts about halfway through the Industrial Revolution. Computing was not gendered prior to the Harvard Computers in the 1880s. Prior to the Industrial Revolution, women spent most of their time on textiles and were compensated for their time and labor; there is a series from Bret Devereaux on the details in ancient and pre-industrial Europe, and a decent summary on /r/AskHistorians of the industrial transition from about 1760 to 1860. The article suggests that the Victorian way of treating women as nannies and housewives was historically universal. Claude identifies as non-binary (or, rather, Claude's authors told it to identify as such) but uses male pronouns when pressed into a binary theory. The Creation of Patriarchy is a real book but only describes the origins of masculine Abrahamic beliefs rather than some sort of unifying principle, and is easily disproven in its universality by looking at contemporary ancient societies like Sparta or the Iroquois Confederation; there's also a Devereaux series on Sparta.

The author's gotta be one of the clearest demonstrations of critihype seen yet. She is selling an anthology on Amazon called How Not To Use AI, which presumably she forgot to consult prior to prompting this essay.

[–] gerikson@awful.systems 8 points 13 hours ago (1 children)

Interesting link but it moves into AI hype near the end.

[–] Soyweiser@awful.systems 6 points 13 hours ago

Yeah was quite disappointed by that, also the anthropomorphization of AI by the end.

[–] nfultz@awful.systems 18 points 22 hours ago (1 children)

https://futurism.com/artificial-intelligence/rentahuman-musk-ai h/t naked capitalism

Liteplo is the genius behind RentAHuman, an online marketplace where humans can lease out their bodies to autonomous AI agents.

gah

Last week, Wired writer Reece Rogers offered his body up to the platform, finding that most of the jobs offered were scams to promote other AI startups.

lmao of course they were

[–] lurker@awful.systems 8 points 20 hours ago (2 children)

it’s always the Elon Musk fans, isn’t it.

and on the topic of Futurism articles on Elon Musk: https://futurism.com/future-society/court-trouble-jury-hates-elon-musk

one word: LMFAOOOO

[–] pikesley@mastodon.me.uk 5 points 13 hours ago

@lurker @nfultz

> If it was any other defendant and a juror said “I hate that guy and he has no moral compass,” Broome argued, that juror would be dismissed

I mean, maybe, but this is an Objectively Correct Opinion

[–] V0ldek@awful.systems 10 points 18 hours ago (1 children)

Forget who said it (I think e.w. niedermeyer) but if you were a true Musk Hater you would lie your way into that jury no matter the cost

[–] jonhendry@iosdev.space 7 points 17 hours ago (2 children)

@V0ldek @lurker

You’d need to have a clean social media history with no negative comments about Musk, and probably have to avoid such comments after the trial, lest Musk’s lawyers get wind of it.

[–] V0ldek@awful.systems 6 points 17 hours ago (1 children)

It takes dedication, but the payoff is too big to not try

[–] antifuchs@awful.systems 5 points 13 hours ago* (last edited 13 hours ago) (2 children)

Not… sneer? What is this?!

[–] V0ldek@awful.systems 5 points 9 hours ago

Nuke your socials for the trial

Hardest choices, strongest wills, etc.

Imagine the book you could write at the end

[–] o7___o7@awful.systems 4 points 10 hours ago* (last edited 8 hours ago)

Revolutionary Sneerpuku

[–] Soyweiser@awful.systems 3 points 16 hours ago

Well, just don't use your real name online.

[–] BlueMonday1984@awful.systems 18 points 1 day ago (1 children)

Starting this Stubsack off with one programmer's testimony on the effects of the LLM rot:

For the record, I work at a software company that employs ~10k developers.

Before LLMs, I'd encounter [software engineers that seem completely useless or lacking in basic knowledge] a couple of times a month, but I interact with a lot of engineers, specifically the ones that need help or are new at the company or industry at large, so it's a selected sample. Even the most inexperienced ones are willing and able to learn with some guidance.

After LLMs, there's been a significant uptick, and these new ones are grossly incompetent, incurious, impatient, and behave like addicts if their supply of tokens is at all interrupted. If they run out of prompt credits, it's an emergency because they claim they can't do any work at all. They can't even explain the architecture of what they are making anymore, and can't even file tickets or send emails without an LLM writing them, and they certainly lack any kind of reading comprehension.

It's bleak and depressing, and makes me want to quit the industry altogether.

[–] BurgersMcSlopshot@awful.systems 5 points 11 hours ago

Jesus fucking christ I need to invent a time machine so I can go back and make my past self be an electrician instead because this. Commercial software engineering has absolutely been captured by some of the silliest people and trends out there.

[–] lurker@awful.systems 5 points 1 day ago (1 children)

the METR graph has gotten weird: https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/ the 50% success rate graph went from 6 hours to 14 hours, but the 80% success rate graph only went from 55 minutes to 1 hour and 3 minutes. I have an itch that it's a fluke or outlier, but it's also very possible that LLM coding's just weird like that
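The size of that gap can be checked with quick arithmetic (figures taken from the comment above; this is just a ratio comparison, not a claim about METR's underlying data quality):

```python
# Growth of METR's reported "time horizons", per the numbers quoted above:
# the 50% success horizon went 6 h -> 14 h, while the 80% horizon only
# went 55 min -> 63 min.
growth_50 = 14 / 6    # ~2.33x
growth_80 = 63 / 55   # ~1.15x
print(round(growth_50 / growth_80, 2))  # -> 2.04: the 50% horizon grew ~2x faster
```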

[–] scruiser@awful.systems 8 points 22 hours ago* (last edited 22 hours ago) (3 children)

You're giving them too much credit. The entire methodology of "determine how long it takes humans to do a task and use that as a proxy for difficulty" was somewhat abstract and questionable in the first place, but with good rigorous implementation, it might have still been worthwhile.

However, their actual methodology is awful. Most of their tasks have only 3 or so human attempts to establish a baseline (drawn from a relatively small pool of baseliners), and for longer tasks they went with a guesstimate of the task completion time entirely. The error bars they show are just for the model attempting the task (and those are already absurdly big, especially for this most recent jump); if you added error bars accounting for variability in the task baseline itself, they would get even bigger.

This blog goes into more details explaining the nuances of the problems with their methodology: https://arachnemag.substack.com/p/the-metr-graph-is-hot-garbage

To give a simple example: if the numerous problems resulted in a systematic bias in task estimation, linear improvement could easily look exponential. Suppose they had 5 tasks with true baselines (putting aside whether "true" is even meaningful given the questions about the methodology's validity) of 15 minutes, 30 minutes, 45 minutes, 1 hour, and 1 hour 15 minutes respectively, but flaws with the human baseliners (for example, lacking specialized skills for longer tasks, phoning it in because they are paid by the hour, METR guesstimating the task time) inflated their numbers for those 5 tasks to 15 minutes, 1 hour, 2 hours, 4 hours, and 8 hours. Then successive improvements reaching 50% success on each task in turn would look exponential, even though they are actually linear improvements.
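The biased-baseline effect is easy to see numerically. A toy sketch using the hypothetical task times from the comment above (illustrative numbers only, not METR's actual data):

```python
# Five tasks whose true human baselines grow linearly, but whose measured
# baselines are inflated by a bias that grows with task length.
true_minutes = [15, 30, 45, 60, 75]          # arithmetic: +15 min each step
measured_minutes = [15, 60, 120, 240, 480]   # what flawed baselining reports

# A model improving linearly clears one more task per step, so its
# reported "time horizon" is the measured baseline of the hardest task
# it clears -- which doubles at every step after the first.
ratios = [measured_minutes[i + 1] / measured_minutes[i]
          for i in range(len(measured_minutes) - 1)]
print(ratios)  # [4.0, 2.0, 2.0, 2.0] -- plots as exponential growth

diffs = [true_minutes[i + 1] - true_minutes[i]
         for i in range(len(true_minutes) - 1)]
print(diffs)   # [15, 15, 15, 15] -- the underlying improvement is linear
```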

METR maybe deserves a tiny bit of credit for trying something even vaguely related to practically meaningful tasks (compared to all the completely irrelevant bs benchmarks that would be worthless even if they were accurate). But I wouldn't give them any more credit than that; it's just that the bar is so low.

[–] JFranek@awful.systems 11 points 20 hours ago (1 children)

Broke: The METR studies are the best research on impacts of AI productivity available today.

Woke: The METR studies are hot garbage.

Bespoke: Both. It's both.

[–] scruiser@awful.systems 2 points 1 hour ago

That's a great summary and an accurate indictment of the "study" of LLMs.

[–] scruiser@awful.systems 11 points 22 hours ago* (last edited 22 hours ago)

Doing what METR tried to do right would in fact be really expensive and hard, but for something that the fate of the world allegedly depends on (according to both boosters and doomers) you'd think they would manage to find the money for it. But the LLM companies don't actually want accurate numbers, they want hype.

[–] lurker@awful.systems 7 points 21 hours ago* (last edited 19 hours ago)

oh yeah I 100% agree that their methodology is flawed, and that blog does a pretty good job of outlining the issues. I just thought the absolutely huge gap was both interesting and funny. Their huge error bars are not a good sign either; between those and the gap it really feels like someone screwed up