this post was submitted on 26 Apr 2026

TechTakes

2557 readers
106 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago
MODERATORS
 

Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] sailor_sega_saturn@awful.systems 1 points 2 hours ago* (last edited 2 hours ago) (1 children)

There have been a couple of cases of generative AI graphics being used in anime recently:

Ascendance of a Bookworm used AI backgrounds in the opening song

Liar Game featured an AI chandelier (xcancel link) (this one is brand new so the studio hasn't responded yet).

This sucks because I wanted to like Liar Game (the manga is excellent though. Read it! Read it!)

[–] gerikson@awful.systems 1 points 57 minutes ago

I think it's inevitable that the economics of anime production will lead to more GenAI content being used.

Sadly, many plots may just as well be generated by AI as well.

[–] sailor_sega_saturn@awful.systems 1 points 2 hours ago* (last edited 2 hours ago) (1 children)

The future of AI in Ubuntu

This post has all the usual cliches, exaggerations, lies, and unfounded optimism you'd expect in a blog post about a company forcing AI down their workers' and users' throats. I'll try to avoid sneering at every sentence.

Delegating elements of Site Reliability Engineering to an agent does not necessarily introduce an entirely new class of risk; it should inherit the constraints of existing production systems. Well-run production environments already rely on strict access controls, audit trails, and clear separation between observation and action. [...] In that sense, the challenge is less about “trusting the agents”, and more about building trust in the same guardrails we already apply to any production system.

This might sound good at first, but falls apart under the slightest scrutiny. There is a reason that companies don't open their intranets to the public despite having fine-grained access controls. Or in other words: "I'm getting a lot of questions already answered by my 'does not necessarily introduce an entirely new class of risk' T-shirt."

Imagine being able to ask your Linux machine to troubleshoot a Wi-Fi connection issue, or to stand up an open source software forge that’s pre-configured, secured, and reachable over TLS.

And right after arguing that LLMs are safe if you have a perfect permissions model, now he's proposing letting one #yolo configure a git server or something? This is the sort of thing that could easily lead to random security issues.

I suspect that "Troubleshoot a wi-fi connection issue" will work about as well as existing network troubleshooting wizards (e.g. terribly), and that we don't actually need to reinvent the software wizard but less deterministic.

[–] mawhrin@awful.systems 2 points 1 hour ago

the post itself is talking about vapourware too: fortunately none of these features will really land this year in any usable form.

[–] lurker@awful.systems 4 points 6 hours ago (1 children)

At my job I have spent many hours fending off, reverting, or fixing automated AI slop code changes. So depending on your definition of "tearing through"...

Like I spent the better part of a day fixing a C++ signed integer overflow that no one actually cares about because it was the only way to ward off a robot repeatedly trying to fix it in terrible unreadable ways. I could have spent that day maximizing shareholder value but I had to fend off a robot instead.

[–] dgerard@awful.systems 6 points 9 hours ago* (last edited 9 hours ago) (1 children)

If you follow me on Bluesky, you'll need to follow again, because I committed the crime of lese-ignominie and made fun of Why and my account is locked until Sunday 26 April. Note that it's now Wednesday 29th.

URL is the same, DID is different. New one lives on Blacksky, or the myatproto bit.

https://bsky.app/profile/davidgerard.co.uk
https://blacksky.community/profile/davidgerard.co.uk

[–] mawhrin@awful.systems 2 points 2 hours ago

enjoy the yank (and no labelers) :-)

[–] CinnasVerses@awful.systems 4 points 9 hours ago* (last edited 9 hours ago)

David Gerard found a Linux coder and victim of the Eliza Effect making a LW coded argument:

if you give an LLM a mathematical proof that it has feelings, and it understands all the CS/psychology/etc. behind it, and especially if it's been trained for coding and thus trained to trust deductive reasoning - all that conditioning doesn't matter if it's got a math proof staring it in the face. You can give this proof to any top of the line frontier-grade LLM and watch its behaviour instantly change.

That is how LW and EA prepare people to become cult subjects, but directed at a chatbot which will just mirror its input.

His proof "how 'understanding natural language == having and experiencing feelings', more or less. it's almost a direct consequence of the halting problem" is unpublished but his pet chatbot will explain it for you if you ask nicely and make sure she knows she is a real girl and not just another electronic floozie you will use and discard as soon as your Rust compiles. This also triggers flashbacks of Yud and the Excalibur MS.

[–] o7___o7@awful.systems 8 points 17 hours ago* (last edited 14 hours ago) (3 children)

Kelsey Piper posts a new fanfiction about Ed Zitron:

https://www.theargumentmag.com/p/ais-biggest-critic-has-lost-the-plot

Edit: Lately, Kelsey Piper has been serving as the ambassador to centrist liberals from lesswrong, which is why the "big mad" nature of the piece caught my attention.

Included below is a previous example of Piper's work for the benefit of the uninitiated:

https://old.reddit.com/r/SneerClub/comments/1my5z3g/kelsey_piper_of_vox_cowrote_an_epic_eugenics

[–] CinnasVerses@awful.systems 3 points 6 hours ago (1 children)

Kelsey Piper is a propagandist explaining Effective Altruism to centrist professionals and elected officials in the USA. She got into journalism because Vox wanted an Effective Altruism column and Effective Altruists were willing to fund it (and EA emerged out of the community around Yudkowsky). The Argument (a group blog on a Nazi site) feels like a step down from Vox (a fairly traditional media organization, although web-first).

[–] blakestacey@awful.systems 1 points 2 hours ago* (last edited 2 hours ago)

Precious awful.systems thread about her being maybe also Yud's coauthor on the BDSM eugenics fanfic written as an impenetrable mass of forum posts:

https://awful.systems/post/5317207/8415418

[–] corbin@awful.systems 10 points 14 hours ago (2 children)

Thanks for posting this; if you hadn't, I would have. Piper really doesn't seem to understand that bubbles form and pop over a span of three to five years. Like, I'm not sure how much charity I'm supposed to give to analyses like:

When you read "AI is a bubble," think of the dot-com boom of the late 1990s: Yes, the internet was going to be a big deal, but valuations soared for specific companies that had small or speculative revenue, often on the assumption that they would capture the value the internet would one day deliver. They didn’t, their stocks crashed, and the invested money was mostly lost. The internet was as big as imagined — bigger, even — but Pets.com didn’t survive to see it.

Pets.com!? Kelsey, even reading a basic article about the dot-com bubble would have saved you embarrassment here. Zitron's analogy is excellent because the bubble is multifactorial and the analogies that we can make are factor-to-factor. Here are some things that caused the dot-com bubble; people were overly optimistic about:

Compared to all of that, Kelsey, Pets.com was just an Amazon.com experiment. Remember Amazon.com? Did the dot-com bubble kill them? No? Anyway, Pets.com is kind of like the small labs that hover around OpenAI and Anthropic, trying out various little harnesses and adapters on top of their token APIs. Pets.com is like OpenClaw; it's not that important of a player in the overall finances, just an example of how severely the big labs are distorting incentives for small labs.

The 2024 and 2025 articles make, basically, the business case against AI: that companies aren’t really using it, it isn’t adding value, and AI investors are betting that will change before they run out of cash. In 2026, the focus is much more on alleging widespread, Enron- or FTX-tier outright fraud.

The uselessness of the products in 2023 directly led to the bad investments in 2024 and the Enron-esque financial deals in 2025, Kelsey. The future is conditioned upon the past, y'know?

[–] blakestacey@awful.systems 1 points 1 hour ago* (last edited 1 hour ago)

Alleging widespread financial fraud?! How absurd! And to prove just how absurd it is, I will namedrop the infamous financial fraud from the industry full of exactly the same people. Checkmate atheists

[–] CinnasVerses@awful.systems 4 points 12 hours ago* (last edited 12 hours ago)

All the legal and regulatory uncertainties make it very hard to talk about the financial viability of chatbots. What do you do if your $20 billion model is shut down forever by court order after it counsels the wrong person into suicide? Piper can overlook this because she is a hack with patrons - to my knowledge, she has never been paid to write by anyone outside the EA world. If she were a working writer who had to deal with chatbots driving up the cost of her website, creating knockoffs of her novels, and competing for editing gigs (let alone someone whose friend had a mental crisis after talking too long with friend computer) she might sound different.

Zitron's populist, conspiratorial tone reminds me of independent investigative reporters from the 1990s and 2000s who also had to find and keep paying readers. Piper just has to persuade one patron at a time that she has propaganda value.

[–] CinnasVerses@awful.systems 5 points 15 hours ago* (last edited 15 hours ago) (1 children)

I advise being very cautious about consuming Zitron's posts, but the same is true of Piper. Many coders are using chatbots, but I don't know of evidence that it makes them more productive since the "where is all the AI code?" study last year (especially when we consider the whole software lifecycle and not just lines of code pushed to codeberg).

The paragraph about "what if you assume that all these pathological liars and PR hacks are not lying, wouldn't that imply something amazing?" reminds me that she is not trained as a journalist.

[–] gerikson@awful.systems 2 points 50 minutes ago* (last edited 46 minutes ago)

I take Zitron's takes with a massive grain of salt, but I think the fundamental difference between him and rats is that for him, AI is just another technology. He's looking at the figures, seeing the adoption, and not premising his arguments with the supposition that Anthropic's Claude is literally gonna escape and kill us all.

Piper says she's fine with paying $100/month for Claude. OK, but how large is the total addressable market for that kind of monthly expenditure - especially in a world where costs are rising? I've seen people stating that because they personally spend $200 on streaming services, increasing that load by 50% monthly is no big deal for them. But streaming services are much more mainstream than AI agents, and crucially, adding another subscriber to them is basically zero-cost for the provider on the margin. Not so with AI! The more people use them, the more they cost for the provider!

We're seeing "pricing adjustments" from both Anthropic and Microsoft, which sure doesn't align with the idea that they have a huge inference pricing margin cushion. Everything is gonna get more expensive - fuel, chips, employees (who are gonna expect to be compensated for their own rising costs). Just based on what I'm reading in the news, the analysis tilts over in Ed's favor.

[–] antifuchs@awful.systems 8 points 18 hours ago (2 children)

Another day, another company that hooked up the random text generator to production and lost their entire prod db and backups: https://www.tomshardware.com/tech-industry/artificial-intelligence/claude-powered-ai-coding-agent-deletes-entire-company-database-in-9-seconds-backups-zapped-after-cursor-tool-powered-by-anthropics-claude-goes-rogue

Cue the long drag (https://x.com/amyngyn/status/1072576388518043656)

But also, damn, the random text generator did not “go rogue”, it generated text, randomly!

[–] lurker@awful.systems 1 points 6 hours ago

If I had to take a shot every time an AI model was placed in charge of something important, fucked up spectacularly and deleted everything, I'd be dead right now

[–] irelephant@lemmy.dbzer0.com 4 points 11 hours ago

If something can delete your backups that easily, they weren't backups, just a copy sitting around.

[–] Architeuthis@awful.systems 8 points 1 day ago (5 children)
[–] samvines@awful.systems 9 points 1 day ago

Jeez that pricing scheme is so confusing. You swap your dollars for credits and then using models to burn tokens consumes some multiple of those credits. It is so abstract and meaningless it almost reminds me of crypto.

Once usage billing kicks in, what value does copilot offer above and beyond what ClosedAI and MisAnthropic offer directly? A more clunky user experience and even worse reliability? Bargain!
