this post was submitted on 05 Apr 2026
17 points (90.5% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] YourNetworkIsHaunted@awful.systems 5 points 7 hours ago (1 children)

Man, this one is a weird read. On one hand, I think they're entirely too credulous of the "AI Future" narrative at the heart of all of this. Especially in the opening, they don't highlight how the industry is increasingly facing criticism and questions about the bubble, and they only pay lip service to how ridiculous all the existential-risk AI safety talk sounds (should be: is). Nor do they spend any ink on the actual problems with this technology that those concerns and that narrative help sweep under the rug. For all that they criticize and question Saltman himself, this is still, imo, standard industry critihype, and I'm deeply frustrated to see it still get the platform it does.

But at the same time, I do think it's easy to lose sight of the rich variety of greedy assholes and sheltered narcissists that thrive at this level of wealth and power. Like, I wholly believe that Altman is less of a freak than some of his contemporaries while still being an absolute goddamn snake, and I hope this is part of a sea change in how these people get talked about on a broader level, though I kinda doubt it.

[–] blakestacey@awful.systems 7 points 7 hours ago* (last edited 7 hours ago)

I aired some Reviewer #2 grievances in the bsky comments:

https://bsky.app/profile/ronanfarrow.bsky.social/post/3mitapp7j2s2c

"Kalanick now runs a robotics startup; in his free time, he said recently, he uses OpenAI’s ChatGPT “to get to the edge of what’s known in quantum physics.”"

As a physicist, I have never pressed F to doubt harder.

"In 2022, researchers at a pharmaceutical company tested whether a drug-discovery model could be used to find new toxins; within a few hours, it had suggested forty thousand deadly chemical-warfare agents." To the best of my knowledge, these suggestions were never evaluated by any other researchers.

(The original paper was published as a "comment": https://www.nature.com/articles/s42256-022-00465-9)

Similar claims of AI-facilitated discoveries have turned out to be overblown in other fields.

https://pubs.acs.org/doi/pdf/10.1021/acs.chemmater.4c00643

"In a 2025 study, ChatGPT passed the test more reliably than actual humans did."

If this is referring to Jones and Bergen's "Large Language Models Pass the Turing Test", that's a preprint (arXiv:2503.23674) that has yet to pass peer review over a year after its posting.

"A classic hypothetical scenario in alignment research involves a contest of wills between a human and a high-powered A.I. In such a contest, researchers usually argue, the A.I. would surely win"

Which researchers?

(Hint: Eliezer Yudkowsky is not a researcher.)

AI: "I will convince you to let me out of this box"

Humanity (wringing hands): "Oh, where is our savior? Who will stand fast in the face of all entreaties?"

Bartleby the Scrivener: hello

"...a hub of the effective-altruism movement whose commitments included supporting the distribution of mosquito nets to the global poor."

Phrasing like this subtly underplays how the (to put it briefly) weird people were part of EA all along.

https://repository.uantwerpen.be/docman/irua/371b9dmotoM74

"In late 2022, four computer scientists published a paper motivated in part by concerns about “deceptive alignment,” ... one of several A.I. scenarios that sound like science fiction—but, under certain experimental conditions, it’s already happening."

Barrett et al.'s arXiv:2206.08966? AFAIK, that was never peer-reviewed either; "posted" is not the same as "published". And claims in this area are rife with critihype:

https://pivot-to-ai.com/2025/09/18/openai-fights-the-evil-scheming-ai-which-doesnt-exist-yet/

Oh, right, the "Future of Life Institute". Pepperidge Farm remembers:

"In January 2023, Swedish magazine Expo reported that the FLI had offered a grant of $100,000 to a foundation set up by Nya Dagbladet, a Swedish far-right online newspaper."

https://en.wikipedia.org/wiki/Future_of_Life_Institute#Activism

"Tegmark also rejected any suggestion that nepotism could have played a part in the grant offer being made, given that his brother, Swedish journalist Per Shapiro ... has written articles for the site in the past."

https://www.vice.com/en/article/future-of-life-institute-max-tegmark-elon-musk/