this post was submitted on 17 Feb 2026
14 points (100.0% liked)

SneerClub

Full doc title: “The AI Doc: Or How I Became an Apocaloptimist”

Per wiki:

The AI Doc: Or How I Became an Apocaloptimist is a 2026 American documentary film directed by Daniel Roher and Charlie Tyrell. It is produced by the Academy Award-winning teams behind Everything Everywhere All at Once (Daniel Kwan and Jonathan Wang) and Navalny (Shane Boris and Diane Becker).

What to say here? This is a doc being produced by the producer and one of the directors of Everything Everywhere All At Once, who notably have been making efforts to, uh, negotiate? I guess? with AI companies vis-à-vis making movies. Anyway, the title is a piece of shit, and this trailer makes it look like this is just critihype: the movie. I guess we’ll hear more about it in the coming month.

Really interesting that they frame this as brought about by thinking about the director’s child, given Yud’s recent comments about how one should raise a daughter if you held certain beliefs about AI.

top 21 comments
[–] Architeuthis@awful.systems 1 points 56 minutes ago* (last edited 54 minutes ago)

Timnit briefly weighs in about being included in the doc, apparently she regrets it and says the filmmakers "sprinkle some [AI skeptics] in like chocolate chips to perform ethics".

She also calls Yud a eugenicist cult leader with nothing to show for it.

[–] lurker@awful.systems 2 points 13 hours ago* (last edited 13 hours ago) (1 children)

I poked around the IMDb page, and there are reviews! Currently it’s sitting at an 8.5/10 with 31 ratings (though no written reviews, it seems). The Metacritic score is a 51/100 with 4 reviews, and there are 4 external reviews.

[–] swlabr@awful.systems 2 points 9 hours ago* (last edited 9 hours ago)

I read a review (the one hosted on the Ebert site), and it seems like this just falls into one of the patterns we’ve already seen when people not steeped in the X-risk miasma engage with it. As in, what should be a documentary about how the AI industry is a bubble and all the AI CEOs are grifters or deluded or both is instead a “somehow I managed to fall for Yud’s whole thing and am now spreading the word” type deal. Big sigh!

[–] dovel@awful.systems 4 points 22 hours ago (1 children)

Strange that Big Yud is missing from the cast on IMDb, but this could be a simple oversight since the movie is not fully out yet. Unfortunately, the rest of the cast contains a lot of familiar faces, to put it mildly.

[–] lurker@awful.systems 1 points 15 hours ago* (last edited 15 hours ago) (1 children)

Sam Altman and the other CEOs being there is such a joke: “this technology is so dangerous, guys! Of course I’m gonna keep blocking regulation for it, I need to make money after all!” Also, I’m shocked Emily Bender and Timnit Gebru are there, aren’t they AI skeptics?

[–] grumpybozo@toad.social 2 points 14 hours ago (1 children)

@lurker I don’t know that I’d call them skeptics universally, they are experts in the AI field who are EXTREMELY skeptical of the TESCREAL complex and of the *hype* of the current fad LLM and image generation tools.

Whatever you call them, it’s *positive* that a documentary includes conflicting viewpoints from the people who hold them. The plausible range of near-term AI developments is smaller than the range of widely-held expectations. A documentary has to address the crazies & the skeptics.

[–] lurker@awful.systems 2 points 13 hours ago* (last edited 13 hours ago)

I took a deeper look into the documentary, and it does go into both the pessimist and optimist perspectives, so their inclusion makes more sense. And yeah, I was trying to get at how they’re skeptical of the TESCREAL stuff and of current LLM capabilities.

[–] Klear@quokk.au 4 points 1 day ago (1 children)

Directed by a Tyrell? Not suspicious at all...

[–] lurker@awful.systems 3 points 1 day ago (1 children)

what’s the lore with Tyrell?

[–] gerikson@awful.systems 10 points 1 day ago

I believe it's the evil megacorp in Blade Runner.

[–] lurker@awful.systems 6 points 1 day ago (1 children)

my god I just cringed so hard. I thought the book would be the end….

Also yeah, someone pointed this out on old SneerClub, but Yud loves using kids to illustrate his AI fears, and, to beat a very dead horse here, that’s a weird thing to do in his case.

If anyone here wants to jump on the grenade and watch it / acquire a transcript for the rest of us to sneer at, you’ll be my hero.

[–] lurker@awful.systems 5 points 1 day ago (3 children)

also, what the fuck does “apocaloptimist” mean???? does it mean he’s optimistic about our chances of apocalypse??? (which makes no sense, just say pessimist) has he finally gone crazy and is now saying that apocalypse is the optimistic outcome?

[–] blakestacey@awful.systems 8 points 1 day ago (1 children)

It's someone who learned to stop worrying and love the Bomb.

[–] Soyweiser@awful.systems 4 points 1 day ago

Ow god, it is pop culture references all the way down. TVTropes is Skynet! You gotta tell them!

[–] Architeuthis@awful.systems 5 points 1 day ago

I mean, they mostly don't have a problem with AI instances inheriting the earth as long as they're sufficiently rationalist.

[–] swlabr@awful.systems 5 points 1 day ago (2 children)

Pure speculation: my guess is that an “apocaloptimist” is just someone fully bought into all of the rationalist AI delulu. Specifically:

  • AGI is possible
  • AGI will solve all our current problems
  • A future where AGI ends humanity is possible/probable

and they take the extra belief, steeped in the grand tradition of liberal optimism, that we will solve the alignment problem and everything will be ok. Again, just guessing here.

[–] Soyweiser@awful.systems 6 points 1 day ago* (last edited 1 day ago) (2 children)

According to a site: https://apocaloptimist.net/the-apocaloptimist/

"An Apocaloptimist sees the trouble, but is optimistic we can do anything–including fixing all the world’s problems"

So if Jesus wins the war during the Second Coming, all problems are fixed.

(~~The thing is also nuts: "we are the people actually working on fixing things [by hoping AGI will fix it all for us]", my brother in Eschatology, you are running a podcast~~ sorry, the guy is unrelated to the AGI people, they are just using his term).

E: it does seem the site itself isn't about AI, so they just took this clean energy guy's term. Sorry about sneering at him, he seems to actually want to introduce clean energy and works hard for it (though that seems to be a lot of conventions and blogging, so buying ourselves out of the capitalist problems), as far as I can tell.

[–] swlabr@awful.systems 4 points 22 hours ago (1 children)

I feel like I nailed my guess

[–] Soyweiser@awful.systems 3 points 20 hours ago

Think you did, I only followed it up with a google and finding that site.

[–] lurker@awful.systems 1 points 17 hours ago

Surprised it’s a term they stole and not one they made up. But yeah, the whole idea of “AGI will solve all our problems” is just silly.

[–] lurker@awful.systems 4 points 1 day ago

my personal guess is that “apocaloptimist” is just them trying to make a “better” term for “pessimist”