this post was submitted on 19 May 2025
34 points (100.0% liked)

SneerClub

An excerpt has surfaced from the AI2027 podcast with siskind and the ex-OpenAI researcher, where the dear doctor makes the case that an AGI could build an army of terminators in a year if it wanted to.

It goes something like: OpenAI is worth as much as all US car companies (except Tesla) combined, so it could buy up every car factory and convert it to a murderbot factory, because that's kind of like what the US gov did in WW2 to build bombers, reaching peak capacity in three years, and an AGI would obviously be more efficient than a US wartime government, so let's say one year. Generally a completely unassailable syllogism from very serious people.

Even /r/ssc commenters are calling him out about the whole AI doomer thing getting noticeably cultier than usual. edit: The thread even features a rare heavily downvoted siskind post, sitting at -10 at the time of this edit.

The latter part of the clip is the interviewer pointing out that there might be technological bottlenecks that could require upending our entire economic model before stuff like curing cancer could be achieved, positing that if we somehow had AGI-like tech in the 1960s it would probably have to use its limited means to invent the entire tech tree that leads to late 2020s GPUs out of thin air, international supply chains and all, before starting on the road to becoming really useful.

Siskind then goes "nuh-uh!" and ultimately proceeds to give Elon's metaphorical asshole a tongue bath of unprecedented depth and rigor, all but claiming that what's keeping modern technology down is the inability to extract more man hours from Grimes' ex, and that's how we should view the eventual AGI-LLMs, like wittle Elons that don't need sleep. And didn't you know, having non-experts micromanage everything in a project is cool and awesome actually.

[–] Soyweiser@awful.systems 15 points 1 day ago* (last edited 1 day ago) (16 children)

and that’s how we should view the eventual AGI-LLMs, like wittle Elons that don’t need sleep.

Wonder how many people stopped being AI-doomers after this. I use the same argument against ai-doom.

E: the guy doing the most basic 'It really is easier to imagine the end of the world than the end of capitalism' bit in the comments and having somebody just explode at him for 'not being able to imagine it properly' is a bit amusing. I know how it feels to have a massive, hard-to-control reaction over stuff like that, but oof, what are you doing man. And that poor anti-capitalist guy is in for a rude awakening when he discovers what kind of place r/ssc is.

E2: Scott is now going 'this clip is taken out of context!', not that the context improves it. (He claims he was explaining what others believe, not what he believes, but if that is so, why are you so aggressively defending the stance? Hope this Scott guy doesn't have a history of lying about his real beliefs.)

[–] Architeuthis@awful.systems 10 points 1 day ago* (last edited 1 day ago) (12 children)

He claims he was explaining what others believe not what he believes

Others as in specifically his co-writer for AI2027, Daniel Kokotajlo, the actual ex-OpenAI researcher.

I'm pretty annoyed at having this clip spammed to several different subreddits, with the most inflammatory possible title, out of context, where the context is me saying "I disagree that this is a likely timescale but I'm going to try to explain Daniel's position" immediately before. The reason I feel able to explain Daniel's position is that I argued with him about it for ~2 hours until I finally had to admit it wasn't completely insane and I couldn't find further holes in it.

Pay no attention to this thing we just spent two hours exhaustively discussing that I totally wasn't into, it's not really relevant context.

Also, the title is inflammatory only if you already know him for a ridiculous AI doomer; otherwise it's fine. Inflammatory would be calling the video 'economically illiterate bald person thinks valuations can force-buy car factories, China having biomedicine research is like Elon running SpaceX'.

[–] BigMuffin69@awful.systems 10 points 23 hours ago* (last edited 23 hours ago) (2 children)

Daniel Kokotajlo, the actual ex-OpenAI researcher

Unclear to me what Daniel actually did as a 'researcher' besides draw a curve going up on a chalkboard (true story: the one interaction I had with LeCun was showing him Daniel's LW acct, which is just singularity posting, and Yann thought it was big funny). I admit I am guilty of engineer gatekeeping posting here, but I always read Danny boy as a guy they hired to give lip service to the whole "we are taking safety very seriously, so we hired LW philosophers" thing, and then after Sam did the uno reverse coup, he dropped all pretense of giving a shit / funding their fanfic circles.

Ex-OAI "governance" researcher just means they couldn't forecast that they were the marks all along. This is my belief unless he reveals that he superforecasted altman would coup and sideline him in 1998. Someone please correct me if I'm wrong and they have evidence that Daniel actually understands how computers work.

[–] Architeuthis@awful.systems 6 points 22 hours ago (1 children)

Didn't mean to imply otherwise, just wanted to point out that the call is coming from inside the house.

[–] BigMuffin69@awful.systems 6 points 22 hours ago

np, im just screaming into the void on this beautiful Monday morning
