An excerpt has surfaced from the AI2027 podcast with Siskind and the ex-AI researcher, where the dear doctor makes the case for how an AGI could build an army of terminators within a year if it wanted to.
It goes something like this: OpenAI is worth as much as all US car companies (except Tesla) combined, so it could buy up every car factory and convert it to a murderbot factory, because that's kind of like what the US gov did in WW2 to build bombers, reaching peak capacity in three years, and an AGI would obviously be more efficient than a US wartime gov, so let's say one year. Generally a completely unassailable syllogism from very serious people.
Even /r/ssc commenters are calling him out about the whole AI doomer thing getting more noticeably culty than usual.

Edit: the thread even features a rare heavily downvoted Siskind post, sitting at -10 as of this edit.
The latter part of the clip has the interviewer pointing out that there might be technological bottlenecks that could require upending our entire economic model before stuff like curing cancer becomes achievable, positing that if we had somehow gotten AGI-like tech in the 1960s, it would probably have had to use its limited means to invent the entire tech tree leading to late-2020s GPUs out of thin air, international supply chains and all, before it could even start on the road to becoming really useful.
Siskind then goes "nuh-uh!" and ultimately proceeds to give Elon's metaphorical asshole a tongue bath of unprecedented depth and rigor, all but claiming that what's keeping modern technology down is the inability to extract more man-hours from Grimes' ex, and that this is how we should view the eventual AGI-LLMs: like wittle Elons that don't need sleep. And didn't you know, having non-experts micromanage everything in a project is cool and awesome, actually.
Oh man, this is peak venture capitalism crossed with Factorio: valuations are actually cash on hand, and a factory is a black box where you just upload new software and different stuff comes out.
Let's take your average holder of car manufacturer stock. You're holding the stock because you believe the manufacturer will keep making competitive products, and you'll get either dividends or a higher valuation. Then OpenAI pitches up and offers you what, exactly? They don't even have stock! Even if they did, you'd be exchanging a stake in a known quantity for a stake in an enterprise that has never made a single car and looks shifty when asked about its business plan. No fucking way anyone sells their stake for less than double what they hold, especially once they find out the factory they're selling is going to produce machines that will kill us all.
Yeah, the financial illiteracy is quite high, on top of everything else. But don't worry, AI Nobel prize winners say it's possible!
(Are there multiple AI Nobel prize winners who are AI doomers?)
There's Geoffrey Hinton, I guess, even if his 2024 Nobel in (somehow) Physics seemed like a transparent attempt at trend-chasing by the Nobel committee.
That's the one I was thinking of; the way the comments are phrased makes it seem like there are a lot of winners who are doomers. Guess Hinton is a one-man brigade.
I think Demis Hassabis (Chemistry, for AlphaFold) has said the chance of AI killing all of humanity is somewhere between 0 and 100%.
That really pins it down, doesn't it?