I certainly agree that makes the scenario more concerning. But I worry that it also increases the "surface area of disagreement". Some people might reject the metaphor on the grounds that they think, say, that AI will require such enormous computational resources, and that there are such hard physical limits on how quickly more compute can be created, that AI can't "reproduce".
Ah, so the argument is more general than "reproduction" through running separate physical copies, and also includes the AI self-improving? This again seems plausible to me, but still seems like something not everyone would agree with. It's possible, for example, that the "300 IQ AI" only appears at the end of some long process of recursive self-improvement, at which point physical limits mean it can't get much better without new hardware, which would require some kind of human intervention.
I guess my goal is not to lay out the most likely scenario for AI risk, but rather the scenario that requires the fewest assumptions and is therefore the hardest to dispute?