I’m not claiming that AGI will necessarily be practical or profitable by human standards - just that, given enough time and uninterrupted progress, it’s hard to see how it wouldn’t happen.
The core of my argument isn’t about funding or feasibility in the short term; it’s about inevitability in the long term. Once you accept that intelligence is a physical process and that we’re capable of improving the systems that simulate it, the only thing that can stop us from reaching AGI eventually is extinction or total collapse.
So, sure - maybe it’s not 10 years away. Maybe not 100. But if humanity keeps inventing, iterating, and surviving, I don’t see a natural stopping point before we get there.
I get it: the core of your argument is that given enough time it will happen, which isn't saying much, because given infinite time anything will happen. Even extinction and total collapse aren't enough: with infinite time, a thinking computer will just emerge fully formed from quantum fluctuations.
But you're voicing it as though it's a certain direction of human technological progress, which is frankly untrue. You've concocted a scenario for technological progress in your head by extrapolating from its current state, and you present it as a certainty. But anyone can do the same for equally credible scenarios without AGI. For instance, if the only way to avoid total collapse is to stabilize energy consumption and demographic growth, and we somehow manage it, and making rocks think turns out to cost 10^20 W and the entire world's labour, then it will never happen in any meaningful sense of the word "ever".
PS - to elaborate a bit on that "meaningful sense of the word ever" bit: I don't want to nitpick, but some time scales do make even asteroid impacts irrelevant. The Sun will engulf the Earth in about 5 billion years, and then there's the heat death of the universe. And in computing, you get millions of years popping up here and there for problems that feel like they should be easy.
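To make that concrete, here's a back-of-envelope sketch with my own illustrative numbers (an assumed 10^18 ops/s machine and a plain exhaustive search over 2^n states, nothing from any real benchmark): even absurdly optimistic hardware hits timescales that dwarf the Sun's remaining lifetime once the search space is exponential.

```python
# Back-of-envelope only: how long a naive brute-force search over 2^n states
# takes on a hypothetical machine doing 10^18 operations per second.
SECONDS_PER_YEAR = 3.15e7   # ~365 days
OPS_PER_SECOND = 1e18       # assumed exascale rate, running flat out

for bits in (64, 100, 128):
    states = 2 ** bits                                   # size of the search space
    years = states / OPS_PER_SECOND / SECONDS_PER_YEAR   # exhaustive enumeration
    print(f"2^{bits}: ~{years:.1e} years")

# Roughly: 2^64  -> ~5.9e-07 years (about 18 seconds),
#          2^100 -> ~4.0e+04 years,
#          2^128 -> ~1.1e+13 years (thousands of times the Sun's remaining lifetime).
```

The point isn't the exact figures; it's that "just keep iterating" runs into exponentials that no amount of patience or hardware meaningfully dents.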
In my view, we’re heavily incentivized to develop AGI because of the enormous potential benefits - economic, scientific, and military. That’s exactly what worries me. We’re sprinting toward it without having solved the serious safety and control problems that would come with it.
I can accept that the LLM approach might be a dead end, or that building AGI could be far harder than we think. But to me, that doesn’t change the core issue. AGI represents a genuine civilization-level existential risk. Even if the odds of it going badly are small, the stakes are too high for that to be comforting.
Given enough time, I think we’ll get there - whether that’s in 2 years or 200. The timescale isn’t the problem; inevitability is. And frankly, I don’t think we’ll ever be ready for it. Some doors just shouldn’t be opened, no matter how curious or capable we become.
Right. I don't believe it's inevitable; in fact, I believe it's not super likely given where we're at and the economic, scientific, and military incentives I'm aware of.
I think the people who are sprinting now are doing so blindly, not knowing where it is or how far off. I think 2 years is a joke, or a lie Sam Altman tells gullible investors, and 200 years means we've survived global warming, in which case our incentives will look nothing like they do now, and I don't believe in it then either. I think it's at most a maybe on the far, far horizon, thousands of years out, in a world that looks nothing like ours, and in the meantime we have far more pressing problems than the snake oil a few salesmen are desperately trying to sell. Like the salesmen themselves, for example.