this post was submitted on 02 Oct 2025
6 points (100.0% liked)
dynomight internet forum
you are viewing a single comment's thread
view the rest of the comments
I much prefer your simple framing of the AI-risk question, but posing it as zero vs. non-zero risk is too black and white for me. There is always a non-zero risk of anything happening. To me the question is how the risks weigh against the benefits.
For instance, nuclear breeder reactors pose a serious risk of nuclear-weapons proliferation and, with it, of nuclear war. At the same time, they provide a massive source of energy, allowing us to mitigate the risks of global warming. What is the net risk balance offered by breeder reactors?
It wouldn't definitely be fine, but it would probably be fine for the next two hundred years, with the risk increasing as the population of aliens approaches ~100,000. In the short term, the aliens are likely to help with a number of more immediate risks. In the long term, on a 200-year time scale, humans are likely to modify both themselves and the aliens until the two are roughly equivalent in capability.
Is humanity better off if it walls itself off from life more intelligent than us? Will this make humanity stronger?
I agree with you! There are a lot of things that present non-zero existential risk. I think that my argument is fine as an intellectual exercise, but if you want to use it to advocate for particular policies then you need to make a comparative risk vs. reward assessment just as you say.
Personally, I think the risk is quite large, and enough to justify a significant expenditure of resources. (Although I'm not quite sure how to use those resources to reduce risk...) But this definitely is not implied by the minimal argument.