I much prefer your simple framing of the AI-risk question, but posing the question as zero vs. non-zero risk is too black and white for me. There is always a non-zero risk of anything happening. To me the questions are:
How big is the AI-risk and over what timescale?
What tools do we have to mitigate it? At what cost? And how likely are we to succeed in mitigating these risks?
How does AI-risk compare to other pressing risks? And what opportunities for mitigating other risks does AI present? What is the total risk vs reward?
For instance, nuclear breeder reactors represent a major threat of proliferation of nuclear weapons and the associated risks of nuclear war. At the same time, they provide a massive source of energy, allowing us to mitigate global-warming risks. What is the net risk balance offered by breeder reactors?
“If a race of aliens with an IQ of 300 came to Earth, that would definitely be fine.”
It wouldn't definitely be fine, but it would probably be fine for the first two hundred years, with risks increasing as the population of aliens approaches ~100,000. In the short term, the aliens are likely to be helpful with a number of more immediate risks. In the long term, on a 200-year timescale, humans are likely to modify both themselves and the aliens to be roughly equivalent in capability.
Is humanity better off walling itself off from life more intelligent than us? Will that make humanity stronger?