I've got paramilitary troops busting down every private residence's door, pulling all the residents outside, and then stealing their shit, with the excuse that there might be dissidents in the area. I'm not worried about AGI.
dynomight internet forum
I much prefer your simple framing of the AI-risk question, but posing the question as zero vs. non-zero risk is too black and white for me. There is always a non-zero risk of anything happening. To me the questions are:
- How big is the AI-risk and over what timescale?
- What tools do we have to mitigate it? At what cost? And how likely are we to succeed in mitigating these risks?
- How does AI-risk compare to other pressing risks? And what opportunities for mitigating other risks does AI present? What is the total risk vs reward?
For instance, nuclear breeder reactors represent a major threat of nuclear weapons proliferation and the assorted risks of nuclear war. At the same time, they provide a massive source of energy, allowing us to mitigate global-warming risks. What is the net risk balance offered by breeder reactors?
“If a race of aliens with an IQ of 300 came to Earth, that would definitely be fine.”
It wouldn't definitely be fine, but it would probably be fine for the first two hundred years, with risks increasing as the population of aliens approaches ~100,000. In the short term, the aliens are likely to be helpful with a number of more immediate risks. In the long term, on a 200-year timescale, humans are likely to modify both themselves and the aliens to be roughly equivalent in capability.
Is a humanity that walls itself off from life more intelligent than us better off? Will this make humanity stronger?
I agree with you! There are a lot of things that present non-zero existential risk. I think my argument is fine as an intellectual exercise, but if you want to use it to advocate for particular policies, then you need to make a comparative risk-vs.-reward assessment, just as you say.
Personally, I think the risk is quite large, and large enough to justify a significant expenditure of resources. (Although I'm not quite sure how to use those resources to reduce the risk...) But this definitely is not implied by the minimal argument.