dynomight

joined 2 years ago
[–] dynomight@lemmy.world 2 points 2 months ago

This advice generally makes me sad, but it's still worth thinking about.

[–] dynomight@lemmy.world 1 points 3 months ago (1 children)

Ah, so the argument is more general than "reproduction" through running different physical copies, and also includes the AI self-improving? This seems plausible to me, but it's still something not everyone would agree with. It's possible, for example, that the "300 IQ AI" only appears at the end of some long process of recursive self-improvement, at which point physical limits mean it can't get much better without new hardware requiring some kind of human intervention.

I guess my goal is not to lay out the most likely scenario for AI risk, but rather the scenario that requires the fewest assumptions and is therefore hardest to dispute?

[–] dynomight@lemmy.world 1 points 3 months ago

I agree with you! There are a lot of things that present non-zero existential risk. I think my argument is fine as an intellectual exercise, but if you want to use it to advocate for particular policies, then you need to make a comparative risk-versus-reward assessment, just as you say.

Personally, I think the risk is quite large, and enough to justify a significant expenditure of resources. (Although I'm not quite sure how to use those resources to reduce risk...) But this definitely is not implied by the minimal argument.

[–] dynomight@lemmy.world 1 points 3 months ago (3 children)

I certainly agree that makes the scenario more concerning. But I worry that it also increases the "surface area of disagreement". Some people might reject the metaphor on the grounds that they think, say, that AI will require such enormous computational resources, and that there are such hard physical limits on how quickly more compute can be created, that AI can't "reproduce".

[–] dynomight@lemmy.world 1 points 4 months ago

It's certainly possible that I'm misinterpreting them, but I don't think I understand what you're suggesting. How do you interpret "Substack eugenics alarm"?

[–] dynomight@lemmy.world 1 points 4 months ago

Interestingly, lots of people now seem excited about Alpha School, where pay-for-performance is apparently a core principle!

[–] dynomight@lemmy.world 1 points 4 months ago

This is a tangent but I've always been fascinated by the question of what people would spend their time on given extremely long lifespans. One theory would be art, literature, etc. But maybe you'd get tired of all that and what you'd really enjoy is more basic things like good meals and physical comfort? Or maybe you'd just meditate all the time?

[–] dynomight@lemmy.world 1 points 4 months ago

Deciding if you'll like something before you've tasted it is a great example. Probably we all do that to some degree with all sorts of things?

P.S. Instead of Moby Dick try War and Peace!

[–] dynomight@lemmy.world 2 points 4 months ago

Thanks, I really like the idea of "performing enjoying". I'd heard of the Ben Franklin effect before, but not the conjectured explanation. (The other conjectured explanations on Wikipedia are interesting, too.)

[–] dynomight@lemmy.world 2 points 5 months ago

That's what I see, too—if I'm able to hold my focus exactly constant. It seems to disappear as soon as I move my eyes even a little bit.

[–] dynomight@lemmy.world 1 points 6 months ago

I considered getting a CGM, but they all seemed to require all sorts of cloud services and apps and stuff that wouldn't work for me.

I didn't read UPP, though I think I read this review: https://www.newyorker.com/magazine/2023/07/31/ultra-processed-people-chris-van-tulleken-book-review (should I?)

[–] dynomight@lemmy.world 1 points 6 months ago

I think this is a fair argument. Current AIs are quite bad at "knowing if they know". I think it's likely that we can/will solve this problem, but I don't have any particularly compelling reason to believe that, and I agree that my argument fails if it never gets solved.
