gbzm

joined 5 months ago
[–] gbzm@piefed.social 1 points 1 week ago

Goth caterpillar for short

[–] gbzm@piefed.social 25 points 2 weeks ago* (last edited 2 weeks ago) (4 children)

Maybe I'm not fully awake, but I don't understand how that stat could be calculated. How the hell would anyone know the kinks at play in a representative subset of all abusive relationships that have ended in murder?

[–] gbzm@piefed.social 8 points 3 weeks ago (1 children)

Look at it this way: it feels like way more than 40 Fridays since Trump took office

[–] gbzm@piefed.social 15 points 1 month ago* (last edited 1 month ago) (1 children)

Late nineties to early two-thousands on an Arch-based distro is Ayanami Rei territory for sure

EDIT: could also be Serial Experiments Lain or Major Kusanagi, depending on your age and horniness when asking this question

[–] gbzm@piefed.social 1 points 1 month ago

Right. I don't believe it's inevitable; in fact I believe it's not super likely, given where we're at and the economic, scientific and military incentives I'm aware of. I think the people sprinting now do so blindly, not knowing where the goal is or how far. I think 2 years is a joke, or a lie Sam Altman tells gullible investors, and 200 years means we've survived global warming, so if we're still here our incentives will look nothing like they do now, and I don't believe in it on that horizon either. I think it's at most a maybe on the far, far horizon of thousands of years, in a world that looks nothing like ours, and in the meantime we have way more pressing problems than the snake oil a few salesmen are desperately trying to sell. Like the salesmen themselves, for example.

[–] gbzm@piefed.social 2 points 1 month ago* (last edited 1 month ago) (8 children)

I get it, the core of your argument is "given enough time it will happen", which isn't saying much: given infinite time, anything will happen. Even extinction and total collapse aren't enough; with infinite time, a thinking computer will just emerge fully formed from quantum fluctuations.

But you're voicing it as though it's a certain direction of human technological progress, which is frankly untrue. You've just concocted a scenario for technological progress in your head by extrapolating from its current state, and you present it as a certainty. But anyone can do the same for equally credible scenarios without AGI. For instance: if the only way to avoid total collapse is to stabilize energy consumption and demographic growth, and we somehow manage it, and making rocks think costs 10^20 W plus the entire world's labour, then it will never happen in any meaningful sense of the word "ever".
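To put numbers on that hypothetical (the 10^20 W figure is from my example above; the consumption and sunlight figures are rough public ballparks I'm quoting from memory):

```python
# Back-of-the-envelope: how a hypothetical 1e20 W "thinking rocks" power bill
# compares to the energy actually available to humanity.
# All figures are rough order-of-magnitude estimates.
THINKING_ROCKS_W = 1e20        # the hypothetical cost from my example
WORLD_AVG_POWER_W = 2e13       # ~humanity's average primary power use
SUNLIGHT_ON_EARTH_W = 1.7e17   # ~total sunlight intercepted by Earth

print(f"vs world consumption: {THINKING_ROCKS_W / WORLD_AVG_POWER_W:.0e}x")
print(f"vs all sunlight hitting Earth: {THINKING_ROCKS_W / SUNLIGHT_ON_EARTH_W:.0f}x")
# ~5e+06x what humanity currently uses, and ~600x what the planet even receives.
```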

PS - to elaborate a bit on that "meaningful sense of the word ever" bit: I don't want to nitpick, but some time scales do make asteroid impacts irrelevant. The Sun will engulf the Earth in about 5 billion years. Then there's the heat death of the universe. And in computing, millions of years pop up here and there for problems that feel like they should be easy.
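To make that concrete, here's a toy example of my own (brute-force traveling salesman, assuming a generous billion tours checked per second):

```python
import math

# Exhaustive search over all tours of n cities: the classic way
# "millions of years" pops up for problems that sound trivially small.
CHECKS_PER_SECOND = 1e9     # a generous assumption
SECONDS_PER_YEAR = 3.15e7

for n in (20, 25, 30):
    tours = math.factorial(n - 1) // 2   # distinct round trips through n cities
    years = tours / CHECKS_PER_SECOND / SECONDS_PER_YEAR
    print(f"{n} cities: ~{years:.1e} years of brute force")
# 20 cities: ~2 years. 25 cities: ~1e7 years. 30 cities: ~1e14 years,
# i.e. tens of thousands of times the Sun's remaining lifetime.
```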

[–] gbzm@piefed.social 2 points 1 month ago* (last edited 1 month ago) (10 children)

The thing is, I'm not assuming substrate dependence. I'm not saying there's something uniquely mysterious about the biological brain. I'm saying that what we know about "intelligence" right now is that it's an emergent property observed in brains that interact with a physical, natural environment through complex sensory feedback loops, materialized by the rest of the human body. That is substrate-independent, but the only thing rocks can do for sure is simulate this whole system, and good simulations of complicated systems are not an easy feat at all; it's not at all certain that we'll ever be able to do it without requiring too many resources for it to be worth the hassle.

The things we've done that most closely resemble human intelligence in computers are very drastic oversimplifications of how biological brains work, sprinkled with mathematical translations of actual cognitive processes. And right now they appear very limited, even though a lot of resources - physical and economic - have been poured into them. We don't understand brains well enough to refine this simplification much, and we don't know much about the formation of the cognitive processes relevant to "intelligence" either. Yet you assert it's a certainty that we will, that we will encode it in computers, and that the result will have a bunch of properties of current software: easily copyable and editable (which the human-like intelligences we know are not at all), not requiring more power than the Sun outputs (which humans don't, but they're completely different physical systems), etc.
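For scale, here's a crude cost sketch of even the simplified version (every figure below is a commonly cited ballpark, not settled neuroscience):

```python
# Crude scale check on "just simulate the brain".
# All numbers are order-of-magnitude ballparks, not measurements.
NEURONS = 8.6e10             # ~86 billion neurons in a human brain
SYNAPSES_PER_NEURON = 1e4    # order-of-magnitude estimate
SPIKE_RATE_HZ = 1e2          # generous ceiling on firing rates
FLOP_PER_SYNAPSE_EVENT = 10  # a crude point-neuron model; biophysically
                             # detailed models cost orders of magnitude more

flops = NEURONS * SYNAPSES_PER_NEURON * SPIKE_RATE_HZ * FLOP_PER_SYNAPSE_EVENT
print(f"~{flops:.0e} FLOP/s")  # ~1e18: an entire exascale supercomputer,
                               # and that's for the *optimistic* simplification
```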

The same arguments you're making could have been made in 1969, after the Moon landing, to say that the human race will definitely colonize the whole solar system. "We know it's possible, so it will happen at some point" is not how technology works: it also needs to be profitable enough for enough industry to be poured into the problem, and the result has to live up to profitability expectations. Right now no AI firm is even remotely profitable, and the resources in the Kuiper belt or the real estate on Mars aren't enough of an argument that our rockets can reach them; there's no telling that they ever will. Our economies might well simply lose interest before then.

[–] gbzm@piefed.social 3 points 1 month ago* (last edited 1 month ago) (12 children)

I've given reasons. We can imagine Dyson spheres, and we know they're physically possible. That doesn't mean we can actually build them, or ever will be able to.
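Quick sketch of why, using the standard shell-at-1-AU arithmetic (my numbers, rounded):

```python
import math

# Material needed for even a paper-thin shell at Earth's orbital radius.
AU_M = 1.5e11            # 1 astronomical unit in metres
SHELL_KG_PER_M2 = 1.0    # an absurdly optimistic 1 kg per square metre
MERCURY_MASS_KG = 3.3e23

area_m2 = 4 * math.pi * AU_M**2
mass_kg = area_m2 * SHELL_KG_PER_M2
print(f"shell mass ~{mass_kg:.1e} kg = {mass_kg / MERCURY_MASS_KG:.1f}x Mercury")
# ~2.8e23 kg: dismantle a whole planet, for a shell thinner than cardboard.
```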

The fact that our brains can do things we don't even understand doesn't necessarily mean rocks can do them. If it somehow requires the complexity of biology, then depending on how much of that complexity it requires, it could just end up meaning creating a fully fledged human, which we can already do; and that hasn't caused a singularity, because creating a human costs resources even when we do it the natural way.

[–] gbzm@piefed.social 27 points 1 month ago (5 children)

Why is the mercury-arc rectifier getting a "what the fuck"? I don't know much about them; are they more magic than glowing rocks, runes, levitation and demon cores?

[–] gbzm@piefed.social 4 points 1 month ago* (last edited 1 month ago) (1 children)

What if human levels of intelligence require building something so close in its mechanisms to a human brain that it's indistinguishable from a brain, or a complete physical and chemical simulation of one? What if the input-output "training" required to make it work in any comprehensible way is so close in fullness and complexity to the human sensory system interacting with the world that it ends up being indistinguishable from a human body, or a complete physical simulation of a body with its whole environment?

There's no reason to assume our brains or their mechanisms can't be replicated artificially, but there's also no reason to assume it can be made practical, or that because we can make it, it can self-replicate at no cost in material resources or refine its own formula. Humans have human-level intelligence, and they've never successfully created anything as intelligent as themselves.

I'm not saying it won't happen, mind you, I'm just saying it's not a certainty. Plenty of things are impossible, or sufficiently impractical that humans - or any species - may never create them.
