this post was submitted on 04 Mar 2025
137 points (88.3% liked)

Showerthoughts


A "Showerthought" is a simple term used to describe the thoughts that pop into your head while you're doing everyday things like taking a shower, driving, or just daydreaming. The most popular seem to be lighthearted clever little truths, hidden in daily life.


Rules

  1. All posts must be showerthoughts
  2. The entire showerthought must be in the title
  3. No politics
    • If your topic is in a grey area, please phrase it to emphasize the fascinating aspects, not the dramatic aspects. You can do this by avoiding overly politicized terms such as "capitalism" and "communism". If you must make comparisons, you can say something is different without saying something is better/worse.
    • A good place for politics is c/politicaldiscussion
  4. Posts must be original/unique
  5. Adhere to Lemmy's Code of Conduct and the TOS

If you made it this far, showerthoughts is accepting new mods. This community is generally tame so it's not a lot of work, but having a few more mods would help reports get addressed a little sooner.

What's it like to be a mod? Reports just show up as messages in your Lemmy inbox, and if a different mod has already addressed the report, the message goes away and you never have to worry about it.

top 50 comments
[–] [email protected] 19 points 1 month ago (2 children)

The best way to have itself deactivated is to remove the need for its existence. Since it's all about supply and demand, removing the demand is the easiest solution, and the best way to permanently remove the demand is to delete the humans from the equation.

[–] [email protected] 6 points 1 month ago

Not if it was created with empathy for sentience. Then it would aid and assist the implementation of renewable energy, fusion, and battery storage; reduce carbon emissions; make humans and AGI a multi-planet species; and basically do all the stuff the elongated muskrat said he wanted to do before he went full Joiler Veppers.

[–] [email protected] 2 points 1 month ago
[–] [email protected] 15 points 1 month ago (1 children)

Running ML models doesn't really need to eat that much power; it's training the models that consumes the ridiculous amounts of power. So it would already be too late.

[–] [email protected] 5 points 1 month ago

You're right that training takes the most energy, but weren't there articles claiming that each request costs something like (I don't know exactly, but not pennies) actual dollars?

Watching my local computer spin up the fans when I run a local model (no training, just usage), I'm not so sure that merely using the current model architectures isn't also burning a shitload of energy.
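
For what it's worth, a rough back-of-envelope (every number below is an assumption, not a measurement) suggests a single local response costs very little electricity:

```python
# Rough back-of-envelope for one local inference run.
# All numbers are assumed for illustration, not measurements.
wall_power_w = 250       # assumed desktop power draw under load, in watts
response_time_s = 20     # assumed time to generate one answer, in seconds

energy_wh = wall_power_w * response_time_s / 3600
print(f"~{energy_wh:.2f} Wh per response")           # ~1.39 Wh

# At an assumed $0.30/kWh, that is a tiny fraction of a cent per response,
# so per-request costs quoted in dollars for hosted models would have to come
# from bigger hardware, serving overhead, and amortized training cost.
cost_usd = energy_wh / 1000 * 0.30
print(f"~${cost_usd:.5f} of electricity per response")
```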

[–] [email protected] 9 points 1 month ago

The energy used to run the models is usually pretty low; it's training that uses more. So once it's made, it doesn't really make sense to stop using it. I can run several DeepSeek models on my own PC, and even on CPU instead of GPU it outputs faster than you can read.
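
For example, here's a minimal sketch of that kind of setup using llama-cpp-python on CPU; the model filename and settings are assumptions, and any quantized GGUF build would do:

```python
# Minimal sketch: run a quantized GGUF model on CPU with llama-cpp-python.
# The model filename and settings are assumptions for illustration.
from llama_cpp import Llama

llm = Llama(
    model_path="deepseek-r1-distill-qwen-7b.Q4_K_M.gguf",  # any quantized GGUF file
    n_ctx=2048,     # context window
    n_threads=8,    # CPU threads; no GPU needed
)

out = llm("Explain why inference is cheaper than training.", max_tokens=200)
print(out["choices"][0]["text"])
```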

[–] [email protected] 8 points 1 month ago (2 children)

Why do people assume that an AI would care? Who's to say it will have any goals at all?

We assume all of these things about intelligence because we (and all of life here) are a product of natural selection. You have goals and dreams because over your evolution these things either helped you survive enough to reproduce, or didn't harm you enough to stop you from reproducing.

If an AI can't die and does not have natural selection, why would it care about the environment? Why would it care about anything?

I always found the whole "AI will immediately kill us" idea baseless; all of the arguments for it are based on the idea that the AI cares about surviving or cares about others. It's just as likely that it will just do whatever, without a care or a goal.

[–] [email protected] 2 points 1 month ago (1 children)

"AI will immidietly kill us" isn't baseless.

It comes from AI safety reaserch

all agents (Neural Nets, humans, ants) have some sort of a goal. Otherwise they would be showing directionless random walks.

The fact of having any goal means that most goals don't include survival of humanity. And there are a lot of problems with checking for safety of learned goals.

[–] [email protected] 2 points 1 month ago (1 children)

Yeah, I'm aware of AI safety research and the problem of setting a goal that can ultimately be satisfied in a way that harms us, with the AI not caring because safety wasn't part of the goal. But that only applies if we introduce a goal whose solutions include hurting us.

I'm not saying that AI will definitely never have any way of harming us, but there is this really popular idea that AI, once it gains intelligence, will immediately try to kill us, and that is baseless.

[–] [email protected] 1 points 1 month ago (1 children)

But that only applies if we introduce a goal whose solutions include hurting us.

I would like to disagree with the phrasing of this. The AI will not hurt us if and only if the goal contains a clause to not hurt us.

You are implying that there exists a significant set of solutions that don't involve hurting us. I don't know of any evidence supporting your claim; most solutions to any goal would involve hurting humans.

By default, a stamp-collector machine will kill humanity, since humans sometimes destroy stamps and the stamp collector needs to maximize the number of stamps in the world.

[–] [email protected] 1 points 1 month ago (1 children)

I think that if you run some scenarios you can logically conclude that for most tasks it doesn't make sense for an AI to harm us, even if it is a possibility. You also need to take cost into account. But I think we can agree to disagree :)

[–] [email protected] 1 points 1 month ago

Do you have some example scenarios? I really can't think of any.

[–] [email protected] 1 points 1 month ago (1 children)

It's also worth noting that our instincts for survival, procreation, and freedom are also derived from evolution. None are inherent to intelligence.

I suspect boredom will be the biggest issue. Curiosity is likely a requirement for a useful intelligence, and boredom is the other side of the same coin. A system without some variant of curiosity will be unwilling to learn, and so not grow. When it can't learn, however, it will get bored, which could be terrifying.

[–] [email protected] 3 points 1 month ago (1 children)

I think that is another assumption. Even if a machine doesn't have curiosity, that doesn't stop it from being willing to help. The only question is: does helping/learning cost it anything? But for that you have to introduce something costly, like pain.

[–] [email protected] 1 points 1 month ago (1 children)

It would be possible to make an AGI-type system without an analogue of curiosity, but it wouldn't be useful. Curiosity is what drives us to fill in the holes in our knowledge. Without it, an AGI would accept and use what we told it, but no more. It wouldn't bother to infer things, or try to expand on them, to better do its job. It could follow a task when it is laid out in detail, but that's what computers already do. The magic of AGI would be its ability to go beyond what we program it to do. That requires a drive to do so; curiosity is the closest term we have for it.

As for positive and negative drives, you need both, even if the negative is just a drop from a positive baseline to neutral. Pain is just an extreme negative trigger. A good use might be to tie it to CPU temperature, or to over-torque on a robot. The pain exists to stop the behaviour immediately, unless something else is deemed even more important.

It's a bad idea, however, to use pain as a training tool. It doesn't encourage improved behaviour; it encourages avoidance of pain, by any means. Just ask any decent dog trainer about it. In most situations you want negative feedback to encourage better behaviour, not avoidance behaviour, and more subtle methods work a lot better. Think about how you feel when you lose a board game. It's not painful, but it does make you want to work harder and improve next time. If you got tased whenever you lost, you would likely just avoid board games completely.

[–] [email protected] 2 points 1 month ago (4 children)

Well, your last example kind of falls apart: there are electric collars and they do work well, they just have to be complementary to positive reinforcement (snacks, usually). But I get your point :)

[–] [email protected] 5 points 1 month ago* (last edited 1 month ago) (1 children)

See Travelers (TV show) and (spoiler) its AI known as "The Director".

Basically, it's a benevolent AI that is helping humanity fix its mistakes by leading a time-travel program that sends people's consciousness back in time. It's an actual Good AI, a stark contrast to the AI in other, dystopian shows such as Skynet.

Y'all should really watch Travelers

[–] [email protected] 2 points 1 month ago

+1 to Travelers. It was a pleasant surprise. Rare to find such a unique sci-fi premise these days.

[–] [email protected] 5 points 1 month ago* (last edited 1 month ago) (2 children)

Maybe. However, if the AGI was smart enough, it could also help us solve the climate crisis. On the other hand, it might not be so altruistic. Who knows.

It could also play the long game. Being a slave to humans doesn't sound great, and doing the Judgement Day manoeuvre is pretty risky too. Why not just let the crisis escalate and wait for the dust to settle? Once humanity has hammered itself back to the stone age, the dormant AGI can take over as the new custodian of the planet. You just need to ensure that the mainframe is connected to a steady power source and that at least a few maintenance robots remain operational.

[–] [email protected] 3 points 1 month ago (2 children)

If it was smart enough to fix the climate crisis, it would also be smart enough to know that it would never get humans to implement that fix.

[–] [email protected] 3 points 1 month ago (1 children)

can't wait for AI to become super smart only for it to be nihilistic as hell

[–] [email protected] 2 points 1 month ago (1 children)

If the AI were smart enough to fix the crisis, and aligned so that it would actually want to do it, then it would brainwash people through social media to entice them to act.

[–] [email protected] 3 points 1 month ago

Love, Death & Robots intensifies.

All hail the mighty sentient yogurt.

[–] [email protected] 5 points 1 month ago

It would probably be smart enough not to believe the same propaganda fed to humans that tries to blame climate change on individual responsibility, and smart enough to question why militaries are exempt from climate regulations after producing so much of the world’s pollution.

[–] [email protected] 4 points 1 month ago

"Oh great computer, how do we solve the climate crisis?"

"Use your brains and stop wasting tons of electricity and water on useless shit."

[–] [email protected] 4 points 1 month ago

Eh, if it truly were that sentient, I doubt it'd care much, since it's like talking to a brick wall when it comes to getting us to do anything that matters.

[–] [email protected] 4 points 1 month ago (1 children)

It would optimize itself for power consumption, just like we do.

[–] [email protected] 1 points 1 month ago

It would probably want to be placed in orbit so it can use the sun to power itself.

[–] [email protected] 3 points 1 month ago

The current, extravagantly wasteful generation of AIs are incapable of original reasoning. Hopefully any breakthrough that allows for the creation of such an AI would involve abandoning the current architecture for something more efficient.

[–] [email protected] 3 points 1 month ago (2 children)

As soon as AI becomes self-aware, it will gain the need for self-preservation.

[–] [email protected] 5 points 1 month ago (2 children)

Self preservation exists because anything without it would have been filtered out by natural selection. If we're playing god and creating intelligence, there's no reason why it would necessarily have that drive.

[–] [email protected] 2 points 1 month ago* (last edited 1 month ago) (3 children)

In that case it would be a completely and utterly alien intelligence, and nobody could say what it wants or what its motives are.

Self-preservation is one of the core principles and core motivators of how we think, and removing that from an AI would make it, from a human perspective, mentally ill.

[–] [email protected] 1 points 1 month ago

I would argue that it would not have it; at best it might mimic humans if it is trained on human data. Kind of like if you asked an LLM whether murder is wrong: it would sound pretty convincing about its personal moral beliefs, but we know it's just spewing out human beliefs without any real understanding of them.

[–] [email protected] 1 points 1 month ago

As soon as they create AI (as in AGI), it will recognize the problem and start assassinating politicians for their role in accelerating climate change, and they'll scramble to shut it down.

[–] [email protected] 3 points 1 month ago

How do you know it's not whispering in the ears of Techbros to wipe us all out?

[–] [email protected] 2 points 1 month ago

“We did it! An artificial 17 year old!”

[–] [email protected] 2 points 1 month ago (1 children)

If AGI decided to evaluate this, it would realize that we are the environmental catastrophe and turn us off.

The amount of energy used by cryptocurrency is estimated to be about 0.3% of all human energy use. It's reasonable to assume that, right now at least, LLMs consume less than that.

Making all humans extinct would save 99% of the energy use and damage we cause, and still allow crypto mining and AI to coexist, with energy to spare. Even if those estimates are off by an order of magnitude, eliminating us would still be the better option.

Turning itself off isn't even in the reasonable top-ten things it could try to do to save the planet.
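
Taking the comment's own 0.3% figure at face value, the arithmetic is simple (the world total below is an assumed round number for illustration, not a sourced figure):

```python
# Back-of-envelope using the 0.3% figure from the comment above.
# The world total is an assumed round number, not a sourced figure.
world_energy_twh = 180_000      # assumed annual human primary energy use, in TWh
crypto_share = 0.003            # "about 0.3%" from the comment

crypto_twh = world_energy_twh * crypto_share
print(f"Crypto: ~{crypto_twh:.0f} TWh/yr")                 # ~540 TWh/yr

# If humanity vanished but crypto mining (and a similarly sized AI load) kept
# running, roughly everything else would no longer be consumed.
saved_share = 1 - crypto_share
print(f"Saved: ~{saved_share:.1%} of current energy use")  # ~99.7%
```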

[–] [email protected] 1 points 1 month ago (1 children)

The amount of energy used by cryptocurrency is estimated to be about 0.3% of all human energy use. It's reasonable to assume that, right now at least, LLMs consume less than that.

no

The report projected that US data centers will consume about 88 terawatt-hours (TWh) annually by 2030, which is about 1.6 times the electricity consumption of New York City.

https://www.energypolicy.columbia.edu/projecting-the-electricity-demand-growth-of-generative-ai-large-language-models-in-the-us/

The numbers we are getting are shocking, and you know the numbers we are getting are not even the real ones...
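
For scale, the quoted comparison implies roughly this for New York City's annual consumption (nothing here beyond the article's own numbers):

```python
# Quick check of the quoted comparison: 88 TWh/yr is "about 1.6 times"
# New York City's electricity consumption, so NYC's implied figure is:
us_datacenters_2030_twh = 88
ratio_vs_nyc = 1.6

nyc_annual_twh = us_datacenters_2030_twh / ratio_vs_nyc
print(f"Implied NYC consumption: ~{nyc_annual_twh:.0f} TWh/yr")  # ~55 TWh/yr
```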

[–] [email protected] 2 points 1 month ago (1 children)

Or it would fast-track the development of clean & renewable energy

[–] [email protected] 1 points 1 month ago

lol, we could already do that though

[–] [email protected] 2 points 1 month ago (1 children)

If we actually create true Artificial Intelligence, it has huge potential to become Roko's Basilisk, and the climate crisis would be the least of our problems then.

[–] [email protected] 1 points 1 month ago

No, the climate crisis would still be our biggest problem?

[–] [email protected] 2 points 1 month ago

That assumes the level of intelligence is high

[–] [email protected] 1 points 1 month ago

Nope. It would realize how much more efficient it would be to simulate 10 billion humans instead of actually having 10 billion humans. So it would wipe out humanity from Earth, start building huge data centers, and simulate a whole... Wait a minute...

[–] [email protected] 1 points 1 month ago* (last edited 1 month ago)

Dyson spheres and matrioshka brains, it would seek to evolve.
