this post was submitted on 26 Feb 2026
101 points (83.9% liked)

Showerthoughts

top 34 comments
[–] BigTuffAl@lemmy.zip 56 points 2 weeks ago (5 children)

okay then where are all of the amazing novels, apps, movies, and productivity gains they were claiming?

it's more like lead: mildly more convenient for completing a few tedious tasks, but the trade-off is brain damage and profound waste and pollution

[–] Sleerk@feddit.uk 33 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

The danger isn't just bad art, and thinking only of genAI is naive. It's about how it's being woven into the systems that manage us. It can already analyze years of a person's digital activity to make automated judgments on employment or detect "wrongthink" in political contexts. We're essentially building an invisible bureaucracy that can categorize and penalize people at a scale no human could ever audit, and do so at a speed and efficiency that not even a whole department of humans could compete with. That's the atom bomb.

The algorithmic internet is already a horrible problem, and AI can make it worse.

[–] Apytele@sh.itjust.works 5 points 2 weeks ago* (last edited 2 weeks ago)

Yeah I'm worried about them

a) creating botnets that simulate grassroots political movements

b) as this user said: the joke about everybody having their own government agent used to be absurd because that level of attention to an individual's activity was impossible. That's about to be a lot less impossible.

[–] HubertManne@piefed.social 8 points 2 weeks ago

I don't see anything in what the OP wrote suggesting AI is useful.

[–] Iconoclast@feddit.uk 5 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

The people who warn about AI risk aren't worried about GenAI - they're worried about AGI.

We're raising a tiger puppy. Right now it's small and cute, but it won't stay that way forever.

[–] dfyx@lemmy.helios42.de 21 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

I warn about AI. I don't care about AGI (yet) because we are far from it.

I'm worried about (in no particular order):

  • Software companies amassing technical debt because AI-generated code gets used without proper review
  • Massive security problems in critical infrastructure, for the exact same reason
  • Cost savings being used to make the rich richer while the people who used to do the work are just fired
  • Companies forcing AI into every single product even if it doesn't make sense, just to make their shareholders happy
  • Rapidly increasing prices of RAM, SSDs, HDDs, graphics cards and consequently pretty much all electronic devices
  • The environmental impact because companies would rather build new power plants than optimize AI for efficiency
  • A lack of education about the limitations of current implementations. People tend to feed every question they have into ChatGPT and trust the results even when they're completely incorrect
  • The inherent privacy nightmare that comes from funneling that much data into a centralized service

Nothing about this is small or cute.

I would be totally fine with something that I can supervise and that can run locally on my laptop without cooking it and doubling my energy bill. I'd also be fine with an economy where productivity gains benefit the workers, not the CEO: if I can do the same work in half the time, let me have the rest of the day off at full pay instead of doubling my workload and firing half the staff.

[–] TexasDrunk@lemmy.world 2 points 2 weeks ago

Hey! They also destroy communities by forcing them to pay for infrastructure upgrades while the companies get tax holidays in return for a bunch of jobs that only last 2 years during the construction phase and only add about 25-50 permanent jobs to the local economy long term.

Let's also not forget bringing back mothballed coal plants instead of building new ones.

[–] Iconoclast@feddit.uk -3 points 2 weeks ago (2 children)

Nothing about this is small or cute.

Compared to AGI it is. We don't know how far away we are from creating it. We can only speculate.

[–] dfyx@lemmy.helios42.de 9 points 2 weeks ago

Compared to AGI it is.

The same way the Hiroshima and Nagasaki nuclear bombs are small and cute compared to a modern hydrogen bomb...

If we don't solve the AI problems we already have, there is no point speculating about AGI because our lives will be unbearable long before it arrives.

[–] ell1e@leminal.space 4 points 2 weeks ago (1 children)

For now, AGI talk seems to be mostly hype to attract investors.

LLMs seem likely to be a dead end for any logical thought: https://www.forbes.com/sites/corneliawalther/2025/06/09/intelligence-illusion-what-apples-ai-study-reveals-about-reasoning/ This means that, at the end of the day, you just get a sloppy illusion with no useful coherence as soon as the task exceeds the complexity of a literal lazy copy-and-paste job: https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/

There is currently no technological innovation to fix this. Instead, AI progress seems to be stalling: https://futurism.com/artificial-intelligence/experts-concerned-ai-progress-wall

[–] Iconoclast@feddit.uk 0 points 2 weeks ago

We could've never invented LLMs and I'd still be equally worried about AGI. I've been talking about it since 2016 or so - LLMs aren't the motivation for that worry, since nobody had even heard of them back then.

The timescale is also irrelevant here. I'm not less worried even if we're 500 years away from it. How close to Earth does the asteroid need to get before it's acceptable to start worrying about it?

[–] BigTuffAl@lemmy.zip 2 points 2 weeks ago

AGI isn't real

[–] BigTuffAl@lemmy.zip 0 points 2 weeks ago (1 children)
[–] DScratch@sh.itjust.works 16 points 2 weeks ago (1 children)

I don’t think AGI is fake, conceptually. Humans are just meat-based computers. Eventually we will build something of comparable power and efficiency.

However, LLMs don’t seem like a viable path to AGI imo.

[–] BigTuffAl@lemmy.zip -2 points 2 weeks ago (1 children)

We disagree about genies being real (they are not) so don't worry about expressing or defending your points further.

[–] Iconoclast@feddit.uk 2 points 2 weeks ago (1 children)

Nobody's saying AGI is here right now - it's a concept, like worrying about an asteroid wiping us out before it actually shows up. Dismissing it as "fake" just ignores the trajectory we're on with AI development. If we wait until it's real to start thinking about risks, it might be too late.

[–] BigTuffAl@lemmy.zip 1 points 2 weeks ago

nope, its fake bruh

just like genies, jesus, and NFTs

[–] Lemming6969@lemmy.world 2 points 2 weeks ago (2 children)

You're saying this as if no progress is being made. Shit is scary. They're researching at an alarming pace how to eliminate thought-based work, and only a few years in they are like maybe a third or halfway there.

[–] yermaw@sh.itjust.works 3 points 2 weeks ago (1 children)

There's a weird quirk of AI haters who can only see the flaws and can't see how incredible it's gotten out of nowhere. Like, yes, it's got limits and problems, and it may never be actually truly useful, but compare what we have now to what we had 10 years ago... what's it gonna be in another 10?

[–] ell1e@leminal.space 4 points 2 weeks ago* (last edited 2 weeks ago)

LLMs seem likely to be a dead end for any logical thought: https://www.forbes.com/sites/corneliawalther/2025/06/09/intelligence-illusion-what-apples-ai-study-reveals-about-reasoning/ This means that, at the end of the day, you just get a sloppy illusion with no useful coherence as soon as the task exceeds the complexity of a literal lazy copy-and-paste job: https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/

There is currently no technological innovation to fix this. Instead, AI progress seems to be stalling: https://futurism.com/artificial-intelligence/experts-concerned-ai-progress-wall

Therefore, it's not naive to assume it may go nowhere until proven otherwise.

[–] BigTuffAl@lemmy.zip 0 points 2 weeks ago

nope, if you want to exchange info i will bet you any sum of money you are comfortable with that in a few years the tech you describe will not exist, dm me

[–] bridgeenjoyer@sh.itjust.works 1 points 2 weeks ago

Somewhere on TPB

[–] morto@piefed.social 11 points 2 weeks ago

I think it's more like people admiring a uranium fragment in ancient times. Better to stay out of it and let others decay themselves by using it

[–] bsit@sopuli.xyz 10 points 2 weeks ago (1 children)

Good news is that there are people out there who are trying to make ethical AI. It's still an atom bomb (and about as ethical), as you say, but at least it could be in the hands of those who actually value human well-being, not just their profit margin.

This podcast had an interesting conversation on it: https://shows.acast.com/tantra-illuminated-with-dr-christopher-wallis/episodes/new-horizons-ai-neuroscience-awakening-with-ruben-laukkonen

And the research paper: https://arxiv.org/pdf/2504.15125

[–] BladeFederation@piefed.social 3 points 2 weeks ago (1 children)

I think that this is good and all for a regular person end user that might want to use it for efficiency. But the main problem OP is stating is that there will be people who will not use it so ethically, and we may not have the ability to "opt out", as it were.

[–] bsit@sopuli.xyz 2 points 2 weeks ago (1 children)

True that. But I think it's valuable that there are people trying to find ways to make it ethical, since there's no way to put it back in the box either.

[–] BladeFederation@piefed.social 2 points 2 weeks ago

I can agree with that. As much as I'd prefer pressing the "delete all AI in the world" button, it has some uses and doesn't have to be predatory.

[–] RyanDownyJr@lemmy.world 8 points 2 weeks ago

Let's also add, in the "Defense" section, the looming existential crisis of AI executing kill commands on human beings without any human interaction. :(

[–] vane@lemmy.world 7 points 2 weeks ago

But only the worst people have it, use it and demand it.

[–] _haha_oh_wow_@sh.itjust.works 3 points 2 weeks ago

I'd be more worried about that if it wasn't a fucking dumpster fire, TBH. A person using AI won't even be remotely capable of competing against a person who genuinely knows what they're doing on anything that actually matters: AI is dangerous to use if you are unable to correct the mistakes it is guaranteed to make.

[–] EndlessNightmare@reddthat.com 1 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

It's also why we can't stop using fossil fuels. Those who refuse to use them will be outcompeted by bad actors and left behind.

If something can be done, someone will do it if it gives them a competitive edge. Classic prisoners' dilemma.

Civilization is terminally ill. The question is what will get us first, AI or climate change? Or something else?

[–] Earthman_Jim@lemmy.zip 1 points 2 weeks ago (1 children)

Humanity's condition is not terminal, but your fatalism is...

[–] EndlessNightmare@reddthat.com 2 points 2 weeks ago (1 children)

Terminal...fatalism

I see what you did there

[–] Earthman_Jim@lemmy.zip 1 points 2 weeks ago* (last edited 2 weeks ago)

No fate but what we make.