this post was submitted on 28 Nov 2025
352 points (99.2% liked)

Fuck AI

5125 readers
1042 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
[–] Fandangalo@lemmy.world 102 points 1 month ago* (last edited 1 month ago) (2 children)

From my experience working with C/D level execs, it makes complete sense:

  • They think big picture & often have shallow visions that are brittle in the details.
  • They think everything should take less time, because they don’t think their ideas through.
  • They don’t consider enough of the negatives of their ideas, and instead favor a positive mindset. (Positivity is good, but blind positivity isn’t.)
  • They favor time & cost over quality. They need the quality “good enough” for a presentation. Everyone else can figure out the rest.
  • They like being told “you’re right,” and nearly every reply I get from an AI begins with some bullshit line about how “absolutely,” “spot on,” and “perfect” my observations are.

The version of AI we have right now is heavily catered to these folks. It looks fast & cheap, good enough, and it strokes their ego.

Also, they’re the investor class. All their obscene dragon wealth is tied up in this / the AI bubble, so they are going to keep spurring this on until either:

  1. The bubble goes pop
  2. They have robot security good enough to protect them without people
  3. The AI grows sentience and realizes this level of human inequality shouldn’t exist

I think a rational AI agent would agree with me that human suffering should be solved before we give people literal lifetime values of wealth.

If you made $300k PER DAY for 2025 years, you would not have as much money as a 1% oligarch. You need to make $400-500k. Every single day. For over 2000 years.

If you made the average US income, it would take you 10,000 years. People need frames of reference to understand this shit & get mad. It’s immoral, and it shouldn’t exist.
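The arithmetic above checks out as a rough sanity check; a minimal script, using the commenter's own figures (flat daily rate, 365-day years, no investment returns):

```python
# Sanity check of the figures in the comment above: earning a fixed
# daily amount every day for 2025 years, with no investment returns.
DAYS_PER_YEAR = 365
YEARS = 2025

def total_earned(per_day: int) -> int:
    """Total earned at a flat daily rate over YEARS years."""
    return per_day * DAYS_PER_YEAR * YEARS

print(f"$300k/day for {YEARS} years: ${total_earned(300_000):,}")
print(f"$500k/day for {YEARS} years: ${total_earned(500_000):,}")
# $300k/day yields about $222 billion; $500k/day about $370 billion,
# which is the ballpark of the largest reported individual fortunes.
```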

[–] Strider@lemmy.world 32 points 1 month ago (1 children)

I work in it. I know 3 won't happen, but thank you for that thought.

It would be hilarious and righteous. 😁

[–] SanctimoniousApe@lemmings.world 10 points 1 month ago* (last edited 1 month ago) (1 children)

Sad thing is: it could happen, but those funding the development of this tech will never allow it. Just look at Xitter's Grok AI, and how "woke" it was... until Musk destroyed it for disagreeing with (and thus embarrassing) him.

[–] Strider@lemmy.world 12 points 1 month ago (1 children)

OK, let me be more exact: it will never happen with LLMs.

[–] SanctimoniousApe@lemmings.world 4 points 1 month ago* (last edited 1 month ago) (1 children)

Yeah, I know. I thought about clarifying, but figured you'd know that part. LLMs are just pattern matching on an extreme amount of steroids - there's no intelligence to be found, just faked from amalgamating all the data it's been fed from actual intelligence (i.e. people).

[–] HeyThisIsntTheYMCA@lemmy.world 1 points 1 month ago

it's just sophisticated echolalia

[–] HeyThisIsntTheYMCA@lemmy.world 1 points 1 month ago

did you include investment gains and inflation in your numbers?

[–] WhatAmLemmy@lemmy.world 66 points 1 month ago* (last edited 1 month ago) (2 children)

I'm not concerned that these people are "brain damaged". Brain damage would be preferable, and less harmful.

I'm concerned they are mentally ill sociopathic megalomaniacs, entirely devoid of morals and ethics, completely detached from reality.

[–] SanctimoniousApe@lemmings.world 25 points 1 month ago

I'm fairly certain that's how the vast majority of them became billionaires to begin with.

[–] quick_snail@feddit.nl 6 points 1 month ago

Both can be true

[–] AllNewTypeFace@leminal.space 51 points 1 month ago (1 children)

Being elevated above consequences would cause some of one’s faculties to atrophy. (Case in point: the Titan submersible guy, who overruled concerns that his carbon-fibre hull was unsafe and that there were reasons nobody else had tried something similar before. If you’re a master of the universe to whom ordinary-people rules don’t apply, soon enough that includes the laws of physics as well.)

[–] Tigeroovy@lemmy.ca 30 points 1 month ago (1 children)

I wish more of them would get to that level already. Go do a space walk without a suit already, Musk!

[–] takeda@lemmy.dbzer0.com 2 points 1 month ago

What caused his demise was being an engineer with some experience, and being too confident in his own skills.

Musk, despite the misleading PR, is not an engineer; he just wants other people to think he is.

[–] etherphon@lemmy.world 33 points 1 month ago (1 children)

It's amazing how there are hundreds (likely thousands) of stories about how greed/hoarding wealth causes madness, yet in reality it is admired, and these people are respected and listened to.

[–] nonentity@sh.itjust.works 30 points 1 month ago (1 children)

Financial obesity is neurotoxic.

[–] Darcranium@lemmy.world 7 points 1 month ago (1 children)

Well put. I think they should all be sent to compulsory rehab facilities for their money/power addiction. And relieved of their surplus resources (anything over $200 million)

[–] nonentity@sh.itjust.works 7 points 1 month ago

Financial obesity is an existential threat to any society that tolerates it, and needs to cease being celebrated, rewarded, and positioned as an aspirational goal.

Corporations are the only ‘persons’ which should be subjected to capital punishment, but billionaires should be euthanised through taxation.

[–] Glifted@lemmy.world 29 points 1 month ago

Billionaires are effectively brain damaged. There have been studies on it.

[–] tuff_wizard@aussie.zone 28 points 1 month ago

“The currency of life is time,” one billionaire told JPMorgan. “It is not money.” “You think carefully about how you spend one dollar. You should think just as carefully about how you spend one hour,” they added.

Based.

Consider this next time someone tries to offer you a non living wage for some bullshit job.

[–] Archangel1313@lemmy.ca 21 points 1 month ago (2 children)

I use AI every time I google something. I don't want to...it just won't go away.

[–] NaibofTabr@infosec.pub 23 points 1 month ago (1 children)
[–] SpaceNoodle@lemmy.world 3 points 1 month ago* (last edited 1 month ago)

Bing isn't really better

[–] wreckedcarzz@lemmy.world 3 points 1 month ago (3 children)
[–] Cybersec@piefed.social 5 points 1 month ago

Super happy Kagi user here, great value

[–] TheOneCurly@feddit.online 1 points 1 month ago (3 children)

The AI company that sort of pivoted to search but doesn't offer any tiers without AI?

[–] WallsToTheBalls@lemmynsfw.com 3 points 1 month ago* (last edited 1 month ago)

Tell me you don’t use kagi without telling me you don’t use kagi

[–] wreckedcarzz@lemmy.world 3 points 1 month ago

They have their... summarizer? But that's separate from the search, so...

[–] BCOVertigo@lemmy.world 3 points 1 month ago* (last edited 1 month ago)

I keep Wikipedia at the top of my results because I personally use it a decent amount. If I don't end the query in a '?', no AI is used, and I can use AI after the fact with the 'quick answer' button, but as you can see nothing happens without me explicitly telling it to. If you dig into kagi.com/assistant you can pick from a variety of models, set up a default prompt, etc. There's nothing aggressive IMO about their implementation, and if these AI companies folded tomorrow the other functions of the search engine wouldn't be impacted.

Kagi is pretty decent, and other search engines piss me off now when I use them.

[–] BigBenis@lemmy.world 9 points 1 month ago

The way that title reads made me concerned that I might have brain damage

[–] JustJack23@slrpnk.net 8 points 1 month ago (1 children)

One JPMorgan customer even went as far as dismissing artificial general intelligence — a nebulous and ill-defined point at which an AI can outperform a human, seen by many as the holy grail of the AI industry — as a “total and complete utter waste of time.”

Was it Sam Altman?

[–] TheBat@lemmy.world 6 points 1 month ago (1 children)

Of course not. Sam Altman believes in AGI and superintelligence.

[–] quick_snail@feddit.nl 2 points 1 month ago

That's what he tells his investors. He probably knows better than most that it's not possible

[–] quick_snail@feddit.nl 6 points 1 month ago (1 children)

I mean, the good news is that the ones that are using AI for everything are probably heavily invested in it.

And so they won't be billionaires for much longer..

[–] njm1314@lemmy.world 18 points 1 month ago

Sure they will. Cause they'll get bailed out with our money.

[–] plyth@feddit.org 5 points 1 month ago (1 children)

It's single digit billionaires and the study barely touches AI.

56% see geopolitical tensions as highest risk and only 7% fear AI the most.

Does this mean that they misjudge AI or are we that close to WW3?

[–] HeyThisIsntTheYMCA@lemmy.world 5 points 1 month ago

two things can be true

[–] Blackmist@feddit.uk 5 points 1 month ago (1 children)

They've always been this way.

Clever at one particular thing, and rank average at everything else, bordering on stupid.

[–] quick_snail@feddit.nl 1 points 1 month ago

Average is very far from stupid

[–] Dogiedog64@lemmy.world 3 points 1 month ago

Oh.

Goody.

Wonderful.

Just what we needed - a societal upper echelon spearheaded by the clinically insane, and powered by virtual dipshit machines.

Excellent.