this post was submitted on 17 Jan 2026
889 points (99.4% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

[–] HeyThisIsntTheYMCA@lemmy.world 15 points 2 days ago (3 children)

"make your own prompts" misses by one step. Use of AI robs you of the opportunity to learn/practice/hone your skills in a certain area. Why would someone use AI for any reason other than to get out of having to learn something? Do you expect LLMs to be the best source for how to learn [blank]? Which [youtuber/podcaster/old bridgetroll/televangelist/fascist/fishnet chat lightbulb] would you suggest explains [blank] better? Because frankly, at this point, I'm fucking invested.

[–] Digit@lemmy.wtf 4 points 2 days ago

oh no.

rtfm being replaced by wyop.

::: spoiler wyop

write your own prompt

:::

[–] svcg@lemmy.blahaj.zone 1 points 2 days ago (2 children)

Use of AI robs you of the opportunity to learn/practice/hone your skills in a certain area. Why would someone use AI for any reason other than to get out of having to learn something?

This is not really a good argument against AI. Almost everything ever invented was invented to avoid doing something else that would take more time.

Why would anyone use animation software other than to avoid learning to draw frames in sequence?

Why would anyone use a loom other than to avoid having to learn how to weave?

Why would anyone read a book other than to avoid learning by experience and experimentation?

[–] voodooattack@lemmy.world 4 points 2 days ago (1 children)

Yes, but the 3D animation software doesn’t do the thinking for you, nor replace your artistic vision.

To use a loom you have to learn how to use a loom. You may skip weaving, but you become familiar with textiles anyway.

You’re optimising for less physical effort: you work faster, but you don’t grow stronger or more dexterous.

If you solely use AI, then you’re optimising for less thinking. So what happens then?

You’re optimising out your sole evolutionary advantage as a human, by delegating your thinking to another entity.

[–] svcg@lemmy.blahaj.zone 1 points 1 day ago (1 children)

Look, I don't want to be in a position of defending the plagiarism machines that are burning the world's forests whilst simultaneously somehow using all of the world's fresh water, but come on. The vast majority of people who are using AI image generation are not people who would otherwise have been involved in the creative process.

They are people who want to avoid learning Photoshop (reasonable - it takes a long time to learn, which may or may not be worth it given what you want to do, and also Adobe sucks shit) or want to avoid paying someone who knows how to use Photoshop (understandable - and would obviously be worth consideration if it weren't for all of the other problems with AI).

When you attack AI on the basis of it making people lazy - rather than any of the other things that are wrong with it - it just comes across as "Luddite". (Which is ironic, given that Luddism was originally about machinery resulting in worse working conditions for skilled workers, which is one thing AI actually will do.)

[–] voodooattack@lemmy.world 1 points 1 day ago (1 children)

I’m a senior full-stack developer of 15 years, and more recently, a new tech lead (specifically a Systems Architect) at an AI startup. I’m definitely not attacking AI as a concept in general.

I work with AI agents every day, all day. That’s how I develop and plan our systems. It didn’t start that way: I was absolutely against the use of AI during development, but a few months back I needed the assistance because I developed carpal tunnel syndrome. So that’s what I automated: just the typing and the implementation of low-level logic, so that my wrists can heal. But do you know what stock AI agents do to code when not given proper guidance? Ask any real developer and they’ll tell you about vibe coding. I guarantee those are not going to be success stories.

I’m not just judging people for being lazy, because lazy people like me will innovate ways to stay lazy by inventing/optimising new shit that allows them to stay lazy. That’s a survival instinct and an evolutionary selection mechanism: minimising energy investment while doing the same thing as everyone around you is an evolutionary advantage.

No. What I’m judging them for is delegating their critical thinking capacity to an external entity, and stunting their own cognitive growth (their literal reason for existing in the first place, their continuity mechanism to stay in the gene pool, and their sole means of improving at being long-term lazy) by being short-term lazy. Make sense?

Now to generative AI (for the multimedia substrate):

The vast majority of people you speak of are now polluting the collective “training set” with diluted slop distilled from all the art historically created thus far. The content-generation equation went from X people creating Y novel pieces of art per year to X models creating Y million images per day, all thanks to a handful of idiots with more greed/money than common sense. That diluted pool is ever-expanding, growing geometrically, and burying actual novelty with each new image Susan generates and shares on her new “Katz Rule” Instagram profile.

The thing is: the next model will be trained on that averaged set, and the next, and the next, each generation increasing in conformity. And that set is what we’re stuck with for new inspiration (and future models) now, because everyone is looking at screens for inspiration, and not at mountains or rivers, or even the real stars in the night sky, because we ruined that too.

All while we’re doing the things you just mentioned.

All thanks to a few assholes with more selfishness than common sense, chasing after unlimited quarterly growth in a very limited space that’s closing around us fast.

[–] svcg@lemmy.blahaj.zone 1 points 1 day ago (1 children)

I’m a senior full-stack developer of 15 years, and more recently, a new tech lead (specifically a Systems Architect) at an AI startup. I’m definitely not attacking AI as a concept in general.

I, too, am a developer of closer to 15 years than I'd like to admit to myself, though mostly embedded and/or back-end. And while I have no problem with AI in the broad sense (machine learning/spicy statistics, computer vision, natural language processing, and whatnot obviously have the potential to be enormously useful), I am generally hostile to generative AI. I think using copyrighted material as training data without the copyright holders' permission should be banned. And while I would have no objection to ethically-trained models in a hypothetical future where we have abundant clean energy to run the data centres (and all the new desalination plants we would need), that also remains a problem, so I have resisted using such tools at work, too.

Now to generative AI (for the multimedia substrate)...

I agree with everything you've said after this point.

No. What I’m judging them for is delegating their critical thinking capacity to an external entity, and stunting their own cognitive growth (their literal reason for existing in the first place, their continuity mechanism to stay in the gene pool, and their sole means of improving at being long-term lazy) by being short-term lazy. Make sense?

This is the crux of my problem: I find it overly judgemental. If you're self-employed and you need a website for your business or whatever, then you could pay someone to do it for you, but you only have so much money in the budget. You could also learn to code and/or do graphic design yourself, but you only have so many hours in the day. If vibe coding produces something viable for you in the quickest, cheapest way, then that is obviously the rational and sensible thing to do. You might even spend the time saved learning something else that is more relevant to your interests.

Using generative AI to do something doesn't (necessarily) mean that you don't value the knowledge or skills required to do it the hard way; it only means that you value them less than something else you might otherwise be doing with your time, and I don't think that is a moral failing.

As an example, I occasionally like to do a bit of shitposting. Were it not for all those other things I don't like about generative AI, I would probably be generating AI slop memes with the best of them. As it is, I mostly stick to text-based comments with bad puns and references to song lyrics no-one will remember. I could put in the hours to learn GIMP so I could do it without AI, but quite frankly I have books on the go, a couple of musical instruments to learn/practise, and I spend all day at my software job, where I think critically (or so I claim), so I'd rather be doing those things instead. I don't think I have neglected my cognitive growth; I've just chosen to focus it on something different to what you might have.

[–] voodooattack@lemmy.world 1 points 1 day ago* (last edited 1 day ago)

Ohhh. I think we’re both defending different hills! I’m not against the use of generative AI for purposeful creation. What I’m against is the delegation of critical thinking.

It’s the difference between:

  • “Implement this specific feature this specific way. Never disable type checking or relax type strictness, never solve a problem using trial and error, consult documentation first, don’t make assumptions and stop and ask for guidance if you’re unsure about anything”
  • “Paint me a photorealistic depiction of a galaxy spinning around the wick of a candle”

(That last one is admittedly my own guilty contribution to the slop soup, and my favourite desktop background for at least a whole year)

Versus:

  • “build me an e-shop”
  • “draw me a cat”.

The difference is oversight and vision. The first two ask AI to execute well-defined tasks with explicit parameters and rules; the first example in particular offers the LLM an out if it finds itself at an impasse.

The latter examples ask a prediction engine to predict a vague concept. Don’t expect originality/innovation from something that was forcibly constrained to pick from a soup made of prior art and then locked down, because that’s essentially what gradient descent does to a neural network during training: it reduces the error margin by restricting the possible solutions for any given problem to what lies within the training set, which is also known as plagiarism.

Edit: a slight elaboration on the last part:

Neural networks trained with gradient descent will do the absolute minimum to reach a solution. That’s the nature of the training process.

What this essentially means is that effort scales with prompt complexity! A simple/basic prompt gets you the most generic result possible, because it allows the network to slide along the shortest path from the input tokens to a very predictable result.
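To make that concrete, here’s a toy sketch (plain Python, no ML library, and a deliberately trivial one-parameter “model”, so the numbers are made up for illustration): gradient descent only ever pulls the parameter towards whatever minimises error on the training set, which here is simply the data’s mean. Nothing outside the set can attract it.

```python
# Toy illustration: gradient descent on a single parameter w,
# minimising mean squared error against a fixed training set.
# The optimum is just the mean of the data; the training process
# cannot land anywhere the training set doesn't pull it.

data = [2.0, 4.0, 6.0, 8.0]  # hypothetical "training set"
w = 0.0                      # single model parameter
lr = 0.1                     # learning rate

for _ in range(200):
    # gradient of mean((w - x)^2) with respect to w
    grad = sum(2 * (w - x) for x in data) / len(data)
    w -= lr * grad

print(round(w, 3))  # converges to mean(data) = 5.0
```

A real network has billions of parameters instead of one, but the dynamic is the same: the loss surface is defined entirely by the training data, so the minimum it slides into is an average over prior examples.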

[–] MutilationWave@lemmy.dbzer0.com 4 points 2 days ago (1 children)

But...

If you know how to draw frames in sequence, you'll be better at using the animation software.

If you know the intricacies of weaving, you'll be more efficient with the loom.

[–] svcg@lemmy.blahaj.zone 2 points 1 day ago

And it follows that if you know how to code, then you would be more efficient with GitHub copilot, yes?

[–] ruan@lemmy.eco.br -2 points 2 days ago* (last edited 2 days ago) (1 children)

LLMs are a pretty good tool for summarizing any subject, or for presenting it in different words or with a different approach... They are statistical word-predictor tools, after all.

So yeah, if you understand that LLMs:

  • don't possess intelligence;
  • are just reproducing patterns from the training material;
  • can't possibly contain ALL the "knowledge" from the training material;
  • are directly influenced by the "context" provided

Then I'd say that LLMs can be a very good facilitator for learning about almost any subject that has already been documented in written form in almost any language.
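The "statistical word predictor" point can be shown with a toy bigram model. (This is a drastic simplification, nothing like a transformer internally, and the corpus and function names below are invented for illustration; but the objective is the same: predict the next token from statistics over prior text.)

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which,
# then predict the most frequently observed successor.

corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict(word):
    # most common word seen after `word` in the corpus
    return successors[word].most_common(1)[0][0]

print(predict("the"))  # "cat" (follows "the" twice; "mat"/"fish" once each)
```

Note the model can only ever emit words it has seen follow "the" in its corpus, which is exactly why the bullet points above matter: the response is bounded by the training material and steered by the context you supply.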

your bulleted list has me suspecting you used an llm to write this. TRAITOR