this post was submitted on 01 Apr 2026
758 points (99.1% liked)

Fuck AI

6622 readers
1684 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
MODERATORS
top 50 comments
sorted by: hot top controversial new old

Ai pushers are dishonest and malicious.

In other news, water is liquid. More at 11.

[–] StarryPhoenix97@lemmy.world 13 points 2 days ago (1 children)

I don't have a problem with AI assisting with open source projects. On its face, it could be helpful to clean up some basic coding problems so a person with skill can come in and update later or remove it if it's truly awful code. But then I remember that there's always an angle. On top of all the other issues with AI coding, what happens if Anthropic tries to pull some legal shenanigans and say that they wrote most of the code, so they own the project? What if they are writing in backdoors and vulnerabilities?

Like I said, on its face it sounds okay, but any time a corporation tries to touch a public project, things go wonky.

[–] faintwhenfree@lemmus.org 3 points 2 days ago

Bigger problem is AI writes so much code and adds so many features that are jank, if the commits are accepted the whole project risks being like a jank. I doubt anthropic can claim open source project is their work.

[–] skisnow@lemmy.ca 74 points 3 days ago* (last edited 3 days ago) (1 children)

It's a great way to get free training for their next model, courtesy of unwitting OSS reviewers.

Spam all the open source projects with slop, mark which ones get rejected and which ones get accepted, and bam there's some new training data for Claude Villanelle, and the only time they've wasted is other people's.

[–] rumba@lemmy.zip 10 points 2 days ago

I've been pondering why all the FOSS PR slop for ages, this HAS to be it.

[–] Tartas1995@discuss.tchncs.de 2 points 1 day ago* (last edited 1 day ago)

Ai written code is not copyright-able. I wonder if that is connected to this.

And given that ai generated content (at least used to) poisons generative AIs... and open source is used to trade AIs...

[–] redsand@infosec.pub 23 points 2 days ago (1 children)

They're fluffing their résumé before the bubble pops. Don't hire these clowns, interview them and ask about their code.

[–] StarryPhoenix97@lemmy.world 3 points 2 days ago

Oh, I didn’t even consider that. Like using open source code to train their program and refine it's coding capabilities.

[–] aliser@lemmy.world 4 points 2 days ago

put some prompt hijacking stuff into your contributing guide so that the slop generators identify themselves, then just ban them. or even better, make some kind of publicly available list of those accounts or EVEN better, a browser extension. fuck ai

[–] GalacticGrapefruit@lemmy.world 18 points 2 days ago

Oh, that is slimy as fuck. 😡

[–] JackbyDev@programming.dev 15 points 2 days ago

Lmao, the bad example "1-shotted by Claude"

[–] hperrin@lemmy.ca 85 points 3 days ago (1 children)

If that’s not illegal, it certainly should be.

[–] skisnow@lemmy.ca 28 points 3 days ago (4 children)

For sure they know they shouldn't be doing it, otherwise they wouldn't be trying to hide it.

load more comments (4 replies)
[–] wesker@lemmy.sdf.org 89 points 3 days ago* (last edited 2 days ago) (11 children)

AI slob lobs AI slop ontop of open-source crop.

EDIT: These 3 Joes coincidentally all downvoted me lmao

  • @veiwtifuljoe@lemmy.world
  • @josephfrusetta@lemmy.world
  • @sporadicallyjoe@lemmy.world
[–] Peruvian_Skies@sh.itjust.works 22 points 3 days ago (4 children)

Sloppers slop slop all over FOSS ops.

load more comments (4 replies)
load more comments (10 replies)
[–] tristan@tarte.nuage-libre.fr 49 points 3 days ago (2 children)

PSA: Prompting an LLM at length about what not to do is the best way to prime it to do that very thing. You’re loading a lot of tokens in memory and expecting a single “not” to do all the heavy lifting.

This is adjacent to ironic process theory.

[–] te_abstract_art@lemmy.world 12 points 3 days ago (4 children)

Is this necessarily true? I remember seeing an article a while back suggesting that prompting "do not hallucinate" is enough to meaningfully reduce the risk of hallucinations in the output.

From my fairly superficial understanding of how LLMs work, "don't do X" will plot a completely different vector for the "X" semantic dimension than prompting "do X". This is different to telling a human, for example, to not think about elephants (congratulations, you're now thinking about elephants. Aren't they cute. Look at that little trunk and smiley mouth)

[–] tristan@tarte.nuage-libre.fr 6 points 2 days ago

Thank you for your reply. I realised I don’t have enough deep knowledge about LLMs apart from empirical experience from working with it to confidently answer your question. It would be interesting to find (or create if it doesn’t exist) more research on the subject.

load more comments (3 replies)
load more comments (1 replies)
[–] DylanMc6@lemmy.dbzer0.com 6 points 2 days ago

The open-source developers should fight back with anti-AI spam

[–] eestileib@lemmy.blahaj.zone 26 points 3 days ago (10 children)

One of my loved ones is defending this and I am having a moral crisis over my relationship with her because of that.

load more comments (10 replies)
[–] TheDoctorDonna@piefed.ca 21 points 3 days ago (2 children)

The company I work for keeps trying to push Claude on us, even is company "social" situations. I never bothered to sign up for an account back when we were prompted so I guess I miss out...oh no?

No, wait - the opposite of oh no.

load more comments (2 replies)
[–] GreenBeanMachine@lemmy.world 10 points 3 days ago* (last edited 3 days ago)

Interesting comments in the mastodon thread, some idiot people will bend over backwards to defend AI slop.

[–] slaacaa@lemmy.world 14 points 3 days ago

Just what the internet needed, more AI slop

[–] r1veRRR@feddit.org 8 points 2 days ago (5 children)

I get the idea of hating this, but there's really absolutely nothing revolutionary about this. Being "undercover" is as trivial as "commit this, do not mention AI".

In the end, at least with code, it's the actual resulting quality that is the main determinant of what should be accepted or not.

[–] ayyy@sh.itjust.works 46 points 2 days ago

You sound like someone who hasn’t had to waste countless hours of their life wading through bullshit merge request spam.

[–] grrgyle@slrpnk.net 11 points 2 days ago

Not trying to be glib, but I don't think you do get the idea of hating this.

[–] Cellari@lemmy.world 19 points 2 days ago (1 children)

So... you think ignoring the rules set by others is allowed if you can bypass them? Because it really does tell much if a repo states it does not want AI generated code, but Claude hides the fact.

[–] CultLeader4Hire@lemmy.world 11 points 2 days ago

I feel like you’re responding to a person who doesn’t understand consent is about saying yes not about saying no

load more comments (2 replies)
[–] yuriRO@lemmy.dbzer0.com 17 points 3 days ago (5 children)

Why are anthropic employees contributing on open source projects? Aren't they super busy with being at the company? How the repo owner knows they are anthropic employees? Maybe im over thinking this, please explain >;0

[–] BradleyUffner@lemmy.world 27 points 3 days ago

It's all tests to see if the AI can go undetected. They are using it as a measure of "quality".

[–] JackbyDev@programming.dev 7 points 2 days ago

Many corporations contribute back to open source projects they use. That in itself is not anything new or even shady. Microsoft really put a lot of work into git (not to be confused with buying GitHub). But being opaque about how you're making the code is, at the very least, disingenuous.

load more comments (3 replies)
[–] prenatal_confusion@feddit.org 15 points 3 days ago (13 children)

Isn't it crazy that 5 years ago we struggled to have a software understand normal sentences? Now this block of text is parsed and the instructions followed. Impressive!

Not trying to flame, honestly Impressed by some aspects of AI. And I know I am using the the term understood loosely.

[–] drath@lemmy.world 3 points 2 days ago (1 children)

I'm more disappointed that LLM's have proven that, to pass Turing test for most people, all you need is essentially a roided out Markov chain. We thought of ourselves as the most advanced species with incredibly complex communications, but it turned out to be mostly yapping in the end...

load more comments (1 replies)
load more comments (12 replies)
load more comments
view more: next ›