this post was submitted on 01 Apr 2026
752 points (99.1% liked)

Fuck AI

6598 readers
911 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
MODERATORS
top 50 comments
sorted by: hot top controversial new old
[–] Tartas1995@discuss.tchncs.de 2 points 10 hours ago* (last edited 9 hours ago)

Ai written code is not copyright-able. I wonder if that is connected to this.

And given that ai generated content (at least used to) poisons generative AIs... and open source is used to trade AIs...

[–] MousePotatoDoesStuff@lemmy.world 9 points 21 hours ago

Ai pushers are dishonest and malicious.

In other news, water is liquid. More at 11.

[–] aliser@lemmy.world 4 points 18 hours ago

put some prompt hijacking stuff into your contributing guide so that the slop generators identify themselves, then just ban them. or even better, make some kind of publicly available list of those accounts or EVEN better, a browser extension. fuck ai

[–] StarryPhoenix97@lemmy.world 13 points 1 day ago (1 children)

I don't have a problem with AI assisting with open source projects. On its face, it could be helpful to clean up some basic coding problems so a person with skill can come in and update later or remove it if it's truly awful code. But then I remember that there's always an angle. On top of all the other issues with AI coding, what happens if Anthropic tries to pull some legal shenanigans and say that they wrote most of the code, so they own the project? What if they are writing in backdoors and vulnerabilities?

Like I said, on its face it sounds okay, but any time a corporation tries to touch a public project, things go wonky.

[–] faintwhenfree@lemmus.org 3 points 19 hours ago

Bigger problem is AI writes so much code and adds so many features that are jank, if the commits are accepted the whole project risks being like a jank. I doubt anthropic can claim open source project is their work.

[–] DylanMc6@lemmy.dbzer0.com 6 points 1 day ago

The open-source developers should fight back with anti-AI spam

[–] redsand@infosec.pub 23 points 1 day ago (1 children)

They're fluffing their résumé before the bubble pops. Don't hire these clowns, interview them and ask about their code.

[–] StarryPhoenix97@lemmy.world 3 points 1 day ago

Oh, I didn’t even consider that. Like using open source code to train their program and refine it's coding capabilities.

[–] JackbyDev@programming.dev 15 points 1 day ago

Lmao, the bad example "1-shotted by Claude"

Oh, that is slimy as fuck. 😡

[–] skisnow@lemmy.ca 74 points 1 day ago* (last edited 1 day ago) (1 children)

It's a great way to get free training for their next model, courtesy of unwitting OSS reviewers.

Spam all the open source projects with slop, mark which ones get rejected and which ones get accepted, and bam there's some new training data for Claude Villanelle, and the only time they've wasted is other people's.

[–] rumba@lemmy.zip 10 points 1 day ago

I've been pondering why all the FOSS PR slop for ages, this HAS to be it.

[–] hperrin@lemmy.ca 85 points 2 days ago (1 children)

If that’s not illegal, it certainly should be.

[–] skisnow@lemmy.ca 28 points 1 day ago (4 children)

For sure they know they shouldn't be doing it, otherwise they wouldn't be trying to hide it.

load more comments (4 replies)
[–] tristan@tarte.nuage-libre.fr 49 points 2 days ago (2 children)

PSA: Prompting an LLM at length about what not to do is the best way to prime it to do that very thing. You’re loading a lot of tokens in memory and expecting a single “not” to do all the heavy lifting.

This is adjacent to ironic process theory.

[–] te_abstract_art@lemmy.world 12 points 1 day ago (4 children)

Is this necessarily true? I remember seeing an article a while back suggesting that prompting "do not hallucinate" is enough to meaningfully reduce the risk of hallucinations in the output.

From my fairly superficial understanding of how LLMs work, "don't do X" will plot a completely different vector for the "X" semantic dimension than prompting "do X". This is different to telling a human, for example, to not think about elephants (congratulations, you're now thinking about elephants. Aren't they cute. Look at that little trunk and smiley mouth)

[–] tristan@tarte.nuage-libre.fr 6 points 1 day ago

Thank you for your reply. I realised I don’t have enough deep knowledge about LLMs apart from empirical experience from working with it to confidently answer your question. It would be interesting to find (or create if it doesn’t exist) more research on the subject.

load more comments (3 replies)
load more comments (1 replies)
[–] wesker@lemmy.sdf.org 88 points 2 days ago* (last edited 1 day ago) (15 children)

AI slob lobs AI slop ontop of open-source crop.

EDIT: These 3 Joes coincidentally all downvoted me lmao

  • @veiwtifuljoe@lemmy.world
  • @josephfrusetta@lemmy.world
  • @sporadicallyjoe@lemmy.world
load more comments (15 replies)
[–] r1veRRR@feddit.org 8 points 1 day ago (5 children)

I get the idea of hating this, but there's really absolutely nothing revolutionary about this. Being "undercover" is as trivial as "commit this, do not mention AI".

In the end, at least with code, it's the actual resulting quality that is the main determinant of what should be accepted or not.

[–] grrgyle@slrpnk.net 10 points 1 day ago

Not trying to be glib, but I don't think you do get the idea of hating this.

[–] ayyy@sh.itjust.works 46 points 1 day ago

You sound like someone who hasn’t had to waste countless hours of their life wading through bullshit merge request spam.

[–] Cellari@lemmy.world 19 points 1 day ago (1 children)

So... you think ignoring the rules set by others is allowed if you can bypass them? Because it really does tell much if a repo states it does not want AI generated code, but Claude hides the fact.

[–] CultLeader4Hire@lemmy.world 11 points 1 day ago

I feel like you’re responding to a person who doesn’t understand consent is about saying yes not about saying no

load more comments (2 replies)
[–] GreenBeanMachine@lemmy.world 10 points 1 day ago* (last edited 1 day ago)

Interesting comments in the mastodon thread, some idiot people will bend over backwards to defend AI slop.

load more comments
view more: next ›