this post was submitted on 16 Mar 2026
369 points (95.3% liked)

Fuck AI


Related:

This is in a PR where Shougo, another long-time contributor, communicates entirely in walls of unparseable AI slop text: https://github.com/vim/vim/pull/19413

> Thank you for the detailed feedback! I've addressed all the issues:

> Thank you for the feedback! I agree that following the Vim 8+ naming convention makes sense.

> Thank you for the feedback on naming!

> Thanks for the suggestion! After thinking about this more, I believe repeat_set() / repeat_get() is the right choice:

> Thank you for the feedback. A brief clarification.

https://hachyderm.io/@AndrewRadev/116176001750596207

@AndrewRadev@hachyderm.io

[–] hperrin@lemmy.ca 188 points 2 days ago (7 children)

I spent literally all day yesterday working on this:

https://sciactive.com/human-contribution-policy/

I’ve started to add it to my projects, and eventually it will be on all of them. I made it so that any project can adopt it, or modify it to their needs. It’s also got a thorough, clear definition of what is banned, so it should help settle any arguments over pull requests.
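For what it’s worth, adopting it can be as lightweight as committing the policy text to the repo and pointing to it from the contribution docs. A hypothetical sketch (the file name and wording here are mine, not taken from the actual policy):

```markdown
<!-- CONTRIBUTING.md (hypothetical excerpt) -->
## Human contribution policy

This project only accepts human-authored contributions. AI generated
code, documentation, art, and other material will be rejected. See
[HUMAN_CONTRIBUTION_POLICY.md](./HUMAN_CONTRIBUTION_POLICY.md) for the
full policy, including its definition of "AI generated".
```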

Hopefully more projects will outright ban AI generated code (and other AI generated material).

[–] gaiety@lemmy.blahaj.zone 0 points 1 hour ago

This is super cool!

I did want to offer one language critique: it's easy to reach for the word "human" as the opposite of AI-made, but there are a lot of therians and adjacent entities in the software engineering space. It would be wonderful to find language for a pro-"human" policy that avoids that word and instead focuses on people of all sorts of identities, so as not to be othering.

Sounds strange to some, I'm sure, but this has been coming up more and more with coworkers I've had across several companies. It's kind of like moving from "he or she" to "they". A great example is the writings of beeps, a prominent software engineer who works on the GOV.UK site and its accessibility: https://beeps.website/about/nonhuman/

Regardless of whether any changes are made, thanks for reading and for your policy writeup; again, very cool :D

[–] Bibip@programming.dev 4 points 1 day ago (1 children)

hi, i have strong feelings about the use of genai but i come at it from a very different direction (story writing). it's possible for someone to throw together a 300-page storybook in an afternoon - in the style of lovecraft if they want, or brandon sanderson, or dan brown (dan brown always sounds the same, so we might not even notice). now, my assumption about said 300-pager is that it will be dogshit, but art is subjective and someone out there has been beside themselves pining for it.

but this has always been true. there have always been people churning out trash hoping to turn a buck. the fact that they can do it faster now doesn't change that they're still in the trash market.

so: i keep writing. i know that my projects will be plagiarized by tech companies. i tell myself that my work is "better" than ai slop.

for you, things are different. writing code is a goal-oriented creative endeavor, but the bar for literature is enjoyment, and the bar for code is functionality. with that in mind, i have some questions:

if someone used genai to generate code snippets and they were able to verify the output, what's the problem? they used an ersatz gnome to save them some typing. if generated code is indistinguishable from human code, how does this policy work?

for code that's been flagged as ai generated (and let's assume it's obvious: they left a bunch of GPT comments all over the place), is the code bad because it's genai, or is it bad because it doesn't work?

i'm interested to hear your thoughts

[–] hperrin@lemmy.ca 6 points 1 day ago* (last edited 1 day ago)

That’s a very good question, and I appreciate it.

I put a lot of this in the reasoning section of the policy, but basically there are legal, quality, security, and community reasons. Even if the quality and security reasons are solved (as you’re proposing with the “indistinguishable from human code” aspect), there are still legal and community reasons.

Legal

AI generated material is not copyrightable, and therefore licensing restrictions on it cannot be enforced. It’s considered public domain, so putting that code into your code base makes your license much less enforceable.

AI generated material might be too similar to its copyrighted training data, making it actually copyrighted by the original author. We’ve seen OpenAI and Midjourney get sued for regurgitating their training data. It’s not farfetched to think a copyright owner could go after a project for distributing their copyrighted material after an AI regurgitated it.

Community

People have an implicit trust that the maintainers of a project understand the code. When AI generated code is included, that may not be the case, and that implicit trust is broken.

Admittedly, I’ve never seen AI generated code that I couldn’t understand, but it’s reasonable to think that as AI models get bigger and more capable of producing abstract code, their code could become too obscure or abstracted to be sufficiently understood by a project maintainer.

[–] thethunderwolf@lemmy.dbzer0.com 17 points 2 days ago (1 children)

this is cool

you should make a post about this somewhere here on Lemmy

people should know about it

[–] hperrin@lemmy.ca 13 points 2 days ago

Ok, yeah, I’ll make a post for it.

Feel free to share it anywhere. :)

[–] PlutoniumAcid@lemmy.world 35 points 2 days ago (4 children)

I like this approach, but how can it be enforced? Would you have to read every line and listen to a gut feeling?

[–] Cethin@lemmy.zip 1 points 1 day ago

Obviously you ask an LLM if any of it was generated!

[–] hperrin@lemmy.ca 89 points 2 days ago (1 children)

Basically, the best you can do is continue as normal, and if someone submits something that is labeled as AI generated or is obviously AI generated, point to this policy and reject it. Just having the policy should be a decent deterrent.
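Purely as a hypothetical sketch of how the "point to this policy" step could be made mechanical: pair the policy with a pull-request template checkbox and a tiny CI script that fails when the box is unticked. None of this is in the policy itself; the checkbox wording, file names, and script below are assumptions.

```python
# Hypothetical CI helper: fail the build when a PR lacks the human-authorship
# attestation from an (assumed) pull request template. Illustrative only.
import os
import sys

import requests  # third-party: pip install requests

# Assumed checkbox line from a hypothetical PULL_REQUEST_TEMPLATE.md
ATTESTATION = "[x] I attest that this contribution contains no AI generated material"


def pr_body(owner: str, repo: str, number: int, token: str) -> str:
    """Fetch the pull request description via the GitHub REST API."""
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/pulls/{number}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("body") or ""


if __name__ == "__main__":
    owner, repo, number = sys.argv[1], sys.argv[2], int(sys.argv[3])
    body = pr_body(owner, repo, number, os.environ["GITHUB_TOKEN"])
    if ATTESTATION not in body:
        print("Missing human-contribution attestation; see the policy.")
        sys.exit(1)  # non-zero exit fails the CI job
    print("Attestation present.")
```

To be clear, this detects nothing; it just forces contributors to make an explicit attestation that a maintainer can point back to when rejecting a PR, which is the deterrent effect described above.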

[–] Jankatarch@lemmy.world 23 points 2 days ago* (last edited 2 days ago)

Same mindset as "You don't need a perfect lock to protect your house from thieves, you just need one better than your neighbors have."

If a vibecoder sees this, they won't bother with obfuscation and will simply move on to the next project.

[–] xvapx@lemmy.world 2 points 2 days ago

That's great, thank you!
Added to my project's repo.

[–] thethunderwolf@lemmy.dbzer0.com 1 points 2 days ago (1 children)

> “AI generated” means that the subject material is in whole, or in meaningful part, the output of a generative AI model or models, such as a Large Language Model. This does not include code that is the result of non-generative tools, such as standard compilers, linters, or basic IDE auto-completions. This does, however, include code that is the result of code block generators and automatic refactoring tools that make use of generative AI models.

As "artificial intelligence" is not that well defined, you could clarify what the policy defines "AI" as by specifying that "AI" involves machine learning.

[–] hperrin@lemmy.ca 11 points 2 days ago

“Generative AI model” is a pretty well-defined term, so this prohibits all of the well-known tools like ChatGPT, Gemini, Claude Code, Stable Diffusion, Midjourney, etc.

Machine learning is a much broader category, so banning all outputs of machine learning could have unintended consequences; for example, it would arguably sweep in non-generative tools like spam filters or ML-ranked autocomplete.