this post was submitted on 16 Mar 2026
373 points (95.4% liked)

Fuck AI

6472 readers
512 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago

Related:

This is in a PR where Shougo, another long-time contributor, communicates entirely in walls of unparseable AI slop text: https://github.com/vim/vim/pull/19413

Thank you for the detailed feedback! I've addressed all the issues:

Thank you for the feedback! I agree that following the Vim 8+ naming convention makes sense.

Thank you for the feedback on naming!

Thanks for the suggestion! After thinking about this more, I believe repeat_set() / repeat_get() is the right choice:

Thank you for the feedback. A brief clarification.

https://hachyderm.io/@AndrewRadev/116176001750596207

@AndrewRadev@hachyderm.io

[–] LiveLM@lemmy.zip 20 points 6 days ago

Truly nothing is sacred lmaoooooo

[–] AeonFelis@lemmy.world 9 points 5 days ago

TBH I don't really mind when LLMs are used for code reviews. My main issue[^1] with coding assistants is that the people using them don't thoroughly verify the code the assistants emit (that would be too much work. Remember - reading code is harder than writing it) and thus they often push junk into the codebase and blame the AI for the bad quality when it crashes. But with code reviews there is no such risk, because you still have to read and understand the comments and decide on your own how to resolve them.

[^1]: Quality issue - I'm not talking about the ethical issues here.

Some caveats:

  • It must be disclosed that the comment was generated by AI. Disagreeing with a human reviewer (who's usually the maintainer) and disagreeing with an LLM are very different beasts.
  • If the submitter disagrees with an AI comment, and the reviewer agrees with the model's initial criticism - the reviewer[^2] needs to defend it themselves, not delegate the argument back to the LLM.

[^2]: Regular Open Source etiquette applies, of course. The reviewer is always allowed to reject the PR and ask the submitter to kindly fuck off.

[–] badbytes@lemmy.world 2 points 6 days ago

IMHO, the logo shouldn't have the anti-AI symbol. I like the quill. Maybe a more positive DNA symbol.

[–] IronBird@lemmy.world 0 points 5 days ago

least they're using claude and not chatgpt

[–] hperrin@lemmy.ca 190 points 1 week ago (7 children)

I spent literally all day yesterday working on this:

https://sciactive.com/human-contribution-policy/

I’ve started to add it to my projects. Eventually, it will be on all of my projects. I made it so that any project could adopt it, or modify it to their needs. It’s got a thorough and clear definition of what is banned, too, so it should help settle any arguments over pull requests.
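For illustration, a minimal, hypothetical CONTRIBUTING.md excerpt for adopting it might look like this (the section title and exact wording here are just a sketch, not language taken from the policy itself):

    ## Contribution Policy

    This project follows the Human Contribution Policy:
    https://sciactive.com/human-contribution-policy/

    By submitting a pull request, you affirm that your contribution is your
    own human-authored work and is not, in whole or in meaningful part, the
    output of a generative AI model.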

Hopefully more projects will outright ban AI generated code (and other AI generated material).

[–] thethunderwolf@lemmy.dbzer0.com 18 points 6 days ago (1 children)

this is cool

you should make a post about this somewhere here on Lemmy

people should know about it

[–] hperrin@lemmy.ca 13 points 6 days ago

Ok, yeah, I’ll make a post for it.

Feel free to share it anywhere. :)

[–] Bibip@programming.dev 4 points 6 days ago (1 children)

hi, i have strong feelings about the use of genai but i come at it from a very different direction (story writing). it's possible for someone to throw together a 300 page story book in an afternoon - in the style of lovecraft if they want, or brandon sanderson, or dan brown (dan brown always sounds the same and so we might not even notice). now, the assumption that i have about said 300 pager is that it will be dogshit, but art is subjective and someone out there has been beside themselves pining for it.

but this has always been true. there have always been people churning out trash hoping to turn a buck. the fact that they can do it faster now doesn't change that they're still in the trash market.

so: i keep writing. i know that my projects will be plagiarized by tech companies. i tell myself that my work is "better" than ai slop.

for you, things are different. writing code is a goal-oriented creative endeavor, but the bar for literature is enjoyment, and the bar for code is functionality. with that in mind, i have some questions:

if someone used genai to generate code snippets and they were able to verify the output, what's the problem? they used an ersatz gnome to save them some typing. if generated code is indistinguishable from human code, how does this policy work?

for code that's been flagged as ai generated - and let's assume it's obvious, they left a bunch of GPT comments all over the place - is the code bad because it's genai or is it bad because it doesn't work?

i'm interested to hear your thoughts

[–] hperrin@lemmy.ca 6 points 5 days ago* (last edited 5 days ago)

That’s a very good question, and I appreciate it.

I put a lot of this in the reasoning section of the policy, but basically there are legal, quality, security, and community reasons. Even if the quality and security reasons are solved (as you’re proposing with the “indistinguishable from human code” aspect), there are still legal and community reasons.

Legal

AI generated material is not copyrightable, and therefore licensing restrictions on it cannot be enforced. It’s considered public domain, so putting that code into your code base makes your license much less enforceable.

AI generated material might be too similar to its copyrighted training data, making it actually copyrighted by the original author. We’ve seen OpenAI and Midjourney get sued for regurgitating their training data. It’s not farfetched to think a copyright owner could go after a project for distributing their copyrighted material after an AI regurgitated it.

Community

People have an implicit trust that the maintainers of a project understand the code. When AI generated code is included, that may not be the case, and that implicit trust is broken.

Admittedly, I’ve never seen AI generated code that I couldn’t understand, but it’s reasonable to think that as AI models get bigger and more capable of producing abstract code, their code could become too obscure or abstracted to be sufficiently understood by a project maintainer.

[–] xvapx@lemmy.world 2 points 6 days ago

That's great, thank you!
Added to my project's repo.

[–] gaiety@lemmy.blahaj.zone -1 points 4 days ago (1 children)

This is super cool!

Did want to offer one language critique: it's easy to jump to the word "human" as the opposite of AI-made, but there are a lot of therians and adjacent entities in the software engineering space. It would be wonderful for a pro-"human" policy to find language that avoids that word and instead focuses on people of all sorts of identities, so as not to be othering.

Sounds strange to some I'm sure, but this has been coming up more and more with coworkers I've had across several companies. It's kind of like moving from "he or she" to "they". A great example is the writing of beeps, a prominent software engineer who works on the GOV.UK site and its accessibility: https://beeps.website/about/nonhuman/

Regardless of whether any changes are made, thanks for reading and for your policy writeup, again very cool :D

[–] hperrin@lemmy.ca 3 points 4 days ago* (last edited 4 days ago) (1 children)

I would be fine to include more inclusive language, except that I want to be in line with the wording the US Copyright Office uses, as a major goal of this policy is to ensure that every contribution is copyrightable. They specifically use the word human, and go so far as to say that it is only human authorship that can make something copyrightable.

There was a landmark case where a monkey took a selfie, and the courts decided that the picture could not be copyrighted. In the court’s decision, again, it’s specifically “human” authorship that was the requirement for copyright.

The U.S. Copyright Office will register an original work of authorship, provided that the work was created by a human being.

Similarly, the Office will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.

- https://www.copyright.gov/comp3/chap300/ch300-copyrightable-authorship.pdf

In my opinion, “person” would be a better term to use, since the personhood of the author is really what matters, but since this is meant to provide legal protection, I’m pushed toward the term “human”. Also, “person” could be confused with the concept of a “legal person”, which includes corporations. A corporation itself cannot be an author, but it can own copyrights.

Maybe I should add this to a portion near the bottom of the page to provide the reasoning behind sticking to the term, despite the desire to be inclusive.

[–] gaiety@lemmy.blahaj.zone 2 points 3 days ago (1 children)

honestly, an amazing and respectable answer with solid reasoning

up to you if you'd like to add a footnote, either way I'm rooting for you this is good stuff

[–] hperrin@lemmy.ca 1 points 3 days ago

I added several quotes from the copyright office’s guidance that show their specific usage of the term “human authorship” to the More Information section. :)

One interesting thing is that they explicitly say that a work that is “authored by non-human spiritual beings” can only qualify for copyright protection if there is “human selection and arrangement of the revelations”, and even then, only the compilation is copyrighted, not the “divine messages”.

[–] thethunderwolf@lemmy.dbzer0.com 1 points 6 days ago (1 children)

“AI generated” means that the subject material is in whole, or in meaningful part, the output of a generative AI model or models, such as a Large Language Model. This does not include code that is the result of non-generative tools, such as standard compilers, linters, or basic IDE auto-completions. This does, however, include code that is the result of code block generators and automatic refactoring tools that make use of generative AI models.

As "artificial intelligence" is not that well defined, you could clarify what the policy defines "AI" as by specifying that "AI" involves machine learning.

[–] hperrin@lemmy.ca 11 points 6 days ago

“Generative AI model” is a pretty well defined term, so this prohibits all of those things like ChatGPT, Gemini, Claude Code, Stable Diffusion, Midjourney, etc.

Machine learning is a much more broad category, so banning all outputs of machine learning may have unintended consequences.

[–] PlutoniumAcid@lemmy.world 34 points 1 week ago (11 children)

I like this approach, but how can it be enforced? Would you have to read every line and rely on gut feeling?

[–] Cethin@lemmy.zip 1 points 5 days ago

Obviously you ask an LLM if any of it was generated!

[–] hperrin@lemmy.ca 91 points 1 week ago (1 children)

Basically the best you can do is continue as normal, and if someone submits something that says it is or obviously is AI, point to this policy and reject it. Just having the policy should be a decent deterrent.

[–] Jankatarch@lemmy.world 24 points 1 week ago* (last edited 1 week ago)

Same mindset as "You don't need a perfect lock to protect your house from thieves, you just need one better than what your neighbors have."

If a vibecoder sees this, they will not bother with obfuscation and will simply move on to the next project.

[–] hayvan@piefed.world 70 points 1 week ago (5 children)

The devs do have my sympathy; they dedicate their time and energy to these projects and start burning out.
The solution obviously shouldn't be drowning it in slop. They should just slow down. Vim has been an excellent and functional tool for many years now; it doesn't need more speed.
There are better ways to use LLMs as a productivity tool.

[–] unexposedhazard@discuss.tchncs.de 52 points 1 week ago* (last edited 1 week ago) (1 children)

I see this excuse of burnout every time it comes to LLM use, but I honestly do not buy it. You can't tell me every other dev out there just burnt out at the same time, in sync with the release of LLM coding assistants. If you use LLMs like this you simply don't care about the project anymore and should move on with your life. It's better for everyone if it gets abandoned by the original dev and forked by ones who care. Sometimes you just gotta let go.

[–] fdnomad@programming.dev 56 points 1 week ago (4 children)

It's such a monumental waste of LLMs to include these slop phrases.

Employee 1 enters a prompt to send a slop mail that is so garbage it is unbearable to read using a brain.

So employee 2 either summarizes the slop mail using an LLM too or skips obtaining the information entirely and just goes straight to answering by prompting the next slop mail.

I wonder if that's by design - to make interacting with slop so painful that human-to-human communication will not happen without an LLM in between anymore.

[–] grandma@sh.itjust.works 44 points 1 week ago

AI psychosis

[–] chonglibloodsport@lemmy.world 43 points 1 week ago (2 children)

Shougo is Japanese. I’m guessing he communicates like that because he uses translation rather than trying to communicate in broken English.

[–] Crozekiel@lemmy.zip -1 points 5 days ago (1 children)

That's cool and all, but also they obviously are not just using it to translate. Those are an LLM's words, not a human's, and it is painfully clear. It doesn't even seem like a person is "behind the wheel" at all. As soon as someone disagrees with them, they basically just apologize for "getting it wrong" and do whatever that person told them. They actually go back and forth on the naming convention based solely on the most recent comment. It's typical LLM "agree with the person no matter what" behavior.

[–] chonglibloodsport@lemmy.world 1 points 5 days ago (1 children)

Okay that’s really strange. I can only speculate on why they’re doing that. I do know that Shougo is a very long-term contributor to vim’s plugin ecosystem. I can’t imagine why he would be doing this if it weren’t just a language barrier issue.

[–] hexagonwin@lemmy.today 27 points 1 week ago (5 children)

wtf. i really like vim. is everyone really using neovim instead and there's no good dev maintaining vim now?

[–] redsand@infosec.pub 2 points 6 days ago* (last edited 6 days ago) (1 children)

I never liked vim. This got me to try Micro, and now the only time I'm going to use Vim is if I'm forced to on a remote system where I can't install it or nano. I may strip it out of my systems entirely. I really don't need something so complicated to edit the sudoers file.

[–] hexagonwin@lemmy.today 2 points 5 days ago (1 children)

i use vim keybinds on my web browser too, it's very convenient once i got used to it. but yeah i understand it's not really for everyone.

[–] redsand@infosec.pub 1 points 5 days ago

I understand it, and DEs like sway, but keybind life is not for me. If I need more than micro I'll just use a full-blown IDE.
