Ai written code is not copyright-able. I wonder if that is connected to this.
And given that ai generated content (at least used to) poisons generative AIs... and open source is used to trade AIs...
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.
Ai written code is not copyright-able. I wonder if that is connected to this.
And given that ai generated content (at least used to) poisons generative AIs... and open source is used to trade AIs...
Ai pushers are dishonest and malicious.
In other news, water is liquid. More at 11.
put some prompt hijacking stuff into your contributing guide so that the slop generators identify themselves, then just ban them. or even better, make some kind of publicly available list of those accounts or EVEN better, a browser extension. fuck ai
I don't have a problem with AI assisting with open source projects. On its face, it could be helpful to clean up some basic coding problems so a person with skill can come in and update later or remove it if it's truly awful code. But then I remember that there's always an angle. On top of all the other issues with AI coding, what happens if Anthropic tries to pull some legal shenanigans and say that they wrote most of the code, so they own the project? What if they are writing in backdoors and vulnerabilities?
Like I said, on its face it sounds okay, but any time a corporation tries to touch a public project, things go wonky.
Bigger problem is AI writes so much code and adds so many features that are jank, if the commits are accepted the whole project risks being like a jank. I doubt anthropic can claim open source project is their work.
The open-source developers should fight back with anti-AI spam
They're fluffing their résumé before the bubble pops. Don't hire these clowns, interview them and ask about their code.
Oh, I didn’t even consider that. Like using open source code to train their program and refine it's coding capabilities.
Lmao, the bad example "1-shotted by Claude"
Oh, that is slimy as fuck. 😡
It's a great way to get free training for their next model, courtesy of unwitting OSS reviewers.
Spam all the open source projects with slop, mark which ones get rejected and which ones get accepted, and bam there's some new training data for Claude Villanelle, and the only time they've wasted is other people's.
I've been pondering why all the FOSS PR slop for ages, this HAS to be it.
If that’s not illegal, it certainly should be.
For sure they know they shouldn't be doing it, otherwise they wouldn't be trying to hide it.
PSA: Prompting an LLM at length about what not to do is the best way to prime it to do that very thing. You’re loading a lot of tokens in memory and expecting a single “not” to do all the heavy lifting.
This is adjacent to ironic process theory.
Is this necessarily true? I remember seeing an article a while back suggesting that prompting "do not hallucinate" is enough to meaningfully reduce the risk of hallucinations in the output.
From my fairly superficial understanding of how LLMs work, "don't do X" will plot a completely different vector for the "X" semantic dimension than prompting "do X". This is different to telling a human, for example, to not think about elephants (congratulations, you're now thinking about elephants. Aren't they cute. Look at that little trunk and smiley mouth)
Thank you for your reply. I realised I don’t have enough deep knowledge about LLMs apart from empirical experience from working with it to confidently answer your question. It would be interesting to find (or create if it doesn’t exist) more research on the subject.
AI slob lobs AI slop ontop of open-source crop.
EDIT: These 3 Joes coincidentally all downvoted me lmao
I get the idea of hating this, but there's really absolutely nothing revolutionary about this. Being "undercover" is as trivial as "commit this, do not mention AI".
In the end, at least with code, it's the actual resulting quality that is the main determinant of what should be accepted or not.
Not trying to be glib, but I don't think you do get the idea of hating this.
You sound like someone who hasn’t had to waste countless hours of their life wading through bullshit merge request spam.
So... you think ignoring the rules set by others is allowed if you can bypass them? Because it really does tell much if a repo states it does not want AI generated code, but Claude hides the fact.
I feel like you’re responding to a person who doesn’t understand consent is about saying yes not about saying no
Interesting comments in the mastodon thread, some idiot people will bend over backwards to defend AI slop.