196
Community Rules
You must post before you leave
Be nice. Assume others have good intent (within reason).
Rather than engaging, block or ignore posts, comments, and users that irritate you. Report them if they are actually breaking community rules.
Use content warnings and/or mark as NSFW when appropriate. Most posts with content warnings likely need to be marked NSFW.
Most 196 posts are memes, shitposts, cute images, or just recent things that happened. There is no real theme, but try to avoid posts that are very inflammatory, offensive, very low quality, or very "off topic".
Bigotry is not allowed. This includes (but is not limited to): Homophobia, Transphobia, Racism, Sexism, Ableism, Classism, or discrimination based on things like Ethnicity, Nationality, Language, or Religion.
Avoid shilling for corporations, posting advertisements, or promoting exploitation of workers.
Proselytization, support, or defense of authoritarianism is not welcome. This includes but is not limited to: imperialism, nationalism, genocide denial, ethnic or racial supremacy, fascism, Nazism, Marxism-Leninism, Maoism, etc.
Avoid AI generated content.
Avoid misinformation.
Avoid incomprehensible posts.
No threats or personal attacks.
No spam.
Moderator Guidelines
- Don’t be mean to users. Be gentle or neutral.
- Most moderator actions that have a modlog message should include your username.
- When in doubt about whether or not a user is problematic, send them a DM.
- Don’t waste time debating/arguing with problematic users.
- Assume the best, but don’t tolerate sealioning, “just asking questions”, or concern trolling.
- Ask another mod to take over cases you struggle with, if you get tired, or when things get personal.
- Ask the other mods for advice when things get complicated.
- Share everything you do in the mod matrix, both so several mods aren't unknowingly handling the same issue and so you can receive feedback on what you intend to do.
- Don't rush mod actions. If a case doesn't need to be handled right away, consider taking a short break before getting to it. This is to say, cool down and make room for feedback.
- Don’t perform too much moderation in the comments, except when you want a verdict to be public or to ask people to dial a convo down or stop. Single-comment warnings are okay.
- Send users concise DMs about verdicts that concern them, such as bans, except in cases where it is clear we don’t want them at all, such as obvious transphobes. No need to notify someone that they haven’t been banned, of course.
- Explain to a user why their behavior is problematic and how it is distressing others rather than engaging with whatever they are saying. Ask them to avoid this in the future and send them packing if they do not comply.
- When users break the rules or act inappropriately, first warn them, then temp ban them, and finally perma ban them. Skip steps if necessary.
- Use neutral statements like “this statement can be considered transphobic” rather than “you are being transphobic”.
- No large decisions or actions without community input (e.g. polls or meta posts).
- Large internal decisions (such as ousting a mod) might require a vote, needing more than 50% of the votes to pass. Also consider asking the community for feedback.
- Remember you are a voluntary moderator. You don’t get paid. Take a break when you need one. Perhaps ask another moderator to step in if necessary.
I appreciate that you didn't mean to say what you said, but words mean things. I can only respond to what you say, not what you meant.
Especially here, where the difference entirely changes whether you're right or not.
Because no, "less human code" doesn't mean "less AI training". It could mean a slowdown in how fast you can expand the training dataset, but again, old code doesn't disappear just because you used it for training before. You don't need a novel training dataset to train. The same data we have plus a little bit of new data is MORE training data, not less.
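To put toy numbers on that (all made up, purely to illustrate the arithmetic): even if the amount of new human code produced each year were to halve, the cumulative corpus still grows every year. The pile just gets bigger more slowly; it never shrinks.

```python
# Made-up numbers, purely illustrative: even if NEW human code
# production halves every year, the cumulative training corpus
# only grows, because old code doesn't vanish once it has been
# used for training.

corpus = 100.0        # data we already have (arbitrary units)
new_per_year = 10.0   # new human code produced in year 1 (assumed)

for year in range(1, 6):
    corpus += new_per_year   # the dataset expands by whatever is new
    print(f"year {year}: corpus = {corpus:.2f} units")
    new_per_year *= 0.5      # assume new-code production halves yearly
```

The printed totals go 110, 115, 117.5, and so on: growth slows, the corpus never gets smaller.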
And less human code is absolutely not the same thing as "new human code will stop being created". That's not even a slip of the tongue, those are entirely different concepts.
There is a big difference between arguing that the pace of improvement will slow down (which is probably true even without any data scarcity) and saying that a lack of new human created code will bring AI training to a halt. That is flat out not a thing.
That this leads to "less developments and advancements in programming in general" is also a wild claim. How many brilliant programmers need to get replaced by AI before that's true? Which fields are generating "developments and advancements in programming"? Are those fields being targeted by AI replacements? More or less than other fields? Does that come from academia or the private sector? Is the pace of development slowing down specifically in that area? Is AI generating "developments and advancements" of its own? Is it doing so faster or slower than human coders? Not at all?
People say a lot of stuff here. Again, on both sides of the aisle. If you know the answers to any of those questions you shouldn't be arguing on the Internet, you should be investing in tech stock. Try to do something positive with the money after, too.
I'd say it's more likely you're just wildly extrapolating from relatively high level observations, though.
Hah, alright. I tried to bring this back to a productive conversation, but we don't share the same fundamentals on this topic, nor, apparently, an understanding of grammatical conventions or of how to productively address miscommunications. For example, one of my first responses started by clarifying that "it's not that AI will successfully replace programmers".
I understand that the internet is full of extreme, polarizing takes, and that it's hard to discuss nuance on here.
I'm not trying to give you homework for this conversation - we can absolutely wrap this up.
I just highly recommend that you look into the technological issues of AI training on AI output. If you do discover that I'm wrong, I absolutely do not ask you to return and educate me.
But believe it or not I would be extremely excited to learn I'm wrong, as overcoming that obstacle would be huge for the development of this technology.
Hm. That's rolling the argument back a few steps there. None of the stuff we've talked about in the past few posts has anything to do with the impact of AI-on-AI training.
I mean, you could stretch the idea and argue that there is a filtering problem to be solved or whatever, but that aside everything I'm saying would still be true if AI training exploded any time it's accidentally given a "Hello world" written by a machine.
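To be concrete about what a "filtering problem" would even look like in practice, here's a rough sketch. Every name in it is made up, and the detector itself (the actual hard part) is just a stub:

```python
from typing import Iterable

def looks_ai_generated(snippet: str) -> float:
    """Hypothetical detector returning a score in [0, 1].
    A real one (classifier, provenance metadata, watermark check)
    is the genuinely hard problem; this stub is a placeholder."""
    return 0.0

def filter_corpus(snippets: Iterable[str], threshold: float = 0.5) -> list[str]:
    """Keep only snippets the detector considers likely human-written."""
    return [s for s in snippets if looks_ai_generated(s) < threshold]

corpus = ["print('hello world')", "def add(a, b): return a + b"]
train_set = filter_corpus(corpus)  # what you'd actually train on
```

The pipeline shape is trivial; everything interesting lives inside the detector, which is why "filtering" and "degradation from self-training" are separate problems.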
I didn't roll back anything. The entire conversation has ultimately been us disagreeing on this one point, and we clearly can't overcome that with more back and forth, so I'm happy to agree to disagree. Cheers.
But that point is not the same as LLMs degrading when trained on their own data.
Again, it may be the same as the problem of "how do you separate AI-generated data from human-generated data", so a filtering issue.
But it's not the same as the problem of degradation due to self-training. Which I'm fairly sure you're also misrepresenting, but I REALLY don't want to get into that.
But hey, if you don't want to keep talking about this that's your prerogative. I just want to make it very clear that the reasons why that's... just not a thing have nothing to do with training on AI-generated data. Your depiction is a wild extrapolation even if you were right about how poisonous AI-generated data is.
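For reference, the degradation result people usually cite is demonstrated with setups like this toy one: refit a distribution on samples drawn from the previous fit, over and over, and the spread collapses. It's a Gaussian stand-in, not an LLM, and the sample size is deliberately tiny to exaggerate the effect:

```python
import random
import statistics

# Toy stand-in for "degradation from self-training": each generation
# fits a mean/stdev to a small sample drawn from the previous
# generation's fit, then the next generation samples from that fit.
# Finite samples underestimate the spread on average, so sigma
# tends to drift toward zero over many generations and the
# distribution collapses. A Gaussian analogy, not an LLM experiment.

random.seed(0)
mu, sigma = 0.0, 1.0   # generation-0 "model"
SAMPLE_SIZE = 10       # deliberately small so the drift shows quickly

for gen in range(201):
    if gen % 25 == 0:
        print(f"gen {gen:3d}: mu={mu:+.4f}  sigma={sigma:.6f}")
    data = [random.gauss(mu, sigma) for _ in range(SAMPLE_SIZE)]
    mu = statistics.mean(data)     # refit the "model" on its own output
    sigma = statistics.stdev(data)
```

Note what the toy actually shows: unfiltered recursive self-training on small samples degrades, which says nothing about a cumulative corpus that keeps its human-written data around.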