this post was submitted on 13 Jan 2026
246 points (97.7% liked)
Technology
Idiots: This new technology is still quite ineffective. Let's sabotage its improvement!
Imbeciles: Yeah!
Corpos: Don't steal our stuff! That's piracy!
Also corpos: Your stuff? My stuff now.
Bootlickers: Oh my god this shoe polish is delicious.
Person: Says a thing
Person 2, who disagrees with the thing: YOU'RE A BOOTLICKER!
Super convincing. I'm sure you're going to win people over to your position if you scream loud enough.
You should pick a position: either you support the current copyright system or you don't. You can't have it both ways.
Corporations want the existing copyright system for their own products but simultaneously want to freely scrape data from everyone else.
I see that as a copyright problem, not a specific LLM one.
This issue is largely manifesting through AI scraping right now, and many scrapers intentionally ignore robots.txt. Currently, LLM scrapers are basically just bad actors on the internet. US courts have also ruled in favor of a number of AI companies when they were sued, so it's unlikely anything will change. Effectively, if you don't like the status quo, stuff like this is one of your few options. And that isn't even touching on whether we actually want these companies to improve their models before resolving the problems of energy consumption and potential displacement of human workers.
Crawlers have ignored robots.txt since the very start. Anyway, if THAT is the problem, then THAT is the problem, not LLMs as a whole.
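For context, robots.txt is purely advisory: honoring it is a choice the crawler makes, and Python's standard library even ships a parser for doing so. A minimal sketch of the check a well-behaved crawler performs (the bot name, rules, and URLs here are hypothetical examples):

```python
# Sketch of a polite crawler consulting robots.txt before fetching.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# In practice you'd call rp.set_url(...) and rp.read() to fetch the
# live file; here we parse example rules directly to stay offline.
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

print(rp.can_fetch("MyBot/1.0", "https://example.com/private/page"))  # False
print(rp.can_fetch("MyBot/1.0", "https://example.com/docs/index"))    # True
```

Nothing enforces this check; a scraper that skips it sees exactly the same data, which is why compliance is entirely on the honor system.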
You can tell when you're talking with someone who has adopted the position 'AI bad' without actually understanding the moral positions or technological details behind it, because they confidently repeat details that are clearly nonsense to anybody with knowledge of the subject.
Third thing: Point out obvious hypocrisy.
AI companies could start by, I don't know, maybe asking for permission to scrape a website's data for training? Or by behaving more ethically in general? Perhaps then they wouldn't risk people poisoning the data they clearly never agreed to have used for training?
Why should they ask permission to read freely provided data? Nobody's asking for any permission, but LLM trainers somehow should? And what do you want from them from an ethical standpoint?
Is the only imaginable system for AI to exist one in which every website operator, or musician, artist, writer, etc has no say in how their data is used? Is it possible to have a more consensual arrangement?
As far as the question about ethics, there is a lot of ground to cover on that. A lot of it is being discussed. I'll basically reiterate what I said that pertains to data rights. I believe they are pretty fundamental to human rights, for a lot of reasons. AI is killing open source, and claiming the whole of human experience for its own training purposes. I find that unethical.
Killing open source? How?!
For instance
As I understand it, the guy is talking about consulting. Yes, LLMs are great for reading documentation; that's what they're for. Now people can use those libraries without spending ages reading through docs. That's progress. I see it as a way to get more open source written, because contributing has become simpler and less tedious.
He's jumping ship because it's destroying his ability to eke out a living. The problem isn't a small one, what's happening to him isn't a limited case.
We didn't smash automobiles because horse traders were losing their jobs.
Nobody rioted when 'computer' became an object instead of a white-collar job.
Technology is disruptive, that doesn't make all technology bad or unethical. It is specific people/organizations that are involved in unethical projects, not the technology itself.
It seems that every time someone mentions 'AI Bad', they're really talking about a person who is being unethical. People simply say 'AI' is bad when they mean 'OpenAI' or 'NVIDIA' or 'Microsoft' are unethical.
There are companies that are using ethically sourced data for training AI. For example, Adobe's generative AI is trained on data licensed from artists explicitly for training AI. VoiceSwap.ai is licensing training data from vocalists and employing the artists for fine-tuning as well as sharing the revenue from the resulting product. Common Corpus is a massive LLM training set made of data that is either licensed or unprotected by copyright (public domain books, for example).
I have never once said that AI is bad. Literally everything I've argued pertains to the ethics and application of AI. It's reductive to call all arguments critical of how AI is being implemented "AI bad".
It's not even about it being disruptive, though I do think discussions about that are absolutely warranted. Experts have pointed to potentially catastrophic "disruptions" if AI isn't dealt with responsibly, and we are currently anything but responsible in our handling of it. It's unregulated, running rampant and free everywhere claiming to be all things for all people, leaving a mass of problems in its wake.
If a specific individual or company is committed to behaving ethically, I'm not condemning them. A major point to understand is that those small, ethical actors are the extreme minority. The major players, like those you mentioned, are titans. The problems they create are real.
So? Is he more important than the specialists who can now write code without hiring a consultant?
Yes, they should, because they generate way more traffic. Why do you think people are trying to protect websites from AI crawlers? Because they want to keep public data secret?
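The protections in question usually start with robots.txt opt-out stanzas. GPTBot and CCBot are crawler names their operators have published; whether a given scraper actually honors such rules is exactly what's in dispute:

```
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```

Since these rules are only advisory, operators who find them ignored escalate to user-agent blocking, rate limiting, or poisoning the data they serve.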
Also, everyone knows AI companies used copyrighted materials and private data without permission. If you think they only used public data you're uninformed or lying on their behalf.
I personally consider the current copyright laws completely messed up, so I see no problem in using any data technically available for processing.
Ok, so you think it's ok for big companies to break the laws you don't like, cool. I'm sure those big companies won't sue you when you infringe on their rights under some law you don't like.
And I like the way you just ignored the two other issues I mentioned. Are you fine with AI bots slowing sites like Codeberg to a crawl? Are you fine with AI companies using personal data without consent?
I'm fine with companies using any freely available data.
I'm also fine with them using data they can get for free like, I don't know, weather data they collect themselves?
Data hosted by private individuals and open source projects is not free. Someone has to pay for hosting, and AI companies sucking up data with armies of bots are driving the cost of hosting beyond the means of those people and projects. They are shifting the costs of providing the "free" data onto the community while keeping all the profits.
Private data used without consent is also not free. It's valuable, protected data and AI companies are simply stealing it. Do you consider stolen things free?
I see your attitude as "they don't hurt me personally, and I don't care what they do to other people". It's either ignorant or outright antisocial. Also a bit bootlickish.
Data is available, therefore it is... well, available. You don't want to pay to host it? Then don't. LLM companies don't hack your servers. They read only the data that you have provided voluntarily.
Still ignorant, antisocial and a little bit bootlickish.
Do you think that using ad hominem instead of responding to a person's point makes you enlightened, pro-social and independent?
It makes you look like another toxic Internet person, indistinguishable from the FUD bots that swarm social media.
What was his point? He just repeated the same position that in my opinion is ignorant and antisocial. From the beginning he didn't say anything beyond "AI companies can take any data they want" and avoided many direct questions. There's nothing to respond to.
As someone who self-hosts a LLM and trains it on web data regularly to improve my model, I get where your frustration is coming from.
But engaging in discourse here, where people already have a heavy bias against machine-learning language models, is a fruitless effort. No one here is going to provide you catharsis with a genuine conversation that isn't just rhetoric.
Just put the keyboard down and walk away.
I don't have a bias against LLMs. I use them regularly, albeit either for casual things (movie recommendations) or as an automation tool in work areas where I can fairly easily validate the output or where the specific task is low impact.
I am just curious, do you respect robots.txt?
I think it's worthwhile to show people that views outside of their like-minded bubble exist. One of the nice things about the Fediverse over Reddit is that the upvote and downvote tallies are both shown, so we can see that opinions are not a monolith.
Also, engaging in Internet debate is never to convince the person you're actually talking to. That almost never happens. The point of debate is to present convincing arguments for the less-committed casual readers who are lurking rather than participating directly.
I agree with you that there can be value in "showing people that views outside of their likeminded bubble[s] exist". And you can't change everyone's mind, but I think it's a bit cynical to assume you can't change anyone's mind.
I can't speak for everyone, but I'm absolutely glad to have good-faith discussions about these things. People have different points of view, and I certainly don't know everything. It's one of the reasons I post, for discussion. It's really unproductive to make blanket statements that try to end discussion before it starts.
I don't know, it seems like their comment accurately predicted the response.
Even if you want to see yourself as some beacon of open and honest discussion, you have to admit that there are a lot of people who are toxic to anybody who mentions any position that isn't rabidly anti-AI enough for them.
This is a subject that people (understandably) have strong opinions on. Debates get heated sometimes and yes, some individuals go on the attack. I never post anything with the expectation that no one is going to have bad feelings about it and everyone is just going to hold hands and sing a song.
There are hard conversations that need to be had regardless. All sides of an argument need to be open enough to have it and not just retreat to their own cushy little safe zones. This is the Fediverse, FFS.