Seems like a bad take from my POV. As someone who uses LLMs and has made money with them, I don't feel it's okay to poison them; I wouldn't feel right getting something for free, even earning money with it, while poisoning it at the same time. So my take is: you can always block crawlers in your nginx.conf with a few extra steps (see the sketch below), and you can even use an LLM to write and refine the config until it blocks all the major crawlers. IMHO, if data is public, it's public for crawlers too; it's up to you whether you block them on your own behalf.
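A minimal sketch of the kind of nginx.conf blocking that comment describes, assuming the crawlers identify themselves honestly via their User-Agent header. The bot names are real, commonly cited AI crawlers, but the list is illustrative and incomplete, and the server details (example.com, the document root) are placeholders:

```nginx
# Goes in the http {} context of nginx.conf.
# Flag requests whose User-Agent matches a known AI crawler;
# the list here is illustrative, not exhaustive.
map $http_user_agent $block_ai_crawler {
    default      0;
    ~*GPTBot     1;  # OpenAI
    ~*CCBot      1;  # Common Crawl
    ~*ClaudeBot  1;  # Anthropic
    ~*Bytespider 1;  # ByteDance
}

server {
    listen 80;
    server_name example.com;  # placeholder domain

    # Refuse flagged crawlers before serving any content.
    if ($block_ai_crawler) {
        return 403;
    }

    location / {
        root /var/www/html;  # placeholder document root
    }
}
```

Of course, as the replies below point out, this only stops crawlers that identify themselves truthfully; one that spoofs its User-Agent sails right past it.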
Bots that scrape for training data do not usually respect the typical methods of kindly asking them not to look at your data, like robots.txt (example below).
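For reference, the "asking kindly" method looks like this: a robots.txt rule telling OpenAI's documented GPTBot crawler to stay away from the entire site. Compliance is entirely voluntary on the crawler's part:

```
# robots.txt - a request, not an enforcement mechanism
User-agent: GPTBot
Disallow: /
```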
If we could start from scratch and force these bots to check for some kind of opt-in signal before scraping, I'd be a hell of a lot more comfortable with Gen AI scraping.
At this point, most models are trained on content taken without consent. In most cases, much of that content would, if a human were to consume it, be considered stolen/pirated. The courts just decided that these AI companies are above those laws for reasons. That reason is money.
As someone who makes and uses software, I feel it is not okay to steal source code. I wouldn't feel okay with myself getting something for free when it's based on the stolen work of tens of thousands of people.
AI companies aren't respecting crawler blocking. They're actively working to ensure their crawlers bypass any anti-crawler protections.
As a side note, these efforts help AI in the long term. If we can poison LLMs, then you can guarantee a state actor can as well. AI needs to be able to weather training-data attacks, otherwise it becomes an easily manipulated propaganda tool.
What about the following take: LLMs are an abomination that consumes enormous amounts of resources for... well... really nothing, besides being a tool to further enshittify the Internet and the world as a whole, making it easy to create ever more divisive content (not to mention the special content Grok is now known for), killing jobs, and replacing genuine human creativity with a cheap, warped imitation.
My opinion is: everybody who uses or promotes this technology is an accomplice in making the world a worse place.
So it wouldn't be fair to prevent AI from violating every single copyright on earth? That's a novel take.
Especially as most people don't use AI, yet companies keep trying to force it on them, ultimately to replace half the workforce and send the economy into a doom spiral.
"Public" is a tricky term. At this point everything is being treated as public by LLM developers. Maybe not you specifically, but a lot of people aren't happy with how their data is being used to train AI.
Also, they keep coming up with new ways to circumvent blocking mechanisms, pushing extra work onto admins.
Remember how judges ruled when somebody circumvented copy restrictions on media?