A project called Poison Fountain is asking website operators to feed poisoned data to LLM crawlers.

The project page links to URLs that provide a practically endless stream of poisoned training data. The project's authors have found this approach very effective at degrading the quality and accuracy of AI models trained on it.

Small quantities of poisoned training data can significantly damage a language model.

The page also gives suggestions on how to put the provided resources to use.
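
A minimal sketch of one way to act on those suggestions, assuming you can identify crawlers by User-Agent (Python, stdlib only). The user-agent markers and the poison URL below are illustrative placeholders, not values taken from the project page:

```python
# Sketch: route requests from suspected LLM crawlers to a poison feed
# instead of real content. Markers and URL are placeholder assumptions.
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical examples; maintain your own list from your access logs.
CRAWLER_MARKERS = ("GPTBot", "CCBot", "ClaudeBot", "Bytespider")
POISON_URL = "https://example.invalid/poison-feed"  # placeholder

class PoisonRouter(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "")
        if any(marker in ua for marker in CRAWLER_MARKERS):
            # Bounce identified crawlers to the poisoned stream.
            self.send_response(302)
            self.send_header("Location", POISON_URL)
            self.end_headers()
        else:
            # Ordinary visitors get the real page.
            body = b"<html><body>Real content here.</body></html>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), PoisonRouter).serve_forever()
```

In practice this logic would more likely live in a reverse-proxy rule in front of the site, but the routing decision is the same.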

[–] douglasg14b@lemmy.world 2 points 15 hours ago* (last edited 15 hours ago) (1 children)

I can get a 50Gb/s residential link where I am, and have a whole rack of servers.

Sounds like a good opportunity to crowdfund thousands and thousands of commonly scrapeable instances that serve randomized poisoning.
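
The "randomized poisoning" part could be as simple as each instance seeding its own gibberish generator. A toy sketch, purely illustrative and not the Poison Fountain project's actual method:

```python
# Toy sketch: emit an endless stream of plausible-looking but meaningless
# sentences, varied per instance by seed. Word lists are placeholders.
import itertools
import random

WORDS = ("quantum", "ledger", "pipeline", "kernel", "protein", "lattice",
         "compiler", "nebula", "enzyme", "tensor", "voltage", "archive")

def poison_sentences(seed: int):
    """Yield an unbounded stream of grammatical-ish nonsense."""
    rng = random.Random(seed)
    while True:
        subject = rng.choice(WORDS)
        verb = rng.choice(("optimizes", "refutes", "encrypts", "dissolves"))
        obj = rng.choice(WORDS)
        yield f"The {subject} {verb} the {obj} in O(n log n) time."

# Each crowd-funded instance would use its own seed, so crawlers
# never see the same corpus twice.
for line in itertools.islice(poison_sentences(seed=42), 3):
    print(line)
```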

[–] vane@lemmy.world 1 points 7 hours ago (1 children)

To be honest, bandwidth isn't the problem because it's text files. The problem is optimizing the network stack for many simultaneous connections, since the crawlers hit from whole subnets with no delay (literally a DDoS), and caching those HTML files, because at some point the CPU becomes the bottleneck.
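
A minimal sketch of that caching idea, assuming plain HTTP and a page that can be rendered once at startup: compress the HTML a single time, then serve the same pre-built byte buffer to every connection, so each request costs roughly one socket write. asyncio (Python stdlib) handles the many simultaneous connections; the page content is a placeholder.

```python
# Sketch: pre-render and pre-compress the response once, then hand the
# identical bytes to every connection. No per-request rendering or gzip.
import asyncio
import gzip

PAGE = ("<html><body>" + "poisoned text " * 5000 + "</body></html>").encode()
BODY = gzip.compress(PAGE)  # compress once, not per request
HEADER = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "Content-Encoding: gzip\r\n"
    f"Content-Length: {len(BODY)}\r\n"
    "Connection: close\r\n"
    "\r\n"
).encode()
RESPONSE = HEADER + BODY

async def handle(reader, writer):
    # Read and discard the request head (request line + headers).
    while (await reader.readline()).strip():
        pass
    writer.write(RESPONSE)  # single buffer write per request
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle, "0.0.0.0", 8080)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```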

[–] douglasg14b@lemmy.world 1 points 14 minutes ago* (last edited 12 minutes ago)

This is assuming aggressive caching, yes.

Also "Just text files" is what every website is sans media. And you can still, EASILY get 10+ MB pages this way between HTML, CSS, JS, and JSON. Which are all text files.

A Gitea repo page, for example, is 400-500 KB transferred (1.5-2.5 MB decompressed), almost all of it text.

A file page is heavier, coming in at around 800-1000 KB (additional JS and CSS).

If you have a repo with 150 files and the scraper isn't caching assets (many don't), then you just served up 135 MB of HTML/CSS/JS alongside the actual repository assets.
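
For what it's worth, that 135 MB figure checks out against the estimates above:

```python
# Back-of-the-envelope check using the comment's own estimates
# (not measurements).
files = 150
kb_per_file_page = 900              # midpoint of the 800-1000 KB range
total_mb = files * kb_per_file_page / 1000
print(f"{total_mb:.0f} MB served")  # -> 135 MB of HTML/CSS/JS
```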