baltakatei@sopuli.xyz 11 points 2 months ago

I have a public gitweb repository. I am constantly being hit by dumb crawlers that, left to their own devices, request every single diff of every single commit simply because links to those operations are presented on the page. None of that would be necessary if they just did a simple git clone; my server would happily hand over the entire ~50 MB of repo history in one go. Instead, they download gigabytes of HTML boilerplate, probably never assemble a full commit history, and probably can't even use what they do scrape, since they're just grabbing random commits in between blocks and bans.
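If anyone wants to do the same kind of blocking, here's a rough sketch of the idea: scan the web server's access log for clients that keep hitting gitweb's diff-style actions and flag the worst offenders for a ban list. The log path, threshold, and the exact set of actions matched are placeholders; adjust for your own setup.

```python
#!/usr/bin/env python3
"""Sketch: tally clients hammering gitweb diff/commit pages.

Assumes a combined-format access log; the path, threshold, and matched
gitweb actions below are placeholders, not anything from my actual setup.
"""
import re
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"  # placeholder location
# gitweb encodes its operation in the 'a=' query parameter,
# e.g. ?p=repo.git;a=commitdiff;h=<hash>
DIFF_ACTIONS = re.compile(r"[;?&]a=(commitdiff|blobdiff|commit|snapshot)\b")
THRESHOLD = 500  # flag clients past this many diff-style requests

hits = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        parts = line.split()
        if len(parts) < 7:
            continue
        client_ip, request_path = parts[0], parts[6]
        if DIFF_ACTIONS.search(request_path):
            hits[client_ip] += 1

# Print candidates for rate-limiting or banning, worst first.
for ip, count in hits.most_common():
    if count >= THRESHOLD:
        print(f"{ip}\t{count} diff/commit page requests")
```

Feed the output into whatever firewall or fail2ban-style setup you already have; the point is just that legitimate users browsing a repo never generate hundreds of diff requests per day, so the signal is pretty clean.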

All of this only became an issue around a year ago. Since then, I've just accepted that my public-facing static pages are the only thing that stays reliably available anymore.