this post was submitted on 11 Apr 2025
30 points (100.0% liked)
Lemmy Apps
I like the idea, but are there any performance concerns when a post is a graveyard with lots of removed comments?
I did test a few posts with a lot of removed comments (both on my instance and Lemmy World), and the overhead wasn't terrible since scoping the modlog to just the comment ID is pretty lightweight. And since HTTP/2, which can re-use connections, is pretty common, there's no overhead from additional TLS handshakes slowing things down.
From a network-traffic standpoint, gzip-compressed JSON is pretty negligible in the grand scheme of things.
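For anyone curious, here's roughly what a lookup scoped to a single comment could look like. This is just a sketch, not code from the app being discussed; it assumes the instance exposes `comment_id` and `type_` filters on `GET /api/v3/modlog` and that removal entries come back in a `removed_comments` array, so treat the parameter and field names as assumptions.

```typescript
// Sketch only: look up the removal entry for a single comment by scoping
// the modlog query to that comment's ID. Parameter names (`comment_id`,
// `type_`) and the `removed_comments` response field are assumptions based
// on recent Lemmy API versions, not taken from the app discussed here.

interface RemovalInfo {
  reason?: string;
  when?: string;
  moderator?: string;
}

async function fetchCommentRemoval(
  instance: string,
  commentId: number
): Promise<RemovalInfo | undefined> {
  const params = new URLSearchParams({
    comment_id: String(commentId),
    type_: 'ModRemoveComment', // only comment-removal actions
    limit: '1',
  });

  const res = await fetch(`https://${instance}/api/v3/modlog?${params}`);
  if (!res.ok) return undefined;

  const body = await res.json();
  const entry = body.removed_comments?.[0];
  if (!entry) return undefined;

  return {
    reason: entry.mod_remove_comment?.reason,
    when: entry.mod_remove_comment?.when_,
    moderator: entry.moderator?.name,
  };
}
```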
What about Lemmy’s rate limiting?
I’m working on a client, and I would be worried that making too many requests for a nice-to-have feature would rate limit a request for a core feature of the app. Though I suppose some sort of throttle queue would solve that.
I’m already using a throttle queue to slow down refreshing stale data in my app. That way, the data I’m asking for right now takes priority over refreshing a bunch of old data.
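As an illustration only (not either app's actual code), a priority-aware throttle queue along those lines might look like this: foreground requests always jump ahead of background refreshes, and the queue dispatches at most one request per tick.

```typescript
// Illustrative priority throttle queue (not either app's actual code):
// foreground requests are always dispatched before background refreshes,
// and at most one request goes out per `intervalMs`, so a burst of
// nice-to-have lookups can't eat into the instance's rate limit budget.

type Task = () => Promise<void>;

class ThrottleQueue {
  private foreground: Task[] = [];
  private background: Task[] = [];
  private timer?: ReturnType<typeof setInterval>;

  constructor(private intervalMs = 250) {}

  enqueue<T>(
    run: () => Promise<T>,
    priority: 'foreground' | 'background' = 'background'
  ): Promise<T> {
    return new Promise<T>((resolve, reject) => {
      const task: Task = () => run().then(resolve, reject);
      (priority === 'foreground' ? this.foreground : this.background).push(task);
      this.start();
    });
  }

  private start(): void {
    if (this.timer !== undefined) return;
    this.timer = setInterval(() => {
      // Foreground work always wins; background refreshes fill the gaps.
      const task = this.foreground.shift() ?? this.background.shift();
      if (!task) {
        // Queue drained: stop ticking until new work arrives.
        if (this.timer !== undefined) clearInterval(this.timer);
        this.timer = undefined;
        return;
      }
      void task();
    }, this.intervalMs);
  }
}
```

Usage would be something like `queue.enqueue(() => loadComments(postId), 'foreground')` for a user-initiated page load and `'background'` for modlog lookups or stale-data refreshes (`loadComments` here is just a stand-in for whatever API call the client makes).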
That's a good point. I'll have to check the default values, but on my own instance, I have very conservative limits in place, and it hasn't proven to be an issue (so far?).
Unless it's changed since I wrote the online docs for Tessreact, the modlog is part of the "Messages" rate limit bucket, which is/was something of a catch-all for endpoints that didn't fit elsewhere. Because of that, even in the default config, that bucket is the most permissive.
I've been daily-driving my dev version with this feature enabled for a few days, and it hasn't been an issue so far (it only does a modlog lookup if a comment is removed, so not on every comment in the tree). It's also per IP, so unless a lot of people are behind the same public IP, I don't think it's going to pose an issue. I'd have to double check, but I think the most comments it loads in a batch is close to 100, so unless every comment has been removed, that would be the worst-case number of modlog fetches.
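To put a number on that worst case, here's how the batch path might look if it gates lookups on the comment's `removed` flag, reusing the hypothetical helpers from the sketches above:

```typescript
// Sketch of the batch path: after loading a page of comments, only the
// ones flagged `removed` trigger modlog lookups, queued as background
// work. Reuses the hypothetical `fetchCommentRemoval` and `ThrottleQueue`
// from the earlier sketches; Lemmy's comment objects do carry a `removed`
// boolean, but everything else here is illustrative.

interface LoadedComment {
  id: number;
  removed: boolean;
}

async function annotateRemovals(
  instance: string,
  comments: LoadedComment[],
  queue: ThrottleQueue
): Promise<Map<number, RemovalInfo | undefined>> {
  // Worst case: every comment in the batch is removed, i.e. one modlog
  // request per comment in the page; typically it's far fewer.
  const removed = comments.filter((c) => c.removed);

  const results = await Promise.all(
    removed.map((c) =>
      queue.enqueue(() => fetchCommentRemoval(instance, c.id), 'background')
    )
  );

  const byId = new Map<number, RemovalInfo | undefined>();
  removed.forEach((c, i) => byId.set(c.id, results[i]));
  return byId;
}
```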
So it looks like I'll definitely want to make this feature toggleable even if it does end up defaulting to 'on'.