this post was submitted on 30 Jan 2026
58 points (92.6% liked)

top 12 comments
[–] CarbonatedPastaSauce@lemmy.world 30 points 5 days ago (2 children)

There are lots of reasons to use really low TTLs, but most are temporary needs. Most of the times I had to set a low TTL on a record were for hardware migration projects where services were getting new IP addresses. But in a well-managed shop this should always be temporary: the TTL would be set low the day before the change, then set back to a normal value the day after. I feel the author is correct that permanently setting low TTLs just covers up a lack of proper planning and change management.
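The lower-then-restore pattern described above could look like this as a BIND-style zone fragment (hostname, addresses, and TTL values are made up for illustration):

```
; day before the migration: drop the TTL so caches expire quickly
www   300    IN  A   203.0.113.10

; after the cutover is verified: restore a normal TTL on the new address
www   3600   IN  A   198.51.100.20
```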

The only thing off the top of my head that I can think absolutely requires a permanently low TTL is DNS based global load balancing for high uptime applications. But I'm sure there are other uses. I agree that the vast majority of things do not need a low TTL on their DNS record.

[–] SpaceNoodle@lemmy.world 9 points 5 days ago

So the options are to herd a million cats, or to set low TTLs? Hmmm ...

[–] CompactFlax@discuss.tchncs.de 3 points 5 days ago

I have a fairly high-latency connection, and using pihole with an anycast upstream resolver is noticeably slow. Entries fall out of the pihole cache so freaking fast with these low TTLs. I have set up unbound with aggressive caching and prefetch, and if I recall correctly pihole has a toggle to serve expired entries. Serving expired in unbound, before pihole, breaks stuff that rotates IPs fast.
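For reference, a minimal unbound.conf sketch of the caching behaviour described above (the option names are real unbound settings; the values are illustrative, not a recommendation):

```
server:
    prefetch: yes              # refresh popular records shortly before they expire
    serve-expired: yes         # answer from stale cache while a refresh runs
    serve-expired-ttl: 3600    # how long past expiry a stale answer may be served
```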

[–] The_Decryptor@aussie.zone 10 points 4 days ago (1 children)

Set that minimum TTL to something between 40 minutes (2400 seconds) and 1 hour; this is a perfectly reasonable range.

Sounds good, let's give that a try and see what breaks.
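For anyone else wanting to try the same clamp, a sketch for unbound (the resolver mentioned elsewhere in this thread); 2400 matches the article's suggested floor, the max is just unbound's usual one-day cap:

```
server:
    cache-min-ttl: 2400    # never cache an answer for less than 40 minutes
    cache-max-ttl: 86400   # keep the usual one-day upper bound
```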

[–] exu@feditown.com 3 points 4 days ago (1 children)

Yeah, I thought so too. I'll definitely try that.

[–] The_Decryptor@aussie.zone 2 points 16 hours ago

I've got some numbers; it took longer than I'd have liked because of ISP issues. Each period is about a day, give or take.

With the default TTL, my unbound server saw 54,087 total requests, 17,022 got a cache hit, 37,065 a cache miss. So a 31.5% cache hit rate.

With clamping it saw 56,258 requests, 30,761 were hits, 25,497 misses. A 54.7% cache hit rate.
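Working the percentages back out from those counters as a quick sanity check (numbers taken from the two comments above):

```python
# Recompute the cache-hit rates from the reported unbound counters.
default_hits, default_misses = 17_022, 37_065   # default TTLs
clamped_hits, clamped_misses = 30_761, 25_497   # with cache-min-ttl clamping

def hit_rate(hits: int, misses: int) -> float:
    """Cache hit rate as a percentage of total queries."""
    return 100 * hits / (hits + misses)

print(f"default TTL: {hit_rate(default_hits, default_misses):.1f}%")  # 31.5%
print(f"clamped TTL: {hit_rate(clamped_hits, clamped_misses):.1f}%")  # 54.7%
```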

And the important, most "unscientific" part: I didn't encounter any issues with stale DNS results. Everything still seemed to work, and I didn't get random error pages while browsing or anything like that.

I'm kinda surprised the total query counts were so close. I would have assumed a longer TTL would also cause clients to cache results for longer, making fewer requests (though e.g. Firefox actually caps TTL to 600 seconds or so). My working idea is that for things like YouTube videos, instead of using static hostnames and rotating out IPs, they're doing the opposite: keeping the addresses fixed but changing the domain names, effectively cache-busting DNS.

[–] exu@feditown.com 6 points 5 days ago (1 children)

Lol, reported for the URL "blog"

[–] L3s@lemmy.world 18 points 5 days ago

That's our automod; we keep an eye out for blogs. Every now and then we get spammed with personal blogs about off-topic things.

[–] zeezee@slrpnk.net 2 points 4 days ago

tl;dr:

Set that minimum TTL to something between 40 minutes (2400 seconds) and 1 hour; this is a perfectly reasonable range.

[–] MonkderVierte@lemmy.zip 2 points 4 days ago (1 children)

Btw, is there a way to tweak firefox so it always uses cache and only updates on manual site reload?

[–] chaospatterns@lemmy.world 1 points 4 days ago (1 children)

Are you trying to make an offline website? If so, you could look into using a Service Worker which would give you full control over when the content gets refreshed.

[–] MonkderVierte@lemmy.zip 2 points 4 days ago* (last edited 4 days ago)

Laptop, mobile, bad line; it's annoying when a page (which should already be in cache since I opened it hours ago) says "No internet :(" just because it got unloaded.

And yes, "save webpage" solves that, but:

  1. I have to think of it beforehand
  2. the site is already there, in the freaking cache.

In short, I want to use Firefox as the document viewer and downloader it is, instead of a webapp platform or whatever it wants to be.