Convince one of your Asian friends to run a mirror and sync everything to them if possible.
Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.
- Don't duplicate the full text of your blog or github here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
- No low-effort posts. This is subjective and will largely be determined by the community member reports.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues on the community? Report it using the report flag.
Questions? DM the mods!
Hi, kinda late to the party. I'm in a similar rut with intercontinental internet issues and would like to share my thoughts.
While it's not a full-fledged CDN, you could consider setting up an Asian VPS as a second reverse proxy/ingress route: terminate TLS there and route plaintext HTTP back to your homelab, with that back-haul running inside a WireGuard tunnel. As I worked out in my blog post here (see scenario 2), this lets the initial TCP and TLS handshakes happen near the user instead of going all the way to Europe and back.
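A minimal sketch of what that could look like, assuming nginx on the VPS and a plain point-to-point WireGuard tunnel; every key, address, port and domain below is a placeholder, and Jellyfin is assumed to be listening on its default HTTP port 8096:

```
# /etc/wireguard/wg0.conf on the Asian VPS
[Interface]
Address    = 10.8.0.2/24
PrivateKey = <vps-private-key>
ListenPort = 51820

[Peer]
# the homelab in Europe; it dials out to the VPS, so NAT at home is fine
PublicKey  = <homelab-public-key>
AllowedIPs = 10.8.0.1/32
```

```
# nginx site on the VPS: TLS terminates here, plain HTTP goes home over wg0
server {
    listen 443 ssl;
    server_name jellyfin.example.com;

    ssl_certificate     /etc/letsencrypt/live/jellyfin.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/jellyfin.example.com/privkey.pem;

    location / {
        proxy_pass http://10.8.0.1:8096;          # Jellyfin at home, via the tunnel
        proxy_set_header Host              $host;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;

        # websocket support for the Jellyfin web client
        proxy_http_version 1.1;
        proxy_set_header Upgrade    $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

The homelab side mirrors the WireGuard config, with `Endpoint = <vps-public-ip>:51820` and `PersistentKeepalive = 25` so the tunnel survives the home NAT.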
You can consider setting up a separate Jellyfin instance for Asia, but of course that comes with syncing media, maintaining separate user credentials, and so on. So before renting compute, I suggest trying these smaller actions first; if they work, you might not need a VPS at all:
- Look into tuning the Linux network sysctls. My personal tweaks live in /etc/sysctl.conf; a sketch of that kind of tuning follows this list.
- Implement some sort of Smart Queue Management on your router (e.g. the CAKE algorithm) to avoid bufferbloat.
- Enable HTTP/3 + QUIC on your reverse proxy to cut down handshake round trips, though it's unlikely the native Jellyfin clients benefit from it.
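For reference, the sysctl tuning mentioned above usually looks something like this. These are common starting points for high-latency, long-distance paths rather than drop-in values; measure before and after, since blind tuning can also make things worse:

```
# /etc/sysctl.conf (or a drop-in under /etc/sysctl.d/); example values only

# Bigger TCP buffers so a single stream can fill a high-latency path
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864
net.ipv4.tcp_rmem = 4096 87380 67108864
net.ipv4.tcp_wmem = 4096 65536 67108864

# BBR congestion control with fq pacing tends to cope better with loss and latency spikes
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr

# Helps when some hop on the intercontinental path has a smaller MTU
net.ipv4.tcp_mtu_probing = 1
```

Apply with `sysctl -p` (or a reboot).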
Curious to see if any of this helps :)
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:
| Fewer Letters | More Letters |
|---|---|
| HTTP | Hypertext Transfer Protocol, the Web |
| IP | Internet Protocol |
| SSL | Secure Sockets Layer, for transparent encryption |
| TCP | Transmission Control Protocol, most often over IP |
| TLS | Transport Layer Security, supersedes SSL |
| VPN | Virtual Private Network |
| VPS | Virtual Private Server (opposed to shared hosting) |
5 acronyms in this thread; the most compressed thread commented on today has 13 acronyms.
[Thread #64 for this comm, first seen 6th Feb 2026, 17:01] [FAQ] [Full list] [Contact] [Source code]
You're describing a CDN. You can't afford it.
I'd look more into boosting whatever your uplink is versus trying to distribute to localized users.
The uplink isn't the problem as it works for viewers in Europe.
Uplink is exactly the problem. Not sure why you think otherwise. The internet doesn't work by multicast.
Maybe we're not talking about the same thing. The uplink at OP's router isn't the problem; there's enough upload speed that others in Europe can stream. Users in Asia don't get enough bandwidth, so there's a bottleneck somewhere in between.
And yes, a VPN could help by routing the traffic through other hops, but chances are it won't help or will even make things worse. Still, it's worth trying.
It's probably not bandwidth but latency and packet loss that's the problem.
Latency shouldn't be a big problem as long as it doesn't have massive spikes. Packet loss could be a problem; it seems Jellyfin doesn't have an option to increase the buffer size, which might otherwise help. Or the problem only shows up in combination with transcoding.
Bandwidth does not degrade over distance. That's not how that works...
Again, I'm confused about what you're suggesting the actual issue is here.
If the uplink bandwidth is more than sufficient for users in Europe, and it doesn't degrade over distance, then why is the same uplink not enough for the exact same thing in Asia?
Exactly, bandwidth doesn't degrade over distance, so why would the uplink bandwidth be the issue for Asia when it's fine for Europe?
Ok, you're almost there. It is plenty fast for people in Europe but slow for those in Asia, so bandwidth is not the issue.
When talking about media streaming, there are a number of other things that cause problems. Bandwidth, meaning the total amount of data you can send overall, is less likely to be the problem than jitter, packet loss, and latency spikes.
For this purpose, if OP tuned both the server and the clients to cache further ahead, or to send smaller packets, it could be a decent workaround.
Spending an insane amount of money to put what I'm guessing is illegally obtained content on a CDN is crazypants.
Even large streaming services drop their servers close to the users to make the experience good. They just do better at scaling.
You could federate authentication so only one LDAP service needs to be maintained. You could also sync media from one box to the other so you don't need to update both manually.
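For the media half, a one-way sync run from cron or a systemd timer is usually enough; the host name and paths here are just placeholders:

```
# Push new and changed media from the EU box to the Asian box,
# removing files that were deleted at the source
rsync -av --delete --partial /srv/media/ asia-box:/srv/media/
```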
Isn't that done to reduce the load on a monolithic server and also to reduce the network transit bill?
You unfortunately can't solve this yourself; this is where 800 lb gorillas like Akamai outclass self-hosting.
Netflix alone has many thousands of ISPs participating in Open Connect, providing CDN peering points all over the world and making Netflix only a few hops away for more end users.
So it's not just me. The peering between Europe and Asia IS crap!
I was in Thailand in November and the connections to Europe were hit or miss the whole time. The latency was poor and the reliability varied day by day.
The only thing that made any difference was switching providers on the EU side. It seems that some ISPs have better peering than others.
Also, lowering the MTU for the VPN tunnel seemed to help a lot, but that might've been a placebo.
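If anyone wants to try the same thing, assuming a WireGuard tunnel, the knob is just the MTU line in the interface config (1280 is an illustrative low value; WireGuard defaults to 1420):

```
# /etc/wireguard/wg0.conf
[Interface]
MTU = 1280
```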
I've often described Europe as being the 'other end of the internet', since from Australia traffic is often routed over the Pacific to the US (via Hawaii and either Guam or New Zealand), across the US, and then over the Atlantic.
tu.berlin is 316ms away.
Tailscale, Headscale, or something along those lines may help optimize the route, but as others have said, to resolve this in any real fashion you'd need a CDN, which requires significant geo-redundant hardware and comes at a pretty significant cost. That being said, I think your friend has a good shot if you implement the former.
I was trying to stream from my Jellyfin server on vacation. Over my tailnet I couldn't reliably stream anything; over a VPN it was as good as local. I can't believe it's just a routing issue, but I wasn't being proxied, so it should have been the same. So a VPN for one user might fix the issue. The headache of segmenting the network for that VPN is another problem, even if the hardware/router is capable, but it's doable.
Is it possible you misconfigured your tailnet and, instead of a direct connection to your local subnet router, your traffic was going through a DERP relay? You can read more about it in Tailscale's documentation, but essentially you need to leave UDP port 41641 open inbound from the WAN to your subnet router.
I checked for relay. I recall it's pretty easy to see on the desktop icon. I'll have to try again next time I'm away to see.
I don't know if it's on the icon; I believe you have to use the CLI ("tailscale status") to view your tailnet nodes' connection types.
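Roughly what that looks like (hostnames and addresses are made up); the last column tells you whether a peer is direct or relayed:

```
$ tailscale status
100.64.0.10  homelab  user@  linux  active; direct 203.0.113.7:41641
100.64.0.11  laptop   user@  linux  active; relay "sin"
```

"direct host:port" means peer-to-peer WireGuard; relay "xxx" means the traffic is bouncing through a DERP server. "tailscale ping homelab" will also report which path the connection ends up using.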
IMHO Jellyfin processes everything it sends to clients, so I don't think it's possible to put it behind a CDN (maybe it is possible if server-side transcoding is off). Please define slow. Slow on which part? It should be something like 250 ms RTT to your server, which isn't much for web-based apps.
Define "slow". Do pages hang before loading? Or does it often stop to buffer a stream?
I am in basically the same situation as you, and my single Asian user has no issues with it.
You don't necessarily have to host another Jellyfin instance. I would find a server somewhere in the middle, between your current European server and your Asian homies, set up a reverse proxy there, and point it at your current Jellyfin instance.
The only hassle with this is that you're going to need a way to expose your EU Jellyfin to the new server; a VPN would get around port forwarding 443, perhaps with split tunneling?
Not the most elegant solution but at least this way you can make an attempt at optimizing the connection.
Edit (if you wanted to go the second-Jellyfin-instance route): you could also copy your current database to the second server, host a second Jellyfin instance there, use something like sshfs or SFTP to share the directory with your media library, reverse proxy it as something like asia-jellyfin.your.domain, and keep it separated from your EU server.
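For the media share, a read-only sshfs mount on the second box could look roughly like this (host and paths are placeholders):

```
# Mount the EU media library read-only on the Asian Jellyfin box,
# reconnecting automatically if the intercontinental link drops
sshfs -o ro,reconnect,ServerAliveInterval=15,ServerAliveCountMax=3 \
      eu-homelab:/srv/media /srv/media
```

Keep in mind that every read still crosses the same long path, so a periodically rsync'd local copy of the library may behave better than a live mount.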
This may be completely off base, but maybe the remote users could get a VPN with a server near yours? Without having the slightest idea whether it actually would, I could imagine it helping.