this post was submitted on 14 Jan 2026
51 points (96.4% liked)

Hello people, I recently rented a VPS from OVH and I want to start hosting my own PieFed instance and a couple of other services. I am running Debian 13 with Docker, and I have Nginx Proxy Manager almost set up. I want to set up subdomains, so that social.my.domain goes to my PieFed instance, but how do I tell the machine to send PieFed traffic to that subdomain and Joplin traffic (for example) to another subdomain? Can I use nginx/Docker natively for that, or do I have to install another program? Thanks for the advice.
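
What OP is describing is exactly what a reverse proxy does: Nginx Proxy Manager inspects the Host header of each incoming request and forwards it to the matching backend container. A minimal sketch of the Docker side, assuming hypothetical image names and ports (the real PieFed and Joplin images need more configuration than shown):

```yaml
# docker-compose.yml - hypothetical sketch; image names and app ports
# are placeholders, not a working deployment.
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    ports:
      - "80:80"            # HTTP  - the only ports facing the internet
      - "443:443"          # HTTPS
      - "127.0.0.1:81:81"  # NPM admin UI, reachable only from the host
    networks:
      - proxy

  piefed:
    image: example/piefed:latest   # placeholder image name
    networks:
      - proxy   # no "ports:" entry - reachable only via the proxy network

  joplin:
    image: joplin/server:latest
    networks:
      - proxy

networks:
  proxy: {}
```

In NPM's web UI you would then add two proxy hosts: social.my.domain forwarding to piefed on its internal port, and notes.my.domain forwarding to joplin:22300 (Joplin Server's documented default port). Docker's internal DNS resolves the service names, so no extra program is needed.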

[–] kossa@feddit.org 4 points 1 day ago (1 children)

> all internal services will be accessible

What? Only when they are configured to listen on outside interfaces. Which, granted, they often are in the default configuration, but since OP uses Docker on that host, chances are kinda slim that they're running some rando unconfigured database directly on it. And even that would be password- or authentication-protected in its default config.

I mean, it's never wrong to slap a firewall onto something, I guess. But OTOH, those "all services will be exposed and evil haxxors will take you over" warnings are also a disservice.

[–] deadcade@lemmy.deadca.de 3 points 1 day ago (1 children)

I've seen many default docker-compose configurations provided by server software that expose the ports of things like databases by default (which publishes them on all host interfaces). Even outside Docker, a lot of software has a default configuration of "listen on all interfaces".

I'm also not saying "evil haxxors will take you over". It's not the end of the world to have a service requiring authentication exposed to the internet, but it's much better to only expose what should be public.

[–] kossa@feddit.org 2 points 1 day ago (1 children)

Yep, fair. Those docker-compose files which just forward the ports to the host on all interfaces should burn. At least they should make them 127.0.0.1 forwards, I agree.
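
To make the distinction concrete, the difference is a single prefix in the port mapping. A sketch, with postgres standing in for any internal-only service:

```yaml
services:
  db:
    image: postgres:16
    # Common upstream default: binds to 0.0.0.0, so the database answers
    # on every host interface, including a VPS's public one.
    #   ports:
    #     - "5432:5432"
    #
    # Loopback-only forward: reachable from the host itself, not the internet.
    ports:
      - "127.0.0.1:5432:5432"
    # Safest when only other containers need it: publish no ports at all
    # and share an internal Docker network with the consumers instead.
```

Also worth knowing: Docker publishes ports by inserting its own iptables rules, so a 0.0.0.0 mapping typically bypasses host firewalls like ufw; a loopback bind sidesteps that trap as well.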

[–] kumi@feddit.online 0 points 1 day ago* (last edited 1 day ago)

I'm guilty of a few of these, and sorry not sorry, but this is not changing.

Often these are written with local dev and testing in mind, and the expectation is that self-hosters will look through them, probably customize them, and in any case be responsible for their own firewalls and proxies before deploying them to a public-facing server. Larger deployments sometimes have internal load balancers on separate machines, so even a file reflecting a production deployment might normally expose on 0.0.0.0 or run with network=host.

Never just run third-party compose files for user services on a machine directly exposed to untrusted networks like the internet.
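
One way to square both positions: Compose automatically merges a docker-compose.override.yml from the same directory, so a deployer can rebind the dev-friendly ports without editing the third-party file at all. A sketch, assuming the upstream file publishes a db service on 5432:

```yaml
# docker-compose.override.yml - picked up automatically by `docker compose up`.
services:
  db:
    # "!override" replaces the upstream ports list (recent Compose versions);
    # without it, the lists from both files are merged and the 0.0.0.0
    # mapping would survive.
    ports: !override
      - "127.0.0.1:5432:5432"
```

The upstream file stays untouched for local dev, and the exposure decision lands where kumi says it belongs: with whoever deploys it.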