this post was submitted on 27 Dec 2025

Selfhosted


Howdy selfhosters

I’ve got a bit of an interesting one that started as a learning experience, but I think I got in a bit over my head. I had been running the arr stack via docker-compose on my old Ubuntu desktop PC. I got lucky with a recycler and managed to get a decent old workstation, and my company tossed out some 15 SAS HDDs; thankfully those worked. I finally got Proxmox set up and got a few drives mounted in a ZFS pool that Plex currently reads from. Unfortunately I failed to save a last backup copy of my old stack, though I’ll admit that one was a bit messy, using Gluetun tied to a German VPN server for P2P. I did preserve a lot of my old data, though, so the media libraries can migrate over.

I’m open to suggestions for getting the stack running again on Proxmox on the workstation. I’m not sure how best to go about it, since as far as I can tell a mount point is only accessible from LXC containers, and I can’t figure out how to pass the ZFS shares through to a VM. I feel like I’m overcomplicating this, but I need to maintain a secure connection, since burgerland doesn’t make for the best arr-stack hosting in my experience. It feels a bit daunting: I’ve tried tackling it and had a few LLMs write up guidelines to make it easier, but I couldn’t get that to work as a way to teach myself.
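For what it’s worth, the usual way to expose a host ZFS dataset to an LXC is a bind mount in the container’s config. A minimal sketch, assuming a hypothetical container ID of 101 and the pool mounted on the host at /tank/media (both are placeholders, adjust to your setup):

```
# /etc/pve/lxc/101.conf — 101 and both paths are placeholders
# Bind-mounts the host directory /tank/media into the container at /mnt/media
mp0: /tank/media,mp=/mnt/media
```

The same line can be added from the host shell with `pct set 101 -mp0 /tank/media,mp=/mnt/media`. Two caveats, as I understand it: with an unprivileged container you’ll also need to sort out UID mapping or permissions on the dataset, and for a full VM there’s no bind-mount equivalent, so the common routes are exporting the dataset from the host over NFS/SMB or using virtiofs.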

[–] lka1988@sh.itjust.works 2 points 1 day ago* (last edited 1 day ago) (1 children)

Proxmox isn't really comparable to Docker (or its third-party web UI front-ends) and was never meant to run user-facing services directly. Proxmox simply provides the virtual infrastructure to host the VMs and LXCs that will run your desired services.

IMO, Dockge (not a typo) is a far cleaner and easier solution than Portainer. It's very simple to set up and can easily link to other Dockge instances on other Docker hosts (I have like 4 or 5 VMs just for Docker). It also doesn't bury your compose files deep inside a specific Docker volume that only its own container can access... like Portainer does.
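In case it helps, Dockge itself is deployed with a small compose file; a sketch along the lines of the project's documented quick-start (the stacks directory and port are the defaults, adjust as needed):

```yaml
# docker-compose.yml for Dockge — paths and port follow the project's defaults
services:
  dockge:
    image: louislam/dockge:1
    restart: unless-stopped
    ports:
      - "5001:5001"                                # web UI
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # lets Dockge drive the local Docker daemon
      - ./data:/app/data                           # Dockge's own state
      - /opt/stacks:/opt/stacks                    # your compose stacks, kept as plain files on disk
    environment:
      - DOCKGE_STACKS_DIR=/opt/stacks
```

The /opt/stacks mount is the point being made above: your compose files stay as ordinary files on the host instead of being buried in a tool-owned volume.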

[–] PeriodicallyPedantic@lemmy.ca 1 points 17 hours ago

Yeah, I looked into Dockge and I really like it, but I still went with Portainer because it manages volumes directly rather than my having to mount them manually and modify fstab.

I have to admit I don't really understand the philosophy or value proposition of Proxmox as it relates specifically to homelabs, because I don't really see the value of VMs or LXCs except as a last resort (when you can't containerize an application, since defining applications declaratively is almost always better).
Almost everything I want to host, and everything I see people talking about hosting in their homelabs, is a stack of applications, which makes something like docker compose perfect for the purpose.
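As a concrete example of that "stack of applications" pattern, and of the VPN-gated setup the OP described, a compose stack can route an arr service through Gluetun with network_mode. A hedged sketch (service names and image tags are illustrative, and Gluetun's actual environment variables depend on your VPN provider, so check its docs):

```yaml
# Illustrative only — VPN settings are placeholders per your provider
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=custom   # placeholder: set per Gluetun's docs
      - SERVER_COUNTRIES=Germany
    ports:
      - "9696:9696"                   # the gated service's UI is published here

  prowlarr:
    image: lscr.io/linuxserver/prowlarr
    network_mode: "service:gluetun"   # all of this container's traffic exits via gluetun
    depends_on:
      - gluetun
```

Because prowlarr shares gluetun's network namespace, its port has to be published on the gluetun service, and if the VPN drops, so does the app's connectivity.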

When I saw Proxmox supported OCI containers, I was hopeful it'd provide a nice way to deploy a stack of OCI containers, but it didn't. In fact, some volume-mounting features (that I wanted) could only be accessed via the CLI.