this post was submitted on 27 Dec 2025
25 points (93.1% liked)

Selfhosted


Howdy selfhosters

I’ve got a bit of an interesting one that started as a learning experience, but I think I got in a bit over my head. I had been running the arr stack via docker-compose on my old Ubuntu desktop PC. I got lucky with a recycler and managed to get a decent old workstation, and my company tossed out some 15 SAS HDDs; thankfully those worked. I finally got Proxmox set up and got a few drives mounted in a ZFS pool that Plex presently reads from. Unfortunately I failed to save a last backup copy of my old stack, though I’ll admit that one was a bit messy, using gluetun with a VPN tied to a German server for P2P. I did preserve a lot of my old data, though, to migrate the media libraries.

I’m open to suggestions for getting the stack running again on Proxmox on the workstation. I’m not sure how best to go about it, since the mount point seems to be accessible only via LXC containers and I can’t figure out how to pass the ZFS shares to a VM. I feel like I’m overcomplicating this, but I need to maintain a secure connection, since burgerland doesn’t make for the best arr stack hosting in my experience. It feels a bit daunting; I’ve tried to tackle it and had a few LLMs write me up some guidelines to make it easier, but I couldn’t get that to work as a way to learn.

top 26 comments
[–] eli@lemmy.world 1 points 20 hours ago

Proxmox recommends not installing anything directly on the Proxmox host/bare metal.

Personally I would set this up as:

Proxmox installed on a single disk or RAID 1 array.

Create a TrueNAS (or whatever OS you want) VM inside Proxmox. Pass the rest of the drives directly to the TrueNAS VM via Proxmox's interface.

In the TrueNAS VM, take the drives that were passed through and set up your array and pool(s) to your preference.

Now, I'd say you have two paths from this point:

  • Inside the TrueNAS VM use their tools to create a VM within TrueNAS and use that for your arr stack.

OR

  • Go back to Proxmox and create another VM or container, set up your arr stack in that container, and point it to your TrueNAS via network mounts using internal networking from within Proxmox (a virtual bridge with a virtual LAN).

Either option has pros and cons. Doing everything inside TrueNAS is a bit simpler, but it complicates your TrueNAS setup and you're at the mercy of how TrueNAS manages VMs (backups, restores, etc.). With Proxmox, setting up the vmbridge and doing the network mounts is more work initially, but keeping the arr stack in a Proxmox VM/container lets you take direct snapshots and backups of the arr stack. If you ever need to rebuild it, or switch to another arr-style set of tools, you can blow away the Proxmox VM, start fresh, and re-set up the network mounts.
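For the second path, the network mount from the arr VM back to TrueNAS can be a single fstab entry. A minimal sketch, assuming a hypothetical TrueNAS address on the internal bridge and a hypothetical export path:

```shell
# Sketch: mount a TrueNAS NFS export inside the arr-stack VM.
# 192.168.100.10 and /mnt/tank/media are placeholders for your setup.
sudo mkdir -p /mnt/media
echo "192.168.100.10:/mnt/tank/media  /mnt/media  nfs  defaults,_netdev  0 0" | sudo tee -a /etc/fstab
sudo mount -a
```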

Or don't do any of the above and just install TrueNAS on the box directly as the baremetal OS and do everything inside TrueNAS.

[–] lka1988@lemmy.dbzer0.com 12 points 2 days ago* (last edited 2 days ago) (3 children)

For the file server conundrum, something to keep in mind is that Proxmox is not NAS software and isn't really set up to do that kind of thing. Plus, the Proxmox devs have been very clear about not installing anything on the host itself that isn't absolutely necessary.

However, you can set up a file server inside an LXC and share that through an internal VLAN inside Proxmox. Just treat that LXC as a NAS.

For your *arr stack, fire up an exclusive VM just for them. Install Docker on the VM, too, of course.

LLMs

If you're gonna use that, please make sure you comb through the output and understand it before implementing it.

I was able to follow what you said along with the YouTube video from another comment. I appreciate it. The LLMs were more of just an “explain this to me in simpler terms” or “why doesn’t this work” thing, just because I was tired after working most of the time. It helped, but that was also months ago, with limited time to record much.

[–] non_burglar@lemmy.world 2 points 2 days ago (1 children)

This is absolutely overkill; just use bind mounts for the arr stack and keep the ZFS pool local.

[–] lka1988@lemmy.dbzer0.com 1 points 1 day ago* (last edited 1 day ago) (1 children)

This is absolutely overkill

Hardly. Keeping the file server separate is good for reliability in case you bork an unrelated service, so you don't take out everything else with it. That's also partly why things like VMs, LXC, and Docker exist.

[–] non_burglar@lemmy.world 1 points 1 day ago (1 children)

in case you bork an unrelated service

??

Why would borking another service break a bind mount?

[–] lka1988@lemmy.dbzer0.com 1 points 1 day ago (1 children)

No need to be antagonistic. I merely suggested the method I use for my home lab after learning the "hard way" to containerize and separate certain things.

[–] non_burglar@lemmy.world 1 points 1 day ago

I'm not being antagonistic; I don't know where you're getting that.

Do what you want, I don't care.

[–] gaylord_fartmaster@lemmy.world 5 points 2 days ago (2 children)

On the other hand, I've been mounting my storage drives on the proxmox host with mergerfs and exposing what I need to the LXCs with bind mounts for years, and I haven't had a single issue with it across multiple major version upgrades.
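For reference, the mergerfs side of that setup is typically a single fstab line pooling the individual data disks. A sketch with hypothetical mount points, assuming mergerfs is installed on the host:

```shell
# Sketch: pool /mnt/disk1, /mnt/disk2, ... into /mnt/storage with mergerfs.
# Paths and the create policy are placeholders; category.create=mfs
# writes new files to the branch with the most free space.
echo "/mnt/disk*  /mnt/storage  fuse.mergerfs  defaults,allow_other,category.create=mfs  0 0" | sudo tee -a /etc/fstab
sudo mount -a
```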

[–] standarduser@lemmy.dbzer0.com 1 points 1 day ago (1 children)

That’s really solid, actually. How tedious was that setup? I’m certainly curious.

Super simple, like 30 minutes to set up mergerfs, and then the bind mounts are a few lines added to the LXC config files at most. This isn't strictly necessary, but I have users set up on the Proxmox host with access to specific directories; remapping the LXC users to them is kind of a pain in the ass, but it was needed to give my *arr stack access to everything it needed without giving it access to the entire storage pool. Note that hard links won't work across multiple bind mounts, because the container will see them as separate file systems, so if your setup is /mnt/storage/TV, /mnt/storage/downloads, etc., you'd have to pass just /mnt/storage as the bind mount.
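Those few lines are bind-mount entries in the container's config file. A sketch, assuming a hypothetical container ID of 101 and a /mnt/storage pool on the host:

```shell
# Sketch: bind-mount host storage into LXC 101 (the ID is a placeholder).
# Passing /mnt/storage as one mount point (instead of TV/ and downloads/
# separately) keeps hard links working inside the container.
echo "mp0: /mnt/storage,mp=/mnt/storage" >> /etc/pve/lxc/101.conf
pct reboot 101   # restart the container so it picks up the new mount
```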

[–] lka1988@lemmy.dbzer0.com 2 points 2 days ago

There you go, that's another option.

[–] PeriodicallyPedantic@lemmy.ca 1 points 1 day ago (2 children)

I was recently trying out proxmox and found it super overkill and complex for serving stacks of software.
I switched to portainer and it was so much nicer to work with for that use case.

The one thing I miss is that Proxmox ships as its own OS, so you don't need to worry about any setup. But I wrote a little script I can use to install Portainer and configure systemd to make sure it's always running.
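A bootstrap script like that can be quite short. A sketch, assuming Docker is already installed, that uses systemd for the daemon and Docker's restart policy for the container:

```shell
#!/bin/sh
# Sketch of a Portainer bootstrap script (assumes Docker is installed).
set -e
# Make sure the Docker daemon starts now and on every boot.
systemctl enable --now docker
# Persistent volume for Portainer's data.
docker volume create portainer_data
# --restart=always keeps Portainer up across reboots and crashes.
docker run -d --name portainer --restart=always \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```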

I just started with portainer so I'm not an expert, but you may want to look into it.

[–] Bronzie@sh.itjust.works 3 points 1 day ago (1 children)

Yeah, it’s a bit of an unfair comparison: hypervisor vs. container manager.
The reason you run Proxmox is to do «everything» in one place, including docker.

If all you host are containers, then I agree it’s overkill, but if you want VMs and containers combined, maybe even in a cluster, then Proxmox is hard to beat.

I host LXCs with Portainer inside Proxmox, as I find it easier to deal with and maintain. Then in a VM I run the full Home Assistant OS instead of the Docker image.

Unless you don’t need it at all, I’d recommend you give it another try. It’s a very flexible system that «does it all» once you get going.

[–] PeriodicallyPedantic@lemmy.ca 1 points 13 hours ago (1 children)

I understand they have different purposes, but one (container manager) seems far more suited to the typical things that people want to do in their homelabs, which is to host applications and application stacks.
Rarely do I see people need an interactive virtualized environment (in homelabs), except to set up those aforementioned applications, and then containers and container stack definitions are better, because having a declarative way to deploy applications is better. Self-hosting projects often provide Docker/OCI containers and compose files as the official way to deploy. I'm not deep in the community yet, but so far that has been my experience.
Additionally, some volume mounting options I wanted to use are only available via CLI, which is frustrating.
So I don't really understand what value proposition Proxmox provides that causes homelab folks to rally around it so passionately.

Having a one-stop-shop that can run VMs is handy for those last-resort scenarios where using an application container just isn't possible, but thankfully I haven't run into that yet. It doesn't seem like OP has run into that yet either, if I read it correctly.
I'm not deep into my self-hosting journey, but it doesn't seem like there are that many things that require a VM or hypervisor 🤞

[–] Bronzie@sh.itjust.works 1 points 5 hours ago

You’re not wrong, but I think you might be leaving some future capabilities on the table, that’s it.
There is nothing wrong with running everything through Portainer at all. It’s how I started myself. The downside is that it’s limiting if you ever wish to run e.g. HA OS or a sandboxed OS for testing/playing around. Automatic backups, resizing LXCs, or allocating more memory are also easier to do with a GUI than in the CLI. At least for me, hehe.

That’s the great thing about self hosting though: if you’re happy with it, then it’s perfect!
Don’t change anything because someone tells you to if it works for you, friend!

[–] lka1988@sh.itjust.works 2 points 1 day ago* (last edited 23 hours ago) (1 children)

Proxmox isn't really comparable to Docker (or its 3rd party webui frontends) and was never meant to directly run user-facing services. Proxmox simply provides the virtual infrastructure required to host VMs and LXCs that will run your desired services.

IMO, Dockge (not a typo) is a far cleaner and easier solution than Portainer. It's very simple to set up and can easily link to other Dockge instances on other Docker hosts (I have like 4 or 5 VMs just for Docker). It also doesn't bury your compose files deep inside a specific Docker volume that only its own container can access... like Portainer does.

[–] PeriodicallyPedantic@lemmy.ca 1 points 13 hours ago

Yeah, I looked into Dockge and I really like it, but I still went with Portainer because it manages volumes directly rather than making me mount them manually and modify fstab.

I have to admit I don't really understand the philosophy or value proposition of Proxmox as it relates specifically to homelabs, because I don't really understand the value of VMs or LXCs except as last resorts (when you can't use an application container, since defining applications declaratively is almost always better).
Almost everything I want to host, and I see people talking about hosting in their homelabs, are stacks of applications, which makes something like docker compose perfect for purpose.

When I saw proxmox supported OCI containers, I was hopeful it'd provide a nice way to deploy a stack of OCI containers, but it didn't. And in fact, some volume mounting features (that I wanted) could only be accessed by CLI.

[–] ilovecheese@feddit.uk 7 points 2 days ago (1 children)

Why not lxc? I've been using this setup for quite some time.

[–] standarduser@lemmy.dbzer0.com 2 points 2 days ago (1 children)

I’m mostly worried about network traffic leaking, since I’m not particularly sure how to have a VPN work on just the LXC containers while still connecting to the ZFS shares.

You can pass the storage you need to the LXCs with bind mounts. No network connection needed.
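For the VPN worry specifically, the gluetun pattern from the old stack carries over unchanged to a Docker VM or LXC: only containers that share gluetun's network namespace leave through the tunnel, while bind-mounted storage stays local. A minimal compose sketch (service names and provider settings are placeholders):

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=mullvad   # placeholder; set your provider + credentials
      - SERVER_COUNTRIES=Germany
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"    # all traffic exits via the VPN
    volumes:
      - /mnt/storage/downloads:/downloads
```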

[–] CmdrShepard49@sh.itjust.works 4 points 2 days ago (1 children)

I followed this dude's tutorial to get everything setup in Proxmox: https://youtu.be/qmSizZUbCOA

Oh shit, this is very similar! I forgot about this dude’s GitHub; that was my guide last time. Thank you, thank you for this!

[–] dcatt@lemmy.dbzer0.com 3 points 2 days ago
[–] pinche_juan@infosec.exchange 1 points 2 days ago (1 children)

@standarduser a possible solution is to set up an NFS server on Proxmox. It's not best practice, but it's the easiest to set up.
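For completeness, the host side of that approach is just an export entry and a reload. A sketch, assuming nfs-kernel-server is installed on the Proxmox host, with placeholder paths and subnet:

```shell
# Sketch: export a media dataset from the Proxmox host over NFS.
# /tank/media and 192.168.1.0/24 are placeholders for your pool and LAN.
echo "/tank/media 192.168.1.0/24(rw,sync,no_subtree_check)" >> /etc/exports
exportfs -ra   # apply the updated export list
# On a client: mount -t nfs <proxmox-ip>:/tank/media /mnt/media
```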

I was just reading about it in the other comment's YouTube video. It had a GitHub page that said it was explicitly not recommended, too. I can see why now, after working on it last night. In a professional setting this would be horrendous.