this post was submitted on 27 Dec 2025
26 points (93.3% liked)

Selfhosted


Howdy selfhosters

I’ve got a bit of an interesting one that started as a learning experience, but I think I got in a bit over my head with it. I had been running the *arr stack via docker-compose on my old Ubuntu desktop PC. I got lucky with a recycler and managed to grab a decent old workstation, and my company tossed out some 15 SAS HDDs. Thankfully those worked. I finally got Proxmox set up and have a few drives mounted in a ZFS pool that Plex currently reads from. Unfortunately I didn't manage to save a final backup of my old stack; I'll admit it was a bit messy anyway, using Gluetun tied to a German VPN server for P2P. I did preserve a lot of my old data for migrating the media libraries, though.

I’m open to suggestions for getting the stack running again on Proxmox on the workstation. I’m not sure how best to go about it, since host mount points can only be bind-mounted into LXC containers and I can’t figure out how to pass the ZFS shares through to a VM. I feel like I’m overcomplicating this, but I need to maintain a secure connection, since burgerland doesn’t make for the best *arr stack hosting in my experience. It feels a bit daunting; I've tried to tackle it and asked a few LLMs to write me up some guidelines to make it easier, but I couldn't make that work as a way to actually learn it.

[–] lka1988@lemmy.dbzer0.com 12 points 3 days ago* (last edited 3 days ago) (3 children)

For the file server conundrum, something to keep in mind is that Proxmox is not NAS software and isn't really set up to do that kind of thing. Plus, the Proxmox devs have been very clear about not installing anything that isn't absolutely necessary outside of Proxmox (on the same machine).

However, you can set up a file server inside an LXC and share that through an internal VLAN inside Proxmox. Just treat that LXC as a NAS.
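A minimal sketch of that NAS-in-an-LXC approach, assuming a Debian-based container with ID 100 and the ZFS pool mounted on the host at /tank/media (both placeholders — adjust to your setup):

```shell
# On the Proxmox host: bind-mount the ZFS dataset into the NAS container.
pct set 100 -mp0 /tank/media,mp=/mnt/media

# Inside CT 100: install Samba and export the mount as a share.
apt update && apt install -y samba
cat >> /etc/samba/smb.conf <<'EOF'
[media]
   path = /mnt/media
   browseable = yes
   read only = no
   valid users = media
EOF
systemctl restart smbd
```

The VM running the *arr stack can then mount that share over the internal VLAN like any other network filesystem.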

For your *arr stack, fire up a dedicated VM just for them, and install Docker in that VM, of course.
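Since you were already running Gluetun with a German endpoint, a rough compose sketch for that VM might look like the following. Image names are the real upstream ones, but the provider settings, paths, and service selection are placeholders you'd fill in yourself:

```shell
# Inside the dedicated VM: write a minimal compose file routing the
# download client's traffic through Gluetun, then bring the stack up.
mkdir -p ~/arr && cat > ~/arr/docker-compose.yml <<'EOF'
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add: [NET_ADMIN]
    environment:
      - VPN_SERVICE_PROVIDER=custom   # set to your actual provider
      - SERVER_COUNTRIES=Germany
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"   # all torrent traffic exits via the VPN
    volumes:
      - /mnt/media/downloads:/downloads
  sonarr:
    image: lscr.io/linuxserver/sonarr
    volumes:
      - /mnt/media:/data
EOF
docker compose -f ~/arr/docker-compose.yml up -d
```

The key bit is `network_mode: "service:gluetun"` on the download client, so if the VPN drops, that container loses connectivity instead of leaking.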

LLMs

If you're gonna use that, please make sure you comb through the output and understand it before implementing it.

[–] standarduser@lemmy.dbzer0.com 2 points 2 days ago

I was able to follow what you said with another comment's YT video. I appreciate it. The LLMs were more of just an “explain this to me in simpler terms” or “why doesn't this work,” just because I was tired after work most of the time. It helped, but it was also months ago, with limited time to record much.

[–] non_burglar@lemmy.world 2 points 2 days ago (1 children)

This is absolutely overkill; just use bind mounts for the *arr stack and keep the ZFS pool local.

[–] lka1988@lemmy.dbzer0.com 1 points 2 days ago* (last edited 2 days ago) (1 children)

This is absolutely overkill

Hardly. Keeping the file server separate is good for reliability in case you bork an unrelated service, so you don't take out everything else with it. That's also partly why things like VMs, LXC, and Docker exist.

[–] non_burglar@lemmy.world 1 points 2 days ago (1 children)

in case you bork an unrelated service

??

Why would borking another service break a bind mount?

[–] lka1988@lemmy.dbzer0.com 1 points 2 days ago (1 children)

No need to be antagonistic. I merely suggested the method I use for my home lab after learning the "hard way" to containerize and separate certain things.

[–] non_burglar@lemmy.world 1 points 2 days ago

I'm not being antagonistic; I don't know where you're getting that.

Do what you want, I don't care.

[–] gaylord_fartmaster@lemmy.world 5 points 3 days ago (2 children)

On the other hand, I've been mounting my storage drives on the proxmox host with mergerfs and exposing what I need to the LXCs with bind mounts for years, and I haven't had a single issue with it across multiple major version upgrades.

[–] standarduser@lemmy.dbzer0.com 1 points 2 days ago (1 children)

That’s really solid, actually. How tedious was that setup, roughly? I’m certainly curious.

Super simple, like 30 minutes to set up mergerfs, and then the bind mounts are a few lines added to the LXC config files at most. This isn't strictly necessary, but I have users set up on the Proxmox host with access to specific directories; they're kind of a pain in the ass to remap to the LXC users, but it was needed to give my *arr stack access to everything it needs without giving it access to the entire storage pool.

Hard links won't work across multiple bind mounts, because the container sees them as separate file systems. So if your setup is /mnt/storage/TV, /mnt/storage/downloads, etc., you'd have to pass just /mnt/storage as the single bind mount.
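A concrete sketch of that mergerfs-plus-bind-mount layout, assuming disks already mounted at /mnt/disk1, /mnt/disk2, etc., and a container with ID 101 (all placeholders):

```shell
# On the Proxmox host: pool the individual disks with mergerfs via fstab,
# then mount the combined pool at /mnt/storage.
echo '/mnt/disk* /mnt/storage fuse.mergerfs category.create=mfs,cache.files=partial 0 0' >> /etc/fstab
mount /mnt/storage

# Pass the whole pool as ONE bind mount so hard links keep working inside
# the container (separate bind mounts look like separate filesystems):
echo 'mp0: /mnt/storage,mp=/mnt/storage' >> /etc/pve/lxc/101.conf
```

With the single mp0 entry, Sonarr/Radarr inside the container can hard-link from /mnt/storage/downloads into /mnt/storage/TV instead of copying.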

[–] lka1988@lemmy.dbzer0.com 2 points 3 days ago

There you go, that's another option.