this post was submitted on 26 Nov 2025
79 points (96.5% liked)

Selfhosted


Reading earlier comments in this community made me consider documenting the workings of my homelab to some extent, i.e. the Docker configuration, credentials, ports, and links of my services. I've tried to make it consistent and organised, but it still feels half-baked and insufficient. Everyone suggests documenting everything you do in your homelab, but nobody says how. Since I've hardly had any experience running my own server, I would really appreciate seeing the blueprint of some fellow selfhoster to copy or take inspiration from, rather than having documentation be 'left as an exercise for the reader'.

Edit: I already have a note-taking solution. What I wish to ask is what needs to be documented, and what the structure of the documentation should be to accommodate that information.

top 39 comments
[–] fruitycoder@sh.itjust.works 7 points 17 hours ago (1 children)

This is what I like about git ops and infra/config as Code personally.

Ideally everything is in a tofu/Ansible/Helm chart and a GitLab pipeline/Fleet job. I add comments to those files for anything I had to learn to make work. Follow good commit hygiene (most of the time). And bam, almost a year later I can stumble half asleep back into a thing I did.

[–] howrar@lemmy.ca 2 points 15 hours ago (1 children)

Do you use this for physical machines too?

[–] fruitycoder@sh.itjust.works 2 points 14 hours ago

Yep! Metal3 for servers with BMCs, Tinkerbell for everything else.

I also have an Ansible playbook that templates everything into cloud-init scripts as a bootstrap server.

About 12 nodes in total now, from new servers to freebie junk laptops.

[–] comrade_twisty@feddit.org 21 points 23 hours ago* (last edited 23 hours ago) (1 children)

Everyone will have their own system.

I save all my credentials in Bitwarden/Vaultwarden and take notes in Joplin.

The good thing about YOUR homelab is that YOU’RE taking notes solely for YOURSELF and only YOU know how YOU work and how YOU organize YOUR thoughts.

[–] irmadlad@lemmy.world 5 points 23 hours ago

I save all my credentials in Bitwarden/Vaultwarden

Yeah, I don't put key phrases, passwords, etc in my notes.

[–] Olgratin_Magmatoe@slrpnk.net 10 points 20 hours ago* (last edited 20 hours ago)

Whenever I set something up I usually make a markdown file listing the commands and steps to take. I do this as I am setting things up and familiarizing myself, so once I'm done, I have a start to finish guide.

Raw text/markdown files will be readable until the end of time.
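
A hypothetical sketch of what one of those per-service files can look like (the service name, paths, and commands here are all invented for illustration):

```markdown
# Jellyfin setup notes (Debian 12, Docker)

## Steps, in order

    # data directories first, or the container creates them as root
    mkdir -p /srv/jellyfin/config /srv/jellyfin/cache
    docker compose up -d

## Gotchas

- hardware transcoding needs /dev/dri passed through to the container
- everything worth backing up lives in /srv/jellyfin/config
```

Writing it while setting the service up, as described above, means the file doubles as a start-to-finish guide later.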

[–] osaerisxero@kbin.melroy.org 7 points 19 hours ago

I believe it is traditional to do so written in blood in the style of an apocalypse log, dealer's choice for whose blood. Make sure it's disjointed and nearly incomprehensible, but that everything is there.

Bonus points if you print the config files and write your documentation on them after stapling them to the walls

[–] erebion@news.erebion.eu 10 points 21 hours ago

Ansible is my config and documentation in one.

It's reproducible, idempotent and I don't need anything else.

I write all code myself, that makes it even easier to read.
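
As a rough sketch of what "Ansible as config and documentation" can look like, here is a hypothetical task file (the module names are real Ansible builtins, but the service, paths, and handler are invented):

```yaml
# roles/caddy/tasks/main.yml (hypothetical example)
# The task names double as documentation, and comments capture anything
# that had to be learned the hard way.
- name: Install Caddy
  ansible.builtin.apt:
    name: caddy
    state: present

- name: Deploy Caddyfile
  ansible.builtin.template:
    src: Caddyfile.j2
    dest: /etc/caddy/Caddyfile
  # idempotent: the handler fires only when the rendered file changes
  notify: Reload caddy
```

Re-running the playbook is then a no-op unless something drifted, which is what makes it trustworthy as documentation.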

[–] mathuin@lemmy.world 8 points 21 hours ago (1 children)

I agree with the advice that says “Document your setup such that you could recreate it from your notes from scratch” but I’d take it another step further — consider that someone may have to do some work on your system when you are unable or unavailable. The kind of thing you’d keep with your will, or power of attorney. Just a suggestion.

[–] irmadlad@lemmy.world 6 points 20 hours ago (1 children)

.....and to my family I bequeath my entire collection of Linux ISOs

[–] mathuin@lemmy.world 4 points 20 hours ago (1 children)

You jest but if I left my wife my Home Assistant setup undocumented she would pee on my grave.

[–] irmadlad@lemmy.world 3 points 19 hours ago (1 children)

LOL, well I'm single, though I've known my ladyfriend for over 40 years. I offered to set up a server at her house and connect the two, but she has no interest in rifling through my lab for anything of interest in the case of my passing.

[–] mathuin@lemmy.world 2 points 15 hours ago (1 children)

I’m happily married with a kid, and we recently went through the estate planning process. When I brought up IP stuff and digital properties, their advice was pretty much “Hmm… you should pick someone who understands what you’re talking about, get their approval in advance, and then add them as your legacy contacts and document the heck out of everything”. Realistically nobody is going to want my GitHub stuff or anything like that, but I would like my kid to have access to most* of my files after I pass. I am of course excluding the kind of content that “real friends” delete while your body is still warm.

[–] irmadlad@lemmy.world 3 points 15 hours ago (1 children)

It'd be nice to donate all my equipment to some kid who is very interested. That would be something I'd be interested in.

[–] mathuin@lemmy.world 1 points 15 hours ago

My documented plan includes that kind of donation for my amateur radio equipment, but I’m going to let my survivors handle the home lab.

[–] happy_wheels@lemmy.blahaj.zone 2 points 17 hours ago

LibreOffice Calc/MS Excel. Old school tracking and extremely flexible for documentation. I have been doing this for the last decade, both at home and at my workplace. My team loves it, tho YMMV.

[–] irmadlad@lemmy.world 7 points 23 hours ago* (last edited 22 hours ago)

Document everything as if it were a step-by-step tutorial you will give to someone so that they can duplicate your deployment without any prior knowledge. I'll even include URLs to the sites I consulted to achieve production deployment.

ETA: I absolutely care nothing about points, but upvoting and downvoting used to be a way to weed out bad info. So it always leaves me wondering: did I give erroneous advice? What was the reason for the downvote? I mean, if you downvoted and said 'I downvoted you because I hate your guts', I could deal with that.

[–] pepperprepper@lemmy.world 3 points 19 hours ago* (last edited 19 hours ago) (2 children)
[–] enchantedgoldapple@sopuli.xyz 0 points 8 hours ago

That is a behemoth of a homelab you have set up there. My jaw would've dropped out if it could.

[–] irmadlad@lemmy.world 2 points 19 hours ago

Dude that is a respectable lab you have there! Much envy

[–] wersooth@lemmy.world 7 points 23 hours ago* (last edited 23 hours ago)

I have a repo for the infra files (compose files, plus Terraform files just for playing) and store the docs in the same repo as MD files. As for secrets, I'm using Docker Swarm, so I can store the needed passwords there; otherwise Vaultwarden is my go-to self-hosted, lightweight password manager, compatible with Bitwarden clients. I'm a little paranoid that if the note service got DB corruption I might lose too much info, so git is the way (personal opinion).

edit: I add the related MD file next to the compose file, one folder per service, so the source and the doc are coupled in one place.

[–] confusedpuppy@lemmy.dbzer0.com 5 points 23 hours ago

I have two systems that sort of work together.

The first system involves a bunch of text files for each task: OS installation, basic post-OS-installation tasks, and a file for each program I add (like UFW, AppArmor, ddclient, Docker and so on). They basically look like scripts with comments. If I want to, I can just copy/paste everything into a terminal and reach a specific state that I want to be at.

The second system is a sort of "skeleton" file tree that only contains all the files that I have added or modified.

Here's an example of what my server skeleton file tree looks like


.
├── etc
│   ├── crontabs
│   │   └── root
│   ├── ddclient
│   │   └── ddclient.conf
│   ├── doas.d
│   │   └── doas.conf
│   ├── fail2ban
│   │   ├── filter.d
│   │   │   └── alpine-sshd-key.conf
│   │   └── jail.d
│   │       └── alpine-ssh.conf
│   ├── modprobe.d
│   │   ├── backlist-extra.conf
│   │   └── disable-filesystems.conf
│   ├── network
│   │   └── interfaces
│   ├── periodic
│   │   └── 1min
│   │       └── dynamic-motd
│   ├── profile.d
│   │   └── profile.sh
│   ├── ssh
│   │   └── sshd_config
│   ├── wpa_supplicant
│   │   └── wpa_supplicant.conf
│   ├── fstab
│   ├── nanorc
│   ├── profile
│   └── sysctl.conf
├── home
│   └── pi-user
│       ├── .config
│       │   └── ash
│       │       ├── ashrc
│       │       └── profile
│       ├── .ssh
│       │   └── authorized_keys
│       ├── .sync
│       │   ├── file-system-backup
│       │   │   ├── .sync-server-fs_01_root
│       │   │   └── .sync-server-fs_02_boot
│       │   └── .sync-caddy_certs_backup
│       ├── .nanorc
│       └── .tmux.conf
├── root
│   ├── .config
│   │   └── mc
│   │       └── ini
│   ├── .local
│   │   └── share
│   │       └── mc
│   │           └── history -> /dev/null
│   ├── .ssh
│   │   └── authorized_keys
│   ├── scripts
│   │   ├── automated-backup
│   │   └── maintenance
│   ├── .ash_history -> /dev/null
│   └── .nanorc
├── srv
│   ├── caddy
│   │   ├── Caddyfile
│   │   ├── Dockerfile
│   │   └── docker-compose.yml
│   └── kiwix
│       └── docker-compose.yml
└── usr
    └── sbin
        ├── containers-down
        ├── containers-up
        ├── emountman
        ├── fs-backup-quick
        └── rtransfer

This is useful to me because I can keep track of every change I make. I even have it set up so I can use rsync to quickly chuck all the files into place after a fresh install or after adding/modifying files.

I also created and maintain a "quick install" guide so I can install a fresh OS, rsync all the modified files from my skeleton file tree into place, then run through all the commands in my quick install guide to get myself back to the same state in a minimal amount of time.

[–] henfredemars@infosec.pub 6 points 1 day ago (1 children)

I have a simple pile of Markdown files that I edit with Obsidian. I like the simple text file format because it keeps my documentation forward-compatible. I use OpenWrt at the heart of my network, so I keep it right there in root's home. Every long while I back it up to my general Documents, which is then synced between my high-storage devices with Syncthing.

[–] enchantedgoldapple@sopuli.xyz 0 points 1 day ago (1 children)

Thanks for your response. I already have Joplin synced with my server as a solution for my documentation. However, I meant to ask how you structure your documentation, how you decide what to write down, and how you organise it for future reference.

[–] unimalion@sh.itjust.works 1 points 21 hours ago

Don't know if this helps, since DokuWiki lets me link pages, but I have a main page where I just write a one-paragraph description of every big thing in use.

each page has:

  • an in depth description,
  • how it's set up,
  • a list of features i use,
  • how it connects to other services,
  • and a miscellaneous for everything else

I'll also add any notes in the misc section in case I need to reference them later. If a service is mentioned, I'll create a page for it and link to it every time I mention it. That way nothing is more than a few clicks away and the documentation grows naturally as long as you don't have any monolithic application. Example: (main -> Docker -> Project_Ozone_2 -> custom configurations Or main -> Joomla -> wysiwyg ->JCE Editor)

I also had a professor tell me to just write everything down first and then focus on formatting to find what kind of structure suits your needs best.
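
Sketched in Markdown (DokuWiki's own syntax differs), a page built on that structure might look like this, with every name invented for illustration:

```markdown
# Vaultwarden

## Description
Self-hosted, Bitwarden-compatible password manager.

## How it's set up
Docker compose file at /srv/vaultwarden, reverse-proxied by [[caddy]].

## Features I use
- organizations, emergency access

## Connections to other services
- [[caddy]] terminates TLS, [[restic]] backs up the data volume

## Misc
- admin panel disabled; re-enable via ADMIN_TOKEN if ever needed
```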

[–] Evil_Shrubbery@thelemmy.club 4 points 22 hours ago* (last edited 22 hours ago) (1 children)

(Bookmarked for when I have the mental capacity to ...)

Do y'all also document backup/restore procedures?
How often do you test it?

[–] irmadlad@lemmy.world 4 points 21 hours ago

Frankly, with my screwed up brain, I document everything. I can turn around twice in my lab and my brain will flat line. When I first started, I would always tell myself that I'd remember stuff. Not anymore.

I created a script for Linux that automatically backs up to a NAS drive, once every two weeks, as a complete image, and I keep 5 on deck. Testing usually happens once every 3 months or so. I also have Duplicati backups that are stored offsite on my VPS.

[–] CaptainPedantic@lemmy.world 5 points 23 hours ago* (last edited 23 hours ago)

I've got a bunch of notes in Trilium.

I have a note for each service with the docker compose file, notes on backups, any weirdness with the setup, and when I update each service. I use Trilium as a crappy version control for the compose file.

I also have a note for the initial setup of my server (mostly setting up docker, setting up mergerfs and snapraid).

Other than that, I have one note for each device in my setup (WiFi AP, OPNsense router, switch, etc.) that I populate with random crap I might need to know later.

[–] cecilkorik@lemmy.ca 4 points 23 hours ago

You're on the right track. Like everything else in self-hosting you will learn and develop new strategies and scale things up to an appropriate level as you go and as your homelab grows. I think the key is to start with something immediately achievable, and iterate fast, aiming for continuous improvement.

My first idea was much like yours, very traditional documentation, with words, in a document. I quickly found the same thing you did: it's half-baked and insufficient. There's simply no way to make it match the actual state of the system perfectly, and English alone is inadequate to explain what I did, because it ends up being too vague to be useful in a technical sense.

My next realization was that in most cases what I really wanted was to know every single command I had ever run, basically without exception. So I started documenting that instead of focusing on the wording and the explanations. Then I started to feel I wasn't capturing every command reliably, because I would get distracted trying to figure out a problem and forget, and it was duplicated effort to copy and paste commands from the console to the document or vice versa. That turned into the idea of collecting bunches of commands together into a script I could just run, which would at least reduce the risk of gaps and missing steps. Then I could put the commands I wanted to run right into the script, run it, and save it for posterity, knowing I'd accurately captured both the commands I ran and the changes I made to get things working, by keeping it in version control.

But upon attempting to do so, I found that a bunch of long lists of commands on their own isn't terribly useful, so I started grouping the lists up, finding commonalities by things like server or service, and then organizing them into scripts for different roles and intents that I could apply to any server or service; over time this grew into quite a library of scripts. As I was organizing, I realized that as long as I made each script functionally idempotent (it doesn't change behavior or duplicate work when run repeatedly, which is an important concept), I could guarantee that all my commands are properly documented and have all been run. And if they haven't, or I'm not sure, I can just run the script again, since it's supposed to be safe to re-run no matter what state the system is in. So I moved more and more to this strategy, until I realized that if I organized it well enough, and made the scripts run automatically when they are changed or updated, I could not only strengthen the guarantee that all these commands have been run reliably, but also run them quickly on many different servers and services at once without even having to think about it.
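
The idempotency idea can be sketched in a few lines of shell; every name here is illustrative, not taken from the comment itself:

```shell
# A step that checks state before acting, so the whole script can be
# re-run safely no matter what state the system is in.
CONF=$(mktemp)   # stand-in for a real config file such as sysctl.conf

ensure_line() {
    # Append a line only if an identical line is not already present.
    grep -qxF "$1" "$2" || echo "$1" >> "$2"
}

ensure_line "net.ipv4.ip_forward=1" "$CONF"
ensure_line "net.ipv4.ip_forward=1" "$CONF"   # second run changes nothing
```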

There are some downsides, of course. This leaves the potential for bugs that make a script not idempotent or not safe to re-run, and all I can do is try to make sure they don't happen, and identify and fix them when they do. The next step is probably some kind of testing process and environment (preferably automated), but now I'm really getting into the weeds. At least I no longer have any concerns that my system is undocumented; I can quickly reference almost anything it's doing or how it's set up. That said, one other risk is that the system of scripts and automation becomes so complex that it's too tangled to quickly understand, and at that point I'll need better documentation for the scripts themselves. Ultimately you get into a circle of validating that your scripts are actually working, doing what you expect, and missing nothing, and you run back into the same things that doomed your documentation from the start: consistency and accuracy.

It also opens an attack vector, where somebody gaining access to these scripts not only gains all the most detailed knowledge of how your system is configured but also the potential to inject commands into those scripts and run them anywhere, so you have to make sure to treat these scripts and systems like the crown jewels they are. If they are compromised, you are in serious trouble.

By now I have of course realized (and you all probably have too) that I have independently re-invented infrastructure-as-code. There are tools and systems (ansible and terraform come to mind) to help you do this, and at some point I may decide to take advantage of them but personally I'm not there yet. Maybe soon. If you want to skip the intermediate steps I did, you might even be able to skip directly to that approach. But personally I think there is value in the process, it helps defining your needs and building your understanding that there really isn't anything magical going on behind the scenes and that may help prevent these tools from turning into a black box which isn't actually going to help you understand your system.

Do I have a perfect system? Of course not. In a lot of ways it's probably horrific and I'm sure there are more experienced professionals out there cringing or perhaps already furiously warming up their keyboards. But I learned a lot, understand a lot more than I did when I started, and you can too. Maybe you'll follow the same path I did, maybe you won't. But you'll get there.

[–] No_Bark@lemmy.dbzer0.com 4 points 23 hours ago* (last edited 23 hours ago)

I've been documenting my homelab experiments, setups, configurations, how-tos, etc. in both Trilium and Silverbullet. I use Silverbullet more as a wiki and Trilium for journal-style notes. I just got into self-hosting earlier this year, so I'm by no means an expert or authority on any of this.

So my Silverbullet setup contains most of my documentation on how to get things set up. I have sections for specific components of the homelab (general Proxmox setup, general networking, specific how-tos for getting various VMs and LXCs set up for specific applications, specific how-tos on getting Docker stacks up and running, etc.).

I didn't document shit the first two times I set up and restarted my entire homelab, but by the third time I learned. From there I basically just wrote down what I did to get things running properly, then reviewed the notes afterward to make sure I understood what I wrote. This is never a perfect process, so in the following attempts at resetting my server I've updated sections or made things clearer, so that when I come back to this 8 months later I can follow my guide fully and be up and running.

Some of my notes are just copy pasted directly from tutorials I originally followed to get things set up. This way I just have an easily accessible local copy.

When I troubleshoot something, I document the steps I take in Trilium using the journal feature, so I can easily track the times and dates of when I did what. This has helped me out immensely because I forget what the fuck I did the week before all the time.

I learned all this through trial and error. You'll figure out what needs to be documented as you go along, so don't get too caught up trying to make sure you have a perfect documentation plan in place before deploying anything.

I'm one of those people who never really took notes on things or wrote shit down for most of my life, mostly because I've been doing shit that doesn't require extensive documentation, so it was a big learning curve.

Edit: Forgot to mention that I also have a physical paper journal that I've scrawled various notes in. I found it easier to take quick notes on paper while I'm in the middle of working on something, then transcribe those notes digitally into either Silverbullet or Trilium.

[–] SlurpingPus@lemmy.world 1 points 18 hours ago

Use Ansible or some such solution like Puppet, Salt or Chef, just like the big boys do. If you don't have a unified editable config for your machines, you don't really have a homelab, you just have a pile of hardware instead.

[–] non_burglar@lemmy.world 2 points 22 hours ago

I "document" everything by forcing myself to create ansible runbooks for new services and configs. I have some gaps, definitely, but the more of them I create, the easier new services are to deploy.

[–] frongt@lemmy.zip 2 points 22 hours ago

That's the neat part, I don't!

I have a docker-compose file, which is somewhat self-documenting, especially since I give everything descriptive names. Creds go in bitwarden anyway.

But then, my environment isn't that complex, and I don't have anything so custom that I need notes to replicate it.
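
A minimal sketch of what "self-documenting through descriptive names" can mean in practice; the service and all values are invented for illustration:

```yaml
# docker-compose.yml (hypothetical example)
services:
  vaultwarden:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    volumes:
      - vaultwarden-data:/data   # everything worth backing up lives here
    ports:
      - "8081:80"                # LAN only; TLS handled by the reverse proxy

volumes:
  vaultwarden-data:
```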

[–] dbtng@eviltoast.org 2 points 23 hours ago* (last edited 23 hours ago)

I'm not real clear what exactly you need to document. Infrastructure documentation starts with an IPAM, and a good IPAM can help you document all kinds of stuff.

I use NetBox: https://github.com/netbox-community/netbox?tab=readme-ov-file#getting-started

I'm running it as a Docker container on a Linux VM. I just looked at their latest screenshots, and it appears they've done quite a bit with it since I stood up my copy. It does even more now. I'll have to upgrade.

[–] stratself@lemdro.id 2 points 23 hours ago

I write homelab docs mostly for user guidance like onboarding, login, and service-specific stuff. This helps me better design for people by putting myself in their shoes, and should act as a reference document for any member to come back to.

Previously I built an MkDocs Material website with a nice subdomain for it, but since that project went into maintenance mode, I'm going to migrate all the docs back to a Forgejo wiki, since it's just Markdown anyway. I also run an issue tracker there to manage the homelab's roadmap and features, since it's still evolving.

I find this approach beneficial compared to just documenting code. I'm not an IaC person yet, but I hope that when I am, the playbooks will describe themselves for the nitty-gritty stuff anyway. I do write some infra notes for myself, and perhaps to onboard maintainers, but most homelab development happens in the issue tracker itself. The rest I try to keep simple enough for an individual to understand.

[–] chrash0@lemmy.world 2 points 23 hours ago

three, maybe four things:

  1. as mentioned: Obsidian. i pay for Sync cuz i like the product and want them to succeed and want reliable offsite backups and conflict resolution. use a ton of links and tags. i’ve been into using DataView to make tables of IoT devices, services, todo items, etc based on tags and other YAML frontmatter.
  2. chezmoi. manages my dotfiles so my machines are consistent. i have scripts that are heavily commented that show how to access MQTT, how to read and parse logs from journald, how to inspect my network, etc. i do think of them as code as documentation, even if they’re also just convenient.
  3. NixOS. this has been my code as config as documentation silver bullet. i use it as a replacement for Docker, k8s, Ansible, etc as it contains definitions for my machines and all the services and configuration they run, including any package dependencies and user configurations. no more statting an assortment of files to figure out the state of the system. it’s in flake.nix
  4. honorable mention to git and whatever git hosting provider is not on your network. track your work over time, and you’ll thank yourself when things go wrong.
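
For the NixOS point, a hedged sketch of the flake.nix idea; the host name and module layout are invented, but `services.jellyfin.enable` is a real NixOS option:

```nix
{
  description = "homelab machines";
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";

  outputs = { self, nixpkgs }: {
    nixosConfigurations.homelab = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [
        ./hosts/homelab.nix                   # disks, users, networking
        { services.jellyfin.enable = true; }  # services declared inline
      ];
    };
  };
}
```

The whole machine state is then readable from this one entry point, rather than scattered across the filesystem.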

some things are resistant to documentation and have a lot of stateful components (Home Assistant is my biggest problem child from an infra perspective), but mainly being in that graph mindset of “how would i find a path here if i forgot where this was” helps a lot

[–] DrunkAnRoot@sh.itjust.works 1 points 21 hours ago

i make backups of everything and when writing configs i leave a bunch of comments

[–] helix@feddit.org 0 points 18 hours ago (1 children)
[–] GreenKnight23@lemmy.world 1 points 17 hours ago

going for the nuclear option I see...