this post was submitted on 29 Jan 2026
111 points (98.3% liked)

Selfhosted

57233 readers
496 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

  7. No low-effort posts. This is subjective and will largely be determined by the community member reports.

Resources:

Any issues with the community? Report them using the report flag.

Questions? DM the mods!

founded 2 years ago

There is a post about getting overwhelmed by 15 containers and people not wanting to turn the post into a container measuring contest.

But now I am curious: what are your counts? I would guess those of you running k*s would win out by pod scaling.

docker ps | wc -l

For those wanting a quick count.
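One nit on the one-liner: `docker ps | wc -l` also counts the column-header row, so it reports one more container than you actually have. A quick sketch with simulated output (the `fake_ps` function is just a stand-in for real `docker ps`):

```shell
# Stand-in for `docker ps` output: a header row plus two containers.
fake_ps() {
  printf 'CONTAINER ID   IMAGE   COMMAND   STATUS\n'
  printf 'aaa111         nginx   "nginx"   Up 2 days\n'
  printf 'bbb222         redis   "redis"   Up 5 days\n'
}

fake_ps | wc -l               # 3: both containers plus the header row
fake_ps | tail -n +2 | wc -l  # 2: header skipped, correct count
```

With real Docker, `docker ps -q` prints only container IDs with no header, so `docker ps -q | wc -l` is exact.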

[–] neidu3@sh.itjust.works 33 points 1 month ago* (last edited 1 month ago) (5 children)
  1. Because I'm old, crusty, and prefer software deployments in a similar manner.
[–] slazer2au@lemmy.world 12 points 1 month ago (2 children)

I salute you and wish you the best in never having a dependency conflict.

[–] neidu3@sh.itjust.works 17 points 1 month ago* (last edited 1 month ago)

I've been resolving them since the late 90s, no worries.

[–] mesamunefire@piefed.social 3 points 1 month ago* (last edited 1 month ago)

Agreed. I'm tired after work. Debian/Yunohost is good enough.

At work it's hundreds of Docker containers, but CI/CD takes care of all that.

[–] kmoney@lemmy.kmoneyserver.com 23 points 1 month ago* (last edited 1 month ago) (1 children)

140 running containers and 33 stopped (that I spin up sometimes for specific tasks or testing new things), so 173 total on Unraid. I have them grouped into:

  • 118 Auto-updates (low chance of breaking updates or non-critical service that only I would notice if it breaks)
  • 55 Manual-updates (either it's family-facing e.g. Jellyfin, or it's got a high chance of breaking updates, or it updates very infrequently so I want to know when that happens, or it's something I want to keep particular note of or control over what time it updates e.g. Jellyfin when nobody's in the middle of watching something)

I subscribe to all their github release pages via FreshRSS and have them grouped into the Auto/Manual categories. Auto takes care of itself and I skim those release notes just to keep aware of any surprises. Manual usually has 1-5 releases each day so I spend 5-20 minutes reading those release notes a bit more closely and updating them as a group, or holding off until I have more bandwidth for troubleshooting if it looks like an involved update.

Since I put anything that might cause me grief if it breaks in the manual group, I can also just not pay attention to the system for a few days and everything keeps humming along. I just end up with a slightly longer manual update list when I come back to it.

[–] a_fancy_kiwi@lemmy.world 8 points 1 month ago (2 children)

I’ve never looked into adding GitHub releases to FreshRSS. Any tips for getting that set up? Is it pretty straight forward?

[–] perishthethought@piefed.social 3 points 1 month ago (2 children)

I just added this URL for Jellyfin and it "just worked":

https://github.com/jellyfin/jellyfin/releases
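For anyone else wiring this up: GitHub also exposes native Atom feeds, so appending `/releases.atom` to a repo URL gives a feed of releases (`/tags.atom` and `/commits/<branch>.atom` exist too). A small sketch for generating feed URLs from a list of repos — the repo list here is just an example:

```shell
# Build GitHub release Atom-feed URLs for a list of owner/repo pairs.
repos="jellyfin/jellyfin
immich-app/immich"

printf '%s\n' "$repos" | while read -r repo; do
  echo "https://github.com/${repo}/releases.atom"
done
```

Each printed URL can be added to FreshRSS as an ordinary feed subscription.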

[–] drkt@scribe.disroot.org 16 points 1 month ago (7 children)

All of you bragging about 100+ containers, please may I inquire as to what the fuck that's about? What are you doing with all of those?

[–] StrawberryPigtails@lemmy.sdf.org 7 points 1 month ago (3 children)

In my case, most things that I didn't explicitly make public are running on Tailscale using their own Tailscale containers.

Doing it this way, each one gets its own address and I don't have to worry about port numbers. I can just type http://cars/ (Yes, I know. Not secure. Not worried about it) and get to my LubeLogger instance. But it also means I have 20ish copies of just the Tailscale container running.

On top of that, many services, like Nextcloud, are broken up into multiple containers. I think Nextcloud-aio alone has something like 5 or 6 containers it spins up, in addition to the master container. Tends to inflate the container numbers.

[–] kmoney@lemmy.kmoneyserver.com 4 points 1 month ago* (last edited 1 month ago) (1 children)

A little of this, a little of that...I may also have a problem... >_>;

The List

Quickstart

  • dockersocket
  • ddns-updater
  • duckdns
  • swag
  • omada-controller
  • netdata
  • vaultwarden
  • GluetunVPN
  • crowdsec

Databases

  • postgresql14
  • postgresql16
  • postgresql17
  • Influxdb
  • redis
  • Valkey
  • mariadb
  • nextcloud
  • Ntfy
  • PostgreSQL_Immich
  • postgresql17-postgis
  • victoria-metrics
  • prometheus
  • MySQL
  • meilisearch

Database Admin

  • pgadmin4
  • adminer
  • Chronograf
  • RedisInsight
  • mongo-express
  • WhoDB
  • dbgate
  • ChartDB
  • CloudBeaver

Database Exporters

  • prometheus-qbittorrent-exporter
  • prometheus-immich-exporter
  • prometheus-postgres-exporter
  • Scraparr

Networking Admin

  • heimdall
  • Dozzle
  • Glances
  • it-tools
  • OpenSpeedTest-HTML5
  • Docker-WebUI
  • web-check
  • networking-toolbox

Legally Acquired Media Display

  • plex
  • jellyfin
  • tautulli
  • Jellystat
  • ErsatzTV
  • posterr
  • jellyplex-watched
  • jfa-go
  • medialytics
  • PlexAniSync
  • Ampcast
  • freshrss
  • Jellyfin-Newsletter
  • Movie-Roulette

Education

  • binhex-qbittorrentvpn
  • flaresolverr
  • binhex-prowlarr
  • sonarr
  • radarr
  • jellyseerr
  • bazarr
  • qbit_manage
  • autobrr
  • cleanuparr
  • unpackerr
  • binhex-bitmagnet
  • omegabrr

Books

  • BookLore
  • calibre
  • Storyteller

Storage

  • LubeLogger
  • immich
  • Manyfold
  • Firefly-III
  • Firefly-III-Data-Importer
  • OpenProject
  • Grocy

Archival Storage

  • Forgejo
  • docmost
  • wikijs
  • ArchiveTeam-Warrior
  • archivebox
  • ipfs-kubo
  • kiwix-serve
  • Linkwarden

Backups

  • Duplicacy
  • pgbackweb
  • db-backup
  • bitwarden-export
  • UnraidConfigGuardian
  • Thunderbird
  • Open-Archiver
  • mail-archiver
  • luckyBackup

Monitoring

  • healthchecks
  • UptimeKuma
  • smokeping
  • beszel-agent
  • beszel

Metrics

  • Unraid-API
  • HDDTemp
  • telegraf
  • Varken
  • nut-influxdb-exporter
  • DiskSpeed
  • scrutiny
  • Grafana
  • SpeedFlux

Cameras

  • amcrest2mqtt
  • frigate
  • double-take
  • shinobipro

HomeAuto

  • wyoming-piper
  • wyoming-whisper
  • apprise-api
  • photon
  • Dawarich
  • Dawarich---Sidekiq

Specific Tasks

  • QDirStat
  • alternatrr
  • gaps
  • binhex-krusader
  • wrapperr

Other

  • Dockwatch
  • Foundry
  • RickRoll
  • Hypermind

Plus a few more that I redacted.

[–] drkt@scribe.disroot.org 4 points 1 month ago (2 children)

I look at this list and cry a little bit inside. I can't imagine having to maintain all of this as a hobby.

[–] Chewy7324@discuss.tchncs.de 4 points 1 month ago

From a quick glance I can imagine many of those services don't need much maintenance if any. E.g. RickRoll likely never needs any maintenance beyond the initial setup.

[–] irmadlad@lemmy.world 3 points 1 month ago

Not bragging. It is what it is. I run a plethora of things and that's just on the production server. I probably have an additional 10 on the test server.

[–] Sibbo@sopuli.xyz 14 points 1 month ago (2 children)

0, it's all organised nicely with nixos

[–] slazer2au@lemmy.world 7 points 1 month ago* (last edited 1 month ago) (1 children)

Boooo, you need some chaos in your life. :D

[–] thinkercharmercoderfarmer@slrpnk.net 7 points 1 month ago (2 children)

That's why I have one host called theBarrel and it's just 100 Chaos Monkeys and nothing else


I have 1 podman container on NixOS because some obscure software has a packaging problem with ffmpeg and the NixOS maintainers removed it. docker: command not found

[–] HK65@sopuli.xyz 10 points 1 month ago (1 children)

I know using work as an example is cheating, but around 1400-1500 to 5000-6000 depending on load throughout the day.

At home it's 12.

[–] slazer2au@lemmy.world 7 points 1 month ago (1 children)

I was watching a video yesterday where an org was churning 30K containers a day because they didn't profile their application correctly and scaled their containers based on a misunderstanding of how Linux deals with CPU scheduling.

[–] HK65@sopuli.xyz 5 points 1 month ago

Yeah that shit is more common than people think.

A big part of the business of cloud providers is that most orgs have no idea how to do shit. Their enterprise consultants are also wildly variable in competence.

There was also a large amount of useless bullshit that I've needed to cut down on since being hired at my current spot, but the number of containers is actually warranted. We do have that traffic, which is both happy and sad, since while business is booming, I have to deal with this.

[–] panda_abyss@lemmy.ca 9 points 1 month ago (4 children)

I am like Oprah yelling “you get a container, you get a container, Containers!!!” at my executables.

I create aliases using toolbox so I can run most utils easily and securely.

[–] manmachine@lemmy.world 8 points 1 month ago (1 children)

Zero. Either it’s just a service with no wrappers, or a full VM.

[–] BCsven@lemmy.ca 9 points 1 month ago (1 children)

Why a full VM? That seems like a ton of overhead.

[–] smiletolerantly@awful.systems 7 points 1 month ago* (last edited 1 month ago) (4 children)

Zero.

About 35 NixOS VMs though, each running either a single service (e.g. Paperless) or a suite (Sonarr and so on plus NZBGet, VPN,...).

There's additionally a couple of client VMs. All of those distribute over 3 Proxmox hosts accessing the same iSCSI target for VM storage.

SSL and WireGuard are terminated at a physical firewall box running OpnSense, so with very few exceptions, the VMs do not handle any complicated network setup.

A lot of those VMs have zero state, those that do have backup of just that state automated to the NAS (simply via rsync) and from there everything is backed up again through borg to an external storage box.

In the stateless case, deploying a new VM is a single command; in the stateful case, same command, wait for it to come up, SSH in (keys are part of the VM images), run restore-<whatever>.

On an average day, I spend 0 minutes managing the homelab.

[–] blurry@feddit.org 6 points 1 month ago

44 containers and my average load over 15 min is still 0.41 on an old Intel NUC.

[–] corsicanguppy@lemmy.ca 6 points 1 month ago

How it started : 0

Max : 0

Now : 0

ISO 27002 and provenance validation goes brrrrr

[–] imetators@lemmy.dbzer0.com 5 points 1 month ago

9 containers, of which 1 is a container manager with 8 containers inside (multi-container stacks counted as 1), plus 9 that are installed off the NAS app store. 18 total.

[–] ToTheGraveMyLove@sh.itjust.works 5 points 1 month ago (11 children)

I still haven't figured out containers. 🙁

[–] Culf@feddit.dk 5 points 1 month ago (1 children)

Am not using docker yet. Currently I just have one Proxmox LXC, but am planning on selfhosting a lot more in the near future...

[–] irmadlad@lemmy.world 4 points 1 month ago* (last edited 1 month ago)

Awesome! I like Proxmox. Check out the Helper Scripts if you haven't already. Some people like them, some don't.

[–] Decronym@lemmy.decronym.xyz 5 points 1 month ago* (last edited 3 weeks ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

Fewer Letters More Letters
DNS Domain Name Service/System
LXC Linux Containers
NAS Network-Attached Storage
Plex Brand of media server package
SSH Secure Shell for remote terminal access
SSL Secure Sockets Layer, for transparent encryption
VPN Virtual Private Network
VPS Virtual Private Server (opposed to shared hosting)
k8s Kubernetes container management package

9 acronyms in this thread; the most compressed thread commented on today has 8 acronyms.

[Thread #42 for this comm, first seen 29th Jan 2026, 11:00] [FAQ] [Full list] [Contact] [Source code]

[–] otacon239@lemmy.world 4 points 1 month ago

11 running on my little N150 box. Barely ever breaks a sweat.

[–] Dave@lemmy.nz 4 points 1 month ago* (last edited 1 month ago)

Well, the containers are grouped into services. I would easily have 15 services running; some run a separate postgres or redis while others use an internal sqlite, so it's hard to say (I'm not where I can look rn).

If we're counting containers then between Nextcloud and Home Assistant I'm probably over 20 already lol.

[–] Jayjader@jlai.lu 4 points 1 month ago

I recently went from 0 to 1. Reinstalled my VPS under debian, and decided to run my forgejo instance with their rootless container. Mostly as a learning experience, but also to easily decouple the forgejo version from whichever version my distro packages.

[–] antsu@discuss.tchncs.de 4 points 1 month ago (1 children)

59 according to docker info.

[–] slazer2au@lemmy.world 6 points 1 month ago (1 children)

Hot damn. That is a far better way than counting the lines from docker ps
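`docker info` also takes Go-template formatting, so the counts can be pulled out directly with `docker info --format '{{.ContainersRunning}}'` (there are also `.Containers` and `.ContainersStopped` fields). Without the format flag, the same numbers can be scraped from the plain output; a sketch against simulated `docker info` lines (the counts are made up, no live Docker needed):

```shell
# Simulated summary lines from `docker info`.
sample_info="Containers: 59
 Running: 52
 Paused: 0
 Stopped: 7"

# Pull the running count the way you might post-process real output.
printf '%s\n' "$sample_info" | awk '/Running:/ {print $2}'   # prints 52
```

Against a real daemon, the `--format` flag is cleaner since it skips the text parsing entirely.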

[–] irmadlad@lemmy.world 5 points 1 month ago

Hot damn

That literally got a snort, because I feel the same way when I find a much easier/cleaner way of doing something.

[–] mogethin0@discuss.online 4 points 1 month ago

I have 43 running, and this was a great reminder to do some cleanup. I can probably reduce my count by 5-10.

[–] eskuero@lemmy.fromshado.ws 3 points 1 month ago

26, tho this includes multi-container services like immich or paperless, which have 4 each.

[–] Tywele@piefed.social 3 points 1 month ago

35 containers and everything is running stable and most of it is automatically updated. In case something breaks I have daily backups of everything.

[–] Shadow@lemmy.ca 3 points 1 month ago* (last edited 1 month ago)

At my house around 10-15. For lemmy.ca and our other sites, 35ish maybe. At work... hundreds.

[–] gergolippai@lemmy.world 3 points 1 month ago

I'm running 3 or 4 I think... I'm more into dedicated VMs for some reason, so my important things are running in VMs in a proxmox cluster.

[–] plantsmakemehappy@lemmy.zip 3 points 1 month ago

36, with plans for more

[–] MrQuallzin@lemmy.world 3 points 1 month ago

51 containers on my Unraid server, but only 39 running right now

[–] Strit@lemmy.linuxuserspace.show 3 points 1 month ago

I don't have access to my server right now, but it's around 20 containers on my little N100 box.

[–] kylian0087@lemmy.dbzer0.com 3 points 1 month ago

About 62 deployments with 115 "pods"
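For the Kubernetes folks counting pods, `kubectl get pods -A --no-headers | wc -l` avoids the header off-by-one that plain `docker ps | wc -l` has. A sketch with simulated output (the `fake_kubectl_pods` function stands in for a real cluster):

```shell
# Stand-in for `kubectl get pods -A --no-headers`: one line per pod.
fake_kubectl_pods() {
  printf 'media     jellyfin-0    1/1   Running   0   3d\n'
  printf 'media     sonarr-0      1/1   Running   0   3d\n'
  printf 'default   paperless-0   1/1   Running   0   1d\n'
}

fake_kubectl_pods | wc -l   # 3: one line per pod, no header to skew it
```

The first column is the namespace, so piping the same output through `awk '{print $1}' | sort | uniq -c` gives a per-namespace breakdown.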
