this post was submitted on 16 Jan 2025
43 points (95.7% liked)

Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

  7. No low-effort posts. This is subjective and will largely be determined by the community member reports.


I’m going to make a backup of a 2TB SSD today. I will use Clonezilla, mainly because that’s all I know. But do you recommend any other approaches, for any reason?

I want to keep the process simple and easy, and I will likely take a backup once a month or so. It doesn’t have to be ready all the time. If you need more clarification, ask away.

all 32 comments
[–] mbirth@lemmy.ml 17 points 1 year ago (1 children)

Does the data change a lot? Does it need to be a block-based backup (e.g. bootable)? Otherwise, you could go with rsync or restic or borg to only refresh your backup copy with the changed files. This should be far quicker than taking a complete backup of the whole SSD.

[–] tiz@lemmy.ml 3 points 1 year ago

Thank you for the insight. The changes are incremental, I suppose, so you are correct that it’s more efficient. But I kind of want to back up the whole disk, since that way I can keep a bootable drive with it, right?

[–] ikidd@lemmy.world 17 points 1 year ago* (last edited 1 year ago) (3 children)

dd if=/dev/sda conv=sync,noerror bs=128K status=progress | gzip -c > file.gz

You can add an additional pipe in there to ssh it to another machine, if you don't have room on the original.
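As a safe way to try the pipeline above, here is a sketch that runs it against a throwaway file instead of a real device (the filenames are examples; for a real disk you would substitute something like `/dev/sda` and run as root). The ssh variant is shown as a comment, with an example hostname:

```shell
# Stand-in "disk": a small random file instead of /dev/sda.
dd if=/dev/urandom of=disk.img bs=1M count=4 2>/dev/null

# The same dd | gzip pipeline as above, writing a compressed image.
dd if=disk.img conv=sync,noerror bs=128K status=none | gzip -c > disk.img.gz

# Verify the image round-trips intact.
gunzip -c disk.img.gz | cmp - disk.img && echo "image OK"

# Sending to another machine instead of a local file (hostname is an example):
# dd if=/dev/sda conv=sync,noerror bs=128K | gzip -c | ssh user@backuphost 'cat > sda.img.gz'
```

Because `conv=sync` pads short reads to the block size, verifying against the raw source only matches when the source length is a multiple of `bs`, as it is here.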

[–] sntx@lemm.ee 5 points 1 year ago

The added info from pv is also nice ^^
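If `pv` is installed, it slots into the middle of the pipe to show throughput and an ETA. A sketch, assuming a 2TB disk at `/dev/sda` (both are examples):

```
dd if=/dev/sda conv=sync,noerror bs=128K | pv -s 2T | gzip -c > file.gz
```

The `-s 2T` is just a hint so pv can estimate percentage and remaining time.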

[–] mlaga97@lemmy.mlaga97.space 1 points 1 year ago (1 children)

If zstd is available, it is a lot more efficient and performant than gzip.
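Swapping gzip for zstd only changes the compressor stage of the pipeline. A sketch on a throwaway file rather than a real device, assuming zstd is installed (`-T0` uses all CPU cores):

```shell
# Throwaway stand-in for a disk.
dd if=/dev/urandom of=demo.img bs=1M count=4 2>/dev/null

# Same pipeline as before, but compressing with zstd on all cores.
dd if=demo.img conv=sync,noerror bs=128K status=none | zstd -T0 -c > demo.img.zst

# Verify the archive decompresses back to the original.
zstd -d -c demo.img.zst | cmp - demo.img && echo "zstd image OK"
```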

[–] ikidd@lemmy.world 1 points 1 year ago

True. I've done that command for so long that I've kinda gotten gzip hardwired into my fingers.

[–] HubertManne@moist.catsweat.com 1 points 1 year ago (1 children)

I did something like this, but with an OS command that wrote the disk to an image and piped it through ssh, which then piped it back onto a waiting drive. It was great, as you could pull the disk and boot right off it. Do you know if that can be done with dd?

[–] ikidd@lemmy.world 4 points 1 year ago

I'd probably dd it straight onto the drive, but I'm sure you could get it to go to New Orleans and play the Macarena before it came back if you used enough pipes.

[–] taiidan@slrpnk.net 12 points 1 year ago

btrfs or zfs send/receive. Harder to do if already established, but by far the most elegant, especially with atomic snapshots to allow versioning without duplicate data.
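A rough sketch of the ZFS flavour of this, assuming a source pool `tank` with a dataset `data` and a backup pool `backup` (all names and dates are examples; run as root):

```
zfs snapshot tank/data@monthly-2025-01                      # atomic point-in-time snapshot
zfs send tank/data@monthly-2025-01 | zfs receive backup/data    # first, full copy
# Later months send only the delta between two snapshots:
zfs send -i tank/data@monthly-2025-01 tank/data@monthly-2025-02 | zfs receive backup/data
```

The incremental `-i` send is what makes monthly runs cheap: only blocks changed since the previous snapshot cross the pipe.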

[–] morethanevil@lemmy.fedifriends.social 11 points 1 year ago (2 children)

You can use Rescuezilla, which is basically a GUI for Clonezilla, but easier to use 😋

[–] tiz@lemmy.ml 2 points 1 year ago

Thanks guys. I went with Rescuezilla in the end. So far so good.

[–] tiz@lemmy.ml 2 points 1 year ago

This is something I should consider!

[–] drkt@scribe.disroot.org 5 points 1 year ago* (last edited 1 year ago) (1 children)

My method requires that the drives be plugged in at all times, but it's completely automatic.

I use rsync from a central 'backups' container that pulls folders from other containers and machines. These are organized in

/BACKUPS/(machine/container)_hostname/...

The /BACKUPS/ folder is then pushed to an offsite container I have sitting at a friends place across town.

For example, I backup my home folder on my desktop which looks like this on the backup container

/BACKUPS/Machine_Apollo/home/dork/

This setup is not impervious to bit flips, as far as I'm aware (it has never happened). If a bit flip happens upstream, it will be pushed to the backups and become irrecoverable.

[–] tiz@lemmy.ml 1 points 1 year ago (1 children)

I see. This is more of a file system backup right? Do you recommend it over full disk backup for any reason? I can think of saving space.

[–] drkt@scribe.disroot.org 4 points 1 year ago* (last edited 1 year ago) (2 children)

I recommend it over a full disk backup because I can automate it. I can't automate full disk backups as I can't run dd reliably from a system that is itself already running.

It's mostly just to ensure that the config files and other stuff I've spent years building are available in the case of a total collapse, so I don't have to rebuild from scratch. In the case of containers, those have snapshots. Any time I'm working on one, I drop a snapshot first so I can revert if it breaks. That's essentially a full disk backup, but it's exclusive to containers.

edit: if your goal is to minimize downtime in case of disk failure, you could just use RAID

[–] MangoPenguin@lemmy.blahaj.zone 2 points 1 year ago (1 children)

I can’t automate full disk backups as I can’t run dd reliably from a system that is itself already running.

Can't you do a snapshot like VSS does on Windows and back that up on a running system? I assume that would be possible with a filesystem that supports snapshots.
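On Linux the closest analogue to VSS is an LVM (or filesystem) snapshot: freeze a point-in-time view, image that, then drop it. A sketch assuming a volume group `vg0` with a logical volume `root` (names, sizes, and paths are examples; run as root):

```
lvcreate --snapshot --size 5G --name root_snap /dev/vg0/root    # consistent frozen view
dd if=/dev/vg0/root_snap conv=sync,noerror bs=128K | gzip -c > root.img.gz
lvremove -y /dev/vg0/root_snap                                  # discard the snapshot
```

The snapshot only needs enough space (`--size`) to hold blocks that change while the dd runs, not a full copy of the volume.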

[–] drkt@scribe.disroot.org 1 points 1 year ago

I'm sure there's ways to do it, but I can't do it and it's not something I'm keen to learn given that I've already kind of solved the problem :p

[–] tiz@lemmy.ml 1 points 1 year ago (1 children)

I’m in a similar boat, except I might have less time available in the future because I’m getting a job.

Hopefully I can automate full disk backups, because if something like Immich breaks, I can just load up from the backup drive. My family also uses the services, so… I think it’s great you brought up RAID, but I believe when Immich or any software messes things up, it’s not recoverable, right?

[–] drkt@scribe.disroot.org 1 points 1 year ago

I think it’s great you brought up RAID, but I believe when Immich or any software messes things up, it’s not recoverable, right?

RAID is not a backup, no. It's redundancy: it'll keep your service up and running in the case of a disk failure and allow you to swap in a new disk with no data loss. I don't know how Immich works, but I would put it in a container and drop a snapshot any time I updated it, so if it breaks I can just revert.

[–] ShortN0te@lemmy.ml 4 points 1 year ago (1 children)
[–] tiz@lemmy.ml 0 points 1 year ago

Based. Yes. It is an option.

[–] g_damian@lemmy.world 3 points 1 year ago

https://www.fsarchiver.org/quickstart/ It's faster and more efficient than just dd :)
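Per the fsarchiver quickstart, the basic save/restore pair looks roughly like this (device and archive paths are examples; unlike dd, it works at the filesystem level, so only used blocks are saved):

```
fsarchiver savefs -z7 /backup/sda1.fsa /dev/sda1         # save the filesystem to an archive
fsarchiver restfs /backup/sda1.fsa id=0,dest=/dev/sda1   # restore it later
```

`-z7` picks a compression level, and `id=0` selects the first filesystem stored in the archive.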

[–] knobbysideup@sh.itjust.works 2 points 1 year ago

Borg for files. Proxmox snapshots for the VMs.

Veeam Endpoint's free version is nice because it doesn't require a reboot.

[–] kylian0087@lemmy.dbzer0.com 2 points 1 year ago

I personally use Borg to do automatic backups.
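A minimal Borg setup for the kind of monthly run described in the post might look like this (the repo path and source directories are examples; each `borg create` is an incremental, deduplicated archive, so repeat runs are cheap):

```
borg init --encryption=repokey /backup/borg-repo                  # one-time repo setup
borg create --stats /backup/borg-repo::'{hostname}-{now}' /home /etc
borg prune --keep-monthly 12 /backup/borg-repo                    # keep a year of monthlies
```

The `{hostname}-{now}` placeholders are expanded by Borg itself, giving each archive a unique, dated name.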

[–] Eideen@lemmy.world 1 points 1 year ago

Do you want to use a desktop app or a systemd timer?

[–] scarilog@lemmy.world 1 points 1 year ago

Oh, I've been using Acronis for this purpose for a while. Nice to know FOSS tools exist that accomplish the same thing; I'll probably use this next time.

[–] zorflieg@lemmy.world 1 points 1 year ago

HD Clone X, or MSP360 (CloudBerry) Standalone Backup.

Both cost money but not a lot and are very reliable.