datahoarder
Who are we?
We are digital librarians. Among us are represented the various reasons to keep data -- legal requirements, competitive requirements, uncertainty of permanence of cloud services, distaste for transmitting your data externally (e.g. government or corporate espionage), cultural and familial archivists, internet collapse preppers, and people who do it themselves so they're sure it's done right. Everyone has their reasons for curating the data they have decided to keep (either forever or For A Damn Long Time). Along the way we have sought out like-minded individuals to exchange strategies, war stories, and cautionary tales of failures.
We are one. We are legion. And we're trying really hard not to forget.
-- 5-4-3-2-1-bang from this thread
DupeGuru
If you're using Synology: it's already built into the Storage Analyzer feature
Otherwise: no clue tbh
I feel like most NAS OSes have this feature built in.
Do you want something that runs on your NAS or from another computer? What OS(es) are you using?
I personally use rdfind as it has an option to replace duplicates with hardlinks instead of deleting them outright (if on the same filesystem). This is useful if you do still need a file to exist at multiple paths (quick example after this comment).
I then use Czkawka for everything else, especially for similar, non-duplicate files.
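For the rdfind part, a rough sketch of how a run looks (the path is just a placeholder; -dryrun true previews without touching anything):

    # dry run first: report what would be hardlinked, change nothing
    rdfind -dryrun true -makehardlinks true /mnt/archive

    # then for real: duplicates are replaced with hardlinks (same filesystem only)
    rdfind -makehardlinks true /mnt/archive

It also writes a results.txt report in the working directory, which is handy for double-checking what it matched before the real run.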
Thank you! I'll check it out.
It's very nice. I use -Sr1 so I can pull the output into a spreadsheet, look at the files, and decide which ones I want to keep.
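If those are the fdupes flags (-S show sizes, -r recurse, -1 one duplicate set per line) -- I'm assuming so, correct me if not -- the spreadsheet step looks roughly like this, with the path and filename as placeholders:

    # one duplicate set per line, with sizes, recursing into the archive
    fdupes -Sr1 /mnt/archive > dupes.txt

Then dupes.txt can be pulled into a spreadsheet to review each set and pick the keepers.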