So, I’m self-hosting Immich. The issue is that we tend to take a lot of pictures of the same scene/thing to later pick the best one, so we can end up with 5~10 photos which are basically duplicates, but not quite.
Some duplicate finding programs put those images at 95% or more similarity.
I’m wondering if there’s any way, probably at the filesystem level, for these near-duplicate images to be compressed together.
Maybe deduplication?
Have any of you guys handled a similar situation?
Well, how would you know which ones you’d be okay with a program deleting? You’re the one taking the pictures.
Deduplication only applies to files (or blocks) that have exactly the same data. Filesystems don’t have a concept of images versus other files. They just store data objects.
You could store one “average” image, and deltas on it. Like Git stores your previous version + a bunch of branches on top.
Note that Git doesn’t store deltas (at least not for ordinary loose objects). It will reuse unchanged files, but it stores a (compressed) version of every file that has ever existed in the history, under its SHA-1 hash.
Indeed! Interesting! I just ran an experiment with a non-compressible file (strings < /dev/urandom | head -n something) and it shows you’re right. The 2nd commit, where I added a tiny line to that file, increased the repo size by almost the size of the whole file.
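In case anyone wants to reproduce it, here’s a minimal sketch of that experiment (assumes git and GNU coreutils; the size and file names are arbitrary):

    mkdir git-size-test && cd git-size-test
    git init -q

    # ~10 MB of incompressible data
    head -c 10M /dev/urandom > random.bin
    git add random.bin && git commit -qm "add random file"
    du -sh .git    # roughly the size of random.bin

    # append one tiny line and commit again
    echo "one more line" >> random.bin
    git add random.bin && git commit -qm "append a line"
    du -sh .git    # nearly doubles: the new blob is stored in full, not as a delta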
Thanks for this bit.
I’m not saying to delete, I’m saying for the file system to save space by something similar to deduping.
If I understand correctly, deduping works by using the same data blocks for similar files, so there’s no actual data loss.
I believe that’s the kind of saving some compression algorithms can find if you compress the similar photos into a single archive. It sounds like that’s what you want (e.g. archive each day), have Immich cache the thumbnails, and only decompress when you view the full resolution. Maybe test some algorithms like zstd on a group of similar photos compressed together vs. individually?
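Something like this, assuming a directory full of near-duplicate JPEGs (directory and file names here are just examples):

    cd burst-2024-06-01/

    # compress each photo individually and sum the sizes
    zstd -19 -k *.jpg
    du -ch *.jpg.zst | tail -n 1

    # compress the whole group as one solid archive with a large match window
    tar -cf - *.jpg | zstd -19 --long=27 > group.tar.zst
    du -h group.tar.zst

Don’t expect miracles, though: two near-identical photos rarely look similar at the byte level once they’ve been through the camera’s JPEG encoder, so the solid archive is often only marginally smaller.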
FYI, filesystem deduplication works on content hashes (per file or per block, depending on the filesystem), so only exact 1:1 binary duplicates get merged.
Also, modern image and video codecs are already heavily optimized, which is why compressing a JPG or MP4 again gives negligible savings and sometimes even increases the file size.
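Both points are easy to check yourself (filenames are just examples):

    # two visually near-identical shots still have completely different hashes,
    # so content-hash dedup treats them as unrelated
    sha256sum IMG_0001.jpg IMG_0002.jpg

    # recompressing an already-encoded JPEG buys almost nothing
    zstd -19 -k IMG_0001.jpg
    ls -l IMG_0001.jpg IMG_0001.jpg.zst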
I don’t think there’s anything commercially available that can do it.
However, as an experiment, you could:
You could probably script this kind of operation eventually, if you have software that can automatically identify and group similar images; see the rough sketch below.
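For example, a rough bash sketch, assuming you already have a tool that writes one group of similar photos per line (space-separated paths) into a file called groups.txt; the grouping tool, the file name, and the archive layout are all assumptions:

    #!/usr/bin/env bash
    set -euo pipefail
    mkdir -p archives

    # groups.txt: one group of similar photos per line, paths separated by spaces
    while read -r -a group; do
        [ "${#group[@]}" -lt 2 ] && continue    # skip singletons
        # name the archive after the first photo in the group
        name="$(basename "${group[0]%.*}")"
        tar -cf - "${group[@]}" | zstd -19 --long=27 > "archives/${name}.tar.zst"
        # verify the archive before even thinking about removing the originals
        zstd -t "archives/${name}.tar.zst"
    done < groups.txt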