A very unscientific test:
Installed a new 16TB drive into my server, made a whole-drive btrfs file system, and used the ZSTD compression and space_cache=v2 mount options for the first time.
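For anyone wanting to try the same thing, the setup amounts to something like this (device name and mount point are just examples, not what I actually used):

    # make a whole-drive btrfs file system on the new disk
    mkfs.btrfs /dev/sdX

    # mount it with ZSTD compression and the v2 space cache
    mount -o compress=zstd,space_cache=v2 /dev/sdX /mnt/new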
ZSTD is supposed to give higher compression and better speeds, but last I checked you can't boot from a ZSTD-compressed drive (found out the hard way), so I haven't been using it.
https://btrfs.wiki.kernel.org/index.php/Compression
The new version of space_cache (v2) is supposed to fix the performance problems the v1 cache has on large file systems:

"On very large filesystems (many terabytes) and certain workloads, the performance of the v1 space cache may degrade drastically. The v2 implementation, which adds a new B-tree called the free space tree, addresses this issue."
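Switching an existing file system over to v2 is just a one-time mount option; the free space tree gets built on that mount (device and mount point below are placeholders):

    # (with the file system unmounted) optionally clear the old v1 cache
    btrfs check --clear-space-cache v1 /dev/sdY

    # the first mount with space_cache=v2 builds the free space tree
    mount -o space_cache=v2 /dev/sdY /mnt/somewhere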
Then I copied 22 subvolumes of data totaling 6.9TB from a 10TB drive to the new drive. The 10TB drive is also a whole-drive btrfs file system, but it uses LZO compression and space_cache v1.
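For anyone curious, one way to move subvolumes between drives is btrfs send/receive; the exact method isn't the point here, and the paths and snapshot name below are just examples:

    # snapshot a subvolume read-only, then send it to the new drive
    btrfs subvolume snapshot -r /mnt/old/data /mnt/old/data-ro
    btrfs send /mnt/old/data-ro | btrfs receive /mnt/new/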
The final result: the new drive reports 6.8TB used vs. 6.9TB on the old drive, a savings of 100GB.
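If you want to see how much the compression is actually saving, btrfs filesystem usage plus the separate compsize tool will show it (mount point is just an example):

    # overall allocation and usage as btrfs sees it
    btrfs filesystem usage /mnt/new

    # actual compressed vs. uncompressed sizes (requires the compsize tool)
    compsize /mnt/new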
Like I said, not a scientific result, but interesting nonetheless. I could change the compression on the 10TB drive and re-compress the entire thing to see if the results are consistent, but that would take a lot of time.
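For reference, re-compressing in place would look something like this (paths are examples). One caveat: defragment unshares reflinked and snapshotted extents, so if any of those subvolumes are snapshots of each other it could actually cost space.

    # mount (or remount) the old drive with the new compression setting
    mount -o remount,compress=zstd /mnt/old

    # rewrite existing data with ZSTD compression, recursively
    btrfs filesystem defragment -r -v -czstd /mnt/old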