DwarFS: A fast high compression read-only file system
I started working on DwarFS in 2013 and my main use case and major motivation was that I had several hundred different versions of Perl that were taking up something around 30 gigabytes of disk space, and I was unwilling to spend more than 10% of my hard drive keeping them around for when I happened to need them.
It fills me with joy that someone has been coding a fs for 7 years due to perl installs taking too much space. Necessity is the mother of all invention.
Hahaha, I haven't actually been coding on this for that long, it's more that I coded for a few weeks back in 2013 and only found the motivation to resurrect the whole thing a few weeks back.
Funny thing about it is that I've got a similar problem powering https://perl.bot/ (and the associated IRC bot). I don't have as many installs as you currently, but it's not far off, and I want to add more compile-time settings to them. I'd need to set up a full build server/system though, because I need to regularly update them with new modules.
How opposed would you be to this being reworked to being able to be mainline kernel support too?
> How opposed would you be to this being reworked to being able to be mainline kernel support too?
I don't see any way of getting this anywhere near the kernel without a full rewrite. It's C++ and it depends on libraries that aren't even shipped by a lot of distributions (folly & fbthrift). And, tbh, I don't see much benefit given that FUSE these days doesn't seem to be significantly worse in terms of performance.
> I'd need to setup a full build server/system though because I need to regularly update them with new modules.
Overlay the mounted read-only fs with a read-write fs. Then you can install modules as you like and if you want to start fresh, just throw away the read-write fs. That's what I've done in the past.
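A minimal sketch of that setup, with hypothetical paths (the mount commands need root and an actual mounted image, so they're shown commented out):

```shell
# Hypothetical paths; the read-only layer would be the mounted DwarFS
# image, the upper layer holds any modules you install on top.
mkdir -p /tmp/perl-ro /tmp/perl-rw/upper /tmp/perl-rw/work /tmp/perl-merged

# Mount the image via FUSE:
# dwarfs perl-install.dwarfs /tmp/perl-ro

# Overlay a writable layer on top (needs root):
# mount -t overlay overlay \
#   -o lowerdir=/tmp/perl-ro,upperdir=/tmp/perl-rw/upper,workdir=/tmp/perl-rw/work \
#   /tmp/perl-merged

# To start fresh, unmount and discard the writable layer:
# umount /tmp/perl-merged && rm -rf /tmp/perl-rw
```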
Is it possible to rebuild a DwarFS fs to incorporate changes from an overlay fs without decompressing, then recompressing?
It seems feasible that a second DwarFS fs could be built from an overlay/DwarFS combination, and the original overlay/DwarFS fs then deleted. That would require 2N storage while the new DwarFS is being built. Is it possible to patch an existing DwarFS?
By overlay, are you referring to overlayfs [0]?
Yes, and I've documented this now:
https://github.com/mhx/dwarfs/blob/main/doc/dwarfs.md#settin...
> Overlay the mounted read-only fs with a read-write fs. Then you can install modules as you like and if you want to start fresh, just throw away the read-write fs. That's what I've done in the past.
It would be nice to be able to build a new read-only filesystem in incremental mode: given a compressed filesystem and some new uncompressed data, incorporate the uncompressed data without completely re-doing all the work.
> "taking up something around 30 gigabytes of disk space, and I was unwilling to spend more than 10% of my hard drive"
I imagine these days you have more than 300GB hard disk space, making this all moot?
256GB SSDs are still everywhere.
Nowadays you can have the same problem with Python and Javascript too!
Same with Ruby (Gems).
I have much the same problem as mhx: several hundred huge Perl versions which are almost the same, taking up enormous amounts of disk space. E.g. I had to move most of them from my SSD to a spinning disk. I really want to move them back.
Thanks to mhx I can now move them back to my fast disk. This is also perfect for testers.
If they're almost the same, could you use one git repo with different branches for each version? Or archive them with restic into a folder and restore which one you need each time. Either method should deduplicate data if they're mostly the same file structure and content.
Edit: You could even have several read-only shadow copies of the repo for parallel working directory usage, if you hard link the .git directory except for the HEAD ref in each.
Nice, I wonder how this compares with MongoDB's compression of files and objects. Seems like a great foundation for archiving data.
It looks like the benefit is some kind of block or file deduplication.
@OP: Can you please explain why you keep 50 gigs of perl around? :-)
I use compressed read-only file systems all the time to save space on my travel laptop. I have one squashfs for firefox, one for the TeX base install, one for LLVM, one for qemu, one for my cross compiler collection. I suspect the gains over squashfs will be far less pronounced than for the pathological "400 Perl versions" case.
> @OP: Can you please explain why you keep 50 gigs of perl around? :-)
Sure. I've been the maintainer of a perl portability module (Devel::PPPort) for a long time and every release was tested against basically every possible version (and several build flag permutations) of perl that was potentially out in the wild.
The single case in the known universe.
Very impressive, to say the least.
(Not meant sarcastically :-)
Speculating here, but perl has a very rich test library and harnesses for running tests across multiple perls and platforms.
If you upload a module to CPAN, you automatically get it tested against a huge matrix of configurations:
http://matrix.cpantesters.org/?dist=Log-Any-Adapter-FileHand...
> If you upload a module to CPAN, you automatically get it tested against a huge matrix of configurations.
Very true, and it's definitely a great service!
However, the set of versions/configurations is still limited, and it can take an awful lot of time for the matrix to fill up. I fixed a bug specific to perl-5.10.0 about a week ago and so far the module hasn't been picked up by that version again.
So while this is definitely good as a service for the general public, it doesn't get you very far if you're trying to build a thing that's supposed to ensure compatibility for other Perl modules across 20 years of Perl history. :)
AppFS provides global file deduplication and also solves the distribution problem; you don't need to have all the resources locally.
Whew! It was easy to find out how you actually initialize this thing, if it's read-only:
Perhaps not strictly on-topic, but is there any equivalent FS/program in Windows that will allow users to have read-only access to files that are deduplicated in some way?
My use case is the MAME console archives, which are now full of copies of games from different localisations with 99% identical content. 7Z will compress them together and deduplicate, but breaks once the archive exceeds a few gigs.
These archives are already compressed (CHD format, which is 7Z + FLAC for ISOs), but it's deduplication that needs to happen on top of these already compressed files that I'm struggling with.
Sorry for the off-topic ask!
You could use a [WIM image][1]. They can be mounted rw or ro and have file-level deduplication. Microsoft's official tooling is necessary to mount them on Windows as [the only open source implementation I am aware of][2] uses FUSE for mounting.
[1]: https://en.wikipedia.org/wiki/Windows_Imaging_Format [2]: https://wimlib.net/
If you are using windows server, data deduplication[1] is available on non-system volumes to do exactly this.
If you're using a Windows client, there is a way of enabling this, but it's not exactly supported, for a variety of reasons.
[1]https://docs.microsoft.com/en-us/windows-server/storage/data...
It's probably a hack, but you can try "backing up" your files with bup, restic or borg, and mount the resulting snapshot with FUSE
You probably need to de-duplicate before compression, at least for many compression schemes.
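A quick shell illustration of why the order matters: gzip's 32 KiB window can't match duplicates that are megabytes apart in the stream, while xz's much larger dictionary can (file names are arbitrary):

```shell
# 1 MB of incompressible data, duplicated back to back: 2 MB total,
# half of it redundant at a 1 MB offset.
head -c 1000000 /dev/urandom > /tmp/blockA
cat /tmp/blockA /tmp/blockA > /tmp/dup.bin

# gzip's window is far too small to see the repeat; expect ~2 MB out.
gzip -kf /tmp/dup.bin

# xz -9 uses a 64 MiB dictionary, so it matches the repeat; expect ~1 MB.
xz -kf -9 /tmp/dup.bin

ls -l /tmp/dup.bin.gz /tmp/dup.bin.xz
```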
s/ask/request/g
Neat! I'd like to see benchmarks for more typical squashfs payloads-- embedded root filesystems totalling under 100MB. Small docker images like alpine would be a decent proxy. The given corpus of thousands of perl versions is more appropriate for comparison against git.
Author here :)
I'll add more benchmarks, this is still WIP and so far I've mainly tried to satisfy my own needs. My intention with DwarFS wasn't to write "a better SquashFS", but to make it better in certain scenarios (huge, highly redundant data) than SquashFS. SquashFS still has the big advantage of being part of the kernel, which makes it a lot more attractive for things like root file systems.
Are there git filesystems? If so, they could be a good comparison point too - gits PACK file format is pretty magic...
Apparently yes: https://github.com/presslabs/gitfs
How small does the perl repo become when compressed with
lrzip -UL9 filetarball.tar
It would be a good data point for everyone.
I wish there was a semi-compressed transparent filesystem layer which slowly compresses the least recently used files in the background, and un-compresses files upon use. That way you could store much more mostly unused content than space on the disk, without sacrificing accessibility.
I don't know about you guys, but most of the stuff that takes up space on my drives are:
1) Videos from my DSLR
2) RAW images from my DSLR
3) Various movies / TV series I downloaded
4) Game files (most of which are textures and 3D models)
None of that stuff is really compressible.
The RAWs aren't compressible? Are they LZ encoded on the camera?
RAW imagery I have worked with is about 25-50% losslessly compressible on average. Most of your gains in image compression are from quantizing the gamut in clever, (usually) imperceptible ways.
Raw imagery contains a lot of entropy that usually doesn't affect the appearance perceptually, but still has sig figs that frustrate compression.
My Fuji has lossless RAW compression and as far as I know quite a few others too.
I just tried compressing one with 7-zip ultra level compression. Saved maybe 5%. Wouldn't get even that with realtime compression.
You could probably build something easily in nbdkit to do this. (Note this is at the block layer). An advantage of nbdkit is you could write the whole thing in the high-level language of your choice, even a scripting language such as Python, which might make it easier to rapidly explore designs.
Having said that I did try to implement a deduplication layer for nbdkit, but what I found was that it wasn't very effective. It turns out that duplicate data in typical VM filesystems isn't common, and the other parts of the filesystem (block free lists etc) were not sufficiently similar to deduplicate given my somewhat naive approach.
I believe the term of art that applies here is "Hierarchical Storage Management". Along with automatically moving data between high-cost and low-cost storage media, the low-cost storage media for your filesystem of choice for the kind of compressing you described can simply be fast disk on a compressing filesystem.
I believe NT file compression works like this, and before that MSDOS "DriveSpace" ...
NTFS requires that files be manually converted to the compressed format. They're uncompressed in parts as requested, but this is only kept in RAM. I'm not aware of any built-in background task that converts files to/from the compressed format.
You can set the "Compressed" flag of a folder and from then on everything in that folder will be compressed/decompressed transparently. I have most of my disk compressed that way and never have seen problems.
That's cool, I hadn't thought of that. But does it identify "hot"/"cold" files that might benefit from being converted automatically to/from the compressed format? That would be a very nice feature to have.
Checkout CVMFS
It is not what you describe but it can help.
Why not use BTRFS with file deduplication and transparent compression (zstd specifically)?
This is a read-only file system, so it's able to exploit certain properties of that: locating similar files next to each other, for example.
I don’t see how read only helps at all.
Btrfs can dedupe at the block level.
Consider how much of the work in btrfs is done just to handle the case of modifying existing files, or reducing the file system size. It's basically the reason it uses B-trees. It's in the name!
For example, when deduplicating at the block level it needs to know (right?) how many times a block is being used, so the block can be collected when it runs out of uses.
ISO9660 can also express dedupped (hardlinked) files with the Rock Ridge extensions. I don't know, but I'm wondering if it could even do block-level dedupping if the generating program abused the format a bit.
A compression benchmark of both filesystems would be of interest in this regard (lzo, zstd and zlib), both read speed and compression wise
Sure, but it sounds like block deduping is only one of several optimizations that DwarFS is able to take advantage of because it’s a read-only FS.
Is Btrfs stable yet?
For most use-cases, yes. But not if you plan on using raid5+ [0]
[0]: https://btrfs.wiki.kernel.org/index.php/Gotchas#Parity_RAID
My own experience indicates it's brittle to power failures and will corrupt in annoying ways in the event of a power failure (or hard reboot) as of ~1 year ago.
Year isn't as useful as kernel version.
I was running whatever Arch's Linux kernel was at the time. 5.4 I believe? Arch is pretty much bleeding edge, so year is relevant.
Yup, in Arch even the month, week and day are relevant.
Btrfs is default in Fedora 33.
Fedora also has Wayland as default. If Debian had Btrfs as default, that would be an argument.
It's also been the default for the root filesystem in openSUSE since 2012. It also integrates with the package manager - zypper creates pre- and post- snapshots when installing updates, and the snapper utility makes it easy to roll back if your system is busted after installing an update. It's saved my bacon on multiple occasions.
(btrfs also used to be the default for /home, though that changed at some point. When I made a new install last year the installer suggested xfs by default.)
I'm only ever switching to a new filesystem if its correctness has been formally verified.
For which filesystems is that case?
None. But the tools are there, so it should be possible.
mksquashfs supports gzip, xz, lzo, lz4 and zstd too; you can also compile it to have any of those as the default instead of gzip.
Does the performance benchmark show DwarFS versus single-threaded gzip compressed SquashFS?
> $ time mksquashfs install perl-install.squashfs -comp zstd -Xcompression-level 22
> Parallel mksquashfs: Using 12 processors
Is this viable as a backup/archive format? Would it make sense to e.g. have an incremental backup as a DwarFS file, referring to the base backup in another DwarFS file?
I guess something like borgbackup would be better suited for this.
You could theoretically try to build this with dwarfs, by using overlayfs and then compressing the upper layer again with dwarfs, but that sounds pretty fragile and cumbersome.
This could be awesome for compressing Docker image layers. After all, they can be huge (hundreds of MB) and, if the Dockerfile is well organized, each step should contain a fairly homogeneous set of files (like apt-get artifacts, for example).
It would be amazing to see this work on OpenWRT; I think it would fit perfectly, using fewer resources than squashfs. The other location would be on a Raspberry Pi, for scenarios where power can be cut at any time.
Author here :) I'm not sure low-spec hardware is necessarily the best use case for DwarFS. It doesn't necessarily use less resources than SquashFS, although it can create file systems that are smaller with much less CPU resources. However, it'll still need a reasonable amount of memory at run time to cache active, decompressed blocks.
Are you doing your own caching in userspace, or are you working with the kernel's caching? The latter would substantially reduce memory requirements.
Files that you've accessed will be kept in the kernel's cache. The cache I was talking about is a cache for decompressed blocks. Single files can stretch across multiple blocks, so you need to be able to keep more than one in memory anyway. However, decompressed files are kept in the cache in the hope that further (or even concurrent) reads will access the same blocks. Taking the example from the README where over a 1000 perl binaries are being executed concurrently, that cache typically has hit rates of 99+%:
For example, reducing the cache size from 512M (default) to 32M increases the time it takes to run 1139 binaries from 2.5 seconds to almost 40 seconds.

    $ dwarfs perl-install.dwarfs mnt -f
    23:02:42.673390 dwarfs (0.2.1)
    23:02:42.676663 file system initialized [1.94ms]
    23:02:49.210158 blocks created: 226
    23:02:49.210189 blocks evicted: 194
    23:02:49.210216 request sets merged: 123
    23:02:49.210241 total requests: 50056
    23:02:49.210270 active hits (fast): 1515
    23:02:49.210293 active hits (slow): 833
    23:02:49.210318 cache hits (fast): 47482
    23:02:49.210343 cache hits (slow): 0
    23:02:49.210392 fast hit rate: 97.8844%
    23:02:49.210417 slow hit rate: 1.66414%
    23:02:49.210441 miss rate: 0.451494%

Ah, I see. So this specifically saves the decompression time for data you've already decompressed, if another file references the same data?
Precisely.
If you're talking about the kernel's filesystem cache, wouldn't that cache the compressed files? As far as I understand it userspace caching is necessary to cache uncompressed blocks, since the decompression is (presumably) done in userspace. I definitely could be wrong though, let me know if you're talking about a different kind of kernel caching.
Actually, I guess if DwarFS is a kernel module and decompresses blocks before they hit the kernel's filesystem cache, then the kernel cache would do it? I'm not sure how to tell from the README if DwarFS is a kernel module or not. So I guess I'm just confused and looking to learn -- what kind of kernel caching did you have in mind?
It's using FUSE, and FUSE filesystems still participate in some parts of kernel caching.
Thanks, good to know!
I was thinking the same thing! I'm not sure what it would take to make /rom a FUSE based filesystem, to make it bootable. The current boot process involves the bare kernel mounting Squashfs to find its init=/etc/preinit & booting from there[1].
Would love some theorycrafting on possible ways to work with DwarFS being a FUSE filesystem.
Does anyone remember back in the 90s when we'd install DoubleSpace to get on the fly compression? And then they built it into MSDOS 6 and that was a major game changer?
It was DoubleDrive until Microsoft licensed it and relabeled it as DoubleSpace. Stacker was the far more popular drive compression solution until MSDOS 6 was released.
Oh wow. This would be excellent for language dependencies - ruby gems, node_modules, etc. Integrating this with something like pnpm [1], which already keeps a global store of dependencies, would be excellent. [1] - https://pnpm.js.org
So I tried it out on my 17 GB of perl builds. (Just on my laptop, not on my big machine.)
mkdwarfs crashed with recursive links (1-level, just pointing to itself) and when I removed dirs which were part of the input path while mkdwarfs was running. Which is fair, I assume.
> mkdwarfs crashed with recursive links (1-level, just pointing to itself)
That's odd, it shouldn't crash with links at all, as it doesn't actively follow links. Can you please file a bug if you can reproduce this?
> and when I removed dirs while running mkdwarfs, which were part of the input path
I guess this is fair, but I'll try to take a look anyway. :-)
> On success, mkdwarfs needed 1 hr, and reduced 219 dirs to a size of 970 MB. Not just source files, but also the build and install object files.
My 500 MB image with the 1100+ perls is just installations, from which I've actually removed libperl.a as I've never needed it and it really bloats the image. I've got a separate image with debug information (everything built with -g in case I need to debug the binaries), so the binaries in the main image are essentially all stripped. If I need to debug, I'll just mount the debug image as well, which contains the source files and the stripped debug data.
> 1 hr is a lot, but just think how long squashfs would have needed.
It might be worth trying a lower compression level, especially if you find that mkdwarfs is CPU bound and not I/O bound.
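For instance, something along these lines (a sketch with hypothetical paths; assuming mkdwarfs takes a numeric compression level via `-l`, check `mkdwarfs --help` for the exact range and default on your version):

```shell
# Hypothetical invocation: lower the compression level to trade some
# image size for a much faster build on CPU-bound workloads.
# mkdwarfs -i ./perl-builds -o perl-builds.dwarfs -l 4
```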
No, I'm fine with the high default compression rate. I only do this once a decade, and one hour is fair for this. A Fedora upgrade needs 5-8 hrs.
I have a lot of -g info, because I use it mainly for debugging XS problems with old versions. The hashes change for each object, so deduplication is mostly only useful for source files. I really need high compression, which is the default.
On success, mkdwarfs needed 1 hr, and reduced 219 dirs to a size of 970 MB. Not just source files, but also the build and install object files.
1 hr is a lot, but just think how long squashfs would have needed. Totally impractical. Thanks mhx
I noticed that enabling compression on ZFS made a huge difference with the stored size of some of my largely text-file partitions. I never turned on deduplication because I don't want to bother with the memory overhead, but I bet that would help even further.
Most ZFS howtos now recommend against dedup because of the prolonged memory cost. Yes, you would get some block-level dedup benefit. But you enter the cost/benefit hell of balancing CPU and memory at runtime.
Can't you periodically run the dedup out of band (for example whenever you scrub)? https://btrfs.wiki.kernel.org/index.php/Deduplication
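For example, with the duperemove tool (assuming it's installed; the path is hypothetical, and it needs a btrfs or xfs filesystem underneath, so it's shown commented out):

```shell
# Offline, out-of-band dedup pass over a btrfs mount; could be run
# from the same periodic timer that triggers a scrub.
# duperemove -dr /mnt/data
```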
I was just googling this myself and I think this is a feature that btrfs has over zfs. There’s no way to do native offline deduplication as far as I could find.
I'm curious, why do you have so many perl installations around? I thought I'd got a fair number of python venvs kicking around for each of the repos I'm dealing with, but nowhere near that many.
My Python projects have pip requirements that easily dump 3-4 gigs into a venv folder. Do that once or twice a month when starting a new project, for a couple of years, and it gets messy...
I'd like to see a pip freeze of whatever you're doing to consistently need venvs of that size.
my docker builds are fun, too:

    affine==2.3.0
    attrs==20.3.0
    certifi==2020.11.8
    click==7.1.2
    click-plugins==1.1.1
    cligj==0.7.1
    cycler==0.10.0
    dataclasses==0.6
    decorator==4.4.2
    Fiona==1.8.17
    future==0.18.2
    geopandas==0.8.1
    imageio==2.9.0
    joblib==0.17.0
    kiwisolver==1.3.1
    llvmlite==0.34.0
    matplotlib==3.3.0
    munch==2.5.0
    networkx==2.5
    numba==0.51.2
    numpy==1.19.2
    pandas==1.1.3
    Pillow==7.2.0
    pkg-resources==0.0.0
    psycopg2-binary==2.8.6
    PyCRS==1.0.1
    pyparsing==2.4.7
    pyproj==3.0.0.post1
    python-dateutil==2.8.1
    pytz==2020.4
    PyWavelets==1.1.1
    rasterio==1.1.8
    scikit-image==0.17.2
    scikit-learn==0.23.2
    scipy==1.5.2
    Shapely==1.7.1
    six==1.15.0
    snuggs==1.4.7
    threadpoolctl==2.1.0
    tifffile==2020.11.18
    torch==1.7.0
    tqdm==4.48.2
    typing-extensions==3.7.4.3
Circa 2 years ago, I was working on a side project and got so annoyed with SquashFS tooling, that I decided to fix it instead. After getting stuck with the spaghetti code behind mksquashfs, I decided to start from scratch, having learnt enough about SquashFS to roughly understand the on-disk format.
Because squashfs-tools seemed pretty unmaintained in late 2018 (no activity on the official site & git tree for years, and only one mailing list post, "can you do a release?", which got a very annoyed response), I released my tooling as "squashfs-tools-ng" and it is currently packaged by a handful of distros, including Debian & Ubuntu.[1]
I also thoroughly documented the on-disk format, after reverse engineering it[2] and made a few benchmarks[3].
For my benchmarks I used an image I extracted from the Debian XFCE LiveDVD (~6.5GiB as tar archive, ~2GiB as XZ compressed SquashFS image). By playing around a bit, I also realized that the compressed meta data is "amazingly small", compared to the actual image file data and the resulting images are very close to the tar ball compressed with the same compressor settings.
I can accept a claim of being a little smaller than SquashFS, but the claimed difference makes me very suspicious. From the README, I'm not quite sure: Does the Raspbian image comparison compare XZ compression against SquashFS with Zstd?
I have cloned the git tree and installed dozens of libraries that this folly thingy needs, but I'm currently swamped in CMake errors (haven't touched CMake in 8+ years, so I'm a bit rusty there) and the build fails with some still missing headers. I hope to have more luck later today and produce a comparison on my end using my trusty Debian reference image which I will definitely add to my existing benchmarks.
Also, is there any documentation on how the on-disk format for DwarFS and its packing works which might explain the incredible size difference?
[1] https://github.com/AgentD/squashfs-tools-ng
[2] https://github.com/AgentD/squashfs-tools-ng/blob/master/doc/...
[3] https://github.com/AgentD/squashfs-tools-ng/tree/master/doc
This is really cool, I'll give squashfs-tools-ng a try!
> Does the Raspbian image comparison compare XZ compression against SquashFS with Zstd?
That's correct. It's not an exhaustive matrix of comparisons.
> Also, is there any documentation on how the on-disk format for DwarFS and it's packing works which might explain the incredible size difference?
The format as of 0.2.0 is actually quite simple. It's a list of compressed data blocks, followed by a metadata block (and a schema describing the metadata block). The metadata format is implemented by and documented in [1].
There are probably 3 things that contribute to compression level:
1) Block size. DwarFS can use arbitrary block sizes (artificially limited to powers of two), and uses a much larger block size (16M) by default. SquashFS doesn't seem to be able to go higher than 1M.
2) Ordering files by similarity.
3) Segment deduplication. If segments of files overlap with previously seen data, these segments are referenced instead of written again. The minimum size of these segments can be configured and defaults to 2k. For my primary use case, of the 47.6 GB of input data, 28.2 GB are saved by file-level deduplication, and another 12.4 GB by this segment-level deduplication. So before the "real" compression algorithms actually kick in, there are only 7 GB of data left. As these are ordered by similarity, and stored in rather big blocks, some of the 16M blocks can actually be compressed down to less than 100k.
[1] https://github.com/mhx/dwarfs/blob/main/thrift/metadata.thri...
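A toy shell illustration of the file-level half of that accounting (not DwarFS's actual algorithm): duplicate files collapse to a single stored copy, which you can see by counting distinct content hashes (paths are arbitrary):

```shell
# Three files, but only two distinct contents: a and b are identical,
# so file-level dedup would store their bytes once.
mkdir -p /tmp/dedup-demo
printf 'shared contents\n' > /tmp/dedup-demo/a
cp /tmp/dedup-demo/a /tmp/dedup-demo/b     # exact duplicate of a
printf 'unique contents\n' > /tmp/dedup-demo/c

# Count distinct contents by hashing each file:
sha256sum /tmp/dedup-demo/* | awk '{print $1}' | sort -u | wc -l   # prints 2
```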
I just want to say thank you for squashfs-tools-ng. For my usecase I had to patch mksquashfs and your tool fits just right. I'm yet to switch however.
> You can pick either clang or g++, but at least recent clang versions will produce substantially faster code
have you investigated why this might be the case?
> have you investigated why this might be the case?
Very briefly. It looks like clang has a different strategy breaking up the code (which is mostly C++ templates) into actual functions vs. inlining it, and the hot code ultimately performs fewer function calls with clang than it does with gcc. But this is nowhere near a proper analysis of what's going on. :)
I have several highly-redundant NTFS backups that I'd like to compress into a read-only fs. Can DwarFS preserve all NTFS metadata?
I think it uses FUSE, which is Linux-specific.
Is this useful for long-term log storage? say, from a typical webapp (eg. Nginx logs, Rails logs, Postgres logs, etc)
Compression - anyone using lrzip on production servers?
What are the use cases for a read only file system?
Read-only media, for example. Or in general, stuff that doesn't really change. In my case: https://github.com/mhx/dwarfs#history
Game asset packages - all game assets are read only and need to be compressed and nowadays with SSD's you don't want duplication.
Just to clarify that last statement (and something to think about): with HDDs you want duplicate assets so that you don't cause seeks, which are VERY slow on the 5400rpm HDDs still found on some (or a lot of) systems.
Have you ever used a CD-ROM or DVD-ROM?
The use case for a read-only compressed filesystem is that one...
* can potentially search archived files faster, because read access is potentially faster
* can fit more data on bootable media
Booting. Arguably all containers, too.
squashfs is widely used in Linux install media.
Node_modules