Much of my software development time is spent inside a Linux VM running on my MacBook Pro. In this post, I compare several solutions for running such a VM, with a focus on how they perform with I/O operations.
Background
On my MacBook Pro, my preferred setup for software development is to use Mac-native development tools like Sublime Text 3 or VSCode paired with a Linux VM running in the background.
Modern virtual machines offer near-native CPU and memory performance. However, shared folders between the host and the VM remain a major bottleneck. Providers attack the shared-folder problem in many different ways, and that is what my benchmarks focus on. Below, I call the different approaches to shared folders “filesystem adapters”.
Methodology
All tests were run on a mid-2015 MacBook Pro with a 2.8 GHz Intel Core i7 running macOS 10.14.6.
Environments
The following environments were tested:
Multipass: A solution from Canonical, the makers of Ubuntu, for quickly spinning up Ubuntu Server instances. On macOS, it uses HyperKit as its virtualization environment and SSHFS as its filesystem adapter.
$ multipass --version
multipass 1.6.2+mac
Docker Desktop for Mac, gRPC FUSE: Like Multipass, Docker Desktop for Mac also sits on top of HyperKit. However, it uses a custom filesystem adapter called gRPC FUSE. Extensive discussion about the pros and cons of this adapter can be found in this GitHub thread.
$ docker --version
Docker version 20.10.6, build 370c289
Docker Desktop for Mac, osxfs: An older filesystem adapter for Docker Desktop for Mac, which gRPC FUSE is intended to replace.
Vagrant: Vagrant is not itself a virtualization environment; it must sit on top of one. However, Vagrant ships its own filesystem adapter based on NFS, which makes it interesting to benchmark.
Note that the Vagrant version used in testing is a bit old, and it sits on top of VMware 8.
$ vagrant --version
Vagrant 2.1.1
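To illustrate where Vagrant's NFS adapter gets enabled, here is a minimal Vagrantfile sketch. This is illustrative only: the box name is a placeholder, not the exact configuration used in these benchmarks. On a macOS host, Vagrant's NFS synced folders require a private network.

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "bento/ubuntu-18.04"               # placeholder box
  config.vm.network "private_network", type: "dhcp"  # NFS requires a private network
  # Opt in to the NFS filesystem adapter for the shared folder:
  config.vm.synced_folder ".", "/vagrant", type: "nfs"
end
```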
VMware Fusion: One of the two main commercial virtual machine environments for macOS hosts. VMware Fusion uses HGFS as its filesystem adapter.
VMware Fusion versions 8.5.10 and 11.5.7 were both tested.
Parallels Desktop 16: The other of the two main commercial virtual machine environments for macOS hosts. Parallels Desktop mounts shared directories under /media/psf.
Parallels Desktop was tested under both the Apple Hypervisor and the Parallels Hypervisor, yielding similar results. The data shown in the following section uses the Parallels Hypervisor.
Benchmarks
The following benchmarks were run. All commands were run within a filesystem shared with the host as described above.
fio --ioengine=libaio:
This is a standard way to measure I/O performance. Source: GitLab Docs
$ fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --bs=4k --iodepth=64 --readwrite=randrw --rwmixread=75 --size=512M --filename=fio_test
fio --ioengine=sync:
This variant of the above command was tested as well, because --direct=1 is not supported by all of the filesystem adapters under test.
$ fio --randrepeat=1 --ioengine=sync --direct=0 --gtod_reduce=1 --name=test --bs=4k --iodepth=64 --readwrite=randrw --rwmixread=75 --size=128M --filename=fio_test
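When collecting results from many runs, pulling the IOPS figure out of fio's human-readable summary by hand invites copy-paste errors. Below is a hypothetical extraction helper, not part of the original benchmark setup; the sample line mirrors fio 3.x summary output, so adjust the pattern if your fio version formats it differently.

```shell
# Hypothetical helper (not part of the original benchmark setup):
# extracts the IOPS figure from fio's human-readable summary lines.
fio_iops() {
  grep -Eo 'IOPS=[0-9.]+k?' | cut -d= -f2
}

# Example against a captured fio 3.x summary line:
echo '  read: IOPS=28.3k, BW=111MiB/s (116MB/s)' | fio_iops   # → 28.3k
```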
Create and read 1000 files:
This is a quick and dirty test of filesystem performance using Bash. Source: GitLab Docs
$ mkdir test; cd test
$ time for i in {0..1000}; do echo 'test' > "test${i}.txt"; done
$ time for i in {0..1000}; do cat "test${i}.txt" > /dev/null; done
$ cd ..; rm -r test
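The four commands above can be wrapped into a small function so the same test is easy to repeat against each shared mount. This is a hypothetical convenience wrapper, not part of the original methodology; it assumes bash (for brace expansion and `time` on a loop) and takes the target directory as an argument.

```shell
#!/usr/bin/env bash
# Hypothetical wrapper (not part of the original methodology): runs the
# create/read benchmark inside an arbitrary directory, e.g. a shared mount.
bench_dir() (
  cd "$1" || exit 1
  mkdir test && cd test || exit 1
  time for i in {0..1000}; do echo 'test' > "test${i}.txt"; done   # create
  time for i in {0..1000}; do cat "test${i}.txt" > /dev/null; done # read
  cd .. && rm -r test
)

# Example: point it at a Parallels shared mount (path is illustrative).
# bench_dir /media/psf/Home/benchmarks
```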
Unzip CLDR JSON:
This is an I/O-heavy operation; the zip file expands to over 20k files and directories.
$ wget https://github.com/unicode-org/cldr-json/releases/download/39.0.0/cldr-39.0.0-json-modern.zip
$ time unzip -q -d cldr-39.0.0-json-modern cldr-39.0.0-json-modern.zip
Results
The results are plotted below. You can click on each chart to access an interactive version.
Control Measurements
For comparison, the tests were also run on the native macOS filesystem and on the VM's own (non-shared) filesystem. The results were much better than any of the shared-folder results, so I did not plot them on the same charts. Note that the units on these measurements are larger than the units in the charts above.
- Async IOPS Read: 28.3k (Parallels)
- Async IOPS Write: 9.4k (Parallels)
- Async Bandwidth Read: 111 MiB/s
- Async Bandwidth Write: 36.9 MiB/s
- Sync IOPS Read: 27k (macOS), 10.8k (Parallels)
- Sync IOPS Write: 89.9k (macOS), 3.6k (Parallels)
- Sync Bandwidth Read: 1055 MiB/s (macOS), 42.3 MiB/s (Parallels)
- Sync Bandwidth Write: 351 MiB/s (macOS), 14.1 MiB/s (Parallels)
- Create 1000 Files: 0.273 s (macOS), 0.029 s (Parallels)
- Read 1000 Files: 2.343 s (macOS), 0.856 s (Parallels)
- Unzip CLDR JSON: 4.477 s (macOS), 2.089 s (Parallels)
Idle Memory
A bit of data on the idle memory usage of the virtual machines was collected. These measurements are from the Memory value in Activity Monitor reported after all other tests were complete.
- Multipass: 1.80 GB
- Docker: 3.68 GB
- Parallels: 3.56 GB
- VMware 11: 4.04 GB
Note: I would take these numbers with a grain of salt, since they may not be apples-to-apples comparisons. The one observation worth making is that Multipass sits quite a bit lower than Docker, despite the two having similar underpinnings (both run on HyperKit).
Conclusion
In every benchmark, Parallels has the fastest filesystem adapter, and Docker gRPC FUSE is fairly consistently the slowest. All the others fall somewhere in the middle, with VMware and Vagrant generally performing faster, and Multipass and Docker osxfs generally performing slower.
However, native performance, both on the host and within the virtual machine’s virtual filesystem, is orders of magnitude faster than any of the VM shared filesystems for I/O operations. I hope that additional work can be done to improve performance of shared filesystems from a macOS host to a Linux guest.
What do you think? Did I make fair comparisons? What other options should I add to my benchmarks? Let me know in the comments below!