
ZeroFS vs JuiceFS Benchmarks

Performance comparison conducted on an Azure D48lds v6 VM (48 vCPUs, 96 GiB RAM) with a Cloudflare R2 backend.

  • VM: Azure Standard D48lds v6, West Europe (Zone 1)
  • Storage: Cloudflare R2 (S3-compatible)
  • Benchmark suite: github.com/Barre/ZeroFS/bench
  • Operations per test: 10,000

ZeroFS: Direct S3-only architecture. No additional infrastructure required.

JuiceFS: Requires a separate metadata database (SQLite/Redis/TiKV) in addition to S3 storage. Despite this dedicated metadata layer, JuiceFS delivered 175-227x lower write throughput than ZeroFS in our tests.

Performance at a Glance

Key Performance Differences

  • Sequential Writes (higher is better): JuiceFS 175x slower
  • Data Modifications (higher is better): JuiceFS 183x slower
  • File Append (higher is better): JuiceFS 227x slower
  • Empty Files (higher is better): ZeroFS 1.2x faster
  • Git Clone (lower is better): ZeroFS 13x faster
  • Cargo Build (lower is better): JuiceFS 22x slower
  • TAR Extract (lower is better): ZeroFS 76x faster
  • Storage Used (lower is better): ZeroFS 32x less
  • API Operations (lower is better): ZeroFS 112x less

Synthetic Benchmarks

| Test | Metric | ZeroFS | JuiceFS | Difference |
|------|--------|--------|---------|------------|
| Sequential Writes | Operations/sec | 984.29 | 5.62 | 175x |
|  | Mean latency | 1.01ms | 177.76ms | 176x |
|  | Success rate | 100% | 100% | - |
| Data Modifications | Operations/sec | 1,098.62 | 5.98 | 183x |
|  | Mean latency | 0.91ms | 166.25ms | 183x |
|  | Success rate | 100% | 7.94% | - |
| Single File Append | Operations/sec | 1,203.56 | 5.29 | 227x |
|  | Mean latency | 0.83ms | 186.16ms | 224x |
|  | Success rate | 100% | 2.57% | - |
| Empty Files | Operations/sec | 1,350.66 | 1,150.57 | 1.17x |
|  | Mean latency | 0.59ms | 0.83ms | 1.4x |
|  | Success rate | 100% | 100% | - |

Real-World Operations

| Operation | ZeroFS | JuiceFS | Notes |
|-----------|--------|---------|-------|
| Git clone | 2.6s | 34.4s | ZeroFS repository |
| Cargo build | 3m 4s | >69m | JuiceFS aborted - no progress |
| tar -xf (ZFS source) | 8.2s | 10m 26s | ZFS 2.3.3 release tarball |

ZeroFS

  • Consistent sub-millisecond latencies for file operations
  • 100% success rate across all benchmarks
  • Completed all real-world tests

JuiceFS

  • Failed 92% of data modification operations
  • Failed 97% of append operations
  • Unable to complete Rust compilation after 69 minutes
  • Errors: "No such file or directory (os error 2)" on file operations

Sequential Writes

Creates files in sequence. Tests metadata performance and write throughput.

ZeroFS: 10,000 files in 10.16 seconds
JuiceFS: 10,000 files in 29 minutes 37 seconds
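
For context, the sequential-write pattern is straightforward to reproduce against any mounted path. Below is a minimal Rust sketch of the idea, not the actual github.com/Barre/ZeroFS/bench code; the mount point and per-file payload size are assumptions.

```rust
use std::fs::File;
use std::io::Write;
use std::time::Instant;

fn main() -> std::io::Result<()> {
    // Hypothetical mount point; substitute wherever the filesystem is mounted.
    let mount = "/mnt/bench";
    let payload = vec![0u8; 64 * 1024]; // assumed 64 KiB per file
    let ops = 10_000;

    let start = Instant::now();
    for i in 0..ops {
        // One operation = create a new file and write the payload sequentially.
        let mut f = File::create(format!("{mount}/seq_{i}"))?;
        f.write_all(&payload)?;
        f.sync_all()?; // push the write toward the backing store
    }
    let elapsed = start.elapsed().as_secs_f64();
    println!(
        "{:.2} ops/sec, mean latency {:.2} ms",
        ops as f64 / elapsed,
        elapsed * 1000.0 / ops as f64
    );
    Ok(())
}
```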

Data Modifications

Random writes to existing files. Tests consistency and caching.

ZeroFS: All operations succeeded
JuiceFS: 9,206 failures out of 10,000 operations
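
A rough sketch of this access pattern follows: overwriting blocks at pseudo-random offsets inside a pre-created file. The path, file size, and block size are illustrative assumptions rather than the suite's real parameters.

```rust
use std::fs::OpenOptions;
use std::os::unix::fs::FileExt; // write_at = pwrite-style positional write
use std::time::Instant;

fn main() -> std::io::Result<()> {
    let path = "/mnt/bench/existing_file"; // hypothetical pre-created file
    let file = OpenOptions::new().write(true).open(path)?;
    let block = vec![0xABu8; 4096];
    let file_len = 64 * 1024 * 1024u64; // assumed 64 MiB file
    let ops = 10_000;

    let start = Instant::now();
    for i in 0..ops {
        // Pseudo-random offset, aligned to the block size.
        let offset = ((i as u64).wrapping_mul(2654435761) % (file_len / 4096)) * 4096;
        file.write_at(&block, offset)?; // overwrite existing data in place
    }
    file.sync_all()?;
    println!("{:.2} ops/sec", ops as f64 / start.elapsed().as_secs_f64());
    Ok(())
}
```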

Single File Append

Appends to a single file. Tests lock contention and write ordering.

ZeroFS: All operations succeeded
JuiceFS: 9,743 failures out of 10,000 operations
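
The pattern reduces to repeated appends to one shared file, as in this hedged sketch (target path and record size assumed):

```rust
use std::fs::OpenOptions;
use std::io::Write;
use std::time::Instant;

fn main() -> std::io::Result<()> {
    // Hypothetical target on the mounted filesystem.
    let mut file = OpenOptions::new()
        .create(true)
        .append(true) // every write lands at the current end of file
        .open("/mnt/bench/append.log")?;
    let record = vec![b'x'; 1024]; // assumed 1 KiB per append
    let ops = 10_000;

    let start = Instant::now();
    for _ in 0..ops {
        file.write_all(&record)?;
    }
    file.sync_all()?;
    println!("{:.2} appends/sec", ops as f64 / start.elapsed().as_secs_f64());
    Ok(())
}
```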

Empty File Creation

Pure metadata operations without data writes.

ZeroFS: 7.4 seconds total
JuiceFS: 8.7 seconds total

This was JuiceFS's best result, suggesting the bottleneck is in data operations rather than metadata.
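
Since this test exercises metadata only, the per-operation body is essentially a bare create call with no data written, for example (mount path assumed):

```rust
use std::fs::File;
use std::time::Instant;

fn main() -> std::io::Result<()> {
    let ops = 10_000;
    let start = Instant::now();
    for i in 0..ops {
        // Metadata-only operation: create the file, write no data.
        File::create(format!("/mnt/bench/empty_{i}"))?;
    }
    println!("{:.2} creates/sec", ops as f64 / start.elapsed().as_secs_f64());
    Ok(())
}
```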

Compilation Workload

Rust compilation of ZeroFS codebase. Tests mixed read/write patterns.

ZeroFS: Completed in 3 minutes 4 seconds
JuiceFS: Aborted after 69 minutes with no progress past initial dependencies

Archive Extraction

Extracting ZFS 2.3.3 source tarball. Tests sequential file creation with varying sizes.

ZeroFS: 8.2 seconds
JuiceFS: 10 minutes 26 seconds (76x slower)

Final Bucket Statistics

| Metric | ZeroFS | JuiceFS | Difference |
|--------|--------|---------|------------|
| Bucket Size | 7.57 GB | 238.99 GB | 31.6x larger |
| Class A Operations | 6.15k | 359.21k | 58.4x more |
| Class B Operations | 1.84k | 539.3k | 293x more |

JuiceFS consumed 31x more storage and performed 58-293x more S3 operations for the same workload. This translates directly to higher storage costs and API charges.
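
To make the cost point concrete, here is a back-of-the-envelope estimate using Cloudflare R2's published pay-as-you-go rates at the time of writing (roughly $0.015 per GB-month of storage, $4.50 per million Class A operations, $0.36 per million Class B operations). These rates are an assumption on our part; verify current pricing before relying on the numbers.

```rust
// Back-of-the-envelope R2 cost estimate from the bucket statistics above.
// Pricing constants are assumptions based on R2's published rates; check
// current pricing before using them for real planning.
const STORAGE_PER_GB_MONTH: f64 = 0.015;
const CLASS_A_PER_MILLION: f64 = 4.50;
const CLASS_B_PER_MILLION: f64 = 0.36;

fn cost_usd(gb_stored: f64, class_a_ops: f64, class_b_ops: f64) -> f64 {
    // One month of storage plus the request charges for the benchmark run.
    gb_stored * STORAGE_PER_GB_MONTH
        + class_a_ops / 1_000_000.0 * CLASS_A_PER_MILLION
        + class_b_ops / 1_000_000.0 * CLASS_B_PER_MILLION
}

fn main() {
    // Figures taken from the Final Bucket Statistics table.
    let zerofs = cost_usd(7.57, 6_150.0, 1_840.0);
    let juicefs = cost_usd(238.99, 359_210.0, 539_300.0);
    println!("ZeroFS:  ~${:.2}", zerofs); // ≈ $0.14
    println!("JuiceFS: ~${:.2}", juicefs); // ≈ $5.40
}
```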

In our benchmarks, ZeroFS demonstrated 175-227x higher throughput for write operations while using only S3 for storage. JuiceFS, which requires both S3 and a separate metadata database, experienced high failure rates and significantly slower performance across all tests.

The tests also revealed differences in resource consumption: JuiceFS used 31x more storage space and generated up to 293x more S3 API calls for the same workload, which would impact operational costs in production environments.