How We Cut Our Cloudflare Worker Test Suite from 80s to 24s


TLDR: I'm using @cloudflare/vitest-pool-workers and figured out how to cut my test suite from 80 seconds to 24. If your Workers tests feel slow, read on.

I'm building a Cloudflare Workers app with D1, Durable Objects, R2, the whole stack. The official way to test this stuff is Cloudflare's vitest pool package. It runs your tests inside workerd so you get faithful simulation of all the Workers APIs. It's a good tool. I'm happy with it.

It has a few rough edges, though. Over months of building I accumulated about 1,100 tests across 42 files. The suite took 114 seconds to run. I figured that was just the cost of doing business. Big test suite, lots of bindings, 58 database migration files to apply on startup. Of course it's slow.

I set a Claude Code session loose on it to see if it could squeeze out some improvement. It found real wins: reducing unnecessary polling delays, merging a bunch of small test files into logical groups, and switching from isolated storage to shared storage. That got the suite down from 114 seconds to about 80. Solid. I was satisfied with 30% faster and moved on with my day.
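For reference, the isolated-to-shared storage switch is a one-line change in the pool config. A sketch, with option names as documented by @cloudflare/vitest-pool-workers and the wrangler path being illustrative:

```typescript
// vitest.config.ts — sketch of the shared-storage setup.
import { defineWorkersConfig } from "@cloudflare/vitest-pool-workers/config";

export default defineWorkersConfig({
  test: {
    poolOptions: {
      workers: {
        // Shared storage: faster, but test files can see each other's data,
        // so fixtures must not collide (more on that below).
        isolatedStorage: false,
        wrangler: { configPath: "./wrangler.toml" },
      },
    },
  },
});
```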

But there was something in the timing breakdown that didn't add up.

Duration  80.47s (transform 1.87s, setup 59.78s, collect 1.16s,
                  tests 18.35s)

58 seconds of setup. 18 seconds of actual tests. Most of my test suite runtime was setup, not tests. The initial optimization pass had improved the test execution time, but the setup time was untouched.

The setup was running once per test file. I had 42 test files. Each one cost about 1.4 seconds of overhead. 42 times 1.4 is 58 seconds, and there it is.

The number of tests wasn't the problem. I was being punished for how I happened to organize them into files.

Why it's slow

(I'm going to get a bit technical here. If you want to skip ahead to the fix, you won't miss anything.)

In a normal vitest setup, your tests run in Node.js. When a test file imports a module, it's a filesystem read. Fast and efficient. The per-file overhead is maybe 50-100 milliseconds, so nobody notices.

The Cloudflare pool is different. Your tests run inside workerd, Cloudflare's V8 runtime. That's what makes it useful. You get real D1, real Durable Objects, real R2. But workerd can't read your filesystem.

So every module import goes through a WebSocket RPC call. workerd asks for a module, sends a request over WebSocket to the Vite dev server running in Node, Vite resolves and transforms it, sends it back over the wire, and workerd evaluates it. If that module imports other modules, repeat for each one. It works, but it's maybe 10-20x slower per module than a filesystem read.

On top of that, vitest deliberately clears the setup file from its module cache before each test file. This is a vitest design choice, not Cloudflare's. It's there for isolation guarantees. In normal vitest you'd never notice because re-importing a file from disk is nearly free. In the Cloudflare pool, that re-import goes through the full WebSocket round-trip, and it triggers again for every test file.

It's not that Cloudflare built a bad tool. The architecture is constrained by the requirement to run inside workerd. And most Workers projects are small. If you have 10 test files, the overhead is 14 seconds. Annoying but tolerable. It's when you get to 30, 40, 50 files that it becomes the dominant cost of your test suite.

The fix

Create one file that imports all your test files:

// src/all.test.ts
import "./db.test.ts";
import "./index.test.ts";
import "./agent/Agent.test.ts";
// ... all 42 files

Point your npm test script at it:

"test": "vitest run src/all.test.ts"

That's the whole solution. Vitest sees one test file, so it pays the per-file overhead once. Your individual test files still work. You can still run npx vitest run src/db.test.ts when you're working on a specific thing. The barrel file is just for running the full suite.

Duration  24.15s (transform 1.70s, setup 4.49s, collect 909ms,
                  tests 17.86s)

Setup went from 58 seconds to 4.5. Total went from 80 seconds to 24. Test execution time didn't change at all because we're running the same tests. The only thing that changed is we stopped paying a 1.4-second tax 41 extra times.

One gotcha: don't run bare vitest run with no file argument. It'll discover all 43 files (the barrel plus the 42 individual ones) and run everything twice.
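Maintaining 42 import lines by hand is tedious. A small generator script can rebuild the barrel from the files on disk. This is a sketch, not part of the original setup: it assumes your tests live under src/ and Node 20+ for the recursive readdir option.

```typescript
// generate-barrel.ts — rebuild src/all.test.ts from the test files on disk.
import { readdirSync, writeFileSync } from "node:fs";

// Turn a list of paths (relative to src/) into the barrel file's contents,
// skipping the barrel itself so it never imports itself.
export function buildBarrel(files: string[]): string {
  return (
    files
      .filter((f) => f.endsWith(".test.ts") && f !== "all.test.ts")
      .sort()
      // Normalize Windows path separators so imports work cross-platform.
      .map((f) => `import "./${f.replace(/\\/g, "/")}";`)
      .join("\n") + "\n"
  );
}

// Node 20+: { recursive: true } walks subdirectories like src/agent/.
export function regenerateBarrel(srcDir = "src"): void {
  const files = readdirSync(srcDir, { recursive: true }) as string[];
  writeFileSync(`${srcDir}/all.test.ts`, buildBarrel(files));
}
```

Run it whenever you add or delete a test file, or wire it into a pretest script so the barrel can never go stale.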

A few tests will break

When I first ran 1,143 tests through the barrel, 4 failed. All of them were pre-existing test isolation issues that had been hiding behind per-file isolation. Things like hardcoded test user IDs that collided when two test files shared a database, or a webhook handler that crashed when it hit test data from a different file.

The fixes were small. Use crypto.randomUUID() instead of "test-user-123". Add a try/catch around a decrypt loop that shouldn't crash the whole endpoint on bad data. That last one was actually a production robustness improvement that the barrel test surfaced for free.

You might hit something similar. Expect a handful of fixes, not a rewrite. And the fixes make your tests more robust regardless of how you run them.

Why this matters

Your test runtime shouldn't depend on how many files you split your tests into. But with this package, it does. Maybe Cloudflare or Vitest will fix that at some point. Until then, the barrel file workaround is easy and it works. Hopefully this helps get the word out.