Projecting React


I've been working on TanStack Start, and one point of friction has been nagging at me: React is big. Bundled through Vite, it lands at ~60 KB gzip on the client before a single line of app code runs. (The ~45 KB figure commonly cited is the CDN UMD build; modern ESM bundlers don't tree-shake down that far.) The rest of the TanStack toolchain (Router, Query, Store, Form, Virtual) is collectively a fraction of that, and it felt off to ship a stack where the smallest piece you can't remove is also the largest.

The obvious move was Preact. preact/compat has been the pragmatic "React but tiny" path for years, and I sat down to wire it up.

It didn't work. preact/compat has drifted from React 19 enough that it's no longer a true drop-in. Small frictions stacked up around use() semantics, the React 19 server-action surface, portals, error boundaries, and hydration edges. Every fix was a shim on top of a shim, and the delta kept growing. Somewhere around the fifth patch I stopped.

The problem wasn't Preact, it was that I wanted React's public API projected at a different scope.

Code as a materialized view

Around the same time, my friend Kyle Mathews posted something that's been rattling around my head for weeks:

Coding agents turn code from artifact into materialized view. The base tables are the ideas — algorithms, protocols, semantic contracts. The code is one projection. For decades we treated the artifact as authoritative because regenerating was expensive. It isn't anymore.

We only treat code as the source of truth because regenerating it is expensive. Flip that cost and the model inverts: the ideas become the base table, and the code becomes one projection among many. "N specialized projections of the same ideas" is suddenly possible.

React's public API is the base table: the element model, the hooks contract, Suspense semantics, the hydration lifecycle, the SSR streaming protocol. It's been hammered on for a decade, has a massive test suite behind it, and is stable enough to build on. React-the-repo is one projection of that API, optimized for use cases TanStack Start doesn't share: concurrent mode, time slicing, DevTools, a full React Server Components runtime, a long tail of DOM quirks.

The question I kept coming back to: what if I asked an AI agent to produce a different projection, one scoped to exactly what TanStack Start needs?

A note on vinext

I wasn't the first to ask. A few weeks earlier Cloudflare shipped vinext, a Vite plugin that reimplements the Next.js API surface, built by one engineer with AI assistance in one week for about $1,100. It has over 1,700 Vitest tests, 380 Playwright runs, and production builds 4× faster than Next.js 16 with Turbopack. The discourse called it a "slop-fork."

The label stuck for a reason. Vinext is real engineering, and Cloudflare has a clear commercial motive: making Next.js easier to deploy on Workers pulls Vercel customers onto their platform. Both things are true at once. The label ended up mixing "made with AI" with "made to capture market share," and the second half is what people were actually reacting to. Fair enough.

But motive doesn't make something slop, it makes it a product. And this thing I built doesn't have the market-share angle at all. It's an experiment running on two of my own websites and nothing else, so the motive is different even though the technique is the same. I'm going to borrow Kyle's word for both of them and call them projections, not forks.

The shape: core plus toggleable features

TanStack Start is synchronous-friendly. Router state is owned by @tanstack/router-core, external store tearing is handled by @tanstack/react-store, and we have our own devtools. The RSC pipeline uses @vitejs/plugin-rsc against real react-server-dom for Flight serialization. We need Suspense and SSR streaming, but we don't need concurrent scheduling on the rendering path.

Some of that gets cut permanently. Concurrent rendering, time slicing, the lane-based scheduler, React DevTools, and the Flight client deserializer aren't implemented at all. useTransition and useDeferredValue run synchronously, startTransition is just fn(), and the scheduler is a microtask wrapper. Those are product decisions: TanStack Start either doesn't need them, or another piece of the stack handles them.
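To make "runs synchronously" concrete, here is roughly what that stubbed transition surface can look like. This is a sketch based on the description above, not the actual redact source:

```typescript
// Sketch of synchronous stand-ins for React's transition APIs, as described
// above: no scheduler, no deferral, just immediate execution.

// startTransition is "just fn()": the update runs right away.
export function startTransition(fn: () => void): void {
  fn();
}

// useTransition never reports a pending state, because nothing is ever deferred.
export function useTransition(): [boolean, (fn: () => void) => void] {
  return [false, (fn) => fn()];
}

// useDeferredValue returns its input unchanged: there is no time slicing.
export function useDeferredValue<T>(value: T): T {
  return value;
}
```

Callers written against React's API keep working; they just never observe an intermediate pending state.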

Everything else got split into two layers. The first is a minimum core (~7.08 KB gzip): fiber reconciler with keyed child diffing, host DOM mount/update, the standard hook surface (useState, useReducer, useEffect, useLayoutEffect, useInsertionEffect, useMemo, useCallback, useRef, useId, useSyncExternalStore, use), native event binding, Fragments, JSX runtime. Every React app needs all of it.
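One member of that hook surface is worth illustrating: use() on a thenable. A common minimal shape, and only a sketch of the idea rather than the redact implementation, instruments the thenable with status/value/reason fields (the convention React's own use() uses) and throws it while pending so an ancestor Suspense boundary can retry on settle:

```typescript
// Sketch of the "throw the thenable while pending" shape behind use(promise).
// The status/value/reason fields follow React's thenable-instrumentation
// convention; this is illustrative, not the redact source.
type Tracked<T> = PromiseLike<T> & {
  status?: 'pending' | 'fulfilled' | 'rejected';
  value?: T;
  reason?: unknown;
};

export function use<T>(thenable: Tracked<T>): T {
  switch (thenable.status) {
    case 'fulfilled':
      return thenable.value as T;
    case 'rejected':
      throw thenable.reason;
    default:
      if (thenable.status !== 'pending') {
        // First encounter: start tracking settlement on the thenable itself.
        thenable.status = 'pending';
        thenable.then(
          (v) => { thenable.status = 'fulfilled'; thenable.value = v; },
          (e) => { thenable.status = 'rejected'; thenable.reason = e; },
        );
      }
      // Throwing the thenable lets an ancestor Suspense boundary catch it
      // and schedule a re-render once it settles.
      throw thenable;
  }
}
```

On re-render after settlement, the same call returns the resolved value synchronously.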

On top of the core sit eight toggleable features, each with a real implementation and a stub:

| Feature | Stubbed behavior | Savings (gzip) |
| --- | --- | --- |
| portal | Children render in place, container ignored | ~30 B |
| context | Provider → Fragment, useContext returns default | ~80 B |
| suspense | Suspense → Fragment, thenables retry on settle | ~640 B |
| memo | Pass-through every render | ~80 B |
| forwardRef | Ref dropped (React 19 "refs as props" still works) | ~70 B |
| lazy | Sync-resolvable payloads work; async retries on settle | ~20 B |
| classComponents | constructor + render + setState only | ~200 B |
| hydration | hydrateRoot throws (use createRoot for SPA) | ~1270 B |

The Vite plugin swaps index.js for stub.js for any feature flagged off, so the full code never enters the module graph and tree-shaking strips it. User code doesn't change, and there's no runtime branching.
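A minimal sketch of how such a swap can work as a Vite resolveId hook. The module layout here ('@tanstack/redact/features/&lt;name&gt;' with a stub.js next to each index.js) is an assumed convention for illustration, not the actual redact package structure:

```typescript
// Hedged sketch of a feature-toggle Vite plugin: imports of flagged-off
// features resolve to their stubs, so the real implementation never enters
// the module graph. The feature-module paths are hypothetical.
type FeatureFlags = Record<string, boolean>;

export function featureToggle(flags: FeatureFlags) {
  return {
    name: 'redact-feature-toggle',
    enforce: 'pre' as const,
    resolveId(source: string) {
      const match = source.match(/^@tanstack\/redact\/features\/([\w-]+)$/);
      if (match && flags[match[1]] === false) {
        // Redirect to the stub; tree-shaking then drops the full code
        // with zero runtime branching in user code.
        return `@tanstack/redact/features/${match[1]}/stub.js`;
      }
      return null; // defer to Vite's normal resolution
    },
  };
}
```

Because the redirect happens at resolve time, the toggle costs nothing at runtime: the bundler simply never sees the full module.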

There are two starting points, not a spectrum:

import { redact } from '@tanstack/redact/vite'

redact({ preset: 'full' })  // 9.39 KB: drop-in React, opt OUT what you don't need
redact({ preset: 'nano' })  // 7.08 KB: minimum core, opt IN what you do need

Per-feature overrides merge on top of either preset, so you can run full minus hydration for an SPA, or nano plus context plus suspense for a small interactive app. Most apps want either close-to-React or close-to-minimal, and the toggles let you land exactly where your app's shape requires.
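The preset-plus-overrides behavior can be sketched like this. The feature names come from the table above; the function itself is illustrative, not the published API:

```typescript
// Illustrative preset + override resolution: 'full' defaults everything on,
// 'nano' defaults everything off, and per-feature overrides win either way.
const FEATURES = [
  'portal', 'context', 'suspense', 'memo',
  'forwardRef', 'lazy', 'classComponents', 'hydration',
] as const;

type Feature = (typeof FEATURES)[number];
type Overrides = Partial<Record<Feature, boolean>>;

export function resolveFlags(
  preset: 'full' | 'nano',
  overrides: Overrides = {},
): Record<Feature, boolean> {
  const base = preset === 'full';
  const flags = {} as Record<Feature, boolean>;
  for (const f of FEATURES) flags[f] = overrides[f] ?? base;
  return flags;
}

// "full minus hydration" for an SPA:
//   resolveFlags('full', { hydration: false })
// "nano plus context plus suspense" for a small interactive app:
//   resolveFlags('nano', { context: true, suspense: true })
```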

This isn't scientific pruning, it's product decisions all the way down. Everything that's toggleable is toggleable because some consumer might not need it, and everything I left out entirely is gone because my consumer never will. That's the point of projections: each one is scoped to its consumer, not to some abstract ideal.

One thing in all of that is worth pulling out by itself: RSC works. The projection ships against real react-server-dom for Flight serialization, and the Flight client deserialization is delegated to the same package on the client. None of the RSC machinery is reimplemented, the projection just plays the role of the React runtime underneath it. tanstack.com runs an RSC-rendered blog and docs renderer through this setup with no functional regressions. The one perf cost shows up as an LCP regression on RSC-heavy pages, which is covered in the numbers section.

What actually happened

It took one day of prompting, shaping a projection against a spec rather than writing code from scratch.

By the end of that day, almost the whole React surface was working: the element model, JSX runtime (classic and automatic), Fragment, memo, lazy, forwardRef, the standard hook set (useState, useEffect, useLayoutEffect, useRef, useMemo, useCallback, useContext with correct semantics), class components, error boundaries, useSyncExternalStore, use() for context and promises, SSR with Suspense streaming, and hydration. The whole thing passed the test suite.

The work that came after wasn't building, it was using. Once tannerlinsley.com and tanstack.com were actually running on the shim, real traffic surfaced the bugs that no amount of unit testing was going to catch:

  • Reconciliation order. placeChildrenInOrder had to iterate in reverse. Forward iteration cascades failures on every out-of-order child insertion.
  • Effect cleanup timing. useEffect cleanup had to run at effect-run time (the passive drain), not at dispatch time. Otherwise coalesced renders, the kind TanStack Router triggers on every user action, leak side effects into the DOM.
  • Deferred hydration. use(promise) and lazy components suspending mid-hydration needed matching ancestor-Suspense guards in two code paths.
  • Controlled inputs. Every keystroke had to fire onChange with event.nativeEvent aliased on the dispatched event for library compat.
  • SSR streaming. Shell + bootstrap had to be buffered into a single TextEncoder.encode + enqueue instead of per-chunk, which cut Node stream overhead measurably in the CPU profile.
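To see why the reverse iteration in the first bullet matters: placing children back-to-front gives every insertBefore call an anchor that is already in its final position. A self-contained sketch of the idea (not the redact source; the node shapes are minimal stand-ins so it runs without a browser DOM):

```typescript
// Minimal stand-in shapes so the sketch runs without a real DOM.
type Child = { id: string };
interface ParentLike {
  children: Child[];
  insertBefore(node: Child, ref: Child | null): void;
}

// Place `desired` into `parent` in order, iterating in REVERSE. Each child is
// inserted before the previously placed one, so the anchor is always correct.
// Forward iteration would anchor against nodes that still need to move,
// cascading failures on every out-of-order insertion.
function placeChildrenInOrder(parent: ParentLike, desired: Child[]): void {
  let anchor: Child | null = null;
  for (let i = desired.length - 1; i >= 0; i--) {
    const child = desired[i];
    const at = parent.children.indexOf(child);
    const anchorAt = anchor
      ? parent.children.indexOf(anchor)
      : parent.children.length;
    // Skip the move when the child already sits immediately before the anchor.
    if (at === -1 || at !== anchorAt - 1) parent.insertBefore(child, anchor);
    anchor = child;
  }
}

// Tiny fake parent mimicking DOM insertBefore (moves an existing child).
function makeParent(initial: Child[]): ParentLike {
  const parent: ParentLike = {
    children: [...initial],
    insertBefore(node, ref) {
      const i = parent.children.indexOf(node);
      if (i !== -1) parent.children.splice(i, 1);
      const j = ref ? parent.children.indexOf(ref) : parent.children.length;
      parent.children.splice(j, 0, node);
    },
  };
  return parent;
}
```

For example, reordering [a, b, c] into [c, a, b] moves only b and a; c never has to move because by the time it is considered, everything after it is already correct.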

Each one was a one-shot fix the moment I described it. The pattern was always the same: spot the bug in production, write down what I saw, and get the fix back, with no rabbit holes along the way. Claude already knew the right thing to do, it just needed to know which thing to do, and that information only came from running the code in the wild.

Every one of these is a real React bug shape, and any React core maintainer would recognize them. That's what makes this a real projection instead of a convincing knockoff: the invariants and failure modes are the same.

The numbers

Measured against React 19.2.3:

| Entry | React 19 gzip | Projection (full) | Projection (nano) |
| --- | --- | --- | --- |
| react-dom/client | 60.3 KB | 9.39 KB | 7.08 KB |
| react-dom/server | 61.1 KB | 4.59 KB | |
| Full client runtime (react + react-dom/client + jsx-runtime) | ~60 KB | ~11.47 KB | ~9.92 KB |
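For anyone wanting to sanity-check sizes like these, the gzip number is just the byte count of the compressed bundle. A stand-in example (the file name is illustrative; point it at your real build output):

```shell
# Create a stand-in "bundle" and measure its gzipped byte count,
# the same metric the table above reports.
printf 'console.log("hello from the bundle")\n' > bundle.js
gzip -c bundle.js | wc -c
```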

In either preset, ~80–85% smaller than stock React. And because there's no scheduler, no lanes, no fiber work loop, the render path itself is simpler:

| Benchmark | Real React | Projection | Speedup |
| --- | --- | --- | --- |
| client-nav (router-driven navigation loop) | 34.9 Hz | 78.1 Hz | 2.24× |
| SSR (request loop) | ~48 Hz | 168 Hz | ~3× |
| stable-list re-render (480 × 100 ticks) | 45.5 ms | 37.3 ms | 1.22× |

700/700 unit + integration tests pass. The last row in the bench table came in late, after Steve Faulkner ran a different bench at me a couple of weeks after this post first went up. It's the workload most React-vs-X comparisons publish numbers on: a 480-row stable list re-rendered 100 times with prop changes on every row, render loop only, no router, no SSR. The first measurement put redact about 34% slower than React on it. Profiling in real Chrome turned up three real costs in the reconciler that hadn't surfaced on TanStack's own workloads: a fast-path bookkeeping detour for unkeyed positional matches, redundant Text.data reads in the leaf path, and a per-render dispatcher pulling 17 closures along on every function-component pass. After two passes of fixes (@tanstack/redact@0.0.10), the median landed at 37.3 ms vs. React's 45.5 ms, about 18% faster, over 30 samples in real Chrome against the deployed React build. A multi-workload sanity check across keyed reorder, mount/unmount, deep tree, and state churn confirmed the slow paths weren't regressed.

jsdom is fine for semantic correctness but misleading for perf. About a quarter of its CPU time goes to its own DOM-tracking machinery that real browsers do orders of magnitude faster, which means the cost ranking it produces points at the wrong things. The first round of fixes I tried in jsdom actively regressed the workload in real Chrome, and I almost shipped them anyway. Real Chrome via Playwright is the bench going forward; the harness is checked into the redact repo so anyone reproducing the numbers uses the same approach.

I measured it against real React on two actual production TanStack Start apps, one small and one large, using an identical Lighthouse protocol: 30-run median, 3 URLs × 2 form factors, wrangler dev --local serving a production build for both variants.

tannerlinsley.com, shipping on it today

If you're reading this post in your browser right now, you're running on the projection: the HTML, the JavaScript that hydrated it, and the runtime handling your scroll and your clicks. None of it is real React. I moved this site over the same day I drafted this post.

| Metric | React | Projection | Δ |
| --- | --- | --- | --- |
| Lighthouse perf score (median) | 99 | 100 | +1 |
| FCP | 1.22s | 1.00s | −18.1% |
| LCP | 1.42s | 1.24s | −12.1% |
| TBT | 0ms | 0ms | |
| CLS | 0 | 0 | |
| Speed Index | 1.22s | 1.00s | −18.1% |
| JS transferred over the wire | 144.1 KB | 96.5 KB | −33.0% |

That's a third off the JavaScript payload on a personal blog that's almost entirely prerendered, with barely any interactivity. Even on a site where React is doing close to nothing, it still accounted for 33% of what was on the wire. You're shipping plumbing by default, not by need.

Mobile gains were bigger: FCP dropped 18–22% and LCP 12–13% across the home page, the index, and a representative post. TBT and CLS stayed flat at zero, with no interactivity regression and no layout shift. Desktop hits the Lighthouse ceiling on both variants (100/100 across all three URLs), so the score table understates the delta, and the timings are the real story.

There's no RSC on this site, so there's no LCP regression anywhere either, and the numbers are clean across the board.

tanstack.com, capable of running it end-to-end

If tannerlinsley.com is the personal blog, tanstack.com is the stress test: TanStack Router, Query, Store, Form, Table, Virtual, the RSC-rendered blog, the docs renderer, every Suspense boundary, every hydration edge, and every third-party integration TanStack Start apps routinely pull in. The projection can drop in and run the whole thing with no regressions we've been able to find, and if it were going to break on something real, it would have by now.

Same protocol, same measurement discipline:

| Metric | vs. real React |
| --- | --- |
| Lighthouse performance score | parity (±2, within run noise) |
| FCP (desktop) | −4% to −17% |
| FCP (mobile) | up to −14% |
| TBT | 0ms → 0ms |
| CLS | 0 → 0 |
| LCP (non-RSC pages) | parity |
| LCP (RSC-heavy pages) | +8% to +43% |
| Total app client JS | −980 KB (−4.7% of full app bundle) |

Lighthouse performance score lands at parity. FCP wins across both form factors, with the biggest gains on the content-heavy docs and blog pages. TBT and CLS stay at zero. The one real regression is LCP on RSC-heavy pages: the LCP element lives in the Flight-streamed subtree, and the projection's use(pendingPromise) + deferred-resume path adds latency vs. React's battle-tested RSC client. All affected pages still score "Good" on Core Web Vitals (<2.5s LCP), and the fix path is clear, so it's on the list, not a blocker.

The net result is nearly a megabyte of client JS off the wire on a full-scale app, parity or better on every other metric, and one known regression with a clear fix, all measured against stock React 19.2.3. Two production-scale sites with very different shapes, and both of them hold up.

Is it irresponsible not to project?

If projecting your dependencies down to your actual shape now takes days instead of months (vinext shows it, this experiment shows it), then shipping upstream's full general-purpose library is itself a decision with consequences. You're betting that upstream understands your shape better than you do, and for most apps that bet is right because they benefit from the generality.

But for libraries that take strong opinions (TanStack Start takes a lot of them), the bet starts to look strange. You're shipping ~52 KB of plumbing to every user for features none of your users will use, because the cost of not shipping it used to be too high to justify. That cost just dropped by two orders of magnitude.

If I don't explore this shape myself, someone else will, and not for malicious reasons either. The wins are sitting there and the cost is now low, so someone will find a 2× render-path speedup, or a 50KB bundle savings, or a hydration shape that fits their world better than the upstream default. And if a projection like that ever ends up shipping to my users because I didn't build my own understanding first, the tradeoffs are theirs, not mine, and I won't have learned what my own shape actually is.

At that point, not projecting isn't conservative anymore, it's a cop-out.

Why I'm not marketing this

Projecting is cheap, but marketing a projection is not.

A marketed "alternative React" is a community cost, even when it's positioned carefully, labeled experimental, and explicitly scoped. People will benchmark it, people will compare, and people will ask the maintainers of the real thing to respond to it. Some fraction of developers will try it out, hit a concurrent-mode edge case, not understand what they're seeing, and blame React. The confusion compounds, and community attention is finite.

I don't want to pay that cost, and I don't think anyone would benefit from me paying it. This isn't an alternative React, it's a narrow experiment shaped around the needs of a specific kind of app, and the narrowness is what makes it work. Marketing it would invite interpretations that aren't true.

So it stays quiet and experimental. It's published on npm for my own projects and for anyone curious, but it isn't going into TanStack Start, it's not a dependency of any TanStack package, and right now it runs on exactly two sites (this one and tanstack.com), which is the entire surface area. It might stay that way forever, or it might be worth formalizing at some point into something people can opt into. That's a later conversation.

Vinext took the other path: public, documented, and actively pitched as a plugin anyone can install. That's a valid choice and Cloudflare has the shoulders for it, but it isn't the one I'm making here. The same technique can be shipped very differently, and as projection costs drop, personal software is going to matter more. Not every derivation needs to be a product.

On whether React core should care

I don't expect the React core team to care about this, and I think that's the right reaction.

Honestly, this is software tailor-built for exactly one consumer. It works better for me because it was shaped to my specific usage, and the generality I gave up is exactly what you'd have to give up to get my numbers. If there's anything here for React core, it's that their public API is well-specified enough to project against, which is a compliment.

The React team's job is to ship a general React, and mine is to ship TanStack Start. Those jobs used to overlap more than they do now, because running your own projection of React used to be impossible, and now it isn't. That doesn't mean React has to change, it means consumers like me have more responsibility for our own shape.

If anything, projections like this are useful feedback for a core team: evidence of how a public API gets used in the wild, which parts are load-bearing, and which parts go unused in specific domains. But they aren't proposals, and treating them that way misses what they are.

Distros, remixes, and the shape of the next few years

The analogy I keep coming back to is Linux distros.

There's no "real" Linux in the sense people sometimes imply when they use the word "fork." There's a kernel, and then there are hundreds of distributions projecting that kernel into whatever shape their users want: Debian, Arch, Alpine, NixOS, whatever runs on your router. Nobody thinks Arch is hurting Linux by existing, and that's the point.

Song remixes work the same way. There's the original track, and then there are derivative arrangements, and sometimes the derivative is the version people prefer. That doesn't take anything from the original, it just adds to what the original can be.

I think the next few years of web development look more like distros and remixes than anyone's prepared for. People will build their own projections of the libraries they depend on, shaped around what they actually use, and a few of them will ship as public products like vinext. Most won't, and that's fine, because when regenerating the code is cheap, the ideas underneath are what matter, and any one expression of them is just one option among many.

What's next

@tanstack/redact is published on npm so I can install it on my own projects and so anyone curious can try it, but I'm not marketing it. It doesn't live in TanStack Start, it's not a dependency of any TanStack package, and I'm not pitching it as the next thing you should adopt. Right now it runs on my personal site and on tanstack.com, and that's the whole deployment footprint I'm investing in. The project exists because I wanted to know what a version of React scoped to the shape of a Start app would cost to build, and once I saw the cost, not building it looked like the stranger choice.

Default to using the upstream, since it's almost always the right call. But pay attention to the shape of your own consumers, and notice when you're shipping generalizations you no longer need. The cost of owning your own projection has dropped far enough that "just use the default" is no longer the automatic answer it used to be.

A year ago, building this would have taken months instead of a weekend, and that's the shift worth paying attention to.