Announcing Durable Streams | ElectricSQL


Today, we're open-sourcing Durable Streams: a persistent stream primitive and HTTP protocol for reliable, resumable, real-time data streaming into client applications.

We originally built Durable Streams as the delivery layer inside Electric, our Postgres-native sync engine, and are now standardizing it as a standalone protocol.

Why Durable Streams

The internet has strong primitives for server-to-server messaging: Kafka, RabbitMQ, NATS. They give you ordering, delivery semantics, and fault tolerance between backend services—they're battle-tested, reliable, boring infrastructure. We love them for that.

Client streaming is different. WebSocket and SSE connections are easy to start, but they're fragile in practice: tabs get suspended, networks flap, devices switch, pages refresh. When connections break, you either lose in-flight data or you build bespoke backend storage and a client resume protocol on top.

AI products make this painfully visible. Token streaming is the UI for chat and copilots, and agentic apps often stream progress events, tool outputs, and partial results over long-running sessions. When the stream fails, the product fails—even if the model did the right thing.

What Are Durable Streams?

The core idea: streams are first-class primitives that get their own URLs. Each stream is an addressable, append-only log that clients can read from any position.

Durable Streams is an open protocol that extends standard HTTP to support ordered, replayable streams with offset-based resumability. It's designed to work anywhere HTTP works: browsers, mobile, native clients, and IoT.

How it works:

  • Every position in a stream has an opaque, monotonic offset.
  • Clients persist the last offset they've processed.
  • On reconnect, clients resume by asking for "everything after offset X".
  • The server doesn't need per-client session state; the stream is durable, and progress is tracked client-side.
  • Streams are addressed by offset-based URLs, so historical reads can be cached by CDNs.
  • This makes it feasible to serve large numbers of clients from a single source stream without turning your origin into a bottleneck.

That's the model: consistent, interoperable, scalable, and durable client streaming.
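
To make that concrete, here's a minimal sketch of the client side of the model. The `?offset=` query parameter, NDJSON framing, and `stream-next-offset` header are illustrative placeholders rather than the spec's exact names; the point is that the only state a client needs is the last offset it processed.

typescript

// Minimal client-side resume loop. All names below (URL shape, offset
// parameter, response header) are illustrative placeholders; see the
// protocol spec for the real wire format.
const STREAM_URL = "https://example.com/v1/stream/chat-42" // hypothetical stream
const OFFSET_KEY = "chat-42:offset"

async function catchUp() {
  // Resume from the last offset we processed; "-1" here stands in for
  // "from the beginning".
  const saved = localStorage.getItem(OFFSET_KEY) ?? "-1"
  const res = await fetch(`${STREAM_URL}?offset=${encodeURIComponent(saved)}`)

  // Assuming NDJSON framing: one JSON event per line.
  for (const line of (await res.text()).split("\n")) {
    if (line) console.log(JSON.parse(line))
  }

  // Persist the position the server reports, so a refresh, reconnect,
  // or device switch resumes here instead of restarting the stream.
  const next = res.headers.get("stream-next-offset") // placeholder name
  if (next) localStorage.setItem(OFFSET_KEY, next)
}

catchUp()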

Why Now (and Why We Built It)

Refined in production

A sync engine can't cheat its way around delivery: we needed a transport layer that guarantees ordered, replayable, resumable delivery from day one. Over the past 18 months of OSS usage and Electric Cloud, we've continuously refined our implementation, debugging edge cases, optimizing recovery paths, and hardening failure modes, to the point where we now reliably deliver millions of state changes every day. It's been rewarding to build something that just works.

The AI explosion

At the same time, conversations with users, customers, and industry peers kept surfacing the same theme: AI token streaming needs reliable delivery and persistence. As AI applications proliferate, this has become a dominant infrastructure concern (as we've discussed before). Teams are shipping streaming UIs on best-effort connections, then reimplementing offsets, buffering, replay, and deduplication in application code. The underlying need is a durable stream primitive that makes token streaming (and other client streaming) survivable across the messy realities of browsers, mobile devices, and networks.

Building a composable ecosystem

As we work on Electric 2.0, we're making a key architectural shift: separating the foundational transport layer from the protocols built on top of it. This creates a composable ecosystem where protocols can be used independently or mixed together.

Foundation:

  • Durable Streams — Durable, resumable delivery over HTTP

Ecosystem:

  • State Protocol — A composable schema for state change events (insert/update/delete) that any protocol can adopt when it needs database-style sync semantics
  • Database adapters — Postgres, MySQL, SQLite replication using the State Protocol
  • AI transport plugins — Durable transport adapters for AI SDKs (Vercel AI SDK, TanStack AI, etc.), enabling resumable token streaming and persistent agent sessions
  • Application protocols — Presence, CRDTs (Yjs, Automerge), real-time collaboration, and custom protocols built on the stream foundation

Durable Streams is that foundational layer extracted as a standalone protocol: the thing we need for Electric, and the thing we keep seeing other teams reinvent for AI streaming and beyond.

How It Works

Durable Streams uses standard HTTP methods:

  • Create + append: create a stream with PUT, append data with POST. Each append returns the next offset in a response header.
  • Read + resume: read from an offset using GET for catch-up, or tail the stream using long-polling or an SSE mode. If a connection drops, reconnect using the last saved offset.
  • Message boundaries: choose between raw byte concatenation (with your own framing, e.g. NDJSON) or a JSON mode where servers return batches as arrays with preserved boundaries.
  • Infrastructure-friendly: runs over plain HTTP (including long-polling or SSE) so it fits behind CDNs and standard API gateways.
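
For the write side, here's a rough sketch of create and append over plain HTTP. The URL shape and the `stream-next-offset` header are placeholders, not the spec's exact names.

typescript

// Create-and-append over plain HTTP. The URL shape and header name are
// illustrative placeholders; consult the spec for the exact wire format.
const url = "https://your-server.com/v1/stream/my-stream"

// Create the stream with PUT.
await fetch(url, { method: "PUT" })

// Append a chunk with POST. In raw byte mode you bring your own
// framing; here we use NDJSON (one JSON event per line).
const res = await fetch(url, {
  method: "POST",
  body: JSON.stringify({ type: "token", text: "Hello" }) + "\n",
})

// Each append returns the next offset in a response header, which a
// producer can log or hand to consumers as a starting position.
console.log(res.headers.get("stream-next-offset")) // placeholder name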

Here's what resumable streaming looks like in practice:

typescript

import { DurableStream } from "@durable-streams/client"

const stream = new DurableStream({
  url: "https://your-server.com/v1/stream/my-stream",
})

// Catch-up: read all existing data
const result = await stream.read({ live: false })
const savedOffset = result.offset // Persist this client-side

// Resume from where you left off (after refresh, reconnect, device switch)
// By default, it long-polls or uses SSE for real-time updates.
for await (const chunk of stream.read({
  offset: savedOffset,
})) {
  console.log(new TextDecoder().decode(chunk.data))
  // Save chunk.offset to resume from here next time
}
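
If you use raw byte mode, framing is up to you. Continuing the example above with NDJSON framing, a small buffer handles events that get split across chunk boundaries; the checkpointing rule here is an application-level choice, not part of the spec.

typescript

// NDJSON framing over raw byte chunks, reusing `stream` and
// `savedOffset` from above. A buffer carries partial lines across chunks.
const decoder = new TextDecoder()
let buffer = ""
let resumeOffset = savedOffset

for await (const chunk of stream.read({ offset: savedOffset })) {
  buffer += decoder.decode(chunk.data, { stream: true })
  const lines = buffer.split("\n")
  buffer = lines.pop() ?? "" // hold the trailing partial line
  for (const line of lines) {
    if (line) console.log(JSON.parse(line))
  }
  // Only checkpoint at clean boundaries: resuming from an offset while
  // holding half a line would drop the start of that event.
  if (buffer === "") resumeOffset = chunk.offset
  localStorage.setItem("my-stream:offset", String(resumeOffset))
}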

Use Cases

Durable Streams is a delivery primitive. A few places it fits well:

  • AI / LLM streaming: resume token streams across refreshes and device switches instead of restarting generations.
  • Agentic apps: stream tool outputs and progress events with replay and clean reconnect semantics.
  • Database synchronization: stream row changes with guaranteed ordering and resumability (the mechanism Electric uses to ship updates to clients).
  • Event sourcing: deliver immutable logs clients can replay from any point in time.
  • Real-time collaboration: deliver CRDT / OT updates with replayable history and clean reconnects.

The pattern is the same: consume events from backend systems (databases, LLMs, Kafka, queues), apply auth + transformation, then fan out to clients over HTTP using Durable Streams.
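
As a sketch of that pattern, a server-side producer might look like the following. The token source and URL shape are hypothetical; in practice the source would be your model SDK, a Kafka consumer, or a database replication feed.

typescript

// Server-side fan-out sketch with hypothetical names throughout.
async function* tokenSource(prompt: string): AsyncGenerator<string> {
  for (const t of ["Durable ", "streams ", "resume."]) yield t
}

async function runGeneration(sessionId: string, prompt: string) {
  const url = `https://your-server.com/v1/stream/${sessionId}`

  // Auth and transformation happen here, server-side, before fan-out.
  await fetch(url, { method: "PUT" }) // ensure the stream exists

  for await (const token of tokenSource(prompt)) {
    // Each event is appended once; every client reads the stream
    // directly and can resume mid-generation after a disconnect.
    await fetch(url, {
      method: "POST",
      body: JSON.stringify({ type: "token", text: token }) + "\n",
    })
  }
}

runGeneration("session-123", "Hello")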

What's in the Repo

The durable-streams repository includes:

  • A reference Node.js server implementation
  • TypeScript clients for read-only and read-write scenarios
  • A CLI tool for testing and development
  • A conformance test suite for cross-implementation compatibility

The goal is for Durable Streams to be a spec with many implementations, not a single codebase. The reference implementation covers a Node.js server and a TypeScript/JavaScript client, but the ecosystem needs independent server and client implementations in Python, Go, Rust, Java, Swift, Kotlin, and more, along with different storage backends (PostgreSQL, S3, Redis, etc.).

If you're building one, the conformance test suite is there to help ensure compatibility. We're happy to link to implementations from the main repository, and we'd love to chat in Discord.

What's Coming Next

Today's launch focuses on Durable Streams—the foundational transport layer. Next, we're publishing the protocols built on top of it.

Electric's current protocol bundles everything together. As we build Electric 2.0, we're separating the foundation from the ecosystem so each piece can be used independently or composed together:

  • Durable Streams (launching today) — The foundational transport layer
  • State Protocol (coming soon) — A composable protocol for state change events (insert/update/delete) that works over any durable stream
  • Database adapters (coming soon) — Postgres, MySQL, SQLite adapters that use the State Protocol

This means you can use Durable Streams for AI token streaming today, then later adopt the State Protocol when you need database sync semantics. Or build your own protocol on top of Streams for presence, CRDTs, or custom real-time features.

Stay tuned for the State Protocol announcement.

Common Questions

Why not just use SSE or WebSockets?

Both SSE and WebSockets are ephemeral—when a connection drops, in-flight data is lost. There's no standard way to resume from a specific position after reconnection, no defined protocol for retrieving historical data before switching to live mode, and neither integrates well with HTTP caching infrastructure or standard API gateways.

Durable Streams keeps you in plain HTTP with standardized resumption through opaque, monotonic offsets, a unified catch-up and live protocol, and offset-based requests that CDNs and browsers can cache.
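
To illustrate the caching point, here's a hypothetical origin handler (Express-style, not the reference server). If the server chunks responses at stable boundaries, a given offset URL for a closed historical range always returns the same bytes, so those responses can carry aggressive cache headers, and only live-tail reads need to reach the origin.

typescript

import express from "express"

const app = express()

// Hypothetical storage read: returns the bytes after `offset` and
// whether the response ends at the live tail of the log.
async function readLog(id: string, offset: string) {
  return { body: `events of ${id} after ${offset}`, atTail: false }
}

app.get("/v1/stream/:id", async (req, res) => {
  const offset = String(req.query.offset ?? "-1")
  const { body, atTail } = await readLog(req.params.id, offset)

  // Closed historical ranges are immutable, so CDNs can cache them;
  // only the live tail has to stay fresh.
  res.setHeader(
    "Cache-Control",
    atTail ? "no-store" : "public, max-age=31536000, immutable"
  )
  res.send(body)
})

app.listen(3000)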

How does this relate to Kafka, RabbitMQ, and other message queues?

Durable Streams complements rather than replaces these systems. Kafka and RabbitMQ excel at server-to-server messaging in backend infrastructure. Durable Streams solves the distinct challenge of reliably streaming data to client applications—handling client diversity, network unreliability, and the economics of per-connection delivery.

The recommended pattern: backend systems (Kafka, databases, queues) → application server → Durable Streams → clients.

What about performance?

Durable Streams is built for production scale. In Electric, changes sync from Postgres through to clients in under 15ms end-to-end. Throughput scales with your infrastructure: the protocol itself adds minimal overhead, and we've tested millions of concurrent clients subscribed to a single stream without degradation.

The offset-based design enables aggressive caching at CDN edges, which means read-heavy workloads (common in sync and AI scenarios) scale horizontally without overwhelming origin servers.

Will there be a hosted version?

We'll be launching our own cloud implementation of Durable Streams early next year.

Get Started

Check out the durable-streams repository to get started, and join us in Discord if you want to explore where this fits in your stack.