High-performance job queue for Bun. Zero external dependencies.
## Why bunqueue?
| Library | Requires |
|---|---|
| BullMQ | Redis |
| Agenda | MongoDB |
| pg-boss | PostgreSQL |
| bunqueue | Nothing |
- BullMQ-compatible API — Same `Queue`, `Worker`, `QueueEvents`
- Zero dependencies — No Redis, no MongoDB
- SQLite persistence — Survives restarts, WAL mode for concurrent access
- Up to 286K ops/sec — Verified benchmarks
## When to use bunqueue
Great for:
- Single-server deployments
- Prototypes and MVPs
- Moderate to high workloads (up to 286K ops/sec)
- Teams that want to avoid Redis operational overhead
- Embedded use cases (CLI tools, edge functions, serverless)
Not ideal for:
- Multi-region distributed systems requiring HA
- Workloads that need automatic failover today
- Systems already running Redis with existing infrastructure
## Why not just use BullMQ?
If you're already running Redis, BullMQ is great — battle-tested and feature-rich.
bunqueue is for when you don't want to run Redis. SQLite with WAL mode handles surprisingly high throughput for single-node deployments (tested up to 286K ops/sec). You get persistence, priorities, delays, retries, cron jobs, and DLQ — without the operational overhead of another service.
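Because the API is BullMQ-compatible, those features should be reachable through the usual BullMQ job options. A minimal sketch, assuming bunqueue accepts the standard option names (`priority`, `delay`, `attempts`, `backoff`, `repeat`):

```ts
import { Queue } from 'bunqueue/client';

const queue = new Queue('reports', { embedded: true });

// Priority, delay, and retries with exponential backoff.
// Option names follow BullMQ; bunqueue support is assumed here.
await queue.add(
  'monthly-report',
  { month: '2024-01' },
  {
    priority: 1,   // lower number = higher priority, as in BullMQ
    delay: 60_000, // wait at least 60s before the job becomes available
    attempts: 3,   // retry up to 3 times before the job is marked failed
    backoff: { type: 'exponential', delay: 1000 },
  },
);

// Cron job via the BullMQ-style `repeat` option.
await queue.add('nightly-cleanup', {}, { repeat: { pattern: '0 3 * * *' } });
```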
## Install

Requires the Bun runtime. Node.js is not supported.
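For embedded mode, install the package (this matches the Setup row in the comparison table below):

```bash
bun add bunqueue
```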
## Two Modes
bunqueue runs in two modes depending on your architecture:
| | Embedded | Server (TCP) |
|---|---|---|
| How it works | Queue runs inside your process | Standalone server, clients connect via TCP |
| Setup | `bun add bunqueue` | `docker run` or `bunqueue start` |
| Performance | 286K ops/sec | 149K ops/sec |
| Best for | Single-process apps, CLIs, serverless | Multiple workers, separate producer/consumer |
| Scaling | Same process only | Multiple clients across machines |
### Embedded Mode
Everything runs in your process. No server, no network, no setup.
```ts
import { Queue, Worker } from 'bunqueue/client';

const queue = new Queue('emails', { embedded: true });

const worker = new Worker(
  'emails',
  async (job) => {
    console.log('Processing:', job.data);
    return { sent: true };
  },
  { embedded: true },
);

await queue.add('welcome', { to: 'user@example.com' });
```
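The feature list above also mentions `QueueEvents`. Assuming it mirrors the BullMQ event API and accepts the same `embedded` flag, a sketch for observing job lifecycle events:

```ts
import { QueueEvents } from 'bunqueue/client';

// Subscribe to lifecycle events for the 'emails' queue.
// Event names and payloads below follow BullMQ's QueueEvents;
// the `embedded` option is assumed to work as it does for Queue/Worker.
const events = new QueueEvents('emails', { embedded: true });

events.on('completed', ({ jobId, returnvalue }) => {
  console.log(`Job ${jobId} completed:`, returnvalue);
});

events.on('failed', ({ jobId, failedReason }) => {
  console.error(`Job ${jobId} failed:`, failedReason);
});
```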
### Server Mode (TCP)
Run bunqueue as a standalone server. Multiple workers and producers connect via TCP.
```bash
# Start with persistent data
docker run -d -p 6789:6789 -p 6790:6790 \
  -v bunqueue-data:/app/data \
  ghcr.io/egeominotti/bunqueue:latest
```

Connect from your app:
```ts
import { Queue, Worker } from 'bunqueue/client';

const queue = new Queue('tasks', {
  connection: { host: 'localhost', port: 6789 },
});

const worker = new Worker(
  'tasks',
  async (job) => {
    return { done: true };
  },
  { connection: { host: 'localhost', port: 6789 } },
);

await queue.add('process', { data: 'hello' });
```
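Workers in server mode hold open TCP connections, so close them cleanly when the process exits. A sketch, assuming `close()` behaves as it does in BullMQ (finish in-flight jobs, then disconnect):

```ts
// Graceful shutdown: stop taking new jobs, finish the current one,
// then release the TCP connections. close() is assumed BullMQ-compatible.
process.on('SIGTERM', async () => {
  await worker.close();
  await queue.close();
  process.exit(0);
});
```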
## Performance
SQLite handles surprisingly high throughput for single-node deployments:
| Mode | Peak Throughput | Use Case |
|---|---|---|
| Embedded | 286K ops/sec | Same process |
| TCP | 149K ops/sec | Distributed workers |
Run `bun run bench` to verify on your hardware. Full benchmark methodology →
## Monitoring

```bash
# Start with Prometheus + Grafana
docker compose --profile monitoring up -d
```

- Grafana: http://localhost:3000 (admin/bunqueue)
- Prometheus: http://localhost:9090
## Documentation

## License
MIT