// Connect to object storage (errors ignored for brevity)
store, _ := blobstore.Open(ctx, "s3://my-bucket?region=us-east-1", "db1")
db, _ := isledb.OpenDB(ctx, store, isledb.DBOptions{})
// Write
writer, _ := db.OpenWriter(ctx, isledb.DefaultWriterOptions())
writer.Put([]byte("hello"), []byte("world"))
writer.Flush(ctx)
// Read
reader, _ := isledb.OpenReader(ctx, store, isledb.ReaderOpenOptions{CacheDir: "./cache"})
val, ok, _ := reader.Get(ctx, []byte("hello")) // val == []byte("world"), ok == true
// Tail
tr, _ := isledb.OpenTailingReader(ctx, store, tailOpts)
tr.Tail(ctx, opts, func(kv isledb.KV) error { return nil })
How It Works
LSM-tree ideas, adapted for object storage.
Memtable writes, immutable SST flushes, and reader-side caching provide predictable performance
while your bucket handles durability and scale.
✎
Put(key, value)
Writes land in the memtable. Large values can be stored as blobs.
⚡
Flush(ctx)
Memtable is flushed into immutable SST files on object storage.
☁
S3 / GCS / Azure
Capacity and durability scale with your bucket.
↠
Tail(handler)
Stream new writes continuously, like tail -f for data.
Features
Complete data building blocks in one Go library.
IsleDB combines write/read APIs, compaction strategies, tailing readers for streaming consumption, per-key TTL, and fence-based ownership.
☁
Object Storage Native
Use S3, GCS, Azure Blob, MinIO, or a local object-store-compatible backend.
↠
Tailing Reader
Continuously consume new writes via polling and ordered replay.
⇄
Horizontal Readers
Scale reads by adding more readers against the same bucket prefix.
⚙
Three Compaction Modes
Merge compaction, age-based retention, and time-window retention.
⏱
Per-Key TTL
Use TTL APIs to manage data lifetime without external schedulers.
🔒
Epoch Fencing
Prevent split-brain writer/compactor ownership with manifest fencing.
Architecture
Writer → Object Storage → Readers
One writer flushes SSTs, many readers serve queries, and tailers stream changes.
Manifest fencing protects ownership during contention.
✎ Writer
Memtable → SST → Upload
→
☁ Bucket
SSTs + blobs + manifest
→
🔎 Readers
Get · Scan · Tail
Memtable
Buffers writes to reduce object-store PUT overhead.
SST Files
Immutable sorted files that support efficient reads.
Manifest Log
Tracks applied state and replay ordering.
Local Cache
Readers cache SST files and blocks locally to avoid repeated downloads.
Code
Core usage patterns in a compact view.
ctx := context.Background()
store, err := blobstore.Open(ctx, "s3://my-bucket?region=us-east-1", "db1")
if err != nil {
    return err
}
db, err := isledb.OpenDB(ctx, store, isledb.DBOptions{})
if err != nil {
    return err
}
defer db.Close()
Use Cases
One library, many workloads.
The same storage and replay model can power event ingestion, state materialization,
key-value APIs, and object-storage-first data pipelines.
📨
Event Hub
Ingest app events and fan out with tailing readers.
Common alternative: managed brokers + sinks.
📚
Event Store
Append ordered events and build projections from replay.
Common alternative: dedicated event databases.
🔑
KV API Backing Store
Serve Get/Scan workloads with object-store durability.
Common alternative: managed key-value services.
📈
CDC Pipeline Buffer
Stage changes in object storage before indexing and analytics.
Common alternative: Kafka-centric pipelines.
Decision Guide
When not to use IsleDB
Pick the right tool for the workload. IsleDB is strongest in object-storage-first, append-heavy systems.
❌ Better choices elsewhere
- Sub-10ms latency SLAs
→ Use low-latency serving data stores
- High-frequency point updates to same keys
→ Use update-optimized transactional stores
- Complex queries / joins / transactions
→ Use relational transactional databases
- Small hot datasets (<1GB)
→ Use in-memory stores
✅ Strong fit for IsleDB
- Append-heavy workloads (logs, events, CDC)
- Large datasets where 1–10 second read latency is acceptable
- Multi-reader / fan-out architectures
- Cost-sensitive storage at scale
- Serverless / ephemeral compute
"An isle is self-contained, durable, and reachable from anywhere. That's the model for your data on IsleDB."
IsleDB philosophy — an island of data in an ocean of storage.
Start with a bucket. Build anything.
One go get, one bucket prefix, and a pragmatic API surface for write, read, replay, and stream.