Show HN: SNKV – SQLite's B-tree as a key-value store (C/C++ and Python bindings)
SQLite has six layers: SQL parser → query planner → VDBE → B-tree → pager → OS (https://sqlite.org/arch.html). For key-value workloads you only need the bottom three.

SNKV cuts the top three layers and talks directly to SQLite's B-tree engine. No SQL strings. No query planner. No VM. Just put/get/delete on the same storage core that powers SQLite.
Python:

    pip install snkv

    from snkv import KVStore

    with KVStore("mydb.db") as db:
        db["hello"] = "world"
        print(db["hello"])  # b"world"
C/C++ (single-header, drop-in):

    #define SNKV_IMPLEMENTATION
    #include "snkv.h"

    KVStore *db;
    kvstore_open("mydb.db", &db, KVSTORE_JOURNAL_WAL);
    kvstore_put(db, "key", 3, "value", 5);
Benchmarks vs SQLite WITHOUT ROWID (1M records, identical settings):

- Sequential writes: +57%
- Random reads: +68%
- Sequential scan: +90%
- Random updates: +72%
- Random deletes: +104%
- Exists checks: +75%
- Mixed workload: +84%
- Bulk insert: +10%
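For context on what these categories measure: each one is essentially a loop of point operations against the store. Here is a rough sketch of the SQLite-side baseline for the random-read case, using Python's stdlib sqlite3 against a WITHOUT ROWID table. This is illustrative only; it is not the repo's harness (that lives in tests/test_benchmark.c), and the table name, key format, and record count are made up for the demo:

```python
import random
import sqlite3
import time

N = 10_000  # demo size; the repo's benchmarks use 1M records

# A WITHOUT ROWID table stores (k, v) directly in the B-tree keyed by k,
# which is the closest plain-SQLite analogue of a KV store.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE kv (k BLOB PRIMARY KEY, v BLOB) WITHOUT ROWID")
con.executemany(
    "INSERT INTO kv VALUES (?, ?)",
    ((f"key{i}".encode(), f"val{i}".encode()) for i in range(N)),
)
con.commit()

# Random reads: N point lookups in shuffled key order.
keys = [f"key{random.randrange(N)}".encode() for _ in range(N)]
start = time.perf_counter()
for k in keys:
    row = con.execute("SELECT v FROM kv WHERE k = ?", (k,)).fetchone()
elapsed = time.perf_counter() - start
print(f"{N} random reads in {elapsed:.3f}s ({N / elapsed:,.0f} ops/s)")
```

The SNKV side of the comparison replaces the SELECT with a direct B-tree lookup; the loop shape is the same.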
Honest tradeoffs:
- LMDB beats it on raw reads (memory-mapped)
- RocksDB beats it on write-heavy workloads (LSM-tree)
- sqlite3 CLI won't open the database (the schema layer is bypassed by design)

What you get: ACID, WAL concurrency, column families, crash safety, with less overhead for read-heavy KV workloads.

OP seems to self-promote this project and other similar vibe-coded works every few weeks under two different HN handles.

Edit: for me this post appears on the front page of HN. OP, this is mission success: add this project to your résumé and stop spamming.

Yeah, I wish Substack would stop doing this too. They keep inserting their brand into HN under different handles.

Nine reposts across the last 21 days, across a couple of fresh accounts, some of which seem to be banned(?):

- Show HN: SNKV – KV store on SQLite's B-tree with 11x less memory than RocksDB (github.com/hash-anu) | 3 points by swaminarayan 6 days ago
- I read 150K lines of SQLite source — here's how its B-Tree powers a KV store (github.com/hash-anu) | 1 point by hashmakjsn 6 days ago | 1 comment
- Show HN: SnkvDB – Single-header ACID KV store using SQLite's B-Tree engine (github.com/hash-anu) | 5 points by hashmakjsn 8 days ago | 1 comment
- Show HN: SNKV benchmark with RocksDB (github.com/hash-anu) | 2 points by hashmakjsn 13 days ago
- Show HN: SNKV and LiteFS – Distributed KV store with automatic replication (github.com/hash-anu) | 3 points by hashmakjsn 14 days ago
- Show HN: In SQLite v3.51.2 skipped query layers and accessed b-tree APIs for KV | 1 point by hashmak_jsn 15 days ago
- Show HN: Developed key value storage using SQLite's b-tree APIs directly (github.com/hash-anu) | 4 points by hashmakjsn 15 days ago
- Show HN: Developed key value storage using SQLite b-tree APIs directly (github.com/hash-anu) | 1 point by hashmak_jsn 20 days ago
- Show HN: SNKV — A Key-Value Store Built Directly on SQLite's B-Tree APIs (github.com/hash-anu) | 1 point by hashmak_jsn 21 days ago

I'm surprised by your benchmark results. I've considered building this exact thing before (I think I've even talked about it on HN), but the reason I didn't build it was that I was sure, on an intuitive level, that the actual overhead of the SQL layer was negligible for simple k/v queries. Where does the +104% on random deletes (for example) actually come from?

Fair skepticism; I had the same intuition going in. The SQL layer overhead alone is probably small, you're right. The bigger
gain comes from a cached read cursor. SQLite opens and closes a cursor on
every operation. SNKV keeps one persistent cursor per column family sitting
open on the B-tree. On random deletes that means seek + delete on an already
warm cursor vs. initialize cursor + seek + delete + close on every call. For deletes there's also prepared-statement overhead in SQLite: even with
prepare/bind/step/reset, that's extra work SNKV just doesn't do.

I'd genuinely like someone else to run the numbers. Benchmark source is in
the repo if you want to poke at it — tests/test_benchmark.c on the SNKV
side and https://github.com/hash-anu/sqllite-benchmark-kv for SQLite. If your
results differ I want to know.

What does "column family" mean in this context?

A named key space within the same database file: keys in "users" don't collide with keys in "sessions", but both share the same WAL and transactions.

Did you measure the performance impact of having multiple trees in a single file vs. one tree per file? I'd assume one per file is faster; is that correct?

No, I don't know about that. I will check it out.

Are you using AI for the comment replies too?! Everyone knows the em-dash is a giveaway, and they are being left in. Are you reading what you're writing? It's a nonstop slop funnel as far as I can tell. Only ashamed I've been here for more than 5 minutes.

yes

It "only" doubles performance, so the overheads aren't that heavy.

Vibe-coded trash. That credulous HN readers will upvote this is alarming.

What's the evidence that this is vibe-coded? Or trash?

If you can't tell, then I'm not sure what more needs to be said. I took a look through the commit history and it was glaringly obvious to me. To trust something like data storage to vibe-coded nonsense is incredibly irresponsible. To promote it is even more so. I'm just surprised you can't tell, too.

The readme seems like it was written to some degree by Claude. If you work long enough with Claude you start to pick up on its style/patterns.

100%. I don't know about trash, but this post, this repo and even their comments on this thread are blatantly written by an AI. If you still need to ask for evidence, consider that you might be AI-blind.

This is not a vibe-coded project; it is developed by understanding SQLite's code. Have you ever looked into the examples? Have you checked the code? Now my post got flagged. And if I use AI to understand code, then what is wrong with that? What is the use of AI? To make a person more productive, right?

You've been copying and pasting directly from Claude to reply to comments that ask how this works. You also realise you've been caught and are now replying in a completely different style. You've thrown away all credibility.
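The cursor-reuse explanation earlier in the thread can be loosely sanity-checked from stdlib Python, without SNKV at all: sqlite3 caches prepared statements per connection, and passing a unique SQL string on every call defeats that cache, forcing a fresh parse + prepare + finalize per lookup, roughly the "cold cursor" end of the spectrum. This is an illustrative sketch against plain SQLite, not a measurement of SNKV; the table name and key set are made up:

```python
import sqlite3
import time

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE kv (k BLOB PRIMARY KEY, v BLOB) WITHOUT ROWID")
con.executemany("INSERT INTO kv VALUES (?, ?)",
                ((f"k{i}".encode(), b"v") for i in range(1000)))
con.commit()

keys = [f"k{i % 1000}".encode() for i in range(20_000)]

# Cold path: a unique comment makes each SQL string distinct, so the
# statement cache never hits and every lookup re-prepares from scratch.
start = time.perf_counter()
for i, k in enumerate(keys):
    con.execute(f"SELECT v FROM kv WHERE k = ? /* {i} */", (k,)).fetchone()
cold = time.perf_counter() - start

# Warm path: the same SQL string every call, so the prepared statement
# is reused across the whole loop.
start = time.perf_counter()
for k in keys:
    con.execute("SELECT v FROM kv WHERE k = ?", (k,)).fetchone()
warm = time.perf_counter() - start

print(f"re-prepare each call: {cold:.3f}s  reuse prepared: {warm:.3f}s")
```

This only isolates statement-level reuse; SNKV's claim is about the B-tree cursor underneath, which this sketch cannot reach from Python.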
I believe it was flagged for spamming, not for "vibe code".

I am showing examples, showing a demo gif (https://github.com/hash-anu/snkv/blob/master/demo.gif), showing code.

I did the same in Rust: SQLite's B-tree behind actix. It is amazing that you don't need Redis anymore.

You never did, if you're happy with local files. See dbm, gdbm, Berkeley DB...

Related: lite3[1], a binary JSON-like serialization format using B-trees.

I just looked into its .c files; it is not using any of SQLite's files. In my project I am using only SQLite's layers, which are battle-tested.

It doesn't beat a hashtable, but it has faster sequential (= ordered) reads, and can do range iterators. The examples do not reflect that. All random accesses are slower.
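The ordered-reads point in that last comment is easy to demonstrate: a B-tree keeps keys sorted, so a range query is a single index scan with no sort step, which a hash table cannot offer. A small sketch using stdlib sqlite3 on a WITHOUT ROWID table (illustrative only; this is not SNKV's API, and the key scheme is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT) WITHOUT ROWID")
con.executemany("INSERT INTO kv VALUES (?, ?)",
                [(f"user:{i:04d}", f"payload{i}") for i in range(100)])

# Range iterator over the B-tree: every key in ["user:0010", "user:0020"),
# returned in key order straight off the index scan.
rows = con.execute(
    "SELECT k FROM kv WHERE k >= ? AND k < ? ORDER BY k",
    ("user:0010", "user:0020"),
).fetchall()
print([k for (k,) in rows])  # user:0010 .. user:0019, in order
```

A dict would need an O(n) filter plus a sort to produce the same answer; the B-tree gets it in O(log n + range size).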