Show HN: MuninnDB – ACT-R decay and Hebbian memory for AI agents
Hi HN,
After building several AI agent systems, I kept running into the same frustration: memory layers that are either static vector stores or fragile prompt hacks. Retrieval is opaque, forgetting happens at the wrong time, and associations don’t form naturally.
So I threw away the two production memory systems I had and built something different. MuninnDB is a purpose-built cognitive memory database where memories (called engrams) are first-class citizens that:
- Strengthen with repeated co-activation (Hebbian learning)
- Decay over time using a verbatim ACT-R formula
- Automatically form bidirectional associations
- Track their own Bayesian confidence
- Return a full mathematical "Why" explanation on every retrieval
Everything runs as a single static Go binary (embedded Pebble LSM storage + HNSW + BM25). No external services, no Redis/Postgres/Pinecone, and no LLM in the hot path. One command (muninn init) auto-configures it with Cursor, Claude Desktop, VS Code, and any other MCP-compatible tool.
The core call is dead simple: Activate(context) returns ranked results + explainable scoring. Background workers handle learning and decay on every read.
GitHub: https://github.com/scrypster/muninndb
Website + docs + install (one-liner): https://muninndb.com
Quick 13-minute demo video: https://www.youtube.com/watch?v=b29wl0ehrQI
It’s very early (alpha, ~10 days old), but it’s already functional and I’m using it daily. Would love honest feedback or questions from anyone working on agent memory, long-term RAG, or cognitive architectures.
Thanks!

Interesting direction. Treating decay/confidence as engine-native primitives is closer to what multi-agent systems need than raw similarity search. One practical thing to watch in production: expose provenance + freshness semantics at query time so downstream agents can decide whether to trust, refresh, or ignore a recalled memory.

This is the project I just posted. Happy to dive into any details... the exact ACT-R decay formula, how the Hebbian graph updates in log space, the 6-phase Activate pipeline, or why I went with embedded Pebble. Fire away!