Show HN: MuninnDB – ACT-R decay and Hebbian memory for AI agents
Hi HN,
After building several AI agent systems, I kept running into the same frustration: memory layers that are either static vector stores or fragile prompt hacks. Retrieval is opaque, forgetting happens at the wrong time, and associations don’t form naturally.
So I threw away the two production memory systems I had and built something different. MuninnDB is a purpose-built cognitive memory database where memories (called engrams) are first-class citizens that:
- Strengthen with repeated co-activation (Hebbian learning)
- Decay over time using a verbatim ACT-R formula
- Automatically form bidirectional associations
- Track their own Bayesian confidence
- Return a full mathematical "Why" explanation on every retrieval
Everything runs as a single static Go binary (embedded Pebble LSM storage + HNSW + BM25). No external services, no Redis/Postgres/Pinecone, and no LLM in the hot path. One command (`muninn init`) auto-configures it for Cursor, Claude Desktop, VS Code, and any other MCP-compatible tool.
The core call is dead simple: Activate(context) returns ranked results + explainable scoring. Background workers handle learning and decay on every read.
GitHub: https://github.com/scrypster/muninndb
Website + docs + install (one-liner): https://muninndb.com
Quick 13-minute demo video: https://www.youtube.com/watch?v=b29wl0ehrQI
It's very early (alpha, ~10 days old), but it's already functional and I'm using it daily. Would love honest feedback or questions from anyone working on agent memory, long-term RAG, or cognitive architectures.
Thanks!

xing_horizon: Interesting direction. Treating decay/confidence as engine-native primitives is closer to what multi-agent systems need than raw similarity search. One practical thing to watch in production: expose provenance + freshness semantics at query time so downstream agents can decide whether to trust, refresh, or ignore a recalled memory.

@xing_horizon Thanks! I really appreciate the feedback. You're spot on that downstream agents need clear signals to decide whether to trust, refresh, or ignore a recalled memory. Right now `Activate()` already returns:
- Bayesian confidence per engram
- Full mathematical "Why" explanation (the 6-phase pipeline with exact contributions from ACT-R temporal decay, Hebbian strength, content match, etc.)
- `last_access` timestamp + access frequency (which directly feeds the decay calculation)

This already gives a solid freshness signal via the temporal weighting. Provenance (original source + creation context) is tracked internally but not yet exposed as clean first-class fields in the response... excellent callout, and that's jumping up the roadmap. Would love to hear what specific provenance/freshness fields have worked best in the multi-agent systems you've worked with.

Wanted to follow up on this... I dug into the codebase after your comment. You're right that the data is all there (last_access, access_count, raw Ebbinghaus relevance score, provenance source type), but it's siloed behind secondary tool calls rather than inline in the Activate response. That's the gap. I'm going to surface those fields directly in the next release so agents can make trust/refresh/ignore decisions in a single round trip, without needing to call muninn_read or muninn_provenance after the fact. Thanks for the sharp feedback! Exactly the kind of thing that comes from actually building multi-agent systems in production.

sniderwebdev: Currently using this with both Cursor AI and the Claude CLI. Both integrations are running flawlessly and have boosted my productivity enormously. I love the direction you are going with this tool, and so far I would absolutely recommend it to anyone looking to give their AI tooling the super-jet it's missing.

sniderwebdev, Thank you! This is the project I just posted. Happy to dive into any details... the exact ACT-R decay formula, how the Hebbian graph updates in log space, the 6-phase Activate pipeline, or why I went with embedded Pebble. Fire away!

Brilliant job! I think you have landed on fundamental concepts that will be core to agents and AGI going forward. Congrats on a very impressive project, video, website, etc. I think this is going to be big! Please consider writing an academic paper, since that could help you reach many more top AI researchers.