Show HN: Mneme–Persistent memory for AI agents without vector search or RAG

github.com

1 point by xqli a month ago · 5 comments

ClaireGz a month ago

Interesting direction.

One thing I keep seeing in practice is that “memory” problems are often less about storage and more about structure + retrieval strategy.

Vector search helps sometimes, but for a lot of agent workflows we’ve had better results with explicit context organization (files, metadata, rules) rather than semantic similarity alone.

Curious how you’re thinking about memory updates over time — append-only vs rewriting summaries?

  • xqliOP a month ago

    That matches our experience pretty closely. A lot of “memory” issues we saw weren’t about storage capacity, but about what kind of information is allowed to persist and how it’s structured. Once everything is flattened into one blob, retrieval strategy becomes the only lever left — which is where vectors often get overused.

    In Mneme, updates are intentionally asymmetric:

    – Facts are append-only and explicitly curated (they’re meant to be boring and stable).
    – Task state is rewritten as work progresses.
    – Context is disposable and aggressively compacted or dropped.

    The idea is that only a small subset of information deserves long-term durability; everything else should be easy to overwrite or forget.

    This reduces the need for heavy retrieval logic in the first place, since the model is usually operating over a much smaller, more explicit working set.
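    A minimal sketch of that asymmetry (names and types are hypothetical, not Mneme’s actual API):

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    """Hypothetical sketch: three tiers with different update rules."""
    facts: list[str] = field(default_factory=list)    # append-only, curated
    task_state: dict = field(default_factory=dict)    # rewritten in place
    context: list[str] = field(default_factory=list)  # disposable

    def add_fact(self, fact: str) -> None:
        # Facts are never edited or deleted, only appended.
        self.facts.append(fact)

    def update_task(self, **state) -> None:
        # Task state is overwritten as work progresses.
        self.task_state.update(state)

    def compact_context(self, keep_last: int = 5) -> None:
        # Ephemeral context is aggressively dropped.
        self.context = self.context[-keep_last:]
```

    The point is that the update rule lives in the structure itself, so the model only ever sees a small, explicit working set.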

xqliOP a month ago

Mneme came out of long-running AI coding sessions where important state kept getting lost due to context compaction.

Instead of retrieval or embeddings, it treats memory as an explicit, structured artifact and separates:

– stable facts
– task state
– ephemeral context

The goal is to make memory boring, inspectable, and durable across sessions.

Happy to answer questions or hear why this is a bad idea.

remembradev 20 days ago

Interesting take on avoiding vectors. We landed somewhere in between with Remembra: hybrid search (vectors + BM25 fusion), but the real win came from entity resolution.

The insight: most 'memory failures' in coding sessions aren't retrieval failures, they're identity failures. The agent knows you mentioned 'David' and 'Mr. Kim' and 'the client' - but doesn't know they're the same person.

We added automatic entity graphs that link these references, so when you search for anything about the project, you get the full context of WHO was involved, not just what was said.
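A toy version of that alias-linking idea (names hypothetical, not Remembra’s actual implementation — just a flat alias-to-canonical map rather than a full graph):

```python
class EntityGraph:
    """Hypothetical sketch: resolve different mentions to one canonical entity."""

    def __init__(self) -> None:
        self.canonical: dict[str, str] = {}  # lowercased alias -> canonical name

    def link(self, alias: str, canonical: str) -> None:
        self.canonical[alias.lower()] = canonical

    def resolve(self, mention: str) -> str:
        # Unknown mentions pass through unchanged.
        return self.canonical.get(mention.lower(), mention)

g = EntityGraph()
for alias in ("David", "Mr. Kim", "the client"):
    g.link(alias, "David Kim")
```

With that in place, a search for any one alias can pull in memories stored under the others.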

Re: append-only vs rewriting - we do both via temporal decay. Old memories naturally lose salience over time (Ebbinghaus-style) unless they keep getting referenced. Lets you avoid explicit curation without accumulating noise.
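The decay curve is roughly this shape (the half-life value here is an assumption for illustration, not Remembra’s actual parameter; “days since last reference” resets whenever a memory is touched):

```python
import math

def salience(initial: float, days_since_ref: float, half_life: float = 30.0) -> float:
    """Ebbinghaus-style exponential decay of a memory's salience.

    A memory that keeps getting referenced keeps days_since_ref near zero,
    so it never fades; untouched memories halve every `half_life` days.
    """
    return initial * math.exp(-math.log(2) * days_since_ref / half_life)
```

Curation then becomes a threshold query ("salience > x") instead of a manual editing pass.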

GitHub if useful: https://github.com/remembra-ai/remembra
