Sayou – Open-source Dropbox for AI agents
The success of openclaw is incredible, and I really loved playing with it, but the obvious problem is the security risk of running it on my desktop. That's fine for personal use and fun, but not at all for professional use.
Once agents run in the cloud (and they will — you're not giving your marketing manager a terminal), they need somewhere to read and write. Right now there's nothing. Your company's knowledge is locked in Notion, Slack, Google Drive, and the agent can see none of it unless someone copy-pastes it into the chat window every single time.
sayou is basically Dropbox for agents. It syncs your SaaS tools into a single workspace of versioned Markdown files, and any agent connects to it over MCP. Multiple agents share the same workspace: one does research and writes findings, another picks them up to draft a report, a third sends it out over email. The human just makes decisions.
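To make the MCP part concrete, here's a minimal agent-side sketch using the official MCP Python SDK. The server command and the tool name below are stand-ins, not necessarily what sayou actually exposes — check the repo for the real setup.

  import asyncio
  from mcp import ClientSession, StdioServerParameters
  from mcp.client.stdio import stdio_client

  # Assumption: a local sayou MCP server command; the real command and tool
  # names may differ -- see the repo for the actual setup.
  server = StdioServerParameters(command="sayou", args=["mcp"])

  async def main():
      async with stdio_client(server) as (read, write):
          async with ClientSession(read, write) as session:
              await session.initialize()
              tools = await session.list_tools()  # discover the workspace tools
              print([t.name for t in tools.tools])
              # Hypothetical tool name: write a research note into the shared workspace.
              await session.call_tool(
                  "write_document",
                  arguments={"path": "research/pricing.md", "content": "# Findings\n..."},
              )

  asyncio.run(main())

The nice part is that any MCP-capable agent can point at the same workspace, which is what makes the multi-agent handoff above work.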
Why files and not a database? Honestly we tried a database first. It sucked. Agents don't think in rows. When you ask an agent to research competitor pricing, the natural output is a document with sections and citations, not 15 entries in a table. Markdown with YAML frontmatter turned out to be the sweet spot — you get structured metadata for querying and rich content that both agents and humans can actually read.
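If you haven't seen the frontmatter pattern before, here's a rough sketch using the python-frontmatter library. The fields shown are illustrative, not sayou's exact schema.

  import frontmatter  # pip install python-frontmatter

  note = frontmatter.loads("""---
  title: Competitor pricing research
  tags: [pricing, research]
  status: draft
  ---
  ## Findings

  Acme charges per seat; Globex bills by usage ...
  """)

  # Structured metadata you can query on ...
  if "pricing" in note.metadata.get("tags", []):
      print(note.metadata["title"], note.metadata["status"])

  # ... and rich content that both agents and humans can read.
  print(note.content)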
I also built a benchmark, the Structured Agent Memory Benchmark (SAMB), to test how memory systems like Mem0 and Zep compare to sayou in a real-world scenario of agents running tasks. For coding work, that means questions like "why did we choose bcrypt over Argon2?" and "what changed between the first and second architecture review?", the kind of things you'd actually need an agent to recall.
Results against Mem0 and Zep:
sayou 67.0% (files + FTS5)
Mem0 18.5% (embeddings)
Zep 14.2% (knowledge graph)
That's a 3.6x gap over the next best system. The biggest difference was on decision reasoning: 68% vs. 8%. It turns out embedding similarity is great for vibes-based retrieval but terrible when you need a specific paragraph from a specific document. Files with full-text search let agents grep, read sections, and follow references. Simple, and it works.
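The files + FTS5 side is nothing exotic. A minimal sketch of the idea, with made-up documents (assumes your SQLite build ships FTS5, which Python's bundled one usually does):

  import sqlite3

  db = sqlite3.connect(":memory:")
  db.execute("CREATE VIRTUAL TABLE notes USING fts5(path, body)")
  db.executemany("INSERT INTO notes VALUES (?, ?)", [
      ("decisions/password-hashing.md",
       "We chose bcrypt over Argon2 because of existing library support ..."),
      ("reviews/architecture-v2.md",
       "The second architecture review replaced the in-process queue with SQS ..."),
  ])

  # Full-text search lets an agent pull the exact passage it needs,
  # instead of hoping embedding similarity lands on the right document.
  for path, snippet in db.execute(
      "SELECT path, snippet(notes, 1, '[', ']', '...', 12) "
      "FROM notes WHERE notes MATCH ? ORDER BY rank", ("bcrypt AND Argon2",)
  ):
      print(path, "->", snippet)

The point is that the agent gets back an exact span it can quote or follow up on, not a nearest-neighbor guess.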
Honestly, this is my first time building a benchmark, so I'd love for other engineers to try sayou as their agent memory system, and also to run their own fair comparisons of sayou on agent task completion.
If you're building agents that need to persist and share knowledge, please take a look at sayou's open-source project at https://github.com/pixell-global/sayou

The Markdown-over-database choice makes sense for document-shaped output.
The harder problem seems to be concurrent semantic edits. Git-style merging works for code because conflicts are syntactic. With prose, two agents can produce logically conflicting conclusions without triggering a merge conflict.
How does Sayou reason about semantic divergence when Agent A updates a research note while Agent B is drafting against an older snapshot?
The current approach is last-write-wins with version history: simple, but it doesn't solve concurrent edits.
I don't think auto-merge is the right default for prose/research (unlike code, where Git's merge works). When Agent A writes a strategic memo concluding X while Agent B writes one concluding NOT-X, merging the two is worse than surfacing the conflict.
I'm thinking the right model is:

∙ Optimistic writes (current behavior) for most cases
∙ Explicit locks for high-stakes docs agents know they're collaborating on
∙ Diff tooling for post-hoc resolution when conflicts do occur
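Roughly, as a sketch (names are hypothetical, not what sayou ships today):

  import difflib, time
  from dataclasses import dataclass, field

  @dataclass
  class Doc:
      versions: list = field(default_factory=list)  # (timestamp, agent, text)
      locked_by: str = ""

      def write(self, agent, text):
          # Optimistic last-write-wins, but every version is kept for review.
          if self.locked_by and self.locked_by != agent:
              raise RuntimeError(f"{self.locked_by} holds the lock on this doc")
          self.versions.append((time.time(), agent, text))

      def lock(self, agent):
          # Explicit lock for high-stakes docs agents know they're co-editing.
          if self.locked_by and self.locked_by != agent:
              raise RuntimeError("already locked")
          self.locked_by = agent

      def conflict_report(self):
          # Diff tooling: surface the divergence instead of auto-merging prose.
          if len(self.versions) < 2:
              return []
          a, b = self.versions[-2][2], self.versions[-1][2]
          return list(difflib.unified_diff(a.splitlines(), b.splitlines(), lineterm=""))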
Thanks for asking a great question. What are your thoughts?
Cool project. Took a few minutes to set up with Claude Code. Asked it to save some research notes, closed the session, opened a new one and it actually remembered. That's pretty much what I wanted. Nice work.
Thanks! We just shipped a Claude Code plugin, so setup is now one command:

  claude plugin install sayou@pixell-global

It adds hooks that automatically show your workspace at session start and capture your work in the background, so the "it actually remembered" experience you got works without even asking it to save.