memU is a memory framework built for 24/7 proactive agents. It is designed for long-running use and greatly reduces the LLM token cost of keeping agents always online, making always-on, evolving agents practical in production systems. memU continuously captures and understands user intent. Even without a command, the agent can tell what you are about to do and act on it by itself.
🤖 OpenClaw (Moltbot, Clawdbot) Alternative
memU Bot is now open source: the enterprise-ready OpenClaw alternative, a proactive AI assistant that remembers everything.
- Download-and-use, simple to get started (one-click install, under 3 minutes).
- Builds long-term memory to understand user intent and act proactively, 24/7.
- Cuts LLM token cost with a smaller context (roughly 1/10 of comparable usage).
Try now: memu.bot · Source: memUBot on GitHub
🗂️ Memory as File System, File System as Memory
memU treats memory like a file system: structured, hierarchical, and instantly accessible.
| File System | memU Memory |
|---|---|
| 📁 Folders | 🏷️ Categories (auto-organized topics) |
| 📄 Files | 🧠 Memory Items (extracted facts, preferences, skills) |
| 🔗 Symlinks | 🔗 Cross-references (related memories linked) |
| 💿 Mount points | 📥 Resources (conversations, documents, images) |
Why this matters:
- Navigate memories like browsing directories: drill down from broad categories to specific facts
- Mount new knowledge instantly: conversations and documents become queryable memory
- Cross-link everything: memories reference each other, building a connected knowledge graph
- Persistent & portable: export, backup, and transfer memory like files
```
memory/
├── preferences/
│   ├── communication_style.md
│   └── topic_interests.md
├── relationships/
│   ├── contacts/
│   └── interaction_history/
├── knowledge/
│   ├── domain_expertise/
│   └── learned_skills/
└── context/
    ├── recent_conversations/
    └── pending_tasks/
```
Just as a file system turns raw bytes into organized data, memU transforms raw interactions into structured, searchable, proactive intelligence.
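The analogy is concrete enough to sketch in code. The snippet below is an illustrative toy, not memU's actual storage layer: it stores memory items as markdown files under category folders and navigates them like directories (`write_memory` and `browse` are hypothetical helpers invented for this example).

```python
import tempfile
from pathlib import Path

def write_memory(root: Path, category: str, name: str, content: str) -> Path:
    """Store a memory item as a markdown file inside its category folder."""
    folder = root / category
    folder.mkdir(parents=True, exist_ok=True)
    path = folder / f"{name}.md"
    path.write_text(content, encoding="utf-8")
    return path

def browse(root: Path, category: str) -> list[str]:
    """Navigate a category like a directory listing of memory items."""
    folder = root / category
    return sorted(p.stem for p in folder.glob("*.md")) if folder.is_dir() else []

if __name__ == "__main__":
    root = Path(tempfile.mkdtemp())
    write_memory(root, "preferences", "communication_style", "Prefers concise replies.")
    write_memory(root, "preferences", "topic_interests", "RAG, retrieval systems.")
    print(browse(root, "preferences"))  # ['communication_style', 'topic_interests']
```

Because the items are plain files, the "persistent & portable" property above falls out for free: exporting a memory is copying a directory.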
⭐ Star the repository
If you find memU useful or interesting, a GitHub Star ⭐ would be greatly appreciated.
✨ Core Features
| Capability | Description |
|---|---|
| 🤖 24/7 Proactive Agent | Always-on memory agent that works continuously in the background: never sleeps, never forgets |
| 🎯 User Intention Capture | Understands and remembers user goals, preferences, and context across sessions automatically |
| 💰 Cost Efficient | Reduces long-running token costs by caching insights and avoiding redundant LLM calls |
🔄 How Proactive Memory Works
```shell
cd examples/proactive
python proactive.py
```
Proactive Memory Lifecycle

```
┌──────────────────────────────────────────────────────────────────────────────────────────────┐
│                                          USER QUERY                                          │
└──────────────────────────────────────────────────────────────────────────────────────────────┘
                    │                                                     │
                    ▼                                                     ▼
┌────────────────────────────────────────┐            ┌────────────────────────────────────────┐
│             🤖 MAIN AGENT              │            │              🧠 MEMU BOT               │
│                                        │            │                                        │
│ Handle user queries & execute tasks    │ ─────────► │ Monitor, memorize & proactive          │
│                                        │            │ intelligence                           │
├────────────────────────────────────────┤            ├────────────────────────────────────────┤
│                                        │            │                                        │
│ 1. RECEIVE USER INPUT                  │ ─────────► │ 1. MONITOR INPUT/OUTPUT                │
│    Parse query, understand             │            │    Observe agent interactions          │
│    context and intent                  │            │    Track conversation flow             │
│                │                       │            │                │                       │
│                ▼                       │            │                ▼                       │
│ 2. PLAN & EXECUTE                      │ ◄───────── │ 2. MEMORIZE & EXTRACT                  │
│    Break down tasks                    │   inject   │    Store insights, facts, preferences  │
│    Call tools, retrieve data           │   memory   │    Extract skills & knowledge          │
│    Generate responses                  │            │    Update user profile                 │
│                │                       │            │                │                       │
│                ▼                       │            │                ▼                       │
│ 3. RESPOND TO USER                     │ ─────────► │ 3. PREDICT USER INTENT                 │
│    Deliver answer/result               │            │    Anticipate next steps               │
│    Continue conversation               │            │    Identify upcoming needs             │
│                │                       │            │                │                       │
│                ▼                       │            │                ▼                       │
│ 4. LOOP                                │ ◄───────── │ 4. RUN PROACTIVE TASKS                 │
│    Wait for next user input            │  suggest   │    Pre-fetch relevant context          │
│    or proactive suggestions            │            │    Prepare recommendations             │
│                                        │            │    Update todolist autonomously        │
└────────────────────────────────────────┘            └────────────────────────────────────────┘
                    │                                                     │
                    └──────────────────────────┬──────────────────────────┘
                                               ▼
                              ┌────────────────────────────────┐
                              │      CONTINUOUS SYNC LOOP      │
                              │  Agent ◄──► MemU Bot ◄──► DB   │
                              └────────────────────────────────┘
```
🎯 Proactive Use Cases
1. Information Recommendation
Agent monitors interests and proactively surfaces relevant content
```
# User has been researching AI topics.
# MemU tracks: reading history, saved articles, search queries.

# When new content arrives:
Agent: "I found 3 new papers on RAG optimization that align with your
recent research on retrieval systems. One author (Dr. Chen) you've
cited before published yesterday."

# Proactive behaviors:
# - Learns topic preferences from browsing patterns
# - Tracks author/source credibility preferences
# - Filters noise based on engagement history
# - Times recommendations for optimal attention
```
2. Email Management
Agent learns communication patterns and handles routine correspondence
```
# MemU observes email patterns over time:
# - Response templates for common scenarios
# - Priority contacts and urgent keywords
# - Scheduling preferences and availability
# - Writing style and tone variations

# Proactive email assistance:
Agent: "You have 12 new emails. I've drafted responses for 3 routine
requests and flagged 2 urgent items from your priority contacts.
Should I also reschedule tomorrow's meeting based on the conflict
John mentioned?"

# Autonomous actions:
# ✅ Draft context-aware replies
# ✅ Categorize and prioritize inbox
# ✅ Detect scheduling conflicts
# ✅ Summarize long threads with key decisions
```
3. Trading & Financial Monitoring
Agent tracks market context and user investment behavior
```
# MemU learns trading preferences:
# - Risk tolerance from historical decisions
# - Preferred sectors and asset classes
# - Response patterns to market events
# - Portfolio rebalancing triggers

# Proactive alerts:
Agent: "NVDA dropped 5% in after-hours trading. Based on your past
behavior, you typically buy tech dips above 3%. Your current allocation
allows for $2,000 additional exposure while maintaining your 70/30
equity-bond target."

# Continuous monitoring:
# - Track price alerts tied to user-defined thresholds
# - Correlate news events with portfolio impact
# - Learn from executed vs. ignored recommendations
# - Anticipate tax-loss harvesting opportunities
```
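The dollar figure in an alert like the one above is simple algebra over the portfolio state. The sketch below is an illustration of that arithmetic, not memU code; the portfolio numbers are hypothetical values chosen to reproduce the $2,000 example under a 70/30 equity-bond target.

```python
def equity_headroom(portfolio_value: float, equity_value: float,
                    equity_target: float = 0.70) -> float:
    """Max dollars of new equity purchasable before exceeding the target weight.

    Buying x dollars of equity with fresh cash makes the equity weight
    (equity_value + x) / (portfolio_value + x); solving for the x where
    that equals equity_target gives the remaining headroom.
    """
    x = (equity_target * portfolio_value - equity_value) / (1.0 - equity_target)
    return max(0.0, x)  # never suggest a negative purchase

if __name__ == "__main__":
    # A $100k portfolio holding $69.4k of equity has $2,000 of headroom.
    print(round(equity_headroom(100_000, 69_400), 2))  # 2000.0
```

A proactive agent would recompute this after every price move and only surface an alert when the headroom crosses a user-relevant threshold.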
...
🏗️ Hierarchical Memory Architecture
MemU's three-layer system enables both reactive queries and proactive context loading:
| Layer | Reactive Use | Proactive Use |
|---|---|---|
| Resource | Direct access to original data | Background monitoring for new patterns |
| Item | Targeted fact retrieval | Real-time extraction from ongoing interactions |
| Category | Summary-level overview | Automatic context assembly for anticipation |
Proactive Benefits:
- Auto-categorization: New memories self-organize into topics
- Pattern Detection: System identifies recurring themes
- Context Prediction: Anticipates what information will be needed next
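The three layers can be pictured with plain data structures. This is a schematic sketch of the hierarchy (the class and function names are invented for illustration, not memU's API), showing two reactive uses from the table: a summary-level overview at the Category layer, and drill-down from an Item to the Resource it was extracted from.

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    """Original data: a conversation, document, image, ..."""
    url: str

@dataclass
class Item:
    """An extracted fact, preference, or skill, linked to its source."""
    text: str
    source: Resource

@dataclass
class Category:
    """A summary-level topic grouping related items."""
    name: str
    summary: str = ""
    items: list[Item] = field(default_factory=list)

def overview(categories: list[Category]) -> dict[str, int]:
    """Category layer: summary-level view (name -> item count)."""
    return {c.name: len(c.items) for c in categories}

def trace(item: Item) -> str:
    """Resource layer: drill down from a fact to its original source."""
    return item.source.url

if __name__ == "__main__":
    chat = Resource("conversations/2024-05-01.json")
    prefs = Category("preferences", items=[Item("Prefers concise replies", chat)])
    print(overview([prefs]))      # {'preferences': 1}
    print(trace(prefs.items[0]))  # conversations/2024-05-01.json
```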
🚀 Quick Start
Option 1: Cloud Version
Experience proactive memory instantly:
🌐 memu.so - Hosted service with 7×24 continuous learning
For enterprise deployment with custom proactive workflows, contact info@nevamind.ai
Cloud API (v3)
| Base URL | https://api.memu.so |
|---|---|
| Auth | Authorization: Bearer YOUR_API_KEY |
| Method | Endpoint | Description |
|---|---|---|
| POST | `/api/v3/memory/memorize` | Register continuous learning task |
| GET | `/api/v3/memory/memorize/status/{task_id}` | Check real-time processing status |
| POST | `/api/v3/memory/categories` | List auto-generated categories |
| POST | `/api/v3/memory/retrieve` | Query memory (supports proactive context loading) |
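As a sketch of how the table maps onto a real call, the helper below builds (without sending) a request to the memorize endpoint using the base URL and bearer auth shown above. The JSON payload fields are assumptions for illustration; check the API documentation for the actual request schema.

```python
import json
import urllib.request

BASE_URL = "https://api.memu.so"

def build_memorize_request(api_key: str, resource_url: str,
                           modality: str = "conversation") -> urllib.request.Request:
    """Build a POST to /api/v3/memory/memorize (payload fields are illustrative)."""
    payload = json.dumps({"resource_url": resource_url, "modality": modality}).encode()
    return urllib.request.Request(
        url=f"{BASE_URL}/api/v3/memory/memorize",
        data=payload,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    req = build_memorize_request("YOUR_API_KEY", "chat_2024_05.json")
    print(req.get_method(), req.full_url)
    # Send with: urllib.request.urlopen(req)
```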
Option 2: Self-Hosted
Installation
Basic Example
Requirements: Python 3.13+ and an OpenAI API key
Test Continuous Learning (in-memory):
```shell
export OPENAI_API_KEY=your_api_key
cd tests
python test_inmemory.py
```
Test with Persistent Storage (PostgreSQL):
```shell
# Start PostgreSQL with pgvector
docker run -d \
  --name memu-postgres \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_PASSWORD=postgres \
  -e POSTGRES_DB=memu \
  -p 5432:5432 \
  pgvector/pgvector:pg16

# Run continuous learning test
export OPENAI_API_KEY=your_api_key
cd tests
python test_postgres.py
```
Both examples demonstrate proactive memory workflows:
- Continuous Ingestion: Process multiple files sequentially
- Auto-Extraction: Immediate memory creation
- Proactive Retrieval: Context-aware memory surfacing
See tests/test_inmemory.py and tests/test_postgres.py for implementation details.
Custom LLM and Embedding Providers
MemU supports custom LLM and embedding providers beyond OpenAI. Configure them via llm_profiles:
```python
from memu import MemUService

service = MemUService(
    llm_profiles={
        # Default profile for LLM operations
        "default": {
            "base_url": "https://dashscope.aliyuncs.com/compatible-mode/v1",
            "api_key": "your_api_key",
            "chat_model": "qwen3-max",
            "client_backend": "sdk"  # "sdk" or "http"
        },
        # Separate profile for embeddings
        "embedding": {
            "base_url": "https://api.voyageai.com/v1",
            "api_key": "your_voyage_api_key",
            "embed_model": "voyage-3.5-lite"
        }
    },
    # ... other configuration
)
```
OpenRouter Integration
MemU supports OpenRouter as a model provider, giving you access to multiple LLM providers through a single API.
Configuration
```python
from memu import MemoryService

service = MemoryService(
    llm_profiles={
        "default": {
            "provider": "openrouter",
            "client_backend": "httpx",
            "base_url": "https://openrouter.ai",
            "api_key": "your_openrouter_api_key",
            "chat_model": "anthropic/claude-3.5-sonnet",      # Any OpenRouter model
            "embed_model": "openai/text-embedding-3-small",   # Embedding model
        },
    },
    database_config={
        "metadata_store": {"provider": "inmemory"},
    },
)
```
Environment Variables
| Variable | Description |
|---|---|
| `OPENROUTER_API_KEY` | Your OpenRouter API key from openrouter.ai/keys |
Supported Features
| Feature | Status | Notes |
|---|---|---|
| Chat Completions | Supported | Works with any OpenRouter chat model |
| Embeddings | Supported | Use OpenAI embedding models via OpenRouter |
| Vision | Supported | Use vision-capable models (e.g., openai/gpt-4o) |
Running OpenRouter Tests
```shell
export OPENROUTER_API_KEY=your_api_key

# Full workflow test (memorize + retrieve)
python tests/test_openrouter.py

# Embedding-specific tests
python tests/test_openrouter_embedding.py

# Vision-specific tests
python tests/test_openrouter_vision.py
```
See examples/example_4_openrouter_memory.py for a complete working example.
📚 Core APIs
memorize() - Continuous Learning Pipeline
Processes inputs in real-time and immediately updates memory:
```python
result = await service.memorize(
    resource_url="path/to/file.json",  # File path or URL
    modality="conversation",           # conversation | document | image | video | audio
    user={"user_id": "123"}            # Optional: scope to a user
)

# Returns immediately with extracted memory:
# {
#   "resource": {...},    # Stored resource metadata
#   "items": [...],       # Extracted memory items (available instantly)
#   "categories": [...]   # Auto-updated category structure
# }
```
Proactive Features:
- Zero-delay processing: memories available immediately
- Automatic categorization without manual tagging
- Cross-reference with existing memories for pattern detection
retrieve() - Dual-Mode Intelligence
MemU supports both proactive context loading and reactive querying:
RAG-based Retrieval (method="rag")
Fast proactive context assembly using embeddings:
- ✅ Instant context: Sub-second memory surfacing
- ✅ Background monitoring: Can run continuously without LLM costs
- ✅ Similarity scoring: Identifies most relevant memories automatically
LLM-based Retrieval (method="llm")
Deep anticipatory reasoning for complex contexts:
- ✅ Intent prediction: LLM infers what user needs before they ask
- ✅ Query evolution: Automatically refines search as context develops
- ✅ Early termination: Stops when sufficient context is gathered
Comparison
| Aspect | RAG (Fast Context) | LLM (Deep Reasoning) |
|---|---|---|
| Speed | ⚡ Milliseconds | 🐢 Seconds |
| Cost | 💰 Embedding only | 💰💰 LLM inference |
| Proactive use | Continuous monitoring | Triggered context loading |
| Best for | Real-time suggestions | Complex anticipation |
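One way to apply the table is to let the caller's latency budget pick the method. `retrieve_with_budget` below is a hypothetical wrapper (not part of memU) around the documented `retrieve(..., method=...)` call: sub-second budgets take the cheap RAG path, while anything slower can afford LLM reasoning.

```python
import asyncio

async def retrieve_with_budget(service, queries, where=None,
                               latency_budget_s: float = 1.0):
    """Choose the retrieval mode from the latency budget: fast RAG vs. deep LLM."""
    method = "rag" if latency_budget_s < 1.0 else "llm"
    return await service.retrieve(queries=queries, where=where, method=method)

if __name__ == "__main__":
    class EchoService:
        """Stand-in service that just reports which method was chosen."""
        async def retrieve(self, queries, where=None, method="rag"):
            return method

    print(asyncio.run(retrieve_with_budget(EchoService(), [], latency_budget_s=0.1)))  # rag
    print(asyncio.run(retrieve_with_budget(EchoService(), [], latency_budget_s=5.0)))  # llm
```

A background monitor would call this with a tight budget on every turn, escalating to the LLM mode only when a proactive task is actually triggered.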
Usage
```python
# Proactive retrieval with context history
result = await service.retrieve(
    queries=[
        {"role": "user", "content": {"text": "What are their preferences?"}},
        {"role": "user", "content": {"text": "Tell me about work habits"}}
    ],
    where={"user_id": "123"},  # Optional: scope filter
    method="rag"               # or "llm" for deeper reasoning
)

# Returns context-aware results:
# {
#   "categories": [...],      # Relevant topic areas (auto-prioritized)
#   "items": [...],           # Specific memory facts
#   "resources": [...],       # Original sources for traceability
#   "next_step_query": "..."  # Predicted follow-up context
# }
```
Proactive Filtering: Use `where` to scope continuous monitoring:
- `where={"user_id": "123"}` - User-specific context
- `where={"agent_id__in": ["1", "2"]}` - Multi-agent coordination
- Omit `where` for global context awareness
💡 Proactive Scenarios
Example 1: Always-Learning Assistant
Continuously learns from every interaction without explicit memory commands:
```shell
export OPENAI_API_KEY=your_api_key
python examples/example_1_conversation_memory.py
```
Proactive Behavior:
- Automatically extracts preferences from casual mentions
- Builds relationship models from interaction patterns
- Surfaces relevant context in future conversations
- Adapts communication style based on learned preferences
Best for: Personal AI assistants, customer support that remembers, social chatbots
Example 2: Self-Improving Agent
Learns from execution logs and proactively suggests optimizations:
```shell
export OPENAI_API_KEY=your_api_key
python examples/example_2_skill_extraction.py
```
Proactive Behavior:
- Monitors agent actions and outcomes continuously
- Identifies patterns in successes and failures
- Auto-generates skill guides from experience
- Proactively suggests strategies for similar future tasks
Best for: DevOps automation, agent self-improvement, knowledge capture
Example 3: Multimodal Context Builder
Unifies memory across different input types for comprehensive context:
```shell
export OPENAI_API_KEY=your_api_key
python examples/example_3_multimodal_memory.py
```
Proactive Behavior:
- Cross-references text, images, and documents automatically
- Builds unified understanding across modalities
- Surfaces visual context when discussing related topics
- Anticipates information needs by combining multiple sources
Best for: Documentation systems, learning platforms, research assistants
📊 Performance
MemU achieves 92.09% average accuracy on the Locomo benchmark across all reasoning tasks, demonstrating reliable proactive memory operations.
View detailed experimental data: memU-experiment
🧩 Ecosystem
| Repository | Description | Proactive Features |
|---|---|---|
| memU | Core proactive memory engine | 7×24 learning pipeline, auto-categorization |
| memU-server | Backend with continuous sync | Real-time memory updates, webhook triggers |
| memU-ui | Visual memory dashboard | Live memory evolution monitoring |
Quick Links:
- 🌐 Try MemU Cloud
- 📚 API Documentation
- 💬 Discord Community
🤝 Partners
🤝 How to Contribute
We welcome contributions from the community! Whether you're fixing bugs, adding features, or improving documentation, your help is appreciated.
Getting Started
To start contributing to MemU, you'll need to set up your development environment:
Prerequisites
- Python 3.13+
- uv (Python package manager)
- Git
Setup Development Environment
```shell
# 1. Fork and clone the repository
git clone https://github.com/YOUR_USERNAME/memU.git
cd memU

# 2. Install development dependencies
make install
```
The `make install` command will:
- Create a virtual environment using `uv`
- Install all project dependencies
- Set up pre-commit hooks for code quality checks
Running Quality Checks
Before submitting your contribution, run `make check` to ensure your code passes all quality checks. It runs:
- Lock file verification: ensures `pyproject.toml` consistency
- Pre-commit hooks: lints code with Ruff, formats with Black
- Type checking: runs `mypy` for static type analysis
- Dependency analysis: uses `deptry` to find obsolete dependencies
Contributing Guidelines
For detailed contribution guidelines, code standards, and development practices, please see CONTRIBUTING.md.
Quick tips:
- Create a new branch for each feature or bug fix
- Write clear commit messages
- Add tests for new functionality
- Update documentation as needed
- Run `make check` before pushing
📄 License
🌐 Community
- GitHub Issues: Report bugs & request features
- Discord: Join the community
- X (Twitter): Follow @memU_ai
- Contact: info@nevamind.ai
⭐ Star us on GitHub to get notified about new releases!