# goto-assistant

Personal AI assistant that remembers past conversations, runs scheduled tasks, and works on both web and WhatsApp. Supports Claude, OpenAI, and OpenAI-compatible providers (Gemini, Groq, Ollama, etc.).
## Quick Start

Run `npx goto-assistant`, then open http://localhost:3000 — the first run redirects to the setup page for API key configuration.
## Requirements

- Node.js 20.11 or later — `npx` runs the app and most MCP servers
- uv — `uvx` runs the time MCP server (Python-based)
- An Anthropic, OpenAI, or OpenAI-compatible API key (Gemini, Groq, Ollama, etc.)
## Data Storage

All data (config, conversations, uploads) is stored in `~/.goto-assistant/`.

Custom location: `GOTO_DATA_DIR=/path/to/data npx goto-assistant`
## Custom Port

`PORT=3001 npx goto-assistant`
## Why goto-assistant?
One command, no Docker, no framework — just MCP. Chat from the web or WhatsApp.
```
                You
                 │
             chat / ask
                 │
                 ▼
        ┌──────────────────┐
        │   AI Assistant   │
        └──┬──┬──┬──┬──┬──┘
           │  │  │  │  │
           │  │  │  │  └── create / update / run / ──▶ ┌──────────────┐
           │  │  │  │      schedule / get results      │     Cron     │──── ┐
           │  │  │  └───── read / write ─────────────▶ ├──────────────┤     │
           │  │  │                                     │    Files     │  AI tasks
           │  │  └──────── remember / recall ────────▶ ├──────────────┤  w/ MCP
           │  │                                        │    Memory    │◀── access
           │  └── recall conversations & task runs ──▶ ├──────────────┤
           │                                           │   Episodic   │
           │                                           ├──────────────┤
           └────────── do anything ──────────────────▶ │  + your MCP  │
                                                       │   servers    │
                                                       └──────────────┘
```
That one npx command gives you an AI assistant that can remember across conversations, search past interactions, manage your files, and run tasks on a schedule or on-demand — all through the standard MCP protocol. Add any MCP server to extend it further.
## See it in action

### Setup

- **First run** (`setup_with_whatsapp_web.mp4`) — provider, API key & WhatsApp setup.
- **Adding an MCP server** (`mcp_setup_web.mp4`) — add MCP servers through the setup wizard. The assistant verifies each server before saving (demo trimmed for brevity — verification can take several minutes, for security purposes).

### Tasks

- **Create a task** (`create_task_web.mp4`) — ask the assistant to create an on-demand task.
- **Update a task** (`update_task_web.mp4`) — modify task prompts, commands, or settings through chat.
- **Run a task & compare results** (`run_task_web.mp4`) — run tasks on demand and compare results across runs.
- **Schedule a task** (`schedule_task_web.mp4`) — schedule tasks to run periodically using natural language.
- **Chat & manage tasks on WhatsApp** (`whatsapp_chat_web_cropped.mp4`) — chat with the AI assistant and manage tasks from WhatsApp: the same assistant, on the go.
## Data Privacy
goto-assistant connects directly to AI providers using your own API keys. Both Anthropic and OpenAI have clear policies that API data is not used for model training by default:
**Anthropic** (Commercial Terms; Privacy Center):

> "Anthropic may not train models on Customer Content from Services."
>
> "By default, we will not use your inputs or outputs from our commercial products to train our models."

**OpenAI** (Platform Data Controls; Enterprise Privacy):

> "Data sent to the OpenAI API is not used to train or improve OpenAI models (unless you explicitly opt in to share data with us)."
>
> "We do not train our models on your data by default."
Your conversations and data stay between you and the provider's API. All local data is stored on your machine:
- goto-assistant: conversations, config, uploads, and WhatsApp auth in `~/.goto-assistant/`
- mcp-cron: tasks and results in `~/.mcp-cron/`
## WhatsApp Integration
Chat with the assistant directly from WhatsApp — no extra apps, no Docker, no webhooks needed.
Uses Baileys (WhatsApp Web multi-device protocol) running in-process. Enable it in the setup wizard or toggle it on the setup page, scan the QR code once, and you're connected. Auth persists across restarts.
Messages go through the same AI pipeline as the web chat. The agent only responds in your self-chat ("Message yourself") — it never replies to other people messaging your number.
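The self-chat-only rule can be sketched roughly like this — a minimal illustration, where the message shape and field names (`fromMe`, `chatJid`, `selfJid`) are assumptions for clarity, not Baileys' actual types:

```typescript
// Illustrative only: decide whether the agent should respond to an incoming
// WhatsApp message. Field names are assumed, not Baileys' real message shape.
interface IncomingMessage {
  fromMe: boolean;   // did the account owner send this message?
  chatJid: string;   // the chat the message arrived in
  selfJid: string;   // the account's own JID
}

function shouldRespond(msg: IncomingMessage): boolean {
  // Only respond in the self-chat ("Message yourself"): a message the owner
  // sent, in the owner's own chat. Never reply to other people's messages.
  return msg.fromMe && msg.chatJid === msg.selfJid;
}

// Self-chat message → respond; someone else's message → stay silent.
console.log(shouldRespond({ fromMe: true, chatJid: "me@s.whatsapp.net", selfJid: "me@s.whatsapp.net" })); // true
console.log(shouldRespond({ fromMe: false, chatJid: "friend@s.whatsapp.net", selfJid: "me@s.whatsapp.net" })); // false
```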
## Architecture
Browser and WhatsApp clients connect to server.ts (WebSocket + REST), which routes messages through router.ts to the Claude or OpenAI agent SDK. Agents access MCP servers (memory, filesystem, cron, messaging, etc.) for extended capabilities. Messaging flows through a channel registry — the mcp-messaging MCP server proxies tool calls to POST /api/messaging/send, which routes to the appropriate channel (WhatsApp, etc.). The episodic-memory MCP server provides full-text search over past conversations and task results using SQLite FTS5, enabling the agent to recall prior interactions.
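As an illustration of the channel-registry idea described above — the type and method names (`Channel`, `ChannelRegistry`, `route`) are hypothetical, not goto-assistant's actual API:

```typescript
// Hypothetical sketch of a channel registry: the messaging endpoint looks up
// a channel by name and delegates delivery to it. Names are illustrative.
interface Channel {
  name: string;
  send(to: string, text: string): string; // returns a delivery receipt (assumed)
}

class ChannelRegistry {
  private channels = new Map<string, Channel>();

  register(channel: Channel): void {
    this.channels.set(channel.name, channel);
  }

  // Roughly what POST /api/messaging/send does: pick the channel, forward the message.
  route(channelName: string, to: string, text: string): string {
    const channel = this.channels.get(channelName);
    if (!channel) throw new Error(`unknown channel: ${channelName}`);
    return channel.send(to, text);
  }
}

// Usage: register a stub WhatsApp channel and route a message through it.
const registry = new ChannelRegistry();
registry.register({
  name: "whatsapp",
  send: (to, text) => `whatsapp -> ${to}: ${text}`,
});
console.log(registry.route("whatsapp", "self", "hello"));
```

Keeping channels behind a registry like this is what lets new platforms plug in without touching the agent pipeline.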
See `docs/architecture.md` for the full architecture diagram.
## Development Setup

1. Install dependencies.
2. Start the development server.
3. Open http://localhost:3000 — you'll be redirected to the setup page on first run to configure your AI provider and API key.
4. Lint and test.
## Configuration

App configuration is stored in `data/config.json` (created on first setup). MCP server configuration is stored separately in `data/mcp.json`. Environment variables override file config:

- `ANTHROPIC_API_KEY` — API key for Claude
- `OPENAI_API_KEY` — API key for OpenAI (also used for OpenAI-compatible providers)
For OpenAI-compatible providers, set the base URL in the setup page (e.g. `https://generativelanguage.googleapis.com/v1beta/openai` for Gemini). The app auto-detects known gateways and switches to the Chat Completions API when needed.
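For intuition, the base URL composes with the standard Chat Completions path like this — a sketch only; the function name is illustrative and the app's actual URL handling may differ:

```typescript
// Sketch: joining an OpenAI-compatible base URL with the Chat Completions
// endpoint path. Not goto-assistant's real code.
function chatCompletionsUrl(baseUrl: string): string {
  // Trim trailing slashes so we don't produce "//chat/completions".
  return baseUrl.replace(/\/+$/, "") + "/chat/completions";
}

console.log(chatCompletionsUrl("https://generativelanguage.googleapis.com/v1beta/openai"));
// → https://generativelanguage.googleapis.com/v1beta/openai/chat/completions
```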
## MCP Servers
The assistant comes pre-configured with these MCP servers:
| Server | Package | Capabilities |
|---|---|---|
| memory | `@modelcontextprotocol/server-memory` | Persistent knowledge graph across conversations |
| filesystem | `@modelcontextprotocol/server-filesystem` | Read, write, and manage local files |
| time | `mcp-server-time` | Current time and timezone conversions |
| cron | `mcp-cron` | Schedule or run on-demand shell commands and AI prompts with access to MCP servers |
| messaging | built-in | Send messages via connected platforms (WhatsApp, more coming) |
| episodic-memory | built-in | Full-text search over past conversations and task results |
Add your own through the setup page — either via the form or by asking the setup wizard AI chat — or by editing `data/mcp.json` directly. Any MCP server that supports stdio transport will work — browse the MCP server directory for more.
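As a sketch of what an entry in `data/mcp.json` might look like — assuming the file follows the common `mcpServers` convention used by MCP clients (the actual schema may differ), this would add the fetch server via `uvx`:

```json
{
  "mcpServers": {
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    }
  }
}
```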