nanobot is an ultra-lightweight personal AI assistant inspired by Clawdbot.
⚡️ Delivers core agent functionality in just ~4,000 lines of code, 99% smaller than Clawdbot's 430k+ lines.
Real-time line count: 3,448 lines (run `bash core_agent_lines.sh` to verify anytime).
📢 News
- 2026-02-08 🔧 Refactored providers: adding a new LLM provider now takes just 2 simple steps! Check here.
- 2026-02-07 Released v0.1.3.post5 with Qwen support & several key improvements! Check here for details.
- 2026-02-06 ✨ Added Moonshot/Kimi provider, Discord integration, and enhanced security hardening!
- 2026-02-05 ✨ Added Feishu channel, DeepSeek provider, and enhanced scheduled task support!
- 2026-02-04 Released v0.1.3.post4 with multi-provider & Docker support! Check here for details.
- 2026-02-03 ⚡ Integrated vLLM for local LLM support and improved natural language task scheduling!
- 2026-02-02 nanobot officially launched! Give nanobot a try!
Key Features of nanobot:
🪶 Ultra-Lightweight: Just ~4,000 lines of core agent code, 99% smaller than Clawdbot.
🔬 Research-Ready: Clean, readable code that's easy to understand, modify, and extend for research.
⚡️ Lightning Fast: Minimal footprint means faster startup, lower resource usage, and quicker iterations.
Easy-to-Use: One-click deployment and you're ready to go.
Architecture
✨ Features
| 24/7 Real-Time Market Analysis | Full-Stack Software Engineer | Smart Daily Routine Manager | Personal Knowledge Assistant |
|---|---|---|---|
| Discovery • Insights • Trends | Develop • Deploy • Scale | Schedule • Automate • Organize | Learn • Memory • Reasoning |
📦 Install
Install from source (latest features, recommended for development)
```bash
git clone https://github.com/HKUDS/nanobot.git
cd nanobot
pip install -e .
```
Install with uv (stable, fast)
```bash
uv tool install nanobot-ai
```
Install from PyPI (stable)
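Assuming the same package name as the uv install above:

```bash
pip install nanobot-ai
```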
Quick Start
1. Initialize
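Per the CLI reference below, initialization is presumably done with:

```bash
nanobot onboard
```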
2. Configure (~/.nanobot/config.json)
For OpenRouter (recommended for global users):
```json
{
  "providers": {
    "openrouter": {
      "apiKey": "sk-or-v1-xxx"
    }
  },
  "agents": {
    "defaults": {
      "model": "anthropic/claude-opus-4-5"
    }
  }
}
```
3. Chat
```bash
nanobot agent -m "What is 2+2?"
```
That's it! You have a working AI assistant in 2 minutes.
🖥️ Local Models (vLLM)
Run nanobot with your own local models using vLLM or any OpenAI-compatible server.
1. Start your vLLM server
```bash
vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000
```
2. Configure (~/.nanobot/config.json)
```json
{
  "providers": {
    "vllm": {
      "apiKey": "dummy",
      "apiBase": "http://localhost:8000/v1"
    }
  },
  "agents": {
    "defaults": {
      "model": "meta-llama/Llama-3.1-8B-Instruct"
    }
  }
}
```
3. Chat
```bash
nanobot agent -m "Hello from my local LLM!"
```
Tip
The `apiKey` can be any non-empty string for local servers that don't require authentication.
💬 Chat Apps
Talk to your nanobot through Telegram, Discord, WhatsApp, or Feishu, anytime, anywhere.
| Channel | Setup |
|---|---|
| Telegram | Easy (just a token) |
| Discord | Easy (bot token + intents) |
| WhatsApp | Medium (scan QR) |
| Feishu | Medium (app credentials) |
Telegram (Recommended)
1. Create a bot
- Open Telegram, search `@BotFather`
- Send `/newbot` and follow the prompts
- Copy the token
2. Configure
```json
{
  "channels": {
    "telegram": {
      "enabled": true,
      "token": "YOUR_BOT_TOKEN",
      "allowFrom": ["YOUR_USER_ID"]
    }
  }
}
```
Get your user ID from `@userinfobot` on Telegram.
3. Run
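As in the WhatsApp and Docker sections below, channels are served by the gateway, so this step is presumably:

```bash
nanobot gateway
```

The same command should also cover the Run steps of the other channels in this section.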
Discord
1. Create a bot
- Go to https://discord.com/developers/applications
- Create an application → Bot → Add Bot
- Copy the bot token
2. Enable intents
- In the Bot settings, enable MESSAGE CONTENT INTENT
- (Optional) Enable SERVER MEMBERS INTENT if you plan to use allow lists based on member data
3. Get your User ID
- Discord Settings → Advanced → enable Developer Mode
- Right-click your avatar → Copy User ID
4. Configure
```json
{
  "channels": {
    "discord": {
      "enabled": true,
      "token": "YOUR_BOT_TOKEN",
      "allowFrom": ["YOUR_USER_ID"]
    }
  }
}
```
5. Invite the bot
- OAuth2 → URL Generator
- Scopes: `bot`
- Bot Permissions: `Send Messages`, `Read Message History`
- Open the generated invite URL and add the bot to your server
6. Run
WhatsApp
Requires Node.js ≥ 18.
1. Link device
```bash
nanobot channels login
# Scan QR with WhatsApp → Settings → Linked Devices
```
2. Configure
```json
{
  "channels": {
    "whatsapp": {
      "enabled": true,
      "allowFrom": ["+1234567890"]
    }
  }
}
```
3. Run (two terminals)
```bash
# Terminal 1
nanobot channels login

# Terminal 2
nanobot gateway
```
Feishu (飞书)
Uses a WebSocket long connection; no public IP required.
1. Create a Feishu bot
- Visit Feishu Open Platform
- Create a new app → Enable Bot capability
- Permissions: Add `im:message` (send messages)
- Events: Add `im.message.receive_v1` (receive messages)
- Select Long Connection mode (requires running nanobot first to establish the connection)
- Get App ID and App Secret from "Credentials & Basic Info"
- Publish the app
2. Configure
```json
{
  "channels": {
    "feishu": {
      "enabled": true,
      "appId": "cli_xxx",
      "appSecret": "xxx",
      "encryptKey": "",
      "verificationToken": "",
      "allowFrom": []
    }
  }
}
```
`encryptKey` and `verificationToken` are optional for Long Connection mode. `allowFrom`: leave empty to allow all users, or add `["ou_xxx"]` to restrict access.
3. Run
Tip
Feishu uses WebSocket to receive messages, so no webhook or public IP is needed!
DingTalk (钉钉)
Uses Stream Mode; no public IP required.
1. Create a DingTalk bot
- Visit DingTalk Open Platform
- Create a new app → Add Robot capability
- Configuration:
- Toggle Stream Mode ON
- Permissions: Add necessary permissions for sending messages
- Get AppKey (Client ID) and AppSecret (Client Secret) from "Credentials"
- Publish the app
2. Configure
```json
{
  "channels": {
    "dingtalk": {
      "enabled": true,
      "clientId": "YOUR_APP_KEY",
      "clientSecret": "YOUR_APP_SECRET",
      "allowFrom": []
    }
  }
}
```
`allowFrom`: leave empty to allow all users, or add `["staffId"]` to restrict access.
3. Run
⚙️ Configuration
Config file: `~/.nanobot/config.json`
Providers
Note
Groq provides free voice transcription via Whisper. If configured, Telegram voice messages will be automatically transcribed.
| Provider | Purpose | Get API Key |
|---|---|---|
| `openrouter` | LLM (recommended, access to all models) | openrouter.ai |
| `anthropic` | LLM (Claude direct) | console.anthropic.com |
| `openai` | LLM (GPT direct) | platform.openai.com |
| `deepseek` | LLM (DeepSeek direct) | platform.deepseek.com |
| `groq` | LLM + voice transcription (Whisper) | console.groq.com |
| `gemini` | LLM (Gemini direct) | aistudio.google.com |
| `aihubmix` | LLM (API gateway, access to all models) | aihubmix.com |
| `dashscope` | LLM (Qwen) | dashscope.console.aliyun.com |
| `moonshot` | LLM (Moonshot/Kimi) | platform.moonshot.cn |
| `zhipu` | LLM (Zhipu GLM) | open.bigmodel.cn |
| `vllm` | LLM (local, any OpenAI-compatible server) | not needed |
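For example, based on the Groq note above and the provider config shown in Quick Start, enabling Groq alongside your main provider should just be another entry under `providers` (the key values here are placeholders):

```json
{
  "providers": {
    "openrouter": { "apiKey": "sk-or-v1-xxx" },
    "groq": { "apiKey": "xxx" }
  }
}
```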
Adding a New Provider (Developer Guide)
nanobot uses a Provider Registry (`nanobot/providers/registry.py`) as the single source of truth.
Adding a new provider takes only 2 steps; there are no if-elif chains to touch.
Step 1. Add a `ProviderSpec` entry to `PROVIDERS` in `nanobot/providers/registry.py`:
```python
ProviderSpec(
    name="myprovider",                    # config field name
    keywords=("myprovider", "mymodel"),   # model-name keywords for auto-matching
    env_key="MYPROVIDER_API_KEY",         # env var for LiteLLM
    display_name="My Provider",           # shown in `nanobot status`
    litellm_prefix="myprovider",          # auto-prefix: model → myprovider/model
    skip_prefixes=("myprovider/",),       # don't double-prefix
)
```
Step 2. Add a field to `ProvidersConfig` in `nanobot/config/schema.py`:
```python
class ProvidersConfig(BaseModel):
    ...
    myprovider: ProviderConfig = ProviderConfig()
```
That's it! Environment variables, model prefixing, config matching, and the `nanobot status` display will all work automatically.
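Users can then presumably configure the new provider exactly like the built-in ones; the names below are the hypothetical ones from the sketch above:

```json
{
  "providers": {
    "myprovider": { "apiKey": "xxx" }
  },
  "agents": {
    "defaults": { "model": "mymodel-large" }
  }
}
```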
Common ProviderSpec options:
| Field | Description | Example |
|---|---|---|
| `litellm_prefix` | Auto-prefix model names for LiteLLM | `"dashscope"` → `dashscope/qwen-max` |
| `skip_prefixes` | Don't prefix if the model already starts with these | `("dashscope/", "openrouter/")` |
| `env_extras` | Additional env vars to set | `(("ZHIPUAI_API_KEY", "{api_key}"),)` |
| `model_overrides` | Per-model parameter overrides | `(("kimi-k2.5", {"temperature": 1.0}),)` |
| `is_gateway` | Can route any model (like OpenRouter) | `True` |
| `detect_by_key_prefix` | Detect gateway by API key prefix | `"sk-or-"` |
| `detect_by_base_keyword` | Detect gateway by API base URL | `"openrouter"` |
| `strip_model_prefix` | Strip existing prefix before re-prefixing | `True` (for AiHubMix) |
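As a rough illustration, a gateway-style entry combining the detection fields above might look like this (hypothetical names; see `nanobot/providers/registry.py` for the actual constructor):

```python
ProviderSpec(
    name="mygateway",                    # hypothetical gateway provider
    keywords=("mygateway",),             # model-name keywords for auto-matching
    env_key="MYGATEWAY_API_KEY",         # env var for LiteLLM
    display_name="My Gateway",           # shown in `nanobot status`
    is_gateway=True,                     # can route any model
    detect_by_key_prefix="sk-mg-",       # detect by API key prefix
    detect_by_base_keyword="mygateway",  # or by the API base URL
)
```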
Security
Tip
For production deployments, set `"restrictToWorkspace": true` in your config to sandbox the agent.
| Option | Default | Description |
|---|---|---|
| `tools.restrictToWorkspace` | `false` | When `true`, restricts all agent tools (shell, file read/write/edit, list) to the workspace directory. Prevents path traversal and out-of-scope access. |
| `channels.*.allowFrom` | `[]` (allow all) | Whitelist of user IDs. Empty = allow everyone; non-empty = only listed users can interact. |
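Putting the two options together, a hardened config would presumably look something like this (placement inferred from the option paths above and the channel examples earlier):

```json
{
  "tools": {
    "restrictToWorkspace": true
  },
  "channels": {
    "telegram": {
      "enabled": true,
      "token": "YOUR_BOT_TOKEN",
      "allowFrom": ["YOUR_USER_ID"]
    }
  }
}
```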
CLI Reference
| Command | Description |
|---|---|
| `nanobot onboard` | Initialize config & workspace |
| `nanobot agent -m "..."` | Chat with the agent |
| `nanobot agent` | Interactive chat mode |
| `nanobot gateway` | Start the gateway |
| `nanobot status` | Show status |
| `nanobot channels login` | Link WhatsApp (scan QR) |
| `nanobot channels status` | Show channel status |
Scheduled Tasks (Cron)
```bash
# Add a job
nanobot cron add --name "daily" --message "Good morning!" --cron "0 9 * * *"
nanobot cron add --name "hourly" --message "Check status" --every 3600

# List jobs
nanobot cron list

# Remove a job
nanobot cron remove <job_id>
```
🐳 Docker
Tip
The `-v ~/.nanobot:/root/.nanobot` flag mounts your local config directory into the container, so your config and workspace persist across container restarts.
Build and run nanobot in a container:
```bash
# Build the image
docker build -t nanobot .

# Initialize config (first time only)
docker run -v ~/.nanobot:/root/.nanobot --rm nanobot onboard

# Edit config on host to add API keys
vim ~/.nanobot/config.json

# Run gateway (connects to Telegram/WhatsApp)
docker run -v ~/.nanobot:/root/.nanobot -p 18790:18790 nanobot gateway

# Or run a single command
docker run -v ~/.nanobot:/root/.nanobot --rm nanobot agent -m "Hello!"
docker run -v ~/.nanobot:/root/.nanobot --rm nanobot status
```
Project Structure
```
nanobot/
├── agent/           # 🧠 Core agent logic
│   ├── loop.py      # Agent loop (LLM ↔ tool execution)
│   ├── context.py   # Prompt builder
│   ├── memory.py    # Persistent memory
│   ├── skills.py    # Skills loader
│   ├── subagent.py  # Background task execution
│   └── tools/       # Built-in tools (incl. spawn)
├── skills/          # 🎯 Bundled skills (github, weather, tmux...)
├── channels/        # 📱 WhatsApp integration
├── bus/             # Message routing
├── cron/            # ⏰ Scheduled tasks
├── heartbeat/       # Proactive wake-up
├── providers/       # 🤖 LLM providers (OpenRouter, etc.)
├── session/         # 💬 Conversation sessions
├── config/          # ⚙️ Configuration
└── cli/             # 🖥️ Commands
```
Contribute & Roadmap
PRs welcome! The codebase is intentionally small and readable.
Roadmap: pick an item and open a PR!
- Voice Transcription: Support for Groq Whisper (Issue #13)
- Multi-modal: See and hear (images, voice, video)
- Long-term memory: Never forget important context
- Better reasoning: Multi-step planning and reflection
- More integrations: Discord, Slack, email, calendar
- Self-improvement: Learn from feedback and mistakes
Contributors
⭐ Star History
nanobot is for educational, research, and technical exchange purposes only.




