Wake-word activated voice interface for OpenClaw sessions on PamirAI devices.
Flow: Listen for wake word → Capture speech → Transcribe with Whisper → Send to OpenClaw → Speak response
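The flow above can be sketched as one loop in Python. Every function name here is a hypothetical stand-in for the real implementation, passed in so the sketch stays self-contained:

```python
def run_agent_once(wait_for_wake, record_speech, transcribe, ask_openclaw, speak):
    """One pass of the voice-agent loop; each callable is a hypothetical stand-in."""
    wait_for_wake()              # block until the wake word is detected
    audio = record_speech()      # capture audio until silence
    text = transcribe(audio)     # e.g. send to the Whisper API
    reply = ask_openclaw(text)   # route the text to the local OpenClaw gateway
    speak(reply)                 # play the response through TTS
    return reply
```

In the real agent each stage would call Porcupine, the microphone, Whisper, the gateway, and the TTS provider respectively.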
Requirements
- Raspberry Pi CM5 (PamirAI Distiller)
- Microphone and speaker connected
- Python 3.10+ (Distiller SDK venv)
- OpenClaw local gateway running (see below)
- API keys: Picovoice (wake word), OpenAI (Whisper), AI provider for OpenClaw
Setup
```bash
cd /home/distiller/Projects/openclaw-voice-agent

# Install dependencies into the SDK venv
source /opt/distiller-sdk/activate.sh
pip install -r requirements.txt

# Configure
cp config.example.yaml config.yaml
# Edit config.yaml with your API keys and settings
```
Porcupine Wake Word Setup
Get Your Access Key (Free)
- Visit Picovoice Console and sign up (no credit card needed)
- Copy your AccessKey from the dashboard
- Set it in `config.yaml`:

```yaml
porcupine:
  access_key: "YOUR_PICOVOICE_ACCESS_KEY"
```
Option 1: Built-in Keywords (Easiest)
Use one of the pre-trained keywords:
`"jarvis"`, `"terminator"`, `"porcupine"`, `"bumblebee"`, `"americano"`, `"blueberry"`, and more
Works offline and is ready to go:

```yaml
porcupine:
  keywords:
    - "jarvis"
  sensitivities:
    - 0.6
```
Option 2: Custom Wake Word (Better UX)
Train a custom wake word like "hey openclaw":
- Go to Picovoice Console
- Click "Create Custom Keyword"
- Enter your phrase (e.g., "hey openclaw")
- Select your platform: Raspberry Pi (32-bit ARM) (for PamirAI/CM5)
- Train the model (takes ~1 min)
- Download the `.ppn` file
- Copy it to your project directory (e.g., `models/hey-openclaw_en_raspberry-pi.ppn`)
- Update `config.yaml`:

```yaml
porcupine:
  keyword_paths:
    - "/home/distiller/Projects/openclaw-voice-agent/models/hey-openclaw_en_raspberry-pi.ppn"
```
Fine-tune Detection
Adjust sensitivity (0.0–1.0) if needed:
- Higher = catches quieter speech but more false positives
- Lower = requires louder/clearer speech
- Default (0.6) works well for most cases
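Porcupine expects one sensitivity value per keyword, so a config with two keywords and one sensitivity needs padding. A small validation helper could enforce that pairing — the function name and defaults below are illustrative, not the project's actual code:

```python
def normalize_sensitivities(keywords, sensitivities, default=0.6):
    """Return one sensitivity per keyword, clamped to the 0.0-1.0 range.

    Missing entries fall back to the default of 0.6; extra entries are dropped.
    """
    values = list(sensitivities or [])
    values += [default] * (len(keywords) - len(values))   # pad missing entries
    return [min(1.0, max(0.0, v)) for v in values[:len(keywords)]]
```

The clamped list can then be handed to Porcupine's initializer alongside the keyword list.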
TTS Providers
| Provider | Quality | Speed | Offline | Setup |
|---|---|---|---|---|
| `gtts` | OK | Slow | No | None (default) |
| `elevenlabs` | Excellent | Fast | No | API key required |
| `piper` | Good | Fast | Yes | Download model |
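Switching between the providers above could be a simple dispatch on the config value. The factory below is a sketch of that pattern — the real provider calls are left as comments, since their exact APIs are not shown in this README:

```python
def make_tts(provider):
    """Return a speak(text) callable for the configured provider (sketch)."""
    if provider == "gtts":
        def speak(text):
            # from gtts import gTTS; gTTS(text).save(...) then play the file
            return ("gtts", text)
    elif provider == "elevenlabs":
        def speak(text):
            # call the ElevenLabs API with your key and stream the audio back
            return ("elevenlabs", text)
    elif provider == "piper":
        def speak(text):
            # run the local piper binary against the downloaded voice model
            return ("piper", text)
    else:
        raise ValueError(f"unknown tts provider: {provider!r}")
    return speak
```

Keeping the dispatch in one place means the rest of the agent only ever sees a `speak(text)` callable, whatever the provider.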
Running
```bash
# Direct
source /opt/distiller-sdk/activate.sh
python3 openclaw_voice_agent.py

# Or with a custom config path
OPENCLAW_CONFIG=/path/to/config.yaml python3 openclaw_voice_agent.py
```
Systemd Service
Run the voice agent as a background service that survives SSH disconnects and auto-starts on boot.
Install
```bash
sudo cp openclaw-voice-agent.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable openclaw-voice-agent
sudo systemctl start openclaw-voice-agent
```

Manage

```bash
sudo systemctl start openclaw-voice-agent    # Start
sudo systemctl stop openclaw-voice-agent     # Stop
sudo systemctl restart openclaw-voice-agent  # Restart (e.g. after code changes)
sudo systemctl status openclaw-voice-agent   # Check if running
```
Logs
```bash
journalctl -u openclaw-voice-agent -f                   # Tail logs live
journalctl -u openclaw-voice-agent -n 50                # Last 50 lines
journalctl -u openclaw-voice-agent --since "5 min ago"  # Recent logs
```
Disable
```bash
sudo systemctl stop openclaw-voice-agent
sudo systemctl disable openclaw-voice-agent  # Remove from auto-start
```

OpenClaw Setup
OpenClaw is a local, self-hosted Node.js gateway that connects chat platforms to AI agents. It does NOT require an OpenClaw API key; instead it uses your AI provider credentials (Anthropic, OpenAI, etc.).
Prerequisites:
- Node.js 22+
- AI provider API key (Anthropic, OpenAI, etc.)
Start OpenClaw:
```bash
# Install (if not already done)
npm install -g openclaw

# Configure with your AI provider key
# Edit ~/.openclaw/openclaw.json with your credentials

# Start the gateway
openclaw gateway start

# Verify it's running
openclaw status
# Should show: Gateway is running on http://localhost:18789

# List available sessions
openclaw sessions list
```
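From the agent's side, a reachability check before entering the main loop could look like the sketch below. The gateway's health endpoint is not documented here, so this version just probes the base URL and takes the HTTP call as a parameter, which also keeps the sketch testable:

```python
from urllib.error import HTTPError, URLError
from urllib.request import urlopen

def gateway_reachable(base_url="http://localhost:18789", fetch=urlopen):
    """Return True if something answers at the gateway URL (sketch).

    Any HTTP response, even an error status, counts as "something is
    listening"; only a connection failure counts as unreachable.
    """
    try:
        fetch(base_url, timeout=2)
        return True
    except HTTPError:
        return True   # server answered, even if with an error page
    except URLError:
        return False
```

Injecting `fetch` means the real agent can pass its own HTTP client while tests pass a stub.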
LED Feedback
The voice agent uses the PamirAI device's onboard LED to provide visual feedback during operation:
- 🔵 BLUE - Wake word detected, listening for your voice
- 🟢 GREEN - Processing your speech, waiting for OpenClaw response
- 🔴 RED - Error occurred during processing
- ⚫ OFF - Idle or task complete
The LED states help you understand what the device is doing without needing text output.
Customization
Control LED behavior in config.yaml:
```yaml
led:
  enabled: true  # Enable/disable LED feedback
  index: 0       # Which LED to use (0-6 available on PamirAI)
```
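The state-to-colour mapping could live in a single table so the rest of the agent never hardcodes colours. The state names and RGB values below are illustrative, not the PamirAI SDK's actual API:

```python
# Illustrative state -> RGB mapping; the real LED driver call is device-specific.
LED_STATES = {
    "listening":  (0, 0, 255),   # BLUE: wake word detected, capturing speech
    "processing": (0, 255, 0),   # GREEN: waiting on the OpenClaw response
    "error":      (255, 0, 0),   # RED: something failed
    "idle":       (0, 0, 0),     # OFF: idle or task complete
}

def led_color(state, enabled=True):
    """Resolve an agent state to an RGB tuple; always OFF when feedback is disabled."""
    if not enabled:
        return (0, 0, 0)
    return LED_STATES.get(state, (0, 0, 0))
```

With `led.enabled: false` in the config, every state resolves to OFF and the driver is never asked to light anything.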
Configuration
See config.example.yaml for all options. Key settings:
- `porcupine.access_key` - Free from Picovoice Console
- `whisper.api_key` - OpenAI API key
- `openclaw.base_url` - Local gateway URL (default: `http://localhost:18789`)
- `openclaw.agent_id` - Agent to route messages to (default: `main`)
- `openclaw.session_id` - Optional specific session ID (overrides `agent_id`)
- `openclaw.timeout` - Max seconds to wait for an agent response (default: 60)
- `tts.provider` - Choose `gtts`, `elevenlabs`, or `piper`
- `audio.silence_threshold` - Adjust if recording cuts off too early or waits too long
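Defaults like the gateway URL and timeout could be applied by overlaying the parsed `config.yaml` onto a defaults dict. A minimal sketch, using plain dicts and omitting the YAML parsing itself:

```python
DEFAULTS = {
    "openclaw": {"base_url": "http://localhost:18789", "agent_id": "main", "timeout": 60},
    "tts": {"provider": "gtts"},
}

def merge_config(user, defaults=DEFAULTS):
    """Recursively overlay the user's config values onto the defaults."""
    merged = dict(defaults)
    for key, value in (user or {}).items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_config(value, merged[key])   # merge nested sections
        else:
            merged[key] = value                              # user value wins
    return merged
```

This way a `config.yaml` that only sets `openclaw.timeout` still picks up the default `base_url` and `agent_id`.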
Important: The voice agent does NOT need an OpenClaw API key. OpenClaw runs locally and uses your AI provider credentials (configured in ~/.openclaw/openclaw.json).
