# Meeting Assistant
Meeting Assistant is a high-performance terminal application that transforms spoken conversations into structured knowledge. It combines real-time local transcription with deep AI analysis to generate professional reports, visual mind maps, and insights tailored to your specific professional role.
## Why Meeting Assistant?
Manual note-taking is a cognitive burden that distracts from active participation. Meeting Assistant solves this by:
- Eliminating Cognitive Load: Focus entirely on the conversation while the AI handles the documentation.
- Role-Specific Filtering: Specialized personas (Dev, PM, Exec) ensure you only get the insights relevant to your role.
- Truly Offline & Private: Speech-to-text is powered by `whisper.cpp` and happens entirely offline on your machine. Your raw audio never leaves your local environment.
- Flexible AI Intelligence: Use high-performance cloud models (Gemini, OpenAI) or maintain a 100% offline workflow by connecting to a local `Ollama` instance.
## Real-World Examples
### 1. The Daily Standup (Persona: PM)
Focus on identifying blockers and ensuring the timeline is on track.
```shell
# Start a session focused on deliverables and blockers
meeting_assistant -l --ui -p gemini --persona pm
```

### 2. Technical Architecture Review (Persona: Dev + Research)
Focus on capturing complex logic and fact-checking external libraries.
```shell
# Capture technical details and research mentioned libraries/APIs
meeting_assistant -l --ui -p gemini --persona dev --research
```

### 3. Fully Offline Confidential Meeting (Persona: General + Ollama)
When privacy is paramount, run everything on your own hardware.
```shell
# Local transcription + Local LLM analysis
meeting_assistant -l --ui -p ollama -L llama3
```

## Core Capabilities
### Active Intelligence
- Live AI Copilot: Press [Space] during a meeting to query the AI about the current context.
- Contextual Continuity: Whisper retains a rolling memory of the last 200 characters to maintain accuracy across ongoing sentences.
- Visual Mapping: Every meeting generates a Mermaid.js diagram to visualize topics and decisions.
### Seamless Integration
- Obsidian v3: Notes use modern Properties and semantic callouts to integrate directly into your second brain.
- Standalone HTML: Generates tidy, CSS-styled reports perfect for sharing via email or Slack.
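To illustrate the Obsidian output, a generated note might look roughly like the sketch below. The property names and callout types are hypothetical examples chosen for illustration, not the tool's exact output format:

```markdown
---
title: Daily Standup
date: 2025-01-15
persona: pm
tags: [meeting]
---

> [!summary]
> Sprint is on track; one blocker was raised for the auth service.

> [!todo] Action Items
> - [ ] Unblock the auth-service deployment
```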
## Installation
### Prerequisites
- CMake: 3.14 or higher.
- PortAudio: Required for live microphone input (`brew install portaudio` on macOS).
### 1. Download Whisper Model
The application requires a Whisper model in ggml format. Choose a model based on your hardware and accuracy needs:
| Model | Size | Speed | Accuracy | Recommended For |
|---|---|---|---|---|
| tiny.en | 75 MB | Fastest | Lowest | Real-time testing / Low-power |
| base.en | 142 MB | Very Fast | Good | Standard laptops / Most meetings |
| small.en | 466 MB | Fast | Great | High-accuracy requirements |
| medium.en | 1.5 GB | Slow | Excellent | Post-meeting batch processing |
| large-v3 | 2.9 GB | Slowest | State-of-the-art | Maximum precision (Requires GPU) |
Using `curl` to download:

```shell
mkdir -p models
# Example: downloading the 'small.en' model
curl -L https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-small.en.bin -o models/ggml-small.en.bin
```

### 2. Build
```shell
mkdir build && cd build
cmake -DCMAKE_PREFIX_PATH=/opt/homebrew ..
make
sudo make install
```
## Dashboard Hotkeys
- [Space]: Open AI Copilot to ask a question during the meeting.
- [N]: Finalize current session and start a New Meeting immediately.
- [Q / ESC]: Save all reports and Quit.
## Configuration
Settings are persisted in `~/.meeting_assistant/config.json`.
- Template: Copy the provided `config.json.example` to `~/.meeting_assistant/config.json`.
- CLI: Alternatively, update settings via the command line using the `--save-config` flag.
Refer to `config.json.example` for a full list of supported fields, including GitHub/GitLab integration.
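For orientation, a minimal config file might look like the sketch below. The field names here are illustrative assumptions based on the options mentioned in this README; treat `config.json.example` as the authoritative schema:

```json
{
  "provider": "gemini",
  "persona": "dev",
  "whisper_model": "models/ggml-small.en.bin",
  "ollama_model": "llama3"
}
```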
## License
Apache License 2.0 - See LICENSE for details.
