AI CLI translates natural language into shell commands using LLMs (OpenAI, OpenRouter, or a local server), then applies a safety policy before execution.
## Important Safety Warning
LLMs are non-deterministic. The same prompt can produce different commands across runs.
Always review and understand every command before you approve or run it. Treat generated commands as untrusted suggestions, especially for file operations, process control, networking, and anything requiring elevated permissions.
## Quick Start (Users)
### 1) Install
#### Homebrew (macOS and Linux)

```
brew install kriserickson/tap/ai-cli
```
#### Go

Requires Go 1.25+.

```
go install github.com/kriserickson/ai-cli@latest
```
#### Binary releases
Download prebuilt binaries for macOS (signed and notarized), Linux, and Windows from the latest release.
On macOS or Linux, extract the archive, make it executable, and move it onto your PATH:
```
VERSION=$(curl -s https://api.github.com/repos/kriserickson/ai-cli/releases/latest | grep '"tag_name"' | cut -d'"' -f4)
OS=$(uname -s | tr '[:upper:]' '[:lower:]')
ARCH=$(uname -m | sed 's/x86_64/amd64/')
curl -LO "https://github.com/kriserickson/ai-cli/releases/download/${VERSION}/ai-${VERSION}-${OS}-${ARCH}.tar.gz"
tar -xzf "ai-${VERSION}-${OS}-${ARCH}.tar.gz"
chmod +x ai
sudo mv ai /usr/local/bin/ai
ai version
```
On Windows (PowerShell), extract the zip and move ai.exe onto your PATH:
```
$VERSION = (Invoke-RestMethod "https://api.github.com/repos/kriserickson/ai-cli/releases/latest").tag_name
Invoke-WebRequest -Uri "https://github.com/kriserickson/ai-cli/releases/download/${VERSION}/ai-${VERSION}-windows-amd64.zip" -OutFile "ai-${VERSION}-windows-amd64.zip"
Expand-Archive -Path "ai-${VERSION}-windows-amd64.zip" -DestinationPath .\ai
Move-Item .\ai\ai.exe "$env:USERPROFILE\go\bin\ai.exe" -Force
ai.exe version
```
### 2) Run Setup and Health Checks First
Use `ai doctor` as the first command. It verifies config and launches setup if your API key is missing.
Typical flow:
- validates config file location
- checks selected provider and model
- checks API key presence
- starts setup wizard if needed
### Recommended: Shell Alias for Special Characters
Characters like `?`, `*`, and `#` have special meaning in most shells. Without protection, a command like `ai what is using all my cpu?` will fail because zsh tries to glob-expand `cpu?` before `ai` ever sees it.
Add a noglob alias to your shell config so you can type naturally:
zsh (`~/.zshrc`):

```
alias ai='noglob ai'
```

bash (`~/.bashrc`):

```
# Only needed if you have failglob or nullglob enabled;
# noglob is zsh-only, so bash needs a wrapper function instead.
ai() { set -f; command ai "$@"; set +f; }
```
Then reload your shell:
```
source ~/.zshrc   # or: source ~/.bashrc
```
After this, `ai what is using all my cpu?` will work as expected.
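To see why the alias matters, here is a small self-contained demonstration of what the shell passes through with and without glob protection. It uses `echo` as a stand-in for `ai`, and `set -f`, the same mechanism the bash wrapper above relies on:

```sh
# Demonstration: globbing rewrites arguments before a command ever sees them.
tmp=$(mktemp -d)
touch "$tmp/cpu1"
cd "$tmp"

unprotected=$(echo cpu?)   # the glob cpu? matches the file cpu1
set -f                     # disable globbing, as the bash wrapper does
protected=$(echo cpu?)     # the literal string cpu? survives
set +f

echo "without protection: $unprotected"
echo "with protection:    $protected"
```

Without protection the command receives `cpu1`; with protection it receives the literal `cpu?` you typed.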
### 3) Run Your First Commands
Single-shot examples:
```
ai list files in current directory sorted by size
ai find all files larger than 10MB
ai show what process is using port 8080
```

Interactive mode:
Then type requests one by one:
```
ai> what ports are listening
ai> compress all log files in /var/log older than 7 days
ai> show biggest folders in my home directory
```
## How Command Execution Safety Works
Every generated command has:

- `risk`: `safe` or `risky`
- `certainty`: 0-100
Decision matrix:
| Risk | Allowlisted | Certainty >= threshold | Action |
|---|---|---|---|
| safe | any | yes | Auto-execute |
| safe | any | no | Ask confirmation |
| risky | any | any | Ask confirmation |
When `always_confirm=true`, every command asks first.
Default allowlist prefixes:

`git`, `ls`, `cat`, `echo`, `pwd`, `head`, `tail`, `wc`, `grep`, `find`, `which`, `man`
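The decision matrix above can be summarized as a small function. This is an illustrative sketch, not the actual implementation; `decide` and its argument names are invented for the example:

```sh
# Sketch of the safety decision matrix (illustrative only).
# usage: decide <risk> <certainty> <min_certainty>
decide() {
  risk=$1; certainty=$2; threshold=$3
  if [ "$risk" = "risky" ]; then
    echo "ask-confirmation"        # risky always prompts
  elif [ "$certainty" -ge "$threshold" ]; then
    echo "auto-execute"            # safe and certain enough
  else
    echo "ask-confirmation"        # safe but below the threshold
  fi
}

decide safe 95 80    # → auto-execute
decide risky 99 80   # → ask-confirmation
```

With `always_confirm=true`, the answer is always `ask-confirmation`, regardless of the matrix.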
## Restrict or Expand Auto-Run Behavior
### Restrict auto-run (safer)
Force confirmation for everything:
```
ai config set always_confirm true
```
Increase the certainty required for auto-run:

```
ai config set min_certainty 95
```

### Turn auto-run on more aggressively
Allow auto-run decisions from the safety matrix:
```
ai config set always_confirm false
```
Lower the certainty threshold:

```
ai config set min_certainty 60
```

Important:

- all risky commands prompt for confirmation, regardless of threshold or allowlist
- `min_certainty` only affects whether safe commands auto-run
### Allowlist control
The current CLI can display the allowlist in its config output, but cannot set it directly with `ai config set`.
To customize allowlist prefixes, edit `~/.ai-cli/config.toml`:
```
[safety]
always_confirm = false
tool_calling = "never"
min_certainty = 80
allowlist_prefixes = ["git", "ls", "cat", "echo", "pwd", "head", "tail", "wc", "grep", "find", "which", "man"]
```
After editing, run `ai config show` to confirm the change was picked up.
## Tool Calling
AI CLI can optionally let the model use built-in read-only tools before it generates shell commands. This is controlled by safety.tool_calling.
| Mode | Description |
|---|---|
| `never` | Tools are fully disabled. No tool instructions are added to the prompt and no tool loop runs. Default. |
| `always_prompt` | Prompt before every tool call. |
| `dangerous_prompt` | Auto-approve safe tool calls, but prompt when a tool triggers a safety rule. |
| `always_allow` | Execute all tool calls without prompting. |
Set it with:
```
ai config set tool_calling never
ai config set tool_calling always_prompt
ai config set tool_calling dangerous_prompt
ai config set tool_calling always_allow
```
Notes:
- `tool_calling` only controls AI tool usage
- generated shell commands still follow `always_confirm`, `min_certainty`, risk classification, and the allowlist
- `dangerous_prompt` prompts when a tool call hits a safety rule, including restricted file access and sensitive tools like `environment` or `list_memories`
- `never` makes AI CLI skip the tool loop entirely and respond directly
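The four modes boil down to a small lookup. This is an illustrative sketch with invented names (`tool_decision`, its arguments), not the actual tool loop:

```sh
# Sketch of how tool_calling modes gate a single tool call (illustrative only).
# usage: tool_decision <mode> <triggers_safety_rule: yes|no>
tool_decision() {
  mode=$1; dangerous=$2
  case "$mode" in
    never)            echo "skip" ;;     # no tool loop at all
    always_prompt)    echo "prompt" ;;   # ask before every call
    dangerous_prompt) if [ "$dangerous" = "yes" ]; then echo "prompt"; else echo "auto"; fi ;;
    always_allow)     echo "auto" ;;     # never ask
  esac
}

tool_decision dangerous_prompt no    # → auto
tool_decision dangerous_prompt yes   # → prompt
```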
## Memories
Memories let you store named context (like server addresses, port mappings, or project conventions) that automatically gets injected into the AI prompt when the keyword appears in your input.
### Managing memories
```
ai memory add my-server "user@10.1.1.103 -L 9229:localhost:2229"
ai memory add staging-db "postgres://app:secret@staging.example.com:5432/mydb"
ai memory list
ai memory remove my-server
```
### Using memories
Once stored, just use the keyword naturally:
```
ai connect to my-server
# → ssh user@10.1.1.103 -L 9229:localhost:2229

ai dump the users table from staging-db
# → pg_dump -t users "postgres://app:secret@staging.example.com:5432/mydb"
```
Keyword matching is case-insensitive. Multiple memories can match in a single request. Memories are stored in `~/.ai-cli/memory.json`.
In interactive mode, the same commands are available:

```
ai> memory add my-server "user@10.1.1.103 -L 9229:localhost:2229"
ai> memory list
ai> memory remove my-server
```
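Conceptually, matching is a case-insensitive substring check of each stored memory keyword against your input. A rough sketch, not the actual implementation (`match_memories` and the `keys` list are invented for illustration):

```sh
# Sketch of case-insensitive memory keyword matching (illustrative only).
keys="my-server staging-db"

match_memories() {
  # lowercase the input, then print every stored key it contains
  input=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  for key in $keys; do
    case "$input" in
      *"$key"*) echo "$key" ;;   # this memory's value would be injected
    esac
  done
}

match_memories "connect to MY-SERVER"   # → my-server
```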
## Common User Commands
```
ai status
ai doctor
ai set-model
ai version
ai config show
ai config get provider
ai config set provider openai
ai config set llm_key sk-your-key-here
```
## Shell Completion
`ai completion` generates an autocompletion script for your shell so you can tab-complete subcommands and flags.
### zsh
Enable completion in your environment (once, if not already done):
```
echo "autoload -U compinit; compinit" >> ~/.zshrc
```
Load for the current session:
```
source <(ai completion zsh)
```
Load permanently (macOS with Homebrew):
```
ai completion zsh > $(brew --prefix)/share/zsh/site-functions/_ai
```
Load permanently (Linux):
```
ai completion zsh > "${fpath[1]}/_ai"
```
### bash
Requires the bash-completion package (`brew install bash-completion` on macOS, or your distro's package manager on Linux).
Load for the current session:
```
source <(ai completion bash)
```
Load permanently (macOS):
```
ai completion bash > $(brew --prefix)/etc/bash_completion.d/ai
```
Load permanently (Linux):

```
ai completion bash > /etc/bash_completion.d/ai
```

### fish
Load for the current session:
```
ai completion fish | source
```
Load permanently:
```
ai completion fish > ~/.config/fish/completions/ai.fish
```
### PowerShell
Load for the current session:
```
ai completion powershell | Out-String | Invoke-Expression
```
Load permanently: add the above line to your PowerShell profile (`$PROFILE`).
Start a new shell after any permanent installation for changes to take effect.
## Configuration
Configuration file location:
```
~/.ai-cli/config.toml
```
Available `ai config get`/`set` keys:

| Key | Description | Default |
|---|---|---|
| `provider` | Active provider (`openai`, `openrouter`, or `local`) | `openrouter` |
| `model` | Model identifier | `anthropic/claude-3.5-sonnet` |
| `llm_key` | API key for the current provider | (empty) |
| `llm_url` | Base URL for the current provider | (provider default) |
| `always_confirm` | Always prompt before execution (true/false) | `false` |
| `tool_calling` | Tool usage mode (`never`, `always_prompt`, `dangerous_prompt`, `always_allow`) | `never` |
| `min_certainty` | Auto-execute threshold (0-100) | `80` |
| `debug` | Debug mode (`none`, `screen`, `file`) | `none` |
| `history_include_llm_output` | Store structured LLM responses in history (true/false) | `true` |
| `history_include_debug` | Store raw request/response debug payloads in history (true/false) | `false` |
| `history_ask_on_error` | Prompt to send failures back to the AI (true/false) | `true` |
| `history_auto_check_on_error` | Automatically ask the AI to retry on failure (true/false) | `false` |
| `history_retry_max_attempts` | Automatic AI retries after a failed command | `1` |
| `history_retry_context_depth` | Number of recent command results to send back on retry | `3` |
| `debug_log_payloads` | When `debug=file`, include full prompts and responses in `llm.log` (true/false) | `false` |
Notes:
- `llm_key` and `llm_url` read and write the key/URL for whichever provider is currently selected
- `ai config get llm_key` returns a masked value
- `allowlist_prefixes` exists in config but is currently edited directly in `config.toml`
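Key masking behaves roughly like this sketch. The exact mask format here is an assumption made for illustration (`mask_key` is invented); the only documented behavior is that the full key is never printed:

```sh
# Sketch of API-key masking (format assumed, not AI CLI's actual output).
mask_key() {
  key=$1
  case "$key" in
    "") echo "(empty)" ;;
    *)  echo "$(printf '%s' "$key" | cut -c1-4)****" ;;
  esac
}

mask_key "sk-abcdef123456"   # → sk-a****
```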
## Debugging
```
# Persisted config
ai config set debug screen
ai config set debug file
ai config set debug_log_payloads true   # opt in to full prompt/response capture for debug=file

# Per-command override
ai --debug list files
ai --debug=screen list files
ai --debug=file list files
```
Debug output still goes to the configured debug sink (screen or `~/.ai-cli/llm.log`). History storage is separate and controlled by the `history_*` settings above.
## History And Retry
Each top-level AI instruction writes to a rotating history log at `~/.ai-cli/history.log`, with rotated backups like `history.log.1`.
- New instructions start a fresh session.
- If a generated command fails, AI CLI can prompt to send the failure back to the model with the original request, prior AI response, recent command results, exit code, and captured stdout/stderr.
- In interactive mode, `retry` replays the last failed AI session, and `retry <depth>` overrides how many recent command results get included.
Example:
```
ai config set history_auto_check_on_error true
ai config set history_retry_context_depth 5

# Per-command override
ai --retry-on-error --retry-depth 5 fix the broken command

# Inspect stored sessions
ai history list
ai history show 1741012345678901
```
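The retry context depth simply bounds how many of the most recent command results are replayed to the model. Conceptually (illustrative sketch; the `results` data is made up):

```sh
# Sketch: history_retry_context_depth keeps only the last N results (illustrative only).
results="result-1
result-2
result-3
result-4
result-5"
depth=3

# With depth=3, only the three newest results are sent back on retry.
printf '%s\n' "$results" | tail -n "$depth"
```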
Notes:
- `debug=screen` prints request and response payloads to the terminal for the current session
- `debug=file` writes to `~/.ai-cli/llm.log`
- `debug=file` logs metadata only by default; full prompt and response bodies require `debug_log_payloads=true`
- `llm.log` is created with user-only permissions
## Security Notes
- `ai config show` and `ai config get llm_key` mask stored API keys
- tool output is treated as untrusted data before it is sent back to the model
- the `environment` tool shows all variable names, but only a small allowlisted subset of values; all other values are hidden or redacted
- memories and environment data can still be sensitive, so `dangerous_prompt` asks before using those tools
- enabling `debug=file` can still store sensitive data if you also set `debug_log_payloads=true`
## Multi-Step Commands
For complex requests, AI CLI may return multiple commands. They run sequentially and stop on first failure unless an AI retry is triggered.
```
$ ai kill the process on port 8080

[1/2] Find PID on port 8080
$ lsof -ti :8080
[safe] 95% certainty

[2/2] Kill the process
$ kill -9 $(lsof -ti :8080)
[risky] 90% certainty
Execute? [Y/n]
```
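Sequential execution with stop-on-first-failure can be sketched like this (illustrative; `run_steps` is invented for the example and ignores the confirmation and retry machinery):

```sh
# Sketch of multi-step execution: run each command, stop at the first failure.
run_steps() {
  for cmd in "$@"; do
    if ! sh -c "$cmd"; then
      echo "stopped at: $cmd"
      return 1
    fi
  done
  echo "all steps succeeded"
}

run_steps "true" "echo step-two"
```

A failing step short-circuits the rest: `run_steps "true" "false" "echo never reached"` prints `stopped at: false` and never runs the third command.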
## Developer Guide
### Build, Test, and Install (go-task)
Requires Go 1.25+ and go-task.
Install task once:
```
go install github.com/go-task/task/v3/cmd/task@latest
```
Ensure your Go bin directory is in PATH:
- Windows: `%USERPROFILE%\go\bin`
- macOS/Linux: `$HOME/go/bin`
Example for zsh:
```
echo 'export PATH="$PATH:$HOME/go/bin"' >> ~/.zshrc
source ~/.zshrc
```
Common commands:
```
task              # build + test
task build        # dist/<os>/ai
task test
task test:verbose
task test:pkg PKG=./internal/llm/...
task install
```
`task install` copies from `dist/<os>/` into:

- Windows: `%USERPROFILE%\go\bin`
- macOS/Linux: `/usr/local/bin`
Override the install target:

```
task install INSTALL_DIR="$HOME/.local/bin"
```

Windows (PowerShell):

```
task install INSTALL_DIR="$env:USERPROFILE\tools\bin"
```
### Versioning and Releases
The version is injected at build time using `ldflags`. The default value in source is `dev`.
Create a release:
```
git tag v0.2.0
git push origin v0.2.0
```
Tag push triggers GitHub Actions to:
- run tests
- build cross-platform artifacts
- create a GitHub Release
- upload artifacts to the release
### Testing
```
task test
task test:verbose
task test:pkg PKG=./internal/executor/...
task test:pkg PKG=./internal/llm/...
task test:coverage
```