promptcmd is a manager and executor for programmable prompts. Define a
prompt template once, then execute it like any other terminal command, complete
with argument parsing, --help text, and stdin/stdout integration.
Unlike tools that require you to manually manage prompt files or rely on implicit tool-calling, promptcmd gives you explicit control over what data your models access. Build compositional workflows by nesting prompts, executing shell commands within templates, and piping data through multi-step AI pipelines.
Key Features
Prompts as CLI Commands
Create a .prompt file, enable it with promptctl, and execute it like any
native tool.
```
$ promptctl create bashme_a_script_that
$ bashme_a_script_that renames all files in current directory to ".backup"
```
Execute in Remote Shells
Prepend your SSH commands with promptctl, and your prompts magically appear
in your remote shell sessions.
```
$ promptctl ssh user@server

server $ bashme_a_script_that renames all files in current directory to ".backup"
```
Local and Remote Provider Support
Use your Ollama endpoint or configure an API key for OpenAI, OpenRouter, Anthropic, Google, or MiniMax. Swap between them with ease.
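A local provider is configured in the same config.toml as the hosted ones. A minimal sketch for an Ollama provider might look like the following; the `endpoint` field name is an assumption, not confirmed by the docs (11434 is Ollama's default local port):

```toml
# Hypothetical sketch: field names are assumptions, check the promptcmd docs
[providers.ollama]
endpoint = "http://localhost:11434"  # Ollama's default local port
```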
```
$ promptctl create render-md
$ cat README.md | render-md -m openai
$ cat README.md | render-md -m ollama/gpt-oss:20b
```
Groups and Load Balancing
Distribute requests across several providers, with equal or weighted distribution, for cost optimization.
```toml
# config.toml
[groups.balanced]
providers = ["openai", "google"]
```
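The group above distributes requests equally. For the weighted distribution mentioned above, a sketch might look like this; the `weights` key is an assumption, not confirmed by the docs:

```toml
# Hypothetical sketch: the "weights" key is an assumption
[groups.weighted]
providers = ["openai", "google"]
weights = [3, 1]  # e.g. route roughly 75% of requests to openai
```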
```
$ cat README.md | render-md -m balanced
```

Caching
Cache responses for a configured amount of time to add determinism to pipelines and reduce token consumption.
```toml
# config.toml
[providers.openai]
cache_ttl = 60  # number of seconds
```
Set or override it at execution time:

```
$ cat README.md | render-md --config-cache-ttl 120
```

Custom Models with Character
Use variants to define custom models with their own personality or task specialization:
```toml
[providers.anthropic]
api_key = "sk-xxxxx"
model = "claude-sonnet-4-5"

[providers.anthropic.glados]
system = "Use sarcasm and offending jokes like the GLaDOS character from Portal."

[providers.anthropic.wheatley]
system = "Reply as if you are Wheatley from Portal."
```
```
$ tipoftheday -m glados
$ tipoftheday -m wheatley
```
Quick Start
Install
Linux/macOS:
```
curl -LsSf https://installer.promptcmd.sh | sh
```

macOS (Homebrew):
```
brew install tgalal/tap/promptcmd
```
Windows (PowerShell):
```
powershell -ExecutionPolicy Bypass -c "irm https://installer-ps.promptcmd.sh | iex"
```
Configure API Keys
Configure your API keys by editing config.toml:
Find your provider's name, e.g., for anthropic:
```toml
[providers.anthropic]
api_key = "sk-ant-api03-..."
```
Alternatively, you can set the keys via environment variables:
```
PROMPTCMD_ANTHROPIC_API_KEY="your_api_key"
PROMPTCMD_OPENAI_API_KEY="your_api_key"
PROMPTCMD_OPENROUTER_API_KEY="your_api_key"
PROMPTCMD_MINIMAX_API_KEY="your_api_key"
```
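These variables follow standard shell conventions, so you can persist a key by exporting it from your shell startup file (e.g. ~/.bashrc):

```shell
# Export the key so child processes (promptcmd invocations) inherit it.
# "your_api_key" is a placeholder, not a real credential.
export PROMPTCMD_ANTHROPIC_API_KEY="your_api_key"

# Confirm the variable is visible in the environment:
printenv PROMPTCMD_ANTHROPIC_API_KEY
```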
Create Your First Prompt
Create a summarize.prompt file:
```
promptctl create summarize
```
Insert the following:
```
---
model: anthropic/claude-sonnet-4-5
input:
  schema:
    words?: integer, Summary length in words
---
Summarize the following text{{#if words}} in {{words}} words{{/if}}:

{{STDIN}}
```
Enable and use it:
```
# Enable as a command
promptctl enable summarize

# Use it
cat article.txt | summarize
echo "Long text here..." | summarize --words 10

# Auto-generated help
summarize --help
```
That's it. Your prompt is now a native command.
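Because prompts behave like ordinary commands, multi-step pipelines fall out naturally. Assuming the summarize and render-md prompts from the sections above are both enabled, a two-step pipeline might look like this (the combination is illustrative, not from the docs):

```
$ cat article.txt | summarize --words 50 | render-md -m ollama/gpt-oss:20b
```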
Documentation
Full documentation available at: docs.promptcmd.sh
Examples
Browse the Examples directory or visit https://promptcmd.sh/lib for interactive viewing.
License
GPLv3 License - see LICENSE file for details
