Maestro converts natural-language instructions into CLI commands. It is designed for both offline use with Ollama and online integration with the ChatGPT API.
## Key Features
- Ease of Use: Simply type your instructions and press enter.
- Direct Execution: Use the `-e` flag to execute commands directly, with a confirmation prompt for safety.
- Context Awareness: Maestro understands your system's context, including the current directory, system, and user.
- Support for Multiple LLM Models: Choose from a variety of models for offline and online usage.
- Offline: Ollama with over 40 models available.
- Online: GPT4-Turbo and GPT3.5-Turbo.
- Lightweight: Maestro is a single small binary with no dependencies.
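A typical interaction might look like the following sketch. The prompt and the suggested command are hypothetical; the exact output depends on the model you use.

```shell
# Ask Maestro for a command in plain English (hypothetical session).
./maestro "show the 5 largest files in the current directory"

# Maestro proposes a command such as:
#   du -ah . | sort -rh | head -n 5

# With -e, the proposed command is executed after a confirmation prompt.
./maestro -e "show the 5 largest files in the current directory"
```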
## Installation
- Download the latest binary from the releases page.
- Execute `./maestro -h` to get started.
Tip: Place the binary in a directory within your `$PATH` and rename it to `maestro` for global access, e.g., `sudo mv ./maestro /usr/local/bin/maestro`.
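Assuming you downloaded a Linux binary named `maestro` to your current directory (the filename is an assumption; use whatever name the release asset has), the steps above might look like:

```shell
# Make the downloaded binary executable.
chmod +x ./maestro

# Verify it runs and view the available flags.
./maestro -h

# Optional: install it globally so `maestro` works from any directory.
sudo mv ./maestro /usr/local/bin/maestro
```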
## Offline Usage with Ollama
Important: You need Ollama v0.1.24 or greater.
- Install Ollama from here (or use Ollama's Docker image).
- Download models using `ollama pull <model-name>`.
  - Note: If you haven't changed it, you will need to pull the default model: `ollama pull dolphin-mistral:latest`.
- Start the Ollama server with `ollama serve`.
- Configure Maestro to use Ollama with `./maestro -set-ollama-url <ollama-url>`, for example, `./maestro -set-ollama-url http://localhost:8080`.
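Putting the offline steps together, a first-time setup might look like this sketch. The model is the default named above; the URL is an assumption, so substitute whatever host and port your Ollama server actually listens on.

```shell
# Pull the default model used by Maestro.
ollama pull dolphin-mistral:latest

# Start the Ollama server (leave this running, or run it in the background).
ollama serve &

# Point Maestro at the server. Ollama listens on port 11434 by default;
# adjust the URL if your server is configured differently.
./maestro -set-ollama-url http://localhost:11434

# Now prompt Maestro as usual; inference happens locally.
./maestro "compress the logs directory into logs.tar.gz"
```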
## Online Usage with OpenAI's API
- Obtain an API token from OpenAI.
- Set the token using `./maestro -set-openai-token <your-token>`.
- Choose between GPT4-Turbo with the `-4` flag and GPT3.5-Turbo with the `-3` flag.
  - Example: `./maestro -4 <prompt>`

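For the online path, a complete session under the flags above might look like the following sketch (the token and prompts are placeholders):

```shell
# Store your OpenAI API token (one-time setup; placeholder value).
./maestro -set-openai-token <your-token>

# Use GPT4-Turbo for a prompt...
./maestro -4 "find all TODO comments in this repo"

# ...or GPT3.5-Turbo for faster, cheaper responses.
./maestro -3 "find all TODO comments in this repo"
```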