# Spectre
Spectre is a CLI tool for interacting with llama.cpp servers. It provides an intuitive interface for running, managing, and chatting with local AI models, without complex setup.
This tool is designed to work with llama.cpp servers and is model-agnostic: it sends `gpt-3.5-turbo` as the model name, and the server simply uses whatever model it has loaded. By default it looks for the server at `http://localhost:8080`; you can change this with the `LLAMA_API_URL` environment variable.
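For example, to point Spectre at a llama.cpp server running somewhere other than the default address (the host and port below are just an illustration; only the `LLAMA_API_URL` variable name comes from this README):

```shell
# Point Spectre at a llama.cpp server on a non-default host/port
export LLAMA_API_URL=http://192.168.1.50:8080

# Subsequent invocations now talk to that server
spectre
```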
## Features
- CLI Interface: Intuitive command-line interface for interacting with AI models
- Local Model Support: Works seamlessly with llama.cpp models
- Chat Interface: Provides a chat-like experience for conversations with AI models
- Prompt Management: Easily manage and organize your prompts
- Tool Integration: Built-in tools for code generation, directory management, and more
## Installation
```sh
# Install via npm
npm install -g spectre-ai

# Or clone and build locally
git clone https://github.com/dinubs/spectre
cd spectre
npm install
npm run build
```
## Usage
```sh
# Start the CLI
spectre

# Or run directly with Node
npm run dev
```
## Project Structure
```
src/
├── api/      # API client for communicating with llama.cpp server
├── chat/     # Chat interface components
├── config/   # Configuration constants
├── tools/    # Utility tools for file operations, patching, etc.
├── types/    # TypeScript type definitions
└── utils/    # Utility functions
```
## How It Works
Spectre works by:
- Connecting to a local llama.cpp server
- Providing a user-friendly CLI interface
- Managing prompts and chat history
- Handling model interactions through HTTP requests
- Offering tooling for code generation and file management
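The interaction with the server described above can be sketched in TypeScript. This is a hedged illustration, not Spectre's actual client code: it assumes llama.cpp's OpenAI-compatible `/v1/chat/completions` endpoint, and the `chat` helper and its shape are hypothetical.

```typescript
// Sketch of the kind of HTTP request Spectre makes to a llama.cpp server.
// The base URL falls back to the default documented in this README.
const baseUrl = process.env.LLAMA_API_URL ?? "http://localhost:8080";

async function chat(userMessage: string): Promise<string> {
  const res = await fetch(`${baseUrl}/v1/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      // llama.cpp ignores the model name and uses whatever model is loaded
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: userMessage }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Because the server exposes an OpenAI-compatible API, any client that speaks that protocol works; the model field is sent only to satisfy the request schema.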
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request. This project is primarily AI-written, with human correction when necessary; I like to use Spectre to build Spectre (it's not perfect, but it's getting there).
## License
MIT