## Introduction
KDeps is a YAML-based workflow orchestration framework for building stateful REST APIs. Built on ~92,000 lines of Go code with 70% test coverage, it packages AI tasks, data processing, and API integrations into portable units, eliminating boilerplate code for common patterns like authentication, data flow, storage, and validation.
## Technical Highlights

- **Architecture:** Clean architecture with 5 distinct layers (CLI → Executor → Parser → Domain → Infrastructure)
- **Scale:** 218 source files, 26 CLI commands, 5 resource executor types, 14 working examples
- **Testing:** 13 integration tests + 35 e2e scripts ensuring production readiness
- **Multi-Target:** Native CLI, Docker containers, and WebAssembly for browser execution
## Key Highlights

### YAML-First Configuration

Build workflows using simple, self-contained YAML configuration blocks. No complex programming required: just define your resources and let KDeps handle the orchestration.
```yaml
apiVersion: kdeps.io/v1
kind: Workflow
metadata:
  name: my-agent
  version: "1.0.0"
targetActionId: responseResource
settings:
  apiServerMode: true
  apiServer:
    portNum: 16395
    routes:
      - path: /api/v1/chat
        methods: [POST]
```

### Fast Local Development
Run workflows instantly on your local machine with sub-second startup time. Docker is optional and only needed for deployment.
```bash
# Run locally (instant startup)
kdeps run workflow.yaml

# Hot reload for development
kdeps run workflow.yaml --dev
```

### Unified API
Access data from any source with just two functions: get() and set(). No more memorizing 15+ different function names.
```yaml
# All of these work with get():
query: get('q')                    # Query parameter
auth: get('Authorization')         # Header
data: get('llmResource')           # Resource output
user: get('user_name', 'session')  # Session storage
```

### Mustache Expressions
KDeps v2 supports both traditional expr-lang and simpler Mustache-style expressions. Choose what fits your needs!
```yaml
# Traditional expr-lang (full power)
prompt: "{{ get('q') }}"
time: "{{ info('current_time') }}"

# Mustache (simpler - 56% less typing!)
prompt: "{{q}}"
time: "{{current_time}}"

# Mix them naturally in the same workflow
message: "Hello {{name}}, your score is {{ get('points') * 2 }}"
```

Key Benefits:
- 56% less typing for simple variables
- No whitespace rules: `{{var}}` = `{{ var }}`
- Backward compatible: all existing workflows work
- Natural mixing: simple and complex together

When to use:

- Mustache for simple variables: `{{name}}`, `{{email}}`
- expr-lang for functions and logic: `{{ get('x') }}`, `{{ a + b }}`
### LLM Integration
Use Ollama for local model serving, or connect to any OpenAI-compatible API endpoint.
| Backend | Description |
|---|---|
| Ollama | Local model serving (default) |
| OpenAI-compatible | Any API endpoint with OpenAI-compatible interface |
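As a sketch of what selecting a backend might look like in a workflow (only the `agentSettings`/`models` keys appear in the examples below; the `llmBackend` and `llmAPIBase` field names are assumptions for illustration, not confirmed KDeps configuration):

```yaml
# Hypothetical sketch -- keys other than agentSettings/models are assumed
agentSettings:
  llmBackend: openai                       # assumed key: pick the OpenAI-compatible backend
  llmAPIBase: https://api.example.com/v1   # assumed key: base URL of the endpoint
  models:
    - llama3.2:1b
```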
## Core Features
- Session persistence with SQLite or in-memory storage
- Connection pooling for databases
- Retry logic with exponential backoff
- Response caching with TTL
- CORS configuration for web applications
- WebServer mode for static files and reverse proxying
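A hypothetical sketch of how such features might be grouped under a workflow's `settings` block (only `apiServerMode`, `apiServer`, and `portNum` appear in the examples in this document; the `cors`, `session`, and `cache` keys are illustrative assumptions):

```yaml
# Hypothetical sketch -- cors/session/cache key names are assumptions
settings:
  apiServerMode: true
  apiServer:
    portNum: 16395
    cors:                                   # assumed key: CORS for web apps
      allowOrigins: ["https://app.example.com"]
  session:                                  # assumed key: session persistence
    storage: sqlite                         # or: memory
  cache:                                    # assumed key: response caching
    ttl: 60s
```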
## Quick Start
```bash
# Install KDeps (Mac/Linux)
curl -LsSf https://raw.githubusercontent.com/kdeps/kdeps/main/install.sh | sh

# Or via Homebrew (Mac)
brew install kdeps/tap/kdeps

# Create a new agent interactively
kdeps new my-agent
```

### Example: Simple Chatbot
**workflow.yaml**
```yaml
apiVersion: kdeps.io/v1
kind: Workflow
metadata:
  name: chatbot
  version: "1.0.0"
targetActionId: responseResource
settings:
  apiServerMode: true
  apiServer:
    portNum: 16395
    routes:
      - path: /api/v1/chat
        methods: [POST]
  agentSettings:
    models:
      - llama3.2:1b
```

**resources/llm.yaml**
```yaml
apiVersion: kdeps.io/v1
kind: Resource
metadata:
  actionId: llmResource
  name: LLM Chat
run:
  chat:
    model: llama3.2:1b
    prompt: "{{ get('q') }}"
    jsonResponse: true
    jsonResponseKeys:
      - answer
```

**resources/response.yaml**
```yaml
apiVersion: kdeps.io/v1
kind: Resource
metadata:
  actionId: responseResource
requires:
  - llmResource
run:
  apiResponse:
    success: true
    response:
      data: get('llmResource')
```

Test it:
```bash
kdeps run workflow.yaml

curl -X POST http://localhost:16395/api/v1/chat -d '{"q": "What is AI?"}'
```

## Architecture
KDeps implements clean architecture with five distinct layers:

```
┌─────────────────────────────────────────────────────┐
│ CLI Layer (cmd/)                                    │
│ 26 commands: run, build, validate, package, new...  │
└─────────────────────────────────────────────────────┘
                          ↓
┌─────────────────────────────────────────────────────┐
│ Execution Engine (pkg/executor/)                    │
│ Graph → Engine → Context → Resource Executors       │
└─────────────────────────────────────────────────────┘
                          ↓
┌─────────────────────────────────────────────────────┐
│ Parser & Validator (pkg/parser, validator)          │
│ YAML parsing, expression evaluation                 │
└─────────────────────────────────────────────────────┘
                          ↓
┌─────────────────────────────────────────────────────┐
│ Domain Models (pkg/domain/)                         │
│ Workflow, Resource, RunConfig, Settings             │
└─────────────────────────────────────────────────────┘
                          ↓
┌─────────────────────────────────────────────────────┐
│ Infrastructure (pkg/infra/)                         │
│ Docker, HTTP, Storage, Python, Cloud, ISO, WASM     │
└─────────────────────────────────────────────────────┘
```

### Resource Executors
Five built-in executor types handle different kinds of workloads:
| Executor | Implementation | Features |
|---|---|---|
| LLM | 8 files | Ollama, OpenAI-compatible, streaming, tools |
| HTTP | 2 files | REST APIs, auth, retries, caching |
| SQL | 4 files | 5 database drivers, connection pooling |
| Python | 3 files | uv integration (97% smaller images) |
| Exec | 3 files | Secure shell command execution |
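As an illustration of how the HTTP executor's auth and retry features might be wired up in a resource (the `apiVersion`/`kind`/`metadata`/`run` layout follows the chatbot example above, but the `httpClient`, `headers`, and `retries` key names are assumptions, not confirmed KDeps schema):

```yaml
# Hypothetical sketch -- httpClient/headers/retries key names are assumed
apiVersion: kdeps.io/v1
kind: Resource
metadata:
  actionId: httpResource
run:
  httpClient:                                # assumed executor key
    method: GET
    url: https://api.example.com/items
    headers:
      Authorization: "Bearer {{ get('token') }}"
    retries: 3                               # assumed key: retry with backoff
```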
### Design Patterns
- Clean Architecture: Zero external dependencies in domain layer
- Graph-Based Orchestration: Topological sorting with cycle detection
- Dependency Injection: Interface-based validators and executors
- Registry Pattern: Dynamic executor registration
- Adapter Pattern: Domain → executor config conversion
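As a concrete illustration of the graph-based orchestration: execution order falls out of each resource's `requires` list (as in the chatbot example above), which KDeps topologically sorts, refusing to run if the dependency graph contains a cycle. A minimal sketch, where `fetchResource` is a hypothetical upstream step:

```yaml
# responseResource requires llmResource, which (hypothetically) requires
# fetchResource, so execution order is:
#   fetchResource → llmResource → responseResource
# A resource that directly or transitively required itself would be
# rejected as a cycle before execution starts.
metadata:
  actionId: responseResource
requires:
  - llmResource
```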
## Documentation

### Getting Started
- Installation - Install KDeps on your system
- Quickstart - Build your first workflow
### Configuration
- Workflow - Workflow configuration reference
- Session & Storage - Session persistence and storage
- CORS - Cross-origin resource sharing
- Advanced - Imports, request object, agent settings
### Resources
- Overview - Resource types and common configuration
- LLM (Chat) - Language model integration
- LLM Backends - Supported LLM backends
- HTTP Client - External API calls
- SQL - Database queries
- Python - Python script execution
- Exec - Shell command execution
- API Response - Response formatting
### Concepts
- Unified API - get(), set(), file(), info()
- Expression Helpers - json(), safe(), debug(), default()
- Expressions - Expression syntax
- Expression Functions Reference - Complete function reference
- Advanced Expressions - Advanced expression features
- Request Object - HTTP request data and file access
- Input Object - Property-based request body access
- Tools - LLM function calling
- Items Iteration - Batch processing with item object
- Validation - Input validation and control flow
- Error Handling - onError with retries and fallbacks
- Route Restrictions - HTTP method and route filtering
### Deployment
- Docker - Build and deploy Docker images
- WebServer Mode - Serve frontends and proxy apps
### Tutorials
## Why KDeps v2?
| Feature | v1 (PKL) | v2 (YAML) |
|---|---|---|
| Configuration | PKL (Apple's language) | Standard YAML |
| Functions | 15+ to learn | 2 (get, set) |
| Startup time | ~30 seconds | < 1 second |
| Docker | Required | Optional |
| Python env | Anaconda (~20GB) | uv (97% smaller) |
| Learning curve | 2-3 days | ~1 hour |
## Examples
Explore working examples:
- Simple Chatbot - LLM chatbot
- ChatGPT Clone - Full chat UI
- File Upload - File processing
- HTTP Advanced - API integration
- SQL Advanced - Multi-database
- Batch Processing - Items iteration
- Tools - LLM function calling
- Vision - Image processing
## Community

- GitHub: github.com/kdeps/kdeps
- Issues: Report bugs and request features
- Contributing: CONTRIBUTING.md
- Examples: Browse example workflows