# Acorn

LLM agent framework with structured I/O. Build AI agents with type-safe inputs and outputs, automatic tool calling, and powerful agentic loops.
## Features

- Structured I/O - Pydantic models for inputs and outputs
- Agentic Loops - Multi-turn execution with tool calling
- Auto Tool Schemas - Generated from type hints and docstrings
- Dynamic Tools - Add and remove tools during execution
- Parse Error Recovery - Automatic retry on validation failures
- Step Callbacks - Full control over loop behavior
- LiteLLM Integration - Works with any LLM provider
- Streaming Responses - Real-time output with partial structured updates
- Provider Caching - Reduce latency and cost with prompt caching
- Model Fallbacks - Automatic provider failover for high availability
- Branching Workflows - Spawn sub-agents that extend parent capabilities for parallel analysis and map-reduce patterns
> [!WARNING]
> **Breaking change in v0.8.0: async-first API.**
>
> Acorn v0.8.0 makes `__call__` async. If you're upgrading from v0.7.x, you need to update your call sites:
>
> - Sync code: replace `agent(...)` with `agent.run(...)`
> - Async code: add `await`: `agent(...)` → `await agent(...)`
**Auto-migrate:** paste the following prompt into your AI coding agent to update your codebase automatically:

```text
Find all usages of acorn Module subclasses being called directly (e.g. `agent(...)`, `MyModule()(...)`).
- If the call site is in a sync function or top-level code, replace it with `agent.run(...)`.
- If the call site is in an async function, add `await` before the call: `await agent(...)`.
Do NOT change the class definitions themselves, only the call sites.
```
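The two call styles can be pictured with a minimal sketch of the pattern the warning describes. This is not acorn's real `Module`; it only assumes, as the warning states, that `__call__` is async and that `run(...)` is the sync entry point (here sketched as a thin `asyncio.run` wrapper):

```python
import asyncio


class Module:
    """Minimal stand-in for an async-first agent class (not acorn's real Module)."""

    async def __call__(self, **inputs):
        # In v0.8.0+, calling the agent directly is a coroutine.
        return {"echo": inputs}

    def run(self, **inputs):
        # Sync entry point for non-async call sites.
        return asyncio.run(self(**inputs))


agent = Module()

# Sync call site: v0.7.x-style `agent(...)` becomes `agent.run(...)`
result = agent.run(text="hello")  # {'echo': {'text': 'hello'}}


# Async call site: add `await` before the call
async def main():
    return await agent(text="hello")


print(result)
print(asyncio.run(main()))
```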
## Quick Start

### Installation
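Install the package from PyPI (it is published under the `acorn` name, per the acknowledgments below):

```shell
pip install acorn
```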
Set your API key:
```shell
# For Anthropic Claude
export ANTHROPIC_API_KEY="your-key-here"

# Or for OpenAI
export OPENAI_API_KEY="your-key-here"

# Or any other LiteLLM-supported provider
```
### Single-Turn Example
```python
from pydantic import BaseModel, Field

from acorn import Module


class Input(BaseModel):
    text: str = Field(description="The text to summarize")
    max_words: int = Field(default=100, description="Maximum words in summary")


class Output(BaseModel):
    summary: str = Field(description="The concise summary")
    word_count: int = Field(description="Number of words in summary")


class Summarizer(Module):
    """Summarize text concisely."""

    initial_input = Input
    final_output = Output
    model = "anthropic/claude-sonnet-4-5-20250514"


# Use it (sync call site; see the v0.8.0 note above)
summarizer = Summarizer()
result = summarizer.run(
    text="Long article text here...",
    max_words=50,
)
print(result.summary)
print(f"Words: {result.word_count}")
```
### Multi-Turn Agentic Loop
```python
from pydantic import BaseModel, Field

from acorn import Module, tool


class Input(BaseModel):
    topic: str = Field(description="Research topic")
    depth: str = Field(default="shallow", description="Research depth")


class Output(BaseModel):
    findings: str = Field(description="Summary of findings")
    sources: list[str] = Field(description="Sources consulted")


class ResearchAgent(Module):
    """Research assistant with tools."""

    initial_input = Input
    final_output = Output
    max_steps = 5  # Enable agentic loop
    model = "anthropic/claude-sonnet-4-5-20250514"

    @tool
    def search(self, query: str) -> list:
        """Search for information."""
        # Your search implementation
        return ["result1", "result2"]

    @tool
    def analyze(self, data: str) -> str:
        """Analyze collected data."""
        # Your analysis implementation
        return f"Analysis: {data}"

    def on_step(self, step):
        """Called after each step."""
        print(f"Step {step.counter}")

        # Early termination if done
        if len(step.tool_results) >= 3:
            step.finish(
                findings="Sufficient data collected",
                sources=["source1", "source2"],
            )
        return step


# Use it (sync call site; see the v0.8.0 note above)
agent = ResearchAgent()
result = agent.run(topic="Large Language Models", depth="shallow")
```
## Documentation
- Getting Started - Installation and first steps
- Module Reference - Complete Module API documentation
- Branching - Sub-agents and parallel processing
## Core Concepts

### Module
Base class for LLM agents. Configure with:
- `model` - LLM to use (required, no default)
- `temperature` - Sampling temperature
- `max_tokens` - Maximum tokens to generate
- `max_steps` - Max agentic loop iterations (`None` = single-turn)
- `initial_input` - Pydantic model for input schema
- `final_output` - Pydantic model for output schema
- `tools` - List of available tools
- `cache` - Enable provider-level prompt caching
- `model_fallbacks` - List of fallback models for automatic failover
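The fallback behavior amounts to trying each model in order until one succeeds. Below is a minimal, generic sketch of that pattern, not acorn's actual implementation; `call_with_fallbacks` and `flaky_call` are hypothetical names for illustration:

```python
def call_with_fallbacks(call_model, model, model_fallbacks, **inputs):
    """Try the primary model, then each fallback in order (generic sketch)."""
    errors = []
    for candidate in [model, *model_fallbacks]:
        try:
            return call_model(candidate, **inputs)
        except Exception as exc:  # e.g. rate limit, timeout, provider outage
            errors.append((candidate, exc))
    raise RuntimeError(f"All models failed: {errors}")


# Hypothetical provider call that fails for the primary model
def flaky_call(model, **inputs):
    if model == "anthropic/claude-sonnet-4-5-20250514":
        raise TimeoutError("provider timeout")
    return f"answered by {model}"


print(call_with_fallbacks(
    flaky_call,
    model="anthropic/claude-sonnet-4-5-20250514",
    model_fallbacks=["openai/gpt-4o"],
))  # answered by openai/gpt-4o
```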
### Tools
Functions the LLM can call:
```python
@tool
def search(query: str, limit: int = 10) -> list:
    """Search for information.

    Args:
        query: The search query
        limit: Maximum results to return
    """
    return search_api(query, limit)
```
Schema is automatically generated from type hints and docstring.
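To illustrate the idea, here is a generic sketch of deriving a tool schema from a function signature using only the standard library. This is not acorn's actual generator; `tool_schema` and the type mapping are illustrative assumptions:

```python
import inspect


def tool_schema(fn):
    """Build a minimal JSON-schema-like description from type hints (generic sketch)."""
    type_map = {str: "string", int: "integer", float: "number", bool: "boolean", list: "array"}
    sig = inspect.signature(fn)
    properties, required = {}, []
    for name, param in sig.parameters.items():
        properties[name] = {"type": type_map.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            # Parameters without defaults are required
            required.append(name)
    return {
        "name": fn.__name__,
        "description": fn.__doc__.strip().splitlines()[0] if fn.__doc__ else "",
        "parameters": {"type": "object", "properties": properties, "required": required},
    }


def search(query: str, limit: int = 10) -> list:
    """Search for information."""
    return []


schema = tool_schema(search)
# {'name': 'search', 'description': 'Search for information.',
#  'parameters': {'type': 'object',
#                 'properties': {'query': {'type': 'string'},
#                                'limit': {'type': 'integer'}},
#                 'required': ['query']}}
```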
### Step Callback
Control agentic loop execution:
```python
def on_step(self, step):
    # Access step info
    print(f"Step {step.counter}")
    print(f"Tools called: {[tc.name for tc in step.tool_calls]}")

    # Dynamic tool management
    step.add_tool(new_tool)
    step.remove_tool("old_tool")

    # Early termination
    if condition:
        step.finish(result="Early exit")

    return step
```
## Examples
Try them live on the Gradio app or browse the source in examples/:
| Example | Category | Description |
|---|---|---|
| Simple Q&A | Basic | Single-turn question answering with structured output |
| HN Production Readiness | Agentic | Checks if a trending HN project is production-ready |
| Documentation Coverage | Agentic | Scores documentation coverage of a GitHub repo (0โ100) |
| Bus Factor Calculator | Branching | Calculates the bus factor of a GitHub repository |
| License Compatibility | Agentic | Checks dependency license compatibility for conflicts |
| Dependency Bloat Scanner | Branching | Finds redundant and overlapping libraries in your deps |
## Testing

```shell
# Run all tests
pytest

# With coverage
pytest --cov=acorn

# Specific test file
pytest tests/test_agentic_loop.py -v
```
Current status: 201 tests passing, 85% coverage
## Roadmap

### Completed

- Single-turn execution
- Multi-turn agentic loops
- Tool calling with auto-schema generation
- Parse error recovery
- Dynamic tool management
- Step callbacks
- Streaming responses with partial structured output
- Forced termination strategies
- Provider caching
- Model fallbacks
- Branching workflows
- Async-first API (v0.8.0)

### Planned
- More docs
- Integration examples with different providers (vector DBs, observability tools, etc.)
## Contributing
Contributions welcome! Please:
- Check open issues for areas to help
- Write tests for new features (maintain >80% coverage)
- Update documentation
- Add examples for new features
## Acknowledgments
Thanks to @rosenbrockc for donating the acorn pip package name.