Botanu is a platform that helps AI companies understand the real cost of their AI features per customer, enabling outcome-based pricing and smarter scaling. This SDK is built on OpenTelemetry and provides event-level cost attribution for AI workflows. For more information, email deborah@botanu.ai.
## Getting Started
An event is one business transaction — resolving a support ticket, processing an order, generating a report. Each event may involve multiple runs (LLM calls, retries, sub-workflows) across multiple services. By correlating every run to a stable event_id, Botanu gives you per-event cost attribution and outcome tracking without sampling artefacts.
### Install
```bash
export BOTANU_API_KEY=<your-api-key>
```
Wrap your agent:
```python
import botanu

with botanu.event(event_id=ticket.id, customer_id=user.id, workflow="Support"):
    agent.run(ticket)
```
That single wrap captures every LLM call, HTTP call, and DB call made inside the block and stamps them with `event_id`, `customer_id`, and `workflow`.
### Decorator form
```python
import botanu

@botanu.event(
    workflow="Support",
    event_id=lambda ticket: ticket.id,
    customer_id=lambda ticket: ticket.user_id,
)
def handle_ticket(ticket):
    return agent.run(ticket)
```
Works for both sync and async def functions.
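The dual sync/async support can be sketched with the standard dispatch pattern such a decorator relies on (`inspect.iscoroutinefunction`). This is not Botanu's source, and the handler names below are hypothetical:

```python
import asyncio
import functools
import inspect

def traced(fn):
    # One decorator, two wrappers: pick the async wrapper only for coroutine
    # functions so `await` semantics are preserved.
    if inspect.iscoroutinefunction(fn):
        @functools.wraps(fn)
        async def async_wrapper(*args, **kwargs):
            # ...span would be opened here...
            return await fn(*args, **kwargs)
        return async_wrapper

    @functools.wraps(fn)
    def sync_wrapper(*args, **kwargs):
        # ...span would be opened here...
        return fn(*args, **kwargs)
    return sync_wrapper

@traced
def handle(ticket):
    return f"handled {ticket}"

@traced
async def handle_async(ticket):
    return f"handled {ticket}"

print(handle("tkt-1"))                     # handled tkt-1
print(asyncio.run(handle_async("tkt-2")))  # handled tkt-2
```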
### Multi-phase workflows
```python
with botanu.event(event_id=ticket.id, customer_id=user.id, workflow="Support"):
    with botanu.step("retrieval"):
        docs = vector_db.query(ticket.query)
    with botanu.step("generation"):
        response = llm.complete(docs)
```
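A minimal sketch of what per-phase accounting looks like (illustrative only, not Botanu's implementation): each step records its name and wall-clock duration on exit, so cost can be attributed to individual phases.

```python
import time
from contextlib import contextmanager

records = []

@contextmanager
def step(name):
    # Record the phase name and duration when the block exits, even on error.
    start = time.perf_counter()
    try:
        yield
    finally:
        records.append({"step": name, "seconds": time.perf_counter() - start})

with step("retrieval"):
    docs = ["relevant passage"]
with step("generation"):
    answer = f"drafted from {len(docs)} doc(s)"

print([r["step"] for r in records])  # ['retrieval', 'generation']
```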
See the Quickstart for the full five-minute walkthrough.
## Documentation
| Topic | Description |
|---|---|
| Installation | Install and configure |
| Quickstart | Zero-to-first-trace in five minutes |
| Configuration | Env vars, YAML, trusted-host auth |
| Run Context | Events, runs, retries, baggage |
| Context Propagation | Cross-service and queue propagation |
| Architecture | SDK + collector split |
| LLM Tracking | Manual LLM instrumentation (usually not needed) |
| Data Tracking | DB, storage, messaging (usually not needed) |
| Content Capture | Prompt/response capture for eval, with PII scrubbing |
| Outcomes | Diagnostic annotations and server-side resolution |
| Auto-Instrumentation | Supported libraries |
| Kubernetes | Zero-code instrumentation at scale |
| Existing OTel / Datadog | Brownfield coexistence |
| event / step API | Primary API reference |
| Best Practices | Patterns that work |
| Anti-Patterns | Patterns that break cost attribution |
## Requirements
- Python 3.9 or newer
- An OpenTelemetry Collector (Botanu Cloud runs one for you; self-hosted is supported too)
## Contributing
Contributions are welcome. Read the Contributing Guide before opening a pull request.
All commits require DCO sign-off:
```bash
git commit -s -m "Your commit message"
```

Looking for a place to start? See the good first issues.
## Community
- GitHub Discussions — questions, ideas, show & tell
- GitHub Issues — bugs and feature requests
## Governance
See GOVERNANCE.md for roles, decision-making, and the contributor ladder. Current maintainers are in MAINTAINERS.md.
## Security
Report security vulnerabilities via GitHub Security Advisories or see SECURITY.md. Do not file a public issue.
## Code of Conduct
This project follows the LF Projects Code of Conduct. See CODE_OF_CONDUCT.md.