republic



Build LLM workflows like normal Python while keeping a full audit trail by default.

Visit https://getrepublic.org for concepts, guides, and API reference.

Republic is a tape-first LLM client: messages, tool calls, tool results, errors, and usage are all recorded as structured data. You can make the workflow explicit first, then decide where intelligence should be added.
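To make "recorded as structured data" concrete, here is a minimal sketch of the tape idea in plain Python. The `Tape` and `TapeEntry` names are illustrative assumptions, not Republic's actual schema; the point is only that every step of a workflow becomes an ordered, inspectable record.

```python
from dataclasses import dataclass, field
from typing import Any

# Illustrative only: Republic's real tape schema is its own.
# This sketch shows the idea of recording each workflow step as data.
@dataclass
class TapeEntry:
    kind: str  # e.g. "message", "tool_call", "tool_result", "error", "usage"
    payload: dict[str, Any] = field(default_factory=dict)

@dataclass
class Tape:
    entries: list[TapeEntry] = field(default_factory=list)

    def record(self, kind: str, **payload: Any) -> None:
        """Append one structured event to the audit trail."""
        self.entries.append(TapeEntry(kind, payload))

tape = Tape()
tape.record("message", role="user", content="Describe Republic in one sentence.")
tape.record("tool_call", name="search", args={"q": "republic llm"})
tape.record("usage", prompt_tokens=12, completion_tokens=30)
print([e.kind for e in tape.entries])  # the trail, in order
```

Because the trail is plain data, replaying or auditing a run is just iterating over `entries`.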

Quick Start

from __future__ import annotations

import os

from republic import LLM

api_key = os.getenv("LLM_API_KEY")
if not api_key:
    raise RuntimeError("Set LLM_API_KEY before running this example.")

llm = LLM(model="openrouter:openrouter/free", api_key=api_key)
result = llm.chat("Describe Republic in one sentence.", max_tokens=48)
print(result)

Why It Feels Natural

  • Plain Python: The main flow is regular functions and branches, no extra DSL.
  • Structured error handling: Errors are explicit and typed, so retry and fallback logic stays deterministic.
  • Tools without magic: Supports both automatic and manual tool execution with clear debugging and auditing.
  • Tape-first memory: Use anchor/handoff to bound context windows and replay full evidence.
  • Event streaming: Subscribe to text deltas, tool calls, tool results, usage, and final state.
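The "structured error handling" bullet can be sketched in plain Python. The retry helper below is illustrative, not part of Republic's API; in real code you would catch Republic's typed errors rather than bare `Exception`.

```python
import time

def with_retries(call, attempts: int = 3, delay: float = 0.1):
    """Retry a callable, re-raising the last error if every attempt fails.

    Illustrative sketch: catching Exception stands in for Republic's
    typed errors, which let retry/fallback logic stay deterministic.
    """
    last_exc = None
    for _ in range(attempts):
        try:
            return call()
        except Exception as exc:  # replace with Republic's typed errors
            last_exc = exc
            time.sleep(delay)
    raise last_exc

# Hypothetical usage with the client from the Quick Start:
# result = with_retries(lambda: llm.chat("Describe Republic in one sentence."))
```

Because errors are explicit values on the tape rather than silent swallows, a helper like this can branch on error type instead of guessing from strings.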

Authentication

Republic supports:

  • static api_key
  • provider maps such as api_key={"openai": "...", "anthropic": "..."}
  • dynamic api_key_resolver
  • provider-specific OAuth resolvers for OpenAI Codex and GitHub Copilot
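The provider-map form appears verbatim above; a dynamic resolver might look like the sketch below. The `resolve_api_key` function and its signature are assumptions for illustration; check the docs for the real `api_key_resolver` contract.

```python
import os

def resolve_api_key(provider: str) -> str:
    """Illustrative resolver: pick a key per provider from environment variables.

    Assumption: the resolver receives a provider name and returns a key string;
    Republic's actual api_key_resolver contract may differ.
    """
    keys = {
        "openai": os.getenv("OPENAI_API_KEY", ""),
        "anthropic": os.getenv("ANTHROPIC_API_KEY", ""),
    }
    key = keys.get(provider, "")
    if not key:
        raise KeyError(f"No API key configured for provider {provider!r}")
    return key

# Hypothetical wiring:
# llm = LLM(model="openai:gpt-4o-mini", api_key_resolver=resolve_api_key)
```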

Detailed setup for each option is covered in the documentation rather than here.

Development

See CONTRIBUTING.md for local setup, testing, and release guidance.

License

Apache 2.0


This project is derived from lightning-ai/litai and inspired by pydantic/pydantic-ai; both are worth a look as well.