Open-source LLM tracing and observability with remote tool invocation.
Quick Start · SDKs · Features · Langfuse Compatible · Development
## Quick Start
Self-host LightRace with a single command using the CLI:
Install the CLI:
```bash
# npm (recommended)
npm install -g @lightrace/cli

# Homebrew
brew install SKE-Labs/tap/lightrace

# Go
go install github.com/SKE-Labs/lightrace-cli@latest
```
Start the server:
```bash
lightrace init   # generate config with secrets
lightrace start  # pull images & start all services
```
Open http://localhost:3000 and log in with `demo@lightrace.dev` / `password`.
Check status:
```bash
lightrace status        # service health + URLs
lightrace status -o env # export SDK connection vars
```
CLI commands: `init` · `start` · `stop` · `status` · `logs [service]` · `db migrate` · `db reset` · `version`

Run `lightrace start --exclude frontend` for API-only mode.
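The `-o env` output is meant to be consumed by scripts and SDK setup. As a sketch, assuming the output is plain `KEY=value` lines (the exact variable names and format here are illustrative, not confirmed), it could be loaded into a process environment like this:

```python
import os

def parse_env_output(text: str) -> dict[str, str]:
    """Parse `KEY=value` lines (e.g. from `lightrace status -o env`) into a dict.

    The exact output format is an assumption; adjust if the CLI differs.
    """
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env

# Hypothetical output of `lightrace status -o env`:
sample = """
LIGHTRACE_HOST=http://localhost:3000
LIGHTRACE_PUBLIC_KEY=pk-lt-demo
"""

config = parse_env_output(sample)
os.environ.update(config)  # make the values visible to an SDK in this process
```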
## SDKs
Install the SDK for your language and start tracing:
### Python
```python
from lightrace import Lightrace, trace

lt = Lightrace(
    public_key="pk-lt-demo",
    secret_key="sk-lt-demo",
    host="http://localhost:3000",
)

@trace()
def run_agent(query: str):
    return search(query)

@trace(type="tool")  # remotely invocable from the UI
def search(query: str) -> list:
    return ["result"]

run_agent("hello")
lt.flush()
```
### TypeScript / JavaScript
```typescript
import { LightRace, trace } from "lightrace";

const lt = new LightRace({
  publicKey: "pk-lt-demo",
  secretKey: "sk-lt-demo",
  host: "http://localhost:3000",
});

const search = trace("search", { type: "tool" }, async (query: string) => {
  return ["result"];
});

const runAgent = trace("run-agent", async (query: string) => {
  return search(query);
});

await runAgent("hello");
lt.flush();
```
## Integrations
Both SDKs provide integrations for popular frameworks. Here are a few examples:
### LangChain (Python)
```python
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

from lightrace import Lightrace
from lightrace.integrations.langchain import LightraceCallbackHandler

lt = Lightrace(public_key="pk-lt-demo", secret_key="sk-lt-demo")
handler = LightraceCallbackHandler(client=lt)

model = ChatOpenAI(model="gpt-4o-mini", max_tokens=256)
response = model.invoke(
    [HumanMessage(content="What is the speed of light?")],
    config={"callbacks": [handler]},
)

lt.flush()
lt.shutdown()
```
### Claude Agent SDK (Python)
```python
import anyio
from claude_agent_sdk import AssistantMessage, ResultMessage, TextBlock

from lightrace import Lightrace
from lightrace.integrations.claude_agent_sdk import traced_query

lt = Lightrace(public_key="pk-lt-demo", secret_key="sk-lt-demo")

async def main():
    async for message in traced_query(
        prompt="Read the files in the current directory and summarize them.",
        options={"max_turns": 5},
        client=lt,
        trace_name="file-summarizer",
    ):
        if isinstance(message, AssistantMessage):
            for block in message.content:
                if isinstance(block, TextBlock):
                    print(block.text)
        elif isinstance(message, ResultMessage):
            print(f"Cost: ${message.total_cost_usd:.4f}")
    lt.flush()
    lt.shutdown()

anyio.run(main)
```
### Claude Agent SDK (TypeScript)
```typescript
import { Lightrace } from "lightrace";
import { tracedQuery } from "lightrace/integrations/claude-agent-sdk";

const lt = new Lightrace({ publicKey: "pk-lt-demo", secretKey: "sk-lt-demo" });

for await (const message of tracedQuery({
  prompt: "Read the files in the current directory and summarize them.",
  options: { maxTurns: 5 },
  client: lt,
  traceName: "file-summarizer",
})) {
  if (message.type === "result") {
    const r = message as Record<string, unknown>;
    console.log(r.result);
  }
}

lt.flush();
await lt.shutdown();
```
See the full list of integrations: Python SDK · JS SDK
## Trace Types
Both SDKs support the same observation types:
| Type | Description |
|---|---|
| (default) | Root trace — top-level unit of work |
| `span` | Generic child operation |
| `generation` | LLM call (tracks model, tokens, cost) |
| `tool` | Tool function — remotely invocable from the dashboard |
| `chain` | Logical grouping of steps |
| `event` | Point-in-time marker |
Set `invoke=False` (Python) or `invoke: false` (JS) on a tool to trace it without registering it for remote invocation.
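To make the nesting behavior concrete, here is a minimal sketch of how a `@trace` decorator can track parent/child observations with `contextvars`. This illustrates the general technique only — it is not LightRace's actual implementation, and the in-memory `observations` list stands in for a real exporter:

```python
import contextvars
import functools

# The currently open observation (None at the root of a call chain).
_current = contextvars.ContextVar("current_observation", default=None)
observations = []  # collected records, in place of a real exporter

def trace(type="span", invoke=True):
    """Record a function call as an observation nested under its caller."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            parent = _current.get()
            record = {
                "name": fn.__name__,
                # A call with no parent becomes the root trace.
                "type": type if parent else "trace",
                "parent": parent["name"] if parent else None,
                "invoke": invoke,
            }
            observations.append(record)
            token = _current.set(record)
            try:
                return fn(*args, **kwargs)
            finally:
                _current.reset(token)
        return wrapper
    return decorator

@trace(type="tool")
def search(query):
    return ["result"]

@trace()
def run_agent(query):
    return search(query)

run_agent("hello")
# observations now holds a root "run_agent" trace with a nested "search" tool.
```

The same pattern extends to `generation`, `chain`, and `event` by attaching the extra metadata each type carries.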
## Features
- Trace & observe — hierarchical trace viewer with token usage breakdown, latency, and cost
- Remote tool invocation — re-run any `@trace(type="tool")` function directly from the dashboard
- Real-time updates — live trace streaming via WebSocket (Redis Pub/Sub)
- Tools page — see connected SDK instances, registered tools, and their schemas
- API key management — create and rotate keys per project
- Multi-project RBAC — Owner, Admin, Member, and Viewer roles per project
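Remote tool invocation needs two pieces on the SDK side: a registry that captures each tool's call signature (so the dashboard can render an input form), and a dispatcher that runs a tool by name with JSON arguments. A minimal sketch of that idea — hypothetical names, not the actual LightRace internals:

```python
import inspect

TOOLS = {}  # tool name -> (function, parameter schema)

def register_tool(fn):
    """Derive a simple parameter schema from the signature and register the tool."""
    params = {}
    for name, p in inspect.signature(fn).parameters.items():
        ann = p.annotation
        params[name] = ann.__name__ if ann is not inspect.Parameter.empty else "any"
    TOOLS[fn.__name__] = (fn, params)
    return fn

def invoke_tool(name, arguments):
    """Dispatch an invocation request (tool name + JSON args) to the function."""
    fn, _ = TOOLS[name]
    return fn(**arguments)

@register_tool
def search(query: str) -> list:
    return [f"result for {query}"]

schema = TOOLS["search"][1]                        # {"query": "str"}
result = invoke_tool("search", {"query": "hello"})  # ["result for hello"]
```

In the real system the registered schemas are what the Tools page lists, and the dispatcher runs in the connected SDK process.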
## Why LightRace?
| | LightRace | Langfuse (OSS) | LangSmith |
|---|---|---|---|
| Self-host (single command) | Yes | Manual setup | No (cloud-only) |
| Remote tool invocation | Yes | No | No |
| Langfuse SDK compatible | Yes | — | No |
| Claude Agent SDK integration | Yes | No | No |
| Real-time trace streaming | Yes | Polling | Yes |
| Open source (Apache 2.0) | Yes | Yes (EE features paid) | No |
## Langfuse Compatibility
LightRace is drop-in compatible with Langfuse v3 and v4 SDKs. Point any Langfuse SDK at your LightRace instance:
```bash
export LANGFUSE_PUBLIC_KEY=pk-lt-demo
export LANGFUSE_SECRET_KEY=sk-lt-demo
export LANGFUSE_HOST=http://localhost:3000
```
Both `POST /api/public/ingestion` (v3 JSON batch) and OpenTelemetry (v4 OTLP) endpoints are supported.
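For reference, the v3 ingestion format is a JSON batch of typed events posted with Basic auth (public key as username, secret key as password). A stdlib-only sketch of building such a request — the event body fields here are illustrative; consult the Langfuse ingestion spec for the full schema:

```python
import base64
import json
import urllib.request
import uuid
from datetime import datetime, timezone

host = "http://localhost:3000"
public_key, secret_key = "pk-lt-demo", "sk-lt-demo"

now = datetime.now(timezone.utc).isoformat()
payload = {
    "batch": [
        {
            "id": str(uuid.uuid4()),  # event id, used for deduplication
            "type": "trace-create",
            "timestamp": now,
            "body": {"id": str(uuid.uuid4()), "name": "manual-trace"},
        }
    ]
}

auth = base64.b64encode(f"{public_key}:{secret_key}".encode()).decode()
req = urllib.request.Request(
    f"{host}/api/public/ingestion",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json", "Authorization": f"Basic {auth}"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to send against a running instance
```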
## Architecture
Monorepo with three packages:
```
packages/
  shared/    — Prisma schema, DB + Redis clients, Zod validation
  backend/   — Hono server (tRPC + REST ingestion + OTel + tool registry)
  frontend/  — Next.js 15 dashboard + Auth.js
```
Infrastructure: PostgreSQL + Redis + Caddy, all managed as containers.
All traffic enters through Caddy on a single port (default 3000):
| Route | Target |
|---|---|
| `/api/public/*` | Backend — SDK ingestion, OTLP, tool registration |
| `/trpc/*` | Backend — tRPC queries and mutations |
| `/ws` | Backend — WebSocket (real-time subscriptions) |
| `/*` | Frontend — dashboard and auth |
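The route table resolves by most-specific prefix first, with the frontend as the catch-all. A toy dispatcher illustrating the same precedence (Caddy handles this in the real deployment; this is only a model of the rules):

```python
# Ordered most-specific first, mirroring the route table above.
ROUTES = [
    ("/api/public/", "backend"),
    ("/trpc/", "backend"),
    ("/ws", "backend"),
    ("/", "frontend"),
]

def upstream(path: str) -> str:
    """Return which service a request path is proxied to (simplified prefix match)."""
    for prefix, target in ROUTES:
        if path.startswith(prefix):
            return target
    return "frontend"
```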
## Development
### Setup
```bash
# Start infrastructure (Postgres + Redis)
docker compose up -d postgres redis

# Install dependencies
pnpm install

# Copy per-package env files
cp packages/shared/.env.example packages/shared/.env
cp packages/backend/.env.example packages/backend/.env
cp packages/frontend/.env.example packages/frontend/.env

# Generate Prisma client, run migrations, seed demo data
pnpm db:generate && pnpm db:migrate && pnpm db:seed

# Start all services (frontend :3001, backend :3002)
pnpm dev
```
Dashboard in dev mode: http://localhost:3001
### Scripts
| Command | Description |
|---|---|
| `pnpm dev` | Start all services (Turborepo) |
| `pnpm build` | Build all packages |
| `pnpm typecheck` | TypeScript check |
| `pnpm test` | Run tests (Vitest) |
| `pnpm format` | Format (Prettier) |
| `pnpm db:generate` | Generate Prisma client |
| `pnpm db:migrate` | Run Prisma migrations |
| `pnpm db:seed` | Seed demo data |
| `pnpm db:studio` | Open Prisma Studio |
## Tech Stack
Next.js 15 · Hono · tRPC v11 · Prisma 7 · PostgreSQL · Redis · Caddy · Auth.js v5 · Tailwind CSS v4 · Turborepo
## Environment Variables
Docker Compose (`.env`):
| Variable | Description |
|---|---|
| `GATEWAY_PORT` | Caddy exposed port (default: 3000) |
| `PUBLIC_URL` | Public URL of the instance |
| `DB_PASSWORD` | PostgreSQL password |
| `AUTH_SECRET` | Auth.js JWT secret |
| `INTERNAL_SECRET` | Frontend-backend shared secret |
| `LIGHTRACE_DOMAIN` | Set for automatic HTTPS via Caddy |
Per-package `.env` files are documented in each package's `.env.example`.
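A sketch of validating these variables at startup, with the default taken from the table above (the required/optional split is an assumption based on the descriptions — `LIGHTRACE_DOMAIN` is treated as optional since it only enables HTTPS):

```python
import os

REQUIRED = ["PUBLIC_URL", "DB_PASSWORD", "AUTH_SECRET", "INTERNAL_SECRET"]
DEFAULTS = {"GATEWAY_PORT": "3000"}  # per the table: Caddy port defaults to 3000

def load_config(env=os.environ):
    """Fail fast on missing secrets; fill in documented defaults."""
    missing = [k for k in REQUIRED if not env.get(k)]
    if missing:
        raise RuntimeError(f"Missing required env vars: {', '.join(missing)}")
    cfg = {k: env.get(k, default) for k, default in DEFAULTS.items()}
    cfg.update({k: env[k] for k in REQUIRED})
    if env.get("LIGHTRACE_DOMAIN"):  # optional: turns on automatic HTTPS
        cfg["LIGHTRACE_DOMAIN"] = env["LIGHTRACE_DOMAIN"]
    return cfg
```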
## Related
| Repository | Description |
|---|---|
| Lightrace CLI | Self-host with a single command |
| lightrace-python | Python SDK |
| lightrace-js | TypeScript/JavaScript SDK |
## License
Apache 2.0