# AgentUQ

Single-pass runtime reliability gate for LLM agents using token logprobs.
AgentUQ turns provider-native token logprobs into localized runtime decisions for agent steps. It does not claim to know whether an output is true. It tells you where a generation looked brittle or ambiguous and whether the workflow should continue, annotate the trace, regenerate a risky span, retry the step, dry-run verify, ask for confirmation, or block execution.
## How you can use it
- Catch brittle action-bearing spans before execution: SQL clauses, tool arguments, selectors, URLs, paths, shell flags, and JSON leaves
- Localize risk to the exact span that matters instead of treating the whole response as one opaque score
- Spend expensive verification selectively by using AgentUQ as the first-pass gate
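The first-pass-gate pattern above can be sketched in plain Python. This is an illustrative stand-in, not the AgentUQ API: `cheap_gate` and `expensive_verify` are hypothetical callables you would back with an AgentUQ analysis and your own verifier.

```python
# Hypothetical first-pass gate: run a cheap logprob-based check on every
# step, and spend the expensive verifier only on the spans it flags.
# `cheap_gate` and `expensive_verify` are illustrative stand-ins, not
# AgentUQ APIs.

def gated_verify(step_output, cheap_gate, expensive_verify):
    """Return True if the step is safe to execute."""
    risky_spans = cheap_gate(step_output)  # fast, runs on every step
    if not risky_spans:
        return True  # nothing looked brittle: skip the expensive verifier
    # Only flagged spans pay the verification cost.
    return all(expensive_verify(span) for span in risky_spans)
```

The point of the design is that the expensive verifier's cost scales with the number of flagged spans, not with total agent traffic.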
## Install

```bash
pip install agentuq
```

For the OpenAI example below, also install the provider SDK:

```bash
pip install openai
```

For local development and contributions:

```bash
python -m venv .venv
. .venv/bin/activate
pip install -e .[dev]
```

Examples below assume the public package and import namespace `agentuq`.
## Integration status
OpenAI Responses API is the stable integration path in the current docs. Every other documented provider, gateway, and framework integration is preview, including OpenAI Chat Completions, OpenRouter, LiteLLM, Gemini, Fireworks, Together, LangChain, LangGraph, and the OpenAI Agents SDK.
## Minimal loop
```python
from openai import OpenAI

from agentuq import Analyzer, UQConfig
from agentuq.adapters.openai_responses import OpenAIResponsesAdapter

client = OpenAI()

# Request parameters are reused when capturing the response and when
# reporting provider capabilities, so define them once.
request_params = {
    "model": "gpt-4.1-mini",
    "include": ["message.output_text.logprobs"],
    "top_logprobs": 5,
    "temperature": 0.0,
    "top_p": 1.0,
}

response = client.responses.create(
    input="Return the single word Paris.",
    **request_params,
)

adapter = OpenAIResponsesAdapter()
analyzer = Analyzer(UQConfig(policy="balanced", tolerance="strict"))

record = adapter.capture(response, request_params)
result = analyzer.analyze_step(
    record,
    adapter.capability_report(response, request_params),
)
print(result.pretty())
```
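Once you have an analysis result, you act on its decision. The dispatch below is a minimal sketch in plain Python of the decision taxonomy described at the top of this README (continue, annotate, regenerate, retry, dry-run, confirm, block); the decision strings and the idea of a single `decision` field are assumptions for illustration, not the AgentUQ result schema.

```python
# Hypothetical sketch: dispatch on a runtime reliability decision.
# The decision names mirror the taxonomy in this README; the exact
# strings and handler signature are assumptions.

def handle(decision, execute, regenerate, ask_user):
    """Map a reliability decision to a workflow action."""
    if decision in ("regenerate", "retry"):
        return regenerate()       # re-produce the risky span or step
    if decision == "confirm":
        return ask_user()         # require human sign-off first
    if decision == "block":
        raise RuntimeError("step blocked by reliability gate")
    # "continue", "annotate", and "dry_run" all proceed to execution;
    # annotation and dry-run verification would happen alongside it.
    return execute()
```

A real integration would read the decision off the object returned by `analyze_step` and pass in your agent's own execution, regeneration, and confirmation callbacks.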
## Documentation
The web docs are built with Docusaurus from the canonical Markdown in docs/ and the site app in website/.
- Start here: docs/index.mdx
- Get started: docs/get-started/index.md
- Provider and framework quickstarts: docs/quickstarts/index.md
- Concepts: docs/concepts/index.md
- API reference: docs/concepts/public_api.md
- Maintainers: docs/maintainers/index.md
- Contributing: CONTRIBUTING.md
## Repo layout

- src/agentuq: library code
- examples: usage examples
- tests: offline, contract, and optional live tests
- docs: canonical documentation content
- website: Docusaurus site and Vercel-facing app
## Testing

Default pytest runs only offline tests:

```bash
python -m pytest
```

Live smoke checks are manual and opt-in:

```bash
AGENTUQ_RUN_LIVE=1 python -m pytest -m live
```
