SudoAgent: runtime guardrails for AI agent tool calls (policy, approval, audit)

github.com

1 point by naolbeyene 3 days ago · 2 comments

kxbnb a day ago

Nice approach - fail-closed decision logging is the right default. Too many systems treat audit as best-effort, which defeats the purpose when you're investigating an incident.

The framework-agnostic design makes sense for adoption. One thing we've found tricky at keypost.ai is policy composition - when you have overlapping constraints (rate limits + role-based access + cost caps), determining which rule "wins" needs explicit precedence. Does SudoAgent have opinions on conflict resolution, or is that left to the Policy implementation?
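(By explicit precedence I mean something like deny-overrides, where the most restrictive decision wins across all matching policies - rough sketch, not tied to either library:)

```python
# Deny-overrides combinator: "deny" beats "ask" beats "allow".
PRECEDENCE = {"deny": 2, "ask": 1, "allow": 0}


def combine(decisions: list[str]) -> str:
    """Return the most restrictive decision across all matching policies."""
    return max(decisions, key=PRECEDENCE.__getitem__)


# e.g. rate limit says "allow", cost cap says "ask", role check says "deny"
assert combine(["allow", "ask", "deny"]) == "deny"
```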

Also curious about the human approval latency in practice - do you see teams using it for truly synchronous gates, or more as a "review queue" pattern where work gets batched?

naolbeyene (OP) 3 days ago

I built SudoAgent, a Python library that guards “dangerous” function calls at runtime.

It’s meant for agent/tool code (refunds, deletes, API writes, prod changes) where you want a gate outside the prompt.

How it works

Evaluate a policy on the call context (action + args/kwargs)

Optionally request human approval (terminal y/n in v0.1.1)

Write audit entries (JSONL by default) and correlate with request_id
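For concreteness, that flow could look roughly like the decorator below. The names and signatures (guarded, policy(ctx), approver(ctx)) are my own sketch, not SudoAgent's actual API.

```python
import functools
import json
import uuid


def guarded(policy, approver=None, audit_path="audit.jsonl"):
    """Guard a dangerous function: evaluate a policy, optionally ask a human, audit."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            request_id = str(uuid.uuid4())
            ctx = {"action": fn.__name__, "args": args, "kwargs": kwargs}

            decision = policy(ctx)  # e.g. "allow", "deny", or "ask"
            if decision == "ask" and approver is not None:
                decision = "allow" if approver(ctx) else "deny"

            # Fail-closed decision log: if this write raises, fn never runs.
            with open(audit_path, "a") as f:
                f.write(json.dumps({"request_id": request_id,
                                    "action": ctx["action"],
                                    "decision": decision}) + "\n")

            if decision != "allow":
                raise PermissionError(f"{ctx['action']} denied ({request_id})")
            return fn(*args, **kwargs)
        return wrapper
    return decorator
```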

Key semantics

Decision logging is fail-closed (if decision logging fails, the function does not execute)

Outcome logging is best-effort (logging failure won’t change the function return/exception)

Redacts secret key names + value patterns (JWT-like, sk-, PEM blocks)
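In code terms, the difference between the two logging modes is just whether exceptions propagate. A minimal sketch of the idea, assuming a plain JSONL sink (not the actual implementation):

```python
import json


def log_decision(path: str, entry: dict) -> None:
    # Fail-closed: any exception here propagates, so the guarded function
    # never runs without an audit record of the decision.
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")


def log_outcome(path: str, entry: dict) -> None:
    # Best-effort: a logging failure must not alter the guarded function's
    # return value or exception, so errors are swallowed here.
    try:
        with open(path, "a") as f:
            f.write(json.dumps(entry) + "\n")
    except Exception:
        pass
```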

It’s intentionally minimal and framework-agnostic: implement your own Policy, Approver, or AuditLogger (Slack/web UI/db) and inject them.
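To make those injection points concrete, they could look something like the Protocols below; the names and signatures are illustrative, and the real interfaces may differ.

```python
from typing import Any, Protocol


class Policy(Protocol):
    def evaluate(self, action: str, args: tuple, kwargs: dict[str, Any]) -> str:
        """Return 'allow', 'deny', or 'ask'."""


class Approver(Protocol):
    def approve(self, action: str, context: dict[str, Any]) -> bool:
        """Return True if a human approved the call (terminal, Slack, web UI, ...)."""


class AuditLogger(Protocol):
    def log(self, entry: dict[str, Any]) -> None:
        """Persist one audit record (JSONL file, database, etc.)."""


class RefundLimitPolicy:
    """Example Policy: route large refunds to human approval, allow the rest."""

    def __init__(self, limit: float = 100.0) -> None:
        self.limit = limit

    def evaluate(self, action: str, args: tuple, kwargs: dict[str, Any]) -> str:
        if action == "issue_refund" and kwargs.get("amount", 0) > self.limit:
            return "ask"
        return "allow"
```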
