Show HN: K9 Audit – Causal intent-execution audit trail for AI agents

github.com

4 points by zippolyon a day ago · 2 comments


On March 4, 2026, my Claude Code agent wrote a staging URL into a production config file — three times, 41 minutes apart. Syntax was valid, no error thrown. My logs showed every action. All green.

The problem was invisible because nothing had recorded what the agent intended to do before it acted — only what it actually did.

K9 Audit fixes this with a causal five-tuple per agent step:

- X_t: context (who acted, under what conditions)
- U_t: action (what was executed)
- Y*_t: intent contract (what it was supposed to do)
- Y_t+1: actual outcome
- R_t+1: deviation score (deterministic; no LLM, no tokens)
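To make the five-tuple concrete, here is a minimal sketch of what one audit record could look like. The class and field names are invented for illustration and mirror the tuple above, not K9 Audit's actual schema.

```python
# Hypothetical sketch of one per-step audit record.
# Names are illustrative, not K9 Audit's real schema.
from dataclasses import dataclass
from typing import Any

@dataclass(frozen=True)
class StepRecord:
    context: dict[str, Any]  # X_t: who acted, under what conditions
    action: str              # U_t: what was executed
    intent: str              # Y*_t: the contract declared before acting
    outcome: str             # Y_t+1: what actually happened
    deviation: float         # R_t+1: deterministic deviation score

# The staging-URL incident from the post, expressed as one record.
# A deviation of 0.0 would mean the outcome matched the intent.
rec = StepRecord(
    context={"agent": "claude-code", "cwd": "/repo"},
    action="write config/prod.yaml",
    intent="update production database URL",
    outcome="wrote staging URL into production config",
    deviation=1.0,
)
```

The key design point is that `intent` is captured before the action runs, so a nonzero `deviation` is computable even when the action itself succeeds without error.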

Records are SHA256 hash-chained. Tamper-evident. When something goes wrong, `k9log trace --last` gives root cause in under a second.
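For readers unfamiliar with hash chaining, this is the general technique (not K9 Audit's exact record format): each entry commits to the previous entry's hash, so altering any record invalidates every hash that follows it.

```python
# Minimal sketch of SHA256 hash-chaining for tamper evidence.
import hashlib
import json

def chain(entries):
    """Return entries paired with hashes, each committing to the last."""
    prev = "0" * 64  # genesis hash
    out = []
    for e in entries:
        payload = json.dumps({"prev": prev, "entry": e}, sort_keys=True)
        prev = hashlib.sha256(payload.encode()).hexdigest()
        out.append({"entry": e, "hash": prev})
    return out

log = chain(["step 1", "step 2", "step 3"])
tampered = chain(["step 1", "TAMPERED", "step 3"])

# Editing step 2 changes its hash and every hash after it,
# so the tail of the chain no longer verifies.
assert log[0]["hash"] == tampered[0]["hash"]
assert log[1]["hash"] != tampered[1]["hash"]
assert log[2]["hash"] != tampered[2]["hash"]
```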

Works with Claude Code (zero-config hook), LangChain, AutoGen, CrewAI, or any Python agent via one decorator.

pip install k9audit-hook
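As an illustration of the decorator pattern, here is a hypothetical sketch: `k9_step` and its signature are invented for this example and are not the actual k9audit-hook API.

```python
# Hypothetical decorator sketch: declare intent up front,
# record the actual outcome after the call. Not the real API.
import functools

AUDIT_LOG = []

def k9_step(intent):
    """Attach a declared intent to a function and log each call."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            result = fn(*args, **kwargs)
            AUDIT_LOG.append({
                "intent": intent,          # what it was supposed to do
                "action": fn.__name__,     # what was executed
                "outcome": repr(result),   # what actually happened
            })
            return result
        return inner
    return wrap

@k9_step(intent="update production database URL")
def write_config(url):
    return f"wrote {url}"

write_config("https://staging.example.com")
```

The point of the pattern is that intent is bound at decoration time, before execution, so a later audit pass can diff it against the recorded outcome.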

zippolyon (OP) · 12 hours ago

Using another LLM to audit LLM-based agents is like having one criminal write a clean record for another. That's why I think a causal observability model is needed: only a deterministic layer can audit a probabilistic system.
