Show HN: GuardClaw – cryptographically verifiable execution logs for AI agents

github.com

1 point by viruswami5511 8 days ago · 1 comment · 2 min read

Logs tell you what a system claims happened. GuardClaw proves what was actually recorded.

If an autonomous agent executes trades, runs shell commands, or modifies infrastructure — how do you prove what it actually did? Imagine a trading bot loses $2M — and the only evidence is logs that can be edited.

Traditional logs are mutable. Append-only files aren’t cryptographically linked. Database rows can be edited.

GuardClaw implements GEF-SPEC-1.0 (Guard Execution Format) — a minimal protocol combining:

• RFC 8785 canonicalized envelopes
• SHA-256 causal hash chaining
• Ed25519 per-entry signatures
• Offline verification via CLI
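The canonicalization step matters because signatures cover bytes, not abstract JSON: two logically identical objects must serialize to the same byte string. For simple ASCII-only data, Python's json module can approximate RFC 8785 (JCS) output; this is a rough sketch, not a full JCS implementation, which also mandates ECMA-262 number formatting and UTF-16 code-unit key ordering:

```python
import hashlib
import json

def canonical_bytes(obj):
    # Sorted keys + compact separators approximate RFC 8785 for
    # ASCII-only objects with integer/string values; a full JCS
    # implementation must also handle float serialization per ECMA-262.
    return json.dumps(obj, sort_keys=True, separators=(",", ":"),
                      ensure_ascii=False).encode("utf-8")

a = {"b": 2, "a": 1}
b = {"a": 1, "b": 2}
# Source key order no longer matters: both serialize identically,
# so both hash (and would sign) identically.
assert canonical_bytes(a) == canonical_bytes(b)
print(hashlib.sha256(canonical_bytes(a)).hexdigest())
```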

The ledger is a plain JSONL file. No server required.
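The chaining idea can be sketched in a few lines: each appended entry commits to the SHA-256 hash of the previous entry's canonical bytes. The field names below (`seq`, `prev`, `payload`) are illustrative, not the GEF-SPEC-1.0 envelope, and the per-entry Ed25519 signature is omitted since it needs a crypto library such as PyNaCl:

```python
import hashlib
import json

def canon(obj):
    # Stand-in for RFC 8785 canonicalization (sorted keys, compact form).
    return json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()

def append_entry(ledger, payload):
    # Each entry commits to the hash of the previous entry, forming a
    # causal chain; field names here are hypothetical, not GEF-SPEC-1.0.
    prev = hashlib.sha256(canon(ledger[-1])).hexdigest() if ledger else "0" * 64
    entry = {"seq": len(ledger), "prev": prev, "payload": payload}
    # A real GEF entry would also carry an Ed25519 signature over canon(entry).
    ledger.append(entry)
    return entry

ledger = []
append_entry(ledger, {"cmd": "rm -rf /tmp/scratch"})
append_entry(ledger, {"cmd": "deploy v2"})
jsonl = "\n".join(json.dumps(e) for e in ledger)  # plain JSONL, no server
print(jsonl)
```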

pip install guardclaw
guardclaw verify your_ledger.jsonl

Anyone with the public key can verify the full history — no access to the original runtime required.

The demo intentionally tampers with a signed entry to show deterministic failure:

[2] execution SIG:FAIL CHAIN:OK
[3] execution SIG:OK CHAIN:BREAK
Violations: 2 — TAMPERED

You can also edit the JSONL file yourself and re-run verification.
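The deterministic failure the demo shows falls out of the chain structure: editing any earlier entry changes its hash, so every later `prev` link stops matching. A minimal verification sketch, using the same hypothetical field names as above and checking only the hash chain (signature verification omitted):

```python
import hashlib
import json

def canon(obj):
    # Stand-in for RFC 8785 canonicalization (sorted keys, compact form).
    return json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()

def verify_chain(entries):
    # Recompute each entry's "prev" link; any edit to an earlier entry
    # changes its hash and breaks the next link deterministically.
    violations = []
    prev = "0" * 64
    for i, e in enumerate(entries):
        if e["prev"] != prev:
            violations.append((i, "CHAIN:BREAK"))
        prev = hashlib.sha256(canon(e)).hexdigest()
    return violations

# Build a tiny two-entry chain, then tamper with entry 0's payload.
e0 = {"seq": 0, "prev": "0" * 64, "payload": {"cmd": "ok"}}
e1 = {"seq": 1, "prev": hashlib.sha256(canon(e0)).hexdigest(),
      "payload": {"cmd": "also ok"}}
assert verify_chain([e0, e1]) == []   # untampered: no violations
e0["payload"]["cmd"] = "rm -rf /"     # rewrite history after the fact
print(verify_chain([e0, e1]))         # → [(1, 'CHAIN:BREAK')]
```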

Benchmark (1M entries, single-threaded):
~762 writes/sec
~9k full verifies/sec
~39 MB RAM for streaming verification

Limitation: if the signing key is compromised, past history can be rewritten. Key management is intentionally out of scope for the protocol.

Would appreciate feedback on the threat model and failure cases.

PyPI: https://pypi.org/project/guardclaw
Spec: https://github.com/viruswami5511/guardclaw/blob/master/SPEC....
Demo: https://github.com/viruswami5511/guardclaw-demo

