Show HN: I built a "sudo" mechanism for AI agents
Hi HN, I'm Yaron, a DevOps engineer working on AI infrastructure.
I built Cordum because I saw a huge gap between "AI demos" and production safety. Everyone is building agents, but no one wants to give them write access to sensitive APIs (refunds, database deletions, server management).
The problem is that LLMs are probabilistic, but our infrastructure requires deterministic guarantees.
Cordum is an open-source "Safety Kernel" that sits between your LLM and your execution environment. Think of it as a firewall/proxy for agentic actions.
Instead of relying on the prompt to "please be safe," Cordum enforces policy at the protocol layer:

1. It intercepts the agent's intent.
2. It checks the intent against a strict policy (e.g., "Refund > $50 requires human approval").
3. It manages the execution via a state machine.
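To make that concrete, here's a minimal Go sketch of a deterministic rule check (the types are simplified and illustrative, not Cordum's actual API):

    package main

    import "fmt"

    // Decision is the kernel's verdict on an intercepted intent.
    type Decision int

    const (
        Allow Decision = iota
        Deny
        RequireApproval
    )

    // Intent is the agent's requested action, parsed at the protocol layer.
    type Intent struct {
        Action string             // e.g. "refund"
        Args   map[string]float64 // e.g. {"amount": 120}
    }

    // evaluate applies a deterministic rule: refunds over $50 need a human.
    func evaluate(in Intent) Decision {
        if in.Action == "refund" && in.Args["amount"] > 50 {
            return RequireApproval
        }
        return Allow
    }

    func main() {
        d := evaluate(Intent{Action: "refund", Args: map[string]float64{"amount": 120}})
        fmt.Println(d == RequireApproval) // true: routed to a human approver
    }

The point is that the decision is pure code: the same intent always yields the same verdict, regardless of what the LLM says.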
Tech stack:

- Written in Go (for performance and concurrency).
- NATS JetStream for the message bus.
- Redis for state management.
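As a rough sketch of how those pieces fit together (the stream, subject, and key names here are made up for illustration):

    package main

    import (
        "context"
        "log"

        "github.com/nats-io/nats.go"
        "github.com/redis/go-redis/v9"
    )

    func main() {
        // Connect to the message bus.
        nc, err := nats.Connect(nats.DefaultURL)
        if err != nil {
            log.Fatal(err)
        }
        defer nc.Close()

        js, err := nc.JetStream()
        if err != nil {
            log.Fatal(err)
        }

        // Durable stream for intercepted intents.
        if _, err := js.AddStream(&nats.StreamConfig{
            Name:     "INTENTS",
            Subjects: []string{"intents.>"},
        }); err != nil {
            log.Fatal(err)
        }

        // Persist the intent on the bus for a policy worker to pick up.
        if _, err := js.Publish("intents.refund", []byte(`{"action":"refund","amount":120}`)); err != nil {
            log.Fatal(err)
        }

        // Track the intent's lifecycle state in Redis (pending -> approved/denied).
        rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
        if err := rdb.Set(context.Background(), "intent:42:state", "pending_approval", 0).Err(); err != nil {
            log.Fatal(err)
        }
    }

JetStream gives the kernel durable, replayable delivery of intents, while Redis holds the state machine's current position for each one.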
It’s still early days, but I’d love your feedback on the architecture and the approach to agent governance.
Repo: https://github.com/cordum-io/cordum
Happy to answer any questions!

Comment: This matches what I've been seeing too. I've been building a similar enforcement layer, but focused first on customer-facing AI systems, where mistakes create contractual or financial obligations rather than just infra damage. One thing that surprised me in practice: teams don't just need a state machine and allow/deny. They need the gateway to explain why an action was blocked, which policy path fired, and what approval chain would be required to proceed. Otherwise, ops teams can't debug policy coverage when something goes wrong. Curious whether you're seeing demand more from infra teams or from CX/RevOps/compliance orgs right now.

Reply (Yaron): Really appreciate the feedback, this is exactly what helps me improve :)
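For anyone following along, a rough sketch of what a structured block report along those lines could look like (field names are hypothetical, not Cordum's current API):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // BlockReport explains a denial so ops can debug policy coverage.
    // Field names are hypothetical, not Cordum's current API.
    type BlockReport struct {
        Allowed       bool     `json:"allowed"`
        Reason        string   `json:"reason"`         // why the action was blocked
        PolicyPath    string   `json:"policy_path"`    // which rule fired
        ApprovalChain []string `json:"approval_chain"` // who could unblock it
    }

    func main() {
        r := BlockReport{
            Allowed:       false,
            Reason:        "refund amount $120 exceeds the $50 auto-approval limit",
            PolicyPath:    "policies/finance/refunds.yaml#max_auto_refund",
            ApprovalChain: []string{"cx-lead", "finance-oncall"},
        }
        out, _ := json.MarshalIndent(r, "", "  ")
        fmt.Println(string(out))
    }

Emitting the policy path and approval chain with every denial would give ops teams something concrete to grep for when a block surprises them.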