Give your AI agents reversibility and governance before they touch your host
github.com
I kept running AI coding agents with full filesystem and network access, and no way to review what they did before it hit my system. Docker isolates but doesn't govern. So I built envpod.
Every agent runs in a pod with a copy-on-write overlay. Your host is never touched until you explicitly commit:
    $ sudo envpod init my-agent --preset claude-code
    $ sudo envpod run my-agent -- claude
    $ sudo envpod diff my-agent      # review every change
    $ sudo envpod commit my-agent    # apply to host, or rollback
Also: encrypted credential vault (agent never sees raw API keys), per-pod DNS filtering (whitelist which domains the agent can reach), action queue (irreversible ops wait for approval), and append-only audit trail.
Single 13 MB static Rust binary. No daemon, no container runtime, no dependencies. Warm start in 32ms. 50 pod clones in 408ms. Tested on 9 Linux distros.
41 example configs for Claude Code, Codex, Aider, SWE-agent, browser-use, and more.
Website: https://envpod.dev
Discord: https://discord.gg/envpod
Solo dev. Happy to answer architecture questions.
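
The review-then-commit model described in the post, sketched as an added example (not envpod's code or its on-disk format): a plain directory copy stands in for the copy-on-write overlay, and `diff`, `commit`, and `demo` are hypothetical names. Only changes a reviewer approves reach the host tree; everything else stays in the pod's view.

```python
import hashlib, os, pathlib, shutil, tempfile

def _files(root):
    """Map relative path -> SHA-256 of every regular file under root."""
    out = {}
    for dirpath, _, names in os.walk(root):
        for name in names:
            full = os.path.join(dirpath, name)
            if os.path.isfile(full) and not os.path.islink(full):
                digest = hashlib.sha256(pathlib.Path(full).read_bytes()).hexdigest()
                out[os.path.relpath(full, root)] = digest
    return out

def diff(host, pod_view):
    """Classify what changed in the pod's view relative to the host tree."""
    before, after = _files(host), _files(pod_view)
    return {
        "added": sorted(after.keys() - before.keys()),
        "deleted": sorted(before.keys() - after.keys()),
        "modified": sorted(p for p in before.keys() & after.keys() if before[p] != after[p]),
    }

def commit(host, pod_view, approved):
    """Apply only reviewer-approved changes to the host; the rest stay in the pod."""
    changes = diff(host, pod_view)
    pending = {p for paths in changes.values() for p in paths}
    applied = []
    for rel in sorted(pending & set(approved)):  # unknown paths are ignored
        dst = os.path.join(host, rel)
        if rel in changes["deleted"]:
            os.remove(dst)
        else:
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.copy2(os.path.join(pod_view, rel), dst)
        applied.append(rel)
    return applied

def demo():
    """Constructed sample: a host tree and a copy that a stand-in 'agent' edits."""
    base = tempfile.mkdtemp()
    host, pod = os.path.join(base, "host"), os.path.join(base, "pod")
    os.makedirs(host)
    for name, text in {"app.py": "v1", "old.txt": "x", ".bashrc": "safe"}.items():
        pathlib.Path(host, name).write_text(text)
    shutil.copytree(host, pod)  # stand-in for the overlay (assumption)
    for name, text in {"app.py": "v2", ".bashrc": "changed", "new.txt": "n"}.items():
        pathlib.Path(pod, name).write_text(text)
    os.remove(os.path.join(pod, "old.txt"))
    changes = diff(host, pod)
    return changes, commit(host, pod, ["app.py", "new.txt"]), host
```

In the constructed run, the unapproved `.bashrc` edit and the `old.txt` deletion show up in the diff but never reach the host.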