Show HN: OnGarde – Runtime content security proxy for self-hosted AI agents
Built this because I'd heard some horror stories about companies leaking PII from high-compliance environments to ChatGPT. I wanted something that would auto-filter dangerous traffic between my AI agent and the LLM API without requiring code changes in the agent itself.
The filter list has expanded a bit: it now covers PII and secret keys, and I've started a prompt-injection pattern library that's filtered against as well.
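To give a sense of the idea, here's a toy sketch of pattern-based scanning. These regexes are illustrative stand-ins, not OnGarde's actual rules (real rule sets are much larger and fuzzier):

    import re

    # Illustrative patterns only -- not the real filter list.
    PATTERNS = {
        "aws_key": re.compile(r"AKIA[0-9A-Z]{16}"),
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "injection": re.compile(r"ignore (all )?previous instructions", re.I),
    }

    def scan(text):
        """Return the names of any rules the outbound text trips."""
        return [name for name, pat in PATTERNS.items() if pat.search(text)]

    assert scan("my key is AKIA" + "A" * 16) == ["aws_key"]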
The problem: self-hosted agent platforms (OpenClaw, Agent Zero, CrewAI) have no runtime content layer. If your agent leaks an API key, gets prompt injected, or decides to forward someone's SSN to GPT-4, nothing stops it. The platforms don't try to stop it either.
OnGarde is a proxy. You change one line in your config (swap the baseUrl) and every request gets scanned before it leaves. It catches credentials, PII, prompt injection, and dangerous shell commands. If the scanner itself fails, the request is blocked (fail-closed); nothing ever silently passes through.
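Concretely, for a Python agent using the official openai client, the swap looks something like this (the proxy address and port here are assumptions for illustration, not OnGarde's documented defaults):

    from openai import OpenAI

    # Point the client at the local proxy instead of api.openai.com.
    client = OpenAI(
        base_url="http://localhost:8080/v1",  # was: https://api.openai.com/v1
        api_key="sk-...",                     # your real key, unchanged
    )

    # Every request now passes through the scanner before leaving the box.
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "hello"}],
    )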
npx @ongarde/openclaw init handles the OpenClaw setup automatically. Also on PyPI if you're doing something custom.
Dashboard is localhost-only with a SQLite audit log. Nothing phones home.
v1 just shipped: https://github.com/AntimatterEnterprises/ongarde/releases/ta...
I'm looking for feedback on this project. One open question I'd especially like input on: localhost-only dashboard with a SQLite audit log seems like the right default for self-hosted, but if the SQLite log is the evidence layer, what happens when someone disputes whether a block actually occurred? Is there a way to verify the log independently, or does it come down to trusting that the file wasn't touched?
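For context, the standard tamper-evidence technique here is a hash chain: each row's hash commits to its content plus the previous row's hash, so any edit, deletion, or reorder breaks the chain from that point on. A minimal sketch, using a hypothetical events table (not OnGarde's actual schema):

    import hashlib
    import sqlite3

    def _row_hash(ts, action, detail, prev_hash):
        # The hash covers the row's content plus the previous row's
        # hash, chaining every entry to everything before it.
        payload = "|".join((ts, action, detail, prev_hash)).encode()
        return hashlib.sha256(payload).hexdigest()

    def append_event(conn, ts, action, detail):
        row = conn.execute(
            "SELECT hash FROM events ORDER BY id DESC LIMIT 1").fetchone()
        prev_hash = row[0] if row else "genesis"
        conn.execute(
            "INSERT INTO events (ts, action, detail, prev_hash, hash) "
            "VALUES (?, ?, ?, ?, ?)",
            (ts, action, detail, prev_hash,
             _row_hash(ts, action, detail, prev_hash)))
        conn.commit()

    def verify_chain(conn):
        prev_hash = "genesis"
        for ts, action, detail, stored_prev, stored_hash in conn.execute(
                "SELECT ts, action, detail, prev_hash, hash "
                "FROM events ORDER BY id"):
            if (stored_prev != prev_hash
                    or _row_hash(ts, action, detail, prev_hash) != stored_hash):
                return False  # a row was edited, deleted, or reordered
            prev_hash = stored_hash
        return True

    conn = sqlite3.connect("audit.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS events (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        ts TEXT, action TEXT, detail TEXT,
        prev_hash TEXT, hash TEXT)""")
    append_event(conn, "2025-01-01T00:00:00Z", "block", "aws_key in payload")
    assert verify_chain(conn)

Periodically publishing the newest hash somewhere outside the box (a log line in CI, an email, anywhere append-only) anchors the chain, so even replacing the whole file becomes detectable.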