Show HN: Verdic Guard – Deterministic guardrails to prevent LLM hallucinations
I’ve been working on Verdic Guard, a validation layer for production LLM systems where prompts, filters, and monitoring aren’t enough.
In many real deployments (fintech, enterprise workflows, agentic systems), the failure mode isn't latency or cost; it's hallucinations that sound confident and pass surface checks. Prompt engineering helps, but it doesn't scale once systems become long-running, tool-using, or multi-agent.
Verdic takes a different approach:
Define an explicit intent + scope contract for what the model is allowed to output
Validate LLM outputs before execution, not just inputs
Block or flag responses that drift semantically, contextually, or outside the defined domain
Keep enforcement deterministic and auditable (not “best effort” prompts)
It’s designed to sit between the LLM and your application, acting as a guardrail rather than another model.
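To make that concrete, here's a minimal sketch of what a deterministic validation boundary could look like, assuming a contract of allowed output fields and tools. The names (Contract, Verdict, validate_output) and the check logic are hypothetical illustrations, not Verdic's actual API:

```python
# Hypothetical sketch of a deterministic output-validation boundary.
# Contract, Verdict, and validate_output are illustrative names, not Verdic's API.
from dataclasses import dataclass, field


@dataclass
class Contract:
    """Explicit intent + scope for what the model is allowed to output."""
    intent: str                                            # e.g. "answer questions about invoice status"
    allowed_fields: set[str] = field(default_factory=set)  # keys the output may contain
    allowed_tools: set[str] = field(default_factory=set)   # tools the output may invoke


@dataclass
class Verdict:
    allowed: bool
    reasons: list[str]


def validate_output(output: dict, contract: Contract) -> Verdict:
    """Deterministic checks run on the model's output *before* execution."""
    reasons = []
    extra_fields = set(output.get("fields", {})) - contract.allowed_fields
    if extra_fields:
        reasons.append(f"fields outside contract scope: {sorted(extra_fields)}")
    for call in output.get("tool_calls", []):
        if call["name"] not in contract.allowed_tools:
            reasons.append(f"tool not permitted by contract: {call['name']}")
    return Verdict(allowed=not reasons, reasons=reasons)


# Usage: the application only executes the output if the verdict allows it.
contract = Contract(
    intent="answer questions about invoice status",
    allowed_fields={"invoice_id", "status", "due_date"},
    allowed_tools={"lookup_invoice"},
)
llm_output = {
    "fields": {"invoice_id": "INV-42", "status": "paid", "refund_amount": "900"},
    "tool_calls": [{"name": "issue_refund", "args": {"invoice_id": "INV-42"}}],
}
verdict = validate_output(llm_output, contract)
if not verdict.allowed:
    print("blocked:", verdict.reasons)  # same output -> same verdict, every time
```

The point of the sketch is only that the checks are rule-based and repeatable, so every block or flag can be logged and audited, rather than relying on a prompt to behave.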
This is still early, and I’m especially interested in feedback on:
Where this breaks down in real systems
How teams currently handle hallucinations beyond prompts
Whether deterministic enforcement is useful or too restrictive in practice
Site: https://www.verdic.dev
Happy to answer questions or share implementation details if useful.

---

The "output validation, not just input validation" point is underrated. Most guardrails focus on what goes into the model, but the real risk is what comes out and gets executed. We're working on similar problems at keypost.ai - policy enforcement at the tool-calling boundary for MCP pipelines. It's a different angle (we're more focused on access control and rate limits than on hallucination detection per se) but the same philosophy of deterministic enforcement.

Question: how do you handle the tension between semantic enforcement and false positives? In our experience, rules that are too strict block legitimate use cases, while rules that are too loose let things through. Any patterns that worked for calibrating that?