Show HN: Verdic Guard – validating LLM outputs against intent, not just prompts

2 points by kundan_s__r 22 days ago · 0 comments


I’m building Verdic Guard, an experiment around a problem I kept running into while working with LLMs in production.

LLMs usually behave well in demos and short interactions, but once they’re embedded into long, multi-step or agentic workflows, they can drift in subtle ways. Prompt tuning, retries, and monitoring help, but they don’t clearly define or enforce what the system is actually allowed to do.

Verdic Guard explores a different approach: treating AI reliability as a validation and enforcement problem, not just a prompting problem. The idea is to define intent, boundaries, and constraints upfront, and then validate outputs against those constraints before they reach users or downstream systems.

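To make that concrete, here's a minimal, hypothetical sketch of the pattern (the `Constraint`/`Guard` names and checks are illustrative assumptions, not the actual Verdic Guard API): constraints are declared up front, and each model output is checked against them before it's allowed through.

```python
# Hypothetical sketch of "validate outputs against declared constraints".
# Names and checks are illustrative only, not Verdic Guard's real interface.
import re
from dataclasses import dataclass
from typing import Callable

@dataclass
class Constraint:
    name: str
    check: Callable[[str], bool]  # returns True if the output satisfies the constraint

class Guard:
    def __init__(self, constraints: list[Constraint]):
        self.constraints = constraints

    def validate(self, output: str) -> list[str]:
        """Return the names of any constraints the output violates."""
        return [c.name for c in self.constraints if not c.check(output)]

# Declared intent/boundaries for, say, a refund-handling agent.
guard = Guard([
    Constraint("no_pii", lambda o: not re.search(r"\b\d{3}-\d{2}-\d{4}\b", o)),  # no SSN-like strings
    Constraint("refund_cap", lambda o: all(int(m) <= 100 for m in re.findall(r"refund of \$(\d+)", o))),
])

llm_output = "Sure, I can process a refund of $250 right away."
violations = guard.validate(llm_output)
if violations:
    # Block, retry, or escalate instead of letting the output reach the user or downstream system.
    print("blocked:", violations)
```

In practice a check like this would sit between the model and whatever consumes its output, so violations can be blocked, retried, or escalated rather than silently passed along.
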
This is still early and opinionated. I’m sharing it to get feedback from people who’ve dealt with:

LLMs in long-running workflows

Agentic systems that accumulate context

The gap between “it worked in testing” and “it’s safe in production”

I’d especially love to hear where this framing breaks down, or if you’ve seen better ways to think about output reliability.

Project: https://www.verdic.dev

Happy to answer questions or learn from critiques.

