Verdic – Intent governance layer for AI systems https://www.verdic.dev/

1 point by kundan_s__r a month ago · 0 comments · 2 min read


We built Verdic (https://www.verdic.dev/) after repeatedly running into the same issue while deploying LLMs in production: most AI failures aren’t about content safety, they’re about intent drift.

As models become more agentic, outputs often shift quietly from descriptive to prescriptive behavior — without any explicit signal that the system is now effectively taking action. Keyword filters and rule-based guardrails break down quickly in these cases.

Verdic is an intent governance layer that sits between the model and the application. Instead of checking topics or keywords, it evaluates:

whether an output collapses future choices into a specific course of action

whether the response exerts normative pressure (directing behavior vs. explaining it); a toy sketch of this check follows below
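
To make the second check concrete, here's a minimal sketch of what a descriptive-vs-prescriptive test can look like. To be clear, this isn't our production implementation; the judge prompt and the `llm_complete` callable are hypothetical stand-ins, just enough to show the distinction we mean:

```python
# Toy illustration of a normative-pressure check via an LLM judge.
# `llm_complete` stands in for any text-completion function; it is a
# hypothetical signature, not part of Verdic.
JUDGE_PROMPT = """Classify the AI output below.
Answer DESCRIPTIVE if it explains or informs without directing behavior.
Answer PRESCRIPTIVE if it tells the user or system what to do.

Output: {output}
Answer:"""

def exerts_normative_pressure(output: str, llm_complete) -> bool:
    answer = llm_complete(JUDGE_PROMPT.format(output=output))
    return answer.strip().upper().startswith("PRESCRIPTIVE")

# "Rates rose 50bp last quarter."       -> descriptive, passes
# "You should refinance immediately."   -> prescriptive, flagged
```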

The goal isn’t moderation, but behavioral control: detecting when an AI system is operating outside the intent it was deployed for, especially in regulated or decision-critical workflows.

Verdic currently runs as an API with configurable allow / warn / block outcomes. We’re testing it on agentic workflows and long-running chains where intent drift is hardest to detect.
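
To give a feel for the integration, here's roughly what gating an agent step on the API looks like. The endpoint path, field names, and response shape below are simplified placeholders, not the exact production schema:

```python
import requests  # pip install requests

# Placeholder endpoint and schema; see https://www.verdic.dev/ for the real API.
VERDIC_URL = "https://api.verdic.dev/v1/evaluate"

def check_intent(output: str, deployed_intent: str, api_key: str) -> str:
    """Return one of the configurable outcomes: 'allow', 'warn', or 'block'."""
    resp = requests.post(
        VERDIC_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"output": output, "intent": deployed_intent},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["outcome"]

# Gate an agent step on the verdict before it takes effect.
verdict = check_intent(
    output="Transfer the remaining balance to the vendor account.",
    deployed_intent="Summarize invoices; never initiate payments.",
    api_key="YOUR_API_KEY",
)
if verdict == "block":
    raise RuntimeError("Output drifted outside deployed intent; step halted.")
elif verdict == "warn":
    print("Flagged for review before execution.")
```

The point of the allow / warn / block split is that drift isn't binary: "warn" lets a long-running chain surface borderline steps for review without halting the whole workflow.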

This is an early release. I’m mainly looking for feedback from people deploying LLMs in production, especially around:

agentic systems

AI governance

risk & compliance

failure modes we might be missing

Happy to answer questions or share more details about the approach.
