kundan_s__r

Karma: 4
Created: 29 days ago

About

Builder working on AI reliability and intent enforcement in LLM systems.

Interested in failure modes of agentic workflows, hallucination as constraint violation, and how to make probabilistic models behave predictably in production.

Currently exploring intent contracts, semantic drift detection, and validation layers that sit after generation rather than inside prompts.
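
A minimal sketch of what such a post-generation validation layer could look like (all names here, including the Contract dataclass and the example checks, are hypothetical illustrations, not a real library API):

    from dataclasses import dataclass
    from typing import Callable

    # Hypothetical "intent contract": a named set of predicates the model
    # output must satisfy after generation, independent of the prompt.
    @dataclass
    class Contract:
        name: str
        checks: list[Callable[[str], bool]]

    def validate(output: str, contract: Contract) -> list[str]:
        """Return names of failed checks; an empty list means the output passes."""
        return [c.__name__ for c in contract.checks if not c(output)]

    # Example checks enforcing constraints a prompt alone cannot guarantee.
    def no_price_claims(text: str) -> bool:
        return "$" not in text

    def within_length_budget(text: str) -> bool:
        return len(text.split()) <= 120

    contract = Contract("support_reply", [no_price_claims, within_length_budget])
    failures = validate("Your refund of $40 is on the way.", contract)
    print(failures)  # ['no_price_claims'] -> reject or regenerate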

I like reading HN threads where people disagree thoughtfully.

Recent Submissions

  1. Ask HN: How are you preventing LLM hallucinations in production systems?
  2. Show HN: Verdic Guard – Deterministic guardrails to prevent LLM hallucinations (verdic.dev)
  3. Show HN: Verdic Guard – Deterministic guardrails to prevent LLM hallucinations
  4. Show HN: Verdic Guard – deterministic guardrails for production AI
  5. Show HN: Verdic Guard – validating LLM outputs against intent, not just prompts
  6. Show HN: A policy enforcement layer for LLM outputs (why prompts weren't enough)
  7. Show HN: Verdic Guard – Policy Enforcement and Output Validation for LLMs
  8. Verdic – Intent governance layer for AI systems https://www.verdic.dev/
