Ask HN: How are you preventing LLM hallucinations in production systems?

3 points by kundan_s__r 20 days ago · 15 comments


Hi HN,

For those running LLMs in real production environments (especially agentic or tool-using systems): what’s actually worked for you to prevent confident but incorrect outputs?

Prompt engineering and basic filters help, but we’ve still seen cases where responses look fluent, structured, and reasonable — yet violate business rules, domain boundaries, or downstream assumptions.

I’m curious:

Do you rely on strict schemas or typed outputs?

Secondary validation models or rule engines?

Human-in-the-loop for certain classes of actions?

Hard constraints before execution (e.g., allow/deny lists; rough sketch below)?
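
By "hard constraints" I mean roughly this kind of post-generation check; the refund action, its fields, and the limits below are invented for the example:

    import json

    ALLOWED_ACTIONS = {"refund", "resend_receipt"}   # everything else is denied
    MAX_REFUND_CENTS = 5_000                         # invented business limit

    def validate_llm_action(raw: str) -> dict:
        """Parse and check a model-proposed action before anything executes."""
        try:
            action = json.loads(raw)
        except json.JSONDecodeError as exc:
            raise ValueError(f"model output is not valid JSON: {exc}") from exc

        if action.get("type") not in ALLOWED_ACTIONS:
            raise ValueError(f"action not on allow-list: {action.get('type')!r}")

        if action["type"] == "refund":
            amount = action.get("amount_cents")
            if not isinstance(amount, int) or not 0 < amount <= MAX_REFUND_CENTS:
                raise ValueError(f"refund amount out of bounds: {amount!r}")

        return action  # only now does it reach the executor

Nothing the model says can widen what the allow-list permits; that's the property I'm trying to preserve.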

What approaches failed for you, and what held up under scale and real user behavior?

Interested in practical lessons and post-mortems rather than theory.

al_borland 20 days ago

I’ve just been ignoring my boss every time he says something about how we should leverage AI. What we’re building doesn’t need it and can’t tolerate hallucinations. They just want to be able to brag up the chain that AI is being used, which is the wrong reason to use it.

If I were forced to use it, I'd probably be writing pretty extensive guardrails (outside of the AI) to make sure it isn't going off the rails and that the results make sense. I'm doing that anyway with all user input, so I guess I'd be treating all LLM-generated text as user input and assuming it's unreliable.
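
Something like this is what I mean, with made-up names: LLM text goes through exactly the same validation path as user input, no trusted shortcut.

    import re

    TICKET_ID = re.compile(r"[A-Z]{2,5}-\d{1,6}")   # made-up input format

    def parse_ticket_id(untrusted_text: str) -> str:
        """Same check whether the string came from a form field or a model."""
        candidate = untrusted_text.strip()
        if not TICKET_ID.fullmatch(candidate):
            raise ValueError(f"not a valid ticket id: {candidate!r}")
        return candidate

    # ticket = parse_ticket_id(form_data["ticket"])   # user input path
    # ticket = parse_ticket_id(llm_reply)             # identical path for model output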

  • kundan_s__r (OP) 20 days ago

    That’s a very sane stance. Treating LLM output as untrusted input is probably the correct default when correctness matters.

    The worst failures I’ve seen happen when teams half-trust the model — enough to automate, but still needing heavy guardrails. Putting the checks outside the model keeps the system understandable and deterministic.

    Ignoring AI unless it can be safely boxed isn’t anti-AI — it’s good engineering.

raw_anon_1111 19 days ago

Let me give you a little anecdote. I use ChatGPT to learn Spanish. The prompt I use is below.

It gets things wrong about half the time and I have to tell it that it's wrong. If I can't trust an LLM to follow simple instructions, why would I trust it "agentically" with business-critical decision making?

I work in cloud consulting specializing in app dev, and every project I've done in the last year and a half has had a Bedrock-based LLM somewhere in the process - i.e., the running system. But I know what to trust it for and what not to trust it for, and I guide my clients appropriately.

The prompt I use for studying Spanish that ChatGPT gets wrong:

—- I am learning Spanish at an A2 level. When I ask you to do a lightning round, I will give you a list of sentences first. You will give me each English sentence one by one and I will translate it to Spanish. If I get it wrong, save it for the next round.

When I ask you to create sentences from a verb, create 1 sentence each for 1st-3rd person singular and 1st and 3rd person plural, for present and simple past, and 3 for progressive. Each sentence must be at least five words.

These are some words and phrases I need to review: only use these words in sentences for 1st-3rd person present singular, and only when they make sense. If a target word does not fit naturally, skip it and prioritize a natural sentence; don't force yourself to use these words. When I list a verb, it means I need to practice it, present and simple past.

<a relatively short list of words>

Never use:

<a relatively short list of words>

  • kundan_s__r (OP) 19 days ago

    That’s a very real example of the core problem: LLMs don’t reliably honor constraints, even when they’re explicit and simple. Instruction drift shows up fast in learning tasks — and quietly in production systems.

    That’s why trusting them “agentically” is risky. The safer model is to assume outputs are unreliable and validate after generation.

    I’m working on this exact gap with Verdic Guard (verdic.dev) — treating LLM output as untrusted input and enforcing scope and correctness outside the model. Less about smarter prompts, more about predictable behavior.

    Your Spanish example is basically the small-scale version of the same failure mode.
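
    To make "enforcing scope outside the model" concrete (not Verdic's actual API, just an illustrative sketch with invented names): the model may propose a tool call, but a deterministic layer checks that the call only touches resources the requesting user owns.

        def enforce_scope(tool_call: dict, requesting_user: str,
                          owned_resources: set[str]) -> dict:
            """Reject any model-proposed call that reaches outside the user's scope."""
            resource = tool_call.get("resource_id")
            if resource not in owned_resources:
                raise PermissionError(
                    f"{requesting_user} may not act on {resource!r}"
                )
            return tool_call

        # owned = {"acct-1", "acct-2"}                  # looked up outside the model
        # call = enforce_scope(model_proposal, "user-42", owned)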

crosslayer 18 days ago

+1 to treating LLM output as untrusted input.

The failure mode I keep seeing isn’t hallucination per se… it’s blurred responsibility between intent and execution. Once a model can both decide and act, you’ve already lost determinism.

The stable pattern seems to be: LLMs generate proposals, deterministic systems enforce invariants. Creativity upstream, boring execution downstream.

What matters in production isn’t making models smarter, it’s making failure modes predictable and auditable. If you can’t explain why a state transition was allowed, the architecture is already too permissive.
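
A rough sketch of that split, with an invented order-state machine: the model only proposes a transition, deterministic code decides, and every decision is recorded so there is an answer to "why was this allowed" later.

    from dataclasses import dataclass

    # Invented state machine; the model never mutates state directly.
    ALLOWED_TRANSITIONS = {
        ("pending", "approved"),
        ("pending", "rejected"),
        ("approved", "shipped"),
    }

    @dataclass
    class Proposal:
        order_id: str
        current: str
        target: str
        reason: str   # model's free-text rationale, kept for the audit trail

    audit_log: list[dict] = []

    def execute_transition(order_id: str, target: str) -> None:
        ...  # the boring part: database update, events, notifications

    def apply_if_valid(proposal: Proposal) -> bool:
        """Deterministic gate: enforce the invariant and record why it was allowed or not."""
        allowed = (proposal.current, proposal.target) in ALLOWED_TRANSITIONS
        audit_log.append({
            "order_id": proposal.order_id,
            "transition": f"{proposal.current} -> {proposal.target}",
            "allowed": allowed,
            "model_reason": proposal.reason,
        })
        if allowed:
            execute_transition(proposal.order_id, proposal.target)
        return allowed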

stephenr 20 days ago

I've found that I can use a very similar approach to the one I've used when handling the risks associated with blockchain, cryptocurrencies, "web scale" infrastructure, and of course the chupacabra.

  • kundan_s__r (OP) 20 days ago

    Fair enough. A healthy dose of skepticism has served us well for every overhyped wave so far. The difference this time seems to be that AI systems don’t just fail noisily — they fail convincingly, which changes how risk leaks into production.

    Treating them with the same paranoia we applied to web scale infra and crypto is probably the right instinct. The chupacabra deserved it too.

    • stephenr 20 days ago

      > they fail convincingly

      To someone that has paid zero attention and/or deliberately ignores any coverage about the numerous (and often hilarious) ways that spicy autocomplete just completely shits the bed? Sure, maybe.

      • kundan_s__r (OP) 20 days ago

        That’s fair — if you’re already skeptical and paying attention, the failures are obvious and often funny. The risk tends to show up more with non-experts or downstream systems that assume the output is trustworthy because it looks structured and confident.

        Autocomplete failing loudly is annoying; autocomplete failing quietly inside automation is where things get interesting.

        • stephenr 20 days ago

          > The risk tends to show up more with non-experts

          This hits a key point that isn't emphasised enough. A few interactions with technology and people have shaped my view:

          I fiddled with Apple's Image Playground thing sometime last year, and it was quite rewarding to see a result from a simple description. It wasn't exactly what I'd asked for, but it was close, kind of. As someone who has basically zero artistic ability, it was fun to be able to create something like that. I didn't think much about it at the time, but recently I thought about it again, and I keep it in mind when I see people waxing poetic about using spicy autocomplete to "write code" or "analyse business requests" or whatever the fuck it is they're using it for. Of course it seems fucking magical and foolproof if you don't yourself know how to do the thing you're asking it for.

          I had to fly back to my childhood home at very short notice in August to see my (as it turned out, dying) father in hospital. I spoke to more doctors, nurses and specialists in two weeks than I think I have ever spoken to about my own health in 40+ years. I was relaying the information from the doctors to my brother via text message. His initial response to said information was to send me back a Chat fucking GPT summary/analysis of what I'd passed along to him... because apparently my own eyeballs seeing the physical condition of our father, and a doctor explaining the cause, prognosis and chances of recovery, were not reliable enough. Better ask Dr Spicy Autocomplete for a fucking second opinion I guess.

          So now my default view about people who choose to use spicy autocomplete for anything besides shits and giggles like "Write a Star Trek fan fiction where Captain Jack Sparrow is in charge of DS9", or "Draw <my wife> as a cute little bunny" is essentially "yeah of course it looks like infallible magic, if you don't know how to do it yourself".

          • kundan_s__r (OP) 20 days ago

            The real risk with LLMs isn’t when they fail loudly — it’s when they fail quietly and confidently, especially for non-experts or downstream systems that assume structured output equals correctness.

            When you don’t already understand the domain, AI feels infallible. That’s exactly when unvalidated outputs become dangerous inside automation, decision pipelines, and production workflows.

            This is why governance can’t be an afterthought. AI systems need deterministic validation against intent and execution boundaries before outputs are trusted or acted on — not just better prompts or post-hoc monitoring.

            That gap between “sounds right” and “is allowed to run” is where tools like Verdic Guard are meant to sit.
