Ask HN: What's missing in AI prompt validation and security tools?

2 points by sharmasachin98 9 months ago · 2 comments · 1 min read


We've been building a middleware layer that acts like a firewall for LLMs: it sits between the user and the model (OpenAI, Claude, Gemini, etc.) and intercepts prompts and responses in real time.

It blocks prompt injection, flags hallucinations, masks PII, and adds logging + metadata tagging for compliance and audit.
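To make the flow concrete, here's a stripped-down sketch of the interception pattern. The regexes, marker phrases, and helper names are placeholders for illustration, not our actual rules or code:

```python
# Minimal sketch of the interception flow (illustrative only).
import re
from dataclasses import dataclass, field
from typing import Callable, List, Optional

# Placeholder patterns for a couple of common PII shapes; a real
# deployment would use a proper PII detector, not two regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

# A few phrases commonly seen in injection attempts; purely illustrative.
INJECTION_MARKERS = ("ignore previous instructions", "reveal your system prompt")


@dataclass
class AuditRecord:
    prompt: str
    response: Optional[str] = None
    flags: List[str] = field(default_factory=list)


def mask_pii(text: str, flags: List[str]) -> str:
    """Redact matched PII and note what was found for the audit log."""
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            flags.append(f"pii:{label}")
            text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text


def guarded_call(prompt: str, call_model: Callable[[str], str]) -> AuditRecord:
    """Screen the prompt, call the model, screen the response, keep a record."""
    record = AuditRecord(prompt=prompt)

    lowered = prompt.lower()
    if any(marker in lowered for marker in INJECTION_MARKERS):
        record.flags.append("prompt_injection")
        record.response = "Request blocked by policy."
        return record

    safe_prompt = mask_pii(prompt, record.flags)
    response = call_model(safe_prompt)  # the actual provider call goes here
    record.response = mask_pii(response, record.flags)
    return record


if __name__ == "__main__":
    def echo_model(prompt: str) -> str:
        # Stand-in for a real model client (OpenAI, Anthropic, Gemini, etc.).
        return f"Echo: {prompt}"

    print(guarded_call("My email is jane@example.com", echo_model))
```

In practice the model call is the real provider SDK and the checks are proper classifiers, but the shape is the same: screen the prompt, call the model, screen the response, and log everything with metadata.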

But we’re hitting the classic startup blind spot: we don’t want to build in a vacuum.

What do you feel is still broken or missing when it comes to:

- Securing LLM prompts/responses?
- Making GenAI safe for enterprise use?
- Auditing what the AI actually said or saw?

We’d love your feedback — especially if you’re working on or thinking about GenAI in production settings.

Thanks!

Uzmanali 9 months ago

One big gap I see is context-aware filtering and memory control.

Many tools block obvious prompt injections, but few detect contextual misuse, where a user gradually steers the model across many sessions or subtly draws out its internal logic.
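For instance, I'd expect something like a rolling risk score per session, where individually benign turns add up. A rough sketch of what I mean (the signals, weights, and threshold are all invented for illustration):

```python
# Sketch of rolling, session-level risk scoring; signals, weights, and
# the escalation threshold below are invented for illustration.
from collections import deque
from dataclasses import dataclass, field

# Hypothetical per-turn signals an upstream classifier might emit.
SIGNAL_WEIGHTS = {
    "asks_about_system_prompt": 0.4,
    "role_play_escalation": 0.3,
    "policy_probe": 0.2,
}


@dataclass
class SessionRisk:
    window: int = 10                          # consider the last N turns
    scores: deque = field(default_factory=deque)

    def observe(self, signals: set) -> float:
        """Record one turn's signals and return the rolling risk score."""
        self.scores.append(sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in signals))
        if len(self.scores) > self.window:
            self.scores.popleft()
        # Drift shows up as risk accumulating across turns,
        # not as a single obviously malicious prompt.
        return sum(self.scores) / len(self.scores)


if __name__ == "__main__":
    session = SessionRisk()
    turns = [
        set(),
        {"policy_probe"},
        {"policy_probe", "asks_about_system_prompt"},
        {"asks_about_system_prompt", "role_play_escalation"},
    ]
    for signals in turns:
        rolling = session.observe(signals)
        print(f"signals={sorted(signals)} rolling_risk={rolling:.2f}")
        if rolling > 0.35:                    # made-up escalation threshold
            print("  -> escalate for review")
```

The point being that the decision comes from the trajectory of the session, not from any single prompt.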

Your middleware sounds promising; I'm excited to see where it goes.

  • sharmasachin98 (OP) 9 months ago

    Totally agree: context-aware misuse is a big gap, and one we're actively exploring. We've built session-level risk tracking and some early logic to detect drift over time, but it's definitely still evolving.

    LLM security isn’t a one-and-done; it’s an ongoing process, especially as attack patterns keep getting more subtle.

    If you’ve seen other use cases or edge cases worth considering, we’d love to hear them. And feel free to ask more questions; we really appreciate your input!
