LangPatrol: A static analyzer for LLM prompts that catches bugs before inference

github.com

4 points by mmarvin 10 days ago · 2 comments

mmarvin (OP) 10 days ago

We built a small SDK that lints prompts before they ever hit an LLM. In practice it behaves like ESLint for prompts. It runs locally, with no external calls, and flags issues that usually waste tokens or produce inconsistent outputs: unresolved template variables, missing contextual references, contradictory instructions, schema contamination when you expect structured output, and prompts that risk overrunning the model's context window.

It exposes a single function in code; a CLI for CI is in the works. The analyzer is language-agnostic and fast enough to sit in any prompt-generation pipeline; we aim for under 50 ms per call. There is also a small dev server with a React UI for experimenting interactively.
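Roughly, calling it from code looks like the sketch below. The function and field names are illustrative placeholders rather than the exact API, but the shape is the same: pass a prompt in, get a list of flagged issues back.

    # Hypothetical usage sketch: the entry point and issue fields below are
    # illustrative assumptions, not necessarily the SDK's actual API.
    from langpatrol import analyze_prompt  # assumed single entry point

    prompt = """
    You are a support assistant for {{company_name}}.
    Always respond with valid JSON.
    Reply in friendly, free-form prose.
    Summarize the ticket in {{ticket_body}}.
    """

    report = analyze_prompt(prompt, expected_output="json")
    for issue in report.issues:
        # e.g. the unresolved {{company_name}} / {{ticket_body}} variables,
        # plus the contradictory "valid JSON" vs. "free-form prose" instructions
        print(issue.code, issue.message)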

The goal is to treat prompts as first-class artifacts and catch structural defects early rather than debugging after the fact. Happy to answer questions about heuristics, false positives, or how we estimate token overage.

All of it is open source under MIT, and we plan to keep expanding the issue set. We are also exploring a complementary prompt optimization layer that builds on top of the static analysis described above.

Happy to discuss details or help anyone experiment with it.

  • mmarvin (OP) 10 days ago

    And just FYI: we also run a hosted version that performs inference-based validation and optimization, which the local SDK cannot do (for obvious reasons). The SDK is fully usable on its own, but the hosted service is there for teams who want deeper dynamic checks.
