Show HN: Reprompt – Analyze what you type into AI tools, not what they output (github.com)

I ran this on my own prompt history and three things surprised me. It found 3 API keys buried in copy-pasted stack traces (`reprompt privacy`). 35% of my agent sessions had error loops -- the agent retrying the same failing approach 3+ times (`reprompt agent`). And 50-70% of my conversation turns were filler like "ok try that" (`reprompt distill`).
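For a sense of how local secret detection like `reprompt privacy` can work without any network calls, here's a minimal regex-based sketch. The patterns and names are hypothetical illustrations, not Reprompt's actual rules (real scanners use far more patterns plus entropy checks):

```python
import re

# Hypothetical patterns for illustration; not Reprompt's actual rule set.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key['\"]?\s*[:=]\s*['\"]?[A-Za-z0-9_\-]{20,}"
    ),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (kind, match) pairs for anything that looks like a credential."""
    hits = []
    for kind, pattern in SECRET_PATTERNS.items():
        for m in pattern.finditer(text):
            hits.append((kind, m.group(0)))
    return hits
```

Running this over a pasted stack trace catches exactly the kind of key that slips in unnoticed, e.g. `find_secrets("boto3 error for key AKIAABCDEFGHIJKLMNOP")` flags an `aws_access_key`.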
pip install reprompt-cli
reprompt scan && reprompt
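The error-loop stat above (`reprompt agent` flagging the same failing approach repeated 3+ times) can be approximated by fuzzy-matching consecutive agent actions. A hedged sketch, assuming a session is just a list of action strings and using similarity as a stand-in for "same approach":

```python
from difflib import SequenceMatcher

def count_error_loops(actions: list[str],
                      threshold: float = 0.9,
                      min_repeats: int = 3) -> int:
    """Count runs where near-identical actions repeat min_repeats+ times in a row.

    threshold and min_repeats are illustrative defaults, not Reprompt's.
    """
    loops = 0
    run = 1
    for prev, cur in zip(actions, actions[1:]):
        if SequenceMatcher(None, prev, cur).ratio() >= threshold:
            run += 1
        else:
            if run >= min_repeats:
                loops += 1
            run = 1
    if run >= min_repeats:  # close out a run that ends the session
        loops += 1
    return loops
```

A session like `["pip install foo", "pip install foo", "pip install foo", "ls"]` counts as one loop: the agent tried the identical fix three times before changing tack.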
Everything runs locally -- zero network calls, zero telemetry. Also works as an MCP server and GitHub Action.

Love the "no LLM calls" approach. Scoring prompts in <1ms locally is exactly the right tradeoff. Most tools overcomplicate this.
Thanks! Turns out structural signals get you surprisingly far. An LLM catches more, but speed is the feature.
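To make "structural signals" concrete: sub-millisecond scoring is possible because everything is string checks, no model in the loop. A toy sketch with made-up weights and signals (not Reprompt's actual scoring):

```python
def score_prompt(prompt: str) -> float:
    """Score 0-1 from cheap structural signals. Weights are illustrative."""
    score = 0.0
    words = prompt.split()
    if 20 <= len(words) <= 400:              # enough context, not a log dump
        score += 0.3
    if "```" in prompt:                      # code is delimited, not inlined
        score += 0.2
    if any(w.lower() in {"expected", "actual", "error"} for w in words):
        score += 0.2                         # states expected vs actual behavior
    if prompt.strip().endswith("?") or \
       prompt.lower().startswith(("fix", "write", "explain")):
        score += 0.3                         # there is a clear ask
    return round(score, 2)
```

Filler like "ok try that" scores 0.0 under these heuristics, which is roughly why structural checks alone separate signal from noise well enough before reaching for an LLM.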