Show HN: Reprompt – Analyze what you type into AI tools, not what they output

github.com

3 points by LuxBennu a month ago · 3 comments

LuxBennu (OP) a month ago

I ran this on my own prompt history and three things surprised me. It found 3 API keys buried in copy-pasted stack traces (`reprompt privacy`). 35% of my agent sessions had error loops -- the agent retrying the same failing approach 3+ times (`reprompt agent`). And 50-70% of my conversation turns were filler like "ok try that" (`reprompt distill`).

    pip install reprompt-cli
    reprompt scan && reprompt
Everything runs locally -- zero network calls, zero telemetry. Also works as an MCP server and GitHub Action.
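
The privacy scan described above can be sketched as a pure-regex pass over prompt text -- this is my own illustration of that kind of local check, not reprompt's actual implementation, and the pattern names and rules are assumptions:

```python
import re

# Hypothetical patterns -- illustrative only, not reprompt's actual rule set.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
}

def scan_prompt(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs -- no network calls, no model."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for m in pattern.finditer(text):
            hits.append((name, m.group(0)))
    return hits

# A stack trace pasted into a prompt, with a key hiding in a header dict.
trace = "Traceback ...\n  headers={'X-Api-Key': 'AKIAABCDEFGHIJKLMNOP'}"
print(scan_prompt(trace))  # → [('aws_access_key', 'AKIAABCDEFGHIJKLMNOP')]
```

Regexes like these run in microseconds per prompt, which is what makes a fully local, zero-telemetry scan practical.
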

kiyeonjeon a month ago

Love the "no LLM calls" approach. Scoring prompts in <1ms locally is exactly the right tradeoff. Most tools overcomplicate this.

  • LuxBennu (OP) a month ago

    Thanks! Turns out structural signals get you surprisingly far. An LLM catches more, but speed is the feature.
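
    To make "structural signals" concrete: checks like the two below need no tokenizer and no model, just string comparisons. A sketch under my own assumptions about what counts as filler and as an error loop -- not reprompt's actual scoring:

```python
import re

# Hypothetical filler patterns -- my assumption, not reprompt's list.
FILLER = re.compile(r"(?i)^\s*(ok(ay)?( try that)?|yes|sure|try that|go ahead|thanks?)[\s.!]*$")

def structural_score(turns: list[str]) -> dict[str, float]:
    """Cheap local signals: filler ratio and longest run of identical turns."""
    filler = sum(1 for t in turns if FILLER.match(t))
    # Error-loop proxy: the same turn repeated back-to-back 3+ times.
    longest_repeat = run = 1
    for prev, cur in zip(turns, turns[1:]):
        run = run + 1 if cur == prev else 1
        longest_repeat = max(longest_repeat, run)
    return {
        "filler_ratio": filler / len(turns) if turns else 0.0,
        "longest_repeat": longest_repeat,
    }

session = ["fix the build", "ok try that", "retry pytest", "retry pytest", "retry pytest"]
print(structural_score(session))  # → {'filler_ratio': 0.2, 'longest_repeat': 3}
```

    Everything here is O(n) over the turn list, which is why sub-millisecond scoring is plausible even for long sessions.
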
