Show HN: Quibble – Adversarial AI document review using Codex and Claude
I built Quibble to get better feedback on technical documents (specs, plans, RFCs) by having two AI models argue about them. I still don't know if it's any good, so I'm sharing it here in the hope that you'll try it out and give me feedback!
It works like this:
1. Codex reviews your document and raises issues/opportunities
2. Claude responds to the feedback and revises the document
3. Codex checks if the revisions actually addressed the concerns
4. Repeat until consensus or max rounds
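The loop above could be sketched roughly like this. This is a simplified illustration with hypothetical names, not Quibble's actual internals; `reviewer` and `reviser` stand in for the Codex and Claude CLI calls:

```typescript
// Sketch of the adversarial review loop (hypothetical names, not
// Quibble's real code). reviewer/reviser stand in for Codex/Claude.
type Reviewer = (doc: string) => string[];
type Reviser = (doc: string, issues: string[]) => string;

function adversarialReview(
  doc: string,
  reviewer: Reviewer,
  reviser: Reviser,
  maxRounds = 3,
): { doc: string; rounds: number } {
  for (let round = 1; round <= maxRounds; round++) {
    const issues = reviewer(doc);      // Codex raises issues
    if (issues.length === 0) {
      return { doc, rounds: round };   // consensus: nothing left to flag
    }
    doc = reviser(doc, issues);        // Claude responds and revises
  }
  return { doc, rounds: maxRounds };   // hit the round limit
}

// Toy stand-ins: the reviewer flags TODO markers, the reviser resolves them.
const reviewer: Reviewer = (d) =>
  d.includes("TODO") ? ["unresolved TODO"] : [];
const reviser: Reviser = (d) =>
  d.replace("TODO: define error handling.", "Errors are retried three times.");

const result = adversarialReview(
  "Plan: ship v1. TODO: define error handling.",
  reviewer,
  reviser,
);
```

The key property is that the same reviewer re-checks the revision (step 3), so a revision that dodges the concern just triggers another round.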
The adversarial setup catches things a single-pass review misses. Codex tends to be nitpicky about specifics, while Claude tends to defend or over-explain, so the tension between them surfaces real problems.
You can focus reviews on specific aspects (e.g. --focus "security and error handling") and resume sessions if you want to iterate further. Try it out like this (no install needed):
npx @mfelix.org/quibble docs/plan.md
Requires the Codex and Claude CLIs on your PATH. Sessions are saved to disk so you can inspect the back-and-forth.
npm: https://www.npmjs.com/package/@mfelix.org/quibble
gh: https://github.com/mfelix/quibble
Thanks for reading and checking it out!