Show HN: Chippery, an OpenCode fork that (often) uses 20-40% fewer tokens
chippery.ai

I kept hitting token limits with Claude Code on larger codebases and ended up building Chippery (a fork of OpenCode) to reduce context size outside the model.
It uses a symbolic index, a navigation layer, semantic and PageRank-like ranking, and some context reduction / compression techniques to avoid resending and re-reading the same files and lookups.
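To make the ranking idea concrete, here is a minimal sketch of what PageRank-style scoring over a file-reference graph can look like; this is my illustration, not Chippery's actual code, and the file names, graph, and context budget are invented for the example.

```python
# Hypothetical sketch (not Chippery's implementation): score files by a
# PageRank-style power iteration over a symbol-reference graph, then pack
# only the top-ranked files into the model's context budget.

def pagerank(edges, damping=0.85, iters=50):
    """Power-iteration PageRank over a dict of node -> referenced nodes."""
    nodes = set(edges) | {t for ts in edges.values() for t in ts}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        # Every node gets a baseline share, plus rank flowing in along edges.
        nxt = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for src, targets in edges.items():
            if not targets:
                continue
            share = damping * rank[src] / len(targets)
            for t in targets:
                nxt[t] += share
        rank = nxt
    return rank

# Invented example graph: file A -> file B when A references a symbol
# defined in B (imports, calls, etc.).
refs = {
    "api.py": ["auth.py", "db.py"],
    "auth.py": ["db.py", "util.py"],
    "db.py": ["util.py"],
    "util.py": [],
}

scores = pagerank(refs)
budget = 2  # pretend the context window only fits two files
context = sorted(scores, key=scores.get, reverse=True)[:budget]
print(context)  # ['util.py', 'db.py']: heavily-referenced files win the budget
```

The point of the sketch is the shape of the approach: ranking happens entirely outside the model, so only the highest-value files are ever sent, and everything else stays behind the index until it is actually needed.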
I ran benchmarks, mostly with Anthropic's models, and saw roughly 20–40% token reduction on average depending on the workflow; some runs went well beyond that, others came in lower.
There's also a Claude Code hook that exposes the same tools, but it's still a bit clunky.
It’s fully open-source, with an optional paid Pro / lifetime tier for support.