Show HN: OpenSlimedit – Cut AI coding token usage by 21-45% with zero config (github.com)

2 points by aSidorenkoCode 3 months ago · 8 comments

verdverm 3 months ago

I don't think having a tool that rewrites tool descriptions is a good idea.

You should put effort into getting them in a good place and accept the token levels (which is part of that design space).

  • aSidorenkoCode (OP) 3 months ago

    Every API call sends the full tool schema for all available tools. In a 10-20 step session, you're paying for the same verbose descriptions over and over. Models don't need a paragraph-long explanation of "read" on the 15th call.

    This plugin slims descriptions to one-liners like "Read file content." while cutting 21-45% of token usage. No schema changes, no custom tools. Just trimmed boilerplate as an opt-in plugin.
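
    To make the accumulation concrete (illustrative numbers below, not measurements from any particular model or tool set):

        # Back-of-the-envelope, with made-up but plausible numbers.
        n_tools = 15          # tools exposed to the model
        verbose_desc = 150    # tokens per verbose description
        slim_desc = 8         # tokens per one-liner
        n_calls = 15          # API calls in one agent session

        full = n_tools * verbose_desc * n_calls   # 33,750 description tokens
        slim = n_tools * slim_desc * n_calls      #  1,800 description tokens
        print(full - slim)                        # ~32k tokens saved on descriptions alone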

    • verdverm 3 months ago

      > Every API call sends the full tool schema for all available tools.

      Only if you are doing it wrong, search >>> summarization

      Then the other question: is it deterministic between runs, or am I going to get a different summary each session, turn, or tool call? And depending on that frequency, am I using more tokens than I save by doing summarization for N tools?

      Minimizing token usage is not the goal in and of itself, re: the ageless tradeoff of quantity vs quality

      For some context, my system prompt is around 5k tokens at the start. I put file contents there (read/write files, agents.md), which saves millions of tokens and seems to work better than making them message parts.
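
      (A rough sketch of that kind of setup, with hypothetical file names rather than the exact config:)

          # Sketch: inline key files into the system prompt once per session,
          # instead of re-attaching them as message parts on every turn.
          from pathlib import Path

          def build_system_prompt(base: str, files=("AGENTS.md",)) -> str:
              parts = [base]
              for name in files:
                  p = Path(name)
                  if p.exists():
                      parts.append(f"--- {name} ---\n{p.read_text()}")
              return "\n\n".join(parts)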

      > Just trimmed boilerplate

      This is not what I see this tool doing. It's automatically manipulating words in the background that you should put far more care and attention towards. Referring to those words as "boilerplate" you can just throw into a slop machine is revealing.

      • aSidorenkoCode (OP) 3 months ago

        The benchmarks show no degradation in task completion with the shorter descriptions. We're in the age where frontier LLMs don't need instructions on how to read or edit a file.

        The descriptions aren't dynamically summarized either. They're static in the plugin, same every call, every session. Zero overhead, fully deterministic.
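
        In sketch form (hypothetical tool names below, not the plugin's actual source), it's a constant lookup applied to the tool list, so nothing varies between calls and no model is involved in the slimming:

            # Illustrative only: a fixed override table, not OpenSlimedit's real code.
            SLIM_DESCRIPTIONS = {
                "read": "Read file content.",
                "edit": "Edit a file by replacing text.",
                "bash": "Run a shell command.",
            }

            def slim_tools(tools):
                # Swap each verbose description for its static one-liner;
                # tool names and parameter schemas are left untouched.
                return [
                    {**t, "description": SLIM_DESCRIPTIONS.get(t["name"], t["description"])}
                    for t in tools
                ]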

        This has been validated in over 3000 benchmark runs in OpenCode, and I ran the entire Exercism Python practice suite (https://github.com/exercism/python/tree/main/exercises/pract...) with and without the plugin, with identical results. An initial dataset is shared in the repo.

        • verdverm 3 months ago

          Have you made that benchmarking process open so others could reproduce it?

          > with identical results

          If your results are identical, you should be very sus; something is wrong if this is true. Nothing in agentic is reliable or fully deterministic

          • aSidorenkoCode (OP) 3 months ago

            Good benchmark results don't mean identical outputs. The task completion rate is the same: both pass the same exercises. The paths the model takes differ, but the end result is the same -> pass the tests

            The full benchmarking methodology and tooling will be published alongside the paper.

            • verdverm 3 months ago

              you used the word "identical" to describe it, not me

              words matter

              which is why I still think this is a terrible idea. I don't think it holds up in the general case and would, as a peer reviewer, be inclined to believe there is benchmark filtering that makes for good results.

              You should use the same benchmarks everyone else is using when you write your paper
