Ask HN: Tips for reducing LLM token usage?

1 point by vmt-man 6 months ago · 6 comments


I've been using Claude Code with Serena MCP, but for the past few weeks it's been compressing the context more often. I have two Pro accounts, and even that's no longer enough for my daily needs :(

Also, Claude Code tends to make very broad search requests, and I quite often get an error from the MCP server about exceeding 25,000 characters.
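
So far the only workaround I can think of is capping tool output myself before it reaches the context. A rough sketch of what I mean (the helper and the head/tail strategy are purely illustrative, not how Serena actually behaves):

```python
# Illustrative only: clamp oversized tool/search output to a character
# budget before it lands in the context. The 25_000 limit mirrors the MCP
# error above; the helper name and truncation strategy are hypothetical.
MAX_TOOL_CHARS = 25_000

def cap_tool_output(text: str, limit: int = MAX_TOOL_CHARS) -> str:
    """Keep the head and tail of an oversized dump and drop the middle."""
    if len(text) <= limit:
        return text
    marker = "\n...[truncated]...\n"
    half = (limit - len(marker)) // 2
    return text[:half] + marker + text[-half:]
```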

What would you recommend?

bigyabai 6 months ago

> What would you recommend?

Invest in a local inference server and run Qwen3. At this point it will still cost less than two Pro accounts.
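
Most local servers (llama.cpp's server, vLLM, Ollama) expose an OpenAI-compatible endpoint, so your existing tooling mostly just needs a base-URL swap. Minimal sketch; the port and model name are illustrative and depend on how you launch the server:

```python
# Minimal sketch: query a locally served Qwen3 over the OpenAI-compatible
# chat API. The base_url and model name are assumptions about your setup.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # e.g. a vLLM or llama.cpp server
    api_key="not-needed",                 # local servers usually ignore this
)

resp = client.chat.completions.create(
    model="Qwen/Qwen3-30B-A3B",  # whatever name your server registered
    messages=[{"role": "user", "content": "Summarize this diff in one line."}],
    max_tokens=128,
)
print(resp.choices[0].message.content)
```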

  • brulard 6 months ago

    Don't do that. You'll spend much of your time tinkering with hardware and software instead of doing the work you actually care about. I recently upgraded to Claude Max (the $100 version). It's not cheap, but it pays for itself. On top of that, the local setup recommended here will be slower and dumber, and it will cost you many hundreds of dollars up front. Models and tools are improving quickly, and I don't want to imagine how much time you'd spend keeping local models up to date yourself. If you just run Claude, all of that is taken care of; Claude Code is the best agentic tool there is, and it improves every few weeks.

  • vmt-man (OP) 6 months ago

    What hardware do you suggest? :)

    • bigyabai 6 months ago

      I dunno, whatever you can afford?

      Nvidia hardware is cheap as chips right now. If you got 2× 3060 12 GB cards (or a 24 GB 4090), you'd have 24 GB of CUDA-accelerated VRAM to play with for inference and fine-tuning. That's plenty to fit smaller SOTA models like Qwen3 30B A3B, and enough to start offloading layers of the bigger MoEs like GLM-4.5 Air, Llama Scout, and the other 100B+ parameter options (rough math below).

      That's what I'd get, at least.
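
      The back-of-envelope math, if you want to sanity-check what fits: weights at your chosen quantization plus a flat guess for KV cache and runtime overhead (the 2 GB figure is exactly that, a guess):

      ```python
      # Back-of-envelope VRAM estimate: weights = params * bits/8, plus a
      # flat ~2 GB guess for KV cache, activations, and runtime overhead.
      def vram_gb(params_billion: float, bits: int, overhead_gb: float = 2.0) -> float:
          weights_gb = params_billion * 1e9 * (bits / 8) / 1024**3
          return weights_gb + overhead_gb

      for name, params in [("Qwen3 30B A3B", 30), ("GLM-4.5 Air", 106)]:
          for bits in (4, 8):
              print(f"{name} @ {bits}-bit: ~{vram_gb(params, bits):.0f} GB")

      # Qwen3 30B at 4-bit lands around ~16 GB, so it fits in 24 GB of VRAM;
      # GLM-4.5 Air (~106B total) at 4-bit is ~51 GB, hence offloading layers.
      ```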

      • vmt-man (OP) 6 months ago

        > GLM-4.5 Air, Qwen3 30B A3B, and Llama Scout

        Are they good enough compared to Sonnet 4?

        I've also used Gemini 2.5 Pro and Flash, and they're worse than Sonnet, even though they're much bigger than 30B.

        • bigyabai 6 months ago

          In my opinion? Qwen3 does live up to the benchmarks; it leaves Sonnet 4 in the dust quality-wise if you can get a fast enough tok/s to use it. I haven't tried GLM or Llama Scout yet, nor do I have much of a frame of reference for the quality of Opus 4.

          You might be able to try out Qwen3 via API to see if it suits your needs. Their 30B MoE is really impressive, and the 480B one can only be better (presumably).
