Open-weight 27B hits 38% on Terminal-Bench 2.0 (Opus 4.1 hit 38% in Aug 2025)

antigma.ai

6 points by ubermon 7 days ago · 10 comments


annjose 7 days ago

> today's best runnable-offline model is roughly 6–8 months behind today's frontier.

But it doesn't matter much, because frontier models were already extremely good 8 months ago and we were doing real work with them. Now we also have more capable open-source agents like pi and OpenCode that work well with these models.

More importantly, offline models are the best choice for privacy, on-device inference, and freedom from token/cost anxiety.

  • ubermonOP 7 days ago

    Totally agree! I think we are very early in discovering the full potential of local models.

  • swrrt 7 days ago

    Yep, offline mode is also useful for edge devices. I'm actually considering deploying an extremely small model on a Steam Deck.

merkleforest 7 days ago

> 2. How local use feels in practice

Do we have stats on how these models do on Mac M-series chips?
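A rough answer follows from memory budgets alone: whether a 27B model fits in an M-series chip's unified memory depends mostly on quantization. A back-of-the-envelope sketch (weight memory only; KV cache and runtime overhead are extra, and the bit-widths are illustrative assumptions, not figures from the thread):

```python
# Rough weight-memory estimate for running a 27B-parameter model locally.
# Assumption: weights dominate; KV cache and runtime overhead add more on top.

def weight_memory_gib(params_billion: float, bits_per_param: float) -> float:
    """Approximate weight memory in GiB at a given quantization bit-width."""
    total_bytes = params_billion * 1e9 * bits_per_param / 8
    return total_bytes / (1024 ** 3)

for name, bits in [("FP16", 16), ("Q8", 8), ("Q4", 4)]:
    print(f"{name}: ~{weight_memory_gib(27, bits):.1f} GiB")
```

By this estimate, a 4-bit quant of a 27B model (~13 GiB of weights) fits on a 32 GiB unified-memory Mac with room to spare, while FP16 (~50 GiB) needs a 64 GiB or larger machine.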

timothyshen123 7 days ago

Interesting find. Thanks for sharing.

  • ubermonOP 7 days ago

    Thank you! I think there is a lot to dig into later across different hardware, inference engines, and prompt/harness setups.

debpack 7 days ago

this is super sick man
