Ask HN: Are small local LLMs good at coding?

2 points by usermac 7 days ago · 3 comments


I use the professional LLMs, of course, but I'm really intrigued by the possibility of coding locally, offline. I've got a MacBook Air M4 with 16GB of RAM. Does it have any chance at all of handling coding? NOTE: I'm not too worried about the context window, because the way I work is very targeted and surgical. I'll have the model look at one file and do something very exact to that file.

benchwright 7 days ago

They can be. I've done some drift detection work against local models, and for the most part they do OK. I think there's always room for an augmented approach where local models handle programmatic parsing and structure while large models handle the actual coding routines. Where possible I try what I call witness coding, running local+API side by side, to see whether there are capabilities one side catches that the other misses.
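To make the split concrete, here's a minimal sketch of what routing between a local and a hosted model could look like. This is purely illustrative: the `route_task` function, the keyword list, and the "local"/"api" labels are all hypothetical, not part of any real tool, and a real router would use something smarter than keyword matching.

```python
# Hypothetical sketch of the local+API split described above.
# Mechanical parsing/structure work stays on the local model;
# open-ended coding goes to the larger hosted model.

LOCAL_TASKS = {"parse", "extract", "lint", "rename", "format"}

def route_task(description: str) -> str:
    """Return "local" for mechanical structure work, "api" otherwise."""
    words = set(description.lower().split())
    return "local" if words & LOCAL_TASKS else "api"

print(route_task("parse this JSON config and extract the ports"))  # local
print(route_task("refactor this module to use async IO"))          # api
```

In practice the "local" branch might call something like Ollama's HTTP endpoint on the laptop while the "api" branch calls a hosted provider, which also lets you run both and compare outputs, as the witness-style check described above.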
