LFM2.5-350M: No Size Left Behind

liquid.ai

3 points by jbarrow a month ago · 1 comment

jbarrowOP a month ago

Very cool to see a company pushing what's possible with (relatively) tiny models! A 350M-parameter model trained on 28T tokens that, judging from the benchmarks, is competitive with Qwen3.5-0.8B.

Comparing the architecture to Qwen3.5, it seems:

- fewer, wider layers

- mixing full attention and convolutions, instead of the full + linear attention of Qwen3.5

- the vocab is about 1/4 the size
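The attention/conv mix described above can be sketched as a layer schedule: most blocks are cheap convolution layers, with full-attention blocks interleaved at intervals. This is a minimal illustrative sketch; the layer counts, interleaving pattern, and the `hybrid_layer_schedule` helper are hypothetical, not LFM2.5's actual configuration.

```python
# Hypothetical sketch of a hybrid decoder stack: mostly short-convolution
# blocks, with a few full-attention blocks mixed in. The counts and the
# interleaving pattern below are illustrative, not LFM2.5's published config.

def hybrid_layer_schedule(n_layers: int, attn_every: int) -> list[str]:
    """Return a layer-type list: a full-attention block every
    `attn_every` layers, a convolution block everywhere else."""
    return [
        "attention" if (i + 1) % attn_every == 0 else "conv"
        for i in range(n_layers)
    ]

schedule = hybrid_layer_schedule(n_layers=16, attn_every=4)
print(schedule)
```

The intuition: conv blocks keep per-token compute and KV-cache cost low, while the sparse full-attention layers retain global context mixing across the sequence.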
