Smaller Language Models Can Outperform LLMs

blog.deepgram.com

2 points by jasondrowley 3 years ago · 1 comment

joebiden2 3 years ago

Suppose this were true: then by the same logic, an extremely small language model would outperform small language models. That sounds like inventing a compression algorithm that compresses any input by at least one byte.

Sorry for the snark, but the linked article doesn't explain why this obvious inference is false in this case.
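
For what it's worth, here is a sketch of the counting argument the compression analogy alludes to (my addition, not from the article or the comment): a lossless compressor that shrinks every input is ruled out by the pigeonhole principle, since there are more inputs of length $n$ than possible shorter outputs,

$$
\#\{\text{bit strings of length } n\} = 2^n
\qquad\text{vs.}\qquad
\#\{\text{bit strings of length} < n\} = \sum_{k=0}^{n-1} 2^k = 2^n - 1,
$$

so no injective map from all length-$n$ inputs to strictly shorter outputs can exist. The implied question is whether "smaller outperforms larger" is a general claim of that impossible kind, or only holds for specific tasks and model ranges.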
