Compressing LLMs with progressive pruning and multi-objective distillation

rig.ai

4 points by adam_patarino 14 days ago · 3 comments

adam_patarino (OP) 14 days ago

We're compressing a mixture-of-experts model to fit on smaller hardware using a reinforcement-learning approach called Self-Distillation Policy Optimization, combined with progressive expert pruning, multi-objective knowledge distillation, speculative decoding, and custom quantization.
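For readers unfamiliar with the multi-objective distillation part, here is a minimal sketch of what such a loss could look like. The post doesn't share its actual formulation, so the choice of objectives (soft-target KL, ground-truth cross-entropy, hidden-state matching), the temperature, and the weights below are all illustrative assumptions, not the authors' method.

```python
# Hypothetical multi-objective distillation loss (illustrative only; the
# actual objectives and weights used by the project are not published).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      student_hidden, teacher_hidden,
                      temperature=2.0, w_kl=0.5, w_ce=0.3, w_hidden=0.2):
    # Objective 1: match the teacher's output distribution (soft targets).
    kl = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    # Objective 2: standard cross-entropy against ground-truth labels.
    ce = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        labels.view(-1),
    )

    # Objective 3: align intermediate hidden states with the teacher.
    hidden = F.mse_loss(student_hidden, teacher_hidden)

    # Weighted sum of the objectives; the weights are placeholders and
    # would normally be tuned or annealed during training.
    return w_kl * kl + w_ce * ce + w_hidden * hidden
```

A plain weighted sum is the simplest way to combine the objectives; more elaborate schemes (e.g. annealing the weights over training) would fit the same interface.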

MikeSynnott 14 days ago

Awesome!
