True 4-Bit Quantized CNN Training on CPU – 92.34% on CIFAR-10

arxiv.org

3 points by shivnathtathe 2 months ago · 4 comments

jcalvinowens 2 months ago

> true 4-bit precision

This isn't one of the new block floating point schemes, it's bona fide 4-bit precision weights. It boggles my mind that this can actually work.

  • yorwba 2 months ago

    Well, the weights are accumulated in full precision and are multiplied by a full-precision scale factor after quantization, and the activations and backward pass are computed in full precision as well, so it's not quite true 4-bit precision training. The resulting model can be stored with just slightly more than 4 bits per parameter, though.
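
    A minimal sketch of what that scheme might look like in PyTorch (assuming a straight-through estimator for the backward pass; the per-tensor scale and the signed [-8, 7] grid are illustrative choices, not necessarily the paper's exact recipe):

      import torch

      def fake_quant_4bit(w: torch.Tensor) -> torch.Tensor:
          # Map the largest-magnitude weight onto the signed 4-bit
          # grid [-8, 7], then dequantize with the same full-precision
          # scale factor.
          scale = w.abs().max() / 7.0
          q = torch.clamp(torch.round(w / scale), -8, 7)
          w_q = q * scale
          # Straight-through estimator: the forward pass sees w_q,
          # but gradients flow to the full-precision master weights w.
          return w + (w_q - w).detach()

      # Master weights stay in full precision, so SGD can accumulate
      # updates much smaller than one quantization step.
      w = torch.randn(16, 3, 3, 3, requires_grad=True)
      x = torch.randn(1, 3, 32, 32)
      y = torch.nn.functional.conv2d(x, fake_quant_4bit(w), padding=1)
      y.sum().backward()

    Stored after training, such a model needs only the 4-bit codes plus one full-precision scale per tensor (or per channel), which is where the "slightly more than 4 bits per parameter" figure comes from.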

    • jcalvinowens 2 months ago

      I really just don't understand how the quantization error doesn't ruin the results. Is there some reading you'd recommend?

      I can easily understand how the block formats win.
