True 4-Bit Quantized CNN Training on CPU – 92.34% on CIFAR-10
(arxiv.org)

> true 4-bit precision
This isn't one of the new block floating point schemes; it's bona fide 4-bit precision weights. It boggles my mind that it can actually work.
Well, the weights are accumulated in full precision and multiplied by a full-precision scale factor after quantization, and the activations and the backward pass are computed in full precision as well, so it's not quite true 4-bit precision training. The resulting model can still be stored with only slightly more than 4 bits per parameter, though.
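For concreteness, here's roughly what that setup looks like (a minimal sketch based on my reading, not code from the paper): the 4-bit codes are paired with a full-precision per-tensor scale, and the optimizer keeps updating an FP32 master copy of the weights.

    import numpy as np

    def quantize_4bit(w_fp32):
        # map full-precision weights onto signed 4-bit codes in [-7, 7]
        scale = np.abs(w_fp32).max() / 7.0          # full-precision scale factor
        q = np.clip(np.round(w_fp32 / scale), -7, 7)
        return q.astype(np.int8), scale

    def dequantize(q, scale):
        # weights actually used in the forward pass: code * scale, in FP32
        return q.astype(np.float32) * scale

    # one (hypothetical) training step: the master weights stay in FP32,
    # only the values fed to the layer are quantized, and the gradient
    # update is accumulated into the FP32 copy
    w_master = (0.05 * np.random.randn(64, 32)).astype(np.float32)
    q, s = quantize_4bit(w_master)
    w_used = dequantize(q, s)                        # forward/backward use this
    grad = np.random.randn(*w_master.shape).astype(np.float32)  # stand-in gradient
    w_master -= 1e-2 * grad                          # full-precision accumulation

Storing just the int4 codes plus one scale per tensor is what gets you to a little over 4 bits per parameter.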
I really just don't understand how the quantization error doesn't ruin the results. Is there some reading you'd recommend?
I can easily understand how the block formats win.
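For contrast, here's a rough sketch of the block-format idea as I understand it (block size of 16 assumed for illustration): one shared scale per small block of values rather than per tensor, which keeps the quantization error much more local.

    import numpy as np

    def block_quantize(x, block=16, bits=4):
        # one shared scale per block of `block` values instead of per tensor;
        # assumes x.size is a multiple of `block`
        xb = x.reshape(-1, block)
        qmax = 2 ** (bits - 1) - 1                   # 7 for signed 4-bit codes
        scale = np.abs(xb).max(axis=1, keepdims=True) / qmax
        scale = np.where(scale == 0, 1.0, scale)     # avoid divide-by-zero
        q = np.clip(np.round(xb / scale), -qmax, qmax)
        return q.astype(np.int8), scale.astype(np.float32)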