DeepSeek's Multi-Head Latent Attention (liorsinai.github.io)

Matrix absorption is not strictly necessary. What is needed is to associate the chain of matrix multiplications in the same order the absorption implies. This, together with the decoupled RoPE, is what makes the latent KV caching work.
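To make the point concrete, here is a minimal NumPy sketch of the query-side score computation (single head, decoupled-RoPE path omitted, all dimensions and weight names hypothetical): reassociating q_t^T (W_UK c_i) as (W_UK^T q_t)^T c_i lets scores be computed directly against the cached latents, without ever materializing an absorbed W_UK^T W_UQ matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_latent, d_head = 16, 4, 8  # illustrative sizes (assumptions)
T = 5                                 # tokens already in the cache

# Random weights standing in for trained parameters (names hypothetical).
W_DKV = rng.standard_normal((d_latent, d_model))  # down-projection to the latent
W_UK  = rng.standard_normal((d_head, d_latent))   # up-projection: latent -> key
W_UQ  = rng.standard_normal((d_head, d_model))    # query projection

H = rng.standard_normal((T, d_model))  # hidden states of the T cached tokens
C = H @ W_DKV.T                        # latent KV cache: only this is stored

h_t = rng.standard_normal(d_model)     # current token's hidden state
q_t = W_UQ @ h_t

# Naive order: reconstruct the full keys from the cache, then score.
K = C @ W_UK.T                         # (T, d_head)
scores_naive = K @ q_t                 # q_t . k_i for each cached token

# "Absorbed" order: associate q_t^T (W_UK c_i) as (W_UK^T q_t)^T c_i.
# No merged W_UK^T @ W_UQ matrix is ever formed.
q_latent = W_UK.T @ q_t                # project the query into latent space
scores_assoc = C @ q_latent            # score directly against the latent cache

assert np.allclose(scores_naive, scores_assoc)
```

The value/output side associates the same way: instead of absorbing W_UV into the output projection W_O, one computes the attention-weighted sum of latents first and then applies W_UV followed by W_O.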