Timeline of Diffusion Language Models
github.com

I'm curious what the actual inference unit economics look like compared to standard autoregressive models. Parallel decoding helps with latency, but is the total compute cost per token low enough to make it viable for production workloads yet?