Current sampling methods for diffusion models typically require dozens to hundreds of sequential denoising steps to generate a single sample, which limits their efficiency and scalability for real-time applications. Various distillation techniques have been developed to accelerate sampling, but they often come with drawbacks such as high computational cost, complex training procedures, or reduced sample quality.
Extending our previous research on consistency models 1,2, we have simplified the formulation and further stabilized the training process of continuous-time consistency models. Our new approach, called sCM, has enabled us to scale the training of continuous-time consistency models to an unprecedented 1.5 billion parameters on ImageNet at 512×512 resolution. sCMs can generate samples with quality comparable to that of diffusion models using only two sampling steps, resulting in a ~50x wall-clock speedup. For example, our largest model, with 1.5 billion parameters, generates a single sample in just 0.11 seconds on a single A100 GPU without any inference optimization. Additional acceleration is readily achievable through customized system optimization, opening up possibilities for real-time generation in domains such as image, audio, and video.
For rigorous evaluation, we benchmarked sCM against other state-of-the-art generative models, comparing both sample quality, measured by the standard Fréchet Inception Distance (FID, where lower is better), and effective sampling compute, an estimate of the total compute cost of generating each sample. As shown below, our 2-step sCM produces samples with quality comparable to the best previous methods while using less than 10% of the effective sampling compute, significantly accelerating the sampling process.
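For concreteness, here is a minimal sketch of how FID is computed. It assumes `real_feats` and `gen_feats` are Inception-v3 feature activations (hypothetical arrays of shape [N, 2048]) already extracted from real and generated images elsewhere; only the Fréchet-distance arithmetic is shown.

```python
import numpy as np
from scipy import linalg

def fid(real_feats: np.ndarray, gen_feats: np.ndarray) -> float:
    # Fit a Gaussian (mean and covariance) to each set of Inception features.
    mu1, sigma1 = real_feats.mean(axis=0), np.cov(real_feats, rowvar=False)
    mu2, sigma2 = gen_feats.mean(axis=0), np.cov(gen_feats, rowvar=False)
    # Frechet distance between the two Gaussians:
    # ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * (sigma1 @ sigma2)^{1/2})
    diff = mu1 - mu2
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    covmean = covmean.real  # drop tiny imaginary parts from numerical error
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```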
Consistency models offer a faster alternative to traditional diffusion models for generating high-quality samples. Unlike diffusion models, which generate samples gradually through a large number of denoising steps, consistency models aim to convert noise directly into noise-free samples in a single step. This difference is visualized by paths in the diagram: the blue line represents the gradual sampling process of a diffusion model, while the red curve illustrates the more direct, accelerated sampling of a consistency model. Using techniques like consistency training or consistency distillation 1,2, consistency models can be trained to generate high-quality samples with significantly fewer steps, making them appealing for practical applications that require fast generation.
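To illustrate the difference, the sketch below shows one- and two-step consistency sampling. It assumes a trained consistency function `f(x_t, t)` that maps a noisy input at time t directly to a clean sample; the function name, the EDM-style noise scaling, and the time values are illustrative assumptions, not the exact sCM parameterization.

```python
import torch

def sample(f, shape, t_max=80.0, t_mid=0.8, two_step=True):
    # One step: map pure noise at the largest time t_max straight to data.
    x = torch.randn(shape) * t_max  # x_T ~ N(0, t_max^2 I)
    x0 = f(x, t_max)                # consistency function -> clean sample
    if not two_step:
        return x0
    # Two steps: re-noise the first estimate to an intermediate time t_mid,
    # then apply the consistency function once more to refine details.
    x_mid = x0 + torch.randn(shape) * t_mid
    return f(x_mid, t_mid)
```

The second step trades a small amount of extra compute for quality: re-noising to a modest intermediate time and denoising again lets the model correct residual errors from the one-step estimate.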
Our sCM distills knowledge from a pre-trained diffusion model. A key finding is that sCMs improve proportionally with the teacher diffusion model as both scale up. Specifically, the relative difference in sample quality, measured by the ratio of FID scores, remains consistent across several orders of magnitude in model sizes, causing the absolute difference in sample quality to diminish at scale. Additionally, increasing the sampling steps for sCMs further reduces the quality gap. Notably, two-step samples from sCMs are already comparable (with less than a 10% relative difference in FID scores) to samples from the teacher diffusion model, which requires hundreds of steps to generate.
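As a rough illustration of the distillation idea, here is a single training step of discrete-time consistency distillation in the spirit of our earlier work 1,2. sCM's actual objective is a continuous-time variant of this, so treat the sketch as illustrative only; `teacher_ode_step`, `ema_student`, and the plain MSE loss are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def distill_step(student, ema_student, teacher_ode_step, x0, t, t_prev):
    # Diffuse clean data x0 to time t (EDM-style: x_t = x0 + t * noise).
    noise = torch.randn_like(x0)
    x_t = x0 + t * noise
    with torch.no_grad():
        # One teacher ODE-solver step from t back to the adjacent time t_prev.
        x_prev = teacher_ode_step(x_t, t, t_prev)
        # Stop-gradient target from an EMA copy of the student.
        target = ema_student(x_prev, t_prev)
    # Consistency loss: student outputs along the same ODE trajectory
    # should agree. (Practical implementations often use a perceptual or
    # pseudo-Huber metric rather than plain MSE.)
    return F.mse_loss(student(x_t, t), target)
```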
References
OpenAI. “Video generation models as world simulators.” 2024. https://openai.com/index/video-generation-models-as-world-simulators
Albergo, Michael S., Nicholas M. Boffi, and Eric Vanden-Eijnden. “Stochastic interpolants: A unifying framework for flows and diffusions.” arXiv preprint arXiv:2303.08797 (2023).
Albergo, Michael S., and Eric Vanden-Eijnden. “Building normalizing flows with stochastic interpolants.” arXiv preprint arXiv:2209.15571 (2022).
Deng, Jia, et al. “ImageNet: A large-scale hierarchical image database.” 2009 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2009.
Dhariwal, Prafulla, and Alexander Nichol. “Diffusion models beat GANs on image synthesis.” Advances in Neural Information Processing Systems 34 (2021): 8780-8794.
Geng, Zhengyang, et al. “Consistency models made easy.” arXiv preprint arXiv:2406.14548 (2024).
Heek, Jonathan, Emiel Hoogeboom, and Tim Salimans. “Multistep consistency models.” arXiv preprint arXiv:2403.06807 (2024).
Ho, Jonathan, and Tim Salimans. “Classifier-free diffusion guidance.” arXiv preprint arXiv:2207.12598 (2022).
Ho, Jonathan, Ajay Jain, and Pieter Abbeel. “Denoising diffusion probabilistic models.” Advances in Neural Information Processing Systems 33 (2020): 6840-6851.
Hoogeboom, Emiel, Jonathan Heek, and Tim Salimans. “Simple diffusion: End-to-end diffusion for high resolution images.” International Conference on Machine Learning. PMLR, 2023.
Karras, Tero, et al. “Elucidating the design space of diffusion-based generative models.” Advances in Neural Information Processing Systems 35 (2022): 26565-26577.
Karras, Tero, et al. “Analyzing and improving the training dynamics of diffusion models.” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
Lipman, Yaron, et al. “Flow matching for generative modeling.” arXiv preprint arXiv:2210.02747 (2022).
Liu, Xingchao, Chengyue Gong, and Qiang Liu. “Flow straight and fast: Learning to generate and transfer data with rectified flow.” arXiv preprint arXiv:2209.03003 (2022).
Lu, Cheng, et al. “DPM-Solver: A fast ODE solver for diffusion probabilistic model sampling in around 10 steps.” Advances in Neural Information Processing Systems 35 (2022): 5775-5787.
Lu, Cheng, et al. “DPM-Solver++: Fast solver for guided sampling of diffusion probabilistic models.” arXiv preprint arXiv:2211.01095 (2022).
Meng, Chenlin, et al. “On distillation of guided diffusion models.” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
Rombach, Robin, et al. “High-resolution image synthesis with latent diffusion models.” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
Song, Yang, and Prafulla Dhariwal. “Improved techniques for training consistency models.” arXiv preprint arXiv:2310.14189 (2023).
Song, Yang, and Stefano Ermon. “Generative modeling by estimating gradients of the data distribution.” Advances in Neural Information Processing Systems 32 (2019).
Song, Yang, et al. “Score-based generative modeling through stochastic differential equations.” arXiv preprint arXiv:2011.13456 (2020).
Song, Yang, et al. “Consistency models.” arXiv preprint arXiv:2303.01469 (2023).
Wang, Zhengyi, et al. “ProlificDreamer: High-fidelity and diverse text-to-3D generation with variational score distillation.” Advances in Neural Information Processing Systems 36 (2024).