The key to this success is the integration of NVIDIA TensorRT, a state-of-the-art, high-performance inference optimization framework. We are proud to host the TensorRT versions of SDXL and to make the open ONNX weights available to SDXL users globally.
We have seen a doubling of performance on NVIDIA H100 GPUs after integrating TensorRT with the converted ONNX model, generating high-definition images in just 1.47 seconds. With further optimizations such as 8-bit precision, we are confident we can collaboratively increase both speed and accessibility.
Next, let’s take a deeper look at the benchmark measuring latency and throughput, comparing the baseline (non-optimized) model against the NVIDIA TensorRT (optimized) model on A10, A100, and H100 GPU accelerators. In latency, the TensorRT-optimized model is 13%, 26%, and 41% faster than the baseline on the A10, A100, and H100, respectively. In throughput, it is 20%, 33%, and 70% higher than the baseline on the A10, A100, and H100, respectively.
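For readers who want to reproduce this kind of comparison on their own hardware, the measurement itself can be sketched in a few lines. This is a minimal, hypothetical harness, not the benchmark actually used above: `pipeline` stands in for any image-generation callable (baseline or TensorRT-optimized), and the warm-up and run counts are illustrative defaults.

```python
import time
import statistics

def benchmark(pipeline, n_warmup=3, n_runs=10):
    """Measure median per-image latency (seconds) and throughput
    (images/second) for an inference callable.

    `pipeline` is a hypothetical stand-in for an SDXL generation call;
    swap in the baseline or the TensorRT-optimized pipeline to compare.
    """
    # Warm-up runs exclude one-time costs (engine build, CUDA context,
    # memory allocation) from the timed measurements.
    for _ in range(n_warmup):
        pipeline()
    latencies = []
    for _ in range(n_runs):
        start = time.perf_counter()
        pipeline()
        latencies.append(time.perf_counter() - start)
    latency = statistics.median(latencies)  # seconds per image
    throughput = 1.0 / latency              # images per second (batch size 1)
    return latency, throughput

def percent_faster(baseline_latency, optimized_latency):
    """Relative latency improvement, as reported in the figures above."""
    return 100.0 * (baseline_latency - optimized_latency) / baseline_latency
```

One design note: using the median rather than the mean makes the latency figure robust to occasional stragglers (GPU clock ramps, background activity), and throughput at batch size 1 is simply the reciprocal of per-image latency; at larger batch sizes the two metrics diverge, which is why they are reported separately.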