Image Optimization: Figuring Out the Exact Impact on Google’s Performance Score
I wanted to figure out the exact impact of image optimizations on Performance Score before implementing them. The question I wanted to answer was: "Does image optimization provide real value? What's the impact on real-world businesses?"
I took the world’s top 500 websites (from Alexa) and ran performance analyses with Lighthouse (a performance analysis tool by Google) and PageDetox (a diagnostic framework by Uploadcare, where I work).
PageDetox fetched each website's images, optimized them, and calculated the differences in image weight and page load time (I used the same throttling settings as Lighthouse and reduced the derived load times by `saved_bytes / speed`).
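To make that adjustment concrete, here is a minimal Python sketch of the calculation. The throughput constant reflects Lighthouse's default simulated throttling (~1.6 Mbps); the function name and inputs are illustrative, not PageDetox's actual API:

```python
THROTTLED_THROUGHPUT_KBPS = 1_638.4  # Lighthouse's default simulated throughput
BYTES_PER_SECOND = THROTTLED_THROUGHPUT_KBPS * 1000 / 8  # Kbps -> bytes/s


def adjusted_load_time(load_time_s: float, saved_bytes: int) -> float:
    """Estimate the page load time after image optimization.

    load_time_s -- measured load time under throttling, in seconds
    saved_bytes -- image bytes removed by optimization
    """
    time_saved_s = saved_bytes / BYTES_PER_SECOND
    return max(load_time_s - time_saved_s, 0.0)


# Example: saving 900 KB on a page that loaded in 8 s under throttling
print(adjusted_load_time(8.0, 900_000))  # -> ~3.6 s
```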
From there, I tried to model the improvement in Performance Score by proportionally decreasing the weighted metrics used in the score calculation: First Contentful Paint (FCP), First Meaningful Paint (FMP), Speed Index (SI), First CPU Idle (CPU), and Time to Interactive (TTI).
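Roughly, the model looks like the sketch below. The weights approximate Lighthouse v5's metric weights, and `score_curves` stands in for Lighthouse's per-metric scoring curves; both are assumptions for illustration, not the exact code I ran:

```python
# Weights approximating Lighthouse v5's metric weighting (assumption).
WEIGHTS = {"FCP": 0.200, "SI": 0.267, "FMP": 0.067, "CPU": 0.133, "TTI": 0.333}


def modeled_score(metrics_ms: dict, load_ratio: float, score_curves: dict) -> int:
    """Re-estimate the Performance Score after image optimization.

    metrics_ms   -- measured metric values in ms, keyed like WEIGHTS
    load_ratio   -- adjusted load time / original load time (0..1)
    score_curves -- per-metric functions mapping a value in ms to a 0..1 score
    """
    total = 0.0
    for name, weight in WEIGHTS.items():
        scaled_value = metrics_ms[name] * load_ratio  # proportional decrease
        total += weight * score_curves[name](scaled_value)
    return round(total * 100)
```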
I understand that there are two weak links in this model:
1) The load-time improvement can easily fall to zero if JS rendering takes far longer than loading even the largest images;
2) Linearly decreasing the Performance Score metrics may not be an exact fit.
But I think that, on average, the data I got is sound. Here is the raw data:
https://docs.google.com/spreadsheets/d/1K1qUIPCm2ZOxwCW52Zu5oLo5p9r8xydEh5h97HF3eyU/edit?usp=sharing
What are your thoughts on this prediction model? Where do you see room for improvement?