Websites should not use dithered images
(simplethread.com)

The idea that dithering should be used to reduce your image size is a misunderstanding of image compression.
Dithering is a technique which allows you to represent a color image with a very limited palette, in particular a 1-bit (two-color) palette. The result is not much like a normal photo, but much better than nothing if you have a machine that can only output a few colors. (You could think of that as a sort of compression where the goal is not to reduce file size, but to reduce the number of colors needed to comprehend the image.)
JPG, WebP etc. are compression techniques designed to reduce the size of full-color images, especially photos. Because they’re specifically designed for photos, they don’t work as well on things that aren’t “photo-like.”
Dithered images are very much not like real photos, so it’s not surprising that compression techniques designed for photos don’t work well on dithered images.
(I’m not an expert on image compression, but as an example, I believe JPEG and similar algorithms expect to find large blocks of basically the same color in photos - such as a blue sky - and save space by simplifying that to a few big regions of all one color. The “speckled” appearance caused by dithering actively defeats that particular optimization.)
> Dithering is a technique which allows you to represent a color image with a very limited palette, in particular a 1-bit (two-color) palette.
To be way more pedantic, dithering is a technique to reduce quantization error (what happens when you map values from a big, possibly infinite set to a smaller, finite set). This quantization happens whenever a system or algorithm converts data from a higher dynamic range representation (more bits per quantum of signal, like a pixel or audio sample) to a lower dynamic range representation; that conversion is called a bit-depth reduction.
And like you alluded to, every compression algorithm that might find this worthwhile will do it internally. The benefits are great, though, since lowering the bit depth without dithering has pretty awful results on quality.
Bit-depth reduction is used in practice in a few places; I'm not well versed in image compression, but you do see it in telephony.
> This is done whenever a system or algorithm converts data from higher dynamic range representation (more bits per quantum of signal, like a pixel or audio sample) to a lower dynamic range representation, it's called a bit-depth reduction.
You are mixing two things here? Bit-depth reduction is a specific term: it refers to reducing the amount of information you want to _convey_.
Compression is reducing the number of bits per pixel averaged across the whole image. Good compression algorithms will not be spatially uniform; it's entirely possible that pixels in some parts of the image are compressed with more bits per pixel than pixels in other parts.
I took care not to conflate the two and to isolate the terms outside the context of compression, and image processing at large.
Dithering can be applied to any paletted image; in the earlier days of the net, restricted color palettes could indeed help reduce size. Dithering, however, can interfere with the compression used in certain formats (e.g. LZW in GIF, or the run-length encoding (RLE) of the ancient PCX).
As I understand it, JPEG compression does not play nice with dithering, as it is based on a matrix of discrete cosine transformations. Smooth transitions from one color to the next are much easier to compress this way than highly detailed features (e.g. a series of small dots due to dithering). For example, if you blur out parts of a photo, you will likely get a smaller image at the same compression level as the original. In other words, dithering basically creates a much harder image for the JPEG algorithm to compress.
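A toy sketch of that effect in Python (numpy/scipy assumed; a checkerboard threshold stands in for a real dither). The smooth block's energy collapses into a handful of DCT coefficients, while the dithered version spreads it across many, which is exactly what JPEG's quantization step can't discard cheaply:

    import numpy as np
    from scipy.fft import dctn

    # a smooth horizontal gradient block, like a patch of sky
    smooth = np.tile(np.linspace(100, 120, 8), (8, 1))

    # a crude 1-bit "dithered" rendering of the same block:
    # threshold against an alternating checkerboard pattern
    checker = (np.indices((8, 8)).sum(axis=0) % 2) * 20 + 100
    dithered = np.where(smooth > checker, 120.0, 100.0)

    for name, block in (("smooth", smooth), ("dithered", dithered)):
        coeffs = dctn(block, norm="ortho")
        kept = np.sum(np.abs(coeffs) > 1.0)  # coefficients too large to quantize away
        print(name, kept, "of 64 significant DCT coefficients")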
Dithering would, in theory, work well for compression if you use a palette. GIF always does that, PNG also has a palette mode.
JPEG, WebP, and AVIF use various frequency-domain transforms. These work best for smooth color transitions, like in photos. They're generally terrible for sharp edges (as found on screenshots), and especially so when many neighboring pixels have drastically different colors.
> Dithering would, in theory, work well for compression if you use a palette. GIF always does that, PNG also has a palette mode.
Even then that's a bit more complicated, because both formats will apply LZ compression (LZW and LZ77-via-DEFLATE, respectively), so depending on the original source the dithering can work against the compression.
Dithering is a way of mitigating the effects of quantization. Quantization is in fact an effective form of compression, as it reduces bits per pixel. Modern compression techniques for photographic images, however, rely on the coherence of the image (similarity between adjacent pixels) and dithering tends to work against that.
In a purely mathematical sense, though, quantization is very much a form of compression. It's a way of reducing the amount of information in the image. https://twitter.com/gabrielpeyre/status/1326776195107713026?...
Dithering was also a good option for CRT screens, as they tended to blend the dithering (more or less, depending on the video signal quality). It helped to simulate transparency or new colors, as pointed out.
Exactly. Dithering is not a compression scheme; it's an encoding scheme. It's particularly useful in systems where you have limited sample dynamic range but can sample much more frequently than the Nyquist criterion requires. It trades sample range for sample frequency while conveying the same information.
Audio CD players use Sigma-Delta modulation (also called "1-bit D/A converters") which is essentially just dithering in one dimension. But CDs don't contain fewer bits because of this.
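A toy illustration of that one-dimensional trade (just a sketch of a first-order modulator, not a faithful model of a real 1-bit DAC): the output is only ±1, but the density of +1 bits over time encodes the input level.

    def sigma_delta_1bit(samples):
        """First-order sigma-delta: floats in [-1, 1] in, a stream of +/-1 bits out."""
        integrator, bits = 0.0, []
        for s in samples:
            integrator += s - (bits[-1] if bits else 0.0)  # accumulate error vs. last output
            bits.append(1.0 if integrator >= 0 else -1.0)  # 1-bit quantizer
        return bits

    out = sigma_delta_1bit([0.5] * 16)
    print(sum(out) / len(out))  # close to 0.5: bit density recovers the input level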
Yeah, I thought about this as soon as I saw this article compare dithered JPEGs to non-dithered JPEGs. Dithering is going to amplify the high frequencies in your images, and JPEG is not the kind of compression dithering is suited to.
You might get better results with run-length encoding on a dithered image. I’m not aware of any modern image formats that use RLE, though.
I've experimented with that. It only works if you pick a form of ordered dithering, like a Bayer matrix or Interleaved Gradient Noise[0][1] (sketched after the links below), which looks pretty terrible compared to error-diffusion dithering approaches like Sierra3. And even if you use ordered dithering, it only somewhat helps with compression.
[0]https://bartwronski.com/2016/10/30/dithering-part-three-real...
[1] https://www.iryoku.com/next-generation-post-processing-in-ca...
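For reference, ordered dithering is just thresholding against a tiled matrix; a minimal 1-bit sketch using the standard 4x4 Bayer matrix (numpy assumed):

    import numpy as np

    # standard 4x4 Bayer threshold matrix, scaled to 0..1
    BAYER4 = np.array([[ 0,  8,  2, 10],
                       [12,  4, 14,  6],
                       [ 3, 11,  1,  9],
                       [15,  7, 13,  5]]) / 16.0

    def ordered_dither_1bit(gray):
        """Threshold an 8-bit grayscale array against the tiled Bayer matrix."""
        h, w = gray.shape
        thresholds = np.tile(BAYER4, (h // 4 + 1, w // 4 + 1))[:h, :w]
        return np.where(gray / 255.0 > thresholds, 255, 0).astype(np.uint8)

Because the threshold pattern repeats every 4 pixels, flat regions come out as repeating tiles, which is the kind of repetition LZW/RLE can exploit; error diffusion produces no such periodicity.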
A dithered image tends to produce very short runs of pixels so is likely to get poor compression with RLE.
Ordered dithering with a compression algorithm that can compress "words" (repeated patterns) (like the LZW in GIF) will perform better if the image has large areas of flat colour (not photos).
PNG uses deflate, which can encode runs of repeating values very efficiently. A run of repeating XYZXYZX… will get encoded as
Literal X, literal Y, literal Z, copy distance=3 length=N
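You can watch this happen with Python's zlib (which implements DEFLATE): a repeating pattern collapses to almost nothing, while dither-like noise barely compresses.

    import random
    import zlib

    pattern = b"XYZ" * 1000  # 3000 bytes of a repeating 3-byte pattern

    random.seed(0)
    noise = bytes(random.randrange(2) for _ in range(3000))  # 1-bit "dither" noise

    print(len(zlib.compress(pattern)))  # tiny: a few literals plus long back-references
    print(len(zlib.compress(noise)))    # far larger: LZ finds few useful matches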
Was going to say this, but a tiny correction: it's the LZ variant that does this, as will most of them. Deflate is just a two-pass compression: LZSS followed by Huffman. The LZ variant picks out repeated patterns, while the second pass compresses the byte representation of the resulting stream if a particular set of values is over-represented. AKA, if it's all variations of X, Y, Z in different orders, those might get assigned 2-bit codes rather than the 8 bits they normally might take up.
(So this is a bit of a rant I have about people calling deflate a compression algorithm; it's an algorithm composed of two other fundamental ones.)
What a weird take. Have you written a deflate/inflate implementation? Deflate really isn't just LZSS (I assume you mean something like Haruhiko Okumura's LZSS?) followed by Huffman; it's a very intertwined and sophisticated combination of LZ77 and Huffman. How the two work together is integral to why Deflate works as well as it does.
The optimal parse here isn't always to pick the greediest match from the LZ77 perspective and then "run it through Huffman", you have to know the Huffman cost model when picking your LZ77 matches.
I wrote a zip decompressor many years ago; the compression side wasn't really much of a target, because the focus was on compressing with a more speed-focused algorithm. At the time it was a pretty distinct portion of my decompression pass. I didn't know they were picking matches based on the compressed size vs. just the longest match, but I guess it makes sense. Still, I don't see why that "intertwines" it any more than any other adjustment one makes to how one finds matches (which is, AFAIK, generally the largest change across all the LZ variations).
Edit: Just as a note, actually doing it as two distinct passes rather than at the same time would be silly, since that's going to significantly slow it down. So just because it's doing the entire thing as a single "pass" doesn't count, IMHO.
I’ve written a deflate decompressor and I’m not sure what correction you are trying to make. It seems like you’re replying to this comment to show off some details you know about deflate.
Maybe just encode it as a bitmap, and then use other types of compression like RLE. Dithering reduces the color palette, so in combination with a bitmap it would reduce the memory footprint of each pixel.
The opposite is likely. RLE works best with long "runs" of the same value. Dithering will tend to break up these regions if it improves quantization error.
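A toy run-length encoder (generic, not any specific format's RLE) makes this concrete:

    def rle(data):
        """Encode a string as (run_length, value) pairs."""
        runs, i = [], 0
        while i < len(data):
            j = i
            while j < len(data) and data[j] == data[i]:
                j += 1
            runs.append((j - i, data[i]))
            i = j
        return runs

    print(len(rle("AAAAAAAABBBBBBBB")))  # 2 runs: compresses well
    print(len(rle("ABABABABABABABAB")))  # 16 runs: dither-like alternation defeats RLE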
PCX was RLE. I remember playing with PCX files in C++ back in the day.
JPEG removes fine detail by transforming 8x8 pixel squares into the frequency domain using a 2D discrete cosine transform and quantizing the coefficients. It also optionally subsamples the chroma. The other things JPEG does are reversible.
nathaniel here, writer of the article this article is referencing.
If it's okay with you I'll update the original with a link to this at the top. It's sort of hilarious that I made a whole app based on a premise that is basically just wrong.
I wish I could say I've learned my lesson, but I'll probably continue to make enjoyable mistakes like this for the rest of my life.
Good and gracious response. Well done! If it helps, you are far from alone in making "enjoyable" mistakes. The important thing is to continue learning. Here is one of my favorite quotes on the subject:
“The best thing for being sad," replied Merlin, beginning to puff and blow, "is to learn something. That's the only thing that never fails. You may grow old and trembling in your anatomies, you may lie awake at night listening to the disorder of your veins, you may miss your only love, you may see the world about you devastated by evil lunatics, or know your honour trampled in the sewers of baser minds. There is only one thing for it then — to learn. Learn why the world wags and what wags it. That is the only thing which the mind can never exhaust, never alienate, never be tortured by, never fear or distrust, and never dream of regretting. Learning is the only thing for you. Look what a lot of things there are to learn.”
T.H. White, The Once and Future King, via Merlin

You're not as wrong as you think.
Dithering allows you to display images with a limited color palette, thus reducing the file size.
However the image formats chosen here don't really benefit from that.
Dithering is particularly effective in bitmap formats that use a palette (GIF, for instance). Just make sure your GIF is actually saved with fewer bits per pixel than your original image.
It's true, however, that the formats that benefit aren't exactly modern, and there would be better ways of saving the same image.
At the end of the day, dithering can still be aesthetically pleasing. There might yet be a use for your app.
> Dithering allows you to display images with a limited color palette, thus reducing the file size.
I think that sentence is misleading and is what caused the initial misconception.
Reducing the color palette is a technique for reducing file size. For example, say you have a grayscale image using 8 bits per pixel (256 colors). Ignoring other compression techniques, you can cut the size of the image in half by reducing it to 4 bits per pixel (16 colors).
The naïve way to reduce bit depth is by discarding the low order bits, since they carry the least information. However, when you do that, the result is banding [1]. Banding is an unsightly artifact.
Dithering exists essentially to deal with banding. It works by (sort of) adding a slight amount of noise to the pixel values to diffuse the transitions between different thresholds.
But anyone with an ounce of information theory will tell you that noise by definition is literally the hardest thing to compress. So when you dither an image, you are throwing away almost all of the gains you got by reducing the bit depth in the first place.
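For the curious, a minimal sketch of that 8-bit-to-16-level reduction as Floyd-Steinberg error diffusion (numpy assumed); the diffused error is exactly the signal-dependent noise that later defeats the entropy coder:

    import numpy as np

    def floyd_steinberg_4bit(img):
        """Dither an 8-bit grayscale array down to 16 gray levels."""
        out = img.astype(np.float64)
        h, w = out.shape
        for y in range(h):
            for x in range(w):
                old = out[y, x]
                new = np.round(old / 17.0) * 17.0  # snap to one of 16 levels (0, 17, ..., 255)
                out[y, x] = new
                err = old - new
                # diffuse the rounding error onto pixels not yet visited
                if x + 1 < w:
                    out[y, x + 1] += err * 7 / 16
                if y + 1 < h:
                    if x > 0:
                        out[y + 1, x - 1] += err * 3 / 16
                    out[y + 1, x] += err * 5 / 16
                    if x + 1 < w:
                        out[y + 1, x + 1] += err * 1 / 16
        return np.clip(out, 0, 255).astype(np.uint8)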
This makes me think that one could store less significant bits of the channels with a lower spatial resolution ...
... Wait, this is what JPEG is, isn’t it? Why aren’t there dithering decoders for it, in this case? Is the block too small?
Interactions like this are why I spend my "social media" time on hacker news. Good on you for building the tool in the first place, and for being willing to admit a mistake.
You have the absolute best possible nickname for this post as well. Why bother doing things based on good premises when bad premises can feel this good?
My life would be better in general if I were more willing to make mistakes in public.
> I wish I could say I've learned my lesson, but I'll probably continue to make enjoyable mistakes like this for the rest of my life.
There's really no better way to learn, in my experience.
This was a fantastic and gracious response. Nicely done!
The app is still cool and useful, just not for its original purpose.
(Interesting choice of username!)
That username is :chef’s kiss: .
FWIW I love the app and have been using it a lot just because.
I think the real value is that it presents a novel idea of considering image compression creatively.
I would give gif a shot to see if you can get some savings applying dithering there
Well done.
Wow I feel old, everyone seems to have forgotten why and how to use these techniques :(
No one was dithering large photos back in the day - that's what JPEGs are for.
The point of dithering was to take advantage of the reduced bit depth of indexed-colour GIFs (and later PNGs).
The Dither-Me-This tool does a lovely job of rendering different dither styles but then misses the point completely by exporting 32-bit RGB+alpha PNGs. 24 bits of RGB plus an 8-bit alpha channel? Such decadence! A 16-colour dithered image only needs a 4-bit palette.
The type of images where this was useful in web design have mostly been made obsolete by the increased capabilities of CSS, and ability to render SVGs etc. e.g. graphical elements like borders and stripes of colour, or company logos. Or text in a specific font! (we didn't have web fonts in those days)
Why not use JPEGs for everything?
Two reasons: One is that heavily compressed JPEGs can make crisp straight edges blurry or fuzzy, and colours can get a bit desaturated. Bad for logos.
The other is that GIF had a Run Length Encoding compression which meant that large blocks of flat colour would compress very efficiently.
It's worth noting that RLE does not compress dithering efficiently at all... back in the day we would spend a bunch of manual effort to avoid unnecessary dithering (i.e. make sure blocks of flat colour are really all a single colour). But some dithering was unavoidable and sometimes, for specific types of images, it was possible to make a GIF that was smaller and looked cleaner than a comparable JPEG.
Also... WebP and AVIF compression rate looks great, but can it be used for websites today if not supported by Safari? https://caniuse.com/webp https://caniuse.com/avif
>Wow I feel old, everyone seems to have forgotten why and how to use these techniques :(
That is the reason why web development keeps reinventing the flat tire.
> Also... WebP and AVIF compression rate looks great, but can it be used for websites today if not supported by Safari? https://caniuse.com/webp https://caniuse.com/avif
You load them conditionally based on browser capability or user-agent
Doing this makes a lot of sense for any web setup where you already have something you're calling an "asset pipeline", but it can seem pretty nuts when you're writing simple no-JS HTML.
It's really not bad:

    <picture>
      <source srcset="image.webp" type="image/webp" />
      <img src="image.jpg">
    </picture>

and you only need to bother with this on images where it might matter (e.g. key large multi-MB photos in a blog post, not all of the 20x20 logo icons that would be less than 2kb even if uncompressed).

The HTML isn't bad, sure, but the process of generating appropriately scaled and formatted copies of images can be if you don't have other build steps that require similar automation.
Appropriately scaled is always a problem at which point you hit save/export twice and you've done everything you need.
Unless rescaling an image itself is already too much work, in which case this has nothing to do with image formats or plain HTML sites in the first place.
Nothing about the web for the last decade has been about no-JS HTML.
It's an option, but it doesn't leave you anything to contribute. The web is fragmented; most of the toolchain is for simplifying deployment into that fragmented place without caring that it is fragmented. So it's not really nuts when everything already takes care of it for you. You can stick with no-JS HTML cached on an edge node just as well, or not cached if you expect low traffic; it's whatever.
"everything already takes care of it for you" is fine if you're willing to have a dependency on "everything". It's "whatever" relative to how much the people who load your page care about bandwidth.
GIF uses LZW compression, not RLE, but your point stands that it more efficiently compresses long runs of the same color.
Ah, you're right... I saw another comment mention RLE, which jogged my memory. As you say, it was the same point though, to have large areas of flat colour.
It was BMP that used RLE wasn't it?
Everyone talking about compression is missing the original point. When GIF format came along, most people still had 8-bit displays. Photos look terrible if you reduce them to 256 colors naively. Dithering helped fill the gap in the early days of the web until 16-bit and higher displays became more common.
The point of dithering was simply that old graphics cards (e.g. VGA) only supported 16 or 256 colors.
Use of indexed-colour GIFs persisted long after better displays became commonplace, as an optimisation technique for reducing the file size of graphics for web pages
The dithered example is a bit disingenuous, since it coaxes the picture into a palette that is clearly not suitable, causing an egregious amount of artifacting.
I don't know what's going on with their dithered image sample:
https://3otebq2knmnf3smsj0374a9u-wpengine.netdna-ssl.com/wp-...
This is the same dithered image with a sane 16 color palette:
https://www.marginalia.nu/color-simple_500-better-palette.pn...
Further, human eyes are kind of bad at blue colors, so whatever compression artifacts you get with JPEG or WebP are going to be really hard to detect. (The blue channel is typically compressed much harder by many algorithms because of this.)
Yeah, that ocean example was a cherry-picked strawman set up like a bowling pin.
Isn’t the whole point to switch to a paletted image when you dither? If you are leaving it truecolor it just makes the image worse for no benefit, as the article noted. Switching to a paletted PNG, however, can save a lot of bits in certain circumstances.
That said just pulling out a good lossy encoder makes a lot more sense most of the time. It is easier and it will look better. Dithering is lossy anyway.
Yes, you definitely want a palettized output format. But, even so, dithering makes that harder to compress. You're essentially adding noise, which confounds data compression.
Right, it sounds to me like people are confusing dithering with compression. Back in the day we used dithering to represent high-color-depth images on lower-color-depth devices... it sounds similar to compression but it is not.
It's like confounding steganography with cryptography. In a way, both are used to "hide" a message, but they are completely different beasts.
Yes, exactly. Indexed PNG makes for really small files, especially when you don't use many colours (for a 256-colour index the saving is not really there, but if you are at like <8, it's worth it - if the image fits, style-wise, of course).
Yes, just like audio software can dither when reducing the bit depth (say, from 24 bits to 16 bits per sample). It makes quantization less perceptible, by decoupling the quantization noise from the signal, which sounds less crunchy (audio) or has less posterization (images).
I would also be concerned with how these things scale in terms of the display.
For instance somebody might have a HiDPI or Retina screen or they might be zoomed in or out on a particular web page. Or for that matter maybe you want to scale the size of the image so it fills the screen horizontally or vertically.
The scaling algorithm might maintain the dither or it might smooth and blend it. Maybe it looks OK in the end but I wouldn't take for granted what happens.
When I zoom in and out on that page, some of the images, like the Greek guy, do OK consistently, but the dither takes on an unpleasant structure at certain sizes of the ocean image.
Not to mention the fact that dithering will look different depending on the display’s gamma calibration...
One thing I like about my Nintendo 3DS is that the display is consistent and artists can tune art up for it.
For some game series (say Hyperdimension Neptunia) fan art captures the essence of art in the game but for the big Nintendo games on the 3DS (say Fire Emblem Fates) fan art doesn't look like the game art at all because the game art is calibrated to the characteristics of the display in every way.
Yes, you will almost certainly get weird or ugly moiré effects if dithered images are not rendered 1:1.
Lesson learned: test your optimizations.
I will admit I'm surprised by the results. I assume there's not really a rendering perf hit from WebP vs jpg?
Also, the proposition that lowering the file size, and therefore transfer time, is the most important factor in environmental impact is, I think, a little under-supported.
That being said, the original Low Tech Magazine article's perf claim is backed up by the data, but they also use very low resolution images.
I'd be curious if dithering could be optimized for a particular algorithm. For example, JPEG's quantization is based on the assumption that images are mostly made of low-frequency data and higher frequencies can be removed without changing the quality of the overall image too much. With dithering, this is almost the exact opposite: all low-frequency information is replaced with high-frequency information, meaning it won't be nearly as effective.
Low Tech Magazine Article on the whole site...which mentions dithering: https://www.lowtechmagazine.com/2018/09/how-to-build-a-lowte...
"""Compressed through this dithering plugin, images featured in the articles add much less load to the content: compared to the old website, the images are roughly ten times less resource-intensive."""
There's an assertion which implies data, but no data.
But accepting that there was SOME comparison to specifically their "old" image method, the takeaway could be "re-evaluate your optimizations"
Another take with low-tech mag is an aim of describing and utilizing old techniques that did the job just fine (a sentiment I take from a lot of their articles). So, using the latest compression algorithm literally doesn't tick as many of their boxes as dithering.
I don't have the numbers handy anymore, but I did some testing on that assertion and their dithered pngs look absolutely worse than a JPG or WebP of the same filesize, as you'd expect.
the "data" comes in the form of a before and after perf-test:
before: https://krisdedecker.typepad.com/.a/6a00e0099229e88833022ad3...
after: https://krisdedecker.typepad.com/.a/6a00e0099229e88833022ad3...
There are at least two problems with this article in how its argument is constructed.
First, it’s a response to two other articles and does not refer to the images in those articles or the processing techniques used on them, instead grabbing four other images and transforming them, perhaps in the same way as the original article, perhaps not. From this it draws broad conclusions. As the joke goes, at least one side of the sheep appears black from here.
The second is that there is a source of truth for these claims, and it’s in the algorithms and file formats in question. A JPEG image is generated and compressed a certain way, a PNG is encoded in a certain way. There is an actual answer to the question of whether or not dithering saves space and under what circumstances, and it has to do with how the images are encoded and compressed. If one does not want to bother learning enough about the algorithms in question, at the least one could approximate that knowledge by processing a statistically significant number of images and evaluating the results to get some kind of actual data on when and where the technique generates larger or smaller file sizes.
Instead, we’ve now got three articles, two of which say “this works” and one of which says “no it doesn’t” with all the rigor of 18th century naturalists puzzling over the behavior of birds.
Wow, I don't remember reading such a bad article in a very long time. Very disingenuous take.
1) author starts with lossy format at the beginning of the comparison
2) author uses squoosh app for some of his conversions, but not others, even though it supports dithering too - instead uses a random web tool which doesn't care about file size at all
3) not even a mention about image formats supporting limited color palettes
4) no mention of disadvantages of webp and avif (anyone still supports IE 11?)
5) more things like a dithered "lossless" webp made from a lossy jpg; from the same image you can see that the author used a much bigger color palette than the one used in the Low-tech Magazine images
Funny thing is that, aside from browser support, modern formats would probably still win even without manipulating the numbers (they are made for this), but I guess the author wanted a really convincing victory.
> Very disingenuous take...instead uses a random web tool
You missed that this is a rebuttal to an article that suggests using dithering and to use that specific tool. Hardly "random".
> anyone still supports IE 11?
Fair enough for now, but MS itself is in the process of dropping support for IE 11, so I don't expect others to carry on without them very widely. It will be all retro sites and corporate sites soon (LOL, for completely different reasons -- one wants to visit the past from time-to-time, and the other doesn't know how to escape it)
Ok, not quite random, but still hardly fair. If that's the take, it should be called "Websites should not use images created by this dithering tool" instead.
Usage of IE 11 and other browsers which do not support even webp (old safaris) is higher than usage of screen readers.
Personally I doubt we'll ever experience widespread usage of avif, considering how long it took Apple with webp, we'll probably sooner have jpeg xl.
I think the main premise of the article is "Dithering is not Compression". People should use compression algorithms to compress their images. They shouldn't be using "color depth decrease" algorithms to compress their images.
With that complex example, I can get a B&W 4-color dithered PNG down to 91K and an 8-color one down to 131K. The Color Simple one goes down to 142K with a 16-color dithered palette. I'm confused why you'd even compare the dithered JPGs, because that's self-contradictory, or why your dithered PNGs ended up so big. Did you forget to turn it into an indexed PNG?
If you're dithering, you want a lossless format that can use indexed colors. WebP also does better at this if you keep it in lossless mode. My last 16-color simple example goes down to 128K when converted to a lossless indexed WebP.
I don't think this article does sufficient legwork to come to the conclusion it does. I agree that dithering isn't a be-all and end-all compression technique, but it's not as useless as the article makes it seem.
This seems to miss the point. Of course these dithered images don't result in a saving in filesize when offered up as jpegs.
This feels like a bit of a strawman argument.
Some people here weren't web developers in the 90s and it shows :)
As has been said, dithering is something that people did back in the bad old days of 256 (or 16!) color palettized displays. It is a way to "fake" more colors than are available. It was never meant for image compression.
I don't even remember the last time I saw a dithered image anywhere on the internet.... it's been quite some time.
That's not quite right...
Dithering was used to make GIFs smaller, since the reduced palette could fit into a smaller bit depth per pixel. So in that sense it is a type of compression. This was useful and used even when 24-bit colour displays became common.
The issue of 256 colour palette display modes is separate and caused other problems - i.e. you might specify one palette of 16 colours in your GIF, but they might not exactly match the colours provided by the operating system.
That is where the (I imagine now long forgotten...) notion of a "web safe" colour palette came from - these were the 216 colours you could expect any 256 colour OS to provide (produced by dividing up the colour space evenly using only combinations of 00 33 66 99 CC FF values for R, G, B)
You could use colours outside the safe palette in your GIF but the OS would use the nearest available colour from its own palette to display them, and often it did a bad/unpredictable job of choosing a substitute. So it was common to deliberately adjust colours in your image to align with the "web safe" palette, so as not to risk garish substitutions when displayed on a 256 colour display.
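That palette is trivial to reconstruct, e.g. in Python:

    # the 216 "web safe" colours: every combination of 00/33/66/99/CC/FF per channel
    steps = (0x00, 0x33, 0x66, 0x99, 0xCC, 0xFF)
    web_safe = [f"#{r:02X}{g:02X}{b:02X}" for r in steps for g in steps for b in steps]
    print(len(web_safe))  # 216
    print(web_safe[:3])   # ['#000000', '#000033', '#000066']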
>It was never meant for image compression.
It was image compression. You "reduce" the file size of the images by using an algorithmic approach to reduce the color palette, while preserving as much of the original image quality as possible.
Dithered images are traumatizing, and remind me of the bad old days of low-res displays on dial-up internet.
OK, not traumatizing, but I don't think they look good, particularly when the rationale for using them is bandwidth reduction.
Lots of posts here say that dithering is something we did in the "good old days". Dithering is still very much alive, just not as needed on general web sites. But it's critical when you need to squeeze animation sprites for games or multimedia projects. These sprites typically need transparency, and precise control over which frame is displayed, so standard video codecs can't be used.
So 8-bit palettized PNG sprite sheets are the most convenient way to do it in the browser, where it's not efficient to manually unpack some custom format.
I highly recommend the pngquant tool, with its adaptive dithering algorithm, for compressing sprite sheets. One of its main features is that it uses partial-transparency colors in the palette, while the few other tools I tried just support on/off transparency, which is far from great, especially for antialiased semitransparent edges. pngquant handles these cases beautifully.
Previous discussions on the original article that this article is responding to:
Thanks! Macroexpanded:
Dithered images and websites (2020) - https://news.ycombinator.com/item?id=28696014 - Sept 2021 (121 comments)
Curious about what the space usage would be if the dithered image was stored in a format actually suited to dithering, e.g. GIF, rather than the ones here which aren't.
This is my exact takeaway. I can't decide whether this article and many of the commenters are deliberately missing this point or whether it's actually not understood.
I don't get this article. Dithering is generally for limited palettes. JPG, PNG do not have limited palettes. This feels like a strawman.
As the OP says, but maybe not clearly enough, it's a response made to these claims/suggestions by lowtechmagazine.com:
https://www.lowtechmagazine.com/2018/09/how-to-build-a-lowte...
They do use dithering with JPGs and PNGs on their "solar-powered website" variant, which I won't link-to so as not to contribute to draining the battery with an HN effect.
The first image I found to compare from the two versions of their own website... the dithered version is 65K and the original is an incredible 6.2MB... but that's at least in part because the original is 5053x3581 pixels and the dithered is 1213x600! First picture at https://www.lowtechmagazine.com/2021/11/fascine-mattresses-b... is the "non-dithered" variant. There may be other compression mis-choices on the original JPEG as well. To suggest this size difference is due to dithering would be misleading!
Sizing and compressing your images properly (including JPG lossy compression) to save energy resources is good advice; I think the OP is probably right that dithering is not a very useful tool in that toolkit.
It makes me lose some respect for lowtechmagazine, when they go for more style over substance in this particular case, it makes me wonder in others.
I find the original 5053x3581 image to be 5.88 MB (6176 thousand bytes). The dithered 63.3 KB (65 thousand bytes) image is actually natively only 800x567 pixels.
In other words, they get to 1% of the filesize by reducing the number of pixels to 2.5% of the original count. Once you've done that, you can get a better looking image by using Squoosh than by dithering.
https://imgur.com/a/9bGGA4j shows a comparison of the original, my squooshed version resized to the same dimensions they used (at only 41 thousand bytes), and their 65 thousand byte dithered and resized version.
Their version, 160% the filesize of mine, is much worse, obliterating detail of the clouds, for example, and also being ugly. Unless deliberate dithered ugliness is your stylistic choice, you should not be dithering unpaletted filetypes like jpegs. I'm not saying my version is perfect; I would never compress a jpeg so heavily that the block pattern shows (as it does in the upper left clouds), but where that is apparent my competition had simply deleted that information completely.
> There may be other compression mis-choices on the original JPEG as well.
You bet. I also include a highly compressed version of the original image, which comes in at only 394 thousand bytes, not 6176 thousand like their totally unoptimized one, which is a 94% savings all by itself. I chose to compress to the point that the detail of the men on the large barge was kept without apparent loss of quality. Again, this results in visible compression artifacts in low-contrast areas of the image, like the water surface and clouds. From my experimenting, settling for a 1500 thousand byte image results in a dssim score of very nearly zero and would be what I would consider properly optimized, at a 75% savings.
Also for a website that claims to be concerned with reducing energy usage for sustainability... why the heck do they have a 5053x3581 ~6MB JPG even on their "standard, not solar-powered" website, amirite?!?
If you have a blog, you should consider writing a post titled A High-Class Rebuttal to Low-Tech Magazine
PNG can have a limited palette. Back in the day, saving an image as an 8-bit PNG was a nice trick to save on size when one needed transparency and/or shadows.
The article is a rebuttal to this article (https://endtimes.dev/why-you-should-dither-images/), which in fact used photographic images, so definitely not a straw-man article in this case.
Before commenting with emotionally-charged comments, see if your comment makes sense in the context of the article and not just the title. Commenting without knowing the context makes you look (deservingly) stupid.
Sometimes dithering is the right option depending on your use case. If it’s purely getting the smallest size, for example, go into Adobe Photoshop “save for web” and mess around with the various algorithms, formats, and dithering settings. Photoshop has a live preview with live updating file size. I’ve always found limited palette pngs and gifs with dithering tended to be the smallest. It also depends on the contents of your image and all of that. And there are many cases for complex images where a jpeg will be able to represent the image at a smaller size with greater clarity.
Before you try to save load time by dithering JPEGs, do something about the 10MB JavaScript monstrosity that you're serving, which is probably causing the issue in the first place.
Don't forget dithering your spacer.gif files! very important!
This is what I think of when the dithering articles show up. Way back in the day, dithering and reducing the color palette made a difference, when you were talking images loading over a 28.8k modem. For many widgets, you could shave kilobytes off, when kilobytes really, really meant something.
Well tbh they still do today. Every byte you can take off (very easily at that...) makes websites:

- faster for the user, which does contribute to a generally reduced level of stress :-)
- use less energy: less bandwidth/less CPU (cisco, nginx, your disk/memory cache... you name it), less $$ on your bandwidth/CPU bill @[insert cloud provider here], and less BW/CPU used up on your user's mobile metered connection (because nowadays it's mobile, dontcha know)

So even though in many instances over-optimisation is definitely overkill... in nearly all cases a sane amount of optimisation is good.
my 2p
Sort of. I suspect an under-appreciated part of the modern Web performing so poorly is how casually we throw in vector graphics (SVG, mostly) and draw/transform shit with CSS, these days. Those used to be things one simply did not do without an excellent reason, because they're computationally costly for the client. In maybe a couple years' span they went from "wait, don't do that, it's bad for your user" to "LOL SVG icons are so convenient, let's use them everywhere, and then maybe skew and manipulate them with CSS at runtime because why not".
[EDIT] To clarify, this is relevant to cutting bytes because encoding graphics with SVG or drawing them with CSS can mean shipping fewer bytes than JPGs or whatever.
Dither when it makes sense, but using dither on photographic images... defeats the purpose of JPEG and its descendants. Photographic lossy compression is designed to work on images that are as free of dithering as possible.
They do matter, but not as much as when websites loaded at 0.25kB/s.
yep :)
I felt like the real point of Nathaniel's dithering article was pointed out early:
> reducing file sizes in a stylized way.
I don't think we ever stop and actually consider what creative avenues are open to us in that regard.
It's sort of like how there are technically better methods of printing than risograph, but sometimes that's what you really want and it's still fun to mess with.
Thank you, I was thinking of doing a write up on this topic as well after reading the article "How to build a low tech website" which recently floated on HN.
However, I would have shown how you could significantly reduce file size simply by using optimized JPEG settings, as JPEGs are the most widely supported and can achieve good compression rates too.
If you're at the point of "should I use dithering" to optimize your website, congratulations: you're in the 99th percentile of optimized websites.
You're already an SSG PWA with modular JS and a stylesheet devoid of unused selectors.
You've got almost all the unlocks. How much longer do you really want to be playing this game?
Based on the comments (and not reading the article), everyone seems to have missed the more important point.
Images on a webpage will be scaled to the devicePixelRatio. Common devices have non-integer devicePixelRatios, so dithered images are going to fail.
My main takeaway was the shocking efficiency of AVIF. Just ~20% the size of JPEG!
I was sure this was an article from 15 years ago until I saw WebP mentioned.
Why would anyone even consider color dithering these days when it comes to the Web? It feels counterproductive even intuitively.
Playing around with the Squoosh tool mentioned in the article, it's easy to get a <10KiB image that looks way better than the WebP shown at the bottom of the page.
It's 2021. Who is still using dithering? How did this article even make it to HN?
I recall dithering being heavily used in the days of very finite color palettes - BMP, GIF16 etc..
Yeah, intuitively I don't know why you would expect dithering to help, it adds a lot of high-frequency content.
That 6 kB WebP looks awful. This seems a bit apples-to-oranges.
To make the argument that WebP is better than dithering, the author should compare the 30 kB dithered image to a 30 kB WebP. Or even compare to a smaller WebP that lacks obvious compression artifacts.
In the comparison as given, I would not say that the WebP is a better image.
You really think it's more awful than the dithered image, as a representation of the original?
Dithering has its own aesthetic, and if that's what you want, then by all means, dither away.
But to say that dithered image is a better representation of the original than that webp seems way, way off base to me.
I think you might have mixed up the different images at the end of the article. The image labels are a bit confusing.
Curious if this is a commentary apropos of a popular low-tech website we see around HN. I appreciate the dithering pattern, though I recognize it is not as efficient as, say, a compressed JPEG or a palette-limited GIF.
If you read the article, it is very clear and direct about being a commentary on Low Tech Magazine
I did read the article, however the way I should have phrased it was that I was curious if the conversation the week prior from HN spawned this article. Always assume the most charitable interpretation before responding.
Shouldn't dithered images also benefit from the limited palette?
Exactly, sending them as true color makes no sense
I grabbed one of the dithered images off the article and saved it as an RGB PNG and as an indexed PNG with 255 colors.
rgb: 75kb
indexed: 35kb
So indexing does look like it could save a lot. I do wonder if it can still beat out say jpeg or webp though.
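For anyone who wants to repeat the experiment, a rough Pillow sketch (file names made up):

    import os
    from PIL import Image

    img = Image.open("dithered.png").convert("RGB")
    img.save("rgb.png")  # 24-bit truecolor PNG

    # 8-bit palette PNG with an adaptive 255-colour palette
    img.convert("P", palette=Image.ADAPTIVE, colors=255).save("indexed.png")

    print(os.path.getsize("rgb.png"), os.path.getsize("indexed.png"))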
I independently came to the same result back in August, though I provide the commands used; with feedback from someone over at lobste.rs, I found dithering can be effective, it just isn't usually worth it.
TLDR: There are much better compression algorithms than dithering
Dithering isn't a compression algorithm[1]. It's an algorithm that adds noise to effectively increase the apparent bit depth when reducing the actual one.
In the case of images, it makes color-reduced versions not look terrible.
Adding noise is almost always a bad thing for compression. The undithered images would compress much better.
It is when combined with quantisation, and the article specifically refers to it as such.
If added noise is the aesthetic effect you are going for, it's probably better to ship a low quality (<50) JPEG of the original image and add the "dithering" in the client by overlaying noise there.
You can't add the noise after quantization; it needs to be done at the same time as quantization, because otherwise you have lost the information needed to intelligently feather the edges.
Think of it this way. You start out with a series of numbers between 0 and 100. Your job is to represent this series as best as possible within a range of just 0 to 10. Without dither, you would just round each original number to its closest multiple of 10; all your 31's become 3 and all your 34's become 3. With dither, nearly all of your 31's become 3 and many of your 34's become 3 but nearly half of them become 4.
Without dither: 31, 31, 34, 34 becomes 3 3 3 3.
With dither: 31, 31, 34, 34 might become 3 3 3 4 on a typical run.
You absolutely cannot calculate 3 3 3 4 based on 3 3 3 3. You need the original full set of information in order to calculate 3 3 3 4.
Now add on the fact that it's not just random noise that makes this work. Neighboring pixels influence whether to round up or round down. You want that influence to come from original high-depth data, not already-rounded data.
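The rounding example above, sketched in Python (with plain random noise standing in for the neighbour-aware diffusion just described):

    import random
    random.seed(1)

    values = [31, 31, 34, 34]
    plain = [round(v / 10) for v in values]  # -> [3, 3, 3, 3]: the 31-vs-34 distinction is gone
    dithered = [round((v + random.uniform(-5, 5)) / 10) for v in values]
    print(plain, dithered)  # dithering rounds some 34s up to 4, preserving the distinction on average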
Dithering is a fairly expensive operation. Please don't do it client-side each time a resource is fetched.
It's much cheaper than decompressing a JPEG
Compressing, perhaps. But decompressing? Not really.
Floyd-Steinberg dithering is at least an order of magnitude cheaper than JPEG decompression.
Maybe if all else is equal, but let's not forget you'd be doing the dithering in JavaScript, and the JPEG decompression is implemented in a low-level language, in practice probably hand-written assembly (as performance-critical compression code often is).
> Unless you’re going for an aesthetic look
I honestly doubt that a lot of people who go to lengths to unearth Atkinson dithering these days don't aim for that, though.
Most likely, you shouldn't use images in the first place.