CPNG, a backwards compatible fork of PNG

richg42.blogspot.com

237 points by dirkf 2 years ago · 121 comments

solardev 2 years ago

What happens if this actually picks up steam, and suddenly PNG is no longer one format, but a bunch of incompatible ones that look somewhat similar, whose fidelity depends on your renderer?

Early in PNG's history, we already had this issue with alpha channels, progressive rendering, and hit-or-miss support for APNG (animated PNGs, meant to replace GIFs, though that never happened).

It was also an issue for a long time for PSDs and SVGs, where the same file never looked the same on two browsers/devices/apps/versions.

I would bet that these days, generating or decoding PNGs is the bottleneck almost nowhere, but extending the format would cause problems everywhere in real-world usage. Apps and companies can no longer tell whether there's something wrong with their image or if somewhere in the pipeline, some new graphics designer decided to use a bleeding-edge version of a 30-year-old graphics format that nobody else accounted for, and it looks "broken" in half the browsers now. A format can still look broken even if it's "backward compatible", just by virtue of having some features (like HDR) that are only displayable in some renderers but not others.

Why not just make a new format instead and have browsers & devices fall back as necessary, like we already do with webp and srcsets?

  • hinkley 2 years ago

    PNG was always set up for extension. In particular, it has a clever way of allowing ancillary data sections to be marked as unimportant, so a decoder knows whether it can skip them or report the file as unreadable.

    I suspect the big thing these days would be to support brotli and zstd.
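
A minimal sketch of how that marking works in practice (assuming Python and a well-formed file): the case of the first letter of a chunk's type flags it as critical (uppercase) or ancillary (lowercase), so a decoder can skip unknown ancillary chunks and only reject files with unknown critical ones.

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def walk_chunks(path):
    """Yield the chunks a minimal decoder actually needs, skipping unknown ancillary ones."""
    with open(path, "rb") as f:
        if f.read(8) != PNG_SIG:
            raise ValueError("not a PNG")
        while True:
            length, ctype = struct.unpack(">I4s", f.read(8))
            data = f.read(length)
            (crc,) = struct.unpack(">I", f.read(4))
            if crc != zlib.crc32(ctype + data):
                raise ValueError(f"bad CRC in {ctype!r}")
            if ctype in (b"IHDR", b"PLTE", b"IDAT", b"IEND"):
                yield ctype, data              # chunks every decoder must understand
            elif ctype[:1].islower():          # lowercase first letter = ancillary
                pass                           # unknown but safe to skip (tEXt, gAMA, a future extension...)
            else:
                raise ValueError(f"unknown critical chunk {ctype!r}")  # uppercase = must understand
            if ctype == b"IEND":
                break
```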

    • crq-yml 2 years ago

      A problem that often comes up with extensible formats is that whoever comes along and implements them assumes exactly the test cases they came up with, which can often mean "just the files I have for this project" or "just the output of the encoder I have".

      So there will be formats that can reorder the chunks, and those minimum-viable readers will all break when they encounter a file from a different source, because they hardcoded a read order. This leads to an experience on the user's end where they need a "fixer" tool to reorder the file to work around the bad decoder.

      There were tons of old binary formats that were like this. It can happen with text, but it's less likely to, because a larger proportion of textual formats build over a container like XML or JSON to offload the text parsing, and then they end up with some pre-structured data.

      • chriswarbo 2 years ago

        > There were tons of old binary formats that were like this. It can happen with text, but it's less likely to, because a larger proportion of textual formats build over a container like XML or JSON to offload the text parsing, and then they end up with some pre-structured data.

        Note that PNG also "build[s] over a container", since it's a descendant of IFF.

    • solardev 2 years ago

      Many formats have stuff like that (like cover art in MP3 ID3 tags), but usually they're used for, well, ancillary purposes.

      It's dangerous to use this to change the actual primary output of the file (the image), especially in a way that users and editors can't easily detect.

      • sspiff 2 years ago

        I would say at least in the context of extra data to extend the bit depth for HDR, that data could be considered ancillary?

        We've been rendering images in SDR forever, and most people don't have HDR capable hardware or software yet, so I don't know how you could consider it as broken to render the image without the HDR data?

        • Too 2 years ago

          This assumes the image is presented in isolation.

          I’ve seen countless issues where you place a PNG logo on top of a css background:#123456 and expect the colors to match, so the logo blends seamlessly into the whole page.

          On your machine it does and everything looks beautiful. On the customer’s machine with Internet Explorer they don’t, so the logo has an ugly square around it.

          • sspiff 2 years ago

            The difference in experience between seeing a black or white background instead of a transparent one, on the one hand, and missing HDR on the other is pretty big.

            95%+ of humans won't even notice HDR being missing. Everyone with eyes will notice a black or white square.

            • Too 2 years ago

              Nobody said anything about black or white. Try googling for png color problems and you’ll find thousands of questions, in all kinds of tools and browsers. The css color and the png color need to match exactly. Just a slight difference will look off if the two are placed next to each other. The risk of css and png rendering the same hex code differently increases when you put semi-supported hdr extensions in the mix.

              For this particular use case, yes, transparency is more suitable than trying to match.

    • Retr0id 2 years ago

      You could add an "ignorable" zstd-compressed IDAT variant, but that wouldn't give you backwards-compat in any useful way - the zlib-compressed IDAT still has to be present, and unless you fill it full of zeroes or a placeholder image, the overall file is going to be larger.

  • mmastrac 2 years ago

    APNG is extremely well-supported these days: https://caniuse.com/apng

    Fun fact: APNG is better supported than JSON.stringify

  • smcameron 2 years ago

    > I would bet that these days, generating or decoding PNGs is the bottleneck almost nowhere

    The bulk of my game's startup time is spent decoding PNGs via libpng. There are some alternatives to libpng like fpng[1], or alternate image formats like QOI[2]

    These both exist because png is slow and/or complicated.

    [1] https://github.com/richgel999/fpng [2] https://github.com/phoboslab/qoi (discussed here: https://news.ycombinator.com/item?id=29661498)

    • doophus 2 years ago

      Most games ship textures in a GPU-friendly format such as DXT to avoid problems like this.

  • gred 2 years ago

    > these days, generating or decoding PNGs is the bottleneck almost nowhere

    Anecdotal, but I'm familiar with a system which spends ~50% of CPU cycles on PNG encoding (most of which is actually spent in zlib compression).

    Other approaches I've seen involve creating performance-focused forks of zlib (e.g. zlib-chromium and zlib-cloudflare). This has benefits beyond PNG encode / decode:

    https://aws.amazon.com/blogs/opensource/improving-zlib-cloud...
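
A quick way to sanity-check that kind of profile (a sketch, not from the linked post; assumes Pillow and a local sample.png, and the numbers depend entirely on your images) is to time the same encode at different zlib levels, since the spread between levels is essentially all zlib:

```python
import io
import time
from PIL import Image  # assumes Pillow is available

img = Image.open("sample.png").convert("RGBA")

for level in (1, 6, 9):  # zlib compression levels (Pillow's compress_level)
    buf = io.BytesIO()
    start = time.perf_counter()
    img.save(buf, format="PNG", compress_level=level)
    elapsed = time.perf_counter() - start
    print(f"zlib level {level}: {elapsed * 1000:7.1f} ms, {buf.tell() / 1024:8.1f} KiB")
```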

    • Nyan 2 years ago

      Why not use fpnge?

      • gred 2 years ago

        It's a Java system, so not quite so simple. Maybe it's worthwhile to create some Java bindings? Recent JDKs make it feasible to swap out the underlying zlib implementation, so swapping out zlib-madler with zlib-cloudflare or zlib-ng might provide the best cost/benefit.

        • Nyan 2 years ago

          Someone made this: https://github.com/manticore-projects/fpng-java

          Replacing zlib might give you a few percentage points' worth of difference, whilst fpnge would likely be several times faster.

          • gred 2 years ago

            Very cool, thanks for the pointer! We might be able to run an internal test to check performance vs. a zlib replacement, but I think that AGPL license is going to be a showstopper for anything else...

  • chaxor 2 years ago

    Maybe this is the genius in it. In order to get everyone to move to better formats, you just break everything in the one everyone uses, so they have to move.

    Like Twitter becoming X to push everyone to mastodon. Mastodon is better in every way, so it's a net win.

    • sp332 2 years ago

      But most people didn’t move to Mastodon, and a bunch moved to incompatible networks like Bluesky, cohost, etc. Which is really the kind of problem you don’t want to have when you're just posting a photo.

      • Sardtok 2 years ago

        Please don't use PNG for photos.

        • namrog84 2 years ago

          Then what? Jpegs?

          • pavlov 2 years ago

            The clue about the use case is in the naming:

            Joint Photographic Experts Group

            vs

            Portable Network Graphics

          • solardev 2 years ago

            For redistribution, JPEG or AVIF or WebP. I'd choose JPEG myself... good enough compression with broad compatibility. You'll almost never run into an issue with it. AVIF and WebP are still full of gotchas, so I wouldn't deploy them without a JPEG fallback.

            For sharing masters/originals, it just depends on your team's needs. Probably saving the camera RAW is a good idea, but usually there have been edits done to it, so some Photoshop or Lightroom native format, for example, or at least a TIFF if you need broader compatibility.

          • Aachen 2 years ago

            (edit: The answer should logically follow from the following, but as a TL;DR: yes, they must have meant jpeg or similar, although I disagree about it being such a "please don't". Feel free to, they'll take more bytes but it's not like you're losing information in the process, so you can always still switch to jpeg.)

            Photos have lots of noise anyway, so a good jpeg is not going to be the limiting factor. Due to jpeg being lossy, the file size is much smaller for the type of images that pictures are (I don't know a formal definition, perhaps something with some amount of random noise? Or that there's lots of gradients with nonlinear slopes or so?).

            PNGs are lossless, and encoding photos in a lossless way takes a relatively large amount of space. On the other hand, the quality degradation of running text through a jpeg encoder is noticeable even when tuning the jpeg compressor for it, so a screenshot with text is always PNG. For designed things such as icons or logos or so, the file size is small enough to be a no-brainer for getting PNG's lossless quality.

            Vector, such as SVG, is the ultimate format: the highest possible quality and also the smallest! This only works when you've got access to the source (screenshots already don't work because you're capturing a blend of objects rendered as an array of pixels), and it's especially not suitable/possible when you're reading from a sensor such as a digital camera. This format doesn't describe pixel patterns, but rather shapes such as "draw a red line from coordinates 3,94 to 1920,1027". Because you can't always use it (you can always render an SVG as a grid of pixels and turn it into a PNG or anything else, but not the other way around), support is a bit lower and you'll see them less often, but imo it's always worth striving for when the platform allows it.

            • Ayesh 2 years ago

              > screenshots

              For browsers, there are extensions that can create SVG screenshots. They work by either copying or inlining CSS into the SVG. They don't work all the time, but worth giving a try first.

              For Firefox, see https://addons.mozilla.org/en-US/firefox/addon/svg-screensho..., it worked relatively well for me.

              • solardev 2 years ago

                I was skeptical at first, thinking it must just embed the raster inside the SVG, but no, this is actually really cool!

                The author also wrote the accompanying library, dom-to-svg: https://github.com/felixfbecker/dom-to-svg

                It seems to capture the HTML and convert each element to a similar one in SVG. I'm still not sure how it handles raster images (does it just embed them? Vectorize them somehow?) but it's a really cool project. Thanks for sharing!

            • solardev 2 years ago

              Minor nitpick, just for clarity: Vectors aren't really the "ultimate" format, they're just a different format for storing a different kind of information.

              Vectors are good for things that can be mathematically described/simplified without significant loss of information (such as curves, shapes, some gradients, etc., and compositions of those). Many logos and fonts fall into this category, for example, and some clip-art does as well. For the appropriate graphics, vectors can give you higher quality and smaller file sizes. But it's not necessarily the right choice for everything. Real pixels do hold data, and vectorizing them will typically cause some data loss. You COULD vectorize photos and illustrations (vectormagic.com is a good one for that). But you'd just end up with a poorer approximation of the original raster data, a bunch of pixel-like polygons, because there's not really a better approximation of that high-resolution pixel data that can be easily described in shapes and lines.

              Rasters are still superior not only for photographs, but for other uses outside of basic image redistribution, such as GIS data layers (where sensor data IS what's important), bump maps and terrain maps for games, 3D voxel images for medical imaging, live-action movies (which can interpolate complex scenes between raster frames), astronomy, etc. Even if you could vectorize the sensor data in those situations, you often wouldn't/shouldn't.

              • Nullabillity 2 years ago

                > Real pixels do hold data, and vectorizing them will typically cause some data loss.

                Chickens and eggs. If you have vector data already, it's better to distribute the raw vectors than to rasterize them. If you only have raster data, distribute the rasters (where viable) instead of trying to trace them. Lossy operations are lossy.

                Vectors are largely preferable to rasters, but that doesn't mean you can reproduce them if you only have the rasters. Just like distributing source code is usually preferable to compiled binaries, but that doesn't mean that we just pump everything through Ghidra and call it a day.

                • solardev 2 years ago

                  Yeah, exactly. The takeaway IMO is that there are different formats best for different kinds of data. There's no "best" or "ultimate" format, it just depends on your use case.

                • ttfkam 2 years ago

                  > Vectors are largely preferable to rasters

                  Not for continuous tone images like photographs, they're not. They are excellent however for line art.

            • ChainOfFools 2 years ago

              If I squint hard enough, jpeg starts to look like a very esoteric spin on svg, a quilt of tiny svg-like tiles whose parameters are defined by an encoder analyzing a bitmap rather than a human visualizing a concept.

    • cortesoft 2 years ago

      > Mastodon is better in every way

      We might wish this to be true, but it isn't. There are more people on Twitter/X, which is the most important part of a social network.

      • Fauntleroy 2 years ago

        Fewer people is actually a very, very good thing in my mind. Twitter is a great example of what happens when too many people are "talking" at once.

    • tetris11 2 years ago

      I think idiocy and short-term self-interest is more at play here, than any 4D chess shenanigans.

    • bawolff 2 years ago

      Embrace, extend, extinguish. Not just for microsoft anymore!

    • bsder 2 years ago

      > Mastodon is better in every way, so it's a net win.

      Making easily disprovable statements is not the way to win people to your side.

      Searchability is weaker. Login is all over the map. Links don't work. etc.

      These are the standard problems with non-centralized software.

      I really don't understand why Mastodon didn't use/create something like the "magnet" links used for torrents. That way, even if you lost the server, as long as you touched something that had the appropriate hashes, you could access the information.

      I use Mastodon, but it is not better in every way.

  • skybrian 2 years ago

    It seems like it’s not going to look broken? Unlike, say, the difference between black & white and color TV, or an animated image that doesn’t animate, it will be a subtle difference that most users won’t notice or care about. Some designers may be annoyed, but it doesn’t seem like that big a deal.

    • pwdisswordfishc 2 years ago

      Though doesn't that mean the feature is less likely to be implemented in the first place?

      Nobody gave a shit about Unicode grapheme clusters until EEMAWDJI came about. Sadly.

  • zimbatm 2 years ago

    He said that it would be backwards-compatible. It's in the name of the project.

    • solardev 2 years ago

      Sorry, but did you read my post? It's only backward-compatible in the sense that existing renderers can display SOMETHING -- but it's not the same image.

      From the article:

      > [...] like how color TV was introduced but it still worked on B&W TV's. New features can be added, as long as existing decoders ignore them and return a viewable image

      Keyword "ignore them". To my reading, this means that CPNGs will contain several images inside them: A full-featured version with "color" (or HDR, or or whatever) for newer renderers, and a fallback one (in "black and white" in his example) for existing renderers.

      It's not really "backward compatible", but more like "fallback-able".

      • edflsafoiewq 2 years ago

        The first two extensions (constrained deflate, multithreaded encoding/decoding) only present optimization opportunities for the decoder. It is still the same image.

        Single-threaded decoding of large PNGs is enough of a bottleneck that Apple has their own version of PNG with multithreading support.

        • layer8 2 years ago

          It’s also not really a different format then, just a constrained form of PNG.

      • brookst 2 years ago

        > CPNGs will contain several images inside them

        That was not my read. When discussing HDR it’s clear that ONLY the extra bits are stored in a new data structure and are applied to the base image at display time.

        So that gets you a bit-for-bit compatible image on old systems and HDR on new systems without duplicating data.

        I believe that pattern applies throughout: a baseline, standard PNG image plus extra instructions and data for adding information to it.
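
A toy illustration of that pattern (not the actual CPNG layout, which uses a LOGLUV32-style encoding per the article): keep an 8-bit base image that legacy decoders display as-is, and put the extra precision in an ancillary chunk that a newer decoder recombines at display time. A toy numpy sketch:

```python
import numpy as np

# Toy only: split a 16-bit image into an 8-bit base (what a legacy PNG decoder
# would show) plus a residual that could live in an ancillary chunk for a
# CPNG-aware decoder to recombine. The real format stores HDR data differently;
# this just demonstrates the "baseline image + extra data, no duplication" idea.
hdr = np.random.randint(0, 2**16, size=(4, 4), dtype=np.uint16)

base = (hdr >> 8).astype(np.uint8)       # high 8 bits: the backwards-compatible image
extra = (hdr & 0xFF).astype(np.uint8)    # low 8 bits: would ride along in an extension chunk

reconstructed = (base.astype(np.uint16) << 8) | extra
assert np.array_equal(reconstructed, hdr)  # new decoders get the full-precision image back
```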

        • solardev 2 years ago

          Sorry, I didn't mean in terms of the codec or data deduplication. You're right, it's probably not implemented as actually having different images in the same file. That would be quite silly. Sorry for my ambiguity.

          But functionally, this means that two renderers would have different outputs for the same file. It's not so bad when it's just HDR, but if this continues to grow, it WILL get bad. We've seen it many, many times before with hacked-on file extensions.

          If it were Microsoft doing this (it's not), it would be just yet another embrace-extend-extinguish effort. But it's not. I have no doubt this author has good intentions, and this is a cool hack, but I think it will lead to confusion and poor outcomes in widespread usage.

          It's really not great when an image format looks "somewhat" the same between renderers. In fact, I think that's worse than it not working at all. From experience, it leads to these situations where the same image can fragment over time just by virtue of being worked on by different people in different environments (apps, OSes, browsers, devices), and there's not really a clear way to tell which is which without extensively analyzing the metadata.

          If Joe starts work on an HDR version, sends it to Sarah, Sarah adds a bunch of new edits but then saves it in an app that recompresses the image and overwrites the HDR, Joe might never know. And then if the format continues to expand, maybe a bit later Jane and Janet each add mutually incompatible extensions to the file without knowing it. The "base" image might look the same to all of these people, but each had specific edits in mind that the others might not have known about, because their renderers LOOKED like they were rendering the same image when they weren't.

          This isn't a hypothetical... it happened a lot when PNG first came out, and with PSD (Photoshop files) for a good decade or two until the subscription model came out.

          • Dylan16807 2 years ago

            If someone edits a png I expect them to output a new file. So if they use the SDR version as a starting point... they use the SDR version, that's it, nothing else breaks. Everyone sees the same output from Sarah. You can have the same kind of issues with baseline png already; not every tool does all colors or editing effects correctly, and if someone uses a worse tool to edit then that's unfortunate.

            • solardev 2 years ago

              Right, but not everyone is going to start from the SDR. Some might have the HDR version and not realize they're destroying the info. It gets worse the more forks and features you have.

              There's just no advantage in overloading an old graphics format for something like this in a world where WebP and AVIF already exist.

              As for editing tools, at least they're targeting the same baseline format (maybe with bugs). This is purposefully introducing new features that can only be used in some places, outside of the agreed standard.

              • cycomanic 2 years ago

                Sure there is an advantage: I can put a png file everywhere without needing to care whether the user's system supports the new capability.

                For a (made up) example, let's say Firefox supports HDR but Chrome doesn't. If I want to include an HDR image in a website, but display an SDR image if the browser can't display it, I need to implement a browser test to decide whether to serve a WebP or a PNG. That is a real cost.

                • solardev 2 years ago

                  What you see as a positive I see as potential for implementation details to diverge and cause user issues. We already have issues with WebP and AVIF rendering in some versions of Safari, and those are well established standards, not a lone wolf extension.

                  In theory an extended, hacked PNG would have perfect backward compatibility, but the complexities of codecs more likely mean it's going to appear as bugs in several implementations. Best case it falls back to SDR, but it's also possible some broken implementation leads to HDR content being displayed in an SDR setup and everything getting all washed out.

                  Having the same file format/extension actually host several different versions never works well. Graphic artists and support teams have to deal with rendering quirks like this all the time because some developer thought their hack was a genius overload :/

                • Sardtok 2 years ago

                  This is built into HTML, where you can supply multiple files in a srcset.

          • jameshart 2 years ago

            I have bad news for you: the existing PNG standard specifies the following optional chunks that compatible renderers are free to follow or ignore, producing different renderings of the same image:

            cHRM - Primary chromaticities and white point

            gAMA - Image gamma

            iCCP - Embedded ICC profile
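
gAMA, for instance, is just a 4-byte integer (gamma × 100000), and whether a viewer applies it is exactly the kind of thing that makes the same file render differently. A sketch, roughly following the decoding recipe from the PNG 1.2 spec:

```python
import struct

def read_gamma(path):
    """Return the gAMA value if present, else None. Sketch: no error handling."""
    with open(path, "rb") as f:
        f.seek(8)  # skip the PNG signature
        while True:
            length, ctype = struct.unpack(">I4s", f.read(8))
            data = f.read(length)
            f.read(4)  # CRC, ignored here
            if ctype == b"gAMA":
                return struct.unpack(">I", data)[0] / 100000.0  # stored as gamma * 100000
            if ctype in (b"IDAT", b"IEND"):
                return None  # gAMA must appear before the image data

def displayed(sample, gamma, display_exponent=2.2):
    """A gAMA-aware viewer raises samples to 1/(gamma * display_exponent);
    a viewer that ignores the chunk just shows the raw sample value."""
    return (sample / 255.0) ** (1.0 / (gamma * display_exponent))
```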

          • naikrovek 2 years ago

            so all image formats should be at version 1.0 and never be improved? come on.

            • solardev 2 years ago

              Once stable, yes, absolutely! There should be a reference standard that renderers adhere to. Then they should be versioned for future improvement. That doesn't mean there can't be bug and security fixes, but they should be feature-stable.

              There's a reason why JPEG and PNG are so popular: because they ARE stable and unchanging, in a world of failed adoption for APNG, WebP, HEVC, JPEG 2000, JPEG XL, AVIF, etc.

              For image formats, broad compatibility is way way way more important than small gains in codec performance and filesize.

              ----

              Edit: From memory, this is something we learned the hard way back in the 90s and early 2000s, when image (and other) formats saw very rapid development and bandwidth was still hard to get. PNG was a huge improvement over GIF, and that mattered a lot over 56k modems. But the road to its adoption was very difficult, with many features only partially supported for years in different apps.

              When WebP launched, it was objectively better in most ways compared to both PNG and JPEG, but it still saw limited uptake. Part of that was the titans fighting amongst themselves (other tech companies didn't want to adopt a Google format), but also by that point both JPEG and PNG were good enough and connections & CPUs were good enough. The incremental improvement it offered rarely justified it sometimes not working for some users on some browsers.

              It's a similar situation for many other formats: H.264 in an MP4 is good enough, MP3s are still good enough, etc. On the other hand, PDFs and DOC/DOCX have different features depending on what reader you use (such as PDF forms, JS validation, accessibility, or DOCX not rendering the same on different readers), and it's a mess.

              • ksec 2 years ago

                >When WebP launched, it was objectively better in most ways compared to both PNG and JPEG,

                It was not. Optimised JPEG was on par with, or sometimes better than, WebP.

                • solardev 2 years ago

                  For a given image, that's possible, but if memory serves, the good enough defaults usually led to the WebP version being smaller across a library of image samples, statistically speaking. But that may not have been true for an individual image.

                  And either format could be manually optimized even more.

              • sroussey 2 years ago

                GIF used patented LZW compression (the Unisys patent), which was one of the reasons for PNG.

              • jameshart 2 years ago

                > There's a reason why JPEG and PNG are so popular: because they ARE stable and unchanging

                PNG is on version 1.2

              • miragecraft 2 years ago

                The news of JPEG XL’s death is greatly exaggerated.

  • pwdisswordfishc 2 years ago

    > Early in PNG's history, we already had this issue with alpha channels, progressive rendering, and hit-or-miss support for APNG (animated PNGs, meant to replace GIFs but never happened).

    Don't forget pixel aspect ratio. Oh wait, most viewers still ignore that.

ebb_earl_co 2 years ago

> Why continue messing with PNG at all? Because if you can figure out how to squeeze new features into PNG in a backwards compatible way, it's instantly compatible with all browsers, OS's, engines, etc. This is valuable

What a brilliant paragraph. I wish this developer all the success in the world.

  • unconed 2 years ago

    The backwards compatibility is okay, but the author should also plan a more optimal replacement encoding that ditches the legacy compatibility, and require new implementations to support both.

    Otherwise there is no way to sunset the hacks.

    • solardev 2 years ago

      They should just make a new format. It's not clear if this is an improvement over AVIF either.

    • Nyan 2 years ago

      We've already got JPEG-XL as a good replacement format, but some feel that jeopardizing its adoption is a sensible cause.

Voultapher 2 years ago

That's really cool! I stumbled across libpng being 10+x slower to encode than jpg and tiff at work. The LOGLUV32 part is very clever. I particularly like the tonemapped fallback and general idea to build on top instead of reinventing. That said I hope these format extensions don't end up in compatibility hell, where viewing the full info image is hit or miss between different CPNG decoders.

gumby 2 years ago

I loved reading this even though I personally have zero need for it myself. I enjoyed the rationale and the engineering.

The world needs more work like this. I’m talking about the thoughtful image format work, but that applies to the write-up too.

fbdab103 2 years ago

What is the modern power ranking on image formats?

For lossless, what is typically the most efficient size wise? Decompression speed?

For lossy?

I am not in a situation where these micro-optimizations mean much to me, and always default to png, but curious to know where the state of the art is today.

  • solardev 2 years ago

    AVIF and WebP, two modern replacements for JPEG and GIF on the web, both support lossy and lossless encoding.

    WebP is mature and has more browser support, but AVIF is getting there (notably only lacking Edge support). Both can compress in either a lossy JPEG-like fashion or in a lossless PNG-like fashion.

    If you use an image CDN like Imgix, it'll just auto-detect and serve whatever the most optimal format is anyway: https://docs.imgix.com/apis/rendering/auto/auto#format. Cloudinary too: https://cloudinary.com/documentation/image_optimization#how_...

    For non-web, there's also JPEG XL to look at, but if you're not rendering an image for redistribution, it's probably better to keep it as raw as possible anyway (i.e. camera raw images plus photoshop layers, or whatever).

    • ComputerGuru 2 years ago

      WebP and AVIF (and, to a much lesser extent, HEIC, which AVIF is basically a rip-off of) absolutely suck for color management since they are a) virtually never original source formats, b) video codecs. WebP technically supports two different color profile techniques (traditional embedded ICC - broken in every mainstream batch image processor I’ve tried - and nclx video-based color profiles). Unlike WebP and all the other image formats, untagged AVIF can’t be assumed to be sRGB (in part because there is no actual sRGB for video, though close variants exist) and every image processor or image editor will open it with a different base color profile assigned. WebP doesn’t even support exif, making it absolutely horrible for “lossless” operations that effectively aren’t lossless since they necessarily destroy metadata.

      HEIC is also a video codec at heart but has a default color space that also isn’t sRGB (which is a good thing; it’s about time we moved on), untagged HEIC images can (though often aren’t in any default workflow) be assigned Display P3. Assigning/assuming sRGB will absolutely break your images, of course.

    • eyegor 2 years ago

      The worst part about AVIF support in Edge is that it was added as an optional feature flag ~8 months ago, but it still isn't enabled by default. Nearly every other browser supports AVIF by default these days.

      https://winaero.com/avif-support-is-now-available-in-microso...

  • ksec 2 years ago

    >What is the modern power ranking on image formats?

    I will assume this can be outside the web, and consider only image formats / codecs that are state of the art.

    >For lossless, what is typically the most efficient size wise? Decompression speed?

    In terms of lossless, JPEG-XL is the one to look at, both in terms of size and decompression speed. You will already see communities of professional photographers using this format.

    >For lossy?

    That depends on what sort of quality you are looking for. In some edge cases you could have JPEG XL being better at ultra low bits per pixel, like 0.1 bpp. Otherwise, in 95% of cases at 0.1 bpp to 0.5 bpp, it is HEIF / AVIF. HEIF based on a VVC / H.266 encoder is likely the state of the art, with the current reference FVC / H.267 implementation bringing another 10 - 30% improvement.

    However, the higher the quality you go, i.e. 0.8 to 1.5 bpp, the more it favours JPEG XL.

  • jjcm 2 years ago

    As others have mentioned, the pool is webp, avif, and jpeg-xl.

    If you’re building something today, webp likely has the best tradeoff of encoding speed, efficiency, and compatibility.

    For pure tiering, here’s how I’d rank them:

    *Efficiency:*

    Webp: B tier

    AVIF: A tier

    JXL: S tier

    *Encoding Speed:*

    Webp: B tier

    AVIF: D tier

    JXL: A tier

    *Compatibility*

    Webp: A tier

    AVIF: B tier

    JXL: D tier

summerlight 2 years ago

What exactly does "100% backward compatible" mean? It looks like some optimizations could be backported to the existing encoder/decoder without breaking the format, but that is more of an optimization. My impression is that this is backward compatible in a way similar to APNG (it will return some reasonable image if the file uses new functionality), but I'm not sure if I understand it correctly.

Retr0id 2 years ago

I've also been pondering a backwards-compatible fork of PNG - but rather than a fork, mine would be a subset. Specifically, it would be an as-simple-as-possible uncompressed* bitmap format, suitable for use in low-complexity embedded systems etc. (e.g. bootloaders, wink wink). By being backwards compatible, you get the added benefit of retaining compatibility with existing image viewers, but without having to implement "all of PNG" in decoders and encoders. Now, the base PNG spec isn't even that big, but the more you constrain the spec, the easier it is to be sure you've implemented it securely.

* If you're wondering how that works in a backwards-compatible way, DEFLATE already supports uncompressed blocks.
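
A sketch of how small the encoder side of that gets (assuming Python; zlib level 0 emits DEFLATE "stored" blocks, so the output is an ordinary PNG any viewer can open, while the matching minimal decoder only ever has to handle uncompressed blocks):

```python
import struct
import zlib

def chunk(ctype: bytes, data: bytes) -> bytes:
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def write_uncompressed_png(path, width, height, rgb_rows):
    """rgb_rows: `height` byte strings of 3*width RGB bytes each."""
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 2, 0, 0, 0)  # 8-bit truecolor, no interlace
    raw = b"".join(b"\x00" + row for row in rgb_rows)  # filter type 0 on every scanline
    idat = zlib.compress(raw, 0)  # level 0 => stored (uncompressed) DEFLATE blocks
    with open(path, "wb") as f:
        f.write(b"\x89PNG\r\n\x1a\n")
        f.write(chunk(b"IHDR", ihdr))
        f.write(chunk(b"IDAT", idat))
        f.write(chunk(b"IEND", b""))

# e.g. a 4x4 solid red square
write_uncompressed_png("red.png", 4, 4, [b"\xff\x00\x00" * 4] * 4)
```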

  • pwg 2 years ago

    > I've also been pondering a backwards-compatible fork of PNG - but rather than a fork, mine would be a subset. Specifically, it would be an as-simple-as-possible uncompressed* bitmap format, suitable for use in low-complexity embedded systems etc. (e.g. bootloaders, wink wink).

    Look at the NetPBM formats (PPM, PGM, PBM). They are about as simple as they can possibly get (a tiny ASCII header, followed by binary bitmap data), and are also uncompressed.

    https://en.wikipedia.org/wiki/Netpbm

    • Retr0id 2 years ago

      They're simple, but they're nowhere near as widely supported as PNG (or BMP)

  • colejohnson66 2 years ago

    BMP already exists for uncompressed bitmap data.

    • Retr0id 2 years ago

      This is true, but BMP became a bit of a kitchen-sink format, supporting all sorts of pixel formats, and optionally, compression.

      i.e. you'd still want to pick a subset to implement. To be honest, you're probably right - BMP would be a more sensible starting point, but I'm interested to see how far PNG can be pushed.

  • jezek2 2 years ago

    You may want to look at my PNG/DEFLATE implementation:

    http://public-domain.advel.cz/

    It contains various implementations of the compression, from simple uncompressed to more complex variants, and a quite small PNG loader. There is also a minimalistic PNG writer with uncompressed data.

  • ack_complete 2 years ago

    I would think that something like a bootloader would have fixed input and be able to rely on that, with signing if necessary, rather than the robustness of the decoder. Otherwise, someone who could replace the image would probably be able to replace the code as well.

    Without Deflate compression, PNG would have no compression at all, as the predictor mechanism gives no savings on its own. TARGA with RLE would be a better choice than PNG-0.

  • doubloon 2 years ago

    ok thats pretty wild, you would take the zlib deflate/inflate code, (for example in a library like lodepng) and then like chunk 95% of it in the garbage? so basically every block would just be uncompressed? kind of funny but it would probably work pretty well and your code size could get down way way smaller than the current typical png code.

    seems like the downside is that this is "worse than nothing" compression, the image file would be bigger than the original blit of the data. for example 1024x1024x32bit color means 3 megabytes for one image. or do i miss something?

    • RaisingSpear 2 years ago

      Many PNG compressors allow you to specify the zlib compression level, where 0 = no compression. This will effectively give you an uncompressed image, perhaps with some format overhead.

      Your math is a bit off - a 1024x1024 at 32bpp would be 4MB, ignoring overhead.

      I've actually done something like this in the past - create PNGs with 0 compression, then compress it with something stronger than Deflate (like LZMA). Because the PNG filtering is still applied, this allows greater compression relative to PNG by itself (or a BMP compressed with LZMA).
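
A sketch of that trick, assuming Pillow and Python's lzma module (compress_level=0 keeps the zlib wrapper but stores the pixel data uncompressed; how much the filtering step helps depends on which filters the encoder actually emits at level 0):

```python
import io
import lzma
from PIL import Image  # assumes Pillow is available

img = Image.open("photo.png")

stored = io.BytesIO()
img.save(stored, format="PNG", compress_level=0)  # zlib level 0: stored DEFLATE blocks

packed = lzma.compress(stored.getvalue(), preset=9)
print(f"{stored.tell()} bytes stored PNG -> {len(packed)} bytes after LZMA")
```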

      • doubloon 2 years ago

        right but the "compression/decompression" code essentially becomes, like what, 10 lines of C? down from several thousand

        • RaisingSpear 2 years ago

          Certainly, if your aim is the simplest code with no regard to compression, you could achieve a PNG writer with a small amount of code.

          In such a case, you could also skip the PNG filtering as well (whilst for my case, you wouldn't want to).

          I think it'd make more sense to go with BMP for a 'simple as possible' image format, that has wide support, than with PNG. PNG is definitely more geared towards a compressed image (as well as all sorts of other features you may not care about).

    • Retr0id 2 years ago

      There's no need to take anyone else's code, emitting uncompressed DEFLATE blocks is trivial. I'm not sure what you mean by garbage?

      > for example 1024x1024x32bit color means 3 megabytes for one image.

      You do miss something, that's 4 megabytes, plus any header/format overhead - but you'd get similar performance out of any uncompressed format, that's just the tradeoff.

      • doubloon 2 years ago

        i mean the PNG code, you'd have to have some PNG code in addition to the DEFLATE code, yes?

tedunangst 2 years ago

No mention of JPEG XT or JPEG-HDR?

https://en.wikipedia.org/wiki/JPEG_XT

notfed 2 years ago

This sounds pretty amazing.

For CPNG-aware libraries, the performance improvements sound impressive.

For old (CPNG-unaware) libraries: should I expect any performance drop reading a CPNG image compared to if the image had remained PNG? Similarly, how much larger will a CPNG be than a PNG?

TheFuzzball 2 years ago

Maybe next we'll get eXtensible Compatible Network Graphics: XCPNG

brookst 2 years ago

Very cool, and I hope it sees adoption.

Also speaks to either wise or lucky design of PNG itself, that it can support these kinds of backwards-compatible extensions.

  • lifthrasiir 2 years ago

    PNG had more focus on backward and forward compatibility, but the fact that PNG can be "extended" in this way is not that unusual for file formats composed of multiple sections (chunks in PNG). Especially considering that other aspects of PNG effectively failed: for example, it is technically possible to add a new compression method or color type to IHDR, but that would give you a file completely unreadable by existing decoders. CPNG essentially works by reinterpreting PNG in a different way when certain conditions are met.

ericskiff 2 years ago

This is wonderful. What a great way to continue innovation without facing the adoption hurdles of a new format

jancsika 2 years ago

How different can the fallback be?

Could you do an image of SBF that falls back to an image of Madoff?

ComputerGuru 2 years ago

There’s no mention of what effect these changes have on file size. It seems to me all the non-HDR changes will blow up file sizes for all but the largest of images.

snshn 2 years ago

If APNG couldn't pick up steam and get widespread adoption, not sure how this will. But hopefully I'm wrong.

  • jeroenhd 2 years ago

    APNG is supported in every browser and all the video encoding tools I've used. It's not used all that often, but support for it is built into many software libraries.

vzaliva 2 years ago

How does this compare to WebP?

  • kibwen 2 years ago

    I didn't even realize that WebP had an optional lossless compression mode.

    • Dwedit 2 years ago

      WebP's lossless compression mode usually beats PNG by a lot, and even decompresses faster. I consider lossless WebP to completely obsolete PNG. Lossless JXL often beats WebP in compression, but loses in decompression time.

      Except for indexed color images. PNG beats WebP on those images. Meanwhile JXL beats PNG on indexed color images.

jbverschoor 2 years ago

Why not just invest in jxl

Exoristos 2 years ago

Seeping?

topsycatt 2 years ago

If only it was called GNP...

phront 2 years ago

meet a new bunch of security holes
