Google researchers detail new method for upscaling low-resolution images

dpreview.com

40 points by zonovar 4 years ago · 12 comments

emrah 4 years ago

It is pretty impressive/crazy how well CDM and SR3 work together to go from 32x32 to 256x256, e.g. the Irish Setter. How could the algos possibly know about the lighter coloring (due to breed or lighting) between the dog's eyes?! It's basically inventing pixels.
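
A minimal sketch of the iterative refinement idea behind SR3 (not Google's code; `denoiser` is a placeholder for a trained U-Net that sees the noisy estimate concatenated with the upsampled input): the new pixels literally start as Gaussian noise, and each denoising step nudges them toward something consistent with the 32x32 evidence.

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def sr3_style_sample(denoiser, low_res, steps=1000):
        """Upscale a (B, 3, 32, 32) batch to (B, 3, 256, 256) by iterative refinement."""
        # Condition every step on a bicubic upsample of the real low-res pixels.
        cond = F.interpolate(low_res, size=(256, 256), mode="bicubic",
                             align_corners=False)
        # Standard DDPM linear noise schedule.
        betas = torch.linspace(1e-4, 0.02, steps)
        alphas = 1.0 - betas
        alpha_bars = torch.cumprod(alphas, dim=0)
        # Start from pure Gaussian noise: every "invented" pixel begins here.
        x = torch.randn_like(cond)
        for t in reversed(range(steps)):
            # The network sees the noisy estimate *and* the upsampled input,
            # so the hallucinated detail stays anchored to the real 32x32 pixels.
            eps = denoiser(torch.cat([x, cond], dim=1), torch.tensor([t]))
            x = (x - betas[t] / (1 - alpha_bars[t]).sqrt() * eps) / alphas[t].sqrt()
            if t > 0:  # add fresh noise on every step except the last
                x += betas[t].sqrt() * torch.randn_like(x)
        return x

CDM chains stages like this one after another, which is how the pipeline climbs from 32x32 all the way to 256x256.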

  • shakna 4 years ago

    Even basic upscaling algorithms can guess a surprising amount of detail.

    When I was putting together a simple and fast method a while back, I compared my own to the very, very basic ones and ended up with this [0].

    The far left is the original; the others just shift the scale percentage. A surprising amount of detail is kept, even though all of the algorithms were pushed well beyond what should be considered their limits. (Purposefully, to make the bias easier to analyse.)

    [0] https://raw.githubusercontent.com/shakna-israel/upscaler_ana...
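
    A rough sketch of that kind of comparison, assuming Pillow and a placeholder "setter.png" input; which filters count as "the very, very basic" is an assumption:

        from PIL import Image

        FILTERS = {
            "nearest": Image.NEAREST,
            "bilinear": Image.BILINEAR,
            "bicubic": Image.BICUBIC,
            "lanczos": Image.LANCZOS,
        }

        original = Image.open("setter.png")
        small = original.resize((32, 32), Image.LANCZOS)  # throw detail away first

        for name, resample in FILTERS.items():
            # 8x upscale: well past what these filters are designed for,
            # which makes each algorithm's bias easy to compare side by side.
            small.resize((256, 256), resample).save(f"upscaled_{name}.png")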

    • emrah 4 years ago

      Thank you for sharing. Honestly, I don't see any "pixel divining" in your examples; the algos take existing pixels and build on top of them.

      The Irish Setter example seems to introduce detail that is not in the original small image, like the lighter/whitish area between the dog's eyes.

  • sorokod 4 years ago

    How confident can you be that the initial 32x32 was an Irish Setter?

dang 4 years ago

We changed the URL from https://petapixel.com/2021/08/30/googles-new-ai-photo-upscal..., which points to this article. Both articles point to the same source post, which had a recent and related thread:

High Fidelity Image Generation Using Diffusion Models - https://news.ycombinator.com/item?id=27858893 - July 2021 (19 comments)

mikewarot 4 years ago

Here's an unofficial copy of the code: https://github.com/Janspiry/Image-Super-Resolution-via-Itera...

empressplay 4 years ago

There might be hope for Star Trek: Deep Space Nine yet!

hulitu 4 years ago

Finally something to bring back to life those 16x16 px Windows 3.1 icons.

sorokod 4 years ago

If we start with multiple source images that are "small" (by some definition of small) perturbations of each other and upscale them, what can be said about the results?
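
One way to probe this empirically, sketched below with NumPy (`upscale` is a stand-in for whatever method is under test; a stochastic sampler like SR3 would also need a fixed seed to compare fairly): jitter the low-res input with small noise, upscale each variant, and measure how far apart the outputs land.

    import numpy as np

    def perturbation_spread(upscale, low_res, n=8, sigma=1.0, seed=0):
        """Mean per-pixel std-dev across upscaled versions of jittered inputs."""
        rng = np.random.default_rng(seed)
        outs = []
        for _ in range(n):
            jittered = np.clip(low_res + rng.normal(0, sigma, low_res.shape),
                               0, 255)
            outs.append(np.asarray(upscale(jittered), dtype=np.float64))
        # Near zero: the method is stable under small perturbations.
        # Large: tiny input changes are amplified into different invented detail.
        return np.stack(outs).std(axis=0).mean()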

dakial1 4 years ago

Some of the images in the article seemed to be high-res images that were downscaled to low-res (and it makes sense to see how the upscaling process changes the original), but wouldn't that make it easier for the ML to revert the downscaling process than to take an original low-res photo and upscale it?

  • lwneal 4 years ago

    This is true. Downscaling an image and then training a neural network to scale it back up is the way single-image superresolution systems typically work. Research papers need to evaluate their models, and how can you evaluate a scaled-up image unless you have the original ground truth to compare it to?

    This can introduce a dataset shift bias. For example, if you train a network to upscale 1080p movie frames to 4k, the results might be disappointing when you try to scale 4k to 8k.
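
    A sketch of that standard recipe in PyTorch (not the paper's training code; `model` and `optimizer` are placeholders for any super-resolution network and its optimizer): the low-res input is synthesized from the ground truth, so the network only ever learns to invert that one degradation, which is exactly where the shift bias comes from.

        import torch
        import torch.nn.functional as F

        def make_pair(hr, scale=4):
            """Synthesize the low-res input from the high-res ground truth."""
            lr = F.interpolate(hr, scale_factor=1 / scale, mode="bicubic",
                               align_corners=False, antialias=True)
            return lr, hr

        def train_step(model, optimizer, hr):
            lr, target = make_pair(hr)
            optimizer.zero_grad()
            loss = F.l1_loss(model(lr), target)  # model maps LR back to HR size
            loss.backward()
            optimizer.step()
            return loss.item()

        def psnr(pred, target):
            """Evaluation also needs the ground truth, hence the downscaling."""
            return 10 * torch.log10(1.0 / F.mse_loss(pred, target))  # [0, 1] images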
