The limits of "computational photography"

yager.io

278 points by fabfabfab 3 years ago · 222 comments

delta_p_delta_x 3 years ago

I would like to see computational photography applied to raw images from DSLRs and MILCs with APS-C and larger sensors. Perhaps Canon, Nikon, Sony, and Fujifilm could have built-in options in their cameras for ‘social media mode’, with a modicum of noise reduction (honestly unnecessary at ISOs lower than about 1600 for modern cameras), but drastically improved HDR and white balance.

Many of these cameras are able to take bracketed[1] exposures, and the SNR in even just one image from such sensors is immense compared to the tiny sensors in phones. Surely with this much more data to work with, HDR would be much nicer, without the edge brightening typically seen in phone HDR images.

[1]: https://www.nikonusa.com/en/learn-and-explore/a/tips-and-tec...

  • dusted 3 years ago

    They already are. However, most photographers do not appreciate this type of distortion being applied to their images.

    At a glance, my Samsung Note 22 Ultra takes better pictures than my Nikon D7500. At a glance.

    However, as soon as you want to actually DO anything to it, like view it in any real detail, or on anything but a tiny screen, reality returns. While the phone is absolutely fantastic for a quick snapshot, it just does not come close to the definition of the older camera with the bigger sensor.

    • packetlost 3 years ago

      The workflows for someone using a DSLR/mirrorless are entirely different too. Most people edit their photos on a computer in "post production" (this isn't always true, I myself use Fujifilm's built-in JPEG processing features heavily), whereas phones are typically doing very light edits if anything.

      Even if SNR isn't really the issue with the tiny sensors (it is), the minuscule lenses used by phones almost certainly don't come anywhere close to the detail fidelity of a lens 300x the size.

    • psychphysic 3 years ago

      Similarly sometimes I just want to apply whatever ai magic elixir my phone does to the raw image from my DSLR.

      Messing around in Lightroom etc. is just not worth it for the hundreds of shots I sometimes take per day.

      • throwanem 3 years ago

        If you're making the same or very similar edits to a lot of shots by hand, you don't have to do that. You can create a preset from one shot's edits, then apply it to as many others as you like in a single step.

        • lambdasquirrel 3 years ago

          Yeah, Photoshop has actions too. See, photographers have been doing computational photography too, this whole time. It’s just that we prefer to put it together with a general purpose tool, rather than what formula Apple or Samsung has decided would be best.

          • throwanem 3 years ago

            Which is the point I failed to touch upon in my smartassed earlier comment about the macro rig. We use most of (maybe all) the same transformations in a tool like Lightroom, but prefer to compose them by hand precisely because the assumptions made by phone camera ISPs are not at all guaranteed to hold over the range of raw images we use our cameras to produce. (They don't even hold over the range of raw images people use phone cameras to produce!)

      • matwood 3 years ago

        So true. Not only that, but pulling out my phone and quickly snapping a shot is often better than quickly pulling out my non-phone cameras - especially in challenging lighting conditions.

        I absolutely can get better shots with my non-phone cameras, but they take more work.

        • prmoustache 3 years ago

          I understand that carrying and taking shots with a DSLR is much more involved than pulling a phone out of a pocket, but have you ever used compact cameras with a 1" sensor, like a Sony RX100 or Canon G9X?

          They both fit in your pocket and start up as fast as you can activate the camera on your phone. Out of habit I reach for the power button unconsciously when grabbing it from my pocket, and the lens extends while my arm moves into position, so it is usually ready to snap a picture immediately.

          If you are in a setting where you are almost certain to take pictures, a Micro Four Thirds camera also works great, and depending on the lens it is usually small enough if you are the kind of person who wears a small shoulder or sling bag.

          • matwood 3 years ago

            Well, now we're getting into the old adage: the best camera is whatever camera I have with me. I always have my phone and many times have my Z5 or D7100. There isn't much space to add another camera.

      • steveBK123 3 years ago

        The trick is to not take 100s of shots per day, and failing that, to filter out 70-95% of what you shoot before editing.

        • SAI_Peregrinus 3 years ago

          That somewhat depends on three main things: your photography style, your equipment quality, and your skill. If you shoot landscape photos, you almost certainly don't want to be taking 100s of shots per day and should spend more time and effort on composition. If you shoot wildlife action (e.g. birds in flight, especially fast ones like swallows) you'll be taking hundreds to thousands of shots per minute of action (30 shots/second from a high-end body like a Sony a1 or Canon R5 means 1800 shots per minute; even mid-range bodies take 10 shots/second). If you've got good enough equipment and skill that most of those shots are usable, then there's going to be a lot of sorting to find the ones worth editing.

        • Retric 3 years ago

          I find the best photo before editing is often different than the best photo after editing. Granted that’s probably a skill issue on my part.

          • steveBK123 3 years ago

            You can bulk edit: take your adjustments from one image and apply them to a set of images taken in the same lighting, etc.

      • dwighttk 3 years ago

        I don't use it, so I don't know, but doesn't Google Images offer to do this to any photo you upload?

    • dimatura 3 years ago

      Yep, I forget the name of the setting, but my Sony mirrorless has some modes in the same spirit as the typical smartphone camera - sort of like an enhanced Auto mode. Though in my experience, they're definitely less aggressive than what my Pixel does.

      Camera manufacturers are definitely at a disadvantage when it comes to AI/ML expertise, infrastructure, and data compared to Google or Apple. So I do think these big companies do a better job on average from a purely technical viewpoint. On the other hand, there's no accounting for taste - stylistically, I agree with many others that the results are often too heavy-handed.

    • prox 3 years ago

      This! My close to 10 year old DSLR still outperforms the latest and greatest on any phone for this reason.

    • causi 3 years ago

      I recently started collecting a few of the digital cameras I lusted after as a kid. It made me realize there's a huge population of cameras out there that would produce incredible photos if not for small details. Details like not having readily available batteries, or an odd cable or memory card limitation. The biggest one for me personally is CCD degradation in cameras that don't have a built-in pixel mapping function, so every photo they produce has the exact same set of bad pixels. Sadly, as far as I know there's no way to set up automated photo editing where you can just select all the files and hit "fix defect set for Camera A".
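
      (For what it's worth, that kind of batch "fix defect set" is scriptable in principle. The sketch below is only an illustration: it assumes RGB JPEGs, and the camera name, folder, and pixel coordinates are made up.)

        # Hypothetical batch fix: patch known stuck pixels in every JPEG from a
        # given camera by replacing each with the median of its 3x3 neighbourhood
        # (a single stuck pixel barely shifts that median).
        from pathlib import Path
        import numpy as np
        from PIL import Image

        BAD_PIXELS = {"camera_a": [(120, 843), (977, 2210)]}  # (row, col), found by inspection

        def fix_file(path, coords):
            img = np.asarray(Image.open(path)).copy()          # H x W x 3, uint8
            for r, c in coords:
                window = img[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2].reshape(-1, 3)
                img[r, c] = np.median(window, axis=0)
            Image.fromarray(img).save(path.with_name(path.stem + "_fixed.jpg"), quality=95)

        for jpeg in Path("camera_a_photos").glob("*.jpg"):
            fix_file(jpeg, BAD_PIXELS["camera_a"])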

      • corysama 3 years ago

        I don't know what I'm talking about. But I recall the Unity3D folks talking about a camera calibration process that involved photographing a dozen full-frame images of a flat, neutral gray surface, averaging all those images to average out the sensor noise, and being left with an image containing the per-pixel bias of that particular sensor.
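
        (The averaging step itself is tiny; here is a naive sketch, with an assumed folder of TIFF frames, and with the caveat that real calibration separates dark-frame bias from flat-field response.)

          # Average many frames of an evenly lit, flat grey target. Shot noise
          # averages out, leaving an estimate of each pixel's fixed offset,
          # which can later be subtracted from real photos.
          from pathlib import Path
          import numpy as np
          from PIL import Image

          frames = [np.asarray(Image.open(p), dtype=np.float64)
                    for p in sorted(Path("gray_frames").glob("*.tif"))]
          mean_frame = np.mean(frames, axis=0)       # per-pixel average over the stack
          bias = mean_frame - mean_frame.mean()      # deviation from the global mean
          np.save("sensor_bias.npy", bias)           # later: corrected = photo - bias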

      • dwringer 3 years ago

        I have used a program called Pixel Fixer which can learn a profile of each of your cameras and directly process out the dead pixels in the RAW files in a batch operation.

        It hasn't been updated in about 8 years so it doesn't necessarily support anything too recent, and is not open source, but I had some success with using it.

        • causi 3 years ago

          Does it support JPEGs? Most of the cameras in question don't support RAW output, but the problems are individual stuck pixels that are identical in every shot.

          • dwringer 3 years ago

            Unfortunately it looks like the author only listed the possibility of future JPEG support, but it was not implemented.

            • causi 3 years ago

              Dang. The only thing I've found that comes close is a program called PixelZap, but it hasn't been updated since 2002 and the functionality to purchase the paid version probably just sends your credit info to Russia.

    • mtrycz2 3 years ago

      My old DSLR has a RAW+JPEG setting which will do both; I use it all the time.

      • hef19898 3 years ago

        My favorite setting as well: I can edit the best shots from RAW but can share any shot as a JPEG. And those are usually not too bad anyway; sometimes I just stick with the camera JPEG. The only exception, rare enough, is continuous high-speed shooting. Even in JPEG Fine the buffer never runs full; in RAW it does after approx. 10-13 shots.

      • dusted 3 years ago

        Yea, I also do RAW+JPEG. Sometimes the JPEG turns out just fine and I won't bother with darktable; other times I use the JPEG as a sort of "what would the camera have done" reference - inspiration for the direction I could develop it in.

  • buildbot 3 years ago

    There is no reason to do this. If you want to apply the algorithms an iPhone uses, simply take a burst and post-process later for HDR or whatever. I remember being in middle school and messing around with Hugin and my Minolta bridge camera…

    People who value quick shots/edits and don't care about quality or editing things later don't mind an iPhone doing all this behind the scenes - but it is irreversible. The sort of error in the article would drive me and other photographers up the wall.

    Also, an iPhone has a CPU and ISP that outclass desktops from only a few years ago - camera manufacturers simply don't have the same compute available.

    On the other hand, some brands do provide interesting computational photography in their cameras at the very high end. Panasonic mirrorless full frame cameras have a pixel shift mode for super res/no bayer interpolation, with some ability to fix motion between steps. Phase One has frame averaging and dual exposure in their IQ4 digital backs, for sequential capture into a single frame and super high dynamic range respectively.

    • astrange 3 years ago

      There is no consumer software that can do what the iPhone does with its sensors.

      iPhone HDR produces an HDR image file. Consumer HDR apps do the opposite - they take an HDR raw and tone map an sRGB JPEG out of it.

      The only portable format that really supports HDR images is EXR, so if you're not generating that you're not getting it.

      (I don't think there's anything that can do deep fusion either, though obviously you need it a lot less.)

      • _aavaa_ 3 years ago

        I don't think that's true. Capture One introduced HDR merging in one of their recent releases [0].

        It does raw in, "raw" out: you get a DNG which is demosaiced. I have not yet looked at what it's doing under the hood, or how the result compares to EXR. (But in my experience, it seems to struggle with large exposure differences, even when on a tripod.)

        [0]: https://support.captureone.com/hc/en-us/articles/44100147302...

        • astrange 3 years ago

          I mean, sure, producing another raw is one way to do it, but I wouldn't really call that an image format.

          • _aavaa_ 3 years ago

            I have no idea why you wouldn't call it an image format.

            But either way, that's moving the goalposts. Here is consumer software doing what the iPhone does.

            If that's not to your liking, there is other software which produces 32-bit float TIFF images; those support an even higher dynamic range without the pain of EXR.

          • buildbot 3 years ago

            You are all over this thread trying to prove that iPhones somehow have a unique setup in the industry that no other camera can match, and people keep telling you how wrong you are. DNG is 100% an image format, and can be "raw" or completely processed depending on the layers internally. Educate yourself before making such blanket statements.

            • astrange 3 years ago

              I know what a DNG is. It's not a final delivery format; it's not compressed and I don't think eg gaming content pipelines are going to accept it for ingest either.

              (Also, macOS won't display them in HDR, but will display EXR files in HDR. Some trivia for you.)

              • buildbot 3 years ago

                -> It's not a final delivery format

                linear DNG?

                -> it's not compressed

                "Lossless and lossy compression (optional): DNG support optional lossless and (since version 1.4) also lossy compression.[36] The lossy compression losses are practically imperceptible in real world images"

                -> I don't think eg gaming content pipelines

                Why is this a goalpost even??? In what context? Photogrammetry? DNG works just fine.... Or the actual engine itself? found one in 30 seconds of searching: https://github.com/EQMG/Acid

                -> macOS won't display them in HDR

                Well, that seems like a macOS problem

                This is ridiculous at this point.

                • _aavaa_ 3 years ago

                  I actually now agree with them. I think they were simply doing a bad job explaining.

                  What I think they mean is that the iPhone camera app is the only one that takes an HDR photo of a scene and produces an HDR output image[0] that can be easily shared and viewed IN HDR on multiple devices.

                  I don't think that EXR is a particularly portable format (do phones even support viewing them in their native photo viewer/manager?) nor one with small file sizes.

                  The DNGs from Capture One or Lightroom have similar issues, they at least can be viewed on phones, but they do not show up as HDR images.

                  Other applications, e.g. Hugin, will produce an SDR image. It's based on an HDR image so it captures a wider dynamic range, but it does so by tone mapping back into an SDR image, rather than actually saving and showing the full dynamic range of the HDR image.

                  [0] I am not convinced that what the iPhone does with their heic images is really HDR, in the sense that the colors still seem to be stored in an SDR range.

              • _aavaa_ 3 years ago

                After this comment I get where you’re coming from.

                Do you consider the HEIC files from an iPhone HDR files rather than SDR, even though the extra brightness information is stored as a separate channel rather than extending the range of the RGB values?

      • sorenjan 3 years ago

        Hugin produces exr files from image stacks.

      • buildbot 3 years ago

        Sure, there's no Photoshop button, but does GitHub count? https://github.com/timothybrooks/hdr-plus

        It’s just an algorithm.

      • illiac786 3 years ago

        > iPhone HDR produces an HDR image file

        What file format is this? iPhone produces tone mapped HEIC, yes, but that’s not HDR as you pointed out.

        Or do you mean HDR video?

      • aoeusnth1 3 years ago

        Capture One creates .dng files, which are HDR raw equivalents when combining hdr from bracketed RAWs.

      • pizza 3 years ago

        How about JPEG XL?

    • SSLy 3 years ago

      >camera manufacturers simply don’t have the same compute available.

      what forbids them from buying the competitive SoCs from Qualcomm? Pride?

      • scriptproof 3 years ago

        They would have to rewrite their software entirely. There have been some attempts to use these Qualcomm processors in cameras (the Yi M1 for example) and the result was terrible, especially the autofocus.

      • astrange 3 years ago

        The much larger sensors on a dedicated camera take longer to read out / have physical shutters / are very power hungry, so doing bursts is just a worse tradeoff on a dedicated camera than on a phone.

        Also, the camera market is much smaller than the phone market. Also, most camera companies are Japanese and Japanese companies are generally speaking not as good at software as they could be. (Though this is getting less true.)

      • zirgs 3 years ago

        Power consumption is the main issue.

        Software bloat is another. Current digital cameras start up and are ready to shoot almost instantly.

        But it takes a while for a phone to boot up.

        Photographers don't want that.

      • joking 3 years ago

        They don't really even have to do that; they could use the iPhone SoC, but the pairing between a phone SoC and a camera is not optimal.

        • buildbot 3 years ago

          Huge amounts of power and data - the power cost of sending 48MP over a PCB vs. over WiFi differs by roughly 11 orders of magnitude: about 40 pJ vs. 2.5 J!

      • _visgean 3 years ago

        I think the battery consumption is also quite bad.

  • jlarocco 3 years ago

    I'm not exactly sure what you're asking for here. In my experience (and as seen in the article) the image processing in most digital cameras will already blow an iPhone out of the water.

    As far as I know, iPhone and Android aren't doing anything that isn't already done by digital cameras. They ramp up the settings on things like noise reduction and sharpness to balance out their tiny sensors, but it's more or less the same algorithms that the cameras are using.

    Good cameras even allow you to tweak the settings and control RAW conversion right on the camera. The author could have botched the noise reduction on his Fujifilm to match the iPhone if he wanted to. [0]

    [0] https://www.jmpeltier.com/fujifilm-in-camera-raw-converter/

    • xgbi 3 years ago

      I think you’re missing what makes iPhone photos look so good; they do much more than just ramping up the sliders on sharpness.

      When you take sunlit panoramas, for instance, the iPhone will auto-bracket and perform HDR treatment, and it's fantastic: you can see the ground, the blue sky, and the clouds. You can't do that easily with a DSLR, certainly not the way a phone is doing it right now, integrating the last x frames and modulating the digital shutter to capture multiple exposures.

      Same when you take night shots, where the iPhone is integrating 1-2 s of sensor frames and compensating for movement with the accelerometer. You can take handheld shots of the sky, which is completely out of the question with a standard DSLR. Maybe Sony can do that with a very fast lens, in-body stabilization, and a very high ISO, but that's 10k worth of equipment right there.

      I still like my DSLR's "real" bokeh and photos, but some of the innovation in phone photography should spill over into DSLRs/mirrorless.

      • morsch 3 years ago

        > You can’t do that easily with a dslr, certainly not like a phone is doing right now, which is integrating the last x frames and modulating the digital shutter to capture multiple exposures.

        Standalone cameras have been doing this for at least a decade. The same is true for nighttime exposures. That's not to say that the iPhone doesn't do it better, maybe it does, it certainly can throw more processing power at it.

        https://www.imaging-resource.com/PRODS/sony-a3000/sony-a3000...

        • matwood 3 years ago

          If you're talking about auto-bracketing (AB), yes. But phones are doing more of an AB+. Are standalone cameras aligning all the pictures in-camera? Are they constantly reading the sensor so that when you hit the button it takes the last series of shots, aligns them, and merges them?

          I've been using AB for 20ish years, but it has never been as automatic as on a phone. It could certainly be that I haven't bought the right standalone camera, though.

        • godsfshrmn 3 years ago

          My mirrorless Olympus does this as well. Set the number of f-stops, exposures, etc., all in-camera.

          • CWuestefeld 3 years ago

            Yup. I was recently doing some night sky shooting with my E-M1.3. Not only does it do what you say, but when using its "super exposure" it'll take multiple shots, moving its sensor (using its in-body stabilization) by sub-pixel amounts. This produces, from a 15Mp sensor, a 50Mp or 80Mp result - and as a side effect, the stack of images results in weighted averaging of the pixels, thus doing a fantastic job of noise suppression. And yes, it auto-aligns, at least in the 50Mp mode.

          • fsloth 3 years ago

            It composes the hdr as well?

            • leviathant 3 years ago

              The Canon T2i that I bought in 2010 sure did.

              • fsloth 3 years ago

                All the references I found just refer to auto-bracketing (i.e. shooting the images with different exposures), but I can't find any mention of in-camera composition of the final HDR - all sources just tell you to use external software for the HDR composition. I'm not doubting you, but this sounds like an undermarketed, obscure feature - can you find any references for it?

      • prmoustache 3 years ago

        Most cameras have an hdr mode nowadays.

        • astrange 3 years ago

          It's a lie though; it produces an SDR image.

          Cameras have HDR video capability, but not HDR stills except for the very latest models.

    • jack1243star 3 years ago

      > As far as I know, iPhone and Android aren't doing anything that isn't already done by digital cameras.

      It's the opposite: most digital cameras aren't doing enough compared to smartphones, which offer minimal control and look very good at 2M pixels. A proper camera with default settings looks bland in the hands of an amateur (too little saturation) and handles dynamic range poorly (without bracketing or post-processing). Plus it's never going to beat smartphones at connectivity (SNS) or portability.

      Most people aren't going to spend time processing RAW files, or try to get in-camera picture control/film simulation etc. to look pleasing like photographers do.

      • jlarocco 3 years ago

        I don't buy it.

        Which cameras have worse "Auto" settings than a phone? I have two Fujifilms, a Panasonic, and a Nikon, and they all take better photos on "Auto" than my iPhone or any of my friends' Androids.

        And who buys a camera in this day and age but doesn't want to play with the settings? Point and shoot users stick with their phones now.

        • jsmith99 3 years ago

          It's not about better or worse, it's about trade offs. At small social media sizes pictures will look better from a phone than from an enthusiast camera.

          Sharpening is a destructive process, and the appropriate amount depends on how and at what size you are using the image. It can also easily be applied anytime later. So enthusiast cameras apply minimal sharpening and look under-sharpened at small sizes, while phone cameras apply heavy sharpening but show artefacts if you zoom into them.

          Same with saturation and 'ai' tricks like skin smoothing: users of enthusiast cameras often have different preferences.

          • astrange 3 years ago

            > Same with saturation and 'ai' tricks like skin smoothing: users of enthusiast cameras often have different preferences.

            Note, phone cameras don't do skin smoothing, people just think they do. (Unless you get a beauty camera app.)

            The issue is that almost any processing has the effect of removing noise, which is the same as skin smoothing. If you're taking 2-3 images and merging them, it'll remove the noise even if you do something smarter than averaging the pixels - so you actually have to add steps to measure the noise level and add it back.

            The same issue comes up in video compression, where there's dedicated film grain metadata since almost any compression kills the real thing.
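
            (As a toy illustration of that "measure the noise and add it back" step: a sketch assuming two already-aligned frames of the same exposure with roughly Gaussian noise, which is far cruder than what real pipelines do.)

              # Average two aligned frames (halving the noise variance), then add
              # synthetic grain back at a level estimated from the frame difference.
              import numpy as np

              def merge_with_grain(a, b, grain=1.0):
                  a, b = a.astype(np.float64), b.astype(np.float64)
                  sigma = np.std(a - b) / np.sqrt(2.0)   # per-frame noise, if frames differ only by noise
                  merged = (a + b) / 2.0                 # residual noise is sigma / sqrt(2)
                  target = grain * sigma                 # how grainy the result should look
                  extra = np.sqrt(max(target**2 - sigma**2 / 2.0, 0.0))
                  merged += np.random.normal(0.0, extra, merged.shape)
                  return np.clip(merged, 0, 255)         # assumes 8-bit input range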

        • leokennis 3 years ago

          > Which cameras have worse "Auto" settings than a phone? I have two Fujifilms, a Panasonic, and a Nikon, and they all take better photos on "Auto" than my iPhone or any of my friends' Androids.

          Better or worse is of course entirely subjective. Maybe your real cameras take more neutral pictures, but people these days might expect less neutral pictures: more saturated, more HDR, more contrast.

          The goal of photography is often not to look "real" or "neutral". For example, Ansel Adams took beautiful photos...but very stylized.

          > And who buys a camera in this day and age but doesn't want to play with the settings? Point and shoot users stick with their phones now.

          This point I tend to agree with. Imagine a device that combines the camera/photos interface of a smartphone (full screen viewfinder, touch controls, camera apps, on-device editing etc.) with the hardware of a camera (larger body allowing for bigger sensors, more lenses etc.). Basically the specs of a DSLR with an interface for the masses. Who would carry this with them? It sounds a bit like the novelty "dumb phones" that exist nowadays: very interesting, but a way too small market.

        • rimliu 3 years ago

          Pretty much all of them are worse. There is no magic with respect to dynamic range on a DSLR (or mirrorless), so if the camera does not do HDR processing the result will be worse than a smartphone's.

          • astrange 3 years ago

            DSLR/mirrorless sensors have several stops more dynamic range than a phone (don't know exact numbers), so that's not necessarily true. The phone trick of taking multiple images doesn't always work if there's eg rapid motion.

          • buildbot 3 years ago

            Dead wrong. There are cameras that have 15 stops of native dynamic range.

        • stephencanon 3 years ago

          Our Fuji's defaults are pretty good, but the iPhone's exposure and white balance on auto are light-years ahead of our Nikon gear.

          • astrange 3 years ago

            Fujifilm is deservedly known for being very good at color.

            https://www.thephoblographer.com/2018/01/04/film-emulsion-re...

            Although their digital cameras' emulation modes of their old film stock are kinda lame. They also make those weird non-Bayer-format cameras which I'm not sure are worth the price.

            • ghaff 3 years ago

              I know it was the fave of a lot of nature photographers and the like, but I never liked the greens in Velvia in particular. If I were going to use a slow film, I'd use Kodachrome 25--which had about the same working ISO. (That said, at some point, Kodak improved the look of 100 ISO Ektachrome and that's pretty much all I used from then on.)

    • cycomanic 3 years ago

      > I'm not exactly sure what you're asking for here. In my experience (and as seen in the article) the image processing in most digital cameras will already blow an iPhone out of the water.

      > As far as I know, iPhone and Android aren't doing anything that isn't already done by digital cameras. They ramp up the settings on things like noise reduction and sharpness to balance out their tiny sensors, but it's more or less the same algorithms that the cameras are using.

      That is not true AFAIK; phone cameras use things like "AI" enhancement, among other things. The fake bokeh is one example of this, but there are others, like "AI sharpening" etc. Also, the amount of NR is typically much higher in a phone camera.

      You are correct that they typically also employ a lot more range compression and contrast enhancement to make pictures "pop" more, but you can often achieve similar things by adjusting camera settings, especially on lower-grade DSLRs.

      • prmoustache 3 years ago

        What they typically do is create a fake, ugly bokeh and oversaturate pictures. That doesn't make better photos in general.

        That makes mostly better thumbnails. Perfect for instagram but shitty for everything else.

    • rocqua 3 years ago

      Not OP, but: the ability to very quickly bracket photos, say at 30 fps for 10 frames, and then use software to process that data the way a smartphone does, but with my own control.

      Let me figure out when a scene is static enough to combine multiple shifted pictures for better SNR. Ideally, let me enable that feature on my camera. Combine that feature with image stabilization, and let me tweak the conversion from multiple images after the fact.
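
      (A crude desktop version of that "merge only when static enough" idea, sketched with OpenCV; the motion threshold and the 8-bit, equal-exposure assumptions are arbitrary simplifications, not anyone's actual pipeline.)

        # Average an aligned burst for lower noise, but only when the scene looks
        # static; otherwise fall back to a single frame instead of smearing motion.
        import cv2
        import numpy as np

        def merge_burst(frames, max_mean_diff=6.0):
            aligned = [f.copy() for f in frames]             # 8-bit frames, same size
            cv2.createAlignMTB().process(aligned, aligned)   # small translational realignment
            stack = np.stack(aligned).astype(np.float32)
            # Motion check: mean absolute difference of each frame against the first.
            diffs = np.mean(np.abs(stack[1:] - stack[0]), axis=tuple(range(1, stack.ndim)))
            if diffs.max() > max_mean_diff:                  # subject moved too much: don't merge
                return frames[0]
            return np.clip(stack.mean(axis=0), 0, 255).astype(np.uint8)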

    • wrldos 3 years ago

      Yep. I’ve got an iPhone 13 Pro. Sometimes it absolutely mutates images to the point they are unusable.

      Bought a Nikon z50 because this was annoying me. Shooting JPEGs and it does a vastly better job and it will still go in my pocket with the 28mm f/2.8 or the 16-50 kit lens. The iPhone is embarrassingly bad in comparison.

  • cultofmetatron 3 years ago

    It's definitely done! And what's better, you can do far more with a proper raw file from a large sensor. The trick is that the workflows aren't as simple as having it happen automatically.

    In my workflow I use DxO PhotoLab, which has excellent facilities for really bringing out a single image.

    Sometimes I wanna go hardcore with a landscape and take multiple shots manually, then blend them together using software like Aurora HDR (example: https://www.flickr.com/photos/193526747@N04/52219385902/ ). That image is 5 stacked images combined using a bit of computational photography and adjusted for saturation.

    If you want something that will get you decent results fast and works with raws, you can also go with Luminar: https://skylum.com/luminar

    An iPhone image looks better at first snap, but my Z5's images blow it out of the water once I give them some love in the edit room.

  • mikewarot 3 years ago

    Long ago I did some experiments with super-resolution[1], a technique of aligning images to sub-pixel precision and stacking them; the signal-to-noise ratio improves roughly with the square root of the number of exposures.

    I was using Hugin to align the images from my Nikon DSLR. I found that you can get to at least double the resolution in both dimensions fairly quickly, but you'll never get to "enhance" like in TV shows.

    [1] https://en.wikipedia.org/wiki/Super-resolution_imaging
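
    (In the same spirit, a bare-bones stack-at-2x loop might look like the sketch below; it assumes grayscale frames that differ only by small translations, which is exactly the fragile assumption the article complains about.)

      # Naive multi-frame super-resolution: upsample each frame, estimate its
      # sub-pixel shift against a reference, realign, and average the stack.
      import numpy as np
      from scipy.ndimage import shift as nd_shift, zoom
      from skimage.registration import phase_cross_correlation

      def super_resolve(frames, factor=2):
          up = [zoom(f.astype(np.float64), factor, order=3) for f in frames]
          ref = up[0]
          acc = ref.copy()
          for frame in up[1:]:
              offset, _, _ = phase_cross_correlation(ref, frame, upsample_factor=20)
              acc += nd_shift(frame, offset)   # shift back onto the reference grid
          return acc / len(up)                 # extra detail only appears if the shifts
                                               # were genuinely sub-pixel and varied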

    • kuschku 3 years ago

      Modern cameras use the sensor-shift image stabilization mechanism for pixel shift photography, where you can get up to 9× the normal sensor resolution (by moving the sensor slightly).

      • astrange 3 years ago

        Of course, for this to work nothing else about the camera or the scene can move, or else you need a lot of smarts to realign the image and ignore the parts of the image that aren't registering anymore. This is typically pretty difficult for an outdoor photo.

  • pkulak 3 years ago

    I've got an Olympus M43 camera, which has an image sensor that's both significantly smaller than full frame, and significantly larger than a phone. I've found that I can do absolutely amazing things to the RAW image later. There are these processes now that will use ML to remove noise, and it blows my mind every time. I can shoot at ISO 6400 now without even thinking about it. I used to cringe when I had to hit 1600.

    Olympus (OM System?) should build this stuff into their cameras. I used to have a bit of an inkling to someday "upgrade" to full frame, but not any more.

    • Tarrosion 3 years ago

      I'm also shooting on an Olympus M43, several years old. Any suggestions for software tools you've found helpful?

      • pkulak 3 years ago

        Yeah, I didn't want to sound like a shill, but DxO is so good I run it in a virtual machine on my Linux box. Its "DeepPRIME" noise reduction is what I was talking about. There are others, even some standalone, but it's nice having it all in one place, and DeepPRIME does seem to be the best.

  • irthomasthomas 3 years ago

    Most already offer that in their various automatic modes. We don't like it because whatever can be done computationally on a camera, can be done 10x better on a real computer with more hardware and finer controls.

  • NegativeK 3 years ago

    It's going to take ILCs switching to sensors like the Z9's: no physical shutter, very high frame rates at full resolution.

    I'd be interested in computational photography on ILCs if they allowed tuning it -- with phones, it comes with a bunch of other stylistic choices, and I want control over that stuff.

  • haswell 3 years ago

    I'd love to see this on "adventure" cameras like the Olympus OM-D line.

    This would also pair well with Fujifilm’s lineup which already includes camera features focused on in-camera processing.

    • rimliu 3 years ago

      I think Olympus was a pioneer, at least in some aspects, in computational photography. At least "live composite" and "focus stacking" were not very common at the time Olympus introduced them.

  • account42 3 years ago

    Pretty much all RAW processing includes computational photography - at the very least debayering, but likely more.

    • astrange 3 years ago

      Taking the RAW involves it too - autofocus and auto ISO.

      And developing the raw involves picking a white balance and several kinds of lens correction.

  • throwanem 3 years ago

    > I would like to see computational photography applied to raw images from DSLRs and MILCs with APS-C and larger sensors.

    Wash your mouth out with soap! I did not spend five thousand dollars on a D850-based macro rig to have it produce results no better in quality than what I can get from my phone.

jlarocco 3 years ago

I love my big, heavy Nikon DSLR, and there's really no comparing the images it takes with the ones from my iPhone. Especially in "tricky" lighting situations they're not even in the same ballpark.

That said, there can be just as much (or more) "computational photography" going on with a digital camera as there is with a modern phone, the difference is that cameras and processing software give control to the user, and phones typically do not.

  • wombat_trouble 3 years ago

    "Real" cameras do a lot of postprocessing too, but it's generally oriented at producing faithful results. They might remove unambiguous and correctable issues such as vignetting or lens distortion, but they don't cross the line of inventing new details to make the photo look good.

    Computational photography techniques on smartphones, on the other hand, were always designed around squishy "user perception" goals to make photos look impressive, details be damned.

    • jlarocco 3 years ago

      I didn't see any invented "new details" in the article's iPhone photos. Phones have small sensors and crap lenses, so they ramp up noise reduction and sharpness to make up for it. Turn up the ISO and max out the NR on the Fujifilm and the results would be nearly as bad.

      • wyager 3 years ago

        The text looks qualitatively very little like the true text. Almost all details of the text (including shape of strokes!) are hallucinated by the iPhone.

        I could import the X-T5 photo into lightroom or whatever and crank NR all the way up, and I don't think it would look anything like the iPhone image. Also, the less-processed image on the iPhone (which you see for a split second) looked fine, so there wasn't enough noise to justify this level of "correction".

        My guess is the iPhone got confused by the texture of the anodized aluminum.

      • wombat_trouble 3 years ago

        I see invented texture and layering here: https://yager.io/comp/mi.jpeg

        • jefftk 3 years ago

          It's pretty hard to tell without knowing what sort of image was input to the computational photography process

          • MichaelZuo 3 years ago

            The actual image is right below it in the post.

            • jefftk 3 years ago

              That's the image captured by a different camera (it's labeled "Fujifilm X-T5" and is from a different angle). The input I'm interested here is what the phone camera's sensor received prior to processing.

              • MichaelZuo 3 years ago

                He mentions that the phone did display something close to that for a brief moment.

        • kaba0 3 years ago

          That seems just like edge sharpening.

    • wonnage 3 years ago

      At the end of the day you're always "inventing new details" to turn sensor data into an image. Most algorithms involve edge detection and predicting color correlations, and you'll also run into cases where false details are added and reality is changed to fit the priors.

      One can find pathological cases for traditional cameras too - moire is a common problem, Fuji X-Trans sensors historically had a watercolor/worms effect particularly in greenery, etc.

  • kaba0 3 years ago

    Do digital cameras really do anywhere near the same computations? Like, they usually have some low-end, shitty microprocessor at most, while an iPhone, for example, has an insanely powerful CPU. Sure, plenty of this processing happens in the ISP, but surely not everything.

    • donkeyd 3 years ago

      No, they don't. It's not just the CPU, it's also that phones will often take bracketed exposures and use sensor data to align them correctly and automatically create an HDR image, for example. Either that or taking lots of short exposures and stacking them. It's not just in post processing, the magic already happens while taking the image itself.

      People who think digital cameras do anything close to what modern phones do don't have a clue. They're the opposite of 'Phones can do anything a DSLR can' people and they are just as wrong.
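
      (For reference, the align-and-fuse part of what the parent describes can be approximated on a desktop with OpenCV. This is only a sketch, not what any phone ships: the filenames are made up, the alignment is image-based rather than gyro-assisted, and Mertens exposure fusion outputs a display-ready, SDR-looking result rather than a true HDR file.)

        # Align a -2/0/+2 EV bracket, then fuse it. Mertens fusion needs no
        # exposure times and yields a float image roughly in [0, 1].
        import cv2
        import numpy as np

        shots = [cv2.imread(name) for name in ("ev_minus2.jpg", "ev_0.jpg", "ev_plus2.jpg")]
        cv2.createAlignMTB().process(shots, shots)      # compensate small hand shake
        fused = cv2.createMergeMertens().process(shots)
        cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype(np.uint8))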

  • softfalcon 3 years ago

    I also love my big heavy Canon DSLR. Can’t quite describe why it gives me so much joy, but I keep it around me at all times. I just love taking photos.

    The quality difference is also very obvious compared to my phone even though my camera is easily 8 years old.

    • rimliu 3 years ago

      I liked my not-that-big Nikons. Then I realised that I did not take as many photos as I could, because I couldn't be bothered to take the camera and the quite heavy 70-200/2.8 and 24-70/2.8 with me. So I sold all my Nikon gear and, instead of getting my dream D850, got myself an Olympus. So happy I did this. Turns out you don't need FF, and M4/3 has a gazillion excellent, small, and lightweight lenses available.

      • matwood 3 years ago

        Nikon is finally releasing some lightweight Z primes. The 28mm and 40mm are great and make my Z5 a very portable FF solution.

Llamamoe 3 years ago

I'm surprised the author is unfamiliar with Google Camera and its super-resolution features[1,2], which use genuinely clever algorithms to push digital photography beyond what would be physically possible to get out of a naive set of HDR exposures, in terms of both resolution and dynamic range.

It's literal magic.

[1] https://ai.googleblog.com/2018/10/see-better-and-further-wit...

[2] https://petapixel.com/2019/05/28/how-googles-handheld-multi-...

  • datagram 3 years ago

    The author spends a whole paragraph talking about this category of techniques:

    > Slightly more objectionable, but still mostly reasonable, examples of computational photography are those which try to make more creative use of available information. For example, by stitching together multiple dark images to try to make a brighter one. (Dedicated cameras tend to have better-quality but conceptually similar options like long exposures with physical IS.) However, we are starting to introduce the core sin of modern computational photography: imposing a prior on the image contents. In particular, when we do something like stitch multiple images together, we are making an assumption: the contents of the image have moved only in a predictable way in between frames. If you’re taking a picture of a dark subject that is also moving multiple pixels per frame, the camera can’t just straightforwardly stitch the photos together - it has to either make some assumptions about what the subject is doing, or accept a blurry image.

    Their point is that it's not magic; these techniques rely on assumptions about the subject being photographed. As soon as those assumptions no longer hold, you start getting weird outputs.

  • smusamashah 3 years ago

    I have used 3 Pixels and never seen any post-processing as bad as what the author shows. I never thought an iPhone's camera could do bad post-processing like this.

    I've seen this in cheap point-and-shoot cameras and cheap Chinese phones, though.

indianmouse 3 years ago

Smartphone cameras have improved a lot in recent years, but given their limitations they cannot compete with or match a full-frame sensor. The size of the sensor and the optics play a major role in the final image quality, and one can only do so much with computational photography or whatever other method.

iPhone photography and videography especially are always overrated by the fanbois and some of the "professionals". While it might look good on "some" pictures with the heavy post-processing, it just doesn't have any detail. It might appeal just fine for a 100% view of the picture as-is, but even the slightest post-processing or editing done on the output pictures ruins them.

One has to depend upon what the developer of the application or the manufacturer thinks is the right picture (and who the hell are they to decide what my photo should look like?) and most of the time they are terribly wrong.

Apple is just overrated, and for that matter so are some of the Androids.

Raw pics from full-frame sensors hold the fort and will continue to hold it for a long time to come, unless phones match DSLRs in sensor size and optics size. Until then, "computational photography" will make pictures look terrible and dictate how they have to look.

I see a lot of comments where folks talk about RAW. But seriously, how does it matter for any normal user who tends to click a pic using the phone instead of a DSLR? If one is a photographer, it makes sense; otherwise it is an additional workflow to get it in RAW and do the post-processing on a computer... I'm just saying...

Thoughts welcome...

  • jiggawatts 3 years ago

    My $0.02 as someone with an expensive full-frame DSLR and the latest iPhone Pro:

    There are entire categories of image quality that only Apple seems to bother even trying to improve — and then they leapt past everyone.

    A few years ago if you wanted to make a HDR, 4K, 60 fps Dolby Vision wide-gamut video…

    That would have cost you. Tens of thousands on cameras, displays, and software. It would have been a serious undertaking involving a lot of “pro” tools and baroque workflows optimised for Hollywood movie production.

    With an iPhone I can just hit the record button and it does all of that for me, on the phone!

    Did you notice that it also does shake reduction? It’s staggeringly good, about the same as GoPro. Just setting up the stabilisation in DaVinci is half an hour of faffing around.

    The iPhone just has it “on”.

    I could go on.

    A challenge I give people: take a still photo that is wide-gamut, 10-bit, and HDR, and send it to someone else, by any method they prefer.

    Outside of the Apple ecosystem this is basically impossible in the general case. Everything everywhere is 8-bit SDR sRGB.

    Heck, even professional print shops still request files in sRGB!

    So yes, the software in the Apple ecosystem does have a big impact on the end result of photography.

    I can take a 14-bit dynamic range picture with my Nikon, but I can’t show it to anyone in that quality because of shitty Windows and Linux software, so what’s the point?

    I take pics with my Apple iPhone instead. All the people I want to show pictures to have iDevices, so I can share the full HDR quality that the phone camera is capable of, not some SDR version.

    • wyager 3 years ago

      Agreed, Apple is really pushing consumer display technology. It's tough to find FALD 1600 nit displays outside of the XDR or the MBP. Apple has also done a lot for consumer color science.

      However, as far as the iPhone producing HDR HEIF photos - as I recall from some brief reading, it seems like possibly an intentional choice from Apple to do this in an opaque, nonstandard way, so other image pipelines can't easily take advantage of it. I don't really want to give them credit for that.

      • jiggawatts 3 years ago

        They did it in a nonstandard way because there is no standard way!

        This is my point. Also the HEIF files are just a still frame of a standard video format in a standard container format.

        The lack of third party support is 100% laziness.

    • indianmouse 3 years ago

      Exactly as it was explained. People who speak French or German (I'm just referring to the language, no offense please) can only talk to people who know that language. While the majority is English-speaking (hypothetically!), it might make sense to speak that instead. And also, whether 10-bit or 8-bit, the sensor size and capabilities matter. The perception of the iDevices (the displays, the sensors, etc.) is seriously overrated; while I may not be able to articulate it technically, it's just 0.00001% of the population who might be using it in the intended way and might actually require that format.

      When it comes out of the ecosystem, it has to understand and speak English, no matter how good it might be in French or German.

      So the argument is pretty subjective, and there is no point in continuing, as no one knows what is under the hood and how it appeals to the eyes. It's subjective, and it may or may not have all the required details to survive outside of the ecosystem. It's what one calls vendor lock-in: one needs the whole iDevice and iSoftware ecosystem to function and survive in the iWorld. And that world is mostly controlled and directed by the company!

      This might look like too much of a deviation from the topic, but it is how the iDevices are portrayed to the world and how the iFanbois take it. It is just overrated.

      • jiggawatts 3 years ago

        It's not just overrated fanboyism. Here's a reviewer that shows that a modern iPhone's display quality is very directly comparable to a $35,000 Sony mastering monitor: https://www.youtube.com/watch?v=n_czpXW3yKE

        That's not just insanely good, but it "just works" out of the box. No need to buy a HDR-capable colorimeter, buy separate HDR-capable calibration software for it, and then have to deal with the software workflow around this.

        Just take a picture, share, done. All your friends with iDevices can immediately view your HDR photo or video, with calibrated quality comparable to professionally mastered content.

        The software (the "computational" part of photography) is a complex, end-to-end thing that is absolutely required to utilise the full capability of displays and sensors.

        Apple invested a ton of money into this. They licensed Dolby Vision, and use all sorts of AI-driven automatic picture tuning software to squeeze the most quality possible out of the hardware.

        PS: The way I determine if a display has correct calibration is to load a test image on it, load the same test image on my iPhone, and then hold the phone next to the display! I know the phone is going to be correct, whereas everything else is virtually guaranteed to be atrociously bad.

      • nl 3 years ago

        This isn't true.

        I've done a lot of action photography. The stabilization of video really is pretty close to a GoPro (and why not - it's software). The ability to do 4K video is much better than most cameras.

        > One needs all idevice and isoftware ecosystem to function and survive in the iworld. And that world is mostly controlled and directed by the company!

        I don't understand this. You take the photos or movies off the phone and you can use them however you like. There's no particular vendor lock-in in this aspect, and certainly no more than in the camera world, where you choose which of the Sony/Canon/Nikon ecosystems you want to live in.

        • jiggawatts 3 years ago

          To be fair, the second you export a picture from an iDevice, it auto-converts it back to JPG with SDR sRGB, to make it compatible with the terrible hardware and software in the rest of the world.

          You can upload HDR videos[1] however. I put mine up on YouTube and send Android users the link. This doesn't preserve 100% of the quality, because YouTube doesn't support Dolby Vision and hence is forced to recompress the content into a HDR10 stream. Nonetheless, you get 4K 60fps HDR video that "just works" and generally looks good on any high-end device such as flagship Samsung phone.

          [1] Video formats in general have left still imaging in the dust. It's absurd, but the best approach for sharing high quality photography is to encode the still images into a video, like a slide show, and then share that. How nuts is that!?

          • zimpenfish 3 years ago

            > it auto-converts it back to JPG with SDR sRGB,

            That depends on your settings - Settings → Photos → Transfer to Mac or PC allows "Automatic" (guess whether HEIC or JPG is best) and "Keep Originals" (whatever the photo was shot as). I keep mine on "Automatic" because it sends HEIC to my Mac but JPG to family members who aren't up-to-date with phones etc.

    • fleddr 3 years ago

      "Everything everywhere is 8-bit SDR sRGB."

      Right, because a 10 bit wide gamut desktop display isn't affordable to the vast majority of people?

  • astrange 3 years ago

    > Raw pics from a full frame sensors hold the fort and will continue to hold for a longtime to come unless the phones match DSLR in terms of sensor size and optics size.

    "Full frame" cameras do not have the best image quality, and don't even have the best image quality for their price. (eg used medium format film cameras are cheaper.)

    They're just the best cameras people have heard of. If you're doing product photography you might want a Phase One instead.

    It doesn't matter much though; lighting and lens quality are what really make a photo even in a controlled environment.

    • foldr 3 years ago

      > eg used medium format film cameras are cheaper

      You'd be hard pressed to get better quality from a scanned medium format negative than from a modern 60MP full frame sensor. You might just get there if you use a drum scanner, but any faster and more practical scanning process won't get you there.

  • KineticLensman 3 years ago

    > Especially the iPhone photography and videography is always overrated by the fanbois

    I love my Nikon but for video I need a tripod to eliminate camera shake. My iPhone gives me stable images every time.

matheweis 3 years ago

This is not a new problem and can sometimes have disastrous results.

10 years or so ago a variation of this made headlines all over as certain Xerox Workcentres were transposing numbers during scans, due to a compression algorithm that was sometimes matching a different number than the one actually scanned.

https://www.theregister.com/2013/08/06/xerox_copier_flaw_mea...

neilpanchal 3 years ago

> Ultimately, we are still beholden to the pigeonhole principle, and we cannot create information out of thin air.

*Looks up pigeonhole principle*: https://en.wikipedia.org/wiki/Pigeonhole_principle

> If 5 pigeons occupy 4 holes, then there must be some hole with at least 2 pigeons.

This is so obvious.

> This seemingly obvious statement, a type of counting argument, can be used to demonstrate possibly unexpected results. For example, given that the population of London is greater than the maximum number of hairs that can be present on a human's head, then the pigeonhole principle requires that there must be at least two people in London who have the same number of hairs on their heads.

Oh...
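
(Formally, and with rough numbers for the London example; the population and hair-count figures below are just commonly cited ballpark values, not from the source.)

  Pigeonhole principle: if $n$ objects are placed into $m$ boxes and $n > m$, some box contains at least $\lceil n/m \rceil$ objects.
  With $n \approx 8.8 \times 10^{6}$ Londoners and $m \approx 150001$ possible hair counts (0 through about 150000),
  $\lceil n/m \rceil = \lceil 8.8 \times 10^{6} / 150001 \rceil = 59$,
  so in fact dozens of Londoners, not merely two, must share a hair count.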

kelsolaar 3 years ago

I always shoot in RAW (using the Lightroom iPhone app) to make sure that these kinds of defects never occur. Noise is generally preferable to, and more acceptable than, the disaster trail left by denoising et al. At least you can handle it yourself in a way that pleases you instead of having a ruined photograph.

  • jillesvangurp 3 years ago

    With the Pixel 6 (and most Android phones) you can set it up to do both, so you have the "nice" version generated by the phone and a DNG raw file to work with. I have that set up along with Syncthing to deliver the raw photos to my laptop (pro tip: this is super handy).

    The new iPhone and the Pixel 6 both use the same trick: they have a 50-megapixel sensor (probably the same one, likely a Sony sensor) that produces 12.5-megapixel raw photos with the information of four pixels combined. So the DNG I get from my phone has already had some processing done to it, but not a lot. Also worth noting that both phones have multiple lenses with different focal lengths and sensors, so it matters a lot which one you use. You'd typically control this via the camera app, with its different modes and zoom levels. I'm not sure if it uses exposures from all sensors to calculate a better raw, but that would not surprise me.

    In terms of noise, the image quality is actually very good. I've done some night-time photography with both the Pixel 6 and my Fuji X-T30, which is an entry-level mirrorless camera. The Fuji has better dynamic range and it shows in the dark. But the noise levels are actually pretty good for a phone camera: very usable results with some simple post-processing, especially compared to my previous Android phone (a Nokia 7 Plus), which was noisy even in daylight. Mostly, doing raw edits is not worth the effort, but it's nice to have the option. The phone does a decent job of processing and mostly gets it right when it comes to tone mapping and noise reduction. When it matters, I prefer the Fuji. But sometimes the phone is all you have, and you just take a quick photo and it's fine.

    A high-end full-frame camera will get you more and better pixels and more detail. Even an older entry-level DSLR will generally run circles around smartphone sensors. And that's just the sensor and camera; the real reason to use these is the wide variety of lenses and the level of control over the optics that they provide. In-phone bokeh is a nice gimmick, but it's a fake effect compared to a quality lens. Likewise, you can't really fake the look you get with a good portrait lens (the effect that things in the background seem bigger). Phone lenses have a fixed focal length and generally not that much aperture range. There's a reason people pay large amounts of money to own good lenses: they are really nice to use, they deliver a great photo without a lot of digital trickery, and they are optimized for different types of photography. There is no one-size-fits-all lens.
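
    (The "four pixels combined" trick mentioned earlier in this comment is, at its simplest, 2×2 binning; the sketch below ignores the colour filter array, which real quad-Bayer sensors have to respect on-chip.)

      # Simplified 2x2 pixel binning: e.g. a 4000x6000 single-channel "raw"
      # becomes 2000x3000, trading resolution for lower per-pixel noise
      # (each output pixel averages four samples).
      import numpy as np

      def bin2x2(raw):
          h, w = raw.shape
          h, w = h - h % 2, w - w % 2              # drop any odd edge row/column
          return raw[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))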

  • ecpottinger 3 years ago

    That is what I was thinking. The poster said the photo looked okay at the moment they took the picture, and then the phone's processing took over and they got junk.

    I wonder how hard it is to take 'RAW' photos without adding an app first.

    • nomel 3 years ago

      You can use the camera app. It’s all in the settings menu for Pro models: https://support.apple.com/guide/iphone/take-apple-proraw-pho...

      This is a “there’s already a solution, but the average consumer wouldn’t know about it, because the defaults are made for them” type of problem.

      One could claim that it's a UI problem, and should be exposed in the Camera app. This may be true, but the files are 10 to 12 times larger, with a real "quality hit", as perceived by the average user, for overall aesthetics. I personally think it should be in the settings menu. It's not something you would want enabled without understanding and intent.

      It's a little frustrating that Apple added this feature, for this exact kind of thing, and they're, inadvertently or not, getting a little dumped on due to lack of knowledge/research.

      These features (Google also attempted to standardize it, not sure they succeeded) were a big deal in the photography world.

      • ecpottinger 3 years ago

        Sorry, I do not have an iPhone. How easy is the 'RAW' feature to find?

        I understand what you mean about most users not needing such a feature. But from supporting computers, I'm surprised how many program features users don't know exist, not just because they don't use them but also because of how hard it is to get at some features.

        Personally, I've seen too many programs where you need to know to turn OFF certain functions before you can enable some other feature. Users often never find what they needed because it is so hard to enable that feature without knowing they need to turn something else OFF.

        • interpol_p 3 years ago

          To find the RAW feature you need to do two things:

          Go to Settings -> Camera -> toggle Apple ProRAW on

          Then in the Camera app there will be a "RAW" button that you can tap on to enable RAW capture

          Third party camera apps like Halide (https://halide.cam) focus on just using the RAW capabilities by default

          • eigenhombre 3 years ago

            +1 for Halide, which I didn't know about. Apple doesn't give non-Pro 13s access to RAW images (that I know of), but Halide does out of the box. The camera AI was driving me crazy on my 13 mini; now I have a solution. Are there other apps of similar quality? In particular, I'd love an excellent BW app which is (at least relatively) AI-free.

          • eproxus 3 years ago

            The iPhone RAW mode still does quite heavy processing though.

            Here’s a comparison I just made between the iPhone camera’s RAW and default modes, and Halide: https://i.ibb.co/XFMJVjz/9-DD734-F7-CA00-4-E53-85-C7-283315-...

  • astrostl 3 years ago

    ProRAW appears to not be as processed, but still processed: "Apple ProRAW combines the information of a standard RAW format along with iPhone image processing" [1]

    1: https://support.apple.com/en-gb/HT211965

andreareina 3 years ago

There’s some criticism about the event horizon photo of M87 because they had to do a lot of filling in based on a model of how black holes behave. IIRC they ran a hyperparameter search and picked the settings that were most consistent with the actual photons received.

  • poulpy123 3 years ago

    Interferometers usually don't produce an image directly; they take power and phase measurements in the frequency plane. If enough points are taken, an image can be "reconstructed"; if not, the scientists will do model fitting.

epicycles33 3 years ago

Interesting article. I think the real question, however, is whether imposing complex priors (say, driven by a neural net) makes images better _on average_ even if it has some failure modes. My guess would be that a fairly weak prior trained on a diverse enough dataset would lead to better average image quality (as judged by everyday people in diverse scenarios), and that's why they are used.

  • wyager 3 years ago

    I think it could absolutely make images better on average, assuming their prior is at all representative. The question, though, is whether the expected benefit to the photographer in cases where it improves the image (usually somewhat small) outweighs the costs when it screws up (perhaps relatively large).

    Now, are most people going to notice that the iPhone wrecked the text on their subject? Probably not. But they probably also wouldn't notice if the model wasn't applied to the image at all. In terms of how much they like the photo, the median consumer probably mostly benefits from AE, a bit of curve reshaping (using a smoothed histogram-CDF algorithm or something), and maybe some extra saturation.
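
    For the curious, here is a rough sketch of what a smoothed-histogram-CDF curve could look like. It's a generic histogram-equalization blend, not any vendor's actual pipeline, and the function and parameter names are made up:

        import numpy as np

        def cdf_tone_curve(img, strength=0.5, smooth=16):
            """Blend each pixel toward its smoothed-histogram-equalized value.

            `img` is a float array scaled to [0, 1]; strength=0 leaves it
            untouched, strength=1 applies full equalization."""
            hist, edges = np.histogram(img, bins=256, range=(0.0, 1.0))
            # Smooth the histogram so the curve doesn't chase every spike.
            kernel = np.ones(smooth) / smooth
            hist = np.convolve(hist.astype(float), kernel, mode="same")
            cdf = np.cumsum(hist)
            cdf /= cdf[-1]                            # normalize to [0, 1]
            equalized = np.interp(img, edges[:-1], cdf)
            return (1 - strength) * img + strength * equalized

        # A flat, low-contrast ramp gets stretched toward the full range.
        ramp = np.linspace(0.3, 0.6, 1000)
        out = cdf_tone_curve(ramp, strength=0.7)
        print(ramp.min(), ramp.max(), "->", round(out.min(), 3), round(out.max(), 3))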

worewood 3 years ago

There was a very good video from Marques Brownlee about the issue [1]. iPhone cameras are getting worse.

[1] https://youtu.be/88kd9tVwkH8

  • WaffleIronMaker 3 years ago

    And, in a similar vein, I enjoyed his video on The Best Smartphone Camera of 2022, where he applied a scientific ranking system taking 21.2 million votes from 600,000 users. I had previously assumed, due to Apple's reputation, that iPhones would take pictures that people like more, but that was not the case.

    [1] https://youtu.be/LQdjmGimh04

smusamashah 3 years ago

This looks like an iPhone post-processing problem more than anything else. I have used 3 Google Pixel phones (up to the Pixel 4) and none of them does bad post-processing. In fact, it improves the resolution of whatever you are taking a picture of: https://ai.googleblog.com/2018/10/see-better-and-further-wit...

I never saw any of these phones altering the details like in article.

  • shiftpgdn 3 years ago

    Google Pixel phones still do "deep fusion" processing like an iphone, but instead with Google's secret sauce. The photo your phone is showing you is what machine learning thinks the picture should look like, and not the picture you took.

    • smusamashah 3 years ago

      But unlike the examples in the article, whatever magic sauce Google uses, the end result does not look different from the actual thing.

MarkusWandel 3 years ago

I don't have any examples from my own film photography handy, but a quick google brings up

https://www.35mmc.com/10/01/2015/low-light-fun-ilford-hp5-ei...

ISO 3200 on black-and-white film was pushing it pretty hard. Yet these pictures look good, in a noisy kind of way. Let an algorithm loose on them and it'll "fix" things, first and foremost by smoothing the skin. Even older low-end dedicated digital cameras do this, some brands more than others. The pictures in low light feel more like a badly done painting than a good, honest, albeit noisy photo. One possibility is that the noise from a digital sensor is not as uniformly pleasing as that from film, so it must be masked.

macshome 3 years ago

A photographer friend had a good way of framing this once for me...

"Phones take amazing snapshots, but dedicated cameras can make better photographs."

The new smartphone cameras are capable of pretty amazing things, and they can extend taking good pictures to a whole new audience. If you need the control that large sensors and specific lenses can bring, though, then you will need a dedicated camera.

bee_rider 3 years ago

The iPhone definitely does some extra processing on text. I’m 99% sure that it recognizes letters and fills in “creatively.” I noticed this while taking some photos of text in low light. I could barely see it with my eyeballs, but the phone worked it out.

  • pancrufty 3 years ago

    To be fair, sensors have the luxury of time that our eyes don’t have. See astrophotography for an example of “could barely see it but the sensor worked it out.”

    • nine_k 3 years ago

      Sensors may have the time, but our hands do not; they are unsteady in subtle ways.

      When making photos in low light, I always try to lean my phone against something (a bench, a lamppost, a tree, a building) to let the longer exposure be sharper.

      • pancrufty 3 years ago

        I don’t get your point. Surely you’ve seen that the iPhone is perfectly able to do 10s handheld exposures. Something in the algorithm can unscramble those eggs because there’s no way it should be able to capture clear pictures with nearly no light.

      • adgjlsfhk1 3 years ago

        as long as there is anything to key onto it's possible to remove the shake algorithmically

        • eru 3 years ago

          > as long as there is anything to key onto it's possible to remove the shake algorithmically

          Why the requirement? The phone already has accelerometers, doesn't it?

          In any case, less shake should still be easier for the phone to deal with. Those algorithms aren't flawless.

          • sebzim4500 3 years ago

            There's no way that the accelerometer/gyroscope would be accurate enough to remove the shake. According to Google, the iPhone gyroscope is accurate to within about 0.5 degrees, roughly two orders of magnitude away from being pixel-accurate in a 1x zoom image.
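
            A rough back-of-envelope with assumed numbers (not Apple specs) shows the scale of the problem:

                # Assumed ~69 deg horizontal FOV for a 26 mm-equivalent main
                # camera and a 4032 px wide (12 MP) output frame.
                fov_deg = 69.0
                width_px = 4032
                deg_per_px = fov_deg / width_px        # ~0.017 deg per pixel

                gyro_error_deg = 0.5                   # figure quoted above
                print(deg_per_px)                      # ~0.0171
                print(gyro_error_deg / deg_per_px)     # ~29x too coarse

            Under those assumptions the gyro estimate is off by a few tens of pixels, so motion data can prime the search but the final alignment still has to key on image content.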

        • nine_k 3 years ago

          Maybe, but it looks like a problem of unscrambling an egg. The true image has been smeared over the sensor, superimposed on itself a bit. Maybe there is a good enough solution for the problem of finding the true image, but I can't imagine it to be computationally inexpensive.

          • QuackyTheDuck 3 years ago

            This.

            Even when you stabilize the footage afterwards (digitally, not by stabilizing the sensor), there will be motion blur due to the lower shutter speed.

  • astrange 3 years ago

    It doesn't do that. Text just has a lot of edges so there's a lot of opportunities for image fusion to register them.

    Apple power adapters actually have a bunch of text on them printed in unreadable light gray; you can try shooting them and while they're a bit clearer than plain old eyesight, they're still pretty unreadable.

account42 3 years ago

> For example, by stitching together multiple dark images to try to make a brighter one. (Dedicated cameras tend to have better-quality but conceptually similar options like long exposures with physical IS.) However, we are starting to introduce the core sin of modern computational photography: imposing a prior on the image contents. In particular, when we do something like stitch multiple images together, we are making an assumption: the contents of the image have moved only in a predictable way in between frames. If you’re taking a picture of a dark subject that is also moving multiple pixels per frame, the camera can’t just straightforwardly stitch the photos together - it has to either make some assumptions about what the subject is doing, or accept a blurry image.

This applies to dedicated cameras too though - physical image stabilization can compensate for camera motion but not for subject motion. The difference is that a) physical IS can compensate throughout each exposure, not just between exposures, and b) the photographer is not bound to a black-box algorithm but can instead use his own a priori knowledge to align the images if needed.
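
As a sketch of what that prior looks like in code (a toy translation-only stack, not any camera's actual pipeline):

    import numpy as np

    def align_and_stack(frames):
        """Average several noisy frames under a translation-only prior.

        Estimate a global integer (dy, dx) shift per frame via FFT
        cross-correlation against the first frame, undo it, then average.
        Anything moving differently from that global shift violates the
        prior and comes out smeared or ghosted."""
        ref = frames[0].astype(float)
        F_ref = np.fft.fft2(ref)
        acc = ref.copy()
        for frame in frames[1:]:
            f = frame.astype(float)
            # Peak of the cross-correlation gives the shift back to `ref`.
            xcorr = np.fft.ifft2(F_ref * np.conj(np.fft.fft2(f))).real
            dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
            if dy > f.shape[0] // 2:
                dy -= f.shape[0]
            if dx > f.shape[1] // 2:
                dx -= f.shape[1]
            acc += np.roll(f, (dy, dx), axis=(0, 1))
        return acc / len(frames)

Averaging N aligned frames cuts independent noise by roughly sqrt(N), which is the whole appeal; the cost is exactly the assumption above.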

  • wyager 3 years ago

    Yes, I meant to imply that dedicated cameras are committing the same "sin" here. I only meant physical IS is better because you don't need to do things like periodically read out the sensor. You are getting a true full-duration exposure that won't produce artifacts like tearing or skips.

    • foldr 3 years ago

      The iPhone has physical IS too. It's very good. (Image stabilisation tends to work better for smaller sensors, as you have less weight to move around and less far to move.)

  • foldr 3 years ago

    >In particular, when we do something like stitch multiple images together, we are making an assumption: the contents of the image have moved only in a predictable way in between frames

    From this way of looking at things a normal long exposure also imposes a prior assumption (that nothing is moving). It's just that we're used to the artefact that's generated when this prior isn't true (motion blur).

whywhywhywhy 3 years ago

Haven’t felt like the camera on my iPhone 13 is significantly better than the one on my iPhone 7 at all in terms of basic quality. My shots look about the same.

As someone who upgrades every several years, I've been wondering what people who upgrade every year and rave about the camera being better are even seeing at this point.

(I’m talking about stills only.)

  • npteljes 3 years ago

    I'm noticing the same, looking at GSMArena's comparison shots. We opted for an iPhone 11 back then, and out of curiosity, I keep an eye out for camera improvements, and I don't see that too much is happening. Comparing the low-light shots of the 11 and the 14 pro max, there is some extra detail, but the post-processing is also noticeably heavier.

  • Terretta 3 years ago

    Telephoto and low light

    Also, the camera, lenses, and sensors don't all update every year. Early on in Apple's tick-tock approach to design iteration, camera updates were the "s" models ("tock") in the release cycle.

    Now they seem to just be incrementing the number, and you have to pay attention to what changes, if any, they make. This time they quadrupled the pixel count and do pixel binning for regular shots and low light.

  • substation13 3 years ago

    Have you done a side-by-side comparison? I have noticed huge improvements.

    • shinycode 3 years ago

      Exactly. HDR, colors, shadows, night shots: all of that makes a huge difference. Take a night shot side by side with the two phones you listed; if there is no difference, it's because you never took those pictures. Maybe a camera is just a camera for you: point, shoot, done. The differences in various lighting situations are huge, but perhaps you don't take that many photos or don't care enough?

AstixAndBelix 3 years ago

Computational photography excels at certain uses. Noise reduction has become almost miraculous recently; exposure bracketing and automatic merging let you take good pictures of a scene with a bright sky without everything else going dark; and lens distortion, vignetting, and chromatic aberration corrections work really well.

Of course you cannot really compensate for the lens not resolving enough detail, or not focusing close enough; but since almost all photos taken on a phone will be seen on another mobile device, these are the less important parts of the equation. Correct exposure and good colors always look good, regardless of how much you zoom the photo. OP's use case is quite narrow, and unfortunately they didn't provide enough context about the nature of the photo.

fleddr 3 years ago

Yep, smartphone cameras are optically terrible, which is then compensated for with clever tricks. These tricks optimize for popular use cases: people, food, etc.

One aspect that is little discussed is the inflated quality perception of such a photo when seen on the actual device, an iPhone in this case.

iPhones have an incredible screen. OLED, wide gamut color, high PPI. A photo looks radically better on an iPhone compared to opening the same photo on a standard monitor.

  • wyager 3 years ago

    Apple also uses non-standard HEIF tags to allow for HDR photo display of photos taken by Apple devices. Last time I checked, you couldn't (easily) take a photo from a dedicated camera (which has more than enough dynamic range to justify HDR) and turn it into a file that would get rendered as HDR on iPhone.

pvillano 3 years ago

There's an important piece of background to understand why computational methods cannot completely correct chromatic aberration.

A photon can be any color of the rainbow. The reason ink and TVs can get away with using only 3 colors is because our eyes only have 3 types of receptors (cone cells). Each receptor responds to a range of wavelengths. "In-between" wavelengths will trigger multiple receptors. For example, a TV can send a mix of red and green photons and create the same brain signals as yellow photons would. Animals with more types of receptors, such as bees or the mantis shrimp, wouldn't be fooled by a TV with only three base colors.

A camera's sensor performs the same lossy compression as our eyes. Light comes into the camera in a range of wavelengths, and triggers each type of pixel a different amount. Each type of pixel has a sensitivity curve engineered to resemble the sensitivity curve of one of the cone cells in our eyes.

Understanding that natural light isn't just red, green, and blue makes it clear why chromatic aberration can't be fully fixed computationally. A green pixel can't know whether it's receiving green photons that are perfectly aligned, or yellow light that needs to be de-distorted.
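
Here is a minimal numeric sketch of that lossy projection, using made-up Gaussian sensitivity curves rather than real cone or sensor data: two different spectra can produce identical channel responses, so the three numbers a pixel records can't tell you which wavelengths actually arrived (or how each one was bent by the lens).

    import numpy as np

    wavelengths = np.linspace(400, 700, 301)            # nm, visible range

    def gaussian(mu, sigma):
        return np.exp(-0.5 * ((wavelengths - mu) / sigma) ** 2)

    # Stand-ins for the three channel/cone sensitivity curves.
    S = np.stack([gaussian(600, 40),    # "red"
                  gaussian(550, 40),    # "green"
                  gaussian(450, 30)])   # "blue"  -> shape (3, 301)

    rng = np.random.default_rng(1)
    spectrum_a = rng.random(wavelengths.size)           # arbitrary spectrum

    # Build a different spectrum with the same responses by adding a
    # component from the null space of the 3 x 301 sensitivity matrix,
    # scaled so the result stays non-negative.
    _, _, Vt = np.linalg.svd(S)
    null_component = Vt[-1]                             # S @ null_component ~ 0
    scale = 0.5 * spectrum_a.min() / np.abs(null_component).max()
    spectrum_b = spectrum_a + scale * null_component

    print(np.allclose(S @ spectrum_a, S @ spectrum_b))  # True: same recorded "color"
    print(np.allclose(spectrum_a, spectrum_b))          # False: different light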

P.S. There are cameras that can "see" a greater range of colors. Search for "spectral cameras" and "infrared goggles".

PPS This is also why an RGB light strip might look white, but objects illuminated by it might look odd. You might be familiar with the fact that a blue object illuminated by a red light will look black. For the same reason, it's possible for a yellow object to be illuminated by red, green, and blue light and still look black.

PPPS This is also why custom wall paints are a mixture of more than three colors. Two paints may look completely the same, but objects illuminated by the light bounced off the walls look completely different.

PPPPS This is also why high-CRI lightbulbs are a thing. If you get something hot, like the sun or a tungsten filament, it will release photons with a wide range of wavelengths. Fluorescent tubes and LEDs emit only one or a few narrow wavelength bands, so they must be coated with phosphors that fluoresce, i.e. emit light at a different wavelength than they absorbed. Using more kinds of phosphors is more expensive, but makes it more likely that whatever object is illuminated gets all the wavelengths it is able to reflect.

manv1 3 years ago

The limits are because at this point there's no way to tell the tool what you're trying to do. The various photo modes are a step in that direction, but they've been stuck.

Once they find a way to interact with the processing engine then the quality will jump again.

For the vast majority of users, the phone camera is super awesome and just fine.

michrassena 3 years ago

I haven't finished the article, but it seems like using the flash on the iPhone might have been enough to lower the ISO for the photo. Lower ISO = lower noise. The end results look like a typical noise reduction process. For screen-sized images, the new phones do quite well. But zoom in and it's often a painterly-blur.

zeckalpha 3 years ago

The inverse has been how wowed I am when a camera is better than my vision. Night mode and giant aperture ratio lenses both wow me.

Mirrorless cameras might have been delayed if it weren't for the competition with phones. DSLRs were only around a few years before camera phones.

  • Tepix 3 years ago

    Lenses in general can make things larger than with normal vision, so why are you so surprised?

    • zeckalpha 3 years ago

      Not larger or smaller, I mean better in low light. Large aperture is very different from large focal length.

onphonenow 3 years ago

A very long article, but I’m slightly confused. Is he using ProRAW? Can you not get unprocessed images from ProRAW?

I’m not sure what the pipeline looks like, but I thought this type of situation was exactly what ProRAW was supposed to be used for?

  • CarVac 3 years ago

    Afaik proraw has the same computational reconstruction, but none of the baked-in contrast, lighting, or color tweaks.

    This gives you a clean low-noise image with editing flexibility but it does have the flaws of deconvolution and stacking and AI denoising.

    Actual raw from a cell phone is insanely noisy and hideously soft from diffraction in the best of cases.

    • foldr 3 years ago

      >Actual raw from a cell phone is insanely noisy and hideously soft from diffraction in the best of cases.

      You can get single shot RAW output from Halide and other third party apps on iPhones. It's actually perfectly usable and not particularly noisy or soft. I haven't personally had any problems with the output of ProRAW (which applies far less aggressive sharpening than the standard JPEG processing). I'm pretty sure the photo in the article would have come out fine if it had been shot using ProRAW.

      • CarVac 3 years ago

        Do you have any full resolution examples to share?

        I'm comparing to large-sensor cameras on large 4k screens, of course.

        What is "usable" or not varies dramatically depending on how large you display it.

        • foldr 3 years ago

          I’m comparing it to multi shot raw (i.e. ProRAW). What I mean to say is that if you would consider using a typical ProRAW or JPEG shot from an iPhone (both based on multiple exposures), then you would also consider using a single-shot RAW. At daylight ISOs the difference in noise and sharpness is small.

          Here's a JPEG from a single shot 12MP RAW made with Halide (can't shoot 48MP single shot RAW for some reason): https://drive.google.com/file/d/1oqQB_UbdBaaoM3vC2IG_9jzP2AG...

          Here's a JPEG from a ProRAW 48MP made with the camera app: https://drive.google.com/file/d/1oqQB_UbdBaaoM3vC2IG_9jzP2AG...

          Here's a JPEG from a ProRAW 48MP made with Halide: https://drive.google.com/file/d/13GW_CIIvOSFEsKcON28Y34NgQAA...

          All images are shot on an iPhone 14 Pro Max and processed in Lightroom (60 on the sharpening slider, no noise reduction). There are certainly many differences between the images, but it's not as if the single shot RAW is a total mess. In fact, in this case the single shot RAW is cleaner and better looking than the ProRAW output of Apple's camera app for the most part. It even has more textural detail in a couple of areas where Apple's processing has smoodged things. This could partly be because I was able to select ISO 57 using Halide, whereas the camera app chose to use ISO 100. (All images were shot on a tripod, so there was no real need to use a higher ISO.) As you can see, Halide's ProRAW doesn't smoodge quite as much. I generally prefer the Halide ProRAW to the single shot RAW, even though it looks a little more processed when pixel peeping.

          There is clearly more noise in the single shot RAW (as you'd expect). However, bear in mind that the JPEG above shows the result of doing fairly heavy sharpening and no noise reduction whatsoever. With more balanced processing the noise is much less noticeable. Here's an example: https://drive.google.com/file/d/1PQKAcE-Cr-M6Uz-rcXTDt3OM0Ej...

          • CarVac 3 years ago

            The "more balanced processing" is still rather heavily sharpened and extremely noisy by my standards, so this all but confirms what I felt before about phone raws.

            • foldr 3 years ago

              The more balanced processing is sharpened because I sharpened it. You can have less sharpening if that’s what you prefer.

              The noise levels are perfectly acceptable. If you're finicky about noise (a pointless obsession, IMO), then of course you won't use single shot RAWs from a cellphone camera. For any ordinary photographic purpose the level of noise from a modern phone sensor is perfectly fine – even without any fancy processing or multishot blending.

    • onphonenow 3 years ago

      Thanks - very interesting

alistairSH 3 years ago

Does anybody know what post-processing is applied to ProRaw images from iOS? I'm guessing true raw (Halide, etc) have none at all, but I recall reading the ProRaw had some applied. I just haven't seen a summary of which steps are applied.

90% of the time, my iPhone photos are fine straight out of the camera as HEIC. But every once in a while, I get something like is described here (or in several other recent similar articles).

hedgehog 3 years ago

The camera software on the 14 Pro is pretty bad, so that's part of their problem. Almost enough to make me return mine.

Traubenfuchs 3 years ago

iOS postprocessing is garbage and must be changed. It's an embarrassment.

The most infuriating thing is that if you are quick enough you can usually see the image before post-processing, and it looks sharp and good, but this trash software can't be turned off.

  • zimpenfish 3 years ago

    > but this trash software can't be turned off.

    Are you talking about proRAW? Or JPEGs straight from the Camera app?

mortenjorck 3 years ago

This is an interesting edge case, and the author makes some instructive observations on the physics side.

It would be really interesting, though, to see an image signal processing expert weigh in on what the algorithm(s) are actually doing in this case.

wnkrshm 3 years ago

There are many more clever methods that one can use with CMOS that fall under computational photography or optics.

One very interesting one is ptychography (in microscopy, often Fourier ptychography, since you can use Fourier optics to describe the optical system [0]), which uses a model of an optical system to get an image (IIRC 7x-10x the resolution) out of many blurry images, while knowing a bit about the optics in front of your image sensor. It can also work in remote sensing to some degree (better with coherent illumination, though).

Edit: This is not just averaging or maxing pixels, it reconstructs the image using reconstructed phase information from having low-res pictures with different, known illumination or camera positions.

[0] https://www.youtube.com/watch?v=hece_x37ITg
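
For a feel of how this works, here is a heavily simplified toy of the iterative Fourier ptychography loop, with simulated data, a binary circular pupil, and made-up sizes. It is only a sketch of the alternating-projection idea, not the method in the linked talk; real implementations also refine the pupil estimate and use more careful update rules.

    import numpy as np

    N, n = 128, 32                     # high-res grid size, low-res capture size

    # A toy complex object with some amplitude and phase structure.
    obj = np.ones((N, N), dtype=complex)
    obj[32:96, 32:96] *= np.exp(0.5j)
    obj[48:80, 48:80] *= 0.5
    O_true = np.fft.fftshift(np.fft.fft2(obj))

    # Binary circular pupil: the low-pass filter of the (toy) objective.
    yy, xx = np.mgrid[:n, :n]
    pupil = ((yy - n // 2) ** 2 + (xx - n // 2) ** 2 <= (n // 2 - 1) ** 2).astype(float)

    def window(S, dy, dx):
        # n x n view of spectrum S, centered at an offset of (dy, dx).
        cy, cx = N // 2 + dy, N // 2 + dx
        return S[cy - n // 2: cy + n // 2, cx - n // 2: cx + n // 2]

    # Each illumination angle shifts which part of the spectrum the pupil passes,
    # so every low-res capture samples a different chunk of high-res information.
    shifts = [(dy, dx) for dy in (-20, 0, 20) for dx in (-20, 0, 20)]
    measured = []
    for dy, dx in shifts:
        field = np.fft.ifft2(np.fft.ifftshift(window(O_true, dy, dx) * pupil))
        measured.append(np.abs(field) ** 2)    # camera records intensity only

    # Reconstruction: repeatedly enforce each measured low-res amplitude while
    # keeping the current phase estimate, and stitch the result back into one
    # high-res spectrum (phase retrieval by alternating projections).
    est = np.fft.fftshift(np.fft.fft2(np.ones((N, N), dtype=complex)))
    for _ in range(50):
        for (dy, dx), I in zip(shifts, measured):
            sub = window(est, dy, dx)                      # view into `est`
            low = np.fft.ifft2(np.fft.ifftshift(sub * pupil))
            low = np.sqrt(I) * np.exp(1j * np.angle(low))  # keep phase, fix amplitude
            upd = np.fft.fftshift(np.fft.fft2(low))
            sub[pupil > 0] = upd[pupil > 0]                # update inside the pupil

    recon = np.fft.ifft2(np.fft.ifftshift(est))
    print(np.abs(np.abs(recon) - np.abs(obj)).mean())      # residual of the toy recovery

The point is the shape of the loop: each low-res intensity measurement constrains a different window of the high-res spectrum, and iterating recovers phase information that no single capture recorded.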

  • londons_explore 3 years ago

    I think ptychography is the future of phone cameras... You'll see phones with 1000 lenses and 1000 CCDs across the whole back of the phone.

    They'll all be manufactured as a one-piece glass moulding and a single CCD chip, and the whole thing will be very cheap to make, having moved all the difficulty into software.

kblev 3 years ago

Is there a way to disable this extra processing?

  • npteljes 3 years ago

    You can get around some of the extra processing by using other camera apps, for example "Open Camera". It can even shoot RAW photos, so that the least amount of processing is applied to the image. Unfortunately, you can't disable all of the processing, because some of it happens in the hardware or in the camera's kernel module.

    https://opencamera.org.uk/

  • wyager 3 years ago

    Sort of - (I assume for PR/marketing reasons) Apple doesn't let apps get access to actual raw sensor data. It may be possible to skip the steps that are causing the most trouble here.

    • foldr 3 years ago

      Why don't you just try shooting it ProRAW, then try shooting a single shot RAW using a third-party app such as Halide? The latter option certainly removes any fancy computational photography from the pipeline.

  • hapticmonkey 3 years ago

    Shoot in RAW mode. Either with the various third party camera apps, or Apple’s built in “Pro RAW” mode in the iOS camera app.

    • astrange 3 years ago

      Third-party camera apps don't use the Camera app's processing whether or not they're shooting in raw mode. You can shoot JPEGs all day.

hilbert42 3 years ago

I accept that there is a place for computational/algorithmic photography but I remain deeply sceptical of its actual benefits (in its current incarnation), moreover my recent bad experiences with it have only strengthened my conviction.

I have previously discussed having taken photos with a smartphone where certain objects within some images were so modified by the processing algorithm as to be almost unrecognizable, so I won't repeat those scenarios here. Instead, I'd like to dwell on the implications of algorithmic image processing for a moment.

Let's briefly look at the issues:

1. Despite a recent announcement by Canon about a large increase in dynamic range in imaging (https://news.ycombinator.com/item?id=34527687), I'm unaware of any current imaging-sensor breakthrough that would vastly improve both resolution and dynamic range. Thus, essentially, we have to live with what we're already capable of physically squeezing into our present smartphones.

2. Manufacturers are improving both image sensors and optics, but only incrementally. Thus, with current tech and in the absence of truly significant breakthroughs, we have to live with the limitations as outlined in the article (aberrations, lens flare, sensor insensitivity, etc.).

3. Essentially, we're stymied both by the limitations of current tech and by physical (smartphone) size. Usually, to overcome such limitations, we'd fall back on the old truism 'there's no substitute for capacity' and just make things bigger, as we did with photographic emulsions, past camera lenses, loudspeakers, pipe organs, etc., but that's not possible here.

4. Outside incremental improvements in hardware—the Law of Diminishing (hardware) Returns having arrived—manufacturers have had to resort to computational methods. The trouble is that, with the present algorithms, it seems the Law of Diminishing (computational) Returns is also already upon us, so what does this mean? Quo vadis?

5. Clearly, in its current form computational/algorithmic processing has hit a stumbling block, or at least a major hiatus. Here, further incremental improvements are likely using current methods, and there's little doubt that they'll be applied to recreational photography (smartphones and such); however, unfortunately, we now have a serious (and very obvious) problem with the authenticity of images taken by these cameras.

Simply, when software starts guessing what's within images then we've not only lost visual authentication but we have serious downstream issues. It raises questions about whether or not photographic evidence based on computational imaging can be relied upon—or even submitted—as evidence in a court of law. (I'd reckon that, without ancillary corroborating/conjunctive evidence, such images would not pass muster if the Rules of Evidence tests were applied.)

How serious is this? Clearly, it depends on circumstance, but long before 'guessing what's in the image' became in vogue, simple compression was already 'suspect' in, for example, serious surveillance work—because compression artifacts in an image raised doubts as to what objects actually were. Simply: could objects be identified with 100% certainty, and if not, what confidence figure could be placed on such measurements/identifications?

(Such matters are not hypotheticals or idle speculation; I recall in nuclear safeguards a debate over compression artifacts in remote monitoring equipment. Here, authenticating and identifying objects must meet strict criteria, and a failure to authenticate (fully identify) them means a failure of surveillance, which is a big deal! For example, the failure to distinguish between, say, round cables and pipes with 100% certainty could be a serious problem, as the latter could be used to transport nuclear materials—thus it'd be deemed a failure of surveillance. That's not out of the bounds of possibility in a reprocessing plant.)

Obviously, the need to authenticate what's in an image with 100% certainty isn't a daily occurrence for most of us, but as these tiny cameras become more and more important and ubiquitous, we'll start seeing them used in areas where their images must be able to be authenticated.

Post haste, we need rules and standards about how these computational algorithms process images and how they should be applied.

6. What's the future? On the hardware side we need better sensors with higher resolution and more sensitivity, and improved optics (that, say, use metamaterials etc.). Such developments are on their way but don't hold your breath.

Computational/algorithmic processing has the potential to do much, much better, but again don't hold your breath. There's considerable potential to correct focus and aberration problems etc. using both front-end and back-end computational methods (front-end by correcting lenses etc. on the fly, back-end as post-image processing), but much work still has to be done. Note: such methods also don't rely on guessing.

What people often forget is that when a lens cannot fully focus or suffers aberrations, etc. information in the incoming light is not lost—it's just jumbled up (remember your quantum information theory).

In the past untangling this mess has been seen as an almost insurmountable problem and it's still a very, very difficult one to resolve. Nevertheless, I'd wager that eventually computational processing of this order will be commonplace, moreover, it'll likely provide some of the most significant advances in imaging we're ever likely to witness.

  • formerly_proven 3 years ago

    > 6. What's the future? On the hardware side we need better sensors with higher resolution and more sensitivity, and improved optics (that, say, use metamaterials etc.). Such developments are on their way but don't hold your breath.

    Two interesting developments here are the pixels in Starvis 2 sensors, which, as a first AFAIK, use a 2.5D structure to increase full-well capacity by a lot; and another, non-production sensor by Sony, where they developed a self-aligning process and the pixels are actually split into two layers, with the top layer carrying only the photodiode and the bottom layer entirely dedicated to the readout transistors. That's promising for lower readout noise and also for increasing full-well capacity.

    • hilbert42 3 years ago

      Thanks for that info on the Starvis 2, etc. I've not had time to fully investigate it nor the earlier Canon announcement (as per link to the HN story on dynamic range some days ago).

      Both these announcements are what I'd call large incremental changes (big changes within existing technologies). If those dynamic range/noise figures turn out to be roughly in line with the publicity then they're much to be welcomed and half the world and I will be glad to see them.

      Moreover, such large changes cannot be ignored by other manufacturers otherwise they'd be left behind. That means they'd have to license the tech quickly, that is unless there's some gotcha like ridiculously low or unreliable production yields etc. Anyway, we'll soon see.

      I still have some reservations until more info emerges. As I mentioned in the earlier post I hope the changes are mainly real hardware improvements and not just little changes coupled with a great deal of back-end processing. As I mentioned there, we can do without 'smoke-and-mirror' announcements (which, unfortunately, are all too frequent).

swayvil 3 years ago

So it's an edge case fail for the noise reduction alg.

How far can a noise reduction algorithm go? Can we use a white painted wall as a mirror?

  • michrassena 3 years ago

    I don't think we're at that point yet, but there's some amazing work on deconvolution for seeing around corners.

    • swayvil 3 years ago

      If we could use a rock one light year away as a mirror then we could see 2 years into the past. That's a practical application.

aaroninsf 3 years ago

Q: do apps like Halide and/or "shooting RAW" allow bypassing the post-processing on iOS?

abc_lisper 3 years ago

Why is the X-T5 picture so grainy? Was it taken in dark? Or through a microscope?

killjoywashere 3 years ago

> the relevant metric is what I call “photographic bandwidth” - the information-theoretic limit on the amount of optical data that can be absorbed by the camera under given photographic conditions (ambient light, exposure time, etc.).

You mean, "resolution"?

  • jsmith99 3 years ago

    'resolution' is usually used just to mean the number of pixels: nothing about how much light they capture or what sort of lens and processing are involved.

    • wnkrshm 3 years ago

      Talking about optics, optical resolution is also a thing, i.e. whether you can resolve a certain target like a grid of micrometer-sized bars under a microscope.

      Edit: That kind of resolution is induced by the optics and not by the sensor (if your optics can resolve the target, you can always add more optics to magnify the image if you have a low-pixel-resolution sensor).

      Edit2: The poster you replied to is right that optical resolution is a constraint in terms of information that can be reconstructed after being imaged through a specific optical system.

      An optical system filters light in phase space (imagine a space of the position, angle, and intensity of light at each point of the optical system, in a geometrical-optics picture), and since some components are cut off, you cannot reconstruct an image to arbitrary fidelity; you lose information (or are stuck with a certain optical resolution).
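
      As a rough worked example of one such optical limit (assumed numbers, not any specific phone's specs): the diffraction-limited spot of a small, fast phone lens is already about two pixels wide, so extra megapixels alone don't add detail.

          # Rayleigh criterion: first dark ring of the Airy pattern sits at
          # a radius of 1.22 * wavelength * f-number at the sensor.
          wavelength_um = 0.55      # green light
          f_number = 1.8            # assumed phone main-camera aperture
          pixel_pitch_um = 1.2      # assumed pixel pitch

          airy_diameter_um = 2 * 1.22 * wavelength_um * f_number
          print(round(airy_diameter_um, 2))                    # ~2.42 um
          print(round(airy_diameter_um / pixel_pitch_um, 1))   # ~2.0 pixels wide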

ipsum2 3 years ago

Is the full sized image posted anywhere? This could just be the limits of dynamic range blowing out the text in the iPhone pics.

bediger4000 3 years ago

This article mystified me until I realized it was Computational Photography not Philosophy.

  • swayvil 3 years ago

    Sounds like an application for one of those new chatbots.

  • moistly 3 years ago

    Computational Philosophy: “the use of mechanized computational techniques to instantiate, extend, and amplify philosophical research. Computational philosophy is not philosophy of computers or computational techniques; it is rather philosophy using computers and computational techniques. The idea is simply to apply advances in computer technology and techniques to advance discovery, exploration and argument within any philosophical area.”

    The word “simply” is doing a lot of work in that last sentence, I’m sure!

    https://plato.stanford.edu/entries/computational-philosophy/

    (BTW, I just posted same to front page, if the subject interests you & we’re lucky, it’ll generate some discussion)
