Night Sight for Pixel phones (theverge.com)
If you're interested in learning more about this, the head of the Pixel camera team (Marc Levoy, Prof. Emeritus at Stanford) has an entire lecture series from a class he ran at Google. The lectures are here, along with notes: https://sites.google.com/site/marclevoylectures/home
What's really cool is that you can see him talk about a lot of these ideas well before they made it into the Pixel phone.
If you're thinking of taking a look, I particularly enjoyed Lecture 7 of this series. The practical applications of Fourier analysis in image recognition/filtering/processing are really quite amazing.
Plus, if you're at all curious about the technical details of how something like Night Sight is implemented on the Pixel, understanding what Fourier transforms are and how they're used is vital.
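To make that a bit more concrete, here's a minimal sketch of frequency-domain filtering (my own toy Python/numpy example, not from the lectures): transform the image, keep only the low spatial frequencies, and transform back.

    # Toy frequency-domain low-pass filter (not from the lectures).
    import numpy as np

    def lowpass(image, keep_fraction=0.1):
        """Blur a 2D grayscale image by zeroing out its high spatial frequencies."""
        f = np.fft.fftshift(np.fft.fft2(image))        # spectrum with the DC term centered
        rows, cols = image.shape
        crow, ccol = rows // 2, cols // 2
        dr, dc = int(rows * keep_fraction / 2), int(cols * keep_fraction / 2)
        mask = np.zeros_like(f)
        mask[crow - dr:crow + dr, ccol - dc:ccol + dc] = 1  # keep a low-frequency window
        return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

Subtracting that result from the original gives you a crude high-pass/sharpening filter; the lectures build up from tricks like this to the real algorithms.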
Yeah, like building this tech into AR glasses: see like it's daytime while it's night, or at least see a ton more clearly at night.
From what I can tell, Marc Levoy is the main reason the Pixel has all these abilities.
Let's give credit to the entire engineering team.
Indeed let's do that. At our weekly meeting yesterday, there were 18 engineers in the room. Every one of them is working hard to make this feature happen. My role at this point is mostly to kibitz, and help with the field testing.
What a fantastic reply. Much love, Marc.
Prior work at Google Research before it made it into the product:
https://ai.googleblog.com/2017/04/experimental-nighttime-pho...
And by the original researcher in 2016:
The example images at your first link are much more informative than the OP link. Photo pairs should be compared after normalizing the total brightness of each image file. (Heck, I could take a single photo and display it at two different brightness settings and it would give basically same impression as the OP article to anyone not looking super carefully.)
There's also this set of slides on the technology behind SeeInTheDark: http://graphics.stanford.edu/talks/seeinthedark-public-15sep...
The multiple-frames approach sounds similar to the Hubble telescope's approach.
That video is a great demo. Very simple, very informative, and accurate from a data standpoint.
What the Pixel cameras are doing is staggeringly good. My father is the founder of https://www.imatest.com/, and has a substantial collection of top-end cameras. He's probably in the top 0.0001% of image quality nerds. But most of the time, he's now entirely happy shooting on his Pixel.
I enjoyed some of the articles on your father's other website, like http://www.normankoren.com/digital_tonality.html, and hope he keeps it running.
Yeah he's a good egg. I'm sure it'll keep going as long as he does. His startup actually grew out of those articles -- which were basically supposed to be his retirement hobby -- and now it employs 20 people! I'm really proud of what he's accomplished.
That's great news. I came into contact with your father's work with vacuum tube audio. I always admired his solid engineering approach without mysticism.
Wow, what a cool guy. I know we don't do AMAs on here but it'd be amazing to get HN to pick his brains for a bit.
We don't do them often, but there have been at least a couple of AMAs on here.
Alan Kay: https://news.ycombinator.com/item?id=11939851
Michael Siebel: https://news.ycombinator.com/item?id=13895362
NY AG Eric Schneiderman: https://news.ycombinator.com/item?id=15853374
Scott Aaronson: https://news.ycombinator.com/item?id=17425377
Sam Altman (he's done a few): https://news.ycombinator.com/item?id=12593689
I've put this proposition to my dad and it sounds like he's up for it! Might do so tomorrow.
If you don't mind my derailing the conversation a bit, could you ask your father what he would recommend for 4K/60fps video as competition for the Panasonic GH5s? In its weight class, it's one of the few 60fps (and 400Mbps bitrate capable) cameras that can be mounted to a relatively small sized gimbal, and flown on a drone that doesn't cost $10,000.
Is there anything expected to be released in the next few months that will be in a similar price, feature set and weight class?
I just asked my dad and he says that he doesn't have a good answer for you. Personally he doesn't do much video -- just still photography -- and professionally he mostly engages with OEMs way upstream from the actual products; often he doesn't even know what the products will be. So, alas, no good answer.
Personally I would suggest checking out the new Fuji X-T3 -- I just got it and am very happy with the video quality. It's actually the only other small camera right now that will do 4k60p 10-bit.
I just spent ten minutes skimming reviews and specs for that - it looks very nearly ideal for what I have in mind. Should be sub $1500 for just the camera body, no lens, and is under 500 grams.
The Blackmagic Pocket Cinema Camera could be a fit for you. I have not researched it, but a quick look at their propaganda makes it seem intriguing for your purpose. It does 4K60 raw, which should beat the 400Mbps H.264. You'll spend almost as much money on a couple of CFast 2.0 cards as on the camera itself, though. It's also MFT, so your existing lenses for your GH5 should work fine.
> Google says that its machine learning detects what objects are in the frame, and the camera is smart enough to know what color they are supposed to have.
That is absolutely impressive.
The color and text on the fire extinguishers along with the texture detail seen in the headphones in the last picture are just stunning. Congratulations to anyone who worked on this project!
It's impressive, but it also means that your camera isn't always going to capture what's there -- it'll capture what it guesses was there. I wonder how easily it is fooled to capture something that's not there?
Soon these cameras will be able to take Milky Way photos in San Francisco. (Find a few stars, and it fills in the blanks.) If you want to simulate a long exposure, the cameras will add star trails.
It's too bad that the technology is proprietary. I'm curious what could be done with a larger-sensor camera, from compact cameras to DSLRs.
I think the article is factually correct but makes it sound a little more complicated or advanced than it probably is. I mean, depending on how you interpret it, you could think that it basically does "hey, this looks like a fire hydrant, let me paste a fire hydrant in there," which is obviously not exactly something AI can do reliably today, especially on phone hardware.
I'm guessing that it works similarly to low-budget astrophotography but with the computer doing all the busywork for you: when you want to photograph stars or planets and you don't have a fancy tracking mount to compensate for Earth's rotation, you'll get very mediocre results. If you expose long enough to see the object clearly, you get motion blur. If you use a shorter exposure to reduce the blur, you don't have enough light to get a clear picture.
One solution is to take a bunch of low-exposure pictures in a row and then add them together (as in, sum the values of the non-gamma-corrected pixels) in post, while taking care to move or rotate each picture to line everything up. This way you simulate a long exposure while at the same time correcting for the displacement.
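For anyone who wants to see the idea in code, here's a rough Python sketch of that align-and-sum step (pure-translation alignment via phase correlation; a real pipeline obviously handles rotation, perspective and subject motion too):

    # Rough align-and-sum of a burst of linear (non-gamma-corrected) frames.
    import numpy as np

    def estimate_shift(ref, frame):
        """Estimate the integer (dy, dx) translation of `frame` relative to `ref`
        via phase correlation."""
        cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
        corr = np.abs(np.fft.ifft2(cross / (np.abs(cross) + 1e-9)))
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # shifts bigger than half the frame wrap around to negative values
        if dy > ref.shape[0] // 2: dy -= ref.shape[0]
        if dx > ref.shape[1] // 2: dx -= ref.shape[1]
        return dy, dx

    def stack(frames):
        """Average a burst of short exposures after aligning each one to the first."""
        ref = frames[0].astype(np.float64)
        acc = ref.copy()
        for frame in frames[1:]:
            f = frame.astype(np.float64)
            dy, dx = estimate_shift(ref, f)
            acc += np.roll(f, (dy, dx), axis=(0, 1))  # crude re-alignment
        return acc / len(frames)  # same SNR gain as summing, but values stay in range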
Another advantage is that you can effectively do "HDR": suppose you're taking a panorama with the Milky Way in the sky and a city underneath it; with a long exposure the lights of the city would saturate completely. With shorter exposures you can correct for that in post by scaling the intensity of the city lights as you add pixels (or by summing fewer pictures for those areas). This way you can effectively have several levels of exposure in the same shot and tweak it all in post. In the city/Milky Way example you'll also need to compensate for the motion in the sky but obviously not on land, which is also something you can't really do "live".
I have a strong suspicion that this is basically what the software is doing: take a bunch of pictures, do edge/object detection to realign everything (probably also using the phone's IMU data), fit the result to some sort of gamma curve to figure out the correct exposure, then add color correction based on a model of the sensor's performance in low light (since I'm sure that by default, under these conditions, the sensor starts breaking down and favors some colors over others). Then maybe run it through a subtle edge-enhancing filter to sharpen things a bit more and remove any leftover blurriness.
If I'm right then it's definitely a lot of very clever software but it's not like it's really "making up" anything.
> which is obviously not exactly something AI can do reliably today, especially on phone hardware
But do we know that it runs on phone hardware? If voice interfaces have taught us anything, it's that we can't ever make that assumption again.
The amount of data you'd have to send to run this off-board would be enormous, but hey, anything for a jaw-dropping hype feature, right? It just works; those preview pictures literally made me check the price and size of the Pixel 3, and I haven't been interested in anything but a Sony Compact since the Z1 Compact came out.
I think it's taking as many pictures as possible and using the very slightly different angles to get as good a resolution as possible with as little noise as possible (I have no idea if that's really what's happening here).
That's not what this article says, do you have reason to believe it's incorrect? The quote about object detection in the parent post came from the article.
I'm sure the first step is taking many sharper short exposure shots (as opposed to longer exposures, which blur), then doing some tensor magic to stitch into a single image.
Object detection alone won't give you sharp text in low light. You need a minimum number of photons hitting pixels.
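For the curious, the photon argument is just shot noise; under the standard Poisson model (nothing Pixel-specific):

    % A pixel collecting an expected N photons has noise of about sqrt(N), so
    \mathrm{SNR} = \frac{N}{\sqrt{N}} = \sqrt{N}
    % Stacking k aligned short exposures collects roughly kN photons, giving
    \mathrm{SNR}_k = \sqrt{kN} = \sqrt{k}\cdot\sqrt{N}
    % i.e. a sqrt(k) improvement -- the same photon budget as one exposure k times longer.

That's why no amount of object detection replaces actually collecting more light.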
From the article: > Although it’s not one single long exposure, Google’s night mode still gathers light over a period of a few seconds, and anything moving through the frame in that time will turn into a blur of motion.
Right, but it's up to them how to process the information coming in during those few seconds. Without a tripod, it would just be one big blur unless the data were separated into frames. I don't think optical stabilization would be enough for such long exposures.
Yeah, but in fact the eye does the same thing. I think a lot of people are to some degree aware of their photo biases, from color filters to AI filters, and will adjust their interpretation accordingly (or not).
I would like super sensitive cameras like this to be used inside fridges to see the very faint glow of food going off.
Chemical reactions by bacteria breaking down food produce light, enough for humans to see in only the darkest of places (if you live in a city, you won't ever encounter dark enough situations).
A camera simulating a 1 hour exposure time in a closed refrigerator ought to be able to see it pretty easily.
Reading between the lines of [1], bioluminescence is used to detect bacteria/fungi/other organisms in food, but only by adding luciferase to a sample and measuring the light emitted when luciferase reacts with ATP. Because living organisms contain ATP, the ATP content can be used as a proxy for contamination by microorganisms.
But I didn't find anything on bioluminescence occurring naturally in the kinds of bacteria you'd want to be warned about. Did you ever personally see glowing food?
[1] http://cdn.intechopen.com/pdfs/27440/InTech-Use_of_atp_biolu...
Rotting meat definitely glows. I've personally seen decaying mammal carcasses glowing in the woods at night (always either green like foxfire or an odd almost monochromatic cyan that must really stimulate the eye's rod cells).
It's very faint and would be difficult to notice without trees to shield it from moonlight. A camera could pick it up with a long exposure.
Thanks, I hadn't known about this effect. Apparently it's been known for a long time. In the 1600's, Robert Boyle claimed to have been able to read an issue of The Philosophical Transactions by the light of a rotting Neck of Veal: https://blogs.royalsociety.org/publishing/boyle-and-biolumin...
Luciferase, easily the coolest (hottest?) name I've come across in a while. Here's the wiki:
https://en.wikipedia.org/wiki/Luciferase
"Luciferase is a generic term for the class of oxidative enzymes that produce bioluminescence"
and,
"Bioluminescence is the production and emission of light by a living organism"
Hmm, I can't find any research about spoiled food being bioluminescent. You say "bacteria breaking down food produce light", but experience seems to suggest lots of food spoilage is due to fungi (molds). Though it'd be fun to see lactobacteria in pickles or kimchi glowing...
On a similar note, it would be nice if some of the drawers of the fridge had red LED lights to keep leafy greens photosynthesizing, to keep them green.
Can you link to some research on that? I'd love to throw together a hack where you use LED light strips to do this. Isn't it possible that the light would increase bacteria though? https://biology.stackexchange.com/questions/66878/blue-light...
I'm sadly unable to find the original study where I found this. If memory serves me it was from around 2003.
Not sure how it would increase bacteria, as bacteria don't photosynthesize, unless the lights gave off enough heat to increase their activity. If anything, I would think the light might inhibit bacterial growth.
This is really interesting. I'm tempted to attach red LEDs to a battery and put it in the fridge. But it would be quite tricky to measure whether it works. (Almost impossible to do accurately, without different fridges.)
Put two cardboard boxes in a fridge. Put a leaf of lettuce in each, and a red LED in the top of each box.
Turn one LED on, while the other stays off.
Observe the leaves of lettuce after a week.
Cyanobacteria, purple sulfur bacteria, and another type I forget: those all undergo photosynthesis.
I stand corrected! Are any of those likely to be found on vegetables, and break them down?
I never knew I wanted a "smart" fridge, but a fridge that can detect food going off sounds incredible. Sign me up if this is feasible.
A fridge that prevents food going off would be better, no? That wouldn't require smarts, just replacing the air with pure nitrogen, like modern grain silos do. Doing it at "consumer scale" would probably only require the same sort of compressor that your fridge already uses for its cooling.
There are some really low-tech things you can do to extend the life of things in the fridge. At least for vegetables, you can stick them in one of the fabric bags, add water, and they'll survive for weeks. And that costs just a few bucks.
For meat we've got freezers. Is ready-to-eat food spoiling a big issue?
> Chemical reactions by bacteria breaking down food produce light, enough for humans to see in only the darkest of places (if you live in a city, you won't ever encounter dark enough situations).
Some city folk have doors and window shades. My old apartment kitchen was on the windowless side of the apartment with a door. If you close the door (and unplug the microwave), it was pitch black. Though I never saw any glowing food, not even the spoiling fruit on the counter.
I have the same kitchen setup. I develop film in there with the door closed. Nothing has ever looked fogged.
(I'm talking about developing sheet film in trays, btw... you don't need a darkroom to develop rolls or 4x5 sheets.)
It would be really interesting to then display this on the screen of smart refrigerators. Maybe you could even flag food that is above a certain brightness as spoiled.
Or RFID-tag everything and add a scale, so your fridge can tell when you're out of items or low on milk! Would potentially make recycling easier too.
I used to help teach an undergraduate senior design course, and the number of students who want to do exactly this is staggering.
I don't know if it's really that bad of an idea, but we didn't allow projects that had been done before, so they were all rejected.
Really, you can predict food spoilage just from how old it is. You don't need a smart fridge; you just need an ETL pipeline from a ScanSnap (reading in your grocery-store receipts) to an inventory-tracker app. You've got "smarts", but they're not in the fridge.
(And in the end, that's better: an inventory-tracker app that's on your phone is able to tell you to throw stuff out without you needing to own a "smart-home hub" or configure your fridge to connect to your wi-fi; and, unlike the fridge, its notifications will probably keep working even if its manufacturer goes out of business.)
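As a very rough sketch of what that app's core logic could look like (the shelf-life numbers and the receipt-parsing step are hand-waved placeholders of mine):

    # Toy inventory tracker: flag items past a typical shelf life.
    from datetime import date, timedelta

    SHELF_LIFE_DAYS = {"milk": 7, "lettuce": 10, "chicken": 2, "leftovers": 4}  # made-up numbers

    def items_to_toss(inventory, today=None):
        """inventory: list of (name, purchase_date) tuples, e.g. from parsed receipts."""
        today = today or date.today()
        expired = []
        for name, bought in inventory:
            limit = timedelta(days=SHELF_LIFE_DAYS.get(name, 5))  # default guess: 5 days
            if today - bought > limit:
                expired.append(name)
        return expired

    # Example:
    # items_to_toss([("milk", date(2018, 11, 1)), ("lettuce", date(2018, 11, 10))],
    #               today=date(2018, 11, 14))  ->  ["milk"]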
I can't recall where I came across this idea but how about fridge shelves that (like mini moving sidewalks) slowly move food towards an edge to drop to a lower shelf and eventually to a composter or trash -- the idea being to put food items at a certain spot on a given shelf based on items' expiration date. Wildly impractical IRL, but clever enough to be memorable (for me anyway).
This reminds me of a similar project: "Learning to See in the Dark". They used a fully-convolutional network trained on short-exposure night-time images and corresponding long-exposure reference images. Their results look quite similar to the Pixel photos.
It's notable that this 'accumulation' method effectively lets you have a near-infinite exposure time, as long as objects in the video frame are trackable (i.e., there is sufficient light in each frame to see at least something).
I'd be interested to see how night mode performs when objects in the frame are moving (it should work fine, since it will track the object), or changing (for example, turning pages of a book - I wouldn't expect it to work in that case).
Damn. I’ve had an iPhone since the 3G, but this is really tempting me to get a pixel.
Damn! That's honestly impressive! I started reading thinking it was going to be a simple brightness-up kind of thing, but it's incredible how they're able to recreate the whole photograph from an initially dark raw input.
I have to imagine the sensor is doing an extra but imperceptible long exposure that is then used to correct the lighting of the dark version.
This might be a weird criticism but... making photos taken in the dark look like they are not actually dark seems kind of like a weird thing to do? I've struggled with my micro 4/3 camera to capture accurate night photographs, but the last thing I wanted of them was to be brighter than I was perceiving them to be.
That said, the effect of some of these photographs is striking, and I'm sure the tech is interesting.
I hear what you're saying. But I almost always capture photos for data, not art, and would really benefit from this. As long as it is a setting, we both win.
I think the idea is that you could capture a better photo with more detail up front then post-process afterwards to how you want it to look. Although I get what you're saying, some of the night photos looked great on the default camera.
Prepare for a lot of cute sleeping baby photos on your feeds, folks. That's what I'll be using this for.
Now if we could only get this on APS-C & 1" compacts like the Sony RX100 or Fujifilm XF10, with first-class smartphone integration and networking.
My thoughts exactly. If a little IS work and some great noise reduction like they've shown here can look this good, can you even imagine what an APS-C or full frame could pull off?
Any hardware reason for it to only work on Pixel phones?
It supports a better camera API. Some people have hacked it to work on non-Pixel devices already, though.
See: https://www.celsoazevedo.com/files/android/google-camera/
Can you say more precisely what version you would need to get the new night stuff? I tried installing the latest "recommended" version on my Nexus 5x, but it refused to even install it.
The Huawei P20 shipped in April with this feature -- I look forward to DxOMark's analysis of the Pixel 3 compared to the P20, which currently remains on top: https://www.dxomark.com/category/mobile-reviews/
Upgrading from a 3-year-old Samsung S6, where I could almost watch the battery percentage drop off percent by percent, the P20 Pro's 4000 mAh battery has been great (too bad wireless charging didn't appear until the new Mate 20 Pro).
The Huawei does not have this feature. Look at the review in your own link and you will see that the images for low light are comparable to the pixel or iPhone. If you compare night sight to those, it's completely different.
https://www.youtube.com/watch?v=wBKHnKkNSyw
Except the Huawei does and in actual same-setting situations the results are better than the Pixel.
You're confusing sensor size with the algorithm again. The Huawei sensor is twice as large as the Pixel 2's, which is why the P20 is vastly better even with night mode off on both cameras. It's also why turning night mode on is not as great a leap on the P20 as it is on the Pixel.
Incredible.
Did you actually watch the video?
The P20 Pro does exactly what this new night mode does, and did it months ago. It does image stacking (which is not a new approach). In direct comparison testing -- in that video -- it yields better results.
This whole discussion has just been bizarre.
There's really no point in continuing this, since I don't think you understand what the size of the camera sensor does in low light. I was simply pointing out, over and over, that it is not the same feature Huawei had. Just as, if a $50 point-and-shoot with extremely complex software produced the same image quality as a $5000 Nikon, you couldn't dismiss it by saying "this technology already exists." Huawei has better camera hardware in almost every dimension you can imagine, and more than half the pictures in the video you sent are worse on the Huawei despite those advantages. I did not say that Huawei can't take any pictures that are better than the Pixel. I said it's not the same technology, and dismissing this as "it's already been done" is ludicrous. If the Pixel 2 had the same hardware as the P20, the results would be even more impressive.
Dear god. Incredible.
You are completely and utterly wrong. Yet you continue. Amazing.
And you're not the first to deflect by claiming it's someone else's ignorance.
The P20 Pro does image stacking. Period. That is exactly what this new Pixel mode does. In actual results the P20 Pro is better.
>That is exactly what this new Pixel mode does. In actual results the P20 Pro is better.
I'm not the person you've been talking to, but I don't think I'd agree with that statement. Take the video you linked earlier: the Pixel frequently gives better results, for example this one shot https://youtu.be/wBKHnKkNSyw?t=227
Note that the Pixel 2 has a much smaller sensor, and the exposure time on the P20 is 18 times longer, and yet the Pixel generates a much sharper image. What you're saying is correct, that the P20 is using some very advanced image stabilization to get results that good from 6 seconds of data, but the Pixel seems to clearly offer more advanced software.
The example shot you gave is clearly better on the P20. Also the P20 has a slightly larger sensor.
Further, you're reading entirely too much into the exposure times. They are artificial, and either camera's software can choose to put whatever number it wants in there: the aggregate time, the average time, the theoretical equivalent time, etc. They are not the actual times.
No, it's not better. The P20 is over-exposed, causing the colors to be off. The level of detail is equivalent, but the colors on the P20 are worse. And no, it's not slightly larger: the sensor is TWICE as large in area as the Pixel 2's.
I took actual photos with the pixel 2, and I know the actual time taken. It's less than 3 seconds every time. By all accounts and reviews I've seen, the P20 is 10-25 seconds. Show me a review saying otherwise.
Thanks for explaining this clearly.
At some point, when you realize literally everyone in the thread disagrees with you, you have to wonder where you went wrong. No?
Incredibly, another person basically says you're wrong (adding their own misinformation into this), and you still claim to be right.
This thread is dominated by Pixel... fanboys and Googlers. You can be as wrong as you like; it doesn't really bother me. But your bull-headedness about your absolute wrongness is simply spectacular.
I'm hesitant to bring this up, for lots of different reasons, but people I know and mostly trust, who work in places where they should be in a position to know, all tell me the same thing: "Never buy Huawei anything, for any reason." Anybody have any real knowledge about this? @danielmicay is doing some interesting work related to being able to verify different Android builds, but I haven't tested it yet.
I'd assume it'd be in relation to some sort of China-related backdoor or datamining. About half a year ago, intelligence agencies also publicly recommended against buying Huawei/ZTE phones, presumably for whatever reasons the people you mentioned may be privy to.
The secret: They are not made by a US company. It is called "Propaganda" outside the US.
Unfortunately the P20 does not currently have a warranty in the United States.
Kind of a tangent, but it was really cool to see a picture of the author's Schiit Jotunheim headphone amp in the article. One of the founders wrote an amazing book on building a hardware startup: http://lucasbosch.de/schiit/jason-stoddard-shiit-happened-ta....
The biggest technical challenge here is getting the gyroscope data to work together with the stacking algorithm. It's hard to tune the gyro to work well for any given phone. A pure software solution that analyzes the perspective transformations would be too slow.
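As a toy illustration of what "gyro-assisted stacking" can mean (my own sketch, not Google's pipeline): for a pure camera rotation, the warp between two frames is the homography H = K R K^-1 built from the integrated gyro rotation and the camera intrinsics, which you can apply to each frame before merging.

    # Toy gyro-assisted frame alignment under a pure-rotation model (my own sketch).
    import numpy as np
    import cv2

    # Made-up intrinsics for a phone camera: focal length ~3000 px, 4000x3000 sensor.
    K = np.array([[3000.0,    0.0, 2000.0],
                  [   0.0, 3000.0, 1500.0],
                  [   0.0,    0.0,    1.0]])

    # The rotation would come from integrating gyro angular velocity between the
    # two frames' timestamps; here it's a small made-up hand-shake rotation vector.
    R, _ = cv2.Rodrigues(np.array([0.002, -0.001, 0.0005]).reshape(3, 1))

    def align_to_reference(frame, K, R):
        """Warp `frame` back toward the reference view by undoing the rotation R."""
        H = K @ R.T @ np.linalg.inv(K)   # R.T is the inverse of an orthonormal rotation
        return cv2.warpPerspective(frame, H, (frame.shape[1], frame.shape[0]))

Presumably the hard part the parent alludes to is calibration: gyro bias, the time offset between the IMU and the shutter, and rolling-shutter effects all have to be nailed down per device.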
Is it hard to tune the gyro because its accuracy is too low, or because it's accurate to begin with but degrades over the course of the long exposure?
Is a pure software solution even reliable enough under these conditions? Slowness can be worked around by doing it in the background, and you get a notification when it's complete. Some people would be okay if that's the only way to get photos they wouldn't otherwise be able to get, short of buying an SLR.
They probably have to have really well-calibrated gyros for Daydream, with temperature calibration etc., unless the IMU/gyro is in the Daydream holder itself.
All those shots look amazing, but they're of stationary objects.
I really want to know how that works for people! 99% of photos I take are of people, and the lighting is always bad.
Are there any photos of people?
There's a photo of the author in the article.
How does the Pixel's implementation of low-light photography differ from Samsung's? Are they comparable in photo quality?
Wouldn't video still be extremely blurry? This is mostly for things not moving / pictures
I wonder if this technology will eventually supersede military night-vision goggles. Having the ability to add color perception at long distances could be useful for identifying things at night.
You are right that this would be useless in its current state for video. But if it ever got smart enough to start stacking images onto a 3D model of the world, you could do some incredible stuff.
Why not use the actual article title?
"Google’s Night Sight for Pixel phones will amaze you"
"will amaze you" is an unnecessary clickbaity addition. They could have left Google in though.
How are you going to do a review of Night Sight and not even go outside? Every photo just taken in a room with the lights turned off. Come on, man. Tell your editor he needs to wait until nightfall.
Huh? There are a bunch in the second half of the article.
Ah. An update was posted literally in the last ten minutes. This article came out yesterday (when I first saw it).
I loaded this on a Pixel 2. The results are absolutely stunning. It does take ~4 seconds to take a picture, though.
That's great, but we should find a different name for "photos" that change image information in the process.
Interesting, but a tad rich with puffery.
Pre-OIS, Google did this with image stacking, which was a ghetto version of a long exposure (stacking many short-exposure photos and correcting the offsets via the gyro was necessary to compensate for inevitable camera shake). There is nothing new or novel about image stacking or long exposures.
What are they doing here? Most likely they're simply enabling OIS and allowing longer exposures than normal (note the smooth motion blur of moving objects, which is nothing more than a long exposure), and then doing noise removal. There are zero camera makers flipping their desks over this. It is usually a "pro" hidden feature because in the real world subjects move during long exposures and shooters are just unhappy with the result.
The contrived hype around the Pixel's "computational photography" (which seems more incredible in theory than in the actual world) has reached an absurd level, and the astroturfing is out of control.
I'm sorry, but while you may be a photography fan, you don't know what you're talking about here.
Stacking is quite the opposite of a "ghetto" version of a long exposure - it's the fundamental building block of being able to do the equivalent of a long exposure without its associated problems (motion blur from both camera and subject, high sensor noise if you turn up the gain, and over-saturating any bright spots).
Stacking is the de facto technique used for DSLR astrophotography for exactly these reasons -- see https://photographingspace.com/stacking-vs-single/
However, you're ignoring the _very substantial_ challenges of merging many exposures taken on a handheld camera. Image stabilization is great, but there's a lot of motion over, say, 1 second on a hand-held camera. Much more than the typical IS algorithm is designed to handle.
The techniques are non-trivial: http://graphics.stanford.edu/talks/seeinthedark-public-15sep...
There's a lot going on to accomplish this. It starts with the ability to do high-speed burst reads of raw data from the CCD (so that individual frames don't get motion blurred, and raw so you can process before you lose any fidelity by RGB conversion), and requires a lot of computational horsepower to perform alignment and do merging. I don't know what the Pixel's algorithms are, but merging of many images with hand-held camera motion benefits from state of the art results in applying CNNs to the problem, at least, from some of the results from Vladlen Koltun's group at Intel (who I'd put at the forefront of this, along with Marc Levoy's group at Google):
http://vladlen.info/publications/learning-see-dark/
I wouldn't be so quick to dismiss the technical meat behind state of the art low-light photography on cell phones.
"you don't know what you're talking about here"
You literally repeated exactly what I said image stacking was, yet led off by claiming that I don't know what I'm talking about. Classic.
The goal of both is to achieve the exact same result -- more photons for a given pixel. Stacking is a necessary compromise under certain circumstances -- lack of sufficient stabilization, particularly noisy sensor or environment, etc.
Further, this implementation is clearly long exposures (note the blur rather than strobe).
Sorry, but no - stacking is as much about dynamic range adaptation as noise, and that's why I'm arguing against your terming it a "ghetto version of a long exposure". It's not. A long exposure has a fundamental problem with saturation, as well as noise. It's not just about lack of stabilization, there's also the motion of the subject. Computational approaches can compensate for subject motion - long exposures cannot. Computational approaches can do dynamic range adaptation to avoid blow-out due to CCD pixel saturation. Long exposures cannot.
If you read the slides from the Levoy talk I cited, you'll note that they explicitly choose to under-expose the individual exposures to minimize motion blur and blowout.
(Marc is now at Google continuing his work on computational photography, and his group contributes to many of the cool things you see on the Pixel series.)
"Computational approaches can compensate for subject motion"
But they don't. They don't in this example. Moving subjects are a blur. As an aside, of course stacked photo frames are underexposed because it wouldn't make much sense otherwise.
Computational photography can do interesting things and holds a tremendous amount of promise. However every single example that I can find of this mode -- across the many astroturfed pages -- show a longer exposure than what the stock app normally allows. And with that the requisite blurring of any moving subject.
Are you arguing that EIS over a series of burst photos is incapable of making things better?
Everything I can see you saying -- much if it agreeable, like the fact that long-exposure OIS makes a lot of what this technology currently does possible without it -- is simply handwaving away the fact that EIS-over-burst with OIS can achieve things that OIS cannot by itself.
It seems to me that it's patently true that EIS has some benefits, and those benefits can be realised over the top of OIS.
There's obviously still a fair limit to OIS. I have somewhat shaky hands and even using something like Olympus' top range 5-axis IBIS, which is the best I've ever seen, I can still only shoot at 1/10". What can EIS do with a burst of 3x 1/20" exposures? Probably counter for my shaking a bit, at least. (If not for subject movement, yet.)
I simply do not see why you're discounting this so heavily.
Where did I discount EIS? EIS+OIS is a golden solution. It's what the Pixel 3 does. It's what the iPhone 8 does. It's what the Huawei P20 does.
This all gets very reductionist, but EIS over a series of bursts is a bad alternative to OIS. It will be garbage in->garbage out. EIS with OIS, however, gives you the benefits of OIS, with the safety valve and "time travelling" effect of EIS (in that it can correct where OIS made the wrong presumption, like the beginning of a pan).
>and even using something like Olympus' top range 5-axis IBIS
The ability of OIS to counter movement is a function of the focal length. Your Olympus probably has a 75mm equivalent or higher lens, where a small degree of movement is a large subject skew. That smartphone probably has a 26-28mm equivalent lens. Small degrees of movement are much more correctable.
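Back-of-envelope on that point (my own numbers, assuming pure angular shake): blur on the sensor from a rotation of theta is roughly f*tan(theta), so the same shake costs proportionally more at longer focal lengths.

    % Blur from an angular shake of theta at focal length f (same-format equivalents):
    b \approx f \tan\theta \approx f\,\theta
    % For theta = 0.1 degree (~1.7 mrad):
    %   f = 28 mm  ->  b ~ 0.05 mm
    %   f = 75 mm  ->  b ~ 0.13 mm
    % so stabilization has a much easier job on a wide smartphone lens.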
EIS is brilliant. OIS is better for small movements, but add EIS and it's great. Computational photography is brilliant. However Google has really, really been pouring out the snake oil for their Pixel line.
> Your Olympus probably has a 75mm equivalent or higher lens
50mm equivalent in 135 terms, but yes, larger than 28. (I've since moved on to an X100F, but that's neither here nor there. :)
> EIS is brilliant. OIS is better for small movements, but add EIS and it's great. Computational photography is brilliant. However Google has really, really been pouring out the snake oil for their Pixel line.
This is what I was missing. It seemed you were arguing that computational photography is not capable of much, but you're more just pointing out that this computational photography is not doing much, despite Google's claims to the contrary.
I'd agree with you that this is not exactly revolutionary stuff.
> But they don't. They don't in this example. Moving subjects are a blur.
If you stack the original exposures together, you'll get ghosting and not a blur. The natural-looking blur is a result of computation.
> of course stacked photo frames are underexposed, wouldn't make sense otherwise
Except it does make sense if you want to capture more shadow detail; this is how HDR images are made.
You're severely underestimating the amount of computation involved in getting these shots. These are all handheld, and as @dgacmu mentions can benefit from exposure bracketing which gives much better results than a single long exposure.
Of course you could already get similar shots with a good camera and technique; the fact that these are handheld shots coming from a mobile device, straight out of the camera, is the impressive part.
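To illustrate the exposure-bracketing point, here's roughly what a weighted merge can look like (assuming a linear sensor response; the weights and clipping threshold are made-up simplifications of mine):

    # Toy exposure-weighted merge of a bracket of linear raw frames into a radiance map.
    import numpy as np

    def merge_bracket(frames, exposure_times):
        """frames: float arrays scaled to [0, 1]; exposure_times: seconds per frame."""
        num = np.zeros_like(frames[0], dtype=np.float64)
        den = np.zeros_like(frames[0], dtype=np.float64)
        for img, t in zip(frames, exposure_times):
            w = 1.0 - np.abs(2.0 * img - 1.0)     # trust mid-tones most
            w = np.where(img > 0.95, 0.0, w)      # throw out blown-out pixels entirely
            num += w * (img / t)                  # each frame's estimate of scene radiance
            den += w
        return num / np.maximum(den, 1e-6)        # weighted-average radiance map

Tone-map that radiance map back down and you keep shadow detail without blowing out the bright spots, which a single long exposure can't do.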
If it was so simple to take handheld long exposure photos, how come no other phone had done it yet? I think you are overly simplifying this. At a high level it might be a simple concept, but clearly getting it working in practice isn't simple otherwise every company out there would've added it years ago.
Lots of other phones do let you do it, usually in pro or manual setting. It requires a pre-requisite of good OIS (which paradoxically the Pixel 1 didn't have, claiming that it wasn't necessary -- because of some magic AI sauce or something -- and seeing that noise repeated across the tubs. They added it with 2 and 3) and usually it is hidden behind an interface.
Why? Because 99.9999% of smartphone photos in real use (e.g. not in a review), give or take 100%, are of people. People move. Long exposures just lead to bad outcomes and blurred people.
I mean seriously search the net for Pixel 3 night mode. It's like the Suit Is Back. They're even using the same verbiage across them. And the uproarious nonsense about Google using AI to colourize is just...well a place like HN should just be chuckling at it.
> Lots of other phones do let you do it, usually in pro or manual setting.
That's single frame long exposure, not many frames merged. And as you mention, unless you have a tripod or extremely good low light stabilization, in most cases you'll end up with a bad photo.
I would definitely like to see more with actual people in them; your point about humans moving is a fair one, and I'd like to see how it handles them. That's where taking multiple shots and merging them vs. a single super-long shot would make a big difference, as you can smartly deblur things.
> Lots of other phones do let you do it, usually in pro or manual setting. It requires a pre-requisite of good OIS (which paradoxically the Pixel 1 didn't have, claiming that it wasn't necessary -- because of some magic AI sauce or something -- and seeing that noise repeated across the tubs.
Night Sight works perfectly fine on the Pixel 1.
> Why? Because 99.9999% of smartphone photos in real use (e.g. not in a review), give or take 100%, are of people. People move. Long exposures just lead to bad outcomes and blurred people.
I tested Night Sight with pictures of people and it also works fine in those cases. Even pictures taken with the front camera (without a tripod, etc.) look great.
The very article linked notes that moving subjects like people turn into a blur. Of the various submarine stories about this, I've seen a single picture of a kid, and the kid is blurred (despite standing as still as they can).
Your Pixel 1 likely sets a ceiling on the exposure time. The results may be great to you, but I doubt they compare to a Pixel 3. And of course in all of these cases about these great photos, there are zero examples from any other devices. Just with and without on a Pixel device.
While I'd like to believe that pulling off photos at this level is as easy as you proclaim (because then I would have a lot of old shots that are usable), the level of discounting you assert here is unfortunate, because it's simply not true. The proof is in the reality that nothing has done this good a job to date. If you do have examples that match or exceed it, please share and I will be in line to buy.
Just to be clear, I've been a serious photographer (amateur, not professional, but with all of the gear) for two decades, and actually made a pretty popular computational photography app for Android.
I honestly don't know which part you're doubting. Long exposures? Do you doubt that other cameras can do long exposures? Do you doubt that they can do noise reduction? Do you doubt that OIS allows for hand-held long exposures, especially on wide-angle lenses? What are you doubting, because these are all trivial things that you can validate yourself.
As to examples, you're wide-eyed taking a puff piece with some absolutely banal examples and exaggerated descriptions -- and zero comparable photos from other devices -- by someone who apparently knows very little about photography. How should I counter that? I can find millions of night streetscape photos that absolutely blow away the examples given.
Generally if you're going to pander to a manufacturer, you at least talk about things like lux. In this case it's just "look, between this setting and that setting it's different, therefore no one else can do it".
I don't doubt it's possible at all, and it's really great that you're a serious photographer with the dev skills to write a camera app. But the reality here is that this hasn't been done and packaged up so well that it's easily repeatable by those not as elite as yourself with regard to photography. We all understand that stacking photos (HDR) is a thing, and long exposure is likely as old as photography itself. But you clearly don't see the value in making it usable by anyone, at any time, in a device you carry with you that does many other things well.
To give Google credit for advancing smartphone photography is very fair and very deserved. They may not have advanced pro-level equipment, process, or technology, but if that's what you're looking for, you've completely missed the obvious point. I said it last year in a similar post: I have a very viable camera that shoots amazing and consistent photos and videos without having to lug around my DSLR gear, and it's ready in seconds when I pull it out. Are they award-winning shots? Nope. But they're priceless to me, since I can capture my family and life moments with increasingly better results with less user awareness or input. Do I enjoy manual photography? Sure do; that's not the point, and I feel as though you've missed that.
I think it's already been said to you many times, but it's not just long exposure. They take many exposures, throw out the over-exposed ones (which SLRs don't do), and use AI to do smart color mapping in very dark regions. There are no cameras or software out there that produce results like this. Period. There are now plenty of Night Sight pictures compared next to the new iPhone. The iPhone looks objectively terrible compared to the Pixel. Is that because the Pixel has a much better sensor? No, or Apple would buy it. It's computational photography, and there's nothing Apple or Samsung can do in the short term to match it.
> There are no cameras or software out there that produce results like this. Period.
The Huawei P20 has a night mode that in many cases is superior. So much for the "period". Furthermore, it's gimmicky and has downsides that make it pertinent for only a tiny percentage of pictures, which is why it's hidden behind a "more" option. Apple doesn't have it because Apple is about making everything easy for the 99%.
Image stacking is not difficult. It is not new. Image stacking is effectively a long exposure (I've said this probably a dozen times, but still it's like people are correcting me), with some unique advantages, and some unique disadvantages. In every one of the examples given it is indistinguishable from a long exposure.
See my other comment in this thread. The Huawei does not have anything close. Look at the DxOMark review. The Huawei is comparable to both the Pixel 2 and iPhone in the dark shots, and may be better or worse depending on the photo. Now look at this:
https://www.reddit.com/r/GooglePixel/comments/9qzyry/pixel_2...
I know this is true because I've tried it. I have the camera loaded on a Pixel 2, and the iPhone is pitch dark in images where the Pixel is fully illuminated with Night Sight.
More details here: https://www.anandtech.com/show/12676/the-huawei-p20-p20-pro-...
And remember, AnandTech did not have Night Sight. The P20's quality is attributed to the larger sensor.
https://www.youtube.com/watch?v=wBKHnKkNSyw
"The Huawei does not have anything close"
The P20 Pro's night mode is arguably better than the Pixel's. It is sharper, works in worse light, and has more natural colours. You also seem to be confused into thinking that DXOMark enables every special mode. They don't. The reviews are overwhelmingly just the auto settings.
The argument I'm seeing in favor of the Pixel is generally "waves hand {magic AI!}". Sorry, I don't care how much HN is infected by Googlers and Pixel fanboys; there is zero evidence of any magical AI in the Pixel results, and they look absolutely bog standard.
> actually made a pretty popular computational photography app for Android.
Gallus looks interesting, I haven't heard about it before. Any thoughts on when you're bringing it back?
I can see your point: this basically just looks like long exposures, or stacked exposures, which is basically the same thing of letting more photons hit the sensor, aligned using OIS.
Any thoughts on why Apple, as the other leading phone maker with a heavy emphasis on camera quality, has not implemented anything like it? Not to discount the difficulty, but OIS-aligned long exposures kind of seem like low-hanging fruit. Instead, they keep trying to open the aperture more.
Well, to be fair, Google hasn't implemented it yet either. There could be significant reasons for that, like the results being worse than the default except in specific situations.
Well, Apple dropped the ball on camera quality some time ago.
Seems pretty damn slow if it's only stacking multiple images.