Image: Florian Kainz/Google

On a full moon night last year, Google software engineer Florian Kainz took a photo of the Golden Gate Bridge, with the city of San Francisco in the background, using professional camera equipment: a Canon EOS-1D X and a Zeiss Otus 28mm F1.4 ZE lens.

When he showed the results to his colleagues at Google Gcam, a team that focuses on computational photography, they challenged him to re-take the same shot with a smartphone camera. Google's HDR+ camera mode on the Google Nexus and Pixel phones is one of Gcam's most interesting products. It achieves decent image quality at low light levels by shooting a burst of up to ten short exposures and averaging them into a single image, reducing blur while capturing enough total light for a good exposure.
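The principle behind this burst averaging can be sketched with a toy example: each short exposure is the same scene plus independent random noise, and averaging N frames keeps the signal while shrinking the noise by roughly the square root of N. The numbers below (scene level, noise level) are hypothetical, chosen only to make the effect visible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical flat scene at a constant brightness, captured with sensor noise.
scene = 100.0
noise_sigma = 20.0
n_frames = 10  # HDR+ shoots a burst of up to ten short exposures

# Each short exposure is the scene plus independent random noise.
burst = scene + rng.normal(0.0, noise_sigma, size=(n_frames, 64, 64))

# Averaging the burst keeps the signal but reduces the noise by ~sqrt(N).
merged = burst.mean(axis=0)

print(burst[0].std())  # noise of a single frame, close to 20
print(merged.std())    # noise of the average, close to 20 / sqrt(10)
```

The same total amount of light is captured either way; splitting it into short exposures simply keeps each frame sharp enough to align and average.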

However, being an engineer, Florian wanted to find out what a smartphone camera can do when pushed to the current limits of the technology, so he wrote an Android camera app with manual control over exposure time, ISO and focus distance. When the shutter button is pressed, the app waits a few seconds and then records up to 64 frames with the selected settings. The app saves the frames as DNG raw files, which can then be downloaded to a PC for processing.

He used the app to capture several night scenes, including an image of the night sky, with a Nexus 6P smartphone, which is capable of shutter speeds up to 2 seconds at high ISOs. On each occasion he also shot a burst of black frames after covering the camera lens with opaque adhesive tape. Back at the office, the frames were combined in Photoshop. Individual images were, as you would expect, very noisy, but computing the mean of all 32 frames cleaned up most of the grain, and subtracting the mean of the 32 black frames removed faint grid-like patterns caused by local variations in the sensor's black level.
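The stacking step described above can be expressed compactly in NumPy. This is a hypothetical sketch of the equivalent operation, not Kainz's actual workflow (he did these steps manually in Photoshop): average the light frames, average the black frames, and subtract one from the other to remove fixed-pattern offsets in the sensor's black level.

```python
import numpy as np

def stack_burst(light_frames, black_frames):
    """Mean-stack a burst and subtract the mean black frame.

    light_frames: list of 2-D arrays decoded from the raw exposures.
    black_frames: list of 2-D arrays shot with the lens covered.
    Averaging suppresses random noise; subtracting the averaged black
    frames removes the grid-like fixed-pattern variations in black level.
    """
    light_mean = np.stack(light_frames).astype(np.float64).mean(axis=0)
    black_mean = np.stack(black_frames).astype(np.float64).mean(axis=0)
    return light_mean - black_mean

# Synthetic demonstration: a flat signal plus a fixed grid pattern and noise.
rng = np.random.default_rng(1)
pattern = np.tile([[0.0, 5.0], [5.0, 0.0]], (16, 16))  # fake fixed-pattern offset
lights = [50.0 + pattern + rng.normal(0, 10, (32, 32)) for _ in range(32)]
blacks = [pattern + rng.normal(0, 10, (32, 32)) for _ in range(32)]
result = stack_burst(lights, blacks)
```

After stacking, `result` is close to the flat 50-unit signal: the random noise has averaged down and the grid pattern has cancelled out.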

The results are very impressive indeed. At 9 to 10MP the images are smaller than the output of most current DSLRs, but the photos are sharp across the frame, there is little noise, and dynamic range is surprisingly good. Getting these results took a lot of post-processing work, but with smartphone processors becoming ever more powerful, it should only be a matter of time before the sort of complex processing Florian did manually in Photoshop can be done on the device itself. You can see all the images at full resolution and read Florian's detailed description of his capture and editing workflow on the Google Research Blog.