Google's latest Pixel 3 smartphone generation comes with the company's new Night Sight feature, which allows users to capture clean, well-exposed images in near darkness without a tripod or flash. Today Google published a post on its Research Blog explaining in detail the computational photography and machine learning techniques behind the feature, and describing the challenges the development team had to overcome in order to capture the desired image results.

Night Sight builds on Google's multi-frame-merging HDR+ mode that was first introduced in 2014, but takes things a few steps further, merging a larger number of frames and aiming to improve image quality at extremely low light levels between 3 lux and 0.3 lux.

One key difference between HDR+ and Night Sight is the latter's longer exposure times for individual frames, which allow for lower noise levels. HDR+ uses short exposures to maintain a minimum frame rate in the viewfinder and to enable instant image capture via zero-shutter-lag technology. Night Sight, by contrast, waits until after you press the shutter button before capturing its frames, which means users need to hold still for a short time after pressing the shutter, but are rewarded with much cleaner images.
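
To make the distinction concrete, here is a minimal sketch of the two capture strategies in Python. The Camera stub, buffer size, frame counts and function names are illustrative assumptions, not Google's actual API or parameters.

```python
# Minimal sketch of zero-shutter-lag vs. positive-shutter-lag capture.
# All names and numbers are hypothetical illustrations, not Google's API.
from collections import deque

class Camera:
    """Stub camera that returns a placeholder 'frame' for a given exposure time."""
    def capture(self, exposure_s):
        return {"exposure_s": exposure_s}

def hdr_plus_capture(ring_buffer):
    """Zero shutter lag: HDR+ merges short viewfinder exposures that were
    already sitting in a ring buffer when the shutter button was pressed."""
    return list(ring_buffer)

def night_sight_capture(camera, num_frames, exposure_s):
    """Positive shutter lag: Night Sight starts capturing its longer exposures
    only after the button press, so the user must hold still while this runs."""
    return [camera.capture(exposure_s) for _ in range(num_frames)]

# The viewfinder keeps the ring buffer topped up with short exposures (ZSL case).
camera = Camera()
viewfinder_buffer = deque((camera.capture(1 / 30) for _ in range(9)), maxlen=9)

hdr_frames = hdr_plus_capture(viewfinder_buffer)        # instant: frames already exist
night_frames = night_sight_capture(camera, 15, 1 / 15)  # slower: captured after the press
```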

The longer per-frame exposure times could also result in motion blur caused by hand shake or moving objects in the scene. This problem is solved by measuring motion in the scene and setting an exposure time that minimizes blur. Exposure times also vary based on a number of other factors, including whether the camera features OIS and the device motion detected by the gyroscope. In addition to varying per-frame exposure, Night Sight also varies the number of frames that are captured and merged: six if the phone is on a tripod and up to 15 if it is handheld.
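
Google doesn't publish the exact heuristics, but the motion-metering idea can be sketched roughly as follows; the thresholds, scaling factors, and function names here are purely illustrative assumptions.

```python
# Hypothetical sketch of motion metering: pick a per-frame exposure short enough
# to limit blur, then pick the frame count. All values are illustrative, not
# taken from Google's implementation.

MAX_EXPOSURE_HANDHELD_S = 1 / 15   # assumed handheld per-frame cap
MAX_EXPOSURE_TRIPOD_S = 1.0        # assumed cap when the phone is perfectly still

def plan_burst(gyro_motion_deg_s, scene_motion_px_s, has_ois, on_tripod):
    """Return (per-frame exposure in seconds, number of frames to merge)."""
    if on_tripod:
        # A stationary phone tolerates long exposures, so fewer frames are needed.
        return MAX_EXPOSURE_TRIPOD_S, 6

    # OIS compensates for some hand shake, so allow a somewhat longer exposure.
    shake = gyro_motion_deg_s * (0.5 if has_ois else 1.0)
    motion = max(shake, scene_motion_px_s / 100.0)  # crude combined motion score

    # More motion -> shorter exposure to keep blur acceptable.
    exposure = min(MAX_EXPOSURE_HANDHELD_S, 1.0 / max(motion * 30.0, 15.0))
    return exposure, 15

print(plan_burst(gyro_motion_deg_s=2.0, scene_motion_px_s=50.0, has_ois=True, on_tripod=False))
```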

An interesting problem with very dim scenes is how to properly white balance them, given the weak or strongly colored light sources often present in dark scenes. To solve this, Google took a learning-based approach to white balance, training its AWB algorithm to discriminate between poorly and well white-balanced images. They did this by hand-correcting the white balance of images captured on the Pixel 3 with its traditional AWB, then using these corrected images to train the algorithm to suggest appropriate color shifts that achieve a more neutral output. The results are impressive (see below).
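
As a rough illustration of the learning-based idea (not Google's actual algorithm), one can imagine a small model that maps simple color-cast statistics to per-channel gains. The features, model, and weights below are toy stand-ins; a real system would learn its parameters from the hand-corrected training images described above.

```python
# Toy sketch of learning-based AWB correction: a tiny model predicts per-channel
# gains from crude image statistics, and the gains are applied to neutralize the
# color cast. Everything here is an illustrative assumption.
import numpy as np

def awb_features(rgb):
    """Log-chromaticity of the mean color: a crude summary of the scene's cast."""
    mean = rgb.reshape(-1, 3).mean(axis=0) + 1e-6
    return np.array([np.log(mean[0] / mean[1]), np.log(mean[2] / mean[1])])

def predict_gains(features, weights, bias):
    """Tiny linear 'model' standing in for the learned estimator: it maps the
    cast features to log-gains for the red and blue channels (green fixed at 1)."""
    r_gain, b_gain = np.exp(weights @ features + bias)
    return np.array([r_gain, 1.0, b_gain])

def apply_awb(rgb, gains):
    return np.clip(rgb * gains, 0.0, 1.0)

# Placeholder parameters; a real system would learn these from training data.
weights = np.array([[-1.0, 0.0], [0.0, -1.0]])
bias = np.zeros(2)

night_scene = np.random.default_rng(0).random((64, 64, 3)) * np.array([1.0, 0.7, 0.4])  # warm cast
corrected = apply_awb(night_scene, predict_gains(awb_features(night_scene), weights, bias))
```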

Google's learning-based white balance approach yields pleasing colors in this nighttime cityscape, more pleasing than the same shot taken without Night Sight. Photo: Rishi Sanyal

Frame alignment and merging are additional challenges that you can read all about in detail on the Google Research Blog. Of particular importance: Night Sight feeds the 15 frames into its 'Super Res' pipeline, which means you don't just get low-noise images, you also get far more detail thanks to the ability to resolve inter-pixel detail and forgo demosaicing altogether (which also means far less aliasing and moiré). This applies not just to nighttime shots but also to daytime shots, if you enable Night Sight.
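
For readers curious how merging raw frames can side-step demosaicing at all, here is a heavily simplified sketch: because hand shake shifts the Bayer mosaic between frames, each output pixel can eventually collect genuine red, green, and blue samples. Integer-pixel offsets and a plain average are assumptions made for brevity; the real pipeline performs sub-pixel alignment, robustness weighting, and kernel-based reconstruction.

```python
# Highly simplified sketch of merging raw Bayer frames without demosaicing.
# Assumes known integer-pixel offsets and a plain average, which is far cruder
# than the actual Super Res pipeline.
import numpy as np

def bayer_color(y, x):
    """Color of a Bayer site in an RGGB pattern: 0=R, 1=G, 2=B."""
    if y % 2 == 0:
        return 0 if x % 2 == 0 else 1
    return 1 if x % 2 == 0 else 2

def merge_raw_frames(frames, offsets):
    """Accumulate aligned raw samples into per-channel sums, then normalize."""
    h, w = frames[0].shape
    acc = np.zeros((h, w, 3))
    count = np.zeros((h, w, 3))
    for frame, (dy, dx) in zip(frames, offsets):
        for y in range(h):
            for x in range(w):
                oy, ox = y - dy, x - dx          # position in the reference frame
                if 0 <= oy < h and 0 <= ox < w:
                    c = bayer_color(y, x)
                    acc[oy, ox, c] += frame[y, x]
                    count[oy, ox, c] += 1
    # Pixels that never received a sample for a channel (image borders) stay zero.
    return acc / np.maximum(count, 1)

rng = np.random.default_rng(1)
frames = [rng.random((8, 8)) for _ in range(4)]
offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]       # shifts that cover all four Bayer sites
rgb = merge_raw_frames(frames, offsets)
```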

Our science editor Rishi Sanyal also had a closer look at Night Sight and the Pixel 3's other computational imaging features in this article.