Deep Fusion

Google's Night Sight mode isn't just about better photos in low light. It uses burst photography and super resolution techniques to generate images with more detail, less noise, and less moiré, thanks to the lack of demosaicing: slight shifts from frame to frame allow the camera to sample red, green and blue information at each pixel location. 'Deep Fusion', arriving in a software update later this year, appears to be Apple's response to Night Sight.
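
To picture how frame-to-frame shifts can stand in for demosaicing, here is a toy sketch of the idea, not Google's actual pipeline: raw Bayer samples from shifted frames are accumulated onto a common grid so each output pixel eventually receives measured R, G and B values rather than interpolated ones. The function name, the integer-pixel offsets and the RGGB layout are our own assumptions for illustration.

```python
import numpy as np

def merge_bayer_burst(frames, offsets, height, width):
    """Toy illustration (hypothetical, not Google's algorithm): accumulate
    raw Bayer samples from shifted frames so each output pixel collects
    real R, G and B measurements instead of demosaiced (interpolated) ones.

    frames  -- list of 2D raw Bayer mosaics (RGGB pattern assumed)
    offsets -- per-frame (dy, dx) shifts in whole pixels; real pipelines
               estimate sub-pixel shifts, this sketch rounds for clarity
    """
    acc = np.zeros((height, width, 3))   # summed R, G, B samples
    cnt = np.zeros((height, width, 3))   # number of samples per channel
    # Channel index at each position of a 2x2 RGGB tile: R G / G B
    bayer = np.array([[0, 1], [1, 2]])
    for frame, (dy, dx) in zip(frames, offsets):
        for y in range(frame.shape[0]):
            for x in range(frame.shape[1]):
                oy, ox = y + dy, x + dx          # where this sample lands
                if 0 <= oy < height and 0 <= ox < width:
                    c = bayer[y % 2, x % 2]      # which channel was measured
                    acc[oy, ox, c] += frame[y, x]
                    cnt[oy, ox, c] += 1
    return acc / np.maximum(cnt, 1)      # average; unsampled cells stay zero
```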

Deep Fusion captures up to 9 frames and fuses them into a higher resolution 24MP image. Four short and four secondary frames are constantly buffered in memory, with older frames discarded to make room for newer ones. This buffering guarantees that the 'base frame' - the most important frame, to which all other frames are aligned - is taken as close to your shutter press as possible, ensuring very short or zero shutter lag so the camera captures your intended moment.
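
The pre-shutter buffering can be pictured as a simple ring buffer. The sketch below is our own illustration of that idea under the frame counts described above; the class and method names are hypothetical, not Apple's implementation.

```python
from collections import deque

class FrameRingBuffer:
    """Hypothetical sketch: fixed-size ring buffers hold the most recent
    short and secondary frames; older frames fall off automatically."""

    def __init__(self, short_frames=4, secondary_frames=4):
        self.short = deque(maxlen=short_frames)
        self.secondary = deque(maxlen=secondary_frames)

    def on_new_frame(self, short_frame, secondary_frame):
        """Called continuously while the camera is live; appending to a
        full deque silently evicts the oldest frame."""
        self.short.append(short_frame)
        self.secondary.append(secondary_frame)

    def on_shutter_press(self, capture_long_exposure):
        """Snapshot the buffer at press time, then capture the long exposure.

        The newest buffered frame - the one closest to the press - serves
        as the base frame to which the others are aligned.
        """
        frames = list(self.short) + list(self.secondary)
        base_frame = frames[-1] if frames else None
        frames.append(capture_long_exposure())   # taken after the press
        return base_frame, frames
```

Because the buffer is already full when you press the shutter, the base frame costs nothing to 'capture' - it was recorded moments before the press - which is what makes zero shutter lag possible.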

After you press the shutter, one long exposure is taken (ostensibly to reduce noise), and then all 9 frames are combined - 'fused' - presumably using a super resolution technique with tile-based alignment (described in the previous slide) to produce a blur- and ghosting-free, high resolution image. Apple's SVP of Worldwide Marketing Phil Schiller also stated that this is the 'first time a neural engine is responsible for generating the output image'. We look forward to assessing the final results.
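
To make the 'fuse' step concrete, here is a rough sketch of generic tile-based align-and-merge, in the spirit of published burst photography pipelines rather than Apple's actual (and undisclosed) algorithm. Each tile of the base frame is matched against nearby tiles in every other frame; tiles that still match poorly after alignment - likely subject motion - are down-weighted to suppress ghosting. The tile size, search radius and weighting constant are arbitrary choices for illustration.

```python
import numpy as np

def align_and_merge(base, alternates, tile=16, search=4):
    """Toy tile-based align-and-merge over grayscale frames (hypothetical).

    For each tile of the base frame, find the displacement within
    +/- `search` pixels that best matches each alternate frame, then
    average the aligned tiles with a robustness weight.
    """
    h, w = base.shape
    out = base.astype(np.float64)
    weight = np.ones_like(out)
    for alt in alternates:
        for y in range(0, h - tile + 1, tile):
            for x in range(0, w - tile + 1, tile):
                ref = base[y:y+tile, x:x+tile].astype(np.float64)
                best_err, best_patch = np.inf, None
                # Brute-force search for the best-matching displacement
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy and yy + tile <= h and 0 <= xx and xx + tile <= w:
                            cand = alt[yy:yy+tile, xx:xx+tile].astype(np.float64)
                            err = np.mean((cand - ref) ** 2)
                            if err < best_err:
                                best_err, best_patch = err, cand
                # Tiles that match poorly even at their best displacement
                # are weighted toward zero to avoid ghosting artifacts.
                w_tile = 1.0 / (1.0 + best_err / 100.0)
                out[y:y+tile, x:x+tile] += w_tile * best_patch
                weight[y:y+tile, x:x+tile] += w_tile
    return out / weight
```

In Apple's case the final combination is reportedly handled by the neural engine rather than a hand-tuned weighting like the one above, but the align-then-merge structure is the common thread across these burst pipelines.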