Standard image (left) and depth mask (right), Image: Google

Background-blurring portrait or bokeh modes have pretty much become a standard feature on dual-camera equipped phones. Similar effects can be achieved with single-lens devices, but operation tends to be more cumbersome, requiring more manual intervention, and the results are typically less realistic than on dual-camera setups.

However, on the new Pixel 2 models, Google has been able to implement a portrait mode on a single-lens phone that can compete with the dual-camera competition in terms of both operation and image results. And now, Marc Levoy and Yael Pritch—two of the engineers behind the Pixel 2 portrait mode—have taken the time to explain how it works in a comprehensive post on the Google Research Blog.

HDR+ picture without (left) and with (right) portrait mode, Image: Google

The Google Pixel 2 offers portrait mode on both its rear-facing and front-facing cameras, and uses a machine-learned neural network to generate a foreground-background segmentation mask. The front camera does its best using the neural network alone, while the rear camera additionally computes a depth map, refined using the parallax information provided by the Pixel 2 image sensor's dual-pixel technology.
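To get an intuition for how dual pixels yield depth, consider that each pixel's two halves see the scene through opposite halves of the lens, producing two slightly shifted views. The toy sketch below estimates that shift with naive block matching; it is purely illustrative (the function name and approach are my own, and Google's actual pipeline is far more sophisticated):

```python
import numpy as np

def dual_pixel_disparity(left, right, max_shift=4):
    """Toy disparity estimate between the two dual-pixel half-views.

    `left` and `right` are 2-D grayscale arrays. For each row we find
    the horizontal shift (within +/- max_shift) that best aligns the
    two views; larger shifts correspond to points further from the
    focal plane. Illustrative only, not Google's method.
    """
    h, w = left.shape
    best = np.zeros(h, dtype=int)
    for y in range(h):
        errors = []
        for s in range(-max_shift, max_shift + 1):
            shifted = np.roll(right[y], s)          # candidate alignment
            errors.append(np.mean((left[y] - shifted) ** 2))
        best[y] = int(np.argmin(errors)) - max_shift  # shift with lowest error
    return best
```

A real implementation would match small 2-D tiles with sub-pixel precision and then smooth the result, but the core idea — searching for the alignment that minimizes the difference between the two half-views — is the same.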

In a final step, the segmentation mask and the depth map are combined to calculate the amount of blur applied to each part of the image and generate the end result.
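Conceptually, this final step can be sketched as follows: pixels the segmentation marks as subject stay sharp, while everything else is blurred in proportion to how far it sits from the focal plane. The function below is a rough sketch under those assumptions (the name, formula, and parameters are hypothetical, not Google's actual math):

```python
import numpy as np

def blur_radius_map(depth, mask, focus_depth, max_radius=10.0):
    """Hypothetical per-pixel blur radius from a depth map and a
    foreground segmentation mask (1.0 = subject, 0.0 = background).

    Blur grows with distance from `focus_depth`, is capped at
    `max_radius`, and is zeroed wherever the mask marks the subject.
    """
    radius = max_radius * np.abs(depth - focus_depth) / focus_depth
    radius = np.clip(radius, 0.0, max_radius)
    return radius * (1.0 - mask)  # mask = 1 forces the subject sharp
```

A renderer would then apply a spatially varying blur using this radius map, which is what produces the synthetic bokeh in the final image.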

If you are interested in a more detailed description of the process, you can find it, along with a range of sample images and illustrations, on the Google Research Blog. Or stick around DPReview, because we'll be doing a deep technical dive on all things 'Portrait Mode' very soon!