Features

The Pixel 2 is packed with cool features that add to its overall value as a photographic tool. Below we've called out those that are new, as well as those that most pertain to photographers.

Auto HDR+ and ETTR metering

The Pixel 2 handles itself well even when shooting contrasty scenes, thanks to HDR+.

ISO 98 | 1/23000 sec | F1.8

Auto HDR+ is the fancy term Google uses for its photographic black magic, and unlike on its predecessor, it's switched on by default in the Pixel 2's camera app. And it really is magic: when you're in the camera app, whether or not you've pressed the shutter, the phone is constantly capturing exposures and storing the most recent nine in memory. This nine-frame, full-resolution buffer is constantly updating, and tends to use exposures short enough to avoid motion blur or highlight clipping. We found that in good light the exposure time of each frame rarely dropped below 1/120 sec, and even indoors it rarely dropped below 1/60 sec (1/30 sec when things got really dark).
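If you like to think in code, that constantly refreshing buffer can be pictured as a simple ring buffer of frames. The sketch below is our own illustration of the idea, not Google's implementation; the buffer size is the only detail taken from the behavior described above, and the callback names are hypothetical.

```python
from collections import deque

import numpy as np

BUFFER_SIZE = 9  # the nine-frame buffer described above

# A deque with maxlen acts as a ring buffer: once it's full, appending a
# new frame silently discards the oldest one.
frame_buffer = deque(maxlen=BUFFER_SIZE)

def on_sensor_frame(raw_frame: np.ndarray) -> None:
    """Called for every exposure the sensor streams while the camera app is open."""
    frame_buffer.append(raw_frame)

def on_shutter_press() -> list:
    """'Go back in time': hand the most recent frames to the HDR+ merge step."""
    return list(frame_buffer)
```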

When you press the shutter button, the camera effectively goes back in time, grabbing the last nine frames, aligning them and then averaging them into an image greater than the sum of its parts. The details of the process are fascinating: the algorithm breaks each of these frames up into thousands of tiles and aligns each tile of each frame individually. This tile-based alignment allows the software to deal with a number of things: transient objects can be removed, and shots where the subject is blurred from movement can still be used by simply ignoring the portions of the image that are blurred. If a subject is still sharp in a frame but has moved relative to another frame, you can even imagine moving the sharp 'tiles' of the subject back to re-align them with the other frames. The whole point here is that the more frames (and tiles from those frames) you can use after alignment for averaging, the less noisy your overall image will be.
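To make the align-and-average idea concrete, here's a deliberately crude sketch in Python. The tile size, search range and rejection threshold are our own placeholder values, and Google's production pipeline is far more sophisticated (sub-pixel alignment, robust merging in the raw domain, and so on), but the structure is the same: match each tile against the reference frame, and only average in the tiles that align well.

```python
import numpy as np

TILE = 64        # tile size in pixels (our assumption)
SEARCH = 4       # +/- pixels searched when aligning a tile
MAX_ERR = 10.0   # mean absolute difference above which a tile is rejected

def merge_frames(frames):
    """Toy tile-based align-and-average over a burst of grayscale frames."""
    ref = frames[0].astype(np.float64)
    h, w = ref.shape
    acc = ref.copy()            # running sum of aligned pixel values
    count = np.ones_like(ref)   # how many frames contributed to each pixel

    for alt in frames[1:]:
        alt = alt.astype(np.float64)
        for y in range(0, h - TILE + 1, TILE):
            for x in range(0, w - TILE + 1, TILE):
                ref_tile = ref[y:y + TILE, x:x + TILE]
                best_tile, best_err = None, np.inf
                # brute-force search for the shift that best matches this tile
                for dy in range(-SEARCH, SEARCH + 1):
                    for dx in range(-SEARCH, SEARCH + 1):
                        yy, xx = y + dy, x + dx
                        if yy < 0 or xx < 0 or yy + TILE > h or xx + TILE > w:
                            continue
                        cand = alt[yy:yy + TILE, xx:xx + TILE]
                        err = np.mean(np.abs(cand - ref_tile))
                        if err < best_err:
                            best_tile, best_err = cand, err
                # only average in tiles that align well; content that moved or
                # blurred too much to match is simply left out of the stack
                if best_tile is not None and best_err < MAX_ERR:
                    acc[y:y + TILE, x:x + TILE] += best_tile
                    count[y:y + TILE, x:x + TILE] += 1

    return acc / count  # per-pixel average over however many frames aligned
```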

The Pixel 2's sensor can behave like one nine times its size

Perhaps equally important is the exposure strategy: as we mentioned, the Pixel 2 errs on the side of preserving highlights and preventing motion blur. Often this means shorter exposures, but shorter exposures combined with a smaller sensor mean less dynamic range and more noise. That's where the magic of image stacking comes in (something astrophotographers have been doing for decades). The camera exposes for the highlights it cares about, then relies on the stacking and averaging of nine frames to reduce noise by up to ~3.2 stops. In other words, the sensor can often behave like one nine times its size (approaching Micro 4/3).

HDR+ takes nine shorter exposures exposed to preserve highlights, then aligns them in a tile-based manner, after which images are averaged to reduce noise. The exposure philosophy is reminiscent of 'ETTR metering', and averaging nine frames allows the Pixel 2 camera to often behave like it has a sensor 9x its size - that's almost Micro 4/3 size! Photo courtesy of Google.
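The back-of-the-envelope arithmetic behind those figures is straightforward: averaging nine identically exposed frames cuts random noise by the square root of nine, i.e. 3x, which is the same signal-to-noise benefit you'd get from collecting nine times as much light - roughly log2(9) ≈ 3.2 stops, or a sensor with nine times the area.

```python
import math

N = 9                            # frames averaged by HDR+
noise_reduction = math.sqrt(N)   # random noise shrinks by sqrt(N) -> 3x
equivalent_stops = math.log2(N)  # same benefit as ~3.2 stops more light
sensor_area_factor = N           # ...or a sensor with 9x the area

print(f"{noise_reduction:.1f}x less noise, worth about {equivalent_stops:.1f} stops")
```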

And this all happens behind the scenes - as far as the user is concerned, they simply end up with a nice photo with good dynamic range and detail. In fact, HDR+ generally puts the Pixel 2 camera far ahead of the competition in its ability to capture candid moments, high contrast scenes, and moving subjects even in low light.

Again, HDR+ is not a new feature, but it's greatly evolved in the Pixel 2 compared to its predecessor, with more hardware under the hood to get the most out of it. New hardware includes a faster lens (F1.8 vs F2), optical image stabilization and a dedicated image processing chip. The faster aperture gives it greater light-gathering ability, OIS helps steady the shots that will be combined into the final image, and the dedicated processor now lets even third-party apps take advantage of Google's HDR+ algorithms.

And despite the complexity of all that is going on, the user sees almost zero shutter lag. Which brings us to a feature essential to that short shutter lag...

Dual Pixel Autofocus

The Pixel 2 offers rather intelligent autofocus, similar to that found on higher-end Canon cameras: every pixel on the sensor is split into left- and right-looking halves behind its microlens. This allows the camera to sample left and right perspectives behind the lens, which it can use to judge distance and drive the lens to the subject faster (it's a form of phase-detect AF). The result is very fast autofocus when shooting stills, and good continuous autofocus when shooting video.

Since Dual Pixel AF uses most if not all of the sensor, it continues to work well in low light

Since Dual Pixel AF uses most if not all of the sensor, it continues to work well in low light. The high-resolution image is binned down to a lower-resolution, lower-noise one for autofocus calculations, and the nine-frame image averaging always going on in the background helps the autofocus algorithms even more. This means we rarely found ourselves waiting for the camera to focus, even indoors in low light. That's in stark contrast to most other smartphones we've tested.
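As a rough illustration of the principle (our own sketch, not Google's code), the left- and right-looking sub-images over the focus region can be compared at a range of horizontal offsets; the offset that matches best indicates how defocused the region is and in which direction to drive the lens. In practice the sub-images would first be binned down, as described above, so noise doesn't swamp the comparison.

```python
import numpy as np

def estimate_phase_shift(left, right, max_shift=8):
    """Toy dual-pixel phase detection over a focus region.

    left/right are the (binned) sub-images seen by the left- and right-looking
    pixel halves.  A shift of zero means the region is in focus; the sign of
    the shift tells the lens which direction to move.
    """
    best_shift, best_err = 0, np.inf
    width = left.shape[1]
    for s in range(-max_shift, max_shift + 1):
        # overlap the two views at a candidate offset and measure the mismatch
        if s >= 0:
            a, b = left[:, s:], right[:, :width - s]
        else:
            a, b = left[:, :s], right[:, -s:]
        err = np.mean(np.abs(a.astype(np.float64) - b.astype(np.float64)))
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift
```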

Even in low light shooting, the Pixel 2 acquires focus quickly and accurately. For this image I tapped the area around the singer's mouth and microphone as my point of focus.

ISO 400 | 1/40 sec | F1.8

By default (unless you tap the screen) the Pixel 2 will focus on whatever is most central in the frame, or on a detected face. If it detects any movement, it'll instantly refocus. If there's continuous movement, it'll continuously refocus. Basically, focus works just like you'd want 'Auto AF' or 'AF-A' on ILCs to work.

A problem we frequently encountered, though, was that if the camera lost the face (say, your subject briefly looked sideways or away), it would instantly refocus on the center of the frame, which often happened to be the background. If we wanted a candid snapshot at that moment, we had to tap on the face first, otherwise the background would often end up in focus. Oddly enough, Portrait mode would still blur the background and keep the face sharp, but in these cases the face wasn't as sharp as it could have been had it been in focus to begin with.

Highlighting the detected and focused face would inspire confidence in the photographer during shooting

We hope this is improved, either via better real-time face/human detection or by re-weighting the algorithm to prioritize nearby objects that occupy a large, even if non-central, portion of the frame, whether or not they're detected as a face.

Furthermore, other smartphones place squares around detected faces, allowing users to easily tap a square to jump between subjects. But the Pixel 2 gives no indication whatsoever when its AI has found a face in the scene. Highlighting the detected and focused face would inspire confidence in the photographer during shooting, or at least let them know whether the face should be tapped for focus. With the extensive depth of field of phone cameras, it's often hard to tell in real time what's perfectly focused.

Portrait mode

An example of portrait mode. When shooting in portrait mode with the rear camera, the Pixel 2 uses a progressive blur: objects close to the subject are less blurred than objects significantly further away. This is noticeable if you open this image in a new window at 100% and compare the rock and sticks at lower right to the buildings at upper left.

ISO 56 | 1/2000 sec | F1.8

Blurred backgrounds are the new big thing in smartphone photography, and Google takes a two-step approach to creating the perfect blur in portrait mode using its rear camera. Since every pixel is split, each 'left-looking' pixel sees a slightly different perspective than each 'right-looking' pixel (or up-looking vs down-looking, depending on your orientation). The disparity between what these pixels see increases the further an object is from the plane of focus. You can see for yourself what these different perspectives look like in this animated GIF: if you look on the right, you'll see objects in front of and behind the girl's face moving up and down. That small level of disparity is enough to generate a depth map.
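The same left/right comparison used for autofocus, repeated patch by patch across the frame instead of over a single focus region, yields a coarse disparity (and hence depth) map. The sketch below is our own simplification: patch size, search range and the crude wrap-around shift are placeholders, and the real pipeline works at much finer granularity with sub-pixel precision.

```python
import numpy as np

def dual_pixel_disparity_map(left, right, patch=32, max_shift=4):
    """Per-patch disparity between the two half-pixel views.

    Larger absolute disparity means further from the plane of focus, which is
    exactly what a depth-dependent blur needs to know.
    """
    h, w = left.shape
    disp = np.zeros((h // patch, w // patch))
    for i in range(disp.shape[0]):
        for j in range(disp.shape[1]):
            y, x = i * patch, j * patch
            l = left[y:y + patch, x:x + patch].astype(np.float64)
            r = right[y:y + patch, x:x + patch].astype(np.float64)
            # try every candidate shift and keep the one with the smallest error
            errs = [np.mean(np.abs(l - np.roll(r, s, axis=1)))
                    for s in range(-max_shift, max_shift + 1)]
            disp[i, j] = int(np.argmin(errs)) - max_shift
    return disp
```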

In addition to this, subjects are identified through 'segmentation', Google's term for machine learning that uses a convolutional neural network to estimate which pixels in the scene belong to a human face or body and which do not. This information is used to fine-tune the depth map. Additionally, HDR+ ensures the depth maps are noise-free enough to be useful, which is why Portrait mode works even under challenging light. You can read more about the entire process in Google's detailed blog post, written by the very engineers who designed it.
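Putting the two steps together, a heavily simplified rendering pass might look like the sketch below - entirely our own construction, not Google's renderer. It takes a full-resolution depth map (the coarse disparity map above would first be upsampled and refined) plus the network's person mask, and blurs each pixel in proportion to its distance from the plane of focus while leaving segmented pixels untouched. The real pipeline produces a continuous, bokeh-like blur rather than a few discrete levels.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def portrait_blur(image, depth, person_mask, max_sigma=8.0):
    """Depth-dependent blur with a segmentation override.

    image, depth and person_mask are HxW arrays: depth is signed distance from
    the plane of focus, person_mask is 1 where the network believes a person is.
    """
    img = image.astype(np.float64)

    # How strongly to blur each pixel: more with distance from the focal
    # plane, and not at all where the subject was segmented.
    amount = np.abs(depth) / (np.abs(depth).max() + 1e-6)
    amount = np.where(person_mask > 0.5, 0.0, amount)

    out = img.copy()
    # A few discrete blur strengths stand in for a continuous depth-dependent
    # blur; the most distant pixels end up with the strongest blur.
    for level in (0.25, 0.5, 0.75, 1.0):
        blurred = gaussian_filter(img, sigma=max_sigma * level)
        out = np.where(amount >= level, blurred, out)
    return out
```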

The blur is quite convincing, and Google's algorithm chooses to keep a large depth-of-focus to ensure the entire face remains sharp.

The photo above is a good example of what you can expect from the Pixel 2's portrait mode. There's a forced 1.5x crop (~42mm equivalent field of view), which we have mixed feelings about, particularly as the resulting upscaled image isn't as sharp due to the digital crop. Our subject is in focus, yet well separated from the background, which is progressively blurred the further away it gets.

The blur is quite convincing, and Google's algorithm chooses to keep a large depth-of-focus to ensure the entire face remains sharp. However, if you zoom to 100%, you can see the Pixel 2 got confused around the edges of Jordan's glasses, blurring them away. While Google's machine learning is smart enough to identify faces in a variety of situations (indeed, it was trained using images of faces in various positions, with hats, sunglasses, ice cream cones and more), accessories like glasses sometimes still pose a point of confusion, particularly if the accessory isn't well separated from the background.

Portrait mode works in selfie mode as well, and it's actually really impressive considering it relies solely on segmentation:

The front-facing camera also offers a portrait mode, though it doesn't have dual pixels, so it cannot create a depth map the way the rear camera does. Instead, it relies solely on machine learning to create a segmentation mask. The blurring effect is more uniform and not as progressive as with the rear camera.

ISO 77 | 1/600 sec | F2.4

Panorama and Photo Sphere

The panorama mode does a great job not just of matching exposures across a scene, but of capturing a high dynamic range scene such as this one, shot at sunrise. A traditional camera exposing for the sky at this level would render the foreground dark or near black without dynamic range compensation modes. Shot on Pixel 2 XL.

Photo by Rishi Sanyal.

Panorama modes are nothing new, but they offer a wonderful and simple way to capture more of a breathtaking scene than you could with an ordinary photo. What's awesome about panoramas on the Pixel 2 is that they retain incredible dynamic range - have a look at the Arizona scene above or this sunset (and don't forget to zoom in to see the amount of detail resolved, even from a moving hot air balloon). The Pixel 2 will often capture more tones in a panorama than a comparable shot on an iPhone X. The Pixel 2 also offers a 'photo sphere' mode that works in much the same way, except you can capture a full 360-degree view - one you can pan and zoom around, or view using Google's Daydream VR headset.

If you use the Google Cardboard app, the really cool thing is that these panoramas are actually stereoscopic, offering an immersive experience. How does it generate stereo pairs of images with just a single camera? As you rotate the phone, the camera sweeps through the positions both your left and right eyes would have occupied were you looking around the scene yourself. Computationally, Google is able to extract these stereo pairs from all the images it captures as you rotate the camera - brilliant!
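One way to picture the trick (our simplification, not Google's actual method): because the camera's position changes slightly as the phone rotates, a vertical strip taken from one side of each frame sees the scene from roughly where your left eye would be, and a strip from the other side from roughly where your right eye would be. Stitching each set of strips separately yields one panorama per eye. The strip width and function name below are placeholders.

```python
import numpy as np

def toy_stereo_panoramas(frames, strip=32):
    """Build a crude left-eye and right-eye panorama from a rotating burst.

    Each frame contributes a narrow vertical strip just left of centre to one
    panorama and just right of centre to the other; real stitching would also
    warp and blend the strips rather than simply concatenating them.
    """
    w = frames[0].shape[1]
    left_eye = np.concatenate([f[:, w // 2 - strip: w // 2] for f in frames], axis=1)
    right_eye = np.concatenate([f[:, w // 2: w // 2 + strip] for f in frames], axis=1)
    return left_eye, right_eye
```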

In general, the Pixel 2 does a good job matching exposures in both the panorama and photo sphere modes. However, we were hard pressed to capture a photo sphere that didn't have at least one wonky stitching area.

Free Google Photos storage*

Google Photos

Whether you're shooting 4K video, stills or even Raw files in a third-party app, the Pixel 2 (once on Wi-Fi) will sync with your Google Photos account, automatically uploading everything for you. In the 12 years I've been using and writing about digital cameras, this might be the most time-saving feature I've ever encountered in a camera. To be fair, Apple has a similar offering, though we've generally found the Google version to be more reliable.

*Storage for full-size photos and videos is free until the end of 2020. After 2020, storage is still free, but only for new photos and videos uploaded in the compressed 'high quality' setting, which saves space. Everything captured before 2021 will remain full size. You can still opt to have originals backed up by default, but you'll have to pay for the extra storage required.