Apple has just announced its iPhone 13 lineup, and one of the standout features is the new 'Cinematic Mode'. On the iPhone 13 Pro and iPhone 13 Pro Max models, Cinematic Mode allows you to adjust the aperture, or f-stop, to change the depth-of-field effect after you've shot your video. It also allows you to change what's in focus, or rack focus from one subject to another, after the fact.

Remember the Lytro Cinema camera the size of a small car that promised the same thing? Think that... but in your pocket.

With the iPhone 13/13 mini and iPhone 13 Pro/Pro Max, you can adjust the focus and f-stop in video after the fact. The interface appears to be similar to the one for Portrait Mode for stills.

When Cinematic Mode is enabled, the iPhone 13 and 13 Pro models generate a depth map alongside the video, which software can later use to selectively blur the foreground and background and simulate any chosen f-stop. Presumably, it can do so either by learning-based segmentation (identifying human or primary subjects) or by using the stereo disparity between the wide and ultra-wide, or telephoto and wide, cameras.* While the larger sensors in these phones lend the footage shallower native depth of field than previous phones, the depth map allows for additional computational blur, much like the Portrait Mode that has become so popular on smartphones.
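Conceptually - and this is our own illustrative sketch, not Apple's pipeline - a depth map turns refocusing into an image-processing problem: each pixel's blur radius grows with its distance from the chosen focal plane and shrinks as the simulated f-number increases. The box blur, the linear circle-of-confusion model and the `max_radius` cap below are all simplifying assumptions.

```python
import numpy as np

def box_blur(img, radius):
    """Naive box blur with edge padding; radius 0 returns a copy."""
    if radius == 0:
        return img.copy()
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += padded[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
    return out / (2 * radius + 1) ** 2

def synthetic_defocus(image, depth, focus_depth, f_number, max_radius=8):
    """Blur each pixel in proportion to its distance from the focal plane.

    The circle of confusion grows with |depth - focus_depth| and shrinks
    as the simulated f-number increases; max_radius caps the kernel size.
    """
    image = image.astype(float)
    coc = np.abs(depth - focus_depth) / f_number
    radius = np.clip((coc * max_radius).astype(int), 0, max_radius)
    result = np.zeros_like(image)
    for r in np.unique(radius):  # one blur pass per distinct radius
        blurred = box_blur(image, int(r))
        result[radius == r] = blurred[radius == r]
    return result
```

Changing `focus_depth` or `f_number` and re-running the composite is exactly the sort of 'after the fact' adjustment described above; a production pipeline would use a proper lens-blur kernel and handle occlusion edges far more carefully.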

Users can also change focus to any portion of the scene (provided it was acceptably in focus when shot), or create focus racks after the fact.

Since a depth map is saved with the video, you can change the focus or adjust the depth of field after the fact. You can even introduce focus racks, presumably indicated by the dots in the timeline above.
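Because the focal plane is ultimately just a number chosen against the stored depth map, a rack focus amounts to scheduling that number over the clip's frames. A hypothetical sketch - the smoothstep easing is purely our assumption, as Apple hasn't described its interpolation:

```python
import numpy as np

def rack_focus_schedule(start_depth, end_depth, n_frames):
    """Return one focal-plane depth per frame for a rack focus.

    Smoothstep easing keeps the transition from starting or
    ending abruptly (zero velocity at both endpoints).
    """
    t = np.linspace(0.0, 1.0, n_frames)
    eased = t * t * (3.0 - 2.0 * t)  # smoothstep
    return start_depth + (end_depth - start_depth) * eased
```

Each per-frame depth would then be fed to whatever refocusing step composites that frame from its depth map.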

But that's not all. Apple says it studied the art of cinematography and the creative choices Directors of Photography make to enhance its autofocus algorithms in video so that you have intelligently focused videos to begin with.

Johnnie Manzari, Human Interface Designer at Apple, explains that the iPhone 13 cameras intelligently anticipate and focus on your subject as it enters the frame. One might imagine it can do so with the additional information from the ultra-wide camera. If your subject looks away toward another subject, the camera automatically focuses on whatever they're looking at. When the subject turns their gaze back, the camera will refocus, as you can see below.

Apple also makes it easy to take creative control over the process. For example, say the camera chooses to focus on the nearest subject taking up the majority of the frame. If you wish to rack focus to the gentleman in the background, simply tap him once:

If the camera chooses to focus on the boy in the foreground and you wish to rack focus to the gentleman in the back, simply tap on him.

Then, if you wish to lock focus on him and track him, tap him again and the camera will continually focus on him (note the change in the AF reticle shape):

After tapping your subject once to rack focus to it, tap it again to lock focus on it and start tracking it.

Will this revolutionize cinematography?

We've seen hints of this type of technology before, but what's impressive is Apple's implementation and its focus on the overall experience. Because Apple studied the creative choices of DPs to train its autofocus algorithms, you start with footage that, in all likelihood, will already look good - far better than video that relies on simpler algorithms like center-priority or nearest-priority autofocus (with face detection sprinkled in). Intuitive ways to take control - tapping and double tapping - ensure you can work around automatic choices you disagree with.

Finally, the ability to make creative decisions around focus and depth of field after the fact could revolutionize not just mobile cinematography but cinematography in general, freeing cinematographers and directors to concentrate on other creative choices, or on the action and the moment, during filming.

The quality of the depth-of-field effect remains to be seen, but it's likely similar to what we see in wide-angle Portrait Mode shots. And there are bound to be limitations. As good as Portrait Mode is, depth map errors are still visible to discerning viewers, particularly at the transition between the in-focus subject and the background. These may be mitigated by the fact that this 'video portrait mode' is only available at 1080p, but the depth-of-field effect around subtle details such as hair, even in Apple's own video example, leaves something to be desired. This will only get better with machine learning and more capable hardware, of course.

Furthermore, we can clearly see some focus breathing in the racking examples, as if the camera is hunting to confirm focus, on top of the evident magnification change that comes with refocusing. And if a subject isn't already reasonably in focus to begin with (the equivalent aperture is now F6.8 on the Pro models), it won't get very sharp if you choose to refocus onto it. And if focus is already racking within a clip, we imagine it'd be difficult, if not impossible, to change or reverse it!
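For reference, that F6.8 figure follows from the standard equivalent-aperture calculation: multiply the lens's f-number by the sensor's crop factor relative to full frame. The f/1.5 lens and the ~4.5x crop factor below are our own illustrative assumptions for the Pro's main camera:

```python
def equivalent_aperture(f_number, crop_factor):
    """Full-frame equivalent aperture: the f-number scaled by the crop factor."""
    return f_number * crop_factor

# An f/1.5 lens behind a sensor with roughly a 4.5x crop factor:
print(round(equivalent_aperture(1.5, 4.5), 2))  # 6.75, i.e. ~F6.8
```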

For now, Cinematic Mode is limited to 1080p at 30 fps. However, it's available in Dolby Vision HDR mode, which intelligently grades each frame of footage and uses dynamic metadata to expand output dynamic range, ensuring that HDR footage doesn't look flat when viewed on a compatible device. This mode also takes advantage of Wide Color Gamut (WCG), so that your footage displays colors beyond the limited sRGB or Rec. 709 color space.

Apple's not the first manufacturer, by any means, to tackle the cinematography space. Samsung and Chinese manufacturers have already been toying with synthetic blur in video, and other manufacturers such as Sony have been taking video very seriously, releasing smartphones with fine control over shutter angle that output 10-bit HDR (HLG) video - even at 4K/120p - much as Apple has done with the admittedly more sophisticated Dolby Vision.

But what Apple is trying to do is bring all these features that pros care about into a format that's usable and nearly transparent for the masses. And that's a noble goal. You can see Cinematic Mode in action in the video below, and as always, let us know what you think in the comments.

*Technically, Apple could also use its dual-pixel (split photodiode) sensors to augment the depth map generated from the pair of cameras, but Apple has neither confirmed nor denied this approach when we've asked whether it's considered it for its stills Portrait Mode. It could also augment its depth maps using the LiDAR scanner in the Pro models, but given that Cinematic Mode is available on the base iPhone 13 and 13 mini, it's unlikely that Apple is reliant on LiDAR.