Emmanuel Lubezki: 'Digital gave me something I could never have done on film'
Editor's Note
By Rishi Sanyal
It's not every day you get to chat with not only a leader in a field making (movie) history, but someone who is also a great personal inspiration. Every single frame of The Tree of Life is a photograph to aspire to, and every movement of the camera in The New World brims with intent to evoke a particular feeling. His use of wide-angle lenses, close and intimate with subjects (a technique that personally speaks to me), creates immersive experiences. As we began our phone conversation, we were giddy at the thought of getting some insight into Emmanuel 'Chivo' Lubezki's genius. And Emmanuel's down-to-earth approachability and candor meant we got right down to talking shop.
And as much as he'll stress that he's not a technical guru, it must be said that Emmanuel is as unassuming as he is brilliant: I had the pleasure of meeting him in person a couple of years back at NAB, and within seconds he was asking me what I thought of the newest cameras (I have a feeling he didn't need my advice), before moving on to talk about combining camera and actor motion to simulate a magnitude of on-screen movement no real human being could be subjected to (when filming Gravity), or how camera motion mimics character motion in order to better connect the viewer with the subject (watch how the camera jumps along with Pocahontas in The New World, or how it bobs up and down as Sandra Bullock spins in zero-G in Gravity).
So we didn't hesitate to get down and technical with Mr. Lubezki. And for our audience, that meant, at least in part, understanding how Emmanuel makes images from a fundamental standpoint. So what camera does Emmanuel shoot with for much of his personal stills work? Back in 2014, it was the Nikon D800. And now? The Nikon D810. If there's a trend you've noticed, there's good reason for it. Emmanuel is a master of light, and recreating the dramatic light of fleeting, powerful moments often means capturing them appropriately in the first place, so that they can be shaped into the final product. Allow us to elaborate.
If there's one thing clear from Emmanuel's body of work, it's that he'll go to any length to get the perfect shot. No detail is too small to sweat, not even the shape or color of a sunburst peeking through the leaves, if only ephemerally (one might even say it's the fleeting moments that are most memorable). That quest for perfection puts immense demands on the production, and video is not a medium where you can simply take another shot every time you don't get it quite right. Especially not when shooting film, where it's often hard to know whether you got it right at all until viewing dailies. The high cost of not getting it right means that many pro videographers sweat the technical details, and justifiably so. Recording as much information as possible at the time of capture lets certain creative decisions be saved for post, allowing the artist to focus on the things not as easily changed or manipulated during capture.
In the past, I wouldn't have dreamt of capturing this shot in a single exposure. Now, with high dynamic range sensors, I can, but by altering my exposure philosophy. Instead of exposing for my main subject, I expose for highlights, tonemapping darker underexposed tones for dim, low dynamic range displays in post-processing.
Photo: Rishi Sanyal
That's why Emmanuel appears so interested in topics like VR, light field, and dynamic range - in general, rich capture mediums that allow for maximum flexibility post-capture.* More specifically, when it comes to dynamic range, if tones are over- or underexposed, you've often lost them, and the more you capture, the more latitude you have after-the-fact for creative intent. 'If there's one thing engineers can improve in cameras, it's dynamic range, dynamic range, dynamic range,' to paraphrase Mr. Lubezki at the Technical Summit, NAB 2014. In order to capture the incredible vistas and subject detail in the scenes he's wont to shoot, typically during the 'magic hour', Emmanuel must record the detail in the bright skies, sunset-lit clouds, and warm sun flares, as well as render visible the dark faces of naturally lit subjects. Those subjects may have exposures many, many stops darker than the bright skies captured. Capturing both extremes requires a medium with extensive dynamic range.
In the days of film, you could typically set your exposure for your subject's face and not worry (too much) about the sky behind your subject blowing out to white. Because of the roll-off negative film displays in its response to exposure, above a certain threshold film becomes less sensitive to light the more you expose it, allowing you to overexpose to give most tones** a higher signal:noise ratio. Not so with digital, which tends to respond linearly to light: give it twice the exposure and you get twice the signal, up to the point at which color channels clip and you're left with detail-less white. That's why it's so important to adopt a different exposure philosophy for digital, and it was fascinating to hear this stated in Emmanuel's own words: essentially, expose digital for the highlights you wish to retain, because if you overexpose them, they may be lost forever.
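To make the difference concrete, here's a toy sketch in Python. The curves and numbers are purely illustrative (not measured film or sensor data): a linear digital response that hard-clips at full well, versus a film-like 'shoulder' that compresses highlights instead of clipping them.

```python
import math

FULL_WELL = 1.0  # normalized clipping point of the digital sensor (illustrative)

def digital_response(exposure):
    """Linear until clipping: twice the light yields twice the signal."""
    return min(exposure, FULL_WELL)

def film_response(exposure, shoulder=0.6):
    """Crude shoulder roll-off: highlights compress rather than clip."""
    if exposure <= shoulder:
        return exposure
    # Above the shoulder, asymptotically approach 1.0 instead of hard-clipping
    return 1.0 - (1.0 - shoulder) * math.exp(-(exposure - shoulder) / (1.0 - shoulder))

# Double the exposure repeatedly: digital clips flat, film keeps compressing
for stops in [0.25, 0.5, 1.0, 2.0, 4.0]:
    print(stops, digital_response(stops), round(film_response(stops), 3))
```

The point of the sketch is the shape, not the numbers: past clipping, digital returns the same detail-less value no matter how much more light arrives, while the film curve keeps (compressed) highlight separation.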
But exposing for the highlights in high-contrast scenes often means that darker tones are, traditionally speaking, 'underexposed'. Advances in digital capture technology mean, though, that these underexposed tones can still be quite usable if you brighten them (tonemapping) so that they're visible on our current low dynamic range, low brightness displays. What's more, these underexposed tones tend to have higher signal:noise ratios than similar tones recorded on film (see DXO's research on why digital may have already surpassed film, when using high thresholds for acceptable noise levels) - meaning these darker tones can still be relatively noise-free. They'll still be limited by shot noise, as all capture mediums are, so more exposure will always yield better results (more light always means less noise), particularly for shadows that start off with less light to begin with. But low levels of electronic read noise, coupled with the high pixel saturation capacities of, for example, the Nikon D810 at ISO 64, or larger-sensor cameras like the ARRI Alexa 65 Chivo used, mean extensive light capture ability and low noise for all tones, especially tonemapped shadows. These shadows can be so devoid of noise that landscape photography masters like Marc Adamus advocate foregoing graduated neutral density filters in favor of exposing-for-the-highlights (or 'exposing to the right', ETTR) and tonemapping when possible.*** We've previously visually demonstrated the advantages of the D810 at ISO 64 over similar cameras when exposed in this manner, and present those results again here (note the increased detail, and lower noise, in the D810 shadows in our widget below, where all shots were exposed properly for the highlights):
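The signal:noise argument for ETTR can be sketched with a little shot-noise arithmetic. The read noise figure and electron counts below are illustrative assumptions, not measurements of any particular sensor:

```python
import math

READ_NOISE_E = 3.0  # electrons RMS; assumed, in the ballpark of modern low-read-noise sensors

def snr(signal_e):
    """SNR of a tone: shot noise grows as the square root of the signal,
    and adds in quadrature with a fixed electronic read noise floor."""
    return signal_e / math.sqrt(signal_e + READ_NOISE_E ** 2)

# A shadow tone captured 'normally' vs. given 2 stops (4x) more exposure
# by exposing-to-the-right. Tonemapping the ETTR shot back down in post
# rescales signal and noise together, so the SNR advantage is preserved.
normal_snr = snr(100)  # 100 electrons collected
ettr_snr = snr(400)    # 4x the light for the same tone
print(round(normal_snr, 1), round(ettr_snr, 1))
```

Two stops more light roughly doubles the shadow SNR in this model, which is the quantitative core of 'more light always means less noise'.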
This gets to the heart of much of our discussion with Emmanuel: DPs (Directors of Photography) have learned quickly that when it comes to digital, exposing for the highlights and brightening, or tonemapping, shadows in post-processing is the way to work with this capture medium in high-contrast scenes. This is still a methodology arguably not well-appreciated in the stills sphere, and it was fascinating to hear from Mr. Lubezki himself that it's a quickly adopted approach in the video world. But even this methodology is limited: (1) shadows are still inherently noisy due to plain physics, and (2) our ability to expose optimally is still limited by the fact that we can't always see our tones accurately during the capture phase. This means that even if we want to optimize our exposure by giving the camera as much light as possible, we often can't tell exactly when tones are irrevocably clipped to white (due to the lack of Raw zebras/histograms on many cameras), or lost in murky, noisy shadows (because we can't see the shadows in their final, brightened or tonemapped form on a high-resolution, HDR output device during capture).
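For illustration, here's roughly what a 'Raw zebra' check would amount to. The saturation value and clipping threshold below are hypothetical, not taken from any camera's firmware:

```python
SATURATION = 16383     # 14-bit raw white point (assumed for this sketch)
CLIP_FRACTION = 0.001  # flag a channel if more than 0.1% of samples saturate

def channel_clipped(samples):
    """True if this color channel's raw samples suggest irrecoverable clipping:
    a meaningful fraction of them sit at the sensor's saturation value."""
    clipped = sum(1 for s in samples if s >= SATURATION)
    return clipped / len(samples) > CLIP_FRACTION

# e.g. a bright-sky channel with a patch of blown pixels vs. a safe exposure
sky = [16383] * 50 + [15000] * 950
safe = [12000] * 1000
print(channel_clipped(sky), channel_clipped(safe))
```

The key detail is that the check must run on raw channel values, per channel; a JPEG-based histogram can report clipping that the raw file hasn't actually suffered, or miss clipping in a single color channel.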
That's why Emmanuel builds his own look-up tables (LUTs) and installs them on his camera - to get a 'proxy' of how he might process the final footage, so he can assess, during capture, whether his highlights and shadows are acceptable. In a sense, this is similar to the flat gamma profiles that come standard on many cameras these days, which attempt, at least in part, to give you a sense of the dynamic range available for you to utilize in grading. DPs like Emmanuel Lubezki, of course, take this a step further and customize their own profiles to be more representative of how they might grade the footage (one might re-introduce some blacks, for example, to avoid the very flat look of log gamma profiles), to get a sense of how usable their footage is during capture. And you thought we were technical with our talk of 'ISO-invariance'...
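As a hedged illustration of the idea (the curve, table size, and black point below are invented for the example, not Lubezki's actual LUTs), a 1D monitoring LUT is just a table mapping scene-linear values to display values - here, a flat log curve with a touch of black 're-introduced':

```python
import math

LUT_SIZE = 33  # small 1D table, typical of lightweight on-camera preview LUTs

def flat_log(x):
    """Simple log encoding squeezing a wide linear range onto the display;
    maps [0, 1] to [0, 1]."""
    return math.log2(1 + 255 * x) / 8

def with_black_point(y, black=0.05):
    """Re-introduce blacks: pull the lifted toe of the log curve back toward 0."""
    return max(0.0, (y - black) / (1 - black))

# Bake the curve into a table, as a camera would store it
lut = [with_black_point(flat_log(i / (LUT_SIZE - 1))) for i in range(LUT_SIZE)]

def apply_lut(x):
    """Look up a scene-linear value in the 1D LUT, linearly interpolating
    between the two nearest table entries."""
    pos = x * (LUT_SIZE - 1)
    i = min(int(pos), LUT_SIZE - 2)
    frac = pos - i
    return lut[i] * (1 - frac) + lut[i + 1] * frac
```

A real camera LUT is usually a 3D table acting on RGB triplets, but the principle is the same: the look is precomputed once, then applied per pixel by table lookup, cheaply enough to run on the monitoring path.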
The flip-side of this discussion is the output. HDR, or high dynamic range, is often understood in photography as the merging of exposures to overcome the limitations of current cameras in capturing high-contrast scenes. This process can often lead to flat pictures, but it's important to understand that this is a limitation of our current low brightness, low dynamic range displays. When you pack all that tonal range into a limited output range, biased toward a much darker total output than what we're used to seeing in the real world, you get either dark shadows or a flat-looking image from raising those shadows nearer to the brightness levels of brighter tones. HDR displays change all that: brighter whites and darker blacks mean these displays can recreate a range of tones closer to what we're used to seeing in the real world. But that means a whole new workflow: on such devices, shadows don't need as much brightening in shots exposed-for-the-highlights, because all tones are already shifted to the right by virtue of simply being displayed brighter.
Dark shadows you 'push' today (in tools like Photomatix or Photoshop) may need to be 'pulled' (darkened), or pushed less, on a brighter, higher dynamic range display. Cinematographers like Emmanuel are entirely familiar with the concept of editing in an output-aware manner, which is why different grades of The Revenant were created for a normal TV, an HDR TV, cinema, etc. And this stresses the need for standards. We'd like to imagine a world where display attributes like brightness and dynamic range are properly profiled, just as color gamut already is, so that all grading can be done in a display-aware manner. Perhaps brightness and contrast edits done by a content creator on a profiled display could automatically re-scale for the dynamic range and brightness of the viewing device, taking human perception into account. Whether or not this is feasible is another matter, and grades will likely always benefit from being done on the intended output display device, to optimize for its dynamic range and color gamut. Just as, today, prints benefit from editing on a dim display (~90 nits) that better simulates the illumination of the print. With the advent of new technologies enabling drastically brighter, higher contrast, and wider color gamut displays, though, the need for some sort of standardization that gives content creators confidence that what they edit is what viewers see will become increasingly important.****
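To sketch the 'push less on brighter displays' idea: below is a toy gamma display model with invented target and peak luminances (an illustration of the concept, not an actual grading workflow). The gain needed to make a deep shadow visible shrinks as the display's peak brightness grows.

```python
TARGET_SHADOW_NITS = 1.0  # absolute luminance we want a deep shadow to reach (assumed)

def shadow_push(shadow_signal, peak_nits, gamma=2.2):
    """Gain needed so a shadow tone renders at TARGET_SHADOW_NITS on a
    display whose peak white is `peak_nits` (simple gamma display model)."""
    displayed = peak_nits * (shadow_signal ** gamma)
    if displayed >= TARGET_SHADOW_NITS:
        return 1.0  # already bright enough; no push needed
    return (TARGET_SHADOW_NITS / displayed) ** (1 / gamma)

# The same encoded shadow value on a ~100-nit SDR monitor vs. a ~1000-nit HDR display
sdr_gain = shadow_push(0.05, peak_nits=100)
hdr_gain = shadow_push(0.05, peak_nits=1000)
print(round(sdr_gain, 2), round(hdr_gain, 2))
```

In this model the SDR grade needs a substantial shadow lift while the HDR display needs none, which is the crux of why a single grade can't serve both outputs, and why profiled, display-aware re-scaling is so appealing.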
Many often talk about technical details as if they were separate and distinct from artistic vision. But what our conversation with Emmanuel has served to solidify in my mind is that the two serve and enable one another. Lubezki talks about the often subconscious appreciation of beauty a photograph or movie evokes, and what is often unappreciated is that this is the result of very intentional decisions, many of them both artistic and technical in nature. My hope is that advances in technology and standardization, and an open conversation, will unleash more creative freedom for visionary artists like Emmanuel.
A hearty thanks again to Emmanuel Lubezki!
* A large part of what computational photography is focused on.
** Beyond a certain point, overexposed tones in film would start to look noisy, since brighter tones are recorded as denser film, which means more film grain.
*** Remember though that shot noise will make underexposed tones noisier than brightly exposed tones, so you'll always be better off merging two different exposures, or using a grad ND filter to increase the foreground exposure. However, sensor advancements are increasing dynamic range to the point that cameras like the Nikon D810 yield underexposed shadows of single shots with noise levels roughly equivalent to full-frame ISO 1000 or 2000 after 4 or 5 EV pushes from base ISO, respectively. Certainly not unacceptable, for some.
**** To read more about challenges and efforts in standardization in this arena, visit the website of our friends over at SpectraCal.
Feb 24, 2017