Part II (to read Part I of our interview click here)
DPR: You’ve talked about testing your cameras to understand what results they can deliver. How do you evaluate those results?
EL: I don't know if you've ever been in a color timing suite, like Technicolor, but when you go there it's very easy to really quickly assess how the cameras are behaving and how the sensors are behaving, and what you're getting and what you’re not getting. It's like a massive Photoshop.
DPR: What do you recommend for filmmakers or photographers who don’t have access to a facility like that?
EL: I would recommend that anyone do some little tests. Even if you're a complete amateur it would be great to take a few photos with your cameras and underexpose and overexpose, then go try to print and see what looks better.
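The under/over-exposure test Lubezki describes works in whole stops: each stop doubles or halves the light, and values pushed past the sensor's range clip. A minimal sketch of that arithmetic on linear pixel values (the function name is illustrative, not from any real camera API):

```python
# Bracketing test sketch: shift exposure in whole stops and watch
# where highlight values clip. Illustrative only.

def apply_stops(linear_values, stops):
    """Scale linear sensor values by 2**stops and clip to the 0..1 range."""
    factor = 2 ** stops
    return [min(1.0, v * factor) for v in linear_values]

scene = [0.05, 0.18, 0.40, 0.75]   # linear scene luminances
over = apply_stops(scene, +2)      # two stops over: bright areas clip to 1.0
under = apply_stops(scene, -2)     # two stops under: shadows sink toward 0

print(over)
print(under)
```

Comparing the two bracketed results (in print, as Lubezki suggests) reveals how much headroom the camera really has before highlights clip or shadows drown in noise.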
DPR: Let’s go back to InVisage and QuantumFilm, which we're also excited about. In your opinion what potential does this technology have to change digital capture? Are you in this because you see it one day being in high-end professional, larger sensor cameras?
EL: The moment I met with InVisage we started talking about what their objectives were, what their idea of what a good image is, what a good capture is, and so on. As I told you before, it's like music to my ears. I'm very, very excited about the possibility of using the technology in very high end digital cameras, where I can do a movie like The Revenant and really be able to capture those 88 keys of the piano as opposed to much less.
DPR: Or maybe even 100 keys!
EL: That's right. Even more! Why not?
I'm also excited at every level because I'm a photographer. I would love to have a miniature camera so I don't have to carry my big Nikon, but a smaller camera that has this technology that can capture high dynamic range. And also on my phone. That would be a dream!
DPR: Another area where we’re seeing a lot of advances is autofocus for video, thanks to technologies like on-sensor phase-detect autofocus. It's getting so good that we've seen cameras that can track faces in video with no hunting, just refocusing very decisively, and you can even control the rate of refocusing. Would you ever trust a machine to autofocus?
EL: You know, I would. But unfortunately in film, like in The Revenant, when doing a very long take I have a human doing that job. And it's not only a human, it's an artist, and it's somebody that's very aware of everything - of the internal rhythm of the image and how it affects the audience. What happens is that sometimes those decisions are like in music, like when the drummer is playing slightly behind the beat to make it more jazzy or sensual, or playing slightly ahead of the beat to give the audience a different feeling to the music [syncopation]. It's the same with focus. It would be great to have an assist that would help get the focus exactly where you want it, but I think in cinema it's always going to require a human making decisions.
DPR: That's a very interesting perspective, because the same thing happens with composition and movement and motion in your films. You control the motion of the camera to elicit a feel in the viewer, and your camera is exploring the scene as a human would take in a scene. It slowly looks around, it finds action, it concentrates on it, it finds action somewhere else and concentrates on that. It's very compelling because you're making the viewer feel right there like they're exploring the scene. With that in mind, what are your thoughts on 360 degree image capture and virtual reality (VR)? Where do you see it in the context of professional filmmaking?
EL: I'm very excited about VR. One of the big issues we have in VR right now is that the cameras are very primitive, and one of the biggest issues is actually the dynamic range of those cameras.
Lubezki is a master of capturing a sense of place, as in this shot of Leonardo DiCaprio in The Revenant.
Courtesy Twentieth Century Fox
DPR: With VR the viewer has the freedom to look around on their own. Do you want them to have that freedom?
EL: Not for everything, but the idea is that with different story-telling tools, we will be able to guide the viewer to what we want them to see. The great thing about VR is that the grammar hasn’t been invented yet, though a lot of people are working on VR projects. The questions you're asking are the key questions. How much time can you subject the audience to VR? How fast can you move the camera before they get sick? How close can the subjects be? How can you guide the audience through the story? Should we use 360º or 270º or only a few degrees? All that is yet to be written, but generally I think VR and cinema are going to live together. I don't think one is going to take over the other one. It's just another way to tell stories, and I think that's fantastic.
DPR: And with Lytro's 'Light Field Volume' capture, you get not only multiple degrees of freedom, but also multiple perspectives and focal planes. You can capture all that data and choose how to direct the viewer in post.
EL: And I think that is very exciting too. Light field photography, and even light field projection, is very exciting.* I haven't seen any examples, but the theory is absolutely beautiful.
DPR: One of the reasons we find your work inspiring is your use of wide angle lenses to create these immersive experiences and lend a certain depth to your imagery. How do you control distortion in your images, or how do you use wide angles to craft a look?
EL: I have a bunch of different lenses that I like, and all of them behave differently. All of them have different distortions and different colors and different contrast. I use them depending on the subject and the environment and the light. In The Revenant I ended up using a lot of lenses called Master Primes. These are very sharp, very hard, very clean lenses. And even though our 'normal' lens was the 14mm, it was a lens that distorts very little for what it is. Fifteen years ago nobody would have even thought about using a 14mm lens unless it was for a music video or something reminiscent of a dream. These lenses are now so good, though, that they barely distort.
Obviously, when you put Leo very close to the camera there’s some [perspective] distortion on his face, but that's also something we wanted because we didn't want Leo to look like a beautiful young man. We wanted Leo to look like a trapper from the 1820s. We get all these advantages by using these lenses, and of course the biggest advantage is how immersive they are because you can have your foreground subject and your background environment present at all times. That makes the image very immersive. At least that was the theory!
When we're doing a close-up of Leo the camera is probably just a few inches from his face, but the lens is so wide you can still see his whole face. By being so close you start to capture things like his breath and his sweat, his blood - it becomes a very visceral experience.
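For context on why a 14mm 'normal' lens is so immersive: assuming a full-frame 36mm-wide gate (an assumption for illustration; The Revenant was shot on several camera formats), a rectilinear 14mm covers roughly 104 degrees horizontally, versus about 40 degrees for a classic 50mm. The geometry is simple:

```python
import math

# Horizontal angle of view of a rectilinear lens from focal length
# and gate width. The 36mm full-frame gate is an assumption for
# illustration, not the production's actual format.

def angle_of_view(focal_mm, gate_mm):
    """Horizontal angle of view in degrees for a rectilinear lens."""
    return math.degrees(2 * math.atan(gate_mm / (2 * focal_mm)))

print(round(angle_of_view(14, 36)))   # ~104 degrees across the frame
print(round(angle_of_view(50, 36)))   # ~40 degrees, a classic 'normal'
```

That extreme coverage is what lets the camera sit inches from a face while keeping the whole environment in frame.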
DPR: It seemed as if in some of the scenes the distortion added to the anguish because he was suffering and it was distorted.
EL: You're absolutely right! When we wanted more distortion I put a diopter in front of the lens. With that diopter you lose a lot of depth of field and the lens is even more distorted. There were five or six times in the movie where I wanted the image to be more distorted, almost to feel as if the camera was feeling Leo's angst and pain.
I don't know if it worked, these are just theories!
Using wide angle Master Prime lenses allowed Lubezki to get close to Leonardo DiCaprio while creating a sense of being immersed in the cold, arctic environment.
Courtesy Twentieth Century Fox
DPR: It worked, because we felt it before you even told us that was your intent! One final area we’d like to ask you about is the need for standards. As an artist, is it sometimes difficult to deal with the fact that you don't know how the end viewer is going to see the image? In a theater? On a cell phone? On a poorly calibrated TV? How do you deal with that?
EL: It's a nightmare. It's always been a nightmare. On film it was a nightmare, but I knew that people were going to watch the movie on film so I tried to watch as many prints as possible before they were sent around the world. Sometimes I would be traveling and peek in a theater and realize that the print was not bad but that the projector was running with half the light so it looked murky and mushy, so it's always been very hard.
Right now the biggest issue for me is TVs. I think we're living in the worst moment for TV. The manufacturers have gone insane and make all these TVs with brightness settings for sports and reality TV. That’s the biggest fight that most filmmakers have now. How can we create a standard for the TV industry so that you can press a button and watch a movie in a movie standard? I'm watching The Godfather, which is the greatest film ever shot, and it looks like a soap opera because it has all this banding and clipping and artifacts and it looks awful. That should be the biggest conversation - the Academy and everyone should get together with the makers of TVs and there should be a standard. But I also think that TVs should be 12 or 10-bit [not 8-bit].
DPR: Where do you see technologies like high dynamic range TV coming into the picture?
EL: It's fantastic. I've been working in high dynamic range and it's incredible! It's much better than anything we've seen. To start, there's black. It's very hard to do photography if you don't have black and you don't have white. TVs with OLED, or the new HDR monitors, are fantastic because the blacks look black. Then you have all these different gamuts, like dark grays, that you didn't have before. And the higher bit-depth is almost perfect in the sense that there's no banding.
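Lubezki's point about bit depth and banding can be sketched numerically: quantizing a smooth dark gradient to 8 bits leaves only a handful of distinct shadow codes (visible as bands), while 10 bits leaves roughly four times as many. A minimal illustration (the function and ramp here are a toy example, not any broadcast standard's math):

```python
# Count how many distinct output codes survive in the darkest 2% of
# the signal at 8-bit vs 10-bit quantization. Toy illustration only.

def quantize(value, bits):
    """Round a 0..1 value to the nearest code at the given bit depth."""
    levels = (1 << bits) - 1
    return round(value * levels) / levels

# a smooth ramp through the darkest 2% of the signal range
ramp = [i / 1000 * 0.02 for i in range(1001)]

codes_8 = {quantize(v, 8) for v in ramp}    # distinct 8-bit shadow codes
codes_10 = {quantize(v, 10) for v in ramp}  # distinct 10-bit shadow codes

print(len(codes_8), len(codes_10))          # 6 vs 21 distinct levels
```

Six steps across the deepest shadows is exactly the kind of staircase that reads as banding on a dark scene; 21 steps is far closer to a smooth gradient.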
DPR: With the increases in color gamut, and HDR brightness and contrast, it seems like there’s really a need for some standards.
EL: This is a case where everybody needs to get together and they have to create a standard. I was lucky on The Revenant because I got to make a ‘normal’ TV pass, an HDR TV pass, an IMAX pass, a Dolby pass, a cinema pass... so I had all these different color timing grades for the movie. But not all filmmakers have that. Often, filmmakers only have one pass and the rest is done by numbers or not even done.
My feeling is that the TVs and monitors are the biggest issues. It would be great to have 16-bit or 12-bit monitors with a little chip that tells you "This is the standard the filmmaker picked, and this is the LUT the filmmaker wants to use to show their work."
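The "little chip that applies the filmmaker's LUT" idea boils down to mapping display input through a sampled transfer curve. A minimal sketch of a 1D LUT lookup with linear interpolation (the function and the toy five-entry curve are hypothetical, not any display's real firmware):

```python
# Map a 0..1 input value through a 1D LUT, linearly interpolating
# between the two nearest LUT entries. Hypothetical sketch.

def apply_lut_1d(value, lut):
    """Look up a 0..1 value in a 1D LUT with linear interpolation."""
    pos = value * (len(lut) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(lut) - 1)
    frac = pos - lo
    return lut[lo] * (1 - frac) + lut[hi] * frac

# a toy five-entry "lift the shadows" curve
lut = [0.0, 0.30, 0.55, 0.78, 1.0]

print(apply_lut_1d(0.5, lut))    # lands exactly on the middle entry: 0.55
print(apply_lut_1d(0.375, lut))  # halfway between entries 1 and 2
```

Real grading LUTs are longer (and often 3D, mapping RGB triples), but the principle is the same: the display would apply the curve the filmmaker chose instead of its own.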
DPR: The grading/timing you do should take into account the monitor you're using, then translate the grade automatically for any viewing device, using a profile for that viewing device, to match the artist's intent?
EL: Exactly. Right now I have to do passes for everything. When I was doing what we call the 'normal' TV pass I had a plasma TV in front of me, as well as all sorts of other TVs. It looked like a Best Buy! The movie looked completely different on each one. It was heartbreaking.
DPR: Thanks so much for joining us today. Best of luck at the Oscars next week!
EL: Thank you as well!
* Although light field, as popularized by Lytro, is often associated with focus, perspective, and depth-based editing in post-processing, light field displays have the potential to deliver glasses-free 3D and automatic perspective shifts based on the viewer's position.