The a7R II focuses third-party lenses using phase detection

Here's where things get disruptive. We've seen on-sensor phase-detect AF (PDAF) for some time now, and every iteration improves on the last. We saw a huge jump in performance with the introduction of Sony's alpha 6000 (a6000) camera, as well as Samsung's NX1. And even though one could adapt third-party glass, such as Canon EF lenses, to the Sony a6000 and a7 cameras, complete with electronic communication for aperture control and rudimentary AF, for some reason PDAF was never available. I'd always wondered whether there was some technical difficulty, some protocol that had to be reverse-engineered, or some other explanation unbeknownst to me as to why this hadn't been done.

The Sony a7R II can focus Canon EF lenses mounted via an adapter (shown here) using its on-sensor phase-detection AF system.

Having never received a satisfactory answer as to why on-sensor phase-detection measurements couldn't be used to drive attached DSLR lenses, I'd kind of given up hope that we'd ever see it, thinking it impossible.

And then came the a7R II announcement. And a text from Barney: 'It focuses Canon lenses. Using PDAF. Pretty fast.'

Whatever the hurdles, it appears that Sony has overcome them. The a7R II may even be a solid replacement for the a99: Sony claims the camera can use focal-plane phase detection with A-mount lenses via the LA-EA3 lens mount adapter. One of the biggest complaints about the a7 system was the lack of available glass, but this development renders that point moot - with the a7R II you'll have access to one of the largest, and most desirable, libraries of glass available. And not only that, the AF system of the a7R II might actually provide some benefits over using that glass natively... so let's talk about that.

Subject tracking

Subject tracking is about the camera understanding what you wish to focus on. And while this piece is too small in scope to get into the merits of subject tracking, suffice it to say we firmly believe that proper, usable subject tracking will usher in a new, and better, way of autofocusing in the future. And mirrorless cameras will likely lead the way.

Subject tracking is implemented, to varying degrees of success, in a number of different ways by different manufacturers. At its very best, subject tracking means the camera automatically finds (for example) the largest face in the scene and keeps it in focus no matter where in the frame, or how far away, it is. When you press the shutter, that face is in focus at the moment of capture - even during continuous shooting. If it's not simply the largest face in the scene you're after, good implementations will understand what you're trying to focus on by 'remembering' whatever was initially under your selected AF point when you initiated focus, and will continue to track it for as long as you hold the shutter button half- or fully-depressed.
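To make that 'remembering' idea concrete, here's a toy sketch - emphatically not Sony's (or anyone's) actual algorithm - that grabs a template from under the AF point when focus is initiated and re-finds it in every subsequent frame by normalized cross-correlation. The function names and the synthetic scene are ours:

```python
# Toy sketch of 'remember what was under the AF point' tracking (not a
# real camera's implementation): cut out a template at focus initiation,
# then re-locate it in each new frame via template matching.
import cv2
import numpy as np

def init_tracker(frame, af_point, size=48):
    """Cut a template out of the frame around the selected AF point."""
    x, y = af_point
    h = size // 2
    return frame[y - h:y + h, x - h:x + h].copy()

def track(frame, template):
    """Return the centre of the best template match in this frame."""
    result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)
    th, tw = template.shape[:2]
    return (top_left[0] + tw // 2, top_left[1] + th // 2), score

# Demo: a bright square 'subject' drifting across a noisy scene.
rng = np.random.default_rng(0)
def synth_frame(cx, cy):
    f = rng.normal(64, 8, (480, 640)).astype(np.uint8)
    cv2.rectangle(f, (cx - 20, cy - 20), (cx + 20, cy + 20), 220, -1)
    return f

template = init_tracker(synth_frame(320, 240), (320, 240))
for step in range(1, 6):
    centre, score = track(synth_frame(320 + 15 * step, 240), template)
    print(f"frame {step}: subject at {centre}, match score {score:.2f}")
```

Real implementations go far beyond this - updating the template over time, handling scale changes and occlusion - which is exactly where the 'varying degrees of success' come in.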

A sophisticated tracking implementation, like Nikon's '3D tracking', will even be specific enough to track the eye of a face (even the correct one), while a less sophisticated one, such as Canon's iTR (intelligent tracking and recognition), will, well, wander and lag, as our test with the new EOS 5DS shows*. That can mean the difference between an in-focus and a totally out-of-focus shot when you're shooting with fast primes, where it's critical to nail focus on precisely the right feature of a face.

But why do I say 'mirrorless cameras will likely lead the way'?

Unlike DSLRs, which rely on relatively low-resolution RGB metering sensors to identify subjects in your scene, mirrorless cameras are always scanning their main image sensors. With readout speeds continuing to increase, these sensors can provide higher-resolution data at high speed, and they can do so with all of the available light reaching the image sensor. Hence mirrorless cameras have the potential, with enough computing power and clever algorithms, to surpass even Nikon's full-frame cameras, which represent the current state of the art in AF tracking.

And initial marketing videos indicate that the a7R II might not disappoint here. Take a look at tracking of a speed racer in this video from Sony below:

Granted, it's a marketing video, but if the a7R II's subject tracking is anything like the a6000's 'Lock-on AF' in our video below, the a7R II might, somewhat ironically, be the best way to get accurate subject tracking with Canon glass. The video above has a stationary background, but the a6000's subject tracking system is powerful enough to follow a moving subject even as you pan and reframe drastically:


Eye-AF

Eye-AF is essentially a subset of subject tracking: since the camera is always 'seeing' the scene as it reads out its image sensor, image analysis algorithms can find the nearest face, and within it the nearest eye, and focus on it. This was possible with the original a7 cameras, but it's now supported in continuous AF. That is, once the camera has found the eye and locked on to it, it will continue to track it around the scene, no matter where it ends up in three-dimensional space. Frankly, I always found it odd that the camera wouldn't continue to track the eye once it had found it... but, problem solved now.
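The 'find the face, then the eye within it' logic is easy to sketch with off-the-shelf tools. Below is a rough illustration using OpenCV's stock Haar cascades - again, not Sony's implementation, and with 'nearest face' approximated as 'largest face', which is our assumption:

```python
# Rough sketch of the Eye-AF idea: largest face first, then the most
# prominent eye inside it, handed to AF as the focus target.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_eye.xml')

def find_eye_target(gray_frame):
    """Return an (x, y, w, h) eye rectangle to drive AF, or None."""
    faces = face_cascade.detectMultiScale(gray_frame, 1.1, 5)
    if len(faces) == 0:
        return None
    # 'Nearest' face approximated as the largest one in the frame.
    fx, fy, fw, fh = max(faces, key=lambda f: f[2] * f[3])
    face_roi = gray_frame[fy:fy + fh, fx:fx + fw]
    eyes = eye_cascade.detectMultiScale(face_roi, 1.1, 5)
    if len(eyes) == 0:
        return (fx, fy, fw, fh)          # fall back to the whole face
    ex, ey, ew, eh = max(eyes, key=lambda e: e[2] * e[3])
    return (fx + ex, fy + ey, ew, eh)    # eye rect in frame coordinates

# In continuous AF this would run on every sensor readout, so the
# target keeps following the eye as the subject moves through the scene.
```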

Focus accuracy

Focus accuracy is unlikely to be an issue with the a7R II. In fact, the a7R II might be the only way to get completely accurate phase detection with glass designed for DSLRs, because the phase measurements are made at the imaging plane and are therefore not susceptible to the issues of using a separate AF module. With the dedicated modules of DSLRs, tolerances in body and lens mounts, differences in effective optical path length to the AF sensor vs. the image sensor, alignment issues, and (mis)calibration of all the optics on the module - for every AF point, no less - can all lead to a less than desirable AF experience, especially with fast lenses where AF tolerances are tight.

Making phase measurements at the imaging plane obviates many of these issues. Furthermore, on-sensor PDAF pixels don't look through the narrow 'virtual apertures' at the periphery of the lens that dedicated modules use, so they're less susceptible to optical artifacts like residual uncorrected spherical aberration, among other things. The elements making the phase measurements sample most of the image-forming light, whereas the dedicated PDAF modules of DSLRs make measurements that are only a proxy for focus at the image plane. While DSLR PDAF modules certainly have their advantages (the ability to focus from extreme defocus, and in very low light, for example), focus accuracy is not one of their strong points.
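For readers unfamiliar with how phase detection works at all, here's a deliberately simplified one-dimensional illustration (our toy model, not the sensor's actual signal processing): masked 'left' and 'right' pixels see the scene through opposite halves of the lens pupil, so defocus shows up as a lateral shift between the two signals, and correlating them gives the direction and rough magnitude of the focus error:

```python
# Toy 1-D phase detection: find the shift that best aligns the left-
# and right-looking pixel signals; its sign tells the lens which way
# to drive, its magnitude roughly how far.
import numpy as np

def phase_shift(left, right, max_shift=20):
    """Return the integer shift (in pixels) that best aligns the signals."""
    best, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        score = np.dot(np.roll(left, s), right)
        if score > best_score:
            best, best_score = s, score
    return best

x = np.linspace(-1, 1, 200)
scene = np.exp(-x**2 * 40)           # a bright feature on one sensor line
defocus = 6                          # pixels of separation from defocus
left = np.roll(scene, -defocus // 2)
right = np.roll(scene, defocus // 2)

shift = phase_shift(left, right)
print(f"measured separation: {shift} px -> drive lens to close the gap")
```

Because this measurement happens on the very plane that records the image, there's no separate optical path to misalign or miscalibrate.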

The a7R II's on-sensor PDAF might go a long way towards mitigating such AF accuracy issues. Perhaps even with your favorite Canon lenses, or your favorite third party lenses made for the Canon mount!

Focus area coverage

399 on-sensor PDAF points. Sound like a marketing gimmick? Perhaps. But when you're asking the camera to track a subject with pinpoint accuracy, the higher the density of points, the better. In other words, it's not about the ability to manually select one of 399 points; it's about the ability of the camera to track, with pinpoint accuracy, whatever you want in focus.

What's even better than a high density of AF points? Large coverage. In fact, the a7R II offers the most vertical and horizontal coverage of PDAF points across the frame (nearly 60% in either direction) of any full-frame camera to date. Have a look for yourself, below, where we've overlaid the AF grid of the a7R II against those of the Canon EOS 5D Mark III and Nikon D810.

Sony a7R II
Canon EOS 5D Mark III
Nikon D810 

As someone who loves placing subjects well off-center, and who refuses to use the simple 'focus and recompose' technique because focal-plane shifts and subject movement easily throw off focus at fast apertures, this is huge. Huge. In fact, I'll still be able to focus and recompose, but in a much better way: by having the camera automatically follow the subject well out to the sides as I recompose, refocusing at every instant along the way, ensuring focus even as the focal plane shifts and even if the subject moves.
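If you've never worked out why focus-and-recompose back-focuses, the geometry is simple (this is the standard flat-field approximation, sketched by us): lock focus on a centered subject at distance d, rotate the camera by an angle theta to recompose, and the subject now sits roughly d(1 - cos theta) in front of the flat plane of focus:

```python
# Focus-and-recompose error under a flat plane-of-focus model:
# after rotating by theta, the subject is d * (1 - cos(theta))
# closer than the plane the lens is focused on.
import math

d = 1.0                      # subject distance at focus lock, metres
for theta_deg in (10, 20, 30):
    error = d * (1 - math.cos(math.radians(theta_deg)))
    print(f"recompose by {theta_deg:2d} deg -> subject ~{error * 100:.1f} cm "
          f"in front of the focal plane")
```

At one metre, a 20-degree recompose puts the subject about 6 cm in front of the focal plane - easily more than the entire depth of field of a fast prime shot wide open.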

Oh, and I'll be able to do it with Canon-mount glass. Like that beautiful new Sigma 24-35mm f/2 Art. Sony ftw.

OK then, what are the downsides?

If the Sony a7R II can focus DSLR lenses just as well as DSLRs can, is it game over for DSLRs? Well, not so fast. It's not all double rainbows and ponies in Sony and mirrorless AF land, so let's talk a bit about the downsides.

First of all, subject tracking, or 'Lock-on AF' as Sony calls it, isn't always perfect. It'll start on your subject, but the green box highlighting what the camera thinks you're tracking will sometimes grow, shrink, or wander off. Frankly, we found its implementation on previous cameras a bit over-engineered: in trying to understand and adapt to your subject, it sometimes misunderstood what you were trying to keep in focus, and it often didn't offer the pinpoint precision of Nikon's 3D tracking in the D810 (again, here). We expect this to improve with every generation, though, and the fact that the a7R II offers something as precise as eye tracking in continuous AF has our hopes up.

What about low-light AF? One large remaining problem with on-sensor phase detection is that it gives up in low light, with the camera reverting to slower contrast-detect AF - and that means the dreaded focus hunting. Despite the sensor getting all the light (rather than having to share it with a dedicated PDAF module and an optical viewfinder), the phase-detect pixels are small, and they're masked, receiving only about half the light of a full imaging pixel. For this and other reasons, on-sensor PDAF shuts off on most mirrorless systems in dim light, whereas the latest DSLRs can focus using phase detection at every AF point down to -3 EV (Nikon D750, D7200).

We don't know the low-light limit of the a7R II's on-sensor phase-detection, but reversion to CDAF in low-light may continue to be a problem compared to DSLRs for now. 

Finally, DSLRs have a well-honed ability to acquire initial focus very quickly, because they can make phase measurements even when the lens is extremely defocused. This is quite important for certain applications, particularly anything involving long telephotos, where the image through the viewfinder is often a blurry mess because focus starts far off your subject - and where any amount of hunting can make or break your shot.

What we're a bit less happy about

Not much, for now. Actual testing and performance evaluation may of course kill off some of the initial enthusiasm, as it often does, but we remain thoroughly impressed with what the camera promises. Ergonomics and menu systems aside, there is one thing, though: lossy 14-bit Raw.

As far as we can tell, Sony is still only offering lossy compressed 14-bit Raw. We spoke with a highly technical representative at Sony, who acknowledged the issue and the desire for at least an option for uncompressed Raw - even if it came at the cost of shooting speed. We were told Sony is working on it, and that the issue might be addressed in firmware down the road, but we couldn't get any guarantees or specifics beyond that. The reality is that while this won't affect all of you, nor every photo, it's a sticking point for some considering these cameras: it's odd to spend north of three grand on a pro-level camera and then find posterization artifacts like the ones around the arches in our a7S image below.

100% crop #1
100% crop #2
Full image

You can find the original Raw file here. It's important to note that - in this case - we didn't even have to make a drastic edit to make the posterization visible; in fact, it was evident even in out-of-camera JPEGs with 'DRO' set to 'Auto' (as this mode performs some shadow boosting to even out tones). We feel that Raw shooters looking for the best image quality could do without the potential pitfalls of Sony's compression methods, especially considering the competition offers genuinely lossless Raw options.**
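To see why this kind of compression posterizes, here's a deliberately simplified model inspired by the LibRaw/RawDigger analysis linked below - the real ARW2 bitstream differs in its details, so treat this purely as an illustration of delta quantization: within a block, everything between the block's minimum and maximum is stored with limited precision, so a block straddling a high-contrast edge quantizes its smooth neighbors coarsely:

```python
# Simplified delta-compression model (illustration only, not Sony's
# exact scheme): values in a block are rounded to min + k * step,
# where step depends on the block's full min-to-max range.
import numpy as np

def compress_block(block, delta_bits=7):
    """Quantize a block to min + k * step, k in [0, 2**delta_bits - 1]."""
    lo, hi = int(block.min()), int(block.max())
    if hi == lo:
        return block.copy()
    step = (hi - lo) / (2**delta_bits - 1)
    return lo + np.round((block - lo) / step) * step

# A gentle shadow gradient next to one bright edge pixel, as one block.
block = np.array([100, 101, 102, 103, 104, 105, 106, 107,
                  108, 109, 110, 111, 112, 113, 114, 3000], dtype=float)
decoded = compress_block(block)
print(decoded[:8])   # the smooth ramp collapses onto coarse steps
```

With one bright pixel in the block, the quantization step balloons to roughly 23 counts, and the smooth 100-114 ramp collapses onto a flat band - banding of exactly the kind visible around those arches.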

Furthermore, a little-known fact is that the a7 cameras far too easily drop to a lower bit-depth analog-to-digital conversion, which truncates dynamic range by raising the read-noise floor (through quantization error). Set your camera to continuous shooting, silent shutter, or bulb mode, or use dark-frame subtraction (long exposure NR), and these cameras default to 12-bit conversion, which can cost 1 to 2 EV of dynamic range - nullifying one of the largest advantages of Sony sensors. We sincerely hope Sony's hard at work ensuring this doesn't happen on a high-end camera like the a7R II, and we'll be testing for it as soon as we get our hands on a production model.
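A back-of-the-envelope sketch of why bit depth matters (our model, not a measurement, and the 0.5-count analog read noise is an assumed figure): treat the read floor as analog noise combined with ADC quantization noise of q/sqrt(12), all in 14-bit counts, and see what the 4x coarser 12-bit step does:

```python
# Model: read floor = sqrt(analog^2 + (q^2)/12), in 14-bit counts;
# engineering DR = log2(clipping / read floor).
import math

analog = 0.5                              # assumed analog read noise, DN14
for bits in (14, 12):
    q = 2 ** (14 - bits)                  # quantization step in DN14 units
    floor = math.hypot(analog, q / math.sqrt(12))
    print(f"{bits}-bit: usable DR ~ {math.log2(16383 / floor):.1f} EV")
```

Under these assumptions the drop to 12-bit costs a little over 1 EV - and the quieter the analog chain (Sony's forte), the more a coarse ADC step dominates the floor.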

Concluding remarks

One could remark that in challenging the giants, Sony must innovate to offer a competitive advantage. And one would be absolutely right.

But the level of innovation and effort we're seeing here is formidable. I've barely scratched the surface in this opinion piece - I've completely left out all talk of 4K video, usable phase-detect AF in video, and 5-axis image stabilization with most lenses (remember when some thought that was impossible with a full-frame sensor?). I've even left out mention of size benefits, but for good reason: the competitive advantage of a7R II-like cameras may eventually have little to do with size at all and everything to do with the features offered. These features - disruptive AF technologies, sensor stabilization, removal of sources of vibration, 'smarter' sensors of the future, and more - spell real benefits for photographers and videographers. Too many, in fact, to spell out fully here, and too many to test before definitively stating them as benefits! So I'm going to leave you with a short anecdote:

Just the other night I was at a candle-lit dinner with my fiancée, a Nikon D810 and a Sigma 35mm F1.4 lens, trying to shoot at F2 to let in as much light as possible while allowing some wiggle room with respect to AF accuracy. I was, in fact, struggling with AF accuracy, wondering if the optimal AF microadjustment value changes not only with temperature, but also with the color of the primary light source. I hiked the shutter speed up to 1/80 sec since I had no image stabilization, and to ensure absolutely no mirror- or shutter-induced shake. I can't help but wonder what an a7R II might have done here. Image stabilization would easily have allowed a shutter speed of 1/20 sec, while the absence of AF accuracy issues would've allowed F1.4. That's a potential 3 EV noise advantage right there (the difference between, say, ISO 800 and ISO 6400), with no mirror or shutter vibration issues to worry about when optimizing my shutter speed, to boot.

The point is that worrying less about focus, worrying less about how to get the most out of all those pixels, worrying less about running into the noise floor of my camera because I want to expose so as to keep those highlights from blowing - these all have one thing in common: they're all about 'technology getting out of the way'.

More candy, please.


*As explained at further length in the description text of the video on our YouTube page, we believe that Canon's iTR algorithm still relies perhaps too heavily on distance information for subject tracking, as this is how Canon cameras traditionally worked prior to the introduction of an RGB metering sensor providing supplemental data for subject tracking algorithms.

Briefly, the phase-detect AF module knows the distance of your subject when you half-press the shutter button, after which the system selects any AF point that reports a subject at a similar distance (it's a bit more complicated than that, as the system also looks for contiguous movement across points, and for distance patterns indicating approaching or receding subjects). While this has worked well for Canon in the past, and likely continues to work well where subjects are well isolated in depth, it tends to fall behind in a more general sense, as pattern recognition based on actual scene data can provide more accuracy than a system that leans heavily on distance information.
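A crude sketch of that distance-driven handoff, as we read Canon's published behavior (heavily simplified; the point names, tolerance, and structure are all ours):

```python
# Toy distance-based AF point handoff: after the half-press, keep
# selecting whichever point reports a distance closest to the tracked
# subject's last known distance; points far off in depth are ignored.
def next_af_point(points, last_distance, tolerance=0.15):
    """points: dict of point_id -> measured subject distance (metres)."""
    candidates = {p: d for p, d in points.items()
                  if abs(d - last_distance) / last_distance < tolerance}
    if not candidates:
        return None, last_distance            # lost: hold the last estimate
    point = min(candidates, key=lambda p: abs(candidates[p] - last_distance))
    return point, candidates[point]

point, dist = next_af_point({'centre': 5.2, 'upper_left': 5.05, 'right': 9.0},
                            last_distance=5.0)
print(point, dist)   # -> upper_left 5.05: the similar distance wins
```

The weakness is plain even in the toy: two subjects at similar depths are indistinguishable, which is where scene-data pattern recognition pulls ahead.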

**If you want to learn more about Sony's compression scheme, Iliah Borg and Alex Tutubalin of LibRaw/RawDigger fame have done a fairly in-depth study.