What is the best m43 for focus tracking?

Started Mar 31, 2013 | Questions thread
Anders W Forum Pro • Posts: 21,468
Re: what is the best m43 for focus tracking?

JamieTux wrote:

Anders W wrote:

JamieTux wrote:

Hi Anders, I was just talking in relation to tracking AF - and I was thinking that if you take two readings and the second shows the subject farther away than the first, then you know very quickly the direction of motion. I wasn't talking about it being THE trump card for everything

We are probably thinking along roughly the same lines. But I find it interesting to consider the details of when and to what extent the ability of PDAF to tell whether the subject is in front of or behind focus and the direction and speed of the movement translates into a clear advantage. Consider to begin with the following examples.

OK before we go down that route - I think the biggest part is that you are using 2 sensors for 2 tasks, colour/light/contrast for up/down, left/right and PDAF for forward and back - as they are independent the computation is decreased massively.

Not following you here. Whether two processes are using the same or different data sources doesn't affect either the magnitude of the processing tasks or the extent to which they can be done in parallel rather than sequentially.

Example 1: Suppose that for a single shot (not a burst), we press the shutter fully, without having half-pressed to prefocus, while aiming at a new moving target. In this case, CDAF is likely to do just as well as PDAF, perhaps better. The circumstances are essentially the same as when focusing on a static target.

Agreed - as long as the lags are low enough

Example 2: Suppose we instead try to prefocus on a moving target and then expect the camera to keep that target in focus while we wait for the proper moment to fire. In this case, a known difference between CDAF and PDAF is that CDAF has to adjust focus a little all the time in order to even know whether there is any motion that brings the subject out of focus, whereas PDAF doesn't have to do that. On the other hand, PDAF lenses have difficulties moving in very small steps with great precision, so it might be more difficult for PDAF to fine-tune focus as the subject moves.

Furthermore, the inability of CDAF to determine from a single AF reading whether the subject is in focus, behind focus, or in front of focus can be compensated for by "learning from experience". If the subject is regularly moving in one direction, an intelligent CDAF system would quickly discover that fact by trial and error and then try the expected focus direction before the unexpected in its attempt to keep the subject in focus, thus doing better than it otherwise would. So it doesn't seem to me that PDAF has a major advantage in this case either.
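The "learning from experience" idea can be put as a toy loop. This is only a sketch under made-up assumptions (a one-dimensional scene, a contrast function that peaks at zero focus error, invented step sizes), not any real camera's firmware: the tracker remembers its last successful direction and tries it first, reversing only when contrast drops.

```python
# Toy simulation of a direction-learning CDAF loop. Contrast peaks when
# the lens position matches the subject distance; the subject drifts away
# at a constant rate. All names and numbers are illustrative only.

def make_scene(start=10.0):
    state = {"subject": start, "lens": start}
    def contrast():
        # Higher is sharper; the peak (0.0) is at zero focus error.
        return -abs(state["subject"] - state["lens"])
    return state, contrast

def track(state, contrast, frames=30, step=0.5, drift=0.3):
    direction = 1.0                        # remembered "expected" direction
    last = contrast()
    for _ in range(frames):
        state["subject"] += drift          # subject keeps moving away
        state["lens"] += direction * step  # try the expected direction first
        now = contrast()
        if now < last:                     # contrast fell: we guessed wrong
            direction = -direction
            state["lens"] += 2 * direction * step  # step back past the start
            now = contrast()
        last = now
    return abs(state["subject"] - state["lens"])   # final focus error
```

Because the subject mostly moves one way, the loop spends most frames stepping in the right direction on the first try; a system with no memory would have to probe both ways on every frame.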

PDAF lenses don't really have difficulty with small movements - it's more to do with torque and inertia; the focussing elements are heavier in PDAF lenses because they could be. However, if you've used a lens like the Nikon 105VR you'll have seen that it is capable of very quick, VERY small corrections, and the Canon 100L IS may be even finer (it allows for forward and backward IS with the right body too).

Well, the inertia and the type of motors used do affect the precision with which the AF mechanism can move. That's at least part of the reason why CDAF has greater accuracy, not only with regard to systematic error (back-focus, front-focus) but also random error. For PDAF systems to reach the same level of accuracy, they would frequently have to make a set of final, small adjustments after reaching approximate focus in a single go. They limit the extent to which they do that, since it would take too much time, and instead accept a somewhat larger error margin. Some newer PDAF lenses may be able to overcome this problem to a greater extent than others, but as a generalization, I still think my description is valid.
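That speed-versus-accuracy trade-off can be illustrated with purely hypothetical numbers (the coarse error, fine step, and tolerances below are invented, not measured from any AF system): one phase-based jump lands within some error band in a single move, and each tightening of the acceptable error costs extra small correction steps, i.e. time.

```python
# Hypothetical settle-time model: one coarse phase-detect jump with a
# fixed residual error, then fine steps until within tolerance.
def pdaf_settle(target, coarse_error=0.4, fine_step=0.1, tolerance=0.05):
    position = target + coarse_error   # single phase-based jump, imperfect
    moves = 1
    while abs(position - target) > tolerance:
        position += -fine_step if position > target else fine_step
        moves += 1
    return moves

loose = pdaf_settle(5.0, tolerance=0.45)  # accept the coarse error: 1 move
tight = pdaf_settle(5.0, tolerance=0.05)  # CDAF-like accuracy: 5 moves
```

Accepting the wider tolerance buys speed; demanding CDAF-level precision multiplies the number of lens movements.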

There's no reason that a CDAF lens would not work on PDAF that I can think of - the CDAF ones have to have low inertia so that they can overshoot and come back or quickly focus both ways to see which is the correct way.

I agree about that. And it would surprise me if MFT lenses aren't designed so as to work well with on-sensor PDAF technology, should Oly and Pany want to go that way in the future.

I agree with the rest of your point though, but I'm still seeing that PDAF has 2 advantages from knowing the direction of focus:

1) One of the planes of movement is on a separate sensor so it's much easier to calculate that movement

See my answer to your first point above.

2) You don't have to analyse as much of the image to track the left/right up/down movement - as the actual focussing distance is set separately - you jsut need to know which sensor(s) it's covering and moving to

Again, even if the data source is the same, the two processes can freely decide which part of the data they need to work with at what time and to what extent.

In CDAF you constantly have to be finding the edge of a feature and checking that it's as sharp as it can be.

Yes, CDAF has to check for focus constantly if the subject is moving. But both PDAF and CDAF need contrast (edges) in order to work.
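To show why edges are required, here is one common family of CDAF sharpness scores (a sum-of-squared-gradients metric; there are several variants in the literature, and this toy row-of-pixels version is only illustrative): a featureless area scores zero no matter where the lens sits, so there is nothing to maximize.

```python
# Sum-of-squared-gradients contrast metric over one row of pixel values.
def contrast_metric(row):
    return sum((b - a) ** 2 for a, b in zip(row, row[1:]))

sharp = contrast_metric([0, 0, 200, 200, 0, 0])         # hard edges
soft = contrast_metric([0, 50, 120, 120, 50, 0])        # softened edges
flat = contrast_metric([100, 100, 100, 100, 100, 100])  # no edges at all
```

Defocus spreads an edge over more pixels, lowering the score, which is exactly the signal CDAF hill-climbs on - and exactly what disappears on a blank wall.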

I THINK (as in I might be wrong) that this is actually the way analogue televisions self-tuned from the 90s onwards - my brother was a service engineer at Panasonic in the mid 90s and I am sure that their CATS tuning system worked on this principle

Might be. I confess to being perfectly ignorant about this.

There are then, of course, the more complicated scenarios of burst-mode shooting, but I thought we might take the simple ones first.

Always the best way

In the above examples, I am just thinking aloud, and I might have missed something important. So feel free to correct me if you think I have.

I don't think that there is any correcting to do - just complicating factors to add in

I think that the biggest factor for tracking is probably the amount of data that needs to be processed - and not having separate processors for that job, as far as I am aware, in CDAF sensors - about to reply to Celngman with more thoughts.

Yes. Data processing capabilities are of considerable importance here. But with those capabilities continuing to rise more or less in agreement with Moore's law, there is hope on the horizon.

Unfortunately sensor tech seems to keep growing with Moore's law too - when I studied electrical engineering at university we were just hitting the absolute limit with the Pentium processor design - well, 15 or so years later we still keep jumping through those limits!

Well sensor tech grows in happy collaboration with Moore's law as far as I can see. Sensors are becoming capable of faster and faster read-out rates and processing capabilities are increasing so as to handle the increasing amount of data generated. Since sensor tech unfortunately does not, and cannot, follow Moore's law in all regards, the read-out rates cannot continue to rise indefinitely, because the data would eventually become too noisy (as a result of photon noise) to be of much use. So there is a physical limit involved here.
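The photon-noise limit can be put in numbers. With a fixed light level (the photon rate below is an arbitrary example, chosen only to show the scaling), shot-noise-limited SNR equals the square root of the photons collected per frame, so quadrupling the read-out rate halves the SNR.

```python
import math

# Shot-noise-limited SNR per frame: N photons give noise sqrt(N),
# so SNR = N / sqrt(N) = sqrt(N). The photon rate is a made-up example.
def shot_noise_snr(photons_per_second, frames_per_second):
    photons = photons_per_second / frames_per_second
    return photons / math.sqrt(photons)

snr_60 = shot_noise_snr(600_000, 60)    # 10,000 photons/frame -> SNR 100
snr_240 = shot_noise_snr(600_000, 240)  # 2,500 photons/frame  -> SNR 50
```

No amount of processing power changes this: past some frame rate, each frame simply contains too few photons to be useful, which is the physical limit referred to above.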

One thing I have just thought of, though - I wonder if this is actually the reason that raw data is 12-bit on m43 even on a Sony sensor (which will very likely be capable of 14-bit output): those last 2 bits would give the computers 4 times the information they already have to sift through. The video mode on the GH3 shows that actual data flow is not an issue.

Well, I have enough experience with programming and data processing to say for sure that the difference between 12-bit and 14-bit data is of trifling importance from a processing-time point of view. So MFT could easily go to 14-bit if it made sense based on the precision the sensor is capable of. You are right that the latest Sony sensor is close to the point where it would make sense to move up to 14-bit, and we might well see that happen in the next generation of MFT sensors.
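The arithmetic is worth spelling out (the 16 MP figure is just a representative example): the extra 2 bits quadruple the number of tonal levels, but only increase the raw data volume by a factor of 14/12, about 1.17x - nowhere near 4x.

```python
# Data volume vs tonal levels for 12-bit and 14-bit raw on a 16 MP sensor.
pixels = 16_000_000
bits_12 = pixels * 12               # 192 Mbit = 24 MB per frame
bits_14 = pixels * 14               # 224 Mbit = 28 MB per frame
levels_12 = 2 ** 12                 # 4,096 tonal levels
levels_14 = 2 ** 14                 # 16,384 levels: 4x the levels...
data_ratio = bits_14 / bits_12      # ...but only ~1.17x the data to move
```

So the "4 times the information" applies to tonal precision, not to the number of bits the processor has to shift around, which is why the processing-time cost is small.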

Haven't had a chance to test the GH3 yet by the way

But you have it already or what?

Oh yes it arrived on Wednesday, I am doing a 365 project this year so yesterday and today's pictures (when it's up) are from the GH3 - using my phone as a remote release.  I'm going to try some side by side shots tomorrow - we are due sunny weather so I'll try to find some challenging scenes and I have kids to get subjective views on focus - it's definitely a very different camera to the E-M5.

Looking forward to hearing more about your take on the two. The GH3 was the camera I was waiting for about this time last year when the E-M5 unexpectedly (from my point of view) appeared on the stage. When I chose to buy the E-M5 last summer, I didn't know, of course, what the GH3 would eventually have to offer. But now that I know, I don't regret the choice I made. Still, it pretty much follows from what I have already said that the GH3 is number two on my personal ranking of MFT bodies. It is also clear to me that both cameras have their pros and cons relative to one another. So when I say I prefer the E-M5 it's because its pros (primarily IBIS, size/bulk, live-view highlight warnings) carry more weight than those of the GH3 based on my personal wants/needs.

Of course, a happy solution to the problem of choosing between them is to do what you did: Get both.
