Yeah, how does AF work in EOS cameras? I also thought I knew. It's supposed to be very simple: as soon as the camera sees a sharp image, it stops adjusting focus. And in MF mode with focus confirmation, it confirms focus as soon as it sees a sharp image. As simple as that. BUT IT IS NOT THE CASE!!!!
I thought everyone knew SLRs used phase detection instead of the contrast (aka sharpness) detection you describe. It's right in the specifications for sure.
What does that mean? It means there is a set of optics in front of two linear CMOS focus-sensor arrays down below the mirror in Canon SLRs. Light from the lens that passes through the first (partially transmissive) mirror is directed down to these optics by a secondary mirror.
The optics are effectively the same as a split-image prism. The split we see when we have a focus screen with this feature is the divider between the two CMOS arrays (they don't have to be CMOS; they can be CCDs). Each array gets half of the image. Just like your eye does with a split prism, the camera tries to line up the two contrast curves (i.e., adjust their relative phase) on each side of the dividing point. You can actually watch these curves change in near real time with the service CD. Here is a screen shot of basically what we're talking about from the service CD: the red line would be from one half of the sensor and the white line from the other half, each looking at its half of the split image.
However, it doesn't line them up identically. This is because the optical path to these optics is never precisely the same length as the optical path to the main camera imaging sensor. So, the camera has an offset calibration value, and uses this to adjust the waveforms so they are ever so slightly staggered by this amount.
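If it's easier to see in code than in words, here is a rough Python sketch of the basic idea. This is my own toy version, not Canon's firmware; the function name, the use of a plain cross-correlation, and the way the calibration offset gets subtracted are all assumptions on my part:

```python
import numpy as np

def phase_offset(left, right, calibration_offset=0.0):
    """Estimate how many pixels out of phase the two half-image profiles are.

    left, right -- 1D intensity profiles read off the two linear AF arrays
    calibration_offset -- the per-body constant that accounts for the AF
    optical path not being exactly the same length as the path to the
    imaging plane (the value the service CD lets you change)
    """
    left = np.asarray(left, dtype=float) - np.mean(left)
    right = np.asarray(right, dtype=float) - np.mean(right)

    # Slide one profile past the other and keep the lag where they match best.
    corr = np.correlate(left, right, mode="full")
    lag = int(np.argmax(corr)) - (len(right) - 1)

    # Zero now means "in focus at the imaging plane", not "in focus at the AF sensor".
    return lag - calibration_offset
```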
If you think about it, this phase information can open up new worlds for the autofocus. A contrast system can only tell if the latest AF adjustment helped or hurt the focus. It can only answer the question "is it sharper now?" It doesn't even know which way the lens is out of focus before it moves it. However, a phase detection system knows whether the left array is lined up higher or lower than the right array (I'm picking an arbitrary axis here). This tells the camera which way to turn the lens before it even moves. Of course, there is a limit to how far the focus can be off before the contrast curves go right off the sensors. You can see this effect easily in a split-prism focus screen, as you have to get the focus at least remotely close before you can use the prism. When you're out of that range some call it a "hunt" mode, and when you're in that range it is sometimes called "acquisition" mode. Nikon bragged about how the range of the second mode was expanded on the D2H.
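In the same toy model, the sign of that offset is the whole trick. Something like this, where the sign convention and the thresholds are made up:

```python
def af_decision(offset_pixels, sensor_length):
    """Turn a measured phase offset into an AF decision.

    If the contrast curves have run off the ends of the arrays we can't get
    a usable offset at all, so the camera has to hunt; otherwise the sign
    alone tells it which way to drive before the lens has moved at all.
    """
    if offset_pixels is None or abs(offset_pixels) > 0.8 * sensor_length:
        return "hunt"                      # sweep the lens until the curves come back
    if abs(offset_pixels) < 0.5:
        return "in focus"
    return "drive nearer" if offset_pixels > 0 else "drive farther"
```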
Besides the direction, it also knows exactly how far the image is out of focus when in acquisition mode, by knowing how many pixels out of phase the two waveforms are. This is of course needed in AI Servo mode when you do predictive focus, since you can never try to focus after the mirror flips up (which is exactly when you need your moving target to be in focus). So you can see that it is not a very closed-loop operation in AI Servo mode. For speed of operation, perhaps loop stability, and probably a few other reasons, it is actually not entirely closed loop in One Shot either.
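Predictive focus is then just extrapolation. Very roughly (and again this is a cartoon; the two-point linear prediction and the names are mine, and real bodies presumably do something smarter):

```python
def predicted_focus_target(history, mirror_blackout):
    """history: recent (time, in-focus lens position) pairs from phase measurements
    mirror_blackout: seconds between the last measurement and the actual exposure

    Once the mirror is up there are no more measurements, so the camera has to
    drive the lens open loop to where the subject *will* be, not where it was.
    """
    (t0, p0), (t1, p1) = history[-2], history[-1]
    velocity = (p1 - p0) / (t1 - t0)        # how fast the in-focus position is moving
    return p1 + velocity * mirror_blackout  # open-loop target; no chance to re-check
```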
The above-referenced calibration value is the chief cause of front/rear focus in SLRs. It is trivial to change, and the adjustment is MUCH more precise than anyone who has ever called someone a "pixel peeper" would suspect. I fixed my D60 by adjusting this with the service CD. My Rebel was dead on. My 1DII was off, but the 1D service CD would not detect the 1DII, so I had to send it in to be done.
Another cause of front/rear focus, one which affects each lens differently, is that although it is obvious how the camera knows how far the image is out of focus, it is not obvious how it knows how far to move the lens to correct it. There is a whole series of lens variables sent back to the camera for this purpose, and they are at least mentioned in one of the lens workbooks. They are the same sorts of variables you see in any servo-controlled system, adjusting things like acceleration and the calibration of the feedback pulses. Those feedback pulses (so the camera knows how much the lens motor has moved) are usually generated by some sort of photodiode arrangement looking through teeth on a gear. It's the same sort of setup automotive engines use to monitor engine speed and position relative to top dead center, except that in engines these are often Hall-effect sensors instead of photodiodes. This part of the system is well documented in patents (from Kodak, I think) that you can find links to on these forums. You can also pull the lens mount off some cheapo lens to see it for yourself. The calibration of this feedback element was the cause of some Sigma 120-300 misfocusing, as I understand it.
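To make that last part concrete, here is the kind of arithmetic I mean, with made-up parameter names standing in for the real lens variables (real lenses report more than two numbers, and I don't know Canon's actual units):

```python
def pulses_for_correction(defocus_mm, pulses_per_mm, backlash_pulses=0):
    """Convert a measured defocus into a number of encoder pulses to drive the
    focus motor. pulses_per_mm and backlash_pulses stand in for the per-lens
    calibration constants the lens reports back to the body."""
    return int(round(defocus_mm * pulses_per_mm)) + backlash_pulses

def drive_focus(target_pulses, step_motor, encoder_edge_seen):
    """Run the motor while counting edges from the photodiode looking through
    the gear teeth, stopping once the commanded pulse count has gone by.
    step_motor and encoder_edge_seen are hypothetical hardware callbacks."""
    counted = 0
    direction = 1 if target_pulses >= 0 else -1
    while counted < abs(target_pulses):
        step_motor(direction)
        if encoder_edge_seen():             # one gear tooth has passed the photodiode
            counted += 1
```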
I'm sure this isn't formatted quite in the right order and I've missed a few things, but I'm in a hurry.
Jason