Canon R7 Dual Pixel CMOS: How does it work?

Started 4 days ago | Discussions thread
freixas Junior Member • Posts: 29

I have a lot of questions here. Before jumping to reply, please either read the entire discussion or at least read any responses I've made labeled Update. I would appreciate links to authoritative sources if you have them.


I've seen claims that the R7 supports both Dual Pixel CMOS AF and contrast detection. I finally found the definitive answer on page 950 of the Advanced User's Guide: the focusing method is Dual Pixel CMOS AF. It's not a hybrid AF system.

The dual-pixel phase-detection is interesting, but I've yet to find a detailed explanation of it. Most explanations are simplistic.

I was thinking about the left- and right-looking diodes that make up each pixel of the sensor. Essentially, you are capturing two separate images, one shifted horizontally with respect to the other, with the separation being a function of the defocus and the maximum aperture. Near the left and right edges, parts of one of the images will fall off the sensor.
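As a sanity check on that picture, here's a back-of-the-envelope geometric sketch (my own simplification, not anything from Canon) of how the A/B shift might scale with defocus and aperture:

```python
# Simplified geometric model (an assumption, not Canon's actual math):
# each pixel's two diodes see opposite halves of the lens pupil, so an
# out-of-focus point lands at slightly different positions in the
# left-looking and right-looking images. The horizontal shift
# (disparity) grows with defocus and with aperture diameter.

def disparity_px(defocus_mm, focal_mm, f_number, pixel_pitch_mm):
    """Rough disparity between the two sub-images for a defocused point.

    The effective baseline between the two half-pupil centroids is
    taken to be half the aperture diameter (a simplification).
    """
    aperture_mm = focal_mm / f_number
    baseline_mm = aperture_mm / 2  # centroid separation of the half-pupils
    shift_mm = baseline_mm * defocus_mm / focal_mm  # similar-triangles estimate
    return shift_mm / pixel_pitch_mm

# A fast lens produces a larger disparity for the same defocus,
# which would be one reason PDAF works better with fast lenses:
print(disparity_px(0.1, 50, 1.8, 0.0032))  # 50mm f/1.8, ~3.2 micron pixels
print(disparity_px(0.1, 50, 8.0, 0.0032))  # same defocus at f/8: much smaller
```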

In some modes, the specs claim 100% coverage. Do they have some extra pixels around the left- and right-edges to manage 100%?

100% coverage is only for automatic selection. Which mode is "automatic selection"? The Advanced User's Guide uses this phrase 3 times and never defines it. When you select the AF point, you get only 90% horizontal coverage, which makes more sense.

The two images tell the camera which way to focus and by what amount. To do this, it needs to know which is the left image and which is the right. You can hold the camera vertically in two orientations, so the camera has to take the orientation into account. Does focusing still work if the camera is flipped upside down? I haven't tried it, but I hope so!

Exactly how the two images are correlated is a mystery. The point you are trying to get into focus may start out of focus. It may have a large circle of confusion, so its light doesn't fall on just one pixel. And the light from that point will be mixed in with the circles of confusion of neighboring points. The advantage of phase detection is that you know how far out of focus a point is and in which direction the lens should be adjusted to bring it into focus. But an out-of-focus point will generally be low contrast and spread around.

Most sites I visited for an explanation give a simplistic view of the process: there are two sharp images and you just have to move them together. Or there is just one point to worry about. The real algorithm must involve some complex correlation of the two images. I'm still trying to track this down.
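For what it's worth, here's a toy version of the kind of region-based matching I imagine (purely hypothetical; I don't know Canon's actual algorithm): slide a line from one sub-image against the matching line of the other and pick the shift that minimizes the sum of absolute differences (SAD). Even a blurred, low-contrast edge can be matched this way, because whole regions are compared rather than single points:

```python
# Hypothetical 1-D correlation: find the integer shift of b (relative
# to a) that minimizes the mean sum of absolute differences (SAD).

def best_shift(a, b, max_shift):
    """Return the shift of b relative to a with the lowest mean SAD."""
    best, best_sad = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        # compare only the overlapping region for this shift
        pairs = [(a[i], b[i + s]) for i in range(len(a))
                 if 0 <= i + s < len(b)]
        sad = sum(abs(x - y) for x, y in pairs) / len(pairs)
        if sad < best_sad:
            best, best_sad = s, sad
    return best

# A blurred edge, and the same blurred edge shifted right by 3 pixels:
a = [0, 0, 1, 3, 6, 9, 10, 10, 10, 10, 10, 10, 10]
b = [0, 0, 0, 0, 0, 1, 3, 6, 9, 10, 10, 10, 10]
print(best_shift(a, b, 5))  # 3: the sign and magnitude give direction and amount
```

The sign of the shift tells the camera which way to drive the lens, and the magnitude tells it how far, which is exactly the advantage phase detection has over contrast detection.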

A Dual Pixel RAW file contains two images: A+B (the two diodes combined in some fashion) and A (at least, that's true for the Canon EOS 5D Mark IV, the first body to offer Dual Pixel RAW). It would make sense for the camera to contain two buffers, also A+B and A, but phase detection would seem easier to perform if the buffers were A and B. Which is it?
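Note that the two layouts are information-equivalent: if the camera has A+B and A, the B sub-image is recoverable by simple subtraction, so the choice of buffers may just be a convenience. A trivial sketch with made-up numbers:

```python
# Toy numbers only: recovering the B sub-image from (A+B, A) by
# per-pixel subtraction, which is why storing (A+B, A) vs. (A, B)
# is just a layout choice.

a_plus_b = [200, 180, 150]   # combined-diode values
a        = [110,  85,  60]   # left-looking diode values

b = [ab - x for ab, x in zip(a_plus_b, a)]
print(b)  # [90, 95, 90] -- the right-looking diode values
```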

Because every pixel has two diodes and because the light for each diode comes from a possibly different location (based on whether the pixel points to an object in focus or one out of focus), how exactly does the camera create the final A+B image? If you simply combined each pair of diodes, you would get a sort of double-exposure of any objects that aren't right in the plane of focus. Another mystery.
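My current guess (again, an assumption) is that there is no double exposure because each diode's image is not a shifted copy of the full image: each diode integrates light from one half of the pupil, so a defocused point spreads into the left half of its blur disc in one sub-image and the right half in the other. Summing them tiles the full disc, reproducing the ordinary full-aperture blur. A 1-D toy model:

```python
# Toy 1-D model (my assumption): a defocused point spreads into a blur
# "disc". Each diode sees only one half of the lens pupil, so the A image
# gets (roughly) the left half of the disc and the B image the right half.
# Summing A and B rebuilds the ordinary full-aperture blur -- one smeared
# spot, not two distinct copies of the scene.

full_blur = [1, 1, 1, 1, 1, 1]   # 6-pixel blur disc from a point source
a_image   = [1, 1, 1, 0, 0, 0]   # left-half-pupil contribution
b_image   = [0, 0, 0, 1, 1, 1]   # right-half-pupil contribution

summed = [x + y for x, y in zip(a_image, b_image)]
print(summed == full_blur)  # True: the halves tile the disc
```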

The final mystery is why the exposure in the EVF affects the autofocus. If the EVF shows an overly dark or bright image, autofocus doesn't work as well. One can explain this by saying that these images have lower contrast than a well exposed image, but it implies that the image sent to the EVF is the image that the phase detection operates on.

For example, if I turn off ExpSim or use auto-ISO, I can get a well-exposed image on the EVF where it might otherwise be dark. I know that ISO gain is applied before the analog signal from the sensor goes through the ADC to become digital numbers. If phase detection works from the buffers (which it probably does, since it has to look at regions of an image), then the ISO affects the buffered image as well as the EVF image. My theory is that turning ExpSim off essentially enables auto-ISO until you snap the shutter, which would explain why both ExpSim OFF and auto ISO improve AF.

Slowing the shutter speed (for dark images) also helps the EVF image and the AF, but the explanation has to be a bit more complex. With ExpSim on, I would guess the camera raises the ISO as much as it can and then it slows the frame rate at which images are captured on their way to the EVF. This would improve the image that goes into the buffer from which phase detection works. For bright images, it might need to read the buffer more often. As with most of this post, this is just speculation.

Lacking authoritative sources, some of my questions could be answered with carefully designed tests. Since Canon is unlikely to offer detailed explanations, the rest will require educated guesses, ideally from someone more educated on these processes than I am.
