Why no cross-type OSPDAF points?

Started 4 months ago | Discussions thread
rf-design Regular Member • Posts: 470
Re: Why no cross-type OSPDAF points?

I thought that using "on sensor phase detection" would lead naturally to the decision to use both types of detection, left-to-right and upper-to-lower, to improve the AF. As classic PDAF uses both directions for obviously good reasons, there should be a technical reason for the compromise. But I found none!

For instance, the newest Sigma SDQ-H uses OSPDAF. With RawDigger you can inspect the OSPDAF pixels by disabling the low-sensitivity correction for display.

The horizontal AF pixel repetition is 1/16, the vertical repetition is 1/32. Only two of the top-layer blue pixels are used for OSPDAF. They are vertically stacked and, referring to the patent, called B3 and B4. So both PDAF pixels of a pair can be read sequentially to the same output line. The middle- and lower-layer pixels extend over four top-layer pixels. To improve read speed, an alternative would be to place the PDAF pixels of a pair in separate output lines and read them, meaning convert them to digital, in parallel.
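
To make the layout concrete, here is a small Python sketch that tiles the B3/B4 pair positions over the top layer. Only the 1/16 and 1/32 repetition is from above; the in-tile offset and the 6200x4152 top-layer size are my assumptions:

    # Sketch: tile the vertically stacked B3/B4 PDAF pair over the top
    # (blue) layer, using the 1/16 horizontal and 1/32 vertical repetition.
    # The in-tile offset (0, 0) and the layer size are assumptions.
    H_REP, V_REP = 16, 32

    def pdaf_positions(width, height):
        """Yield ((x, y) of B3, (x, y) of B4) top-layer coordinates."""
        for y in range(0, height, V_REP):
            for x in range(0, width, H_REP):
                yield (x, y), (x, y + 1)   # B3 above B4

    pairs = list(pdaf_positions(6200, 4152))   # nominal SDQ-H top layer
    print(len(pairs), "PDAF pixel pairs")      # 50440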

So if only 2 out of the (16*32)*(1+1/4+1/4) = 768 pixels per repetition block have to be read for AF operation, the update rate could be

3.8fps*(768/2) = 1459.2fps (3.8fps being the continuous drive rate)

So there is enough read bandwidth to read all PDAF pixels.
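
The same back-of-the-envelope calculation in Python, using only the numbers given above:

    # Back-of-the-envelope PDAF update rate from the numbers above.
    full_readout_fps = 3.8                # SDQ-H continuous drive rate
    pixels_per_block = (16 * 32) * (1 + 1/4 + 1/4)   # 768 per repetition block
    pdaf_pixels_per_block = 2             # B3 and B4

    # Reading only the PDAF pixels scales the rate by the inverse density:
    pdaf_update_rate = full_readout_fps * pixels_per_block / pdaf_pixels_per_block
    print(round(pdaf_update_rate, 1), "fps")   # 1459.2 fps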

Classic PDAF modules using a line sensor typically have a rectangular pixel aspect ratio, which increases the light sensitivity. They only receive a reflected part of the light from the AF mirror, but they can use more relative image area for a dedicated user AF point. So my guess is that if the number of OSPDAF pixels does not increase significantly, there remains a light sensitivity difference of a number of stops. For dual-pixel PDAF the situation is reversed.

I do not know whether, in the case of the SDQ-H, the PDAF pixel pairs are left-to-right or upper-to-lower. But it would be easy to use quadruplet PDAF pixels instead. They would fit naturally into the Quattro scheme B1..B4. This would halve the PDAF pixel update rate, which is high enough anyway. One reason that the PDAF pixel pairs are within a single quadruplet is a possible interaction with the middle pixel area below, because the carrier diffusion drift direction depends on the actual potential, or exposure, of the two layers. So the difference between the PDAF pixels is smaller, depending on the middle-layer exposure. But even with this interaction there is no reason not to put four PDAF pixels, looking in all four directions, into one pixel group.

Here is a 1000% magnification of the top (blue) layer.

The first Quattro patent

If the object within the user AF area is out of focus, for instance with a 50mm lens focused at 5m and the object 1m behind the focus, the object will be sharp

50e-3*(1+50e-3/5)-50e-3*(1+50e-3/6)=83.3um

before the image plane.  So at F1.4 the out of focus circle will be

83.3um/1.4 = 59.5um

The basic operation to extract a phase signal is to correlate only the PDAF pixel pairs. In the above out-of-focus situation the object is blurred over

59.5um/4.33um = 13.7 top-layer (blue) pixels.
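
The same thin-lens arithmetic as a small Python sketch (same numbers: 50mm lens focused at 5m, object at 6m, F1.4, 4.33um blue pixel pitch):

    # Defocus and blur-circle estimate for the example above.
    f = 50e-3          # focal length [m]
    u_focus = 5.0      # focused distance [m]
    u_object = 6.0     # object distance [m], 1 m behind focus
    N = 1.4            # f-number
    pitch = 4.33e-6    # top-layer (blue) pixel pitch [m]

    # Image-plane positions with the approximation v = f * (1 + f/u):
    dv = f * (1 + f / u_focus) - f * (1 + f / u_object)
    c = dv / N         # geometric blur-circle diameter

    print(round(dv * 1e6, 1), "um defocus")    # 83.3 um
    print(round(c * 1e6, 1), "um blur")        # 59.5 um
    print(round(c / pitch, 1), "blue pixels")  # 13.7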

I guess that there is a lower threshold for the object to be blurred; otherwise small, detailed structures could fall between the PDAF pixels. For the Sigma SDQ-H the blue pixel pitch is 4.33um, so the horizontal PDAF pixel pair spacing is

16*4.33um=69.3um

If, for instance, a small vertical line about 1 pixel wide against a black background has to be focused, it could fall in between two PDAF pixel pairs. In this case the PDAF fails. That is an extreme case, but I expect that OSPDAF operation works better if the number of PDAF pixels is increased, because less blur is needed and the risk of small or detailed objects failing to AF is reduced.
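
A minimal 1-D sketch of that failure mode (a hypothetical example, not from the patent): a sharp 1-pixel line landing between two PDAF sites spaced 16 pixels apart gives the PDAF pixels nothing to see:

    # Hypothetical 1-D illustration: a 1 pixel wide bright line on a black
    # background, seen only by PDAF pixels placed every 16 pixels.
    SPACING = 16
    scene = [0.0] * 128
    scene[40] = 1.0                    # sharp thin line at pixel 40

    pdaf_samples = scene[::SPACING]    # what the PDAF pixels see
    print(pdaf_samples)                # all zeros -> no AF signal at all

    # Even with the ~13.7 pixel blur from above, the spread is still
    # smaller than the 16 pixel site spacing, so the line can fall
    # between sites; a denser PDAF grid reduces this risk.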

From a technical standpoint, both PDAF directions could be built equally well. In the Quattro sensor case this is obvious. The final AF performance will depend on:

1. The PDAF pixel read rate

2. The PDAF pixel density, which limits the lower blur circle and the critical object size

3. The PDAF correlation processing (a minimal sketch follows below)
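
For point 3, here is a minimal sketch of the correlation step: sliding one PDAF pixel series against the other and taking the shift with the best match. This is a generic approach, not the actual in-camera algorithm:

    # Minimal sketch of phase extraction by correlating the two PDAF
    # pixel series (left/right or upper/lower). Generic, not camera code.
    def phase_shift(a, b, max_shift=8):
        """Return the shift of b relative to a with the best match
        (minimum mean absolute difference over the overlap)."""
        best_shift, best_cost = 0, float("inf")
        n = len(a)
        for s in range(-max_shift, max_shift + 1):
            cost, count = 0.0, 0
            for i in range(n):
                j = i + s
                if 0 <= j < n:
                    cost += abs(a[i] - b[j])
                    count += 1
            cost /= count
            if cost < best_cost:
                best_cost, best_shift = cost, s
        return best_shift

    # Toy example: b is a copy of a shifted by 3 samples.
    a = [0, 0, 1, 3, 5, 3, 1, 0, 0, 0, 0, 0]
    b = a[-3:] + a[:-3]        # circular shift by 3
    print(phase_shift(a, b))   # 3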

Dual-pixel PDAF has the highest potential for AF performance, but I doubt that all pixels are read and correlated at the AF system update rate. They are limited to a number of user AF areas, and possibly also to a subset of all available PDAF pixels.

Comparing dual-pixel sensors against high read-rate sensors with a limited number of shading-based PDAF pixels, it all comes down to the energy required to do AF. Reading the PDAF pixels consumes energy, as does calculating the AF phase signals. Therefore I think the cameras providing dual-pixel PDAF are not using all pixels for AF, and probably only a subset even of a user area.

Looking at it from the energy standpoint, the camera system with the lowest per-pixel read energy and the lowest energy for the PDAF phase correlation calculation will be the system with the best AF performance.
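
As a toy model of that energy argument (the per-operation energies are made-up placeholders; only the structure of the trade-off is the point):

    # Toy energy model for one AF update; energy values are hypothetical.
    E_READ = 1.0   # energy per pixel read (arbitrary units)
    E_CORR = 0.5   # energy per pixel in the correlation (arbitrary units)

    def af_energy(n_pdaf_pixels):
        return n_pdaf_pixels * (E_READ + E_CORR)

    print(af_energy(50_000))       # sparse OSPDAF pixels only
    print(af_energy(20_000_000))   # dual pixel, every pixel read for AF

    # In practice a dual-pixel camera will read and correlate only the
    # pixels inside the active user AF areas.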

Sorry for being a little off the initial question.
