Finally, how many AF pixels are there on the Z9/Z8 sensors?

The image of " scene Intelligence" ( somewhere on the EVF stream/pipeline) is reconstructed a time after complete capture (on board demosaicking for EVL/LV and IA processus). This could explain why some differeces beetween Phase and SR can be observed.
Later SLRs and DSLRs diverted some light from the optical path to a third "detector", located in the camera body base, to determine both colour and contrast in different parts of the scene.

Although individual colours are not initially detected at the individual pixel level, many SLRs and DSLRs could use the colours detected by the RGB sensor to track a moving subject by its colour - when it differed from the background colour.

Electronic processing, as demonstrated by the humble CD of roughly 35 years ago, is much faster than is essential for accurate AF.

It is very probable that mirrorless AF uses information detected separately from that initially recorded at the individual pixel level, prior to demosaicking, as part of the AF (and matrix metering) process.
 
One could argue the Z8/Z9 have 45.7 million color pixels to sample colors from to feed 3D tracking 😉
On this sub-thread detail: while the camera has 45.7 million pixels, each pixel is covered by a single colour filter - implying about 11.4 million each for R and B and 22.8 million (as there are 2 G per Bayer block) for green.
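As a rough sketch of that arithmetic (the only assumptions are the 45.7 MP figure and a standard RGGB Bayer layout):

```python
# Rough Bayer-filter arithmetic for a 45.7 MP sensor.
# Assumes a standard RGGB pattern; the exact Z8/Z9 layout is not published.
total_pixels = 45.7e6

red_pixels = total_pixels / 4     # 1 of every 4 photosites in an RGGB block
blue_pixels = total_pixels / 4    # 1 of every 4
green_pixels = total_pixels / 2   # 2 of every 4

# Roughly 11.4 M red, 11.4 M blue and 22.8 M green photosites.
print(red_pixels, green_pixels, blue_pixels)
```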

Digital systems have been reported to sample up to about 32 surrounding pixels to calculate an exact colour at each location.
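The "about 32 surrounding pixels" figure will depend on the algorithm, but the basic idea is easy to show. Below is a minimal bilinear-demosaicking sketch, assuming an RGGB layout and using numpy/scipy - a toy illustration, not any camera maker's actual pipeline:

```python
import numpy as np
from scipy.signal import convolve2d

# Minimal bilinear demosaicking sketch: each photosite records one colour and
# the two missing colours are estimated from nearby same-colour photosites.
# Assumes an RGGB layout; real in-camera pipelines are far more sophisticated.
def bilinear_demosaic(mosaic: np.ndarray) -> np.ndarray:
    h, w = mosaic.shape
    r_mask = np.zeros((h, w), dtype=bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    rgb = np.zeros((h, w, 3))
    kernel = np.ones((3, 3))                       # 3x3 neighbourhood average
    for channel, mask in enumerate((r_mask, g_mask, b_mask)):
        known = np.where(mask, mosaic, 0.0)
        total = convolve2d(known, kernel, mode="same")
        count = convolve2d(mask.astype(float), kernel, mode="same")
        rgb[..., channel] = total / np.maximum(count, 1.0)
    return rgb

# Toy 4x4 mosaic just to show the shapes involved.
demo = bilinear_demosaic(np.arange(16, dtype=float).reshape(4, 4))
print(demo.shape)   # (4, 4, 3): a full-colour estimate at every photosite
```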

Probably only Nikon knows how many sampling points current Nikon mirrorless uses.
We have no evidence that color information is used for PDAF, especially before demosaicking - that seems simply impossible.

The only information coming from Sony Semiconductor is that PDAF photosites (not pixels) sit only under the green filter (higher sensitivity across the visible spectrum). So they would work only in luminance.
I've also heard it's the green pixels only. It does make sense as the AF assist light is green & the red assist LEDs of Speedlights are disabled.
Maybe color is used (but this needs confirmation) somewhere from the low-resolution viewfinder image - or more precisely the buffer storing video and JPEGs, but not the raw data - when using AI. But this is far from the PDAF level necessary to initiate computing for long-range focusing.
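For what it's worth, luminance alone is enough for the core phase-detection step: the left- and right-masked photosites see two slightly shifted copies of the same intensity profile, and only the shift matters. A toy sketch of that idea (invented numbers, not Nikon's implementation):

```python
import numpy as np

# Toy illustration of phase detection on luminance only: the left- and
# right-masked photosites see shifted copies of the same 1-D intensity
# profile, and the lateral shift (phase difference) indicates focus error.
def phase_shift(left: np.ndarray, right: np.ndarray) -> int:
    """Integer shift that best aligns the two luminance signals."""
    left = left - left.mean()
    right = right - right.mean()
    corr = np.correlate(left, right, mode="full")
    return int(np.argmax(corr)) - (len(right) - 1)

# An invented edge profile, seen through the two halves of the lens pupil.
profile = np.concatenate([np.zeros(20), np.linspace(0.0, 1.0, 10), np.ones(20)])
left_view, right_view = profile[3:], profile[:-3]   # out-of-focus disparity

print(phase_shift(left_view, right_view))   # about -3 photosites of disparity
```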
This thread reminds us that the way DSLRs autofocus still has a few advantages. Nikon are probably working on column AF so that horizontal features can be focused on; tipping the camera off horizontal can help.
 
The image of " scene Intelligence" ( somewhere on the EVF stream/pipeline) is reconstructed a time after complete capture (on board demosaicking for EVL/LV and IA processus). This could explain why some differeces beetween Phase and SR can be observed.
Later SLRs and DSLRs diverted some light from the optical path to a third "detector" then in the camera body base to determine both colour and contrast in different parts of the scene.

Although individual colours are not initially detected at the individual pixel level many SLR and DSLR could use individual colours detected by the RGB sensor to track a moving subject by its colour - when different to background colour.

Electronic processing time, as demonstrated by the humble CD of 35 (?) years ago is much faster than essential for accurate AF.

It is very probable ML AF uses information detected separately to that initially recorded at the individual pixel level prior to demosiacing as part of the AF (and matrix metering) process.
Leonard, you're basically describing 3D-tracking, which came into its own when the exposure sensor located in the OVF pentaprism/mirror housing evolved from a low-resolution shaped pixel device to a high resolution full color array. The D6, for example, had a 180Kpixel exposure sensor that underpinned its 3D-tracking.

The 3 imaging arrays in DSLRs were 1) the imaging sensor, which provided CDAF; 2) the PDAF line sensor, which preferred red but could have evolved towards full color; and 3) the exposure sensor, which did the pattern recognition tasks of 3D-tracking and of course was full color. These were present throughout the digital era.
 
The Z9 has 45 million pixels and every 12th row is used for AF. That makes around 3.5 to 4 million pixels.
Where does anybody say or show that a row of photosites is a continuous line of AF-only photosites?

I checked through Sony literature as well; they never describe or show continuous lines of AF photosites.

AF photosites are more probably spaced on a 12x12 grid, or a 12x18 grid to keep the 1.5 aspect ratio.

So the total given by a 12x12 grid would be only about 318,000 AF photosites, or about 216,000 for a 12x18 - not 3 million.
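A back-of-envelope comparison of the two assumptions, using the Z8/Z9 image dimensions of 8256 x 5504 (the actual AF photosite layout is not published), lands close to the figures above:

```python
# Back-of-envelope AF photosite counts for an 8256 x 5504 sensor under the
# competing assumptions in this thread; the real layout is not published.
width, height = 8256, 5504

# Assumption 1: every 12th row is a continuous line of AF photosites.
every_12th_row = (height // 12) * width           # ~3.8 million

# Assumption 2: one AF photosite per 12x12 (or 12x18) block.
per_12x12_block = (width // 12) * (height // 12)  # ~315,000
per_12x18_block = (width // 12) * (height // 18)  # ~210,000

print(every_12th_row, per_12x12_block, per_12x18_block)
```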

I would welcome solid information from Sony or Nikon: it would help in understanding how AF accuracy in focus distance, and consequently DoF, can be achieved consistently beyond 500 mm on small parts of the frame. Very useful for birders and more.
Maybe this will help? Until someone tears down the Z8/Z9 and puts the sensor under a microscope, this is the best knowledge we have, based on the Nikon 1 System and the Z5, Z6, and Z7.

https://landingfield.wordpress.com/2021/05/10/decoding-the-slvs-ec-protocol-from-imx410bqt/

I feel like you're chasing something that won't get you anywhere, though - even if you contact the engineers who made these sensors, you likely won't get an answer back.

That said, Olympus moved on in 2016 (E-M1 II), adding vertical line detectors to their sensors, and in 2022 (OM-1) used the quad-Bayer pattern itself, so there is no loss of information from masking - but it is the slowest stacked sensor out there (it is dense at 80 million photosites). The Canon R1 (2024) has also moved on, but in its RGGB pattern one green patch is flipped to the other axis and the rest are oriented the other way.

It will be interesting to know what Nikon does in the future.

--
I like cameras, they're fun.
 
Leonard, you're basically describing 3D-tracking,
I agree - though we are now discussing evolutionary stages rather than current Mirrorless.
which came into its own when the exposure sensor located in the OVF pentaprism/mirror housing
I am satisfied that both spot and centre-weighted metering took place in the pentaprism head assembly, as regards viewfinder operation, right up to the end of the DSLR era.

This was easily provable by shining a bright light into the viewfinder and noting the difference in exposure readout between matrix and the spot / centre-weighted readings.

Matrix metering needed information from several parts of the image area, though I was unable to trace confirmation from Nikon as to whether it used information from the image sensor or the RGB unit.
evolved from a low-resolution shaped pixel device to a high resolution full color array.
The drawings I have retained from this era show that the RGB colour array was in the camera base and that it used light diverted separately from that used for the nearby auto focus detection unit.

I assume, though I have no Nikon confirmation, that Nikon viewfinders were slightly darker than Canon's because Nikon diverted some light before it reached the viewfinder - first for autofocus, then some for the RGB unit, and then some for spot and centre-weighted metering.
The D6, for example, had a 180Kpixel exposure sensor that underpinned its 3D-tracking.
I agree the D6 has a much larger pixel unit in the camera base.
(snipped) and 3) the exposure sensor, which did the pattern recognition tasks of 3D-tracking and of course was full color.
The RGB sensing seems clearly to have taken place on a unit in the camera base. This may well have been where matrix metering also took place.

--
Leonard Shepherd
In lots of ways good photography is similar to learning to play a piano - it takes practice to develop skill in either activity.
 