Some random Z6 III tech musings

My first thought was that only the capacitors for the ADCs are on the "stacked" part
The problem with that is that the parasitic capacitance of the interconnects is a critical design element in a switched-cap ADC, and moving the switched caps away from the switching FETs would add significantly to the parasitic capacitance as well as degrade matching. Matching has to be very precise to maintain 14-bit resolution.
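To put a number on how precise that matching has to be, here's a hedged back-of-envelope sketch (my own illustrative figures, not from any datasheet) of how a small mismatch on the MSB capacitor of a binary-weighted switched-cap array shows up as differential nonlinearity at 14 bits:

```python
# Sketch: DNL contributed by MSB capacitor mismatch in a binary-weighted
# switched-cap converter. Illustrative only; real designs add calibration,
# segmentation, etc.
N = 14  # converter resolution in bits

def msb_dnl_lsb(eps):
    """DNL (in LSBs) of the mid-scale step when the MSB capacitor is off
    by relative error eps: at mid-scale the MSB switches in while all the
    lower-weight caps switch out, so its error appears directly."""
    return eps * 2 ** (N - 1)

# Even a 0.01% (100 ppm) mismatch eats most of an LSB at 14 bits:
print(msb_dnl_lsb(1e-4))   # ~0.82 LSB
# To keep DNL under 0.5 LSB, matching must be better than:
print(0.5 / 2 ** (N - 1))  # ~61 ppm
```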

Edit: the most used architecture for column ADCs is apparently ramp converters, not switched cap converters.
 
So here's what I'm hearing: the "partial stacking" is a bit like having lanes of lanes. You have multiple column-based ADCs feeding each "stacked" pass-along. Nikon uses the terminology of "multiple check-out lanes" (a grocery store analogy!). It's not the same as the Z8/Z9 stacking, nor does it do a separate viewfinder stream.
That makes sense, but I'm guessing that the ADCs are on the stacked ICs. That would allow the use of a silicon process optimized for ADC performance without the compromises required for optimum sensor performance.
My first thought was that only the capacitors for the ADCs are on the "stacked" part, as well as maybe some power-conditioning caps/inductors. That is, my guess is that there are no active semiconductors (no diodes, no transistors) in the stacked bits.

The advantage of this is that you can build up the stacked bits on the otherwise-finished sensor using relatively low-temperature processes that don't run the risk of damaging the silicon part of the sensor. For example, you can just lay down polyimide and aluminum to form the capacitors and the connections to and between them. No need for high-temperature annealing, or ion implantation, or planarization, or any other of those harsh processes that go into fabricating semiconductors. This is probably a lot cheaper to do than actually stacking another die on the sensor; in fact it may be no more difficult than adding the Bayer filter.

Capacitors in a cap-based ADC chew up a lot of area, so moving the caps off of the silicon gives you more room for the active components of the ADCs, and therefore makes it easier to provide more channels of parallel ADC conversion. There may be other advantages too, like enabling less cross-talk between capacitors, providing higher capacitance values, or improving the Q of the circuits.
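To get a feel for how much area the caps really chew up, here's a rough estimate with entirely assumed values (unit-cap size and column count are guesses for illustration, not measurements of any real sensor):

```python
# Hedged area estimate: the capacitor array of a binary-weighted 14-bit
# switched-cap ADC, multiplied across thousands of column ADCs.
UNIT_CAP_UM2 = 0.5        # assumed area of one unit capacitor, in µm²
N_BITS = 14
COLUMNS = 6000            # assumed number of column ADCs

per_adc_um2 = (2 ** N_BITS) * UNIT_CAP_UM2   # binary-weighted: 2^N units
total_mm2 = per_adc_um2 * COLUMNS / 1e6

print(per_adc_um2)   # 8192.0 µm² of caps per converter
print(total_mm2)     # ~49 mm² of caps across the whole array
```

Tens of square millimeters of nothing but capacitors is why moving them off the active silicon is an attractive idea.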

But until I can at least put a stacked sensor under a microscope or saw through one to get a cross section, all I can do is speculate.
Stacking of chips is possible owing to advanced packaging techniques, as you may well be aware. There are many forms of such packaging. In the 90s and early 2000s through-hole vias were introduced: basically, vertical holes are etched in relatively thin passivated wafers, solder is floated onto the top, filling the holes, and the chip is dropped onto a suitable substrate, effectively miniaturizing standard chip-on-board packaging. Since then, and since I left the industry, the accuracy with which these direct pad-to-pad processes can be done has greatly improved, allowing very high-density vertical interconnect across a chip rather than in restricted areas. Solid-state memories stack tens or even hundreds of complex memory chips in this manner using those same low-temperature processes you mention. There's no reason to avoid using chips with many and varied active devices. The only time you want to lay up individual devices in the manner you describe is if there's an absolute need for it, such as when precisely trimmed film resistors are the only thing that will work. A/D houses have long since perfected using fairly ordinary elements fabricated with standard bulk processes to reduce costs and increase density.

Bottom line, you're on the right track but thinking too conservatively. Those stacked sensors are stacking complex processing chips underneath the sensor layer - RAM, digital processing, you name it.
 
My first thought was that only the capacitors for the ADCs are on the "stacked" part
The problem with that is that the parasitic capacitance of the interconnects is a critical design element in a switched-cap ADC, and moving the switched caps away from the switching FETs would add significantly to the parasitic capacitance
I think that depends on a few things: how much farther away would the caps be if not in the same layer as the FETs, what's the spacing of the interconnects, what's the k of the insulating material, and so on. Plus you might be able to use larger-valued caps under my hypothetical configuration, so parasitics may be less of an issue.
as well as degrade matching. Matching has to be very precise to maintain 14-bit resolution.
If you want the output to be linear, sure. OTOH, if you're satisfied with the output merely being monotonic (for which you can compensate later, such as by using a calibration LUT) then you don't need to be as precise. Plus it isn't clear to me that metal-polyimide-metal caps would be any less well-matched than on-silicon MIM caps, and you can probably laser-trim metal-polyimide-metal caps much more easily than you could MIM caps, although I have no idea about the economics of that.
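The LUT idea is easy to sketch. This is a toy model (an 8-bit converter with a made-up nonlinear curve), not any camera's actual calibration path: characterize the monotonic-but-nonlinear transfer once, then invert it with a table lookup at readout time.

```python
import numpy as np

# Toy 8-bit converter that is monotonic but nonlinear (made-up curve).
codes = np.arange(256)
measured = 255.0 * (codes / 255.0) ** 1.1   # what each code really means

# One-time characterization -> code-to-corrected-value LUT.
lut = np.round(measured).astype(np.int32)

# Monotonicity is what makes this correction trivially well-defined:
assert np.all(np.diff(lut) >= 0)

corrected = lut[142]   # apply the LUT to a raw code at readout time
```

The point is that the converter's step sizes can be uneven without losing information, as long as no step ever goes backwards.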

It's a big design space, as I learned as part of a process development team at Intel many years ago. A lot can go wrong, but a lot of new possibilities may arise as well. I'm always reluctant to say what can't be done: who would have believed you could build the most powerful rocket ever out of good-ol' stainless steel and have it be reusable too?
 
Bottom line, you're on the right track but thinking too conservatively. Those stacked sensors are stacking complex processing chips underneath the sensor layer - RAM, digital processing, you name it.
One reason I'm guessing that the "stacks" are built-up in-situ, as opposed to being separately-fabbed chips, is the aspect ratio they have in the photos. From the work I've done related to singulating die from wafer and handling them after, that extremely long thin aspect ratio looks like a yield-killer. No stacked, MCM, or CoC product I've ever seen or seen described used die with aspect ratios anywhere near that high.

So I'd be a little surprised if there's actually separately-fabbed die there, and accordingly thought about what you might be able to build up in-situ on an otherwise-complete sensor.

"Fully" stacked chips are a whole different process. I think I read that at least Sony does the "stacking" a whole wafer at a time and then singulates the stacked wafers into stacked sensors. That obviously cannot be done easily on a partly-stacked device, unless you want to through away all the silicon on the stacked-wafer that's over the non-stacked area of the sensor. That's my thinking anyway.
 
Bottom line, you're on the right track but thinking too conservatively. Those stacked sensors are stacking complex processing chips underneath the sensor layer - RAM, digital processing, you name it.
One reason I'm guessing that the "stacks" are built-up in-situ, as opposed to being separately-fabbed chips, is the aspect ratio they have in the photos. From the work I've done related to singulating die from wafer and handling them after, that extremely long thin aspect ratio looks like a yield-killer. No stacked, MCM, or CoC product I've ever seen or seen described used die with aspect ratios anywhere near that high.

So I'd be a little surprised if there's actually separately-fabbed die there, and accordingly thought about what you might be able to build up in-situ on an otherwise-complete sensor.

"Fully" stacked chips are a whole different process. I think I read that at least Sony does the "stacking" a whole wafer at a time and then singulates the stacked wafers into stacked sensors. That obviously cannot be done easily on a partly-stacked device, unless you want to through away all the silicon on the stacked-wafer that's over the non-stacked area of the sensor. That's my thinking anyway.
Who says those long skinny stacks aren't multiples of more modest aspect ratio chips all lined up? You have more current knowledge of the state of the art than I do (I retired in 2009), but precision pick-and-place at the micron or tens of microns level can do amazing things.

And back in the 90s fabs were already creating superthin silicon on detachable substrates to minimize stack height and improve thermal performance. I would suspect that Sony's tool vendors have solved the potato-chipping problem and are applying thinned wafers to the stacking method you mentioned. It would clearly be one way of reducing the cost of stacking.

My overriding point is that I highly doubt that these stacks include deposited-film devices as a core element - they would be using highly-integrated devices on a process with inherently tight matching characteristics, which would also be much cheaper than deposited-film devices. Again, technology may have moved faster than I suspect in 15 years.
 
Who says those long skinny stacks aren't multiples of more modest aspect ratio chips all lined up? .... precision pick-and-place at the micron or tens of microns level can do amazing things.
You could be right. I'm looking forward to finding out. Someone somewhere must be willing to buy a Z6iii, pull out the sensor, and dissect it for YouTube.
And back in the 90s fabs were already creating superthin silicon on detachable substrates to minimize stack height and improve thermal performance.
I've seen a silicon wafer polished so thin it was transparent and flexible. Wild stuff.
Again, technology may have moved faster than I suspect in 15 years.
Yep. I'm a patent attorney and deal with semiconductor inventions sometimes; it's amazing what people are doing. Of course, I can't talk about work, but anyone who wants to can search the USPTO database if they want to learn some cool stuff.*

*always with the understanding that the USPTO doesn't require anyone to prove that their invention actually works, unless it's anti-grav or perpetual motion.
 
Bottom line, you're on the right track but thinking too conservatively. Those stacked sensors are stacking complex processing chips underneath the sensor layer - RAM, digital processing, you name it.
One reason I'm guessing that the "stacks" are built-up in-situ, as opposed to being separately-fabbed chips, is the aspect ratio they have in the photos. From the work I've done related to singulating die from wafer and handling them after, that extremely long thin aspect ratio looks like a yield-killer. No stacked, MCM, or CoC product I've ever seen or seen described used die with aspect ratios anywhere near that high.
To clarify, the yield issue with these aspect ratios isn't yield as in a higher number of bad die, it's yield in terms of fewer chips per wafer, due to a lot more area wasted in the scribe regions. And it rarely makes sense from a chip design perspective. But it can be done if needed. I have seen specialty chips with high aspect ratios.
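The scribe-area argument can be roughed out numerically. In this sketch the scribe width and edge-loss factor are assumptions, not foundry numbers; it compares two die of equal area, one square and one at a 36:1 aspect ratio, on the same wafer:

```python
import math

SCRIBE_MM = 0.1     # assumed scribe-lane width
EDGE_LOSS = 0.9     # crude fudge factor for partial die at the wafer edge

def gross_die(wafer_diam_mm, die_w_mm, die_h_mm):
    """Very rough gross die per wafer: usable area / (die + scribe) area.
    Ignores real reticle/stepper layout, so treat as order-of-magnitude."""
    pitch_area = (die_w_mm + SCRIBE_MM) * (die_h_mm + SCRIBE_MM)
    usable = math.pi * (wafer_diam_mm / 2) ** 2 * EDGE_LOSS
    return int(usable // pitch_area)

square = gross_die(300, 6.0, 6.0)    # 36 mm², 1:1 aspect ratio
skinny = gross_die(300, 36.0, 1.0)   # 36 mm², 36:1 aspect ratio
# Same die area, but the skinny die pays for far more scribe perimeter:
print(square, skinny)   # skinny < square
```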
So I'd be a little surprised if there's actually separately-fabbed die there, and accordingly thought about what you might be able to build up in-situ on an otherwise-complete sensor.
I'm betting on separate die, because that is a clear path to higher performance per my first comment.

The "partial stacking might also be a chiplet design instead of a true die stacking.

Building up extra layers on the surface of the die can be quite expensive. The company I design chips for has such a process to create tiny transformers on top of the die. It works great but cost can be an issue.
"Fully" stacked chips are a whole different process. I think I read that at least Sony does the "stacking" a whole wafer at a time and then singulates the stacked wafers into stacked sensors. That obviously cannot be done easily on a partly-stacked device, unless you want to through away all the silicon on the stacked-wafer that's over the non-stacked area of the sensor. That's my thinking anyway.
That's true, but chip stacking with all sorts of die sizes is done, with various interconnects, including wirebond, flip-chip, and through silicon vias. The wafer bonding technique with through silicon vias gives the highest performance and highest interconnect density, but is also the most expensive.

I'll add to my bet that these are flip chip die with the ADCs and associated analog circuitry, in an analog optimized CMOS process.
 
mosswings wrote:
Bottom line, you're on the right track but thinking too conservatively. Those stacked sensors are stacking complex processing chips underneath the sensor layer - RAM, digital processing, you name it.
That's true, but the Z6iii sensor photos show two narrow strips along the top and bottom, not big enough for much processing or RAM. It is apparently not stacked in a wafer bonding technique. But what goes on in these regions top and bottom? Primarily ADC, which is also the performance bottleneck. That's why I suspect that those are two small stacked die, connected by flip chip or by through silicon vias, whose function is primarily high speed ADC, and fabricated in a process optimum for that purpose.
 
mosswings wrote:
Bottom line, you're on the right track but thinking too conservatively. Those stacked sensors are stacking complex processing chips underneath the sensor layer - RAM, digital processing, you name it.
That's true, but the Z6iii sensor photos show two narrow strips along the top and bottom, not big enough for much processing or RAM. It is apparently not stacked in a wafer bonding technique. But what goes on in these regions top and bottom? Primarily ADC, which is also the performance bottleneck. That's why I suspect that those are two small stacked die, connected by flip chip or by through silicon vias, whose function is primarily high speed ADC, and fabricated in a process optimum for that purpose.
Poor wording on my part. In that quote I was referring to the Z8/9 full-stack. The Z6iii's edge-stacked chips are doing something more directly related to low-level output. Your speculation that they're ADCs is a good possibility, though again I begin to worry that you don't get the matching benefits of same-die processing when you do that. But until a chip-teardown analysis company (I now forget the name of the pre-eminent one) does their thing, we won't know.
 
I would be surprised if Nikon/Sony actually went with faster ADCs - they're likely doing what Canon does on the R5/R6 - existing (slow) ADCs running in parallel. Canon runs 8 ADCs on the R5/R6 vs 12 ADCs on the Sony/Nikon fully-stacked sensors.
Exactly. If the descriptions I've been given of how the partial stack is created are accurate, it's post-ADC and deals with multiple columns in parallel.
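The "multiple check-out lanes" idea is easy to put numbers on. A hedged sketch with made-up figures (none of these are Nikon's or Canon's actual specs):

```python
# Back-of-envelope: rolling readout time for a column-parallel sensor.
ROWS = 4000                 # assumed ~24 MP sensor height, in rows
ROW_CONV_US = 2.0           # assumed per-row ADC conversion time, in µs

def readout_ms(parallel_banks):
    """Doubling the parallel ADC banks halves the effective row time,
    so the same slow ADCs read the frame out proportionally faster."""
    return ROWS * (ROW_CONV_US / parallel_banks) / 1000.0

print(readout_ms(1))   # 8.0 ms with one bank of column ADCs
print(readout_ms(2))   # 4.0 ms with two banks running in parallel
```

This is exactly the "existing slow ADCs in parallel" approach: no faster converter needed, just more lanes.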
 
Not a fan of him in general but his AF tests are some of the best available on YT. He just posted his Z6 III AF test results, which he concludes with:

Nikon really knocked it out of the park here. They finally got to the place where I've been saying the AF has needed to be since they went to the Z6 and Z6 II.
I'm not going to comment on his autofocus statements, but if you look at the ball coming at Bryce Harper's bat on the home run sequence, you can clearly see the rolling shutter impact (at 1/2000 electronic, which is what he was using).
 
Not a fan of him in general but his AF tests are some of the best available on YT. He just posted his Z6 III AF test results, which he concludes with:

Nikon really knocked it out of the park here. They finally got to the place where I've been saying the AF has needed to be since they went to the Z6 and Z6 II.
I'm not going to comment on his autofocus statements, but if you look at the ball coming at Bryce Harper's bat on the home run sequence, you can clearly see the rolling shutter impact (at 1/2000 electronic, which is what he was using).
Yep

 

Attachment: 4427539.jpg (231.5 KB)
How much of that might be ball-impact distortion, given an impact speed of around 120 mph? The bat didn't appear distorted in the shots.
 
Lol, duh. I should get up in the morning before posting.
 
Not a fan of him in general but his AF tests are some of the best available on YT. He just posted his Z6 III AF test results, which he concludes with:

Nikon really knocked it out of the park here. They finally got to the place where I've been saying the AF has needed to be since they went to the Z6 and Z6 II.
Thanks for recommending this video: I would never have watched it otherwise.

The thing with Nikon AF picking up a textured background over a foreground subject despite having the AF cursor on the foreground subject is something I've been experiencing since the D810 days (when I started using Nikon), and I've seen it on the D500, D850, and now the Z series. I wonder if there's something intrinsic to Nikon's AF design that makes them like a textured background.

The other night I was shooting a concert with a Z9 and Plena, and had a Wide-L box with face detect on my brightly lit subject and the camera instead chose to focus on the dim acoustical tilework behind her until I let go of AF-On, and pressed it again.

What he says about the f/1.2 lenses' slower AF speed also matches with my experience. I'd throw the Plena in there as well: it is sluggish compared to the f/1.8s and the better zooms. The best focusing lenses I have tend to be the 24-120, and the 24-70/2.8.
 
Not a fan of him in general but his AF tests are some of the best available on YT. He just posted his Z6 III AF test results, which he concludes with:

Nikon really knocked it out of the park here. They finally got to the place where I've been saying the AF has needed to be since they went to the Z6 and Z6 II.
Thanks for posting the link.

This is good news indeed. It looks like Nikon may have now gotten AF, especially Eye AF, right, and I may be able to "return" to my favorite camera brand.
 
Thank you. This video clearly shows some important AF weaknesses:

- The AF looks too slow to follow even human speeds: a slowly running player (~15 km/h) and a little girl (~3 km/h).

- It also takes too long to acquire the initial focus on a full body and face.

It looks very far from the Z9's performance in similar situations. Really disappointing, as shown in this video. Let's hope production units will be improved. (The Z9 took 4 firmware releases and 2 years to come up to par with its specs and marketing...)
 
  • The 4K/120p has a DX crop. Based on Gerald Undone's 9.47 ms rolling-shutter measurement for 6K/24p, Nikon could have achieved 4K/120p with just a 1.15x crop (9.47 / 1.15 ≈ 8.23 ms ≈ 1/121), comparable to the A7S III's 4K/120p 1.1x crop. I'm guessing Nikon didn't want to go through the trouble of adding additional crop-factor support in the firmware.
  • Richard Butler measured a 14-bit full-sensor stills-mode readout time of 14.6 ms (1/68.5). It would be nice if Nikon hadn't dropped 12-bit raw stills support in Expeed 7, as that would allow the Z6 III's electronic shutter to run at 1/105, which I base on the 9.47 ms rolling shutter for 6K video, which is 12-bit. Keeping 12-bit raw support would've further reduced rolling-shutter artifacts and also increased the electronic-shutter flash x-sync from 1/60.
  • Steve Perry does a great job demonstrating the Z6 III's blackout for the various frame rates in his YouTube video (link jumps to 8:55)
  • I was hoping Nikon would move away from their video-based pre-capture implementation on the Z8/Z9 to a stills-based implementation for the Z6 III, so that it would get raw support, but that probably won't come until the next spin of Expeed and will probably require Nikon to be more generous with the DRAM buffer size. The 6K/60p full-sensor readout speed at least offers full-sized JPEGs for the Z6 III's C60 pre-capture.
  • 3D LUT support would've been great instead of the "Flexible Color" feature Nikon added, although perhaps "Flexible Color" will turn out to be more useful than expected depending on how much flexibility Nikon put into the mechanism.
  • The 6K fully-sampled internal NRAW is new to the Z line and quite useful for those who don't want to shoot 8K NRAW to get full-sensor NRAW sampling like on the Z8/Z9.
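The rolling-shutter arithmetic in the bullets above can be sketched directly. The 9.47 ms and 14.6 ms figures are the measurements cited in the thread; the only added assumption is that scan time shrinks in proportion to the vertical crop:

```python
# Rolling-shutter arithmetic from the bullets above.
readout_6k_ms = 9.47      # Gerald Undone's 6K/24p (12-bit) measurement
readout_14bit_ms = 14.6   # Richard Butler's 14-bit stills measurement

def as_shutter(ms):
    """Express a scan time in ms as an equivalent 1/x shutter speed."""
    return 1000.0 / ms

print(as_shutter(readout_6k_ms))      # ~105.6 -> the "1/105" figure
print(as_shutter(readout_14bit_ms))   # ~68.5  -> the "1/68.5" figure

# A 1.15x vertical crop reads ~1/1.15 as many rows, so (assuming the
# per-row time is unchanged) the scan time shrinks by the same factor:
crop = 1.15
print(readout_6k_ms / crop)           # ~8.23 ms -> ~1/121, enough for 120p
```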
Regarding the 12-bit vs 14-bit, there’s a lot we don’t know, including whether or not the sensor switches to 12-bit mode or 14-bit mode automatically when certain conditions are met (e.g., if ISO is 400 or higher), in order to optimize things. Not saying this happens: just pointing out that we don’t know whether it does or doesn’t. And it would make a lot of sense for Nikon to do this.

Remember that a 14-bit sensor read and a (lossless) compressed 14-bit raw file are two different things—and a 14-bit lossless file can always contain a 12-bit sensor read with no loss in quality and with no increase in file size. And in many common scenarios—particularly those requiring speed—12-bit will offer no image quality loss relative to 14-bit (example: low light /high ISO shooting), but it could improve rolling shutter, buffer, speed, etc. We also know that Nikon’s now using intoPix TicoRaw on the Expeed 7; and that the Zf, which uses the same sensor as the Z6 and Z6ii doesn’t allow for a user-selectable 12-bit sensor readout (except video).
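The "no increase in file size" point can be illustrated with a generic lossless compressor. This uses zlib as a stand-in (it is not TicoRaw, and the data is synthetic random noise, which is roughly the worst case for compression):

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
# A 12-bit and a 14-bit "sensor read", both stored in 16-bit containers.
reads12 = rng.integers(0, 2 ** 12, n, dtype=np.uint16)
reads14 = rng.integers(0, 2 ** 14, n, dtype=np.uint16)

size12 = len(zlib.compress(reads12.tobytes()))
size14 = len(zlib.compress(reads14.tobytes()))

# The always-zero upper bits of the 12-bit read cost nothing after
# lossless compression, so the 12-bit file is smaller despite sitting
# in the same container width:
print(size12 < size14)   # True
```

Same container, fewer significant bits, smaller compressed file: exactly why a 12-bit read inside a 14-bit lossless format is free.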

I’d be interested to see if the sensor read speeds are the same in all circumstances. I’d also be interested in whether you could clarify the shooting conditions on your excellent sensor-speed site instead of just “stills”. ;)
 
  • The 4K/120p has a DX crop. Based on Gerald Undone's 9.47 ms rolling-shutter measurement for 6K/24p, Nikon could have achieved 4K/120p with just a 1.15x crop (9.47 / 1.15 ≈ 8.23 ms ≈ 1/121), comparable to the A7S III's 4K/120p 1.1x crop. I'm guessing Nikon didn't want to go through the trouble of adding additional crop-factor support in the firmware.
  • Richard Butler measured a 14-bit full-sensor stills-mode readout time of 14.6 ms (1/68.5). It would be nice if Nikon hadn't dropped 12-bit raw stills support in Expeed 7, as that would allow the Z6 III's electronic shutter to run at 1/105, which I base on the 9.47 ms rolling shutter for 6K video, which is 12-bit. Keeping 12-bit raw support would've further reduced rolling-shutter artifacts and also increased the electronic-shutter flash x-sync from 1/60.
  • Steve Perry does a great job demonstrating the Z6 III's blackout for the various frame rates in his YouTube video (link jumps to 8:55)
  • I was hoping Nikon would move away from their video-based pre-capture implementation on the Z8/Z9 to a stills-based implementation for the Z6 III, so that it would get raw support, but that probably won't come until the next spin of Expeed and will probably require Nikon to be more generous with the DRAM buffer size. The 6K/60p full-sensor readout speed at least offers full-sized JPEGs for the Z6 III's C60 pre-capture.
  • 3D LUT support would've been great instead of the "Flexible Color" feature Nikon added, although perhaps "Flexible Color" will turn out to be more useful than expected depending on how much flexibility Nikon put into the mechanism.
  • The 6K fully-sampled internal NRAW is new to the Z line and quite useful for those who don't want to shoot 8K NRAW to get full-sensor NRAW sampling like on the Z8/Z9.
Regarding the 12-bit vs 14-bit, there’s a lot we don’t know, including whether or not the sensor switches to 12-bit mode or 14-bit mode automatically when certain conditions are met (e.g., if ISO is 400 or higher), in order to optimize things. Not saying this happens: just pointing out that we don’t know whether it does or doesn’t. And it would make a lot of sense for Nikon to do this.

Remember that a 14-bit sensor read and a (lossless) compressed 14-bit raw file are two different things—and a 14-bit lossless file can always contain a 12-bit sensor read with no loss in quality and with no increase in file size. And in many common scenarios—particularly those requiring speed—12-bit will offer no image quality loss relative to 14-bit (example: low light /high ISO shooting). We also know that Nikon’s now using intoPix TicoRaw on the Expeed 7; and that the Zf, which uses the same sensor as the Z6 and Z6ii doesn’t allow for a user-selectable 12-bit sensor readout (except video).
As you indicated, Nikon seems to have moved away from supporting 12-bit sensor readout for stills, even on slower readout sensors that would benefit from the faster readout with no loss of quality starting at intermediate ISOs. That's why I doubt the Z6 III has 12-bit raw support.
I’d be interested to see if the sensor read speeds are the same in all circumstances. I’d also be interested in if you could clarify the shooting conditions on your excellent sensor speed site instead of just “stills”. ;)
If you click on a given model in the live results table you'll arrive at a detailed table showing all the permutations of stills configurations that were measured, including single-shot, continuous (at various speeds), ISOs, JPEG, etc...
 
