The current generation of FF stacked sensors supports full-sampled readout times of <= 5ms (1/200s or faster) in photo mode, yet their video rolling-shutter performance is several times worse.
For example, Jim measured the Z9's stills-mode full-sensor readout at 1/270s (3.7ms), yet both its 8K video rolling shutter and its oversampled 4K rolling shutter measure 14.5ms (per CineD).
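Taking the ratio of those two measurements, the video scan is running roughly 4x slower than the stills scan (14.5ms / 3.7ms ≈ 3.9).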
I understand there are likely processing bottlenecks in the video imaging pipeline, but I've always assumed all processing is done on a fully-formed frame buffer, in which case those bottlenecks shouldn't require any back-pressure on the sensor readout stage for buffer flow management. Stills mode has a similar imaging pipeline (demosaicing, WB scaling, lens corrections, NR, picture controls, etc., minus the oversampling and H.264/H.265 encoding of video), yet it maintains the full readout rate of the sensor. Lastly, video doesn't require/use the full sampling depth of the sensor - it uses 12-bit sampling at most, and perhaps even 10-bit for the non-raw video modes - which should at least partially offset the higher continuous frame-rate demands of video vs stills.
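To put rough numbers on that last point (back-of-envelope on my part, assuming the Z9's 8256 x 5504 sensor, 14-bit stills readout, 12-bit video readout, 20fps raw bursts, and 8K/30p video):

Stills: 8256 x 5504 x 14 bits ≈ 80 MB/frame, x 20 fps ≈ 1.6 GB/s sustained off the sensor
Video: 8256 x 5504 x 12 bits ≈ 68 MB/frame, x 30 fps ≈ 2.0 GB/s sustained off the sensor

So the sustained data rates are in the same ballpark, which is part of why I'd expect video to be able to run the sensor at, or close to, its stills readout speed.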
A few possibilities come to mind.
One idea (and perhaps the most likely) is aggregate internal bus bandwidth limitations. The bandwidth demand from all the bus round-trips of the video processing pipeline stages is huge and perhaps doesn't leave enough headroom to run the sensor at its native readout speed.
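Purely as an illustration (the number of passes is my assumption, not anything published): if each of, say, four pipeline stages reads and writes the full frame to SDRAM, that's eight frame-sized transfers per frame period:

68 MB/frame x 8 transfers x 30 fps ≈ 16 GB/s of memory traffic

and that's before counting the downscale for oversampled 4K, encoder access, and the EVF/LCD display feed.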
Another idea is that the video pipeline is unique in that it processes data directly off the sensor instead of from a frame buffer, which would necessitate slowing down the sensor readout to rate-match the downstream processing. But this seems exceedingly unlikely considering the multiple passes required in the full video pipeline, meaning the data has to be deposited into SDRAM anyway to accommodate them.
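Just to spell out the two dataflow topologies I'm contrasting (my own simplification, not anything documented for these cameras):

A) Buffered: sensor -> full readout -> SDRAM frame buffer -> ISP stages -> encoder. The sensor runs at its native speed; downstream bottlenecks only add buffer latency.
B) Streaming: sensor -> line buffers -> ISP stages -> encoder. Any stage slower than the readout forces back-pressure, i.e. a slower sensor scan.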
Another idea is that the full sensor readout speed isn't available for video due to the multi-row readout scheme used in these sensors, i.e. Jim found that both the Sony A9 and Nikon Z9 appear to do 12-row parallel readout (and, by implication, parallel ADC). Perhaps that scheme isn't suitable for how data enters the video pipeline... although I can't think of a reason why that would be the case.
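For reference (my arithmetic, assuming ~5504 active rows and 12-row groups): 5504 / 12 ≈ 459 groups, so the 3.7ms stills readout works out to ~8µs per group, while the 14.5ms video readout would imply ~32µs per group if the same grouping were used - roughly 4x more time per group rather than an obviously different readout structure.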
Another idea is that the sensor readout is slowed for power consumption reasons.
By comparison, the Sony A7S III has a non-stacked BSI sensor that achieves a full-sampled video rolling shutter of 8.7ms in 4K (per CineD). The A7S III has a 12MP sensor, so naturally the data load is much lower for both sensor readout and processing, but again, it's not clear that system bandwidth is the gating factor for how fast the camera configures the sensor readout for video.
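Putting rough numbers on that comparison (my arithmetic, assuming 12-bit readout for both): the A7S III moves ~12MP x 12 bits ≈ 18 MB in 8.7ms ≈ 2.1 GB/s during the scan, while the Z9 moves ~45.4MP x 12 bits ≈ 68 MB in 14.5ms ≈ 4.7 GB/s - so even with its slower rolling shutter, the Z9 is already pulling data off the sensor faster than the A7S III does in video mode.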
Ideas?