More precise sensor readout measurement

Something that's always interested me is how cameras implement LV exposure simulation. Initially we might assume the camera simply configures the sensor to the shooting parameters (shutter speed, ISO) to yield the brightness/exposure simulation in Live View. But that has downsides - in fast shutter speed / high ISO situations the LV preview will have lots of noise, and at slow shutter speeds the preview will stutter. In fact some cameras, such as Panasonic's, implement it this way, and it doesn't work well.

Here is a glimpse of how Nikon does it on their Z cameras, using the Z8 as an example. I first took photos of my 500Hz Arduino LED to establish what the bands look like at various shutter speeds. Note that at very fast shutter speeds (ex: 1/8000) the bands are very distinct, with a near 50% duty cycle of light/dark, since each row's exposure falls entirely within one half of the light cycle, either ON or OFF. As the shutter speed is decreased, the black portion of each band gets smaller - the result of sensor rows capturing both the light and dark parts of each LED cycle, as a function of the exposures not being aligned with the LED cycle.
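
To make the geometry concrete, here's a minimal simulation of the effect - my own sketch, with an assumed per-row readout offset rather than anything measured from the Z8. Each row integrates the 500Hz square wave over its own exposure window, and the per-row brightness reproduces the crisp ~50% bands at 1/8000 and the shrinking dark portion at slower speeds:

```python
# Rolling-shutter banding under a 500Hz LED with a 50% duty cycle.
# ROW_STAGGER and the row count are illustrative, not measured Z8 values.
LED_HZ = 500.0
PERIOD = 1.0 / LED_HZ          # 2ms cycle: 1ms on, 1ms off
ROW_STAGGER = 20e-6            # hypothetical offset between row exposure starts

def on_fraction(t_start, t_exp, samples=500):
    """Fraction of the exposure window [t_start, t_start + t_exp] the LED is on."""
    hits = sum(((t_start + t_exp * (i + 0.5) / samples) % PERIOD) < PERIOD / 2
               for i in range(samples))
    return hits / samples

for t_exp in (1 / 8000, 1 / 2000, 1 / 1000):
    rows = [on_fraction(r * ROW_STAGGER, t_exp) for r in range(100)]
    # Map brightness 0..1 to a character ramp so the bands show up as text.
    strip = "".join(" .:-=+*#"[min(7, int(f * 8))] for f in rows)
    print(f"1/{round(1 / t_exp):>4}: {strip}")
```

At 1/8000 nearly every row lands fully inside an on or off half-cycle; at 1/1000 every row straddles a transition and the hard black band disappears.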

Then I connected the Z8 to my HDMI recorder and captured what 1/8000 looks like at various ISOs. You'll see that the camera alters the actual sensor shutter speed to simulate the ISO in LV. You can determine what specific shutter speed the camera selected for each simulated ISO by matching the LV screen grab in the bottom half to the shutter speed photos in the top half. Note that this isn't a static relationship - the camera will change the set of shutter speeds it uses to simulate ISO based on the Light Value of the scene, to keep everything centered / reasonable.
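
The arithmetic behind that is plain exposure equivalence: each stop of simulated ISO can be traded for a doubling of the LV integration time at fixed gain. Here's a minimal sketch of such a mapping - the base ISO, starting speed, and frame-rate clamp are my assumptions for illustration, not Nikon's published behavior:

```python
from fractions import Fraction

ISO_BASE = 64                 # assumed base ISO for illustration
T_BASE = Fraction(1, 16000)   # assumed LV integration time at base ISO
T_FRAME = Fraction(1, 60)     # LV refresh period: can't integrate longer than this

def lv_shutter(iso_sim):
    """LV integration time that simulates iso_sim at constant sensor gain."""
    t = T_BASE * Fraction(iso_sim, ISO_BASE)
    return min(t, T_FRAME)    # past this point, the camera must add gain instead

for iso in (64, 200, 800, 3200, 25600):
    t = lv_shutter(iso)
    print(f"ISO {iso:>5} -> 1/{t.denominator // t.numerator}")
```

The Light Value dependence the post describes would amount to sliding T_BASE around so the whole set of speeds stays centered on the scene.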

Click Here for Full-Size Image of Preview Below

I think I don't fully understand.

So, the Nikon example. A displayed shutter speed of 1/8000 and a varying simulated ISO. So not ISO?
I'm changing the ISO on the camera but it's implemented in the LV exposure preview as a change to the effective sensor shutter speed, independent of the shutter speed I have configured on the camera.
Ah, I get ya. Thanks.
We can see the sensor readout is varying with the simulated ISO, but is that really sensor shutter time?
Sensor readout is not varying with the simulated ISO - only shutter speed is. The number/size of the bands is not changing - only the size of the "on" vs "off" portion of each band changes. This is due to each sensor row capturing a partial "on" vs "off" cycle of the LED as a function of the exposure+shutter speed being out of phase with the LED toggling.
I picture the readout as something that would describe the on/off state (I assume no sensor has any other state) changing in the time domain. I think seeing the state in that form would be interesting (i.e. a Y-vs-time version of your LED source images).
The slower the shutter speed, the greater the probability that a sensor row will straddle an LED transition.
Ah, I see - yes, I hadn't really thought about that. I had wondered whether, instead of the PWM frequency you have for the optical source, moving to something say 1000x faster that has a ramp within it would help gain more information about how the camera (not just the sensor) is measuring the light source.
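
For what it's worth, the straddle point has a simple closed form: with a 50% duty cycle at 500Hz there's a transition every 1ms, so an exposure window with uniformly random phase straddles one with probability min(1, t_exp / 1ms). A quick check (my arithmetic, not from the measurements above):

```python
# Probability that a row's exposure window straddles an LED on/off
# transition, for a 500Hz LED at 50% duty (a transition every 1ms).
HALF_PERIOD = 1 / 1000  # seconds between transitions

for t_exp in (1 / 8000, 1 / 4000, 1 / 2000, 1 / 1000):
    p = min(1.0, t_exp / HALF_PERIOD)
    print(f"1/{round(1 / t_exp):>4}: {p:.0%} of rows straddle a transition")
```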
The same band changes are observed in the photos at each shutter speed (top of the composite image I provided).
So to simulate some kind of brightness profile on the display, it's changing the collected light volume by adjusting the sample time of the imaging system.

I suppose a user may wish to configure this, but perhaps that's a feature idea down in the weeds.

I think the R5 is doing something similar (I just pointed it at a monitor).
I once would have assumed it didn't simulate anything, but after pointing it at a big screen at an event that concept falls away.

Is the camera trying to ensure the amplitude from middle grey onwards is representative of the exposure, or something else?
The camera is trying to keep the LV noise down (from too fast a shutter speed) and the motion from stuttering (from too slow a shutter speed).
Understood, but they do that to achieve some goal X. Is that goal supplying some brightness profile?

I remember many discussions about how camera users don't like jerky viewfinders and blackouts, so that makes sense.
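
A toy version of that tradeoff, just to put both failure modes in one place - this is my guess at a policy shape, not Nikon's actual algorithm: use the longest integration time that still fits in a display frame, then make up any remaining brightness with gain, which is where the noise would come from.

```python
def lv_settings(target_exposure, t_frame=1 / 60):
    """Split a target exposure (time * gain product) between integration
    time and gain. Longer time = smoother motion but can't exceed a display
    frame; the leftover brightness comes from gain, which costs noise."""
    t = min(target_exposure, t_frame)   # clamp: never slower than one LV frame
    gain = target_exposure / t          # remainder simulated with gain
    return t, gain

for target in (1 / 8000, 1 / 500, 1 / 30, 1 / 8):
    t, g = lv_settings(target)
    print(f"target 1/{round(1 / target):>4}: integrate 1/{round(1 / t):>4}, gain x{g:g}")
```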

Great post as ever HS.
 
Here's my attempt to demonstrate how the staggered row readouts and shutter speed intersect with the LED on/off cycles to produce the bands seen. Let me know if this helps.

Demonstration of LED Light Cycles vs Sensor Row Readouts
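
For anyone reading without the image, here's a rough text rendering of the same idea - the timing numbers are made up for legibility, not taken from the figure. Each line is one sensor row, '#' marks that row's exposure window on a shared time axis, and the top line shows the LED state:

```python
T_LED = 2000    # us per LED cycle (500Hz): 1000us on, 1000us off
T_EXP = 400     # us exposure per row (assumed)
STAGGER = 150   # us between adjacent row starts (assumed, exaggerated)
STEP = 50       # us per printed character
WIDTH = 72

led = "".join("#" if (c * STEP) % T_LED < T_LED / 2 else "." for c in range(WIDTH))
print("LED   " + led + "   (# = on, . = off)")
for row in range(14):
    start = row * STAGGER
    line = "".join("#" if start <= c * STEP < start + T_EXP else " "
                   for c in range(WIDTH))
    print(f"r{row:02d}   {line}")
```

Rows whose window sits entirely under a '#' stretch of the LED line come out bright, rows under '.' come out dark, and rows whose window crosses a transition land in between - which is exactly the band structure in the photos.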
 
Hi Horshack.

That's it. My brain had it rotated 90°, but that's it, exactly it.

Thank you.
 
Looks pretty standard to me, given the resolution and non-stacked sensor. Who did you compare it with?
If you sort the columns by time, you can compare it with a large number of cameras which show a faster result and which are not stacked.

I should have said I was comparing Photo time.

They are around 24th/25th in that list. An R5, which isn't very rapid, is around 3x quicker.
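
For context on where such a readout number comes from in this kind of test: the LED photos give it directly, because the band spatial period tells you how many rows are read per LED cycle. The arithmetic, with placeholder counts rather than anyone's actual measurement:

```python
LED_HZ = 500
T_CYCLE = 1 / LED_HZ          # seconds per LED cycle

ROWS_PER_BAND_PAIR = 300      # placeholder: rows per on+off band pair, counted from a photo
TOTAL_ROWS = 6000             # placeholder: sensor height in rows

t_row = T_CYCLE / ROWS_PER_BAND_PAIR   # time to read one row
t_readout = t_row * TOTAL_ROWS         # full-frame readout time
print(f"row time {t_row * 1e6:.1f} us, frame readout {t_readout * 1e3:.1f} ms")
```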
 
Z6 and Z7, compared to corresponding Sony cameras, look good, though.

The R5 seems to be using 12-bit readout when using ES (more than one stop less max PDR than mechanical), while the Z7 is probably measured at 14 bits.
 