Thanks Horshack. I have a parallel thread on FM and recently made a post explaining why different shutter speeds affect the appearance of the bands:
When the shutter speed slows relative to the light's cycling frequency, each sensor row captures multiple cycles of light, so the brightness difference between rows that receive one more or one fewer cycle than their neighbors becomes less noticeable. For example, with a light source cycling 1,000 times/sec, a 1/8000 shutter captures only a fraction of one cycle, so some rows land entirely on the lit phase (100%) and others entirely on the dark phase (0%) - a very noticeable 100% difference. Drop the shutter down to 1/250 and some rows get 4 cycles while others get 3 - a much less noticeable 33% difference. This is why bands become less noticeable at slower shutter speeds: they start to blend together and fill in the black gaps since the brightness cost of missing a light cycle becomes smaller and smaller.
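To put rough numbers on the argument above, here's a minimal Python sketch. It assumes (my simplification, not a measurement) that the light pulses 1,000 times/sec and that neighboring bands differ by exactly one captured cycle, with contrast taken relative to the darker band:

```python
# Hedged numeric sketch of the counting argument above. Assumption
# (mine): adjacent bands differ by exactly one captured light pulse,
# and band contrast is that one-pulse difference relative to the
# darker band.
LIGHT_HZ = 1_000  # light cycles per second (from the example above)

def band_contrast(shutter_denom):
    """Approximate band contrast for a 1/shutter_denom exposure."""
    cycles = LIGHT_HZ / shutter_denom  # average light cycles per row
    if cycles <= 1:
        return 1.0                     # some rows all-lit, others all-dark
    bright = round(cycles)             # e.g. 4 cycles at 1/250
    dark = bright - 1                  # neighboring band misses one cycle
    return (bright - dark) / dark      # (4 - 3) / 3 = 33%

for denom in (8000, 1000, 250, 60):
    print(f"1/{denom}s: ~{band_contrast(denom):.0%} band contrast")
```

At 1/8000 this gives the full 100% contrast, at 1/250 the 33% figure from the post, and by 1/60 the bands have washed down to a few percent.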
To make it clearer, I set up my Arduino with two different-color LEDs, alternating between them every millisecond instead of alternating power to a single LED at the same rate:
Hi Horshack. In the per-row section, could you help me to understand? Let me take the R5: it looks like each row takes 3,000ns, or 3ms. The total sensor is less than 16ms. Am I reading this correctly?

I added per-row sensor readout rates, in addition to the existing full-sensor rates. The per-row rates make it easier to compare readout speeds across sensors with different resolutions. You can see the measurements in the detail section of the GitHub Pages site. Here's a direct link:
https://horshack-dpreview.github.io/RollingShutter/#table2
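As an aside, a per-row rate is just the full-sensor readout divided by the row count, which is what makes it comparable across resolutions. The sensor numbers below are rough placeholders for illustration, not the measured values from the linked tables:

```python
# Illustrative sketch of why per-row rates normalize across resolutions.
# Both entries are hypothetical round numbers, not measurements from the
# GitHub Pages tables.
sensors = {
    "hypothetical 45MP body": {"rows": 5464, "full_readout_ms": 16.0},
    "hypothetical 24MP body": {"rows": 4000, "full_readout_ms": 12.0},
}

for name, s in sensors.items():
    # full-sensor time (ms -> us) spread evenly over the row count
    per_row_us = s["full_readout_ms"] * 1_000 / s["rows"]
    print(f"{name}: {per_row_us:.2f} us/row")
```

Two sensors with very different full-sensor times can land at nearly the same microseconds-per-row, which is the comparison the per-row column enables.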
Sure. 1 sec = 1,000ms = 1,000,000us = 1,000,000,000ns, so: 3,000ns = 3us.

I think some cameras read groups of rows in parallel, btw, but I am not sure where concrete data on this would be found.
Thanks Horshack. I think the issue is tiredness - my brain failed:
3,000ns = 3us
I decided to switch the per-row stats from ns to us. I also added a secs version as well (1/x, i.e. shutter-speed notation):
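The unit change described here is just fixed scaling; here's a small sketch of the conversion, using the 3,000ns per-row figure from this exchange:

```python
# Convert a per-row readout time in ns to us and to shutter-speed
# (1/x) notation, as described in the post above.
def per_row_stats(row_ns):
    row_us = row_ns / 1_000            # 1 us = 1,000 ns
    row_s = row_ns / 1_000_000_000     # 1 s = 1,000,000,000 ns
    one_over_x = 1 / row_s             # "1/x" shutter-speed notation
    return row_us, one_over_x

us, x = per_row_stats(3_000)           # the 3,000ns example from the thread
print(f"3,000ns = {us:g}us = 1/{x:,.0f}s")
```

So a 3us row reads out at roughly 1/333,333s in shutter-speed notation.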
On the R5 we use some of the 4K video modes to improve rolling shutter, but the per-row timing is interesting.

Thanks Horshack. That confirms the 12.

Here's a capture of the Z9's multiple concurrent row readout scheme, which Jim has previously measured on the Z9 and other bodies like the A9.
The built-in LED on the Arduino board I'm using isn't bright enough for this capture - the noise averages out the 12-row segments. This image was captured with two discrete LEDs (red + blue) plugged into GPIO pins through a 220 Ohm resistor.
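For intuition on what concurrent row readout does to the full-sensor time, here's a hedged sketch. All three numbers below are hypothetical placeholders, not Z9 measurements - only the idea of 12-row groups comes from the capture discussed above:

```python
# Hedged sketch of multi-row concurrent readout: if rows are read in
# concurrent groups, full-sensor readout scales with the number of
# groups rather than the raw row count. All values are hypothetical.
ROWS = 5400            # hypothetical total row count
ROWS_PER_GROUP = 12    # rows read concurrently, per the 12-row segments
GROUP_TIME_US = 8.0    # hypothetical time to read one group, microseconds

groups = ROWS / ROWS_PER_GROUP
full_readout_ms = groups * GROUP_TIME_US / 1_000
print(f"{groups:.0f} groups -> ~{full_readout_ms:.1f}ms full-sensor readout")
```

With purely sequential readout the same per-group time over 5,400 individual rows would take 12x longer, which is why grouped readout shows up so clearly in these measurements.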
For the R5, the 4K modes: the camera can create a 4:1 binned output from the 8K sensor, or an oversampled one.

All user submissions...
There are a few video measurements missing for the R5/R7 - hope to have those filled in over the next day or so.
https://horshack-dpreview.github.io/RollingShutter/
It's the non-HQ (non-oversampled) mode. I'm working on getting the remainder of the submissions for the R5/R7 video cases.
The first route may give a result showing an apparently faster readout time. The extra mode is called 4K HQ Mode.
Do you know what was used?
Thanks HS.