The shutter lag of the Z7

Out of curiosity: When testing electronic shutter mode this way, can you detect a difference between having the display at the top of the image, versus at the bottom?
Good idea. I don't think it will make a difference but I will check it later on today.
I ran some new tests on this. What I found is that the measured time depends on the counter's position. The variation is huge, about 40ms, but only for the electronic shutter. For the other two modes it hardly matters where the counter is; the difference appears to be less than 2ms.

The images were taken at 1/2000s, but lower shutter speeds, which I also tested, made no difference. 1/2000s was the best speed if I wanted to reshoot all 3 modes at the same shutter speed so they could be compared equally.

Electronic shutter:

EFCS:

Mechanical shutter:

In this test the mechanical shutter shows the most consistent results, and the times are about the same as before, except that the electronic shutter seems to take a very long time. This explains why we can't use a flash with it. Perhaps the electronic shutter is never fully open unless the shutter speed is very long, so flash would not be practical.

It also seems that the electronic shutter "moves" in the opposite direction to the EFCS and the mechanical shutter.

Interesting experiments.
You're seeing different measurements at different locations for the ES because your technique is including the sensor read-out time in your lag calculation.

The electronic shutter is fully open only at shutter speeds equal to or longer than the sensor readout time. So for the Z7 that's probably 1/15 or thereabouts for 14-bit mode and 1/30 for 12-bit, assuming it's about the same as the A7riii. An ES can support flash operation at that effective x-sync speed, but Nikon chose not to support it. Sony supports it, but only for the pixel-shift mode.
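To make the condition above concrete, here is a minimal sketch of when a rolling electronic shutter is "fully open". The readout times are assumptions (A7riii-like figures carried over), not measured Z7 specs:

```python
# A rolling electronic shutter exposes every row simultaneously at some
# instant only if the exposure time is at least as long as the full
# sensor readout time. Readout times below are assumptions, not specs.

def fully_open(exposure_s: float, readout_s: float) -> bool:
    """True if the whole sensor is integrating at once at some point."""
    return exposure_s >= readout_s

READOUT_14BIT = 1 / 15  # ~67 ms, assumed
READOUT_12BIT = 1 / 30  # ~33 ms, assumed

for tv in (1 / 2000, 1 / 60, 1 / 15, 1 / 8):
    print(f"1/{round(1 / tv)} s -> 14-bit: {fully_open(tv, READOUT_14BIT)}, "
          f"12-bit: {fully_open(tv, READOUT_12BIT)}")
```

At 1/2000s the exposure slit is never fully open in either mode, which is consistent with the flash limitation discussed above.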
 
The fact that EFCS can keep up with the mechanical second shutter by itself disproves the notion that there's a difference in reset time between the two cameras, at least for timings up to and including 1/8000, and thus discounts that as the possible source of the EFCS lag difference.
Does not follow.

Are you assuming that the reset is performed with very narrow reset pulse times? If so, do we have documentation showing this?

The only requirement for the reset pulses, in order to track the speed of the mechanical second shutter, is that the end of the reset pulse be timed precisely for each row. That implies nothing about what the duration of the reset pulse is; it could be tens of msec unless we have other data indicating it's much faster.

For a 14-bit imager, accurate reset function to minimize read noise requires around 10-11 time constants for the reset transistor's Rds(on) times the photodiode capacitance. Then there could be other considerations as well, such as Vdd settling time when a large number of photosites are reset simultaneously.
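The 10-11 time constant figure quoted above can be sanity-checked: settling a first-order RC node to within half an LSB at n bits requires the residual error exp(-t/τ) to fall below 2^-(n+1), i.e. t > (n+1)·ln(2) time constants. A quick check (my arithmetic, not a sensor datasheet):

```python
import math

# Settling to < 1/2 LSB at n bits: exp(-t/tau) < 2^-(n+1)
# => t/tau > (n+1) * ln(2)
def taus_needed(bits: int) -> float:
    return (bits + 1) * math.log(2)

print(f"12-bit: {taus_needed(12):.1f} time constants")  # ~9.0
print(f"14-bit: {taus_needed(14):.1f} time constants")  # ~10.4
```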
Consider video, in particular the high-frame rate modes like 120fps that both Sony and Nikon support. Even though the sensor isn't fully sampled in that mode (line skipping likely employed), the set of sensor rows that are sampled have to be reset continuously. At 120fps that means a reset every 8.33ms, which at least establishes a minimum speed for EFCS. Granted, video is likely sampled at 12-bits (or even 10-bits) rather than 14-bits, so it's possible the reset logic is different/faster for video/12-bits, but that can be easily proved/disproved by checking if the EFCS lag is higher for 14-bit stills mode vs 12-bit stills mode.

Then consider 1" Exmor sensors (in the RX10/RX100 cameras), which support video rates of 960fps, again heavily sub-sampled/line-skipped but again the same rows are presumably sampled every frame. That yields a continuous row reset every 1.04ms. Smaller sensor of course vs FF, but the sensor size differential is smaller than the indicated difference in how fast the reset runs on those rows.
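The row-reset intervals implied by those frame rates are simple arithmetic, reproduced here for reference:

```python
# Minimum interval between successive resets of any sampled row,
# for the video frame rates mentioned above.
for fps in (120, 960):
    print(f"{fps} fps -> a reset every {1000 / fps:.2f} ms")
```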
Not trying to be dismissive, but video is a different world and I have little interest in it. Has anyone made DR measurements of video frames?

What would be interesting, though, is the Nikon 1 DR for frames taken at max speed (it will do 60fps at full resolution), compared to single frames. That's something I could have a look at, although it's just a 12-bit sensor.
 
 
What would be interesting, though, is the Nikon 1 DR for frames taken at max speed (it will do 60fps at full resolution), compared to single frames. That's something I could have a look at, although it's just a 12-bit sensor.
Is your intention to check whether DR is reduced by incomplete pixel reset before exposure, due to a short reset time?

BR gusti
 
You're seeing different measurements at different locations for the ES because your technique is including the sensor read-out time in your lag calculation.
Sensor read-out time sets the difference between the top and bottom measurements but doesn't contribute significantly to the image-top delay. From the OP's experiment, it appears to be no more than 50msec.
The electronic shutter reaches fully open at shutter speeds equal to or slower than the sensor readout speed. So for the Z7 that's probably 1/15 or thereabouts for 14-bit mode and 1/30 for 12-bit, assuming it's about the same as the A7riii.
There's another good experiment idea.

OP: Can you try ES with 12-bit?



--
Source credit: Prov 2:6
- Marianne
 
Sensor read-out time sets the difference between the top and bottom measurements but doesn't contribute significantly to the image-top delay. From the OP's experiment, it appears to be no more than 50msec.
50 milliseconds = 1/20 s ≈ the readout time of the sensor, so that corresponds to the top-vs-bottom image measurement.
 
Your interest in video isn't a prerequisite for applying the video reset times I listed to stills as evidence about reset times.
True, but my point is that video reset times do not necessarily imply anything about stills reset times.
And the video vs stills operational differences can be reasonably fleshed out by looking at 12-bit vs 14-bit stills operation for the EFCS x-sync lag measurements.
Comparing 12-bit to 14-bit stills lags is an important experiment, but I wouldn't attempt to stir that data into the same pot as video data/calculations. I think we should be looking at them separately.
Regarding DR for single vs continuous, we know from Jim's work on the A7-series cameras that it drops to 12-bits for continuous shooting, and there is evidence from his A9 tests and patents that Sony has a dual-mode ADC architecture that can run parallel ADCs in either low-noise mode (14-bit high DR) or high-speed mode (12-bit).
That's simple to implement. Just move the DAC counter's clock up two stages for 12-bit conversions.
Those same experiments could be repeated for the Z7.
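The "move the counter's clock up two stages" remark follows from how a single-slope (ramp-compare) column ADC works: conversion time scales with the number of ramp-counter steps, 2^n, so dropping from 14 to 12 bits is a 4x speedup. A toy model of that scaling (a simplification; real column ADCs also pipeline and auto-zero):

```python
# Toy single-slope ADC timing model: a conversion takes 2^bits ramp
# steps, so each bit dropped halves the conversion time.
def ramp_steps(bits: int) -> int:
    return 2 ** bits

speedup = ramp_steps(14) / ramp_steps(12)
print(f"12-bit conversion is {speedup:.0f}x faster than 14-bit")
```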
 
My problem is that you have a commercial product, for which you presumably are paid for, whose details you don't want to fully share but want others to contribute their ideas to. The irony is that your product is built on the back of millions of lines of open source software and hardware and thousands of free man hours of work (arduino).

I don't have any issue with making money from software/hardware products. That's what put a roof over my head. But I have an issue with that product seeking free contribution from others.
Stop accusing me of something you don't know anything about and which are actually totally false accusations.

I am using an ATmega328p for this, which is a Microchip product in case you didn't know. Most people call it the Arduino Uno, since it is easier to say Arduino than Microchip ATmega328p. It is a commercial product, not something a community invented. In fact, the "Uno" boards are just development boards, nothing revolutionary, so anyone can use them. Only the brand name Arduino and the logotype are protected, but I have no interest in using them, so you can relax about that.

I find your accusation very offensive. The "product" is NOT a product until it is offered to customers, and as I said earlier, version 1 is no longer sold. In other words, your accusations are just nonsense, but if you think you have a case then please take it up with Jim, who is the admin here and is participating in the discussions; if my thread breaks the rules he should delete it and give me a warning or ban.

As for "free contributions from others": I asked a simple question of Marianne, which is VERY much normal in my territory. Also, she is an adult, so if she does not like my question, she can speak for herself.

Each and every product I ever made is ENTIRELY based on my own developments and tests; I NEVER used or abused anyone in ANY community. I share as much as I want, not as much as you demand.
 
There's another good experiment idea.

OP: Can you try ES with 12-bit?
Sorry Marianne, I think I am done with testing for a while. Thanks for the ideas.
 
Your interest in video isn't a prerequisite for applying the reset times I listed to stills as evidence of the reset times.
True, but my point is that video reset times do not necessarily imply anything about stills reset times.
Agree, the correlation would have to be proven, likely by the process of elimination through the other experiments proposed. We have opposite predispositions on this - I believe you're looking for evidence that EFCS resets for video operate the same as they do for stills, whereas I'm looking for evidence that they don't.

Right now we have evidence of how fast EFCS can run (very fast based on empirical evidence of video fps) vs a theory without empirical evidence of how slow 14-bit EFCS might have to run.
And the video vs stills operational differences can be reasonably fleshed out by looking at 12-bit vs 14-bit stills operation for the EFCS x-sync lag measurements.
Comparing 12-bit to 14-bit stills lags is an important experiment, but I wouldn't attempt to stir that data into the same pot as video data/calculations. I think we should be looking at them separately.
Regarding DR for single vs continuous, we know from Jim's work on the A7-series cameras that it drops to 12-bits for continuous shooting, and there is evidence from his A9 tests and patents that Sony has a dual-mode ADC architecture that can run parallel ADCs in either low-noise mode (14-bit high DR) or high-speed mode (12-bit).
That's simple to implement. Just move the DAC counter's clock up two stages for 12-bit conversions.
Here's Jim's article on this, which includes a link to the Sony patent:

https://blog.kasson.com/the-last-word/sony-a9-multiple-acd-operations/
 
Horshack wrote:

The electronic shutter reaches fully open at shutter speeds equal to or slower than the sensor readout speed. So for the Z7 that's probably 1/15 or thereabouts for 14-bit mode and 1/30 for 12-bit, assuming it's about the same as the A7riii.
There's another good experiment idea.

OP: Can you try ES with 12-bit?
Doesn't look like the OP wants to continue with this. You or Jim could easily test it by photographing some cycling lighting in 14-bit vs 12-bit and counting the number of light bands. I would do it myself but I won't have a Z7 back for about a week or so.
 
If binary, you probably want to use a Gray code to reduce ambiguity.
I don't think that's necessary; it would be similar to the LED displays, just that anywhere from 0 to 8 of the LEDs would be lit. No mark would be lit longer than necessary, and together they represent the time value, so whatever is caught in the image would decode to the actual time. This is not an issue if each LED is individually controlled, not multiplexed. Maybe I won't even need an external chip; perhaps I can share some of the Arduino outputs with other parts... ;) The more I think about it, the more likely it is that this will be the final solution. After all, I'll only need eight outputs...
There is redundancy in a seven-segment display that helps to resolve ambiguity. There is none in straight binary. You're going to have ambiguous results at every transition that produces a carry.

However, you have hinted at a solution that would also help: strobe the display once per count, and very briefly. That, combined with a high shutter speed, would also reduce ambiguity. Perhaps that's what you had in mind all along.

Jim
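Jim's point about carries can be demonstrated directly. With a plain binary display, the 7 -> 8 transition flips four bits at once, so a frame caught mid-transition could read as any mixture of old and new bits; a reflected Gray code (g = n ^ (n >> 1)) changes exactly one bit per count, bounding any misread to ±1. A quick illustration:

```python
# Binary vs reflected Gray code at the 7 -> 8 boundary.
def gray(n: int) -> int:
    return n ^ (n >> 1)

def bits_changed(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

print(bits_changed(7, 8))              # binary: 4 bits flip (0111 -> 1000)
print(bits_changed(gray(7), gray(8)))  # Gray:   1 bit flips (0100 -> 1100)

# Every adjacent pair of Gray codes differs in exactly one bit:
assert all(bits_changed(gray(n), gray(n + 1)) == 1 for n in range(255))
```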
 
Doesn't look like the OP wants to continue with this. You or Jim could easily test it by photographing some cycling lighting in 14-bit vs 12-bit and counting the number of light bands. I would do it myself but I won't have a Z7 back for about a week or so.
I may do it with the scope if I get some time. I'm also interested in what happens to exposure consistency at 1/2000 second, and the scope is the best way to check on that. But I'm pretty busy right now...

Jim
 
Each and every product I ever made is ENTIRELY based on my own developments and tests; I NEVER used or abused anyone in ANY community. I share as much as I want, not as much as you demand.
The entire arduino platform (both hardware and software development tools) is open source and required thousands of donated man hours to achieve. That's what you're relying on for your product, whether you realize it or not.
 
The only problem is that it requires each LED segment to be controlled individually. Taking a picture with multiplexed segments is not possible unless the image is synchronized with the display, which of course can be done in this case. LCD displays, as you know, can't be controlled as individual segments, so I have to come up with a solution that is simple enough to make and use, and compact enough for the small box. I have an idea for a simple row of 8 LEDs, representing values from 0-255, which is enough for cameras as long as 1ms resolution is all we want, but even this makes my design more complex and requires a different Arduino or an external chip, which I am trying to avoid. So any ideas are welcome.
Any ideas are welcome but only in one direction, incoming?
I don't understand what you are saying.
You're limited in what you're willing to share with others but not limited in what you're willing to allow others to share with you.
No, I share principles and am open about how I solved it (in principle), but I am NOT sharing firmware or schematics. I am discussing ideas, not manufacturing details.

I did NOT ask for more detail than I share. Ideas are welcome, and I have shared a HUGE number of my own ideas on this forum and elsewhere on the Internet during the last 20 years. Yes, 20 years... I started sharing technical ideas publicly on the Internet back in 1999, almost 20 years ago, and continued that tradition from day one when I joined DPR, in all the parts of DPR where I was active: Oly, Canon, Nikon 1, Nikon DX pro, Nikon FX pro and now here. If you missed all that, then it's your problem; do your homework better. What about you? How many usable ideas have you ever shared, to any degree? In fact, even today I have several openly available and totally free products for anyone to use... but not everything. So what's your problem? The fact that I don't want to share firmware to spoon-feed some people who want everything for free? Because that's what it is if I give away ALL the detail. Anyway, my comment was to Marianne, not to you.

I shared a pretty detailed description of how it is working, any engineer can build one based on those details, except the lazy ones, and if somebody wants to know some specific detail I may provide that as well.
Well, when I first asked you conceptually how it works, you just about bit my head off and questioned why I even needed to know, implying that we should just trust all your data without having any idea how it worked. As far as I know, nobody ever asked for schematics or circuit diagrams or any of that.

I, and several others, argued with you about why it was relevant for us to understand what you were actually measuring, and how, and you argued back. So that's how you started when we first asked, and that's certainly how we first formed an impression in this thread.

You did then apparently have a change of heart and explained what you were measuring, but only after starting off in a not very cooperative direction. I have no other comments on the way the thread has gone since then; I was poisoned by that initial interaction and decided not to play any further.
 
The entire arduino platform (both hardware and software development tools) is open source and required thousands of donated man hours to achieve. That's what you're relying on for your product, whether you realize it or not.
In fairness here, nobody who chooses to use arduino as a development platform has any obligation (contractual, ethical, social, moral or otherwise) to make what they develop on that platform open source or open to the public. That's not how the platform works. Same with anyone who develops an app or server on Linux.

Now, if you were hounding the developers of the platform with questions and cajoling the developers of the platform to do things that would benefit you and you were not contributing in any way yourself to the platform, that starts to cross a fuzzy line, but that doesn't seem like what's happening here.

On the other hand, this whole thread got started in a very non-sharing, non-trusting direction and that has certainly poisoned some involved in the conversation.
 
In fairness here, nobody who chooses to use arduino as a development platform has any obligation (contractual, ethical, social, moral or otherwise) to make what they develop on that platform open source or open to the public. That's not how the platform works. Same with anyone who develops an app or server on Linux.
I completely agree (except maybe where GPL licensing might apply such as for Linux). My comments about Arduino were only tangential, to point out the absurdity of going on a public forum and selectively sharing details of a paid product while asking for free engineering ideas about the same product, while using free open-source tools for that product.
On the other hand, this whole thread got started in a very non-sharing, non-trusting direction and that has certainly poisoned some involved in the conversation.
Yep
 
What would be interesting, though, is the Nikon 1's DR for frames taken at maximum speed (it will do 60fps at full resolution), compared to single frames. That's something I could have a look at, although it's only a 12-bit sensor.
Is your intention to check whether DR is smaller because of an incomplete pixel reset before exposure, caused by the short reset time?
That's the idea, but who knows what one might discover along the way?
 
Horshack wrote:

The electronic shutter is fully open at shutter speeds equal to and slower than the sensor readout speed. So for the Z7 that's probably 1/15 or thereabouts for 14-bit mode and 1/30 for 12-bit, assuming it's about the same as the A7riii.
There's another good experiment idea.

OP: Can you try ES with 12-bit?
Doesn't look like the OP wants to continue with this. You or Jim could easily test it by photographing some cycling lighting in 14-bit vs 12-bit and counting the number of light bands. I would do it myself but I won't have a Z7 back for about a week or so.
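The band-counting idea is easy to sanity-check numerically. A rough model (my own sketch; the readout times below are only the A7riii-based guesses quoted above, not measurements): a source flickering at frequency f paints roughly f × t_read band cycles onto a rolling-shutter frame, so a faster 12-bit readout should show proportionally fewer bands.

```python
def expected_bands(flicker_hz, readout_s):
    """Approximate number of light/dark band cycles a flickering source
    paints onto a rolling-shutter frame: one cycle per flicker period
    that elapses during the sensor readout."""
    return flicker_hz * readout_s

# Hypothetical readout times from the quoted guesses:
# ~1/15 s in 14-bit mode, ~1/30 s in 12-bit mode.
bands_14bit = expected_bands(120, 1 / 15)  # 120 Hz flicker (60 Hz mains)
bands_12bit = expected_bands(120, 1 / 30)

print(round(bands_14bit))  # 8 band cycles
print(round(bands_12bit))  # 4 band cycles
```

If the 12-bit frame really does read out twice as fast, the band count should drop by half; an unchanged band count would mean an unchanged scan rate.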
I used my signal generator to set up a high-speed flashing LED for this testing. Taking advantage of its delay burst capability, I could dial in precise delay time between the shutter trigger and the first flash, and also set the number of flashes and their width and period to whatever value I wanted.

The green LED was set up without a lens on the camera, and was positioned to illuminate almost all of the sensor surface. For the first test (electronic shutter) I set the flash frequency to 1 kHz and the flash width to 200usec. I started with just 5 flashes in order to find the delay at which the first line appeared at the very top of the image; that delay is 80-81msec. Shutter speed was at the maximum (1/8000) to obtain the narrowest possible line on the image for each flash of the LED.

I then increased the number of flashes until the full image was covered with lines. The result is that the measured sensor readout time is 51msec. Here is an example image where the first (top) line is at 81msec and the last is at 131msec:

Lines mark milliseconds of elapsed time for ES readout

After that, I set the camera to 12-bit raw and tried to get it to scan faster by going up to CH speed, but I was unable to achieve a faster scan rate; it stayed at 51msec.
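The readout arithmetic can be reproduced in a few lines (a sketch using the delays reported above; note that the 81-131msec span at one line per millisecond covers 51 flash lines, which is where the ~51msec figure comes from):

```python
# Flash lines mark elapsed time since the shutter trigger (1 kHz rate,
# so one line per millisecond). Values from the test above, in msec.
first_line_ms = 81.0   # delay of the line at the very top of the frame
last_line_ms = 131.0   # delay of the line at the very bottom

# The rolling readout sweeps the frame between those two flashes.
readout_ms = last_line_ms - first_line_ms
print(readout_ms)  # 50.0 msec between first and last visible line

# Counting both end lines (inclusive), 51 one-millisecond flash lines fit
# in the frame, consistent with the ~51 msec readout reported above.
line_count = int(last_line_ms - first_line_ms) + 1
print(line_count)  # 51
```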

EFCS

For this mode, the scan speed is dictated by the mechanical rear curtain, which takes only a few msec to transit, so I increased the LED flash rate to 2 kHz (flash pulse width 100usec). Shutter speed was at the maximum 1/2000. In this example, the first LED flash was at 60.5msec after shutter release, so the others are at 61.0, 61.5, 62.0 and 62.5msec (the last two overlap slightly). You can see how the virtual shutter opening grows significantly from the first flash at the image bottom to the last one near the top, due to shutter acceleration. When the opening is narrow at the bottom, its width is not perfectly even, due to slight misalignment of the mechanical rear curtain.



EFCS virtual shutter openings at 0.5msec intervals
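The widening of the virtual slit follows from the rear curtain accelerating while the electronic first curtain has already passed each row. A toy constant-acceleration model (entirely my own assumption; the acceleration value is arbitrary and not measured here) shows why the slit is wider near the end of the curtain's travel:

```python
# Flash times for the EFCS test: 2 kHz rate starting 60.5 msec after release.
flash_times_ms = [60.5 + 0.5 * i for i in range(5)]
print(flash_times_ms)  # [60.5, 61.0, 61.5, 62.0, 62.5]

# Toy model: the rear curtain starts from rest and accelerates uniformly,
# so its speed grows as it crosses the frame. For a fixed exposure time,
# the slit (distance between virtual first curtain and rear curtain)
# therefore widens later in the travel. Exposure 1/2000 s = 0.5 msec.
def slit_width_mm(t_ms, accel_mm_per_ms2=0.4, exposure_ms=0.5):
    """Distance the curtain covers during the exposure window,
    starting the window at time t_ms after the curtain is released."""
    pos = lambda t: 0.5 * accel_mm_per_ms2 * t * t  # s = a*t^2/2 from rest
    return pos(t_ms + exposure_ms) - pos(t_ms)

# Slit early in the travel vs. late in the travel:
print(slit_width_mm(0.5) < slit_width_mm(2.5))  # True: slit widens with time
```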



--
Source credit: Prov 2:6
- Marianne
 
Marianne wrote:

[Marianne's signal-generator test report, quoted in full above: ES readout measured at 51msec (unchanged at 12-bit), plus the EFCS timing images]
Awesome test, Marianne, thanks. It's interesting that the EFCS's start lag (shutter release -> first scan on sensor) is ~20ms lower than the ES's. I assume that has nothing to do with the difference in shutter speed between your tests (1/8000 for ES vs 1/2000 for EFCS), although the possibility gnaws at me :) Also, am I correct in surmising from your writeup that the scan direction is opposite between ES and EFCS, as the OP found as well?
 