My Panasonic S1 II tech thread

There was an open question in Bill's PDR thread about whether the S1 II's impressive PDR results were due to NR (based on FFTs) or to Panasonic doing some type of undocumented DR boost in stills (the S1 II manual says the DR Boost setting is only available in video mode, describing it as fixed to "off" for stills). Bill's PDR measurements were based on the mechanical shutter. To help answer the NR vs HDR question we need electronic shutter results as well: the S1 II's 14-bit readout speed is identical to the Z6 III's, so it can't be doing HDR/dual gain with the electronic shutter, but there was a possibility it was doing it with the mechanical shutter, since readout speed can't be measured for the mechanical shutter.

I just measured the S1 II's noise for the mechanical vs electronic shutter. Here are the results:



The S1 II's ISO 100 noise is significantly lower for the mechanical shutter vs electronic, which supports the theory that the camera is doing some type of HDR/dual-gain readout when using the mechanical shutter. This is further supported by the fact that there is no material noise difference between the mechanical vs electronic shutter at ISO 800, which is the high-conversion-gain point on the sensor and thus doesn't have a second, higher gain available for an HDR merge.
The nature of your questions suggests we are defining DGO differently, or at least the scope of DGO. For the purposes of this thread I'm defining DGO as any use of both LCG and HCG in the production of the output, without implying any specific technique by which that is done. If you read around online, DGO is used pretty loosely in the industry for disparate implementations, so for better or worse I'm following that same tradition. I'd rather not bog this thread down with terminology discussions; nevertheless, when my investigation is complete I'll consider a postscript post where I narrow the definitions of the terms used. Until then, I'll continue using DGO in this manner.
My comments

1. Why would this effect appear only at low gain if this was DGO?
Because with the grafting technique I proposed, HCG data would only be available at the exposures/FWCs corresponding to the ISOs that currently employ HCG.
2. DGO typically uses a higher bit depth than the normal files, as those are combined exposures offset by 2 stops
See terminology comments above. Also, based on your comments in Bill's thread you seem to believe that the rendered output bit depth must match the input bit depth. That's not the case. For example, in your post I just linked to, you didn't believe the S1 II uses 14-bit readout for video since the ProRes Raw it generates is 12-bit. However, my sensor readout measurements have already confirmed the camera is using 14-bit readout for video, including its ProRes Raw output. Bit-depth resampling between input and output is not at all uncommon, and there is no need to inflate the file size beyond what the resulting DR supports.
3. Read noise in all DGO cameras is normally very high; these values are low
See terminology comments above
4. The input-referred noise of this camera is weird and not similar to anything else
No other stills or hybrid camera on the market that I've seen uses DGO techniques. I've only seen it on cinema cameras like the Canon C70 or the Alexa's ALEV sensor.
5. I'm not sure about your comment about the gain. DGO does not read the sensor with low and high gain. It simply sends the information to two circuits and then combines the two exposures in the digital domain.
See terminology comments above
I did not test the camera with the various shutter types but next I would check EFCS vs full mechanical vs electronic to see what results you get
EFCS should yield the same results as mechanical but it's on my list to verify.
Switching between high gain and low gain in real time within an exposure is not something you can do quickly, I believe, but maybe this is what Panasonic is doing. Then again, why only with the mechanical shutter?

Note that in none of those scenarios is the mechanical shutter interesting for anything, which makes me think something else is going on here
Because with an electronic shutter the logic would have to contend with continuous integration of incoming light from the scene while doing the two readouts, whereas that's not an issue with the mechanical shutter. This is also why I believe Panasonic is using a different DR-boosting mechanism for video, since video only supports the electronic shutter.
The only effect of DGO, though, is not to reduce read noise but to expand the overall range
DR is defined from noise floor to saturation, ie RN to FWC. The HCG path of DGO reduces read noise.
Nope, DGO has nothing to do with dual gain

it's based on taking two reads at single and double exposure times, hence it doesn't work well at slow shutter speeds

--
If you like my images I would appreciate it if you follow me on social media
instagram http://instagram.com/interceptor121
My flickr sets http://www.flickr.com/photos/interceptor121/
Youtube channel http://www.youtube.com/interceptor121
Underwater Photo and Video Blog http://interceptor121.com
If you want to get in touch, don't send me a PM; rather, contact me directly at my website/social media
 
"The Canon EOS C300 Mark III introduced a new technology to the Cinema EOS System: a Dual Gain Output (DGO) sensor."

"On the DGO sensor, each pixel is read out at two different amplification levels, one high and one low, and the two read-outs are then combined to make a single image"

Source: https://www.canon-europe.com/pro/stories/dgo-sensor-explained/
 
The white paper, which has now disappeared, showed how it was done: there is a switch to two different exposures, which are then multiplexed in the digital domain

High gain and low gain weren't the way Sony does it

Sony doesn't have DGO, and this is a Sony sensor, as it goes into Nikon cameras too

More insight is required

 
The description from Canon's site explains how it's done. The "DG" literally stands for "dual gain". I'm not sure what you're trying to accomplish here.
 
I just wanted to say thanks for all the unfolding investigation and clarification here. Great work, very illuminating.
 
"The Canon EOS C300 Mark III introduced a new technology to the Cinema EOS System: a Dual Gain Output (DGO) sensor."

"On the DGO sensor, each pixel is read out at two different amplification levels, one high and one low, and the two read-outs are then combined to make a single image"

Source: https://www.canon-europe.com/pro/stories/dgo-sensor-explained/
To clarify.

Dual Conversion Gain (DCG) (see the Aptina DR-Pix Technology White Paper)
is a change in capacitance within the pixel that affects how the charge is converted (hence "conversion") to a voltage.

Other techniques, such as DGO, are outside the pixel and involve performing 2 (or more?) Analog-to-Digital Conversions (ADC), which is when the voltage is digitized.
Thanks Bill. I discussed my sloppy use of DGO in my terminology post earlier in the thread: I'm using it to cover any technique where multiple gains are applied in the production of the final image, irrespective of the specific method used to apply the gain.
 
In my view these are the only possible methods the S1 II could be using to achieve its low ISO noise advantage for stills vs the Z6 III, which presumably uses the same Sony partially-stacked sensor:
  1. Overlapping T1/T2 exposures, with or without varying gain between them, as described in this paper.
  2. Noise Reduction exclusively
  3. Improved sensor read noise
  4. Dual conversion-gain (LCG+HCG) readouts or dual-gain readouts
#1 - The fact this occurs with the mechanical shutter eliminates overlapping exposures because the only way you could achieve that is by either cycling the mechanical shutter twice or by using an exotic hybrid electronic+mechanical shutter where T1 uses the electronic shutter and T2 uses the mechanical. I excluded that hybrid case today by verifying the absence of rolling shutter for the mechanical case.

#2 - I did various visual experiments today and have concluded it would be impossible for the camera to achieve the low noise and detail retention it's exhibiting from noise reduction alone. Here is one visual example of a -10EV ISO 100 exposure, comparing the mechanical vs electronic shutter for detail and noise, processed in ACR with no chroma or luma NR:

Animation: ISO 100 -10EV, Mechanical vs Electronic, 400% Crop

#3 - The fact that the S1 II exhibits almost identical noise performance vs the Z6 III when both are using the electronic shutter should exclude the possibility that the S1 II sensor represents an evolutionary improvement in read noise vs the same sensor line in the Z6 III.

#4 - For now this seems to be the only remaining viable possibility. The argument in its favor is that the noise improvement is limited to the LCG ISOs and that it almost exactly matches my "DGO" EV-grafting simulation. The argument against it is that it represents a readout method that I don't believe Sony has disclosed. Sony has documented various dual-gain readout methods for an electronic rolling shutter (HDR features), but I've not seen one documented that would support two readouts of a single exposure integration, which would be necessary for a mechanical shutter. There's also the still-unsolved mystery of the noise filtering in the FFTs, which based on my research would only be employed for a T1/T2 overlapping exposure [with varying LCG/HCG], which again the mechanical shutter excludes. That would mean the only application for the NR would be a direct reduction of noise independent of any HDR technique, but again, I think that's excluded by my analysis of #2.
 
Today I verified that the Electronic First Curtain Shutter (EFCS) yields the same reduction in low-ISO noise as the fully mechanical shutter.
 
#1 - The fact this occurs with the mechanical shutter eliminates overlapping exposures because the only way you could achieve that is by either cycling the mechanical shutter twice or by using an exotic hybrid electronic+mechanical shutter where T1 uses the electronic shutter and T2 uses the mechanical. I excluded that hybrid case today by verifying the absence of rolling shutter for the mechanical case.
How about an even more exotic electronic + mechanical shutter in which the electronic shutter skips 3/4 of the pixels and can read the frame in 1/280 s? Then this frame is interpolated to full resolution and combined with the mechanical shutter exposure (1/250 s travel time). Rolling shutter would be comparable. Would the FFT results fit with an interpolation masked by combination (weighted averaging?) with a full pixel readout?

For the skipping mode I was thinking at something like this (reading blocks of four pixels that form a Bayer unit): https://sfat.massey.ac.nz/research/centres/crisp/pdfs/2015_IMVIP_67.pdf
 
I too find this topic very interesting - an unsolved mystery. I appreciate you taking your time to work this challenge and look forward to any further discoveries as you move forward.

Thank you!
 
Today I used that same code on the S1 II to see if its mechanical shutter output with presumed DGO matches what my DGO simulation produces for ISO 100+800 electronic shutter output. Here is the result:

Animation: S1 II ISO 100 Electronic vs ISO 100 Mechanical vs ISO 100+800 Electronic DGO Simulation

The above animation shows a 200% crop from three raws, all processed identically in ACR with no NR:
  1. S1 II ISO 100 Electronic shutter raw
  2. S1 II ISO 100 Mechanical shutter raw
  3. S1 II DGO simulation raw, built from the ISO 100 and 800 Electronic shutter raws
The mechanical shutter and DGO simulation look identical.

To further test the DGO theory I applied the same DGO simulation logic to S1 II blackframes, to see if the strange absence of ADU tail noise outliers on the S1 II's mechanical shutter blackframes (mentioned in the OP) could be explained by DGO being applied. Here are the resulting histograms:

Animation: S1 II Histograms, ISO 100 Electronic vs Mechanical vs ISO 100+800 Electronic DGO Simulation

The S1 II DGO simulation blackframe is a very close match to the S1 II ISO 100 mechanical shutter blackframe, further supporting that the S1 II is using DGO for stills.

In case the description of what my DGO simulation is doing is a little murky, here is what the Matlab code looks like; it uses my Matlab/OctaveRawTools library.

Matlab code for DGO Simulation
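
Roughly, the grafting logic boils down to the simplified Octave sketch below. This is only a sketch: it assumes the two raws are already loaded as plain ADU matrices (the actual script uses the Matlab/OctaveRawTools library to read and write the raw files), and it reads the "black_level + 5 stops" graft threshold literally as 2^5 ADU above black.

function dgoSim = dgo_graft_sketch(iso100, iso800)
  % Simplified sketch of the EV-grafting DGO simulation.
  % iso100, iso800: ADU matrices of the same scene shot with the electronic
  % shutter at ISO 100 (LCG) and ISO 800 (HCG).
  blackLevel     = 512;                        % black level is the same at both ISOs
  isoDeltaFactor = 8;                          % ISO 100 -> ISO 800 is 3 EV, ie 8x gain
  graftStops     = 5;                          % graft the lowest 5 EV
  threshold      = blackLevel + 2^graftStops;  % assumed reading of "black_level + 5 stops"

  % Refer the ISO 800 (HCG) data back to ISO 100 (LCG) brightness
  scaledHcg = (iso800 - blackLevel) ./ isoDeltaFactor + blackLevel;

  % Replace the deepest shadows of the ISO 100 raw with the scaled ISO 800 data
  dgoSim = iso100;
  mask = iso100 <= threshold;
  dgoSim(mask) = round(scaledHcg(mask));       % integer rounding for raw-file ADUs
end
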
In the 5 days since posting the "DGO simulation reveals more evidence" post quoted above, I've been conducting many experiments, both to explore possibilities other than DGO and to find ways to disprove the DGO theory.

In the last two days I've focused almost entirely on the S1 II ISO 100 blackframe histogram anomalies, specifically the complete lack of noise outliers outside the core ADU values centered around the black level mean. I was pretty certain when I posted 5 days ago that this was definitive evidence for the DGO theory but I wanted to spend more time exploring other possible explanations for the anomaly. Today I'm rather confident there is no other explanation.

For the remainder of this post I'm going to dive into the anomaly details and explain why I believe it represents the smoking-gun evidence for DGO. I'll be using "DGO" to refer to what is actually DCG (dual conversion gain), but that isn't important - both dual-ADC gain and dual conversion gain involve reading out a single exposure at different gains and merging the results. The details of how the gain/conversion gain is applied for that dual readout are not material to the theory.

Histogram anomaly visualized

First, here is the animation of the blackframe histogram values for ISO 100 electronic vs mechanical vs the DGO simulation using ISO 100+800 blackframes:

Animation: S1 II Histograms, ISO 100 Electronic vs Mechanical vs ISO 100+800 Electronic DGO Simulation

The anomaly is the complete absence of ADU tail values below 503 or above 518 for the ISO 100 mechanical shutter, as compared to the ISO 100 or 800 electronic shutter histograms. This is something I've never seen on another camera before, and I've reviewed my library of blackframes from many cameras to confirm my memory.

The histogram of the ISO 100+800 DGO simulation is almost an identical match to the ISO 100 mechanical shutter, which tells me the camera is not only using DGO but also a similar technique to merge the two readouts together.

How DGO produces the histogram anomaly

Now I'll explain how the camera's DGO (and my DGO simulation) produce the tail-outlier histogram anomaly.

A single exposure is read out using two gains, ISO 100 (LCG) and ISO 800 (HCG). Because the ISO 800 readout is 3EV higher than ISO 100, its ADU values represent an 8x increase. However, keep in mind the black level is the same for both ISOs (512).

After performing the dual readouts of the single exposure, the camera (and my DGO simulation) merges the two readouts. My simulation uses a simple EV-grafting technique, where I replace the lowest 5EVs of the ISO 100 raw with the lowest 5EVs of the ISO 800 raw (Matlab/Octave code). I believe the camera is likely using a different technique - one which may involve some spatial awareness (producing the FFT noise correlation) - but one which scales the ISO 800 data in the same fashion as my simulation. It's the scaling of that data which produces the histogram anomaly.

Scaling of ISO 800 HCG data

Here is the scaling formula my DGO simulation uses, which replaces all ISO 100 (LCG) ADU values that are within the first 5EV of ADU values (ie, black_level + 5 stops):
  • Replaced_ADU = (HCG_ADU_Value - Black_Level)/ISO_Delta_Factor + Black_Level
The ISO_Delta_Factor is 8 (ISO 100 -> 800 is 3EV, or 8x more amplification/gain) and black level is 512, thus simplifying to:
  • Replaced_ADU = (HCG_ADU_Value - 512)/8 + 512
The ISO 800 ADU value has to be divided by 8 because it represents an 8x brightness increase (3EV) over the ISO 100 ADU it's replacing, so this scaling is necessary to keep the resulting brightness the same as the ISO 100 data we're merging into (ie, we're using the lower 5EVs of the ISO 800 raw and all data above 5EV from the ISO 100 raw).

Now let's walk through the formula applied for two sample ADU values from the ISO 800 blackframe: 522 and 517
  • Step 1: Subtract black level (512) from ADU values:
    • 522 - 512 = 10
    • 517 - 512 = 5
  • Step 2: Scale black-subtracted ISO 800 values to ISO 100:
    • 10 / 8 = 1.25
    • 5 / 8 = 0.625
  • Step 3: Add black level back into scaled values:
    • 512 + 1.25 = 513.25
    • 512 + 0.625 = 512.625
  • Step 4: Apply integer rounding to make ADUs suitable for raw file ADUs:
    • 513.25 -> 513
    • 512.625 -> 513
  • Result: ISO 800 ADU values 522,517 become ADU values 513,513 in DGO raw
As the above demonstrates, this scaling of the ISO 800 data has the effect of tightening the spread of values around the mean, reducing the tail outliers by the scaling factor. It avoids posterization due to the precision of the scaling and sufficient noise dithering.
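
The same arithmetic as a quick Octave check:

blackLevel = 512;  isoDeltaFactor = 8;
hcgAdu = [522 517];                                   % sample ISO 800 blackframe ADU values
round((hcgAdu - blackLevel) ./ isoDeltaFactor + blackLevel)
% ans = 513   513  -> a +/-10 ADU spread around black collapses to about 1 ADU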

I'm moving on to investigating video / DR Boost.
 
How about an even more exotic electronic + mechanical shutter in which the electronic shutter skips 3/4 of the pixels and can read the frame in 1/280 s? Then this frame is interpolated to full resolution and combined with the mechanical shutter exposure (1/250 s travel time). Rolling shutter would be comparable. Would the FFT results fit with an interpolation masked by combination (weighted averaging?) with a full pixel readout?

For the skipping mode I was thinking at something like this (reading blocks of four pixels that form a Bayer unit): https://sfat.massey.ac.nz/research/centres/crisp/pdfs/2015_IMVIP_67.pdf
Interesting idea, I like your creative thinking. I think it would be challenging to apply/scale the subsampled data back to the original sample. Btw I started back up on video this afternoon and it's looking like it might reveal lots of secrets that apply to both the video and stills implementation.
 
This is very interesting...the histograms of video blackframes show the same tail-outlier anomaly as the stills. Up to this point I was assuming the video implementation of DR boosting would be different than stills due to the electronic shutter vs mechanical...but I'm seeing clues like this that they're actually the same.

As to why the stills implementation only supports the mechanical shutter, it may very well be because Panasonic considered the rolling shutter too slow for stills photography at 1/29 (34 ms).

If it does turn out the video implementation is the same as stills, then it should reveal all the secrets I didn't have access to in my stills experiments due to the mechanical shutter "hiding" the second readout.

Here are the ProRes Raw blackframe histograms. Note the ADU gaps are due to the 12-bit encoding of ProRes Raw, even though the sensor readout is 14-bit, based on my previous rolling shutter measurements.

Charts: ProRes Raw 5.8K HQ Blackframe Histograms, V-Log ISO 1000 DR Boost Off vs On
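
As an aside on those ADU gaps, here is a toy Octave illustration of why squeezing a 14-bit readout into a 12-bit container leaves every surviving value sitting on a multiple of 4. It assumes a simple linear requantization, which may not be exactly what the ProRes Raw encoder does:

adu14  = 500:530;               % a run of consecutive 14-bit ADU values
adu12  = round(adu14 ./ 4);     % re-quantize to 12-bit precision (drop 2 bits)
back14 = adu12 .* 4;            % what those 12-bit codes represent back at 14-bit scale
unique(back14)                  % 500 504 508 ... -> only every 4th ADU value survives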
 
Now that it seems more likely that the video DR Boost implementation is similar or identical to the DR-boosting behavior of the stills mechanical shutter, I've started visual experiments to reverse-engineer a potential dual-gain readout of the sensor.

Revisiting sensor readout rate

As I measured here, the 6K24p open-gate (5952x3968) readout measurements using my rolling shutter method are as follows:
  • DR Boost Off: 1/66.85 (14.95 ms)
  • DR Boost On: 1/29.31 (34.11 ms)
Those measurements are taken by pulsing an LED at 500 Mhz and counting the number of resulting bands. The bands occur because, as the sensor readout progresses vertically down the frame, it captures the on/off cycles of the LED. The slower the readout, the more bands are captured in the resulting image. The total readout rate is then calculated from the number of bands vs the total number of rows in the image.
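
For reference, the band-count arithmetic amounts to something like the sketch below. The numbers are placeholders, and it assumes the LED is driven at a 50% duty cycle so that one bright band plus one dark band corresponds to one LED period:

ledPeriodMs  = 2;       % placeholder LED on/off period
bandsCounted = 34;      % placeholder: bright + dark bands counted across the frame
readoutMs = (bandsCounted / 2) * ledPeriodMs
% readoutMs = 34 ms, ie ~1/29 s, with these placeholder values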

How can we reverse-engineer potential DGO?

The pulsing LED is great for precisely measuring the rolling shutter, but it lacks any spatial information because it's a single LED, ie the rows across the sensor either see the LED on or off, with no information about the progression or what happens in between.

The theory of the S1 II's DR advantage is that it's using a dual-gain readout, where the sensor performs two full readouts across the sensor - one at lower gain (LCG) and the other at higher gain (HCG) - then combines the two. These two full readouts are why the readout rate is 2x+ longer with DR Boost On (34.11 ms) vs Off (14.95 ms).

But consider what is occurring during these two separate readouts. The first readout is simple - reset the sensor rows, wait the integration time (shutter speed), then read out the sensor rows. But what happens for the second readout? Did the first readout deplete the charge in those sensor rows, as normally occurs on a CMOS sensor, and if so, did the camera use a second, overlapping integration so the rows can accumulate a new charge for the second readout (ie, a T1/T2 DOL HDR implementation)?

Or is the sensor somehow retaining the charge after the first readout and simply reading it out a second time at a different gain? Presumably the photography implementation of this DR boost would have to work this way, since it works with the mechanical shutter - meaning it's not possible to perform a second exposure integration for T1/T2 because the shutter is only cycled once, unless there is some type of hybrid electronic+mechanical shutter being used.

These are the questions that need to be answered to fully understand how the camera/sensor implemented its HDR.

DGO sniffer

To understand what is happening during DR boost's first and second readouts it would be helpful if we could somehow alter the scene while the sensor readouts are occurring so that it changes between the two readouts. We could then analyze the resulting HDR frame generated by the sensor and try to figure out which parts came from the first version of the scene and which came from the second.

This led me to create a new Windows app today that I'm calling "DGO sniffer". Right now the implementation is simply:
  1. Draw a single vertical line on the screen
  2. 4 ms later, draw a second vertical line next to the first
  3. Repeat until the window is filled with lines
I chose 4 ms because my monitor has a 240 Hz refresh rate; drawing the lines any faster than every 4 ms would simply mean multiple lines rendered within a single refresh cycle.

I then record the screen with the camera and analyze the individual video frames. I used a 1/2000 shutter @ 24fps. Each video frame will capture a different phase of the DGO sniffer's cycle, since there's no way to interlock/sync the beginning of the sniffer's cycle to the camera's readout of frames. So I have to scrub through the resulting video to find frames where the two somewhat align.

Here's one of those frames with DR boost off:

Video Frame: 6K24p DR Boost Off

While the sensor is being read out one row at a time from top to bottom, my DGO sniffer is drawing a new vertical line at 4 ms intervals. That means the number of vertical lines "seen" (ie, captured) by each row in the resulting video frame increases by one every 4 ms. But only the first line drawn will be captured at full height - the subsequent lines will get smaller and smaller because they didn't exist yet when the earlier rows were read out. That is what's demonstrated in the above video frame.

Since there's no way to sync the DGO sniffer's drawing to the sensor capture, nor is the ratio of line interval to readout time an even multiple, the number of lines captured by the sensor won't precisely match the maximum number of lines that could've been captured. For example, a DR Boost-off 6K24p readout is 14.95 ms, which means I should be able to capture 14.95 ms / 4 ms lines (3.7375). To account for this, I can simply measure the height difference between any two lines and extrapolate the readout rate for the full frame, which I do as a sanity check to confirm the timing of my DGO sniffer app.
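
The extrapolation itself is just a ratio; here it is as a small Octave sketch with placeholder numbers:

lineIntervalMs = 4;      % DGO sniffer draws one new vertical line every 4 ms
totalRows      = 3968;   % 6K open-gate frame height
rowDelta       = 1060;   % placeholder: height difference, in rows, between two adjacent lines
readoutMs = lineIntervalMs * totalRows / rowDelta
% readoutMs ~= 15 ms with these placeholder values, in the ballpark of the 14.95 ms LED figure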

Here's that same frame but with the readout rate calculation performed for two lines:

Video Frame: 6k24p DR Boost Off, Readout Rate Calculated

This isn't as precise as the 14.95 ms rate calculated from a flashing LED, but that's OK since I'm only doing the calculation to confirm the timing of DGO sniffer. This confirmation is necessary because there are many factors which can interfere with the timing of the app, including system latency, display refresh cycles, display response time (ie, black -> white pixel transition times), phase differences between the monitor refresh and the sensor readout, etc.

Here is the same confirmation performed with DR Boost On:

Video Frame: 6K24p DR Boost On

Video Frame: 6K24p DR Boost On, Readout Rate Calculated

So far so good. But does this give us any new information about what the sensor is doing for two potential readouts that we weren't already getting with a simple LED? Not yet. But now that I have a way to precisely control the content the sensor sees between its two potential readouts, I can alter that content and instrument what the resulting captures look like, to help tease out what the sensor is doing...
 
More DR Boost configuration clues:
  • DR Boost minimum V-Log ISO is 1000 vs 640
  • DR Boost maximum V-Log ISO is 25.6k vs 51.2k
  • DR Boost maximum shutter speed is 1/6400 vs 1/16,000
Edit: Need to rework the math and theory...
 
I've mentioned how Sony doesn't appear to have any published tech that allows a single exposure to be read out at two separate gains, outside of their DOL feature that requires two exposures (T1/T2, one long, one short).

I somehow forgot Sony came out with their Starvis 2 1/2.8 sensor a few years ago, which has what Sony calls "Clear HDR" - precisely a dual readout of a single exposure at varying gains:

When the Clear HDR feature is on, the image sensor captures two images simultaneously, one with a low gain level set to the bright region and the other with a high gain level adjusted to the dark region*2. The images are then synthesized.

This method has the advantage of delivering images of a moving target without chromatic aberration and other artifacts because the two images are captured at the same time.
The Clear HDR feature is suitable not only for security cameras but also for applications to capture moving targets, such as traffic monitoring systems and dashboard cameras.

Source:
https://www.sony-semicon.com/en/technology/security/index.html

I looked for an application/programming guide to the Sony sensors which have the Clear HDR feature (ex: IMX662, IMX585) but couldn't find any - normally Sony protects the release of those through their vendor channels. I'll see if I can get a hold of one. These would dive into the technical details on how to program the sensor to use the feature, which would reveal how it works.

I'm guessing the IMX820 has Clear HDR (35mm sensor in the S1 II / Z6 III). I'm also guessing the implementation on the IMX820 may require custom ISP support to handle the readout rather than doing it on-sensor. I'd also guess Nikon doesn't have this support on their EXPEED 7 ISP chip but Panasonic has it on theirs. To wit, the above Sony page has a footnote for the Clear HDR feature that reads:

*2) The output image data must be post-processed in the camera in order to obtain the final image. IMX585 can synthesize data internally with Clear HDR mode.

Which means some of the Clear HDR implementations can do the merge on-chip, whereas others require an ISP to externally handle the two readouts generated by the sensor.
 
I still can't find a programming reference online for any of the Sony chips with Clear HDR support, but I did find the following schematic showing the readout scheme in Clear HDR mode:

Image: Clear HDR frame readout schematic

The data stream depicted in this schematic is what would be fed to an ISP (the image processing chip in the camera), which would have to store the data into frames for later merging, provided the full-frame implementation doesn't do the merging on-chip.

Look at the timing diagram in the center. You'll see incrementing numbers, which represent each row read off the sensor. Each row number is listed in pairs of HG/LG, for example "HG 22", "LG 22", "HG 23", "LG 23". This depicts each row being clocked off the sensor, first at high gain and then at low gain.

This is why the readout rate doubles for dual gain. It's actually more than double according to my S1 II measurements - it's around 2.3x slower. I theorized here that this may be due to a T1/T2 overlapping exposure, but adding a second short exposure wouldn't itself change the measured flashing-LED readout rate, since the second exposure would overlap the first (it would show up as an overlapping LED band instead of more bands). I think instead the slower readout rate is the inherent overhead of switching the gain back and forth for each row readout.
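
For reference, the 2.3x figure is just the ratio of the two measured readout times:

34.11 / 14.95    % ans ~= 2.28, ie "around 2.3x", noticeably more than a straight doubling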
 
This was inherently already disproven for reasons I won't bore you with. Here's a new DGO sniffer demonstration showing the camera is not performing two completely separate readouts - in other words, proving it's not reading out the complete sensor for one exposure/gain followed by a second full readout.

For this demonstration DGO sniffer was modified to first draw three white vertical lines, 4 ms apart. It then draws three red vertical lines, again 4 ms apart. So:
  • At 0 ms, draw first white vertical line
  • At 4 ms, draw second white vertical line, offset to the right of the 1st line
  • At 8 ms, draw third white vertical line, offset to the right of the 2nd line
  • At 12 ms, draw first red vertical line, offset to the right of 1st white line
  • At 16 ms, draw second red vertical line, offset to the right of 2nd white line
  • At 20 ms, draw third red vertical line, offset to the right of 3rd white line
Result:

Video Frame: 6k24p DR Boost On, White then Red lines

If the sensor were performing a full sensor readout followed by a second full readout, then the red lines would be near the full height of their adjoining white lines instead of just continuing at the height where the white lines left off.

I had more useful experiments planned with further DGO sniffer modifications, but they were dependent on getting hold of a higher-refresh-rate panel, which doesn't look like it will happen. So I've switched to a different idea - using two LEDs of different colors on my Arduino readout testbed, which allows me to control scene content changes down to microseconds.
 
Found the following change notes for an open-source driver for the IMX585, which supports Clear HDR:

https://patchew.org/linux/202507020...com/[email protected]/

It describes registers to set the pixel value thresholds that decide which values use the HCG vs LCG readout. This is similar to the fixed threshold in my Matlab-based DGO simulator.

Interestingly, the IMX585 also has blending settings where you can set a ratio of HCG vs LCG for values. I'm not sure how that's actually applied, but perhaps it's the source of the noise filtering detected in the FFTs.
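
Conceptually - and this is only a sketch of the idea, not a claim about how the IMX585 registers actually behave - a threshold plus a blend ratio could combine the two readouts along these lines:

function merged = hcg_lcg_blend(lcg, hcg, blackLevel, gainRatio, thresholdAdu, blendRatio)
  % Conceptual merge of one exposure read out at low gain (lcg) and high gain (hcg).
  % gainRatio:    HCG/LCG gain ratio (eg 8 for a 3 EV difference)
  % thresholdAdu: LCG ADU level below which the HCG data is preferred
  % blendRatio:   0..1 weight given to the HCG data below the threshold
  scaledHcg = (hcg - blackLevel) ./ gainRatio + blackLevel;   % HCG referred to LCG brightness
  merged = lcg;
  dark = lcg <= thresholdAdu;
  merged(dark) = blendRatio .* scaledHcg(dark) + (1 - blendRatio) .* lcg(dark);
end

With blendRatio set to 1 this collapses to the same hard graft as the earlier simulation sketch.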
 