A simple way to put Canon's sensors on top.

Sorry about my poor English; I can write in Swedish and you can translate it yourself.
I was hoping you would provide a better and more specific technical reference than just a suggestion to look for it myself. In other words, I have provided a description of the process as I see it, while you have provided no descriptive basis for your disagreement. Therefore I am right until you prove me wrong. I don't need to look for a reference; you do, if you want to prove me wrong. I am not a sensor engineer and I may be wrong, but you are not a sensor engineer either and may be wrong too, so your opinion alone does not prove me wrong. The bottom line is that no one here seems to know exactly how this works, and additional research is needed before judging the OP's idea. If increasing the depth of the ADC were all Canon needed to do to catch up with Sony, they would have done it already. High-quality 24-bit ADCs are very common now and cost virtually nothing to produce. However the sensor actually works, the DR problem is definitely not in the ADC. The reason Canon and others use 14 bits is simply that they don't have more information to feed into the ADC. In fact, I recall that medium format uses 16 bits. Do they have 16 stops of DR? Not a chance.
You must study what FWC means before you write a lot of garbage.
I assume "garbage" is an acceptable polite word in Swedish for a civilized conversation on a public forum.
So start by googling what FWC and QE mean.

There are many of us who know how a sensor works: it collects photons according to the exposure (time/f-stop), and the ISO gain is then applied to the signal.

But you are not one of them
Thanks for your contribution to the conversation, Michael. As "helpful" as always. Still nothing more than "I am right and you are wrong". Well, everyone on this forum thinks he is right and everyone else is wrong. You need to do a lot better than that if you want to be taken seriously.
 
1. Canon needs to invest in new sensor lines to keep up with modern sensor design and resolution

2. Canon needs to shorten the analog signal chain, as Sony/Panasonic/Toshiba/Aptina etc. have, and use for example raw vise ADC to lower the readout noise

3. Today's Canon sensor tech is old, based on solutions from 2004 and modified over the years, but still with high readout noise at base ISO
Canon didn't make film. We bought the best film available from other companies. Recording media were not Canon's field. Why should Canon make sensors now instead of buying the best available sensors from others? The premium I would pay for a 5D3 with a Sony sensor would be more than the cost of that sensor to Canon.
Now you are wrong again. Do you know how much Canon charges internally for its stitched 24x36mm sensors, due to small output volume compared to others? And with old tech regarding readout and QE?
How could I possibly be wrong if I didn't make any statement?
 
 
Again, I don't disagree, it is just beyond my point.
Perhaps not. See below.
In engineering DR the numerator is FWC in e- and the denominator is given by the signal when SNR=1. This happens when the mean photosite reading is 1e-.
The DXO definition of DR requires SNR > 1, not SNR = 1:

http://www.dxomark.com/About/In-depth-measurements/Measurements/Noise

Therefore the minimum signal above noise is 2, not 1.
Unless one recalls calculus, limits, Zeno's Paradoxes - and they work in floating point like a photographic signal does.

That's why DxO reads the lower boundary of their published 'screen' dynamic range where the relative Full SNR curve intercepts SNR=0.0000000000000000dB :-)
However, this also is beyond my point. Even if it is 1, the DR is still not infinite in the absence of the read noise, but limited by the photon noise of the lowest signal.
Agreed. Can we call it shot noise for our purposes lest someone think that it is the noise inherent in light from the scene? For this discussion it's the noise inherent in the photoelectrons from the sensor.
What's the SNR of the arriving photons for the example at hand when SNR=1 in e-? Is QE part of the equation?
Sorry, I am not sure why you are asking. It doesn't matter, because you cannot measure it. My guess is that it would be the square root of the inverse of the QE. For example, if you have 10 photons and a QE of 10% producing 1 electron, your photon SNR is 3.16, which you cannot measure, but the SNR of the electron signal is 1. I may be off in the details, but this is my intuitive understanding in general.
Yes, the point being that if we measured an SNR of 1 in e- from the raw data -which information theory says is quite possible since it works in floating point- but the QE were 20% instead of 10% the SNR of the arriving photons would no longer be 3.16 (and vice versa). So QE affects both the SNR and DR of the recorded information, looping back to one of Mikael's points.
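To put numbers on this exchange, here is a quick Python sketch of the shot-noise arithmetic (the 10-photon, 10% QE figures are just the illustrative values from the posts above):

```python
import math

def shot_noise_snr(mean_count):
    # SNR of a Poisson (shot-noise-limited) signal of mean N is sqrt(N)
    return math.sqrt(mean_count)

n_photons = 10
qe = 0.10                       # quantum efficiency, illustrative
n_electrons = n_photons * qe    # 1 e- detected

photon_snr = shot_noise_snr(n_photons)      # ~3.16, not measurable
electron_snr = shot_noise_snr(n_electrons)  # 1.0, what the raw data shows

# electron SNR = photon SNR * sqrt(QE), so a measured SNR of 1 in e-
# implies a different photon-side SNR at QE = 20% than at QE = 10%.
```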
 
Right, if the ADC maxed out at 16383 ADUs. However in fact Canon Full Frames like the 5Diii are limited to around 13235.
OK, that's weird. Why wouldn't it be a power of two?
In other words, when the photosite is loaded to its max, 76600e-, the ADC reads 13235 ADUs. This means that each ADU represents 5.88e-. So 27e- corresponds to 27/5.88 = 4.6 ADU
I get that, then, but I don't get why the ADC doesn't max out at an integral power of 2.
The main reason is that most Canons do not produce output that requires the full 14 bits, so Canon engineers use some of the bits liberally for other purposes, like dedicating 2047 or so to the black level (16k-2k = 14k), probably cutting it short at the top end to preserve linearity, etc.

Recall that the ideal parameter for photographic ADC designers is to define the bit depth so that read noise ends up at around 1 ADU (0.5-1.5 is the current range). The 5Diii's read noise is 6.5ADUs at its best (other FF Canons are similar in this sense), getting worse as you increase ISO. This means that the bottom two bits (4 ADUs) are never used, simply fluctuating randomly from pixel to pixel when exposed to a uniform signal. Other than when doing stacking (astrophotography for instance), one should be able to shoot such Canons at 12 bits with zero loss of information. The extra two bits simply bloat the raw files with random noise.
This, of course, results in the same DR that you computed above DR = log2 (2^14 / 5.8) = 11.5 stops, using the read noise for the noise floor over the area of one pixel.
If you were to use a 20-bit ADC to encode the same range you would have 'gain' of something like 76600/2^20 = 0.073e-/DN. So 27e- would correspond to 367DN and 76600e- to 2^20DN (DN here meaning 20-bit ADUs). DR of course remains what it is.
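As a sanity check on the bit-depth point, a short sketch using the figures above (the FWC and read-noise values are this thread's estimates, not official specs):

```python
import math

fwc_e = 76600        # full-well capacity in e- (this thread's estimate)
read_noise_e = 27    # base-ISO read noise in e- (figure used above)

# 'gain' in e- per output level if the ADC spanned the full range
gain_14bit = fwc_e / 2**14   # ~4.7 e-/DN
gain_20bit = fwc_e / 2**20   # ~0.073 e-/DN, as above

# read noise expressed in DN just scales with the gain...
noise_dn_14bit = read_noise_e / gain_14bit   # ~5.8 DN
noise_dn_20bit = read_noise_e / gain_20bit   # ~370 DN

# ...so the engineering DR is the same regardless of bit depth
dr_stops = math.log2(fwc_e / read_noise_e)   # ~11.5 stops
```

Both bit depths slice the same 76600e- range; only the slice size changes, so the dynamic range is untouched.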
My misunderstanding on the computation of ADUs is likely why I don't understand why this would be the case. Does not the noise floor drop at ISO 3200 while the FWC remains the same (with the 20 bits allowing the full FWC to be recorded)?
The photographer's model is of sensor (1) and Amp/ADC (2) noise summed in quadrature and observed AFTER the ADC, as shown by my graph above.
This makes sense.
(1) Amp/ADC noise is present only at the output of the ADC and constant in ADUs, independent of gain and ISO, 6.5 ADUs in the 5Diii example;
Gotcha.
(2) Sensor noise at the output of the sensor also remains constant at 2.8e-, but it gets amplified by the amp as ISO is raised before being fed to the ADC. So 2.8e- at ISO100 becomes 5.6e- at ISO200, 11.2e- at ISO400 ... 89e- at ISO3200. This value is added in quadrature to the Amp/ADC's constant noise, determining the overall system noise floor, which obviously increases as ISO increases, therefore reducing the headroom to the fixed FWC ceiling.
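The quadrature model in (1) and (2) can be sketched numerically; the 2.8e- sensor noise is the figure above, while the 27e- Amp/ADC contribution is an illustrative constant, not a measured value:

```python
import math

sensor_noise_e = 2.8    # pixel read noise before the amp (figure from above)
amp_adc_noise_e = 27    # constant Amp/ADC noise referred to the ADC (illustrative)

def noise_floor(iso):
    gain = iso / 100                          # amp gain relative to base ISO
    amplified_sensor = sensor_noise_e * gain  # 2.8 -> 5.6 -> 11.2 ... 89.6
    return math.hypot(amplified_sensor, amp_adc_noise_e)  # quadrature sum

floor_100 = noise_floor(100)    # Amp/ADC term dominates: ~27.1
floor_3200 = noise_floor(3200)  # sensor term now dominates: ~93.6
```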
Sure (I assume you mean "pixel noise" when you write "sensor noise").
Yes, read noise per photosite, what you refer to below.
You can also model it as if all noise were present at the output of the sensor, with the Amp/ADC contributing zero noise. That's what you show in your table above. But everything at the output of the sensor, signal and noise alike, gets amplified by the amplifier before being converted to ADUs according to the ISO-dependent 'gain' which I showed you how to determine earlier. When you do that, you realize that the noise floor in ADUs grows just as in the graph, even though the input-referred noise in e- at the output of the sensor appears to be going down, simply reflecting the fact that the contribution of the sensor to total read noise is becoming dominant.
Here's where you lose me. Let's say we have a pixel noise of 2 electrons, an ADC noise of 27 electrons, and an amplification of 32x (ISO 3200).

If we amplify the signal in an analog fashion, then the pixel noise becomes 32 electrons (assuming noiseless analog amplification) and the ADC noise remains 27 electrons.
The Amp/ADC noise in e- is a theoretical simplification, pretending that the noise was present at the output of the pixel (it wasn't, it came into the picture later as the signal went through the Amp and ADC). So here is how it would actually look at the output of the ADC - after amp gain (igain in the table) has been applied. 'Measured' Total Read Noise in e- is straight off Sensorgen.info. 'Modelled' is what Total Read Noise would be if the Photosite and Amp/ADC read noise components had the fixed values highlighted in bold.

Not a perfect model, but it gives us an idea of how things may work inside the 6D.

I would work it out from here, but the ADC, per what you wrote above, would not max out at 2^20 electrons per ADU, so if you would work it from here, I'd appreciate it.
Right. I'll be back.
 
It is hard to read your English, but it sounds like you agree that the well capacity is lower at higher ISO, and therefore the wells do fill up at any ISO. In other words, an overexposure at ISO 3200 will saturate the sensor, not just the ADC. You seemed to disagree earlier. Those here who think that you can overexpose at ISO 3200 by 5 stops without clipping the sensor and then just use a wider ADC to get more DR are wrong.
...the ISO setting on the camera has squat to do with the FWC of the pixels. For example, if a pixel can absorb 80000 photons, then it can do so at any ISO setting, since the ISO setting merely applies a gain to that signal *after* the fact.

Indeed, it doesn't make any sense at all that the ISO setting would affect the pixels' ability to absorb light -- it merely affects what is done with that light after it turns photons into electrons.
I understand what you are saying. If the sensor and amplifier were separate consecutive units, you would be correct. I am not a sensor engineer and I may be wrong. If you provide a link to convincing evidence, then I would learn and benefit from this forum. But until then I maintain that the amplification is done by applying a higher voltage to the sensor, thus reducing the well capacity. To be clear, I understand that there is an amplifier after the sensor, but it is not where the ISO amplification is done. Prove me wrong with a clear reference. I don't mind being wrong; my goal here is to learn, not to satisfy my ego.
From Ron's site
http://www.ronbigelow.com/articles/noise-1/noise-1.htm

"....the photons reach the sensor and excite electrons.... These excited electrons are freed from the molecules to which they are attached. When a voltage is applied,
A higher voltage is applied for higher ISO.
these free electrons create a current and flow into a capacitor. This creates a charge on the capacitor. The charge is then measured to create a voltage measurement. This voltage measurement is processed by the camera to determine how much light reached the pixel during exposure"

"...After the freed electrons flow into a capacitor and the voltage of the capacitor is measured, the voltage is amplified before any further processing is performed"

".....The way that a digital camera increases the ISO is to apply a greater amount of amplification to the voltages that come from the pixels' capacitors"
The higher the voltage applied to the sensor (see above), the higher the output voltage. The amplification happens in the sensor. Again, I am just playing devil's advocate here, but this site is popular, not technical, and does not prove me wrong.
So if the increase in voltage associated with ISO is applied to the signal from the capacitor, and not the individual pixels prior to capacitor storage, is actual Full Well Capacity not affected by ISO?
The jury is still out.
Press correspondent, what you are talking about is avalanche amplification. A single photon can indirectly dislodge multiple electrons. However this process is very noisy and for that reason is not used in image sensors. It also requires quite high voltages and is therefore not practical. On top of that the process is very temperature dependent and difficult to control with precision.
So what remains is that the amplification happens during readout of the pixel.

The popular description of how a pixel works is given above, but it actually works slightly differently. At the start of an exposure the embedded capacitor in the pixel is pre-charged. When a photon dislodges an electron, the electron moves to the electrode with opposite charge because of the electric field and neutralizes part of the charge, causing the voltage to drop. An amplifier amplifies the difference between the pre-charge voltage and the voltage on the capacitor. This gives a measure of how many electrons have reached the electrode.
The full well capacity is reached when the potential between the electrodes becomes zero.
The popular description doesn't work because there would be no incentive for the electrons to go to either electrode if the capacitor were uncharged; the electrons would just wander around aimlessly until they fall back from their excited state, releasing their energy as heat or radiation.
If the electrons were to build up a voltage difference, successive electrons would have to fight the electric field to reach the correct electrode and would rather go the other way, neutralizing the voltage difference again.

Another remark about ADCs. Yes, there are 24-bit and even 32-bit ADCs used in audio that achieve an SNR of 130 dB, but the ADCs used for sensors must be much faster than audio ADCs, so they cannot be compared. Also, the power consumption goes up with every bit you add.
The Sony approach of using many ADCs in parallel helps to lower the conversion frequency, at the expense of probably more power consumption in total.
They are a serious heat source on the sensor, which may influence the performance negatively if not done right. You don't want the ADCs close to active pixels, but it is better to keep the signal paths short. This is a problem with FF sensors because of their physical size.

Mikael, I want to know what a raw vise ADC is but when I googled I only found posts from you in different forums. Makes me wonder. Can you give me a link where these ADCs are described?
 
Great Bustard wrote: Here's where you lose me. Let's say we have a pixel noise of 2 electrons, an ADC noise of 27 electrons, and an amplification of 32x (ISO 3200).

If we amplify the signal in an analog fashion, then the pixel noise becomes 32 electrons (assuming noiseless analog amplification) and the ADC noise remains 27 electrons.
Let's go through it twice, once in e- and once in ADUs.

In e-. In this idealized model sensor/pixel noise is always 2e-, independent of ISO. Amp/ADC noise is a constant 4.6ADU at the output of the ADC, but will be affected by different gains at different ISOs when referred to the input of the Amp in e-. For instance at ISO 3200 Amp/ADC noise can be thought to represent 0.9e- [=4.6ADU*0.19e-/ADU] when referred to the input of the Amp - where it will also find sensor/pixel noise. Summing the two in quadrature, Total Read Noise would be 2.2e- [=sqrt(2^2+0.9^2)] before the fictional noiseless Amp/ADC. Saturation at this ISO occurs at 2478e-, for a 'dynamic range' of 10.1 stops [=log2(2478/2.2)]

In ADU. The Amp/ADC may be noiseless in the idealized calculation above but it still amplifies. Therefore with total read noise of 2.2e- at the input of the Amp, Total Read Noise at the output of the ADC will be 11.6ADUs [=2.2e-/0.19e-/ADU] - keep in mind that in the table below I used 1.9e- as photosite read noise instead of 2. Saturation (a better word than FWC in the table below) continues to occur at 13235ADU, for a 'dynamic range' of 10.1 stops [=log2(13235/11.6)]
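Both walk-throughs can be checked mechanically with the numbers as given (4.6 ADU Amp/ADC noise, 0.19 e-/ADU gain at ISO 3200, 2e- pixel noise, 2478e- / 13235 ADU saturation):

```python
import math

gain = 0.19              # e-/ADU at ISO 3200 (figure from above)
pixel_noise_e = 2.0
amp_adc_noise_adu = 4.6
sat_e = 2478
sat_adu = 13235

# In e-: refer the Amp/ADC noise to the amp input, then sum in quadrature
amp_adc_noise_e = amp_adc_noise_adu * gain            # ~0.9 e-
total_e = math.hypot(pixel_noise_e, amp_adc_noise_e)  # ~2.2 e-
dr_e = math.log2(sat_e / total_e)                     # ~10.1 stops

# In ADU: the 'noiseless' Amp/ADC still amplifies the total read noise
total_adu = total_e / gain                            # ~11.5 ADU
dr_adu = math.log2(sat_adu / total_adu)               # ~10.1-10.2 stops (rounding)
```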
The Amp/ADC noise in e- is a theoretical simplification, pretending that the noise was present at the output of the pixel (it wasn't, it came into the picture later as the signal went through the Amp and ADC).

Not a perfect model, but it gives us an idea of how things may work inside the 6D.

I would work it out from here, but the ADC, per what you wrote above, would not max out at 2^20 electrons per ADU, so if you would work it from here, I'd appreciate it.
Ok, so let's choose units - e- is easier and more intuitive when dealing with the Signal right out of a photosite.

We want to digitize a variable analog Signal with maximum potential value of 76606 Volts (err... e-). How many levels/gradations do we want to break it down into?

256 would mean 299e-/level and we know from experience that it does not give us enough resolution (posterization). How about 2^10 or 2^20 levels? Those would mean 76.6 and 0.073 e-/level respectively. Wait. How many e-/level do we need given what we know about Information Science and the Human Visual System?

The answer is that ideally we would want our linear ADC's levels spaced roughly as the Total read noise in e-. So at ISO100 for the 6D that would mean about 2858 levels (as opposed to the actual 13235), corresponding to 26.8 e-/level - or rounding it up, say 12 bits [=log2(76606/26.8)]. If encoded at 12 bits, the 6D's Total Read Noise would span a maximum of 1.43 ADUs [=4095/2858] at ISO100, pretty close to the ideal according to John Sheehy.

If we used an ADC of higher bit depth (say 14, 16 or 20 bits) all the extra levels would still encode the same Signal range (up to 76606e-) but in finer and more numerous slices, filling the relative raw files with data but without contributing any additional information.

So why use a 14-bit ADC in the 5Diii and 6D (other than for specialized applications that require stacking or the like)? I don't know (anybody knows?), but it seems to me that it would be wasteful to encode their 76606e- maximum signal at bit depths higher than that.
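The "levels spaced roughly at the read noise" rule above boils down to a one-line bit-depth estimate (6D figures as quoted):

```python
import math

fwc_e = 76606         # 6D saturation in e- (figure quoted above)
read_noise_e = 26.8   # total read noise at ISO 100 in e-

levels_needed = fwc_e / read_noise_e               # ~2858 levels
bits_needed = math.ceil(math.log2(levels_needed))  # 12 bits suffice
```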

Jack
 
At higher ISO settings, Canon sensors are as good as the best of them. So, if Canon were to offer an optional ISOless interface, increase the bit depth of the capture files to 20 bits, and have the cameras shoot permanently at ISO 3200 in the ISOless shooting mode, they'd match or beat Sony's sensors in terms of noise and DR.

By making the ISOless UI optional, it would still allow those who find the current noise and DR levels to be "good enough" and prefer setting the ISO themselves to continue as before without any bother.
The simplest way to put Canon's sensors on top. You take all the good sensors and you stack them up. Then you take a Canon sensor and you put it on top.
 
Found an interesting confirmation of both, the idea of a higher resolution ADC for a better DR and my point that the DR is limited by the amount of light:

http://www.clarkvision.com/articles/digital.sensor.performance.summary/index.html#dynamic_range

"If 16-bit or higher analog-to-digital converters were used, with correspondingly lower noise amplifiers, the dynamic range could increase by about 2 stops on the larger pixel cameras. The smallest pixel cameras do not collect enough photons to benefit from higher bit converters concerning dynamic range per pixel."
 
Great Bustard wrote: Here's where you lose me. Let's say we have a pixel noise of 2 electrons, an ADC noise of 27 electrons, and an amplification of 32x (ISO 3200).

If we amplify the signal in an analog fashion, then the pixel noise becomes 32 electrons (assuming noiseless analog amplification) and the ADC noise remains 27 electrons.
Let's go through it twice, once in e- and once in ADUs.

In e-. In this idealized model sensor/pixel noise is always 2e-, independent of ISO. Amp/ADC noise is a constant 4.6ADU at the output of the ADC, but will be affected by different gains at different ISOs when referred to the input of the Amp in e-. For instance at ISO 3200 Amp/ADC noise can be thought to represent 0.9e- [=4.6ADU*0.19e-/ADU] when referred to the input of the Amp - where it will also find sensor/pixel noise. Summing the two in quadrature, Total Read Noise would be 2.2e- [=sqrt(2^2+0.9^2)] before the fictional noiseless Amp/ADC. Saturation at this ISO occurs at 2478e-, for a 'dynamic range' of 10.1 stops [=log2(2478/2.2)]

In ADU. The Amp/ADC may be noiseless in the idealized calculation above but it still amplifies. Therefore with total read noise of 2.2e- at the input of the Amp, Total Read Noise at the output of the ADC will be 11.6ADUs [=2.2e-/0.19e-/ADU] - keep in mind that in the table below I used 1.9e- as photosite read noise instead of 2. Saturation (a better word than FWC in the table below) continues to occur at 13235ADU, for a 'dynamic range' of 10.1 stops [=log2(13235/11.6)]
The Amp/ADC noise in e- is a theoretical simplification, pretending that the noise was present at the output of the pixel (it wasn't, it came into the picture later as the signal went through the Amp and ADC).

Not a perfect model, but it gives us an idea of how things may work inside the 6D.

I would work it out from here, but the ADC, per what you wrote above, would not max out at 2^20 electrons per ADU, so if you would work it from here, I'd appreciate it.
Ok, so let's choose units - e- is easier and more intuitive when dealing with the Signal right out of a photosite.

We want to digitize a variable analog Signal with maximum potential value of 76606 Volts (err... e-). How many levels/gradations do we want to break it down into?

256 would mean 299e-/level and we know from experience that it does not give us enough resolution (posterization). How about 2^10 or 2^20 levels? Those would mean 76.6 and 0.073 e-/level respectively. Wait. How many e-/level do we need given what we know about Information Science and the Human Visual System?

The answer is that ideally we would want our linear ADC's levels spaced roughly as the Total read noise in e-. So at ISO100 for the 6D that would mean about 2858 levels (as opposed to the actual 13235), corresponding to 26.8 e-/level - or rounding it up, say 12 bits [=log2(76606/26.8)]. If encoded at 12 bits, the 6D's Total Read Noise would span a maximum of 1.43 ADUs [=4095/2858] at ISO100, pretty close to the ideal according to John Sheehy.

If we used an ADC of higher bit depth (say 14, 16 or 20 bits) all the extra levels would still encode the same Signal range (up to 76606e-) but in finer and more numerous slices, filling the relative raw files with data but without contributing any additional information.

So why use a 14-bit ADC in the 5Diii and 6D (other than for specialized applications that require stacking or the like)? I don't know (anybody knows?), but it seems to me that it would be wasteful to encode their 76606e- maximum signal at bit depths higher than that.

Jack
OK, I get it now. Let me run this alternative explanation by you, and tell me if it works. Instead of thinking of read noise in terms of electrons, we would think of it in terms of NSR/FWC. So, at base ISO, the 6D would have an NSR/FWC of 26.8 / 76606 = 0.035%. At ISO 3200, the 6D has an NSR/FWC of 2.3 / 2478 = 0.093%. This shows that the read noise is 0.093% / 0.035% = 2.7x more of an issue at ISO 3200 than at ISO 100 and is invariant as a function of bit depth.

Does this seem about right?
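For what it's worth, the arithmetic in that ratio checks out (figures as quoted above; "NSR" here is read noise over saturation):

```python
nsr_base = 26.8 / 76606      # ISO 100:  ~0.035% of saturation
nsr_3200 = 2.3 / 2478        # ISO 3200: ~0.093% of saturation
ratio = nsr_3200 / nsr_base  # ~2.7x more of an issue at ISO 3200
```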
 
Found an interesting confirmation of both, the idea of a higher resolution ADC for a better DR and my point that the DR is limited by the amount of light:

http://www.clarkvision.com/articles/digital.sensor.performance.summary/index.html#dynamic_range

"If 16-bit or higher analog-to-digital converters were used, with correspondingly lower noise amplifiers, the dynamic range could increase by about 2 stops on the larger pixel cameras. The smallest pixel cameras do not collect enough photons to benefit from higher bit converters concerning dynamic range per pixel."
This doesn't address the issue at all. Clark is saying that if you keep the read noise constant and the pixels can absorb 4x more light, then the DR would increase by two stops. This is understood and not disputed.

However, I think the OP has been resolved:

http://www.dpreview.com/forums/post/53074128

and the answer is that greater bit depth on the current design as I hypothesized will not work.
 
At higher ISO settings, Canon sensors are as good as the best of them. So, if Canon were to offer an optional ISOless interface, increase the bit depth of the capture files to 20 bits, and have the cameras shoot permanently at ISO 3200 in the ISOless shooting mode, they'd match or beat Sony's sensors in terms of noise and DR.

By making the ISOless UI optional, it would still allow those who find the current noise and DR levels to be "good enough" and prefer setting the ISO themselves to continue as before without any bother.
The simplest way to put Canon's sensors on top. You take all the good sensors and you stack them up. Then you take a Canon sensor and you put it on top.
:-D
 
Press correspondent, what you are talking about is avalanche amplification. A single photon can indirectly dislodge multiple electrons. However this process is very noisy and for that reason is not used in image sensors. It also requires quite high voltages and is therefore not practical. On top of that the process is very temperature dependent and difficult to control with precision.
So what remains is that the amplification happens during readout of the pixel.

The popular description of how a pixel works is given above, but it actually works slightly differently. At the start of an exposure the embedded capacitor in the pixel is pre-charged. When a photon dislodges an electron, the electron moves to the electrode with opposite charge because of the electric field and neutralizes part of the charge, causing the voltage to drop. An amplifier amplifies the difference between the pre-charge voltage and the voltage on the capacitor. This gives a measure of how many electrons have reached the electrode.
The full well capacity is reached when the potential between the electrodes becomes zero.
The popular description doesn't work because there would be no incentive for the electrons to go to either electrode if the capacitor were uncharged; the electrons would just wander around aimlessly until they fall back from their excited state, releasing their energy as heat or radiation.
If the electrons were to build up a voltage difference, successive electrons would have to fight the electric field to reach the correct electrode and would rather go the other way, neutralizing the voltage difference again.

Another remark about ADCs. Yes, there are 24-bit and even 32-bit ADCs used in audio that achieve an SNR of 130 dB, but the ADCs used for sensors must be much faster than audio ADCs, so they cannot be compared. Also, the power consumption goes up with every bit you add.
The Sony approach of using many ADCs in parallel helps to lower the conversion frequency, at the expense of probably more power consumption in total.
They are a serious heat source on the sensor, which may influence the performance negatively if not done right. You don't want the ADCs close to active pixels, but it is better to keep the signal paths short. This is a problem with FF sensors because of their physical size.

Mikael, I want to know what a raw vise ADC is but when I googled I only found posts from you in different forums. Makes me wonder. Can you give me a link where these ADCs are described?
Thank you, this was helpful!
 
Found an interesting confirmation of both, the idea of a higher resolution ADC for a better DR and my point that the DR is limited by the amount of light:

http://www.clarkvision.com/articles/digital.sensor.performance.summary/index.html#dynamic_range

"If 16-bit or higher analog-to-digital converters were used, with correspondingly lower noise amplifiers, the dynamic range could increase by about 2 stops on the larger pixel cameras. The smallest pixel cameras do not collect enough photons to benefit from higher bit converters concerning dynamic range per pixel."
This doesn't address the issue at all. Clark is saying that if you keep the read noise constant and the pixels can absorb 4x more light, then the DR would increase by two stops. This is understood and not disputed.
I see. Then he only backs up my point.
However, I think the OP has been resolved:

http://www.dpreview.com/forums/post/53074128

and the answer is that greater bit depth on the current design as I hypothesized will not work.
It is not resolved. I believe you allowed yourself to get confused. The amplification reduces the read noise, therefore the source of this noise is after the amplifier, but before the ADC. (Otherwise it would just be a 12-bit ADC, but it is not, unless Canon lies to us). The amplifier in CMOS is separate for every pixel and is located in the photocell. Therefore logically the main source of the read noise is the connecting line or whatever else happens between the amplifier in the photocell and ADC.

To increase the DR at the base ISO by reducing this read noise, you would need to do the following:

1. Use higher voltage amplifiers in photocells to avoid clipping while amplifying the full signal value

2. Amplify the signal 4 times as if it were ISO 400 to add 2 bits or 16 times to add 4 bits

3. Reduce the signal back to the normal voltage by a passive filter at the ADC input (or use a higher voltage ADC)

4. Use at least the existing 14-bit ADC for 2 extra bits or a 16-bit for 4 extra bits (per #2 above)

This should result in a 2-stop increase of DR at the base ISO. Please tell me where you see a flaw in this logic.
 
Found an interesting confirmation of both, the idea of a higher resolution ADC for a better DR and my point that the DR is limited by the amount of light:

http://www.clarkvision.com/articles/digital.sensor.performance.summary/index.html#dynamic_range

"If 16-bit or higher analog-to-digital converters were used, with correspondingly lower noise amplifiers, the dynamic range could increase by about 2 stops on the larger pixel cameras. The smallest pixel cameras do not collect enough photons to benefit from higher bit converters concerning dynamic range per pixel."
This doesn't address the issue at all. Clark is saying that if you keep the read noise constant and the pixels can absorb 4x more light, then the DR would increase by two stops. This is understood and not disputed.
I see. Then he only backs up my point.
In fact, he doesn't. He would only back up your point *if* your assumption that the actual pixel FWC decreases with higher ISOs were correct. It does not decrease. Only the effective FWC decreases, and that is a function of the bit depth. However, as I mentioned:
However, I think the OP has been resolved:

http://www.dpreview.com/forums/post/53074128

and the answer is that greater bit depth on the current design as I hypothesized will not work.
adding more bits will not solve the problem.
It is not resolved.
It really is. I have my answer.
I believe you allowed yourself to get confused.
I didn't "allow [myself] to become confused" -- I simply was confused, 'cause I took the read noise figures of sensorgen as an absolute as opposed to effective read noise values. Now that I look back at it, what I was thinking was absolutely stupid. But given that I've so much experience with being stupid, it doesn't hurt so much, anymore. ;-)
The amplification reduces the read noise; therefore the source of this noise is after the amplifier but before the ADC.
No, the amplification does not reduce the read noise -- that was my error. Click on the link above for an explanation.
(Otherwise it would effectively be a 12-bit ADC, but it is not, unless Canon is lying to us.) The amplifier in CMOS is separate for every pixel and is located in the photocell. Therefore, logically, the main source of the read noise is the connecting line or whatever else happens between the amplifier in the photocell and the ADC.
Neither here nor there, in terms of the OP.
To increase the DR at the base ISO by reducing this read noise, you would need to do the following:

1. Use higher voltage amplifiers in photocells to avoid clipping while amplifying the full signal value

2. Amplify the signal 4 times as if it were ISO 400 to add 2 bits or 16 times to add 4 bits

3. Reduce the signal back to the normal voltage by a passive filter at the ADC input (or use a higher voltage ADC)

4. Use at least the existing 14-bit ADC for 2 extra bits or a 16-bit for 4 extra bits (per #2 above)

This should result in a 2-stop increase of DR at the base ISO. Please tell me where you see a flaw in this logic.
Honestly, I don't see the logic of any of that at all. Conversely, aus_pic_hunter's post above made a great deal of sense to me:

http://www.dpreview.com/forums/post/53072359
 
We want to digitize a variable analog Signal with maximum potential value of 76606 Volts (err... e-). How many levels/gradations do we want to break it down into?
We CAN get no more than 12 stops out of it, because of the read noise, as you correctly described below. However, if the read noise were reduced, then we might WANT to get up to 16 stops:

Log2(76606) = 16.2

Since approximately 25 units are added to the read noise after the amplifier, it may be possible to reduce this noise as I describe below in this thread and conceptually get more than 12 stops of DR at the base ISO:

http://www.dpreview.com/forums/post/53074931
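As a quick check of the 16.2-stop figure above (the 76606 e- saturation value is the one Jack quotes for the 6D later in this thread):

```python
import math

SATURATION = 76606  # 6D max signal in electrons, figure quoted in this thread

# Upper bound on per-pixel DR set purely by photon counting:
# you cannot resolve more stops than log2 of the maximum photon count.
ceiling_stops = math.log2(SATURATION)
print(round(ceiling_stops, 1))
```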
How many e-/level do we need given what we know about Information Science and the Human Visual System?
There are two possible scenarios:

1. The scene contains two areas, one of which is darker than the other. After the darker area is "pulled", the overall DR of the scene is normal and not very wide. This is often called "pulling shadows". In this case the answer to your question is: the more DR you have, the deeper the shadows you can pull.

2. The scene has a uniformly wide DR. In this case the answer depends on the properties of the display medium and of human vision. Specifically, more than 10 or 11 stops of DR cannot be reproduced on modern display media due to the properties of human vision, and require an HDR technique that changes the image. Here I explain this in more detail:

http://www.dpreview.com/forums/post/52075894

Therefore, Canon's current DR is sufficient for everything but pulling shadows and HDR. Any scene can be reproduced as-is with Canon's DR within the limitations of current display media. For example, to display the full DR of a D800 as-is (without HDR or pulling shadows), you would need a much brighter display than what we have now, like this new 4,000-nit display tech:

http://www.engadget.com/2013/12/05/dolby-demos-new-imaging-tech-that-pushes-more-light-to-your-tele/
The answer is that ideally we would want our linear ADC's levels spaced roughly as the Total read noise in e-.
Only if the read noise cannot be reduced. See above.
If we used an ADC of higher bit depth (say 14, 16 or 20 bits) all the extra levels would still encode the same Signal range (up to 76606e-) but in finer and more numerous slices, filling the relative raw files with data but without contributing any additional information.

So why use a 14-bit ADC in the 5Diii and 6D (other than for specialized applications that require stacking or the like)? I don't know (anybody know?), but it seems to me that it would be wasteful to encode their 76606e- maximum signal at bit depths higher than that.
100% agree. The way it is, the lowest 2 bits are wasted at any ISO. I can think of a few possible technical and other reasons for using a 14-bit ADC:

1. Completely remove any contribution of the ADC to noise. At 12 bits it may be slightly noticeable while 13 is an odd number and may not be the best for other reasons. Hence 14.

2. Canon expected lower read noise, but it happened to be higher after the 14-bit ADC was already ready to go.

3. Canon plans to improve sensors by reducing the read noise without having to also improve the ADC.

4. The cost of the 14-bit ADC was not much higher or was even lower than the 12-bit one.

5. Marketing and paper specs to compete with Nikon and others.
 
I see. Then he only backs up my point.
In fact, he doesn't. He would only back up your point *if* your assumption that the actual pixel FWC decreases with higher ISOs were correct. It does not decrease. Only the effective FWC decreases, and that is a function of the bit depth.
Not that point. The point that the DR is limited by light quantization. For example, if your maximum signal is 16 photons, you only have 4 stops of DR without downsampling.

You are correct that a mere increase of the ADC depth would not increase the DR, but amplifying the ISO-100 signal would, as I described above. Yes, the actual value of the read noise is not reduced, but it would be reduced in my step 3: "Reduce the signal back to the normal voltage by a passive filter at the ADC". I will let you sleep on it, and if you still don't get my 4-step logic tomorrow, I will try to explain in more detail.
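The photon-quantization point above can be made concrete: shot noise on a signal of n photons is sqrt(n), so the signal equals its own noise at n = 1 photon, and with a 100%-NSR floor the per-pixel DR is simply log2 of the maximum signal. A small sketch of my own, not from the thread:

```python
import math

def photon_dr_stops(max_signal):
    # Shot noise is sqrt(n), so signal-to-noise reaches 1 at n = 1 photon;
    # with that as the floor, DR reduces to log2 of the maximum signal.
    return math.log2(max_signal / 1)

print(photon_dr_stops(16))  # 16-photon maximum -> 4 stops
print(photon_dr_stops(1))   # 1-photon maximum -> 0 stops
```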
 
I see. Then he only backs up my point.
In fact, he doesn't. He would only back up your point *if* your assumption that the actual pixel FWC decreases with higher ISOs were correct. It does not decrease. Only the effective FWC decreases, and that is a function of the bit depth.
Not that point. The point that the DR is limited by light quantization. For example, if your maximum signal is 16 photons, you only have 4 stops of DR without downsampling.
You are, if you are measuring DR over the area of a pixel and choosing a 100% NSR as the noise floor, but this is not a meaningful way to discuss the DR of the photo.
You are correct that a mere increase of the ADC depth would not increase the DR, but amplifying the ISO-100 signal would, as I described above.

Yes, the actual value of the read noise is not reduced, but it would be reduced in my step 3: "Reduce the signal back to the normal voltage by a passive filter at the ADC". I will let you sleep on it, and if you still don't get my 4-step logic tomorrow, I will try to explain in more detail.
Let's say the signal is 100 electrons, with a pixel read noise of 2 electrons, an ADC noise of 27 electrons, and all other sources of electronic noise are insignificant in comparison.

If we shoot at ISO 100, then the total read noise is sqrt (2² + 27²) = 27.1 electrons. If we push the photo 5 stops, the read noise is 27.1 x 32 = 866 electrons relative to an effective signal of 100 x 32 = 3200 electrons, for a relative pixel read noise of 866 / 3200 = 27%.

On the other hand, if we shoot at ISO 3200, the effective pixel noise is 2 x 32 = 64 electrons, which then passes through the ADC for a total noise of sqrt (64² + 27²) = 69.5 electrons, resulting in a relative pixel read noise of 69.5 / 3200 = 2.2%. This is exactly why shooting at ISO 3200 is less noisy than shooting at ISO 100 and pushing five stops.

Now, let's discuss the DR per pixel using the read noise as the noise floor. Let's say each pixel has a FWC of 80000 electrons, a pixel noise of 2 electrons, an ADC noise of 27 electrons, and, again, other sources of electronic noise are insignificant in comparison. This will result in a DR of log2 (80000 / 27.1) = 11.5 stops.

Let's say we had a bit depth so large that we could amplify the signal as much as we wanted without clipping. We'll use the same 5 stops for this example. Then the effective signal would be 80000 x 32 = 2560000 electrons and the effective pixel noise would be 2x32 = 64 electrons, giving a read noise per pixel of sqrt (64² + 27²) = 69.5 electrons, and thus a DR of log2 (2560000 / 69.5) = 15.2 stops.

Hey! That's what I was proposing in the OP! OK -- what am I doing wrong?
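The arithmetic in this post is easy to reproduce; here is a sketch under the same stated assumptions (2 e- pixel noise, 27 e- noise at the ADC stage, 80000 e- FWC, all hypothetical figures from the post):

```python
import math

PIXEL_NOISE = 2.0  # e-, before amplification
ADC_NOISE = 27.0   # e-, added at the ADC stage in this model
FWC = 80000        # e-, full-well capacity

def pushed_noise_pct(signal, stops):
    """Shoot at base ISO, push in post: the ADC noise is scaled up with the signal."""
    gain = 2 ** stops
    total = math.sqrt(PIXEL_NOISE**2 + ADC_NOISE**2) * gain
    return 100 * total / (signal * gain)

def amplified_noise_pct(signal, stops):
    """Amplify in analog before the ADC: only the pixel noise is amplified."""
    gain = 2 ** stops
    total = math.sqrt((PIXEL_NOISE * gain)**2 + ADC_NOISE**2)
    return 100 * total / (signal * gain)

print(round(pushed_noise_pct(100, 5), 1))     # ~27.1% (ISO 100 pushed 5 stops)
print(round(amplified_noise_pct(100, 5), 1))  # ~2.2%  (shot at ISO 3200)

# DR per pixel with the read noise as the floor:
dr_base = math.log2(FWC / math.sqrt(PIXEL_NOISE**2 + ADC_NOISE**2))
dr_amp = math.log2(FWC * 32 / math.sqrt((PIXEL_NOISE * 32)**2 + ADC_NOISE**2))
print(round(dr_base, 1), round(dr_amp, 1))    # ~11.5 vs ~15.2 stops
```

The numbers match the post; the catch, as discussed below, is that the 15.2-stop case assumes the amplifier and ADC can handle a 32x-amplified full-well signal without clipping.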
 
Great Bustard wrote: Here's where you lose me. Let's say we have a pixel noise of 2 electrons, an ADC noise of 27 electrons, and an amplification of 32x (ISO 3200).

If we amplify the signal in an analog fashion, then the pixel noise becomes 32 electrons (assuming noiseless analog amplification) and the ADC noise remains 27 electrons.
Let's go through it twice, once in e- and once in ADUs.

In e-. In this idealized model sensor/pixel noise is always 2e-, independent of ISO. Amp/ADC noise is a constant 4.6ADU at the output of the ADC, but will be affected by different gains at different ISOs when referred to the input of the Amp in e-. For instance at ISO 3200 Amp/ADC noise can be thought to represent 0.9e- [=4.6ADU*0.19e-/ADU] when referred to the input of the Amp - where it will also find sensor/pixel noise. Summing the two in quadrature, Total Read Noise would be 2.2e- [=sqrt(2^2+0.9^2)] before the fictional noiseless Amp/ADC. Saturation at this ISO occurs at 2478e-, for a 'dynamic range' of 10.1 stops [=log2(2478/2.2)]

In ADU. The Amp/ADC may be noiseless in the idealized calculation above, but it still amplifies. Therefore, with total read noise of 2.2e- at the input of the Amp, Total Read Noise at the output of the ADC will be 11.6ADU [=2.2e- / 0.19e-/ADU] - keep in mind that in the table below I used 1.9e- as photosite read noise instead of 2. Saturation (a better word than FWC in the table below) continues to occur at 13235ADU, for a 'dynamic range' of 10.1 stops [=log2(13235/11.6)]
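The e- side of this conversion can be checked numerically; a sketch using the figures assumed in the post (0.19 e-/ADU gain, 4.6 ADU amp/ADC noise, 2 e- pixel noise, 2478 e- saturation at ISO 3200):

```python
import math

# Figures for the 6D at ISO 3200, as assumed in the post above:
GAIN = 0.19              # e- per ADU at ISO 3200
AMP_ADC_NOISE_ADU = 4.6  # amp/ADC noise measured at the ADC output
PIXEL_NOISE = 2.0        # e- at the photosite
SATURATION = 2478        # e- at this ISO

# Refer the amp/ADC noise to the amplifier input, then sum in quadrature:
amp_adc_e = AMP_ADC_NOISE_ADU * GAIN                      # ~0.9 e-
total_read_e = math.sqrt(PIXEL_NOISE**2 + amp_adc_e**2)   # ~2.2 e-
dr = math.log2(SATURATION / total_read_e)                 # ~10.1 stops
print(round(amp_adc_e, 1), round(total_read_e, 1), round(dr, 1))
```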
The Amp/ADC noise in e- is a theoretical simplification, pretending that the noise was present at the output of the pixel (it wasn't, it came into the picture later as the signal went through the Amp and ADC).

Not a perfect model, but it gives us an idea of how things may work inside the 6D.

Not a perfect model, but it gives us an idea of how things may work inside the 6D.
I would work it out from here, but the ADC, per what you wrote above, would not max out at 2^20 electrons per ADU, so, if you would work it out from here, I'd appreciate it.
Ok, so let's choose units - e- is easier and more intuitive when dealing with the Signal right out of a photosite.

We want to digitize a variable analog Signal with maximum potential value of 76606 Volts (err... e-). How many levels/gradations do we want to break it down into?

256 would mean 299e-/level and we know from experience that it does not give us enough resolution (posterization). How about 2^10 or 2^20 levels? Those would mean 76.6 and 0.073 e-/level respectively. Wait. How many e-/level do we need given what we know about Information Science and the Human Visual System?

The answer is that ideally we would want our linear ADC's levels spaced roughly as the Total read noise in e-. So at ISO100 for the 6D that would mean about 2858 levels (as opposed to the actual 13235), corresponding to 26.8 e-/level - or rounding it up, say 12 bits [=log2(76606/26.8)]. If encoded at 12 bits, the 6D's Total Read Noise would span a maximum of 1.43 ADUs [=4095/2858] at ISO100, pretty close to the ideal according to John Sheehy.

If we used an ADC of higher bit depth (say 14, 16 or 20 bits) all the extra levels would still encode the same Signal range (up to 76606e-) but in finer and more numerous slices, filling the relative raw files with data but without contributing any additional information.

So why use a 14-bit ADC in the 5Diii and 6D (other than for specialized applications that require stacking or the like)? I don't know (anybody know?), but it seems to me that it would be wasteful to encode their 76606e- maximum signal at bit depths higher than that.

Jack
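Jack's "levels spaced as the read noise" rule reduces to a two-line calculation; a sketch with his assumed 6D base-ISO figures (76606 e- saturation, 26.8 e- total read noise):

```python
import math

SATURATION = 76606  # 6D max signal at ISO 100, e- (figure from the post)
READ_NOISE = 26.8   # total read noise in e-, used as the level spacing

# Number of distinguishable levels and the bit depth needed to encode them:
levels = SATURATION / READ_NOISE     # ~2858 useful levels
bits = math.ceil(math.log2(levels))  # rounds 11.5 up to 12 bits
print(round(levels), bits)
```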
OK, I get it now. Let me run this alternative explanation by you, and tell me if it works. Instead of thinking of read noise in terms of electrons, we would think of it in terms of NSR/FWC. So, at base ISO, the 6D would have an NSR/FWC of 26.8 / 76606 = 0.035%. At ISO 3200, the 6D has an NSR/FWC of 2.3 / 2478 = 0.093%. This shows that the read noise is 0.093% / 0.035% = 2.7x more of an issue at ISO 3200 than at ISO 100 and is invariant as a function of bit depth.

Does this seem about right?
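The NSR/FWC ratios above check out numerically (same assumed 6D figures as before):

```python
# Read noise as a fraction of saturation, per the figures quoted above:
base = 26.8 / 76606  # ISO 100
high = 2.3 / 2478    # ISO 3200

# Read noise is a ~2.7x bigger fraction of the usable range at ISO 3200,
# regardless of how many bits the ADC slices that range into.
print(f"{100*base:.3f}% {100*high:.3f}% {high/base:.1f}x")
```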
OK, what's wrong with what I did downthread?

http://www.dpreview.com/forums/post/53076121

Something's amiss.
 
I see. Then he only backs up my point.
In fact, he doesn't. He would only back up your point *if* your assumption that the actual pixel FWC decreases with higher ISOs were correct. It does not decrease. Only the effective FWC decreases, and that is a function of the bit depth.
Not that point. The point that the DR is limited by light quantization. For example, if your maximum signal is 16 photons, you only have 4 stops of DR without downsampling.
You are, if you are measuring DR over the area of a pixel and choosing a 100% NSR as the noise floor, but this is not a meaningful way to discuss the DR of the photo.
Yes, it is. It matches the DXO definition of screen DR as opposed to print DR. Both are perfectly meaningful, despite the stubborn insistence of several members here to the contrary. If the maximum signal is 1, the screen DR is zero, but the print DR is not.
You are correct that a mere increase of the ADC depth would not increase the DR, but amplifying the ISO-100 signal would, as I described above.

Yes, the actual value of the read noise is not reduced, but it would be reduced in my step 3: "Reduce the signal back to the normal voltage by a passive filter at the ADC". I will let you sleep on it, and if you still don't get my 4-step logic tomorrow, I will try to explain in more detail.
Let's say the signal is 100 electrons, with a pixel read noise of 2 electrons, an ADC noise of 27 electrons, and all other sources of electronic noise are insignificant in comparison.

If we shoot at ISO 100, then the total read noise is sqrt (2² + 27²) = 27.1 electrons. If we push the photo 5 stops, the read noise is 27.1 x 32 = 866 electrons relative to an effective signal of 100 x 32 = 3200 electrons, for a relative pixel read noise of 866 / 3200 = 27%.

On the other hand, if we shoot at ISO 3200, the effective pixel noise is 2 x 32 = 64 electrons, which then passes through the ADC for a total noise of sqrt (64² + 27²) = 69.5 electrons, resulting in a relative pixel read noise of 69.5 / 3200 = 2.2%. This is exactly why shooting at ISO 3200 is less noisy than shooting at ISO 100 and pushing five stops.

Now, let's discuss the DR per pixel using the read noise as the noise floor. Let's say each pixel has a FWC of 80000 electrons, a pixel noise of 2 electrons, an ADC noise of 27 electrons
There is no ADC noise. An ADC is a device capable of its specified resolution. Its noise cannot be even 1 bit, because that would reduce its resolution by 1 bit and make it not what it is presented to be. The noise enters before the ADC. However, this point has no bearing on the subsequent discussion or the final conclusion. It does not matter whatsoever where the noise originates, as long as it is after the amplifier, which is separate for each pixel and located inside the photocell.
, and, again, other sources of electronic noise are insignificant in comparison. This will result in a DR of log2 (80000 / 27.1) = 11.5 stops.

Let's say we had a bit depth so large that we could amplify the signal as much as we wanted without clipping. We'll use the same 5 stops for this example. Then the effective signal would be 80000 x 32 = 2560000 electrons and the effective pixel noise would be 2x32 = 64 electrons, giving a read noise per pixel of sqrt (64² + 27²) = 69.5 electrons, and thus a DR of log2 (2560000 / 69.5) = 15.2 stops.

Hey! That's what I was proposing in the OP!
Not exactly. Your proposal was that it was the resolution of the ADC that made a difference, but in fact it is the amplification. However, this technicality does not change the result.
OK -- what am I doing wrong?
Nothing. You are simply following the 4-step logic from my post above, which technically matches your OP proposal (with the above caveat). The result is correct under the assumption in my step #1 that the amplifier does not clip.
 
