Sensor "Amplification" and "native" ISO

Negative. As you raise gain/ISO the absolute noise level in ADUs can only get worse.
I did not say anything about noise there. I was just trying to find a meaning for 1-1 conversion or, if you wish, "no gain" conversion.
In nature there is no such thing as no noise, as far as photography is concerned. The signal can be considered not to exist without noise, because noise is inherently part of its nature. The underlying events are all random: photons arrive according to Poisson statistics, and they get converted to photoelectrons through a separate random process which also happens to have a Poisson distribution. The noise added later, in the amplification and conversion process, is independent again, but this time Gaussian.
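That chain of random processes can be sketched numerically. All figures below are made-up illustrative values, not any particular camera:

```python
import numpy as np

# Sketch of the random chain above: Poisson photon arrival, random
# conversion to photoelectrons (a thinned Poisson process is again
# Poisson), then independent additive Gaussian read noise.
rng = np.random.default_rng(42)

mean_photons = 1000.0   # illustrative mean photon count per pixel
qe = 0.5                # illustrative quantum efficiency
read_noise = 3.0        # illustrative read noise, electrons rms

photons = rng.poisson(mean_photons, size=100_000)
electrons = rng.binomial(photons, qe)   # Poisson with mean qe * mean_photons
signal = electrons + rng.normal(0.0, read_noise, size=electrons.shape)

# Poisson: variance equals mean; independent noise sources add in quadrature.
print(electrons.mean())   # ~500
print(electrons.var())    # ~500
print(signal.var())       # ~500 + 3**2 = ~509
```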

So there is no such thing as a 1-1 conversion. There are a number of subsequent random processes.

Dr. Joofa
There is, but a noisy one, if you wish. 1-1 means you do not change the scale after conversion to electrons.
 
Thanks; and to Jack before that. So the read noise of the D810 is pushing the limits of its bit depth in the deep shadows.

I remember seeing on Emil's page that such posterization would not hurt the visual perception of the noise too much, but it is getting close, I guess.
 
And, as you saw in the blog post that I linked to earlier, there's not enough read noise in the D810 to adequately dither the LSB when the camera's set to 12-bit mode.

How much noise does it take to get adequate dithering? About half an LSB rms:
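A toy quantizer illustrates the rule (my own sketch, with assumed numbers, not the original demonstration): a level sitting between two codes is lost without dither, but recovered on average with ~0.5 LSB rms of Gaussian noise.

```python
import numpy as np

# A signal at 10.3 LSB: without noise every sample quantizes to 10 and
# the 0.3 LSB is gone; with ~0.5 LSB rms of Gaussian dither the codes
# spread over neighbouring levels and the average recovers the signal.
rng = np.random.default_rng(0)
true_level = 10.3
n = 200_000

undithered = np.round(np.full(n, true_level))
dithered = np.round(true_level + rng.normal(0.0, 0.5, n))

print(undithered.mean())   # 10.0 exactly: posterized
print(dithered.mean())     # ~10.3: dither preserves the sub-LSB information
```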


Jim
 
So the read noise of the D810 is pushing the limits of its bit depth in the deep shadows.
Different strokes for different folks, but many sources (including Janesick) suggest that the best ADC compromise is around 1 LSB = read noise. Much more than 1 is a waste of space; much less than 1 invites quantization artifacts (Jim has some brilliant examples of that with the a7S).
 
The reason I asked this was a discussion about different degrees of "gain" for different sensor sizes (but the same MP count) in another thread. If we take "no gain" to mean 1 photon = 1 raw value, then this is a basis for comparison. Of course, we can use different units.
But that can't work. 14 bits means 16384 raw values tops. Current FF sensors have a much, much higher full-well capacity.
It could, at ISO 400.
And yet they are not photon-counting machines; in general it takes more than 1 photon to produce 1 electron.
I am talking about photons registered.
Then it would be clearer if we called those "photons registered" electrons, would it not?
 
No, because I want to eliminate the nuts and bolts from the process: photons registered vs. RAW values and, in the end, photons hitting my eye. If somebody invents a sensor which sees light in a different manner, I will still be asking the same question.
 
And you may get a different answer. The devil is in the details.

Jim
 
How odd . . I fold.
 
Well, I forgot that somebody did invent another "sensor" - film...
 
A quick look at sensorgen.com reveals that an FF camera like the 5D2/3 must count every ~4 photons as 1 RAW value with 14-bit encoding. The 70D should count every 1.6 photons on average as 1 value. There is some offset, of course, and noise.

Now, in my naïve understanding of how the RAW values are generated, the circuit still gets 4 impulses (correct me if I am wrong) for each RAW value. They might be mostly noise, but that is not my point. To create the RAW value, the circuit must suppress the extra info and round the number to the closest multiple of 4 or so. Does this sound right?

That means that ISO 400 is the "native" one: each additional photon increases the RAW value by 1, on average. Below ISO 400, the "amplification" is negative on a log scale, and there is "amplification" only above ISO 400.
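The arithmetic behind that guess can be written out; the full-well figure below is an assumed round number chosen to match the ~4 e-/ADU above, not measured data:

```python
# Back-of-envelope check of the "native ISO 400" reasoning above.
full_well = 65_000              # electrons, illustrative FF figure
levels = 2 ** 14 - 1            # 16383 raw values at 14 bits

gain_base = full_well / levels  # electrons per raw value at base ISO
print(round(gain_base, 1))      # ~4.0 e-/ADU

# Analog gain scales inversely with ISO, so 1 e- = 1 ADU is reached at:
base_iso = 100
unity_iso = base_iso * gain_base
print(round(unity_iso))         # ~397, i.e. about ISO 400
```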

??? This is just an armchair analysis; a reclining sofa one, to be more precise. I know nothing about the sensor architecture.
IF the only signal read by the ADC were discrete electron charges, then that would be more significant, but the signal from read noise in the readout circuitry does not come in units of discrete electron charges in the photosite, so there is nothing special about ISO 400; so-called "unity gain" is purely academic. In fact, if we were to count only discrete electron charges using an ADC, ISO 400 would not be enough, because even the minimal noise in an ADC is 0.29 ADU, so you would really need about 2.5 to 3 RAW levels per electron to get a resulting histogram that clearly distinguishes the various electron counts, with space between them.

When other read noise is present, you can still count electrons (counting registered photons indirectly, if that wasn't clear) if the read noise is below 0.15 electrons, because each count then has a fairly distinct histogram bell curve if the digital gain is high enough (and you could reduce each curve to a single value, effectively removing read noise with no filtering of signal). But that gain has to be about 12 levels or more per photosite electron charge, to keep the curves distinct with space between them. Any noise whose character is heavy in outliers would, of course, need significantly less than 0.15 electrons of read noise to yield distinct curves with space between them, and a gain of more than 12; the 0.15 figure assumes a Gaussian (normal) noise distribution.
 
"Native ISO" is not a very useful term, IMO, and leads to more confusion than illumination. The only sense in which a sensor has an ISO is that there is a maximum amount of charge before the photosites saturate, and any ISO rating of that saturation level depends completely upon a convention as to how much highlight headroom is needed for a camera to officially sport an ISO. There is nothing stopping any sensor from being used at any high ISO you like; you just get a stop more read noise relative to ISO-metered signal with each doubling of ISO, and a half stop more photon shot noise. So, as you climb the ISO scale, you lose the ability to distinguish details at progressively coarser levels of detail, especially when the read noise has correlated components (banding and blotching).
 
There is a big difference between counting and digitizing with "unity gain". Digitizing with unity gain is never going to give you assured electron counts, as a slight variation in linearity or gain will obfuscate counts. You need enough gain that the resulting histogram has distinct gaps between each of the (integer) electron counts, and that would be at something like 2.5x or greater, assuming no noise other than that of the digitization itself.
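A toy illustration of that point (numbers assumed): with a 2% gain error, unity gain merges neighbouring electron counts into one code by the pigeonhole principle, while ~2.5x gain keeps every count on its own code with gaps between them.

```python
import numpy as np

# Noiseless digitization of integer electron counts with a 2% gain error.
counts = np.arange(200)                              # 0..199 electrons

codes_unity = np.round(counts * 0.98).astype(int)    # "unity" gain, slightly off
codes_2p5x = np.round(counts * 2.47).astype(int)     # ~2.5x gain, same 2% error

# Unity gain: 200 counts squeezed into fewer codes -> collisions.
print(len(np.unique(codes_unity)))   # < 200
# 2.5x gain: each count still lands on its own code, with gaps.
print(len(np.unique(codes_2p5x)))    # 200
```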
 
IF the only signal read by the ADC was discreet electron charges...
Yeah, forget those obnoxious electrons that are trying to hog the spotlight.

Sorry, I couldn't resist.

Jim
 
At base ISO, the amplification of a 5DIII is, as you say, about 4 or 5 e-/ADU (= e-/raw value) in 14 bit mode.

It is not clear why the 5DIII (or any other Canon, for that matter) needs a 14-bit ADC when read noise at base ISO is 34 e-, or about 7 ADUs: even at 12 bits, the lowest bit would be totally swamped by noise from the electronics, flipping back and forth randomly, making its storage a waste of space.
14 bits is not really necessary, but it protects data from unnecessary integer-level arithmetic, such as when a camera stretches or squeezes histograms, or boosts digitizations done when the lens is using an f-number near or lower than the f-number of the microlenses. IMO, Canon goes about these things in the wrong way, but since they are doing them the wrong way, better to have a virtually analog digitization with a safety bit or more. Of course, the more accuracy-friendly method would be to use a high-quality, unmolested 12+-bit output, and put all of the scaling/corrective factors into the metadata, and have the converter convert the values to floating point upon applying them.
To create the RAW value, the circuit must suppress the extra info and round the number to the closest multiple of 4 or so. Does this sound right?
Yes, so 11 bits would be plenty for the cameras in question at base ISO and anything more is just a waste. Same goes for the 70D, different numbers but same conclusion.
11 bits is not enough for Canon DSLRs at base ISO; that would give read noise of 0.61 to 0.8 ADU, which would posterize blacks and near-blacks. Ideally, you want the digitization to be fine enough that read noise is at least 1.3 ADU, and certainly not less than 0.9, below which things deteriorate rapidly. 11 bits wouldn't be sufficient until ISO 800 or 1600, and above ISO 3200 you could drop one bit per doubling of ISO, but you always need enough bit depth to properly digitize just the read noise, as a minimum, for arbitrarily high ISOs.
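The conversion behind those ADU figures, sketched with the 34 e- read noise quoted above and an assumed ~80,000 e- full well (chosen to match "about 7 ADUs" at 14 bits, not measured data):

```python
# Read noise expressed in ADU for a given full well and ADC bit depth:
# halving the bit depth doubles the LSB in electrons, shrinking the read
# noise in ADU terms accordingly.
def read_noise_adu(read_noise_e, full_well_e, bits):
    lsb_e = full_well_e / (2 ** bits - 1)   # electrons per code step
    return read_noise_e / lsb_e

rn_14 = read_noise_adu(34.0, 80_000, 14)
rn_11 = read_noise_adu(34.0, 80_000, 11)
print(round(rn_14, 1))   # ~7.0 ADU: far more than enough dither
print(round(rn_11, 2))   # ~0.87 ADU: below the ~0.9 danger line
```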
 
12 bits is not enough to have adequate RN dither at base ISO on the D810:


It is enough at ISO 100, though.

Going from 14 to 12 bits on the D810 increases the quantization noise to more than the read noise at ISOs 64, 100, and (marginally) 200.
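That comparison follows from the standard LSB/sqrt(12) quantization-noise figure; the sensor numbers below are assumed for illustration, not measured D810 data:

```python
import math

# Ideal-ADC quantization noise is LSB / sqrt(12) ~ 0.29 LSB. Going from
# 14 to 12 bits makes each LSB 4x larger in electrons, so quantization
# noise can overtake a low-ISO read noise.
def quant_noise_e(full_well_e, bits):
    lsb_e = full_well_e / (2 ** bits - 1)
    return lsb_e / math.sqrt(12.0)

full_well = 78_000   # electrons, illustrative low-ISO figure
read_noise = 5.0     # electrons rms, illustrative

q14 = quant_noise_e(full_well, 14)
q12 = quant_noise_e(full_well, 12)
print(q14 < read_noise)   # True: read noise still dominates at 14 bits
print(q12 < read_noise)   # False: quantization noise now exceeds it
```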

In the Sony a7II, changing the bit depth from 13 to 12 bits at base ISO makes the RN inadequate as dither:


Jim
 
14 bits is not really necessary, but it protects data from unnecessary integer-level arithmetic
Sure; in fact many modern DSLRs perform integer-level arithmetic at 16 bits internally, independently of ADC bit depth. But for the 5DIII, 14-bit mode should not make one bit (;-) of difference in the amount or quality of the information captured in the raw data compared to 12-bit mode, correct?
On second look, 7 ADUs is less than 3 bits of read noise (it's not 2*3, it's 2^3, doh), so 12 bits makes more sense than 11 for the 5DIII, even using the 1 LSB = read noise ADC design criterion.

Jack
 
