Why do we still use analog gain with ISO invariant sensors?

samcd

Member
With a fixed shutter speed and aperture, there are two ways we can brighten an image:

- by using a higher ISO (analog gain)

- by increasing the exposure slider in post (digital gain)

My understanding is that most modern sensors are approximately ISO invariant (beyond a second base ISO, if they have one), meaning that these two approaches will produce very similar noise levels in the final image.

But the digital gain approach retains the maximum amount of highlight room, while every additional stop of ISO decreases highlight room by a stop. As a result, by using ISO instead of digital gain we could lose multiple stops of dynamic range without seeing any real noise improvement.
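To put rough numbers on the headroom argument, here is a back-of-envelope sketch in Python (made-up figures, assuming a roughly ISO-invariant sensor above its second base ISO):

Code:
import math

# Hypothetical numbers, purely illustrative.
full_well_e = 16000   # electrons a pixel can hold; clips the ADC at the second base ISO (400)
midgrey_e   = 250     # electrons collected by a metered mid-grey patch at the chosen exposure

def headroom_stops(clip_level_e):
    # Stops between the metered mid-grey and clipping.
    return math.log2(clip_level_e / midgrey_e)

# Option A: ISO 1600 = 2 extra stops of gain, so clipping happens 4x lower in electron terms.
print(f"ISO 1600:       {headroom_stops(full_well_e / 4):.1f} stops of highlight room")  # ~4.0

# Option B: stay at ISO 400 and push +2 EV in post; the full sensor range is kept.
print(f"ISO 400 + 2 EV: {headroom_stops(full_well_e):.1f} stops of highlight room")      # ~6.0

With similar read noise at both settings (the ISO-invariant assumption), the two renderings look about the same except that the second keeps two extra stops of highlights.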

Wouldn't we be better off if, beyond their second base ISO, cameras just baked an exposure adjustment into the exif info instead of applying more analog gain? So if "proper" image brightness required ISO 1600, the camera would instead use its highest base ISO (say, ISO 400) and then tell Lightroom to start at +2 on the exposure slider. Compared to using ISO 1600 we'd get more highlight protection and about the same noise.

Why do cameras not do this? Is there a problem I'm not seeing?
 
With a fixed shutter speed and aperture, there are two ways we can brighten an image:

- by using a higher ISO (analog gain)

- by increasing the exposure slider in post (digital gain)
ISO in digital cameras is not defined as "analog gain."

It affects the relationship between exposure and image lightness, but the way in which it operates (analog gain, digital multiplication) is intentionally unspecified and left open to whatever is convenient for a particular implementation.
Wouldn't we be better off if, beyond their second base ISO, cameras just baked an exposure adjustment into the exif info instead of applying more analog gain? So if "proper" image brightness required ISO 1600, the camera would instead use its highest base ISO (say, ISO 400) and then tell Lightroom to start at +2 on the exposure slider. Compared to using ISO 1600 we'd get more highlight protection and about the same noise.

Why do cameras not do this? Is there a problem I'm not seeing?
In-camera .JPGs have the lightness adjustment baked into them. If the camera did not apply the adjustment to the image data, but instead recorded it as a hint in the EXIF data, camera-generated JPGs would not look correct in any application except the ones that knew to look for and apply the non-standard (for .JPG) hint.

Raw files are a different story.
 
Both very good points.

So why does ISO typically operate via analog gain in raw files where applying digital gain in post would produce better results?
 
With a fixed shutter speed and aperture, there are two ways we can brighten an image:

- by using a higher ISO (analog gain)

- by increasing the exposure slider in post (digital gain)

My understanding is that most modern sensors are approximately ISO invariant (beyond a second base ISO, if they have one), meaning that these two approaches will produce very similar noise levels in the final image.

But the digital gain approach retains the maximum amount of highlight room, while every additional stop of ISO decreases highlight room by a stop. As a result, by using ISO instead of digital gain we could lose multiple stops of dynamic range without seeing any real noise improvement.

Wouldn't we be better off if, beyond their second base ISO, cameras just baked an exposure adjustment into the exif info instead of applying more analog gain? So if "proper" image brightness required ISO 1600, the camera would instead use its highest base ISO (say, ISO 400) and then tell Lightroom to start at +2 on the exposure slider. Compared to using ISO 1600 we'd get more highlight protection and about the same noise.

Why do cameras not do this? Is there a problem I'm not seeing?
The only reason (and benefit) to raise ISO is for the shadows. You are right that raising the ISO reduces the highlight headroom, but it raises the shadows by the same amount. When you have deep shadows, that may raise some of the scene above the black point.

If you want to compare the same exposure at low and high ISO, use a scene with deep shadows and compare the darkest parts of the image. Running out of headroom is a bigger problem, but if that's not the case for a particular image, the shadows comparison will tell you if you're better off at a higher ISO.
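Here is a toy simulation of the deep-shadow point (Python, invented numbers): when the ADC step is coarse compared to the read noise, analog gain applied before the ADC preserves the shadows slightly better than pushing the quantised data afterwards. With a fine enough ADC the two come out essentially the same, which is the ISO-invariant case.

Code:
import numpy as np

rng = np.random.default_rng(0)

signal_e   = 3.0       # electrons of real signal in a deep shadow
read_noise = 2.0       # electrons of read noise added before the ADC
adc_step_e = 4.0       # electrons per ADC count at base ISO (deliberately coarse)
n_pixels   = 100_000

def capture(gain):
    # Photon noise + read noise, analog gain, then ADC quantisation;
    # the result is scaled back to electron units so the two cases compare directly.
    photons = rng.poisson(signal_e, n_pixels).astype(float)
    analog  = (photons + rng.normal(0.0, read_noise, n_pixels)) * gain
    counts  = np.round(analog / adc_step_e)
    return counts * adc_step_e / gain

for name, gain in [("base ISO, pushed in post", 1.0), ("3 stops of analog gain", 8.0)]:
    patch = capture(gain)
    print(f"{name:26s} mean = {patch.mean():.2f} e-, std dev = {patch.std():.2f} e-")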
 
So why does ISO typically operate via analog gain [snip]
I don't think that it does. I've lurked/participated in countless discussions about this here at DPR, and have learned that calling ISO "sensor sensitivity" is a) not correct and b) will kick off many arguments. What I've gathered is that no one can say exactly what turning the ISO dial does in a given camera (it's a secret sauce; delivering better high-ISO performance is a closely guarded trade secret), but it is likely accomplished by some combination of physical signal amplification (ie analog gain) and digital amplification (adding/multiplying numbers).

Bobn2 is a great one to ask about this, if you can find him.

Aaron
 
With a fixed shutter speed and aperture, there are two ways we can brighten an image:

- by using a higher ISO (analog gain)

- by increasing the exposure slider in post (digital gain)

My understanding is that most modern sensors are approximately ISO invariant (beyond a second base ISO, if they have one), meaning that these two approaches will produce very similar noise levels in the final image.

But the digital gain approach retains the maximum amount of highlight room, while every additional stop of ISO decreases highlight room by a stop. As a result, by using ISO instead of digital gain we could lose multiple stops of dynamic range without seeing any real noise improvement.

Wouldn't we be better off if, beyond their second base ISO, cameras just baked an exposure adjustment into the exif info instead of applying more analog gain? So if "proper" image brightness required ISO 1600, the camera would instead use its highest base ISO (say, ISO 400) and then tell Lightroom to start at +2 on the exposure slider. Compared to using ISO 1600 we'd get more highlight protection and about the same noise.

Why do cameras not do this? Is there a problem I'm not seeing?
Because you would be creating a problem where, in practice, there isn't one!

The dynamic range you have at low ISO is only potential DR, maximum DR. It does not mean you will actually take advantage of it.

Just shoot normally and avoid clipping the highlights (exposure compensation is easy and fast). Raise the ISO when exposure is low and you will get the best out of your sensor, even if the improvement over base ISO is minor (but still visible).

If you think about it, your workflow has very little advantage, if any. It may be slightly faster, but then you have to adjust lightness in post-processing, and you don't take full advantage of your sensor.
 
With a fixed shutter speed and aperture, there are two ways we can brighten an image:

- by using a higher ISO (analog gain)

- by increasing the exposure slider in post (digital gain)
Three ways:
  1. the camera applies analog gain
  2. the camera applies digital gain
  3. digital gain is applied in post processing
Where it is of course possible that a combination of 1. and 2. is applied by the camera.

In another current ISO thread someone said that the bit depth of the A/D converter is a limitation, so analog gain is necessary.
But the digital gain approach retains the maximum amount of highlight room, while every additional stop of ISO decreases highlight room by a stop. As a result, by using ISO instead of digital gain we could lose multiple stops of dynamic range without seeing any real noise improvement.
Is that true? My Nikon Z fc can produce 14-bit raw files. I assume that applies at the base ISO of 100. But if I then set the ISO to 51200, that's a 9-stop shift, so that would mean I now only have 5 bits of image data...? That doesn't seem right. I assume at this extreme ISO I don't get the full 14 bits, but just 5?
Wouldn't we be better off if, beyond their second base ISO, cameras just baked an exposure adjustment into the exif info instead of applying more analog gain? So if "proper" image brightness required ISO 1600, the camera would instead use its highest base ISO (say, ISO 400) and then tell Lightroom to start at +2 on the exposure slider.
Sure, if the sensor does indeed work that way then this would be a good way to handle raw files. As pointed out above, that wouldn't really work for JPEGs.
 
My Nikon Z fc can produce 14-bit raw files. I assume that applies at the base ISO 100. But if I then set the ISO to 51200, that's a 9 bit shift so that would mean I now only have 5 bits of image data...? That doesn't seem right. I assume at this extreme ISO I don't get the full 14 bits, but just 5?
If you're using ISO 51200, and you're following the recommendations of the camera's metering system, and you don't blow out the in-camera .JPG highlights (e.g., in Raw + .JPG mode), it seems to follow that you are recording very little light.

ISO 51200 gives the camera permission to gather only 1/512th of the total light that it would normally gather at ISO 100. Maybe at ISO 100, your sensor is collecting 14-bit samples that look like

xxxxxxxxxxxxxx

but at ISO 51200 it is collecting 14-bit samples that look like

000000000xxxxx

before an ISO adjustment shift. Same sensitivity (in terms of how total light maps to a sensor readout, before ISO adjustment), but if you are intentionally underexposing the sensor by an extreme amount, you're throwing away most of the possible range.
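A small illustration of that bit budget (Python; this is just arithmetic, not a claim about how the Z fc actually applies its gain):

Code:
ADC_BITS   = 14
FULL_SCALE = 2**ADC_BITS - 1                  # 16383, the brightest 14-bit sample

stops_below_base = 9                          # ISO 100 -> ISO 51200 is 9 stops less exposure
max_signal = FULL_SCALE >> stops_below_base   # ~31 before any ISO adjustment shift

print(f"{FULL_SCALE:014b}   brightest sample an ISO 100 exposure can reach")
print(f"{max_signal:014b}   brightest sample the ISO 51200 exposure can reach, pre-gain")
print(f"usable bits = {max_signal.bit_length()}")   # about 5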
 
With a fixed shutter speed and aperture, there are two ways we can brighten an image:

- by using a higher ISO (analog gain)

- by increasing the exposure slider in post (digital gain)

My understanding is that most modern sensors are approximately ISO invariant (beyond a second base ISO, if they have one), meaning that these two approaches will produce very similar noise levels in the final image.

But the digital gain approach retains the maximum amount of highlight room, while every additional stop of ISO decreases highlight room by a stop. As a result, by using ISO instead of digital gain we could lose multiple stops of dynamic range without seeing any real noise improvement.

Wouldn't we be better off if, beyond their second base ISO, cameras just baked an exposure adjustment into the exif info instead of applying more analog gain? So if "proper" image brightness required ISO 1600, the camera would instead use its highest base ISO (say, ISO 400) and then tell Lightroom to start at +2 on the exposure slider. Compared to using ISO 1600 we'd get more highlight protection and about the same noise.

Why do cameras not do this? Is there a problem I'm not seeing?
By the way, in most cases, if you shoot at EV compensation 0 you have roughly the same headroom whatever the ISO!

With auto-exposure modes you are NOT at any greater risk of blowing out the highlights than usual.

So the risk of using a high ISO is not very significant, and in the rare case where you do blow the highlights, just dial the exposure compensation down.

Even if the benefit of using a high ISO is small, it has almost nothing but advantages, really.
 


ISO 51200 gives the camera permission to gather only 1/512th of the total light that it would normally gather at ISO 100. Maybe at ISO 100, your sensor is collecting 14-bit samples that look like

xxxxxxxxxxxxxx

but at ISO 51200 it is collecting 14-bit samples that look like

000000000xxxxx

before an ISO adjustment shift. Same sensitivity (in terms of how total light maps to a sensor readout, before ISO adjustment), but if you are intentionally underexposing the sensor by an extreme amount, you're throwing away most of the possible range.


Ok, time for some experimentation. I took two photos at 1/1000 and f/13. The first one at ISO 51200 and the second one at ISO 100. According to RawDigger the first one had maximum pixel values in the 8000 range. The second one: 25. So not exactly a factor 512, but in the neighborhood.

Now let's see how those photos look:

[Image: ISO 51200, converted with Nikon NX Studio, noise reduction turned off.]

[Image: ISO 100, converted with Nikon NX Studio, noise reduction turned off, the maximum +5 exposure compensation and +100 brightness (still not enough).]

Now obviously with the ISO 51200 there is still some noise massaging going on. So I exported the raw file to TIFF in RawDigger:

[Image: the raw file exported to TIFF in RawDigger.]

So I'd say that setting ISO 51200 does more than just multiplying the output from the sensor by 512.
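For what it's worth, turning the reported raw maxima into stops (just arithmetic on the numbers above):

Code:
import math

max_iso51200 = 8000    # approximate raw maximum reported by RawDigger at ISO 51200
max_iso100   = 25      # approximate raw maximum at ISO 100, same exposure

ratio = max_iso51200 / max_iso100
print(f"ratio = {ratio:.0f}x, i.e. about {math.log2(ratio):.1f} stops (512x / 9 stops expected)")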
 
With a fixed shutter speed and aperture, there are two ways we can brighten an image:

- by using a higher ISO (analog gain)

- by increasing the exposure slider in post (digital gain)

My understanding is that most modern sensors are approximately ISO invariant (beyond a second base ISO, if they have one), meaning that these two approaches will produce very similar noise levels in the final image.

But the digital gain approach retains the maximum amount of highlight room, while every additional stop of ISO decreases highlight room by a stop. As a result, by using ISO instead of digital gain we could lose multiple stops of dynamic range without seeing any real noise improvement.

Wouldn't we be better off if, beyond their second base ISO, cameras just baked an exposure adjustment into the exif info instead of applying more analog gain? So if "proper" image brightness required ISO 1600, the camera would instead use its highest base ISO (say, ISO 400) and then tell Lightroom to start at +2 on the exposure slider. Compared to using ISO 1600 we'd get more highlight protection and about the same noise.

Why do cameras not do this? Is there a problem I'm not seeing?
Since I've already been invoked in this thread, I'll add my twopenny's worth.

Firstly, thinking of what ISO does in terms of 'gain' usually leads to a number of misunderstandings. The key concept to get here is that the input and output of a camera are completely different. The input is an amount of light energy, usually quantified as exposure. The output is a perceptual specification of how light or dark the image (or patches of it) should look, technically called 'value' or 'lightness'. I'll go with the latter. Unfortunately a lot of photographers have come to use the word 'exposure' for both, which leads to the idea that to make the output 'brighter', more light is required, so we need to apply 'gain' to get more (substitute) light. It's not true.

Conceptually what is happening is that the sensor is making exposure measurements in each pixel and then a computational task is applied to derive a set of lightness values from those measurements. That's a conversion, and it can be done from any set of exposure measurements, however big or small.

The fundamental purpose of variable gain in a camera system is to allow more accurate measurement of low exposure values. If you have an ADC which records 14 bits at the exposure for 100 ISO, then at 3200 ISO it will only get to use 9 bits. By boosting the ADC input 32-fold we get to use all 14 bits.

How many bits we actually need depends on the sensor (and here I mean just the analog part of the sensor). The amount of information per pixel that the sensor gives is determined by how many photoelectrons each pixel can collect and how much electronic noise the sensor electronics add. This in turn depends on the largest exposure with which the sensor is designed to operate, the efficiency of the sensor in converting light to photoelectrons, the area of the pixel and the noisiness of the pixel electronics. If we work out the maximum information output per pixel in bits (a bit is fundamentally a unit of information) then we can work out how many bits of ADC we need to capture all of that information. We can use Bill Claff's site to get some estimates for different sensors, as follows.

So let's choose a camera, say a Sony A7. That can collect 52,000 electrons and has an electronic noise of 2.7 electrons (there can be a fractional electron because these aren't actual electrons; it is noise expressed as electron equivalents). This makes the per-pixel dynamic range of the sensor 52,000/2.7 ≈ 19,259. In bits this is log2 19,259 ≈ 14.23 bits.

So we would need at least a 15-bit ADC to capture all of the information from the sensor and thus make variable gain unnecessary. An added complication is that ADCs are imperfect, and internal noise reduces how much real information they can collect. Engineers express this as the 'effective number of bits' (ENOB). Wide ADCs typically lose a couple of bits, so a 14-bit ADC might have an ENOB of around 12. The ADCs in the A7 are pretty good and only lose a bit or so (ENOB of about 13), but it's still not enough for the camera to operate without variable gain at the ADC input.
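Restating that back-of-envelope calculation in code (the full-well, read-noise and ENOB figures are the ones quoted above, not independent measurements):

Code:
import math

full_well_e  = 52_000    # electrons a Sony A7 pixel can collect
read_noise_e = 2.7       # input-referred electronic noise, electron equivalents
adc_enob     = 13        # effective number of bits assumed for the A7's ADC

sensor_dr_bits = math.log2(full_well_e / read_noise_e)

print(f"sensor DR ~ {sensor_dr_bits:.2f} bits")        # ~14.23
print(f"ADC ENOB  ~ {adc_enob} bits")
print(f"shortfall ~ {sensor_dr_bits - adc_enob:.2f} bits -> variable gain at the ADC input still pays off")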

As to why they don't just make the ADCs better, it's because there is a constant tradeoff between bit width, speed and cost, in any ADC tech. A fully ISO invariant camera was just about achieved with the Nikon D7000 (13.5 bits sensor DR, 13 bit ADC ENOB) but since then the market has gone for higher pixel counts and faster frame rates, both of which require more conversions per second. So in effect your ISO invariance has been traded for 20+ FPS capture and 40+ MP.
 
So why does ISO typically operate via analog gain [snip]
I don't think that it does. I've lurked/participated in countless discussions about this here at DPR, and have learned that calling ISO "sensor sensitivity" is a) not correct and b) will kick off many arguments. What I've gathered is that no one can say exactly what turning the ISO dial does in a given camera (it's a secret sauce; delivering better high-ISO performance is a closely guarded trade secret), but it is likely accomplished by some combination of physical signal amplification (ie analog gain) and digital amplification (adding/multiplying numbers).

Bobn2 is a great one to ask about this, if you can find him.

Aaron
Not too difficult to figure out; compare the raw histograms of the same scene shot with the same exposure at different ISO settings. Analog gain just shifts the histogram; digital gain also generates missing codes, because of round-off errors at a constant bit depth.
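A quick simulation of that signature (Python, synthetic data): multiplying already-quantised values leaves periodic gaps in the histogram, whereas gain applied before quantisation fills every code.

Code:
import numpy as np

rng = np.random.default_rng(1)
pre_adc = rng.uniform(0, 256, 100_000)          # pre-ADC signal, arbitrary units

analog_style = np.round(pre_adc * 4)            # 4x gain applied before quantisation
digital_gain = np.round(pre_adc) * 4            # 4x multiplication of quantised data

def occupied_codes(values, levels=1024):
    counts, _ = np.histogram(values, bins=levels, range=(0, levels))
    return np.count_nonzero(counts)

print("codes used, analog-style gain:", occupied_codes(analog_style))   # nearly all 1024
print("codes used, digital gain:     ", occupied_codes(digital_gain))   # roughly 1 in 4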
 
With a fixed shutter speed and aperture, there are two ways we can brighten an image:

- by using a higher ISO (analog gain)

- by increasing the exposure slider in post (digital gain)

My understanding is that most modern sensors are approximately ISO invariant (beyond a second base ISO, if they have one), meaning that these two approaches will produce very similar noise levels in the final image.

But the digital gain approach retains the maximum amount of highlight room, while every additional stop of ISO decreases highlight room by a stop. As a result, by using ISO instead of digital gain we could lose multiple stops of dynamic range without seeing any real noise improvement.

Wouldn't we be better off if, beyond their second base ISO, cameras just baked an exposure adjustment into the exif info instead of applying more analog gain? So if "proper" image brightness required ISO 1600, the camera would instead use its highest base ISO (say, ISO 400) and then tell Lightroom to start at +2 on the exposure slider. Compared to using ISO 1600 we'd get more highlight protection and about the same noise.

Why do cameras not do this? Is there a problem I'm not seeing?
As far as I know, cameras no longer use analog gain when increasing ISO. It is done digitally after the conversion from analog to digital.
 
Both very good points.

So why does ISO typically operate via analog gain in raw files
I don't believe that is true.
 
. A fully ISO invariant camera was just about achieved with the Nikon D7000 (13.5 bits sensor DR, 13 bit ADC ENOB) but since then the market has gone for higher pixel counts and faster frame rates, both of which require more conversions per second
But ISO invariance is not a goal per se!

ISO invariance also means that SNR cannot improve when we raise ISO. Unless the sensor is perfect, I doubt this is necessarily good news.

It certainly means that the sensor is good enough at base ISO, but if they can improve SNR further when we raise ISO, then of course I prefer that.
. So in effect your ISO invariance has been traded for 20+ FPS capture and 40+ MP.
I really don't think this is the reason. The D7000 still had some important margins for improvement, so they were able to improve thanks to analog gain. It is certainly good news, in fact, that the sensors following the D7000 were not ISO invariant.

Now the sensors are so efficient that there is very little margin for improvement. The consequence is that they will become more and more ISO invariant.
 
With a fixed shutter speed and aperture, there are two ways we can brighten an image:

- by using a higher ISO (analog gain)

- by increasing the exposure slider in post (digital gain)

My understanding is that most modern sensors are approximately ISO invariant (beyond a second base ISO, if they have one), meaning that these two approaches will produce very similar noise levels in the final image.

But the digital gain approach retains the maximum amount of highlight room, while every additional stop of ISO decreases highlight room by a stop. As a result, by using ISO instead of digital gain we could lose multiple stops of dynamic range without seeing any real noise improvement.

Wouldn't we be better off if, beyond their second base ISO, cameras just baked an exposure adjustment into the exif info instead of applying more analog gain? So if "proper" image brightness required ISO 1600, the camera would instead use its highest base ISO (say, ISO 400) and then tell Lightroom to start at +2 on the exposure slider. Compared to using ISO 1600 we'd get more highlight protection and about the same noise.

Why do cameras not do this? Is there a problem I'm not seeing?
As far as I know, cameras no longer use analog gain when increasing ISO. It is done digitally after the conversion from analog to digital.
Totally incorrect.

Besides, I would rephrase the sentence slightly: this is more about variable analog gain. A dual-gain sensor has two different analog gains.
 
Besides, I would rephrase the sentence slightly: this is more about variable analog gain. A dual-gain sensor has two different analog gains.
Not usually. The most common 'dual gain' configuration is dual conversion gain, which is a completely different thing to the variable voltage gain which people usually mean when they talk about 'gain' in this context.

The whole 'gain' discussion is non-productive unless you're clear about precisely which 'gains' you're talking about. It's also a lot more sensible for us to talk about 'multiplication' and then 'analog' or 'digital' multiplication, because that is conceptually closer to what is happening.
 
. A fully ISO invariant camera was just about achieved with the Nikon D7000 (13.5 bits sensor DR, 13 bit ADC ENOB) but since then the market has gone for higher pixel counts and faster frame rates, both of which require more conversions per second
But ISO invariance is not a goal per se!

ISO invariance also means that SNR cannot improve when we raise ISO. Unless the sensor is perfect, I doubt this is necessarily good news.

It certainly means that the sensor is good enough at base ISO, but if they can improve SNR further when we raise ISO, then of course I prefer that.
I think that ISO invariance can be a sensible design goal. There is no absolute reason why we should not be able to capture all the sensor information, and doing so gives us one less thing to get wrong in capture.
. So in effect your ISO invariance has been traded for 20+ FPS capture and 40+ MP.
I really don't think this is the reason.
I said that was the effect, not the reason.
The D7000 still had some important margins for improvement, so they were able to improve thanks to analog gain.
Such as? Please do be explicit.
It is certainly good news, in fact, that the sensors following the D7000 were not ISO invariant.

Now the sensors are so efficient that there is very little margin for improvement. The consequence is that they will become more and more ISO invariant.
It's not about the sensors, it's about the ADCs, and there is certainly plenty of room for improvement of those.
 
With a fixed shutter speed and aperture, there are two ways we can brighten an image:

- by using a higher ISO (analog gain)

- by increasing the exposure slider in post (digital gain)

My understanding is that most modern sensors are approximately ISO invariant (beyond a second base ISO, if they have one), meaning that these two approaches will produce very similar noise levels in the final image.

But the digital gain approach retains the maximum amount of highlight room, while every additional stop of ISO decreases highlight room by a stop. As a result, by using ISO instead of digital gain we could lose multiple stops of dynamic range without seeing any real noise improvement.

Wouldn't we be better off if, beyond their second base ISO, cameras just baked an exposure adjustment into the exif info instead of applying more analog gain? So if "proper" image brightness required ISO 1600, the camera would instead use its highest base ISO (say, ISO 400) and then tell Lightroom to start at +2 on the exposure slider. Compared to using ISO 1600 we'd get more highlight protection and about the same noise.

Why do cameras not do this? Is there a problem I'm not seeing?
As far as I know, cameras no longer use analog gain when increasing ISO.
They do, as the raw histograms indicate: purely digital gain would leave a spike at every Nth level with gaps in between, and that is not what you see.

https://www.rawdigger.com/howtouse/iso-is-seldom-just-digital-gain

--
http://www.libraw.org/
 
