With a fixed shutter speed and aperture, there are two ways we can brighten an image:
- by using a higher ISO (analog gain)
- by increasing the exposure slider in post (digital gain)
My understanding is that most modern sensors are approximately ISO invariant (beyond a second base ISO, if they have one), meaning that these two approaches will produce very similar noise levels in the final image.
But the digital-gain approach retains the maximum highlight headroom, while every additional stop of ISO costs a stop of headroom. As a result, by using ISO instead of digital gain we could lose multiple stops of dynamic range without seeing any real noise improvement.
Wouldn't we be better off if, beyond their second base ISO, cameras just baked an exposure adjustment into the EXIF metadata instead of applying more analog gain? So if "proper" image brightness required ISO 1600, the camera would instead use its highest base ISO (say, ISO 400) and tell Lightroom to start at +2 on the exposure slider. Compared to using ISO 1600 we'd get more highlight protection and about the same noise.
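To make the arithmetic concrete, here's a minimal sketch of the proposal in Python; the ISO figures are just the example values from above, not any real camera's behaviour:

```python
import math

base_iso = 400     # highest base ISO of the hypothetical dual-gain sensor
target_iso = 1600  # brightness the metering is asking for

# Instead of 2 stops of analog gain, write a +2 EV hint into the metadata.
push_stops = math.log2(target_iso / base_iso)  # = 2.0

print(f"Record at ISO {base_iso}, tag the raw file with +{push_stops:.0f} EV")
print(f"Highlight headroom preserved vs ISO {target_iso}: {push_stops:.0f} stops")
```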
Why do cameras not do this? Is there a problem I'm not seeing?
Since I've already been invoked in this thread, I'll add my two penn'orth.
Firstly, thinking of what ISO does in terms of 'gain' usually leads to a number of misunderstandings. The key concept to grasp is that the input and output of a camera are completely different things. The input is an amount of light energy, usually quantified as exposure. The output is a perceptual specification of how light or dark the image (or patches of it) should look, technically called 'value' or 'lightness'; I'll go with the latter. Unfortunately a lot of photographers have come to use the word 'exposure' for both, which leads to the idea that making the output 'brighter' requires more light, so we need to apply 'gain' to supply (substitute) light. That's not true. Conceptually, the sensor makes an exposure measurement in each pixel and a computational task then derives a set of lightness values from those measurements. That's a conversion, and it can be done from any set of exposure measurements, however big or small.

The fundamental purpose of variable gain in a camera system is to allow more accurate measurement of low exposures. If you have an ADC which spans 14 bits at the saturation exposure for ISO 100, then at ISO 3200 the signal is 32 times smaller and only reaches 9 of those bits. By boosting the ADC input 32-fold we get to use all 14.
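A quick sketch of that bit-utilisation arithmetic (assuming an idealised 14-bit ADC whose full scale sits at the ISO 100 saturation exposure):

```python
import math

ADC_BITS = 14   # assumed ADC resolution, full scale at ISO 100 saturation
BASE_ISO = 100

def bits_used_without_gain(iso):
    """Each stop above base ISO halves the signal reaching the ADC,
    leaving one more bit of its range unused."""
    return ADC_BITS - math.log2(iso / BASE_ISO)

for iso in (100, 400, 1600, 3200):
    print(f"ISO {iso:>4}: {bits_used_without_gain(iso):.0f} of {ADC_BITS} bits used")
# ISO 3200 reaches only 9 bits; a 32x analog boost ahead of the ADC restores all 14.
```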
How many bits we actually need depends on the sensor (and here I mean just the analog part of the sensor). The amount of information per pixel the sensor gives us is determined by how many photoelectrons each pixel can collect and how much electronic noise the sensor electronics add. That in turn depends on the largest exposure the sensor is designed to operate with, its efficiency in converting light to photoelectrons, the area of the pixel and the noisiness of the pixel electronics. If we work out the maximum information output per pixel in bits (a bit is fundamentally a unit of information), we know how many bits of ADC we need to capture all of that information. We can use Bill Claff's site to get some estimates for different sensors, as follows.
So let's choose a camera, say a Sony A7. Its pixels can each collect about 52,000 photoelectrons, and its electronic noise is 2.7 electrons (a fractional electron is fine here because we're not counting actual electrons; it's noise expressed in electron equivalents). This makes the per-pixel dynamic range of the sensor 52,000/2.7 ≈ 19,259. As bits this is log2(19,259) ≈ 14.23 bits.
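In code, the same calculation (using the A7 figures quoted above):

```python
import math

full_well = 52_000  # photoelectrons at saturation
read_noise = 2.7    # electronic noise, in electron equivalents

dr_ratio = full_well / read_noise  # ~19,259:1
dr_bits = math.log2(dr_ratio)      # ~14.23 bits of information per pixel

print(f"Per-pixel DR: {dr_ratio:,.0f}:1 = {dr_bits:.2f} bits")
```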
So we would need at least a 15-bit ADC to capture all of the information from the sensor and thus make variable gain unnecessary. An added complication is that ADCs are imperfect: internal noise reduces how much real information they can capture, which engineers express as the 'effective number of bits' (ENOB). Wide ADCs typically lose a couple of bits, so a 14-bit ADC might have an ENOB of around 12. The ADCs in the A7 are pretty good and lose only a bit or so (ENOB of about 13), but that's still not enough for the camera to operate without variable gain at the ADC input.
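Continuing the sketch, the shortfall works out roughly like this (the ENOB figure is the approximate one given above):

```python
import math

sensor_bits = math.log2(52_000 / 2.7)    # ~14.23 bits the sensor can deliver
ideal_adc_bits = math.ceil(sensor_bits)  # 15 bits for a lossless ideal ADC
a7_enob = 13.0                           # approximate ENOB of the A7's 14-bit ADCs

shortfall = sensor_bits - a7_enob        # ~1.2 bits lost reading out at unity gain
print(f"Need {ideal_adc_bits} ideal bits; the real ADC resolves ~{a7_enob:.0f}")
print(f"Shortfall at base gain: {shortfall:.2f} bits, so variable gain still helps")
```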
As to why they don't just make the ADCs better: there is a constant tradeoff between bit width, speed and cost in any ADC technology. A fully ISO-invariant camera was just about achieved with the Nikon D7000 (13.5 bits of sensor DR against a 13-bit ADC ENOB), but since then the market has gone for higher pixel counts and faster frame rates, both of which demand more conversions per second. So in effect your ISO invariance has been traded for 20+ fps capture and 40+ MP.
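To give a rough sense of the throughput pressure (illustrative numbers, not any specific camera's specification):

```python
pixels = 40e6  # 40 MP sensor
fps = 20       # 20 frames per second of continuous capture

conversions_per_second = pixels * fps
print(f"{conversions_per_second / 1e9:.1f} billion ADC conversions per second")

# For the single-slope column ADCs common in CMOS image sensors, conversion
# time scales roughly as 2^bits, so each extra bit of resolution doubles the
# time (or the parallelism and power) required -- hence narrower, faster ADCs.
```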