Why do we still use analog gain with ISO invariant sensors?

Sometimes, I would like to understand this obsession against the exposure triangle.
Sometimes, I would like to understand this obsession for the exposure triangle, the thing that fails to account for the 'photo-' in 'photography'.
I hope we can end the debate next time by just saying that we disagree.
Disagreeing with a falsehood is a given.
 
Sometimes, I would like to understand this obsession against the exposure triangle.
Sometimes, I would like to understand this obsession for the exposure triangle, the thing that fails to account for the 'photo-' in 'photography'.
I do not see any obsession in these forums, only in your imagination. The discussion always starts with "missiles" launched against the exposure triangle. But the exposure triangle resists better than expected. Oh, it makes me think of a real-world situation...
I hope we can end the debate next time by just saying that we disagree.
Disagreeing with a falsehood is a given.
Another missile which did not reach its target.
 
With a fixed shutter speed and aperture, there are two ways we can brighten an image:

- by using a higher ISO (analog gain)

- by increasing the exposure slider in post (digital gain)

My understanding is that most modern sensors are approximately ISO invariant, meaning that these two approaches will produce very similar noise levels in the final image.

But the digital gain approach retains the maximum amount of highlight room, while every additional stop of ISO decreases highlight room by a stop.
But the analog ISO boosts the shadows. Which approach you choose (and you can do digital ISO in PP) depends on whether you have a highlight problem or a shadows problem at your optimal exposure.
Why do cameras not do this? Is there a problem I'm not seeing?
It's to pull the shadows above the noise floor of the ADC. Digital gain (after the ADC) can't do that.
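A rough way to see this numerically: the sketch below is a toy model with made-up noise figures and bit depth, not any real camera's readout chain. It simulates a deep-shadow exposure read out either with analog gain applied before the ADC or with the same factor applied digitally after the ADC. The ADC-stage noise hits both paths, but only the analog-gain path lifts the signal above it first.

import numpy as np

rng = np.random.default_rng(0)

# All numbers below are illustrative assumptions, not measured camera data.
signal_e = 4.0           # mean shadow signal, electrons per pixel
pre_read_noise_e = 1.5   # read noise upstream of the gain stage, electrons
adc_noise_dn = 2.0       # noise added at/after the ADC, in DN
adc_step_e = 4.0         # ADC step size at base gain, electrons per DN
n = 1_000_000

def readout(analog_gain, digital_gain):
    """Signal chain: photons -> analog gain -> ADC -> digital gain."""
    electrons = rng.poisson(signal_e, n).astype(float)     # shot noise
    electrons += rng.normal(0.0, pre_read_noise_e, n)      # read noise
    dn = np.round(electrons * analog_gain / adc_step_e)    # quantisation
    dn += rng.normal(0.0, adc_noise_dn, n)                 # ADC-stage noise
    return dn * digital_gain

for label, out in (("analog gain x8 ", readout(8.0, 1.0)),
                   ("digital gain x8", readout(1.0, 8.0))):
    print(f"{label}: mean = {out.mean():5.1f} DN, SNR = {out.mean() / out.std():.2f}")

Both paths end up at the same mean level, but in this toy model the digitally gained version has markedly worse SNR, because the ADC-stage noise and the quantisation error get multiplied along with the tiny signal. The flip side, as noted above, is that each stop of analog gain also costs a stop of highlight headroom before the ADC clips.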

This is a useful demonstration.
 
I disagree with the presentation of the triangle. I prefer to see the sides labeled rather than the vertices:
The exact opposite for me...

With sides, it is really difficult to understand how you project on each side. Very misleading, in fact!

The logic is so much easier with vertices. For each vertex, it depends on the orthogonal distance from the vertex to the segment through the point parallel to the opposite side. With a point inside the triangle, you just calculate the three distances, one per vertex. It is the same as above, in fact, but without ambiguities.

Personally, I would draw the perpendicular bisectors to show the graduation of each axis; I am just too lazy to draw them.

Special note to all the anti-ET: we do not care at all about the exact projections or exact values; it simply illustrates a logic. I prefer to anticipate :-)
[Attached image: exposure triangle diagram]

Furthermore, the triangle need not be equilateral, so my labeling method can provide better clarity. For example, with some of us it will look like this:

[Attached image: non-equilateral exposure triangle diagram]
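For what it's worth, here is one way the "read the point against each vertex" logic above can be made concrete, using barycentric coordinates: each weight is proportional to the perpendicular distance from the point to the side opposite that vertex, and so grows as the point approaches that vertex. It works the same whether or not the triangle is equilateral. The vertex positions and labels below are invented for this sketch, not taken from the posted images.

import numpy as np

# A triangle with its vertices labelled by the three exposure controls
# (coordinates chosen arbitrarily for this sketch).
vertices = {
    "Aperture": np.array([0.0, 0.0]),
    "Shutter":  np.array([1.0, 0.0]),
    "ISO":      np.array([0.5, np.sqrt(3.0) / 2.0]),
}

def barycentric(p, tri):
    """Weights of point p: each weight is proportional to the perpendicular
    distance from p to the side opposite that vertex; the three sum to 1."""
    (name_a, a), (name_b, b), (name_c, c) = tri.items()
    m = np.column_stack((b - a, c - a))
    w_b, w_c = np.linalg.solve(m, p - a)
    return {name_a: 1.0 - w_b - w_c, name_b: w_b, name_c: w_c}

# A point inside the triangle: the weight for a vertex grows as the point
# moves toward that vertex (i.e. away from the side opposite it).
for name, w in barycentric(np.array([0.45, 0.30]), vertices).items():
    print(f"{name:8s}: {w:.2f}")

For the example point in this layout the weights come out roughly 0.38 / 0.28 / 0.35, i.e. slightly biased toward the "Aperture" corner; the exact numbers are, of course, beside the point here.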
 
There's certainly a trade-off, and there is another benefit to a single analog gain and digitization, too: compressed RAW files are much smaller when most of the most significant bits are unused. That would make higher-ISO RAW files smaller, not larger.
I'm not sure which way round you mean this. If read-chain gain is increased at high ISO settings, the lower bits get filled with (useless) noise, which impedes compression. If the upper bits are just zeros (no variable gain), they will compress well.
I don't know what you're seeing in my paragraph, but that's pretty much what I thought I said. There's a trade-off: doing things the way most manufacturers do, with variable electronic gain, loses headroom, AND using a single gain would also give smaller files for higher ISO exposure indices (lower absolute exposures).
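A quick way to see the file-size side of this, assuming a synthetic 14-bit "raw" array and plain zlib rather than any real camera's codec: the same low exposure compresses far better when it is stored at base gain (upper bits all zero) than when a 16x read-chain gain has pushed amplified noise down into the lower bits.

import numpy as np
import zlib

rng = np.random.default_rng(0)
n = 1_000_000

# One shadow-level exposure, in electrons: shot noise plus a little read noise.
electrons = rng.poisson(40, n) + rng.normal(0.0, 1.5, n)

def digitise(gain, step_e_per_dn=4.0, full_scale=16383):
    """Apply read-chain gain, then quantise to 14-bit DN values."""
    dn = np.round(electrons * gain / step_e_per_dn)
    return np.clip(dn, 0, full_scale).astype(np.uint16)

for label, gain in (("base gain      ", 1.0), ("16x analog gain", 16.0)):
    raw_bytes = digitise(gain).tobytes()
    ratio = len(zlib.compress(raw_bytes, 9)) / len(raw_bytes)
    print(f"{label}: compresses to {ratio:.0%} of its original size")

The numbers are made up, but the direction matches the point above: raising the values with gain fills the low-order bits with noise that the compressor cannot remove, while leaving the values small keeps most bits as predictable zeros.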
In the end, the ISO control is probably redundant. The camera processor has all the information that it needs to optimally pack the sensor information into the available ADC width, varying the read-chain gain if need be. The JFIF engine has all the information that it needs from the raw file to make a rendering. At most, ISO should be a cue to help signal the photographer's intent, though I think that a rendering-intent control would be better. Auto ISO is a kind of poor effort at that.
Much photography could abandon the legacy models and parameters of exposure, but I think there is still some utility in some legacy approaches, like the simplicity of using what we now call "full manual" in even, unchanging light, which allows one to convert every image in the same predetermined manner, with no exposure interpretation by the camera that can go awry.
 
I disagree with the presentation of the triangle. I prefer to see the sides labeled rather than the vertices:
The exact opposite for me...

With sides, it is really difficult to understand how you project on each side. Very misleading, in fact!

The logic is so much easier with vertices. For each vertex, it depends on the orthogonal distance from the vertex to the segment through the point parallel to the opposite side. With a point inside the triangle, you just calculate the three distances, one per vertex. It is the same as above, in fact, but without ambiguities.

Personally, I would draw the perpendicular bisectors to show the graduation of each axis; I am just too lazy to draw them.

Special note to all the anti-ET: we do not care at all about the exact projections or exact values; it simply illustrates a logic. I prefer to anticipate :-)
Indeed: which is why I prefer the normalization provided by an equilateral triangle (while remaining anti-ET personally).
<snip>
Furthermore, the triangle need not be equilateral, so my labeling method can provide better clarity. <snip>
 
<snip>

Much photography could abandon the legacy models and parameters of exposure, but I think there is still some utility in some legacy approaches, like the simplicity of using what we now call "full manual" in even, unchanging light, which allows one to convert every image in the same predetermined manner, with no exposure interpretation by the camera that can go awry.
Agreed. Modern photography is drifting toward 'the camera does everything' and now AI is drifting toward replacing the human in post-processing.

I have a dual approach: my old Sigma is always in manual and set at its native ISO; my Lumix G9 is generally in aperture priority, Auto WB (shock, horror) and dual OS (shaky hands).

--
what you got is not what you saw ...
 
In the end, the ISO control is probably redundant. The camera processor has all the information that it needs to optimally pack the sensor information into the available ADC width, varying the read-chain gain if need be.
ETTR is not an automatic process. Sometimes you want to leave some headroom above the highlight peaks, depending on how they roll off; sometimes you want to intentionally saturate some very bright point sources to give you more shadow boost. It's an esthetic, not a formulaic process.
 
In the end, the ISO control is probably redundant. The camera processor has all the information that it needs to optimally pack the sensor information into the available ADC width, varying the read-chain gain if need be.
ETTR is not an automatic process. Sometimes you want to leave some headroom above the highlight peaks, depending on how they roll off; sometimes you want to intentionally saturate some very bright point sources to give you more shadow boost. It's an esthetic, not a formulaic process.
While I agree totally with the sentiment, the phrase "optimally pack" does leave the door open to the dreaded AI. A bit like Nikon's "optimal" exposure control involving a huge database of prior images.

Tomorrow's camera processor could even detect an excessive scene DR, automatically do the necessary bracketing, merging and any other pre-processing before writing "raw" to card.
 
