Shooting high ISO vs underexposing and lifting in post question

Slaginfected Contributing Member • Posts: 745
Re: High ISO comparison - what do these examples show?

alanr0 wrote:

Slaginfected wrote:

J A C S wrote:

Slaginfected wrote:

What really stumps me is the ignorance I've come to encounter here from supposedly really intelligent people. Where is the curiosity? Where is the "why do these discussions keep coming up, why are there quite a few people out there with these questions, where does it come from?" Instead: "You are wrong, this is how it is supposed to be, ignore what you see. End of discussion."

<snip>

Seriously?

https://www.dpreview.com/forums/post/64442377

Now you can run to PDR and check, but the differences there are a bit too small to account for visual artifacting like that. Nasty pesky reality.

What is that supposed to prove?

That the A7s3 offers, hands down, more processing latitude than the A9 at higher ISOs. You could replace the A9 with an A7III or -- if you want to get smacked badly -- an A7rIV. The result with the A7III will be a bit better than with the A9; the A7rIV will actually be worse. The reason is the way the RAW data ends up being processed, at least currently. If you have a better idea of how to mitigate these clearly visible artifacts, you would make not only me but a lot of people happy.

To my mind, PDR is not a particularly useful tool for comparing these images.

Your original point made here was that bit-shifting the output from an ISO-invariant sensor introduces quantization artefacts, compared with applying analog amplification before digitisation.

While this may be true when the ADC quantisation step (LSB) is larger than the read noise, the impact decreases to imperceptible levels as read noise rises above the ADC step size - a point already made by Jim Kasson, with Jim's simulation results linked here. To the best of my knowledge, this is accepted wisdom in signal processing circles.
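
A quick toy simulation illustrates the point (my own sketch with illustrative numbers, not Jim's code): once the noise at the ADC input is comparable to the quantisation step, sub-LSB signal levels are no longer lost - they come back out in the average.

```python
# Toy numpy sketch: an "analog" level sitting between two ADC codes,
# digitised with increasing amounts of read noise. Values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
true_level = 3.4          # analog level in DN, between codes 3 and 4
n = 200_000               # number of samples (think: pixels being averaged)

for read_noise in (0.0, 0.25, 0.5, 1.0, 2.0):   # read noise in DN
    samples = true_level + rng.normal(0.0, read_noise, n)
    codes = np.round(samples)                    # ideal mid-tread ADC, 1 DN step
    print(f"read noise {read_noise:4.2f} DN -> mean code {codes.mean():.3f}")
# With zero noise every sample quantises to 3 (a 0.4 DN bias that shows up as
# posterisation); with read noise around 1 DN the average recovers ~3.4.
```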

The images in the A7sii vs A9 comparison you linked were captured at ISO 10000. For the A7s this is 3 stops above unity ISO, where each photo-electron produces a 1 DN step in digital output. For the A9, we are 4 stops above unity ISO. I would expect quantisation effects to be negligible in both sets of images.

Regarding image noise, after re-sampling to the same resolution as the A7s, I would expect the A9 to be somewhat noisier in the deepest shadows. The per-pixel read noise is slightly higher in the older model and there are twice as many pixels per unit area. At higher light levels, photon (shot) noise will dominate, and the higher quantum efficiency of the A9 sensor will deliver rather better signal to noise ratio than the A7s, for the same exposure.
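
To make that trade-off concrete, here is a back-of-the-envelope sketch (entirely hypothetical numbers, not measured values for either camera) of how per-pixel read noise, pixel density and quantum efficiency combine when you compare noise per unit sensor area:

```python
# Hypothetical comparison of noise per unit sensor area; QE, read noise and
# pixel density are made-up round numbers, not real A9 / A7s figures.
import numpy as np

def snr_per_area(photons, pixels_per_area, qe, read_noise_e):
    """SNR of the total signal collected over one unit of sensor area."""
    signal_e = photons * qe                          # electrons collected
    shot_var = signal_e                              # Poisson (shot) noise variance
    read_var = pixels_per_area * read_noise_e ** 2   # read noise adds once per pixel
    return signal_e / np.sqrt(shot_var + read_var)

for photons in (10, 100, 1_000, 10_000):             # photons falling on one unit area
    dense  = snr_per_area(photons, pixels_per_area=2, qe=0.60, read_noise_e=2.0)
    sparse = snr_per_area(photons, pixels_per_area=1, qe=0.50, read_noise_e=1.8)
    print(f"{photons:6d} photons/area: dense grid SNR {dense:6.2f}, "
          f"sparse grid SNR {sparse:6.2f}")
# Deep shadows: the denser grid pays for its extra per-pixel read noise.
# Higher light: its higher QE wins, as described above.
```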

Is greater processing latitude evident in the thread you linked? I really can't tell. As J A C S pointed out, the deepest shadows of the A7s images are much darker, suppressing most of the noise. For a useful comparison the images must be processed with identical tone curves and equivalent black levels.

With better matched post-processing the images could be relevant to a discussion of pixel size - but that is a different thread which opens up a whole new can of worms.

Cheers.

The problem is simple:

  • Processing is part of digital photography. Talking only about the effects you see when analysing and comparing RAW data is certainly interesting, but without processing all that stuff is just data. For a full analysis and a full understanding of digital photography, the processing must be included.
  • Processing is (strongly) non-linear and emphasizes the darker parts over the brighter ones, which is somewhat unfortunate if you have a weak signal there, for example because the pixels are small and there is only so much light to work with. It also means one should be careful about using linear interpretations of RAW data to explain the results of the full processing chain.
  • Quite a few things in the processing are done per pixel, which means the signal quality of each pixel has an effect on those calculations.
  • Scaling happens towards the end of the processing, which means the artifacts caused by a weak signal up to that point will not magically disappear just because you are downscaling, for example. This can be so bad that some of the artifacts are still visible in postage-stamp-sized low-res images (see the sketch after this list).
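
As a quick illustration of that last point (my own toy example, with made-up numbers): posterize a shadow gradient early in the pipeline and then downscale it; the plateaus survive the downscale.

```python
# Toy sketch: a posterized shadow gradient keeps its banding after a 4x
# downscale, because averaging identical neighbouring values returns the
# same value. Numbers are illustrative only.
import numpy as np

h, w = 64, 1024
gradient = np.tile(np.linspace(0.0, 4.0, w), (h, 1))   # smooth ramp, 0..4 DN
posterized = np.round(gradient)                         # weak signal: only ~5 codes left

# naive 4x4 box downscale
small = posterized.reshape(h // 4, 4, w // 4, 4).mean(axis=(1, 3))

print("distinct levels before downscale:", np.unique(posterized).size)
print("distinct levels after downscale :", np.unique(small).size)
print("largest jump along a downscaled row:", np.diff(small[0]).max())
# The step edges get slightly smeared, but the flat plateaus between them
# remain, so the banding is still visible at the smaller size.
```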

And to come back to the initial problem: Exposure compensation means that you stretch the tonal steps of the darker parts of the image. That also means all the noise in the original gets stretched along with it and has no in-between values in the result. If you don't add noise, 14-bit data will invariably have larger steps between the tonal values than, for example, 16-bit data, and the effect becomes even more pronounced if you only have 12 bits or, please avoid, 10 bits available. It also means the more you push, the more pronounced that problem becomes.
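
A small sketch of that effect (my own example; the bit depths and the +4 EV push are just assumptions for illustration):

```python
# Quantize a deep-shadow ramp at different bit depths, push it +4 EV digitally,
# and look at how many distinct tonal values survive and how far apart they are.
import numpy as np

def push_shadows(bits, ev):
    full_scale = 2 ** bits - 1
    ramp = np.linspace(0.0, 0.01, 4096)        # deep shadows: bottom ~1% of the range
    codes = np.round(ramp * full_scale)        # ADC output in DN
    pushed = np.unique(codes * 2 ** ev)        # digital exposure compensation
    gap = np.diff(pushed).min() / full_scale   # tonal gap as a fraction of full scale
    return pushed.size, gap

for bits in (16, 14, 12, 10):
    n_levels, gap = push_shadows(bits, ev=4.0)
    print(f"{bits}-bit: {n_levels:4d} distinct levels after a +4 EV push, "
          f"gap between them = {gap:.4%} of full scale")
# Fewer source bits -> fewer codes in the shadows -> after the push the surviving
# levels sit further apart, with nothing in between unless noise fills the gaps.
```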

Now, compare that to some analogue processing. You will not have steps in there, because you start with a more or less "stepless" signal, so you can scale it and it stays stepless. You will have noise "in between" automatically, so once the signal goes through an ADC the result will look very different from the digitally processed one.
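
A minimal sketch of that contrast (mine, with noise left out on purpose so only the quantization behaviour shows):

```python
# The same weak "analog" scene, either amplified before the ADC (high ISO)
# or digitized first and multiplied afterwards (push in post). A gain of 16x
# (+4 EV) is an arbitrary example value.
import numpy as np

rng = np.random.default_rng(1)
scene = rng.uniform(0.0, 4.0, 100_000)       # weak, effectively "stepless" signal in DN
gain = 16                                    # +4 EV

analog_first = np.round(scene * gain)        # amplify, then quantize
digital_push = np.round(scene) * gain        # quantize, then multiply

print("distinct values, analog gain then ADC :", np.unique(analog_first).size)
print("distinct values, ADC then digital push:", np.unique(digital_push).size)
# Amplifying before quantization keeps (nearly) every intermediate level;
# pushing after quantization leaves only every 16th level populated.
```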

Many DSP algorithms rely on input and output being 1:1, which also means that you don't disrupt the steps between the values too much. Exposure compensation in photography is pretty disruptive; it invalidates this 1:1 assumption big time. That is OK, but if you want the results to be more correct, you either have to fix the data up beforehand so that it "survives" the treatment, or you have to fix it up afterwards. In both cases noise (dithering) is the solution.
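
Here is a rough sketch of what I mean by that (my own toy example; the ~1 DN of added noise and the +4 EV push are assumptions, not a recipe):

```python
# Dither the weak data before it gets quantized and pushed, so the stretched
# result still averages back toward the true values instead of sitting on
# widely spaced steps.
import numpy as np

rng = np.random.default_rng(2)
scene = np.linspace(0.0, 4.0, 100_000)            # weak shadow ramp, in DN
gain = 16                                         # +4 EV push

plain    = np.round(scene) * gain                                     # quantize, push, no dither
dithered = np.round(scene + rng.normal(0.0, 1.0, scene.size)) * gain  # ~1 DN dither first

def block_mean(x, k=256):
    """Average k samples at a time, like a heavy downscale or local smoothing."""
    return x[: x.size // k * k].reshape(-1, k).mean(axis=1)

truth = block_mean(scene * gain)
err_plain    = np.abs(block_mean(plain)    - truth).mean()
err_dithered = np.abs(block_mean(dithered) - truth).mean()
print(f"mean error after smoothing, no dither: {err_plain:.2f} DN")
print(f"mean error after smoothing, dithered : {err_dithered:.2f} DN")
# The undithered version keeps its 16 DN plateaus no matter how much you average;
# the dithered version trades the banding for fine noise that averages away.
```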
