Shooting high ISO vs underexposing and lifting in post question

Look at the posterization problem from pushing things around (too much) and people telling me there is none: "ISO-invariant! All the same!" OK, well then, how about D810 ISO 64 vs. ISO 12800? Because it always helps to take things to the extreme and see whether the hypothesis still holds.
Straw man argument. No one is claiming that ISO invariance holds over that range. The model is this: noise comes from pre-PGA sources in the camera, PGA and post-PGA sources in the camera, and shot noise (I'm leaving out some sources). A camera is ISOless when the PGA and post-PGA sources are negligible with respect to the others.
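As a minimal sketch of that model, with made-up illustrative numbers (not measurements of any real camera): the downstream noise is roughly fixed in output units, so referred back to the sensor input it shrinks as the ISO gain rises, which is exactly the "negligible" condition above.

```python
# A minimal sketch of the read-noise model described above, with made-up
# illustrative numbers (not measurements of any real camera).
import math

def input_referred_read_noise(pre_gain_e, post_gain_dn, gain_dn_per_e):
    """Total read noise referred to the sensor input, in electrons.

    pre_gain_e     -- noise sources before the PGA, in electrons (e-)
    post_gain_dn   -- PGA and downstream (ADC etc.) noise, in output DN
    gain_dn_per_e  -- conversion gain at the chosen ISO, in DN per electron
    """
    post_gain_e = post_gain_dn / gain_dn_per_e   # refer downstream noise to the input
    return math.hypot(pre_gain_e, post_gain_e)

# Hypothetical camera: 1 e- of pre-PGA noise, 3 DN of downstream noise.
pre_e, post_dn = 1.0, 3.0
for iso, gain in [(100, 0.25), (800, 2.0), (6400, 16.0)]:
    n = input_referred_read_noise(pre_e, post_dn, gain)
    print(f"ISO {iso:5d}: gain {gain:5.2f} DN/e-, read noise {n:.2f} e-")

# At ISO 100 the 3 DN of downstream noise dominates (about 12 e- input-referred);
# at ISO 6400 it contributes only ~0.2 e-, i.e. it is negligible next to the
# 1 e- pre-PGA noise -- which is exactly the "ISOless" condition described above.
```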
But that is the often-stated conclusion drawn from ISOless sensors,
I have never seen anyone with any credibility say that. That's why it's a straw man.
which I just put into a more extreme example where it is obvious that it cannot really work. Which in turn means that as soon as you start pushing things up you degrade image quality, and with 14 bits the limits are tighter than the analog range of most sensors, even if the sensor is ISOless (or close to it).
Did you see where I said "negligible" above? The argument you are proffering can sacrifice certain improvements in the highlights -- depending on the situation -- in return for invisible improvements in the shadows.

Not a good tradeoff IMHO.
 
Whence comes the assumption that “holes” are a problem? If the signal is sufficiently dithered before scaling, it should also be true after scaling. The noise and quantization step are scaled by the same factor, so the former is still larger than the latter. I also don’t see a problem when hovering over the second image in this page (which truncates it by 3 bits, see histogram). What am I missing?
"If the signal is sufficiently dithered ..." Look at that. This is what is called a precondition. Now imagine we are pushing things up 4 stops or why not 5 stops, pulling tonal values and also noise values apart. How much noise is needed in the signal for your precondition to still hold?
 
  • Processing is (strongly) non-linear and emphasizes the darker parts over the brighter ones, which is somewhat unfortunate if you have a weak signal there, for example because the pixels are small and there is only so much light to work with. It also means that one should be careful using linear interpretations of RAW data to explain the results of the full processing chain.
Default tone curves in Lr and others have toe and shoulder regions where the shadow and highlight steps are suppressed (contrast is lowered), in order to get more contrast in the midtones. This is the opposite of what you are saying happens.

Are you talking about the gamma of the image once it is converted to a CIE color space? That is a different thing.
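To illustrate the toe/shoulder point, here is a toy S-shaped tone curve (a smoothstep, purely for illustration; real Lr/ACR default curves differ in detail but share the shape). Its local slope, i.e. local contrast, is below 1 in the shadows and highlights and above 1 in the midtones:

```python
# A toy S-shaped tone curve (smoothstep), just to illustrate the point above:
# its local slope (= local contrast) is below 1 in the toe and shoulder and
# above 1 in the midtones. Real Lr/ACR curves differ, but share this shape.
def tone_curve(x):            # x is a normalized linear value in [0, 1]
    return x * x * (3 - 2 * x)

def local_slope(x, eps=1e-4): # numerical derivative = local contrast
    return (tone_curve(x + eps) - tone_curve(x - eps)) / (2 * eps)

for x in (0.05, 0.25, 0.50, 0.75, 0.95):
    print(f"input {x:.2f}: output {tone_curve(x):.3f}, local contrast {local_slope(x):.2f}")
# Deep shadows and bright highlights get contrast < 1 (compressed),
# midtones get contrast > 1 (expanded).
```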
 
Whence comes the assumption that “holes” are a problem? If the signal is sufficiently dithered before scaling, it should also be true after scaling. The noise and quantization step are scaled by the same factor, so the former is still larger than the latter. I also don’t see a problem when hovering over the second image in this page (which truncates it by 3 bits, see histogram). What am I missing?
"If the signal is sufficiently dithered ..." Look at that. This is what is called a precondition. Now imagine we are pushing things up 4 stops or why not 5 stops, pulling tonal values and also noise values apart. How much noise is needed in the signal for your precondition to still hold?
3 or 4 LSBs is plenty. I said that earlier, and linked to examples.
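For anyone who wants to check the "how much dither is enough" question numerically, here is a small, purely illustrative simulation: it quantizes signal levels with varying amounts of Gaussian noise and measures the residual banding after a 4-stop push.

```python
# A small numerical check of the question above: how much noise (dither) does
# the signal need, in LSBs, before quantizing, so that a 4-stop push in post
# does not show posterization? We look at the quantizer's average response to
# a true level m: with no noise it is a staircase (visible banding after the
# push); with enough noise it becomes a straight line. Purely illustrative.
import numpy as np
rng = np.random.default_rng(0)

levels = np.linspace(10.0, 12.0, 201)     # true signal levels near black, in LSBs
push = 16                                  # +4 stops applied in post
trials = 20000                             # noise realizations per level

for sigma in (0.0, 0.25, 0.5, 1.0, 2.0):
    noise = rng.normal(0.0, sigma, (trials, 1)) if sigma > 0 else np.zeros((1, 1))
    avg = np.round(levels + noise).mean(axis=0)        # average quantized output per level
    worst = np.abs(avg - levels).max() * push          # worst-case banding error after push, in DN
    print(f"dither {sigma:4.2f} LSB: worst mean error after 4-stop push = {worst:5.2f} DN")
# With no dither the error is half a step (8 DN after the push), i.e. visible
# banding. Around 0.5-1 LSB of noise the staircase is already smoothed out,
# which is why the 3-4 LSBs quoted above are plenty.
```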
 
If you want to answer so that I understand, then simplification and plain English is the way to go. I am not technically illiterate, but I'm surely not at the level you guys are. Even if I have trouble following many of the comments 100%, I still pick up things and learn, so it isn't wasted either, and I am very grateful for the involvement this topic has gotten.
I think you understood pretty well. Other people in this topic have obviously figured this out as well. I can summarize it like this. As an example, imagine a normal exposure made with the Sony 7RM4 at ISO 400.

You could also take a picture of the same subject with 4 stops less exposure, setting the camera to ISO 6400. This picture would obviously be noisier. Or you could take the picture of the same subject with 4 stops less exposure but leave the camera at ISO 400. Compared with that, the ISO 6400 setting gives you almost no noise advantage, but you lose 4 stops of dynamic range (highlight headroom). The penalty of staying at ISO 400, on the other hand, is that your review image and in-camera JPEG are much too dark.

The camera should be much smarter about this, and some cameras are. But you knew that already, because it's the whole point of your discussion. I just added some numbers.
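To put rough numbers on that example: the sketch below uses assumed read-noise and clipping figures (of a plausible order of magnitude for a modern BSI sensor, not measured values for the a7R IV) and compares the two ways of making the 4-stops-darker capture.

```python
# A back-of-the-envelope version of the example above. The read-noise and
# clipping figures are assumptions for illustration (roughly the right order
# of magnitude for a modern BSI sensor), not measured values for the a7R IV.
import math

read_noise_e = {400: 1.6, 6400: 1.3}     # assumed input-referred read noise, e-
full_well_e  = {400: 13000, 6400: 800}   # assumed clipping level in electrons at each ISO

signal_e = 20.0   # photons collected in a deep shadow with the 4-stop-reduced exposure

for iso in (6400, 400):
    shot = math.sqrt(signal_e)                      # shot noise
    snr = signal_e / math.hypot(shot, read_noise_e[iso])
    headroom = math.log2(full_well_e[iso] / signal_e)
    print(f"ISO {iso:4d}: shadow SNR {snr:4.1f}, highlight headroom {headroom:4.1f} stops")
# The shadow SNR is nearly identical (shot noise dominates either way), but the
# ISO 400 file keeps ~4 stops more highlight headroom -- the "lose 4 stops of
# dynamic range" penalty of dialing in ISO 6400.
```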
That it works in practice, with negative exposure compensation and lifting the RAW in post, I knew from shooting low-light club events. What got me interested was why no cameras actually worked like this and protected the highlights automatically, something I have to do manually with my bodies. And it can be really tricky to judge the live view and the built-in JPEG because of the underexposure (though activating DRO at +5 stops does help). At the time I asked I didn't know that some cameras, like Fuji's, actually do use this method. Now that I know that, I conclude that there seems to be no real technical reason not to implement it.
 
And to come back to the initial problem: exposure compensation means that you stretch the tonal steps of the darker parts of the image. That also means all the noise in the original gets stretched along with them and has no in-between values in the result. If you don't add noise, 14-bit data will invariably have larger steps between the tonal values than, for example, 16-bit data...
You are forgetting the effects of dither. The differences between 16-bit raw precision and 14-bit raw precision vary between non-existent and subtle.
Ah, so you are saying that for example the color cast of the 14 bit RAWs compared to the 16 bit ones out of GFX100 and GFX100s is just a matter of fixing the black point? Hmmm ....

There is a lot more going on between 16 and 14 bit there than just the color cast; in fact, some of it shows what we were talking about and what I was hinting at repeatedly: posterization. Look at the tonal variations in the 14-bit vs. the 16-bit files. The 16-bit files simply show more nuances in parts that are visually "flattened" in the 14-bit examples.

Now the actual question is: if the 14-bit data is expanded to 16 bit with added noise/dithering, how does that compare to the native 16 bit out of the camera? If it is close(r) to the native 16 bit, you know that the processing software is "broken", and widening the data further that way, to 18 or even 20 bit, would get you closer to the ISOless ideal, for example.

Also note that there is a difference between the existence of an artifact and declaring it non-existent or subtle. The former is a fact and important for analysing image processing; the latter is an opinion.
I once had a Hasselblad H2D-39. It had 16-bit precision. The read noise was about 32 LSBs.
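That 32-LSB figure is the whole point: quantization adds only about step/sqrt(12) of noise in quadrature, so once the read noise is tens of LSBs the extra encoding precision buys nothing. A quick calculation using just the quoted number:

```python
# Quick check of what ~32 LSBs of read noise means for quantization: the
# quantization step adds q/sqrt(12) of noise in quadrature, so extra bits stop
# mattering once the read noise is several LSBs. Numbers below are just the
# 32-LSB figure quoted above, re-expressed at coarser bit depths.
import math

read_noise_16bit_lsb = 32.0
for bits in (16, 15, 14, 13, 12):
    step = 2 ** (16 - bits)                       # quantization step in 16-bit LSBs
    q_noise = step / math.sqrt(12)                # quantization noise for that step
    total = math.hypot(read_noise_16bit_lsb, q_noise)
    print(f"{bits}-bit: total noise {total:6.2f} (vs {read_noise_16bit_lsb} without quantization)")
# Even truncated to 13 bits the total noise rises by well under 1%, i.e. the
# 16-bit encoding buys nothing measurable when the read noise is 32 LSBs.
```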
 
Whence comes the assumption that “holes” are a problem? If the signal is sufficiently dithered before scaling, it should also be true after scaling. The noise and quantization step are scaled by the same factor, so the former is still larger than the latter. I also don’t see a problem when hovering over the second image in this page (which truncates it by 3 bits, see histogram). What am I missing?
"If the signal is sufficiently dithered ..." Look at that. This is what is called a precondition. Now imagine we are pushing things up 4 stops or why not 5 stops, pulling tonal values and also noise values apart. How much noise is needed in the signal for your precondition to still hold?
Exactly as much as without the pushing, that was my point. The scaling doesn’t create holes — they were already there, just not encoded.
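A small numerical check of that claim, for anyone who wants it: the RMS error introduced by quantization is the same relative to the signal before and after the push; the push only spreads the codes apart, which is what shows up as histogram holes.

```python
# A small check of the claim above: pushing after quantization does not add
# error, it only rescales the error that quantization already introduced
# (and spreads the codes apart, which is what shows up as histogram "holes").
import numpy as np
rng = np.random.default_rng(1)

signal = rng.normal(50.0, 2.0, 100_000)        # arbitrary noisy scene values, in LSBs
quantized = np.round(signal)                   # 1-LSB quantization in the raw file
push = 16                                      # +4 stops in post

err_before = np.sqrt(np.mean((quantized - signal) ** 2))
err_after  = np.sqrt(np.mean((quantized * push - signal * push) ** 2))

print(f"RMS quantization error before push: {err_before:.3f} LSB")
print(f"RMS quantization error after push:  {err_after:.3f} DN  (= {err_after/push:.3f} LSB)")
print("distinct codes after push are multiples of", push,
      "->", np.unique(quantized * push)[:5], "...")
# Relative to the (pushed) true signal, the error is unchanged; only the code
# spacing grew. The holes correspond to information already discarded at
# quantization time, not to anything the push removed.
```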
 
Now you are saying that applying the same process at ISO10k which you do for ISO100 is somehow not valid and the differences you see there "prove nothing". Maybe you could elaborate on that? Magic handwaving won't do, though.
For starters, I do not see the differences you seem to see. Once we are settled on that, we can discuss what it proves even if I can see them.

Do you have the RAW files?
Well, the info is just out there. For example: https://www.dpreview.com/reviews/im...1&x=-0.8896187933978725&y=-1.0262140037186216

Look at the black, which is pretty much purple-blue with the A7rIV. The image will also have a distinctly desaturated feel to it. And because our seeing depends a lot on experience, there are ways to at least partially mitigate this: for example, if you have the same scene, flip between the images instead of looking at them side by side.

If everything is the same, then where is this tint coming from? Worse, if we assume that the software applies the same process at all ISOs and that there is a linear dependency between ISO and DR, then this clearly visible difference must also be there towards lower ISOs, even if you don't see the tint anymore. Naturally, in a long processing chain there is always a weak link, and there are parts affected by limits inherent to the processing, so as the ISO goes down this difference is eventually counteracted by such effects in the chain, giving higher-resolution sensors an advantage.
 
Slaginfected said:
J A C S said:
Slaginfected said:
Now you are saying that applying the same process at ISO10k which you do for ISO100 is somehow not valid and the differences you see there "prove nothing". Maybe you could elaborate on that? Magic handwaving won't do, though.
For starters, I do not see the differences you seem to see. Once we are settled on that, we can discuss what it proves even if I can see them.

Do you have the RAW files?
Well, the info is just out there. For example: https://www.dpreview.com/reviews/im...1&x=-0.8896187933978725&y=-1.0262140037186216
This is different info; different ISO, scene, processing. I told you that the evidence you presented was not convincing and this is still true.
Member said:
Look at the black which is pretty much purple-blue with the A7rIV.
This is due to nonlinearity at the bottom with possibly not well chosen black point. In any case, it is a problem, and it is fairly common.
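For what it's worth, here is a toy illustration (all numbers invented) of the mechanism: if values below black are clipped instead of kept negative, every channel near black picks up a positive bias, and the white-balance multipliers, which are larger for R and B than for G, turn that bias into a purple/magenta tint.

```python
# A toy illustration of why a poorly handled black point gives deep shadows a
# purple/magenta tint. True signal is zero; only read noise is present. If the
# converter clips negative values after black subtraction, every channel gets
# biased upward, and the white-balance multipliers (larger for R and B than
# for G) amplify that bias unevenly. All numbers here are made up.
import numpy as np
rng = np.random.default_rng(2)

read_noise = 3.0                            # DN of read noise around the black level
wb_gain = {"R": 2.0, "G": 1.0, "B": 1.6}    # typical-looking white-balance multipliers

n = 1_000_000
black_subtracted = rng.normal(0.0, read_noise, n)   # ideal: mean is exactly 0

for label, pixels in [("signed (correct)", black_subtracted),
                      ("clipped at zero", np.clip(black_subtracted, 0, None))]:
    bias = pixels.mean()
    out = {ch: round(float(gain * bias), 2) for ch, gain in wb_gain.items()}
    print(f"{label:17s} -> mean deep-shadow value after WB: {out}")
# With clipping, all channels pick up a positive bias (~0.4 * read noise), and
# after white balance R and B end up higher than G -- a magenta/purple black.
```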
Member said:
And the image will also have a distinct desaturated feel to it.
I do not see that.

Just for the fun of it, this is what DXO Photo Lab can do. The NR settings are default. The purple tint is gone.


Member said:
And because our seeing depends a lot on experience, there are ways to at least partially mitigate this, like for example if you have the same scene, just flip between the images instead of looking at them side by side.

If everything is the same, where is this tint coming from then? Worse, if we assume that the software is applying the same process over all ISOs and that there is a linear dependency between ISO and DR, this difference which is clearly visible must also be there towards lower ISOs even if you don't see the tint anymore. Naturally, when talking about a long processing chain, there is always a weak link and parts which are affected by limits inherent to the processing, so this difference when going down with the ISOs is eventually counteracted by such effects in the chain, giving higher res sensors an advantage.
The DR metric we use does not measure everything that screws the image, this is clear.
 
Well, it does not prove that. The A7s3 image has darker shadows and hides the noise better. Both at 12mp, they look quite similar.

Most importantly, this is one scene, one pair of cameras, one ISO, one take on what "more processing latitude" means. It proves nothing.
I mean, I don't see a difference at ISO 100 between a 5D3 and a D750, yet if you start underexposing and pushing things up, the well-known problems of the 5D3 become visible. This doesn't make the 5D3 unusable as a camera (which is an opinion), but the D750 just has more processing latitude, obviously.

We apply the very same process at higher ISOs with the A7s3 and A9, for example, and see that the A7s3 holds things together better, which means the A7s3 has more processing latitude.

Now you are saying that applying the same process at ISO 10k that you do at ISO 100 is somehow not valid and the differences you see there "prove nothing". Maybe you could elaborate on that? Magic handwaving won't do, though.
Without equalising the post-processing so that the comparison images have equivalent black levels and identical tone curves, I have no way of knowing whether there is a significant difference to explain.

Regarding digital scaling of the output at ISO 100 compared with ISO 10000: at ISO 10000, the Sony sensors you referenced have very low absolute levels of read noise.

At ISO 100 the Canon 5Diii has rather high read noise, including relatively high levels of downstream conversion noise. The input-referred read noise falls roughly inversely with ISO.

In contrast, the Nikon D750, while not exactly ISOless, has much lower ISO 100 read noise, which changes much more slowly with ISO. This gives a D750 image captured at low ISO higher engineering dynamic range, and much greater processing latitude, than an equivalent 5Diii image.

https://www.photonstophotos.net/Charts/RN_e.htm#Canon%20EOS%205D%20Mark%20III_14,Nikon%20D750_14,Sony%20ILCE-7SM3_14

At ISO 3200 and higher, there is little to choose between the 5Diii and D750 in terms of read noise. They are both almost-isoless, with around 2.5 e- read noise.

At ISO 10000 the recent Sony sensors have even less read noise, which is almost independent of ISO.

Explaining the difference in behaviour for Canon 5Diii at ISO 100 compared with Sony A7siii at ISO 10000 is simple physics. No magic handwaving required.
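A rough numerical sketch of that comparison, using approximate read-noise and clipping figures chosen to behave like the cameras discussed (they are not the chart's exact values):

```python
# A rough numerical sketch of the comparison above. The read-noise and
# clipping figures are approximations chosen to behave like the cameras
# discussed (high downstream noise at base ISO for the 5Diii, low and
# nearly ISO-invariant read noise for the D750); not the chart's values.
import math

cameras = {
    # iso: (read noise in e-, clipping level in e-)
    "5Diii-like": {100: (33.0, 68000), 3200: (2.6, 2100)},
    "D750-like":  {100: (6.0, 78000),  3200: (2.5, 2400)},
}

for name, data in cameras.items():
    for iso, (rn, fw) in data.items():
        edr = math.log2(fw / rn)   # engineering dynamic range, stops
        print(f"{name:11s} ISO {iso:4d}: read noise {rn:5.1f} e-, EDR {edr:4.1f} stops")
# At ISO 100 the D750-like camera has roughly 2.5-3 stops more engineering DR,
# which is the extra latitude you see when pushing deep shadows; by ISO 3200
# both are nearly ISOless and the difference essentially disappears.
```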
Because I'm evil, here is something for you: people wrote back in the day that the D750 was better suited to low-light images than the D810. And it seems it doesn't stop there; you see similar things mentioned between the Z6 and Z7 (and between their second-generation versions), between the A7III and A7rIII (and also between the A7rIII and A7rIV, meaning there must be a larger difference between the A7III and A7rIV), etc. So ... any explanations for that?

You know, the problem here at DPR is that quite a few people can explain to you all the nitty-gritty details of how a sensor works, the RAW data, and a couple of other things. But no one can give you even an estimate of how the SNR of the data changes / is affected by the processing afterwards, numerical problems along the way, calculation error estimates, etc. Because if someone were able to, they would have thrown it around already, including my way. That hasn't happened so far, which you could count as a strong indication that nothing like this exists. Still, that doesn't prevent people from telling me that the stuff I'm seeing when actually processing files is wrong, etc. Seriously?

Plus there are quite a few questions surrounding that topic. If the results were all the same, why do camera makers give cameras different max ISO ratings? For example, the A7 line seems to have rather consistent max ISO ratings, with certain differences between the models, and those seem to roughly match the processing-latitude differences you see at higher ISOs. Strange, isn't it? And there are many more such questions which, combined with logical reasoning, give a strong indication that "there is something".

And while I'm throwing around questions: how much experience do you have with low-light photography? You know, short exposure times, not really ideal light, maybe even LED lights thrown into the mix, stuff like that? High four-digit and low five-digit ISOs. Just wondering ...
 
Now you are saying that applying the same process at ISO10k which you do for ISO100 is somehow not valid and the differences you see there "prove nothing". Maybe you could elaborate on that? Magic handwaving won't do, though.
For starters, I do not see the differences you seem to see. Once we are settled on that, we can discuss what it proves even if I can see them.

Do you have the RAW files?
Well, the info is just out there. For example: https://www.dpreview.com/reviews/im...1&x=-0.8896187933978725&y=-1.0262140037186216

Look at the black which is pretty much purple-blue with the A7rIV. And the image will also have a distinct desaturated feel to it. And because our seeing depends a lot on experience, there are ways to at least partially mitigate this, like for example if you have the same scene, just flip between the images instead of looking at them side by side.

If everything is the same, where is this tint coming from then?
Wrong black point in the raw converter.
Worse, if we assume that the software is applying the same process over all ISOs and that there is a linear dependency between ISO and DR, this difference which is clearly visible must also be there towards lower ISOs even if you don't see the tint anymore. Naturally, when talking about a long processing chain, there is always a weak link and parts which are affected by limits inherent to the processing, so this difference when going down with the ISOs is eventually counteracted by such effects in the chain, giving higher res sensors an advantage.
 
Because I'm evil, here something for you: People wrote back in the days that the D750 is better suited for low-light images than the D810. And it seems like it doesn't stop there, you see similar things being mentioned between Z6 and Z7 (and between their 2nd iteration version), between A7III and A7rIII (and also between A7rIII and A7rIV, meaning there must be a larger difference between A7III and A7rIV) etc. So ... any explanations for that?
Doing comparisons on a pixel vs pixel basis rather than a whole-image basis.
 
You know, the problem here at DPR is, that quite a few people can explain you all the nitty gritty details of how a sensor works and the RAW data and a couple other things. But noone can give you even an estimate of how the SNR of the data changes / is affected by the processing of the data afterwards, numerical problems on the way, calculation error estimations etc.
Quite a few people here are capable of it, and some even do it routinely (including those involved in the development of raw converters and image-processing algorithms). Analysis of processing in raw converters is rather straightforward but has a short shelf life.

SNR is not enough to characterize processing, btw. One needs to look at at least a couple more parameters, indicating how resolution holds in both luma and colour channels, as well as at the visual quality of the resulting noise.
 
Now you are saying that applying the same process at ISO10k which you do for ISO100 is somehow not valid and the differences you see there "prove nothing". Maybe you could elaborate on that? Magic handwaving won't do, though.
For starters, I do not see the differences you seem to see. Once we are settled on that, we can discuss what it proves even if I can see them.

Do you have the RAW files?
Well, the info is just out there. For example: https://www.dpreview.com/reviews/im...1&x=-0.8896187933978725&y=-1.0262140037186216
This is different info; different ISO, scene, processing. I told you that the evidence you presented was not convincing and this is still true.
Look at the black which is pretty much purple-blue with the A7rIV.
This is due to nonlinearity at the bottom with possibly not well chosen black point. In any case, it is a problem, and it is fairly common.
It shows that things are not holding up there. You see it both when pushing things too far and when the ISO goes too high (and any combination of the two).
And the image will also have a distinct desaturated feel to it.
I do not see that.

Just for the fun of it, this is what DXO Photo Lab can do. The NR settings are default. The purple tint is gone.
Yes, I'm aware of DxO, and this is with their neural-network denoiser. Like every "AI" solution it has its quirks (I remember one RAW denoiser example here at DPR), but, as neural networks go, "fixing" the colors is then part of the deal. It is just highly interesting that neither the camera makers nor companies like DxO, Adobe & Co. ever found an actual algorithm to make things "right". So now they use "AI" to fix that for them ...

Smartphones use similar tech, which is why it is no wonder the results have a smartphone-ish look to them. Plus I have yet to see anyone doing the underexposure + push thing, only single examples without any reference point.

And because our seeing depends a lot on experience, there are ways to at least partially mitigate this, like for example if you have the same scene, just flip between the images instead of looking at them side by side.

If everything is the same, where is this tint coming from then? Worse, if we assume that the software is applying the same process over all ISOs and that there is a linear dependency between ISO and DR, this difference which is clearly visible must also be there towards lower ISOs even if you don't see the tint anymore. Naturally, when talking about a long processing chain, there is always a weak link and parts which are affected by limits inherent to the processing, so this difference when going down with the ISOs is eventually counteracted by such effects in the chain, giving higher res sensors an advantage.
The DR metric we use does not measure everything that screws the image, this is clear.
 
Because I'm evil, here something for you: People wrote back in the days that the D750 is better suited for low-light images than the D810. And it seems like it doesn't stop there, you see similar things being mentioned between Z6 and Z7 (and between their 2nd iteration version), between A7III and A7rIII (and also between A7rIII and A7rIV, meaning there must be a larger difference between A7III and A7rIV) etc. So ... any explanations for that?
People write that Earth is flat, that Elvis is still alive, and that the man in the sky is watching us, just to mention a few more...
You know, the problem here at DPR is, that quite a few people can explain you all the nitty gritty details of how a sensor works and the RAW data and a couple other things. But noone can give you even an estimate of how the SNR of the data changes / is affected by the processing of the data afterwards, numerical problems on the way, calculation error estimations etc.
You are evil, indeed. And wrong. The SNR of the data does not change with processing. It is data; it should be in a "lockbox", and what you are talking about is noise in the processed image. How noise changes under linear transformations (think about the color matrix) is known, and I have hinted at this before. DXO even plots noise ellipses. I do not want to go there because you would dismiss it anyway, regardless of the fact that you asked for it. One can explain what happens under some kinds of non-linear processing as well, but that would be too technical.
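For the linear part, the standard result is covariance propagation: if M is the colour matrix, the noise covariance transforms as cov_out = M cov_in M^T. A minimal sketch with an invented, merely plausible matrix:

```python
# A minimal sketch of the point above: under a linear transformation (e.g. the
# camera-RGB -> output-RGB colour matrix M), noise propagates as a covariance,
# cov_out = M @ cov_in @ M.T. The matrix below is illustrative, not any real
# camera's calibration.
import numpy as np

M = np.array([[ 1.6, -0.5, -0.1],     # hypothetical colour-correction matrix
              [-0.2,  1.5, -0.3],
              [ 0.0, -0.6,  1.6]])

sigma_in = np.array([2.0, 1.5, 2.2])       # per-channel noise in camera RGB (DN)
cov_in = np.diag(sigma_in ** 2)            # assume uncorrelated sensor noise

cov_out = M @ cov_in @ M.T                 # standard covariance propagation
sigma_out = np.sqrt(np.diag(cov_out))

print("per-channel noise before:", sigma_in)
print("per-channel noise after: ", sigma_out.round(2))
print("R-G correlation after:   ", round(cov_out[0, 1] / (sigma_out[0] * sigma_out[1]), 2))
# The matrix amplifies the noise and correlates the channels, which is what
# the noise ellipses plotted by DXO visualise.
```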

Numerical problems are rarely problems but sometimes they can create posterization; we had threads like this before.

Calculation error estimates are a topic of numerical analysis, but it would be too much for our purposes. There is so much noise in our images that we do not need to worry about that.

So you are completely wrong about your claim in bold above.
Because if someone were able, they would have thrown this around already, including my way. Which didn't happen so far, which you could count as a strong indication nothing like this exists. Still, that doesn't prevent people from telling me that the stuff I'm seeing when actually processing files is wrong etc. Seriously?
Yes. All the theory in the world means nothing if we are not seeing what you think you are.
Plus there are quite a few questions surrounding that topic. If the results would be all the same, why do camera makers give cameras different max ISO ratings? For example, the A7 line seems to have rather consistent max ISO ratings, and they have a certain difference between them, which seems to match up, roughly, the processing latitude differences you see at higher ISOs, for example. Strange, isn't it? And there are many more such questions combined with logical reasoning which just gives strong indication that "there is something".
Many cameras today, and my Canons in the past, had completely useless scaled high ISOs because it made them look better.
And while I'm throwing around questions: How much experience do you have with low-light photography? You know, short exposure times, not really ideal light, maybe even LED lights thrown into the mix, stuff like that? Higher 4 and lower 5 digit ISOs. Just wondering ...
None of us ever shoots in low light, I guess...
 
Now you are saying that applying the same process at ISO10k which you do for ISO100 is somehow not valid and the differences you see there "prove nothing". Maybe you could elaborate on that? Magic handwaving won't do, though.
For starters, I do not see the differences you seem to see. Once we are settled on that, we can discuss what it proves even if I can see them.

Do you have the RAW files?
Well, the info is just out there. For example: https://www.dpreview.com/reviews/im...1&x=-0.8896187933978725&y=-1.0262140037186216

Look at the black which is pretty much purple-blue with the A7rIV. And the image will also have a distinct desaturated feel to it. And because our seeing depends a lot on experience, there are ways to at least partially mitigate this, like for example if you have the same scene, just flip between the images instead of looking at them side by side.

If everything is the same, where is this tint coming from then?
Wrong black point in the raw converter.
Yes, suboptimal processing close to the black point. One can't always rely on "known" metadata.
Worse, if we assume that the software is applying the same process over all ISOs and that there is a linear dependency between ISO and DR, this difference which is clearly visible must also be there towards lower ISOs even if you don't see the tint anymore. Naturally, when talking about a long processing chain, there is always a weak link and parts which are affected by limits inherent to the processing, so this difference when going down with the ISOs is eventually counteracted by such effects in the chain, giving higher res sensors an advantage.
 
As J A C S pointed out, the deepest shadows of the A7s images are much darker,
ILCE-7SM3 has its ISO calibrated differently from ILCE-9, too.
Now, we can somewhat correct the A9 image by pulling things down
Responsivity differences corrected after the exposure has ended bring a lot of uncertainty, especially when combined with non-linear raw conversion, such as Adobe converters perform by default.

Such a comparison of processed JPEGs isn't something I would rely upon when comparing cameras and sensors.
The problem is people working with files, seeing differences, stumbling over DR charts and then wondering why those show different results from what they experienced. That question gets posted somewhere in the DPR forums, for example, and then a long-winded discussion starts.

I would expect something along the lines of "yes, there are these differences when processing, because what happens is X-Y-Z-whatever". But usually it is "you are wrong about what you are seeing, the DR charts are correct, blah blah". Or it pretty much ends up there, even if written differently. *THAT* is the problem.

And I honestly don't understand why.
 
Just for the fun of it, this is what DXO Photo Lab can do. The NR settings are default. The purple tint is gone.
Yes, I'm aware of DxO, and this is with their neural network denoiser.
The tint is gone with their non-AI NR as well.
Like every "AI" solution it has quirks (I remember this one RAW denoiser example here at DPR), but, as neural networks go, "fixing" the colors is part of the deal then. It is just highly interesting that neither camera makers nor companies like DxO, Adobe & Co. have ever found an actual algorithm to make things "right". So they use "AI" now to fix that for them ...
What is "right"? I am all ears. This would revolutionize math and imaging as we know it.
Smartphones use similar tech, which is why it is no wonder the results have a smartphone-ish look to them. Plus I have yet to see anyone doing the underexposure + push thing, only single examples without any reference point.
Like what you did? Every example is a single one.


 
Because I'm evil, here something for you: People wrote back in the days that the D750 is better suited for low-light images than the D810. And it seems like it doesn't stop there, you see similar things being mentioned between Z6 and Z7 (and between their 2nd iteration version), between A7III and A7rIII (and also between A7rIII and A7rIV, meaning there must be a larger difference between A7III and A7rIV) etc. So ... any explanations for that?
People write that Earth is flat, that Elvis is still alive, and that the man in the sky is watching us, just to mention a few more...
You know, the problem here at DPR is, that quite a few people can explain you all the nitty gritty details of how a sensor works and the RAW data and a couple other things. But noone can give you even an estimate of how the SNR of the data changes / is affected by the processing of the data afterwards, numerical problems on the way, calculation error estimations etc.
You are evil, indeed. And wrong. The SNR of the data does not change with processing. It is data, it should be in a "lockbox", and what you are talking about is noise in the processed image. How noise changes under linear transformations (think about the color matrix) is known, and I have hinted at this before. DXO even plots noise ellipses.
You mean these fancy things which get much larger for higher-resolution cameras than for lower-resolution ones towards higher ISOs and less available light? And if we stray from daylight-type light it gets even worse?
I do not want to go there because you would dismiss it anyway regardless of the fact that you asked for it. One can explain what happens under some kinds of non-linear processing, as well but that would be too technical.

Numerical problems are rarely problems but sometimes they can create posterization; we had threads like this before.

Calculation error estimates are a topic of numerical analysis, but it would be too much for our purposes. There is so much noise in our images that we do not need to worry about that.

So you are completely wrong about your claim in bold above.
Because if someone were able, they would have thrown this around already, including my way. Which didn't happen so far, which you could count as a strong indication nothing like this exists. Still, that doesn't prevent people from telling me that the stuff I'm seeing when actually processing files is wrong etc. Seriously?
Yes. All the theory in the world means nothing if we are not seeing what you think you are.
Plus there are quite a few questions surrounding that topic. If the results would be all the same, why do camera makers give cameras different max ISO ratings? For example, the A7 line seems to have rather consistent max ISO ratings, and they have a certain difference between them, which seems to match up, roughly, the processing latitude differences you see at higher ISOs, for example. Strange, isn't it? And there are many more such questions combined with logical reasoning which just gives strong indication that "there is something".
Many cameras today, and my Canons in the past, had completely useless scaled high ISOs because it made them look better.
And while I'm throwing around questions: How much experience do you have with low-light photography? You know, short exposure times, not really ideal light, maybe even LED lights thrown into the mix, stuff like that? Higher 4 and lower 5 digit ISOs. Just wondering ...
None of us ever shoots in low light, I guess...
 
As J A C S pointed out, the deepest shadows of the A7s images are much darker,
ILCE-7SM3 has its ISO calibrated differently from ILCE-9, too.
Now, we can somewhat correct the A9 image by pulling things down
Responsivity differences corrected after the exposure has ended bring a lot of uncertainty, especially when being combined with non-linear raw conversion, like Adobe converters perform by default.

Such comparison of processed JPEGs isn't something I would rely upon when comparing cameras and sensors.
The problem is people working with files, seeing differences, stumbling over DR diagrams and then wondering why they show different results than they experienced.
And when they are told why, they ignore it?
That question is stated for example somewhere in the DPR forums and then a longwinded discussion starts.

I would expect something along the lines "yes, there are these differences when processing, because what happens is X-Y-Z-whatever". But usually it is "you are wrong what you are seeing, DR diagrams are correct, blablabla". Or it pretty much ends up with that, even if written differently. *THAT* is the problem.

And I honestly don't understand why.
 
