Dynamic range and sensor size

So, if I've followed it correctly, the explanation is:
  1. DR is measured from a floor of an acceptable amount of noise to saturation.
  2. Noise is random.
  3. The degree of enlargement determines how visible the noise is (looking more closely makes anything more visible).
  4. So a smaller sensor, which requires more enlargement, makes noise more intrusive, so the floor of the range is a little higher.
Maybe we should invite Bill Claff over to explain the results.
It is explained here:

http://photonstophotos.net/GeneralT...r/DX_Crop_Mode_Photographic_Dynamic_Range.htm
I admit I don't understand...

dxo normalises noise to given viewing conditions. It is fair.

Now they say that DR is the difference in stops between the acceptable noise (SNR=1) and the brightest value.

What is wrong with that? I do not understand. The important thing is to normalise noise, and they do it.
I do not understand the question. Are you asking why they normalize to the same viewing size? Or why the engineering definition does not include (in an explicit way) the normalization?
What is wrong with the way dxo measures DR (looks good to me..) and why do you think PDR is better ? I admit I do not really understand PDR.
It is a matter of taste. Each of those metrics tells you a part of the truth only.

What matters is the whole noise-to-signal curve (assuming that the noise distribution is given). The engineering definition assumes that there is additive (read, in our case) noise and somewhat arbitrarily declares the lowest usable signal level to be equal to that noise floor instead of, say, twice that floor. The PDR definition picks some, again somewhat arbitrary, minimal acceptable SNR and counts the number of stops above that. That minimal SNR includes both shot and read noise (and whatever other noise is there).
Not sure I get it, really sorry.

The only difference is the threshold, SNR = 20 for PDR instead of 1? No fundamental difference between the two methods except the threshold?
Not quite. In the engineering definition, the read noise is measured with no signal. If you add a signal equal to the noise level, this signal has its own noise which will be added to the noise floor. So when the signal is at the noise floor, the noise is higher than the noise floor actually.

If you want the noise to be equal to the level of the signal, in presence of a signal, you have to solve some equation about the level of that signal, and the answer will be a bit above the noise floor.

As an example, say that the noise floor is 4 electrons per pixel (and let us stay at the pixel level). If the signal is 4 electrons, its photon noise would be 2 (with Poisson statistics instead of Gaussian ones, but let us ignore this now). Then a signal of 4 electrons would have total noise sqrt(2^2 + 4^2) = 4.47..., and this is above the noise floor. To find the "right" signal level, you need to solve sqrt(x + 4^2) = x, which gives you x = 4.53 or so.
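The little equation above can be checked with a few lines of code; this is just a sketch of the arithmetic, with the 4-electron noise floor taken from the example:

```python
import math

def snr_one_signal(read_noise):
    """Signal level x (in electrons) at which total noise equals the signal:
    sqrt(x + read_noise**2) = x, i.e. x**2 - x - read_noise**2 = 0 (positive root)."""
    return (1 + math.sqrt(1 + 4 * read_noise**2)) / 2

x = snr_one_signal(4)
total_noise = math.sqrt(x + 4**2)
print(round(x, 2), round(total_noise, 2))  # 4.53 4.53
```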
On the one hand, I think I understand the DR as defined by dxo; it is simple and makes perfect sense.

On the other hand, PDR looks complex, but maybe it is quite simple: is it just the same method with a different threshold?
Ok, I understand now, very well explained, thanks a lot !
BTW, 4 electrons of noise is too low, I believe. I think that it is several times larger. To find the level of the "lowest usable signal" according to Bill's criterion, you need to solve sqrt(x + n^2) = x/20, where n is the noise floor. If n = 20, this subtracts 5 stops from the engineering DR (computed as log_2(x/n)).
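A quick sketch of that calculation (the SNR = 20 criterion and n = 20 are the values from the text above):

```python
import math

def lowest_usable_signal(n, min_snr=20):
    """Solve sqrt(x + n**2) = x / min_snr for x (electrons):
    x**2 - min_snr**2 * x - (min_snr * n)**2 = 0, positive root."""
    s2 = min_snr**2
    return (s2 + math.sqrt(s2**2 + 4 * s2 * n**2)) / 2

x = lowest_usable_signal(20)    # about 647 electrons
stops_lost = math.log2(x / 20)  # about 5 stops below the engineering floor
```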
 
Can someone give me a logically persuasive explanation of the fact that smaller sensors have lower DR?

This is the comparison of D800 FX and DX: same sensor, same pixels. I cannot make the logical leap needed to grasp why trimming off the outside of the frame reduces the DR of what is left. Surely cropping in processing won't do the same (rationally it cannot), but I'm at a loss to understand why.
I don't know if dynamic range (DR) will be less because of sensor size. Some FF cameras have lower dynamic range than MFT cameras, and their sensors are 2X bigger.

Back in the day when film was transitioning to DSLR, the conventional wisdom was that film cameras had more DR than DSLRs. Perhaps that was the case in 2001. But today, digital cameras have 14 to 15 stops of DR. And, with raw files, they capture all of it, even the detail that can't be seen in the preview image.

DR is one thing, and then there is bit depth: 8-bit vs 12-bit vs 14-bit. All this plays a huge part in DR and what a camera is capable of doing.
 
In fact there are two differences, if my understanding is correct. The method itself is slightly different, but the major difference is the choice of SNR = 20!

I was a bit surprised that dxo does not take into account the shot noise. SNR = 1 should mean total noise = signal, which is obviously more relevant. In this case, PDR is better, I agree.
 
I am not sure what DXO actually do. The way it is described here, it sounds as if the shot noise is taken into account.

The engineering definition I have seen usually does not. See Wikipedia for DR for audio, for example. Also, the noise floor is easier to measure directly.
 
I am not sure what DXO actually do. The way it is described here, it sounds as if the shot noise is taken into account.
Yes it is. Photonic noise is another name for shot noise.
 
Yes it is. Photonic noise is another name for shot noise.
By using SNR = 1 DxO is ignoring shot noise and only using read noise.
 
Yes it is. Photonic noise is another name for shot noise.
Then the only difference between pdr and dxo measurements is the threshold, right ?
 
I am not sure what DXO actually do. The way it is described here, it sounds as if the shot noise is taken into account.
Yes it is. Photonic noise is another name for shot noise.
I said I was not sure for a different reason. I am not sure how they interpret and use that definition when they take measurements and report their findings.
 
By using SNR = 1 DxO is ignoring shot noise and only using read noise.
Origins of noise

Noise in an image can stem from several different factors:
  • Photonic noise: Classical light sources produce random variations in luminous flux which, after deflection in the optical system, produce the most common form of noise in photographic images.
  • Thermal noise: Sensors can generate random signals due to the effect of ambient temperature.
  • Transfer process noise: The process of charge transfer may be incomplete or interfere with adjacent photosites, thereby generating noise.
 
Can someone give me a logically persuasive explanation of the fact that smaller sensors have lower DR?
They don't.
This is the comparison of D800 FX and DX: same sensor, same pixels. I cannot make the logical leap that is needed to grasp why trimming off the outside of the frame reduces the DR of what is left.
It doesn't.
Surely cropping in processing won't do the same (rationally it cannot), but I'm at a loss to understand why.
Here's the thing. There are two reasonable definitions of DR, and only one is affected by sensor size.

If your definition is the typical electronic one (saturation / noise floor) then DR is not affected at all by sensor size. This is because each pixel can be measured independently (well capacity / read noise).

If your definition is, as some have stipulated above, saturation divided by some acceptable level of detail retention in the shadows, then sensor size can affect DR. This is because trimming off the edges of the image means you are losing light, and that means a loss of signal-to-noise ratio (SNR is proportional to the square root of the total light captured).
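A sketch of that square-root relationship (the 1.5x FX-to-DX crop factor is just an illustrative value):

```python
import math

def snr_loss_stops(crop_factor):
    """Cropping to 1/crop_factor of the linear frame size keeps
    1/crop_factor**2 of the total light; SNR scales as its square root."""
    light_fraction = 1 / crop_factor**2
    return -math.log2(math.sqrt(light_fraction))

print(round(snr_loss_stops(1.5), 2))  # 0.58 stop lower SNR for a DX crop
```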

This second one confuses people because they can't understand how cropping could increase noise. The explanations above about greater enlargement explain this intuitively, but it all goes back to the loss of light captured. I've created this sample to show it.

The right column shows that cropping increases noise (the bottom image is not cropped while the cropping increases as you go up to compensate for shorter focal lengths). The left column shows that this effect can be compensated for by using faster and faster apertures.



[Image: constant aperture comparison, 1x-6x crop]




--
Lee Jay
 
By using SNR = 1 DxO is ignoring shot noise and only using read noise.
This should be clarified. There seems to be some ambiguity about dxo methodology. I would also be surprised if dxo did not take photon noise into account, but maybe that is the case; I don't know.
 
I would take Bill's word on that since he has compared DXO's numbers with his data for many cameras.
 
Then the only difference between pdr and dxo measurements is the threshold, right ?
I'm going to butt in here with a comment: if you use a camera, take photos in extremely high-contrast light conditions, and then carefully examine the raw files in RawDigger as well as actually process them, you'll find the DxO DR figures are pure fantasy, whereas Bill Claff's PDR values do a good job of matching "photographic reality." Ultimately it's about what we can actually do with the cameras, right?
 
This should be clarified. There seems to be some ambiguity about dxo methodology. I would be surprised also that dxo does not take into account photon noise but maybe this is the case, I dont know.
I think Photonic (shot) noise is a constant that is irrelevant in regards to sensor technology.
 
I was a bit surprised that dxo does not take into account the shot noise..
They do. They call it photonic noise.
They mention photon noise in their description of noise sources, but for their dynamic range measurement they only use read noise (SNR = 1).
SNR=1 is not a proof.

The question is what is N ?

I understand you may be correct in your assumptions, no problems, I just want to know the exact truth.
 
I'm going to butt in here with a comment: If you use a camera and take photos in extremely high contrast light conditions and then carefully examine the raw files both in RawDigger as well as actually process them you'll find the DxO DR figures are pure fantasy.
I have done this, and I can more or less confirm the DXO numbers.
Whereas Bill Claff's PDR values do a good job of matching "photographic reality." Ultimately it's about what we can actually do with the cameras right?
You are talking about something different: if, say, the DR is 14 stops, does it mean that you really have 14 usable stops? IMO, the bottom few stops are pure garbage for photographic purposes.

====

@tbcass:
I think Photonic (shot) noise is a constant that is irrelevant in regards to sensor technology.
No, it is not.

=====

@Christoff21:
SNR=1 is not a proof.

The question is what is S ?
If you model the read noise as additive one with zero mean, this does not change the level of the signal.
 
Then the only difference between pdr and dxo measurements is the threshold, right ?
FWIW, DxOMark Landscape Score is purely based on read noise whereas PhotonsToPhotos Photographic Dynamic Range (PDR) is influenced by all noise sources including Photon Noise; so no, it's not just a different threshold.
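To make that contrast concrete, here is a side-by-side sketch of the two definitions as described in this thread; the full-well and read-noise numbers are hypothetical, not measurements of any particular camera:

```python
import math

def engineering_dr(full_well, read_noise):
    """Engineering DR: saturation over the read-noise floor (read noise only)."""
    return math.log2(full_well / read_noise)

def pdr_style_dr(full_well, read_noise, min_snr=20):
    """PDR-style DR: stops from saturation down to the signal where
    total noise (shot + read) yields SNR = min_snr."""
    s2 = min_snr**2
    floor = (s2 + math.sqrt(s2**2 + 4 * s2 * read_noise**2)) / 2
    return math.log2(full_well / floor)

# Hypothetical sensor: 60,000 e- full well, 4 e- read noise
print(round(engineering_dr(60000, 4), 1))  # 13.9
print(round(pdr_style_dr(60000, 4), 1))    # 7.2
```

The gap between the two numbers is exactly why the metrics rank cameras differently: the engineering figure rewards low read noise alone, while the PDR-style figure also depends on how much light is collected.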
 
You are talking about something different: if, say, the DR is 14 stops, does it mean that you really have 14 usable stops? IMO, the bottom few stops are pure garbage for photographic purposes.
Yes, I said I was talking about taking photographs; pure garbage doesn't work well in our photos. Listing pure garbage as "usable" DR is a fantasy. When DxO displays their score calling it Landscape (Dynamic Range), I think it's fair to assume, given the "Landscape" descriptor, that "usable" is at least implied, and that implication is fantasy.
 