BTW, 4 electron noise is too low, I believe. I think that it is several times larger. To find the level of the "lowest usable signal" according to Bill's criterion, you need to solve sqrt(x + n^2) = x/20, where n is the noise floor. If n = 20, this subtracts about 5 stops from the engineering DR (computed as log_2(saturation/n)).

Ok, I understand now, very well explained, thanks a lot!

Not quite. In the engineering definition, the read noise is measured with no signal. If you add a signal equal to the noise level, this signal has its own noise, which adds to the noise floor. So when the signal is at the noise floor, the total noise is actually higher than the noise floor.

Not sure I get it, really sorry.

It is a matter of taste. Each of those metrics tells you only part of the truth.

What is wrong with the way DxO measures DR (it looks good to me), and why do you think PDR is better? I admit I do not really understand PDR.

I do not understand the question. Are you asking why they normalize to the same viewing size? Or why the engineering definition does not include the normalization in an explicit way?

I admit I don't understand...

It is explained here:

Maybe we should invite Bill Claff over to explain the results.

So, if I've followed it correctly, the explanation is:
- DR is measured from a floor of an acceptable amount of noise to saturation
- Noise is random
- The degree of enlargement determines how visible the noise is (looking more closely makes anything more visible).
- So a smaller sensor, which requires more enlargement, makes noise more intrusive, so the floor of the range is a little higher.
http://photonstophotos.net/GeneralT...r/DX_Crop_Mode_Photographic_Dynamic_Range.htm
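For what it's worth, here is a small sketch (plain Python, my own illustration, not Bill Claff's actual code) that solves the sqrt(x + n^2) = x/20 criterion quoted at the top and checks the "about 5 stops below the engineering DR" figure for n = 20:

```python
import math

def floor_signal(read_noise, snr_target):
    """Smallest signal x (in electrons) with x / sqrt(x + read_noise^2) = snr_target.

    Total noise is shot noise and read noise added in quadrature; solve
    x^2 - T^2*x - T^2*n^2 = 0 (T = snr_target, n = read_noise), keep the positive root.
    """
    t2 = snr_target ** 2
    return (t2 + math.sqrt(t2 ** 2 + 4 * t2 * read_noise ** 2)) / 2

n = 20.0                     # the read-noise floor assumed in the quote above, electrons
x = floor_signal(n, 20.0)    # lowest "usable" signal for an SNR = 20 criterion
print(x)                     # ~647 electrons
print(math.log2(x / n))      # ~5.0 stops lost relative to the engineering floor
```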
DxO normalises noise to given viewing conditions. That is fair.
Now they say that DR is the difference in stops between the acceptable noise level (SNR = 1) and the brightest value.
What is wrong with that? I do not understand. The important thing is to normalise noise, and they do it.
What matters is the whole noise-to-signal curve (assuming that the noise distribution is given). The engineering definition assumes that there is additive (read, in our case) noise and somewhat arbitrarily declares the lowest usable signal level to be equal to that noise floor instead of, say, twice that floor. The PDR definition picks some, again somewhat arbitrary, minimal acceptable SNR and counts the number of stops above that. That minimal SNR includes both shot and read noise (and whatever other noise is there).
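To illustrate the "whole noise-to-signal curve" point, here is a rough sketch (my own, assuming only shot noise plus a read-noise floor of 4 electrons, added in quadrature):

```python
import math

read_noise = 4.0  # assumed read-noise floor, electrons per pixel

# SNR at a few signal levels: total noise is shot noise and read noise in quadrature
for signal in (4, 8, 16, 64, 256, 1024):
    snr = signal / math.sqrt(signal + read_noise ** 2)
    print(f"{signal:5d} e-  SNR = {snr:6.2f}")
```

Any threshold (SNR = 1, SNR = 20, or anything else) just picks one point on this curve as the floor of the dynamic range.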
The only difference is the threshold, SNR = 20 for PDR instead of 1? No fundamental difference between the two methods except the threshold?
If you want the noise to be equal to the level of the signal, in the presence of a signal, you have to solve an equation for the level of that signal, and the answer will be a bit above the noise floor.
As an example, say that the noise floor is 4 electrons per pixel (and let us stay at pixel level). If the signal is 4 electrons, its photon noise would be 2 electrons (with Poisson rather than Gaussian statistics, but let us ignore that for now). Then a signal of 4 electrons would have a total noise of sqrt(2^2 + 4^2) = 4.47..., and this is above the noise floor. To find the "right" signal level, you need to solve sqrt(x + 4^2) = x, which gives x = 4.53 or so.
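A quick numeric check of that example (a small sketch of my own, keeping the same numbers):

```python
import math

read_noise = 4.0   # electrons, the example noise floor
signal = 4.0       # electrons, a signal set equal to the floor

# Total noise = shot noise and read noise in quadrature; shot noise^2 = signal
total_noise = math.sqrt(signal + read_noise ** 2)
print(total_noise)  # ~4.47 e-, already above the 4 e- floor

# Solve sqrt(x + 4^2) = x, i.e. x^2 - x - 16 = 0, and keep the positive root
x = (1 + math.sqrt(1 + 4 * 16)) / 2
print(x)            # ~4.53 e-: the signal that equals its own total noise
```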
On the one hand, I think I understand the DR as defined by DxO; it is simple and makes perfect sense.
On the other hand, PDR looks complex, but maybe it is actually quite simple: is it just the same method with a different threshold?
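If it helps, here is a minimal comparison (my own sketch, pixel level only, ignoring the output-size normalisation that both DxO and PDR apply, with a made-up sensor of 60,000 electrons full well and 4 electrons read noise):

```python
import math

def dr_stops(full_well, read_noise, snr_target):
    """Stops between saturation and the signal where SNR first reaches snr_target.

    SNR(x) = x / sqrt(x + read_noise^2); the floor is the positive root of
    x^2 - T^2*x - T^2*n^2 = 0 with T = snr_target and n = read_noise.
    """
    t2 = snr_target ** 2
    floor = (t2 + math.sqrt(t2 ** 2 + 4 * t2 * read_noise ** 2)) / 2
    return math.log2(full_well / floor)

print(dr_stops(60000, 4, 1))    # ~13.7 stops with an SNR = 1 floor (DxO-style threshold)
print(dr_stops(60000, 4, 20))   # ~7.2 stops with an SNR = 20 floor (PDR-style threshold)
print(math.log2(60000 / 4))     # ~13.9 stops engineering DR (floor = read noise itself)
```

At pixel level the two threshold-based numbers really do differ only in the chosen SNR; the remaining difference in practice is how each metric normalises to a common viewing size.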