Dynamic Range -- what it is, what it's good for, and how much you 'need'

Started Oct 17, 2011 | Discussions thread
Great Bustard
Forum Pro • Posts: 23,152
Wrapping it up.
In reply to FrankyM, Oct 20, 2011

FrankyM wrote:

Well, I guess this is where the diversion into semantics begins. How about this: you don't compare (in terms of the final photo) a single 2x2 pixel to a single 1x1 pixel -- you compare to four 1x1 pixels.

No, it's not a problem of semantics but of understanding what exactly we are talking about. I agree with you regarding what you compare, but what I'm saying is that the processing makes a difference to the results. For example, if I reduce those 20 MP to 10 MP by simply throwing away 1 out of every 2 pixels, the result I get will be very different to that achieved by averaging 2 pixels into 1.

For the record, downsampling is a horrible way to compare the IQ potential of two systems -- the proper method is upsampling one, the other, or both, to the same display dimension.

However, if the purpose of comparison is web display, then, of course, downsampling is the proper method of comparison, since the final photo is necessarily downsampled.

And, yes, the method of resampling is key. Using a "nearest neighbor" downsampling method is a bad choice, as is using an upsampling method that merely scales, rather than interpolates.

In any event, the bottom line is that a pixel-for-pixel comparison is only valid if both photos are made from the same number of pixels. If the photos have different pixel counts, then resampling must be done, since we are naturally comparing the two photos at the same display size -- and that resampling happens regardless, whether for print (not necessarily under user control, since the printer uses its own resampling algorithms) or for web display.
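To make the "method of resampling" point concrete, here is a minimal sketch (hypothetical noise figures and a NumPy stand-in for a real frame, not anyone's actual workflow) comparing 2:1 decimation with 2:1 averaging on a uniformly lit noisy frame. Decimation leaves the per-pixel noise untouched, while averaging cuts it by roughly the square root of the number of pixels combined:

```python
import numpy as np

rng = np.random.default_rng(0)

# Small stand-in for a uniformly lit frame: constant signal plus Gaussian noise.
signal, noise_sd = 1000.0, 50.0
frame = signal + rng.normal(0.0, noise_sd, size=(2000, 2000))

# Halve the pixel count two different ways (2:1 along one axis for simplicity):
decimated = frame[:, ::2]                         # throw away every other column
averaged  = (frame[:, ::2] + frame[:, 1::2]) / 2  # average neighbouring columns

print(f"original noise : {frame.std():.1f}")      # ~50
print(f"decimated noise: {decimated.std():.1f}")  # ~50  (unchanged)
print(f"averaged noise : {averaged.std():.1f}")   # ~35  (50 / sqrt(2))
```

The same bookkeeping is why "nearest neighbor" downsampling looks so much noisier than a proper averaging or interpolating filter at the same output size.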

Again, coming back to DR, I have no problem personally with the DxO definition as an engineer. I don't find it particularly useful though as a photographer - I would much prefer to know how many stops of highlight headroom/shadow footroom I have.

That's what the DxOMark definition of DR tells you (as do the rest) -- the number of stops from the noise floor to the saturation limit.

How much of that range is in the shadow range, and how much is in the highlight range, depends on how the photographer exposes the photo (or, more usually, how the camera exposes the photo, since most let the camera choose the exposure for them in AE modes).
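In numbers, that definition is just a base-2 log of the ratio between the saturation level and the noise floor. The figures below are made up purely for illustration:

```python
import math

# Hypothetical per-pixel figures, in electrons.
saturation  = 60000.0   # full-well capacity
noise_floor = 5.0       # signal level at the chosen noise criterion (here, read noise)

dr_stops = math.log2(saturation / noise_floor)
print(f"DR ≈ {dr_stops:.1f} stops")   # ≈ 13.6 stops
```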

Using the noise floor as the lower limit is OK for DxO's purpose, but I think that the limit for a photographer is set by a much higher SNR (my feeling is that the SNR=1 limit is too low also) and by the visual characteristics of the image noise. This means that photon shot noise plays a part in DR.

DxOMark uses 100% NSR (i.e., SNR = 1) for the noise floor (DR100), which, by definition, includes photon noise. If you wish to use a different noise floor, that's entirely your prerogative, but, like DxOMark, you need to clearly spell out what noise floor you are using. In addition, DxOMark gives a "screen DR" (DR / pixel) and a "print DR" (DR / pixel of the photo resampled to 8 MP).
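As I read DxOMark's description of the two figures, the "print" normalisation amounts to crediting the downsample to 8 MP with a sqrt(N / 8 MP) reduction in the noise floor. A sketch of that arithmetic (my reading and my helper function, not DxOMark's code):

```python
import math

def print_dr(screen_dr_stops: float, sensor_mp: float, reference_mp: float = 8.0) -> float:
    """Normalise per-pixel ('screen') DR to an 8 MP 'print' reference.

    Averaging sensor_mp / reference_mp pixels into one cuts the noise floor by
    sqrt(sensor_mp / reference_mp), i.e. adds 0.5 * log2(sensor_mp / reference_mp)
    stops of DR.
    """
    return screen_dr_stops + 0.5 * math.log2(sensor_mp / reference_mp)

# Example: a hypothetical 24 MP sensor with 12.0 stops of per-pixel DR.
print(f"print DR ≈ {print_dr(12.0, 24.0):.2f} stops")   # ≈ 12.79 stops
```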

Well, if you have no idea, then you have no idea. But I can tell you, for a fact, that more pixels for a given sensor size and efficiency result in more IQ all the way around (although this is subject to diminishing returns, of course). The only question is whether the pixels can be made smaller without adversely affecting efficiency. However, the overall trend is that pixels have been getting smaller and more efficient. Of course, that's not to say that a new technology might not have to begin with larger pixels.

I said I have no idea whether or not Canon has the capability to make the tech with smaller pixels. Do you have a mathematical proof for this? I would be interested if you do.

I can mathematically prove that smaller pixels do not result in less DR / pixel for equally efficient sensors (and result in greater DR / area); what I cannot mathematically prove is that smaller pixels can be made with the same efficiency. Please let me know if you want me to do so -- I'll be happy to oblige.

But I cannot mathematically prove that smaller pixels can be made as efficient as larger pixels. I can, however, cite evidence that, as a general rule, pixels have become smaller and more efficient over time.
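For illustration only, here is a minimal numeric sketch of the per-pixel versus per-area bookkeeping. It assumes the favourable case -- read noise per pixel shrinking in proportion to pixel area as pixels get smaller -- which is my assumption for the sake of the example, not a measurement and not the proof offered above:

```python
import math

def dr_stops(signal: float, noise: float) -> float:
    return math.log2(signal / noise)

# One large pixel (hypothetical figures, in electrons).
big_fw, big_rn = 40000.0, 4.0

# Four small pixels covering the same area, ASSUMING read noise per pixel
# scales with pixel area (an assumption for illustration, not a measurement).
small_fw, small_rn = big_fw / 4, big_rn / 4

# DR per pixel.
print(f"large pixel          : {dr_stops(big_fw, big_rn):.2f} stops / pixel")      # 13.29
print(f"small pixel          : {dr_stops(small_fw, small_rn):.2f} stops / pixel")  # 13.29

# DR per (large-pixel) area: the four small pixels' signals add linearly,
# their read noise adds in quadrature.
area_signal = 4 * small_fw
area_noise  = math.sqrt(4) * small_rn
print(f"4 small pixels / area: {dr_stops(area_signal, area_noise):.2f} stops / area")  # 14.29
print(f"1 large pixel  / area: {dr_stops(big_fw, big_rn):.2f} stops / area")           # 13.29
```

Relax that read-noise assumption and the per-pixel and per-area figures shift accordingly, which is exactly why the efficiency question is the crux.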

However, I note in your reply above that you agree with Dr. Martinec's LL post:

http://www.luminous-landscape.com/forum/index.php?topic=42158.0

Excellent! It is basically a discussion of which DR measure you wish to use (DR100, DR50, DR25, etc.). However, as I said, more useful still, in terms of the visual properties of the final photos, is to compute DR / area rather than DR / pixel.
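For anyone unfamiliar with those labels: DRx measures DR down to a noise floor of x percent NSR, i.e. SNR thresholds of 1, 2 and 4 for DR100, DR50 and DR25. A rough sketch with made-up sensor figures, counting photon shot noise in the floor:

```python
import math

def noise_floor(snr: float, read_noise: float) -> float:
    """Smallest signal S (electrons) with S / sqrt(S + read_noise**2) >= snr,
    i.e. a noise floor that includes photon shot noise."""
    return (snr**2 + snr * math.sqrt(snr**2 + 4 * read_noise**2)) / 2

full_well, read_noise = 60000.0, 5.0   # hypothetical per-pixel figures

for label, snr in (("DR100", 1.0), ("DR50", 2.0), ("DR25", 4.0)):
    floor = noise_floor(snr, read_noise)
    print(f"{label}: {math.log2(full_well / floor):.2f} stops")
# DR100 ≈ 13.4, DR50 ≈ 12.3, DR25 ≈ 11.0 stops
```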
