Let's finally resolve the D800 downsampling / NR questions

Horshack

I say D800 but the question really applies to any comparison involving a higher density vs lower density sensor.

I had planned to lay all my questions out on the table in this OP, but I think it's more productive to start with a focused question first and let the discussion branch organically from there.

I'd like to start with two theorems that seem incongruent to me. Take two theoretical full-frame sensors of equal technology, one is 36MP and the other 12MP (I know the D800 has better technology vs D700 but let's table that for now):

Theorem #1 - The 36MP sensor will have higher per-pixel noise than the 12MP sensor, but both will have equal amounts of noise on a per-area basis. In other words, if you combine multiple 36MP pixels into units roughly equal in size to 12MP pixels, the amount of noise in those combined pixels will be equivalent to the 12MP pixels. This seems pretty obvious and straightforward.

Theorem #2 - Downsampling does not increase the SNR of the image, at least across all spatial frequencies within the image. Based on my understanding of previous threads on this topic*, ejmartin, John Sheehy, and others share the following opinion: Downsampling does reduce the amount of noise as measured via std. deviation, but it also reduces the amount of detail, in varying degrees across the frequency/resolution domain of that signal, so the net effect is a reduction of both detail and noise, with potentially no reduction in the SNR for certain frequencies.

These two theorems seem incongruent to me. If the 36MP sensor has the same per-area noise as the 12MP sensor, but the only way to realize that equivalence is by combining the 36MP pixels into units equal in area to the 12MP pixels (Theorem #1), and the method used to combine those pixels (post-demosaic downsampling) results in no SNR gain (Theorem #2), how in practice can a user of the 36MP sensor achieve the same SNR performance as the 12MP sensor for equal-size prints?
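For what it's worth, Theorem #1 is easy to sanity-check numerically. Here's a minimal sketch (pure Python, synthetic shot noise, illustrative photon counts; for simplicity it bins 3x3 blocks, a 9:1 reduction rather than the 3:1 of a 36MP-to-12MP resize):

```python
# Simulate shot noise on a dense-sensor pixel, a 3x3-binned block of such
# pixels, and a single big pixel of 9x the area, then compare per-pixel SNR.
import random, statistics

random.seed(42)
N = 30_000  # samples per experiment

# Small pixel: mean 100 photons -> shot noise ~ sqrt(100) = 10
small = [random.gauss(100, 10) for _ in range(N)]

# "Binned" pixel: sum of 9 small pixels (one 3x3 block)
binned = [sum(random.gauss(100, 10) for _ in range(9)) for _ in range(N)]

# Big pixel of 9x the area: mean 900 photons -> shot noise ~ sqrt(900) = 30
big = [random.gauss(900, 30) for _ in range(N)]

snr = lambda xs: statistics.mean(xs) / statistics.stdev(xs)
print(round(snr(small), 1))   # ~10: per-pixel SNR of the dense sensor
print(round(snr(binned), 1))  # ~30: after combining 3x3 blocks
print(round(snr(big), 1))     # ~30: the big-pixel sensor, same as binned
```

The binned dense-sensor pixels and the native big pixels land at the same SNR, which is exactly the per-area equivalence the theorem claims.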

* References:
http://forums.dpreview.com/forums/read.asp?forum=1000&message=30147854
http://forums.dpreview.com/forums/read.asp?forum=1018&message=39856801

http://www.dxomark.com/index.php/Publications/DxOMark-Insights/Detailed-computation-of-DxOMark-Sensor-normalization

http://www.dxomark.com/index.php/Publications/DxOMark-Insights/More-pixels-offset-noise
 
To my mind the key to the conundrum is what you are terming 'noise'. Perfect random noise is inherently bandwidth dependent. A simple scalar quantity is not enough to measure 'noise' - you also need to specify the bandwidth over which you measure it.

Let's take your theorem number 1: 'The 36MP sensor will have higher per-pixel noise than the 12MP sensor but both will have equal amounts of noise on a per-area basis.' What do we mean by 'per-pixel noise'? We mean 'noise observed over the bandwidth defined by the pixel spatial frequency' (that's the top limit of the band; the bottom is defined by the sensor size). Now, what do we mean by 'per-area' noise? What I think you mean is 'observed over a fixed bandwidth'. So, if we normalise the bandwidth over which we observe the noise, it will look the same.
Now onto theorem number 2:

Downsampling does reduce the amount of noise as measured via std. deviation, but it also reduces the amount of detail, in varying degrees across the frequency/resolution domain of that signal, so the net effect is a reduction of both detail and noise, with potentially no reduction in the SNR for certain frequencies.

Downsampling is a low pass filter, so it reduces the observed bandwidth. That reduces the bandwidth of the subject detail in proportion. So, if we use downsampling to normalise the bandwidths of observation (as in theorem 1) we will observe the same noise.
Not only are your two theorems not incongruent, they are tautologous.
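To make the bandwidth point concrete, here's a small sketch (pure Python, synthetic white noise; the 9:1 averaging factor is just an illustrative choice): the std. dev. you measure for the same noise depends entirely on the bandwidth you measure it over.

```python
# Measure the std. dev. of white noise at full bandwidth and after box
# averaging, which reduces the observed bandwidth by the averaging factor.
import random, statistics

random.seed(1)
noise = [random.gauss(0, 1) for _ in range(90_000)]

def band_limited_std(samples, k):
    """Std. dev. after reducing bandwidth by k via k-sample box averaging."""
    avgs = [statistics.mean(samples[i:i + k]) for i in range(0, len(samples), k)]
    return statistics.stdev(avgs)

print(round(band_limited_std(noise, 1), 2))  # ~1.0: full bandwidth
print(round(band_limited_std(noise, 9), 2))  # ~0.33: 1/9 bandwidth -> 1/3 the std
```

Same noise, different measurement bandwidths, different numbers; which is why a scalar std. dev. alone doesn't pin down 'noise'.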
--
Bob
 
Some of the theories and science behind the "size reduction vs noise reduction" idea are so far beyond my knowledge that I really appreciate reading about it here.

However, for myself, I'm a practical guy, so when I hear about this kind of stuff, I just do it. And what I've literally seen with my own eyes in actually working on the existing high-ISO files of the D800 is that when you reduce the size to the 12MP range, noise still looks like noise did at 36MP. So, what that tells me is that if you reduce a 36MP file, you'll still see the same size of noise, just in a smaller print.

Try it for yourself! Really, go try it, you'll understand exactly what I saw when you do it for yourself.

Theories and science are super cool but actually doing it and seeing it in front of you is what matters because you can't print science! HAHAH!
--
-Dan
Winnipeg, Manitoba, Canada
'Cameras don't take pictures, people do.'
'No one sees your camera when they're looking at your pictures.'
http://www.danharperphotography.com/ -BLOG/stock site
http://www.danharperphoto.com/ -Commercial portfolio
http://www.wpgphoto.com/ -My Winnipeg based photography community
 
To my mind the key to the conundrum is what you are terming 'noise'. Perfect random noise is inherently bandwidth dependent. A simple scalar quantity is not enough to measure 'noise' - you also need to specify the bandwidth over which you measure it.
How exactly would you define 'bandwidth' in the context you're using it here?

Looking up the physics/technology definition here ( http://www.rp-photonics.com/bandwidth.html ), it's defined as 'the width of some frequency or wavelength range'. Is bandwidth the range of image frequency detail which can be represented?
Lets take you theorem number 1. 'The 36MP sensor will have higher per-pixel noise than the 12MP sensor but both will have the equal amounts of noise on a per-area basis.'. What do we mean by 'per pixel noise'? We mean 'noise observed over the bandwidth defined by the pixel spatial frequency (that's the top limit of the band, the bottom is defined by the sensor size). Now, what do we mean by 'per area' noise. What I think you mean is 'observed over a fixed bandwidth'. So, if we normalise the bandwidth over which observe the noise it will look the same.
I'm one level too removed in the abstract to get my head around this. I made a diagram depicting what I think you're saying. Is this correct:
[diagram omitted]
And even if the diagram is correct I'm still not sure I understand it without knowing what 'bandwidth' means in terms of the image.
Now onto theorem number 2:

Downsampling does reduce the amount of noise as measured via std. deviation, but it also reduces the amount of detail, in varying degrees across the frequency/resolution domain of that signal, so the net effect is a reduction of both detail and noise, with potentially no reduction in the SNR for certain frequencies.

Downsampling is a low pass filter, so it reduces the observed bandwidth. That reduces the bandwidth of the subject detail in proportion. So, if we use downsampling to normalise the bandwidths of observation (as in theorem 1) we will observe the same noise.
Not only are your two theorems not incongruent, they are tautologous.
This vaguely registers with me but could you possibly rephrase it in more pedestrian (less abstract) terms?
 
I think you might be conflating read noise and shot noise in this comparison.

Downsampling involves a low-pass filter (which is essentially what a downsampling filter is). This has noise-mitigating effects to the extent that the noise is high-frequency noise located, mostly, above the cutoff of the low-pass filter. Most varieties of shot noise, with their characteristic spatial frequency distribution, fall into this category. For purely pragmatic reasons, because shot noise happens to fall largely above the downsampling filter cutoff, some noise mitigation is afforded.

I think you are being smart though, and picking up on the notion that there are some unexamined assumptions floating around. I'm thinking further about that.
 
I think when Bob says "bandwidth" he's referring to the distribution of spatial frequencies in the noise. Let's distinguish between read noise and shot noise here. Since shot noise shows a characteristic distribution of frequencies in the high-frequency range, a low-pass filter (as used in downsampling) will mitigate it.

Unless I missed the finer points of your question.
 
Some of the theories and science behind the "size reduction vs noise reduction" idea are so far beyond my knowledge that I really appreciate reading about it here.

However, for myself, I'm a practical guy, so when I hear about this kind of stuff, I just do it. And what I've literally seen with my own eyes in actually working on the existing high-ISO files of the D800 is that when you reduce the size to the 12MP range, noise still looks like noise did at 36MP. So, what that tells me is that if you reduce a 36MP file, you'll still see the same size of noise, just in a smaller print.

Try it for yourself! Really, go try it, you'll understand exactly what I saw when you do it for yourself.

Theories and science are super cool but actually doing it and seeing it in front of you is what matters because you can't print science! HAHAH!
I agree, but the problem in this case is that it's easier to observe differences in noise than in detail, especially if the detail that's destroyed is not immediately obvious, so observations might be misleading. This is not to say that perceptual improvements in detail/noise from downsampling aren't as important as purely analytical improvements like SNR, since ultimately it's us humans who observe and perceive these images. But I'd like any discussion about perceptual improvements to be grounded in explanation (i.e., descriptions of why we might perceive the noise to be less, rather than just sharing an observation that it is).
 
Almost. The high frequency noise will disappear because the downsampling filter simply eliminates all high frequencies beyond a certain cutoff. The rest of the noise, that which falls below the cutoff of the low-pass filter, will remain.
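That cutoff behaviour can be shown with a toy 1-D example (pure Python; the two sine components stand in for low- and high-frequency noise, and the 8-tap box filter for a crude downsampling low-pass filter; all the numbers are illustrative):

```python
# Pass a low-frequency and a high-frequency component through a box
# low-pass filter and measure how much of each frequency survives.
import math

N = 1024
low  = [math.sin(2 * math.pi * 4 * i / N) for i in range(N)]    # 4 cycles: "low-frequency noise"
high = [math.sin(2 * math.pi * 256 * i / N) for i in range(N)]  # 256 cycles: "high-frequency noise"
signal = [l + h for l, h in zip(low, high)]

def moving_average(xs, k):
    """Simple k-tap box low-pass filter (circular boundary)."""
    return [sum(xs[(i + j) % len(xs)] for j in range(k)) / k for i in range(len(xs))]

filtered = moving_average(signal, 8)

def amplitude(xs, cycles):
    """Magnitude of one DFT bin: how much of that frequency is present."""
    re = sum(x * math.cos(2 * math.pi * cycles * i / N) for i, x in enumerate(xs))
    im = sum(x * math.sin(2 * math.pi * cycles * i / N) for i, x in enumerate(xs))
    return 2 * math.sqrt(re * re + im * im) / N

print(round(amplitude(filtered, 4), 2))    # ~1.0: low-frequency component survives
print(round(amplitude(filtered, 256), 2))  # ~0.0: high-frequency component removed
```

The component below the cutoff comes through essentially untouched, which is why noise at those frequencies remains after downsampling.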
 
with potentially no reduction in the SNR for certain frequencies.
I think the key is certain frequencies

My simple thinking is that downsampling will act as a global average filter for noise

but as a low pass filter for detail, obviously reducing the 'signal' of high frequency detail

so for 'fine' detail, downsampling will obviously not help the signal to noise ratio (SNR)

but downsampling will help the signal to noise ratio of low frequency (low detail) stuff like the sky or those dark shadowy areas
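That intuition checks out in a quick 1-D simulation (pure Python; the flat level, detail amplitude, and noise level are all illustrative assumptions): 2:1 averaging reduces noise in a flat 'sky' area by about sqrt(2), while detail at the pixel-level (Nyquist) frequency is erased outright.

```python
# Compare what 2:1 downsampling-by-averaging does to a flat noisy area
# versus a finest-possible alternating detail pattern.
import random, statistics

random.seed(7)
N = 20_000
flat = [100 + random.gauss(0, 10) for _ in range(N)]  # flat "sky" with noise, std ~10
pattern = [20 if i % 2 else -20 for i in range(N)]    # pixel-level detail, amplitude 20

def downsample2(xs):
    """2:1 downsample by averaging adjacent pairs of pixels."""
    return [(xs[i] + xs[i + 1]) / 2 for i in range(0, len(xs), 2)]

# Flat area: the mean (signal) is unchanged, noise std drops ~sqrt(2), so SNR improves
print(round(statistics.stdev(flat), 1))               # ~10
print(round(statistics.stdev(downsample2(flat)), 1))  # ~7

# Nyquist-frequency detail: each -20/+20 pair averages to exactly 0, detail is gone
print(round(statistics.stdev(pattern), 1))               # 20.0: detail present
print(round(statistics.stdev(downsample2(pattern)), 1))  # 0.0: detail erased
```

So the SNR gain is real for low-frequency content and nonexistent for content at the limit of resolution, which is the frequency-dependent picture described above.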
 
noise still looks like noise did at 36MP. So, what that tells me is that if you reduce a 36MP file, you'll still see the same size of noise, just in a smaller print.
I have, and noise doesn't look the same to me.

perhaps if you showed an example we could understand what particular feature you are looking at
 
In science and engineering, if you can keep the units straight you're more than halfway to the correct answers.

Signal to noise ratio is in the domain of intensity. Subject detail is in the domain of spatial frequency. Theorem #2 is fuzzy because it confuses the two quantities.

MTF is the function that connects the domains of intensity and spatial frequency.
 
Boring, in fact boring as hell. Reducing visible noise by downsizing files is so obvious that it's simply absurd to argue about it.
 
In science and engineering, if you can keep the units straight you're more than halfway to the correct answers.

Signal to noise ratio is in the domain of intensity. Subject detail is in the domain of spatial frequency. Theorem #2 is fuzzy because it confuses the two quantities.

MTF is the function that connects the domains of intensity and spatial frequency.
I thought he was conflating shot noise and read noise. Shot noise is certainly measured in the spatial frequency domain (as well as in other ways), and this is relevant in the sense that a low-pass downsampling filter will tend to mitigate high frequency noise to the extent that the high frequency noise lies beyond the cutoff of the low-pass filter.
 
Try it for yourself! Really, go try it, you'll understand exactly what I saw when you do it for yourself.

Theories and science are super cool but actually doing it and seeing it in front of you is what matters because you can't print science! HAHAH!
Like I mentioned to you in the other thread, I did try this for myself and my results are completely the opposite of yours. For me, a noisy 36MP image became visually less noisy at the pixel level as the image was downsized to 12MP, even less noisy at 4MP, and essentially clean at 1MP. Did you try more aggressive reduction (such as 1MP)? If you are still seeing no pixel level noise reduction, I think something must be wrong with your downscaling software, since the per-pixel noise difference is very obvious at that extreme downsampling...
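This observation matches what plain averaging predicts. Here's a sketch (pure Python, synthetic noise; the per-pixel std of 12 is an arbitrary assumption) of how per-pixel noise should scale with the downsampling factor, assuming the resampler genuinely averages:

```python
# Per-pixel noise vs. downsampling factor: a 36:1 pixel-count reduction
# (e.g. 36MP -> 1MP) should cut noise std. dev. by sqrt(36) = 6.
import random, statistics

random.seed(3)
pixels = [random.gauss(128, 12) for _ in range(36_000)]

def bin_down(xs, k):
    """Average k input pixels per output pixel (ideal box downsample)."""
    return [statistics.mean(xs[i:i + k]) for i in range(0, len(xs), k)]

for k in (1, 3, 36):  # full size, a 3:1 reduction, a 36:1 reduction
    print(k, round(statistics.stdev(bin_down(pixels, k)), 2))
# expected roughly: 12, 12/sqrt(3) ~ 6.9, 12/6 = 2
```

If downscaling software shows no per-pixel noise reduction at a 36:1 reduction, it isn't averaging the discarded pixels (e.g. it's decimating instead), which would explain the discrepancy between the two observations.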
 
I think when Bob says "bandwidth" he's referring to the distribution of spatial frequencies in the noise. Let's distinguish between read noise and shot noise here. Since shot noise shows a characteristic distribution of frequencies in the high-frequency range, a low-pass filter (as used in downsampling) will mitigate it.

Unless I missed the finer points of your question.
No, I think you're on the right path although I'm unsure whether you're saying that shot noise is more prevalent or less prevalent at the frequencies that would be removed by a low-pass filter, or at least relative to detail at those frequencies.
 
I think when Bob says "bandwidth" he's referring to the distribution of spatial frequencies in the noise. Let's distinguish between read noise and shot noise here. Since shot noise shows a characteristic distribution of frequencies in the high-frequency range, a low-pass filter (as used in downsampling) will mitigate it.

Unless I missed the finer points of your question.
No, I think you're on the right path although I'm unsure whether you're saying that shot noise is more prevalent or less prevalent at the frequencies that would be removed by a low-pass filter, or at least relative to detail at those frequencies.
Depending upon the original sampling frequency and the degree of downsampling, I'm saying that a significant amount of the shot noise will lie above the cutoff frequency of the downsampling (low pass) filter. Both noise and detail above the cutoff frequency will be eliminated. This is a practical affordance that has to do with the contingent matter of fact that the shot noise (in large part) happens to lie above the filter cutoff.
 
In science and engineering, if you can keep the units straight you're more than halfway to the correct answers.

Signal to noise ratio is in the domain of intensity. Subject detail is in the domain of spatial frequency. Theorem #2 is fuzzy because it confuses the two quantities.

MTF is the function that connects the domains of intensity and spatial frequency.
It's highly likely that I'm mixing up these two domains but I'm still not clear how the two interact with respect to detail and noise when downsampling. Diagrams or pictures would go a long way in helping me understand.

Here are some images of a resolution wedge. Can you relate your spatial vs intensity description to these images?

Original wedge:

[image omitted]

Wedge with 20% gaussian noise added to simulate shot noise:

[image omitted]

20% gaussian noise wedge with 1pt gaussian low-pass filter:

[image omitted]

20% gaussian noise wedge, no gaussian low-pass filter, then bicubic-sharper downsample to half resolution:

[image omitted]
 
Depending upon the original sampling frequency and the degree of downsampling, I'm saying that a significant amount of the shot noise will lie above the cutoff frequency of the downsampling (low pass) filter. Both noise and detail above the cutoff frequency will be eliminated. This is a practical affordance that has to do with the contingent matter of fact that the shot noise (in large part) happens to lie above the filter cutoff.
Does the diagram in this post depict what you just described to me?

http://forums.dpreview.com/forums/read.asp?forum=1018&message=39859119
 
..that you do not wish to throw away?
Or isn't there any of that? Or could this vary from picture to picture?
 
