irrelevancy
So - at the end of the day, how do I improve noise by downsizing? If Phil's approach didn't produce that much of a difference, how do you make a bigger difference?
My point is that a proper downsampling algorithm does NOT change the noise spectrum; rather, it simply moves the Nyquist frequency and lops off any image data (including noise) beyond the new Nyquist frequency. Because unfiltered, spatially random noise rises linearly with spatial frequency, by moving to a lower Nyquist frequency and measuring the noise at that frequency (by the width of the histogram in a uniform patch), one sees a lower value because that's the structure of image noise as a function of frequency. So it's a bit of a misnomer to say that downsizing reduces noise; instead, the measures typically used are simply sampling a different part of the noise spectrum after downsizing, and that portion of the spectrum happens to have lower noise power.
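To illustrate (a minimal numpy sketch of an idealized spectrum-cropping resize; this is not any particular converter's resampler): downsize white Gaussian noise 2:1 this way and the per-pixel standard deviation halves, exactly as the spectral argument predicts.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, (512, 512))   # white Gaussian noise, sigma = 1

def fourier_downsample(x, k):
    """Idealized low-pass resize: crop the spectrum at the new Nyquist frequency."""
    n = x.shape[0]
    m = n // k
    X = np.fft.fftshift(np.fft.fft2(x))
    lo = (n - m) // 2
    Xc = X[lo:lo + m, lo:lo + m]          # keep only frequencies below the new Nyquist
    # rescale so mean levels survive the change of array size
    return np.real(np.fft.ifft2(np.fft.ifftshift(Xc))) * (m / n) ** 2

small = fourier_downsample(img, 2)
print(round(img.std(), 2))    # ~1.0: histogram width at the original Nyquist
print(round(small.std(), 2))  # ~0.5: same noise spectrum, read at half the frequency
```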
--
Downsampling may not help a lot - fine. But does 12M downsampled to 8M compensate for the poorer performance in low light? Does 12M in good light provide better performance than 8M? If so, more megapixels is good.
Why must you people make it so complicated? Let's say you take a photo

Phil is right. The S/N improvement expected is based on averaging or combining noise from independent samples, and clearly with a Bayer sensor the noise is spread over the pixels used in the interpolation and is not independent; Phil refers to this as grain.

So in a 2:1 downsize there are only two independent green pixels and one each for blue and red. From these you need to generate a new R, G and B. The best you could expect to do is to reduce the noise on the green channel by root 2, with no noise reduction on red or blue. This is a simplistic way to view it, but that's the way it is.
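The root-2 figure is just the statistics of averaging two independent samples; a quick numpy check (illustrative only, ignoring everything demosaicing-specific):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 10.0
g1 = rng.normal(0.0, sigma, 1_000_000)   # first independent green sample
g2 = rng.normal(0.0, sigma, 1_000_000)   # second independent green sample
g = (g1 + g2) / 2                        # the "new" green value after a 2:1 downsize
print(round(g.std(), 2))                 # ~7.07, i.e. sigma / sqrt(2)
```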
--
http://blog.dpreview.com/editorial/2008/11/downsampling-to.html#grainsize
--
Phil Askey
Editor, dpreview.com
--
average grain size: 2.0 pixels; standard deviation: 11.1
[image: original crop]
[image: after 50% Photoshop Bicubic downsampling (crops)]
Yes, interesting - but basically this really backfires on him. I can clearly see that noise is reduced by downsampling in the images he posted.
Since Dpreview pretty much counts every single grain of noise, it appears questionable that the effect of downsampling to images of equal size is not considered in their reviews. They even complain about the G10's noise level at ISO 80. Now if you're concerned about that, then you really have to take into account the noise reduction due to downsampling.
Regards
kikl
But if you are, the per-pixel SNR should improve by a factor of 1.5,

The demosaicing process is irrelevant, as long as you're not pixel-peeping.
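For uncorrelated noise, the expected gain follows directly from averaging: a linear downsizing factor s averages roughly s^2 independent samples per output pixel. A sketch of the arithmetic (the specific image sizes behind the quoted 1.5 aren't stated here, so the pixel-count ratio is only implied):

```latex
\mathrm{SNR}_\text{out} = s \cdot \mathrm{SNR}_\text{in},
\qquad s = \text{linear downsizing factor}, \qquad
s = 1.5 \;\Rightarrow\; \frac{N_\text{out}}{N_\text{in}} = \frac{1}{s^{2}} \approx 0.44
```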
No reason whatsoever. I'm not ignoring visible noise, and I'm not saying that I just can't see. I can see clearly that the second image has far better noise performance.

Any reason you didn't report the standard deviation of the second crop? Also, are you just ignoring the visible noise, or are you saying that you just can't see it?
Most people, perhaps including Phil, may never have "seen" a true raw file, only a demosaiced (with more or less pixel averaging/noise filtering) file.

--
image analysis tool. I filled it with middle gray and added Gaussian noise. I then resampled the image to 2/3 the image area (the

Apart from pattern noise (banding), which is a small component of the sensor read noise, the noise in a raw file IS Gaussian random noise, to a rather good approximation. See Figures 1 and 2 at
http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/
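That gray-plus-Gaussian-noise experiment is easy to approximate (a sketch, assuming Pillow's Lanczos resampling as the resizer; Lanczos is a good but not ideal low-pass, so the reduction only roughly matches the ideal sqrt(2/3)):

```python
import numpy as np
from PIL import Image

rng = np.random.default_rng(2)
n = 600
gray = 128.0 + rng.normal(0.0, 8.0, (n, n))            # middle gray plus Gaussian noise
img8 = Image.fromarray(np.clip(gray, 0, 255).astype(np.uint8))
m = int(n * (2 / 3) ** 0.5)                            # 2/3 of the *area* = sqrt(2/3) linear
small = np.asarray(img8.resize((m, m), Image.LANCZOS), dtype=float)
print(round(gray.std(), 2))                            # ~8.0
print(round(small.std(), 2))                           # roughly 8.0 * sqrt(2/3), i.e. ~6.5
```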
Looking at undemosaiced RAW files is great, as is arguing that theoretically high pixel densities should work just fine. The problem with all these arguments is that they tend to ignore a number of genuine practicalities about how cameras work and tend to be used.
Nikon D3: .113 electrons per square micron of sensor at ISO 400
Let's list a few of the (generally unproven) assumptions which
underlie the whole 'increasing pixel counts indefinitely should be
just fine' argument.
1) Sensor quantum efficiencies are never affected by decreasing the
photosite area.
One would likely have to go to the sort of column-parallel ADC architecture that Sony has been using, so that each ADC can operate at a sufficiently low frequency not to have detrimental effects on noise. Canon also used this architecture on their 50MP APS-H prototype sensor reported on a year or so ago, which had reasonable read noise data.

2) Sticking a sensor with a much higher pixel count into a real camera which has to achieve a certain performance spec (e.g. frame rate), thereby forcing much higher data transfer/processing rates on the system as a whole, will never have a detrimental effect on the electronic noise within that system.
In the raw data, the only noise that is correlated between photosites is pattern (banding) noise, which is typically a small component of electronic read noise. Tell me if I'm wrong, but I don't think that is what you had in mind, since that's certainly not what Phil has been talking about.

3) Image noise is always uncorrelated between photosites.
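As an aside, it's easy to see how interpolation manufactures correlation where the raw data has none. A toy numpy sketch, using a 2x2 box average as a crude stand-in for the averaging a demosaicer or noise filter does (the numbers are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(3)
raw = rng.normal(0.0, 1.0, (512, 512))    # uncorrelated photosite noise

# 2x2 box average: a crude stand-in for the averaging done during interpolation
smooth = (raw + np.roll(raw, 1, 0) + np.roll(raw, 1, 1)
          + np.roll(np.roll(raw, 1, 0), 1, 1)) / 4

def neighbor_corr(x):
    """Correlation between horizontally adjacent pixels."""
    return np.corrcoef(x[:, :-1].ravel(), x[:, 1:].ravel())[0, 1]

print(round(neighbor_corr(raw), 2))     # ~0.0: raw noise uncorrelated between photosites
print(round(neighbor_corr(smooth), 2))  # ~0.5: filtering has correlated neighbouring pixels
```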
Let me handle both these points together, as they both stem from the same misperception, guided by the pixel-centric view. Downsizing an image does not change the noise spectrum; it simply moves the Nyquist frequency to a different point. Histograms of monotone patches measure the noise power at Nyquist; move the Nyquist frequency, and you measure the noise power at a different part of the image noise spectrum.

4) That uncorrelated noise will be averaged before demosaicing in order to achieve the theoretical s/n increase properly (not exactly a common approach to data processing in current cameras).

5) If the file is instead resized after RAW conversion, all of those theoretical improvements in noise can be achieved in practice.
IMO you've yet to find a show-stopper.

I'm sure there are more; I'm not a camera designer and don't pretend to understand this area to the degree of detail required to make definitive statements.
I'm sure there are engineering hurdles to be overcome. What I don't see so far are limitations arising from the device physics; and there are current devices with 2µ pixels which outperform the best DSLR sensors per unit of area. The big question is whether those small-pixel devices can be scaled while maintaining their performance.

But it's pretty clear that making an ultra-high pixel count DSLR camera would be far more complicated than metaphorically sticking together lots of G10 sensors and putting them in a 50D body.
--
It's not about finding a 'show-stopper'.
Precisely. What sometimes seems to be forgotten is that a camera is not just a sensor, but a far more complicated piece of engineering altogether.
If you regard downsampling as an ideal low-pass filter, then you're just taking away the high-frequency components from the frequency spectrum. Therefore, the shape of the curve is not altered, i.e. the noise spectrum is not changed. The high-frequency components are simply removed; put differently, the highest frequency component of the noise spectrum (the Nyquist frequency) is shifted to a lower frequency.

Let me handle both these points together, as they both stem from the same misperception, guided by the pixel-centric view. Downsizing an image does not change the noise spectrum; it simply moves the Nyquist frequency to a different point. Histograms of monotone patches measure the noise power at Nyquist; move the Nyquist frequency, and you measure the noise power at a different part of the image noise spectrum.
Because the power spectrum rises linearly in the absence of pixel
correlation, measuring the noise at a lower frequency gives a lower
value, which is what pixel-peepers interpret as "downsampling
decreases noise".
When pixels are correlated, by demosaicing, NR or other filtering,
the noise power levels off starting at the spatial frequency where
the filtering kicks in. Then moving the Nyquist frequency by
downsampling doesn't seem to reduce noise, or if it does, not by as
much; but that conclusion is based on fallacious reasoning, since
again downsampling doesn't change the noise power at any scale that
is still present in the downsampled image (unless the downsampling
algorithm permits aliasing of noise beyond Nyquist, in which case
it's time to get a better downsampling algorithm; I've mentioned two
so far in this thread that are better than straight PS Bicubic, which
does have the capacity to alias noise upon downsampling).
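To make both halves of that claim concrete, here is a small numpy sketch (the same idealized spectrum-cropping resize as earlier stands in for a "proper" algorithm; it is not PS Bicubic or any shipping resampler). A proper downsize leaves the noise power at shared frequencies untouched while the per-pixel histogram width halves; naive decimation with no low-pass aliases the high-frequency noise back in, so its histogram width doesn't drop at all.

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_downsample(x, k):
    """Idealized low-pass resize: crop the spectrum at the new Nyquist frequency."""
    n = x.shape[0]
    m = n // k
    X = np.fft.fftshift(np.fft.fft2(x))
    lo = (n - m) // 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(X[lo:lo + m, lo:lo + m]))) * (m / n) ** 2

def radial_power(img):
    """Mean noise power per Fourier mode, binned by frequency in cycles per field of view."""
    n = img.shape[0]
    p = np.abs(np.fft.fftshift(np.fft.fft2(img)) / n ** 2) ** 2
    y, x = np.indices((n, n)) - n // 2
    r = np.hypot(x, y).astype(int)
    counts = np.bincount(r.ravel())
    sums = np.bincount(r.ravel(), weights=p.ravel())
    return sums / np.maximum(counts, 1)

big = rng.normal(0.0, 1.0, (512, 512))
good = fourier_downsample(big, 2)     # proper low-pass, then resample
naive = big[::2, ::2]                 # decimation with no low-pass at all

print(radial_power(big)[16], radial_power(good)[16])  # same power at a shared frequency
print(round(big.std(), 2), round(good.std(), 2))      # histogram width: ~1.0 vs ~0.5
print(round(naive.std(), 2))                          # ~1.0: aliased noise, no improvement
```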
Precisely. Just out of interest, has anyone made one yet, preferably in a form which could run inside a camera at a sensible frame rate on large pixel-count files?

It presumes that the downsampling algorithm is pretty much an ideal low-pass filter.
I do think one should make it very clear if the discussion is about in-camera limitations (JPG engine, etc.) or limitations in the raw data itself.
That's an artificial distinction. The camera has to be capable of creating usable JPEGs from the RAW data coming off its sensor at a sensible operating speed. This is the point many people seem to be missing in these theoretical discussions; it's all very well if sensor technology can in principle scale to ever-smaller pixel sizes, but that doesn't help if the camera can't deliver a half-decent JPEG. Not everyone shoots RAW; Sigma tried selling a RAW-only camera but don't any more.
No, but it does show that the idea does not universally hold true by any stretch of the imagination. Bear in mind that it's not so long ago that Canon users were berating us for using ACR instead of DPP, apparently on the grounds that DPP is the final word in IQ (and neatly forgetting the fact that pretty well every forum complains equally that ACR isn't the best for their cameras).

I have the impression that DPreview caters mostly to the JPG shooter (nothing wrong with that), so in-camera limitations are very important. However, this should be no reason to extrapolate these results to all photographic uses. The fact that the blog article tests one raw converter (DPP) and finds it doesn't scale with the theory does not mean that the idea is flawed in general.
But this is precisely the point. What's an appropriate algorithm, if those most commonly used aren't?

In addition, please do take into account that any Bayer image will be a bit soft at 100%, so the first reduction steps should not be expected to reduce the noise a lot (they do increase apparent sharpness). Noise levels should certainly decrease if you reduce the size further, if an appropriate algorithm is used.
It's the algorithm that performs best, i.e. the one that retains the most imaging information.
--
Andy Westlake
dpreview.com/lensreviews