downsampling to reduce noise - how much?

So, at the end of the day, how do I reduce noise by downsizing? If Phil's approach didn't produce that much of a difference, how do you make a bigger difference?
 
So, at the end of the day, how do I reduce noise by downsizing? If Phil's approach didn't produce that much of a difference, how do you make a bigger difference?
My point is that a proper downsampling algorithm* does NOT change the noise spectrum; rather, it simply moves the Nyquist frequency and lops off any image data (including noise) beyond the new Nyquist frequency. Because unfiltered, spatially random noise rises linearly with spatial frequency, one sees a lower value when moving to a lower Nyquist frequency and measuring the noise there (by the width of the histogram in a uniform patch); that is simply the structure of image noise as a function of frequency. So it's a bit of a misnomer to say that downsizing reduces noise; instead, the measures typically used are simply probing a different part of the noise spectrum after downsizing, and that part of the spectrum happens to have lower noise power.
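A minimal numerical sketch of the "rises linearly" statement, assuming one tots up noise amplitude over rings of constant spatial frequency (the sizes and seed are arbitrary):

import numpy as np

# Spatially uncorrelated ("white") noise has flat power per Fourier
# mode, but the number of modes at radial frequency k grows like k,
# so the total noise amplitude found at frequency k rises roughly
# linearly. This is why histogram-width measurements taken at a
# lower Nyquist frequency come out smaller.
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, (512, 512))
spec = np.abs(np.fft.fftshift(np.fft.fft2(noise)))

y, x = np.indices(spec.shape)
radius = np.hypot(x - 256, y - 256).astype(int)  # radial frequency bin
ring_total = np.bincount(radius.ravel(), weights=spec.ravel())

for k in (32, 64, 128, 192):
    print(k, round(ring_total[k]))  # grows ~linearly with k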

There is really nothing to be done about it, except to get a camera with better noise characteristics (as measured directly in the RAW data). You can apply noise filtering, which reduces the noise power near Nyquist but not lower down in frequency; the result is the blotchy pattern one typically associates with noise reduction.

* One that doesn't remap the noise spectrum through aliasing, as PS Bicubic seems to do. This is avoidable in Photoshop if one first performs a Gaussian blur before downsizing. A tentative recommendation: if one is downsampling an image by a factor X (so that if X = 2, the output image is half the size), then a Gaussian blur of radius X/4 (radius 0.5 for half-size rescaling), followed by the resampling, should help reduce the noise.
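A minimal Pillow sketch of that recipe (the function name, filenames, and the bicubic choice are illustrative; the X/4 radius is the tentative recommendation above):

from PIL import Image, ImageFilter

def blur_then_downsample(img, factor):
    # Gaussian blur of radius factor/4 suppresses detail (and noise)
    # beyond the new Nyquist frequency, so the subsequent resampling
    # has less opportunity to alias noise back into the small image.
    blurred = img.filter(ImageFilter.GaussianBlur(radius=factor / 4))
    new_size = (img.width // factor, img.height // factor)
    return blurred.resize(new_size, Image.BICUBIC)

# Half-size output (X = 2) uses a blur radius of 0.5, as suggested above.
small = blur_then_downsample(Image.open("photo.jpg"), 2)
small.save("photo_half.jpg")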

--
emil
--
http://theory.uchicago.edu/~ejm/pix/20d/
 
Downsampling may not help a lot, fine. But does 12M downsampled to 8M compensate for the poorer performance in low light? Does 12M in good light provide better performance than 8M?

If so (?), more megapixels are good.
 
Not enough of a difference to make it worth it. Try it and see. You can downsample all you want, but I'm with Phil: the difference isn't worth it. Noise is over-rated, IMO. It shows far more on-screen, and unless users are viewing images online at 100% it won't matter; most don't. 800x600 is common, and heck, you could post an ISO 6400 shot with no NR and it would be fine on the web. In print, again, the printing software and the normal image clean-up in post-processing will eliminate most of the noise people complain about.

Again, just try it out and you'll see.

Here's an ISO 12800 shot, cleaned up; printed at 4x6 it's fine.
http://www.pbase.com/timothylauro/image/103857099

Here's an ISO 6400 shot. Details are clearly present. Aside from a little color blotching at the top, which I failed to clean up, it prints just fine.
http://www.pbase.com/timothylauro/image/103710595/original

This one at ISO 6400 prints great. I just printed some for my in-laws tonight.
http://www.pbase.com/timothylauro/image/104734503

So again, it all depends on your use of the image. Not many are going to be shooting at ISO 6400 and upwards for poster prints. If you are, say as a sports shooter, then go buy a super-fast lens and gear to match the job.
--
-tim

NW Columbus/Dublin, Ohio
http://www.pbase.com/timothylauro
 
So basically this whole thread is about Phil's measurements of noise, rather than whether downsampling actually makes much of a difference?

Darn, I thought I could be getting noise-free ISO 12800 shots :P

Thanks for the explanation though!
 
Phil is right. The S/N improvement expected is based on averaging or combining noise from independent samples, and clearly with a Bayer sensor the noise is spread over the pixels used in the interpolation and is not independent; Phil refers to this as grain.

So in a 2:1 downsize there are only two independent green pixels and one each for blue and red. From these you need to generate a new R, G, and B. The best you could expect is to reduce the noise on the green channel by a factor of √2, with no noise reduction on red or blue. This is a simplistic way to view it, but it's the way it is.
Why must you people make it so complicated? Let's say you take a photo
of a church. Look at any region of the scene containing a reasonable
number of pixels, including the whole image. Downsample 4 pixels to 1
using a non-suck downsampling algorithm such as Lanczos. Look at the
same region again. The downsampled version will be approximately 1 stop
better in the noise department, at the cost of having lower resolution.
The number of photosites involved in creating the pixels in the original
image is irrelevant. The demosaicing process is irrelevant, as long as you're
not pixel-peeping. So, once again (say it with me), the most important
single factor in digital image quality by a mile is sensor size. At best, high
pixel density gives photographers more flexibility and at worst it's about
value-neutral. If you can't afford a RAID array and have nothing but
disdain for megapixels, just downsample your images straight away and
archive only those versions. In another thread, Phil says 'more megapixels
and hours of postprocessing is the answer for everything these days'. If
you downsample all your images it's easy to automate as a batch job, and
otherwise there's a tiny per-image cost of seconds, not hours.
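A minimal sketch of such a batch job (folder names, the 2:1 factor, and the JPEG quality are illustrative choices, not anything from the thread):

from pathlib import Path
from PIL import Image

src, dst = Path("originals"), Path("archive")
dst.mkdir(exist_ok=True)

for path in src.glob("*.jpg"):
    img = Image.open(path)
    # Lanczos is one of the "non-suck" resamplers mentioned above.
    half = img.resize((img.width // 2, img.height // 2), Image.LANCZOS)
    half.save(dst / path.name, quality=95)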

Now if Phil would only focus his juju on pressuring manufacturers to use
bigger sensors, instead of hitting them over the head for pixel density...

-Carl
 
average grain size: 2.0 pixels; standard deviation: 11.1

[image: original crop]

[image: crop after 50% Photoshop Bicubic downsampling]
Yes, interesting, but basically this really backfires on him. I can clearly see that noise is reduced by downsampling in the images he posted.

Since dpreview pretty much counts every single grain of noise, it seems questionable that the effect of downsampling to images of equal size is not considered in their reviews. They even complain about the G10's noise level at ISO 80. Now, if you're concerned about that, then you really have to take into account the noise reduction due to downsampling.

Regards

kikl
 
Any reason you didn't report the standard deviation of the second crop? Also, are you just ignoring the visible noise, or are you saying that you just can't see it?
--
Phil Askey
Editor, dpreview.com
 
The demosaicing process is irrelevant, as long as you're
not pixel-peeping.
But if you are, the per-pixel SNR should improve by a factor of 1.5,
since four photosites contribute to an original pixel while nine photosites
contribute to a 4:1 downsampled pixel.
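Spelling out the arithmetic, assuming the per-pixel SNR grows as the square root of the number of independent photosites contributing: sqrt(9/4) = 3/2 = 1.5.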

-Carl
 
Any reason you didn't report the standard deviation of the second crop? Also, are you just ignoring the visible noise, or are you saying that you just can't see it?
No reason whatsoever. I'm not ignoring visible noise, and I'm not saying that I just can't see it. I can clearly see that the second image has far better noise performance.

Regards,

kikl
 
image analysis tool. I filled it with middle gray and added Gaussian
noise. I then resampled the image to 2/3 the image area (the
--
Apart from pattern noise (banding), which is a small component of
sensor read noise, the noise in a raw file IS Gaussian random
noise, to a rather good approximation.

See Figures 1 and 2 at
http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/
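A rough reconstruction of the quoted middle-gray test, for anyone who wants to repeat it (the 600-pixel size and the sigma of 10 levels are arbitrary stand-ins; the original tool and its settings aren't given here):

import numpy as np
from PIL import Image

# Middle gray plus Gaussian noise, resampled to 2/3 of the original
# AREA, i.e. a linear factor of sqrt(2/3) ~ 0.816.
rng = np.random.default_rng(0)
gray = np.clip(rng.normal(128, 10, (600, 600)), 0, 255).astype(np.uint8)
img = Image.fromarray(gray, mode="L")

scale = (2 / 3) ** 0.5
small = img.resize((int(600 * scale),) * 2, Image.LANCZOS)

# For white noise and a good low-pass resampler, the histogram width
# should shrink by roughly the linear factor.
print(np.asarray(img).std(), np.asarray(small).std())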
Most people, perhaps including Phil, may never have "seen" a true raw file, only a demosaiced (with more or less pixel averaging/noise filtering) file.

Just my two oere
Erik from Sweden
 
Most people, perhaps including Phil, may never have "seen" a true raw
file, only a demosaiced (with more or less pixel averaging/noise
filtering) file.
Looking at undemosaiced RAW files is great, as is arguing that theoretically high pixel densities should work just fine. The problem with all these arguments is that they tend to ignore a number of genuine practicalities about how cameras work and how they tend to be used.

Let's list a few of the (generally unproven) assumptions which underlie the whole 'increasing pixel counts indefinitely should be just fine' argument.

1) Sensor quantum efficiencies are never affected by decreasing the photosite area.

2) Sticking a sensor with a much higher pixel count into a real camera which has to achieve a certain performance spec (e.g. frame rate), thereby forcing much higher data transfer/processing rates on the system as a whole, will never have a detrimental effect on the electronic noise within that system.

3) Image noise is always uncorrelated between photosites.

4) That uncorrelated noise will be averaged before demosaicing in order to achieve the theoretical s/n increase properly (not exactly a common approach to data processing in current cameras).

5) If the file is instead resized after RAW conversion, all of those theoretical improvements in noise can be achieved in practice.

I'm sure there are more; I'm not a camera designer and don't pretend to understand this area to the degree of detail required to make definitive statements. But it's pretty clear that making an ultra-high pixel count DSLR camera would be far more complicated than metaphorically sticking together lots of G10 sensors and putting them in a 50D body.
--
Andy Westlake
dpreview.com/lensreviews
 
Most people, perhaps including Phil, may never have "seen" a true raw
file, only a demosaiced (with more or less pixel averaging/noise
filtering) file.
Looking at undemosaiced RAW files is great, as is arguing that
theoretically high pixel densities should work just fine. The
problem with all these arguments is that they tend to ignore a number
of genuine practicalities about how cameras work and how they tend to be used.

Let's list a few of the (generally unproven) assumptions which
underlie the whole 'increasing pixel counts indefinitely should be
just fine' argument.

1) Sensor quantum efficiencies are never affected by decreasing the
photosite area.
Nikon D3: 0.113 electrons/µm² at ISO 400
Panasonic LX3: 0.106 electrons/µm² at ISO 400

If one incorporates the fact that the LX3 is at least a half stop more sensitive than the typical DSLR (DxO got this one wrong; I did my own measurements), then the LX3 sensor is the current winner in quantum efficiency, since the D3 is easily the best sensor among current DSLRs, as the Panasonic is among digicams.
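To spell out the direction of that adjustment (the exact figure is only illustrative): a half stop is a factor of sqrt(2) ≈ 1.41 in exposure, and 0.106 × 1.41 ≈ 0.150 electrons/µm², ahead of the D3's 0.113.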
2) Sticking a sensor with a much higher pixel count into a real
camera which has to achieve a certain performance spec (e.g. frame
rate), thereby forcing much higher data transfer/processing rates on
the system as a whole, will never have a detrimental effect on the
electronic noise within that system.
One would likely have to go to the sort of column-parallel ADC architecture that Sony has been using, so that each ADC can operate at a frequency low enough not to have detrimental effects on noise. Canon also used this architecture in the 50MP APS-H prototype sensor reported a year or so ago, which had reasonable read noise figures.
3) Image noise is always uncorrelated between photosites.
In the raw data, the only noise that is correlated between photosites is pattern (banding) noise, which is typically a small component of electronic read noise. Tell me if I'm wrong, but I don't think that is what you had in mind, since that's certainly not what Phil has been talking about.

In a converted image, there are local correlations introduced by the demosaic process, which I reported on elsewhere in this thread. This is a red herring, as I also explained elsewhere in this thread.

http://forums.dpreview.com/forums/read.asp?forum=1000&message=30150571
http://forums.dpreview.com/forums/read.asp?forum=1000&message=30158640
4) That uncorrelated noise will be averaged before demosaicing in
order to achieve the theoretical s/n increase properly (not exactly a
common approach to data processing in current cameras).

5) If the file is instead resized after RAW conversion, all of those
theoretical improvements in noise can be achieved in practice.
Let me handle both these points together, as they both stem from the same misperception, guided by the pixel-centric view. Downsizing an image does not change the noise spectrum; it simply moves the Nyquist frequency to a different point. Histograms of monotone patches measure the noise power at Nyquist; move the Nyquist frequency, and you measure the noise power at a different part of the image noise spectrum.

Because the power spectrum rises linearly in the absence of pixel correlation, measuring the noise at a lower frequency gives a lower value, which is what pixel-peepers interpret as "downsampling decreases noise".

When pixels are correlated by demosaicing, NR, or other filtering, the noise power levels off starting at the spatial frequency where the filtering kicks in. Then moving the Nyquist frequency by downsampling doesn't seem to reduce noise, or if it does, not by as much; but that conclusion is based on fallacious reasoning, since again downsampling doesn't change the noise power at any scale still present in the downsampled image (unless the downsampling algorithm permits aliasing of noise beyond Nyquist, in which case it's time to get a better downsampling algorithm; I've mentioned two so far in this thread that are better than straight PS Bicubic, which does have the capacity to alias noise upon downsampling).
I'm sure there are more; I'm not a camera designer and don't pretend
to understand this area to the degree of detail required to make
definitive statements.
IMO you've yet to find a show-stopper.
But it's pretty clear that making an
ultra-high pixel count DSLR camera would be far more complicated than
metaphorically sticking together lots of G10 sensors and putting them
in a 50D body.
I'm sure there are engineering hurdles to be overcome. What I don't see so far are limitations arising from the device physics; and there are current devices with 2µm pixels which outperform the best DSLR sensors per unit area. The big question is whether those small-pixel devices can be scaled up while maintaining their performance.

--
emil
--
http://theory.uchicago.edu/~ejm/pix/20d/
 
IMO you've yet to find a show-stopper.
It's not about finding a 'show-stopper'.
I'm sure there are engineering hurdles to be overcome. What I don't see so far are limitations arising from the device physics; and there are current devices with 2µm pixels which outperform the best DSLR sensors per unit area. The big question is whether those small-pixel devices can be scaled up while maintaining their performance.
Precisely. What sometimes seems to be forgotten is that a camera is not just a sensor, but a far more complicated piece of engineering altogether.

--
Andy Westlake
dpreview.com/lensreviews
 
...
Let me handle both these points together, as they both stem from the
same misperception, guided by the pixel-centric view. Downsizing an
image does not change the noise spectrum; it simply moves the
Nyquist frequency to a different point. Histograms of monotone
patches measure the noise power at Nyquist; move the Nyquist
frequency, and you measure the noise power at a different part of the
image noise spectrum.

Because the power spectrum rises linearly in the absence of pixel
correlation, measuring the noise at a lower frequency gives a lower
value, which is what pixel-peepers interpret as "downsampling
decreases noise".

When pixels are correlated by demosaicing, NR, or other filtering,
the noise power levels off starting at the spatial frequency where
the filtering kicks in. Then moving the Nyquist frequency by
downsampling doesn't seem to reduce noise, or if it does, not by as
much; but that conclusion is based on fallacious reasoning, since
again downsampling doesn't change the noise power at any scale that
is still present in the downsampled image (unless the downsampling
algorithm permits aliasing of noise beyond Nyquist, in which case
it's time to get a better downsampling algorithm; I've mentioned two
so far in this thread that are better than straight PS Bicubic, which
does have the capacity to alias noise upon downsampling).
If you regard downsampling as an ideal low-pass filter, then you're just taking away the high-frequency components from the frequency spectrum. Therefore, the shape of the curve is not altered, i.e. the noise spectrum is not changed. The high-frequency components are simply removed; or, put differently, the highest frequency component of the noise spectrum (the Nyquist frequency) is shifted to a lower frequency.

Well, this is my understanding of the paragraph. It presumes that the downsampling algorithm is pretty much an ideal low-pass filter.
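That picture can be made concrete with a brick-wall filter in the Fourier domain. A sketch (illustrative only, since real resamplers such as Lanczos approximate this with finite kernels to limit ringing):

import numpy as np

def ideal_lowpass_downsample(img, factor=2):
    # Crop the centred Fourier spectrum: everything beyond the new
    # Nyquist frequency is removed, nothing below it is altered.
    n = img.shape[0]
    m = n // factor
    spec = np.fft.fftshift(np.fft.fft2(img))
    lo = (n - m) // 2
    cropped = spec[lo:lo + m, lo:lo + m]
    # Rescale so mean brightness is preserved, then invert.
    return np.real(np.fft.ifft2(np.fft.ifftshift(cropped))) * (m / n) ** 2

# White-noise check.
noise = np.random.default_rng(0).normal(128, 10, (512, 512))
print(noise.std(), ideal_lowpass_downsample(noise).std())

For a 2:1 downsize of white noise this prints roughly 10 and 5: the surviving low-frequency part of the spectrum carries a quarter of the power, so the histogram width halves.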

Regards,

kikl
 
It presumes that the downsampling algorithm is pretty much an ideal low-pass filter.
Precisely. Just out of interest, has anyone made one yet, preferably in a form which could run inside a camera at a sensible frame rate on large pixel count files?

--
Andy Westlake
dpreview.com/lensreviews
 
Precisely. Just out of interest, has anyone made one yet, preferably
in a form which could run inside a camera at a sensible frame rate on
large pixel count files?
I do think one should make it very clear if the discussion is about in-camera limitations (JPG engine, etc.) or limitations in the raw data itself.

I have the impression that DPreview caters mostly to the JPG-shooter (nothing wrong with that), so in-camera limitations are very important. However, this should be no reason to extrapolate these results to all photographic uses. The fact that the blog article tests one raw converter (DPP) and finds it doesn't scale with the theory does not mean that the idea is flawed in general.

In addition, please do take into account that any Bayer image will be a bit soft at 100%, so the first reduction steps should not be expected to reduce the noise a lot (they do increase apparent sharpness). Noise levels should certainly decrease if you downsize further, provided an appropriate algorithm is used.

Simon
 
I do think one should make it very clear if the discussion is about
in-camera limitations (JPG engine, etc.) or limitations in the raw
data itself.
That's an artificial distinction. The camera has to be capable of creating usable JPEGs from the RAW data coming off its sensor at a sensible operating speed. This is the point many people seem to be missing in these theoretical discussions; it's all very well if sensor technology can in principle scale to ever-smaller pixel sizes, but that doesn't help if the camera can't deliver a half-decent JPEG. Not everyone shoots RAW; Sigma tried selling a RAW-only camera but doesn't any more.
I have the impression that DPreview caters mostly to the JPG-shooter
(nothing wrong with that), so in-camera limitations are very
important. However, this should be no reason to extrapolate these
results to all photographic uses. The fact that the blog article
tests one raw converter (DPP) and finds it doesn't scale with the
theory does not mean that the idea is flawed in general.
No, but it does show that the idea does not universally hold true by any stretch of the imagination. Bear in mind that it's not so long ago that Canon users were berating us for using ACR instead of DPP, apparently on the grounds that DPP is the final word in IQ (and neatly forgetting the fact that pretty well every forum complains equally that ACR isn't the best for their cameras).
In addition, please do take into account that any Bayer image will be
a bit soft at 100%, so the first reduction steps should not be
expected to reduce the noise a lot (they do increase apparent
sharpness). Noise levels should certainly decrease if you downsize further, provided an appropriate algorithm is used.
But this is precisely the point. What's an appropriate algorithm, if the ones most commonly used aren't?

--
Andy Westlake
dpreview.com/lensreviews
 
But this is precisely the point. What's an appropriate algorithm, if the ones most commonly used aren't?

It's the algorithm that performs best, i.e. the one that retains the most image information.

Now, you may complain that, on the one hand, people will trash you for comparing pictures of different sizes without considering the effects of downsampling, while on the other hand you have to choose the appropriate downsampling algorithm. Well, if you choose the algorithm that performs best in your tests, then I don't think there is any reason for complaining.

Regards

kikl
 
