downsampling to reduce noise - how much?

I do think one should make it very clear if the discussion is about
in-camera limitations (JPG engine, etc.) or limitations in the raw
data itself.
That's an artificial distinction.
I think that's overstating it. I'd hazard a guess that many (but not all!) serious photographers shoot only RAW. I switched my camera to RAW a month after I got it and haven't looked back. Of course, it may be handy to have JPG as a backup, but I don't expect the ultimate in quality. For me, it's emergency-only.

Sigma's RAW-only camera is something entirely different, of course. Being Foveon-based, you're not simply stuck with RAW, you're stuck with their RAW software as well. I could live with the former, not with the latter.

My point is simple: the DPreview staff clearly focuses on JPG performance, which is fine. But please don't use JPG-only arguments to make generic points.
But this is precisely the point. What's an appropriate algorithm, if
those ones most commonly used aren't?
Maybe that should be a reason for DPreview to go on a rant against crappy processing instead of 'the MP race'.

Simon
 
I do think one should make it very clear if the discussion is about
in-camera limitations (JPG engine, etc.) or limitations in the raw
data itself.
That's an artificial distinction.
I think that's overstating it. I'd hazard a guess that many (but not
all!) serious photographers shoot only RAW.
A camera has to be a saleable product, period. That means it has to function well at producing JPEGs for those customers that need it (plenty of 'serious' photographers fall into this category; sports and photojournalism demand decent JPEGs, for example).
I switched my camera
to RAW a month after I got it and haven't looked back. Of course, it
may be handy to have JPG as a backup, but I don't expect the
ultimate in quality. For me, it's emergency-only.
But you are not the only customer for that product, or even a typical one. Many enthusiasts choose RAW (personally, when I have the time to do the processing, I do too), but the bulk of camera users shoot JPEG.
Sigma's RAW-only camera is something entirely different, of course.
Being Foveon-based, you're not simply stuck with RAW, you're stuck with
their RAW software as well. I could live with the former, not with
the latter.
Point is that commercial pressure meant they had to include JPEG in later iterations. You simply lose too many customers without it.
My point is simple: the DPreview staff clearly focuses on
JPG performance, which is fine. But please don't use JPG-only
arguments to make generic points.
Phil's blog wasn't exactly 'JPEG-only'. And those who wish to promote high pixel count sensors need to understand they can't use RAW-only (or even worse, effectively sensor-only) arguments. There's no point in any major manufacturer making a camera which suits 25 enthusiasts but doesn't sell to anyone else.
But this is precisely the point. What's an appropriate algorithm, if
those ones most commonly used aren't?
Maybe that should be a reason for DPreview to go on a rant against
crappy processing instead of 'the MP race'.
One begets the other. We tend to gravitate towards the simplest solution; after all, who really needs all those megapixels?

--
Andy Westlake
dpreview.com/lensreviews
 
But this is precisely the point. What's an appropriate algorithm, if
those ones most commonly used aren't?

--
Andy Westlake
dpreview.com/lensreviews
It's the algorithm that performs best, i.e. retains most of the
imaging information.
That's a good definition of the word 'appropriate', but it hasn't yet answered the question.
Now, you may complain that on the one hand people will trash you
because you compare pictures of different sizes without considering
the effects of downsampling. On the other hand, you have to choose
the appropriate downsampling algorithm. Well, if you choose the
algorithm that performs best in your tests, then I don't think there
is any reason for complaining.
Why downsize to some arbitrary output size at all? Is it so hard to understand that the quality of a print is directly related to the quality of the input file? Garbage in, garbage out; smeared, detail-obliterated noise-reduced files do indeed give ugly prints.

--
Andy Westlake
dpreview.com/lensreviews
 
It presumes that
the down sampling algorithm is pretty much an ideal lowpass filter.
Precisely. Just out of interest, has anyone made one yet, preferably
in a form which could run inside a camera at a sensible frame rate on
large pixel count files?
First of all, the downsampling algorithm need not be an ideal lowpass filter. The ideal lowpass filter is the sinc filter, which is pretty much a step function in frequency space; not practical, because the filter is long-range in position space, but ideal. Signal processing has a long history, and by now rather good approximations to the ideal filter have been developed; the Lanczos filter is a good approximation to the sinc filter, one that works well for downsampling and is very practical to implement in terms of computational cost.
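To make that concrete, here's a minimal sketch of a Lanczos resampler in Python/numpy -- purely my own illustration (the function names, the window size a=3, and the edge handling are my choices, not anything a camera maker has published):

```python
import numpy as np

def lanczos_kernel(x, a=3):
    # Lanczos-windowed sinc: sinc(x) * sinc(x/a) for |x| < a, zero outside.
    x = np.asarray(x, dtype=float)
    k = np.sinc(x) * np.sinc(x / a)
    k[np.abs(x) >= a] = 0.0
    return k

def downsample_1d(signal, factor=2, a=3):
    # Resample one row by `factor`. Stretching the kernel by `factor`
    # moves its cutoff down to the new Nyquist, which is what suppresses
    # aliasing; apply to rows then columns for a 2-D image.
    n_out = len(signal) // factor
    out = np.empty(n_out)
    for j in range(n_out):
        center = j * factor + (factor - 1) / 2.0     # source-space position
        left = int(np.floor(center)) - a * factor + 1
        taps = np.arange(left, left + 2 * a * factor)
        w = lanczos_kernel((taps - center) / factor, a)
        clamped = np.clip(taps, 0, len(signal) - 1)  # simple edge handling
        out[j] = np.dot(w, signal[clamped]) / w.sum()
    return out
```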

I'm not sure what Canon is using for sRAW, but it seems to be pretty good from the samples I've seen posted.

Any linear filter (such as is used in resampling) ends up multiplying the frequency spectrum of the original image by the frequency spectrum of the filter. Non-ideal filters aren't sharply cutoff in frequency space, so remove a bit of additional image data from slightly below Nyquist; and have a bit of support above the new Nyquist, and so alias that data into the downsampled image unless additional (easy to implement) precautions are taken.
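You can check that frequency-space picture directly by taking the DFT of the filter taps and looking at the response around the new Nyquist; again just an illustrative sketch, sampling the 2x Lanczos-3 kernel as in the sketch above:

```python
import numpy as np

# Taps of a 2x Lanczos-3 downsampling filter (kernel stretched by 2).
x = (np.arange(-6, 6) + 0.5) / 2.0
taps = np.sinc(x) * np.sinc(x / 3)
taps /= taps.sum()

# Frequency response = DFT of the taps; 0.25 cycles/pixel is the new
# Nyquist after 2x downsampling. The response stays near 1 over most of
# the passband and is small -- but not exactly zero -- above the cutoff,
# so a little detail below old Nyquist is lost and a little energy above
# new Nyquist aliases back into the result.
H = np.abs(np.fft.rfft(taps, n=512))
freqs = np.fft.rfftfreq(512)  # cycles per (input) pixel
print(H[freqs <= 0.15].min(), H[freqs >= 0.35].max())
```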

--
emil
http://theory.uchicago.edu/~ejm/pix/20d/
 
downsampling to reduce noise - how much?
Downsampling does NOT reduce noise. Never has...never will.

The phrase “reduce noise” makes two very important implications. First, of course, is that noise is reduced (or “less visible”, but that's not always the case.) But second, and equally important, is that detail remains.

When you downsample you reduce both noise AND detail. Therefore the resulting noise level is exactly the same.

Here is how you prove this to yourself. This is a two-step experiment. Use a compact to take an image at ISO 800. Put the image on your screen, zoom to 100% (of course, you'll only see a portion of the image.) The image is very noisy.

Here's the first part. You're probably looking at your screen from 1.5 to 2 feet away. Take a good look at the image to learn the details of the noise within. Next, move about 6 feet further back from the screen and examine the image. The image is smaller and noise is less visible.

Here's the second part. Take the image and create a reduced copy. Whatever the resolution was, just divide by two and use bicubic or whatever is handy. Zoom to 100% and examine this image from normal viewing distance. The image is smaller and noise is less visible.
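(If you'd rather script the reduced copy than do it by hand, a couple of lines of Python with Pillow will do; the filenames here are just placeholders.)

```python
from PIL import Image

img = Image.open("iso800_test.jpg")  # your noisy ISO 800 shot
half = img.resize((img.width // 2, img.height // 2), Image.BICUBIC)
half.save("iso800_half.jpg")
```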

If you switch back and forth to compare the two views (not so easy 'cause you have to keep moving yourself back and forth) you should see that the resized image looks exactly the same as viewing the full-sized image from 6 feet further back. The images are exactly the same.

If you were to print these two images at the same print size (and the full-sized image doesn't exceed the printer resolution) your two prints would have exactly the same apparent noise, but the resized image will have less detail.

Try it.
 
A camera has to be a saleable product, period. That means it has to
function well at producing JPEGs for those customers that need it
(plenty of 'serious' photographers fall into this category; sports
and photojournalism demand decent JPEGs, for example).
Yes, JPG should be included, but IMO it's not a deadly sin if the JPG results don't live up to the RAW results (provided a 'good enough' level is reached). I do realize that this is mostly my personal preference. Also, I did not say that most serious shooters prefer RAW.
My point is simple: the DPreview staff clearly focuses on
JPG performance, which is fine. But please don't use JPG-only
arguments to make generic points.
Phil's blog wasn't exactly 'JPEG-only'. And those who wish to promote
high pixel count sensors need to understand they can't use RAW-only
(or even worse, effectively sensor-only) arguments. There's no point
in any major manufacturer making a camera which suits 25 enthusiasts
but doesn't sell to anyone else.
Phil's blog is JPG with a single RAW converter thrown in and a few things ignored. It's fine, but please indicate clearly that it's JPG performance you're measuring and that the argument therefore applies mostly to JPG results. To prove the same thing in general (RAW as well) would need a lot more data.

Of course, the opposite is also true. People using RAW-centric arguments should say so and not over-generalize their results. And I have to say that DxOmark (which you obviously dislike) does a good job of providing RAW-only measurements - and makes that clear fairly well.
Maybe that should be a reason for DPreview to go on a rant against
crappy processing instead of 'the MP race'.
One begets the other. We tend to gravitate towards the simplest
solution; after all, who really needs all those megapixels?
I agree that this is the simplest solution. However, it's not necessarily the best solution: if everything works according to theory (in the rough sense of the word), the higher-MP camera can deliver either low noise or excellent resolution.

As the no. 1 resource on digital cameras, I think DPreview has an obligation to clearly indicate the motivations behind their statements (in this case: algorithms, storage constraints or perhaps read noise instead of shot noise). Not doing so breeds a blind small-pixels-are-bad attitude among many of your readers, and I don't think that's of much use in the long run.

Simon
 
Why downsize to some arbitrary output size at all? Is it so hard to
understand that the quality of a print is directly related to the
quality of the input file? Garbage in, garbage out; smeared,
detail-obliterated noise-reduced files do indeed give ugly prints.

--
Andy Westlake
dpreview.com/lensreviews
Because a larger image shows more noise than an image downsized to the same size as a smaller image. That's why. Unless you're printing a humongous poster, everybody needs to downsample the image, so this is actually part of everybody's workflow, and people don't complain about ugly prints. Comparing noise levels and image quality must be performed on equal-sized images; that's the whole point.

Regards

kikl
 
downsampling to reduce noise - how much?
Downsampling does NOT reduce noise. Never has...never will.

The phrase “reduce noise” makes two very important implications.
First, of course, is that noise is reduced (or “less visible”, but
that's not always the case.) But second, and equally important, is
that detail remains.

When you downsample you reduce both noise AND detail. Therefore the
resulting noise level is exactly the same.

Here is how you prove this to yourself. This is a two-step
experiment. Use a compact to take an image at ISO 800. Put the
image on your screen, zoom to 100% (of course, you'll only see a
portion of the image.) The image is very noisy.

Here's the first part. You're probably looking at your screen from
1.5 to 2 feet away. Take a good look at the image to learn the
details of the noise within. Next, move about 6 feet further back
from the screen and examine the image. The image is
smaller and noise is less visible.

Here's the second part. Take the image and create a reduced copy.
Whatever the resolution was, just divide by two and use bicubic or
whatever is handy. Zoom to 100% and examine this image from normal
viewing distance. The image is smaller and noise is less visible.

If you switch back and forth to compare the two views (not so easy
'cause you have to keep moving yourself back and forth) you should
see that the resized image looks exactly the same as viewing the
full-sized image from 6 feet further back. The images are exactly
the same.

If you were to print these two images at the same print size (and the
full-sized image doesn't exceed the printer resolution) your two
prints would have exactly the same apparent noise, but the resized
image will have less detail.

Try it.
Good post, I agree

But the reduced image is "apparently" sharper, which is why so many complain that images created for the net are "cheating."

And to be fair, if you read Phil's explanation, he points out that there IS a small reduction in noise. If I recall, four percent, which matches my own observations.

Dave
 
Why downsize to some arbitrary output size at all?
Because a larger image shows more noise than an image downsized to
the same size as a smaller image.
No it doesn't. Noise is the same.
Unless you're printing a humongous poster, everybody needs to downsample the image.
Few people need to downsize images.

For those who print at home, Canon and Epson printers will print 600/720 PPI. You need about 8.6MP to use all the resolution that's available from a Canon printer for 4"x6" prints, and about 12.4MP for an Epson.

If you send images out for commercial printing, then the required resolutions are lower. If your printer uses a Durst Theta 76 then all you need is 5.2MP for an 8"x10". A Noritsu wet lab uses a little more, about 7.2MP. But a Noritsu dry lab has the same resolution as an Epson inkjet, and will print up to 720 PPI. For a 4"x6" you will need about 12.4MP to use up all the resolution that's available.
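The arithmetic behind all of these figures is just print dimension times PPI, in both directions. A quick check in Python (the Durst's 254 PPI is my inference back from the 5.2MP figure; the rest are the values quoted above):

```python
# Megapixels needed to feed a printer at its native input resolution.
def megapixels(ppi, width_in, height_in):
    return (ppi * width_in) * (ppi * height_in) / 1e6

print(megapixels(600, 4, 6))    # Canon @ 600 PPI, 4x6      -> 8.64
print(megapixels(720, 4, 6))    # Epson/Noritsu dry @ 720   -> 12.44
print(megapixels(254, 8, 10))   # Durst Theta 76 @ 254      -> 5.16
print(megapixels(300, 8, 10))   # Noritsu wet lab @ 300     -> 7.2
```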

With the Noritsu dry lab becoming more popular, cameras will have a long way to go before there's a significant amount of unusable resolution.
 
And how many DPI can you see, at normal viewing distances?

--
Phil Askey
Editor, dpreview.com
 
For those who print at home, Canon and Epson printers will print
600/720 PPI. You need about 8.6MP to use all the resolution that's
available from a Canon printer for 4"x6" prints, and about 12.4MP for an Epson.
This is counterintuitive but absolutely true. To extend it a bit further, an Epson R800 (for example) can use about 50Mp of data when making an A4 print.

The one problem, of course, is that you'll seriously struggle to differentiate A4 prints made from 8Mp and 24Mp input files. Factors such as colour accuracy and saturation tend to influence viewers more than detail which is only visible under a loupe.

Printing is a complex process with many variables, but it is unarguable that the output quality is directly related to the data going in.

--
Andy Westlake
dpreview.com/lensreviews
 
Consider the three "grain size" samples from Phil's blog post



Here are the Fourier transforms of the three samples:



The edges of the plot are at the Nyquist frequency for a particular direction; the center is low frequency. For instance, the "fine grain" sample has noise power all the way out to Nyquist, while the "medium grain" and "coarse grain" samples have the noise power cut off at a frequency below Nyquist.

Now, adding up the noise in a band from radius k to radius k+deltak gives the noise power in that frequency band (green is "fine", blue is "medium", and red is "coarse"):



Nyquist is at 128 on the horizontal axis (the overall normalization is not important). For white noise, like the "fine grain" sample, uniformity of the Fourier plot means that the noise spectrum is linear up to Nyquist, since one is summing a uniform RMS value over an annular domain of area 2*Pi*k deltak and the noise power is thus linear in the frequency k. The noise power spectrum of the "medium grain" image starts to tail off before Nyquist, and for the "coarse grain" image it tails off starting even earlier.
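(For anyone who wants to reproduce this kind of plot, here is a minimal numpy sketch of the radially binned noise power -- my own helper, not the exact code used for the graphs here.)

```python
import numpy as np

def radial_noise_power(img, nbins=128):
    # 2-D FFT of the mean-subtracted sample, then sum |F|^2 over
    # annuli k..k+dk; white noise gives a spectrum rising ~linearly in k.
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    ny, nx = img.shape
    y, x = np.indices(img.shape)
    r = np.hypot(x - nx / 2, y - ny / 2)
    edges = np.linspace(0, r.max(), nbins + 1)
    idx = np.clip(np.digitize(r.ravel(), edges) - 1, 0, nbins - 1)
    return np.bincount(idx, weights=power.ravel(), minlength=nbins)
```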

The main point to observe here, however, is the fact that the noise power for both the "medium grain" and "coarse grain" samples is higher than that of the "fine grain" sample, below the points in frequency space where they respectively start to tail off (i.e. in the region where they are rising linearly). In order to make the standard deviations of each sample approximately match, Phil had to make the overall noise power of the coarser grained images higher to start with at scales of perceptual relevance. When each image was downsampled by a factor of two (putting the new Nyquist at 64 on the scale of the above graph), Phil kept this higher power spectrum of the coarse-grained samples at frequencies 0-64, while throwing away the upper half of the power spectrum from 64-128 during the downsampling. And so the std devs of the histograms of the coarser-grained samples are higher, because the noise in the vicinity of spatial frequency 64 was higher in those samples to begin with.

As an aside, note that the coarse-grained samples look more fine-grained after downsampling; that is directly related to the fact that the downsampling keeps the nearly linear part of their noise spectra from 0-64, and throws away the highly damped portion from 64-128 -- and so the resulting plot looks like the "fine-grained" sample that is linear over the whole domain 0-128. The size of the "grain" above is mostly an indication of how much noise reduction has been performed on the image during conversion and post-processing; from examining the power spectra of images given just a decent demosaic, I find that demosaicing lowers the slope of the power spectrum in the upper half of frequencies, but the power doesn't tail off nearly so dramatically as in the above samples.

The upshot is that this example is quite unrelated to the issue of comparing the results from two sensors of equal quantum efficiency and read noise per area. The RAW data from such sensors would have the same slope of noise power up to tail-off, rather than having differing slopes depending on the "graininess" of the image. This is true both before and after RAW conversion.

--
emil
http://theory.uchicago.edu/~ejm/pix/20d/
 
Good post, I agree
TY.
But the reduced image is "apparently" sharper.
Right. That's due to the increased acutance of the edges. Acutance increases as the number of pixels used to describe the edge is reduced. Of course, that doesn't change the fact that resolution is reduced.
And to be fair, if you read Phil's explanation, he points out that
there IS a small reduction in noise. If I recall, four percent, which
matches my own observations.
The reduction in noise that was noted doesn't take detail into account. For the purposes of noise comparison, an image of a solid color is dimensionless...that is, with no noise the image would be exactly the same whether it were scaled up or down (the very reason why it's used.) I would put forth that it is more accurate to say that less noise was recorded, rather than saying noise was reduced. I would say this in order to keep within my previous comment that “reduced noise” means less noise with the same detail. A monochrome image has the same detail no matter what size it is, so the “detail” part of “reduced noise” has effectively been eliminated in the tests.
 
Forgot to mention -- the reason the width of the histogram doesn't go down much in the coarser grained examples upon downsampling is that they don't have much noise power at high frequencies to begin with; so not much that contributes to the width of the histogram is thrown away during the downsampling process.

The width of the histogram is not a good measure of the noise in these examples, since it doesn't capture the structure of the noise power spectrum.
--
emil
http://theory.uchicago.edu/~ejm/pix/20d/
 
First I'd just like to say thank you for the work you do on DPR, and also thanks for taking the time to participate in these discussions.
And how many DPI can you see, at normal viewing distances?
My personal testing of prints indicates that I can detect an overall difference, at 8 inches, between two images where image 1 was printed at 600 DPI and where image 2 was resized to 300 DPI and printed.

I actually find this to be a fascinating question, to which I don't have a good answer. But I have realized that our eyes can actually see the combined effects of detail that, by itself, may be imperceptible.

I came to this realization quite by accident when I was configuring a radio-controlled car simulator called RealRace G2. Along with a phenomenal physics engine, RealRace would also sync its frame rate with the monitor's vertical refresh rate (this program was created when CRTs were still king.) My IBM P260 allows me to set a 150hz refresh rate at 640x480. So I started experimenting with increased refresh rates (nVidia has a refresh rate override that allows you to fix the refresh rate of the monitor.) What I found was nothing short of startling.

As I increased the refresh rate of the monitor, the motions of the car within the simulator became more and more lifelike. The test I eventually ended up with was to crash the car in such a way as to cause it to tumble rapidly. As anyone who races RC cars can tell you, these little cars can roll over at high rates. Not only could I see a difference between 120hz and 150hz, but at 150hz the car's motion in the crash looked incredibly lifelike. The difference wasn't small...it was huge.

I wondered if I could see the difference in a game. In the first-person shooter Unreal Tournament 2003, the game's motion took on a new level of realism. Unfortunately, it was only in the smallest areas that I could get the game's FPS above 150. But when I did...it was amazing. I could see the rockets coming in far more easily, and they were easier to avoid. I didn't lose track of enemies while jumping and dodging. It was like an entirely different game. From then on I always played at 640x480 with a high refresh rate.

The point of all this, of course, is that if I had tried to create some test to somehow “see” the frames between 120hz and 150hz, I would have failed miserably. There's no way I can spot individual frames. But the effect of the frame rate...that was easy to spot. And that is exactly what I see between the two prints I described above. I can't point to any little spot and say “there...I see more detail there.” But the overall image has a greater sense of realism when printed at 600 DPI.
 
However, my initial simplistic view is that area under the curve,
which I would think represents the total noise energy, reduces
considerably in the non-Bayer example, and much less in the Bayer
example. I'm not sure if this is fully represented by quoting the
relative widths of the curves.
The area under the Gaussian curve is essentially the std dev times
the number of pixels, up to a universal, fixed factor of order one.
So we might as well discuss the std dev (width) of the curve. This
quantity is a measure of the noise at the Nyquist frequency, and not
any other frequency.
This last statement is somewhat incorrect. The variance (the square of the standard deviation) is the sum of the noise power over frequency. But since the noise power is generally rising up to the highest frequencies, it is dominated by the noise at the highest frequencies where the noise spectrum has significant support.
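In symbols (this is just Parseval's theorem, up to an overall normalization; P(k) is my shorthand for the radially summed noise power used in the plots):

```latex
\sigma^2 \;\propto\; \sum_{\mathbf{k}} \bigl|\hat{n}(\mathbf{k})\bigr|^2
\;=\; \sum_{k=0}^{k_{\mathrm{Nyq}}} P(k)
```

So a spectrum that keeps rising toward Nyquist puts most of the variance at the highest frequencies, which is exactly what the plots show.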

See for example the noise power spectra from elsewhere in this thread:



--
emil
http://theory.uchicago.edu/~ejm/pix/20d/
 
A camera has to be a saleable product, period. That means it has to
function well at producing JPEGs for those customers that need it
(plenty of 'serious' photographers fall into this category; sports
and photojournalism demand decent JPEGs, for example).
That's nonsense. It can be any RGB-space image format. It needn't
and shouldn't be a lossy one.
But you are not the only customer for that product, or even a
typical one. Many enthusiasts choose RAW (personally, when I have the
time to do the processing, I do too), but the bulk of camera users
shoot JPEG.
And we owe it to them to help them understand why it's ruining humanity.
The web is so bad it hurts my eyes to even look at it anymore.
Facebook re-jpegs everything, with predictable results. Amateurs should
be pitied in all of this; pros have no excuse.
Sigma's RAW-only camera is something entirely different, of course.
Being Foveon-based, you're not simply stuck with RAW, you're stuck with
their RAW software as well. I could live with the former, not with
the latter.
Point is that commercial pressure meant they had to include JPEG in
later iterations. You simply lose too many customers without it.
This is really a self-defeating argument. I think if I make a good camera
I'll be able to sell it. I work at Apple. They said we couldn't sell PCs
without floppy drives. They said we couldn't sell PCs without serial ports.
They said we couldn't sell smartphones without keyboards....
after all, who really needs all those megapixels?
Ansel Adams would need them. I need them. Anyone with a 600dpi
printer needs them...

-Carl
 
