downsampling to reduce noise - how much?

In an unrelated thread I approached this quite differently. My first step was to determine what processing is actually necessary to reduce the resolution of an image by half. Once that is known, the same process can be applied to a step wedge image and the DR can be measured with Imatest. (I'm aware of many of the limitations of this method and personally consider the results useful and valid only so long as the response is linear.)

The camera was a Coolpix 8400. The raw files (SFR target and T4110 step wedge target) were demosaiced in dcraw and left 16-bit linear with no additional processing (sharpening, levels, etc.).

I also showed the results when the resizing was done with only a direct resize in Photoshop. The increase in SFR response past Nyquist shows why people don't see much noise reduction when that's all they do. The fact that the resolution is not halved is also telling.

The first tests are of the unscaled raw. Those are followed by the low-pass filtered and scaled raw, and then the raw file that was only scaled.

When low-pass filtering is applied, we end up with a resolution response very similar in nature to an image taken directly from a CFA-sensor camera with that native pixel density. The resolution response without low-pass filtering is quite different. Likewise, matching the response to what we'd expect from a lower pixel count sensor results in greater increases in measured DR (due to reductions in noise).
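To make the difference concrete, here is a minimal sketch of the two approaches, assuming Python with NumPy and SciPy; it is my own illustration rather than Jay's actual workflow, and the Gaussian sigma is just a plausible choice, not a measured value.

import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
# Stand-in for a 16-bit linear demosaiced frame: a flat gray field plus noise.
img = 0.5 + 0.05 * rng.standard_normal((1024, 1024))

def downsample_direct(a, factor=2):
    """Direct resize by decimation: keep every Nth pixel, no pre-filtering."""
    return a[::factor, ::factor]

def downsample_lowpass(a, factor=2, sigma=0.85):
    """Low-pass (Gaussian blur) first, then decimate.
    sigma is an assumed value chosen to suppress detail above the new
    Nyquist frequency; a real workflow would tune it against SFR targets."""
    return ndimage.gaussian_filter(a, sigma=sigma)[::factor, ::factor]

for name, fn in [("direct", downsample_direct), ("low-pass + scale", downsample_lowpass)]:
    print(f"{name}: noise std = {fn(img).std():.4f}")
# The low-pass version shows a clearly lower standard deviation; the direct
# version keeps most of the pixel-level noise (and the response past Nyquist).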





--
Jay Turberville
http://www.jayandwanda.com
 
"For those who print at home, Canon and Epson printers will print 600/720 PPI"
Can you cite a source for these PPIs? Thanks.
As far as I know there are no official specs on the resulting PPI for Canon and Epson printers. Just like the megapixel war on cameras, the DPI war is still going on in printers. Canon's only spec for color is 9600x2400.

600/720 are the resolution values that ddisoftware (the maker of Qimage) states:

"When set to its highest interpolation setting (Max), Qimage sends data to the print driver at the resolution requested by the driver which is almost always either 600 PPI (Canon, HP and a few other printers) or 720 PPI (Epson printers)."
http://www.ddisoftware.com/qimage/quality/
"So these printers can reproduce an image at 600/720 PPI. This has been my experience with my Canon printer."
How did you verify this at home?
I verified 600 PPI from my Canon printer with the image below, taken with my Canon A710IS in macro mode.

The top row of lines is a high-precision ruler that is graduated in 100ths of an inch.

The bottom row of lines represents a print I made in the following fashion:

1. An image was created of 600 one-pixel-wide lines, alternating black and white (600 pixels tall, but that doesn't really matter).
2. The image was set to 600 PPI.
3. The color depth of the image was set to 8-bit color.

There are a couple of things to notice. The first is that the black lines are wider than the white lines in the print. That's what I get for using cheaper paper instead of my good paper. This is a good example of why the printer drivers don't allow you to select the highest quality levels with cheap papers. The paper simply doesn't have the resolution, and better paper is required to limit the spread of ink. All I did was select the Pro paper so that I could print at the highest quality.

Second, if you count the printed line pairs within one line pair from the ruler, you'll see that there are three printed line pairs for every line pair from the ruler. A line pair from the ruler is 1/100th of an inch wide, so a printed line pair is 1/300th of an inch wide. That means there are six lines for every 1/100th of an inch (three black and three white), making each line 1/600th of an inch wide (okay, okay... because of my cheap paper the black lines are wider. One day I'll redo it with my good paper.)
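For anyone who wants to repeat the test, here is a minimal sketch of how such a target could be generated, assuming Python with Pillow; the filename is just a placeholder.

from PIL import Image
import numpy as np

# 600 one-pixel-wide vertical lines, alternating black and white,
# 600 pixels tall -- exactly one inch square when printed at 600 PPI.
cols = np.arange(600)
row = np.where(cols % 2 == 0, 0, 255).astype(np.uint8)
target = np.tile(row, (600, 1))

im = Image.fromarray(target).convert("RGB")          # 8-bit color, as in step 3
im.save("line_target_600ppi.png", dpi=(600, 600))    # tag the file as 600 PPI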

 
emil:

Thanks for some lengthy and detailed explanations. However, as a retired EE, let me see if I can restate some things in different terms. Firstly, the noise from individual photosites is nearly white and approximately Gaussian, with a bandwidth primarily determined by the AA filter if there is one, and otherwise by the Nyquist frequency. The noise power in this bandwidth is determined by the variance of the noise. Since the noise is nearly white (it will be for a uniform tonal region, as in your example), the noise power per unit of frequency is the same, Nyquist frequency or not.

Any spatial filtering by demosaicing, NR or downsizing will act as a lowpass filter and reduce the noise variance. Since demosaicing occurs first, the effect of other filtering will not be as great.
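As a quick numerical illustration of that point (my own sketch, assuming Python with NumPy/SciPy, not anything from the post above): white Gaussian noise run through a simple box low-pass filter loses variance as the filter gets wider.

import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
noise = rng.standard_normal((2000, 2000))   # white, Gaussian, unit variance

for size in (1, 2, 3, 5):
    filtered = ndimage.uniform_filter(noise, size=size)   # NxN box low-pass
    print(f"box size {size}: variance = {filtered.var():.3f}")
# For an NxN box on white noise, the variance falls roughly as 1/N^2,
# because each output pixel averages N^2 (nearly) independent samples.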

Now what is the proper IQ metric as it relates to noise? In uniform tonal regions the noise variance and correlation distance will be important. Pattern noise is one extreme case. I am not sure what IQ subjective studies have shown.

BTW, what is this "noise figure" that you and DxOMark seem to have in common? Also, to be perfectly clear, I do not at all agree with the DxOMark MP scaling and IQ. DxOMark has all the analytical data needed to indicate sensor performance, such as photon transfer functions and electro-optical transfer characteristics. They should publish these and let knowledgeable people draw their own conclusions about IQ.
--
A bird in the viewfinder is worth...
 
The fact that power spectra slopes match at low frequencies is a
consequence of the scaling properties of noise, as elaborated in
articles at DxOmark and in many, many posts here at DPR, and my noise
tutorial
http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/noise-p3.html#pixelsize
Your tutorial is very interesting. I noticed that noise is always higher in dark areas. Here are two shots with the same exposure (1/80s, f/5.6, ISO 800), taken with an XSi:



full-size: http://www.flickr.com/photos/23784308@N06/3067275176/sizes/l/

The skin is very smooth and clean.



full-size: http://www.flickr.com/photos/23784308@N06/3067272782/sizes/l/

There is much, much noise on the skin... Ooops, sorry - because of downsampling it does not show much ;-P hehe. Look at this 100% crop:



In dark areas at ISO 100 I get much worse noise than in middle gray at ISO 800.

I was curious:

1) What is the source of noise in these cases? Read noise or photon shot noise?

2) What would be your recommendation to avoid this effect? Would exposing for the dark areas (+0 EV? -1 EV?...) and reducing brightness in post-processing help?
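For context on the first question, here is a small sketch of the usual photon-transfer noise model, assuming Python; the electron counts and read-noise values are made-up round numbers for illustration, not XSi measurements.

import math

def snr(signal_e, read_noise_e):
    """Photon-transfer model: shot noise = sqrt(signal), read noise adds
    in quadrature (everything in electrons)."""
    return signal_e / math.sqrt(signal_e + read_noise_e ** 2)

# Hypothetical numbers: a deep shadow at ISO 100 vs a middle gray at ISO 800.
print("shadow,  ISO 100:", round(snr(signal_e=50, read_noise_e=8), 1))
print("midgray, ISO 800:", round(snr(signal_e=2000, read_noise_e=4), 1))
# The shadow's SNR is far lower: few photons mean poor shot-noise statistics,
# and read noise is a larger fraction of such a small signal.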

Thanks,
 
Thanks for the demo. I have been researching this PPI business for a while. The Qimage site doesn't offer a cite either. I can believe that black and white line pair resolution could be this high, but I doubt if a colored image would ever print with this resolution. What do you think?
--
A bird in the viewfinder is worth...
 
Thanks for the demo I have been researching this PPI business for a
while. The Qimage site doesn't offer a cite either. I can believe
that black and white line pair resolution could be this high but I
doubt if a colored image would ever print with this resolution. What
do you think?
That is a color image.

The distinction between a color image and a monochrome image is the bit depth that is specified in the image header. The actual colors that are used don't matter. If you specify 8-bit color then the image is printed as a true-color image even though the image may contain only black and white pixels. In other words, the printer doesn't "look" at the image and decide it only needs black and change its operation. Its operation is based on the image settings.

As I said previously, the bit depth had been set to 8-bit...just like a regular JPEG image. So the image was printed as a regular color image.

I've done a lot of testing so that I know exactly what the printer is going to do when I print an image. I've performed this resolution test with colors as well, and the results are the same. Canon printers print images at 600 PPI.
 
Who cares what the maths of noise reduction are?

If you downres a high resolution image to reduce noise,
you are also throwing away that high resolution.
Period.

Anything else is dross.
Well, there's more to it than that.

For a person who believes that pixel density and increased pixel-level noise are not a problem per se, convincing someone who believes otherwise requires that other person to take a very big step. Therefore, the bridge concept of equivalent noise at common downsample sizes might seem necessary. Of course, many people get confused and think that you have to downsample to get the lower noise, but that is not the case. Simply displaying at a smaller size on a higher density display would do it - the problem is, our printers and our monitors are actually very poor display devices, so we never see our images at a reasonable size in all of their glory. If you had a 100MP monitor, and I had one, and Phil had one, etc, we wouldn't even be having this discussion. We would all KNOW that higher pixel densities like the 50D's are a good thing, as we upsample our images to fill the screen in any style we choose (egg carton, film grain, mosaic tetris pieces, jigsaw pieces, fractal upsizing, etc).

--
John

 
That makes perfect sense, as noise depends on sensor size. All the
latest cameras of a given sensor size should produce images with
the same levels of noise. And when those images are all resized to
the same dimensions, they should all exhibit the same amount of noise.
The fact is, though, that the 12MP resized image has the same amount
of noise as the 24MP image. If you print the images at the same
size, you'll get the same noise.
This is true for shot noise, if quantum efficiency is the same. Read noise has a very loose connection to sensor size.
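A tiny sketch of why that holds for shot noise, assuming Python with NumPy (my own illustration): splitting the same sensor area into more pixels and binning them back together leaves the aggregate SNR unchanged, because it depends on the total photon count.

import numpy as np

rng = np.random.default_rng(2)
n_trials = 100_000
mean_big = 4000   # assumed photons collected over one "large pixel" of area

# Same sensor area, same quantum efficiency, same total light:
# one large pixel vs a 2x2 block of small pixels binned back together.
large  = rng.poisson(mean_big, size=n_trials)
binned = rng.poisson(mean_big / 4, size=(n_trials, 4)).sum(axis=1)

for name, x in [("one large pixel", large), ("2x2 binned small pixels", binned)]:
    print(f"{name}: SNR = {x.mean() / x.std():.1f}")
# Both come out near sqrt(4000) ~ 63: shot noise depends on how many photons
# the area collects, not on how that area is divided into pixels.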

--
John

 
Maybe that should be a reason for DPreview to go on a rant against
crappy processing instead of 'the MP race'.
Exactly. The people whose words carry more weight than the rest of us are unfortunately focusing on the wrong things.

Canon needs to be taken to task for not reducing banding in cameras like the 50D. Performance is good for high ISO, but it can VERY easily be a lot better. I can remove 95% of the banding in Canon high ISO shots. Unfortunately, it takes a bit of manual work, so it's a bit of a chore. It would probably drop the frame rate from 5.5 fps to 5.3 fps to eliminate the banding before the RAW is written.

It does no one any good when people of influence blame this on an innate problem of small pixels. Most P&S cameras have no banding visible except fine little streaks (which fade with reduced display size or downsampling), and that's with pixels 1/8 the size of 50D pixels! It can't be the pixel size! It can only be the number of pixels, and the "FPS race".

--
John

 
Here is a curve fit for the standard deviation of the green channel using an actual picture file (mid-gray average value of 128.9) throughout a fairly large resizing sequence.

Y = a + b/x + c/x^2
a = 0.55
b = 2.934967742
c = -0.81548387

"Y" is the standard deviation and "x" is the "lineal" resizing factor (e.g. x=1.414 would be a 50% reduction in the camera's maximum resolution). Data was taken at x=1 (of course), 2, 4, and 8 for curve fitting (DF Adj r2 = 0.999862097). To use the equation for analysis at any resizing factor I'd suggest sticking with values of x between 1 and 4 for the most accurate prediction.

FWIW, within the range of x-values between 1 and 4, every time you decrease the MP rating by a factor of two via resizing the standard deviation goes down by approximately 17.6 percent; this percentage change is the number I was trying to determine in the first place.
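To check that figure, here is a small sketch (assuming Python) that plugs the fitted coefficients back in and prints the drop in standard deviation for each halving of the pixel count.

def fitted_std(x, a=0.55, b=2.934967742, c=-0.81548387):
    """Joe's fit: green-channel standard deviation vs lineal resize factor x."""
    return a + b / x + c / x ** 2

# Each step multiplies the lineal factor by sqrt(2), i.e. halves the MP count.
factors = [1.0, 2 ** 0.5, 2.0, 2 * 2 ** 0.5, 4.0]
for lo, hi in zip(factors, factors[1:]):
    drop = 1 - fitted_std(hi) / fitted_std(lo)
    print(f"x {lo:.3f} -> {hi:.3f}: std drops {drop * 100:.1f}%")
# Each halving gives a drop of roughly 17-18%, consistent with the ~17.6%
# figure quoted above.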

Regards,

Joe Kurkjian

Galleries: http://www.pbase.com/jkurkjia



SEARCHING FOR A BETTER SELF PORTRAIT
 
so the shot noise, sqrt(n)
But the article hints at the fact that you cannot assume random noise. From the way pixels are created (especially on Bayer sensors), it follows that the noise in neighbouring cells is not independent, so simple calculations based on that assumption cannot be applied.
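A minimal sketch of that point, assuming Python with NumPy/SciPy (my own illustration, using a crude 2x2 average as a stand-in for demosaicing rather than any real algorithm):

import numpy as np
from scipy import ndimage

rng = np.random.default_rng(4)
raw = rng.standard_normal((1000, 1000))     # independent noise per photosite

# Crude stand-in for demosaic interpolation: each output pixel is a
# small-neighbourhood average of the raw samples (here a 2x2 box).
interpolated = ndimage.uniform_filter(raw, size=2)

def neighbour_corr(a):
    """Correlation between horizontally adjacent pixels."""
    return np.corrcoef(a[:, :-1].ravel(), a[:, 1:].ravel())[0, 1]

print("raw sites:    ", round(neighbour_corr(raw), 3))            # ~0, independent
print("interpolated: ", round(neighbour_corr(interpolated), 3))   # clearly > 0
# After interpolation, neighbouring pixels share source samples, so their
# noise is correlated and a naive sqrt(n) averaging argument no longer holds.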
 
so the shot noise, sqrt(n)
But the article hints at the fact that you cannot assume random
noise. From the way pixels are created (especially on Bayer sensors),
it follows that the noise in neighbouring cells is not independent,
so simple calculations based on that assumption cannot be applied.
O.K. I will try to clear this up:

1. What is the spectrum (frequency distribution) of a uniform picture, say a gray picture? It has a peak at f=0 and otherwise it is zero.

2. What is the frequency distribution of that picture including noise? It has a peak at f=0 and it has additional amplitudes at f > 0. In the case of white noise, the amplitude of the spectrum is uniform for f > 0. Otherwise the noise spectrum may assume any arbitrary shape.

3. What is the total noise of that picture? It is the total area covered by the noise spectrum.

4. What happens if a lowpass filter is applied to the image? The amplitude at f=0 remains the same. The noise spectrum is cut off at the cut off frequency of the lowpass filter. All amplitudes above the cut off frequency are essentially removed.

5. What happens to the total amount of noise? Well, since a part of the noise spectrum is removed, the area covered by the noise spectrum is reduced. The total amount of noise is reduced, irrespective of the particular shape of the noise spectrum.
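A numerical sketch of those five points, assuming Python with NumPy/SciPy (my own illustration): the area under the power spectrum of a noisy uniform image shrinks when a low-pass filter is applied, and with it the total noise.

import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
gray = 0.5 + 0.02 * rng.standard_normal((512, 512))   # uniform gray + noise

def noise_power(img):
    """Total noise power: area under the power spectrum, excluding f = 0
    (the mean gray level is subtracted, which zeroes the DC term)."""
    spectrum = np.abs(np.fft.fft2(img - img.mean())) ** 2
    return spectrum.sum() / img.size

print("before low-pass:", round(noise_power(gray), 2))
print("after  low-pass:", round(noise_power(ndimage.gaussian_filter(gray, sigma=1.5)), 2))
# The filter leaves the gray level (f = 0) alone but removes the high-frequency
# part of the noise spectrum, so the total noise is reduced.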

Regards,

kikl
 
......I've got a splitting headache ;-(

Michael
Well, there's no need for a headache. All we are saying is that it makes a considerable difference if you're examining a picture with a magnifying glass or if you're looking at it from a normal viewing distance. Downsampling is equivalent to stepping back and upsampling is equivalent to using a magnifying glass. That's what this thread is about.

If you magnify the image, you are going to see a lot more artifacts, including noise, than if you take a step back. However, DPReview looks at high-megapixel images through a magnifying glass, whereas low-megapixel images are looked upon from a larger distance. That's no decent way of comparing image quality, and that's what this thread is about. Take a look at this page:

http://www.dpreview.com/reviews/canong10/page18.asp

Regards,

kikl
 
I had to search :)

Are you the physicist whose research interests include:

"Superstring/M-theories; specifically, their geometrical structure and their role in grand unification and the quantization of gravity"

?

I must say I always read your posts on here with great interest even though they are more often than not way over my head :)

Jim
Could we just call up Stephen Hawking and have him settle this once
and for all?
--
There is a string theorist participating very actively in this thread
;-)

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
:)

I have had a passion for physics most of my life, I studied physics at Glasgow University in the 70's but really wasn't quite smart enough, ended up in software development instead :)

I can imagine that you have spent a considerable amount of time thinking about noise on the Planck scale, never mind digital cameras :)

Ah well, it's good that you take the time to post here.

Cheers
Jim
I had to search :)

Are you the physicist whose research interests include:

"Superstring/M-theories; specifically, their geometrical structure
and their role in grand unification and the quantization of gravity"

?
Guilty.

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
