downsampling to reduce noise - how much?

Unless you're printing a
humongous poster, everybody needs to downsample the image,
Is an 8x10 a humongous poster? 'cause it takes 29mp to print
one at 600dpi.

By the way, to answer Phil's question, the human eye can resolve
1200dpi in high contrast situations, no problem. In a busy color
photo, 600dpi is probably sufficient, but why limit the subject
matter? If I can cherry pick examples, I can find photos that require
no more than 200 dpi. But to really say the technology is working,
we need 1200dpi.

-Carl
 
For those who print at home, Canon and Epson printers will print
600/720 PPI. You need 12MP images to use all the resolution that's
available from a Canon printer for 4"x6" prints.

If you send images out for commercial printing, then the required
resolutions are lower. If your printer uses a Durst Theta 76 then
all you need is 5.2MP for an 8"x10". A Noritsu wet lab uses a
little more, about 7.2MP. But a Noritsu dry lab has the same
resolution as an Epson inkjet, and will print up to 720PPI. For a
4"x6" you will need 12.5MP to use up all the resolution that's
available.
Leaving aside some commercial options, I think it is worth mentioning that HP/Canon/Epson home printers print in dots-per-inch of C, Y, M or K -- not Pixels-per-inch. A 24-bit "full color" pixel on screen is not translated completely into one of those 600 or 720 (or 2400) DPI on paper. Even the nicer 6, 8, and 12-color printing systems are still "dots of component color" -- not PPI in the on-screen sense.
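
A quick way to sanity-check the pixel counts quoted above is to multiply the print dimensions by the input resolution the printer asks for. A minimal sketch in Python (the PPI figures are the ones mentioned in this thread, not manufacturer specs):

def megapixels(width_in, height_in, ppi):
    # image pixels needed to feed the printer one pixel per input position
    return width_in * ppi * height_in * ppi / 1e6

print(megapixels(8, 10, 600))   # ~28.8 MP, the "29mp" 8x10 figure above
print(megapixels(8, 10, 300))   # ~7.2 MP, the Noritsu wet lab figure
print(megapixels(4, 6, 720))    # ~12.4 MP, the 720 PPI 4x6 figure
print(megapixels(4, 6, 600))    # ~8.6 MP at 600 PPI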
 
Unless you're printing a
humongous poster, everybody needs to downsample the image,
Is an 8x10 a humongous poster? 'cause it takes 29mp to print
one at 600dpi.

By the way, to answer Phil's question, the human eye can resolve
1200dpi in high contrast situations, no problem. In a busy color
photo, 600dpi is probably sufficient, but why limit the subject
matter? If I can cherry pick examples, I can find photos that require
no more than 200 dpi. But to really say the technology is working,
we need 1200dpi.
From what viewing distance?

According to Wikipedia,

"For a human eye with excellent acuity, the maximum theoretical resolution would be 50 cycles per degree (1.2 arcminute per line pair, or a 0.35 mm line pair, at 1 m)"

If Wikipedia are correct, 1200dpi is resolvable if the viewing distance is about 12cm (a little over 4 inches).
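
A quick check of that arithmetic, as a Python sketch (the 1.2 arcminute figure is the Wikipedia value quoted above; a line pair at 1200 dpi spans two dots):

import math

line_pair_mm = 2 * 25.4 / 1200            # one line pair at 1200 dpi, ~0.042 mm
angle_rad = (1.2 / 60) * math.pi / 180    # 1.2 arcminutes in radians
print(line_pair_mm / angle_rad)           # ~121 mm, i.e. about 12 cm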

--
emil
http://theory.uchicago.edu/~ejm/pix/20d/
 
From what viewing distance?

According to Wikipedia,

"For a human eye with excellent acuity, the maximum theoretical
resolution would be 50 cycles per degree (1.2 arcminute per line
pair, or a 0.35 mm line pair, at 1 m)"

If Wikipedia are correct, 1200dpi is resolvable if the viewing
distance is about 12cm (a little over 4 inches).

--
emil
Hi emil, thanks for answering this.

Regards,

kikl
 
Leaving aside some commercial options, I think it is worth mentioning
that HP/Canon/Epson home printers print in dots-per-inch of C, Y, M
or K -- not Pixels-per-inch.
Yes, I know. When I said...
"Canon and Epson printers will print 600/720 PPI."
I meant 600/720 pixels from the image per inch.
A 24-bit "full color" pixel on screen is
not translated completely into one of those 600 or 720 (or 2400) DPI
on paper. Even the nicer 6, 8, and 12-color printing systems are
still "dots of component color" -- not PPI in the on-screen sense.
That is not correct. A complete translation to 600 or 720 PPI does occur. Qimage refers to these resolutions as the “native” resolution of the printer (I call them the photo resolutions.) Modern Canon printers will print at 2400 DPI vertically and between 2400 and 9600 DPI horizontally using drop sizes of 1 and 5 picoliters. This provides matrix sizes of between 16 color dots (4x4) and 64 color dots (16x4) to describe a single color pixel from an image at 600 PPI. I don't use Epson printers, but the information I've read indicates that it also uses 16 color dots to describe a single pixel, using an 8x2 matrix.

So these printers can reproduce an image at 600/720 PPI. This has been my experience with my Canon printer.
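
The arithmetic behind that matrix claim is just the ratio of droplet placement resolution to image resolution. A small sketch (the Canon figures are the ones quoted above; the 5760x1440 Epson figure is my assumption, chosen only to reproduce the 8x2 matrix mentioned):

def dots_per_pixel(dpi_h, dpi_v, ppi):
    # ink-dot positions available per image pixel, horizontally and vertically
    return dpi_h // ppi, dpi_v // ppi

print(dots_per_pixel(2400, 2400, 600))  # (4, 4)  -> 16 dot positions per pixel
print(dots_per_pixel(9600, 2400, 600))  # (16, 4) -> 64 dot positions per pixel
print(dots_per_pixel(5760, 1440, 720))  # (8, 2)  -> 16 dot positions per pixel (assumed Epson figures)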
 
And how many DPI can you see, at normal viewing distances?
My personal testing of prints indicates that I can detect an overall
difference, at 8 inches, between two images where image 1 was printed
at 600 DPI and where image 2 was resized to 300 DPI and printed.
I'm just wondering how much of that difference is autosuggestion. For more general results, a double-blind experiment should be done. For personal use, such results as yours may of course still be useful.
I actually find this to be a fascinating question, to which I don't
have a good answer. But I have realized that our eyes can actually
see the combined effects of detail that, by itself, may be
imperceptible.
That is certainly possible, although far from proven.
Not only could I
see a difference between 120hz and 150hz, but at 150hz the car's
motion in the crash looked incredibly lifelike. The difference
wasn't small...it was huge.
The ability to perceive lifelike motion, and the connection of frame rate to it, is a relatively well-known fact. However, it is mostly a separate subject from the resolving ability of human vision.
From then on I always played at 640x480 with a high refresh
rate.
As most serious amateurs and FPS pros do. Nothing new here. With modern graphics cards you can usually increase the resolution to 800x600 or even higher when playing older games, though, since in most cases the frame rate at low resolution is not limited by the graphics card but rather by the processor or memory subsystem.
 
If Wikipedia are correct, 1200dpi is resolvable if the viewing
distance is about 12cm (a little over 4 inches).
Oy...maybe when I was a kid. Right now anything closer than 8 inches starts getting blurry.

Getting old sucks! :p
 
The way this argument is currently going? It's getting stale, this stuff...
 
I have placed two pairs of ISO 25600 files on my SkyDrive at the following link:

http://cid-51df194aca5b7136.skydrive.live.com/browse.aspx/ISO%2025600%20downsampling%20Challenge?uc=4

Shot with a D700 in crummy incandescent lighting. The jpgs are a Small and a Large Fine converted in camera, and the NEFs are 14-bit compressed. Picture Control set to Neutral, Manual WB Incandescent, Manual exposure.

I deliberately chose ISO 25600 to get the nasty shot noise, which of course means that the noise is non-Gaussian.

My prelim theories are:

1. Downsampling does matter, and helps a lot

2. A theory is that downsampling is more successful with big fat photosites and RAW files. This may be why dpreview does not see big differences with the tiny photosite cameras working off jpgs.

Challenge: Using various RAW processors, down-convert the two NEF files and the Large JPG file by a factor of 4 to 3 MPix.

Compare the resulting three downconverted files to the in-camera Small JPG file.
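
For anyone trying the challenge, here is a minimal sketch of the downconversion step, assuming the rawpy and Pillow packages; the file names are placeholders, and a 4:1 reduction in pixel count is a 2:1 reduction per dimension:

import rawpy
from PIL import Image

# NEF route: demosaic with rawpy/LibRaw defaults, then resize by 2:1 per side
rgb = rawpy.imread("D700_ISO25600.NEF").postprocess()   # placeholder file name
nef_img = Image.fromarray(rgb)
nef_img.resize((nef_img.width // 2, nef_img.height // 2),
               Image.LANCZOS).save("nef_down_3mp.tif")

# JPEG route: the same resize applied to the in-camera Large Fine JPEG
jpg_img = Image.open("D700_ISO25600_Large.JPG")         # placeholder file name
jpg_img.resize((jpg_img.width // 2, jpg_img.height // 2),
               Image.LANCZOS).save("jpg_down_3mp.tif")

Different resampling filters (nearest, bilinear, Lanczos) give visibly different noise, so it is worth noting which filter each tool uses.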

Questions:

1. Is jpg downconversion more noisy than RAW downconversion?

2. Does the in-camera downconversion do a better job than with an external processor?
3. Does Nikon NX do a better job than third party tools can do?

4. How does an upconverted 3 MPix in-camera JPG compare to the 12 MPix in-camera JPG? My gut feel, looking at it in ViewNX, is that the upconverted file has a more pleasing appearance noise-wise, but is not as sharp. (Sorry about not using a tripod; it can affect sharpness.)

Carsten Thomsen
 
Hi dpreview folks!

I have made an early contribution about this topic:

http://blog.dpreview.com/editorial/2008/11/sprechen-sie-fu.html#comment-140580062

I think that Phil's findings are perfectly OK. The reason for the noise behaviour is the Bayer interpolation / demosaicing. The Red channel of a 16 Mpixel "Bayer image" has data from 4 million photosites plus interpolated data. Downsampling the image to 4 Mpixels roughly preserves the information (and noise) of these 4 million Red photosites and eliminates the interpolated data. So you are not "averaging out" noise (yet).

However...

a) this applies to the Red and Blue channels, but the Green channel (which has twice as many photosites) is being "averaged out". That's why you see a decrease in noise in Phil's graphs.

b) if you further downsample (e.g. to 2 Mpixels) you will start "averaging out" noise in all channels and getting a significant improvement.

c) all that is relevant for Bayer sensors but not for Foveon sensors (or for film scanners).

Consequences:

1) Yes, downsampling at less than 4:1 does not improve noise by much.

2) But... you need only 3 Mpixels for a 4x6 print (@ 300dpi). While downsampling from 12 Mpixels to 3 Mpixels will not reduce the noise by much, downsampling from a 24 Mpixel image will...

3) From (a): downsampling reduces luminance noise earlier/more strongly than chrominance noise. Shooting B&W in low light might be a good strategy.

From (2) we learn that using a standard image size to compare cameras with very different pixel counts might make sense. But it all depends on how people use their images. Comparing an Alpha 900 with a Nikon D3 at 3 Mpixels might be enlightening, but some users would argue that they print larger than 4x6. Others would argue that a full-HD screen resolves only 2 Mpixels.
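
A crude numerical sketch of points (a) and (b), using numpy. The block replication below is only a stand-in for real demosaicing, so treat the exact numbers loosely; the point is that a channel with 1 independent sample per 2x2 block (red/blue-like) loses almost no noise at a 2:1 linear downsample, while a channel with 1 independent sample per pair (green-like) already improves:

import numpy as np

rng = np.random.default_rng(0)

def replicated_noise(step_y, step_x, n=512):
    # independent noise on a sparse grid, block-replicated as a crude stand-in
    # for the correlation that Bayer interpolation introduces
    true = rng.normal(0, 1, (n // step_y, n // step_x))
    return np.kron(true, np.ones((step_y, step_x)))

def downsample(img, factor):
    # plain block averaging by `factor` per side
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

for name, ch in [("red/blue-like", replicated_noise(2, 2)),
                 ("green-like", replicated_noise(1, 2))]:
    print(name,
          "full:", round(float(ch.std()), 2),
          "2:1:", round(float(downsample(ch, 2).std()), 2),
          "4:1:", round(float(downsample(ch, 4).std()), 2))

Only at the stronger reduction does the red/blue-like channel start averaging independent samples, which matches point (b).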

Thanks Phil for this valuable insight!
 
Except that's mathematically generated totally random noise, not the
type of noise we see from a bayer sensor.
That's true, but in character the noise from a sensor is a lot closer to mathematically generated noise than you think; you seem to be getting your idea of what a sensor generates from conversions. Conversions filter out noise near the Nyquist frequency through demosaicing and NR. All converters use significant NR even when you turn NR "Off" - all that "Off" means is that the converter applies the least amount of NR it wants you to see. All RAW data is much noisier than you seem to think.

When you take your converted grey patches and resample them, you are resampling data that has almost nothing near the Nyquist frequency, despite the fact that this is where the RAW noise is concentrated. Then, when you look at the standard deviations, they are meaningless, because standard deviations of different samples are only comparable at the pixel level when the spectral distribution of the noise is the same.

I just took a patch of sky from a converted RAW, resampled it to 25%, 50%, 200%, and 400%, and laid the versions out in order across the screen (with the original in the middle). I could clearly see that the visible pixel-level noise increased a good amount going from left to right, between each neighbor. Yet the standard deviations of all of them were within a very narrow range; 8.56 for the 25%, and 8.71 for the 400%.

When your eyes clearly show you that the noise at the new 100% is much different, yet the standard deviations are the same, it is very clear that standard deviations are irrelevant in this context. The standard deviation of an image sample is NOT a measure of its noise, either at the pixel level or at the image level. Spectral distributions are a much better measure of both, considering pixel and image frequency, respectively.

Now, when the pro-high-density theorists here are measuring noise with standard deviation, they are almost always talking about real undemosaiced RAW noise, concentrated at the Nyquist frequency (where standard deviation is more meaningful); not reduced, correlated, converted noise.
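
A small numpy/scipy sketch of this effect, using synthetic white noise put through a mild Gaussian blur as a stand-in for the converter's filtering (not the actual sky patch described above): the standard deviation should stay in a narrow range across resampling factors even though the pixel-level appearance changes a lot.

import numpy as np
from scipy.ndimage import gaussian_filter, zoom

rng = np.random.default_rng(0)
raw_like = rng.normal(0, 10, (256, 256))
converted = gaussian_filter(raw_like, sigma=1.0)   # crude stand-in for demosaicing/NR

for factor in (0.25, 0.5, 1.0, 2.0, 4.0):
    resampled = zoom(converted, factor, order=3)   # cubic resampling
    print(f"{int(factor * 100):>3}%  std = {resampled.std():.2f}")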

--
John

 
In addition, please do take into account that any Bayer image will be
a bit soft at 100%, so the first reduction steps should not be
expected to reduce the noise a lot (they do increase apparent
sharpness). Noise levels should certainly decrease if you decrease
further, if an appropriate algorithm is used.
But this is precisely the point. What's an appropriate algorithm, if
the ones most commonly used aren't?
Until recently nobody had to pay much attention to downsampling algorithms for DSLRs; the native input resolutions of printers (300-600 PPI and 360-720 PPI) usually call for upsampling rather than downsampling. To avoid downsampling, one can set the printer driver to the higher print resolutions (dpi) and better weaving, so that the requested (input) native resolution becomes 600 or 720 PPI. A 10 x 7 inch print from an A900 24 MP file is neither downsampled nor upsampled when the print driver requests a 600 PPI input resolution. That will not be the real optical resolution available in the print, but the image is not downsampled before the printer driver processes it. If noise is to become visible in a print, one has to aim for that kind of print quality at that size, or print 4x larger in area at a lower quality setting.

Web images from DSLRs are the exception; noise will not play much of a role there, but aliasing in downsampling could be a problem.
http://www.xs4all.nl/~bvdwolf/main/foto/down_sample/down_sample.htm
http://www.xs4all.nl/~bvdwolf/main/foto/down_sample/example1.htm

It also shows that Photoshop's resampling routines are not the best, despite the program's price and popularity.

It's different with scanned film, though; MF or LF film often has to be downsampled, and the noise can be heavy. Aliasing happens easily then. Qimage has a choice of sampling algorithms plus an adjustable anti-aliasing filter, though it can be really slow on large files. While printer driver upsampling routines have improved a lot over the last ten years, that isn't the case for the downsampling routines and/or anti-aliasing filters in printer drivers.

That's where I got the experience that made me think DxO's "normalization" formula isn't a smart choice to cover low to high resolution sensors with different noise levels. Using up- or downsampling to bring the different sensors in line is bound to deliver deviations from the real quality. 8MP on 30x20 cm means approx. 300 PPI native resolution, not the highest quality choice in printer drivers these days. Noise could also be disguised at that quality. I do not understand why they didn't raise the bar instead and let all the camera systems try to deliver a decent print that's much larger and translate the results of that to a good formula.

How well camera CPUs (often ARM cores) can handle better resampling algorithms in RAW processing is an open question. There are other approaches to delivering better S/N at a lower resolution; Fuji's older sensors and the new Super CCD EXR announced at Photokina are examples where photosites are combined at an early stage of processing.

Ernst Dinkla
 
2. Does the in-camera downconversion do a better job than with an
external processor?
Down-sizing the JPEG gave worse results with nearest-neighbor and better results with Gaussian. But the camera creates the low-res JPEG from a full-res RAW, so it's not really a good comparison.

In any case, you've not accounted for the point I brought up earlier, which is that the concept of “reduced noise” implies retained detail. You can hardly claim reduced noise if you also toss 50% of the detail (or 75%, depending on how you view a resolution/2 image.)
 
That's where I got the experience that made me think DxO's
"normalization" formula isn't a smart choice to cover low to high
resolution sensors with different noise levels. Using up- or
downsampling to bring the different sensors in line is bound to
deliver deviations from the real quality. 8MP on 30x20 cm means approx.
300 PPI native resolution, not the highest quality choice in printer
drivers these days. Noise could also be disguised at that quality. I
do not understand why they didn't raise the bar instead and let all
the camera systems try to deliver a decent print that's much larger
and translate the results of that to a good formula.
Well, that appears reasonable to me. Nevertheless, if you want to compare image quality, you have to compare images of equal size. This may be difficult and there may be obstacles. You may choose to upsample to 10 or 12 MP or downsample to 6 MP, whatever, but you just can't post crops of two images of completely different sizes next to each other and assess the comparative image quality based on those crops. But that's what is being done here, and it's just wrong.

Regards

kikl
 
Sorry Graystar, it looks like my original attempt to reply to this was eaten
by the system. Let me try to reconstruct it...
Downsampling does NOT reduce noise. Never has...never will.
When most people say "reduce noise" they mean "increase SNR".
The classical uncertainty principle limits the product of SNR and
resolution, but happily we can trade one for the other.
If you switch back and forth to compare the two views (not so easy
'cause you have to keep moving yourself back and forth) you should
see that the resized image looks exactly the same as viewing the
full-sized image from 6 feet further back. The images are exactly
the same.
Yes, moving away from a source is equivalent to downsampling it.
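
A quick numpy sketch of that trade-off, with synthetic uncorrelated noise (real sensor noise is only approximately like this): block-averaging by a factor of N per side cuts resolution by N but raises the SNR of a flat patch by roughly N.

import numpy as np

rng = np.random.default_rng(0)
signal = 100.0
patch = signal + rng.normal(0, 20, (1024, 1024))   # flat patch, sigma = 20, SNR = 5

def block_average(img, n):
    # downsample by averaging n x n blocks
    h, w = img.shape
    return img.reshape(h // n, n, w // n, n).mean(axis=(1, 3))

for n in (1, 2, 4, 8):
    print(f"{n}x{n} blocks: SNR ~ {signal / block_average(patch, n).std():.1f}")  # ~5, 10, 20, 40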

-Carl
 
Downsampling does NOT reduce noise. Never has...never will.
When most people say "reduce noise" they mean "increase SNR".
The classical uncertainty principle limits the product of SNR and
resolution, but happily we can trade one for the other.
If you switch back and forth to compare the two views (not so easy
'cause you have to keep moving yourself back and forth) you should
see that the resized image looks exactly the same as viewing the
full-sized image from 6 feet further back. The images are exactly
the same.
Yes, moving away from a source is equivalent to downsampling it.

-Carl
--

Yeah, downsample to a single pixel and tell me the resulting signal to noise ratio. That should settle this question.

Regards

kikl
 
