what's wrong with pixel peeping?

On the odd occasion that I have had to take a crop from an image, such as an individual from a group shot, I have been disappointed in the shots from my Fuji S6500 - admittedly taken in slightly poor lighting conditions - and this is from a sensor that is renowned for its low-light capability.

So, yes, pixel peeping can be important.

Cheers
 
On the odd occasion that I have had to take a crop from an image such
as an individual from a group shot I have been disappointed in the
shots from my Fuji S6500 - admittedly in slightly poor lighting
You'd be disappointed to see that many P&S cameras give fairly poor 100% views when shots are taken in real-life conditions by average camera users who don't know to monitor shutter speed or to focus the camera themselves instead of letting the camera decide what to focus on. I've dealt with lots of those kinds of photos doing my family's calendar every year, and often I need to use as much of the original composition as possible because the quality is too poor if I crop tighter.

And it isn't surprising, since the price of P&S cameras keeps going down and down, along with the apertures of their lenses. You can't keep quality up if the price you sell things for keeps falling!

However, most photos from P&S cameras will make a decent 4x6 or 5x7 print from nearly the entire composition captured. :)
 
You basically answered the questions of Phil's blog as to why
downsampling didn't remove the noise in that the noise had already
been stripped as if these images were from a camera that had already
applied noise filtering and that proper algorithms and techniques
need to be used. What I would like to see is something provided to
Phil that he could post as an extension to his blog page that would
show him and his public how proper comparisons of cameras of
different pixel densities could be compared effectively.
Since when is Phil given to admitting error?
You made lots of posts showing all the pieces,
but perhaps we should have a post
showing how the Canon G10 isn't really worse than the G9 or
G7 (although I regret it doesn't have native raw, although I think
the CHDK patch now works with it). Any takers? Or maybe I'll take a
stab at it myself. In that way, we are addressing the real issue as
you say.
Unfortunately, I don't have a G10 or G9, and Imaging-Resource is not hosting RAW test files of the G9. So I did the next best thing: I compared two ISO 1600 images from the 40D and 50D, using the following two RAW files:

http://75.126.132.154/PRODS/E40D/FULLRES/E40DhSLI1600.CR2
http://75.126.132.154/PRODS/E50D/FULLRES/E50DhSLI01600.CR2

The files look like this (thumbnail taken from IR):



I converted each file in DPP with NR turned off completely (according to the controls of the converter). I then picked a uniform patch on the background wall on the far right as a suitable region to do a spectral analysis of the noise. The noise spectrum of the Green channel was generated for each of the following (I looked at the Red channel, and the results were not much different):

1. The 40D.
2. The 50D.

3. The 50D resampled by a factor of 2592/3168 (the ratio of the vertical pixel counts) using PS Bicubic.

4. The 50D resampled by this same factor using ImageMagick with the Lanczos filter.

And now for the results. First, the noise spectra of the 40D (red) and the 50D (blue):



How is this data plot to be read? The horizontal axis is spatial frequency (fineness of scale in the image), with the Nyquist frequency (the pixel level) all the way to the right and the coarsest scales all the way to the left. The vertical axis is a measure of the amount of noise. So the data points are a measure of the amount of noise at a given scale in the image. Uncorrelated noise would appear as a rising straight line.
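For anyone who wants to reproduce this kind of plot at home, here is a rough numpy sketch of the radial binning involved. This is not the exact ImageJ/Mathematica pipeline used above - synthetic white noise stands in for the wall patch - but it shows the idea, including why uncorrelated noise rises roughly linearly (an annulus at higher frequency simply contains more modes):

```python
import numpy as np

def radial_noise_power(patch, nbins=32):
    """Total noise power in each radial frequency band of a uniform patch."""
    patch = patch - patch.mean()
    F = np.fft.fftshift(np.fft.fft2(patch))
    power = np.abs(F) ** 2
    fy = np.fft.fftshift(np.fft.fftfreq(patch.shape[0]))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(patch.shape[1]))[None, :]
    r = np.hypot(fy, fx)
    edges = np.linspace(0.0, 0.5, nbins + 1)   # out to Nyquist (0.5 cycles/px)
    idx = np.digitize(r.ravel(), edges)
    return np.array([power.ravel()[idx == i].sum() for i in range(1, nbins + 1)])

# Uncorrelated (white) noise: the power per radial band rises roughly
# linearly with frequency, because higher-frequency annuli hold more modes.
rng = np.random.default_rng(0)
spec = radial_noise_power(rng.normal(0, 3.0, size=(128, 128)))
```

Plotting `spec` against band frequency gives the "rising straight line" described above for uncorrelated noise.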

Hmm. It looks like the 50D is noisier than the 40D at all spatial frequencies! But that is because the Nyquist frequency is a finer scale on the 50D than it is on the 40D. The green data is the 50D data rescaled by the factor 2592/3168=.8182, which is the theoretical factor by which noise should scale -- the noise of the 40D at Nyquist should be .8182 times the noise of the 50D at its Nyquist frequency, and similarly for any spatial scale which is a given fixed fraction of Nyquist for each camera.
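The .8182 factor is just the usual pixel-area argument: collecting the same light over a larger pixel averages away uncorrelated noise in proportion to the linear scale factor. A toy numpy check of that logic, with simple 2x2 block averaging standing in for an ideal resampler:

```python
import numpy as np

rng = np.random.default_rng(1)
noise = rng.normal(0.0, 4.0, size=(512, 512))    # uncorrelated per-pixel noise

# Combine 2x2 pixel blocks: a linear rescaling factor r = 1/2
down = noise.reshape(256, 2, 256, 2).mean(axis=(1, 3))

# Averaging 4 pixels divides the variance by 4, so the std dev scales by r
print(noise.std(), down.std())   # ~4.0 and ~2.0
```

The same square-root-of-area logic applied to the 50D/40D pixel pitches gives the 2592/3168 = .8182 factor used above.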

And indeed, after performing this rescaling, the noise spectra of the two cameras are nearly identical! In fact, the properly rescaled 50D data lies slightly below the 40D noise, because the 50D is in fact slightly more efficient per unit area in capturing photons, and so has a slightly higher S/N per unit area.

Now for the issue of downsampling the image. The 50D image was resampled by a factor of .8182 to match the resolution of the 40D. Two versions were produced: Bicubic downsampling in Photoshop CS3, and Lanczos resampling in ImageMagick. Fourier spectra of the noise were evaluated in ImageJ, then collated and graphed in Mathematica. Here are the results (as before, 40D data in red, 50D data theoretically scaled in green; Photoshop Bicubic resampling in orange; IM Lanczos resampling in black):



The Photoshop resize is a bit more noisy because the algorithm is less precise, but it is quite close to the theoretical prediction. The Lanczos resampling is closer still. All told, the data shows that downsampling the noise spectrum of a higher pixel density camera (that has not been depleted at high frequency due to noise reduction, as in Phil's blog post examples) yields a noise spectrum that quite closely matches the theoretical expectation. The better the resampling algorithm, the closer it comes to the theoretical ideal. The default Lanczos algorithm in ImageMagick is reasonably good, and better can be had if so desired without extraordinary computational cost.
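For anyone wanting to try the Lanczos step without ImageMagick, here is a minimal separable Lanczos-3 resampler in numpy. It is a sketch, not ImageMagick's exact implementation, but it illustrates the key point: when shrinking, the kernel must be stretched so that it low-passes below the new Nyquist frequency:

```python
import numpy as np

def lanczos3(x):
    """Lanczos-3 windowed-sinc kernel (np.sinc is sin(pi x)/(pi x))."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < 3, np.sinc(x) * np.sinc(x / 3), 0.0)

def resample_1d(sig, factor):
    """Resample a 1-D signal by `factor` (< 1 shrinks). When shrinking, the
    kernel is stretched by 1/factor so it also filters out detail above the
    new Nyquist -- the step a good resizer must not skip."""
    n_out = int(round(len(sig) * factor))
    stretch = max(1.0, 1.0 / factor)
    out = np.empty(n_out)
    for i in range(n_out):
        center = i / factor                          # source-space position
        taps = np.arange(int(np.floor(center - 3 * stretch)),
                         int(np.ceil(center + 3 * stretch)) + 1)
        w = lanczos3((taps - center) / stretch)
        samples = sig[np.clip(taps, 0, len(sig) - 1)]   # clamp at the edges
        out[i] = np.dot(w, samples) / w.sum()        # normalize to unit DC gain
    return out

def resample_2d(img, factor):
    """Separable resampling: rows first, then columns."""
    rows = np.stack([resample_1d(row, factor) for row in img])
    return np.stack([resample_1d(col, factor) for col in rows.T]).T

# White noise downsampled by the 40D/50D pixel ratio:
rng = np.random.default_rng(2)
noise = rng.normal(0.0, 1.0, size=(200, 200))
small = resample_2d(noise, 2592 / 3168)
print(small.std())   # an ideal brick-wall filter would give 0.8182
```

With this particular kernel the measured std dev comes out somewhat below the ideal 0.8182, because the windowed sinc already rolls off a little below the new Nyquist; the point is that the noise scales with the resampling factor rather than staying put.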

The takeaway lesson is that two cameras, one with 50% more pixels in the same size sensor, have equal levels of noise when properly compared, either by theoretically scaling the result of a raw conversion, or by downsampling the higher resolution image to match the lower resolution image.

If noise reduction had been performed, either in the RAW converter or after conversion, the noise power at the highest spatial frequencies would have been reduced. But since one can generate the noise spectrum of the lower resolution sensor by downsampling before noise reduction, whatever noise scrubbing one wishes to perform can be made to have the same effect on both the 40D and the downsampled 50D (though I wouldn't recommend it as an optimal post-processing scheme for the 50D).

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
You basically answered the questions of Phil's blog as to why
downsampling didn't remove the noise in that the noise had already
been stripped as if these images were from a camera that had already
applied noise filtering and that proper algorithms and techniques
need to be used. What I would like to see is something provided to
Phil that he could post as an extension to his blog page that would
show him and his public how proper comparisons of cameras of
different pixel densities could be compared effectively.
Based on the degree and nature of his participation in the previous discussions, I don't think he's interested in correcting his blog. But maybe he'll surprise me.
You made
lots of posts showing all the pieces, but perhaps we should have a
post showing how the Canon G10 isn't really worse than the G9 or
G7 (although I regret it doesn't have native raw, although I think
the CHDK patch now works with it). Any takers?
The problem with many of the better technical posts is that they are made in technical terms and forms that I suspect are hard to follow for most people. I suspect that many of the graphs are also difficult for many people to relate to real-world images. Further, the more technically astute posters also (IMO) undermined their position by seeming to bounce back and forth between two different positions on what happens to noise. At times they seem to say that noise is never reduced. At other times they seem to say that with proper filtering, high spatial frequency noise is reduced. In fact, ejmartin's "executive summary" includes the same seeming contradiction.

It would be nice to see something comprehensive, concise and consistent put together on a separate web page that could be used for future reference.
Or maybe I'll take a
stab at it myself. In that way, we are addressing the real issue as
you say.
I don't have the background to put it exactly right or to explain the technically proper analysis, but I do understand the basic idea behind applying a low-pass filter before downsizing. I can also use the DPReview resolution charts to check whether the low-pass filtering I'm applying is approximately correct. When I do that and apply the low-pass filtering to the DPReview test images, it becomes pretty clear that the newer higher-density sensors are noticeably better than the older low-density ones.

Of course, since these images include different JPEG processing, noise reduction, sharpening and perhaps other processing, these comparisons can't be taken as definitive proof. But they should give readers good reason to consider that the DPReview crusade against higher pixel density may very well be a misguided one. I think that what DPReview should be crusading for is larger sensors - for instance, getting back to the 2/3" sizes commonly used in the best compacts from a few years ago.

Here are some ISO 400 and ISO 1600 G10 vs. G7 images. I did not apply any low-pass filtering to the ISO 1600 image, since the heavy noise reduction already applied probably overdid that beforehand. I used a 0.3-pixel Gaussian blur prefilter on the ISO 400 image. A 0.6-pixel blur seems to work well when halving the megapixel count, and 1.2 pixels seems to work well when reducing the megapixels to one fourth (a 50% linear scale reduction) of the original count.







Here are the SFR plots from Imatest comparing the G7 output to the G10 output after applying bicubic resizing only, as well as to the G10 output after a 0.3-pixel Gaussian blur and bicubic resizing were applied. You can see that bicubic resizing alone preserves more resolution than the G7 could deliver. We get a better match with the 0.3-pixel Gaussian pre-filtering.



I did the same thing with ISO 80 and ISO 400 images for the G10 and A710 earlier today. The filtering was a 0.6-pixel Gaussian blur.

http://www.jayandwanda.com/photography/PixDensity/G10_v_A710_a.jpg
http://www.jayandwanda.com/photography/PixDensity/G10_v_A710_b.jpg
http://www.jayandwanda.com/photography/PixDensity/G10_v_A710_c.jpg
http://www.jayandwanda.com/photography/PixDensity/G10_v_A710_SFR.png
http://www.jayandwanda.com/photography/PixDensity/G10_v_A710_SFR_bicubic.png
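The pre-blur-then-resize recipe described above can be sketched in a few lines of Python. Here scipy's Gaussian filter and cubic-spline zoom stand in for Photoshop's Gaussian Blur and bicubic resize - they are not identical algorithms, so treat this as an approximation of the workflow, not a clone of it:

```python
import numpy as np
from scipy import ndimage

def prefiltered_resize(img, scale, sigma):
    """Gaussian pre-blur (sigma in original-image pixels), then cubic resize.
    The cubic interpolator itself does no low-pass filtering when shrinking,
    which is exactly why the small pre-blur is needed."""
    blurred = ndimage.gaussian_filter(img.astype(float), sigma=sigma)
    return ndimage.zoom(blurred, scale, order=3)   # cubic-spline resize

# Halving the megapixel count means a linear scale of 1/sqrt(2); per the
# numbers above, a 0.6-pixel blur works well for that step.
rng = np.random.default_rng(3)
img = rng.normal(128.0, 10.0, size=(200, 200))    # synthetic noisy patch
out = prefiltered_resize(img, 1 / np.sqrt(2), 0.6)
```

Checking the result against a resolution chart, as described above, is still the right way to confirm the sigma is matched to the scale factor.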

--
Jay Turberville
http://www.jayandwanda.com
 
You made
lots of posts showing all the pieces, but perhaps we should have a
post showing how the Canon G10 isn't really worse than the G9 or
G7
BTW, the G7 review conclusion is chock full of complaints about having too many pixels crammed onto the sensor. Further, I was a bit off-base in my previous post. DPReview does fairly consistently call for larger sensors in the compacts. Maybe it would be more memorable though if it wasn't accompanied by so much harping on pixel density.

--
Jay Turberville
http://www.jayandwanda.com
 
Excellent work, Emil! Comments as follows:
What I would like to see is something provided to
Phil
Since when is Phil given to admitting error?
Ah so ;-)
Unfortunately, I don't have a G10 or G9, and Imaging-Resource is not
hosting RAW test files of the G9. So I did the next best thing, I
compared two ISO 1600 images from the 40D and 50D, using the
following two RAW files:
Good choice, as this was also a review where the DPR reviewer commented on "too many MP".
And indeed, performing this rescaling the noise performance of both
cameras' noise spectra are nearly identical! In fact, the properly
rescaled 50D data lies slightly below the 40D noise, because the 50D
is in fact slightly more efficient per unit area in capturing
photons, and so has a slightly higher S/N per unit area.
Does re-sampling in this way actually properly handle the frequencies that would be above the Nyquist frequency of the new re-sampled image? If not, a slight blur filter prior to down-scaling would make the results of the 50D even better.

Also, it should be pointed out that the true demosaiced resolution of a Bayer sensor is quite a bit less than the pixel count suggests, whereas the down-scaled image likely has a higher resolution relative to its total pixel count. Again, compensating for these differences would make the 50D come out even better.

Finally, concerning this test: it worked so well compared to theory because Canon DPP does not do much frequency "spreading" filtering when NR is turned off (as I understand it). The results may not have been so close to theory using ACR, which tends to spread the pixels even with all noise reduction sliders set to zero. This, of course, is understood in the light that ACR has already cut the power of the highest frequencies.

Same comments for the re-sizing as for the re-sampling.
The takeaway lesson is that two cameras, one with 50% more pixels in
the same size sensor, have equal levels of noise when properly
compared, either by theoretically scaling the result of a raw
conversion, or by downsampling the higher resolution image to match
the lower resolution image.
Note that one doesn't have to theoretically scale the raw conversion as one could actually come up with an adaptation of these algorithms to re-scale or down-sample the raw data before raw conversion, in which case I suspect the results would be even closer to ideal.
If noise reduction had been performed, either in the RAW converter or
after conversion, the noise power at the highest spatial frequencies
would have been reduced. But since one can generate the noise
spectrum of the lower resolution sensor by downsampling before noise
reduction, whatever noise scrubbing one wishes to perform can be
made to have the same effect on both the 40D and the downsampled 50D
(though I wouldn't recommend it as an optimal post-processing scheme
for the 50D).
Yes, Noise Reduction (NR) is the bane of digital cameras - especially noise reduction that reduces the resolution of the camera below its best Bayer resolution.

It's interesting to note how NR has progressed, hand in hand with cameras' noise problems, to suit reviews such as the ones on this site that compare noise by using the standard deviation method. The history was as follows:

1) There was no NR applied to JPEGs prior to the 5 MP 1/1.8" sensor models, which in some/most implementations had quite high noise at ISO 400. The best of breed for high-ISO use was the 3 MP Canon S30, which could produce almost usable ISO 800 images.

2) One can observe, across all the reviews done over the years on this website, that non-NR'ed images have excellent image quality with a standard deviation of noise in the shadows of about 1.5 JPEG levels, adequate quality at about 3 JPEG levels, and barely acceptable quality at 5 or 6 JPEG levels.

3) Starting with some follow-up models using the 5 MP and higher sensors, NR was introduced to try to keep the noise standard deviations at higher ISO sensitivities to about 3 JPEG levels in the shadows and no higher than about 5. In this way, higher and higher ISOs were offered by compact cameras in spite of the pixel density increase, although, as you have pointed out, technology also improved.

4) Now, with the very much higher photosite densities that have brought us almost 15 MP in the best compact cameras, we can still produce adequate images at ISO 200, somewhat acceptable ones at ISO 400, and almost acceptable ones at ISO 800 at full resolution, with the main complaints at these higher ISOs being that detail is smeared.

I ask the rhetorical question: why bother with NR, which just serves to throw away detail and give objectionable splotches, when the camera may as well just produce successively smaller images with increasing ISO sensitivity? If it did this, we would still have very high resolution images at the lowest ISOs and still have something like 4 MP or more at usable ISOs of about 800, which is better than we ever had in the past. As you can see, I am completely anti-NR and pro proper down-scaling.

And that, dear OP, is the purpose of pixel peeping as applied to improving overall image quality.

Regards, GordonBGood
 
Comments as follows:
You basically answered the questions of Phil's blog as to why
downsampling didn't remove the noise in that the noise had already
been stripped as if these images were from a camera that had already
applied noise filtering and that proper algorithms and techniques
need to be used. What I would like to see is something provided to
Phil that he could post as an extension to his blog page that would
show him and his public how proper comparisons of cameras of
different pixel densities could be compared effectively.
Based on the degree and nature of his participation in the previous
discussions, I don't think he's interested in correcting his blog.
But maybe he'll surprise me.
You made
lots of posts showing all the pieces, but perhaps we should have a
post showing how the Canon G10 isn't really worse than the G9 or
G7 (although I regret it doesn't have native raw, although I think
the CHDK patch now works with it). Any takers?
The problem with many of the better technical posts is that they are
made in technical terms and forms that I suspect are hard to follow
for most people. I suspect that many of the graphs are also
difficult for many people to relate to real world images. Further,
the more technically astute posters also (IMO) undermined their
position by seeming to bounce back and forth between two different
positions on what happens to noise. At times they seem to say that
noise is never reduced. At other times they seem to say that with
proper filtering, high spatial frequency noise is reduced. In fact,
ejmartin's "executive summary" includes the same seeming contradiction.
I should let Emil answer to any apparent inconsistency, but I gather that the position is that high frequency noise is reduced but the ratio of the noise power to resolution stays the same; thus one can reduce the noise only by reducing the resolution which may as well be by down-sizing in order to avoid the noise "blotches" that are almost worse than the original "fine grained" noise.
It would be nice to see something comprehensive, concise and
consistent put together on a separate web page that could be used for
future reference.
Jay, your work in the post to which I am replying is an excellent start!

I think you are right that these discussions between physicists and engineers are too much for many photographers and just plain camera users to understand, and that is a shame in that they actually show just how much better modern digital cameras are than their predecessors in general.

There are some truisms that Emil (ejmartin) has brought out or observed that should particularly be noted: 1) you can't reduce noise by re-sampling/down-sizing when the noise you would be removing has already been removed by Noise Reduction (NR), and 2) properly down-sizing an already NR'ed image makes it look better even when the standard deviation of the noise is not changed, likely because the resulting fine "grain" pattern of the noise is less objectionable, since it is better able to be averaged away by our visual systems.

To my eyes, for 100% views of images, noise reduction is somewhat acceptable when it doesn't reduce the standard deviation of the noise by more than a factor of two (as compared to no noise reduction at all), thus not increasing the resulting "grain" size of the noise by more than about 50% over the maximum resolvable "grain" for a Bayer sensor. Beyond that, one may as well down-size the image to keep the "grain" fine.

I was going to suggest to Emil that the best way to refute Phil's blog would be to produce equivalent image results from real cameras, rather than charts, that show the opposite of what Phil concludes from his flawed experiment; however, you have already done this in your G10 vs. G7 comparisons, even using JPEGs. The only reason one would dispute your results is that you haven't shown that when you apply these same down-sizing techniques to resolution charts, the resolution drops to that of the lower-resolution camera - although I know you have done this, and you mention it in your post.

I'm sure that Emil can do the same for the Canon 50D vs 40D, whether in raw or JPEG.

With real high-MP cameras showing both all of the resolution and the same or less noise when properly down-sized as compared to their lower-MP predecessors, how can anyone, even the more technically challenged, argue that more MP is necessarily bad?

To me, what is bad is increasing NR with increasing numbers of MP to make standard deviations of the noise fall within standard limits without also downsizing the images to a more appropriate size for their remaining real resolution.

Regards, GordonBGood
 
The problem with many of the better technical posts is that they are
made in technical terms and forms that I suspect are hard to follow
for most people. I suspect that many of the graphs are also
difficult for many people to relate to real world images. Further,
the more technically astute posters also (IMO) undermined their
position by seeming to bounce back and forth between two different
positions on what happens to noise. At times they seem to say that
noise is never reduced. At other times they seem to say that with
proper filtering, high spatial frequency noise is reduced. In fact,
ejmartin's "executive summary" includes the same seeming contradiction.
I should let Emil answer to any apparent inconsistency, but I gather
that the position is that high frequency noise is reduced but
the ratio of the noise power to resolution stays the same; thus one
can reduce the noise only by reducing the resolution which may
as well be by down-sizing in order to avoid the noise "blotches" that
are almost worse than the original "fine grained" noise.
I think the problem is that the context of the discussion shifts back and forth between one of actual manipulation of the image data and one of a theoretically "perfect" scaling, where the image is not actually manipulated but the viewing scale is changed - as you might get by simply backing off from the images. But I'm not sure. Either way, after "suffering" the technical jargon, new terms and a different way of thinking about image information, one doesn't want to be left with this apparent discrepancy. It leaves you wondering if you really got the point.
I think you are right that these discussions between physicists and
engineers are too much for many photographers and just plain camera
users to understand, and that is a shame in that they actually show
just how much better modern digital cameras are than than their
predecessors in general.
Yep. I'm inclined to dig deeper into the technical side than most, and it's an effort for me to stick with it. So I'm figuring that lots of other people are skimming the info at best.
There are some truisms that Emil (ejmartin) has brought out or
observed that should particularly be noted:
Don't get me wrong. I think I learned a lot this weekend. There was some really good information and analysis in there. You just have to stick with it and be willing to set some questions aside for a bit while you move along to the next bit.
I was going to suggest to Emil that the best way to refute Phil's
blog would be to produce equivalent image results from real cameras
rather than charts that show the opposite of what Phil concludes from
his flawed experiment; however you have already done this in your G10
vs. G7 comparisons even using JPEG's. The only reason that one would
dispute your results is that you haven't shown that when you apply
these same down-sizing techniques to resolution charts, the
resolution goes down to the lower resolution camera, although I know
you have done this and you mention it in your post.
I have provided the information via Imatest SFR response curves/charts. I included the results (that show too high a resolution) when bicubic resizing alone is applied. Here is the G7 vs. G10 curve again. If there's a complaint here, I think it would be the lack of an SFR chart at each ISO that is compared.

Here are results for the G10 v. G7 and G10 v. A710 - including what happens with bicubic resizing only.






With real high MP cameras showing both all of the resolution and the
same or less noise when properly down-sized as compared to their
lower MP predecessors, how can anyone, even those more technically
challenged, dispute that more MP is necessarily bad?
Shrug. Yesterday someone was correcting me by referring me to Roger Clark's website. I think the problem is that so much discussion of the fundamentals has centered on pixels and how they work. Now add to this the "curse of 100% pixels" in image evaluation, and some people in leadership positions spreading their own "gospels," and it's no surprise. We really do have a "forest for the trees" problem here.
To me, what is bad is increasing NR with increasing numbers of MP to
make standard deviations of the noise fall within standard limits
without also downsizing the images to a more appropriate size for
their remaining real resolution.
Yep. It seems to me that the main complaint about the G10 at ISO 1600 would be the rather heavy-handed noise reduction being applied. But then, consider that many users have been trained to look for speckles at 100%, and many others are going to resize the image for email, and it makes sense. They do have raw in there for the rest of the world.

--
Jay Turberville
http://www.jayandwanda.com
 
The problem with many of the better technical posts is that they are
made in technical terms and forms that I suspect are hard to follow
for most people. I suspect that many of the graphs are also
difficult for many people to relate to real world images. Further,
the more technically astute posters also (IMO) undermined their
position by seeming to bounce back and forth between two different
positions on what happens to noise. At times they seem to say that
noise is never reduced. At other times they seem to say that with
proper filtering, high spatial frequency noise is reduced. In fact,
ejmartin's "executive summary" includes the same seeming contradiction.
Yes, looking back on that post
http://forums.dpreview.com/forums/read.asp?forum=1000&message=30177171

I can see how it might seem to contradict itself; it could have been better worded.

If one compares a properly downsampled image to its original, the noise power spectrum of the downsampled image will match that of the original up to the (reduced) Nyquist frequency of the downsampled version (as one can infer from the results I posted above from the 40D/50D). In that sense, the noise spectrum of the downsampled image is not changed from its parent. However, the part of the noise spectrum of the original that lies beyond the Nyquist frequency of the downsample is removed by the downsampling (technical caveat -- I'm assuming that the downsample algorithm doesn't alias noise beyond the target Nyquist frequency; a good downsample algorithm will not do this). So in that sense, downsampling reduces noise if there is noise in this band of frequencies in the parent image. In Phil's grain size example,



The noise power spectra are (green is "fine" grain, blue is "medium", and red is "coarse"):



Downsampling by a factor of two halves the Nyquist frequency (from 128 to 64 in the above graph), and so removes the right half of the noise power graph. For the fine grain sample, there is lots of noise power there, and indeed the downsample reduced the std dev from 11.4 to 4.9 according to Phil; for the coarse grain sample, there is very little noise power in the top half of spatial frequencies, and so the std dev only went down from 11.1 to 9.9 upon downsampling, again according to Phil.

So downsampling removes whatever noise is at high frequency in the image; there is a separate question as to how this noise is perceived (a discussion of "grain size") that I won't get into here.
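A quick synthetic check of this grain-size argument is easy to run in numpy. White noise stands in for the "fine" grain and 8-pixel blocks for the "coarse" grain (these are not Phil's actual samples, just noise fields with the same standard deviations he quotes):

```python
import numpy as np

rng = np.random.default_rng(4)
fine = rng.normal(0.0, 11.4, size=(512, 512))        # pixel-sized "grain"
coarse = np.kron(rng.normal(0.0, 11.1, size=(64, 64)),
                 np.ones((8, 8)))                    # same std dev, 8-pixel "grain"

# Crude 2x downsample: average 2x2 pixel blocks
halve = lambda a: a.reshape(a.shape[0] // 2, 2, -1, 2).mean(axis=(1, 3))

print(fine.std(), "->", halve(fine).std())       # drops by about half
print(coarse.std(), "->", halve(coarse).std())   # essentially unchanged
```

The fine-grain noise has most of its power in the top half of spatial frequencies, so the downsample removes a lot of it; the coarse-grain noise has almost none there, so its std dev barely moves - qualitatively the same pattern as Phil's 11.4 -> 4.9 versus 11.1 -> 9.9 numbers.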
I should let Emil answer to any apparent inconsistency, but I gather
that the position is that high frequency noise is reduced but
the ratio of the noise power to resolution stays the same; thus one
can reduce the noise only by reducing the resolution which may
as well be by down-sizing in order to avoid the noise "blotches" that
are almost worse than the original "fine grained" noise.
It would be nice to see something comprehensive, concise and
consistent put together on a separate web page that could be used for
future reference.
I'm thinking about writing something up, but I've already wasted a lot of time on this and it may take some time to get to it.
There are some truisms that Emil (ejmartin) has brought out or
observed that should particularly be noted: 1) You can't reduce
noise by re-sampling/down-sizing when the noise you would be removing
has already been removed by Noise Reduction (NR),
Yes, NR depletes the noise power at high frequency (scales finer than the radius over which the NR filter acts). Then there is little or nothing for the downsample to remove, as Phil found.
and 2) properly
down-sizing an already NR'ed image makes it look better even when the
standard deviation of the noise is not changed, likely because the
resulting fine "grain" pattern of the noise is less objectionable
because it is better able to be averaged away my our visual systems.
That may be so.
I was going to suggest to Emil that the best way to refute Phil's
blog would be to produce equivalent image results from real cameras
rather than charts that show the opposite of what Phil concludes from
his flawed experiment; [...]

I'm sure that Emil can do the same for the Canon 50D vs 40D, whether
in raw or JPEG.
I may post the samples I worked from, at least crops, later tonight.

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
The problem with many of the better technical posts is that they are
made in technical terms and forms that I suspect are hard to follow
for most people. I suspect that many of the graphs are also
difficult for many people to relate to real world images. Further,
the more technically astute posters also (IMO) undermined their
position by seeming to bounce back and forth between two different
positions on what happens to noise. At times they seem to say that
noise is never reduced. At other times they seem to say that with
proper filtering, high spatial frequency noise is reduced. In fact,
ejmartin's "executive summary" includes the same seeming contradiction.
I should let Emil answer to any apparent inconsistency, but I gather
that the position is that high frequency noise is reduced but
the ratio of the noise power to resolution stays the same; thus one
can reduce the noise only by reducing the resolution which may
as well be by down-sizing in order to avoid the noise "blotches" that
are almost worse than the original "fine grained" noise.
What I would say is that proper downsampling throws away the high frequency component of noise together with resolution. Whether the ratio of noise power to resolution stays the same or not depends on the noise power profile. In an image that has had no noise reduction applied, and just an accurate RAW conversion (such as the DPP example above), the noise power is linear with frequency; then halving the resolution halves the noise power at any scale that is kept at a fixed fraction of the resolving power. If the noise power profile is not linear (as is the case if noise reduction is applied), then the ratio of noise power to resolution will not stay the same, because the curve is non-linear.
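A numeric check of that statement, with synthetic white noise and crude 2x2 block averaging standing in for a proper resampler (the frequency band below is the same fixed fraction of each image's own Nyquist):

```python
import numpy as np

def band_rms(img, f_lo, f_hi):
    """RMS noise in a radial frequency band, frequencies in cycles/pixel."""
    img = img - img.mean()
    F = np.fft.fft2(img) / img.size        # normalized so Parseval gives variance
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    rad = np.hypot(fy, fx)
    band = (rad >= f_lo) & (rad < f_hi)
    return np.sqrt((np.abs(F[band]) ** 2).sum())

rng = np.random.default_rng(5)
full = rng.normal(0.0, 1.0, size=(512, 512))           # no NR: flat noise spectrum
half = full.reshape(256, 2, 256, 2).mean(axis=(1, 3))  # halve the resolution

# Same fixed fraction of each image's own Nyquist (0.5 cycles/pixel):
ratio = band_rms(half, 0.20, 0.25) / band_rms(full, 0.20, 0.25)
print(ratio)   # ~0.5: halving the resolution halves the noise at matched scales
```

With a non-flat spectrum (i.e. after noise reduction) the ratio would no longer come out at the scale factor, which is the point being made about non-linear noise power profiles.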
I think the problem is that the context of the discussion shifts back and forth between one of actual manipulation of the image data and one of a theoretically "perfect" scaling, where the image is not actually manipulated but the viewing scale is changed - as you might get by simply backing off from the images. But I'm not sure. Either way, after "suffering" the technical jargon, new terms, and a different way of thinking about image information, one doesn't want to be left with this apparent discrepancy. It leaves you wondering if you really got the point.
It's useful to consider both the result of theoretical scaling and of actual downsampling, to see whether they agree or if there is something wrong with the theory. I hope I have provided the evidence in this thread that shows the two to be in quite pleasing agreement.

Zooming with one's feet to effect a rescaling is similar to theoretical scaling: if one is looking at the noise in a small patch, the eye has a fixed resolving power, and when one backs up, that limiting resolution probes a different, lower scale, and the noise at that scale. It is similar to the effect of downsampling in that noise at scales below the resolving power of the viewer is averaged away when one backs up.

For someone familiar with how MTF curves work, there's another way to say all this that may resonate better. In an optical system with many components, the output spatial frequency response is the result of multiplying the MTFs of the individual components against the spatial frequency content of the incident image. The downsampling filter has an "MTF" (i.e. spatial frequency response) that goes from near one below the new Nyquist frequency to near zero above it. The noise power spectrum is just the spatial frequency content of the image with regard to noise. The downsampled image's noise power is then the product of the original noise power spectrum and the "MTF" of the downsampling filter. The downsampling thus removes the high frequency noise, just as a bad lens throws away high frequency image content through its poor MTF response. Similarly, when you back up from the image, the MTF of your vision throws away the noise and other image data at high frequency (although the MTF curve for human vision is probably less sharply cut off than the "MTF" of a good downsampling filter) by moving the MTF cutoff of your vision to a lower spatial frequency in the image content.

With this in mind, and a bit of understanding of the noise power spectrum and how it is affected by noise reduction, demosaicing, etc, one can understand the effect of downsampling on any given image.
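For the concrete-minded, the "output noise power = input noise power times |MTF| squared" statement can be verified numerically. This is my own toy sketch, not Emil's analysis: synthetic white noise stands in for a uniform test patch, and a separable [1, 2, 1]/4 binomial kernel stands in for the downsampling filter.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
noise = rng.normal(size=(n, n))   # white noise standing in for a flat patch

# Separable [1, 2, 1]/4 binomial low-pass, applied with circular wrap-around,
# as a toy stand-in for a downsampling filter.
def blur_1d(img, axis):
    return (0.5 * img
            + 0.25 * np.roll(img, 1, axis)
            + 0.25 * np.roll(img, -1, axis))

filtered = blur_1d(blur_1d(noise, 0), 1)

# The filter's frequency response ("MTF"), built from the same kernel taps.
kernel_1d = np.zeros(n)
kernel_1d[[0, 1, n - 1]] = [0.5, 0.25, 0.25]
H = np.fft.fft(kernel_1d)                 # 1-D frequency response
H2d = np.abs(np.outer(H, H)) ** 2         # separable 2-D |MTF|^2

ps_in = np.abs(np.fft.fft2(noise)) ** 2   # input noise power spectrum
ps_out = np.abs(np.fft.fft2(filtered)) ** 2

# Measured output spectrum matches the product prediction:
print(np.allclose(ps_out, ps_in * H2d))   # True
```

The same multiplication holds for any linear filter, including a real downsampling filter; only the shape of the |MTF|² curve changes.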

There is a separate discussion to be had regarding the relative merits of downsampling a high resolution image, vs performing NR while keeping the high resolution, vs just keeping the high resolution and its "fine grain" of noise.

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
It would be nice to see something comprehensive, concise and
consistent put together on a separate web page that could be used for
future reference.
I'm thinking about writing something up, but I've already wasted a
lot of time on this and it may take some time to get to it.
Please reconsider. The "time wasted" is trying to prove this with forum posts. If you spent half the time you've spent posting on writing a (web) paper instead, these issues could be resolved by linking to your paper.

As it stands now, it is a continuous game of "whack-a-mole." It doesn't make any difference how many times you prove the point in forum posts. They get totally ignored, and we start over from scratch every time somebody posts (or blogs) something decrying "megapixel madness."

Please summarize your evidence on a web page. In terms that even a New York Times writer can understand.

TIA.

Wayne
 
If it did this, we would
still have very high resolution images at lowest ISO's and still have
something like 4 MP or more at usable ISO's of about 800, which is
better than we ever had in the past. As you can see, I am completely
anti NR and pro proper down-scaling.
Count me in. I hate NR; you almost always lose details unless you have a uniform surface. Recently I even set Lightroom to do 0 chroma NR. Chroma NR is only good for pixel peeping, but I think for prints at higher densities, removing chroma noise will remove some information in the colors.

--
Manu

 
I'm thinking about writing something up, but I've already wasted a
lot of time on this and it may take some time to get to it.
Please reconsider. The "time wasted" is trying to prove this with
forum posts. If you spent half the time you've spent posting on
writing a (web) paper instead, these issues could be resolved by
linking to your paper.
OTOH, if you aren't posting and actively engaging others, you might think you understand where others are coming from, but be quite wrong. So you could end up writing a page that doesn't really accomplish anything.
As it stands now, it is a continuous game of "whack-a-mole." It
doesn't make any difference how many times you prove the point in
forum posts.
Well, I've got news for you. Even with a good web page reference, it will still be a game of "whack-a-mole" - the moles are merely reduced a bit and are a little easier to "whack."

--
Jay Turberville
http://www.jayandwanda.com
 
The problem with many of the better technical posts is that they are
made in technical terms and forms that I suspect are hard to follow
for most people. I suspect that many of the graphs are also
difficult for many people to relate to real world images. Further,
the more technically astute posters also (IMO) undermined their
position by seeming to bounce back and forth between two different
positions on what happens to noise. At times they seem to say that
noise is never reduced. At other times they seem to say that with
proper filtering, high spatial frequency noise is reduced. In fact,
ejmartin's "executive summary" includes the same seeming contradiction.
I couldn't agree with you more!

The minute those charts and graphs come out, most of us stop reading and just go out and start taking pictures and leave the hashing to the techies!

To most photographers, those charts and graphs are meaningless as we really don't look at cameras and lenses that way.

I know that when I see most of these charts and graphs around here, I'm sure that I have that 'deer in the headlights' look!

Some of us just understand photography and the camera, and how to make it do what we want it to do and to produce what we want them to produce.

We don't want or need to know the physics behind what they do.

--
J. D.
Colorful Colorado



Remember . . . always keep your receipt, the box, and everything that came in it!
 
Some of us just understand photography and the camera, and how to
make it do what we want it to do and to produce what we want them to
produce.
But that's the rub - lots of people think they understand things, but just a little bit of probing shows that they don't really. And that'd be OK, but then they start spreading the misinformation. So someone steps up and says that they are wrong, and that means that eventually the challenge is made to "prove it" somehow. And in today's age of sensors and digital sampling, that often means a bit of basic physics - same as it always has, except that there are some new issues and concerns.
We don't want or need to know the physics behind what they do.
So who do you trust for information about what your gear will do or what design approaches suit your needs? The question really isn't one of knowing the physics. The question is one of how much do you need to know to get where you want to go. Basic exposure theory is strongly rooted in basic physics. It always has been.

--
Jay Turberville
http://www.jayandwanda.com
 
The question really isn't one of knowing the physics.
The question is one of how much do you
need to know to get where you want to go. Basic exposure theory is
strongly rooted in basic physics. It always has been.
Do we need to know everything about how a car is made to understand and know how to drive one?

--
J. D.
Colorful Colorado

Remember . . . always keep your receipt, the box, and everything that came in it!
 
Let me assure you that Leonardo was a pixel peeper; so were the Ming China artisans, Ansel Adams, and most other artists through history. Picasso was a pixel-peeping craftsman. All great artists start by mastering the craft, which means knowing exactly what their tools will - and won't - do.

People seem to be afraid of pixel peepers, maybe because they are afraid of the work and effort to master image quality. Let me assure you, I've seen flaws on 13X19 prints that didn't show up clearly at 100% on the screen.

Do you think Michelangelo thought "hey, it will look good from the floor" when he was painting the Sistine Chapel ceiling? Or did he put his best into it when he was a foot away from the ceiling?

Yes, I agree pixel peeping simply for the sake of pixel peeping is pretty silly. But saying "hey, it will look OK if I print small enough" is at least as silly. I see nothing wrong with pixel peeping provided it has a purpose and a goal and is part of a greater craft.
 
Do we need to know everything about how a car is made to understand
and know how to drive one?
I've heard dozens of people say "my new car only costs $x to fill up; my last car cost twice as much to fill up, so I really appreciate the better fuel economy." But what if the new car has the same fuel economy and just a smaller tank? What if the fuel economy is even worse than the old car's? The driver thinks they are measuring fuel economy when they're actually measuring something else entirely.
--
Daniel
 
The only reason that one would
dispute your results is that you haven't shown that when you apply
these same down-sizing techniques to resolution charts, the
resolution goes down to the lower resolution camera, although I know
you have done this and you mention it in your post.
I have provided the information via Imatest SFR response
curves/charts. I included the results (that show too high of a
resolution) when bicubic only is applied. Here is the G7 v. G10
curve again. If there's a complaint here, I think it would be the
lack of an SFR chart at each ISO that is compared.
Again, the technically challenged understand the resolution charts that DPR provides just before the conclusions, but don't really relate to step charts and the step response as they relate to frequencies and resolution. Showing that your filters, applied to a low-ISO G10 shot, reduce it to the same real resolution as the G7 would likely help convince them. You could use the resolution charts for the G10 and the G6 from the DPR reviews (I've done that when I did similar work and tests).

Best regards, GordonBGood
 
Here are the Imaging-Resource test images I used to generate the power spectra (at least, jpegs generated from the TIFFs; I did the analysis directly on the TIFFs, of course):

40D:
http://theory.uchicago.edu/~ejm/pix/20d/posts/tests/Noise/E40DhSLI1600.jpg

50D:
http://theory.uchicago.edu/~ejm/pix/20d/posts/tests/Noise/E50DhSLI01600.jpg

50D, downsampled to 40D dimensions:

http://theory.uchicago.edu/~ejm/pix/20d/posts/tests/Noise/E50DhSLI01600-downsamp.jpg

Warning: these are 4-6MB files.

They were generated from the RAWs posted at Imaging-Resource using DPP with noise reduction turned off. My noise spectrum analysis was taken from the wall on the RHS, just to the right of the yarn skeins. The standard deviations in luminosity (as measured by PS CS3) of those patches are

40D: 3.23
50D: 4.28
50D downsampled: 3.08

These are rough averages over several small windows, to mitigate the effects of overall tonal variation across the patches yet have a statistically valid sample. A conservative estimate of the error in these figures would be ±0.05.

The theoretical reduction in noise from downsampling the 50D to the size of the 40D is the ratio of vertical pixel dimensions, 2592/3168 = 0.8182; the reduction in the measured patch is 3.08/4.28 = 0.72. The additional reduction is due to the additional smoothing applied by PS bicubic resampling relative to an ideal resampling (with a corresponding loss of detail, which is why one typically needs to sharpen after downsampling).
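The arithmetic in that last paragraph can be checked directly; this snippet just re-derives the two ratios from the measurements quoted above.

```python
# Measured luminosity standard deviations quoted above (PS CS3).
std_50d = 4.28
std_50d_down = 3.08

# Theoretical noise ratio from ideal downsampling: the ratio of linear
# (vertical) pixel dimensions, valid for noise with a flat (white) spectrum.
theory = 2592 / 3168
measured = std_50d_down / std_50d

print(round(theory, 4))    # 0.8182
print(round(measured, 2))  # 0.72: extra smoothing from PS bicubic
```

The gap between 0.82 and 0.72 is the extra blur (beyond an ideal resampler) that PS bicubic introduces, which is why sharpening after downsampling is usually needed.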

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
