Pixel density - can the playing field be leveled???

I’ll believe it when I see it. And so far, I haven't seen it.
I just showed you an example of it, but have now included another.
I’m having trouble following all this in relation to the OP’s opening question. The OP asked...
If two camera's
have the same size sensor, and one is a 10 MP, and the other is a 15
MP, and I lower the resolution on the 15 MP to 10, can I expect the
same picture quality for a noise perspective? Of course, this is
putting all of the other camera functions (noise reduction, etc)
aside.
So there are two parts to this question. The first is what happens when you lower the resolution of the image from 15MP to 10MP. The second question is, essentially, what happens when you compare sensors of similar size but different densities.

On resizing...

I said that resizing an image doesn’t change the noise level. It simply makes the noise smaller, as with everything else in the image. When said image is printed at the same size as the original, it will have the same noise.

You’re saying that’s not the case. You’re saying that a “proper” downsampling process starts with a low-pass filter, and you provide images to that effect.

The problem I have with this is that a low-pass filter is a noise reduction process that has nothing to do with resolution reduction. I consider the application of the low-pass filter prior to image reduction to be unjustified when the intent is to perform a qualitative comparison to the original image. I could just as easily apply the low-pass filter to the original image and have matching noise levels but more detail than the reduced image. What is the justification to not do so? There is none. There’s no reason to say, “Applying a low pass filter to the original is unfair to the upcoming comparison.”

The only thing you can say about your processing is that the end result may have less noise (at the expense of detail) than an untouched original. But it’s not going to be better than the original with matching noise reduction applied. At best the noise levels will be the same, which is what I’m saying.

So here’s a challenge. Take a noisy image...
Apply a low pass filter and print at 8x10...
“Properly” resize image to reduce noise and print at 8x10...

Present images of the prints for comparison. If the images show that the resized print has less noise with equal detail, then I’ll reconsider my position.

Here’s an image...
http://home.roadrunner.com/~graystar/yo800.jpg
Here’s the control...
http://home.roadrunner.com/~graystar/yo80.jpg

On the equal sized sensor comparison...

The sensor comparison will simply follow the results above. If, somehow, the downsampling of a high-res image can actually produce a better print (at the same size) than a processed lower-res image, then there’s no question that we would want higher pixel densities on our sensors. So we have to first get clear resolution on the resizing question, which (at least to me) your analysis doesn’t provide, because you didn’t compare the resized image to the original-size, low-pass-filtered image.
 
As a general rule, yes. If all other things could be held equal, they
would be equivalent. Of course, in a real world example, all other
things are never held equal.
This seems to be the only clear, simple answer to the original
question in the entire thread :)

The heated debate shows that the 15MP might be slightly better or
the 10MP might be slightly better, but whatever the difference,
it's not large enough to create consensus.
The point of comparisons is to see the differences, not the
samenesses. If one doesn't have the mental capacity to know when
that matters, that is their problem.

All cameras' output can be reduced to nearly equal - the question is,
what is the best that a camera can do.
No, the question in this thread was neither about that, nor about my mental capacity, thank you.

To use your car analogy, the question was whether a Porsche would be as good as a Prius in a traffic jam. Turberville seemed to answer that question nicely.
 
So there are two parts to this question. The first is what happens
when you lower the resolution of the image from 15MP to 10MP. The
second question is, essentially, what happens when you compare
sensors of similar size but different densities.

On resizing...
I said that resizing an image doesn’t change the noise level. It
simply makes the noise smaller, as with everything else in the image.
When said image is printed at the same size as the original, it will
have the same noise.
This depends on what you mean by "change the noise level". Noise is not one number, it has a frequency spectrum. If by "noise level" one means pixel standard deviation, then downsampling reduces the pixel standard deviation, because the new pixels are aggregates of the old ones, and the noise averages out. However the amount by which the noise is lowered depends on the noise power spectrum of the original, and the downsampling method. Assuming that the downsampling method does not alias the noise, the noise power of the source image at frequencies beyond the Nyquist frequency of the target image is filtered out; then there will be less noise power in the target image.
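To put a number on the simplest case described above (a toy NumPy sketch of pure white noise, my own illustration rather than anything measured in this thread): aggregating pixels averages independent noise samples, so a 3x linear downsample by block-averaging cuts the pixel standard deviation by roughly sqrt(9) = 3.

```python
import numpy as np

rng = np.random.default_rng(0)
src = rng.normal(0.0, 10.0, size=(600, 600))  # pure white noise, sigma = 10

# Downsample 3x linearly by averaging non-overlapping 3x3 blocks;
# each new pixel is the mean of 9 independent noise samples.
dst = src.reshape(200, 3, 200, 3).mean(axis=(1, 3))

print(src.std())  # ~10
print(dst.std())  # ~10/3
```

For noise that is not white (a tilted power spectrum, or noise already correlated between pixels), the reduction factor differs, which is exactly the dependence on the noise power spectrum noted above.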

Aliasing reduces the amount of noise power that is filtered out. Discrete sampling of the source image can shift frequencies beyond the Nyquist frequency of the sampling to frequencies below Nyquist; in that case, noise power that should be absent makes its way into the target image. Since there can be no true detail beyond the Nyquist frequency, any image data beyond the target Nyquist (including noise at those frequencies) that makes its way into the downsampled image is spurious.

For example, suppose one has image detail at the source Nyquist frequency; a sequence of pixel values along a row is thus

a b a b a b a b a b a b

Now suppose one downsamples by a factor 3; sampling every third pixel, the downsampled sequence is

a b a b

Which is what naively would have come from a sequence

a a a b b b a a a b b b

in the source image. This many-to-one frequency map is called aliasing. If there is noise at high frequencies and one allows it to be aliased upon downsampling, it gets lumped in with lower frequency noise and is not filtered out.
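The a/b walkthrough above can be checked in a couple of lines (a toy NumPy sketch, not from anyone's test in the thread): naive 3x decimation maps the source-Nyquist pattern and the genuinely low-frequency pattern to exactly the same output sequence, which is the many-to-one frequency mapping being described.

```python
import numpy as np

a, b = 1, -1  # the two pixel values in the example

# Detail at the source Nyquist frequency: a b a b a b ...
fast = np.array([a, b] * 6)
# A genuinely low-frequency pattern: a a a b b b a a a b b b
slow = np.tile(np.repeat([a, b], 3), 2)

# Naive 3x decimation (keep every third pixel) cannot tell them apart:
print(fast[::3])  # [ 1 -1  1 -1]
print(slow[::3])  # [ 1 -1  1 -1]
```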
You’re saying that’s not the case. You’re saying that a “proper”
downsampling process starts with a low-pass filter, and you provide
images to that effect.

The problem I have with this is that a low-pass filter is a noise
reduction process that has nothing to do with resolution reduction.
On the contrary, it is there to guarantee that there will be no aliasing of high-frequency noise into the target image that shouldn't be there.
I consider the application of the low-pass filter prior to image
reduction to be unjustified when the intent is to perform a
qualitative comparison to the original image. I could just as easily
apply the low-pass filter to the original image and have matching
noise levels but more detail than the reduced image.
No, a low-pass filter will remove both noise and detail beyond the cutoff frequency of the filter. Perhaps you are thinking of noise reduction filters; these are not strictly low-pass filters, since they retain high-frequency detail.
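That distinction is easy to see numerically. Here is a toy sketch (NumPy, with a made-up signal and filter size of my own choosing): a plain box low-pass applied to a noisy step edge reduces the noise in the flat regions, but it also smears the edge, i.e. it costs detail, which an edge-preserving noise filter tries to avoid.

```python
import numpy as np

rng = np.random.default_rng(1)

# A noisy step edge: flat dark region, then flat bright region.
signal = np.concatenate([np.zeros(100), np.ones(100)])
noisy = signal + rng.normal(0.0, 0.05, size=200)

# A 5-tap box filter: a crude low-pass.
kernel = np.ones(5) / 5
filtered = np.convolve(noisy, kernel, mode="same")

# Noise drops in the flat region...
print(noisy[10:90].std(), filtered[10:90].std())

# ...but the step is smeared into a ramp: after filtering, more samples
# sit stranded between the two levels, i.e. the edge got softer.
edge_before = np.sum((noisy > 0.3) & (noisy < 0.7))
edge_after = np.sum((filtered > 0.3) & (filtered < 0.7))
print(edge_before, edge_after)
```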
What is the
justification to not do so? There is none. There’s no reason to
say, “Applying a low pass filter to the original is unfair to the
upcoming comparison.”

The only thing you can say about your processing is that the end
result may have less noise (at the expense of detail) than an
untouched original. But it’s not going to be better than the
original with matching noise reduction applied. At best the noise
levels will be the same, which is what I’m saying.
Jay's processing will have less high-frequency noise, since it has less high-frequency content, provided that aliasing has been tamed, which is why he applied the blur prior to downsampling. Agreed though that this sort of processing is throwing the baby out with the bath water; use of a sophisticated noise filter will retain much high-frequency detail while decreasing high-frequency noise.

--
emil
http://theory.uchicago.edu/~ejm/pix/20d/
 
The OP's question is essentially asking if a resolution reduction will equalize the noise levels. And the answer is that it will tend to.
On resizing...
I said that resizing an image doesn’t change the noise level. It
simply makes the noise smaller, as with everything else in the image.
And I showed measurements that show that the noise level does change.
You’re saying that’s not the case. You’re saying that a “proper”
downsampling process starts with a low-pass filter, and you provide
images to that effect.
Actually, I'm also saying that even an "improper" downsampling will reduce the noise. It just won't be as much as you would get if you actually made sure that you reduced the resolution by an equal amount. Simple resizing of a 15Mp sensor image will typically leave you with more resolution than the 10Mp sensor and will also pass through high spatial frequency noise as new lower spatial frequency noise. Pixel count does not equal resolution.
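That difference between "improper" and properly filtered downsampling shows up even on pure white noise (a toy NumPy sketch of my own, not Jay's actual test): keeping every second pixel aliases the high-frequency noise straight through unchanged, while averaging 2x2 blocks halves it.

```python
import numpy as np

rng = np.random.default_rng(2)
src = rng.normal(0.0, 1.0, size=(400, 400))  # white noise, sigma = 1

decimated = src[::2, ::2]                            # naive: keep every 2nd pixel
averaged = src.reshape(200, 2, 200, 2).mean((1, 3))  # filter-and-downsample

print(decimated.std())  # ~1.0: high-frequency noise aliased through untouched
print(averaged.std())   # ~0.5: each output pixel averages 4 samples
```

A real resampler like bicubic falls somewhere between these two extremes, which is consistent with the point above: a simple resize does reduce noise, just not by as much as a downsample whose resolution reduction actually matches the pixel-count reduction.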
The problem I have with this is that a low-pass filter is a noise
reduction process that has nothing to do with resolution reduction.
Actually, it has everything to do with doing the job correctly and not introducing image aliasing and noise aliasing. I supplied SFR test results that show all of the beyond-Nyquist response that you get if you fail to use a low pass filter. That same test also shows that simply resizing by 50% does not result in a 50% reduction in resolution. Low pass filtering minimizes the aliasing of image detail and image noise. Take another look. The simple resize (far right) using Photoshop's bicubic algorithm only reduced resolution by 50% and results in more response beyond Nyquist.


I consider the application of the low-pass filter prior to image
reduction to be unjustified when the intent is to perform a
qualitative comparison to the original image.
But that's not the purpose. The purpose is to create an image from a higher density sensor that matches the resolution from the same sensor area of a lower resolution sensor.
I could just as easily
apply the low-pass filter to the original image and have matching
noise levels but more detail than the reduced image. What is the
justification to not do so? There is none. There’s no reason to
say, “Applying a low pass filter to the original is unfair to the
upcoming comparison.”
Which "original" image? We are talking about two original images from two different sensors. One lower density and one higher density. The drill is to match output resolutions (not necessarily image sizes) and see what happens to the noise when the higher resolution sensor's output is reduced to equal the resolution of the lower density sensor.
So here’s a challenge. Take a noisy image...
Apply a low pass filter and print at 8x10...
“Properly” resize image to reduce noise and print at 8x10...
Present images of the prints for comparison. If the images show that
the resized print has less noise with equal detail, then I’ll
reconsider my position.
Huh? You've reduced the resolution of both images. That's exactly what the low pass filter does. It removes high spatial frequencies. Of course they will look the same.

And it's beside the point. The question wasn't about printing.
On the equal sized sensor comparison...
The sensor comparison will simply follow the results above. If,
somehow, the downsampling of a high-res image can actually produce a
better print (at the same size) as a processed lower-res image, then
there’s no question that we would want higher pixel densities on our
sensors.
Here's the short point I'd like people to get on pixel density. You almost never give up anything regarding image quality by going with a higher pixel density but given good light and a good lens, you do stand to gain additional detail.
So we have to first get clear resolution on the resizing
question, which (at least to me) your analysis doesn’t provide
because you didn’t compare the resized image to the original size,
low pass filtered image.
The whole point was to answer the question of what happens to noise when you equalize the resolution output of a higher pixel density sensor to match the resolution of a lower pixel density sensor. And that is what I did. And I did it with an example where the pixel densities differed by a fairly radical 4:1 ratio.

The result is that when you reduce the output resolution of the higher density sensor to match the output resolution of the lower resolution sensor, you also end up with similar noise levels.

--
Jay Turberville
http://www.jayandwanda.com
 
Jay's processing will have less high frequency noise, since it has
less high frequencies, provided that aliasing has been tamed, which
is why he applied the blur prior to downsampling. Agreed though that
this sort of processing is throwing the baby out with the bath water;
use of a sophisticated noise filter will retain much high frequency
detail while decresing high frequency noise.
Of course this is probably never going to be the optimal way to process an image if you are looking for the best overall image quality. That wasn't the point of the exercise. Though, when resizing for computer display, it is certainly worth considering doing some low pass filtering before downsampling.

The point of the exercise is not to promote reducing image resolution. It is to show that even with a radical 4:1 pixel density difference, you don't end up with worse noise than you'd have from the same sized sensor with a lower pixel density. We have people worried about 50% increases in pixel density, but in this example a 400% increase in pixel density doesn't hurt noise - given that you hold everything else equal: sensor size, focal length, shutter speed, aperture, processing and output resolution.

The beauty of the higher resolution sensor is that you get more choice. Maybe you are willing to tolerate some high spatial frequency noise in order to get more detail. Or maybe your lighting is so good that the differences in high spatial frequency noise aren't even visible. But if not, as a worst-case scenario, you can always process your higher resolution output to be equivalent to the output you'd get from a lower resolution sensor. There's little reason to worry much about pixel densities.

--
Jay Turberville
http://www.jayandwanda.com
 
I said that resizing an image doesn’t change the noise level. It
simply makes the noise smaller, as with everything else in the image.
And I showed measurements that show that the noise level does change.
And I've made prints (and so have a couple others in the forum in past discussions) showing that downsampled images, when printed, display the same apparent noise as prints made from the full-res image. The only image quality comparisons that matter to me are prints at the same size.

This goes with my primary message to the OP in my first response, which basically said that the noise in the images produced by the 15MP sensor and the 10MP sensor are already the same. No resizing is necessary...printed samples will exhibit the same noise.

If you’re saying that the downsampled 15MP image will have less noise than the original 10MP image, then I say produce the prints that demonstrate it. If you don’t care about prints, then I guess we’re at an impasse.
 
Daniel, Jay, ejmartin - your responses are very informative. I have worked a bit on signal and image processing and I've learned quite a few new points from your posts.

Based on my own understanding of the theory, I would agree that more pixels does not make things worse.

Regarding noise reduction algorithms, I've seen a few approaches: statistical wavelet approaches as described earlier, and non-local means, which tries to remove noise by replacing each pixel with a weighted average of all other pixels in the image, with each weight set by how likely it is that the other pixel has the same underlying intensity as the pixel being replaced. Some interesting work in the literature.
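A minimal sketch of the non-local means idea (my own toy 1D implementation with made-up parameters, not any published code): every sample is replaced by a weighted average of all samples, with weights based on patch similarity, so flat regions get heavily averaged while the step edge is largely preserved.

```python
import numpy as np

def nlm_1d(x, patch=3, h=0.1):
    """Toy non-local means on a 1D signal: each sample becomes a weighted
    average of ALL samples, weighted by similarity of local patches."""
    n = len(x)
    pad = patch // 2
    xp = np.pad(x, pad, mode="edge")
    # Extract the patch around every sample.
    patches = np.stack([xp[i:i + patch] for i in range(n)])
    out = np.empty(n)
    for i in range(n):
        d2 = ((patches - patches[i]) ** 2).mean(axis=1)  # patch distances
        w = np.exp(-d2 / (h * h))                        # similarity weights
        out[i] = np.dot(w, x) / w.sum()
    return out

rng = np.random.default_rng(3)
clean = np.concatenate([np.zeros(50), np.ones(50)])
noisy = clean + rng.normal(0.0, 0.1, size=100)
denoised = nlm_1d(noisy)

# Mean squared error against the clean signal, before and after.
print(((noisy - clean) ** 2).mean(), ((denoised - clean) ** 2).mean())
```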
 
From the horse's mouth:

http://blog.dpreview.com/editorial/2008/11/downsampling-to.html

Short answer, you can't bypass "the disadvantages that come with
higher pixel densities such as diffraction issues, increased
sensitivity towards camera shake, reduced dynamic range, reduced high
ISO performance", according to
http://www.dpreview.com/reviews/canoneos50d/page31.asp . It's physics,
man!

It's understandable that you can be a bit confused because of the misleading posts of some troublemakers (fortunately banned now, owing to DPR's renowned moderation system).
Yes, that's what she said!
 
I said that resizing an image doesn’t change the noise level. It
simply makes the noise smaller, as with everything else in the image.
And I showed measurements that show that the noise level does change.
And I've made prints (and so have a couple others in the forum in
past discussions) showing that downsampled images, when printed,
display the same apparent noise as prints made from the full-res
image. The only image quality comparisons that matter to me are
prints at the same size.
All well and good, but that wasn't the OP's question.
This goes with my primary message to the OP in my first response,
which basically said that the noise in the images produced by the
15MP sensor and the 10MP sensor area already the same. No resizing
is necessary...printed samples will exhibit the same noise.
This may or may not be true. It all depends on the scale at which you print and the scale at which you observe. A number of things could occur.

1) The printing process itself can act as a low pass filter and noise and detail are lost compared to the original image. The result may or may not be equal apparent image quality.

2) The printing process and scale may preserve all of the image detail and noise of both comparison images, but:

a) at the viewing distance, the eye acts as a low pass filter and the noise effectively disappears to the viewer

b) The eye does not act as a low pass filter, and the viewer sees slightly more noise (though at a smaller scale) in the higher resolution image, along with slightly higher detail levels. The general impression of quality is about the same as the lower resolution image - or maybe slightly higher. It probably depends a lot on the image.

3) The printing scale and process (very large print), not only preserves all of the image detail, but shows it at a very large scale. The viewer might more clearly see more apparent noise as well as more image detail in the image from the higher resolution sensor.
If you’re saying that the downsampled 15MP image will have less noise
than the original 10MP image. I say produce the prints that
demonstrate it. If you don’t care about prints, then I guess we’re
at an impasse.
First, there are two original images. One from a 15Mp sensor and one from a 10Mp sensor.

Second, and for the last time, the question was never about printing.

As I think back about all the posts, I don't think you have been describing resolution reduction - which is what the OP asked about. You seem to be mostly talking about image scaling. These are not necessarily the same things.

If we imagine a crop of our two images from two different density sensors on two display devices that show the images at 100% pixels, and then move the higher resolution image further away until it has the same apparent image detail size as the lower resolution image (same viewing scale), we will observe that the size of the noise in the higher resolution image gets smaller and becomes less obtrusive. But, of course we know that the noise did not actually go away or change in any way. It obviously is still there. Depending on the viewing distance and our visual acuity, all of the noise and detail in both images may be preserved and still be viewable. I believe this is what you are attributing to your printing process.

But if we view the images such that the lower resolution image is being viewed where its finest detail is right at the limit of visual acuity, we can be assured that, in the viewer's eye, some noise and some image detail in the higher resolution image hasn't simply become smaller and less obtrusive. It is actually no longer visible. The viewer's eye (lens and/or retina) is acting like a low pass filter. The eye limits the resolution. The image the brain sees is a version with less resolution. This is more akin to what I've been describing. When real resolution is reduced, some real noise is also reduced. This is the answer to the question that the OP posed.

BTW, this goes directly to the point that the more scientifically educated among us have been making - that noise also scales. This also goes directly to a basic error on DPReview's part. They did the right thing with resolution measurement by presenting it normalized to image height, using Line Widths per Image Height (LWPH). This method appropriately accounts for the different sensor sizes and gives the reader a good measure of the real differences in image detail that can be delivered to a final image - all in one number. It would be misleading and confusing to simply give the resolution as line pairs per millimeter (lp/mm). All of the super high density and super small sensors would have better numbers - which would be misleading and/or create a need for the reader to do further calculations to make apples-to-apples comparisons.

When DPReview deals with noise they should do some similar sort of normalization for image resolution or pixel count. But they don't. They give the noise figures in values that amount to "per pixel" values when they should be "per image" values. While technically correct (just as lp/mm are correct), this can be misleading to those that don't understand the scaling/pixel count issues - just as lp/mm can be misleading to people who don't understand or fail to consider the image sensor size factor. The result is confusion, misunderstanding, and incorrect conclusions.
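The normalization being asked for can be sketched like this (the per-pixel noise figures below are hypothetical, chosen purely for illustration; the square-root scaling assumes uncorrelated noise):

```python
import math

def image_referred_noise(per_pixel_sigma, pixel_count, reference_count):
    """Normalize a per-pixel noise figure to a common output pixel count.
    Assumes uncorrelated (white) noise, so averaging k source pixels per
    output pixel divides the noise by sqrt(k)."""
    k = pixel_count / reference_count
    return per_pixel_sigma / math.sqrt(k)

# Hypothetical cameras with the same sensor size but different densities;
# the 15MP camera has 20% more per-pixel noise in this made-up example.
print(image_referred_noise(12.0, 15e6, 10e6))  # 15MP camera viewed at 10MP: ~9.8
print(image_referred_noise(10.0, 10e6, 10e6))  # 10MP camera, already at 10MP: 10.0
```

So a camera with visibly worse per-pixel noise can still come out equal or slightly ahead once the figures are put on a per-image footing, which is exactly the confusion that per-pixel reporting invites.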

--
Jay Turberville
http://www.jayandwanda.com
 
All well and good, but that wasn't the OP's question.
OP’s question was, ultimately, about image quality...

“...can I expect the same picture quality for a noise perspective?”

And image quality can only be fairly evaluated when the images are the same physical size. That’s why prints are important.
2) The printing process and scale may preserve all of the image
detail and noise of both comparison images,
That’s exactly what happens with all current inkjet printers, except for the really small print sizes.
If we imagine a crop of our two images from two different density
sensors on two display devices that show the images at 100% pixels,
and then move the higher resolution image further away until it has
the same apparent image detail size as the lower resolution image
(same viewing scale), we will observe that the size of the noise in
the higher resolution image gets smaller and becomes less obtrusive.
But, of course we know that the noise did not actually go away or
change in any way.
Yes, I’ve described this same scenario in several posts to help people understand why resizing images doesn’t reduce noise.
But if we view the images such that the lower resolution image is
being viewed where its finest detail is right at the limit of visual
acuity, we can be assured that in the viewer's eye that some noise
and some image detail in the higher resolution image hasn't simply
become smaller and less obtrusive. It is actually no longer visible.
The viewer's eye (lens and/or retina) is acting like a low pass
filter. The eye limits the resolution. The image the brain sees is
a version with less resolution. This is more akin to what I've been
describing. When real resolution is reduced, some real noise is also
reduced. This is the answer to the question that the OP posed.
But you’re forgetting the scenario you just described. When the full-res image, with all its information available, is viewed at the same apparent size as your resized example, a similar effect will occur...the image will appear to have less noise. What both images really have is simply less discernable detail...one because the detail isn’t there, and the other because the detail is too small to see. The visual results are exactly the same.

Only a printer has the resolution to create detail smaller than we can discern. That’s why I say that to prove that your 15 MP image actually has less noise when resized to 10 MP, you have to print every pixel of the original 15MP image at 600DPI and then print the 10MP image at the same size. If you’re right then the two images should have the same discernable detail, while the 10 MP image has less apparent noise.

But I can already tell you...that’s not going to happen. The images will look the same, and will match a 10 MP print from a 10 MP sensor.

There’s no side-by-side comparison you can display on a computer screen that will demonstrate this. You need to produce images at the same physical size with different resolutions.

I have two 21” CRTs side by side and I’ve tried viewing a 1600x1200 image and the same image resized to 800x600 after having a 1-pixel Gaussian blur applied. I step back about 8 ft to ensure that previously seen minute detail in the 1600x1200 image is no longer discernable, and the images look identical.

But maybe I did something wrong in the resizing. If you take the image I posted earlier, cut a 1600x1200 crop from it, resize it to 800x600 and post it, then I can look again to see if I notice less noise on the resized image. Otherwise, only a printer will have the resolution range to demonstrate your scenario.
 
“...can I expect the same picture quality for a noise perspective?”

And image quality can only be fairly evaluated when the images are
the same physical size. That’s why prints are important.
He asked,"If two camera's have the same size sensor, and one is a 10 MP, and the other is a 15 MP, and I lower the resolution on the 15 MP to 10, can I expect the same picture quality for a noise perspective?"

And he asked, "Can I lower the resolution to get the same reduction in noise that the lower res camera has?"

In both cases, he specifically asks about lowering the resolution of the image, not simply scaling - something you are conveniently leaving out. Nowhere does he mention printing.

You said that lowering the resolution does not reduce noise. That is incorrect. I've demonstrated, in a couple of ways that reducing resolution also reduces noise. High spatial frequency noise will be reduced when resolution is reduced. ejmartin has confirmed this point. The scientific fact is that noise will be reduced.

Note that I said when "resolution" is reduced. Simple rescaling may not reduce resolution, and certainly does not reduce it at a 1:1 ratio to the scaling factor when the original image is from a CFA sensor. I've shown this to be the case as well.

As I've outlined, simply scaling an image can deliver a similar effect either through making the noise smaller, or by presenting the noise at a resolution greater than the viewer can resolve - in which case the eye filters some of the noise out and it becomes not simply smaller, but not visible to the observer.
Only a printer has the resolution to create detail smaller than we
can discern. That’s why I say that to prove that your 15 MP image
actually has less noise when resized to 10 MP, you have to print
every pixel of the original 15MP image at 600DPI and then print the
10MP image at the same size.
No. All I have to do is change the resolution to match the 10Mp image and measure the noise levels after the change. That is completely sufficient to show that a change in resolution yields a change in noise level. I don't have to print anything. It isn't necessary to print images in order to analyze them. In fact, unless your purpose is to analyze the effects of printing, you generally do not want to print first before analyzing image data. Maybe you prefer to because prints are all that matter to you. But that is hardly the only option. And as I've pointed out, the printer itself adds variables to the analysis that complicate matters further.
If you’re right then the two images
should have the same discernable detail, while the 10 MP image has
less apparent noise.
Why would the 10Mp image have less noise? That doesn't follow at all from what I've said or shown.
But maybe I did something wrong in the resizing. If you take the
image I posted earlier, cut a 1600x1200 crop from it, resized it to
800x600 and post it then I can look again to see if I notice less
noise on the resized image. Otherwise, only a printer will have the
resolution range to demonstrate your senario.
There isn't enough information to know. As I've said, simply scaling an image from a CFA sensor does not yield a 1:1 ratio of image resolution reduction.

For a typical out of camera JPEG, you can scale an image about 80% linearly or about 64% in megapixel count and lose almost no detail/resolution.

This test demonstrates that in this case, resizing an out of camera JPEG from 8Mp to 5Mp (80% linear, 64% pixel count) results in a very marginal loss of resolution. Results, of course, vary depending on the camera, and lens being tested.



For instance, simply scaling an image from a 15Mp sensor camera to 10Mp will typically lose almost no image detail even though the pixel count is reduced by 50%. Likewise, the noise will not be reduced by very much either. Like the resolution, most of it will be preserved. BTW, I ran a double blind test on the Olympus forum that demonstrated that any image detail change or loss is generally not visible. People correctly identified the rescaled images about 50% of the time. What you'd expect from random chance.

In my demonstration, I did a fair bit of trial and error to come up with the blur size that resulted in an MTF response that was similar in slope and half the value of the original image in my test comparison. I didn't just pick some value as you did. Further, I measured resolution before and after. And finally, I didn't just look at the images; I made measurements of noise on step wedge images that I created using the same processes.

I've now repeated myself many times and I'm pretty much done doing that. The bottom line hasn't changed at all, and even if you can't see the point I've made, we do seem to agree on one thing. There is little reason to be concerned about additional per pixel noise that typically comes from higher density sensors. From a picture standpoint, the overall quality will typically be as good as and often even better than what you would get from a sensor with lower pixel density.

The higher density sensor can deliver more detail. And while it will probably have more per pixel noise, mitigating the effects of that noise either through scaling, resolution changes, or better yet, sophisticated noise reduction techniques, will tend to deliver a final image - in whatever form you choose to view it - that will be observed as being of equal or higher quality than what you can get from the lower resolution sensor. Good lenses and technique will be needed to fully realize those benefits. And keep in mind that the benefits will generally be small, not great, because a 50% pixel count increase doesn't increase linear resolution by very much.

--
Jay Turberville
http://www.jayandwanda.com
 
For instance, simply scaling an image from a 15Mp sensor camera to
10Mp will typically lose almost no image detail even though the pixel
count is reduced by 50%.
I don’t believe that. I tried it and there is a clear, visible loss of detail when the pixel count is reduced by 50%. Also, the RAW images of the LX3 and G10 test images from Imaging Resource clearly show that the G10 contains more detail.

This is why I put no weight in the charts. The charts are telling you one thing, but on (what I believe to be) the same scenarios my eyes are telling me something else.

You’re going to go by what your charts tell you, and I’m going to go by what my eyes tell me, so I don’t see any way this is ever going to get resolved.
 
For instance, simply scaling an image from a 15Mp sensor camera to
10Mp will typically lose almost no image detail even though the pixel
count is reduced by 50%.
I don’t believe that. I tried it and there is a clear, visible loss
of detail when the pixel count is reduced by 50%.
That must have been a typo, the pixel count here is reduced by 33%.
Also, the RAW
images of the LX3 and G10 test images from Imaging Resource clearly
show that the G10 contains more detail.
Here you misunderstand: The 15mp camera will hold more detail than an image from a 10mp camera. What Turberville says is that the image from the 15mp camera will even hold more detail after it has been scaled down to 10mp (in particular due to the CFA interpolation).
 
For instance, simply scaling an image from a 15Mp sensor camera to
10Mp will typically lose almost no image detail even though the pixel
count is reduced by 50%.
I don’t believe that. I tried it and there is a clear, visible loss
of detail when the pixel count is reduced by 50%.
That must have been a typo, the pixel count here is reduced by 33%.
Jay slipped but I believe he really meant 50%. My comments were based on both a review of the G10 (14.7MP) and LX3 (10MP) RAW images, and a review of my previously posted yo80.jpg image reduced from 7.1MP to 3.55MP.
Also, the RAW
images of the LX3 and G10 test images from Imaging Resource clearly
show that the G10 contains more detail.
Here you misunderstand: The 15mp camera will hold more detail than an
image from a 10mp camera. What Turberville says is that the image
from the 15mp camera will even hold more detail after it has been
scaled down to 10mp (in particular due to the CFA interpolation).
In order to prove that you’d need to include an image that contains close up detail of one area of the scene. This image will be used as a control when comparing the 10MP downsampled image against the 10MP image from a 10MP sensor, to ensure that the detail displayed is actually detail and not noise/artifacts. But no one ever provides such an image when they post their comparative images.

I’ve taken a 7.1 MP image, resized to 3.2 MP and compared it to an equally framed image from a 3.2MP camera. I resized using the various methods, as described on Sean McHugh’s website cambridgeincolour.com, and other sites. I’ve yet to see the resized image contain anything that amounts to more detail.

But I’m willing to learn something new. Show me the images that demonstrate what you say, and tell me how you made them.
 
Here we go :)
Here you misunderstand: The 15mp camera will hold more detail than an
image from a 10mp camera. What Turberville says is that the image
from the 15mp camera will even hold more detail after it has been
scaled down to 10mp (in particular due to the CFA interpolation).
In order to prove that you’d need to include an image that contains
close up detail of one area of the scene. This image will be used as
a control when comparing the 10MP downsampled image against the 10MP
image from a 10MP sensor, to ensure that the detail displayed is
actually detail and not noise/artifacts. But no one ever provides
such an image when they post their comparative images.
Since you pointed them out, I will use the still life pictures at Imaging Resource as an example (there may be better real-world tests, but it gives us a common ground).

What I did with the pictures:

1. First, I resized the G10 picture to the exact same size as the LX3. According to my theory, this image should now contain inherently more per-pixel detail than the LX3 picture.

2. I resized both of them to 80% in each direction, to 2918x2189 pixels, 6.4MP.

3. I resized both 6.4MP files up to the original LX3 size again.

4. I copied the original image as a layer on top of the down-then-up-sampled image and set the blending method to "Difference", so areas where the sampling had no effect on detail would be completely black.

5. I cropped both images around an area where they had lost image detail due to the sampling, the Muscat Wine Vinegar bottle.

All resizing, both down and upsampling was simply done using "Bicubic" in Photoshop CS3. It seems to be a good combination for recreating the original from a downsampled copy.
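The down-then-up-sample comparison above (steps 2-4) can be sketched in a few lines of Pillow. This is only an illustration of the method, not a reproduction of the original test: a synthetic noisy gradient stands in for the camera image, and Pillow's BICUBIC filter stands in for Photoshop CS3's "Bicubic".

```python
from PIL import Image, ImageChops
import numpy as np

# Synthetic stand-in for the camera image: a gradient plus noise.
rng = np.random.default_rng(0)
arr = np.clip(
    np.linspace(0, 255, 200)[None, :] + rng.normal(0, 20, (160, 200)),
    0, 255,
).astype(np.uint8)
orig = Image.fromarray(arr, "L")
w, h = orig.size

# Step 2: downsample to 80% in each direction (64% of the pixels).
small = orig.resize((int(w * 0.8), int(h * 0.8)), Image.BICUBIC)

# Step 3: upsample back to the original dimensions.
restored = small.resize((w, h), Image.BICUBIC)

# Step 4: "Difference" blend - areas the resampling left untouched
# come out black; brighter pixels mark detail lost in the round trip.
diff = ImageChops.difference(orig, restored)
print(diff.getextrema())  # (min, max); max > 0 means some detail was lost
```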

Here is the crop from the LX3, showing that there is very little loss of detail when an image is downsampled to 64% of original size and then upsampled to original:



Here is the crop from the G10, showing that there is significantly more loss of detail when you continue to downsample the image from an already downsampled version:



I don't know whether it's the CFA interpolation or the AA filter that is responsible, but it certainly seems like a 15MP camera is able to deliver more detailed 10MP pictures than a 10MP camera.
But I’m willing to learn something new. Show me the images that
demonstrate what you say, and tell me how you made them.
I like your attitude! Feel free to teach me something new too :)
 
I appreciate the time taken to prepare your samples.
Here we go :)
Indeed! :)
Since you pointed them out, I will use the still life pictures at
Imaging Resource as an example (there may be better real-world tests,
but it gives us a common ground).
Which ones? You need to start with the RAW files because the Venus JPEG engine is horrible.
What I did with the pictures:
1. First, I resized the G10 picture to the exact same size as the
LX3. According to my theory, this image should now contain inherently
more per-pixel detail than the LX3 picture.
If the images are the same size, then what you have is different detail, not more. The question is which one is a more accurate rendition. Also, it has to be kept in mind that the images came from two completely different cameras. If you compare the mosaic from the Muscat bottles you’ll see that the G10 mosaic is taller after being resized. This isn’t a resizing issue...it has to do with the different lenses. So you have to recognize that when you compare the Muscat mosaics the G10 image contains more pixels than the LX3 image, even though you resized the G10 image to match.

This is always the problem with these types of comparisons...it’s impossible to create a 100% equivalent transformation. To compare the two Muscat mosaics by height you have to reduce the G10 image by 0.76723, rather than the 0.8273 that the resolution difference would suggest. However, if you do that then the LX3 image is wider. Due to the different lenses, it’s very difficult to compare the images and come to any indisputable conclusion. When the resized images are compared against a crop from the Canon 5D MII, there are areas where the resized G10 looks a hair more accurate and areas where the LX3 looks a hair more accurate.
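For reference, the scale factor implied by the resolution difference alone is straightforward to compute. The pixel dimensions below are assumed from the two cameras' published specs (LX3 3648x2736, G10 4416x3312); they land near the 0.8273 quoted above, while the 0.76723 needed to match the Muscat bottle heights differs because the two lenses frame the scene differently.

```python
import math

# Assumed pixel dimensions from the cameras' published specs:
# LX3 3648x2736 (~10.0MP), G10 4416x3312 (~14.6MP).
lx3_w, lx3_h = 3648, 2736
g10_w, g10_h = 4416, 3312

by_width = lx3_w / g10_w  # linear factor from width alone
by_megapixels = math.sqrt((lx3_w * lx3_h) / (g10_w * g10_h))  # from pixel count
print(round(by_width, 4), round(by_megapixels, 4))  # both ~0.826
```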
2. I resized both of them to 80% in each direction, to 2918x2189
pixels, 6.4MP.
Not sure why you did that, but okay.
3. I resized both 6.4MP files up to the original LX3 size again.
Not sure why you did that. Any difference that's going to show up after the resizing acrobatics was there before.
4. I copied the original image as a layer on top of the
down-then-up-sampled image and set the blending method to
"Difference", so areas where the sampling had no effect on detail
would be completely black.
5. I cropped both images around an area where they had lost image
detail due to the sampling, the Muscat Wine Vinegar bottle.
Here is the crop from the LX3, showing that there is very little loss
of detail when an image is downsampled to 64% of original size and
then upsampled to original:
Here is the crop from the G10, showing that there is significantly
more loss of detail when you continue to downsample the image from an
already downsampled version:
I don't know whether it's the CFA interpolation or the AA filter that
is responsible, but it certainly seems like a 15MP camera is able to
deliver more detailed 10MP pictures than a 10MP camera.
I’m sorry, I just don’t follow how you come up with that final conclusion from the previous statements.

Comparisons like that mean nothing to me. All you’ve shown is that something happened. But we have no idea what. I want to see the images, and I want a control image to show me what’s really supposed to be there. My own review of the IR RAW files shows that the LX3 native image and the G10 resized image have enough variation from “true” that I consider them both to be of equivalent image quality.
 
Comparisons like that mean nothing to me. All you’ve shown is that
something happened. But we have no idea what. I want to see the
images, and I want a control image to show me what’s really supposed
to be there.
Something like this...



From left to right we have the Canon 5DMII, G10, and the LX3. All images came from the RAW files. The files were opened in Raw Therapee using the Neutral settings (no adjustments applied) and saved as TIFFs for chopping and resizing in PSE. The Canon images were resized using Bicubic Sharper with no pre-blur. I tried pre-blurs of .5, .4, .3, and .2mm with bicubic and bicubic sharper. .5/.4 left less detail, and the others weren't any different from straight bicubic sharper.
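The pre-blur-then-downsample experiment can be sketched without Photoshop. This is only an approximation of the workflow described above: a synthetic noise image stands in for the RAW conversion, a GaussianBlur radius in pixels stands in for the mm pre-blur values in PSE, and Pillow's LANCZOS filter is used as a rough stand-in for "Bicubic Sharper".

```python
from PIL import Image, ImageFilter
import numpy as np

# Synthetic noise image stands in for the converted RAW file.
rng = np.random.default_rng(1)
img = Image.fromarray(rng.integers(0, 256, (600, 800), dtype=np.uint8), "L")

for radius in (0.0, 0.2, 0.3, 0.4, 0.5):
    # Pre-blur (skipped when radius is 0), then downsample by half.
    pre = img.filter(ImageFilter.GaussianBlur(radius)) if radius else img
    small = pre.resize((img.width // 2, img.height // 2), Image.LANCZOS)
    # Standard deviation of the result is a crude noise measure:
    # stronger pre-blur should yield a lower value.
    print(radius, round(np.asarray(small, dtype=float).std(), 2))
```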

To my eye, the tile on the other side of the woman's legs looks better in the LX3 crop, whereas the right and left arms look better in the G10 image. The LX3 crop is wider by 7%, which is likely why it looks better. And, of course, the 5DMII image shows just how much detail can be shown with that number of pixels.
 
Here we go :)
Indeed! :)
Since you pointed them out, I will use the still life pictures at
Imaging Resource as an example (there may be better real-world tests,
but it gives us a common ground).
Which ones? You need to start with the RAW files because the Venus
JPEG engine is horrible.
As I said, there may be better examples to use, but from my experience it doesn't matter, the conclusion will be identical if you follow the same process as I outlined. Do it yourself with any input you want if you need a personal confirmation.
What I did with the pictures:
1. First, I resized the G10 picture to the exact same size as the
LX3. According to my theory, this image should now contain inherently
more per-pixel detail than the LX3 picture.
If the images are the same size, then what you have is different
detail, not more. The question is which one is a more accurate
rendition. Also, it has to be kept in mind that the images came from
two completely different cameras. If you compare the mosaic from the
Muscat bottles you’ll see that the G10 mosaic is taller after being
resized. This isn’t a resizing issue...it has to do with the
different lenses. So you have to recognize that when you compare the
Muscat mosaics the G10 image contains more pixels than the LX3 image,
even though you resized the G10 image to match.
I didn't choose the pictures, I used your examples. I didn't choose the cameras, I used your examples. Even then I came to the same conclusion as before :)
This is always the problem with these types of comparisons...it’s
impossible to create a 100% equivalent transformation. To compare
the two Muscat mosaics by height you have to reduce the G10 image by
0.76723, rather than the 0.8273 that the resolutions difference would
suggest. However, if you do that then the LX3 image is wider. Due
to the different lenses, it’s very difficult to compare the images
and come to any indisputable conclusion. When the resized images are
compared against a crop from the Canon 5D MII, there are areas where
the resized G10 looks a hair more accurate and areas where the LX3
looks a hair more accurate.
I agree that the cameras are very different and therefore complicated to compare. However, this is all irrelevant, since you can confirm this test with a single camera and a single picture. Here is the result if you upsample the 10MP G10 image back up to original size and overlay the original:



As you can see, the loss of detail when going from 14.6 to 10 MP is very similar to the LX3 going from 10 to 6.4MP, and a lot less than when you downsample the G10 image from 10 MP to 6.4 MP.
2. I resized both of them to 80% in each direction, to 2918x2189
pixels, 6.4MP.
Not sure why you did that, but okay.
Turberville mentioned that out-of-camera images lose very little detail when resized to 64% of pixel size, so I decided to use the values we already discussed in the thread.
3. I resized both 6.4MP files up to the original LX3 size again.
Not sure why you did that. Any difference that going to show up
after the resizing acrobatics was there before.
Correct, but I'm not looking for that, I'm looking for whether one of the pictures lost more detail than the other in the process. That would mean that the significance of each pixel in that picture was higher than in the picture where less detail was lost in the process.
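The question of whether one picture lost more detail than the other in the round trip can be reduced to a single number: the mean absolute difference between an image and its down-then-up-sampled copy. The helper below is a hypothetical sketch of that metric, demonstrated on synthetic images rather than the camera files.

```python
from PIL import Image
import numpy as np

def roundtrip_loss(img: Image.Image, scale: float = 0.8) -> float:
    """Mean absolute error between an image and its down-then-up-sampled
    copy. Higher values mean more per-pixel detail was destroyed."""
    w, h = img.size
    small = img.resize((int(w * scale), int(h * scale)), Image.BICUBIC)
    restored = small.resize((w, h), Image.BICUBIC)
    a = np.asarray(img, dtype=float)
    b = np.asarray(restored, dtype=float)
    return float(np.abs(a - b).mean())

# Synthetic demo: an image full of fine detail (noise) loses more in
# the round trip than a perfectly smooth one.
rng = np.random.default_rng(2)
noisy = Image.fromarray(rng.integers(0, 256, (200, 200), dtype=np.uint8), "L")
smooth = Image.fromarray(np.full((200, 200), 128, dtype=np.uint8), "L")
print(roundtrip_loss(noisy) > roundtrip_loss(smooth))  # True
```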
4. I copied the original image as a layer on top of the
down-then-up-sampled image and set the blending method to
"Difference", so areas where the sampling had no effect on detail
would be completely black.
5. I cropped both images around an area where they had lost image
detail due to the sampling, the Muscat Wine Vinegar bottle.
Here is the crop from the LX3, showing that there is very little loss
of detail when an image is downsampled to 64% of original size and
then upsampled to original:
Here is the crop from the G10, showing that there is significantly
more loss of detail when you continue to downsample the image from an
already downsampled version:
I don't know whether it's the CFA interpolation or the AA filter that
is responsible, but it certainly seems like a 15MP camera is able to
deliver more detailed 10MP pictures than a 10MP camera.
I’m sorry, I just don’t follow how you come up with that final
conclusion from the previous statements.
The conclusion is: the quantity of information per pixel is significantly higher in the 10 MP picture from a 15 MP sensor, and that detail will inevitably be lost in a down-then-up-sample process, as the posted images clearly suggest. What I cannot tell is whether that information is more correct or not; that depends on other parts of the camera. But I can tell that, all other things being equal (which we can agree they never are), a 15 MP camera may deliver more detailed 10 MP pictures than a 10 MP camera.
Comparisons like that mean nothing to me. All you’ve shown is that
something happened. But we have no idea what. I want to see the
images, and I want a control image to show me what’s really supposed
to be there. My own review of the IR RAW files shows that the LX3
native image and the G10 resized image have enough variation from
“true” that I consider them both to be of equivalent image quality.
Now you are comparing various qualities of different cameras, I was only comparing quantity of information per pixel in a 15 MP picture downscaled to 10 MP and a 10 MP original.
 
From left to right we have the Canon 5DMII, G10, and the LX3. All
images came from the RAW files.
You haven't given any context for these images. What are we looking at? Without knowing how many original pixels went into these, they mean absolutely nothing.

--
John

 
