Ok, so where to start? (And I have snipped, hopefully appropriately and fairly.)
Yep, indeed, that is why I said:
> Of course, I'll freely acknowledge that the tests have also
> demonstrated that, when the images are viewed on the computer
> screen, the perceptible loss of information in an actual pixel crop
> might be overcome, if at all, by actually reducing the accuracy
> of the remaining information displayed on the screen in an effort
> to mimic the lost detail and fool the eye.
> The whole purpose of the crop and displaying the image on the
> screen at 100% is so that you can in fact determine the difference
> in detail, not to reduce the accuracy of information. Cropping it
> at 100% shows you exactly the detail that you have.
I understood completely, and I appreciated your effort to present a test in which each crop displays each pixel from the same area of each image (as closely as possible), permitting a pixel-level comparison. Indeed, it is this very fact from your first test that convinced me that the 3mp image not only has less information, but that the loss of information translates into a perceptible loss of detail.
> I'd take a 1024x768 image from any higher resolution camera than
> one which can only take 1024x768 and I'm not talking about better
> lenses.
But would there be a discernible difference in the image quality of the 1024x768 images? And I ask the question honestly, as I don't know. I would hypothesize (based on your test) that there might be a discernible difference between a 1024x768 image taken by a .8mp camera (1024x768 native) and one taken by, say, a 1.3mp camera. But beyond that . . .? If I had to venture a guess, I would be surprised if there were a discernible difference in image quality between 1024x768 images resampled down from 1.3mp cameras and from 5mp cameras. (As I said before, that is a test I would like to see!)
> Even if you don't resample the image, I bet your printer driver is
> doing some resampling of its own.
Who knows what happens to our images between the time we hit print and the time we pull the paper from the tray ;-). I expect that the printer drivers for my Epson are doing a lot more than merely resampling, in an effort to convert 5mp of RGB pixel information into ???mega-dots of information governing the application of six discrete colours.
> Remember that the in-camera reduction has nothing to do with what
> we are saying here, it was just used as a practical example. If
> what we are saying is true, you should be able to reduce any full
> resolution original from any classic mosaic CCD camera by 20%,
> enlarge it again and lose nearly zero detail.
I understand, and I disagree. I disagree on both a practical level (based on your tests and bobbo's) and a theoretical level (the math and logic don't work).
First, bobbo published his test to support his conclusion that the 707 can be operated in 3mp mode with no perceptible loss of quality (detail and information) compared to 5mp mode, or at least with an acceptable loss. Your tests followed up on this proposition.
As such, my interest in these threads was to discover whether there is a perceptible loss of quality between 5mp and 3mp mode (or whether any loss is acceptable). So, on a practical level, in-camera reduction has a lot to do with what you have been saying, particularly because the 707 doesn't perform a 20% resampling reduction. It jumps from 4.9mp mode to 3.1mp mode, a near-40% drop. Your first test (and bobbo's) demonstrated that the loss of information from this down-and-back-up 40% swing caused a perceptible loss of detail. (To date, all respondents have correctly identified which image was the original and which was twice resampled.)
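The size of the jump is easy to check with a quick calculation. (I'm assuming the usual frame sizes of 2560x1920 for the 707's 5mp mode and 2048x1536 for its 3mp mode.)

```python
# Pixel counts for the two modes; the frame dimensions are the
# commonly quoted ones and are an assumption here, not measured.
full = 2560 * 1920      # 4,915,200 pixels (~4.9mp)
reduced = 2048 * 1536   # 3,145,728 pixels (~3.1mp)

drop = 1 - reduced / full
print(f"{drop:.1%} of the pixels are discarded")  # prints: 36.0% ...
```

So the reduction discards about 36% of the pixels, close enough to call it a near-40% drop.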
On the 'theoretical level,' I must begin by saying that I don't foresee any need to perform a post-camera resample down by 20% and back up. As such, I really don't care much whether such a "20% procedure" would cause a perceptible loss of detail. (It may, or may not.)
However, it will cause an actual loss of information. I really don't feel like running through all of the math, in part because it requires an extended foray into the mathematics and theory underlying the algorithms both for creating RGB pixels from mosaiced CCDs and for combining pixels down. At its most fundamental level, however, the 20% resampling down is nothing more than a lossy compression of the 5mp image (and an inefficient compression at that). The fact that the 707 sensor is 75% efficient doesn't suddenly transform a separately applied, lossy 20% compression scheme into a lossless one.
I would further suggest that, if the goal is to impose a temporary 20% reduction on an image, there are many methods out there that would cause less loss of information (and thus detail) than 'resampling.' Indeed, I would expect that there are some true lossless algorithms that can compress most 5mp images by 20%.
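Whether any particular 5mp frame compresses losslessly by 20% depends entirely on its content (noisy, detailed images compress poorly), but the principle is easy to demonstrate with a general-purpose lossless compressor on data that has some redundancy. The synthetic byte pattern below is a stand-in, not real image data:

```python
import zlib

# Synthetic stand-in for image data with redundancy; real photos
# compress better or worse depending on noise and fine detail.
data = bytes(range(256)) * 1000  # 256,000 bytes with repeating structure

compressed = zlib.compress(data, 9)
restored = zlib.decompress(compressed)

assert restored == data  # lossless: every byte comes back exactly
print(f"compressed to {len(compressed) / len(data):.1%} of original size")
```

The key contrast with resampling is the assert line: after decompression, the data is bit-for-bit identical, whereas the resample-and-enlarge round trip only approximates the original.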
> The perfect comparison test will be between an original image from
> a 3mp traditional CCD camera, like my S70, and a Foveon based 3mp
> camera. Reduce each image to 1600x1200 and then resize them back
> to 3mp. If what we are saying is true, you will see more detail
> lost in the Foveon camera than the S70 because the Foveon camera
> has more detail in the original.
I don't know if this would be the 'perfect comparison test,' but it sure would be fun. ;-)
Howdy.