That is interesting; I've never considered going in a different direction. I would propose, though, that someone who wrote an algorithm to start at the northwest corner would in fact average the pixels to the east, southeast, and south, so as to avoid averaging non-existent pixels. In this case, it would end up with alternating red/green pixels.

As my last argument I give a hypothetical example. Imagine taking
any image and resizing it 200% using no interpolation. Now you have
a large image made of 4-pixel squares where all 4 pixels in
each square are the same color. Assuming this to be an original
from our hypothetical camera, what would happen if you resized this
image by 50% and then enlarged it by 200% again, without any
interpolation? I believe you'd just end up with the same image of
2x2 pixel squares.
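Here's a quick sketch of that round trip in Python/NumPy, assuming the 50% step with no interpolation simply keeps every other pixel (plain decimation):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(4, 4, 3))        # any small "original" image

big = img.repeat(2, axis=0).repeat(2, axis=1)     # 200% up, no interpolation: 2x2 squares
half = big[::2, ::2]                              # 50% down, no interpolation: keep every other pixel
big_again = half.repeat(2, axis=0).repeat(2, axis=1)

print(np.array_equal(big, big_again))             # True: this round trip loses nothing
```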
I like this, because it actually helps. Assume a fairly simple
reduction algorithm (in fact, you already have), in which one
pixel out of each 2x2 block is the 'surviving' pixel and is given the
average value of itself and its west, southwest, and south neighbors. These three
neighbors are then thrown out. Also assume a row of 2x2 squares
next to each other, alternating between squares with four pure red
pixels, and squares with 4 pure green pixels. If the simple
reduction algorithm starts in the northeast corner of each square,
your 50% reduction will result in a row alternating between red and
green pixels. However, if it starts in the northwest corner
(because it doesn't know you have neat 2x2 squares), you're going
to end up with a row of 128R 128G 0B pixels. Upsample that row by
200%, and you're left with 2 rows of 128R 128G 0B pixels. And that
is just a simple reduction algorithm. You can get more complex, look at
more neighbors, and then look for patterns, etc., etc.
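A minimal sketch of that toy reduction, run on a strip of alternating red and green 2x2 squares (NumPy; how neighbors that fall outside the frame are handled is my own assumption, and is exactly the "non-existent pixels" point raised at the top of this thread):

```python
import numpy as np

# A 2-pixel-tall strip of 2x2 squares, alternating pure red and pure green.
SQUARES = 8
strip = np.zeros((2, 2 * SQUARES, 3))
for s in range(SQUARES):
    strip[:, 2 * s:2 * s + 2] = [255, 0, 0] if s % 2 == 0 else [0, 255, 0]

def reduce_50(img, survivor_col):
    """One survivor per 2x2 block, averaged with its west, southwest and
    south neighbors; neighbors outside the frame are simply skipped."""
    h, w, _ = img.shape
    out = np.zeros((h // 2, w // 2, 3))
    for by in range(h // 2):
        for bx in range(w // 2):
            r, c = 2 * by, 2 * bx + survivor_col          # survivor position
            coords = [(r, c), (r, c - 1), (r + 1, c - 1), (r + 1, c)]
            vals = [img[y, x] for y, x in coords if 0 <= y < h and 0 <= x < w]
            out[by, bx] = np.mean(vals, axis=0)
    return out

print(reduce_50(strip, survivor_col=1)[0, :4])  # NE survivors: pure red / green alternating
print(reduce_50(strip, survivor_col=0)[0, :4])  # NW survivors: edge pixel stays red, the rest ~127.5R 127.5G 0B
```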
But I think it would be a great test of, say, Photoshop's
resampling algorithms. Create some nice grids made up of 2x2
squares in which all four pixels of each square are an identical pure color. Then
resample down and up, and see what the result is.
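Something like the following would generate such a grid (Python with Pillow assumed; the file names are placeholders, and Pillow's bilinear filter is only a rough stand-in for Photoshop's resamplers):

```python
from PIL import Image  # assuming Pillow is available

SQUARE = 2                                   # each square is 2x2 pixels
PALETTE = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 255)]

grid = Image.new("RGB", (128, 128))
px = grid.load()
for y in range(grid.height):
    for x in range(grid.width):
        # every 2x2 square gets one pure color, cycling through the palette
        px[x, y] = PALETTE[((x // SQUARE) + (y // SQUARE)) % len(PALETTE)]
grid.save("squares.png")                     # open this in Photoshop and resample it

# Or round-trip it here, with bilinear filtering as a rough stand-in:
down = grid.resize((64, 64), Image.BILINEAR)
back = down.resize((128, 128), Image.BILINEAR)
back.save("squares_roundtrip.png")
```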
I say you can have fractional pixels. Take an image and use a sophisticated algorithm to scale it up 20%. You'll end up with more pixels, each of which will have a value made up of its original value and some fraction of its adjacent pixels' values. Technically, each pixel contains a little more information than it used to: not more detail, but it does contain some component of its neighbors. You now have no more information than you used to, but it is taking more pixels to display it. That is what we are saying happens when the RGBG mosaic is interpolated to produce the camera's native image size.

I don't know the math either, and it goes against common sense, but if you can imagine fractional pixels, or the fact that each pixel in our original actually contains some information from adjacent pixels, then maybe reducing these images by some nominal amount may not really lose any info.
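One way to picture those "fractional pixels" is a plain linear upscale of a single scanline (a NumPy sketch with arbitrary values):

```python
import numpy as np

row = np.array([10.0, 200.0, 30.0, 120.0, 90.0])   # one scanline of originals

# ~20% more pixels: each new sample sits at a fractional source position and
# is a linear blend of its two nearest original pixels.
positions = np.linspace(0, len(row) - 1, 6)
upscaled = np.interp(positions, np.arange(len(row)), row)

print(positions)   # 0.0, 0.8, 1.6, ... -- the "fractional pixel" coordinates
print(upscaled)    # blended values: more pixels, but no new information
```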
Are you avoiding my test #2??? ;-)

You don't have fractional pixels, and when you change a pixel's
value, it is changed. You might be able to apply some fancy
algorithms and do a decent job guessing what info was lost, but it's
only attempting to replace lost info. That's why test 1 showed a softer
image.
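A tiny illustration of why the guessing can only soften rather than restore (a NumPy sketch: an extreme black/white scanline, a pair-averaging 50% reduction, and linear interpolation back up, all stand-ins for whatever the real resamplers do):

```python
import numpy as np

row = np.array([0, 255] * 8, dtype=float)               # hard detail: edges everywhere

half = row.reshape(-1, 2).mean(axis=1)                   # 50% reduction by averaging pairs
back = np.interp(np.linspace(0, len(half) - 1, len(row)),
                 np.arange(len(half)), half)             # interpolate back to full size

print(row[:6])     # 0. 255. 0. 255. ... -- the original detail
print(back[:6])    # ~127.5 everywhere -- the lost detail can't be guessed back
```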
Anyway, got to run, off to go camping with my sons!
Have a great time camping. I can't wait until my kids are older and we can go on a camping trip.
Howdy.