As Peter wrote, I used pretty much the averaging algorithm you
would expect. It's described here:
http://www.peter-cockerell.net:8080/Bayer
I note that at the green pixels you use only the closest red and
blue neighbours, dividing their sum by 2 to get each value.
I know this algorithm has been used, and I think this very
algorithm is unfortunately used in my own Casio 2800. Its biggest
problem is that it produces terrible jaggies where two high
contrast areas (one with a lot of red and one with a lot of blue)
meet: you get a staircase effect instead of a smooth borderline
between the two areas.
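As a minimal sketch of that simple two-neighbour average (my own
illustration, not any camera's actual firmware), assuming the raw
Bayer values sit in a nested list and the green pixel's red
neighbours are directly to its left and right:

```python
def red_at_green_simple(bayer, y, x):
    # Estimate red at a green pixel by averaging the two nearest
    # red neighbours, here assumed to be in the same row (the GRGR
    # rows of the pattern below). Column positions are assumptions
    # for illustration.
    return (bayer[y][x - 1] + bayer[y][x + 1]) / 2.0

# 5x5 grid with the two nearest reds set to 100 and 60:
grid = [[0] * 5 for _ in range(5)]
grid[2][1] = 100
grid[2][3] = 60
print(red_at_green_simple(grid, 2, 2))  # averages only those two
```

Because only two samples on one axis are used, a sharp red/blue
edge flips the estimate abruptly from row to row, which is exactly
the staircase effect described above.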
To avoid those jaggies you would have to involve more than the two
closest neighbours. The easiest extension would be to involve 4 more
pixels (which I will call "across the street neighbours"), so that
you have a total of 6 pixels to calculate the value from. But
since the "across the street neighbours" are not as close as the 2
nearest neighbours, they are each worth less.
Example:
In this grid you want to know the red value of the green pixel in
the center:
GRGRG
BGBGB
GRGRG
BGBGB
GRGRG
The 2 neighbouring pixels should be worth enough to make up
approximately 80% of the red value for the green pixel in the
center, and the rest should come from the red pixels in the top
and bottom rows. That means each neighbouring pixel contributes
about 40% of the red value and each "across the street neighbour"
about 5%.
I'm well aware that an algorithm using this kind of calculation
would require either more time or better hardware, which is
probably why it's not always used in consumer cameras.
--
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
http://hem.bredband.net/maxstr