In all seriousness, you should go with whatever looks better.
There are many, many things that happen between an idea and a
product, and the final execution, as well as the price, will
influence the outcome. We can't know for sure until we see the
tests.
Indulge me for a moment and let me try to explain what I was saying
above: Some people have the impression that Bayer sensors get the
"detail" right, and just do interpolation for the color. I'm
intepreting "detail" to mean that the B&W part of the image will be
correct with a Bayer pattern sensor. (I'm going to use brightness
interchangeably with luminance here even though it's not strictly
correct because the usage is more intuitive.)
My reply was an attempt to dispel the myth that Bayer-interpolated
sensors always get the right brightness information. Here's why:
Our perception of brightness comes from a combination of red, green
and blue. Suppose our eyes are hit by red at intensity 100, blue
at intensity 100, and green at intensity 100. How bright would this
look to us? Well, we might just add the levels together and
say that it has a brightness of 300. This would suggest that if we
took 100 from the blue and added it to the green (R=100, G=200,
B=0) we'd see a different color, but our overall perception of
brightness would be the same.
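
If it helps, here's that naive additive model as a quick Python
sketch (the function is just mine, to make the arithmetic concrete):

    def brightness_naive(r, g, b):
        # Naive model: perceived brightness is just the sum of the
        # three channel intensities.
        return r + g + b

    print(brightness_naive(100, 100, 100))  # 300
    print(brightness_naive(100, 200, 0))    # also 300: same brightness,
                                            # different color
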
Now, it turns out we can't trade off one color for the other in
exactly this way. Doubling green makes things look much brighter
to us, but doubling blue doesn't have much effect because our
eyes are much more sensitive to green. The numbers in my previous
message came from scores that people assign to the different
components of color to determine how much they contribute to our
perception of brightness. However, the exact scores aren't really
needed to understand the point, and using them just complicates the
math, so let's assume that the total brightness level comes from
adding together the R, G, and B values.
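
For the curious, here's what those different sensitivities look
like with one standard set of weights, the Rec. 601 luma
coefficients (these may not be the exact numbers from my earlier
message, but they make the same point):

    def brightness_weighted(r, g, b):
        # Rec. 601 luma weights: green dominates perceived brightness.
        return 0.299 * r + 0.587 * g + 0.114 * b

    base = brightness_weighted(100, 100, 100)         # 100.0
    print(brightness_weighted(200, 100, 100) / base)  # ~1.30 (double red)
    print(brightness_weighted(100, 200, 100) / base)  # ~1.59 (double green)
    print(brightness_weighted(100, 100, 200) / base)  # ~1.11 (double blue)
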
With a Bayer sensor each photocell sees only one of R, G, or B.
Let's suppose some light with values R=100, G=100, B=100 hits a
blue sensor. The total brightness for this light should be 300.
What the blue pixel sees is B=100. Now the blue pixel needs to
guess its red and green values based upon its neighbors. Since
green and red influence our perception of brightness so much, a
small mistake in these can cause a big mistake in the brightness we
perceive for the blue site. Suppose we incorrectly guess R=50 and
G=50; then the brightness for R=50, G=50, B=100 is 200, not 300.
So, by guessing the R and G values incorrectly, the Bayer
interpolation algorithm has introduced a serious error in the
brightness of a pixel (off by 1/3), and this can distort our
perception of detail in the image.
Bayer interpolation algorithms can and do make incorrect guesses.
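
To put the whole example in one place, here's a toy simulation of
that single blue photosite (plain Python again; the bad guess is
hard-coded, where a real demosaicing algorithm would interpolate
from the neighboring photosites):

    def brightness(r, g, b):
        # Same simple additive model as before.
        return r + g + b

    # Light hitting the sensor: R=100, G=100, B=100.
    true_brightness = brightness(100, 100, 100)    # 300

    # The blue photosite records only B=100; R and G must be guessed
    # from neighboring photosites.  Suppose the guesses come out low:
    guessed_brightness = brightness(50, 50, 100)   # 200

    error = (true_brightness - guessed_brightness) / true_brightness
    print(error)  # 0.333...: the pixel's brightness is off by 1/3
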
Is this making more sense?
--
Ron Parr
FAQ:
http://www.cs.duke.edu/~parr/photography/faq.html
Gallery:
http://www.pbase.com/parr/