Saturation occurs at the pixel level and the gamut is determined by
the absorption/transmission spectra of the filters. Changing
ratios of red/green/blue doesn't change these things.
So the point here is that the camera itself isn't "seeing" the flower the same way its user is. It's simply a matter of the camera and our eyes having different response curves to the frequencies of light.
I agree color transitions are a problem, but I don't think that's
what renders purple flowers blue. A purple flower does not contain
red and blue stripes.
Exactly, so the conclusion is that the Bayer pattern is not the
source of the problem.
I have to thank gkl for pointing me to:
http://www.cambridgeincolour.com/tutorials/
I hadn't really explored Sean McHugh's website. I had seen it once and looked through his galleries, but some of the tutorials challenged my thinking and explained concepts that simply hadn't been presented to me in a meaningful way before...
So let me use one of his tutorials (this one I already understood from taking two college courses on human perception and computational vision):
http://www.cambridgeincolour.com/tutorials/color-perception.htm
Think about it this way: there is a whole frequency spectrum describing how light reflects off a surface such as a flower at each point. Your eye has 3 different types of cones with 3 different response curves (Short, Medium, and Long). Now, imagine for a moment that your eye had 4 different kinds of cones. You could differentiate more of the color spectrum than anyone else on earth (one kind of shrimp, the mantis shrimp, actually has something like 16 different kinds of cones). In that case, you would be able to tell the difference between two reds that I can't distinguish, because their underlying spectra are completely different.
To you with 4 cones, it would be as if I were color-blind (and everyone else would be, too). Computer screens have different Red, Green, and Blue pixels that humans in general see as Red, Green, and Blue. Cameras have 3 color filters for Red, Green, and Blue. People who are actually color blind are missing one or two kinds of cones. But if you could see in more dimensions of color than the rest of us, you would look at the world and see color, then look at the computer screen and photographic images and see what amounts to a poor, almost black-and-white copy.
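To make the 3-cone vs. 4-cone point concrete, here is a rough numpy sketch of metamerism. Everything in it is a toy construction of mine (made-up Gaussian "cone" curves, and a second spectrum built so that its three responses match the first by design), not real cone data:

# Two different reflectance spectra that produce identical responses through
# three made-up "cone" curves, but different responses through a fourth.
import numpy as np

wavelengths = np.linspace(400, 700, 61)               # nm, visible range

def gaussian(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

# Hypothetical response curves: three broad "cones" plus a fourth, narrower one.
cones3 = np.stack([gaussian(600, 40),                 # long ("red")
                   gaussian(550, 40),                 # medium ("green")
                   gaussian(450, 40)])                # short ("blue")
cone4 = gaussian(500, 20)                             # the extra cone

# Start from one smooth spectrum, then add a perturbation taken from the
# null space of the 3-cone matrix, so all three responses stay the same.
spectrum_a = gaussian(560, 80)
_, _, vt = np.linalg.svd(cones3)
spectrum_b = spectrum_a + 0.5 * vt[-1]                # vt[-1] integrates to ~0 against all 3 cones

def respond(curves, spectrum):
    return curves @ spectrum                          # discrete version of the integral

print(respond(cones3, spectrum_a))                    # same three numbers...
print(respond(cones3, spectrum_b))                    # ...as these
print(respond(cone4[None, :], spectrum_a),
      respond(cone4[None, :], spectrum_b))            # but the 4th cone tells them apart

To the 3-cone observer the two spectra are the same color; the 4-cone observer sees two different colors.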
Sony and Kodak (and Canon) have produced CCDs with 4 color filters instead of the Bayer pattern's 3 to capture color better. People still use filters like polarizers and color filters (and color polarizers) to adjust the light reaching their camera sensors. People also adjust white balance and the curves on individual color channels to make the picture that comes out of the camera look more like what they saw when they took it (or to create special effects).
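As a toy illustration of the per-channel adjustment idea, here is one crude way white balance can be applied in software: scale each channel so a patch you believe should be neutral gray ends up with equal R, G, and B. The image, the patch, and the gains below are invented for the example; this is not any camera's actual algorithm:

import numpy as np

image = np.random.rand(4, 4, 3)                   # stand-in for a linear RGB image
gray_patch = image[:2, :2].reshape(-1, 3)         # pixels assumed to be neutral gray

gains = gray_patch.mean() / gray_patch.mean(axis=0)   # per-channel scale factors
balanced = np.clip(image * gains, 0.0, 1.0)       # apply gains, keep values in range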
So my answer to the original poster is:
It is not a problem with the camera. You just need to get more creative to capture what you see.
Maybe I need to write some software that renders an image with a color spectrum at each pixel. The user would supply 3 response curves representing red, green, and blue, and the computer would integrate the response curves against the per-pixel spectra to produce an output image of what someone with those 3 response curves would "see" when looking at the scene. Maybe someone has already done this. A picture is worth a thousand words, but doing this properly would take significant time, so for now I hope my description is sufficient to convey the process.
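For what it's worth, the core of that renderer is only a few lines if you assume the spectral image is stored as a height x width x wavelengths array and the response curves are sampled at the same wavelengths. That layout and the Gaussian curves in the toy usage below are my own assumptions, not an existing tool:

import numpy as np

def render(spectral_image, response_curves):
    # Integrate each pixel's spectrum against 3 response curves -> RGB image.
    # spectral_image : (H, W, N) array, the light spectrum at each pixel
    # response_curves: (3, N) array, e.g. long/medium/short sensitivities
    rgb = np.einsum('hwn,cn->hwc', spectral_image, response_curves)
    return rgb / rgb.max()                        # crude normalization for display

# Toy usage: a 2x2 "image" of random spectra viewed through Gaussian curves.
wavelengths = np.linspace(400, 700, 31)
curves = np.stack([np.exp(-0.5 * ((wavelengths - c) / 40) ** 2)
                   for c in (600, 550, 450)])
spectra = np.random.rand(2, 2, wavelengths.size)
print(render(spectra, curves).shape)              # (2, 2, 3)

Swap in a different set of 3 response curves and the same spectral image renders to a different picture, which is the point of the exercise.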
Note: the red, green, and blue filters/pigments/inks used in your LCD, CRT, or plasma monitor, printers/photo printers, and projectors all have different transmission and reflectance curves. Digital imaging is simply complicated: it's a mix of art and science.
-Mike
http://demosaic.blogspot.com