Sorry to rain on your crusade, Karl, but the overlap doesn't hurt
your ability to recognize colors as long as every distinct hue can
be mapped to a distinct R, G, B combination in the sensor's space.
There are practical issues in constructing a transformation between
these spaces, but as I've pointed out to you many times, mapping
between color spaces is a common problem with well-known solutions.
Your claim that the sensor would have trouble distinguishing
between cyan and blue is just wrong. From the very graphs that you
provided, it's clear that for cyan the sensor will see some red and
much more green than blue, while for blue it will see no red and
much more blue than green.
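To make that concrete, here is a sketch with made-up layer responses
chosen only to match the qualitative description above (the actual
numbers would come from the sensor's response curves):

```python
# Hypothetical layer responses consistent with the curves described:
# cyan -> some red, much more green than blue;
# blue -> no red, much more blue than green.
cyan = (0.2, 0.9, 0.5)   # (red, green, blue) layer responses
blue = (0.0, 0.3, 0.9)

# The two readings are distinct triples, so nothing prevents a later
# transformation from mapping them to distinct perceptual hues.
print(cyan != blue)  # True
```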
More formally, all we need for the sensor to do its job in terms of
resolving correct hues is for there to be a bijection between hues
in the sensor's space and hues in perceptual space. We can think
of each color's response curve as a basis function. What matters
is that the three layers form a basis that spans the perceptual
space. If they do, we can do a change of basis into, e.g., sRGB.
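As a sketch of that change of basis, suppose we summarize the three
response curves as a 3x3 matrix whose columns are the sensor's
responses to the sRGB primaries. The matrix below is entirely made up;
only its structure (overlapping but linearly independent columns)
matters:

```python
import numpy as np

# Hypothetical sensor matrix: column i is the (top, middle, bottom)
# layer response to a unit stimulus of sRGB primary i. Invented
# numbers; overlap is fine as long as the columns are independent.
M = np.array([
    [0.9, 0.3, 0.1],
    [0.4, 0.8, 0.5],
    [0.1, 0.3, 0.9],
])

# If M is invertible (the three curves span the space), the change
# of basis back to sRGB is just the inverse transformation.
M_inv = np.linalg.inv(M)

true_color = np.array([0.2, 0.7, 0.1])   # some sRGB stimulus
sensor_reading = M @ true_color          # what the layers record
recovered = M_inv @ sensor_reading       # change of basis back
print(np.allclose(recovered, true_color))  # True
```

The point is that overlap alone doesn't break anything: any invertible
matrix admits this round trip exactly.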
Note that some overlap is necessary even in a Bayer pattern
solution if colors such as yellow are to be detected at all. If you
want to construct an argument that is more compelling, you will
need to argue that because the basis functions for the X3 sensor
are further from orthogonal than a "pure" Bayer pattern set, you
may have trouble spanning a wide range of saturations for some
colors given the limited dynamic range of the sensor.
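That stronger argument has a standard numerical handle: the condition
number of the basis matrix, which bounds how much quantization noise
in the sensor readings gets amplified when we invert back to
perceptual coordinates. The two matrices below are invented for
illustration, one with little overlap and one with heavy overlap:

```python
import numpy as np

# Two hypothetical bases (columns = layer response vectors), both
# invertible. "bayer_like" overlaps only slightly; "x3_like" overlaps
# heavily, as the X3 curves do. The numbers are made up.
bayer_like = np.array([[1.0, 0.1, 0.0],
                       [0.1, 1.0, 0.1],
                       [0.0, 0.1, 1.0]])
x3_like    = np.array([[1.0, 0.7, 0.4],
                       [0.7, 1.0, 0.7],
                       [0.4, 0.7, 1.0]])

# A larger condition number means errors of the sensor's limited
# dynamic range are amplified more on inversion, squeezing the range
# of saturations that survive the transformation.
print(np.linalg.cond(bayer_like))
print(np.linalg.cond(x3_like))   # larger for the overlapping basis
```

Both matrices are invertible, so hue resolution survives in either
case; what the larger condition number costs you is precision, which
is exactly the saturation/dynamic-range trade-off described above.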
I could be persuaded by such an argument, but it is relatively
subtle and will depend upon a lot of data we just don't have. Until
then, I don't think it serves anybody to spread FUD.
--
Ron Parr
FAQ:
http://www.cs.duke.edu/~parr/photography/faq.html
Gallery:
http://www.pbase.com/parr/