>> You get a lot less luminance information with equal amounts of R,
>> G, and B. Read up:
>> http://www.poynton.com/notes/colour_and_gamma/ColorFAQ.html#RTFToC9
> You might get a little less overall luminance from red and green
> and blue vs. green alone, but the lack of usable color info from
> only having green makes any imbalance much less desirable, which is
> why sensors are not all green.
> Interesting, it also says you cannot weight green more heavily
> because the eye is "extraordinarily sensitive to blue colors." And
> "If you have an RGB image but have no information about its
> chromaticities, you cannot accurately reproduce the image." It also
> emphasizes that you cannot have luminance or color information from
> a single component, without superposition of all three components
> together. The "luminance" of the green component alone is not
> luminance at all; to characterize it as such is wrong. Luminance
> requires R+G+B, as does chrominance.
> Yet, you keep saying more green is better. You can't say more green
> is better without saying that all green is best.

We've already covered why sensors aren't all green. Why do you
keep raising this ridiculous point?
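As a neutral aside, the weighting both sides are arguing over can be
made concrete. Below is a minimal sketch of relative luminance using
the Rec. 709 coefficients (the FAQ linked above works through the
closely related Rec. 601 weights); the function name is illustrative,
not from either poster:

```python
# Relative luminance of a linear-light RGB triple, ITU-R BT.709.
# Green carries ~71.5% of the weight and blue only ~7.2%, which is
# the usual quantitative backing for "green matters most to luma."
REC709_WEIGHTS = (0.2126, 0.7152, 0.0722)  # R, G, B

def relative_luminance(r, g, b):
    wr, wg, wb = REC709_WEIGHTS
    return wr * r + wg * g + wb * b

white = relative_luminance(1.0, 1.0, 1.0)  # ~1.0
green = relative_luminance(0.0, 1.0, 0.0)  # ~0.7152
```

Note that, per the FAQ's own caveat, these weights describe luminance
contribution only; chromaticity discrimination (the "sensitive to
blue colors" point) is a separate question.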
>> Sony changed because the new sensor is better. It's very simple.
> This is, again, incorrect. How many times do I have to explain
> this? Regardless of what you think about the Foveon concept, it
> shows the 25/50/25% RGB Bayer pattern is less desirable than a
> 37/37/25% RGB split, according to Sony. It follows that a 33/33/33%
> split would be ideal, since you'd make use of every sensor for both
> color and luminance.
Sony is using the E filter in an attempt to improve color fidelity,
NOT resolution. If anything, the problem that they are fixing is
not one of too much green, but too much red:
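(As an aside, the percentages being traded here fall straight out of
the repeating CFA tiles. A quick sketch; the classic Bayer tile is
standard, while the RGBE tile layout below is an assumption based on
the common description of Sony's sensor as swapping one green for an
"emerald" filter:)

```python
from collections import Counter

def cfa_shares(tile):
    """Per-color share of sensor sites for a repeating CFA tile."""
    flat = [c for row in tile for c in row]
    n = len(flat)
    return {c: count / n for c, count in Counter(flat).items()}

# Classic Bayer 2x2 tile: two greens per tile -> the 25/50/25 split.
bayer = [["R", "G"],
         ["G", "B"]]

# RGBE: one green replaced by emerald (layout assumed for
# illustration).
rgbe = [["R", "G"],
        ["E", "B"]]

print(cfa_shares(bayer))  # {'R': 0.25, 'G': 0.5, 'B': 0.25}
print(cfa_shares(rgbe))   # each of R, G, E, B at 25%
```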
> Then you should have never agreed that sensing all colors at every
> point is better. But you did; the only logical interpretation of
> that statement is that a 33/33/33% split is best, since there is no
> other way to do it. Horizontal or vertical array, it doesn't
> matter: with a 33/33/33% mosaic or a vertical array, each output
> pixel becomes a straight add of 3 adjacent sensors, no
> interpolation is required, and you've sensed the full spectrum at
> every pixel location, each with a 3-sensor aperture.

I can't understand anything you said in the above paragraph.

>> You can't have a trend without any data points. However, it is
>> amusing that the times when you are typing total gibberish seem to
>> coincide with your times when you believe you are proving me
>> wrong.
> It's a trend when you are proven wrong.
> It's simple: you said you agree that sensing full color at every
> pixel location is better than the Bayer 25/50/25% RGB split, which
> cannot do that. That means you also agree that a 33/33/33% mosaic,
> if such a thing were possible, would also be ideal, since it is
> indistinguishable from a vertical array that builds one full-color
> pixel per every 3 sensors--you simply combine every RGB triple into
> 1 full-color pixel with no need for interpolation, either way.

No - it does not mean that I agree to this. The second sentence does
not follow logically from the first. 33/33/33 RGB is obviously
possible for a color filter array, and also obviously not a good
choice for a color filter array, because it allocates fewer
resources to the wavelengths that most influence our perception of
detail.
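(Whatever its merits as a CFA, the "straight add of 3 adjacent
sensors" idea being debated is at least easy to state precisely. A
toy sketch; the striped sensor layout and function name are
illustrative assumptions, not anyone's actual design:)

```python
# With sensors striped R,G,B,R,G,B,... each output pixel is just a
# grouping of three adjacent raw samples -- no interpolation, but
# only one-third as many output pixels as sensor sites.
def triples_to_pixels(sensor_row):
    """Group a striped R,G,B,... sensor row into full-color pixels."""
    assert len(sensor_row) % 3 == 0
    return [tuple(sensor_row[i:i + 3])
            for i in range(0, len(sensor_row), 3)]

row = [10, 200, 30, 12, 198, 28]  # two RGB triples of raw samples
print(triples_to_pixels(row))     # [(10, 200, 30), (12, 198, 28)]
```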
It has been clearly demonstrated many times that we barely notice
decreased blue resolution, while decreased green resolution causes
very visible degradation. What further proof could you possibly
want?
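(A toy numeric illustration of that asymmetry, not a substitute for
the perceptual demonstrations referred to above: it only measures how
much a resolution loss in one channel perturbs Rec. 709 luma across a
synthetic edge. All names and the test image are mine:)

```python
LUMA_WEIGHTS = (0.2126, 0.7152, 0.0722)  # Rec. 709 R, G, B

def luma(px):
    return sum(w * c for w, c in zip(LUMA_WEIGHTS, px))

# A hard grey edge, 16 pixels wide; the edge sits between indices 6
# and 7 so it falls inside one of the averaging pairs below.
row = [(0.2, 0.2, 0.2)] * 7 + [(0.8, 0.8, 0.8)] * 9

def halve_channel_resolution(pixels, ch):
    """Average one channel over neighbor pairs (2x downsample+hold)."""
    out = []
    for i in range(0, len(pixels), 2):
        pair = pixels[i], pixels[i + 1]
        avg = (pair[0][ch] + pair[1][ch]) / 2
        for px in pair:
            px = list(px)
            px[ch] = avg
            out.append(tuple(px))
    return out

for ch, name in ((1, "green"), (2, "blue")):
    blurred = halve_channel_resolution(row, ch)
    err = sum(abs(luma(a) - luma(b)) for a, b in zip(row, blurred))
    print(name, round(err, 4))  # green ~0.4291, blue ~0.0433

# Identical spatial damage in both cases, but losing green resolution
# moves luma roughly ten times as much as losing blue resolution.
```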