# What the imager has

Started Feb 11, 2014 | Discussions thread
Re: What the imager has

PrebenR wrote:

mike earussi wrote:

Drplusplus wrote:

mike earussi wrote:

The problem is this: because only the blue layer can differentiate color variations for each set of 4 pixels, if you have the blue component the same but the other color channels different then instead of four different tones you're only going to see one that was the average of the four, so you're inevitably going to lose some subtle color variations that the present Merrill sensor can see.

I created this chart as an example of what I mean:

Trying to understand your chart...why does B=R=G=100 give two different colors (dark gray and green)?

Did you perhaps mean to have G=200 on the green block and G=100 on the red block?

These are colors I created in PS to represent what would happen if four unique colors hit the Q sensor, but with the blue component the same in each. These channel combinations each create very different colors, yet they would all appear the same within that one quad block. The RGB numbers are just the PS color numbers. The Q block would produce a single averaged color covering the same area as the four original blocks, so the resulting color resolution would be the equivalent of a 4.9mp Foveon sensor, not a 19mp one.
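To make the arithmetic behind that chart concrete, here is a minimal sketch (my own illustration, not Sigma's actual pipeline, with made-up PS-style RGB values) of four distinct colors that share the same blue component collapsing into one averaged color:

```python
# Hypothetical 2x2 quad of RGB colors (0-255), chosen so the blue
# component is identical in all four pixels -- analogous to the
# Quattro's full-resolution top layer reading the same value everywhere.
quad = [
    (200, 100, 100),  # reddish
    (100, 200, 100),  # greenish
    (100, 100, 100),  # dark gray
    (200, 200, 100),  # yellowish
]

# If the per-pixel color differences can't be recovered, the quad is
# effectively rendered as the average of the four pixels:
avg = tuple(sum(c[i] for c in quad) // 4 for i in range(3))
print(avg)  # one averaged color replaces four distinct ones: (150, 150, 100)
```

Four clearly different tones come out as a single muddy average, which is the loss the chart was meant to show.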

So wrong in so many ways.

1. PS didn't give you those values, unless it was tripping on acid
2. Please stop thinking about blue, green, red layers. They do not exist in the Foveon sensor
3. The readout from the first layer would not be constant as you suggest.

The point is that it is an example. And you're free to call the top layer anything you want. Nor did I make up the concept, but rather got it here from the "boss":

I merely used PS color numbers to illustrate it instead of his because that way I could also show the color differences, and if you notice his example only works if the four top layers all have different values. But since you don't like my example why don't you create your own to show us how your reasoning works instead?

But my point is valid: there will be times when the top-layer values of all four pixels are the same, while the second and third layers would have differed had they also been split into four. In the Quattro they are not, so whatever differences those layers would have had get averaged together instead. This averaging will cause a loss of color resolution, and because of it there will be times when the Merrill shows more resolution than the Quattro. How much of a difference there will be in a real-world photo vs a test chart or example we won't know until the actual camera ships.
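The claimed loss can be sketched as a toy readout model (an assumption about how the binning behaves, not Sigma's actual demosaic): the top layer is kept per pixel, while the middle and bottom layers are sampled once per 2x2 quad.

```python
# Toy model of the claimed Quattro readout: top layer per pixel,
# middle and bottom layers averaged over the 2x2 quad.
def quattro_readout(pixels):
    """pixels: four (top, mid, bot) tuples for one 2x2 quad."""
    mid_avg = sum(p[1] for p in pixels) / 4
    bot_avg = sum(p[2] for p in pixels) / 4
    return [(p[0], mid_avg, bot_avg) for p in pixels]

# Four pixels with identical top-layer values but different lower layers:
quad = [(100, 180, 60), (100, 60, 180), (100, 120, 120), (100, 40, 40)]
out = quattro_readout(quad)
# All four outputs are identical: the per-pixel color variation is gone.
print(len(set(out)))  # -> 1
```

A Merrill-style readout would return the four input tuples unchanged, so in exactly this case (equal top-layer values, differing lower layers) it preserves color detail the binned readout discards.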

It's going to be very interesting to see an actual comparison between the Merrill and Q sensor, but it wouldn't surprise me at all to see the Merrill have higher resolution in some areas than the Quattro.

Remember the Quattro has a 19 MP top layer while the Merrill has 15 MP per layer.
--
Lightwriting with Sigma
