The Top Layer

D Cox

Forum Pro
Messages
34,667
Solutions
44
Reaction score
18,431
Location
UK
While trying out ISO 1600 monochrome on the DP3M, I shot the Color Checker.

This is the same shot, first in colour as normal, and second rendered as Monochrome with the bias 99% to the "blue" layer.

It is not a blue-sensitive layer. If it were, the blue squares would be much lighter than the red squares. On a blue-sensitive film or plate, red colours come out almost black. This top layer seems to be sensitive to all colours, as indeed the often-viewed spectral response graph shows.

The equivalent rendering for the Middle and Bottom layers is left as an exercise for the reader (who will need a Color Checker or really any multi-coloured subject).

[Image: the Color Checker shot rendered in colour]


[Image: the same shot rendered as monochrome, biased 99% to the top layer]


One conclusion from this is that if you want the classic effect of white clouds against a nearly black sky, you can put a red filter on the camera, just as for B&W panchromatic film, and still render from the less-noisy top layer only.

Something to try in the Summer.
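If anyone wants to play with the same idea outside SPP, here is a minimal sketch of a layer-biased monochrome mix, assuming the three layer planes have been exported as arrays by whatever tool you use (the file names and the 99/0.5/0.5 weights are illustrative, not SPP's actual algorithm):

```python
import numpy as np

# Assumed: the three layer planes have been exported (by whatever
# tool you use for X3F files) as same-sized float arrays.  The file
# names here are hypothetical.
top = np.load("top_layer.npy")
mid = np.load("middle_layer.npy")
bot = np.load("bottom_layer.npy")

# Monochrome mix biased 99% to the top ("blue") layer, mimicking the
# slider setting described above; the residual weights are arbitrary.
mono = 0.99 * top + 0.005 * mid + 0.005 * bot

# Scale to 0..1 for display.
mono = (mono - mono.min()) / (mono.max() - mono.min())
```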
 
Good post, DC, actual examples, not words, words, words :-D

On my SD10, sans dust-cover, the blue layer also responds quite significantly to near IR. For a 700nm filtered woodland street + sky shot the raw response levels are approx 12:6:2 from the bottom up.
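(For anyone repeating this, a minimal sketch of the ratio arithmetic, assuming the three raw planes have already been decoded from the X3F file into arrays; the decoding step and the file names are hypothetical.)

```python
import numpy as np

# Assumed: raw layer planes decoded elsewhere, ordered bottom-up.
bottom = np.load("bottom.npy")   # hypothetical file names
middle = np.load("middle.npy")
top = np.load("top.npy")

# Mean raw level per layer, normalised to the top layer:
means = np.array([bottom.mean(), middle.mean(), top.mean()])
print("bottom:middle:top =", np.round(means / means[-1], 1))
# For the 700nm shot described above this came out near 6:3:1
# (i.e. 12:6:2).
```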
Something to try in the Summer.
Or anytime here in Gulf Coast Texas . . .

--
Cheers,
Ted
 
Very interesting. Thank you for posting your observations and those images.
 
That's really interesting. I wonder if someone could do the same for a Merrill and Quattro pair to illuminate some of the theorising taking place in the forum.

thank you

Andrew
 
That's really interesting. I wonder if someone could do the same for a Merrill and Quattro pair to illuminate some of the theorising taking place in the forum.
It'll never catch on, Andrew ;-)

Why show a picture when a 1,000 words will confuse everybody.

(oops, flaming, sorry . . )
 
That's really interesting. I wonder if someone could do the same for a Merrill and Quattro pair to illuminate some of the theorising taking place in the forum.
It'll never catch on, Andrew ;-)

Why show a picture when a 1,000 words will confuse everybody.
When do I limit my posts to 1,000 words, Ted?

;)

BTW, I do show a picture sometimes. (And as you can see, I have made an effort to change my rude shouting ways.)
 
That's really interesting. I wonder if someone could do the same for a Merrill and Quattro pair to illuminate some of the theorising taking place in the forum.
Why show a picture when a 1,000 words will confuse everybody.
Actually, I am more of a word person, but relating the test shot to what Sigma has actually said leaves me wondering why the q sensor isn't more obviously better than the Merrills. That's a lot of relatively noise-free all-color information from the top layer.
To be picky, just one layer can not provide any color information at all, only linear illuminance. Two or more layers are required to get what we call "color".
If you look at the blue and the brown in the top row and the blue and the green in the second row, you can see that they register as nearly the same signal intensity at 20Mpix. The two lower layers will show them as different colours at 5Mpix.
Here again we find an assertion that two sources can provide a color, which flies in the face of the whole CIE tri-stimulus thing.
I think that you can use some other properties of typical images to produce many images that look as though they are 20Mpix full colour images. The Q captures four times as many electrons in the lower layers per pixel and is therefore more colour sensitive at 5Mpix. Provided that the image has the right properties, it therefore appears more colour sensitive at 20Mpix.
Same comment really but I would be interested to know the exact definition of "colour sensitive"?
Is that a thousand words?
Almost . . ;-)
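To make the tri-stimulus point concrete: colour only falls out when the layer responses can be inverted back to three stimulus values, which takes three independent measurements. A toy sketch with an invented, not measured, layer-response matrix:

```python
import numpy as np

# Hypothetical 3x3 matrix: rows = layers (top, mid, bot), columns =
# how strongly each layer responds to R, G, B light.  The values are
# invented for illustration only.
M = np.array([[0.5, 0.7, 1.0],
              [0.6, 1.0, 0.4],
              [1.0, 0.5, 0.2]])

layer_signals = np.array([1.8, 1.7, 1.4])  # what the sensor reports

# With three layers the system is invertible and RGB is recoverable:
rgb = np.linalg.solve(M, layer_signals)
print(rgb)

# With only two layers (two equations, three unknowns), the system
# M[:2] @ rgb = layer_signals[:2] has infinitely many solutions, so
# no unique colour can be assigned.
```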
 
To be picky, just one layer can not provide any color information at all, only linear illuminance. Two or more layers are required to get what we call "color".
Ok, yes, I see your point. My thought, which I expressed poorly, was that the information from the top layer contained ALL the colors. But yes, without the information from the other layers it cannot be separated into the various "colors."
Sigma has described the use of the lower layers as being only in conjunction with the smaller pixel(s) above them. The idea, as they express it, is to use those lower layers for color information. Certainly it makes more sense to use the top, less noisy, layer of smaller pixels (more) for edges and lines than the larger lower layers, which are used (more) for defining color. One problem is the noise from the pixels at the lower levels of the sensor.
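As a rough sketch of the kind of reconstruction being described (a guess at the principle, not Sigma's actual algorithm): carry the fine structure with the full-resolution top layer and upsample the slowly varying lower-layer colour differences:

```python
import numpy as np
from scipy.ndimage import zoom

# Assumed inputs: top is the full-resolution top-layer plane; mid and
# bot are the quarter-area lower-layer planes (stand-in random data).
top = np.random.rand(400, 600)
mid = np.random.rand(200, 300)
bot = np.random.rand(200, 300)

# Downsample the top layer to match and form colour differences at
# the lower resolution...
top_lo = top.reshape(200, 2, 300, 2).mean(axis=(1, 3))
d1 = mid - top_lo
d2 = bot - top_lo

# ...then upsample the (slowly varying) differences and re-attach the
# full-resolution top layer as the luminance-like carrier.
mid_hi = top + zoom(d1, 2, order=1)
bot_hi = top + zoom(d2, 2, order=1)
```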

Before I go off on some tangent, my first reaction is to carefully read what Sigma has said and see if what they said makes sense. Sigma has in some places suggested that the top layer is more sensitive to green and blue, and so acts as (or more like) the "green" sensor in the Bayer sensor. I have no idea if that is correct or not, or if they say that just to make people feel more comfortable with the idea of large and small layered pixels on the same sensor.

I mean, people had a hard enough time with the idea of one pixel detecting three colors, or is it three pixels, all stacked on top of one another? And now they cut the top one into bits?

Richard
 
I agree with Ted that the colours are generated by comparing the differences between the layers, which can only be done at 5Mpix resolution, and the structure by using the top layer, which can be done at 20Mpix resolution. I am also suggesting that "interpolation" can produce the appearance of 20Mpix for most real-world images in most situations. There are other things you might be able to do by using pairs of layers as greyscale images, i.e. the sum of layers rather than the difference.
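A minimal sketch of that sum-and-difference idea, assuming three layer planes on a common scale (the pairing and weights are illustrative only):

```python
import numpy as np

# Assumed: three layer planes on a common scale (stand-in data here).
top = np.random.rand(100, 100)
mid = np.random.rand(100, 100)
bot = np.random.rand(100, 100)

# Greyscale-like channel from a sum of layers...
luma_like = (top + mid + bot) / 3.0

# ...and opponent-style channels from differences between pairs,
# which is where the colour information lives.
opp1 = top - mid
opp2 = mid - bot
```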
 
Sigma has described the use of the lower layers as being only in conjunction with the smaller pixel(s) above them. The idea, as they express it, is to use those lower layers for color information. Certainly it makes more sense to use the top, less noisy, layer of smaller pixels (more) for edges and lines than the larger lower layers, which are used (more) for defining color. One problem is the noise from the pixels at the lower levels of the sensor.
If you read about JPEG compression or TV transmission you'll find a similar principle: the eye is less sensitive to color detail than to luminance detail. It follows that compressing or transmitting the color information at reduced resolution but the luminance information at full resolution is good enough for most folks who view posted images or watch TV.
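The same principle in a few lines, in the style of JPEG's 4:2:0 subsampling (a sketch of the idea, not JPEG's actual code path):

```python
import numpy as np

# Assumed: ycbcr is an HxWx3 array, luma in channel 0, chroma in 1..2.
ycbcr = np.random.rand(480, 640, 3)

y = ycbcr[:, :, 0]       # keep luma at full resolution
cb = ycbcr[::2, ::2, 1]  # chroma kept at a quarter of the samples
cr = ycbcr[::2, ::2, 2]  # (half in each direction, i.e. "4:2:0")
```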
Before I go off on some tangent, my first reaction is to carefully read what Sigma has said and see if what they said makes sense. Sigma has in some places suggested that the top layer is more sensitive to green and blue, and so acts as (or more like) the "green" sensor in the Bayer sensor. I have no idea if that is correct or not, or if they say that just to make people feel more comfortable with the idea of large and small layered pixels on the same sensor.
Gotta hit you with - guess what - a picture which, as you know, is etc, etc, blah di blah.

[Image: spectral response curves of the three Foveon layers]


Now, armed with such a picture we can assess what the layers do with green light, for example, which I will define as being 555nm +/- 10nm (a box function, sorta). Here we go:

[Image: the same curves with a green band at 555nm +/- 10nm marked]


Now we see instantly that there is much more area of green under the middle layer than there is under the other two. And that the top layer sees a tad more green than the lower layer does. From those facts, not opinions, not Sigma marketing-speak, we now know what the Q sensor does with green.
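For anyone who prefers numbers to eyeballed areas, a sketch of the same box-function exercise, assuming the three curves have been digitised; the sample points below are invented placeholders, not measured data:

```python
import numpy as np

# Assumed: wavelengths (nm) and relative responses digitised from the
# published graph; these sample values are placeholders, not real data.
wl = np.arange(400, 701, 10)
top_resp = np.interp(wl, [400, 450, 550, 650, 700], [1.0, 0.9, 0.55, 0.3, 0.2])
mid_resp = np.interp(wl, [400, 450, 550, 650, 700], [0.2, 0.5, 1.0, 0.6, 0.4])
bot_resp = np.interp(wl, [400, 450, 550, 650, 700], [0.05, 0.15, 0.45, 1.0, 0.9])

# Integrate each response over the 545-565nm "green box".
band = (wl >= 545) & (wl <= 565)
for name, resp in [("top", top_resp), ("middle", mid_resp), ("bottom", bot_resp)]:
    print(name, np.trapz(resp[band], wl[band]))
```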

Easy, ennit?

--
Cheers,
Ted
 
While trying out ISO 1600 monochrome on the DP3M, I shot the Color Checker.
Ted,

The physics of transmission of radiation (including light) in semi-conductors is well understood. Shorter wavelengths (toward the blue) propagate a short distance while longer wavelengths propagate a longer distance. The propagation depth for semi-conductors (e.g. silicon) and metals is small to begin with, as most of the energy is absorbed near the surface. The absorption coefficient is inversely proportional to the wavelength (Beer-Lambert law). All this follows from Maxwell's equations and the value of the (complex) index of refraction of the material, where the imaginary part of the index of refraction specifies the absorption properties. What sets dielectric materials apart from semi-conductors is that the imaginary part of the index of refraction is very small in dielectric materials, much larger in semi-conductors, and even higher in metals.

It is not that no red or green gets absorbed by the detectors on top - it is that very little blue makes it to the middle and even less to the lower levels of the Foveon sensor. In reality the cartoon representation of the Foveon, in which each level detects a specific RGB channel, is simplified. While the top layer is not fully panchromatic (some green and some red photons escape detection by the detectors on the top layer and make it to the bottom layers), it is close. That is exactly what your results show.

If you define the "color space" for the Foveon as the three values which are detected at each level in white light, it will not be B/G/R as the cartoon would have you believe but some mixture of the three in different proportions.

Your experiment basically validated what is predicted by the Beer-Lambert law. And yes, you are correct: if you want to darken the skies using the top channel in a B&W image, the use of an 8, 12, or 15 filter, or a red filter, would be helpful.
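(To put rough numbers on that exponential absorption: a sketch using order-of-magnitude 1/e depths for silicon and purely nominal layer boundaries; none of these constants are the real Foveon figures.)

```python
import numpy as np

# Approximate 1/e absorption depths in silicon (micrometres); rough
# order-of-magnitude values, not precise material data.
depth_1e = {"blue 450nm": 0.4, "green 550nm": 1.5, "red 650nm": 3.4}

# Nominal layer boundaries (micrometres below the surface); the real
# Foveon junction depths are not public, so these are illustrative.
boundaries = [0.0, 0.4, 1.6, 5.0]   # top, middle, bottom slabs

for colour, d in depth_1e.items():
    # Fraction of light remaining at each boundary: I/I0 = exp(-z/d)
    remaining = np.exp(-np.array(boundaries) / d)
    absorbed = -np.diff(remaining)   # fraction absorbed in each slab
    print(colour, np.round(absorbed, 2))
```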
 
Ted,
Are you responding to D Cox or me?

If to me, the last thing I need today is a big lecture on how the Foveon works, thank you.
 
It is not that no red or green gets absorbed by the detectors on top - it is that very little blue makes it to the middle and even less to the lower levels of the Foveon sensor. In reality the cartoon representation of the Foveon, in which each level detects a specific RGB channel, is simplified. While the top layer is not fully panchromatic (some green and some red photons escape detection by the detectors on the top layer and make it to the bottom layers), it is close. That is exactly what your results show.
My original shot was taken under a Halogen lamp, so the relative brightness of red, green and blue squares in the output would be more equal than in daylight -- the upward slope of the Halogen spectrum complements the downward slope of the top layer's sensitivity.
If you define the "color space" for the Foveon as the three values which are detected at each level in white light, it will not be B/G/R as the cartoon would have you believe but some mixture of the three in different proportions.
The crucial figure for detecting the colour balance of light falling on any pixel is the difference between the outputs from the layers. Same as the opponent-colour setup in the human retina.
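The halogen-versus-daylight slope is easy to check with Planck's law; a sketch comparing a ~3200K halogen spectrum with daylight approximated as a 5500K black body (only an approximation to real daylight):

```python
import numpy as np

h, c, k = 6.626e-34, 3.0e8, 1.381e-23

def planck(wl_nm, T):
    """Spectral radiance of a black body at wavelength wl_nm (nm)."""
    wl = wl_nm * 1e-9
    return (2 * h * c**2 / wl**5) / (np.exp(h * c / (wl * k * T)) - 1)

wl = np.array([450.0, 550.0, 650.0])           # blue, green, red
halogen = planck(wl, 3200.0)
daylight = planck(wl, 5500.0)

# Normalise each spectrum to its green value to compare the slopes:
print("halogen  B:G:R =", np.round(halogen / halogen[1], 2))
print("daylight B:G:R =", np.round(daylight / daylight[1], 2))
```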
 
Are you responding to D Cox or me?

If to me, the last thing I need today is a big lecture on how the Foveon works, thank you.
I think he's responding to me. You can just skip.

"Lectures" are OK, because they show, not necessarily how the thing works, but how a person thinks it works.

Regards, Don.
 
I think he's responding to me. You can just skip.
Thank you.
"Lectures" are OK, because they show, not necessarily how the thing works, but how a person thinks it works.
Beautifully said! I suppose lectures are indeed more informative than those one-line assertions that appear occasionally round here ;-)
 
My original shot was taken under a Halogen lamp, so the relative brightness of red, green and blue squares in the output would be more equal than in daylight -- the upward slope of the Halogen spectrum complements the downward slope of the top layer's sensitivity.
Indeed, it seems the Merrill is optimised for balanced detection in tungsten or halogen light; at least, the top layer receives many more photons in daylight.

Which then is a one (or two) line assertion, which Ted thinks is less valuable than a lecture and very much less valuable than a hands-on measurement :)

One of the problems with measuring these things is that we do not know what the A/D converter does, or even whether the digital signal is amplified. So, if we have a grey card and a tungsten lamp and get 6152/4539/3213 in the three channels, that really tells us nothing unless we assume that the amplification between the number of captured photons and the digital number is approximately 1. Which it might be ... or not.

For the Quattro, where the layer areas are different, it is even more complicated.
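To make the gain ambiguity concrete, a sketch using the grey-card numbers quoted above and two invented gain assumptions:

```python
import numpy as np

dn = np.array([6152.0, 4539.0, 3213.0])   # the digital numbers quoted above

# Two (invented) per-layer gain assumptions between photons and DN:
for name, gain in [("unit gain", np.array([1.0, 1.0, 1.0])),
                   ("rising gain", np.array([1.0, 2.0, 4.0]))]:
    photons = dn / gain                    # inferred photon counts
    print(name, np.round(photons / photons[0], 2))
# Same DNs, very different inferred photon ratios - the gain matters.
```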
 
Ted,

Your illustration shows what I was talking about, which is that the top layer (obviously more sensitive to blue than green) acts as, or provides the information which we would expect from, the "green" sensor in the CFA (Bayer) sensors. What is the effect of using high "blue" sensitivity for that information rather than green? Is there some ideal "color" sensitivity that should be used for that purpose? We can propose that the human eye is most sensitive to green in real life, and so we can double the number of green sensors in a CFA, but does it really matter which "color" is used for that (luminance?) purpose?
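One way to poke at that question: compare a standard green-weighted luma with an invented blue-heavy weighting on a few sample colours (Rec. 709 weights for the first; the second set is made up):

```python
import numpy as np

colours = np.array([[1.0, 0.0, 0.0],    # red
                    [0.0, 1.0, 0.0],    # green
                    [0.0, 0.0, 1.0],    # blue
                    [0.5, 0.5, 0.5]])   # mid grey

w_rec709 = np.array([0.2126, 0.7152, 0.0722])  # standard green-weighted luma
w_blueish = np.array([0.25, 0.35, 0.40])       # invented blue-heavy weighting

print("Rec. 709 luma:  ", colours @ w_rec709)
print("blue-heavy luma:", colours @ w_blueish)
# The grey patch matches either way; saturated colours shift in tone.
```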

The big problem for Foveon has been the "Noise" from the lower layers, really the S/N ratio, which should be improved, for our purposes, by the larger pixel areas in the lower layers.

Is the current SPP algorithm for untangling all the information from the sensor the best possible one? Particularly when considering the large and small pixels? Probably not, because there has been no serious competition in that area.



Richard

--
My small gallery: http://www.pbase.com/richard44/inbox
 
