The Top Layer

While trying out ISO 1600 monochrome on the DP3M, I shot the Color Checker.

This is the same shot, first in colour as normal, and second rendered as Monochrome with the bias 99% to the "blue" layer.

It is not a blue-sensitive layer. If it were, the blue squares would be much lighter than the red squares. On a blue-sensitive film or plate, red colours come out almost black. This top layer seems to be sensitive to all colours, as indeed is shown by the often viewed graph.

The equivalent rendering for the Middle and Bottom layers is left as an exercise for the reader (who will need a Color Checker or really any multi-coloured subject).

One conclusion from this is that if you want the classic effect of white clouds against a nearly black sky, you can put a red filter on the camera, just as for B&W panchromatic film, and still render from the less-noisy top layer only.

Something to try in the Summer.
Ted,

The physics of transmission of radiation (including light) in semi-conductors is well understood. Shorter wavelengths (toward the blue) are propagated a short distance while longer wavelengths are propagated a longer distance. The propagation depth for semi-conductors (e.g. Silicon) and metals is small to begin with, as most of the energy is absorbed near the surface. The absorption coefficient rises sharply toward shorter wavelengths, and the intensity falls off exponentially with depth (the Beer-Lambert law). All this follows from Maxwell's equations and the value of the (complex) index of refraction of the material, where the imaginary part of the index of refraction specifies the absorption properties. What sets dielectric materials apart from semi-conductors is that the imaginary part of the index of refraction is very small in dielectric materials, much larger in semi-conductors, and even higher in metals.
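For anyone who wants to play with the numbers, here is a minimal sketch of that exponential fall-off, using rough, order-of-magnitude absorption coefficients for silicon and hypothetical layer depths (my own figures for illustration, not Sigma's):

    # Beer-Lambert attenuation sketch: fraction of light surviving to a given
    # depth in silicon. Absorption coefficients and layer depths are rough
    # illustrative assumptions, not measured Foveon values.
    import numpy as np

    alpha_per_um = {            # approximate absorption coefficient, 1/micron
        "blue  (450 nm)": 2.5,
        "green (550 nm)": 0.7,
        "red   (650 nm)": 0.3,
    }
    depths_um = np.array([0.2, 0.6, 2.0])   # hypothetical layer depths, microns

    for name, alpha in alpha_per_um.items():
        surviving = np.exp(-alpha * depths_um)    # I(z)/I0 = exp(-alpha * z)
        print(name, np.round(surviving, 3))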

It is not that no red or green gets absorbed by the detectors at the top - it is that very little blue makes it to the middle and even less to the lower levels of the Foveon sensor. In reality, the cartoon representation of the Foveon, with each level detecting a specific RGB channel, is simplified. While the top layer is not fully panchromatic (some green and some red photons escape detection by the detectors in the top layer and make it to the bottom layers), it is close. That is exactly what your results show.

If you define the "color space" for the Foveon as the three values which are detected at each level in white light, it will not be B/G/R as the cartoon would have you believe but some mixture of the three in different proportions.

Your experiment basically validated what is predicted by the Beer-Lambert Law. And yes, you are correct: if you want to darken the skies using the top channel in a B&W image, the use of an 8, 12, 15, or red filter would be helpful.
Are you responding to D Cox or me?

If to me, the last thing I need today is a big lecture on how the Foveon works, thank you.
I think he's responding to me. You can just skip.
Thank you.
"Lectures" are OK, because they show, not necessarily how the thing works, but how a person thinks it works.
Beautifully said! I suppose lectures are indeed more informative than those one-line assertions that appear occasionally round here ;-)
 
Ted

Your illustration shows what I was talking about, which is that the top layer (obviously more sensitive to blue than green) acts as, or provides the information which we would expect from, the "green" sensor in the CFA (Bayer) sensors. What is the effect of using high "blue" sensitivity for that information rather than green? Is there some ideal "color" sensitivity that should be used for that purpose? We can propose that the human eye is most sensitive to green in real life, and so we can double the number of green sensors in a CFA, but does it really matter which "color" is used for that (luminance?) purpose?
Yes, we see the word "Luminance" here a lot since the Quattro came out, but rarely do we see its definition. So it has become a word similar to "resolution" which means many things to many people.

So. In the world of imaging, Luminance Y = 0.2R + 0.7G + 0.07B, where we're talking linear RGB decoded from a camera. The coefficients 0.2, 0.7 and 0.07 are called weights or weightings. The weighting of 0.7 for green tells us that green is the most important channel when determining the luminance of a pixel's-worth of a scene. And we note that blue (decoded blue, not the blue layer) carries hardly any weight at all, i.e. 0.07 is not a typo. And that is why JPEG compression uses red and blue for the color info, I reckon.
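For reference, the standard (Rec. 709) coefficients behind those rounded figures are 0.2126, 0.7152 and 0.0722; a one-liner makes the weighting concrete:

    # Relative luminance Y from *linear* RGB, Rec. 709 weights.
    def luminance(r, g, b):
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    print(luminance(1, 0, 0))   # pure red   -> 0.2126
    print(luminance(0, 1, 0))   # pure green -> 0.7152
    print(luminance(0, 0, 1))   # pure blue  -> 0.0722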

A difficulty in using only one channel to determine Y (as Foveon implies) is that the top layer signal does not tell us what its detected wavelength is. Go back to the diagram and it can be seen that there are many instances of incident spectra that would give the same output. A narrow spectrum at say 450nm could easily give the same output as a wider spectrum at say 630nm (think "area under the curve").
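A quick numerical sketch of that "area under the curve" point, using a made-up, steadily falling response curve (not Sigma's published one):

    # Two different incident spectra giving the same single-channel output.
    # The response curve here is an invented straight line, for illustration only.
    import numpy as np

    wl = np.arange(400, 701, 1.0)                       # wavelength, nm
    response = np.interp(wl, [400, 700], [1.0, 0.2])    # assumed falling response

    narrow_blue = np.where(abs(wl - 450) <= 5, 1.0, 0.0)    # ~10 nm band
    wide_red    = np.where(abs(wl - 630) <= 25, 1.0, 0.0)   # ~50 nm band

    out_blue = (narrow_blue * response).sum()   # channel output ~ area under curve
    out_red  = (wide_red * response).sum()
    wide_red *= out_blue / out_red              # scale brightness so outputs match
    print(out_blue, (wide_red * response).sum())   # identical channel outputs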

So all this talk of glibly using the top layer as "the luminance channel" carries little weight, lol. Before someone rushes to disagree, provision of Sigma's exact algorithm would be necessary to convince anyone here. So far, no such algorithm has been provided, even after all this time.

I contrast that with the SD9 conversion matrix which has been in the public domain for over 10 years.
The big problem for Foveon has been the "Noise" from the lower layers, really the S/N ratio, which should be improved, for our purposes, by the larger pixel areas in the lower layers.
I think Roland is interested in finding out what those "Noise" figures are.
Is the current SPP algorithm for untangling all the information from the sensor the best possible one?
Nobody knows what the current algorithm is, AFAIK. So, the question can not be answered.
Particularly when considering the large and small pixels? Probably not, because there has been no serious competition in that area.
Later,
 
Nothing new was described by Ted's experiment. If the results had been different from what they were, then some major theories would have been falsified - but they weren't.
Truman, will you PLEASE stop referring to "Ted's experiment"? - it is not mine. And will you try not to respond to my posts when you are clearly addressing someone else?
 
Ted

Your illustration shows what I was talking about, which is that the top layer (obviously more sensitive to blue than green) acts as, or provides the information which we would expect from, the "green" sensor in the CFA (Bayer) sensors. What is the effect of using high "blue" sensitivity for that information rather than green? Is there some ideal "color" sensitivity that should be used for that purpose? We can propose that the human eye is most sensitive to green in real life, and so we can double the number of green sensors in a CFA, but does it really matter which "color" is used for that (luminance?) purpose?
Yes, we see the word "Luminance" here a lot since the Quattro came out, but rarely do we see its definition. So it has become a word similar to "resolution" which means many things to many people.
I would mean the L channel in Lab colour space.
So. In the world of imaging, Luminance Y = 0.2R + 0.7G + 0.07B, where we're talking linear RGB decoded from a camera. The coefficients 0.2, 0.7 and 0.07 are called weights or weightings. The weighting of 0.7 for green tells us that green is the most important channel when determining the luminance of a pixel's-worth of a scene. And we note that blue (decoded blue, not the blue layer) carries hardly any weight at all, i.e. 0.07 is not a typo. And that is why JPEG compression uses red and blue for the color info, I reckon.
You are assuming a camera that uses narrow-band filters to detect colour.
A difficulty in using only one channel to determine Y (as Foveon implies) is that the top layer signal does not tell us what its detected wavelength is.
That is why it can be treated as a Luminance channel. Its sensitivity diminishes steadily with increasing wavelength, but it would work just as well if it were even throughout the spectrum.
Go back to the diagram and it can be seen that there are many instances of incident spectra that would give the same output. A narrow spectrum at say 450nm could easily give the same output as a wider spectrum at say 630nm (think "area under the curve").

So all this talk of glibly using the top layer as "the luminance channel" carries little weight, lol. Before someone rushes to disagree, provision of Sigma's exact algorithm would be necessary to convince anyone here. So far, no such algorithm has been provided, even after all this time.

I contrast that with the SD9 conversion matrix which has been in the public domain for over 10 years.
The big problem for Foveon has been the "Noise" from the lower layers, really the S/N ratio, which should be improved, for our purposes, by the larger pixel areas in the lower layers.
I think Roland is interested in finding out what those "Noise" figures are.
Using larger pixels should decrease high (spatial) frequency noise, but I don't think it will reduce noise at lower frequencies (i.e. blotches).
Is the current SPP algorithm for untangling all the information from the sensor the best possible one?
Nobody knows what the current algorithm is, AFAIK. So, the question can not be answered.
Particularly when considering the large and small pixels? Probably not, because there has been no serious competition in that area.
 
Ted

Your illustration shows what I was talking about, which is that the top layer (obviously more sensitive to blue than green) acts as, or provides the information which we would expect from, the "green" sensor in the CFA (Bayer) sensors. What is the effect of using high "blue" sensitivity for that information rather than green? Is there some ideal "color" sensitivity that should be used for that purpose? We can propose that the human eye is most sensitive to green in real life, and so we can double the number of green sensors in a CFA, but does it really matter which "color" is used for that (luminance?) purpose?
Yes, we see the word "Luminance" here a lot since the Quattro came out, but rarely do we see its definition. So it has become a word similar to "resolution" which means many things to many people.
I would mean the L channel in Lab colour space.
So. In the world of imaging, Luminance Y = 0.2R + 0.7G + 0.07B, where we're talking linear RGB decoded from a camera. The coefficients 0.2, 0.7 and 0.07 are called weights or weightings. The weighting of 0.7 for green tells us that green is the most important channel when determining the luminance of a pixel's-worth of a scene. And we note that blue (decoded blue, not the blue layer) carries hardly any weight at all, i.e. 0.07 is not a typo. And that is why JPEG compression uses red and blue for the color info, I reckon.
You are assuming a camera that uses narrow-band filters to detect colour.
A difficulty in using only one channel to determine Y (as Foveon implies) is that the top layer signal does not tell us what its detected wavelength is.
That is why it can be treated as a Luminance channel. Its sensitivity diminishes steadily with increasing wavelength, but it would work just as well if it were even throughout the spectrum.
Go back to the diagram and it can be seen that there are many instances of incident spectra that would give the same output. A narrow spectrum at say 450nm could easily give the same output as a wider spectrum at say 630nm (think "area under the curve").

So all this talk of glibly using the top layer as "the luminance channel" carries little weight, lol. Before someone rushes to disagree, provision of Sigma's exact algorithm would be necessary to convince anyone here. So far, no such algorithm has been provided, even after all this time.

I contrast that with the SD9 conversion matrix which has been in the public domain for over 10 years.
The big problem for Foveon has been the "Noise" from the lower layers, really the S/N ratio, which should be improved, for our purposes, by the larger pixel areas in the lower layers.
I think Roland is interested in finding out what those "Noise" figures are.
Using larger pixels should decrease high (spatial) frequency noise, but I don't think it will reduce noise at lower frequencies (i.e. blotches).
Is the current SPP algorithm for untangling all the information from the sensor the best possible one?
Nobody knows what the current algorithm is, AFAIK. So, the question can not be answered.
Particularly when considering the large and small pixels? Probably not, because there has been no serious competition in that area.
Thanks.

Cleaning up the blotches is a major practical issue. It puts an ugly limit on DR.
 
Yes, we see the word "Luminance" here a lot since the Quattro came out, but rarely do we see its definition. So it has become a word similar to "resolution" which means many things to many people.
I would mean the L channel in Lab colour space.
L* represents Lightness, which is a power function of luminance.


Is not Y the generally accepted symbol for luminance?
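A minimal sketch of that power-function relationship, for what it's worth (Y relative to white = 1):

    # CIE 1976 Lightness L* from relative luminance Y (white = 1).
    def lightness(y):
        eps = (6 / 29) ** 3                      # ~0.008856
        f = y ** (1 / 3) if y > eps else y / (3 * (6 / 29) ** 2) + 4 / 29
        return 116 * f - 16

    print(round(lightness(0.18), 1))   # middle grey, ~18% luminance -> about 49.5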
So. In the world of imaging, Luminance Y = 0.2R + 0.7G + 0.07B, where we're talking linear RGB decoded from a camera. The coefficients 0.2, 0.7 and 0.07 are called weights or weightings. The weighting of 0.7 for green tells us that green is the most important channel when determining the luminance of a pixel's-worth of a scene. And we note that blue (decoded blue, not the blue layer) carries hardly any weight at all, i.e. 0.07 is not a typo. And that is why JPEG compression uses red and blue for the color info, I reckon.
You are assuming a camera that uses narrow-band filters to detect colour.
I was making no assumptions about the camera (did you miss the bolded text above?). The rest is from the literature, with which I hope you agree.

A difficulty in using only one channel to determine Y (as Foveon implies) is that the top layer signal does not tell us what its detected wavelength is.
That is why it can be treated as a Luminance channel. Its sensitivity diminishes steadily with increasing wavelength, but it would work just as well if it were even throughout the spectrum.
I can not agree with that.

The opinion on the tech forum was that Y should match the CIE Photopic Luminous Efficiency function and that, if not, dire consequences would result. The corollary being that the top layer can not be used successfully because it is a poor match to the said function.

So, in using the top layer alone as "Luminance" (assuming that is in fact done), its peak response at about 450nm is nowhere near that of the CIE function at 555nm - and that is ignoring the obvious skew in the top layer's response.
I think Roland is interested in finding out what those "Noise" figures are.
Using larger pixels should decrease high (spatial) frequency noise, but I don't think it will reduce noise at lower frequencies (i.e. blotches).
I'll pass on that, I don't know enough about noise to comment.
 
Cleaning up the blotches is a major practical issue. It puts an ugly limit on DR.
And my experience is that blotching doesn't show up when there is no light. Which might sound odd, but a naive measurement of DR might compare shots with lens cap on and lens cap off under bright light thereby coming up with numbers like 9EV or more.

Blotching, however, is significant at a higher level than dark, and we're pushed to get much better than 4 or 5EV in practice.

We're only saved by SPP - the best de-blotcher on the planet :-) . Not convinced? Try a low-light SD9 shot and convert it with default DCraw (been there, done that).

Crocodile Dundee would say "Now this is noise!" . . .

--
Cheers,
Ted
 
Well, this little sub thread has left me behind! Maybe someone could take me by the hand and walk me through it.

Here's a word picture of my current state of understanding (right or wrong). Perhaps this might be helpful for anyone else in a state of confusion ;-)

1. There is no such thing as colour in a physical sense, it is something constructed by minds

2. What is physical is the intensity of EM radiation reflected by an object.

3. So, if you illuminate an object then use a detector of some kind sensitive to EM, you can work out differences in the intensity of reflected EM from various objects (and thus their relative brightness or luminance)

4. The reflectance of EM by an object is rarely uniform at every wavelength: every object will reflect different percentages of EM of different wavelengths.

5. If you had a tunable detector sensitive to all wavelengths, you could measure the percentage EM reflected at every specific wavelength

6. From that information you can characterise the complete reflectance spectrum of the object and make a complete reflectance "fingerprint" of it.

7. The eye is not such a wide band detector: it detects only the narrow spread of wavelengths we call "light".

8. The human eye possesses three detectors capable of detecting light

9. Each has a slightly different sensitivity to different wavelengths within the spectrum of light

10. So, when measuring the reflectance of an object, each detector will measure the brightness differently (depending on the percentage of various wavelengths the object reflects)

11. So you end up with a triplet of brightness measures of the object

12. Armed with these triplets, the brain can distinguish objects not just by total brightness but by the differences between the three parts of each triplet. Certain values of the triplets can be arbitrarily assigned to different sensations in the brain.

13. These sensations are what we call "colour" (or "color")

14. We can mimic the eye in a sense by using 3 artificial detectors of different wavelength sensitivity to measure brightnesses as long as we find detectors that produce similar values as the retina (or can be corrected to do so).

15. These values are then arbitrarily assigned to a colour - presumably lots of research lies behind that assignment to create a plausible facsimile

Some questions:

A) Why 3 detectors? Why not 2 or 19? What difference does it make?

B) If 3 detectors are the minimum (and this is presumably why Foveon has 3 layers), how does the Quattro work? It has asymmetric layers, with each top layer detector sharing the lower layers with its neighbours. If you take a group of 4 top layer pixels then the top layer could yield different values for each detector but the two lower layers would be the same for each pixel. A pixel that allows only the top layer to vary within each group of 4 doesn't sound as if it would actually work very well... yet it does...
 
Ted

I have been watching these discussions with continued interest but with not much extra to offer.

However, it may be that we are getting lost in the difference between technical definitions and the ability of the sensor plus algorithms to produce attractive images.

The thinking behind the Quattro and how it makes use of the distinctive capabilities of Foveon technology is impressive but there are inevitable compromises in how it works.

When we loosely and vaguely describe the top layer as being like luminance, what we intend is the ability to resolve shapes and patterns, especially edges. Although the response of the top layer is very different from that of the eye, it can still provide edge and texture data to the image construction algorithms.

--

Infinite are the arguments of mages. Truth is a jewel with many facets. Ursula K LeGuin
 
Basically yes and you seem to have started with a good understanding.

However:

1 Objects mostly scatter light rather than reflect it. In a minority of cases they can emit it - fluorescence for example.

2 Just to be clear, colour is a construct of the human brain, and we distinguish three primary colours, from which all the rest can be mixed, because we have three types of colour sensor. If you have a spectrum produced by a prism, you are laying out individual wavelengths of light in a line. You can see how the brain produces colours by comparing this with charts of the wavelength response of the three types of cone.

3 We also have rods that provide low resolution greyscale images in low light. The moonlight zone where both rods and cones contribute to sight has different properties.

4 There is a small part of the eye that can detect polarisation, which you can see if you twist a filter while looking at a bright white background. It looks like a little yellow rag.

5 Although you can do technical descriptions of images, the brain interprets them as pictures with meaning and content.

6 Maybe someone else can explain why our sensory apparatus works the way it does. Saying "evolution" is both true and also useless. There are some other aspects of cognitive processing to do with facial recognition, developmental aspects of what you actually "see" and the difference between static and moving images and how the eyes scan a scene.

7 Do you need to know all this to be a good photographer - absolutely not. Do you need it to be a good camera engineer - absolutely yes. We are at the edge of what even a gear forum might discuss, but the relative capabilities of different sensors and algorithms do depend on this (good work by camera engineers).

Infinite are the arguments of mages. Truth is a jewel with many facets. Ursula K LeGuin
 
Ted

I have been watching these discussions with continued interest but with not much extra to offer.

However, it may be that we are getting lost in the difference between technical definitions and the ability of the sensor plus algorithms to produce attractive images.
From which, should I conclude that, if someone calls luminance 'Lightness', it doesn't matter and is not relevant to a discussion about the top layer?
 
Ted

It is what it is - the wavelength response curve given by Sigma. Your point is that there is a definition of Luminance that relates to the combined response of the receptors in the human eye, in fact several definitions, depending on what standard is used.

Given the panchromatic nature of the Top Layer it does rather perform a similar function to the sum of the three rods in the eye. It is much more sensitive at the blue end and relatively less so in the green middle and red end.

I'm not sure we are really communicating? I am genuinely interested in the OP's view of how the Quattro delivers images via the RAW to SPP to JPEG route.
 
Ted

It is what it is - the wavelength response curve given by Sigma. Your point is that there is a definition of Luminance that relates to the combined response of the receptors in the human eye, in fact several definitions, depending on what standard is used.
Yes.
Given the panchromatic nature of the Top Layer it does rather perform a similar function to the sum of the three rods in the eye.
It does not. BTW, the eye has rods and cones, not just rods.
It is much more sensitive at the blue end and relatively less so in the green middle and red end.
It is what it is - the wavelength response curve given by Sigma.
I'm not sure we are really communicating?
We are not, which is why we're going back and forth.
I am genuinely interested in the OP's view of how the Quattro delivers images via the RAW to SPP to JPEG route.
No disrespect, but perhaps you should communicate with DC, rather than myself?
 
Ted

Quite right - got my rods and cones mixed up.

Best

Andrew
 
And my experience is that blotching doesn't show up when there is no light.
That is an interesting observation. Yes, blotches are mostly in the mid or mid-low tones.

Personally I think the blotches are created by the difference between the middle and the lower layer. And I guess that the dark tones are mostly taken from the top layer. That makes sense if the blotches are not found in the dark tones.
We're only saved by SPP - the best de-blotcher on the planet :-) .
That is probably the scary truth.
 
A) Why 3 detectors? Why not 2 or 19? What difference does it make?
In layman's terms, you need at least three channels to simulate the eye's color response. There's such a thing as metamerism (if you see the same color for two different spectral compositions, then the sensor must detect the same color for those spectra too, and vice versa), and based on that it is assumed that, for proper color reproduction, the eye's spectral responses must be representable as a linear combination of the sensor's spectral responses. There can be more than three channels, but this makes both the sensor technology and the processing more expensive. Some sensors with four different color channels have been produced (Sony's RGBE, for example), but they were abandoned due to processing difficulties.

What difference can more channels make? Maybe a bit better color reproduction, maybe wider operating conditions (from candlelight to very high color temperature scenes, or when using lamps with weird spectra) - not much for ordinary photography. For scientific or forensic uses such sensors would be more interesting - but using external filters is usually a much better and more flexible solution.
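A minimal numerical sketch of that "linear combination" condition, using made-up Gaussian curves for both the eye and a three-channel sensor (purely illustrative, not real response data):

    # Can the eye's responses be written as a linear combination of the sensor's?
    # All curves below are invented Gaussians; a small least-squares residual
    # would indicate that faithful colour reproduction is possible in principle.
    import numpy as np

    wl = np.arange(400, 701, 5.0)
    gauss = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)

    eye    = np.stack([gauss(565, 40), gauss(540, 40), gauss(445, 25)])  # "L, M, S"
    sensor = np.stack([gauss(600, 60), gauss(530, 60), gauss(460, 50)])  # 3 channels

    # find the 3x3 matrix M minimising || sensor.T @ M - eye.T ||
    M, *_ = np.linalg.lstsq(sensor.T, eye.T, rcond=None)
    residual = np.linalg.norm(sensor.T @ M - eye.T)
    print(np.round(M, 3), residual)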
B) If 3 detectors are the minimum (and this is presumably why Foveon has 3 layers), how does the Quattro work? It has asymmetric layers, with each top layer detector sharing the lower layers with its neighbours. If you take a group of 4 top layer pixels then the top layer could yield different values for each detector but the two lower layers would be the same for each pixel. A pixel that allows only the top layer to vary within each group of 4 doesn't sound as if it would actually work very well... yet it does...
It does sound to me as if it would work quite well. You need to consider that a) the human eye doesn't separate the color of little details very well and b) in real images the color channels are strongly correlated. (Both of these are the basis of CFA sensor processing too.)

I would process Q data as follows (noise removal and many other corrections omitted):
  1. group/bin top layer pixels 2x2
  2. calculate pixel values (HSV or Lab or RGB + Y - in whatever color space it is most correct to process next steps) from resulting 5MPix three-layer data as usual
  3. resize result to 20MPix
  4. redistribute intensity (luminance, lightness - let everyone choose correct term here, I don't know) in these 2x2 areas according to top layer real pixel values
I think such an approach would create relatively few visible artefacts for most images, even without residual color fringe correction. Is someone willing to try that approach on real data?
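In case anyone does want to try it, here's a rough sketch of those four steps on synthetic data (the layer names, the nearest-neighbour upsampling and the simple "scale by binned top value" redistribution are my assumptions, not Sigma's actual pipeline):

    # Toy run of the 4-step idea above on random data.
    import numpy as np

    H, W = 8, 8                                    # top-layer resolution (toy size)
    top = np.random.rand(H, W)                     # full-res top layer
    mid = np.random.rand(H // 2, W // 2)           # quarter-res middle layer
    bot = np.random.rand(H // 2, W // 2)           # quarter-res bottom layer

    # 1. bin top-layer pixels 2x2 to match the lower layers
    top_binned = top.reshape(H // 2, 2, W // 2, 2).mean(axis=(1, 3))

    # 2. build low-res three-layer "colour" data (stand-in for a real conversion)
    colour_lo = np.stack([bot, mid, top_binned], axis=-1)

    # 3. resize the low-res result back up (nearest neighbour here)
    up = lambda x: np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)
    colour_hi = up(colour_lo)

    # 4. redistribute intensity within each 2x2 block using the real top values
    gain = top / up(top_binned)
    colour_hi *= gain[..., None]
    print(colour_hi.shape)                         # (8, 8, 3)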
 
Yes, we see the word "Luminance" here a lot since the Quattro came out, but rarely do we see it's definition. So it has become a word similar to "resolution" which means many things to many people.

So. In the world of imaging, Luminance Y = 0.2R + 0.7G + 0.07B, where we're talking linear RGB decoded from a camera. The coefficients 0.2, 0.7 and 0.07 are called weights or weightings. The weighting of 0.7 for green tells us that green is the most important channel when determining the luminance of a pixel's-worth of a scene. And we note that blue (decoded blue, not the blue layer) carries hardly any weight at all, i.e. 0.07 is not a typo. And that is why JPEG compression uses red and blue for the color info, I reckon.
Regarding luminosity: I have been doing lots of monochrome film-based photography. I then used filters. Those filters changed the spectral response, as did the lighting/illuminant. Still - I would absolutely call all those responses luminosity. I mean - monochrome - what is there more than luminosity?
A difficulty in using only one channel to determine Y (as Foveon implies) is that the top layer signal does not tell us what it's detected wavelength is. Go back to diagram and it can be seen that there are many instances of incident spectra that would give the same output. A narrow spectrum at say 450nm could easily give the same output as a wider spectrum at say 630nm (think "area under the curve").

So all this talk of glibly using the top layer as "the luminance channel" carries little weight, lol. Before someone rushes to disagree, provision of Sigmas exact algorithm would necessary to convince anyone here. So far, no such algorithm has been, even after all this time.
Yes, we do not know the algorithm. We can guess its nature though.

For Merrill and Quattro it has to put very high weight on the top layer, for different reasons. Merrill has lousy lower layers and Quattro has lower layers of lower resolution.

So this is my (maybe faulty) guess.

They use the top layer in order to make the original monochrome image. This image has max quality and resolution. But, it has the wrong luminosity - it is too blue sensitive.

Then they use the noisy information from the lower layers. Exactly how they do it - I do not know. They cannot simply denoise and smooth them and do some mixing, because then you would get halos and also decoloring of small details.

I guess they use some AI thinking or adaptive method. This method is used to correct the faulty luminance and also to color the image, without getting too many blotches.

I can be wrong - maybe they just do some clever filtering. I cannot understand how, though.
 
A) Why 3 detectors? Why not 2 or 19? What difference does it make?
In layman's terms, you need at least three channels to simulate the eye's color response. There's such a thing as metamerism (if you see the same color for two different spectral compositions, then the sensor must detect the same color for those spectra too, and vice versa), and based on that it is assumed that, for proper color reproduction, the eye's spectral responses must be representable as a linear combination of the sensor's spectral responses. There can be more than three channels, but this makes both the sensor technology and the processing more expensive. Some sensors with four different color channels have been produced (Sony's RGBE, for example), but they were abandoned due to processing difficulties.

What difference can more channels make? Maybe a bit better color reproduction, maybe wider operating conditions (from candlelight to very high color temperature scenes, or when using lamps with weird spectra) - not much for ordinary photography. For scientific or forensic uses such sensors would be more interesting - but using external filters is usually a much better and more flexible solution.
B) If 3 detectors are the minimum (and this is presumably why Foveon has 3 layers), how does the Quattro work? It has asymmetric layers, with each top layer detector sharing the lower layers with its neighbours. If you take a group of 4 top layer pixels then the top layer could yield different values for each detector but the two lower layers would be the same for each pixel. A pixel that allows only the top layer to vary within each group of 4 doesn't sound as if it would actually work very well... yet it does...
It does sound to me as if it would work quite well. You need to consider that a) the human eye doesn't separate the color of little details very well and b) in real images the color channels are strongly correlated. (Both of these are the basis of CFA sensor processing too.)

I would process Q data as follows (noise removal and many other corrections omitted):
  1. group/bin top layer pixels 2x2
  2. calculate pixel values (HSV or Lab or RGB + Y - in whatever color space it is most correct to process next steps) from resulting 5MPix three-layer data as usual
  3. resize result to 20MPix
  4. redistribute intensity (luminance, lightness - let everyone choose correct term here, I don't know) in these 2x2 areas according to top layer real pixel values
I think such an approach would create relatively few visible artefacts for most images, even without residual color fringe correction. Is someone willing to try that approach on real data?
Yes - I think this is how they do it - in principle. At least that would be a good way of doing it. And maybe they do the same for Merrill in order to fix the problem with the noisy lower layers.
 
Well, this little sub thread has left me behind! Maybe someone could take me by the hand and walk me through it.

Here's a word picture of my current state of understanding (right or wrong). Perhaps this might be helpful for anyone else in a state of confusion ;-)

1. There is no such thing as colour in a physical sense, it is something constructed by minds

2. What is physical is the intensity of EM radiation reflected by an object.

3. So, if you illuminate an object then use a detector of some kind sensitive to EM, you can work out differences in the intensity of reflected EM from various objects (and thus their relative brightness or luminance)

4. The reflectance of EM by an object is rarely uniform at every wavelength: every object will reflect different percentages of EM of different wavelengths.

5. If you had a tunable detector sensitive to all wavelengths, you could measure the percentage EM reflected at every specific wavelength

6. From that information you can characterise the complete reflectance spectrum of the object and make a complete reflectance "fingerprint" of it.

7. The eye is not such a wide band detector: it detects only the narrow spread of wavelengths we call "light".

8. The human eye possesses three detectors capable of detecting light

9. Each has a slightly different sensitivity to different wavelengths within the spectrum of light

10. So, when measuring the reflectance of an object, each detector will measure the brightness differently (depending on the percentage of various wavelengths the object reflects)

11. So you end up with a triplet of brightness measures of the object

12. Armed with these triplets, the brain can distinguish objects not just by total brightness but by the differences between the three parts of each triplet. Certain values of the triplets can be arbitrarily assigned to different sensations in the brain.
"Differences" being the key word.
13. These sensations are what we call "colour" (or "color")

14. We can mimic the eye in a sense by using 3 artificial detectors of different wavelength sensitivity to measure brightnesses as long as we find detectors that produce similar values as the retina (or can be corrected to do so).

15. These values are then arbitrarily assigned to a colour - presumably lots of research lies behind that assignment to create a plausible facsimile

Some questions:

A) Why 3 detectors? Why not 2 or 19? What difference does it make?
Most mammals have two types of detector, and many birds have four. A very small number of women have four, and report finer distinctions in the brown/fawn/olive colours.

The more you have, assuming they are well engineered, the nearer you are to getting a complete spectrum curve rather than a crude estimate.
B) If 3 detectors are the minimum (and this is presumably why Foveon has 3 layers), how does the Quattro work? It has asymmetric layers, with each top layer detector sharing the lower layers with its neighbours. If you take a group of 4 top layer pixels then the top layer could yield different values for each detector but the two lower layers would be the same for each pixel. A pixel that allows only the top layer to vary within each group of 4 doesn't sound as if it would actually work very well... yet it does...
It works because most detail in most images involves changes in luminance (which the top layer can detect) rather than differences in hue.
 
