CCD and CMOS colour

Just wondering.

The substrates for CCDs and CMOS image sensors
What do you mean by 'substrates'?
The stuff the image sensor is made of + the doping.
Silicon plus the doping?
have different spectral sensitivity from what I have seen (ESA).
That would be odd, since the 'substrates' are pretty much the same.
Just read an article from ESA where one of their space telescopes had CCD and CMOS image sensors because each sensor type had different sensitivity. One was better towards UV and the other one was better towards IR. Different substates too. Got a bit curious...
Can you be more specific? And is it substrates or substates we’re talking about? In either case, you seem to be defining the word as the semiconductor material, in this case Si, together with the doping applied before lithography begins. In that case, is there a difference besides doping?
 
Can you be more specific? And is it substrates or substates we’re talking about? In either case, you seem to be defining the word as the semiconductor material, in this case Si, together with the doping applied before lithography begins. In that case, is there a difference besides doping?
Ok - silicon seems to be sensitive to radiation between 0.1 and 1000 nm. So the difference between the common image sensors seems to be down to doping.

Seems like the odd image sensor is based on gallium nitride (GaN) or gallium arsenide (GaAs). Not exactly what we find in our cameras.
 
I remember a discussion on this same topic some years ago, and it was pointed out, by Emil Martinec as I recall, that what the 'strength' of the CFA does have an effect on is the balance between so-called 'chroma' and 'luma' noise after processing. Whilst this is not strictly a matter of colour separation, it might affect what is perceived as such in some conditions.
I think that the notion of strong vs. weak CFA in these forums comes from the naïve expectation that the curves must actually separate some spectral bands, which is wrong. People eyeball the spectral curves and are very unhappy when they do not see the separation they expect. Yet they are not so unhappy that their own eyes have spectral responses with significant overlap...
All that is required for recording hues and saturation is that each wavelength should give a unique combination of three numbers (one or two can be zero).
Not really. Where is the human color vision involved in this?
Why should the human colour vision be involved? A photographic camera (unlike a robot's camera) isn't an eye.

The purpose of straight colour photography (excluding creative manipulation) is to present to the viewer the same light and geometry as when viewing the real thing. It should be like a window or a mirror.

Don Cox
 
The purpose of straight colour photography (excluding creative manipulation) is to present to the viewer the same light and geometry as when viewing the real thing. It should be like a window or a mirror.
Well, this is too ambitious. It cannot be done. Then the purpose is to present a viewer with a fake copy (a metamer colorwise) of the original which the viewer cannot distinguish from the original. Or at least, get close enough. If all that fails, create eye-candy.
 
The purpose of straight colour photography (excluding creative manipulation) is to present to the viewer the same light and geometry as when viewing the real thing. It should be like a window or a mirror.
Not gonna happen. Spectral reproduction is not practical with anywhere near the resolution that we expect of modern cameras, at least with a single capture.

Proper encoding to triplets requires assumptions about human vision. A standard observer, if you will.
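To make that "standard observer" step concrete, here is a minimal Python sketch of encoding a spectrum to a triplet: a spectral power distribution is summed against colour-matching functions to give XYZ. The CMF samples below are coarse illustrative placeholders on a 50 nm grid, not the published CIE 1931 tables; real work would use those.

```python
# Hedged sketch: reduce a spectral power distribution to an XYZ triplet by
# summing it against colour-matching functions (a "standard observer").
# The CMF samples are rough illustrative values, NOT the CIE 1931 tables.

WAVELENGTHS = [450, 500, 550, 600, 650]   # nm, coarse 50 nm grid

XBAR = [0.33, 0.00, 0.43, 1.06, 0.28]     # placeholder x-bar samples
YBAR = [0.04, 0.32, 0.99, 0.63, 0.11]     # placeholder y-bar samples
ZBAR = [1.77, 0.27, 0.01, 0.00, 0.00]     # placeholder z-bar samples

def spectrum_to_xyz(spd, dl=50):
    """Riemann-sum an SPD (sampled on WAVELENGTHS) against the CMFs."""
    X = sum(p * x * dl for p, x in zip(spd, XBAR))
    Y = sum(p * y * dl for p, y in zip(spd, YBAR))
    Z = sum(p * z * dl for p, z in zip(spd, ZBAR))
    return X, Y, Z

equal_energy = [1.0] * len(WAVELENGTHS)   # a flat test spectrum
print(spectrum_to_xyz(equal_energy))
```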

Jim

--
https://blog.kasson.com
 
Ok - silicon seems to be sensitive to radiation between 0.1 and 1000 nm.
What is important is the sensitivity of the device -- the photodiode.
So the difference between the common image sensors seems to be down to doping.
And are you saying that the doping of a CCD before lithography is different from a CMOS chip before lithography? I'm guessing that's true, but it's not dispositive. It's the sensitivity of the device that's important here.
Seems like the odd image sensor is based on gallium nitride (GaN) or gallium arsenide (GaAs). Not exactly what we find in our cameras.
 
The purpose of straight colour photography (excluding creative manipulation) is to present to the viewer the same light and geometry as when viewing the real thing. It should be like a window or a mirror.
Not gonna happen. Spectral reproduction is not practical with anywhere near the resolution that we expect of modern cameras, at least with a single capture.

Proper encoding to triplets requires assumptions about human vision. A standard observer, if you will.
And it being like a window or mirror is somewhat undesirable. It would require you to view the image in exactly the same lighting conditions as existed when you took it. Possibly OK for an immersive VR display, but for any of the display methods we have, not a good idea.
 
And it being like a window or mirror is somewhat undesirable. It would require you to view the image in exactly the same lighting conditions as existed when you took it. Possibly OK for an immersive VR display, but for any of the display methods we have, not a good idea.
Yeah, we don't have displays capable of spectral reproduction, either.
 
The purpose of straight colour photography (excluding creative manipulation) is to present to the viewer the same light and geometry as when viewing the real thing. It should be like a window or a mirror.
Maybe this would be a good time to revisit Hunt's objectives for color reproduction:
  • Spectral color reproduction, in which the reproduction, on a pixel-by-pixel basis, contains the same spectral power distributions or reflectance spectra as the original.
  • Exact color reproduction, in which the reproduction has the same chromaticities and luminances as those of the original.
  • Colorimetric color reproduction, in which the reproduced image has the same chromaticities as the original, and luminances proportional to those of the original.
  • Equivalent color reproduction, in which the image values are corrected so that the image appears the same as the original, even though the reproduction is viewed in different conditions than was the original.
  • Corresponding color reproduction, in which the constraints of equivalent color reproduction are relaxed to allow differing absolute illumination levels between the original and the reproduction; the criterion becomes that the reproduction looks the same as the original would have had it been illuminated at the absolute level at which the reproduction is viewed.
  • Preferred color reproduction, in which reproduced colors differ from the original colors in order to give a more pleasing result.
Knowledge of human vision is optional only for the first.
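As a rough illustration of the colorimetric objective, here is a small Python sketch, with hypothetical XYZ values, that tests whether a set of reproduced patches keeps the original chromaticities while scaling luminance by one common factor across the image.

```python
# A minimal sketch of Hunt's "colorimetric colour reproduction" test: same
# chromaticities, luminances scaled by one common factor. The XYZ patch
# values below are hypothetical, for illustration only.

def chromaticity(X, Y, Z):
    """CIE xy chromaticity coordinates from tristimulus values."""
    s = X + Y + Z
    return X / s, Y / s

def is_colorimetric(pairs, tol=1e-3):
    """pairs: list of (original XYZ, reproduced XYZ) patches.
    Chromaticities must match and Y must scale by a single factor."""
    scale = pairs[0][1][1] / pairs[0][0][1]   # common luminance ratio
    for orig, repro in pairs:
        if abs(repro[1] / orig[1] - scale) > tol:
            return False
        xo, yo = chromaticity(*orig)
        xr, yr = chromaticity(*repro)
        if abs(xo - xr) > tol or abs(yo - yr) > tol:
            return False
    return True

patches = [
    ((41.2, 21.3, 1.9), (20.6, 10.65, 0.95)),   # half luminance, same xy
    ((19.0, 20.0, 21.0), (9.5, 10.0, 10.5)),
]
print(is_colorimetric(patches))   # True
```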

--
https://blog.kasson.com
 
Ok - silicon seems to be sensitive to radiation between 0.1 and 1000 nm. So the difference between the common image sensors seems to be down to doping.

Seems like the odd image sensor is based on gallium nitride (GaN) or gallium arsenide (GaAs). Not exactly what we find in our cameras.
Actually silicon is sensitive to x-rays, too, as well as even higher energy photons.

Once the photon energy is less than about 1.1 eV, it does not have enough energy to break Si covalent bonds to generate photoelectrons. Then you have to start to look to other semiconductors, like III-V and II-VI compound materials, for detection, such as in the thermal infrared. Plenty of IR focal plane arrays (= image sensors) are being made for defense and some consumer or industrial applications, but nothing like the volume of CMOS image sensors. Silicon is today's workhorse material for many good reasons.
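As a quick check on that ~1.1 eV figure, here is a small Python sketch converting wavelength to photon energy with E = hc/λ; it puts silicon's long-wavelength cutoff near 1100 nm.

```python
# Convert wavelength to photon energy and compare against silicon's band
# gap. Constants are standard physical values; the ~1.1 eV band gap is the
# figure quoted in the post above.

H = 6.626e-34    # Planck's constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electron-volt

SI_BANDGAP_EV = 1.1   # approximate Si band gap at room temperature

def photon_energy_ev(wavelength_nm):
    """Photon energy in eV for a wavelength in nm (E = h*c/lambda)."""
    return H * C / (wavelength_nm * 1e-9) / EV

for wl in (100, 400, 700, 1000, 1100, 1200):
    e = photon_energy_ev(wl)
    status = "detectable" if e >= SI_BANDGAP_EV else "below Si band gap"
    print(f"{wl:5d} nm -> {e:6.2f} eV  ({status})")
```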
 
D Cox wrote: All that is required for recording hues and saturation is that each wavelength should give a unique combination of three numbers (one or two can be zero).
If you know that the light is monochromatic (single wavelength), then two overlapping channels would in principle be enough to uniquely identify wavelength in the absence of noise and irregularity, right?

For light spectra consisting of more than one wavelength, you would either have to «sample» the spectrum sufficiently densely (say, every 100/10/1 nm) so as to capture enough spectral resolution. Or you would have to «lump together» spectral measurements in a few bands. The minimum is 3 bands for human viewers, and then you need those bands to have a response that can be related to how humans see light.
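For the monochromatic case, a toy Python sketch: with two hypothetical overlapping channel curves whose response ratio is monotonic in wavelength, the ratio of the two recorded values pins down the wavelength regardless of intensity. The Gaussian channel shapes are made up purely for illustration.

```python
# Two overlapping channels identifying a monochromatic wavelength: the
# channel shapes below are hypothetical, chosen so that the response ratio
# is monotonic over 400-700 nm; intensity cancels in the ratio.

import math

def ch_a(wl): return math.exp(-((wl - 450) / 120) ** 2)   # "blue-ish" channel
def ch_b(wl): return math.exp(-((wl - 650) / 120) ** 2)   # "red-ish" channel

def ratio(wl): return ch_b(wl) / ch_a(wl)   # strictly increasing here

def estimate_wavelength(resp_a, resp_b, lo=400.0, hi=700.0):
    """Invert the ratio by bisection; resp_b/resp_a is intensity-free."""
    target = resp_b / resp_a
    for _ in range(60):
        mid = (lo + hi) / 2
        if ratio(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

true_wl, intensity = 532.0, 7.3   # arbitrary monochromatic stimulus
a, b = intensity * ch_a(true_wl), intensity * ch_b(true_wl)
print(round(estimate_wavelength(a, b), 1))   # ~532.0
```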

-h
 
Can you be more specific? And is it substrates or substates we’re talking about? In either case, you seem to be defining the word as the semiconductor material, in this case Si, together with the doping applied before lithography begins. In that case, is there a difference besides doping?
I find it improbable that doping would make a big difference to the efficiency of the photoelectric effect, given that dopant concentrations are at most 1 dopant per 10^4 silicon atoms. It certainly does make a difference in the efficiency of charge collection, but that's a matter of device design, not the 'substrate'.
 
D Cox wrote: All that is required for recording hues and saturation is that each wavelength should give a unique combination of three numbers (one or two can be zero).
If you know that the light is monochromatic (single wavelength), then two overlapping channels would in principle be enough to uniquely identify wavelength in the absence of noise and irregularity, right?

For light spectra consisting of more than one wavelength, you would either have to «sample» the spectrum sufficiently densely (say, every 100/10/1 nm) so as to capture enough spectral resolution.
But then you would need sufficiently many channels as well.
Or you would have to «lump together» spectral measurements in a few bands. The minimum is 3 bands for human viewers, and then you need those bands to have a response that can be related to how humans see light.
Human vision does not really see the spectrum in bands.
 
D Cox wrote: All that is required for recording hues and saturation is that each wavelength should give a unique combination of three numbers (one or two can be zero).
There is a common misconception at work in Don Cox's post, which is embodied in the phrase 'all the colours of the spectrum'. The spectrum does not contain all possible colours, because colour is not about single wavelengths. As I said somewhere else, magenta is an example of a colour that cannot be produced by a single wavelength, because it is produced by simultaneous stimulation of the L and S cones. This is very much about human vision. Turtles and some birds have five-stimulus colour vision, and most birds have four-stimulus vision, extending to about 300 nm wavelength. Colour imaging systems designed for humans would be useless for birds, even if they could hold the camera.
 
There is a common misconception at work in Don Cox's post, which is embodied in the phrase 'all the colours of the spectrum'. The spectrum does not contain all possible colours, because colour is not about single wavelengths. As I said somewhere else, magenta is an example of a colour that cannot be produced by a single wavelength, because it is produced by simultaneous stimulation of the L and S cones. This is very much about human vision. Turtles and some birds have five-stimulus colour vision, and most birds have four-stimulus vision, extending to about 300 nm wavelength. Colour imaging systems designed for humans would be useless for birds, even if they could hold the camera.
Right. Here are the spectra of many green plants, taken from a link somebody posted here years ago. None of them is even remotely close to a spike around 530 nm or so.

[Image: measured reflectance spectra of green plants]
 
bobn2 wrote:

The spectrum does not contain all possible colours, because colour is not about single wavelengths.
Arguing semantics seldom leads anywhere, but if «the spectrum» is any kind of light, then surely the spectrum contains «all colors»? When radio authorities auction radio spectrum, that is a chunk of fairly generic bandwidth, not single wavelengths.
As I said somewhere else, magenta is an example of a colour that cannot be produced by a single wavelength, because it is produced by simultaneous stimulation of the L and S cones. This is very much about human vision. Turtles and some birds have five-stimulus colour vision, and most birds have four-stimulus vision, extending to about 300 nm wavelength. Colour imaging systems designed for humans would be useless for birds, even if they could hold the camera.
Of course. For a 3-degrees-of-freedom system you would generally need at least 3 degrees of freedom. If you want to capture «color» approximately as seen by a wide range of animals, you would need a generic capture system. Perhaps a uniform filterbank sampling the range of light wavelengths into a larger number of bands. Then you could sum over weighted subbands to approximate various species.

But that seems a bit beyond what we need to make those sunsets look plausible for human beings.
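A toy Python sketch of that filterbank idea, with made-up band values and observer weights: each observer is just a different weight matrix applied to the same subband measurements.

```python
# Generic capture: sample the spectrum in many narrow bands, then
# approximate a given observer's channel responses as weighted sums over
# those bands. All numbers below are illustrative, not measured data.

# a 10-band "filterbank" measurement of one scene point (400-700 nm)
bands = [0.2, 0.5, 0.9, 1.0, 0.8, 0.7, 0.6, 0.4, 0.3, 0.1]

# per-observer weights: rows are channels, columns are bands (illustrative)
HUMAN_LMS = [
    [0.0, 0.0, 0.1, 0.3, 0.6, 0.9, 1.0, 0.7, 0.3, 0.1],  # L-like channel
    [0.0, 0.1, 0.4, 0.8, 1.0, 0.7, 0.4, 0.2, 0.1, 0.0],  # M-like channel
    [0.6, 1.0, 0.7, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],  # S-like channel
]

def observer_response(weights, measurement):
    """Weighted sums over subbands, one value per channel."""
    return [sum(w * m for w, m in zip(row, measurement)) for row in weights]

print(observer_response(HUMAN_LMS, bands))
# A tetrachromatic bird would just be a different (4-row) weight matrix.
```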
 
Arguing semantics seldom leads anywhere, but if «the spectrum» is any kind of light, then surely the spectrum contains «all colors»? When radio authorities auction radio spectrum, that is a chunk of fairly generic bandwidth, not single wavelengths.
Crying 'semantics' is generally the sign of someone who isn't following. 'Spectrum' in this context was coined by Isaac Newton, to denote the colours visible after passing white light through a prism. That set of colours does not contain all colours that a human observer can perceive, the major example that I gave being magenta. So long as you aren't of the opinion that it does contain all perceptible colours, and hence think that colour is a single-wavelength phenomenon, we're cool.

In fact, different mixes of wavelengths can lead to the same perceived colour.

--
Is it always wrong
for one to have the hots for
Comrade Kim Yo Jong?
 
Arguing semantics seldom leads anywhere, but if «the spectrum» is any kind of light, then surely the spectrum contains «all colors»? When radio authorities auction radio spectrum, that is a chunk of fairly generic bandwidth, not single wavelengths.
Actually, when radio spectrum is auctioned off, it is in chunks of frequencies specified by their extremes.

It's all about the frequencies allowed to be transmitted. This is a far cry from the way that color is determined.

There are several uses of the word spectrum that are being used here, and they conflict.

If you talk about color and wavelengths, you talk about the spectral colors, and those are an infinite number of single wavelengths. On the other hand, when you talk about illuminants, you specify them by the mix of frequencies contained therein, i.e. D50, D65, Illuminant C, etc. When you talk about a particular color, you can't say much about the mix of wavelengths used to create it, since there are an infinite number of mixes of wavelengths that will result in a particular color. But if you want to specify a particular metamer, you do that by providing the intensities as a function of wavelength.
As I said somewhere else, magenta is an example of a colour that cannot be produced by a single wavelength, because it is produced by simultaneous stimulation of the L and S cones.
Well, there is spectral violet, but your main point is quite valid.
This is very much about human vision. Turtles and some birds have five-stimulus colour vision, and most birds have four-stimulus vision, extending to about 300 nm wavelength. Colour imaging systems designed for humans would be useless for birds, even if they could hold the camera.
Of course. For a 3-degrees-of-freedom system you would generally need at least 3 degrees of freedom. If you want to capture «color» approximately as seen by a wide range of animals, you would need a generic capture system. Perhaps a uniform filterbank sampling the range of light wavelengths into a larger number of bands. Then you could sum over weighted subbands to approximate various species.

But that seems a bit beyond what we need to make those sunsets look plausible for human beings.
The point is that, with present technology, to convert the scene before us into a representation that has a prayer of reproducing the way it looked to a human observer, we need to know how that observer converts light to colors.
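Picking up the metamer point above: here is a small Python sketch, using made-up matching functions with four spectral bands and three channels, showing two visibly different spectra that map to the same triplet, i.e. metamers for that observer.

```python
# Two different spectra, one triplet: a toy demonstration of metamerism.
# The matching functions are contrived illustrative numbers, chosen so
# that a non-trivial null space exists; they are not real observer data.

CMFS = [
    [1.0, 0.0, 0.0, 1.0],   # channel 1 weights per band (illustrative)
    [0.0, 1.0, 0.0, 1.0],   # channel 2
    [0.0, 0.0, 1.0, 1.0],   # channel 3
]

def tristimulus(spd, cmfs, dl=50):
    """Weighted sum of a band-sampled spectrum against each matching function."""
    return tuple(sum(p * c * dl for p, c in zip(spd, row)) for row in cmfs)

def are_metamers(spd1, spd2, cmfs, tol=1e-6):
    """Different spectra, same triplet => metamers for this observer."""
    t1, t2 = tristimulus(spd1, cmfs), tristimulus(spd2, cmfs)
    return spd1 != spd2 and all(abs(a - b) < tol for a, b in zip(t1, t2))

spd_a = [1.0, 1.0, 1.0, 0.0]
spd_b = [0.5, 0.5, 0.5, 0.5]   # visibly different spectrum, same sums
print(are_metamers(spd_a, spd_b, CMFS))   # True
```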
 
