Role of sensor in color rendering?

Source

Source

In this case, the sensor would be pretty blind to differences in color between 640 and 750 nm.
From my measurements of the D700, the spectral responses of the blue and green channels are non-zero in that range, and it is typical for a sensor with an IR filter to have its cut-off around 700 nm.
Thanks for the comments! I didn't want to be negative about the D700, just to illustrate some details.
I'm trying to say that spectral response graphs quite often are not reliable, and graphs like the one above are something I've never seen across the 100+ cameras I've measured. I do, however, understand what you were trying to illustrate - but maybe this graph doesn't suit the purpose well.

On a side note, the Y axis in linear scale is less informative than when it is in log scale; and control over light intensity is critical for obtaining reliable data.
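To illustrate the point about the Y axis, here is a quick synthetic sketch (a hypothetical curve, not a real D700 measurement): the same response plotted with a linear and a log Y axis. A small out-of-band leak is invisible on the first and obvious on the second.

```python
# Synthetic illustration (hypothetical curve, not a real measurement): a weak IR
# leak around 800 nm disappears on a linear Y axis but is obvious on a log axis.
import numpy as np
import matplotlib.pyplot as plt

wl = np.arange(380, 1001)                                # wavelength in nm
main = np.exp(-0.5 * ((wl - 530) / 40.0) ** 2)           # main (green-ish) passband
leak = 1e-3 * np.exp(-0.5 * ((wl - 800) / 60.0) ** 2)    # tiny out-of-band response
response = main + leak

fig, (ax_lin, ax_log) = plt.subplots(1, 2, figsize=(9, 3.5))
ax_lin.plot(wl, response)
ax_lin.set_title("linear Y: leak invisible")
ax_log.semilogy(wl, response)
ax_log.set_ylim(1e-5, 2)
ax_log.set_title("log Y: leak visible near 800 nm")
for ax in (ax_lin, ax_log):
    ax.set_xlabel("wavelength (nm)")
plt.tight_layout()
plt.show()
```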

--
 
Iliah,

Thanks for explaining. I was thinking about linear vs. log scale, too.

It would be interesting to see some of the data you measured. Good sensor data is not an abundant resource.

One thing that surprises me in the spectral response plots I have seen is the sharp rise of the red channel around 570 nm. The drop-off on the right side, I would think, comes from the IR filter.

BTW, do you think the IR cutoff is an interference filter or just IR absorbing?

Best regards

Erik

--
Erik Kaffehr
Website: http://echophoto.dnsalias.net
Magic tends to disappear in controlled experiments…
Gallery: http://echophoto.smugmug.com
Articles: http://echophoto.dnsalias.net/ekr/index.php/photoarticles
 
What I dislike the most in this illustration, and what looks like what I get in many, many of my shots, is the color variation in the area around her mouth, especially the small area below her lower lip.
Now I see it. I agree that it looks greenish, not necessarily because it is; probably because it is surrounded by pink and red - but it does look unnatural.
Yeah, might be the reason why it looks 'too green' to me!
If so, that's the simultaneous contrast issue that Iliah mentioned earlier.
Yes, got it. :)
By the way, there are imperfect standard observers for constant-color fields, but, AFAIK, there are no standard models for simultaneous contrast color effects, and it is certainly possible (and I think, likely) that the quantitative effects are different in different people. This would not be detected by any color normalcy testing that I know of.

Jim
I know that I'm very sensitive to color contrast. Take, say, a car that was partially repainted: I might spot it. (Even if I have to admit that sometimes it is really, really well done and invisible.)
But I always thought it was more a question of temperament, demanding nature and attention to detail...

In the case of the 'hue shift' across faces, this makes me feel uncomfortable. As if I was looking at sick people. Really unpleasant (but highly subjective and maybe cultural).

It's quite amazing, because for example when I did my first CCSG shots, the results were sh** due to glare. This gave me oversaturated skin tones... but after hours looking at the same images, I could not see it anymore.
On the other hand, a slight variation across the face: I feel uncomfortable. I also tried the other way around, using the Capture One Pro Color Palette or the DxO PhotoLab HSL tool to 'uniformize' hue a bit: it's even worse.

All that said, another problem I have is how the color rendering of skin tones seems to vary with different lighting. I have an old stock of Canon 300D (my first DSLR), Canon 350D and Canon EOS 40D photos of the same persons photographed under different lighting, including catastrophic mixed lighting (for example natural light + integrated flash for fill-in), and the skin tones are much more consistent from shot to shot. That's with Canon's in-camera processing, though, or its DxO PhotoLab/Lightroom/Capture One Pro sibling. So it remains to be seen what it would look like without all the subjective adjustments Canon makes...
I'll try to find or make a more 'neutral' profile and see...
I recall seeing an article by TheSuede on the Fred Miranda forums about CFA designs.
What he essentially said was that Canon has taken a route that works well for mixed light, while some other sensors were optimized more for something like studio shooting.
I also recall that both TheSuede and Iliah Borg considered the CFA design of the Sony A900 a pretty decent compromise between color rendition and SNR.
I had some discussions with Tim Parkin regarding the color rendition of the Phase One P45+ back I have. Tim Parkin and his friend Joe Cornish had issues with yellow contamination of chlorophyll greens. I have seen that, too, but I hoped that DCP profiles would be able to handle it.
In the end, we found out that Tim and I had different interpretations of color, although I must say that I would lean toward Tim being right.
One of the things I considered was that the IR filter (or hot mirror) design may have played a role.
Interestingly, a couple of years ago, Phase One introduced a new back, called 'Trichromatic'. They produced some explanations that ignored pretty much all color science ever developed. But, reading between the lines, it may be concluded that there were modifications to the 'hot filter'. From what I have seen of real-world samples, the new Trichromatic back did not have the yellow contamination of vegetation greens I have seen on my P45+ back and on the IQ3 100MP I have seen tested.

Lime green seems to have extreme characteristics in the near IR (infrared) region. That was one of the reasons I included lime green in my 'tricolore' tests. But I found that all three sensors I tried (Phase One P45+, Sony Alpha 900 and Sony A7R II) did a decent job on that lime.
In the end, I don't pretend to know...
Thanks for sharing.

I read a few posts from theSuede on Fred Miranda's forum, as well as a few articles on the web.

My 'layman takeaways' + some questions:
  • Two different light spectra can appear as the same color to an observer (human, camera...) (I did know). In such a case, these two different spectra are called 'metamers' (I did not know) and the perceived matching 'metamerism' (I did know).
  • If the perceived color matching of two patches falls apart under a different illuminant, it's called 'illuminant metameric failure' (I did know this can happen - and learnt it the hard way! - but I did not know what it was called).
  • One observer may see the two as the same color, but another observer may not. It's called 'observer metameric failure'. (I did know that because of my father, who is colorblind: sometimes he sees two different colors while I see the same, sometimes the opposite. But I never thought about the implications for cameras.)
    Then I fall into the complete unknown...
  • theSuede talks about 'hue resolution' and 'metameric failure' (I guess he means: 'observer metameric failure').
    My layman understanding is that the two are closely linked to the sensor's 'spectral response' (itself linked to the CFA design, the silicon, and other stuff I don't know...). Correct?
    Also, that 'hue resolution' and 'metameric failure' may be somewhat related: sometimes the sensor may be unable to distinguish two different spectra because of an 'insufficient' hue resolution. Really not sure about this one, is it correct?
    And if it fails while a human observer would be able to pick up the difference, that's a 'metameric failure' and may cause problems. Correct?
    Which leads me to another question: what happens when the camera can pick up a difference and the human observer cannot?
Most of this is basic linear algebra. You have a linear operator (a projection) which maps the real world to the captured colors (yes, I will call them colors), and your eyes do the same but with a somewhat different operator. Think of it as projections under different angles. Sometimes two different spectral densities would be projected (viewed) the same by the sensor but not by you. Those are colors you can distinguish but the sensor cannot. On the other hand, this means that there must be spectra which the sensor will distinguish but you cannot. What happens then depends on how the signal is processed - if by a color matrix only, this means the photo will show real color variations in some cases where you cannot see any.

A slightly more complicated version of that is to put a threshold (sensitivity) on what differences you can see or the sensor can distinguish, and so on.
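As a concrete (toy) illustration of the two-projections argument, here is a small numpy sketch with made-up Gaussian sensitivities - not real camera or cone data - that constructs two spectra the 'camera' cannot tell apart while the 'eye' can:

```python
# Toy sketch of the projection argument (made-up Gaussian sensitivities, not real
# CFA or cone data): build two spectra with identical camera triplets but
# different eye triplets, i.e. a camera-side metameric pair.
import numpy as np

wl = np.linspace(400, 700, 301)
def band(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

camera = np.stack([band(460, 25), band(540, 35), band(610, 25)])   # hypothetical CFA
eye    = np.stack([band(445, 30), band(545, 45), band(575, 50)])   # crude S/M/L stand-ins

s1 = 0.5 + 0.3 * np.sin(wl / 40.0)      # an arbitrary smooth test spectrum
v  = band(580, 15)                      # a perturbation we will hide from the camera
# Keep only the part of v the camera cannot see: subtract its best explanation
# in terms of the camera curves (least squares), so that camera @ d == 0.
d  = v - camera.T @ np.linalg.lstsq(camera.T, v, rcond=None)[0]
s2 = s1 + 0.5 * d                       # scaled to keep the toy spectrum positive

print("camera:", camera @ s1, "vs", camera @ s2)   # same triplet -> indistinguishable
print("eye:   ", eye @ s1, "vs", eye @ s2)         # different triplet -> distinguishable
```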
  • theSuede also says, speaking about the 5D Mark II (with an apparently new CFA design), that the 'hue resolution' in the orange-green region would be low, which would tend to minimize color differences in skin tones. (He does not mention it, but I guess this is in addition to the subjective adjustments Canon makes in its color profiles.) Might this be correct?
    Might it explain (in addition to the subjective adjustments made in their color profiles) why some Canon bodies give 'more consistent' skin tones?
    ('More consistent' in the sense that there is less color variation depending on the lighting and people's complexion.)
Thanks!
The post referenced here is highly speculative and I would ignore it; see also below.
Hi,

With regard to hue resolution, it may be conceivable that a sensor would be more or less sensitive to a change of wavelength across the spectrum:

Source

Source

In this case, the sensor would be pretty blind to differences in color between 640 and 750 nm, as it would only have one signal.
So does your eye. Such an analysis must be done in comparison to human vision. The problem I see above is that the red curve is too narrow and misses a lot of the 500-550 nm band or so; and thus it does not seem possible to compensate for it with the color matrix.

[Image: human cone (S, M, L) spectral sensitivity curves - Cones_SMJ2_E.svg]

On the other hand, a change at 570 nm would have a huge effect.
Your eyes react similarly.
Many thanks for your insight and the links.

I found this about the Canon EOS 5D Mark II:

[Image: Canon EOS 5D vs. 5D Mark II spectral response curves - canon_5d_5d2.png]


source: http://www.astrosurf.com/buil/50d/test.htm

If I get it correctly, hue resolution would be quite 'low' between ~530 and 550 nm, right?
It is hard to answer questions like this before we compute a "good" color matrix. The small red bump would be compensated to some extent by the green one. We have to see what is left, and then understand how sensitive we are to the remaining error, etc. Something like this was done in the threads mentioned above.
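For reference, here is one hedged sketch of what "compute a good color matrix" can mean in practice (again with made-up Gaussian sensitivities, not measured 5D Mark II curves): fit a 3x3 matrix by least squares over a set of test spectra, then inspect the residual, which is the part no matrix can correct.

```python
# Rough sketch (hypothetical sensitivities, not 5D Mark II measurements): fit a 3x3
# color matrix M so that M @ camera_response approximates the eye response over a
# training set of spectra, then look at what error remains.
import numpy as np

wl = np.linspace(400, 700, 301)
def band(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

camera = np.stack([band(460, 25), band(540, 35), band(610, 25)])
eye    = np.stack([band(445, 30), band(545, 45), band(575, 50)])

# Training spectra: random mixtures of broad bands (real work would use measured
# reflectances, e.g. a ColorChecker, under a chosen illuminant).
rng = np.random.default_rng(0)
spectra = np.zeros((len(wl), 200))
for k in range(spectra.shape[1]):
    for c, w, a in zip(rng.uniform(400, 700, 3), rng.uniform(20, 80, 3), rng.uniform(0, 1, 3)):
        spectra[:, k] += a * band(c, w)

cam_resp = camera @ spectra          # 3 x 200 camera triplets
eye_resp = eye @ spectra             # 3 x 200 target triplets

# Least-squares fit of M (3x3) minimizing ||M @ cam_resp - eye_resp||.
M = np.linalg.lstsq(cam_resp.T, eye_resp.T, rcond=None)[0].T
residual = M @ cam_resp - eye_resp
print("M =\n", np.round(M, 3))
print("RMS residual per channel:", np.sqrt((residual ** 2).mean(axis=1)))
```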
Got it.

Should there be a 'lack' of hue resolution in this wavelength band, may it partially explain (in addition to the subjective adjustments made to their color profiles) why Canon cameras seem to show less variation in skin tonality depending on the lighting?

theSuede was mentioning that... but there were a lot of 'shortcuts' and things left unsaid in his posts... it does not appear obvious to me.
 
I had a look at the portrait posted.

First I took two patches laid over each other with the light and shadow Lab values; second, I did the same but using the same L value. Here is a screen dump:

[Attachment: screen dump of the patch comparison]

[Attachment: JPEG of the patch comparison]
Here is the JPEG. :-( Note how much the screen dump distorts the colors :-(

Looking at the HSV coordinates, the hue change would be on the red/yellow segment.
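The same comparison can be made numerically. Here is a small sketch with made-up Lab values (not the ones actually sampled from the portrait) showing how the Lab hue angle moves along the red/yellow segment, and what equalizing L isolates:

```python
# Small sketch with hypothetical Lab values (not the actual sampled patches):
# the Lab hue angle hab = atan2(b, a) says which way a color leans
# (0 deg ~ red, 90 deg ~ yellow); forcing equal L isolates the chromatic change.
import math

def hue_chroma(L, a, b):
    hab = math.degrees(math.atan2(b, a)) % 360.0
    Cab = math.hypot(a, b)
    return hab, Cab

lit    = (62.0, 18.0, 21.0)   # hypothetical skin tone in the light
shadow = (41.0, 12.0, 19.0)   # hypothetical skin tone in the shadow

for name, patch in (("light", lit), ("shadow", shadow)):
    h, C = hue_chroma(*patch)
    print(f"{name:6s}: L={patch[0]:4.1f}  hue={h:5.1f} deg  chroma={C:4.1f}")

# Same shadow chromaticity at the light patch's L: the remaining difference is hue/chroma only.
h, C = hue_chroma(lit[0], shadow[1], shadow[2])
print(f"shadow at equal L:  hue={h:5.1f} deg  chroma={C:4.1f}")
```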

Just to say, it is a nice portrait and I like the colors.

Best regards
Erik
Thanks for looking closely.

That's what Jim said (shift toward yellow).

That's what I see with the patches superimposed.
That's not what I see in the actual picture, at least in some areas (in particular below the lips). It looks like the brown has 'a greenish cast'.
Some say it may be a color contrast effect.
Human white balance issue/observer metameric failure? Possibly.

It can be extremely difficult to distinguish a greenish cast from a yellowish one.

Unfortunately, human perception isn't so straightforward with desaturated colours as it doesn't follow the same regime as saturated hues. The brain seems to adjust to desaturated colours differently - in that we are more sensitive to hue shifts than we should be based purely on retinal sensitivity. This is not accounted for by monitor calibration and it doesn't take a big deviation in the observer, or the display colour, to cause problems.

Rather than trying to balance green it sometimes helps to shift mid-yellow slightly towards red in the HSL adjustment, or try desaturating yellow slightly.
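One possible way to express that idea in code (only an illustration of the principle, not how Capture One or DxO implement their HSL tools): weight the adjustment by how close a pixel's hue is to mid-yellow, nudge that hue toward red, and desaturate it slightly.

```python
# Illustrative sketch (not any raw converter's actual HSL implementation): shift
# hues near mid-yellow (~60 deg) slightly toward red and desaturate them a touch,
# with a smooth weight so neighbouring hues are barely affected.
import colorsys
import numpy as np

def adjust_yellows(rgb, hue_shift_deg=-4.0, sat_scale=0.95):
    """rgb: float array of shape (..., 3) with values in [0, 1]."""
    out = np.empty_like(rgb)
    flat, res = rgb.reshape(-1, 3), out.reshape(-1, 3)
    for i, (r, g, b) in enumerate(flat):
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        dist = abs((h * 360.0 - 60.0 + 180.0) % 360.0 - 180.0)   # degrees away from yellow
        w = max(0.0, 1.0 - dist / 45.0)                          # 1 at yellow, 0 beyond 45 deg
        h = (h + w * hue_shift_deg / 360.0) % 1.0                # nudge toward red
        s = s * (1.0 - w * (1.0 - sat_scale))                    # slight desaturation
        res[i] = colorsys.hsv_to_rgb(h, s, v)
    return out

skin = np.array([[0.85, 0.70, 0.55]])   # a hypothetical light skin tone
print(skin, "->", adjust_yellows(skin))
```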

I really struggle with some colours depending on the time of day I am working on the images. I do have daylight balanced LED bulbs on my desk lamp, but colours still look slightly off compared to indirect diffuse daylight, which is a little cooler. Makes it hard to get prints right if you don't know where they will end up hanging.

A photographer friend of mine painted his entire office matt grey because anything else caused him to perceive colour shifts. Drove him nuts.

--
"A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away." Antoine de Saint-Exupery
 
I was thinking about linear vs. log scale
;)

Second, measurement of small responses that are too close to the read noise is not a good tactic. Neither is increasing the exposure time past 0.5 to 2 seconds. We are left with light intensity control (preferred) and stacking to measure small responses reliably. If the proposed methods do not take care of this detail, IMO there is no reason to trust the results.
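A toy illustration of the stacking half of that point (arbitrary numbers, not a real measurement protocol): a response of a couple of electrons under a 5 e- read noise is hopeless in one frame but becomes measurable once enough frames are averaged.

```python
# Toy simulation (arbitrary numbers): a small response near the read-noise floor
# becomes measurable by stacking, since averaging N frames shrinks the noise on
# the mean roughly as 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(1)
true_signal = 2.0     # electrons: a weak response, e.g. deep in a filter's stop band
read_noise  = 5.0     # electrons RMS per frame

def stacked_estimate(n_frames):
    frames = rng.poisson(true_signal, n_frames) + rng.normal(0.0, read_noise, n_frames)
    return frames.mean(), frames.std(ddof=1) / np.sqrt(n_frames)

for n in (4, 64, 1024):
    mean, err = stacked_estimate(n)
    print(f"{n:5d} frames: {mean:5.2f} +/- {err:4.2f} e-   (true value {true_signal})")
```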
It would be interesting to see some of the data you measured.
That's commercial work, sorry, can't publish. Maybe next year.
One thing that surprises me in the spectral response plots I have seen is the sharp rise of the red channel around 570 nm. The drop-off on the right side, I would think, comes from the IR filter.
~570 nm is usually about the maximum, roughly for the same reason the photopic curve has its maximum in that area ("green", 555 nm). It is rather essential to have good response there to allow mapping into distinctly separate tints and lightnesses.

IR filters are a bit of know-how, but the result is very close to using a BG38 (Canon) or BG40 (Nikon) 2 mm absorbing filter combined with a dichroic visible bandpass filter (say, as the coating on the dust shaker, while the dust shaker may also function as the first low-pass filter) with a transmission like this:

VisiMax Thin Films

Up to 3 filters can be used to control IR and UV. For Canon, see also "phaser layer Infrared-absorption glass"
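As a first-order sketch of how such a stack behaves (made-up curves roughly shaped like an absorbing glass and a visible-band dichroic, not actual BG38/BG40 or VisiMax data), the transmissions simply multiply, which is why the combination blocks IR far more deeply than either filter alone:

```python
# First-order sketch: ignoring inter-reflections, stacked filter transmissions
# multiply. The curves are rough stand-ins, not actual BG38/BG40 or VisiMax data.
import numpy as np

wl = np.arange(380, 1101)   # nm

# Hypothetical absorbing glass: high in the visible, sagging through red/NIR.
absorber = 1.0 / (1.0 + np.exp((wl - 700) / 40.0))
# Hypothetical dichroic visible bandpass: steep edges near 400 and 680 nm.
dichroic = 1.0 / ((1.0 + np.exp((400 - wl) / 5.0)) * (1.0 + np.exp((wl - 680) / 5.0)))

stack = absorber * dichroic
for x in (550, 680, 700, 750, 850):
    i = x - 380   # index into wl
    print(f"{x} nm: absorber {absorber[i]:.3f}  dichroic {dichroic[i]:.3f}  stack {stack[i]:.2e}")
```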

--
 
Should there be a 'lack' of hue resolution in this wavelength band, may it partially explain (in addition to the subjective adjustments made to their color profiles) why Canon cameras seem to show less variation in skin tonality depending on the lighting?
I am not sure that this is a fact (about Canon cameras), and I do not see it as a lack of hue resolution. It could be localized metamerism, but without knowing how this would be converted, it is hard to say more.

The sensor does not have to resolve the spectrum - it just has to fail to resolve it in the same way the human eye fails, if the goal is fidelity.
 
Thanks for sharing.

But do you think that, for example, the hue shift toward a 'slightly greenish brown' of Caucasian skin tones when viewed from an angle and in the shadow is really caused by cross-talk?
Cross-talk artifacts are a function of the angle between the light and plane of the sensor.

Human skin reflects strongly in the near IR (red and longer wavelengths). Red light increases cross-talk contamination.

You don't mention green hue problems with other subjects.

--

"The belief that ‘randomness’ is some kind of real property existing in Nature is a form of the mind projection fallacy which says, in effect, ‘I don’t know the detailed causes – therefore – Nature does not know them."
E.T Jaynes, Probability Theory: The Logic of Science
 
Whatever effect you are describing would be global, and we see it in specific parts of the image only.
 
Cross-talk artifacts are a function of the angle between the light and plane of the sensor.

Thanks for your reply.

But if it's linked to the angle between the light and the sensor, then it should depend on the position of the subject in the frame, not the orientation of the skin relative to the sensor plane.
 
But do you think that, for example, the hue shift toward a 'slightly greenish brown' of Caucasian skin tones when viewed from an angle and in the shadow is really caused by cross-talk?
My goal was to address your question in the parent post:
"I was wondering how the camera sensor, in particular the CFA design, could impact that."
The answer is yes - it is possible that sensor assembly design can affect perceived hue rendering.

Of all the possible causes of perceived hue rendering shifts in shadows, though, I doubt sensor assembly design is the most probable one.

-
"The belief that ‘randomness’ is some kind of real property existing in Nature is a form of the mind projection fallacy which says, in effect, ‘I don’t know the detailed causes – therefore – Nature does not know them."
E.T Jaynes, Probability Theory: The Logic of Science
 
Dr. Fairchild says, on the subject of colour perception, "I can take any wavelength and make it appear almost any color" - a 7-minute listen:

https://www.npr.org/sections/health...-shade-looks-both-yellow-and-gray-whats-color
 
