Role of sensor in color rendering?

Started 3 months ago | Discussions thread
J A C S
J A C S Forum Pro • Posts: 16,676
Re: done!

lélé wrote:

Erik Kaffehr wrote:

lélé wrote:

Erik Kaffehr wrote:

lélé wrote:

JimKasson wrote:

lélé wrote:

JimKasson wrote:

lélé wrote:

J A C S wrote:

lélé wrote:

What I dislike most in this illustration (and it looks like what I get in many, many of my shots) is the color variation in the area around her mouth, especially the small area below her lower lip.

Now I see it. I agree that it looks greenish, not necessarily because it is; probably because it is surrounded by pink and red - but it does look unnatural.

Yeah, might be the reason why it looks 'too green' to me!

If so, that's the simultaneous contrast issue that Iliah mentioned earlier.

Yes, got it.

By the way, there are imperfect standard observers for constant-color fields, but, AFAIK, there are no standard models for simultaneous contrast color effects, and it is certainly possible (and I think, likely) that the quantitative effects are different in different people. This would not be detected by any color normalcy testing that I know of.

Jim

I know that I'm very sensitive to color contrast. Take a car that was partially repainted: I might spot it. (Even if I have to admit that sometimes it is really, really well done and invisible.)
But I always thought it was more a question of temperament, a demanding nature and attention to detail...

In the case of the 'hue shift' across faces, this makes me feel uncomfortable, as if I were looking at sick people. Really unpleasant (but highly subjective and maybe cultural).

It's quite amazing, because for example when I did my first CCSG shots, the results were poor due to glare. This gave me oversaturated skin tones... but after hours looking at the same images, I could not see it anymore.
On the other hand, a slight variation across the face makes me uncomfortable. I also tried the other way around, using the Capture One Pro Color Palette or the DxO PhotoLab HSL tool to 'uniformize' hue a bit: it's even worse.

All that said, another problem I have is how the color rendering of skin tones seems to vary under different lighting. I have an old stock of Canon 300D (my first DSLR), Canon 350D and Canon EOS 40D photos of the same persons photographed under different lighting, including catastrophic mixed lighting (for example natural light + integrated flash for fill-in), and the skin tones are much more consistent from shot to shot. That's with Canon in-camera processing or its DxO PhotoLab/Lightroom/Capture One Pro siblings. So it remains to be seen what it would look like without all the subjective adjustments Canon makes...
I'll try to find or make a more 'neutral' profile and see...

I recall having seen an article by TheSuede on the Fred Miranda forums about CFA designs.
What he sort of said was that Canon has taken a route that works well for mixed light, while some other sensors were optimized for something like shooting in the studio.
I recall that both TheSuede and Iliah Borg pretty much considered the CFA design on the Sony A900 a pretty decent compromise between color rendition and SNR.
I had some discussions with Tim Parkin regarding the color rendition of the Phase One P45+ back I have. Tim Parkin and his friend Joe Cornish had issues with yellow contamination of chlorophyll greens. I have seen that, too, but I hoped that DCP profiles would be able to handle it.
In the end, we found out that Tim and I had different interpretations of color, although I must say I would lean toward Tim being right.
One of the things I considered was that the IR filter (or hot mirror) design may have played a role.
Interestingly, a couple of years ago Phase One introduced a new back, called 'Trichromatic'. They produced some explanations that ignored pretty much all color science ever developed. But, reading between the lines, it may be concluded that there were modifications to the 'hot filter'. From what I have seen of real-world samples, the new Trichromatic back did not have the yellow contamination of vegetable greens I have seen on my P45+ back and on the IQ3 100MP I have seen tested.

Lime green seems to have extreme characteristics in the near IR (infrared) region. That was one of the reasons I included lime green in my 'tricolore' tests. But I found that all three sensors I tried (Phase One P45+, Sony Alpha 900 and Sony A7R II) did a decent job on that lime.
In the end, I don't pretend to know...

Thanks for sharing.

I read a few posts from theSuede on Fred Miranda's forum, as well as a few articles on the web.

My 'layman takeaways' + some questions:

  • Two different light spectra can appear as the same color to an observer (human, camera...) (I did know). In such a case, these two different spectra are called 'metamers' (I did not know) and the perceived matching 'metamerism' (I did know).
  • If the perceived color matching of two patches falls apart under a different illuminant, it's called 'illuminant metameric failure' (I did know this can happen - and learnt it the hard way! - but I did not know what it was called).
  • One observer may see the same color, but not another observer. It's called 'observer metameric failure'. (I did know that because of my father, who is colorblind: sometimes he sees two different colors where I see the same, sometimes the opposite. But I never thought about the implications for cameras.)
    Then I fall into the complete unknown...
  • theSuede talks about 'hue resolution' and 'metameric failure' (I guess he means: 'observer metameric failure').
    My layman understanding is that the two are closely linked to the sensor's 'spectral response' (itself linked to CFA design, silicon and other things I don't know about...). Correct?
    Also, that 'hue resolution' and 'metameric failure' may be somewhat related: sometimes the sensor may be unable to distinguish two different spectra because of an 'insufficient' hue resolution. Really not sure about this one, is it correct?
    And if it fails while a human observer would be able to tell the difference, that's a 'metameric failure' and may cause problems. Correct?
    Which leads me to another question: what happens when the camera can pick up a difference and the human observer cannot?

Most of this is basic linear algebra. You have a linear operator (a projection) which maps the real world to the captured colors (yes, I will call them colors) and your eyes do the same but with a somewhat different operator. Think of it as projections under different angles. Sometimes two different spectral densities would be projected (viewed) the same by the sensor but not by you. Those are colors you can distinguish but the sensor cannot. On the other hand, this means that there must be spectra which the sensor will distinguish but you cannot. What happens then depends on how the signal is processed - if by a color matrix only, this means the photo will show real color variations in some cases where you cannot see any.

A slightly more complicated version of that is to put a threshold/sensitivity of what differences you can see or the sensor can distinguish and then you can see, etc.
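The projection picture above can be made concrete with a toy numerical sketch. All response curves here are random stand-ins, not real eye or camera data: the point is only the linear algebra. We build a spectral difference lying in the sensor's null space (a "metameric black" for the sensor), so the sensor records identical triplets for two spectra that a differently shaped "eye" operator still tells apart.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 31  # wavelength samples, e.g. 400-700 nm in 10 nm steps

# Hypothetical 3xN response matrices: rows = channel sensitivities.
# (Illustrative random curves, not real CIE or camera data.)
eye = rng.random((3, N))
sensor = rng.random((3, N))

# A "metameric black" for the sensor: a spectral difference d with
# sensor @ d == 0, taken from the null space of the sensor matrix.
_, _, Vt = np.linalg.svd(sensor)
d = Vt[-1]  # unit vector orthogonal to all three sensor rows

s1 = rng.random(N) + 1.0  # some baseline spectrum (positive)
s2 = s1 + 0.5 * d         # differs by a sensor metameric black

# The sensor projects both spectra to the same "color"...
print(np.allclose(sensor @ s1, sensor @ s2))  # True
# ...but the eye, a different projection, generally does not.
print(np.allclose(eye @ s1, eye @ s2))        # False
```

Swapping the roles of `eye` and `sensor` gives the opposite case: spectra the sensor separates but the eye cannot, which is exactly the situation where a matrix-only pipeline shows color variation you cannot see in the scene.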

  • theSuede also says, speaking about the 5D Mark II (with an apparently new CFA design), that the 'hue resolution' in the orange-green region would be low, which would tend to minimize color differences in skin tones. (He does not mention it, but I guess this is in addition to the subjective adjustments Canon makes in its color profiles.) Could this be correct?
    Could it explain (in addition to the subjective adjustments made in their color profiles) why some Canon bodies give 'more consistent' skin tones?
    ('More consistent' in the sense that there are fewer color variations depending on the lighting and on people's complexions.)

Thanks!

The post I have seen here is highly speculative and I would ignore it, see also below.

Hi,

With regard to hue resolution, it is conceivable that a sensor would be more or less sensitive to a change of wavelength, depending on where in the spectrum that change occurs:

Source

In this case, the sensor would be pretty blind to differences in color between 640 and 750 nm, as it would only have one signal.

So does your eye. Such an analysis must be done in comparison to human vision. The problem I see above is that the red curve is too narrow and misses a lot of the 500-550 nm band or so; and thus it does not seem possible to compensate for it with the color matrix.

On the other hand, a change at 570 nm would have a huge effect.

Your eyes react similarly.
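That intuition (a flat response plateau is blind to wavelength shifts; a steep crossover is very sensitive to them) can be sketched numerically. The Gaussian channel curves below are illustrative assumptions, not measured data; only the qualitative shape matters.

```python
import numpy as np

# Assumed peak wavelengths (nm) and width for toy R, G, B channels.
PEAKS = np.array([600.0, 540.0, 460.0])
WIDTH = 40.0

def rgb(nm):
    """Channel responses to a monochromatic line at `nm` (toy Gaussians)."""
    return np.exp(-((nm - PEAKS) / WIDTH) ** 2)

def hue_sensitivity(nm, step=1.0):
    """How far the recorded RGB triplet moves for a 1 nm shift at `nm`."""
    return np.linalg.norm(rgb(nm + step) - rgb(nm)) / step

# Near the red/green crossover the triplet changes quickly with wavelength,
# so small hue differences are resolved...
print(hue_sensitivity(570.0))
# ...while deep in the red, past every peak, the triplet barely moves:
# two different deep-red wavelengths map to nearly the same "color".
print(hue_sensitivity(700.0))
```

With these toy curves the sensitivity at 570 nm comes out two orders of magnitude higher than at 700 nm, which is the 'one signal only' blindness described above.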

Many thanks for your insight and the links.

I found this about the Canon EOS 5D Mark II:

source: http://www.astrosurf.com/buil/50d/test.htm

If I get it correctly, hue resolution would be quite 'low' between ~530 and 550 nm, right?

It is hard to answer questions like this before we compute a "good" color matrix. The small red bump would be compensated to some extent by the green one. We have to see what is left, and then understand how sensitive we are to the remaining error, etc. Something like this was done in the threads mentioned above.
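As a minimal sketch of what "computing a good color matrix" means (again with random stand-in response curves, not real camera or observer data): fit a 3x3 matrix mapping camera triplets to eye triplets over a set of training spectra by least squares, then look at the residual. That residual is precisely the color error no matrix can compensate.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 31  # wavelength samples

# Hypothetical 3xN response matrices (random stand-ins for real curves).
eye = rng.random((3, N))
sensor = rng.random((3, N))

# Training spectra under one illuminant: columns are spectra.
spectra = rng.random((N, 200))

cam = sensor @ spectra  # what the camera records (3 x 200)
ref = eye @ spectra     # what the "eye" records (3 x 200)

# Least-squares 3x3 matrix M minimizing ||M @ cam - ref||.
M, *_ = np.linalg.lstsq(cam.T, ref.T, rcond=None)
M = M.T

# Relative error left over after the best matrix: nonzero whenever the
# eye's response curves are not linear combinations of the sensor's.
residual = ref - M @ cam
rel_err = np.linalg.norm(residual) / np.linalg.norm(ref)
print(M.shape, rel_err)
```

Profile makers effectively do a weighted version of this over spectra they care about (skin, foliage, sky), which is why a matrix can look great on one class of subjects and leave visible errors, like the yellow-contaminated greens discussed earlier, on another.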
