Camera color accuracy outside of the training set


Munsell 1994 matte patch set, Sony a7RIII.

--
 
After this process, you are left with device metamerism errors, which are irreducible and ultimately uncorrectable. You can attempt to adjust some metamers to match exactly, but you lose other metamers in the process. That's a judgement call: the color errors are inevitable, and you have to come up with a satisfactory method of distributing them.
I believe that these are the errors which the 2.5D HueSatMap table of Adobe's DCP profiles attempts to correct.

As you point out, doing this is something that requires a lot of tradeoffs and judgement calls - and as I understand it, it's when attempting to generate a HueSatMap that the risk of overfitting greatly increases.

I agree that I don't think overfitting would be a concern for a basic matrix-only profile.
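For what it's worth, a matrix-only profile is simple enough to sketch: it is essentially a least-squares fit of a 3x3 matrix from camera raw RGB to reference XYZ over the training patches. This toy example uses synthetic, made-up patch data purely for illustration; in practice the pairs come from shooting a chart and measuring it with a spectrophotometer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: camera raw RGB and reference XYZ for N patches.
# Here the "reference" XYZ is generated from a known matrix so the fit can be
# checked; real data would be noisy measurements of a physical chart.
N = 24
cam_rgb = rng.uniform(0.05, 1.0, size=(N, 3))
true_M = np.array([[0.6, 0.3, 0.1],
                   [0.2, 0.7, 0.1],
                   [0.0, 0.1, 0.9]])
xyz = cam_rgb @ true_M.T                 # synthetic reference XYZ

# Least-squares fit of a 3x3 matrix M such that xyz ~= cam_rgb @ M.T
M, *_ = np.linalg.lstsq(cam_rgb, xyz, rcond=None)
M = M.T

print(np.round(M, 3))
```

With noiseless synthetic data the fit recovers the generating matrix exactly; with real patch data the residuals are exactly the irreducible errors discussed above.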
 
Also, some further thought - the camera's response to changes in amplitude is linear, but the camera's response to a change in frequency/wavelength is a function of the shape of the SSF. There could be some cases where a pure matrix profile might get a hue wrong, in a way that is not subject to metamerism. For example - there aren't many spectral possibilities for a highly saturated yellow, but depending on the spectral sensitivity of the camera, a pure matrix profile might shift it towards green or red.

It's no surprise that the most common use case of HueSatMap is what Adobe calls "2.5D" - where the value dimension is 1, i.e. value is not taken into consideration in the LUT - which is consistent with the assumption that a camera's amplitude response is linear.
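A minimal sketch of what such a 2.5D lookup could look like, assuming a toy table layout of (hue shift in degrees, saturation scale, value scale) per entry, loosely modeled on the HueSatMap idea but not Adobe's actual format. Note that value never enters the lookup, only the output scaling:

```python
import numpy as np

# Toy 2.5D table: HUE_DIV x SAT_DIV entries of (hue shift in degrees,
# saturation scale, value scale). Value is NOT an input dimension - the
# "2.5D" case. Names and layout here are illustrative, not Adobe's spec.
HUE_DIV, SAT_DIV = 6, 4
table = np.zeros((HUE_DIV, SAT_DIV, 3))
table[..., 1:] = 1.0              # identity scales for saturation and value
table[1, :, 0] = 5.0              # push hues near 60 degrees (yellow) by +5

def apply_huesatmap(h, s, v):
    """Bilinear lookup in (hue, sat); hue wraps at 360 degrees."""
    hf = (h / 360.0) * HUE_DIV
    sf = s * (SAT_DIV - 1)
    h0 = int(hf) % HUE_DIV
    s0 = min(int(sf), SAT_DIV - 2)
    dh, ds = hf - int(hf), sf - s0
    h1 = (h0 + 1) % HUE_DIV       # hue wraps; saturation clamps
    e = ((1 - dh) * (1 - ds) * table[h0, s0] + dh * (1 - ds) * table[h1, s0]
         + (1 - dh) * ds * table[h0, s0 + 1] + dh * ds * table[h1, s0 + 1])
    return (h + e[0]) % 360.0, s * e[1], v * e[2]

print(apply_huesatmap(60.0, 0.5, 1.0))   # a yellow gets nudged in hue
```

The resolution of `table` in the hue and saturation dimensions is exactly the knob the overfitting discussion below is about.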

Overfitting problems would likely occur in a scenario where:
  • HueSatMap has too high of a resolution in the hue and saturation dimensions, and
  • The camera is subject to significant metamerism for two training patches that are close to each other in hue and saturation but very different spectrally. This is probably more likely in low-saturation scenarios, since as saturation increases, the spectral possibilities become more and more limited. (In the extreme, anything on the spectral locus - the horseshoe on a chromaticity diagram - has exactly one spectral possibility, by definition.)
--
Context is key. If I have quoted someone else's post when replying, please do not reply to something I say without reading text that I have quoted, and understanding the reason the quote function exists.
 
Also, some further thought - the camera's response to changes in amplitude is linear, but the camera's response to a change in frequency/wavelength is a function of the shape of the SSF.
What is SSF?
There could be some cases where a pure matrix profile might get a hue wrong, in a way that is not subject to metamerism.
If there is metamerism, it should get the hue (what is hue in this context?) wrong often - that is what metamerism does. I do not see your point.
For example - there aren't many spectral possibilities for a highly saturated yellow, but depending on the spectral sensitivity of the camera, a pure matrix profile might shift it towards green or red.
It should, and a non-matrix profile would still be wrong very often, just in a different way.
 
Also, some further thought - the camera's response to changes in amplitude is linear, but the camera's response to a change in frequency/wavelength is a function of the shape of the SSF.
What is SSF?
Spectral sensitivity function - the curve describing how the "CFA" (or, in real life, everything from the lens/filters down to the IR/UV cut filters, AA filters, microlenses, etc., etc.) lets the incoming spectrum in to be absorbed.
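To make that definition concrete, a toy sketch: with some hypothetical Gaussian SSFs standing in for the whole lens-to-silicon chain, the camera's response to any spectrum is just the SSF-weighted integral of that spectrum, one number per channel.

```python
import numpy as np

wl = np.arange(400, 701, 10)          # wavelengths, nm

def gaussian(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Hypothetical SSFs for the three channels - stand-ins for the combined
# effect of lens, IR/UV cut filter, CFA dyes, and silicon response.
ssf = np.stack([gaussian(600, 40),    # "red"
                gaussian(540, 40),    # "green"
                gaussian(460, 40)])   # "blue"

# Camera response = integral over wavelength of (spectrum * SSF),
# approximated as a sum over the 10 nm sampling grid.
def camera_rgb(spectrum):
    return ssf @ spectrum * 10

flat = np.ones_like(wl, dtype=float)  # equal-energy spectrum
print(camera_rgb(flat))
```

Amplitude scaling of the spectrum scales the response linearly, while shifting energy in wavelength changes the response through the shape of the SSFs - which is the distinction being made above.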
 
Also, some further thought - the camera's response to changes in amplitude is linear, but the camera's response to a change in frequency/wavelength is a function of the shape of the SSF.
What is SSF?
Spectral Sensitivity Function
There could be some cases where a pure matrix profile might get a hue wrong, in a way that is not subject to metamerism.
If there is metamerism, it should get the hue (what is hue in this context?) wrong often - that is what metamerism does. I do not see your point.
Metamerism is where either two stimuli that are perceived differently by a human observer produce identical responses in the camera, or the camera distinguishes two stimuli that a human observer would perceive as identical.

It's possible that a pure matrix profile may distort some hues/saturations in such a way that the result is still a 1:1 input/output mapping.
For example - there aren't many spectral possibilities for a highly saturated yellow, but depending on the spectral sensitivity of the camera, a pure matrix profile might shift it towards green or red.
It should, and a non-matrix profile would still be wrong very often, just in a different way.
Only if the errors have discontinuities that lead to them not being a 1:1 mapping.

That won't happen for something like a pure 580nm yellow that gets misrepresented by a matrix transformation, unless there's another spectral combination that leads to an identical response from the camera to what it "sees" 580nm yellow as.

It's more likely you'll have metamerism failures for a less saturated color, since there are now multiple ways to represent it spectrally - for example, obtaining "white" from individual red, green, and blue LEDs vs. atmospheric-filtered sunlight.
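That intuition can be made concrete: with only three channels, the camera's sensitivity matrix has a large null space, and adding any null-space vector to a spectrum produces a camera metamer. A toy sketch with hypothetical Gaussian SSFs (ignoring the non-negativity a physically realizable spectrum would also need):

```python
import numpy as np

wl = np.arange(400, 701, 5.0)
def g(c, w): return np.exp(-0.5 * ((wl - c) / w) ** 2)

# Hypothetical camera SSFs: 3 channels sampling a 61-point spectrum.
S = np.stack([g(600, 40), g(540, 40), g(460, 40)])

spec1 = g(550, 80)                      # a broad, desaturated spectrum

# Any vector in the null space of S changes the spectrum without changing
# the camera's response - i.e. it constructs a camera metamer.
_, _, Vt = np.linalg.svd(S)
null_vec = Vt[3]                        # one null-space direction
spec2 = spec1 + 0.5 * null_vec          # may go negative; illustration only

r1, r2 = S @ spec1, S @ spec2
print(np.max(np.abs(r1 - r2)))          # ~0: same camera response
print(np.max(np.abs(spec1 - spec2)))    # clearly different spectra
```

The narrower the set of physically plausible spectra for a color (as with a saturated yellow), the harder it is to find a second spectrum in that set that also lands in the null-space-shifted family, which is why the failures concentrate at low saturation.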
 
There could be some cases where a pure matrix profile might get a hue wrong, in a way that is not subject to metamerism.
If there is metamerism, it should get the hue (what is hue in this context?) wrong often - that is what metamerism does. I do not see your point.
Metamerism is where either two stimuli that are perceived differently by a human observer produce identical responses in the camera, or the camera distinguishes two stimuli that a human observer would perceive as identical.

It's possible that a pure matrix profile may distort some hues/saturations in such a way that the result is still a 1:1 input/output mapping.
That is why I said "often" above. Even a stopped clock is right twice a day.
For example - there aren't many spectral possibilities for a highly saturated yellow, but depending on the spectral sensitivity of the camera, a pure matrix profile might shift it towards green or red.
It should, and a non-matrix profile would still be wrong very often, just in a different way.
Only if the errors have discontinuities that lead to them not being a 1:1 mapping.
Discontinuities? The mapping is a bit arbitrary since there is no exact mapping anyway, and we usually do some kind of optimization. It may happen to be right here and there, which was the reason I said "often" for a second time.
That won't happen for something like a pure 580nm yellow that gets misrepresented by a matrix transformation, unless there's another spectral combination that leads to an identical response from the camera to what it "sees" 580nm yellow as.
And, most often, there is, so...
It's more likely you'll have metamerism failures for a less saturated color, since there are now multiple ways to represent it spectrally - for example, obtaining "white" from individual red, green, and blue LEDs vs. atmospheric-filtered sunlight.
BTW, metamerism is basically why we are discussing this in the first place...
 
Only if the errors have discontinuities that lead to them not being a 1:1 mapping.
Discontinuities? The mapping is a bit arbitrary since there is no exact mapping anyway, and we usually do some kind of optimization. It may happen to be right here and there, which was the reason I said "often" for a second time.
Situations where metamerism would lead to a LUT that is non-monotonic/discontinuous and thus not invertible. Any non-monotonicities that lead to it not being invertible are a sure sign you've found an uncorrectable metameric failure.
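That monotonicity test is easy to sketch. Assuming a 1-D hue correction sampled at ascending input hues (a simplification of the full 2-D hue/sat case), invertibility reduces to checking that the output hues never fold back on themselves mod 360:

```python
import numpy as np

def hue_map_invertible(hue_in_deg, hue_out_deg):
    """True if a 1-D hue correction is invertible: the output hues stay in
    the same cyclic order as the input hues, i.e. the mapping is monotonic
    mod 360. A fold-over means two input hues map to one output hue - a
    sure sign of an uncorrectable metameric failure."""
    order = np.argsort(hue_in_deg)
    h_out = np.unwrap(np.deg2rad(np.asarray(hue_out_deg)[order]))
    return bool((np.diff(h_out) > 0).all())

h = np.arange(0, 360, 15.0)

# A gentle +/-8 degree correction: monotonic, hence invertible.
print(hue_map_invertible(h, h + 8 * np.sin(np.deg2rad(h))))

# An aggressive correction that folds neighboring hues together: not invertible.
print(hue_map_invertible(h, h + 40 * np.sin(np.deg2rad(2 * h))))
```

A profile-building tool could run a check like this on a candidate HueSatMap as a cheap overfitting alarm.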
That won't happen for something like a pure 580nm yellow that gets misrepresented by a matrix transformation, unless there's another spectral combination that leads to an identical response from the camera to what it "sees" 580nm yellow as.
And, most often, there is, so...
Oh - do you have an example for a case such as 580nm yellow?
It's more likely you'll have metamerism failures for a less saturated color, since there are now multiple ways to represent it spectrally - for example, obtaining "white" from individual red, green, and blue LEDs vs. atmospheric-filtered sunlight.
BTW, metamerism is basically why we are discussing this in the first place...
Yes. My point is: a HueSatMap table that does not use value as an input may correct certain issues for a reasonably well-behaved camera, but care needs to be taken not to overfit. For example:

1) Using lots of low-saturation training patches together with a high LUT resolution in the low-saturation region is going to be a bad idea.

2) Corrections for high-saturation errors are far less likely to be erroneous, since there are far fewer possible spectra for a given hue - again, with the extreme case being single-wavelength monochromatic emitters.

Edit: It would be interesting to see Jim revisit https://blog.kasson.com/nikon-z6-7/camera-differences-in-color-profile-making/ in the context of this latest work, since he generated repro profiles with 2.5D LUTs.
 

Fit is nowhere near as good as for the less chromatic sets I've tried.

I'm going to continue to add patch sets for a while, testing and training with the same set. Eventually, I'll move on to testing with different sets than the training set.

It's always been true that the colors above, which are restricted to the sRGB gamut, are not faithful representations of the actual patch colors. It's especially important to observe that caveat with this patch set.

--
https://blog.kasson.com
 
From https://blog.kasson.com/the-last-word/optimal-cfa-spectral-response/ (and some of your other post with spectral locus graphs) - especially for the 10/20nm Gaussian

Those results look a bit odd - is the spectral locus getting distorted that severely, or are some inputs on the spectral locus failing to convert to anything meaningful? (Some of that reminds me of the discussion of extreme blues converting to a negative Y value after the matrix is applied - https://rawtherapee.com/mirror/dcamprof/dcamprof.html#extreme_colors )

It would be interesting to see another line on the plot showing which inputs on the spectral locus actually converted to something valid in the output.
 
From https://blog.kasson.com/the-last-word/optimal-cfa-spectral-response/ (and some of your other post with spectral locus graphs) - especially for the 10/20nm Gaussian

Those results look a bit odd - is the spectral locus getting distorted that severely, or are some inputs on the spectral locus failing to convert to anything meaningful? (Some of that reminds me of the discussion of extreme blues converting to a negative Y value after the matrix is applied - https://rawtherapee.com/mirror/dcamprof/dcamprof.html#extreme_colors )

It would be interesting to see another line on the plot showing which inputs on the spectral locus actually converted to something valid in the output.
After using that panel for a while, I'm becoming less and less convinced that it is at all useful, especially now that I've got some high-chroma patch sets. I may drop it.

I can't figure out what the difference is between distorting the spectral locus that severely and the spectral locus failing to convert to anything meaningful. And what's valid and what's not? With real cameras, the 3x3 matrix does a lousy job for spectral signals, especially in the shorter wavelengths. I haven't even had much luck with a lot of spectral signals in the training set.
 
After using that panel for a while, I'm becoming less and less convinced that it is at all useful, especially now that I've got some high-chroma patch sets. I may drop it.

I can't figure out what the difference is between distorting the spectral locus that severely
Distorting severely would be, for example, green converting to yellow - but at least converting to SOMETHING that doesn't cause a math error at some point in the pipeline (such as a negative Y value, as one possibility).
and the spectral locus failing to convert to anything meaningful is.
I'm assuming that anything that the matrix converted to a negative Y value isn't being shown on the output curve?
And what's valid and what's not? With real cameras, the 3x3 matrix does a lousy job for spectral signals, especially in the shorter wavelengths. I haven't even had much luck with a lot of spectral signals in the training set.
That's consistent with Anders Torger's discussion of extreme blue in the link above - many 3x3 matrices convert it to a negative Y value.
 
I can't figure out what the difference is between distorting the spectral locus that severely
Distorting severely would be, for example, green converting to yellow - but at least converting to SOMETHING that doesn't cause a math error at some point in the pipeline (such as a negative Y value, as one possibility).
Would you be interested in looking at what happens with a spectral training set? There is an issue with a spectral training set in that the intensity at the short and long wavelengths has to be raised if we are to keep the deltaE from dropping to nearly nothing because the L* is so low. But we are not likely to see many such sources IRL.
and the spectral locus failing to convert to anything meaningful is.
I'm assuming that anything that the matrix converted to a negative Y value isn't being shown on the output curve?
I am pruning out the spectral locus when Y falls below an arbitrary value. That seems to be sufficient to prevent negative Y values.
And what's valid and what's not? With real cameras, the 3x3 matrix does a lousy job for spectral signals, especially in the shorter wavelengths. I haven't even had much luck with a lot of spectral signals in the training set.
That's consistent with Anders Torger's discussion of extreme blue in the link above - many 3x3 matrices convert it to a negative Y value.
 
Would you be interested in looking at what happens with a spectral training set? There is an issue with a spectral training set in that the intensity at the short and long wavelengths has to be raised if we are to keep the deltaE from dropping to nearly nothing because the L* is so low. But we are not likely to see many such sources IRL.
Interesting... As for "many such sources", I guess it depends on whether the red and blue channels of "typical" RGB LEDs fall in the problem wavelengths. Obviously you're going to have some serious metamerism issues in your scene, but it would be interesting to know where a camera starts having serious issues with the scene lighting because it causes problems later in the math pipeline, and whether CFA tuning or tweaking the training set could potentially mitigate that. I need to look up where typical "consumer" blue LEDs lie on the spectral locus (as opposed to the extreme case of something like a royal blue/dental Luxeon...)

If the red and blue LEDs typically found in RGB LED fixtures do lie in the "problem" wavelengths, then I've seen such sources frequently. A local concert venue owner LOVED RGB LED lighting, and his favorite color was purple... Some of this (and your work prompting me to reread some of Anders' documentation, including his comments on loss of tonality) might explain why I've often had some really strange results trying to handle those scenes, and suggests how to revisit them.
I am pruning out the spectral locus when Y falls below an arbitrary value. That seems to be sufficient to prevent negative Y values.
OK, so not just negative Y values, but some of the smaller ones.
 
I am pruning out the spectral locus when Y falls below an arbitrary value. That seems to be sufficient to prevent negative Y values.
OK, so not just negative Y values, but some of the smaller ones.
You have to question the utility of a u'v' diagram when the L* is so low that the CIEL*u*v* values are all about the same.
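A quick sketch of why: in CIELUV, u* = 13·L*·(u′ − u′n) and v* = 13·L*·(v′ − v′n), so as L* goes to zero the chromatic coordinates collapse toward zero regardless of chromaticity. (A D50 white point is assumed here for illustration.)

```python
import numpy as np

def luv_from_xyz(X, Y, Z, Xn=0.9642, Yn=1.0, Zn=0.8249):
    """CIE 1976 L*u*v* from XYZ; D50 white point assumed (illustrative choice)."""
    d = X + 15 * Y + 3 * Z
    dn = Xn + 15 * Yn + 3 * Zn
    up, vp = 4 * X / d, 9 * Y / d
    upn, vpn = 4 * Xn / dn, 9 * Yn / dn
    t = Y / Yn
    L = 116 * t ** (1 / 3) - 16 if t > (6 / 29) ** 3 else (29 / 3) ** 3 * t
    return L, 13 * L * (up - upn), 13 * L * (vp - vpn)

# Same chromaticity (same u'v'), two luminances: at low Y the u*, v*
# coordinates collapse toward zero, so all very dark colors crowd together.
bright = luv_from_xyz(0.3, 0.5, 0.1)
dark = luv_from_xyz(0.003, 0.005, 0.001)
print(bright)
print(dark)
```

This is why deltaE between dark spectral-locus samples drops to nearly nothing unless their intensity is raised, as noted above.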
 
If red and blue LEDs typically found in RGB LED fixtures doe lie in the "problem" wavelengths, then - I've seen such sources frequently. A local concert venue owner LOVED RGB LED lighting and his favorite color was purple... Some of this (and your work prompting me to reread some of Anders' documentation including his comments on loss of tonality) might explain why I've often had some really strange results trying to handle those scenes and how to revisit them.
With the present displays and printers, I'm not sure it's worth much time worrying about the accurate reproduction of many spectral colors (especially the longer-wavelength ones), since we can't come close to displaying them or printing them. Of course, it would be nice to get the hue angles close to right, but when we do that, we run into issues with the modeling spaces, Lab and Luv.



Note the curved IT8 lines
 
Distorting severely would be, for example, green is converting to yellow, but at least converting to SOMETHING that doesn't cause a math error at some point in the pipeline (such as a negative Y value as one possibility)
We know the matrix is just a least-bad compromise, but if you look at the SSFs, you realize that situations that generate negative XYZ coordinates are few and far between - call them out of gamut. Combine the forward matrices shared in this thread with the XYZ D50 -> sRGB one and you realize that FMs are the least of our linear problems.
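A sketch of that combination: multiply a forward matrix (a made-up, hypothetical one here) by the standard XYZ-to-linear-sRGB matrix and feed it a saturated camera response; it's the sRGB stage, not the FM, that produces the negative component. (Chromatic adaptation between D50 and the sRGB D65 white is skipped for brevity, so treat this purely as an illustration.)

```python
import numpy as np

# A hypothetical camera forward matrix (camera RGB -> XYZ); stand-in only.
FM = np.array([[0.65, 0.20, 0.12],
               [0.28, 0.66, 0.06],
               [0.02, 0.10, 0.71]])

# Standard XYZ (D65) -> linear sRGB matrix (IEC 61966-2-1); the D50->D65
# adaptation a real pipeline would apply is omitted here.
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

combined = XYZ_TO_SRGB @ FM            # camera RGB -> linear sRGB in one step

# A very saturated camera "blue" lands outside the sRGB gamut: the red
# component of the result goes negative even though its XYZ is all positive.
srgb = combined @ np.array([0.02, 0.05, 0.95])
print(srgb)
```

The negative component appears after the sRGB matrix, which is the point: the display-space gamut limit dominates the linear errors of the FM itself.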

Here are some examples of the effect of both


Jack
 
