Camera color accuracy outside of the training set

JimKasson

I've recently done some work that may interest the denizens of this forum:

https://blog.kasson.com/the-last-word/camera-color-accuracy-outside-of-the-training-set/

If the methodology is confusing to you, go back a few posts and you'll see what I'm doing and how I came to it.

I'd be happy to discuss it here, and I would appreciate pointers to sets of standardized photographically-significant reflectance spectra.
 
I think an executive summary would be helpful so people don't have to slog through the blog.
  • If a camera is accurate with a CC24 patch set as the test stimuli, it will probably be relatively accurate throughout the gamut of that patch set.
  • There will be errors that are quite a bit worse than the 24-patch errors. Training the optimizer on 24 patches is not enough to produce consistent accuracy throughout the 576-patch set.
  • The relative spread of the errors doesn't change much as the CFA filters change. A filter set that does poorly on the CC24 patches shows roughly the same percentage variation in its errors on the larger set.
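A minimal sketch of the kind of fit involved, assuming ordinary least squares and invented patch values. (The actual pipeline described in the blog optimizes a perceptual DeltaE metric with a nonlinear optimizer; this is only the linear-algebra skeleton.)

```python
# Sketch: fit a "compromise matrix" M so that camera_rgb @ M ~ XYZ, by
# ordinary least squares over a small training patch set. All numbers
# below are invented for illustration, not measured data.

def transpose(A):
    return [list(row) for row in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def inv3(m):
    # Inverse of a 3x3 matrix via the adjugate.
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    return [[(e * i - f * h) / det, (c * h - b * i) / det, (b * f - c * e) / det],
            [(f * g - d * i) / det, (a * i - c * g) / det, (c * d - a * f) / det],
            [(d * h - e * g) / det, (b * g - a * h) / det, (a * e - b * d) / det]]

def fit_compromise_matrix(camera_rgb, xyz):
    # Normal equations: M = (A^T A)^-1 A^T B, one patch per row.
    At = transpose(camera_rgb)
    return matmul(inv3(matmul(At, camera_rgb)), matmul(At, xyz))

# Invented ground-truth matrix and six training "patches".
TRUE_M = [[0.6, 0.3, 0.1],
          [0.2, 0.7, 0.1],
          [0.0, 0.1, 0.9]]
train_rgb = [[1, 0, 0], [0, 1, 0], [0, 0, 1],
             [1, 1, 1], [0.5, 0.2, 0.1], [0.3, 0.6, 0.2]]
train_xyz = matmul(train_rgb, TRUE_M)

M = fit_compromise_matrix(train_rgb, train_xyz)
```

With noiseless, exactly linear synthetic data like this, the fit recovers TRUE_M; with real spectra, the residual errors on patches outside the training set are what the blog post measures.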
 
I never liked calibration with those patch sets. One reason is that it seems (and some authors have demonstrated) that the dimensionality of the spectra is pretty low. Now, what you are doing does not change that fact (caveat #1). If you calibrate with the derived patches, you are essentially changing the weights in the optimization, and that can be done on the original set as well.

BTW, how do you create the new colors physically using the available color chart? One can do multiple exposures in principle...
 

BTW, how do you create the new colors physically using the available color chart?
I don't. This was all simulation. I'm just mixing spectra.
One can do multiple exposures in principle...
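One way to read "mixing spectra" (my assumption, not necessarily the exact recipe in the blog) is as convex, wavelength-by-wavelength combinations of measured reflectances. A nice property of convex mixing is that the result is itself a physically realizable reflectance, staying in [0, 1].

```python
# Build a "virtual" patch by convex mixing of two reflectance spectra,
# sample by sample. The spectra below are invented placeholders, not
# measured CC24 data.

def mix_spectra(s1, s2, w):
    # w = 1.0 returns s1, w = 0.0 returns s2.
    return [w * a + (1.0 - w) * b for a, b in zip(s1, s2)]

patch_a = [0.05, 0.10, 0.40, 0.70, 0.60]  # placeholder reflectances
patch_b = [0.80, 0.60, 0.30, 0.10, 0.05]

virtual = mix_spectra(patch_a, patch_b, 0.5)
```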
 
Summary of previous work in this series:
  • Too much CFA spectral overlap is bad for color accuracy.
  • Too little overlap is also bad for color accuracy.
  • For optimal SMI performance, the red and green center wavelengths should be fairly close.
  • For optimal SMI performance, the red and green overlap should be larger than we see in most cameras.
  • You can achieve remarkably high SMIs, much higher than we see with consumer cameras, with simple Gaussian spectra.
  • The CFA spectra for high SMI aren't that far from the CFA spectra for low chroma noise.
  • My guess is that the reason we don't have better SMIs in consumer cameras is the availability of chemical compounds, not noise considerations.
  • Microcolor is a real thing: for some base colors, cameras and raw developers can show larger or smaller changes in captured color differences than the eye observes.
  • The effect is small, even for the color patches most affected.
  • Average microcolor changes across all colors are tiny.
  • I see no evidence that narrower CFA spectra mean more microcolor.
  • There is wide variation in microcolor as the base patch values change.
  • The two CFA spectra sets that produce the most accurate color also produce the microcolor that best agrees with what the eye sees.
  • The optimum Gaussian spectra produce slightly more microcolor error than an ideal spectral set.
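As a concrete, purely illustrative parameterization of those Gaussian CFA spectra (centers and sigmas below are my own picks, not the optimizer's output):

```python
import math

# Generate Gaussian CFA filter transmission curves on a wavelength grid.
# Three center wavelengths with a shared sigma is the 3-parameter
# fixed-sigma case; letting each sigma vary as well gives 6 parameters.

def gaussian_filter(center_nm, sigma_nm, wavelengths_nm):
    return [math.exp(-0.5 * ((wl - center_nm) / sigma_nm) ** 2)
            for wl in wavelengths_nm]

wavelengths = list(range(400, 701, 5))  # 400-700 nm in 5 nm steps
cfa = {
    "r": gaussian_filter(600, 30, wavelengths),  # illustrative values
    "g": gaussian_filter(540, 30, wavelengths),
    "b": gaussian_filter(460, 30, wavelengths),
}
```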
 
I'd be happy to discuss it here, and I would appreciate pointers to sets of standardized photographically-significant reflectance spectra.
Hi Jim, very interesting. Your findings on generalizing from a simple CC24 training set confirm what Anders Torger also found while developing DCAMPROF: not ideal but pretty good. I seem to remember that he also found that if the training set was made too large there were issues created by overfitting.

As for databases of reflectance spectra, no time for me to look now, but there were a number referenced in some early papers on optimizing CFAs, by Finlayson and others.

Jack
 
I had wondered about that, so thanks for the hard work - caveats notwithstanding.

It seems that red spectral sensitivity is often optimised more for QE/SNR than for SMI. In other words, the red peak is further right than in the LMS space.

My question is the degree to which the perceptual significance of this is predicated on the selected colour space. This may of course be a dumb question...
 
I had wondered about that, so thanks for the hard work - caveats notwithstanding.
It turned out to be more work than I expected it to be. Enough that, had I known, I probably wouldn't have done the project.
It seems that red spectral sensitivity is often optimised more for QE/SNR than for SMI.
Did you see the work I did on SNR in the previous posts?

Here's a relevant point from one of those:
  • The CFA spectra for high SMI isn’t that far off the CFA spectra for low chroma noise.
In other words, the red peak is further right than in the LMS space.

My question is the degree to which the perceptual significance of this is predicated on the selected colour space.
It should be the same in any perceptually uniform color space. Unfortunately, those are thin on the ground. I make do with CIELab and CIELuv.
This may of course be a dumb question...
There are no dumb questions, right?
 
As for databases of reflectance spectra, no time for me to look now, but there were a number referenced in some early papers on optimizing CFAs, Finlayson etc.
Pointers to those papers would be good. I have the NASA data set, but I don't think it's particularly relevant -- lots of grays and browns.

I should mention that, several years ago, Jack kindly provided the Matlab code that served as the foundation for this project.

--
https://blog.kasson.com
 
Did you see the work I did on SNR in the previous posts?
Just looked. Very informative, thank you. I will read more of your blog in future. I have read most of Jack's already... ;-)
It should be the same in any perceptually uniform color space. Unfortunately, those are thin on the ground. I make do with CIELab and CIELuv.
Right. I had assumed this but wasn't sure. I presume the delta function would vary in proportion to the linear transformation between CIELab and the colour space, with some inevitable clipping. Is this a reasonable assumption?
There are no dumb questions, right?
I seem to manage, but I learn a lot that way ;-)
 
Right. I had assumed this but wasn't sure. I presume the delta function would vary in proportion to the linear transformation between CIELab and the colour space, with some inevitable clipping.
No clipping in the large sample set I used if you use Adobe RGB. A little clipping in the cyans if you use sRGB. The delta I used was CIELab DeltaE 2000, and, except for color spaces that clip the samples, I'd get the same results with the camera raw image converted to any RGB color space. I use XYZ as my internal working color space, but I could have used PPRGB, Adobe RGB, or many other RGB color spaces.
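The clipping check can be reproduced with the standard CIELab-to-linear-sRGB conversion (D65 white point, sRGB primaries); a patch is out of gamut when any linear channel falls outside [0, 1]. This is textbook colorimetry, not the code used in the blog:

```python
# Convert CIELab (D65) to linear sRGB and flag out-of-gamut values.
# Matrix and white point are the standard sRGB/D65 constants.

D65 = (0.95047, 1.00000, 1.08883)
XYZ_TO_SRGB = [[ 3.2406, -1.5372, -0.4986],
               [-0.9689,  1.8758,  0.0415],
               [ 0.0557, -0.2040,  1.0570]]

def lab_to_xyz(L, a, b):
    fy = (L + 16.0) / 116.0
    fx = fy + a / 500.0
    fz = fy - b / 200.0
    def f_inv(t):
        # Inverse of the CIELab cube-root compression.
        return t ** 3 if t > 6.0 / 29.0 else 3.0 * (6.0 / 29.0) ** 2 * (t - 4.0 / 29.0)
    return tuple(wn * f_inv(t) for wn, t in zip(D65, (fx, fy, fz)))

def in_srgb_gamut(L, a, b, tol=1e-6):
    xyz = lab_to_xyz(L, a, b)
    rgb = [sum(m * c for m, c in zip(row, xyz)) for row in XYZ_TO_SRGB]
    return all(-tol <= v <= 1.0 + tol for v in rgb)
```

A mid-gray like Lab(50, 0, 0) passes; a strongly saturated cyan such as Lab(60, -80, -20) produces a negative linear red channel and fails, consistent with the cyans being where sRGB clips first.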
 
Can this be used to confirm that more than 3 colors in the CFA have utility? I think that spectrometers for display calibration tend to use larger numbers (7?).

Can you use this to find some kind of "optimum" set of N patches for calibrating a camera?

As to overfitting, I would assume that the more physical patches you capture, the more parameters may be used in the model to calibrate the camera response?

-h
 
Useful to know, many thanks for the response.
 
Can this be used to confirm that more than 3 colors in the CFA have utility?
Not without some additional assumptions. The XYZ filter set is already giving an SMI of 99.7 with the CC24 and low errors for the larger set. It's hard to get better than perfect.

Even the optimized Gaussian set gives an SMI of 99.3.

I believe the reason filters like those are not used has to do with the availability of the dyes and pigments. If we were to add a fourth filter spectrum, we'd have to somehow encode the availability of dyes and pigments with various spectra into the optimization, and I don't know how to do that.
I think that spectrometers for display calibration tend to use larger numbers (7?).

Can you use this to find some kind of "optimum" set of N patches for calibrating a camera?
Unfortunately, I am pretty sure that will vary with the camera's CFA spectra.
As to overfitting, I would assume that the more physical patches you capture, the more parameters may be used in the model to calibrate the camera response?
I'm not sure what you mean by that. The most I use now is 6 parameters, for the optimal Gaussian. I use 3 for the fixed-sigma Gaussians.

Or are you talking about the compromise matrix? It has 9 entries, one of which is redundant.

If we were to use a compromise matrix with a 4-color CFA, it would have 12 entries.
 
I'd be happy to discuss it here, and I would appreciate pointers to sets of standardized photographically-significant reflectance spectra.

Check out the AMPAS Training set.
 
Thanks. I downloaded some camera spectra. Now I have to write a JSON parser. Do you know where I can download the AMPAS training dataset?
 
