Camera color accuracy outside of the training set

FYI, the AMPAS training set is actually the ColorChecker SG. There is also a nice collection of spectral targets included with the BasicColor Input demo, and several are bundled with DCamProf.

Also, before you put time into writing your own, know that DCamProf includes “text2ti3” which will transform plain text spectral data into a .ti3 that can be used with camera SSF data to build simulated target “shots”. DCamProf also has dcp2json and icc2json commands for viewing and editing profiles.
 
It's a little bigger. The ColorChecker SG is 140 patches; AMPAS is 190.
 
Thank you.
 
And some SG patches are duplicates.
 
Huh… I could have sworn that when I downloaded the RawToACES targets and loaded them into the BabelColor PatchTool they were the same. Maybe I mixed something up.
 

--
https://blog.kasson.com
 
As to overfitting, I would assume that the more physical patches you capture, the more parameters may be used in the model to calibrate the camera response?
I'm not sure what you mean by that. The most I use now is 6 parameters, for the optimal Gaussian. I use 3 for the fixed-sigma Gaussians.

Or are you talking about the compromise matrix? It has 9 entries, one of which is redundant.

If we were to use a compromise matrix with a 4-color CFA, it would have 12 entries.
Perhaps I misunderstood.

I thought that overfitting referred to the color processing needed to convert a raw camera image to a standardized color representation. The more patches you record, the more confidence you can have in knowing the camera response (?) and the more parameters you could (in principle, at least) use in that color processing.

I understand the simplicity of having a 3x3 linear matrix, but why limit oneself to that if there is sufficient data to do a more complex correction?

-h
 
I am just trying to keep the playing field level. If I add LUTs, they can be populated in so many ways the simulation becomes intractable.
 
Can this be used to confirm that >3 colors in the CFA have utility? I think that spectrometers for display calibration tend to use larger numbers (7?).
There is an underlying assumption in color science that light behaves linearly all the way to its collection by the cones in the Standard Observer's eye. There are three types of cones, so if they are exposed to the same light they will produce three signals for the HVS to process into color. Therefore, ideally, all the HVS requires is an Nx3 input, and that's all our imaging system would need to provide.*

However, there are non-idealities in the system (e.g. different SSFs) which we can try to compensate for in different ways. For instance, if we captured a scene with a set of Nx3 tones (A) which we knew corresponded to Nx3 HVS inputs (b), we could solve the linear system Ax = b for x; et voilà, we could then calculate b for any given A.

By the magic of linear algebra, x is a 3x3 matrix. To determine it, all you need are 3 sets of tones.
Can you use this to find some kind of "optimum" set of N patches for calibrating a camera?
If the system is overdetermined (N>3, usually), x can still be calculated, but with some limitations (e.g. a least-squares approximation instead of an exact solution). N, of course, is the number of patches. Assuming the system is linear and assimilable to a cube (e.g. XYZ), you would probably want some patches near the 5 non-zero corners and perhaps one in the center; the rest would fall into place linearly, like an interpolation. If the corners are not in the visible range (they mostly aren't), go for the closest that are. Come to think of it, you don't need to exceed the gamut of your output device. And maybe give a little more weight to the tones that you see most often; in fact, include them while you are at it.
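To make that concrete, here is a minimal numpy sketch of the overdetermined solve (the patch data and matrix values are toy numbers of mine, not from any real camera): A holds one row of linear raw RGB per patch, b the corresponding XYZ triples, and lstsq returns the 3x3 matrix x in the least-squares sense. With a 4-color CFA, A would be Nx4 and x would be 4x3, i.e. the 12 entries mentioned earlier.

```python
import numpy as np

# A: N x 3 linear raw camera responses, one row per patch.
# b: N x 3 corresponding CIE XYZ values.
# Toy numbers stand in for measured patch data.
rng = np.random.default_rng(0)
A = rng.uniform(0.05, 0.95, size=(24, 3))            # a CC24's worth of patches
true_x = np.array([[0.9, 0.1, 0.0],
                   [0.2, 0.7, 0.1],
                   [0.0, 0.2, 0.8]])
b = A @ true_x + rng.normal(0, 0.002, size=(24, 3))  # add a little noise

# Least-squares solution of A x = b; x is the 3x3 compromise matrix.
x, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)

# Once x is known, any raw triple maps to an estimated XYZ triple:
raw = np.array([0.3, 0.5, 0.2])
xyz_estimate = raw @ x
```

With exactly 3 well-chosen patches the solve is exact; every patch beyond that just averages down the noise in x rather than adding parameters.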

Hey, aren't we designing a CC24? If the system is linear, more than this is redundant. In fact, for well-behaved cameras and general purposes, Anders Torger mentions that there is usually little to gain by going much beyond that. I seem to remember that Jim's past simulations appeared to support this conclusion.
As to overfitting, I would assume that the more physical patches you capture, the more parameters may be used in the model to calibrate the camera response?
If we've chosen the patches judiciously for our application, more is not necessarily better, because it could confuse our simple optimization algorithm: we are fitting a 3D object, and smooth surfaces are desirable, as opposed to spiky, overfitted ones. However, more patches for the specific application are always welcome, if only to allow distinct training, validation, and test sets.

Jack

* Practice shows that more than 3 filters do not add much at the capture stage (different story at the output, e.g. inks)
 
Some extra spectral data: https://chromaxion.com/spectral-library.php
 
I thought that overfitting referred to the color processing needed to convert a raw camera image to a standardized color representation. The more patches you record, the more confidence you can have in knowing the camera response (?) and the more parameters you could (in principle, at least) use in that color processing.
As I understand it, overfitting is where you have too many parameters for the number of data points or, more generally, where the model you are fitting is more complex than the reality of the data. You know you are overfitting when the model predicts the training data exactly, including its noise, and when validation data is consistently fit worse than it would be with fewer parameters.

I can't see how using more test colors will lead to overfitting if you use the same simple model.
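As a toy illustration of that symptom (entirely made-up data, nothing to do with cameras): fit polynomials of increasing degree to noisy samples of a straight line and compare training error to validation error. The high-degree fit drives the training error toward zero by chasing the noise, while the validation error gets worse.

```python
import numpy as np

# The underlying reality is a straight line; we fit models of
# increasing complexity and watch training vs. validation error.
rng = np.random.default_rng(1)
x_train = np.linspace(0.0, 1.0, 10)
x_val = np.linspace(0.05, 0.95, 10)
y_train = 2.0 * x_train + 0.5 + rng.normal(0, 0.05, x_train.size)
y_val = 2.0 * x_val + 0.5 + rng.normal(0, 0.05, x_val.size)

def rms(pred, target):
    return float(np.sqrt(np.mean((pred - target) ** 2)))

for degree in (1, 3, 8):
    coeffs = np.polyfit(x_train, y_train, degree)
    print(degree,
          rms(np.polyval(coeffs, x_train), y_train),  # keeps shrinking
          rms(np.polyval(coeffs, x_val), y_val))      # worsens past degree 1
```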
I understand the simplicity of having a 3x3 linear matrix, but why limit oneself to that if there is sufficient data to do a more complex correction?
But what more complex correction makes logical sense? Sure, I think it would make a lot of sense to carefully linearize the raw data beforehand, and this linearization could be implemented as three or four 1D LUTs (one per channel). You can make this as complicated as needed, attempting to control fixed-pattern noise and so forth. But the more linear you make the data, the easier the rest is going to be. Fitting this data is conceptually different from fitting the color data, as presumably it is objective, with no aesthetic judgment involved.

Even when using 3D LUTs, which can be complex with many variables, linearizing first allows the use of much simpler 3D LUTs.
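For what it's worth, here's a bare-bones sketch of that two-stage idea (all numbers are placeholders, and a real camera would get one LUT per channel from measured data): a 1D LUT applied via piecewise-linear interpolation, followed by the simple 3x3 matrix.

```python
import numpy as np

# Stage 1: per-channel 1D LUT linearization.
# lut_in/lut_out pairs would come from measurements (e.g. a step wedge);
# here one placeholder LUT is shared by all three channels for brevity.
lut_in = np.array([0.00, 0.25, 0.50, 0.75, 1.00])   # recorded values
lut_out = np.array([0.00, 0.22, 0.49, 0.76, 1.00])  # corresponding linear values

def linearize(channel):
    return np.interp(channel, lut_in, lut_out)      # piecewise-linear lookup

raw = np.array([[0.10, 0.40, 0.90],
                [0.55, 0.20, 0.35]])                # two pixels, 3 channels
linear = np.column_stack([linearize(raw[:, c]) for c in range(3)])

# Stage 2: with the data linearized, the color conversion can stay a
# simple 3x3 matrix (identity here as a stand-in for a fitted one).
M = np.eye(3)
xyz = linear @ M
```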

But why are 3D LUTs used? Besides linearizing data, they are used for applying gamma curves, for conversion to standard color spaces, and most especially for creative control in color grading. So one data construct is handling many logically independent actions. Most of these actions are not statistical in nature, and so don't need to participate in the fitting of color data.

But what if you strip out all of the non-statistical stuff that can be handled separately? What's left is the bare effects of the three color filters, which are linear, and the only operation that makes sense—to me, at least—with this data is a 3x3 linear matrix.

After this process, you are left with device metamerism errors, which are irreducible and ultimately uncorrectable. You can attempt to adjust some metamers to match exactly, but lose other metamers in the process. But that's a judgement call: you have inevitable errors in color and you have to come up with a satisfactory method of distributing those errors.

Sure, there are other adjustments, such as making colors more pleasing, but again I think that is best done separately rather than as part of the calibrated basic color conversion, which is the only part of the process that requires an intrinsically statistical approach.
 
As I understand it, overfitting is where you have too many parameters for the number of data points […] I can't see how using more test colors will lead to overfitting if you use the same simple model.
That's the way I look at it.
I think it would make a lot of sense to carefully linearize the raw data beforehand, and this linearization could be implemented as three or four 1D LUTs (one per channel).
The raw data is already pretty darned linear for modern cameras. And of course, it is linear within the computation error for the simulation.
But why are 3D LUTs used? […] So one data construct is handling many logically independent actions.
Also the way I look at it.
Sure, there are other adjustments, such as making colors more pleasing, but again I think that is best done separately […]
D'accord.
 
First results with the AMPAS 190-patch training set.

[results chart]

There may be bugs. I made the changes for a linear array of patches, fired it up with the AMPAS spectral data, and it pretty much worked. I'm always suspicious when that happens. I'll work on coloring in the patch bars properly and on doing a chromaticity plot.

--
https://blog.kasson.com
 
By the way, which patch in the AMPAS data set should I white balance to?
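Whichever patch gets chosen, the white-balance step itself is just a per-channel scaling picked so that that patch reads equal in all three raw channels; a minimal sketch with made-up numbers:

```python
import numpy as np

# White balancing to a chosen neutral patch: scale each raw channel so
# the patch's mean raw RGB comes out equal. The patch values are made up.
neutral_rgb = np.array([0.42, 0.50, 0.38])  # mean raw RGB of the chosen gray
gains = neutral_rgb.max() / neutral_rgb     # per-channel multipliers

some_raw_pixel = np.array([0.30, 0.45, 0.25])
balanced = some_raw_pixel * gains
```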

 
