Interesting paper: Spectral Sensitivity Estimation Without a Camera

Okay, that makes sense. And, the low-pass predicted curve is clearly not the same as its measured counterpart.

I'm just wondering if the performance against the CC24 training data is sensitive to that (or any) particular patch set. I'm going to re-run the camera profiles against a much larger Munsell reference dataset, which may expose such sensitivity.
 
That deltaE is kind of cheating. There are many spectral sensitivities which would give you the same "prediction." Your deltaE would be the same but they would still be different.
Strictly speaking, doesn't it tell me that this particular predicted SSF dataset will resolve those 24 colors, relative to a spectral reference, to within a deltaE not much worse than a target-shot profile's? Metamerically, there are other datasets that are worse, others the same, others better... ??
But this is not what you want to validate. You want to see if the derived spectral curves are close enough to the actual ones. My understanding is that to compute that deltaE, you convert to some color space with some transformation. But... those curves came from such a transformation...

It is like asking me to find x and y knowing that x + y = 5. I tell you x = 2, y = 3. You compute the sum, get deltaE = 0, and say: great, those are the numbers...
The way to check whether the derived curves are close enough to Color Matching Functions (or Cone Fundamentals) is to guesstimate them in camera space (SSFs) and to move them to a common color space so that they can be compared. They do this by using canned linear DNG matrices to move SSFs back and forth to the XYZ 'connection' color space.

Once in there, a dot product of the curves with the reference illuminant (times each patch's reflectance) produces XYZ tristimulus color values. If two tones have the same values, they are perceived as having the same color. How far off do they need to be for humans to see them as different?
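To make the arithmetic concrete, here is a minimal numpy sketch of those two steps; the wavelength grid, the curves, and the camera-to-XYZ matrix values are all made-up placeholders, not numbers from the paper or from dcamprof:

import numpy as np

wl = np.arange(400, 701, 10)                # wavelength grid in nm (placeholder)
n = wl.size

# Placeholder spectra; in practice these come from the SSF dataset,
# a patch's measured reflectance, and an illuminant table such as D50.
ssf_cam = np.random.rand(3, n)              # camera R, G, B sensitivities
reflectance = np.random.rand(n)             # one patch's spectral reflectance
illuminant = np.ones(n)                     # flat stand-in for a real illuminant

# A canned linear camera->XYZ matrix (DNG-style); values illustrative only.
cam_to_xyz = np.array([[0.65, 0.28, 0.07],
                       [0.25, 0.68, 0.07],
                       [0.03, 0.12, 0.85]])

# Step 1: move the camera curves into the XYZ 'connection' space.
ssf_xyz = cam_to_xyz @ ssf_cam              # shape (3, n)

# Step 2: the dot product against illuminant times reflectance gives the
# patch's XYZ tristimulus values.
XYZ = ssf_xyz @ (illuminant * reflectance)  # shape (3,)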

Enter the just noticeable difference. Color differences are usually computed in a more perceptually uniform color space, L*a*b* (under D50; did they use that?), which is one (or two) transforms away from XYZ. Common metrics for perceived color differences are dE(76) and dE(00). dE(76) is the earliest and least representative of such metrics, but it is the easiest to understand: it is just the quadrature sum of the differences in L*, a*, and b*. dE(00) is a more perceptually uniform evolution of it, also calculated from L*a*b*. A just noticeable difference is roughly 1 dE(00) or 2 dE(76). (Which deltaE are you using, Glenn?)
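For reference, dE(76) is literally one line of code once you have L*a*b*; a minimal sketch, assuming the XYZ values are on the same scale as the reference white (e.g., D50):

import numpy as np

def xyz_to_lab(xyz, white):
    # CIE XYZ -> L*a*b*, with `white` the reference white point (e.g., D50).
    t = xyz / white
    f = np.where(t > (6/29)**3, np.cbrt(t), t / (3 * (6/29)**2) + 4/29)
    return np.array([116 * f[1] - 16,        # L*
                     500 * (f[0] - f[1]),    # a*
                     200 * (f[1] - f[2])])   # b*

def de76(lab1, lab2):
    # Quadrature sum (Euclidean distance) of the L*, a*, b* differences.
    return np.linalg.norm(lab1 - lab2)

dE(00) starts from the same L*a*b* values but adds lightness, chroma, and hue weightings, which is why it tracks perception better.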

If the color difference from a reference tone is not noticeable, the tones are perceived as the same, and that's all we care about. If the color differences from a representative orthogonal sample of reference colors found in typical scenes are low, such scenes are going to look relatively 'correct'. Metamers are an important but different issue, which of course also affects humans directly, since we see color through the SSFs of the cones in the retina, which also perform a dot product and produce subliminal tristimulus values in HVS space.

My beef with their study is that they attempt to extrapolate guesstimated SSFs from a tiny sample, without the benefit of training on tons of raw data under known illumination, underfitting big time and producing generic results that regress towards the mean.

BTW, this was another attempt, from 10 years ago, at generalizing using an analytic approach: https://ieeexplore.ieee.org/document/6475015

Jack
 
So, the proof's in the pudding: I made camera profiles from each dataset. Here are the CC24 patch deltaEs, sorted min to max:

1. predicted SSF:

D02 DE 0.00 DE LCh +0.00 +0.00 +0.00 (gray 80%)
D06 DE 0.07 DE LCh +0.02 -0.06 +0.03 (gray 20%)
D03 DE 0.15 DE LCh -0.02 -0.04 -0.14 (gray 70%)
D04 DE 0.16 DE LCh -0.02 -0.06 -0.15 (gray 50%)
D05 DE 0.20 DE LCh -0.01 -0.03 -0.20 (gray 40%)
D01 DE 0.83 DE LCh -0.14 -0.75 -0.34 (white)
A01 DE 0.95 DE LCh -0.06 +0.45 +0.84 (dark brown)
A04 DE 0.96 DE LCh -0.66 +0.62 -0.32 (yellow-green)
A06 DE 1.14 DE LCh +0.28 -0.72 +0.83 (light cyan)
B04 DE 1.22 DE LCh +1.03 -0.14 +0.62 (dark purple)
A03 DE 1.23 DE LCh +0.87 -0.91 -0.48 (purple-blue)
B03 DE 1.23 DE LCh +0.98 -0.30 -0.67 (red)
A02 DE 1.36 DE LCh +0.48 -0.09 -1.27 (red)
A05 DE 1.36 DE LCh +1.13 -0.25 +0.61 (purple-blue)
B02 DE 1.83 DE LCh +1.53 -1.59 -0.90 (purple-blue)
C01 DE 1.84 DE LCh +1.50 -1.63 -1.43 (dark purple-blue)
C02 DE 2.07 DE LCh -0.30 -1.77 +1.03 (yellow-green)
B01 DE 2.15 DE LCh -0.35 -1.31 -1.66 (strong orange)
B06 DE 2.30 DE LCh -0.69 -2.12 -0.55 (light strong orange)
C03 DE 2.45 DE LCh +2.17 -0.39 -1.06 (strong red)
C06 DE 2.50 DE LCh +2.06 -0.38 +1.37 (blue)
C05 DE 2.71 DE LCh +2.11 -0.68 +1.56 (purple-red)
B05 DE 3.29 DE LCh -1.09 -2.95 +0.93 (light strong yellow-green)
C04 DE 3.68 DE LCh -0.66 -3.60 -0.46 (light vivid yellow)

2. rawtoaces measured SSF

D02 DE 0.00 DE LCh +0.00 +0.00 +0.00 (gray 80%)
D06 DE 0.09 DE LCh +0.02 -0.08 +0.03 (gray 20%)
D03 DE 0.11 DE LCh -0.01 -0.07 -0.09 (gray 70%)
D04 DE 0.12 DE LCh -0.01 -0.02 -0.12 (gray 50%)
D05 DE 0.16 DE LCh -0.00 -0.04 -0.15 (gray 40%)
A01 DE 0.41 DE LCh -0.04 +0.26 +0.32 (dark brown)
D01 DE 0.69 DE LCh -0.13 -0.64 -0.22 (white)
A04 DE 0.80 DE LCh -0.64 +0.43 -0.23 (yellow-green)
A06 DE 1.05 DE LCh +0.24 -0.60 +0.82 (light cyan)
A03 DE 1.10 DE LCh +0.84 -0.76 -0.28 (purple-blue)
A02 DE 1.18 DE LCh +0.53 +0.12 -1.05 (red)
B03 DE 1.24 DE LCh +1.14 -0.07 -0.49 (red)
B04 DE 1.35 DE LCh +0.82 -0.87 +0.61 (dark purple)
C02 DE 1.50 DE LCh -0.32 -1.37 +0.51 (yellow-green)
A05 DE 1.55 DE LCh +1.06 -0.89 +0.41 (purple-blue)
B02 DE 1.59 DE LCh +1.47 -1.05 -0.88 (purple-blue)
B06 DE 1.73 DE LCh -0.85 -1.45 -0.44 (light strong orange)
C01 DE 1.76 DE LCh +1.59 -0.94 -1.18 (dark purple-blue)
B01 DE 1.85 DE LCh -0.40 -0.86 -1.59 (strong orange)
C03 DE 2.27 DE LCh +1.91 -0.60 -1.08 (strong red)
B05 DE 2.41 DE LCh -1.11 -2.00 +0.75 (light strong yellow-green)
C05 DE 2.42 DE LCh +1.97 -0.71 +1.21 (purple-red)
C06 DE 2.69 DE LCh +2.04 -1.08 +1.35 (blue)
C04 DE 2.76 DE LCh -0.77 -2.62 -0.36 (light vivid yellow)

3. dcamprof target-shot matrix:

D01 DE 0.03 DE LCh +0.03 +0.00 +0.00 (white)
A01 DE 0.29 DE LCh +0.24 +0.15 -0.05 (dark brown)
A04 DE 0.54 DE LCh -0.22 +0.36 -0.33 (yellow-green)
C02 DE 0.70 DE LCh +0.01 -0.56 +0.42 (yellow-green)
C04 DE 0.72 DE LCh -0.57 +0.32 +0.31 (light vivid yellow)
A06 DE 0.82 DE LCh -0.01 -0.35 +0.74 (light cyan)
D02 DE 0.93 DE LCh +0.28 -0.16 -0.87 (gray 80%)
D04 DE 1.04 DE LCh +0.56 -0.39 -0.79 (gray 50%)
A03 DE 1.19 DE LCh +0.47 -0.56 -1.14 (purple-blue)
D03 DE 1.35 DE LCh +0.03 -0.10 -1.34 (gray 70%)
D05 DE 1.43 DE LCh +1.20 -0.30 -0.71 (gray 40%)
B01 DE 1.44 DE LCh -0.98 -0.20 -1.03 (strong orange)
B06 DE 1.69 DE LCh -1.30 +0.66 +0.86 (light strong orange)
B05 DE 1.81 DE LCh -1.53 -0.92 -0.34 (light strong yellow-green)
D06 DE 1.81 DE LCh +1.71 -0.17 -0.57 (gray 20%)
B03 DE 2.10 DE LCh +2.07 -0.05 +0.35 (red)
A05 DE 2.15 DE LCh +1.89 -1.16 -1.02 (purple-blue)
B02 DE 2.17 DE LCh +1.94 -1.61 -1.04 (purple-blue)
C06 DE 2.43 DE LCh +1.93 -1.21 +0.82 (blue)
C03 DE 2.70 DE LCh +2.44 -1.14 -0.02 (strong red)
C01 DE 2.78 DE LCh +2.35 -2.38 -1.80 (dark purple-blue)
C05 DE 2.83 DE LCh +2.15 -0.94 +1.58 (purple-red)
A02 DE 3.05 DE LCh -0.23 -0.07 -3.04 (red)
B04 DE 3.51 DE LCh +1.75 -3.02 +0.21 (dark purple)

'predicted' fares a smidge worse than the target-shot matrix: max dE 3.68 vs 3.51.

I guess I'm a little torn. With SSF data and dcamprof, I can make camera profiles for any illuminant. I can also use domain-specific training data, e.g., the Lippman 2000 skin tone dataset. That the color fidelity is only matrix-good may be a wash... This is just my D7000, though; I'd want to evaluate other cameras' predicted data before generalizing.
Good work, Glenn. SMI is a 'color accuracy' metric, computed as 100 minus 5.5 times the average dE(76) of just the color patches. For the three profiles above that works out to 89.5, 90.9, and 89.9. Are you showing dE(76) or dE(00)?
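For anyone who wants to check the arithmetic, the first of those numbers reproduces from the 18 chromatic patches (rows A-C; grays excluded) in the 'predicted SSF' list above:

import numpy as np

# dE values for the 18 chromatic CC24 patches from the 'predicted' list.
de_color = np.array([0.95, 0.96, 1.14, 1.22, 1.23, 1.23, 1.36, 1.36, 1.83,
                     1.84, 2.07, 2.15, 2.30, 2.45, 2.50, 2.71, 3.29, 3.68])

smi = 100 - 5.5 * de_color.mean()
print(round(smi, 1))   # -> 89.5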

I never would have guessed by looking at those generic curves that it would be so good. I may investigate further.

Jack
 
My point is that this study is trying to see beyond our vision; that is the whole point, and we cannot validate it by projecting back to our vision.
 
Based on this, I'm gonna say '76:

https://rawtherapee.com/mirror/dcamprof/dcamprof.html#observers

but Anders isn't specific about it.

Johannes is looking at this dataset as a possible source of SSFs for vkdt, his next-generation darktable. Using SSFs has some advantages; being able to make illuminant-specific camera profiles from one measurement set is one of them. Me, I'm just a shade-tree mechanic trying to figure out pragmatically how useful they could be... :-)
 
I re-did the dcamprof camera profiles with the predicted and rawtoaces data, using the 1600-patch Munsell reference dataset that comes with dcamprof. In the interest of brevity, here are the max-dE patches for each:

rawtoaces:
AGY7012 DE 4.33 DE LCh -2.09 -3.37 +1.73 (light vivid yellow-green)

predicted:
AGY7012 DE 5.16 DE LCh -2.09 -4.35 +1.84 (light vivid yellow-green)

So, bear-of-little-brain here thinks that both struggle with outliers, but predicted struggles a bit more. Maybe that makes a case for using monochromator-measured data (oh, and a training dataset measured from the actual subject) for color reproduction work... ??
 
It occurs to me that if the predicted SSFs were completely wrong for the given camera yet had inherently decent performance, we would be none the wiser looking at their deltaEs against a common target: they could all be constrained to be a shade off the Color Matching Functions while being virtually unrelated to the curves they are purported to mimic, and therefore not representative of them. That would negate the whole point of the exercise.

The proper way to evaluate them is against the actual SSF curves themselves, if available. You did that earlier by plotting them on the same graph. All we need to do now is come up with an appropriate metric.
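One candidate, offered as a suggestion rather than anything from the paper: resample both sets to a common wavelength grid, normalize each channel to unit peak (SSFs are only defined up to an overall scale), and take a per-channel RMSE. A minimal sketch:

import numpy as np

def ssf_rmse(predicted, measured):
    # predicted, measured: (3, n) arrays of R, G, B sensitivities sampled
    # on the same wavelength grid. Normalize each channel to unit peak,
    # since SSFs are only meaningful up to a scale factor.
    p = predicted / predicted.max(axis=1, keepdims=True)
    m = measured / measured.max(axis=1, keepdims=True)
    return np.sqrt(((p - m) ** 2).mean(axis=1))   # one number per channel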
 
I've tested half a dozen of the cameras in the dataset that went with the paper, and I find them to be significantly worse from a color accuracy perspective than other curves I have for the same cameras.

Example:

[two comparison images attached in the original post]

--
https://blog.kasson.com
 
