Camera color accuracy outside of the training set

> Metamerism is when two different colors are perceived differently by a human observer but the same by a camera, or when two colors are perceived as the same by a human observer but differently by the camera. What you are describing is not metamerism, but various kinds of metameric errors or metameric failures.
>
> https://blog.kasson.com/the-last-word/metameric-failure/

A more accurate terminology - and I agree that *this* cannot be corrected for.

Mathematically, this would appear as nonmonotonicities or discontinuities when trying to construct something like a HueSatMap table.
 
> > > There is no such thing as "deficiencies of a matrix approximation",
> >
> > Sure there is. Monochromatic blue getting mapped to a negative luminance value, for example.
>
> Monochromatic blue being mapped to a negative Y value is not metamerism. It's a model error, plain and simple.

Actually, it is exactly metamerism. The latter is not just the extreme examples you mentioned in the previous post; it is a failure of the IL (Ives-Luther) condition.

When you construct an approximate matrix, you do not have a "model error". A model error would be a failure of the tristimulus model or similar. You may get negative values, indeed, as I mentioned a few posts above. Then you would somehow compensate for that, which is where a nonlinearity comes in. That compensation screws things up somewhere else, and so on.
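As a concrete numeric sketch of that failure mode (the matrix and camera response below are made-up illustrative numbers, not from any real camera): a least-squares color matrix routinely has negative coefficients, and a negative coefficient in the Y row can send a near-monochromatic blue to negative luminance.

```python
import numpy as np

# Toy camera-to-XYZ matrix (hypothetical numbers, not from any real camera).
# Least-squares-fit matrices routinely contain negative coefficients because
# the camera's spectral sensitivities are not a linear combination of the CMFs.
M = np.array([
    [ 0.70,  0.15,  0.10],   # X row
    [ 0.25,  0.85, -0.12],   # Y row: note the negative blue coefficient
    [ 0.02, -0.08,  1.05],   # Z row
])

# Hypothetical raw response to a near-monochromatic deep blue:
# almost all of the signal lands in the blue channel.
rgb_blue = np.array([0.02, 0.01, 0.95])

XYZ = M @ rgb_blue
print(XYZ)  # the Y component comes out negative: the matrix maps a real
            # stimulus to a tristimulus value that no real light can have
```

No real light has negative luminance, so any downstream stage that "fixes" such values is necessarily nonlinear, which is the compensation cascade described above.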
 
> Mathematically, this would appear as nonmonotonicities or discontinuities when trying to construct something like a HueSatMap table.

I think that could happen, but I don't think it has to happen. You could train an optimizer on one patch set, test on another patch set and get good results, then test on a third patch set and get bad results.

Or train on one patch set, then test on a patch set with the same colors but different spectra, and also get bad results.
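That train-on-one-set, fail-on-another behavior can be sketched with synthetic spectra. Everything below is a hypothetical stand-in (the "CMFs", camera sensitivities, and patch sets are toy Gaussians and random smooth curves, not real data); the point is only the mechanism: a 3x3 matrix fitted by least squares on broadband patches can do well on similar patches yet poorly on spectrally different ones.

```python
import numpy as np

rng = np.random.default_rng(0)
wl = 31  # toy spectral resolution (think 400-700 nm in 10 nm steps)

# Hypothetical observer and camera sensitivities (smooth Gaussian bumps).
def bumps(centers, width):
    x = np.arange(wl)
    return np.stack([np.exp(-0.5 * ((x - c) / width) ** 2) for c in centers])

A = bumps([8, 15, 22], 4.0)   # stand-in for color matching functions
C = bumps([7, 16, 24], 5.0)   # camera; deliberately NOT a linear transform of A

# Training patches: smooth, broadband random reflectances.
def smooth_spectra(n):
    s = rng.random((n, wl))
    k = np.ones(7) / 7.0
    return np.stack([np.convolve(row, k, mode="same") for row in s])

train = smooth_spectra(24)
rgb = train @ C.T             # simulated camera responses
xyz = train @ A.T             # simulated "ground truth" tristimulus values

# Least-squares 3x3 matrix from camera space to XYZ.
M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)

def err(spectra):
    return np.abs(spectra @ C.T @ M - spectra @ A.T).mean()

e_smooth = err(smooth_spectra(24))   # patches like the training set
e_narrow = err(np.eye(wl)[5:26])     # spiky, near-monochromatic patches
print("smooth test error:    ", e_smooth)  # typically small
print("narrowband test error:", e_narrow)  # typically much larger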
 
> Actually, it is exactly metamerism.

It is the result of the camera suffering from observer metameric failure.
 
> Or train on one patch set, then test on a patch set with the same colors but different spectra, and also get bad results.

Yes, that could happen too.

The risk of this goes down as spectral purity goes up, because the set of possible spectra that correspond to the same color shrinks, eventually to the point where there is exactly one possible spectrum for any given color on the spectral locus.
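The "same color, different spectra" case is easy to construct explicitly: any vector in the null space of the observer's sensitivity matrix (a "metameric black") can be added to a spectrum without changing its tristimulus values. A toy sketch with made-up Gaussian sensitivities:

```python
import numpy as np

wl = 31
x = np.arange(wl)

# Toy 3 x wl sensitivity matrix standing in for color matching functions.
A = np.stack([np.exp(-0.5 * ((x - c) / 4.0) ** 2) for c in (8, 15, 22)])

# Any vector in the null space of A is a "metameric black": adding it to a
# spectrum changes the spectrum but not the resulting tristimulus values.
_, _, Vt = np.linalg.svd(A)
black = Vt[3]                  # one basis vector of the 28-dimensional null space

s1 = np.full(wl, 0.5)          # a flat, broadband spectrum
s2 = s1 + 0.2 * black          # a physically different spectrum

t1, t2 = A @ s1, A @ s2
print(np.allclose(t1, t2))     # True: s1 and s2 are metamers under A
```

This also illustrates the spectral-purity point: a nonnegative, near-monochromatic spectrum leaves almost no room to add a metameric black without going negative somewhere, so its metamer set collapses toward a single spectrum at the locus.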
 
> Then you would somehow compensate for that, which is where a nonlinearity comes in. That compensation screws things up somewhere else, and so on.

We're talking about a HueSatMap that has a value dimension of 1 here. There is no nonlinearity.

Frequency response ripple is not nonlinearity.
 
> > We're talking about a HueSatMap that has a value dimension of 1 here.
>
> I am not.

Then please stop trying to hijack the discussion by dragging it off topic. Reference https://www.dpreview.com/forums/post/66044851

Specifically:

"So I've been thinking about this a bit more - and in the case of dcamprof, the idea of a "2.5D" LUT is one that does not take input amplitude (value) into account, only hue/saturation, but DOES adjust amplitude (value) based on hue/saturation."

In other words, a value dimension of 1.

So why do you insist on constantly talking about HueSatMaps with a value dimension other than 1 (which are nonlinear, and which I think we can all agree are not appropriate for anything other than a "look" table), when an attempt is being made to discuss ones with a value dimension of 1?
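For concreteness, here is a minimal sketch of what a lookup with a value dimension of 1 does, loosely modeled on the quoted dcamprof/DNG "2.5D" description. The table layout, entry format (hue shift in degrees, saturation scale, value scale), and nearest-neighbor lookup are hypothetical simplifications of my own; real HueSatMap tables interpolate between entries.

```python
import colorsys

# Sketch of a "2.5D" HueSatMap (value dimension of 1): the lookup is indexed
# by hue and saturation only, but each entry can still scale value.
HUE_DIVS, SAT_DIVS = 6, 4
# Identity table: no hue shift, unit saturation/value scales.
table = [[(0.0, 1.0, 1.0) for _ in range(SAT_DIVS)] for _ in range(HUE_DIVS)]
table[0][3] = (10.0, 0.9, 0.95)  # example entry: tweak saturated reds

def apply_huesatmap(r, g, b):
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    # Nearest-neighbor lookup on (h, s) only -- v does not index the table,
    # which is what keeps the map linear in amplitude at a fixed chromaticity.
    hi = min(int(h * HUE_DIVS), HUE_DIVS - 1)
    si = min(int(s * SAT_DIVS), SAT_DIVS - 1)
    dh, ds, dv = table[hi][si]
    h = (h + dh / 360.0) % 1.0
    return colorsys.hsv_to_rgb(h, min(s * ds, 1.0), v * dv)

print(apply_huesatmap(0.8, 0.1, 0.1))  # a saturated red, shifted and scaled
```

Because value never selects the table entry, scaling the input amplitude scales the output by the same per-chromaticity gain, which is the sense in which such a table stays linear.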
 
> Then please stop trying to hijack the discussion by dragging it off topic. Reference https://www.dpreview.com/forums/post/66044851

This is an incorrect link to the OP; here is the right one:

Camera color accuracy outside of the training set: Photographic Science and Technology Forum: Digital Photography Review (dpreview.com)

> So why do you insist on constantly talking about HueSatMaps ...

I am ignoring every mention of HueSatMaps. I am just saying that there is no magic fix for a loss of information, that is all.
 