How to combine mono and color SPP images for sharpening control

And this (not your reaching a higher level, but the large overlaps of layer-response) is the reason I claimed that the colours are not (purely) additive in one of the now filled-up threads:
Look at the 1931 CIE XYZ curves. They are purely additive, and there is considerable overlap.
Thanks for responding, and for having the patience, Jim. Maybe I am formulating something other than what I am after in an attempt to simplify my homemade English. Let me retry, sorry if tedious:

It is not clear to me that one can get to the 1931 CIE curves by doing a linear transformation of the responses of the Foveon chip.
Early Sigma cameras prior to the Merrills have a camera-to-XYZ 3x3 matrix which is passed to the raw file as metadata.

A very early one taken from a paper:

XYZ white ref is probably D55 but they don't actually say.
Good. But I'll bet you dollars to doughnuts you don't get 1931 CIE XYZ curves if you use a monochromator on the camera and pass the results through that matrix.
Fine. I don't own a monochromator. Your rebuttals often refer to stuff other people either don't know or can't do. Whatever you do, don't connect the above to a Foveon response chart (e.g., QE) and produce XYZ vs. wavelength curves.

Best not read this paper by Rush of Foveon either:

http://kronometric.org/phot/sensor/fov/2002 Color Paper - Allen Rush.pdf
No doubt it will be as unsatisfactory as everything else I've posted on this subject.
 
Fine. I don't own a monochromator. Your rebuttals often refer to stuff other people either don't know or can't do. Whatever you do, don't connect the above to a Foveon response chart (e.g., QE) and produce XYZ vs. wavelength curves.
And why not? If the matrix is right, that ought to be close to CIE 1931 XYZ.
Hey Ted, you've helped me a lot on this thread.

 
I like it when you two work together on this. Nothing is ever perfect, nothing always goes to plan, but watching people who actually know what they are talking about is a helluva lot better than reading wild guesses, supposition and wishful thinking.

Keep it up, Tag Team 1!



--
Photo of the day: https://whisperingcat.co.uk/wp/photo-of-the-day/
Website: http://www.whisperingcat.co.uk/ (2022 - website rebuilt, updated and back in action)
DPReview gallery: https://www.dpreview.com/galleries/0286305481
Flickr: http://www.flickr.com/photos/davidmillier/ (very old!)
 
Jim wrote:

"In two recent threads, we determined that with the DP1 Merrill, you couldn't turn sharpening completely off in SPP in color mode, but you could in monochromatic mode.

For those who want the sharpness of the monochromatic mode for their color images, here's a technique that will do the job in Photoshop.

1. Save a color version from SPP as a TIFF.

2. With sharpening turned to minimum, save a monochromatic version from SPP as a TIFF.

3. Load both files into Photoshop.

4. Convert both files to Lab.

5. In the channels control for both images, make only L visible.

6. Select all in the mono image.

7. Copy the channel.

8. Paste the channel in the L channel of the color image.

9. Make all channels in the color image visible.

10. Convert the color image back to RGB.

11. Save the image."


Hi Jim.

Your work (created above) makes sense to me, it's honest and fruitful, and it gave me the impetus to "invent" my own simpler method in Photoshop that doesn't resharpen images or create/amplify aliasing and noise. It took me a few days to figure it out, but I have it now; I'm happy with the result, and I'm still checking whether the procedure is universal.

In any case, I want to thank you for your sincere effort to help with processing the image output of Foveon images and for the impetus that your above work gave me in searching and finding my own path in processing Foveon photos.

Thanks for the inspiration, Jim.

Peter
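A side note for anyone who prefers to script Jim's steps rather than click through them: the whole procedure reduces to swapping the L channel in Lab space, which is a few lines of Python. This is a minimal sketch, assuming both SPP TIFFs are integer RGB files of identical dimensions; the file names are hypothetical, and skimage's Lab conversion uses a slightly different white point than Photoshop's.

    # Swap the mono image's lightness (L) into the color image via Lab.
    import numpy as np
    import imageio.v3 as iio
    from skimage import color

    color_img = iio.imread("color_version.tif")  # step 1: color TIFF from SPP
    mono_img = iio.imread("mono_version.tif")    # step 2: mono TIFF from SPP

    # Normalize to [0, 1] floats before converting to Lab. Assumes 8- or
    # 16-bit integer TIFFs and a 3-channel mono file.
    scale = float(np.iinfo(color_img.dtype).max)
    color_lab = color.rgb2lab(color_img / scale)
    mono_lab = color.rgb2lab(mono_img / scale)

    # Steps 4-8: replace the color image's L channel with the mono L.
    color_lab[..., 0] = mono_lab[..., 0]

    # Steps 9-11: back to RGB at the original bit depth, then save.
    out = np.clip(color.lab2rgb(color_lab), 0.0, 1.0)
    iio.imwrite("combined.tif", (out * scale).astype(color_img.dtype))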
 
It is not clear to me that one can get to the 1931 CIE curves by doing a linear transformation of the responses of the Foveon chip.
That is correct. The Foveon sensors do not pass the Luther-Ives criterion.
But are they better than CFA sensors? Foveons seem better than at least Nikons at yellow and violet; there have been a couple of threads about this here and in the Nikon Z forum.

We've talked a lot about sharpening, but not much about color accuracy. In general I think color isn't talked about enough in the camera community.
Would you like me to do an analysis of the accuracy of Foveon colors? Thanks to Ted, I now have the sensitivity curves, but I'm not sure which cameras they apply to.
The curves I posted are for two sensor models, F7 and F13, except the one marked "Merrill", which was somehow extrapolated from F13 data.

F7 has two versions - one for the SD9 without micro-lenses and one for the SD10 with micro-lenses. I've read that the F13 for one or more DP models has radially offset micro-lenses.

F13 is for all subsequent cameras up to but not including the Merrill models. There are no published Sigma/Foveon curves to my knowledge for the Merrill F20 sensor. Here's a marketing one for the Quattro, which claims to be the same as for the Merrill:

Obviously without the hot mirror and normalized to the top layer.
Yes please :-)
Thanks, Ted. That helps, assuming the sensor stack is the same.
So it says.
 
Here's what I got when I optimized the compromise matrix based on the raw sensor curves and the hot mirror spectra you supplied me. Training and testing on the CC24, which is about the lowest bar.

Forward Matrix to XYZ D50/2 =
1.8505 -1.7174 0.8808
-0.3166 1.9503 -0.5945
0.6667 -2.5896 2.7689

It's interesting to me that this produces a sensor response to spectral inputs that yields chromaticities that are almost completely outside the range of human perceivable chromaticities. Of course, in editing these would get mapped into visible colors.
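For anyone who wants to see the arithmetic, applying a forward matrix like the one above is just a 3x3 multiply per pixel. A minimal sketch; the sample triplet is made up:

    import numpy as np

    # Jim's forward matrix: camera space -> CIE XYZ (D50, 2-degree observer).
    M = np.array([[ 1.8505, -1.7174,  0.8808],
                  [-0.3166,  1.9503, -0.5945],
                  [ 0.6667, -2.5896,  2.7689]])

    def camera_to_xyz(cam):
        """cam: (..., 3) camera-space values -> (..., 3) XYZ."""
        return np.einsum("ij,...j->...i", M, cam)

    # Hypothetical mid-gray camera-space triplet.
    print(camera_to_xyz(np.array([0.5, 0.5, 0.5])))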

--
https://blog.kasson.com
 
I disagree that mono always fixes the Merrill -2.0 sharpening issue.

Firstly, not all Merrills are "too sharp" at -2.0.

Secondly, the mono mode can be sharper than that at -2.0 as shown here:

[image: "mono is sharper"]


The X3F, etc. can be found here:

http://kronometric.org/phot/processing/SPP/DP1M replacing mono/

:-D
 
I disagree that mono always fixes the Merrill -2.0 sharpening issue.

Firstly, not all Merrills are "too sharp" at -2.0.
You think the sharpening is camera-dependent? Different serial numbers, or different model numbers? Or are you saying it's scene dependent?
 
Jim, here is my attempt 2 at capturing images for your MTF sampling blog.
 
I disagree that mono always fixes the Merrill -2.0 sharpening issue.

Firstly, not all Merrills are "too sharp" at -2.0.
You think the sharpening is camera-dependent?
No. I am talking about conversion by SPP from raw. Maybe SPP-version dependent. For Merrills, that would be SPP5 onwards.
Different serial numbers
No. However, Sigma does issue camera firmware updates from time to time.
or different model numbers?
Yes, but only if different model numbers get different sharpening algorithms in SPP.
Or are you saying it's scene dependent?
No. SPP does not know what's in the scene.
 
Ted wrote:

"I disagree that mono always fixes the Merrill -2.0 sharpening issue.

Firstly, not all Merrills are "too sharp" at -2.0.

Secondly, the mono mode can be sharper than that at -2.0 as shown here..."


Yes, I agree with that; that's why I create each TIFF individually (and it may differ from image to image). After further processing in Photoshop, I "control" aliasing, sharpening and noise.
The result corresponds to my idea of how the image should look (1. on the display/ 2. before printing) without refocusing, aliasing and noise.

I use this procedure for the DP/SD1 Merrill.

Peter
 
Would you like me to do an analysis of the accuracy of Foveon colors? Thanks to Ted, I now have the sensitivity curves, but I'm not sure which cameras they apply to. It will take a morning (or maybe a whole day) to do the work, and I don't want to waste my time if nobody's interested.
The photos at the top of this thread are good enough for my purposes.

https://www.dpreview.com/forums/thread/4723273
I don't think they say anything at all about the accuracy of Foveon colors.
For my purposes, accuracy isn't too interesting to me - at the end of the day I just want to take photos that I like. A camera can be less accurate but have a more pleasing output.

I have a theoretical interest in - say - how the dyes in a CFA contribute to color interpretation. And I have suspicions on how each manufacturer tunes the dyes to arrive at that camera's colors.

But I don't know what I would do with the accuracy data. I'm not doing fine art reproduction.

If you have any links to research - especially comparing different cameras, or on how the dyes in the CFA contribute to color interpretation - I'd be interested in reading up.

I suspect Nikon tuned their dyes to - say - produce pleasing skin tones in natural light situations (based on personal experience), and Canon maybe tuned their dyes for more realistic sky colors (based on rumors I read), and there's a bit more color variation in skin tones in Canon cameras as a result (which I prefer for studio photography as a base to start editing from).

But honestly I have no idea what their intention was, what their dyes actually were designed to do and how that affects the final image beyond just my personal experience of working with these cameras.

So - without benchmarks/comps to other cameras I'm not sure a "Foveon color accuracy" study would mean much to me, and even if I had those comps - I'm still going to reach for the cameras I reach for in various situations based on personal experience of working with those cameras/files.
Thanks for the offer - if anyone else is interested I can let them chime in.
I can supply benchmarks.

One thing you should consider: cameras that are less accurate tend to have more capture metameric error:

https://blog.kasson.com/the-last-word/observer-metameric-error-in-simulated-cameras/
There's a lot to unpack there. Let me see if I understand it correctly.

You have data sampled from a number of cameras, which produces a Spectral Sensitivity Function (SSF).

The SSF is a mathematical model of the sensor.

You can then feed a series of theoretical color swatches into this SSF function and run it through a Compromise Matrix, which is akin to demosaicing an image to produce something like RGB values.

The mention of a particular illuminant (D50) leads me to believe that you are comparing colors in RGB color space. Or LAB color space, but essentially measuring the same thing - how far the computed values are from the original values.

That is - it's not so much a measurement of the sensor itself as its ability to distinguish colors after being processed by the confusion matrix.

The confusion matrix is generic - though tuned to a particular illuminant/training set of swatches. That is to say - no specific "color profile" has been applied, it's an attempt at a generic/standard workflow.

You can then measure the distance from the color swatch to the outputted value and compute the error.

You use Mean (average), Standard Deviation (to measure the range of errors) and Worst which is the single largest deviation.

This gives us a fairly complete picture of the color accuracy of each sensor. How close the sensor's RGB values (after the Compromise Matrix) are to "real world" colors.

Metameric Error is when multiple inputs can give the same output - that is, the sensor can't differentiate between colors.

I'm not sure how Metameric Error is calculated - it seems like it should be a discrete number somehow (# of measurements where the output is identical?). I'm not sure how your "error" is calculated in this instance.

But it sounds like you have a dataset of common Metameric swatches you can feed through the model.

Finally you "calibrate" (train) the confusion matrix on a set of standard color checker swatches and again measure the error. This is akin to calibrating a camera to a color checker.

This would mean what - that each color is pushed or pulled from the RAW file towards these swatches, which is essentially what calibration is.

Then you run some more sample swatches (it's unclear which of the 3 named sets of swatches is used in this test) and calculate the error.

Assuming the above is correct I have some questions.

Does the "training" on the Color Checker (or Natural color set) imply what I think it does - tweaking the confusion matrix? If so, how close is it to what I would expect Adobe software to produce when "calibrating" to a color checker?

What was the test set for the color checker "calibrated" test? Was it the color checker swatches or some other swatches?

When reading the charts - seeing as the Mean and Standard Deviation are highly correlated - which of the 3 sets should I be looking at to gauge camera color accuracy? Natural, Metameric or Color Checker? My guess is the first set - Natural.

Given that the color swatches cannot be measured by any camera in your dataset (none are close enough to the theoretical (Luther-Ives?) ideal) how did you generate them? Some other measurement equipment?

Are the color swatches a range of frequencies? E.g. if a leaf was measured, did it measure the entire range of frequencies produced by the leaf by reflection from the e.g. D50 light source?

Based on the Natural color set charts, it seems the Nikon D810 and D850 are the worst (most error) - does this mean they are the least accurate? What does that mean for metamerism?

What should I be taking away from this comparison for real-world photography? If accuracy was my concern, which camera should I choose? Or which set of charts should I use to gauge my buying decision?

For example - if my goal was fine art reproduction, which camera should I choose?

Do / how do Hue Twists play into this? E.g. on an HSV, as value increases hue and saturation change for certain (in camera & other) color profiles. Are they ignored or are they part of this? (And generally, I've been curious how calibration from a color checker may or may not affect hue twists.)

My stated suspicion was that Nikon (and the early Kodak made Leica cameras) were tuned for a certain look - some say Kodachrome-like colors. (Fully acknowledging there were multiple generations of Kodachrome over the decades.) And therefore purposefully less than the theoretical ideal.

In light of this suspicion, it's interesting that the Leica M8 was close to the theoretical ideal on the Natural swatches, but the worst (ignoring the Grasshopper) on the Metameric and Color Checker test - what am I to glean from this?

Is there anything we can imply about CFA dye "strength" - not just color separation but overall strength from these results? The hypothesis being that camera manufacturers purposefully weakened the dyes (to improve quantum efficiency) in an attempt to improve their "high ISO" noise ratings, which led to worse color accuracy, or at least shifted color response curves.

If you ever get the chance, the Phase One Trichromatic sensor would be one I'd be interested in seeing in a test like this. Since it's the only sensor I've seen where the color filter array dyes are touted in their marketing literature.

Is the reason no X-Trans sensors appear on this list because they weren't part of the dataset, or does the X-Trans sensor require radically different algorithms and is therefore not valid? (The Fuji S5 Pro and your offer to measure Sigma cameras implies this isn't the case...)

--
"no one should have a camera that can't play Candy Crush Saga."
https://www.instagram.com/sodiumstudio/
Camera JPG Portrait Shootout http://sodium.nyc/blog/2020/05/camera-jpg-portrait
 
You can then feed a series of theoretical color swatches into this SSF function and run it through a Compromise Matrix, which is akin to demosaicing an image to produce something like RGB values.
The conversion from camera raw space to colorimetric (CIE 1931 XYZ-referenced) space is independent of the demosaicing. The same math works for Foveon sensors and Bayer sensors.
The mention of a particular illuminant (D50) leads me to believe that you are comparing colors in RGB color space. Or LAB color space, but essentially measuring the same thing - how far the computed values are from the original values.
I'm measuring the errors in CIEL*a*b*.
That is - it's not so much a measurement of the sensor itself as its ability to distinguish colors after being processed by the confusion matrix.
You mean the compromise matrix, I think.
The confusion matrix is generic - though tuned to a particular illuminant/training set of swatches. That is to say - no specific "color profile" has been applied, it's an attempt at a generic/standard workflow.
It's aimed at accuracy. Most color profiles are not trying to be accurate; they're trying to be pleasing.
You can then measure the distance from the color swatch to the outputted value and compute the error.
I measure the distance in Lab between the Lab values of the samples and the Lab values of the colorimetric colors produced by the camera via the compromise matrix.
You use Mean (average), Standard Deviation (to measure the range of errors) and Worst which is the single largest deviation.
Yes.
This gives us a fairly complete picture of the color accuracy of each sensor. How close the sensor's RGB values (after the Compromise Matrix) are to "real world" colors.
Right.
Metameric Error is when multiple inputs can give the same output - that is, the sensor can't differentiate between colors.
In this case, it's the opposite. All of the samples are metamers -- they should resolve to the same color. As you can see, they don't.
I'm not sure how Metameric Error is calculated - it seems like it should be a discrete number somehow (# of measurements where the output is identical?). I'm not sure how your "error" is calculated in this instance.
The error is the distance from the right color.
But it sounds like you have a dataset of common Metameric swatches you can feed through the model.
I can make as many as I want, using basis functions derived from natural scenes or from color patches.
Finally you "calibrate" (train) the confusion matrix on a set of standard color checker swatches and again measure the error. This is akin to calibrating a camera to a color checker.
The optimization of the compromise matrix is performed before the metamers are run through the simulation. I can optimize it with any set of patches I want.
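To make the bookkeeping concrete, here is a minimal sketch of those summary statistics, using CIE76 delta E (plain Euclidean distance in Lab); the patch values are made up, and Jim may use a different delta E formula:

    import numpy as np

    def delta_e_76(lab1, lab2):
        """CIE76 color difference: Euclidean distance in CIELAB."""
        return np.linalg.norm(np.asarray(lab1) - np.asarray(lab2), axis=-1)

    # Hypothetical data: one row per patch, reference vs. camera-derived Lab.
    ref_lab = np.array([[50.0, 0.0, 0.0], [50.0, 0.0, 50.0], [50.0, 50.0, 0.0]])
    cam_lab = np.array([[51.2, 0.8, -0.5], [49.1, 1.3, 48.0], [50.6, 47.9, 2.2]])

    err = delta_e_76(ref_lab, cam_lab)
    print(f"mean {err.mean():.2f}  sd {err.std():.2f}  worst {err.max():.2f}")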
 
Then you run some more sample swatches (it's unclear which of the 3 named sets of swatches is used in this test) and calculate the error.
In this case, the model was trained on a metamer set and measured with a metamer set.
Assuming the above is correct I have some questions.

Does the "training" on the Color Checker (or Natural color set) imply what I think it does - tweaking the confusion matrix?
Yes.
If so, how close is it to what I would expect Adobe software to produce when "calibrating" to a color checker?
Depends on the color profile chosen. Not close at all for things like Vivid, Portrait, and Landscape.
What was the test set for the color checker "calibrated" test? Was it the color checker swatches or some other swatches?
One set was the CC24 swatches. One was a bunch of spectra found in nature, and one was metamers of those colors.
When reading the charts - seeing as the Mean and Standard Deviation are highly correlated - which of the 3 sets should I be looking at to gauge camera color accuracy? Natural, Metameric or Color Checker? My guess is the first set - Natural.
Depends on your goals. If you're shooting landscapes, probably natural. If you're shooting things for which the CC24 is a good proxy, use that.
Given that the color swatches cannot be measured by any camera in your dataset (none are close enough to the theoretical (Luther-Ives?) ideal) how did you generate them? Some other measurement equipment?
I generated them mathematically, by finding metamers of colors found in patch sets and in nature.
Are the color swatches a range of frequencies?
Yes, they are spectra.
E.g. if a leaf was measured, did it measure the entire range of frequencies produced by the leaf by reflection from the e.g. D50 light source?
I think I only used the wavelengths between 380 and 720 nm.
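For readers following along, the simulation step being discussed reduces to integrating each spectrum against the channel sensitivities over that wavelength range. A minimal sketch with stand-in Gaussian curves; none of these arrays are the actual SSFs, illuminant, or patch spectra:

    import numpy as np

    wl = np.arange(380, 721, 5)            # wavelength grid, nm
    reflectance = np.full(wl.shape, 0.5)   # stand-in patch spectrum
    illuminant = np.ones(wl.shape)         # stand-in for D50 power spectrum
    # Stand-in Gaussian channel sensitivities, not real camera SSFs.
    ssf = np.stack([np.exp(-(((wl - c) / 40.0) ** 2)) for c in (600, 540, 460)])

    stimulus = reflectance * illuminant    # light reaching the sensor
    raw = ssf @ stimulus                   # one raw value per channel
    print(raw)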
Based on the Natural color set charts, it seems the Nikon D810 and D850 are the worst (most error) - does this mean they are the least accurate? What does that mean for metamerism?
The Nikons don't look bad at all when trained on sets other than the natural metamer set.
What should I be taking away from this comparison for real-world photography? If accuracy was my concern, which camera should I choose? Or which set of charts should I use to gauge my buying decision?
This is probably not the best single test for making buying decisions.
For example - if my goal was fine art reproduction, which camera should I choose?
For fine art reproduction, the high road is to make the training set from the colorants in the media to be digitized. That avoids a whole bunch of metamers, and makes capture metameric error less likely.
Do / how do Hue Twists play into this? E.g. on an HSV, as value increases hue and saturation change for certain (in camera & other) color profiles. Are they ignored or are they part of this? (And generally, I've been curious how calibration from a color checker may or may not affect hue twists.)
There aren't any hue twists here, except those residual ones embodied in CIELab.
My stated suspicion was that Nikon (and the early Kodak made Leica cameras) were tuned for a certain look - some say Kodachrome-like colors. (Fully acknowledging there were multiple generations of Kodachrome over the decades.) And therefore purposefully less than the theoretical ideal.

In light of this suspicion, it's interesting that the Leica M8 was close to the theoretical ideal on the Natural swatches, but the worst (ignoring the Grasshopper) on the Metameric and Color Checker test - what am I to glean from this?
Beats me.
Is there anything we can imply about CFA dye "strength" - not just color separation but overall strength from these results?
No.
The hypothesis being that camera manufacturers purposefully weakened the dyes (to improve quantum efficiency) in an attempt to improve their "high ISO" noise ratings, which led to worse color accuracy, or at least shifted color response curves.
I don't buy into that.
If you ever get the chance, the Phase One Trichromatic sensor would be one I'd be interested in seeing in a test like this. Since it's the only sensor I've seen where the color filter array dyes are touted in their marketing literature.
Jack Hogan did a good job debunking that in his Strolls with my Dog blog.
Is the reason no X-Trans sensors appear on this list because they weren't part of the dataset,
Yes.
or does the X-Trans sensor require radically different algorithms and is therefore not valid?
No.
(The Fuji S5 Pro and your offer to measure Sigma cameras implies this isn't the case...)
 
You can then feed a series of theoretical color swatches into this SSF function and run it through a Compromise Matrix, which is akin to demosaicing an image to produce something like RGB values.
The conversion from camera raw space to colorimetric (CIE 1931 XYZ-referenced) space is independent of the demosaicing. The same math works for Foveon sensors and Bayer sensors.
Understood. I'm not sure entirely how it works, but understand that it does.
The mention of a particular illuminant (D50) leads me to believe that you are comparing colors in RGB color space. Or LAB color space, but essentially measuring the same thing - how far the computed values are from the original values.
I'm measuring the errors in CIEL*a*b*.
That is - it's not so much a measurement of the sensor itself as its ability to distinguish colors after being processed by the confusion matrix.
You mean the compromise matrix, I think.
The confusion matrix is generic - though tuned to a particular illuminant/training set of swatches. That is to say - no specific "color profile" has been applied, it's an attempt at a generic/standard workflow.
It's aimed at accuracy. Most color profiles are not trying to be accurate; they're trying to be pleasing.
Understood - reserving my question for a bit further down.
You can then measure the distance from the color swatch to the outputted value and compute the error.
I measure the distance in Lab between the Lab values of the samples and the Lab values of the colorimetric colors produced by the camera via the compromise matrix.
You use Mean (average), Standard Deviation (to measure the range of errors) and Worst which is the single largest deviation.
Yes.
This gives us a fairly complete picture of the color accuracy of each sensor. How close the sensor's RGB values (after the Compromise Matrix) are to "real world" colors.
Right.
Metameric Error is when multiple inputs can give the same output - that is, the sensor can't differentiate between colors.
In this case, it's the opposite. All of the samples are metamers -- they should resolve to the same color. As you can see, they don't.
Ah that makes more sense.
I'm not sure how Metameric Error is calculated - it seems like it should be a discrete number somehow (# of measurements where the output is identical?). I'm not sure how your "error" is calculated in this instance.
The error is the distance from the right color.
Is the distance from the right color the correct test here?

The question you're asking is "do two different inputs yield the same output". So the absolute error isn't as important as whether or not the two inputs produce nearly identical outputs.

Isn't the correct measurement how far the expected metamer pairs (or trios or whatever) are from EACH OTHER and not the expected output?
But it sounds like you have a dataset of common Metameric swatches you can feed through the model.
I can make as many as I want, using basis functions derived from natural scenes or from color patches.
Understood.
Finally you "calibrate" (train) the confusion matrix on a set of standard color checker swatches and again measure the error. This is akin to calibrating a camera to a color checker.
The optimization of the compromise matrix is performed before the metamers are run through the simulation. I can optimize it with any set of patches I want.
The optimization ("training"?) is a necessary step, correct? Since no two cameras have identical quantum efficiencies, some training of the compromise matrix is necessary.

<second comment pasted below to save on comments>
Then you run some more sample swatches (it's unclear which of the 3 named sets of swatches is used in this test) and calculate the error.
In this case, the model was trained on a metamer set and measured with a metamer set.
Assuming the above is correct I have some questions.

Does the "training" on the Color Checker (or Natural color set) imply what I think it does - tweaking the confusion matrix?
Yes.
If so, how close is it to what I would expect Adobe software to produce when "calibrating" to a color checker?
Depends on the color profile chosen. Not close at all for things like Vivid, Portrait, and Landscape.
Is color profile part of the compromise matrix or done after the compromise matrix?

It's been a while since I've bothered profiling a camera, from what I remember it shows up in the color profile section of Adobe Camera Raw.
What was the test set for the color checker "calibrated" test? Was it the color checker swatches or some other swatches?
One set was the CC24 swatches. One was a bunch of spectra found in nature, and one was metamers of those colors.
Maybe my question was confusing. You have training data and test data.

I understood the three tests to be
  • Natural Color Set Training Data, Natural Color Set Test Data
  • Natural Color Set Training Data, Metameric Test Data
  • CC24 Training Data, ???? Test Data
Is this correct, and if so what is the test data for the CC24 Training Data?
When reading the charts - seeing as the Mean and Standard Deviation are highly correlated - which of the 3 sets should I be looking at to gauge camera color accuracy? Natural, Metameric or Color Checker? My guess is the first set - Natural.
Depends on your goals. If you're shooting landscapes, probably natural. If you're shooting things for which the CC24 is a good proxy, use that.
Understood. Are skin tones included in the Natural dataset? Seeing as they do occur in nature...

It sounds like for portraits I should be looking at the CC24 charts since they have a high proportion of skin tones.
Given that the color swatches cannot be measured by any camera in your dataset (none are close enough to the theoretical (Luther-Ives?) ideal) how did you generate them? Some other measurement equipment?
I generated them mathematically, by finding metamers of colors found in patch sets and in nature.
Understood.
Are the color swatches a range of frequencies?
Yes, they are spectra.
Understood, thank you.
E.g. if a leaf was measured, did it measure the entire range of frequencies produced by the leaf by reflection from the e.g. D50 light source?
I think I only used the wavelengths between 380 and 720 nm.
I meant "is it a spectra" - you already answered.
Based on the Natural color set charts, it seems the Nikon D810 and D850 are the worst (most error) - does this mean they are the least accurate? What does that mean for metamerism?
The Nikons don't look bad at all trained on other sets but the natural metamer set.
Wait is it trained on the metamer set or tested on the metamer set?
What should I be taking away from this comparison for real-world photography? If accuracy was my concern, which camera should I choose? Or which set of charts should I use to gauge my buying decision?
This is probably not the best single test for making buying decisions.
Agree, I just meant that it could factor into a buying decision as one KPI among many.
For example - if my goal was fine art reproduction, which camera should I choose?
For fine art reproduction, the high road is to make the training set from the colorants in the media to be digitized. That avoids a whole bunch of metamers, and makes capture metameric error less likely.
Understood. Though in this hypothetical situation, there is still a camera - or set of cameras - that's better than others, correct?
Do / how do Hue Twists play into this? E.g. on an HSV, as value increases hue and saturation change for certain (in camera & other) color profiles. Are they ignored or are they part of this? (And generally, I've been curious how calibration from a color checker may or may not affect hue twists.)
There aren't any hue twists here, except those residual ones embodied in CIELab.
Understood. So hue twists are part of a "color profile" and this test excludes/ignores things like color profiles.
My stated suspicion was that Nikon (and the early Kodak made Leica cameras) were tuned for a certain look - some say Kodachrome-like colors. (Fully acknowledging there were multiple generations of Kodachrome over the decades.) And therefore purposefully less than the theoretical ideal.

In light of this suspicion, it's interesting that the Leica M8 was close to the theoretical ideal on the Natural swatches, but the worst (ignoring the Grasshopper) on the Metameric and Color Checker test - what am I to glean from this?
Beats me.
Fair.
Is there anything we can imply about CFA dye "strength" - not just color separation but overall strength from these results?
No.
The hypothesis being that camera manufacturers purposefully weakened the dyes (to improve quantum efficiency) in an attempt to improve their "high ISO" noise ratings, which lead to worse color accuracy, or at least shifted color response curves.
I don't buy into that.
Thanks.
If you ever get the chance, the Phase One Trichromatic sensor would be one I'd be interested in seeing in a test like this. Since it's the only sensor I've seen where the color filter array dyes are touted in their marketing literature.
Jack Hogan did a good job debunking that in his Strolls with my Dog blog.
Interesting - I'll take a look for it - it looks like a series of articles starting with this one:

https://www.strollswithmydog.com/phase-one-iq3-100mp-trichromatic-linear-color-i/

Is the reason no X-Trans sensors appear on this list because they weren't part of the dataset,
Yes.
or does the X-Trans sensor require radically different algorithms and is therefore not valid?
No.
(The Fuji S5 Pro and your offer to measure Sigma cameras implies this isn't the case...)
Thanks for taking the time to respond & post the associated charts.

It looks like there are 3 metamer sets.

L50, a0, b0

L50, a0, b50

L50, a50, b0

Getting back to my earlier question - would it make more sense to compute the average of the outputs and then measure the distance from that average? Rather than from the expected output value?

That is - what you're testing for is how closely the colors match each other (does the camera treat them as metamers), not how closely the colors match the expected output. Otherwise a specific camera can be off from all the expected output by the same amount in the same direction, but still treat them as metamers.

That is - it can be precise but inaccurate. Isn't what you want to measure, for this specific test, precision rather than accuracy?
 
Is the distance from the right color the correct test here?
It's only one of the tests I did. Look at the mean results if that's what you're interested in.
The question you're asking is "do two different inputs yield the same output". So the absolute error isn't as important as whether or not the two inputs produce nearly identical outputs.
Look at the standard deviation to get an idea of that.
Isn't the correct measurement how far the expected metamer pairs (or trios or whatever) are from EACH OTHER and not the expected output?
That's a different measurement but that would yield 2450 numbers.
Finally you "calibrate" (train) the confusion matrix on a set of standard color checker swatches and again measure the error. This is akin to calibrating a camera to a color checker.
The optimization of the compromise matrix is performed before the metamers are run through the simulation. I can optimize it with any set of patches I want.
The optimization ("training"?) is a necessary step, correct?
If you want an accurate compromise matrix.
Since no two cameras have identical quantum efficiencies, some training of the compromise matrix is necessary.
QE doesn't vary much by serial number.
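As a rough illustration of what "training" a compromise matrix can mean: the simplest version is a linear least-squares fit from camera responses to XYZ over a patch set. Jim's optimization minimizes error in Lab, which needs a nonlinear solver; the data below are random stand-ins:

    import numpy as np

    rng = np.random.default_rng(0)
    cam = rng.random((24, 3))   # stand-in camera responses for 24 patches
    xyz = rng.random((24, 3))   # stand-in reference XYZ for the same patches

    # Solve cam @ M.T ~= xyz for the 3x3 matrix M in the least-squares sense.
    M_T, *_ = np.linalg.lstsq(cam, xyz, rcond=None)
    M = M_T.T
    print(M)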
Does the "training" on the Color Checker (or Natural color set) imply what I think it does - tweaking the confusion matrix?
Yes.
If so, how close is it to what I would expect Adobe software to produce when "calibrating" to a color checker?
Depends on the color profile chosen. Not close at all for things like Vivid, Portrait, and Landscape.
Is color profile part of the compromise matrix or done after the compromise matrix?
The compromise matrix is part of most color profiles.
Maybe my question was confusing. You have training data and test data.

I understood the three tests to be
  • Natural Color Set Training Data, Natural Color Set Test Data
  • Natural Color Set Training Data, Metameric Test Data
  • CC24 Training Data, ???? Test Data
Is this correct, and if so what is the test data for the CC24 Training Data?
The natural metamers.
Understood. Are skin tones included in the Natural dataset? Seeing as they do occur in nature...
I don't remember.
Based on the Natural color set charts, it seems the Nikon D810 and D850 are the worst (most error) - does this mean they are the least accurate? What does that mean for metamerism?
The Nikons don't look bad at all when trained on sets other than the natural metamer set.
Wait is it trained on the metamer set or tested on the metamer set?
The Nikons don't look bad trained on the natural colors or the CC24. All the testing was on the metamer set.
For example - if my goal was fine art reproduction, which camera should I choose?
For fine art reproduction, the high road is to make the training set from the colorants in the media to be digitized. That avoids a whole bunch of metamers, and makes capture metameric error less likely.
Understood. Though in this hypothetical situation, there is still a camera - or set of cameras - that's better than others, correct?
I imagine that depends to some extent on the media being digitized.
It looks like there are 3 metamer sets.

L* = 50, a* = 0, b* = 0

L* = 50, a* = 0, b* = 50

L* = 50, a* = 50, b* = 0
In the referenced post, I only tested with one set. I thought it would be instructive to see several.
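For readers wondering how metamer sets like these can be constructed: one standard approach adds "metameric blacks" - spectra lying in the null space of the observer-times-illuminant matrix - to a base reflectance. A sketch with random stand-in data (not the actual sets used above):

```python
# Sketch: build a reflectance metamer via a metameric black.
import numpy as np

n = 31                                  # e.g. 400-700 nm in 10 nm steps
rng = np.random.default_rng(1)
A = rng.random((3, n))                  # stand-in: CMFs x illuminant

# Rows of Vt beyond the rank of A span its null space.
_, _, Vt = np.linalg.svd(A)
null_basis = Vt[3:]                     # (n - 3) metameric-black directions

base = np.full(n, 0.5)                  # flat 50% reflectance
metamer = base + 0.05 * null_basis[0]   # add a small metameric black

print(np.allclose(A @ base, A @ metamer))   # True: same tristimulus
```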
Getting back to my earlier question - would it make more sense to compute the average of the outputs and then measure the distance from that average, rather than from the expected output value?
It's another way to get a metric, if you're not interested in accuracy, but only in scatter.
That is - what you're testing for is how closely the colors match each other (does the camera treat them as metamers?), not how closely the colors match the expected output. Otherwise a specific camera could be off from all the expected outputs by the same amount in the same direction, but still treat them as metamers.

That is - it can be precise but inaccurate. Isn't precision, rather than accuracy, what you want to measure for this specific test?
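To illustrate the distinction, a small sketch of the two metrics with made-up outputs: mean distance to the reference value (accuracy) versus mean distance to the cloud's own centroid (scatter, i.e. precision):

```python
# Sketch: accuracy vs. scatter for a cloud of Lab outputs.
import numpy as np

lab_ref = np.array([50.0, 0.0, 0.0])          # expected output
rng = np.random.default_rng(2)
# Hypothetical outputs: offset from the reference, but tightly grouped.
lab_out = lab_ref + np.array([3.0, 2.0, -1.0]) \
          + 0.2 * rng.standard_normal((20, 3))

def de76(a, b):
    return np.linalg.norm(a - b, axis=-1)     # Delta E 1976

accuracy = de76(lab_out, lab_ref).mean()      # error vs. reference
scatter = de76(lab_out, lab_out.mean(axis=0)).mean()  # error vs. centroid

print(f"mean dE to reference: {accuracy:.2f}")  # large -> inaccurate
print(f"mean dE to centroid:  {scatter:.2f}")   # small -> precise
```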
 
There's a lot to unpack there. Let me see if I understand it correctly.

You have data sampled from a number of cameras, which produces a Spectral Sensitivity Function (SSF).

The SSF is a mathematical model of the sensor.

You can then feed a series of theoretical color swatches into this SSF function and run it through a Compromise Matrix, which is akin to demosaicing an image to produce something like RGB values.
The conversion from camera raw space to colorimetric (CIE 1931 XYZ-referenced) space is independent of the demosaicing. The same math works for Foveon sensors and Bayer sensors.
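As a rough sketch of the simulation pipeline being described (all shapes and values below are stand-ins, not measured SSFs):

```python
# Sketch: spectra -> camera raw via the SSFs, then raw -> XYZ
# via the 3x3 matrix. No demosaicing is involved.
import numpy as np

n = 31                                  # wavelength samples
rng = np.random.default_rng(3)
ssf = rng.random((3, n))                # stand-in SSFs (R, G, B rows)
illuminant = np.ones(n)                 # stand-in flat illuminant
reflectance = rng.random(n)             # one test swatch

# Raw triplet: integrate reflectance x illuminant against each SSF.
raw = ssf @ (illuminant * reflectance)

M = np.eye(3)                           # stand-in compromise matrix
xyz = M @ raw                           # colorimetric estimate
print(raw, xyz)
```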
The mention of a particular illuminant (D50) leads me to believe that you are comparing colors in an RGB color space - or in Lab color space - but essentially measuring the same thing: how far the computed values are from the original values.
I'm measuring the errors in CIEL*a*b*.
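For concreteness, one way such an error could be computed: convert both XYZ triplets to L*a*b* against a D50 white and take the Euclidean distance (Delta E 1976). The conversion below uses the standard CIELAB formulas; the sample XYZ values are made up:

```python
# Sketch: XYZ -> CIELAB (D50 white) and a Delta E 1976 error.
import numpy as np

WHITE_D50 = np.array([0.9642, 1.0000, 0.8249])

def xyz_to_lab(xyz, white=WHITE_D50):
    t = np.asarray(xyz, dtype=float) / white
    d = 6.0 / 29.0
    f = np.where(t > d**3, np.cbrt(t), t / (3 * d**2) + 4.0 / 29.0)
    return np.array([116.0 * f[1] - 16.0,        # L*
                     500.0 * (f[0] - f[1]),      # a*
                     200.0 * (f[1] - f[2])])     # b*

reference = xyz_to_lab([0.20, 0.18, 0.15])
computed = xyz_to_lab([0.21, 0.18, 0.14])
print("dE76:", np.linalg.norm(computed - reference))
```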
That is - it's not so much a measurement of the sensor itself as its ability to distinguish colors after being processed by the confusion matrix.
You mean the compromise matrix, I think.
I'm confused by the use of the term, which is not used by Sigma/Foveon, AFAIK.

I asked the ChatGPT thing, and it said:

A "compromise matrix" typically refers to a tool or framework used in decision-making processes, especially in situations where multiple criteria or factors need to be considered. The goal of a compromise matrix is to help individuals or groups make informed decisions by weighing various options against each other.

Here's a general idea of how a compromise matrix might work:

  1. Identify Criteria: List the criteria or factors that are relevant to the decision. These could be things like cost, time, quality, feasibility, etc.
  2. Assign Weights: Assign weights to each criterion based on its importance or priority in the decision-making process. This step helps to reflect the relative significance of each criterion.
  3. Evaluate Options: Evaluate each option or alternative against the criteria. This could involve assigning scores or ratings to each option for each criterion.
  4. Calculate Scores: Multiply the scores by the weights and calculate a total score for each option. This helps to quantify the overall performance of each option based on the established criteria and their respective weights.
  5. Identify Compromises: The compromise matrix helps to visualize trade-offs and compromises. It may highlight situations where one option excels in certain criteria but lags in others. The goal is to find a balanced solution that aligns with the priorities and objectives of the decision-maker.
  6. Make Informed Decisions: With the information provided by the compromise matrix, decision-makers can make more informed choices that consider the various factors and their relative importance.
Overall, a compromise matrix is a valuable tool in decision-making, especially when there are competing priorities or conflicting criteria that need to be taken into account. It provides a structured approach to analyze options and make decisions that align with broader goals or objectives.

I'm not sure the matrix in question is for decision-making or visualization, though.

In any case, my Sigma has at least three matrices, not just one.

cam_to_XYZ, WB XYZ, XYZ_to_Color_space.
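For what it's worth, here is a sketch of how three such matrices could chain in a raw pipeline: white balance in camera space, camera to XYZ, then XYZ to the output space. All values are placeholders except the standard linear sRGB (D65) matrix; this is a guess at the structure, not Sigma's actual metadata.

```python
# Sketch: chaining white balance, cam->XYZ, and XYZ->output matrices.
import numpy as np

raw = np.array([0.40, 0.35, 0.20])           # one raw triplet

wb = np.diag([1.8, 1.0, 1.5])                # white-balance gains
cam_to_xyz = np.array([[0.6, 0.3, 0.1],      # placeholder matrix
                       [0.2, 0.7, 0.1],
                       [0.0, 0.1, 0.9]])
xyz_to_srgb = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])  # standard sRGB

linear_srgb = xyz_to_srgb @ cam_to_xyz @ wb @ raw
print(linear_srgb)
```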
 
Does this help?

 
