And this (not your reaching a higher level, but the large overlap of the layer responses) is the reason I claimed, in one of the now-full threads, that the colours are not (purely) additive:
Look at the 1931 CIE XYZ curves. They are purely additive, and there is considerable overlap.
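Here is what "purely additive" means in practice, as a minimal numpy sketch (the CMF values below are random placeholders, not the real tabulated curves): the XYZ tristimulus values are just a weighted sum of the stimulus spectrum against the color-matching functions, so the XYZ of a mixture is the sum of the XYZs of its parts, overlap or no overlap.

    import numpy as np

    # Wavelength grid, 400-700 nm in 10 nm steps.
    wl = np.arange(400, 701, 10)

    # Placeholder for the 1931 CIE color-matching functions, shape (31, 3).
    # A real calculation would use the tabulated x-bar, y-bar, z-bar values.
    cmf = np.random.rand(len(wl), 3)

    def xyz_from_spectrum(spd):
        # Tristimulus values are a plain weighted sum (a discretized integral)
        # of the spectral power distribution against the CMFs.
        return cmf.T @ spd

    # Additivity: the XYZ of a mixture equals the sum of the components' XYZ.
    a = np.random.rand(len(wl))
    b = np.random.rand(len(wl))
    assert np.allclose(xyz_from_spectrum(a + b),
                       xyz_from_spectrum(a) + xyz_from_spectrum(b))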
Thanks for responding, and for having the patience, Jim. Maybe, in trying to simplify my homemade English, I am formulating something other than what I am actually after. Let me retry, sorry if tedious:
It is not clear to me that one can get to the 1931 CIE curves by doing a linear transformation of the responses of the Foveon chip.
That is correct. The Foveon sensors do not pass the Luther-Ives criterion.
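One way to make that concrete, as a sketch using made-up sensitivity data rather than actual Foveon or CIE curves: find the best 3x3 matrix taking the camera's three channel sensitivities to the CMFs and look at what is left over. A sensor that satisfied the Luther-Ives criterion would leave essentially no residual; real sensors, Foveon included, leave some.

    import numpy as np

    wl = np.arange(400, 701, 10)
    cmf = np.random.rand(len(wl), 3)   # placeholder CIE 1931 CMFs
    ssf = np.random.rand(len(wl), 3)   # placeholder camera channel sensitivities

    # Least-squares 3x3 matrix M minimizing || ssf @ M - cmf ||.
    M, *_ = np.linalg.lstsq(ssf, cmf, rcond=None)

    residual = cmf - ssf @ M
    print("RMS residual:", np.sqrt(np.mean(residual ** 2)))
    # Zero residual would mean the sensitivities are an exact linear transform
    # of the CMFs (Luther-Ives satisfied); a nonzero residual means they are not.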
But are they better than CFA sensors? Foveons seem better than at least Nikons at yellow and violet - there have been a couple of threads about this here and in the Nikon Z forum.
We've talked a lot about sharpening, but not much about color accuracy. In general I think color isn't talked about enough in the camera community.
Would you like me to do an analysis of the accuracy of Foveon colors? Thanks to Ted, I now have the sensitivity curves, but I'm not sure for what camera they apply. It will take a morning (or maybe a whole day) to do the work, and I don't want to waste my time if nobody's interested.
The photos at the top of this thread are good enough for my purposes.
https://www.dpreview.com/forums/thread/4723273
I don't think they say anything at all about the accuracy of Foveon colors.
For my purposes, accuracy isn't too interesting to me - at the end of the day I just want to take photos that I like. A camera can be less accurate but have a more pleasing output.
I have a theoretical interest in - say - how the dyes in a CFA contribute to color interpretation. And I have suspicions on how each manufacturer tunes the dyes to arrive at that camera's colors.
But I don't know what I would do with the accuracy data. I'm not doing fine art reproduction.
If you have any links to research I can read up on - especially comparing different cameras or how the dyes in the CFA contribute to color interpretation I'd be interested in reading up.
I suspect Nikon tuned their dyes to - say - produce pleasing skin tones in natural light situations (based on personal experience), and Canon maybe tuned their dyes for more realistic sky colors (based on rumors I read), and there's a bit more color variation in skin tones in Canon cameras as a result (which I prefer for studio photography as a base to start editing from).
But honestly I have no idea what their intention was, what their dyes actually were designed to do and how that affects the final image beyond just my personal experience of working with these cameras.
So - without benchmarks/comps to other cameras I'm not sure a "Foveon color accuracy" study would mean much to me, and even if I had those comps - I'm still going to reach for the cameras I reach for in various situations based on personal experience of working with those cameras/files.
Thanks for the offer - if anyone else is interested I can let them chime in.
I can supply benchmarks.
One thing you should consider: cameras that are less accurate tend to have more capture metameric error:
https://blog.kasson.com/the-last-word/observer-metameric-error-in-simulated-cameras/
There's a lot to unpack there. Let me see if I understand it correctly.
You have data sampled from a number of cameras, which produces a Spectral Sensitivity Function (SSF).
The SSF is a mathematical model of the sensor.
You can then feed a series of theoretical color swatches into this SSF function and run it through a Compromise Matrix, which is akin to demosaicing an image to produce something like RGB values.
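In rough numpy terms, my mental model of that step is something like the sketch below (all the data are placeholders, and this is my reading, not necessarily exactly what you did): each swatch's reflectance is multiplied by the illuminant, integrated against the camera SSFs to get raw RGB, and a single 3x3 compromise matrix maps that to estimated XYZ.

    import numpy as np

    wl = np.arange(400, 701, 10)
    illuminant = np.random.rand(len(wl))         # placeholder D50 power distribution
    reflectances = np.random.rand(50, len(wl))   # placeholder swatch reflectances
    ssf = np.random.rand(len(wl), 3)             # placeholder camera sensitivities (SSF)
    M = np.random.rand(3, 3)                     # placeholder compromise matrix

    # Camera raw RGB: reflectance times illuminant, integrated against each channel.
    stimuli = reflectances * illuminant          # shape (50, 31)
    camera_rgb = stimuli @ ssf                   # shape (50, 3)

    # One fixed 3x3 compromise matrix maps raw RGB to estimated XYZ for every swatch.
    xyz_estimated = camera_rgb @ M.T             # shape (50, 3)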
The mention of a particular illuminant (D50) leads me to believe that you are comparing colors in an RGB color space, or perhaps LAB - but essentially measuring the same thing: how far the computed values are from the original values.
That is - it's not so much a measurement of the sensor itself as of its ability to distinguish colors after being processed by the compromise matrix.
The compromise matrix is generic - though tuned to a particular illuminant/training set of swatches. That is to say, no specific "color profile" has been applied; it's an attempt at a generic/standard workflow.
You can then measure the distance from the color swatch to the outputted value and compute the error.
You use Mean (average), Standard Deviation (to measure the spread of errors), and Worst, which is the single largest deviation.
This gives us a fairly complete picture of the color accuracy of each sensor - how close the sensor's RGB values (after the Compromise Matrix) are to "real world" colors.
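So the error numbers would come out of something like the sketch below (again a guess on my part - assuming plain CIELAB delta-E, though the post may use delta-E 2000 or similar; the data are placeholders):

    import numpy as np

    def xyz_to_lab(xyz, white):
        # Standard CIE XYZ -> L*a*b* conversion relative to a reference white.
        t = xyz / white
        f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
        L = 116 * f[..., 1] - 16
        a = 500 * (f[..., 0] - f[..., 1])
        b = 200 * (f[..., 1] - f[..., 2])
        return np.stack([L, a, b], axis=-1)

    white = np.array([0.9642, 1.0, 0.8249])   # approximate D50 white point
    xyz_true = np.random.rand(50, 3)          # placeholder reference swatch XYZ
    xyz_est = np.random.rand(50, 3)           # placeholder camera-estimated XYZ

    # Delta-E*ab is just the Euclidean distance in Lab.
    de = np.linalg.norm(xyz_to_lab(xyz_true, white) - xyz_to_lab(xyz_est, white), axis=1)
    print("mean:", de.mean(), "std:", de.std(), "worst:", de.max())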
Metameric Error is when multiple inputs can give the same output - that is, the sensor can't differentiate between colors.
I'm not sure how Metameric Error is calculated - it seems like it should be a discrete number somehow (# of measurements where the output is identical?). I'm not sure how the "error" is calculated in this instance.
But it sounds like you have a dataset of common Metameric swatches you can feed through the model.
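The way I picture the underlying idea (just the concept, surely not your actual calculation): take a spectrum, add a "metameric black" - a spectrum the standard observer integrates to zero - and you get a pair that looks identical to the observer but generally not to the camera. Placeholder data again, and ignoring the non-negativity of real reflectances:

    import numpy as np

    wl = np.arange(400, 701, 10)
    cmf = np.random.rand(len(wl), 3)   # placeholder CIE 1931 CMFs
    ssf = np.random.rand(len(wl), 3)   # placeholder camera sensitivities

    # A "metameric black" is any spectrum in the null space of the CMFs,
    # i.e. one the standard observer integrates to zero.
    _, _, vt = np.linalg.svd(cmf.T)
    black = vt[-1]                      # cmf.T @ black is (numerically) zero

    base = np.random.rand(len(wl))
    pair = base + 0.2 * black           # same XYZ as base, different spectrum

    print("observer difference:", cmf.T @ base - cmf.T @ pair)   # ~ 0
    print("camera difference:  ", ssf.T @ base - ssf.T @ pair)   # generally not 0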
Finally you "calibrate" (train) the compromise matrix on a set of standard color checker swatches and again measure the error. This is akin to calibrating a camera to a color checker.
This would mean - what - that each color from the RAW file is pushed or pulled towards these swatches, which is essentially what calibration is.
Then you run some more sample swatches (it's unclear which of the 3 named sets of swatches is used in this test) and calculate the error.
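If the "calibration" is what I think it is, it amounts to a least-squares fit of the 3x3 matrix to a training set, something like the sketch below (placeholder data; the real optimization may well be done in Lab or with a nonlinear error metric rather than on raw XYZ):

    import numpy as np

    # Placeholder training data: camera raw RGB and reference XYZ for 24 swatches.
    camera_rgb = np.random.rand(24, 3)
    xyz_ref = np.random.rand(24, 3)

    # Fit the 3x3 compromise matrix by least squares: camera_rgb @ M ~ xyz_ref.
    M, *_ = np.linalg.lstsq(camera_rgb, xyz_ref, rcond=None)

    # The "calibrated" matrix would then be applied to a separate test set of
    # swatches, and the color error measured there.
    xyz_est = camera_rgb @ M   # applied back to the training set, just for illustration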
Assuming the above is correct I have some questions.
Does the "training" on the Color Checker (or Natural color set) imply what I think it does - tweaking the compromise matrix? If so, how close is it to what I would expect Adobe software to produce when "calibrating" to a color checker?
What was the test set for the color checker "calibrated" test? Was it the color checker swatches or some other swatches?
When reading the charts - seeing as the Mean and Standard Deviation are highly correlated - which of the 3 sets should I be looking at to gauge camera color accuracy? Natural, Metameric or Color Checker? My guess is the first set - Natural.
Given that the color swatches cannot be measured accurately by any camera in your dataset (none are close enough to the theoretical (Luther-Ives?) ideal), how did you generate them? Some other measurement equipment?
Are the color swatches measured across a range of wavelengths? E.g. if a leaf was measured, did the measurement capture the entire range of wavelengths reflected by the leaf under, say, a D50 light source?
Based on the Natural color set charts, it seems the Nikon D810 and D850 are the worst (most error) - does this mean they are the least accurate? What does that mean for metamerism?
What should I be taking away from this comparison for real-world photography? If accuracy was my concern, which camera should I choose? Or which set of charts should I use to gauge my buying decision?
For example - if my goal was fine art reproduction, which camera should I choose?
Do / how do Hue Twists play into this? E.g. in HSV terms, as value increases, hue and saturation change for certain (in-camera and other) color profiles. Are they ignored or are they part of this? (And generally, I've been curious how calibration from a color checker may or may not affect hue twists.)
My stated suspicion was that Nikon (and the early Kodak-made Leica cameras) were tuned for a certain look - some say Kodachrome-like colors. (Fully acknowledging there were multiple generations of Kodachrome over the decades.) And therefore purposefully less accurate than the theoretical ideal.
In light of this suspicion, it's interesting that the Leica M8 was close to the theoretical ideal on the Natural swatches, but the worst (ignoring the Grasshopper) on the Metameric and Color Checker test - what am I to glean from this?
Is there anything we can infer about CFA dye "strength" from these results - not just color separation but overall strength? The hypothesis being that camera manufacturers purposefully weakened the dyes (to improve quantum efficiency) in an attempt to improve their "high ISO" noise ratings, which led to worse color accuracy, or at least shifted color response curves.
If you ever get the chance, the Phase One Trichromatic sensor would be one I'd be interested in seeing in a test like this. Since it's the only sensor I've seen where the color filter array dyes are touted in their marketing literature.
Is the reason no X-Trans sensors appear on this list that they weren't part of the dataset, or does the X-Trans sensor require radically different algorithms and is therefore not valid? (The Fuji S5 Pro's presence and your offer to measure Sigma cameras imply this isn't the case...)
--
"no one should have a camera that can't play Candy Crush Saga."
https://www.instagram.com/sodiumstudio/
Camera JPG Portrait Shootout
http://sodium.nyc/blog/2020/05/camera-jpg-portrait