Here are results for a bunch of cameras and a bunch of patch sets, with the camera trained and tested on the same set.
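To make the procedure concrete, here is a minimal sketch of what "training and testing on the same set" can look like: a 3x3 matrix fitted by least squares from white-balanced camera raw RGB to XYZ over a patch set, then scored on the same set with CIELAB delta E. This is not the author's actual code; the patch data arrays, the D50 white point, and the use of ΔE76 (rather than whatever ΔE metric the study uses) are assumptions.

```python
import numpy as np

def xyz_to_lab(xyz, white):
    """CIE 1976 L*a*b* from XYZ (N, 3), given the reference white (3,)."""
    t = xyz / white
    f = np.where(t > (6.0 / 29.0) ** 3, np.cbrt(t), t / (3 * (6.0 / 29.0) ** 2) + 4.0 / 29.0)
    fx, fy, fz = f[:, 0], f[:, 1], f[:, 2]
    return np.stack([116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)], axis=1)

def train_and_test(raw_rgb, ref_xyz, white):
    """raw_rgb, ref_xyz: (N, 3) camera responses and ground-truth XYZ for N patches."""
    M, *_ = np.linalg.lstsq(raw_rgb, ref_xyz, rcond=None)   # XYZ estimate = raw_rgb @ M
    de = np.linalg.norm(xyz_to_lab(raw_rgb @ M, white) - xyz_to_lab(ref_xyz, white), axis=1)
    return M, de.mean(), de.max()

# Usage with hypothetical data: M, mean_de, max_de = train_and_test(raw, xyz, d50_white)
```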


--
https://blog.kasson.com




The Sony data appear to have a resolution of 20nm, the Nikon 10nm, if I guessed that correctly from the figure. The 'residual jitter' of the Nikon spectra may suggest a 1% precision of successive measurement points, similar for example to fig 6 for the underlying silicon response in the "amazon" reference given by zzip. I assume that's what Bernard meant by "looks poor".
I am doing the sim using 1 nm lambda spacing. I get from coarser data to that by linear interpolation, which means the data are jagged. I could use splines or something similar to get smoother curves.
I think the reason it looks "poor" is that it clearly has lower wavelength resolution (10nm) than some other measurements - but I question whether it's actually low enough to be problematic. I suspect most CFAs are not so "notchy/spiky" that a 10nm resolution would be a problem.
I'm not sure it's a poor measurement. It was done by Weta VFX and is used in their spectral rendering system for motion pictures.
BTW: the Sony camera spectral response in the opening post looks like a poor measurement to me. I would not like to use such data for my follow-up work.
You can see how they measured it in their presentation slides for Physlight. https://drive.google.com/file/d/1a2jGciAmfH9yPdJCXNuNNEs_U07znp9C/view?usp=sharing
Some more info on Weta's Physlight system:
https://www.fxguide.com/fxfeatured/physlight-innovation-at-weta-digital/
https://github.com/wetadigital/physlight
I personally question how much difference the reduced resolution actually matters, including interpolation artifacts.
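For reference, a minimal sketch of the resampling step mentioned above: coarse 10 or 20 nm SSF samples taken to a 1 nm grid. Linear interpolation gives the jagged, piecewise-straight curves; a cubic spline is smoother but can overshoot near sharp CFA transitions. The data arrays here are placeholders, not the published Weta values.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def resample_ssf(coarse_nm, coarse_resp, fine_nm, method="linear"):
    if method == "linear":
        return np.interp(fine_nm, coarse_nm, coarse_resp)
    spline = CubicSpline(coarse_nm, coarse_resp)
    return np.clip(spline(fine_nm), 0.0, None)   # responsivity cannot go negative

fine_nm = np.arange(400, 701, 1)
# coarse_nm, red_resp = ...  # e.g. 400..700 in 10 nm steps from the published data
# red_1nm = resample_ssf(coarse_nm, red_resp, fine_nm, method="spline")
```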
The Sony data is definitely at 10nm, since you can get the original data from the links above - all of WETA's sets are published with 10nm resolution.
I think all the Weta measurements are at 10nm.
The jitter seen near the blue maximum and elsewhere in the Sony data suggests to me a precision of about 10% for the measurement points. Because the wavelength resolution is relatively coarse, there is no inherent way to judge whether this is poor sampling of real spectral variation or stochastic noise affecting the sample points. The experimenter should have paused on seeing this and improved the measurement to make the signature clear. Comparing with the smoothness of the Nikon spectrum, I suggest the measurement errors may be on the order of 10% for the Sony.
If you look up my camera response measurements in my dpreview tech gallery, you will see spectra with high oversampling (~0.1 nm spacing) and a resolution of about 10nm. In my older spectra you can just discern measurement noise on close inspection.
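One rough way to put a number on that kind of jitter, sketched below under the assumption that the real curve is smooth at the scale of the sampling: smooth the sampled channel and take the RMS residual relative to the channel peak. This conflates real fine structure with noise, so it is only an upper bound; the function names and data array are assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter

def jitter_estimate(resp, window=7, polyorder=3):
    # Smooth the sampled response and treat the residual scatter as "jitter".
    smooth = savgol_filter(resp, window, polyorder)
    return np.sqrt(np.mean((resp - smooth) ** 2)) / resp.max()

# e.g. jitter_estimate(sony_green_resp) around 0.1 would correspond to the ~10% figure.
```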


sorry, could not find that thread quickly, no link given. I would like to have a look.
And those last two are good things?
they have that fundamental, clean, easily defined property. They are also far off the spectra of the paint samples, and also outside the gamut of the rendering device.
You're talking about mapping of out-of-gamut colors, which is beyond the scope of this exercise.
But it should be possible to render spectral colors in a perceptually correct way. As any color has a well-defined spectral composition, the linear translation to its rendering is defined by an integral over the spectral transfer functions. Non-linearities in perception could also be handled, if they were necessary.
In the immediately previous thread, I did use spectral colors in the testing set. That's what produced the spectral locus plots. I dropped them because I found them of limited utility.
Spectral colors are easily produced and characterized using a diffraction grating as the key component. To get the spectra of other colors, or spectral intensities, one needs a photo-spectrometer.
I am way too busy now to undertake such an investigation in the near future.
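For reference, the "integral over the spectral transfer functions" mentioned above, written out as a sketch: the XYZ coordinates of any stimulus are inner products with the CIE colour-matching functions, and a spectral colour is just a narrow spike on the wavelength grid. The xbar/ybar/zbar arrays are assumed to be loaded on the same 1 nm grid as the stimulus (e.g. from the CIE 1931 2-degree tables).

```python
import numpy as np

def spectrum_to_xyz(wavelengths_nm, spectrum, xbar, ybar, zbar):
    # XYZ = integral of S(lambda) times each colour-matching function.
    X = np.trapz(spectrum * xbar, wavelengths_nm)
    Y = np.trapz(spectrum * ybar, wavelengths_nm)
    Z = np.trapz(spectrum * zbar, wavelengths_nm)
    return np.array([X, Y, Z])

# A spectral (monochromatic) colour is a narrow spike on that grid, e.g.:
# line_550 = np.where(np.abs(wl - 550) < 1, 1.0, 0.0)
# xyz_550 = spectrum_to_xyz(wl, line_550, xbar, ybar, zbar)
```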
I have trained the cameras using spectral signals. The results are weird when tested against the conventional patch sets. In addition, there is a problem with using spectral signals at both ends of the visible spectrum, where the luminance is way down. That means the Delta E's are very small and the matrix has little effect on the mean error, so those wavelengths are effectively ignored. If I weight the energy of the spectral set to compensate, the signal levels at the extremes become wholly unrealistic, and the results when run against a conventional test patch set are still weird, but in a different way.
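A small sketch of the weighting problem just described, assuming equal-energy spectral lines: the luminance (and hence the ΔE leverage) of each line follows ȳ(λ), so lines near 400 nm and 700 nm contribute almost nothing to a mean-ΔE objective, and forcing equal luminance requires enormous power at the spectrum ends. Function names and the floor value are illustrative, not from the study.

```python
import numpy as np

def line_luminances(ybar):
    # Relative Y of equal-energy monochromatic lines: proportional to ybar(lambda).
    return ybar / ybar.max()

def equal_luminance_powers(ybar, floor=1e-4):
    # Power each line would need for equal Y -- huge at the spectrum ends,
    # which is the "wholly unrealistic signal levels" problem described above.
    return ybar.max() / np.maximum(ybar, floor)
```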
Which is precisely what I am doing.
If you have a decent measurement of camera spectral response, you can calculate the raw response for any given spectrum.
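As a sketch of that calculation (assumed function and data names, with the channel SSFs, reflectance and illuminant all on a common 1 nm grid): the raw response to any reflectance under a chosen illuminant, D50 in this thread, is just three weighted integrals.

```python
import numpy as np

def raw_response(wavelengths_nm, reflectance, illuminant, ssf):
    # ssf: (3, N) spectral sensitivities for the R, G, B channels.
    stimulus = reflectance * illuminant
    return np.array([np.trapz(stimulus * ssf[c], wavelengths_nm) for c in range(3)])

# Normalizing by the response to a perfect reflector gives white-balanced raw values:
# wb_raw = raw_response(wl, refl, d50, ssf) / raw_response(wl, np.ones_like(wl), d50, ssf)
```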
It's easy to light the spectra with D50, and compute the colors. That's what I'm doing.
I don't understand that sentence. What is a procedural gap?
For example for the entire flower database linked by D Cox. Since I do not know the precise colors of these flowers, I would have a procedural gap judging the accuracy by which the color mapping makes them appear on my screen.
What's the point of that? Why not just do the mixing computationally?
One might think of re-merging part of the spectral colors using a mask to construct edge-of-gamut colors on a backlit matte screen.
Why not just compute the colors of the spectra? That's what I'm doing.
The resulting camera response could be viewed on the computer screen and compared to the matte screen side by side. I would be particularly interested in the red-green transition of narrow-band light, expecting some large errors.
Why not just keep doing it in the sim?
Perhaps a practical approach would be to generate near-edge-of-gamut colors using the old color enlarger with its dichroic (YMC) filters.
Why?
That can produce synthetic spectra consisting of 3 bands of ~100nm width with arbitrary intensity for each of the 3 bands. Photograph them, and study the color matrix needed to accurately reproduce those colors on the computer screen in side-by-side comparison with the enlarger light.
I would guess that comparing two backlit colored light sources would be easier than comparing screen color to reflective paint.
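A toy model of the three-band enlarger spectra proposed above, which could equally be run entirely in the sim; the rectangular 100 nm bands at 400-500, 500-600 and 600-700 nm are simplifying assumptions, since real dichroic passbands have sloped edges.

```python
import numpy as np

def three_band_spectrum(wavelengths_nm, blue_level, green_level, red_level):
    # Three ~100 nm wide blocks with freely chosen intensities.
    s = np.zeros_like(wavelengths_nm, dtype=float)
    s[(wavelengths_nm >= 400) & (wavelengths_nm < 500)] = blue_level
    s[(wavelengths_nm >= 500) & (wavelengths_nm < 600)] = green_level
    s[(wavelengths_nm >= 600) & (wavelengths_nm <= 700)] = red_level
    return s

# Its colour and the camera's raw response then follow from spectrum_to_xyz() and
# raw_response() above, so the whole comparison can also be done computationally.
```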
FYI, dcamprof (and, I assume, Lumariver, given the historical shared codebase of dcamprof and Lumariver and that all of the RawTherapee profiles Maciej generated show signs of this) does this by default too. I missed it last time through the document, but I just finally got around to my A7M4 profiling from shots I took over a month ago.
I figure that's why, around the middle of the last decade, Adobe started desaturating Forward Matrices (all positive coefficients) and then resaturating tones appropriately via Look Up Tables based on rendering intent (neutral, portrait, landscape - I would assume within the constraints of the desired output color space) once safely in XYZ.
I was poking around and found a post where Iliah Borg found an Adobe profile with identical values for FM1 and FM2 but different CM1/CM2.
The advantage of the matrix is that it's a very smooth transform and also very efficient. A disadvantage of the matrix is that some colors can clip. Therefore, in some cases we instead use an empty (null transform) matrix and perform the bulk of the color correction using tables. This helps to preserve detail in saturated colors. In the cases where we use an empty/null transform matrix, you'll see the same values used for both ForwardMatrix1 and ForwardMatrix2.
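A small numeric illustration of the clipping problem Eric Chan describes: a colour matrix with negative off-diagonal terms can push highly saturated raw values below zero (or above the encoding maximum), where they must be clipped. The matrix below is made up for the example and is not any camera's actual profile.

```python
import numpy as np

M = np.array([[ 1.6, -0.4, -0.2],
              [-0.3,  1.5, -0.2],
              [-0.1, -0.5,  1.6]])

saturated_raw = np.array([0.02, 0.10, 0.95])   # deep blue, near the raw gamut edge
out = M @ saturated_raw
print(out)                      # two channels go negative and one exceeds 1.0 ...
print(np.clip(out, 0.0, 1.0))   # ... and detail is lost once they are clipped
```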
Error is always a possibility when I am involved, though I am surprised to see an SSF by me there.
"Nikon measured by Jack" having worse performance than a 10,10,10 Gaussian seems strange.
Measurement error?
except that I would not rate the Sony camera spectrum as being of decent quality.
I may produce the color on screen, but I do not have ground truth of the real plant color to judge accuracy of reproduction.
The three corner colors of the screen gamut can be easily obtained (255 0 0 etc.) even for different gamuts on my EIZO color graphics monitor (native for calibration, sRGB and several more). These gamut-edge colors can be constructed from spectrally pure colors in a range of different ways, or also via the dichroic filters of the color enlarger. The screen gamut-edge color and the precisely synthesized matching color could be photographed. The resulting raw camera values likely would not match precisely, indicating a range for the residual metameric failure at the extremes of the gamut. There is one color matrix giving a precise match for all screen-generated colors, and a slightly different color matrix for each of the other realizations of the gamut with different base spectra.
I just like to compare to ground truth now and then. Only doing simulation is one sided (in fantasy-land ?).
Spectral measurements with filter sets are convenient; see the compact desktop setup above. But they have shortcomings: limited resolution defined by the set, no access to spectral oversampling, and a persisting need for light source + filter calibration. A calibration error translates to a systematic error in the spectral response. My guess was that a 10% error is a possibility. The upside is that the systematic error is constant for sensors measured with the same setup.
They show 36 filters. For ~10nm spacing you need about 31 for 400-700nm ((700 − 400)/10 + 1 = 31). Cost adds up with such filters. The image shows that transmission varies a lot and must be calibrated carefully. The illumination system with a halogen lamp remains simple.
I am guessing that's an old link; the current URL structure is http://forum.luminous-landscape.com/index.php instead of www/forum/index.php.
The Eric Chan quote above is from http://www.luminous-landscape.com/forum/index.php?topic=84129.msg680333#msg680333 , posted November 15, 2013.
Then link to camera curves that you think are good, and I’ll test to them. Otherwise, I don’t think you’re helping here. A lot of what this study is about has nothing to do with the particular curves used.
True, the goodness and accuracy of the camera curves changes nothing about the principle.
Doing my own measurements is beyond the scope of this work. Can you provide your data in json or csv form? I'd appreciate that.
I think the Nikon D5100 spectra you show in your opening post are OK, although they stop at response curves normalized to 1 at the peak. Since the photon flux at the peak is not needed for that normalization, there is no evidence that it was measured across the spectrum and properly calibrated out of the sensor response.
An older good measurement, going all the way to quantum efficiency. It uses a Jobin-Yvon tunable diffraction-grating monochromator, an integrating sphere and an Oriel photo-spectrometer as the critical hardware.
Of course, I think my work featuring the D800, D7200 and D500 (and the D850 in my tech gallery) qualifies as decent, despite all the hate mail that it stirred. You may judge the substance of the objections yourself. This work uses a diffraction grating and an i1Studio photo-spectrometer as the critical hardware.
With the level of investment in photographic science that I glean from your postings, you could easily copy and better the setup linked by cameronrad. You could measure the photon flux at the sensor for any camera hooked up, using an i1Studio photo-spectrometer as the minimum investment for this part. Heck, you might be willing to spend the money on the ~5nm-spaced 10nm narrow-band filters for oversampling, as discussed with cameronrad. The small 12.5mm interference filters would serve the purpose perfectly.
That's an interesting link, I don't remember seeing it before - good find, cameronrad. Their SSFs are not bad, but I agree with you that there is probably more play in their system than desirable. Take a look, for instance, near the peak of the greens below for a D810 and D850. Filtered dyes in the center of the range do not look like that - plus such a peaky response would be undesirable. It suggests a systematic error.



There's a link on that page:
PS There is always ol' camspec SSF data around.
I was too, since I didn't remember you ever talking about doing SSF measurements.
I think it may be, and I think he's basically confirmed your doubts about the setup.
I once used a science-project spectrometer to read the raw values captured by a Nikon D90 and D610 mounting a 16-85mm/DX lens. The subject was a CC24 Passport Photo's gray card in my living room, illuminated by light from the sun after reflection by a mirror through an open window. It was early in my journey of color science discovery, so I did not understand the results well, nor the implications of my poor setup.
For instance, to be comparable to the others here, one would first need to back out the lens, the mirror and the gray target, then normalize the result by the unknown illuminant. And even then ... I hope this is not the referenced data, Jim?
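Roughly what "backing out" that measurement chain would involve, as a sketch with assumed data arrays on a common wavelength grid; in practice the illuminant (sunlight via a mirror through a window) was not measured, which is why the result cannot be compared directly with the other SSFs here.

```python
import numpy as np

def back_out_chain(measured, illuminant, mirror_refl, card_refl, lens_trans, eps=1e-6):
    # The recorded per-wavelength raw signal is the product of the illuminant, the
    # mirror reflectance, the grey-card reflectance, the lens transmission and the
    # sensor/CFA response; dividing the known factors back out leaves the SSF.
    chain = illuminant * mirror_refl * card_refl * lens_trans
    return measured / np.maximum(chain, eps)   # eps guards against divide-by-zero
```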
I was JUST thinking about pointing out Glenn's work - you can deep-dive his methodology and development history starting at https://discuss.pixls.us/t/the-ques...vity-functions-ssfs-and-camera-profiles/18002
You can find some detail of my experiment here:
https://www.strollswithmydog.com/bayer-cfa-spectral-power-distribution/
Jack
PS This reminds me that Glenn Butcher has put some effort into producing a number of SSFs (Nikon D3500, D7000 and Z6) via the controlled diffraction grating procedure used by OpenFilmTools. They are here .