Using color profiles on spectra well outside of training set

Here are results for a bunch of cameras and a bunch of patch sets, with the camera trained and tested on the same set.

[results charts]

"Nikon measured by Jack" having worse performance than a 10,10,10 Gaussian seems strange.

Measurement error?
 
BTW: the Sony camera spectral response in the opening post looks like a poor measurement to me. I would not like to use such data for my follow-up work.
I'm not sure it's a poor measurement. It was done by Weta VFX and is used in their spectral rendering system for motion pictures.

You can see how they measured it in their presentation slides for Physlight. https://drive.google.com/file/d/1a2jGciAmfH9yPdJCXNuNNEs_U07znp9C/view?usp=sharing

Some more info on Weta's Physlight system:

https://www.fxguide.com/fxfeatured/physlight-innovation-at-weta-digital/

https://github.com/wetadigital/physlight
I think the reason it looks "poor" is because it clearly has lower wavelength resolution (10nm) than some other measurements - but I question whether it's actually low enough to be problematic. I suspect most CFAs are not going to be so "notchy/spiky" that a 10nm resolution becomes a problem.
I am doing the sim using 1 nm lambda spacing. I get from coarser data to that grid by linear interpolation. That means the data are jagged. I could use splines or something similar to get smoother curves.
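For what it's worth, here is a minimal sketch of that upsampling step. Everything in it is a made-up placeholder (a 10 nm grid and a Gaussian-ish channel), and SciPy's PCHIP interpolator is just one example of the "splines or something similar" option that stays smooth without overshooting near a peak:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Hypothetical coarse SSF: 10 nm spacing, normalized to peak = 1.
lam_coarse = np.arange(400, 701, 10)                          # wavelengths, nm
ssf_coarse = np.exp(-0.5 * ((lam_coarse - 530) / 35.0) ** 2)  # placeholder green channel

lam_fine = np.arange(400, 701, 1)  # the 1 nm grid used in the sim

# Linear interpolation: piecewise straight segments, hence the "jagged" look.
ssf_linear = np.interp(lam_fine, lam_coarse, ssf_coarse)

# PCHIP: smooth and shape-preserving, with no ringing at the peak.
ssf_pchip = PchipInterpolator(lam_coarse, ssf_coarse)(lam_fine)
```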
I assume that's what Bernard meant by "looks poor"

I personally question how much difference the reduced resolution actually matters, including interpolation artifacts.
The Sony data appear to have a resolution of 20nm, the Nikon 10nm, if I guessed that correctly from the figure. The 'residual jitter' of the Nikon spectra may suggest a 1% precision of successive measurement points, similar for example to fig. 6 for the underlying silicon response in the "amazon" reference given by zzip.

The jitter as seen near the blue maximum and elsewhere in the Sony suggests to me a precision of about 10% for the measurement points. Because the lambda resolution is relatively coarse, there is no inherent handle to judge whether this is poor sampling of real spectral variations or stochastic noise affecting the sample points. The experimenter should have paused on seeing this and improved the measurement to make the signature clear. Comparing to the smoothness of the Nikon spectrum, I suggest that the measurement errors may be on the order of 10% for the Sony.

If you look up my camera response measurements in my dpreview tech gallery, you see spectra with high oversampling (~0.1 nm spacing) and a resolution of about 10nm. In my older spectra you can just discern measurement noise on a close look.
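Dense oversampling like that also lets one put a number on the jitter: fit a smooth curve on a scale finer than the optical resolution and look at the relative residual. A sketch of that idea on synthetic data (the 1% noise level is only an assumption for illustration, not a measured value):

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic stand-in for an oversampled measurement: 0.1 nm spacing,
# ~10 nm optical resolution, 1% multiplicative point-to-point jitter.
lam = np.arange(400.0, 700.0, 0.1)
clean = np.exp(-0.5 * ((lam - 460) / 30.0) ** 2)
noisy = clean * (1 + 0.01 * np.random.randn(lam.size))

# Smooth on a ~2 nm scale, well below the 10 nm optical resolution, so the
# real spectral shape survives and only point-to-point jitter is removed.
smooth = savgol_filter(noisy, window_length=21, polyorder=3)

# Relative RMS residual, evaluated only where there is real signal,
# estimates the per-point measurement precision.
mask = smooth > 0.05 * smooth.max()
rel_noise = np.std((noisy[mask] - smooth[mask]) / smooth[mask])
print(f"estimated per-point jitter: {100 * rel_noise:.1f}%")
```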
 
The Sony data appear to have a resolution of 20nm, the Nikon 10nm, if I guessed that correctly from the figure. The 'residual jitter' of the Nikon spectra may suggest a 1% precision of successive measurement points, similar for example to fig. 6 for the underlying silicon response in the "amazon" reference given by zzip.
The Sony data are definitely at 10nm, since you can get the original data from the links above - all of WETA's sets are published with 10nm resolution.

I don't know if that's because of limitations of their measurement system, or possibly because they degraded the publicly available data in that github repository by reducing the resolution (since high-quality data is a competitive advantage). I need to pull up that Google Drive link tonight.
 
The Sony data appear to have a resolution of 20nm, the Nikon 10nm, if I guessed that correctly from the figure. The 'residual jitter' of the Nikon spectra may suggest a 1% precision of successive measurement points, similar for example to fig. 6 for the underlying silicon response in the "amazon" reference given by zzip.
I think all the Weta measurements are at 10nm.

[images: slides of the filter-based measurement setup]
 
they have that fundamental, clean, easily defined property. They are also far from the spectra of the paint samples. Also outside the gamut of the rendering device.
And those last two are good things?
But it should be possible to render spectral colors in a perceptually correct way. As any color has a well-defined spectral composition, the linear translation to its rendering is defined by an integral over the spectral transfer functions. Non-linearities in perception could also be handled, if they were necessary.
You're talking about mapping of out of gamut colors, which is beyond the scope of this exercise.
Spectral colors are easily produced and characterized using a diffraction grating as the key component. To get the spectra of other colors, or spectral intensities, one needs a photo-spectrometer.

I am way too busy for now to engage in such an investigation in the near future.

BTW: the Sony camera spectral response in the opening post looks like a poor measurement to me. I would not like to use such data for my follow-up work.
In the immediately previous thread, I did use spectral colors in the testing set. That's what produced the spectral locus plots. I dropped them because I found them of limited utility.
Sorry, I could not find that thread quickly; no link was given. I would like to have a look.

If you have a decent measurement of camera spectral response, you can calculate the raw response for any given spectrum.
Which is precisely what I am doing.
For example for the entire flower database linked by D Cox. Since I do not know the precise colors of these flowers,
It's easy to light the spectra with D50, and compute the colors. That's what I'm doing.
I would have a procedural gap judging the accuracy by which the color mapping makes them appear on my screen.
I don't understand that sentence. What is a procedural gap?
One might think of re-merging part of the spectral colors using a mask to construct edge-of-gamut colors on a backlit matte screen.
What's the point of that? Why not just do the mixing computationally?
The resulting camera response could be viewed on the computer screen and compared to the matte screen side by side. I would be particularly interested in the red-green transition of narrow-band light, expecting some large errors.
Why not just compute the colors of the spectra? That's what I'm doing.
Perhaps a practical approach would be to generate near edge-of-gamut colors using the old color enlarger with its dichroic (YMC) filters.
Why not just keep doing it in the sim?
That can produce synthetic spectra consisting of 3 bands of ~100nm width with arbitrary intensity for each of the 3 bands. Photograph, and study the color matrix needed to accurately reproduce those colors on the computer screen in side-by-side comparison with the enlarger light.
Why?
I would guess that comparing two backlit colored light sources would be easier than comparing screen color to reflective paint.
I have trained the cameras using spectral signals. The results are weird when tested against the conventional patch sets. In addition, there is a problem with using spectral signals that occurs at both ends of the visible spectrum, where the luminance is way down. That means that the Delta E's are very small and the matrix has little effect on the mean error, so those wavelengths are effectively ignored. If I weight the energy of the spectral set to compensate, then the signal levels at the extremes are wholly unrealistic, and the results when run against a conventional test patch set are still weird, but in a different way.
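For readers following along, the general shape of that computation looks something like the sketch below. This is not Jim's actual code: the array names are placeholders, and a plain weighted least-squares fit in XYZ stands in for a true mean-Delta-E minimization, which is nonlinear and needs an iterative optimizer:

```python
import numpy as np

def training_data(refl, d50, ssf, cmf):
    """refl: patches x N reflectances; d50: illuminant SPD (N,);
    ssf: 3 x N camera curves; cmf: 3 x N observer CMFs; common 1 nm grid."""
    stim = refl * d50                        # patch spectra lit by D50
    rgb = stim @ ssf.T                       # simulated camera raw triplets
    xyz = stim @ cmf.T                       # ground-truth tristimulus values
    return rgb, xyz / (d50 * cmf[1]).sum()   # normalize so Y(white) = 1

def fit_matrix(rgb, xyz, weights=None):
    """Least-squares fit of the 3x3 matrix M with xyz ~ rgb @ M.T.
    Per-patch weights are one handle on the band-edge problem noted above:
    without them, dim spectral patches barely move the mean error."""
    w = np.ones(len(rgb)) if weights is None else np.asarray(weights)
    sw = np.sqrt(w)[:, None]
    M, *_ = np.linalg.lstsq(rgb * sw, xyz * sw, rcond=None)
    return M.T
```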
 
I figure that's why, around the middle of the last decade, Adobe started desaturating Forward Matrices (all positive coefficients), then resaturating tones appropriately via Look Up Tables based on rendering intent (neutral, portrait, landscape - I would assume within the constraints of the desired output color space) once safely in XYZ.
FYI, dcamprof does this by default too (and I assume Lumariver does as well, given the historical shared codebase and that all of the RawTherapee profiles Maciej generated show signs of this). I missed it last time through the document, but I finally got around to my A7M4 profiling from shots I took over a month ago.

https://rawtherapee.com/mirror/dcamprof/dcamprof.html#cm_and_fm

In the intermediary JSON, you can see both the original ForwardMatrix and the "desaturated" LUTMatrix - This is even more aggressive than what's described in the Matrix Optimization section I linked to a few days ago.

What actually gets saved to the DCP profile as ForwardMatrix is the LUTMatrix.

From the intermediary JSON:

"ColorMatrix1": [
[ 0.552690, -0.107726, -0.036461 ],
[ -0.522859, 1.253366, 0.304515 ],
[ -0.096603, 0.158188, 0.737913 ]
],
"ForwardMatrixWhitebalance1": [ 0.393215, 1.000000, 0.668108 ],
"ForwardMatrix1": [
[ 0.802058, 0.046080, 0.116081 ],
[ 0.348307, 0.768362, -0.116669 ],
[ 0.056327, -0.250920, 1.019795 ]
],
"LUTMatrix1": [
[ 0.672506, 0.122211, 0.169502 ],
[ 0.306713, 0.643274, 0.050014 ],
[ 0.000079, 0.001888, 0.823234 ]
],

From the output DCP:

Color Matrix 1 : 0.5527 -0.1077 -0.0365 -0.5229 1.2534 0.3045 -0.0966 0.1582 0.7379
Forward Matrix 1 : 0.6726 0.1222 0.1695 0.3067 0.6433 0.05 0.0001 0.0019 0.8231

Odd that 0.823234 got rounded to 0.8231...
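Incidentally, both matrices in that JSON obey the DNG convention that a ForwardMatrix maps white-balanced camera RGB to XYZ under D50, so a unit camera neutral must land on the D50 white point (XYZ ≈ 0.9642, 1.0000, 0.8249); the desaturated LUTMatrix differs only in having no negative coefficients. A quick sanity check on the posted values:

```python
import numpy as np

# Matrices copied from the intermediary JSON above.
fm = np.array([[0.802058,  0.046080,  0.116081],
               [0.348307,  0.768362, -0.116669],
               [0.056327, -0.250920,  1.019795]])
lut = np.array([[0.672506, 0.122211, 0.169502],
                [0.306713, 0.643274, 0.050014],
                [0.000079, 0.001888, 0.823234]])

# Each row sums to the corresponding D50 white-point coordinate.
print(fm @ np.ones(3))   # ~[0.9642 1.0000 0.8252]
print(lut @ np.ones(3))  # ~[0.9642 1.0000 0.8252]
print((lut >= 0).all())  # True: nothing for saturated colors to clip against
```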

(I can't seem to find the equivalent of a code tag for DPR's post editor???)

I was poking around and found a post where Iliah Borg found an Adobe profile with identical values for FM1 and FM2 but different CM1/CM2. I STILL haven't gotten around to my tungsten reference shot for this, but dcamprof/LR don't go quite that far. RT's A7M3 profile (generated by Maciej using Lumariver; see https://github.com/Beep6581/RawTherapee/blob/dev/rtdata/dcpprofiles/SONY ILCE-7M3.dcp) has different FM1 and FM2.

I was poking around and found a post where Iliah Borg found an Adobe profile with identical values for FM1 and FM2 but different CM1/CM2.

Eric Chan (http://www.luminous-landscape.com/forum/index.php?topic=84129.msg680333#msg680333) @ November 15, 2013:
The advantage of the matrix is that it's a very smooth transform and also very efficient. A disadvantage of the matrix is that some colors can clip. Therefore, in some cases we instead use an empty (null transform) matrix and perform the bulk of the color correction using tables. This helps to preserve detail in saturated colors. In the cases where we use an empty/null transform matrix, you'll see the same values used for both ForwardMatrix1 and ForwardMatrix2.
 
"Nikon measured by Jack" having worse performance than a 10,10,10 Gaussian seems strange.

Measurement error?
Error is always a possibility when I am involved, though I am surprised to see an SSF by me there.

I once used a science-project spectrometer to read the raw values captured by a Nikon D90 and D610 mounting a 16-85mm/DX lens. The subject was a CC24 Passport Photo gray card in my living room, illuminated by sunlight reflected off a mirror through an open window. It was early in my journey of color science discovery, so I did not understand the results well, nor the implications of my poor setup.

For instance, to be comparable to the others here, one would first need to back out the lens, the mirror and the gray target, then normalize the result by the unknown illuminant. And even then... I hope this is not the referenced data, Jim?

You can find some detail of my experiment here

https://www.strollswithmydog.com/bayer-cfa-spectral-power-distribution/

Jack

PS This reminds me that Glenn Butcher has put some effort into producing a number of SSFs (Nikon D3500, D7000 and Z6) via the controlled diffraction grating procedure used by OpenFilmTools. They are here.
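For concreteness, the correction chain Jack describes is just element-wise division on a shared wavelength grid. A sketch with placeholder names, assuming each factor has actually been measured on that grid:

```python
import numpy as np

def back_out(raw, illum, lens_t, target_r, eps=1e-9):
    """raw: 3 x N camera responses per band; illum: illuminant SPD (N,);
    lens_t: lens transmission (N,); target_r: gray-card reflectance (N,).
    All names are hypothetical; everything shares one wavelength grid."""
    chain = np.maximum(illum * lens_t * target_r, eps)  # everything upstream of the silicon
    ssf = raw / chain                                   # what's left is sensor + CFA response
    return ssf / ssf.max()                              # norm to peak = 1, as in the plots here
```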
 
If you have a decent measurement of camera spectral response, you can calculate the raw response for any given spectrum.
Which is precisely what I am doing.
except that I would not rate the Sony camera spectrum as decent quality.
For example for the entire flower database linked by D Cox. Since I do not know the precise colors of these flowers,
It's easy to light the spectra with D50, and compute the colors. That's what I'm doing.
I would have a procedural gap judging the accuracy by which the color mapping makes them appear on my screen.
I don't understand that sentence. What is a procedural gap?
I may produce the color on screen, but I do not have ground truth of the real plant color to judge accuracy of reproduction.
One might think of re-merging part of the spectral colors using a mask to construct edge-of-gamut colors on a backlit matte screen.
What's the point of that? Why not just do the mixing computationally?
The three corner colors of the screen gamut can be easily obtained (255 0 0 etc.), even for different gamuts, on my EIZO color-graphics monitor (native for calibration, sRGB and several more). These gamut edge colors can be constructed from spectrally pure colors in a range of different ways, or also with the dichroic filters of the color enlarger. The screen gamut edge color and the precisely synthesized matching color could be photographed. The resulting raw camera values likely would not match precisely, indicating a range for the residual metameric failure at the extremes of the gamut. There is one color matrix giving a precise match for all screen-generated colors, and a slightly different color matrix for each of the other realizations of the gamut with different base spectra.
The resulting camera response could be viewed on the computer screen and compared to the matte screen side by side. I would be particularly interested in the red-green transition of narrow-band light, expecting some large errors.
Why not just compute the colors of the spectra? That's what I'm doing.
Perhaps a practical approach would be to generate near edge-of-gamut colors using the old color enlarger with its dichroic (YMC) filters.
Why not just keep doing it in the sim?
I just like to compare to ground truth now and then. Only doing simulation is one-sided (fantasy-land?).
 
I think all the Weta measurements are at 10nm.

[images: slides of the filter-based measurement setup]
Spectral measurement with filter sets is convenient - see the compact desktop setup above. But it has shortcomings: limited resolution defined by the set, no access to spectral oversampling, and a persisting need for light source + filter calibration. A calibration error translates to a systematic error in the spectral response. My guess was that a 10% error is a possibility. The upside is that the systematic error is constant for sensors measured with the same setup.

That they have not fully calibrated their apparatus can be gleaned from their measured curves being normed to 1 at the peak value. The setup lends itself to measuring the photon flux at the sensor position with a photo-spectrometer for each of the color filters. If they had done this, they could show us spectral quantum efficiency curves for the camera sensor.
 
They show 36 filters. For ~10nm spacing you need about 31 for 400-700nm. Cost adds up with such filters. The image shows that transmission varies a lot and must be calibrated carefully. The illumination system with a halogen lamp remains simple.
 
I was poking around and found a post where Iliah Borg found an Adobe profile with identical values for FM1 and FM2 but different CM1/CM2.
Eric Chan (http://www.luminous-landscape.com/forum/index.php?topic=84129.msg680333#msg680333) @ November 15, 2013:
I am guessing that's an old link; the current URL structure is http://forum.luminous-landscape.com/index.php instead of www/forum/index.php.

https://forum.luminous-landscape.com/index.php?topic=84129.msg680333#msg680333 works in the current forum software

Good find!
 
I just like to compare to ground truth now and then. Only doing simulation is one-sided (fantasy-land?).
Then link to camera curves that you think are good, and I’ll test to them. Otherwise, I don’t think you’re helping here. A lot of what this study is about has nothing to do with the particular curves used.
 
Then link to camera curves that you think are good, and I’ll test to them. Otherwise, I don’t think you’re helping here. A lot of what this study is about has nothing to do with the particular curves used.
True, the goodness and accuracy of the camera curves change nothing about the principle.

I think the Nikon D5100 spectra you show in your opening post are OK, albeit they stop at response curves normed to 1 at the peak. Since the photon flux at the peak is not needed for this norm, there is no evidence that it was measured across the spectrum and properly calibrated out of the sensor response.

An older good measurement goes all the way to quantum efficiency. It uses a Jobin-Yvon tunable grating monochromator, an integrating sphere and an Oriel photo-spectrometer as critical hardware.

Of course, I think my work featuring the D800, D7200 and D500 (and D850 in my tech gallery) qualifies as decent, despite all the hate mail that it stirred. You may judge the substance of the objections yourself. This work uses a diffraction grating and an i1Studio photo-spectrometer as critical hardware.

With the level of investment in photographic science that I glean from your postings, you could easily copy and better the setup linked by cameronrad. You could measure the photon flux at the sensor for any camera hooked up, using an i1Studio photo-spectrometer as the minimum investment for this part. Heck, you might be willing to spend the money for ~5nm spaced 10nm narrow-band filters for oversampling, as discussed with cameronrad. The small 12.5mm interference filters would serve the purpose perfectly.
 
With the level of investment in photographic science that I glean from your postings, you could easily copy and better the setup linked by cameronrad. You could measure the photon flux at the sensor for any camera hooked up, using an i1Studio photo-spectrometer as the minimum investment for this part. Heck, you might be willing to spend the money for ~5nm spaced 10nm narrow-band filters for oversampling, as discussed with cameronrad. The small 12.5mm interference filters would serve the purpose perfectly.
Doing my own measurements is beyond the scope of this work. Can you provide your data in json or csv form? I'd appreciate that.

Thanks,

Jim
 
Bernard Delley wrote: ... you could easily copy and better the setup linked by cameronrad.
That's an interesting link, I don't remember seeing it before, good find cameronrad. Their SSFs are not bad but I agree with you that there is probably more play in their system than desirable. Take a look for instance near the peak of the greens below for a D810 and D850. Filtered dyes in the center of the range do not look like that - plus such peaky response would be undesirable. It suggests a systematic error.

Something substantially wrong there around the green peak. (The D850 seems to use newer dyes with less leakage in the lower wavelengths.)

Compare the D810 above with Glenn's Z6 SSF to stay with Nikon, normalized so that peaks match:

Dotted green peak from Glenn's Z6 is more like it

Here is the Z6 vs the D5100 from the UK

Glenn's one self-admitted limitation is that he used the published spec for the dedolight source to normalize his curves instead of measuring it. Not a big deal - probably only a couple of hundred kelvin off in white balance.

Considering these cameras are at least two generations apart, that's a pretty good match.

Jack

PS There is always ol' camspec SSF data around.
 
"Nikon measured by Jack" having worse performance than a 10,10,10 Gaussian seems strange.

Measurement error?
Error is always a possibility when I am involved, though I am surprised to see an SSF by me there.
I was too, since I didn't remember you ever talking about doing SSF measurements.
I once used a science-project spectrometer to read the raw values captured by a Nikon D90 and D610 mounting a 16-85mm/DX lens. The subject was a CC24 Passport Photo gray card in my living room, illuminated by sunlight reflected off a mirror through an open window. It was early in my journey of color science discovery, so I did not understand the results well, nor the implications of my poor setup.

For instance, to be comparable to the others here, one would first need to back out the lens, the mirror and the gray target, then normalize the result by the unknown illuminant. And even then... I hope this is not the referenced data, Jim?
I think it may be, and I think he's basically confirmed your doubts about the setup.

It looks like illuminant calibration is probably the biggest issue there?
PS This reminds me that Glenn Butcher has put some effort into producing a number of SSFs (Nikon D3500, D7000 and Z6) via the controlled diffraction grating procedure used by OpenFilmTools. They are here.
I was JUST thinking about pointing out Glenn's work - you can deep-dive his methodology and development history starting at https://discuss.pixls.us/t/the-ques...vity-functions-ssfs-and-camera-profiles/18002

One of the things that I think is still on his TODO list is buying a reference spectrometer to cal out the illuminant - but he got surprisingly close with his existing setup.

Also some data can be obtained from the github repo referenced by https://discuss.pixls.us/t/high-quality-spectral-response-data-incoming/28497
 
