Single-Image Spectroscope Measurement of Camera SSF (long)

ggbutcher

Over recent years, I've encountered various writings and utterances of the general form, "using SSF data for camera profiling is good..." Okay fine, nice to know, but I had to get on with taking pictures. And I did just fine with it until one day I came back from a choral performance where I'd taken pictures of the stage lit with blue accent spotlights. Wow, did that look bad: blotchy, posterized blue patches on the walls and ceiling. And so, wrestling control of extreme colors became a thing, and making camera profiles from spectral sensitivity data became a personal endeavor...

If one wants to follow the journey I took, read this:

https://discuss.pixls.us/t/the-quest-for-good-color-4-the-diffraction-grating-shootout/19984

What I want to do here is summarize the method upon which I finally settled and put it in front of persons I know have the chops to provide constructive criticism. I've already read the previous threads on such methods, and know mine is not the best available. But I'm a photographer, not an optics engineer; I just want to make decent images with tools that are up to the task.

The method I adopted is based on single-image capture of a diffraction-produced spectrum. I wasn't interested in spending a ton of money on equipment that I'd only effectively use three times, once for each of the three cameras I own. And my underlying thought was, if I did this with consideration for making the method easy to duplicate, others would be able to characterize their own cameras. And maybe also share their data. So, I set out to incrementally attempt to measure SSFs, starting with as cheap a setup as I could get away with, with successive increments incorporating improvements until I got to "good enough". I initially used a monochromator-measured dataset for my Nikon D7000 as an objective reference, obtained from this source:

https://github.com/ampas/rawtoaces/tree/master/data/camera

Later, I used the deltaE reports from dcamprof, the software I used to make the camera profiles; comments and concerns on these references will be highly appreciated. dcamprof source code can be had here:

https://github.com/Beep6581/dcamprof

So, the workflow upon which I settled generally goes like this: 1) shoot spectrum and calibration images using a spectroscope lightbox illuminated by a tungsten-halogen light, 2) use software tools to extract the spectrum and produce normalized SSF data from it, and 3) use dcamprof to produce a camera profile sufficient for ingest by raw processing software. I'll describe each step:

Measurement

Here's a picture of the spectroscope in action:

[photo: the spectroscope lightbox in use]

The inspiration for the lightbox was the OpenFilmTools project, described at:

https://www.hdm-stuttgart.de/open-film-tools/english/camera_characterization

The lightbox holds three elements in the optical chain: a diffuser and slit at the far end, and a diffraction grating mounted on the angled end right in front of the camera. The angle of the near end is critical, as it is what places a symmetrical spectrum spanning 380nm to 730nm in front of the camera. For a 1000-lines-per-mm diffraction grating, that angle is about 34 degrees. For the initial attempt, I used wax paper as a diffuser, a 1mm slit cut in cardboard, and a cheap slide-mounted 1000 lines/mm holographic diffraction grating. Total cost for wood and the "optical" components was about $30US. I did mess with upgrades to the optical chain: a $15US diffuser, a dual-razorblade slit, and a $108US transmissive diffraction grating; I'll discuss the performance differences later.
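
For anyone wanting to sanity-check that angle for a different grating, here's a back-of-the-envelope sketch; it assumes first-order diffraction at roughly normal incidence, which is only an approximation of the real geometry. Evaluating the grating equation sin(theta) = m*lambda/d at the middle of the 380-730nm span lands right around 34 degrees for a 1000 lines/mm grating:

```python
# Rough check of the grating angle, assuming first-order diffraction (m = 1)
# at normal incidence; not part of the actual workflow, just a sanity check.
import math

d = 1e-3 / 1000            # grating period in meters (1000 lines per mm)
lam = 555e-9               # mid-spectrum wavelength, ~(380 + 730) / 2 nm
theta = math.degrees(math.asin(lam / d))
print(f"{theta:.1f} degrees")   # prints about 33.7
```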

The spectrum light source is a LowellPro spot holding a tungsten-halogen lamp. The calibration light source is a conventional CFL bulb from the household stores, mounted in the blue-shade gooseneck fixture that can be easily swung in front of the spot for the calibration capture.

A few iterations of the box itself were required to come up with a design that controlled the light well enough and was easy to produce. All the wood is standard trim finish lumber purchased at the local home store. All cuts can be easily made with a radial "chop saw", including that critical angle cut. The input aperture is a 1-inch hole drilled with a standard spade bit, the output aperture is a 2-inch hole drilled with a hole saw; no weird rectangular cuts. In order to help stabilize the box, I extended the angled face below the table surface with the intent to provide holes for bolt-mounting, but I found that wasn't necessary if I just stayed away from it. That face extension does help in one regard: it makes moving the box laterally for alignment easy without disturbing the angle orientation to the camera.

The camera is tripod-mounted, and just moved up to the output opening. I use the tripod's level to align the head square to the box face. If there is stray light showing in the spectrum image from the little gap between the lens front and the box face, I just toss an opaque cloth over the camera and box.

The measurement act is to simply capture an image of the tungsten-halogen spectrum, then another of the CFL spectrum.

Data Reduction

Data reduction involves these steps: 1) extracting the spectrum and calibration parts of the images, 2) producing column-major data of the pixel values, 3) wavelength-calibrating the tungsten-halogen spectrum using the known peaks of the CFL spectrum, 4) adjusting the spectrum data to the power curve of the tungsten-halogen source, and 5) intervalizing and normalizing the data.

For all these activities, I spent some time writing software to facilitate them. Particularly, it became tiresome to reconstruct spreadsheets each time I did a new capture, and I did a lot of captures of different optical and mechanical configurations. That software can be found here:

https://github.com/butcherg/ssftool

The tiff2specdata program does what its name implies: it takes a TIFF image, finds the centerline of the brightest green thing in the image, extracts the 50 lines above and below the centerline, and prints the R,G,B triplets of the average of each column of those 100 lines. So, the brightest green thing needs to be the center of the spectrum, and there must be no stray light captured elsewhere. The TIFF needs to be linear, that is, no tone processing whatsoever, so dcraw -D -T -4 is recommended.
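
If you'd rather roll your own extraction than use tiff2specdata, the gist is only a few lines. This is just a sketch of the same idea, not the actual tool; it assumes a linear, demosaiced RGB TIFF and the numpy and tifffile Python packages.

```python
# Sketch of the column-averaging idea behind tiff2specdata (not the real tool).
# Assumes a linear, demosaiced RGB TIFF named "spectrum.tif".
import numpy as np
import tifffile

img = tifffile.imread("spectrum.tif").astype(np.float64)   # rows x cols x 3
center = int(np.argmax(img[:, :, 1].sum(axis=1)))          # row with the brightest green
band = img[center - 50 : center + 50, :, :]                # 100-row swath around the centerline
means = band.mean(axis=0)                                   # per-column average R,G,B

for r, g, b in means:
    print(f"{r:.1f},{g:.1f},{b:.1f}")
```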

The ssftool program is designed to ingest a comma-separated text data file, do one of a number of things to it, and print the result as comma-separated text data. ssftool will read from stdin, so it can be used in a string of piped invocations. Specifics of its use are described in the README.

Wavelength calibration is done with known values of the CFL peaks, obtained from here:

https://stargazerslounge.com/topic/...cent-lights-for-calibration-of-spectrometers/
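
To illustrate the calibration idea: once you've located the image columns of two known CFL peaks (the mercury line at 435.8nm and the europium line at about 611.6nm are the easy ones to spot), the wavelength of every other column falls out of a simple linear mapping. The column numbers in this sketch are made up; yours will depend on your capture.

```python
# Hypothetical example: linear pixel-to-wavelength mapping from two CFL peaks.
# The column positions p1 and p2 are made-up numbers; read yours off the CFL capture.
p1, w1 = 310.0, 435.8     # mercury peak
p2, w2 = 1480.0, 611.6    # europium peak
nm_per_px = (w2 - w1) / (p2 - p1)

def wavelength(col):
    """Wavelength in nm for a given image column."""
    return w1 + (col - p1) * nm_per_px

print(wavelength(900.0))  # wavelength at some column between the two peaks
```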

I know one thing I'll probably take grief about, and that's power calibration. All the projects for which I found literature included a separate spectral power measurement at the time the spectrum measurements were taken, and I first decided to determine if an adequate measurement could be arrived at with "generic" data for the light source. All the power distributions I could find for tungsten-halogen lighting seemed to exhibit a similar pattern from 380 to 730nm, that being an upward slope. The OpenFilmTools folks posted spectral power distributions for a number of light sources including a tungsten-halogen source, so I first tried using that data to power-compensate my spectral data. More on that in a bit, but essentially that approach seems to work well enough. I may eventually buy a spectrophotometer like the X-Rite i1Studio, but the average photographer might not be into spending ~$450US for such a device.
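
The compensation itself is just a per-wavelength division: the measured channel values are divided by the relative spectral power of the source at that wavelength, then the result is normalized. Here's a sketch of that step, with made-up file names, assuming the source SPD has been exported as wavelength/power pairs:

```python
# Sketch of the power-compensation step: divide measured channel values by the
# relative spectral power of the light source, then normalize to a peak of 1.
# File names are placeholders; the SPD is resampled to the measurement wavelengths.
import numpy as np

wl, r, g, b = np.loadtxt("spectrum_calibrated.csv", delimiter=",", unpack=True)
spd_wl, spd = np.loadtxt("tungsten_halogen_spd.csv", delimiter=",", unpack=True)

spd_at_wl = np.interp(wl, spd_wl, spd)                 # SPD at the measured wavelengths
ssf = np.column_stack([r, g, b]) / spd_at_wl[:, None]  # per-wavelength division
ssf /= ssf.max()                                        # normalize
```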

The normal format of an SSF dataset is column-major, each line providing the so-called R,G,B triplets for a given wavelength. ssftool has an operation to read that data and produce JSON-formatted data suitable for ingest by dcamprof, so that was the last act in my data reduction chain.

Profile Development

Anders Torger's dcamprof tool, referenced above, contains all the logic to produce either DCP or ICC camera profiles for either target shot or spectral data input. Matrix or LUT profiles can be produced, and the software internally contains an adequate set of "training spectra" for munging camera SSF data into a conformant matrix or LUT. With the -r switch, it will also produce a ton of data about the process, including deltaE statistics for comparing the reference RGB of the training spectra to the RGB produced by the result profile.

For a performance objective, I produced a LUT ICC of the rawtoaces monochromator-measured Nikon D7000 data with the dcamprof CC24 training spectra, and the max DE of the 24 CC24 patches was 2.76. For my initial test I produced profiles for data obtained from both the cheap holographic grating and the optical-grade etched glass grating; the best max DE I got from each respectively was 2.93 and 2.80. Both measurements used the "generic" power data adjustment.

Conclusion

Well, both the DE numbers and some actual developed images convinced me that the above approach was good enough for my photography. But, I still wonder about corner cases and other such things I'm not equipped to consider. So, I am interested in constructive criticism, as I believe such a method can be of use to others, especially in light of the dearth of available SSF data for cameras. Fire away...

Addendum

I also started a github repo where I'm collecting SSF data, both what I've measured and others' measurements. Currently, the only other data I've been able to incorporate per its license is the camspec database, but I'll add others as I can resolve their licensing:

https://github.com/butcherg/ssf-data
 
All the projects for which I found literature included a separate spectral power measurement a the time the spectrum measurements were taken, and I first decided to determine if an adequate measurement could be arrived at with "generic" data for the light source. All the power distributions I could find for tungsten-halogen lighting seemed to exhibit a similar pattern from 380 to 730nm, that being an upward slope. The OpenFilmTools folks posted spectral power distribution for a number of light sources including a tungsten-halogen source, so I first tried using that data to power-compensate my spectral data.
Good show Glenn!

With respect to the above, my feeling is that as long as the generic tungsten bulbs (yours and the one whose spectral data you used) behave much like a blackbody radiator, the main unknown is color temperature at the time of measurement. My guess is that you could be at most a few hundred degrees K off, which would have the somewhat innocuous effect of throwing your white balance slightly off by that amount.

The other question I haven't been able to formulate properly even to myself yet is quanta vs energy unit measurements for the SSFs (i.e. whether for the purposes of defining a matrix/profile via simulated targets one should produce SSFs in spectral QE or energy units). If the wrong set were used, it would also have a white-balance-like error effect on results.

Jack
 
With respect to the above, my feeling is that as long as the generic tungsten bulbs (yours and the one whose spectral data you used) behave much like a blackbody radiator, the main unknown is color temperature at the time of measurement. My guess is that you could be at most a few hundred degrees K off, which would have the somewhat innocuous effect of throwing your white balance slightly off by that amount.
The lamp I used is the tungsten-halogen spot shown in the first post: [product link], which produces 3200K illumination. The OpenFilmTools lamp data named "Dedolight_Aspheric2_TU_L3_3200K_Spot" is what I used for my power adjustment.
The other question I haven't been able to formulate properly even to myself yet is quanta vs energy unit measurements for the SSFs (i.e. whether for the purposes of defining a matrix/profile via simulated targets one should produce SSFs in spectral QE or energy units). If the wrong set were used, it would also have a white-balance-like error effect on results.
Bit of further disclosure: I wrote tiff2specdata.cpp after I'd done my initial evaluation. For that, I used an extract of a manual crop of the libraw-delivered raw data; well, actually, 16-bit integers converted to floating point. What would I do to convert those numbers to either QE or energy units? Showing my colors here, so to speak; I'm neither a physics nor a math person... :D
 
Ok, I think in pictures so ... I believe you have seen plots like these:

[plot: quantal-related units, from an On Semiconductor sensor spec sheet]

[plot: quantal-related units, from my unnormalized measurement with a setup similar to yours, under roughly D55 - https://www.strollswithmydog.com/bayer-cfa-spectral-power-distribution/]

and plots like these:

[plot: energy-related units]

The former's y axis is a percentage, meaning that photons of a given wavelength get counted according to the relative percentage shown by the curve. The latter's y axis is relative energy, meaning that it is proportional to joules (j) as opposed to photons.

The two are related by the energy of a photon = hc/lambda (J/photon). What really counts though is the wavelength lambda, because the two constants in the numerator get washed out in the normalization of the y axis. So if you have the QE curve you can get its energy version simply by multiplying by lambda (that is, dividing by the photon energy hc/lambda) and normalizing - and vice versa.
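
For anyone who wants to see it numerically, a tiny sketch; the QE curve here is a made-up Gaussian standing in for a green channel, and as noted, the hc constants wash out in the normalization:

```python
# Sketch: quantal (per-photon, QE-style) <-> energy-referenced sensitivity.
import numpy as np

wl = np.arange(380.0, 731.0, 5.0)                 # wavelengths in nm
qe = np.exp(-0.5 * ((wl - 530.0) / 60.0) ** 2)    # fake per-photon sensitivity curve

energy_ssf = qe * wl                               # per-joule response ~ QE * lambda
energy_ssf /= energy_ssf.max()                     # renormalize

qe_back = energy_ssf / wl                          # and back the other way
qe_back /= qe_back.max()
```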

Here is another pic: you know that equi-energy illuminant Se looks like a straight line in a spectral energy plot, right? Here is what it looks like once converted to quantal units (yellow curve):

[plot: quantal-related units]

Hope what I mean is clearer

Jack
 
(a lucid explanation)
Hope what I mean is clearer

Jack
Yes indeed. Sorry it's taken multiple tries on your part.

So, it would seem a conversion to energy would be prudent, if the human eye responds to that... ? I haven't seen such a consideration in any of the other measurement endeavors I've reviewed.
 
In the Data Reduction section, 3rd paragraph: -D shouldn't be in the dcraw command; the TIFF needs to be the demosaiced image. -h is best, as it doesn't modify the values.
 
until one day I came back from a choral performance where I'd taken pictures of the stage lit with blue accent spotlights. Wow, did that look bad, blotchy and posterized blue patches on the walls and ceiling. And so, wrestling control of extreme colors became a thing
I long had problems with bright saturated light sources in my photos. If it was important at all, I'd spend time retouching, but it was time-consuming sometimes, as it required me to hand-paint plausible details.

Then I got a contract to shoot a Christmas book, and had to travel thousands of miles and take a large number of photos on location, all within two weeks. Brightly colored saturated light sources, obviously, were a prominent subject matter. I didn't have the time to figure out what was wrong, so I just had to shoot and deal with it later.

Dynamic range limits are significant when dealing with light sources within the frame: overexposure will cause clipping and often abrupt hue shifts. Typically, I'd shoot to preserve the highlights, and then take one or so additional images with greater exposure so I could do HDR or exposure blend them later if needed. I shot raw and used a tripod extensively.

But the main problem is typically gamut. With out-of-gamut colors, at least one color channel will always be blown or plugged, no matter the exposure. The common color spaces, sRGB, Adobe RGB, and even ProPhoto, have their limits, especially when cameras' spectral sensitivities can deliver results that are even outside the gamut of human vision. Since I had to process my images in CMYK, color gamut limits were even more critical. Gamut isn't an issue if you shoot raw, but eventually, when producing an output, you'll have to deal with it.

When processing, my main problem was avoiding the blowing or plugging of any of the CMYK color channels for significant details, which were usually the bright Christmas lights, while still getting adequate shadow detail. Once I almost completely processed an image, only then did I decide what to do with the out-of-gamut colors.

One problem is that some raw processors nearly always plug some color channels, no matter the adjustments made. Typically, this is found in yellow objects, where the blue color channel is always black or at least very noisy. Adobe Camera Raw and Nikon software do this frequently, and from what I understand, this is mainly a problem with color profile handling, since the software does not allow negative values in some parts of the color profile tables.

One solution that I used for difficult images was an "unbounded mode" raw processor, one which allows color numbers—at least during intermediate processing—to go beyond the normal limits, for example, above 255 and below 0 for 8-bit images or likewise for higher bit depth, which allows natural handling of out-of-gamut colors by completely preserving them. I used RawTherapee and processed images in 32 bit floating point mode, which helped preserve more detail than what I've been able to retain in Photoshop, and this avoided clipping brightly colored highlights.

Reducing saturation and contrast often helped me do this, and when I imported the images into Photoshop for final editing in CMYK, I was able to boost everything to the limits of that medium. Sometimes I had to reduce the brightness or saturation of the extreme colors, and sometimes I had to shift hue a bit to give a better match for the printing inks used.

A few images were completely intransigent, giving me intensely noisy images with lots of out-of-gamut colors, and for these I just extracted the demosaiced raw values without color processing, and used brute force to give me something plausible.
 
I'm finding that the simple LUT camera profiles I've developed with this SSF data go a long way toward corralling colors to retain some notion of gradation. But the real potential of SSF data is in developing problem-specific profiles using targeted training data. Right now, I'm just using ColorChecker24 spectra, but I know of a person who's training SSF profiles with the Lippman 2000 spectra collection of skin tones.

There are some good color manipulation tools in the various software packages, but it seems to me that managing color in the camera -> sRGB/AdobeRGB/whatever transform that has to be done anyway is the better place to minimize hue impact and exercise better overall color control.
 
Hummingbirds come to mind. The reds are brilliant and easily saturated, giving significantly shifted gorget colors. It's partly my equipment (I'm looking for something else), but I notice that lots of other people have the same problem.
 
Good job!

I've also read the 4 articles on PIXLS.US. But to be honest, this stuff is a lot to digest for a normal photographer without any color science background.

I wonder why Adobe, Capture One, or DxO don't make robust SSF-based camera profiles. It's always hard to deal with extremely harsh artificial lighting conditions.

In very harsh artificial lighting conditions, I tend to use a video workflow on photos (shoot log-gamma HEIF pictures and grade them in Resolve). I find the pictures often come out much smoother than with the common raw photo developing process, as far as color gradations under harsh lighting go. Do camera manufacturers use in-house measured SSF data to do the colorspace transformation from raw to log straight-out-of-camera pictures/video clips?

Besides, if you have time, you could even consider making this a side business. I think robust third-party SSF-based camera profiles would sell like hotcakes among photographers. Harsh gradations under extreme lighting have been a pain in the ass for years. People who solve this problem for normal consumers deserve to make money.
 
Yeah, the four articles are a chronicle of my journey to get there; I probably need to write a concise how-to. It's really not that hard: 1) build the box, cost about $30US; 2) use it to take two images, one of a full-spectrum light and one of a CFL or other such light source with known peaks; 3) pull the pixels from a swath through the 'rainbow' for each image; 4) use the CFL pixels to align the full-spectrum pixels to the wavelengths; and 5) each RGB pixel from the full-spectrum image becomes a data triple at that wavelength. Ta-Da.

Really, the hard part of it is getting cameras in front of measurement tools. At home, I have three, all measured. I have assembled a collection of datasets from various research and other such projects, and have posted the ones I could divine a license for here:

https://github.com/butcherg/ssf-data

The big source is the camspec database, 30-something cameras measured with a monochromator.

SSF data lets you do a number of unique things for your camera. For one, camera profiles without futzing with target shots and glare control. Another is the ability to make profiles for any illuminant with that one dataset. Yet another is to make LUT profiles instead of matrix profiles, which tend to transition extreme colors out of the camera space better. And you can make profiles trained with color references of interest, e.g. skin tones using the Lippman 2000 dataset assembled by RIT researchers.

With all that, I still end up just using the simple matrix profile for the majority of my images. But, when I run into a color problem...
 
