ggbutcher
Senior Member
Over recent years, I've encountered various writings and utterances of the general form, "using SSF data for camera profiling is good..." Okay, fine, nice to know, but I had to get on with taking pictures. And I did just fine with it, until one day I came back from a choral performance where I'd taken pictures of the stage lit with blue accent spotlights. Wow, did that look bad: blotchy, posterized blue patches on the walls and ceiling. And so, wrestling control of extreme colors became a thing, and making camera profiles from spectral sensitivity data became a personal endeavor...
If one wants to follow the journey I took, read this:
https://discuss.pixls.us/t/the-quest-for-good-color-4-the-diffraction-grating-shootout/19984
What I want to do here is summarize the method I finally settled on and put it in front of people I know have the chops to provide constructive criticism. I've already read the previous threads on such methods, and know mine is not the best available. But I'm a photographer, not an optics engineer; I just want to make decent images with tools that are up to the task.
The method I adopted is based on single-image capture of a diffraction-produced spectrum. I wasn't interested in spending a ton of money on equipment that I'd only effectively use three times, once for each of the three cameras I own. And my underlying thought was, if I did this with consideration for making the method easy to duplicate, others would be able to characterize their own cameras, and maybe also share their data. So, I set out to incrementally measure SSFs, starting with as cheap a setup as I could get away with, with successive increments incorporating improvements until I got "good enough". I initially used a monochromator-measured dataset for my Nikon D7000 as an objective reference, obtained from this source:
https://github.com/ampas/rawtoaces/tree/master/data/camera
Later, I used the deltaE reports from dcamprof, the software I used to make the camera profiles; comments and concerns on these references will be highly appreciated. dcamprof source code can be had here:
https://github.com/Beep6581/dcamprof
So, the workflow upon which I settled generally goes like this: 1) shoot spectrum and calibration images using a spectroscope lightbox illuminated by a tungsten-halogen light, 2) use software tools to extract the spectrum and produce normalized SSF data from it, and 3) use dcamprof to produce a camera profile sufficient for ingest by raw processing software. I'll describe each step:
Measurement
Here's a picture of the spectroscope in action:

The inspiration for the lightbox was the OpenFilmTools project, described at:
https://www.hdm-stuttgart.de/open-film-tools/english/camera_characterization
The lightbox holds three elements in the optical chain: a diffuser and slit at the far end, and a diffraction grating mounted on the angled end right in front of the camera. The angle of the near end is critical, as it is chosen to produce a symmetrical spectrum between 380nm and 730nm. For a 1000-lines-per-mm diffraction grating, that angle is about 34 degrees. For the initial attempt, I used wax paper as a diffuser, a 1mm slit cut in cardboard, and a cheap slide-mounted 1000-lines-per-mm holographic diffraction grating. Total cost for wood and the "optical" components was about $30US. I did mess with upgrades to the optical chain: a $15US diffuser, a dual-razorblade slit, and a $108US transmissive diffraction grating; I'll discuss the performance differences later.
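As a sanity check on that angle, the grating equation d·sin(θ) = m·λ gives the first-order diffraction angle for a given wavelength. A minimal sketch (my own illustration, not part of any of the tools mentioned here):

```python
import math

def diffraction_angle_deg(wavelength_nm, lines_per_mm=1000, order=1):
    """First-order diffraction angle from the grating equation
    d * sin(theta) = m * lambda, where d is the grating line spacing."""
    d_nm = 1e6 / lines_per_mm  # 1000 lines/mm -> 1000 nm spacing
    return math.degrees(math.asin(order * wavelength_nm / d_nm))

# Mid-band wavelength between 380nm and 730nm is 555nm:
print(round(diffraction_angle_deg((380 + 730) / 2), 1))  # about 34 degrees
```

Evaluating at the middle of the 380-730nm band gives roughly 33.7 degrees, which is where the "about 34 degrees" face angle comes from.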
The spectrum light source is a LowellPro spot holding a tungsten-halogen lamp. The calibration light source is a conventional CFL bulb from the household stores, mounted in the blue-shade gooseneck fixture that can be easily swung in front of the spot for the calibration capture.
A few iterations of the box itself were required to come up with a design that controlled the light well enough and was easy to produce. All the wood is standard trim finish lumber purchased at the local home store. All cuts can be easily made with a radial "chop saw", including that critical angle cut. The input aperture is a 1-inch hole drilled with a standard spade bit, the output aperture is a 2-inch hole drilled with a hole saw; no weird rectangular cuts. In order to help stabilize the box, I extended the angled face below the table surface with the intent to provide holes for bolt-mounting, but I found that wasn't necessary if I just stayed away from it. That face extension does help in one regard, it makes moving the box laterally for alignment easy without disturbing the angle orientation to the camera.
The camera is tripod-mounted, and just moved up to the output opening. I use the tripod's level to align the head square to the box face. If there is stray light showing in the spectrum image from the little gap between the lens front and the box face, I just toss an opaque cloth over the camera and box.
The measurement act is to simply capture an image of the tungsten-halogen spectrum, then another of the CFL spectrum.
Data Reduction
Data reduction involves these steps: 1) extracting the spectrum and calibration parts of the image, 2) producing column-major data of the pixel values, 3) wavelength-calibrating the tungsten-halogen spectrum using the known peaks of the CFL spectrum, 4) adjusting the spectrum data to the power curve of the tungsten-halogen source, and 5) intervalizing and normalizing the data.
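To give a feel for the final intervalize-and-normalize step, here's a simplified sketch of a global normalization, scaling everything so the maximum channel value is 1.0 (my own illustration; ssftool's exact conventions are documented in its README):

```python
def normalize(triplets):
    """Scale a list of (r, g, b) triplets so the overall maximum
    channel value is 1.0 (a global, not per-channel, normalization)."""
    peak = max(max(t) for t in triplets)
    return [tuple(v / peak for v in t) for t in triplets]

print(normalize([(1, 2, 4), (2, 1, 0)]))
# -> [(0.25, 0.5, 1.0), (0.5, 0.25, 0.0)]
```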
For all these activities, I spent some time writing software to facilitate them. In particular, it became tiresome to reconstruct spreadsheets each time I did a new capture, and I did a lot of captures of different optical and mechanical configurations. That software can be found here:
https://github.com/butcherg/ssftool
The tiff2specdata program does what its name implies: it takes a TIFF image, finds the centerline of the brightest green thing in the image, extracts the 50 lines above and below the centerline, and prints the R,G,B triplets of the average of each column of those 100 lines. So, the brightest green thing needs to be the center of the spectrum, with no stray light captured elsewhere. The TIFF needs to be linear, that is, no tone processing whatsoever, so dcraw -D -T -4 is recommended.
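In essence, the extraction logic looks like the following Python rendition (a simplified sketch of what tiff2specdata does, assuming the image has already been loaded as rows of linear (r, g, b) tuples; the real program handles TIFF reading and uses a 50-line half-height):

```python
def extract_spectrum(img, half_height=50):
    """img: list of rows, each row a list of (r, g, b) tuples with
    linear values (no tone curve). Finds the row whose green channel
    sums brightest, takes half_height rows above and below it, and
    returns one column-averaged (r, g, b) triplet per image column."""
    green_sums = [sum(px[1] for px in row) for row in img]
    center = green_sums.index(max(green_sums))
    band = img[center - half_height : center + half_height]
    n = len(band)
    return [
        tuple(sum(row[x][c] for row in band) / n for c in range(3))
        for x in range(len(img[0]))
    ]
```

For example, on a tiny 5x2 synthetic "image" whose row 3 carries all the green, `extract_spectrum(img, half_height=1)` averages rows 2 and 3 and returns one triplet per column.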
The ssftool program is designed to ingest a comma-separated text data file, do one of a number of things to it, and print the result as comma-separated text data. ssftool will read from stdin, so it can be used in a string of piped invocations. Specifics of its use are described in the README.
Wavelength calibration is done with known values of the CFL peaks, obtained from here:
https://stargazerslounge.com/topic/...cent-lights-for-calibration-of-spectrometers/
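The calibration itself reduces to mapping pixel column to wavelength using two known CFL emission peaks as anchors, for instance the mercury line near 436nm and the europium line near 611nm. A sketch assuming the dispersion is near-linear across the band (the column positions below are made-up; only the two wavelengths are real CFL lines):

```python
def wavelength_of_column(col, cal):
    """cal: two (column, wavelength_nm) anchor points from known CFL
    peaks. Returns the wavelength at a pixel column by linear mapping."""
    (c0, w0), (c1, w1) = cal
    return w0 + (col - c0) * (w1 - w0) / (c1 - c0)

# Hypothetical anchors: 436.6nm peak at column 100, 611.6nm at column 200
print(wavelength_of_column(150, [(100, 436.6), (200, 611.6)]))  # 524.1
```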
I know one thing I'll probably take grief about, and that's power calibration. All the projects for which I found literature included a separate spectral power measurement at the time the spectrum measurements were taken, and I first decided to determine whether an adequate measurement could be arrived at with "generic" data for the light source. All the power distributions I could find for tungsten-halogen lighting seemed to exhibit a similar pattern from 380 to 730nm: an upward slope. The OpenFilmTools folks posted spectral power distributions for a number of light sources, including a tungsten-halogen source, so I first tried using that data to power-compensate my spectral data. More on that in a bit, but essentially that approach seems to work well enough. I may eventually buy a spectrophotometer like the X-Rite i1Studio, but the average photographer might not be into spending ~$450US for such a device.
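The compensation itself is simple: divide the measured camera response at each wavelength by the source's relative spectral power at that wavelength, so the result reflects the sensor's sensitivity rather than the lamp's upward-sloping curve. A sketch (the SPD values you'd feed it would come from published data such as the OpenFilmTools tables; the numbers below are purely illustrative):

```python
def power_compensate(ssf, spd):
    """ssf: {wavelength_nm: (r, g, b)} measured responses.
    spd: {wavelength_nm: relative_power} for the same wavelengths.
    Divides each channel by the source power at that wavelength."""
    return {wl: tuple(v / spd[wl] for v in rgb) for wl, rgb in ssf.items()}

# Illustrative values only:
print(power_compensate({400: (1.0, 2.0, 4.0)}, {400: 2.0}))
# -> {400: (0.5, 1.0, 2.0)}
```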
The normal format of an SSF dataset is column-major, each line providing the R,G,B triplet for a given wavelength. ssftool has an operation to read that data and produce JSON-formatted data suitable for ingest by dcamprof, so that was the last act in my data reduction chain.
Profile Development
Anders Torger's dcamprof tool, referenced above, contains all the logic to produce either DCP or ICC camera profiles from either target-shot or spectral data input. Matrix or LUT profiles can be produced, and the software internally contains an adequate set of "training spectra" for munging camera SSF data into a conformant matrix or LUT. With the -r switch, it will also produce a ton of data about the process, including deltaE statistics comparing the reference RGB of the training spectra to the RGB produced by the resulting profile.
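For readers unfamiliar with deltaE: the simplest flavor, CIE76, is just Euclidean distance between two colors in Lab space, and values around 1 are roughly the threshold of a just-noticeable difference. dcamprof's reports include more sophisticated variants, but this shows the basic idea:

```python
def delta_e_76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two
    (L, a, b) triplets in CIELAB space."""
    return sum((x - y) ** 2 for x, y in zip(lab1, lab2)) ** 0.5

print(delta_e_76((50, 0, 0), (50, 3, 4)))  # -> 5.0
```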
For a performance objective, I produced a LUT ICC of the rawtoaces monochromator-measured Nikon D7000 data with the dcamprof CC24 training spectra, and the max DE of the 24 CC24 patches was 2.76. For my initial test I produced profiles for data obtained from both the cheap holographic grating and the optical-grade etched glass grating; the best max DE I got from each respectively was 2.93 and 2.80. Both measurements used the "generic" power data adjustment.
Conclusion
Well, both the DE numbers and some actual developed images convinced me that the above approach was good enough for my photography. But, I still wonder about corner cases and other such things I'm not equipped to consider. So, I am interested in constructive criticism, as I believe such a method can be of use to others, especially in light of the dearth of available SSF data for cameras. Fire away...
Addendum
I also started a GitHub repo where I'm collecting SSF data, both my own measurements and others'. Currently, the only other data I've been able to incorporate per its license is the camspec database, but I'll add others as I can resolve their licensing:
https://github.com/butcherg/ssf-data