How to Manually Process a DSLR Image

Started Mar 1, 2016 | Discussions
sharkmelley Senior Member • Posts: 1,455
How to Manually Process a DSLR Image

Finally I've worked out how to take linear raw data from the camera and manually process it so it looks almost right.

Why could this be useful? Firstly because it gives some understanding of the hidden processing going on inside the camera. Secondly because it allows the stacking and subtraction of light pollution to be done in linear space before the tone curve is applied. Thirdly because it explains why the colours in some of my DSLR astro images look so dull.

First, an example of the problem I'm talking about.

Here's a picture of some flowers taken with my Canon 600D (T3i) converted using Adobe Camera Raw:

Adobe Camera Raw version

The ACR version is almost identical to the JPG produced by the camera.

Here is the same image where I've taken the linear camera data, demosaiced, applied white balance and then a rough and ready 1.6 gamma tone curve:

Raw data with white balance

Look how dull and lifeless this is and how the colours just don't look right. This is the kind of effect you get when taking your linear stacked DSLR data from Deep Sky Stacker or PixInsight then applying white balance and stretching.

The missing step is that a transformation (in addition to white balance) must be done from "sensor coordinates" to standard colour space coordinates.

Such a matrix can often be found at DXOMark.

http://www.dxomark.com/Cameras/Canon/EOS-600D---Measurements#measuretabs-7

So for the 600D the colour matrix is:

2.12 -1.28 0.16
-0.24 1.63 -0.38
0.04 -0.69 1.65

ACR and DCRaw contain a similar (but not identical) matrix which I'll explain in a reply to this thread.

If I process the raw CR2 file using the command:

dcraw -w -T -6 IMG_6285.CR2

(where -w uses the camera's white balance, -T writes a TIFF and -6 produces 16-bit output) then it produces a TIFF looking like this:

DCRaw version with default tone curve

But I can take the linear demosaiced data, manually apply white balance, manually apply the colour matrix and then a rough and ready 1.6 gamma tone curve:

ColorMatrix manually applied

Note how much better this is than the naive version with just a white balance applied. Unsurprisingly, it also looks pretty much identical to the DCRaw version (since both use the Adobe ColorMatrix for the Canon 600D).

To my mind, the pinkish white flowers of the ACR version look slightly overexposed. This can be improved by reducing the exposure by half a stop. Even then, their colour looks a bit washed out owing to the shape of the ACR tone curve:

ACR with exposure lowered by half a stop

My next post in this thread will explain how to find and use the Adobe ColorMatrix.

Mark

sharkmelley's gear list:
Sony Alpha a7S +1 more
Canon EOS 600D (EOS Rebel T3i / EOS Kiss X5)
Jon Rista Contributing Member • Posts: 681
Re: How to Manually Process a DSLR Image

Very interesting stuff, Mark. I'm amazed you managed to dig up those matrices...I figured those details would have been buried in proprietary code.

I do wonder how "accurate" this approach may be to processing astro images. Personally, I think the original ACR version of your flower pot is too bright and too vibrant for an astro image. I think you would lose subtle color variations and possibly certain details if you used that on an astro integration.

Something in between the ACR version and the original dull version might be ideal, giving you color, but avoiding oversaturation of colors that eats up details.

Personally, I have not generally found that I cannot saturate enough with PixInsight. Between the CurvesTransformation and ColorSaturation tools, you have both global and selective control over color. The biggest issue I may run into is color noise. With a straight-up linear process, you're not getting any of that built-in ACR color noise reduction; you have to apply it yourself. You have to be pretty meticulous about that, to reduce color noise but not reduce color in general. That can be difficult to fine-tune, and if you don't, color saturation will often re-enhance color noise.

I would be very interested in seeing how this approach affects color noise in a PixInsight integration of data. Would the high color saturation result in increased color noise as well? I wonder what other differences there may be with low SNR noisy data using this approach.

Jon Rista's gear list:
Canon EOS 5D Mark III Sony a6000 Canon EF 50mm f/1.4 USM Canon EF 16-35mm F2.8L II USM Canon EF 100-400mm f/4.5-5.6L IS USM +4 more
OP sharkmelley Senior Member • Posts: 1,455
Where to find the Adobe ColorMatrix

To manually apply the colour matrix for your camera you first need to find it. One way is to create a DNG (digital negative) file from ACR. If you then inspect it with a tool such as ExifToolGUI you are likely to find ColorMatrix1 and ColorMatrix2. The first is the matrix for a low colour temperature and the second is for a higher colour temperature such as D65. So use ColorMatrix2.

Alternatively, if DCRaw supports your camera you'll find the same matrix in the source code, in the adobe_coeff function. It will look something like this:

{ "Canon EOS 600D", 0, 0x3510,
{ 6461,-907,-882,-4300,12184,2378,-819,1944,5931 } },

The numbers are divided by 10000 to give the matrix (for the Canon 600D):

0.6461 -0.0907 -0.0882
-0.4300 1.2184 0.2378
-0.0819 0.1944 0.5931

This is identical to the Adobe ColorMatrix2.
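If you want to script this step, the conversion from DCRaw's integer table to the matrix is a one-liner. A minimal Python sketch (NumPy used for convenience; the coefficients are copied from the dcraw snippet above):

```python
import numpy as np

# Integer coefficients copied from dcraw's adobe_coeff table for the Canon EOS 600D
coeffs = [6461, -907, -882, -4300, 12184, 2378, -819, 1944, 5931]

# Divide by 10000 and reshape into the 3x3 ColorMatrix2 (XYZ -> camera RGB)
color_matrix = np.array(coeffs).reshape(3, 3) / 10000.0
print(color_matrix)
```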

The Adobe DNG specification can be found here: http://www.adobe.com/content/dam/Adobe/en/products/photoshop/pdfs/dng_spec_1.4.0.0.pdf

The spec explains that ColorMatrix2 is the matrix that goes from XYZ colour space (Google it if you want to know more about XYZ) to the Camera's RGB.

There is another standard matrix that goes from RGB to XYZ (see for instance http://www.easyrgb.com/index.php?X=MATH&H=02#text2)

This matrix is:

0.4124 0.3576 0.1805
0.2126 0.7152 0.0722
0.0193 0.1192 0.9505

Multiplying the first matrix by the second (in that order, i.e. the first matrix pre-multiplies the second) gives a matrix that goes from standard RGB to the Camera's RGB for the 600D:

0.245467 0.155663 0.026238
0.086289 0.745977 0.236382
0.019000 0.180445 0.562994

So to go from the 600D CameraRGB to standard RGB we want the inverse of this matrix. The matrix will be applied to our white-balanced data, so we need to make sure it does not change the colour of white. This is done by scaling each row of the above matrix so that each row sums to 1.0:

0.574368 0.364237 0.061395
0.080746 0.698056 0.221197
0.024921 0.236668 0.738411

Now we can invert the matrix giving us:

1.879574 -1.032630 0.153055
-0.219620 1.715146 -0.495530
0.006956 -0.514870 1.507914

Now, each pixel's rgb value must be multiplied by this matrix to get into the standard RGB colour space. So if the white balanced pixel has values r,g,b then the output values R,G,B are given by:

R = 1.879574*r - 1.032630*g + 0.153055*b
G = -0.219620*r + 1.715146*g - 0.495530*b
B = 0.006956*r - 0.514870*g + 1.507914*b
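The whole derivation above can be reproduced in a few lines of NumPy. Here is a sketch (the variable names are my own; the input is assumed to be demosaiced, white-balanced linear data scaled 0-1):

```python
import numpy as np

# ColorMatrix2 for the Canon 600D (XYZ -> camera RGB), from dcraw's adobe_coeff table
cam_from_xyz = np.array([[ 0.6461, -0.0907, -0.0882],
                         [-0.4300,  1.2184,  0.2378],
                         [-0.0819,  0.1944,  0.5931]])

# Standard sRGB -> XYZ (D65) matrix
xyz_from_srgb = np.array([[0.4124, 0.3576, 0.1805],
                          [0.2126, 0.7152, 0.0722],
                          [0.0193, 0.1192, 0.9505]])

# Camera RGB from standard RGB, then scale each row to sum to 1.0
# so the matrix leaves white unchanged after white balancing
cam_from_srgb = cam_from_xyz @ xyz_from_srgb
cam_from_srgb /= cam_from_srgb.sum(axis=1, keepdims=True)

# Invert to go from camera RGB back to standard RGB
srgb_from_cam = np.linalg.inv(cam_from_srgb)

def apply_matrix(pixels, m):
    """Multiply each rgb pixel (last axis of an (H, W, 3) array) by the matrix."""
    return pixels @ m.T

white = np.array([1.0, 1.0, 1.0])
print(srgb_from_cam)           # the inverse matrix derived in the text
print(srgb_from_cam @ white)   # white stays white
```

Applying `apply_matrix` to a whole image works because NumPy broadcasts the 3x3 multiply over every pixel.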

After this, a tone curve is applied to each channel - typically with a gamma of around 2.2. By default, DCRaw applies the BT.709 curve (Google it if you want to know more). It is not clear to me at the current time exactly what ACR applies.
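For completeness, the BT.709 transfer curve mentioned above is simple enough to write down directly. A sketch (the function name is mine):

```python
import numpy as np

def bt709_encode(linear):
    """BT.709 transfer curve: linear scene values (0-1) to gamma-encoded values."""
    linear = np.asarray(linear, dtype=float)
    return np.where(linear < 0.018,
                    4.5 * linear,                            # linear toe near black
                    1.099 * np.power(linear, 0.45) - 0.099)  # power-law segment

# 0 and 1 map to themselves; mid-tones are lifted, which is the "stretch"
print(bt709_encode([0.0, 0.018, 0.5, 1.0]))
```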

There you have it. If you can follow matrix arithmetic then you can find the relevant matrix for your own camera and do the same thing.

Mark

OP sharkmelley Senior Member • Posts: 1,455
Re: How to Manually Process a DSLR Image

Jon Rista wrote:

Very interesting stuff, Mark. I'm amazed you managed to dig up those matrices...I figured those details would have been buried in proprietary code.

I do wonder how "accurate" this approach may be to processing astro images. Personally, I think the original ACR version of your flower pot is too bright and too vibrant for an astro image. I think you would lose subtle color variations and possibly certain details if you used that on an astro integration.

Something in between the ACR version and the original dull version might be ideal, giving you color, but avoiding oversaturation of colors that eats up details.

Personally, I have not generally found that I cannot saturate enough with PixInsight. Between the CurvesTransformation and ColorSaturation tools, you have both global and selective control over color. The biggest issue I may run into is color noise. With a straight-up linear process, you're not getting any of that built-in ACR color noise reduction; you have to apply it yourself. You have to be pretty meticulous about that, to reduce color noise but not reduce color in general. That can be difficult to fine-tune, and if you don't, color saturation will often re-enhance color noise.

I would be very interested in seeing how this approach affects color noise in a PixInsight integration of data. Would the high color saturation result in increased color noise as well? I wonder what other differences there may be with low SNR noisy data using this approach.

I see what you mean Jon but colour saturation is a different issue. The purpose of the ColourMatrix is to correct the colour hues. The colour hues are affected by the overlapping response curves of the RGB filters of the colour filter array. Hence both a white balance adjustment and a hue adjustment are required to make the colours correct. You are then free to adjust saturation as you see fit.

An interesting exercise would be to compare with a mono CCD using colour filters. The response curves of such filters tend not to overlap. I wonder if anyone has tried imaging a standard colour chart with such a setup?

Mark

swimswithtrout Senior Member • Posts: 2,953
Re: Where to find the Adobe ColorMatrix

Very interesting. Is there some way to plug this into DSS, since I believe it uses DCRaw?

Allien Regular Member • Posts: 402
Re: How to Manually Process a DSLR Image

Interesting stuff, thanks for sharing.

Jon Rista Contributing Member • Posts: 681
Re: How to Manually Process a DSLR Image

sharkmelley wrote:

Jon Rista wrote:

Very interesting stuff, Mark. I'm amazed you managed to dig up those matrices...I figured those details would have been buried in proprietary code.

I do wonder how "accurate" this approach may be to processing astro images. Personally, I think the original ACR version of your flower pot is too bright and too vibrant for an astro image. I think you would lose subtle color variations and possibly certain details if you used that on an astro integration.

Something in between the ACR version and the original dull version might be ideal, giving you color, but avoiding oversaturation of colors that eats up details.

Personally, I have not generally found that I cannot saturate enough with PixInsight. Between the CurvesTransformation and ColorSaturation tools, you have both global and selective control over color. The biggest issue I may run into is color noise. With a straight-up linear process, you're not getting any of that built-in ACR color noise reduction; you have to apply it yourself. You have to be pretty meticulous about that, to reduce color noise but not reduce color in general. That can be difficult to fine-tune, and if you don't, color saturation will often re-enhance color noise.

I would be very interested in seeing how this approach affects color noise in a PixInsight integration of data. Would the high color saturation result in increased color noise as well? I wonder what other differences there may be with low SNR noisy data using this approach.

I see what you mean Jon but colour saturation is a different issue. The purpose of the ColourMatrix is to correct the colour hues. The colour hues are affected by the overlapping response curves of the RGB filters of the colour filter array. Hence both a white balance adjustment and a hue adjustment are required to make the colours correct. You are then free to adjust saturation as you see fit.

That I do understand. I am curious, though...correct relative to what reference point? It is color we are talking about here...subjectivity central.

An interesting exercise would be to compare with a mono CCD using colour filters. The response curves of such filters tend not to overlap. I wonder if anyone has tried imaging a standard colour chart with such a setup?

Mark

That would be interesting. Depends on which filter, but some do have a gap between red and green to block some of the more egregious LP emitters (i.e. sodium vapor). Personally, I find that when I process CCD data, color is just "easy". I hardly have to work at it. No calibration at all, just a linear fit, and the results usually look very good. It also seems easier to perform G2V calibration with LRGB data, which again just seem to work.

That doesn't mean that CCD data is devoid of gradients, however, and it can be important to properly remove the gradients and their color casts before doing serious color calibration, because otherwise the gradient color casts WILL influence the results. That may be an area where using a calibration matrix might simplify things...you wouldn't have to worry about the gradients, or at least, you could worry about them whenever you pleased if you pleased.

Jon Rista Contributing Member • Posts: 681
Re: Where to find the Adobe ColorMatrix

When exactly is this applied? Does it have to be applied to the raw data, or during demosaicing, or can it be applied to the demosaiced data?

I ask because I wonder if the data AFTER this transformation could still be considered linear...

OP sharkmelley Senior Member • Posts: 1,455
Re: How to Manually Process a DSLR Image

Jon Rista wrote:

sharkmelley wrote:

I see what you mean Jon but colour saturation is a different issue. The purpose of the ColourMatrix is to correct the colour hues. The colour hues are affected by the overlapping response curves of the RGB filters of the colour filter array. Hence both a white balance adjustment and a hue adjustment are required to make the colours correct. You are then free to adjust saturation as you see fit.

That I do understand. I am curious, though...correct relative to what reference point? It is color we are talking about here...subjectivity central.

I don't want another conversation diverted down the subjectivity issue. The manufacturer and/or Adobe and/or DXOMark calculate these matrices so the generated image has the best fit to the measured response of the human eye tested over a certain range of colours (admittedly measured a long time ago on too few people). At least that's my understanding based on what I've read. One month ago I knew nothing about all this interesting stuff - I'd never understood a "colour calibrated workflow" and there are still plenty of gaps in my understanding.

My goal was to understand why I could never match the colours in the camera produced JPG when I processed linear data. The solution I've found here works OK for pictures taken in daylight. It becomes a bit more difficult for images taken in other colour temperatures. An extra step comes into play. The Adobe specification recommends using a Bradford matrix for this:

http://www.brucelindbloom.com/index.html?Eqn_ChromAdapt.html
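The Bradford chromatic adaptation described on that page can be sketched in a few lines. This is an illustration rather than a validated implementation; the white-point values are the standard 2-degree observer figures, and the function name is mine:

```python
import numpy as np

# Bradford cone response matrix (from the Lindbloom page / DNG spec)
bradford = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])

def adaptation_matrix(src_white_xyz, dst_white_xyz):
    """Build an XYZ -> XYZ matrix that adapts colours from one illuminant to another."""
    rho_src = bradford @ src_white_xyz   # cone responses under the source white
    rho_dst = bradford @ dst_white_xyz   # cone responses under the destination white
    scale = np.diag(rho_dst / rho_src)   # von Kries-style per-cone scaling
    return np.linalg.inv(bradford) @ scale @ bradford

# Example: adapt from Standard Illuminant A (tungsten) to D65 (daylight)
illuminant_a = np.array([1.09850, 1.00000, 0.35585])
d65 = np.array([0.95047, 1.00000, 1.08883])
m = adaptation_matrix(illuminant_a, d65)
print(m @ illuminant_a)  # the A white point lands on the D65 white point
```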

Mark

OP sharkmelley Senior Member • Posts: 1,455
Re: Where to find the Adobe ColorMatrix

Jon Rista wrote:

When exactly is this applied? Does it have to be applied to the raw data, or during demosaicing, or can it be applied to the demosaiced data?

I ask because I wonder if the data AFTER this transformation could still be considered linear...

It has to be applied once we have a Camera RGB value for each pixel. I did this after demosaicing and white balancing and I think DCRaw does the same.

However, it is just possible that other raw converters (e.g. ACR) might do it simultaneously with demosaicing using some kind of optimisation. Mathematically it would be feasible to do so but I really don't know what might be going on inside the raw converter.

Yes, the data after this operation is still linear because matrix multiplication is (by definition) linear.  Here again is my example for the Canon 600D:

R = 1.879574*r - 1.03263*g + 0.153055*b

G = -0.21962*r + 1.715146*g - 0.49553*b

B = 0.006956*r - 0.51487*g + 1.507914*b

Hence light pollution can be safely subtracted before or after this operation.
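That linearity claim is easy to check numerically: because the matrix distributes over subtraction, removing a light-pollution offset before or after the transform gives identical results. A small sketch (the matrix is just the 600D example above; the pixel and offset values are made up):

```python
import numpy as np

# Camera RGB -> standard RGB matrix for the 600D, from earlier in the thread
m = np.array([[ 1.879574, -1.03263,  0.153055],
              [-0.21962,   1.715146, -0.49553],
              [ 0.006956, -0.51487,   1.507914]])

pixel = np.array([0.30, 0.25, 0.20])  # a white-balanced camera rgb value
lp    = np.array([0.05, 0.04, 0.03])  # a light-pollution offset

before = m @ (pixel - lp)    # subtract LP first, then transform
after  = m @ pixel - m @ lp  # transform first, then subtract transformed LP
print(np.allclose(before, after))  # the two orderings agree
```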

There is one additional step that might be taken in a properly colour calibrated workflow, which is to produce extra corrections generated from a contemporaneous image of a colour chart.  I might be wrong (I'm sure someone with more experience can confirm or refute) but I think that additional correction might be performed using a LUT (colour lookup table).  Applying a LUT is not a linear operation.

Mark

Jon Rista Contributing Member • Posts: 681
Re: How to Manually Process a DSLR Image

sharkmelley wrote:

Jon Rista wrote:

sharkmelley wrote:

I see what you mean Jon but colour saturation is a different issue. The purpose of the ColourMatrix is to correct the colour hues. The colour hues are affected by the overlapping response curves of the RGB filters of the colour filter array. Hence both a white balance adjustment and a hue adjustment are required to make the colours correct. You are then free to adjust saturation as you see fit.

That I do understand. I am curious, though...correct relative to what reference point? It is color we are talking about here...subjectivity central.

I don't want another conversation diverted down the subjectivity issue. The manufacturer and/or Adobe and/or DXOMark calculate these matrices so the generated image has the best fit to the measured response of the human eye tested over a certain range of colours (admittedly measured a long time ago on too few people). At least that's my understanding based on what I've read. One month ago I knew nothing about all this interesting stuff - I'd never understood a "colour calibrated workflow" and there are still plenty of gaps in my understanding.

My goal was to understand why I could never match the colours in the camera produced JPG when I processed linear data. The solution I've found here works OK for pictures taken in daylight. It becomes a bit more difficult for images taken in other colour temperatures. An extra step comes into play. The Adobe specification recommends using a Bradford matrix for this:

http://www.brucelindbloom.com/index.html?Eqn_ChromAdapt.html

Mark

I just wanted to know the reference point. So it sounds like CIE Lab 1931? There are multiple versions of those standards, as it was repeated in the 60's and 70's with updated models. Some of them vary quite considerably from the master produced in 1931, as some testing was done with a 2° "foveal spot", which basically just covered the sensitivity of the central region of the retina which is packed with red and green cones, and other testing was done with a 10° foveal spot which accounted for a much greater range of blue sensitivity by accounting for the greater number of blue cones in the periphery of our retina. I think the 1931 model is most often used, particularly for Source/XYZ/LAB/XYZ/RGB conversions, but that model doesn't account for nearly as much blue sensitivity range as we might actually have.

So you are using Lindbloom's data to convert from one camera white point to another?

Jon Rista Contributing Member • Posts: 681
Re: Where to find the Adobe ColorMatrix

sharkmelley wrote:

Jon Rista wrote:

When exactly is this applied? Does it have to be applied to the raw data, or during demosaicing, or can it be applied to the demosaiced data?

I ask because I wonder if the data AFTER this transformation could still be considered linear...

It has to be applied once we have a Camera RGB value for each pixel. I did this after demosaicing and white balancing and I think DCRaw does the same.

However, it is just possible that other raw converters (e.g. ACR) might do it simultaneously with demosaicing using some kind of optimisation. Mathematically it would be feasible to do so but I really don't know what might be going on inside the raw converter.

Yes, the data after this operation is still linear because matrix multiplication is (by definition) linear. Here again is my example for the Canon 600D:

R = 1.879574*r - 1.03263*g + 0.153055*b

G = -0.21962*r + 1.715146*g - 0.49553*b

B = 0.006956*r - 0.51487*g + 1.507914*b

Hence light pollution can be safely subtracted before or after this operation.

There is one additional step that might be taken in a properly colour calibrated workflow, which is to produce extra corrections generated from a contemporaneous image of a colour chart. I might be wrong (I'm sure someone with more experience can confirm or refute) but I think that additional correction might be performed using a LUT (colour lookup table). Applying a LUT is not a linear operation.

Mark

Thanks for the details. I'll experiment with it tomorrow. I am curious if it would be possible to do with PixelMath...I am not sure if you can access different color channels in the R/K, G, and B inputs to PixelMath. I guess if you split the image into its constituent channels, you would then be able to access each wherever you wanted in the formulas... I may try creating a PI script to handle this automatically if it works.

I don't believe applying a LUT is linear; however, there are different kinds, and I don't know whether 3D LUTs might be able to maintain linearity or not. I never looked too much into LUTs back when I first started researching color theory and color management...which was many years ago now. I do know that my screen calibration software supports generating profiles that can be handled purely in software, or stored in a hardware LUT for screens that have one, and with an older NEC PA series screen I used a while back, calibrating the hardware LUT produced significantly better color. However, I believe that screen was expanding the input 8-bpc color into 14-bpc color, and the tonality was phenomenal. I don't think that necessarily applies here...but it's about the extent of my knowledge of LUTs.

OP sharkmelley Senior Member • Posts: 1,455
Re: How to Manually Process a DSLR Image

Jon Rista wrote:

sharkmelley wrote:

Jon Rista wrote:

sharkmelley wrote:

I see what you mean Jon but colour saturation is a different issue. The purpose of the ColourMatrix is to correct the colour hues. The colour hues are affected by the overlapping response curves of the RGB filters of the colour filter array. Hence both a white balance adjustment and a hue adjustment are required to make the colours correct. You are then free to adjust saturation as you see fit.

That I do understand. I am curious, though...correct relative to what reference point? It is color we are talking about here...subjectivity central.

I don't want another conversation diverted down the subjectivity issue. The manufacturer and/or Adobe and/or DXOMark calculate these matrices so the generated image has the best fit to the measured response of the human eye tested over a certain range of colours (admittedly measured a long time ago on too few people). At least that's my understanding based on what I've read. One month ago I knew nothing about all this interesting stuff - I'd never understood a "colour calibrated workflow" and there are still plenty of gaps in my understanding.

My goal was to understand why I could never match the colours in the camera produced JPG when I processed linear data. The solution I've found here works OK for pictures taken in daylight. It becomes a bit more difficult for images taken in other colour temperatures. An extra step comes into play. The Adobe specification recommends using a Bradford matrix for this:

http://www.brucelindbloom.com/index.html?Eqn_ChromAdapt.html

Mark

I just wanted to know the reference point. So it sounds like CIE Lab 1931? There are multiple versions of those standards, as it was repeated in the 60's and 70's with updated models. Some of them vary quite considerably from the master produced in 1931, as some testing was done with a 2° "foveal spot", which basically just covered the sensitivity of the central region of the retina which is packed with red and green cones, and other testing was done with a 10° foveal spot which accounted for a much greater range of blue sensitivity by accounting for the greater number of blue cones in the periphery of our retina. I think the 1931 model is most often used, particularly for Source/XYZ/LAB/XYZ/RGB conversions, but that model doesn't account for nearly as much blue sensitivity range as we might actually have.

So you are using Lindbloom's data to convert from one camera white point to another?

Yes, it's based on the CIE stuff but to be honest I'm still a bit hazy on exactly what is going on here. ColorMatrix2, which is what I'm using, has been calibrated for the D65 illuminant, since the field CalibrationIlluminant2 indicates "D65". ColorMatrix1 also exists, which CalibrationIlluminant1 indicates is calibrated for "Standard Light A". The Adobe digital negative also contains Forward Matrices, which Adobe recommends using in preference to ColorMatrix1 and ColorMatrix2, but it's not clear to me exactly how they are used - every attempt so far has gone completely wrong. DCRaw doesn't use them.

Mark

Jon Rista Contributing Member • Posts: 681
Re: How to Manually Process a DSLR Image

sharkmelley wrote:

Jon Rista wrote:

sharkmelley wrote:

Jon Rista wrote:

sharkmelley wrote:

I see what you mean Jon but colour saturation is a different issue. The purpose of the ColourMatrix is to correct the colour hues. The colour hues are affected by the overlapping response curves of the RGB filters of the colour filter array. Hence both a white balance adjustment and a hue adjustment are required to make the colours correct. You are then free to adjust saturation as you see fit.

That I do understand. I am curious, though...correct relative to what reference point? It is color we are talking about here...subjectivity central.

I don't want another conversation diverted down the subjectivity issue. The manufacturer and/or Adobe and/or DXOMark calculate these matrices so the generated image has the best fit to the measured response of the human eye tested over a certain range of colours (admittedly measured a long time ago on too few people). At least that's my understanding based on what I've read. One month ago I knew nothing about all this interesting stuff - I'd never understood a "colour calibrated workflow" and there are still plenty of gaps in my understanding.

My goal was to understand why I could never match the colours in the camera produced JPG when I processed linear data. The solution I've found here works OK for pictures taken in daylight. It becomes a bit more difficult for images taken in other colour temperatures. An extra step comes into play. The Adobe specification recommends using a Bradford matrix for this:

http://www.brucelindbloom.com/index.html?Eqn_ChromAdapt.html

Mark

I just wanted to know the reference point. So it sounds like CIE Lab 1931? There are multiple versions of those standards, as it was repeated in the 60's and 70's with updated models. Some of them vary quite considerably from the master produced in 1931, as some testing was done with a 2° "foveal spot", which basically just covered the sensitivity of the central region of the retina which is packed with red and green cones, and other testing was done with a 10° foveal spot which accounted for a much greater range of blue sensitivity by accounting for the greater number of blue cones in the periphery of our retina. I think the 1931 model is most often used, particularly for Source/XYZ/LAB/XYZ/RGB conversions, but that model doesn't account for nearly as much blue sensitivity range as we might actually have.

So you are using Lindbloom's data to convert from one camera white point to another?

Yes, it's based on the CIE stuff but to be honest I'm still a bit hazy on exactly what is going on here. ColorMatrix2, which is what I'm using, has been calibrated for the D65 illuminant, since the field CalibrationIlluminant2 indicates "D65". ColorMatrix1 also exists, which CalibrationIlluminant1 indicates is calibrated for "Standard Light A". The Adobe digital negative also contains Forward Matrices, which Adobe recommends using in preference to ColorMatrix1 and ColorMatrix2, but it's not clear to me exactly how they are used - every attempt so far has gone completely wrong. DCRaw doesn't use them.

Mark

Interesting. I agree that using the D65 matrix is probably best, since that is also the default white reference for sRGB. Standard Illuminant A is basically a Tungsten bulb white reference. I think it's 2800K or 2900K or around there, so fairly red-shifted on the color temperature scale with a definite orange color cast. I wonder how that might work with light polluted subs (although Tungsten isn't quite the same as sodium vapor, as the former is more of a broadband emission whereas the latter is definite narrow band emissions.)

Regarding the forward matrices...do you know if they are even populated with valid data? If the original source RAW did not have them, where would the data for those matrices be coming from? Is it just a set of standard matrices that are hard coded?

Jon Rista Contributing Member • Posts: 681
Re: How to Manually Process a DSLR Image

Oh, just a quick note on what I remember (still a bit hazy) about color and white balance conversions. IIRC, the flow is this:

Source RGB -> XYZ -> L*a*b (adjust white point here!) -> XYZ -> Destination RGB

White balance conversion, as well as things like color similarity mapping and other perceptually relevant color modeling or processing tasks, should be performed in Lab space. Since Lab models human perception and vision, adjusting white point there (and you should be able to find the necessary formulas for that pretty easily) is apparently better than adjusting it elsewhere, although I believe it is more common to just do it in XYZ.
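The XYZ -> L*a*b* step in that flow is a fixed formula, so it can be sketched directly (the reference white is assumed to be D65 here, and the function name is mine):

```python
import numpy as np

D65 = np.array([0.95047, 1.00000, 1.08883])  # reference white, 2-degree observer

def xyz_to_lab(xyz, white=D65):
    """Convert CIE XYZ to L*a*b* relative to the given reference white."""
    t = np.asarray(xyz, dtype=float) / white
    delta = 6.0 / 29.0
    # Cube root above the threshold, linear segment below it
    f = np.where(t > delta**3,
                 np.cbrt(t),
                 t / (3.0 * delta**2) + 4.0 / 29.0)
    L = 116.0 * f[1] - 16.0
    a = 500.0 * (f[0] - f[1])
    b = 200.0 * (f[1] - f[2])
    return np.array([L, a, b])

print(xyz_to_lab(D65))  # the reference white maps to L*=100, a*=b*=0
```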

Greg VdB Veteran Member • Posts: 4,485
Re: How to Manually Process a DSLR Image

Mark, I admire your persistence in unraveling the RAW converter black box. This step in particular definitely gives you a much better understanding of how to go from linear output to a realistic colour image. The question then is: have you already tried applying this to one of your "bare" astro images? If Jon could get a script working for PI, I'm sure many people would be interested. On a side note (and I'm somewhat apprehensive to even utter it, in fear of starting this debate again), it would also allow a more straightforward comparison between the two post-processing methodologies.

Again, kudos for your efforts!

cheers,

-- hide signature --

Greg Van den Bleeken
www.pbase.com/gbleek
vimeo.com/vdbphotography
Take photographs *you* want to look at. Take photographs you want to *look* at. (Ed Leys)

Trollmannx Senior Member • Posts: 4,458
Re: How to Manually Process a DSLR Image

Despite having a sloppy and relaxed way of post processing my own images I find threads like this one very interesting. Makes me rethink my own image processing and leaves me pondering about alternative routes to get the result I am after. Guess others benefit from threads like this one, too.

So keep up the good work here, and the discussion is very much appreciated!

sharkmelley
OP sharkmelley Senior Member • Posts: 1,455
What does the ColorMatrix do to H-alpha?

Greg VdB wrote:

Mark, I admire your persistence in unraveling the RAW converter black box. This step in particular definitely gives you a much better understanding of how to go from linear output to a realistic colour image. The question then is: have you already tried applying this to one of your "bare" astro images? If Jon could get a script working for PI, I'm sure many people would be interested. On a side note (and I'm somewhat apprehensive to even utter it, in fear of starting this debate again), it would also allow a more straightforward comparison between the two post-processing methodologies.

Again, kudos for your efforts!

cheers,

Unfortunately, all my astro-images have been created using a modified camera i.e. modified so that H-alpha is not filtered out. I've tried applying this colour matrix to those images but it results in a very overpowering redness in all H-alpha areas even if I scale down the red channel during white balancing.

Remember this matrix has been designed with human colour vision in mind. Specifically it has been optimised over a certain set of colour patches in a colour chart. I've just found this blog by Jim Kasson which describes the kind of process employed:

http://blog.kasson.com/?p=12486

http://blog.kasson.com/?p=12489

One very interesting issue immediately strikes me. What is the effect of this matrix on images containing H-alpha? The colour patches used for the optimisation of the matrix do not include an H-alpha patch. Therefore there is no a priori certainty that a raw converter using this matrix will render the colour of H-alpha correctly, in a human vision sense, even for an unmodified camera. This needs further investigation.
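One way to see why H-alpha is a stress test for the matrix is to push a hypothetical H-alpha sensor response through the 600D matrix quoted earlier in the thread. H-alpha (656 nm) lands almost entirely in the red channel; the 0.1 green leakage below is an illustrative assumption, not a measured value:

```python
import numpy as np

# Canon 600D colour matrix from DXOMark, as quoted in the opening post.
M = np.array([[ 2.12, -1.28,  0.16],
              [-0.24,  1.63, -0.38],
              [ 0.04, -0.69,  1.65]])

# Hypothetical sensor response to H-alpha: mostly red, a little green.
ha_sensor = np.array([1.0, 0.1, 0.0])

out = M @ ha_sensor
clipped = np.clip(out, 0.0, None)
print(out)      # red is boosted well past the input level; green and blue go negative
print(clipped)  # after clipping, a fully saturated red
```

With these illustrative numbers the output red is roughly doubled while green and blue are driven negative (and then clipped to zero), which is consistent with the "overpowering redness" described above. Since each camera has its own matrix and its own red-channel response at 656 nm, every camera rendering H-alpha differently seems a fair expectation.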

It's the kind of thing that Roger Clark may have already looked into.

My initial guess is that every camera will render H-alpha differently.

Mark

 sharkmelley's gear list:
Sony Alpha a7S +1 more
Greg VdB Veteran Member • Posts: 4,485
Re: What does the ColorMatrix do to H-alpha?

sharkmelley wrote:

Greg VdB wrote:

Mark, I admire your persistence in unraveling the RAW converter black box. This step in particular definitely gives you a much better understanding of how to go from linear output to a realistic colour image. The question then is: have you already tried applying this to one of your "bare" astro images? If Jon could get a script working for PI, I'm sure many people would be interested. On a side note (and I'm somewhat apprehensive to even utter it, in fear of starting this debate again), it would also allow a more straightforward comparison between the two post-processing methodologies.

Again, kudos for your efforts!

cheers,

Unfortunately, all my astro-images have been created using a modified camera i.e. modified so that H-alpha is not filtered out. I've tried applying this colour matrix to those images but it results in a very overpowering redness in all H-alpha areas even if I scale down the red channel during white balancing.

I see. Well, if you mod a camera for H-alpha, you have to live with the consequences as well, of course. What I mean is that it would be pointless to pursue "true" colours after astromodding a camera. That would be like modding your camera for daytime IR photography, and then trying to achieve natural colour balance...

Remember this matrix has been designed with human colour vision in mind. Specifically it has been optimised over a certain set of colour patches in a colour chart. I've just found this blog by Jim Kasson which describes the kind of process employed:

http://blog.kasson.com/?p=12486

http://blog.kasson.com/?p=12489

One very interesting issue immediately strikes me. What is the effect of this matrix on images containing H-alpha? The colour patches used for the optimisation of the matrix do not include an H-alpha patch. Therefore there is no a priori certainty that a raw converter using this matrix will render the colour of H-alpha correctly, in a human vision sense, even for an unmodified camera. This needs further investigation.

That's a bit beyond my understanding (and willingness to invest time to understand it) I'm afraid, but again, as soon as you make your camera more sensitive to Ha than the human eye, I see it as a moot point to pursue accurate "natural" colour.

It's the kind of thing that Roger Clark may have already looked into.

My initial guess is that every camera will render H-alpha differently.

I'm sure that is very much true, keeping in mind how different (generations of) camera brands render even simple daylight scenes in very different colours.

cheers,

Jon Rista Contributing Member • Posts: 681
Re: How to Manually Process a DSLR Image

I decided to run my Seagull nebula through PI again, but this time applying Mark's matrix first. I applied it with PixelMath on the linear data, after a linear fit and after removing light pollution gradients. There was no question that the matrix boosted the saturation of the color of the linear data, and no question that the final result is significantly more saturated. More saturated than I am normally able to get without applying the matrix.
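Applying the matrix to linear data, as Jon describes doing with PixelMath, amounts to multiplying every pixel's RGB triple by the 3x3 matrix and truncating negatives. A minimal NumPy sketch of the same operation (the image here is a small random stand-in for real stacked data):

```python
import numpy as np

# The Canon 600D colour matrix quoted in the opening post.
M = np.array([[ 2.12, -1.28,  0.16],
              [-0.24,  1.63, -0.38],
              [ 0.04, -0.69,  1.65]])

# Linear, white-balanced RGB data, shape (H, W, 3), values in [0, 1].
rng = np.random.default_rng(0)
img = rng.random((4, 4, 3))

# Apply the matrix to every pixel, then clip negatives
# (matching PixelMath's rescale/truncate behaviour).
out = np.clip(img @ M.T, 0.0, None)
```

In PixInsight itself this would be three per-channel PixelMath expressions of the form `2.12*$T[0] - 1.28*$T[1] + 0.16*$T[2]` (and likewise for the other two rows); the exact expressions are a guess at the setup Jon used, not something stated in the thread.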

Original processing, pure PixInsight and G2V calibration

Reprocessing, with Canon Transformation Matrix Applied, original G2V calibration, reapplied original processing.

I do feel, however, that the image is too saturated now, and a bit more red-shifted than I believe is accurate. The color shift also seems to have nuked the visibility of a couple of the faint blue reflection nebulae, the one near the center of the image, and the one towards the lower left. The bigger blue reflection to the upper right has also been dampened. I did not re-do the G2V calibration on the image after applying the matrix, so my guess is the color calibration is actually off. That said, even if the image was less red, it is more saturated than I myself prefer. For those who do like more saturated images, this is certainly one way to bring out the colors with PixInsight.

 Jon Rista's gear list:
Canon EOS 5D Mark III Sony a6000 Canon EF 50mm f/1.4 USM Canon EF 16-35mm F2.8L II USM Canon EF 100-400mm f/4.5-5.6L IS USM +4 more