Color Science and Post Processing Order

dbmcclain

I just bought a Hasselblad X2D and it arrived one week ago. I am now learning about photographic post-processing.

I arrive by way of astrophysics and PixInsight processing of both filter-wheel multi-color exposure stacks and CFA image stacks from Bayer-matrix sensors.

There is no ISO setting on an astronomical camera, and I have full control over which linear and nonlinear processing steps I apply to raw images once they have been corrected for flat field, read noise, and bias (and possibly deBayered).

So, now in the photon-rich environment of photography, there is ISO -- which appears to be simply a way of scaling the significant portion of the 16-bit ADC outputs into the upper 8 bits shown on histograms in programs like Phocus, Capture One, and others.
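
A minimal numpy sketch of the scaling described above -- treating ISO as a linear gain on the raw values and keeping only the upper 8 bits, roughly what a histogram display shows. The gain value and the synthetic data are illustrative assumptions, not Hasselblad's actual implementation:

import numpy as np

# Illustrative only: model ISO as a linear gain applied to 16-bit raw values,
# then keep the upper 8 bits, roughly what an 8-bit histogram display shows.
rng = np.random.default_rng(0)
raw16 = rng.poisson(lam=800, size=(100, 100)).astype(np.uint16)   # synthetic sensor data

iso_gain = 4.0                                         # hypothetical two-stop gain
amplified = np.clip(raw16 * iso_gain, 0, 65535).astype(np.uint16)

display8 = (amplified >> 8).astype(np.uint8)           # upper 8 bits only
counts, _ = np.histogram(display8, bins=256, range=(0, 256))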

I am confused that there is so little discussion of deBayering of CFAs as an integral part of "color science" in the popular posts. And there appears to be no available information from any of the camera manufacturers about how they deBayer their raw data. Likewise, there is no explicit mention of the ordering of exposure correction, histogram adjustments, saturation changes, etc.

My impression is that Phocus reads in a Bayer'ed image from the camera and applies lens distortion and vignetting corrections (akin to my flat-field corrections) and deBayering, plus a possible nonlinear gamma correction, on the way to the in-memory image shown on screen and used for export conversion.
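
As a toy illustration of that inferred order (corrections, then deBayering, then gamma), here is a Python sketch. It is a guess at the ordering only; the vignetting map, the crude half-resolution deBayer, and the gamma value are stand-ins, not Phocus internals:

import numpy as np

def develop_sketch(cfa_rggb, vignette_gain, gamma=2.2):
    # flat-field-like vignetting correction, performed in linear space
    x = cfa_rggb.astype(np.float64) * vignette_gain
    # crude half-resolution "deBayer": one RGB triplet per 2x2 RGGB tile
    r = x[0::2, 0::2]
    g = 0.5 * (x[0::2, 1::2] + x[1::2, 0::2])
    b = x[1::2, 1::2]
    rgb = np.stack([r, g, b], axis=-1)
    # nonlinear display gamma applied last
    return np.clip(rgb / rgb.max(), 0.0, 1.0) ** (1.0 / gamma)

cfa = np.random.default_rng(1).integers(0, 4000, size=(64, 64)).astype(np.float64)
out = develop_sketch(cfa, vignette_gain=np.ones((64, 64)))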

However, the .FFF files remain as RGGB Bayer'ed images, and are possibly stored without lens distortion and vignetting corrections. The .FFF files also do not appear to have the Gamma correction folded in.

All these camera corrections appear to be performed by Phocus, not by the camera body's processor. I cannot say what happens to in-camera JPEG images -- certainly much of that happens in camera -- but that is of little concern to me. I am mainly interested in RAW processing.

Do any of you veterans out there know any of these details of processing? Thanks in advance!
 
Sounds like you're fairly technical. One way to learn a lot about this is to write your own raw developer. Matlab (which I use) and Python (which I don't) both have extensive image libraries. Matlab is the lingua franca for color science, or at least it used to be.
Heh! Yes, I have written many in the past... But CFA's are a bit of a new twist for me.

Lisp is my main language, and has been for more than 30 years. But I can tackle just about anything. Been computing for more than 50 years.
Then try doing the conversion to CIE space before and after demosaicing and see which you like better. The standard demosaicing software in Matlab uses gradient-corrected linear interpolation.

https://www.mathworks.com/help/images/ref/demosaic.html

Here's one with more algorithms:

https://www.mathworks.com/matlabcentral/fileexchange/63628-demosaicing_v2-input-type

Here is a link to a lot of papers with demosaicing code:

https://danielkhashabi.com/files/2013_2014_demosaicing/demosaicing.html
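
For reference, a plain bilinear demosaic of an RGGB mosaic in Python -- a deliberately simplified stand-in for the gradient-corrected linear interpolation mentioned above, just to show the mechanics; the RGGB layout is assumed:

import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear_rggb(cfa):
    h, w = cfa.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0   # green: 4 nearest neighbours
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0   # red/blue: bilinear weights

    r = convolve(cfa * r_mask, k_rb, mode='mirror')
    g = convolve(cfa * g_mask, k_g,  mode='mirror')
    b = convolve(cfa * b_mask, k_rb, mode='mirror')
    return np.stack([r, g, b], axis=-1)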
 
I'm trying to say that splitting 4 colour channels into 12 makes demosaicking harder and less stable, and doesn't promise any real advantages. I've tried this approach, and others have tried it too - the result is that the RGBE Bayer filtering scheme is all but abandoned.
Sorry, I'm not following you...
Or vice versa, I'm not following. When you say "color science" and "deBayering" in one sentence, how do you mean?
I meant that the act of deBayering, however you do that, itself introduces color artifacts, even before you get to the stage of color transformation. How much RGB at each pixel site?
Of course, and the space in which you perform the demosaicing will affect the details of the artifacts. A bit. But enough to make a difference in the acceptability of the resultant image? I think not.

So I don't see how you can separate the act of deBayering from all the other color transformations. They act in concert with each other.
I've never considered them closely coupled.
 
I'm trying to say that splitting 4 colour channels into 12 makes demosaicking harder and less stable, and doesn't promise any real advantages. I've tried this approach, and others have tried it too - the result is that the RGBE Bayer filtering scheme is all but abandoned.
Sorry, I'm not following you...
Or vice versa, I'm not following. When you say "color science" and "deBayering" in one sentence, how do you mean?
I meant that the act of deBayering, however you do that, itself introduces color artifacts
Yes, that's the nature of interpolation - it isn't precise.
How much RGB at each pixel site?
"R" pixel contains 100% R, 0% of G&B, and so on. Those RGGB are not colours, they are channel labels only.
So I don't see how you can separate the act of deBayering from all the other color transformations.
I don't see the connection. Demosaicking starts and ends in a non-colorimetric input device space, essentially the colours are not yet established, and thus not a subject of "color science", as it deals with averaged values, not noise, and not artifacts.

--
http://www.libraw.org/
 
Sounds like you're fairly technical. One way to learn a lot about this is to write your own raw developer. Matlab (which I use) and Python (which I don't) both have extensive image libraries. Matlab is the lingua franca for color science, or at least it used to be.
Heh! Yes, I have written many in the past... But CFA's are a bit of a new twist for me.

Lisp is my main language, and has been for more than 30 years. But I can tackle just about anything. Been computing for more than 50 years.
Then try doing the conversion to CIE space before and after demosaicing and see which you like better. The standard demosaicing software in Matlab uses gradient-corrected linear interpolation.

https://www.mathworks.com/help/images/ref/demosaic.html

Here's one with more algorithms:

https://www.mathworks.com/matlabcentral/fileexchange/63628-demosaicing_v2-input-type

Here is a link to a lot of papers with demosaicing code:

https://danielkhashabi.com/files/2013_2014_demosaicing/demosaicing.html
Wow! Thanks for that information!

But at the root, the question should really address - how does Hasselblad and Phocus choose to do these things?

The Hasselblad, for me, fulfills part of my bucket list. Ever since I watched a Hasselblad float away from the Gemini Spacecraft when I was really into photography as a teen. I had a Yashica SLR and was really into Tri-X film and doing all my own developing and printing.

The Hasselblad was a delicious lofty goal. But the real reason I chose the X2D is because of 10-15 years of struggling to read the legends in the eyepiece and on the LCD backs of Canon and Sony cameras. My eyesight is getting more and more far-sighted as I age, and with the X2D I can finally read the settings and menus while in the field.
 
I don't see the connection. Demosaicking starts and ends in a non-colorimetric input device space, essentially the colours are not yet established, and thus not a subject of "color science", as it deals with averaged values, not noise, and not artifacts.
Right. Although there is color engineering that considers noise when choosing CFA spectra.

At the risk of putting too fine a point on this, the raw developer assigns colors to triplets in the camera's native capture space. There is no color in raw files.
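
A sketch of what "assigning colors" can look like in code: white-balanced, demosaicked camera-space triplets pushed through a 3x3 forward matrix into CIE XYZ. The matrix values here are made up for illustration; a real raw developer uses a camera-specific matrix obtained by profiling:

import numpy as np

# Hypothetical camera-RGB -> XYZ forward matrix, for illustration only.
cam_to_xyz = np.array([[0.7, 0.2, 0.1],
                       [0.3, 0.6, 0.1],
                       [0.0, 0.1, 0.9]])

def assign_color(camera_rgb):
    # camera_rgb: (..., 3) linear, white-balanced, demosaicked camera-space values
    return camera_rgb @ cam_to_xyz.T

xyz = assign_color(np.array([[0.5, 0.0, 0.0]]))   # a pure-R camera triplet only becomes a color here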

By the way, Iliah knows more about the practice of this stuff than I'll ever know. When I worked in color science, I was a researcher. We used to call IBM Research "where the rubber meets the sky".

--
https://blog.kasson.com
 
Sounds like you're fairly technical. One way to learn a lot about this is to write your own raw developer. Matlab (which I use) and Python (which I don't) both have extensive image libraries. Matlab is the lingua franca for color science, or at least it used to be.
Heh! Yes, I have written many in the past... But CFA's are a bit of a new twist for me.

Lisp is my main language, and has been for more than 30 years. But I can tackle just about anything. Been computing for more than 50 years.
Then try doing the conversion to CIE space before and after demosaicing and see which you like better. The standard demosaicing software in Matlab uses gradient-corrected linear interpolation.

https://www.mathworks.com/help/images/ref/demosaic.html

Here's one with more algorithms:

https://www.mathworks.com/matlabcentral/fileexchange/63628-demosaicing_v2-input-type

Here is a link to a lot of papers with demosaicing code:

https://danielkhashabi.com/files/2013_2014_demosaicing/demosaicing.html
Wow! Thanks for that information!

But at the root, the question should really address - how does Hasselblad and Phocus choose to do these things?
I don't think they want us to know.
The Hasselblad, for me, fulfills part of my bucket list. Ever since I watched a Hasselblad float away from the Gemini Spacecraft when I was really into photography as a teen. I had a Yashica SLR and was really into Tri-X film and doing all my own developing and printing.

The Hasselblad was a delicious lofty goal. But the real reason I chose the X2D is because of 10-15 years of struggling to read the legends in the eyepiece and on the LCD backs of Canon and Sony cameras. My eyesight is getting more and more far-sighted as I age, and with the X2D I can finally read the settings and menus while in the field.
That is nice.
 
I'm trying to say that splitting 4 colour channels into 12 makes demosaicking harder and less stable, and doesn't promise any real advantages. I've tried this approach, and others have tried it too - the result is that the RGBE Bayer filtering scheme is all but abandoned.
Sorry, I'm not following you...
Or vice versa, I'm not following. When you say "color science" and "deBayering" in one sentence, how do you mean?
I meant that the act of deBayering, however you do that, itself introduces color artifacts
Yes, that's the nature of interpolation - it isn't precise.
How much RGB at each pixel site?
"R" pixel contains 100% R, 0% of G&B, and so on. Those RGGB are not colours, they are channel labels only.
So I don't see how you can separate the act of deBayering from all the other color transformations.
I don't see the connection. Demosaicking starts and ends in a non-colorimetric input device space, essentially the colours are not yet established, and thus not a subject of "color science", as it deals with averaged values, not noise, and not artifacts.
Okay, I have seen mention elsewhere that the RGB channels are not colors. And so I have to bend my thinking a bit to get into this mind frame.

I did learn from some time doing Astrophotography, that colors are really meaningless human perceptions and hugely subjective. All my career has been spent using "color" filters at numerous different wavelengths. And, to me, the color of a filter denotes some physical process in the subject under investigation.

I started out in Radio Astronomy at 115 GHz, observing the J 1-0 transition line in Carbon Monoxide, in nearby galactic arms. Then migrated blueward to 10-20 microns, observing dust around stars and novae. Then more blueward to 5 microns for target detection, and 700 nm to perform a Northern Sky Survey in near IR. Finally, for my own edification, I spent some time in the visible regime to learn just exactly what nebula lies where. And that's where I learned about spoof colors as human perceptual targets.
 
I don't see the connection. Demosaicking starts and ends in a non-colorimetric input device space, essentially the colours are not yet established, and thus not a subject of "color science", as it deals with averaged values, not noise, and not artifacts.
Right. Although there is color engineering that considers noise when choosing CFA spectra.

At the risk of putting too fine a point on this, the raw developer assigns colors to triplets in the camera's native capture space. There is no color in raw files.

By the way, Iliah knows more about the practice of this stuff than I'll ever know. When I worked in color science, I was a researcher. We used to call IBM Research "where the rubber meets the sky".
Just as an aside, I read your bio, and saw your connection to Almaden Research Center. I worked for a while for IBM back in the early 80's and spent a lot of time at Yorktown Heights.

I like reading your Blog posts.
 
I did learn from some time doing Astrophotography, that colors are really meaningless human perceptions and hugely subjective.
In astro, where false color is the name of the game, and the captured spectra are wildly different from Luther-Ives, that's true.

In regular photography, less so. Human perceptions are not meaningless, and many of them are quantified and modeled. It is true that color is a psychological term, not a physical one, but the conversion of spectra to color has some rigor.

However, in most kinds of photography other than documentation, color accuracy is not the goal. Pleasing color is the goal, and the persons being pleased -- or not -- may have wildly different ideas about what is pleasant.
All my career has been spent using "color" filters at numerous different wavelengths. And, to me, the color of a filter denotes some physical process in the subject under investigation.
That connection is tenuous. There are an infinite number of spectra that can resolve to any given color.
 
Hi,

I was with IBM from 81 thru 94. Poughkeepsie and Kingston NY and RTP NC. More development than research. And we referred to what the guys at the Watson Research Center did as Pie In The Sky and what we did as Pie On The Plate. ;)

The phrase "rubber meeting the sky" I heard used in auto racing, where it meant your car is now on its roof... AKA, a shunt.

Stan
 
Sounds like you're fairly technical. One way to learn a lot about this is to write your own raw developer. Matlab (which I use) and Python (which I don't) both have extensive image libraries. Matlab is the lingua franca for color science, or at least it used to be.
Heh! Yes, I have written many in the past... But CFA's are a bit of a new twist for me.

Lisp is my main language, and has been for more than 30 years. But I can tackle just about anything. Been computing for more than 50 years.
Then try doing the conversion to CIE space before and after demosaicing and see which you like better.
The more I think about this, the more I'm convincing myself that the demosaicing should be performed in native camera space. It is intellectually cleaner to assign colors after demosaicing when you have triplets at every location. In a perfect world you'd probably want to apply a tone curve to make the interpolation more perceptually uniform, but I'm not convinced that would offer any real advantages.
 
I don't see the connection. Demosaicking starts and ends in a non-colorimetric input device space, essentially the colours are not yet established, and thus not a subject of "color science", as it deals with averaged values, not noise, and not artifacts.
Right. Although there is color engineering that considers noise when choosing CFA spectra.

At the risk of putting too fine a point on this, the raw developer assigns colors to triplets in the camera's native capture space. There is no color in raw files.
IMO any modification of raw data prior to demosaicking should have the goal of improving demosaicking by creating conditions for better interpolation (including better detection of gradients / directions). That's why in RPP we apply white balance and gamma prior to demosaicking, and we also apply certain sharpening to the channels. Compensating vignetting before demosaicking seems to be yet another useful tool here.
By the way, Iliah knows more about the practice of this stuff than I'll ever know. When I worked in color science, I was a researcher. We used to call IBM Research "where the rubber meets the sky".
--
http://www.libraw.org/
 
I don't see the connection. Demosaicking starts and ends in a non-colorimetric input device space, essentially the colours are not yet established, and thus not a subject of "color science", as it deals with averaged values, not noise, and not artifacts.
Right. Although there is color engineering that considers noise when choosing CFA spectra.

At the risk of putting too fine a point on this, the raw developer assigns colors to triplets in the camera's native capture space. There is no color in raw files.
IMO any modification of raw data prior to demosaicking should have the goal of improving demosaicking by creating conditions for better interpolation (including better detection of gradients / directions). That's why in RPP we apply white balance and gamma prior to demosaicking, and we also apply certain sharpening to the channels. Compensating vignetting before demosaicking seems to be yet another useful tool here.
I'll buy all that. The gamma would make the interpolation more perceptually uniform. And vignetting compensation would logically be done in a linear space.
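
A sketch of the idea being agreed on here -- scale the mosaic by white-balance multipliers and move it into a gamma space before interpolating, then return to linear afterwards. The RGGB layout, the multipliers, and the gamma value are assumptions for illustration; this is not RPP's actual code:

import numpy as np

def prepare_mosaic(cfa_rggb, wb=(2.0, 1.0, 1.0, 1.5), gamma=2.2):
    # wb = per-site multipliers for R, G1, G2, B (hypothetical values)
    x = cfa_rggb.astype(np.float64)
    x[0::2, 0::2] *= wb[0]
    x[0::2, 1::2] *= wb[1]
    x[1::2, 0::2] *= wb[2]
    x[1::2, 1::2] *= wb[3]
    # interpolate in a gamma space so errors are closer to perceptually uniform
    return np.clip(x / x.max(), 0.0, 1.0) ** (1.0 / gamma)

def back_to_linear(rgb_gamma, gamma=2.2):
    # undo the gamma after demosaicking, before the color matrix is applied
    return rgb_gamma ** gamma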
 
The more I think about this, the more I'm convincing myself that the demosaicing should be performed in native camera space. It is intellectually cleaner to assign colors after demosaicing when you have triplets at every location. In a perfect world you'd probably want to apply a tone curve to make the interpolation more perceptually uniform, but I'm not convinced that would offer any real advantages.
Yes, scientifically, this would be my approach too.

I'm still wrestling with the distinction between the RGB channels and "color". Granted that colors are represented as triplets of RGB, a triplet such as (r, 0, 0) still represents a color and the reading from the R channel.

I don't know how such a "color" as (r, 0, 0) would be represented in CIE space and ultimately in some gamut. But surely it would be a physically perceived color to our visual senses.

I did see, however, in reading some posts of someone who made the transition from Phase One to Fujifilm, that the lack of IR and UV filtering in front of a sensor could make the camera's impression of color become warped relative to human vision since the sensors have sensitivity where we do not, and some surfaces are brightly reflective in the near IR.
 
IMO any modification of raw data prior to demosaicking should have the goal of improving demosaicking by creating conditions for better interpolation (including better detection of gradients / directions). That's why in RPP we apply white balance and gamma prior to demosaicking, and we also apply certain sharpening to the channels. Compensating vignetting before demosaicking seems to be yet another useful tool here.
RPP? And how do you perform a white balance to the raw pixels, even before you know what the camera thought white is?

In Astrophotography, after removing the base level color gradients from an image (long after demosaicing and flat-field correction), we then use a statistical survey of the bright stars as representatives of white. That gives us the slope of corrections in the 3 color channels.

In photography? I'm puzzled about how Hasselblad manages in-camera white balance, but it seems to get it pretty darn close to correct.
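
A sketch of the star-based balance described a couple of paragraphs up: take the pixels flagged as bright stars, treat their per-channel medians as white, and scale red and blue to green. The image array and the star mask are assumed inputs:

import numpy as np

def star_white_balance(rgb, star_mask):
    # star_mask: boolean (H, W) array marking pixels that belong to bright stars
    med = np.array([np.median(rgb[..., c][star_mask]) for c in range(3)])
    gains = med[1] / med             # normalize R and B to the green median
    return rgb * gains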
 
In photography? I'm puzzled about how Hasselblad manages in-camera white balance, but it seems to get it pretty darn close to correct.
Ah yes... I was just reading about using image edge gradients as representatives for gray levels. The assumption being that edge gradients are uniform across all colors. Perhaps that is how Hasselblad can take an image and derive white balance?
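
A sketch of that "grey edge" idea: assume the average edge gradient is achromatic, so per-channel mean gradient magnitudes give the white-balance gains. This is a generic illustration of the published grey-edge approach, not a claim about what Hasselblad's firmware does:

import numpy as np

def grey_edge_gains(rgb):
    mags = []
    for c in range(3):
        gy, gx = np.gradient(rgb[..., c].astype(np.float64))
        mags.append(np.mean(np.hypot(gx, gy)))
    mags = np.array(mags)
    return mags[1] / mags            # gains scaled so green stays at 1.0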
 
The more I think about this, the more I'm convincing myself that the demosaicing should be performed in native camera space. It is intellectually cleaner to assign colors after demosaicing when you have triplets at every location. In a perfect world you'd probably want to apply a tone curve to make the interpolation more perceptually uniform, but I'm not convinced that would offer any real advantages.
Yes, scientifically, this would be my approach too.

I'm still wrestling with the distinction between the RGB channels and "color".
The RGB channels aren't colors unless the camera meets the Luther-Ives criterion.


Granted that colors are represented as triplets of RGB,
In some colorimetric color space.
a triplet such as (r, 0, 0) still represents a color and the reading from the R channel.
Nope. It's not a color. It's the camera's response to some spectrum.
I don't know how such a "color" as (r, 0, 0) would be represented in CIE space and ultimately in some gamut. But surely it would be a physically perceived color to our visual senses.
The color is assigned to raw triplets during development. Before that, the triplets aren't colors.
I did see, however, in reading some posts of someone who made the transition from Phase One to Fujifilm, that the lack of IR and UV filtering in front of a sensor could make the camera's impression of color become warped relative to human vision since the sensors have sensitivity where we do not, and some surfaces are brightly reflective in the near IR.
That's called capture metameric error.

 
IMO any modification of raw data prior to demosaicking should have the goal of improving demosaicking by creating conditions for better interpolation (including better detection of gradients / directions). That's why in RPP we apply white balance and gamma prior to demosaicking, and we also apply certain sharpening to the channels. Compensating vignetting before demosaicking seems to be yet another useful tool here.
RPP? And how do you perform a white balance to the raw pixels, even before you know what the camera thought white is?
Camera "thoughts" are recorded in metadata, and click-grey methods, "grey world" algorithms, and user adjustment of WB multipliers all don't need demosaicking. Yes, changing WB settings in this case means re-demosaicking, for better accuracy.

Flat-field application, as well as removal of black non-uniformity, patterns, contouring, and gradients, is better done prior to demosaicking, in linear space. Otherwise demosaicking is guided by non-image information.
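
As a small illustration of the point that these estimates do not need demosaicking, here is a "grey world" gain computed straight from the RGGB mosaic; the layout is assumed:

import numpy as np

def grey_world_gains_cfa(cfa):
    # channel means taken directly from the mosaic sites
    r = cfa[0::2, 0::2].mean()
    g = 0.5 * (cfa[0::2, 1::2].mean() + cfa[1::2, 0::2].mean())
    b = cfa[1::2, 1::2].mean()
    return np.array([g / r, 1.0, g / b])   # WB multipliers for R, G, B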
 