Color Science and Post Processing Order

dbmcclain

I just bought a Hasselblad X2D and it arrived one week ago. I am now learning about photographic post-processing.

I arrive by way of astrophysics and PixInsight processing of both filter-wheel multi-color exposure stacks and CFA image stacks from Bayer-matrix sensors.

There is no ISO setting on an astronomical camera, and I have full control over what linear and nonlinear processing steps I apply to raw images once they have been corrected for flat field, read noise, and bias (and possibly deBayered).

So, now in the photon-rich environment of photography, there is ISO -- which appears to be simply a way of scaling the significant portion of the 16-bit ADC outputs to the upper 8 bits shown on histograms in programs like Phocus, Capture One, and others.

I am confused that there is so little discussion of deBayering of CFA's as an integral part of "color science" in the popular posts. And there appears to be no available information from any of the camera manufacturers about how they deBayer their raw data. Likewise, there are no explicit mentions of the ordering of exposure correction, histogram adjustments, saturation changes, etc.

My impression is that Phocus reads in a Bayer'ed image from the camera and applies lens distortion and vignetting corrections (akin to my flat field corrections), and deBayering, in addition to a possible nonlinear Gamma correction, on the way to the in-memory image shown on screen, and used for export conversion.

However, the .FFF files remain as RGGB Bayer'ed images, and are possibly stored without lens distortion and vignetting corrections. The .FFF files also do not appear to have the Gamma correction folded in.

All these camera corrections appear to be performed by Phocus, and not by the camera body processor. I cannot say what happens to in-camera JPEG images. Certainly much of this happens in camera, but that is of little concern to me. I am mainly interested in RAW processing.

Do any of you veterans out there know any of these details of processing? Thanks in advance!
 
I just bought a Hasselblad X2D and it arrived one week ago. I am now learning about photographic post-processing.

I arrive by way of astrophysics and PixInsight processing of both filter-wheel multi-color exposure stacks and CFA image stacks from Bayer-matrix sensors.

There is no ISO setting on an astronomical camera, and I have full control over what linear and nonlinear processing steps I apply to raw images once they have been corrected for flat field, read noise, and bias (and possibly deBayered).

So, now in the photon-rich environment of photography, there is ISO -- which appears to be simply a way of scaling the significant portion of the 16-bit ADC outputs to the upper 8 bits shown on histograms in programs like Phocus, Capture One, and others.
No. The histograms shown in raw developers have a nonlinear x axis; in Lightroom it's the sRGB tone curve, and the primaries of the histogram plot are the ProPhoto RGB primaries. The calculation precision is, for the most part, 32-bit linear in a color space with those primaries.
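
To make the nonlinear axis concrete, here is a toy numpy sketch of my own (not Lightroom's code): histogram the same linear data directly, and again after pushing it through the sRGB transfer curve.

    import numpy as np

    # Stand-in "raw" data: linear values skewed toward the shadows, as raw captures usually are.
    linear = np.random.rand(1_000_000) ** 2.2

    def srgb_oetf(x):
        """sRGB transfer curve (linear light -> encoded value)."""
        x = np.clip(x, 0.0, 1.0)
        return np.where(x <= 0.0031308, 12.92 * x, 1.055 * x ** (1 / 2.4) - 0.055)

    hist_linear, _ = np.histogram(linear, bins=256, range=(0, 1))
    hist_srgb, _ = np.histogram(srgb_oetf(linear), bins=256, range=(0, 1))

    # The encoded histogram spreads the shadows over many more buckets, which is
    # why a raw developer's histogram looks nothing like a histogram of the raw data.
    print(hist_linear[:8])
    print(hist_srgb[:8])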

Also, the ISO knob controls the read noise to some extent, and is the only way you can control the conversion gain.
I am confused that there is so little discussion of deBayering of CFA's as an integral part of "color science" in the popular posts.
The color engineering can start with the demosaiced image in camera space, or the demosaicing can occur after conversion to a CIE-referenced space. In either case, demosaicing is not part of color engineering, although it is important.
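
As a sketch of the separation I mean: once you have three values per pixel (however they were demosaiced), the color transform is just a per-pixel 3x3 matrix. The matrix below is a placeholder, not any camera's actual profile.

    import numpy as np

    # Assume rgb_cam is a demosaiced, white-balanced, still-linear image in
    # camera-native RGB, shape (H, W, 3). Stand-in data here.
    rgb_cam = np.random.rand(4, 4, 3)

    # Hypothetical camera-RGB -> CIE XYZ matrix; real ones come from profiling the camera.
    cam_to_xyz = np.array([
        [0.60, 0.28, 0.07],
        [0.25, 0.68, 0.07],
        [0.02, 0.11, 0.95],
    ])

    # The transform neither knows nor cares how the three values at each pixel
    # were demosaiced.
    xyz = rgb_cam @ cam_to_xyz.T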
And there appears to be no available information from any of the camera manufacturers about how they deBayer their raw data.
You mean in their proprietary raw developers? Except for Phocus, most people on this forum don't use those raw developers.
Likewise, there are no explicit mentions of the ordering of exposure correction, histogram adjustments, saturation changes, etc.
In Lr, the ordering takes place inside a black box. Adobe says it's optimal. There's no way for the rest of us to know for sure.
My impression is that Phocus reads in a Bayer'ed image from the camera and applies lens distortion and vignetting corrections (akin to my flat field corrections), and deBayering, in addition to a possible nonlinear Gamma correction, on the way to the in-memory image shown on screen, and used for export conversion.
I believe, but do not know for sure, that those corrections are performed on the demosaiced image.
However, the .FFF files remain as RGGB Bayer'ed images, and are possibly stored without lens distortion and vignetting corrections. The .FFF files also do not appear to have the Gamma correction folded in.

All these camera corrections appear to be performed by Phocus, and not by the camera body processor.
In general, camera manufacturers tend to avoid performing in-camera corrections on raw files that can be performed at least as well in post production. I wish they were more rigorous about that.
I cannot say what happens to in-camera JPEG images. Certainly much of this happens in camera, but that is of little concern to me. I am mainly interested in RAW processing.

Do any of you veterans out there know any of these details of processing? Thanks in advance!
--
https://blog.kasson.com
 
My impression is that Phocus reads in a Bayer'ed image from the camera and applies lens distortion and vignetting corrections (akin to my flat field corrections), and deBayering, in addition to a possible nonlinear Gamma correction, on the way to the in-memory image shown on screen, and used for export conversion.
I believe, but do not know for sure, that those corrections are performed on the demosaiced image.
Since the Bayer pattern is impressed upon physical pixels, any flat-fielding corrections, such as for lens distortion and vignetting, would need to be applied to the non-deBayered image coming right off the sensor. deBayering mixes together different physical locations of the sensor and would impede a proper correction for these optical effects. Eh?
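
For what it's worth, this is the kind of mosaic-domain correction I have in mind from astro work -- a per-photosite gain map applied before any deBayering. The gain field below is a made-up radial model, just for illustration.

    import numpy as np

    # cfa: raw RGGB mosaic straight off the sensor, shape (H, W), linear DN (stand-in data).
    H, W = 8, 8
    cfa = np.random.randint(200, 4000, size=(H, W)).astype(np.float64)

    # Flat-field / vignetting gain field: stronger correction toward the corners.
    yy, xx = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W), indexing="ij")
    gain = 1.0 + 0.3 * np.hypot(xx, yy)

    # Each photosite is corrected in place, without mixing neighboring Bayer positions.
    cfa_corrected = cfa * gain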

I do see that many of the adjustments to exposure appear to be working against the linear image prior to any "Gamma" correction - which is the conversion to a color space that you indicated.

In PixInsight, I can read the raw files, and I see 120 columns of (deBayered) overscan on each side of the sensor, as well as 100 lines of overscan at the top (beginning of file). No doubt those overscan regions (some 70 columns each) between the hard black/white zones and the imaging area could be used to derive the read noise from the exposure.
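
A sketch of the kind of estimate I mean, assuming the overscan columns have already been located in the raw array (the geometry and data below are stand-ins):

    import numpy as np

    # Full sensor readout including overscan, shape (H, W), linear DN (stand-in data).
    H, W = 100, 500
    raw = np.random.normal(64, 3, size=(H, W))

    # Suppose the left-hand overscan occupies the first 120 columns.
    overscan = raw[:, :120]

    # With no light falling on these pixels, their scatter about the bias level
    # is dominated by read noise.
    bias_level = np.median(overscan)
    read_noise_dn = np.std(overscan)
    print(f"bias ~ {bias_level:.1f} DN, read noise ~ {read_noise_dn:.2f} DN")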

As for bias, I would imagine that Hasselblad's complete characterization of each sensor builds that into the camera processor. Or perhaps, for these photon-rich regimes, you don't even bother with bias corrections.

It seems that exposure adjustments occur prior to the nonlinear mapping of RGB values to a color space. And since we are presented with histograms of 8-bit significance, any adjustments applied directly at the histogram would necessarily be operating post-nonlinear conversion.

But thanks so much for piping in there about the histogram axis. Not simply a bit-shift post ADC, but something more elaborate than I had imagined.

Cheers
 
I am confused that there is so little discussion of deBayering of CFA's as an integral part of "color science" in the popular posts.
Applying colour transforms to mosaicked raw channels splits them.
Yes, but how are they split and combined?

I can immediately give you three different methods for this - 2x2 superpixel binning, bilinear interpolation, and VNG (variable number of gradients) - each of which has its own advantages and disadvantages. And VNG uses a 5x5 region for computing its gradients.
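
The superpixel method, for instance, is simple enough to write down; here is a sketch for an RGGB pattern (bilinear and VNG instead interpolate the missing samples at full resolution):

    import numpy as np

    def superpixel_rggb(cfa):
        """2x2 superpixel 'demosaic' of an RGGB mosaic: one RGB pixel per 2x2 cell.
        Halves the resolution but invents nothing."""
        r = cfa[0::2, 0::2]
        g = 0.5 * (cfa[0::2, 1::2] + cfa[1::2, 0::2])
        b = cfa[1::2, 1::2]
        return np.dstack([r, g, b])

    cfa = np.random.rand(8, 8)          # stand-in mosaic with even dimensions
    rgb_half = superpixel_rggb(cfa)     # shape (4, 4, 3)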

No doubt you could dream up many more methods. We are operating against a spatially under-sampled image to derive pixel values between the samples. By rights, from Mr. Nyquist, we should only be able to unambiguously define an N/2 x N/2 image from N x N pixels. Anything else is some sort of analytic continuation for super-resolution.

And all these different methods of mixing will affect the color presentation in different ways.

Eh?
 
My impression is that Phocus reads in a Bayer'ed image from the camera and applies lens distortion and vignetting corrections (akin to my flat field corrections), and deBayering, in addition to a possible nonlinear Gamma correction, on the way to the in-memory image shown on screen, and used for export conversion.
I believe, but do not know for sure, that those corrections are performed on the demosaiced image.
Since the Bayer pattern is impressed upon physical pixels, any flat-fielding corrections, such as for lens distortion and vignetting, would need to be applied to the non-deBayered image coming right off the sensor. deBayering mixes together different physical locations of the sensor and would impede a proper correction for these optical effects. Eh?
For astronomical work, things are different, but for normal photography, distortion and vignetting vary at low frequencies, so correction after demosaicing works just fine.
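
As an illustration of what "low frequency" buys you: the correction is a smooth gain field, so it can simply be divided out of the demosaiced image. The falloff model below is a made-up cos^4-style example, not any raw developer's actual model.

    import numpy as np

    # Demosaiced linear image, shape (H, W, 3) (stand-in data).
    H, W = 6, 9
    rgb = np.random.rand(H, W, 3)

    # Smooth, slowly varying vignetting falloff toward the corners.
    yy, xx = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W), indexing="ij")
    falloff = np.cos(np.arctan(0.7 * np.hypot(xx, yy))) ** 4

    # The same gain applies to all three channels at a pixel, so applying it
    # before or after demosaicing makes very little difference.
    rgb_corrected = rgb / falloff[..., None]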
I do see that many of the adjustments to exposure appear to be working against the linear image prior to any "Gamma" correction - which is the conversion to a color space that you indicated.
What makes you say that? Almost all raw developers apply a nonlinear tone curve by default. Lightroom's exposure correction certainly does not work linearly; there is a big shoulder introduced.
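
To show the kind of thing I mean, here is a toy comparison of a straight +1 EV linear push against a push with a soft highlight shoulder. The shoulder here is a Reinhard-style stand-in of my own; Lightroom's actual curve is not published.

    import numpy as np

    x = np.linspace(0, 1, 9)            # linear scene values
    gain = 2.0 ** 1.0                   # a +1 EV "Exposure" move

    linear_push = np.clip(gain * x, 0, 1)           # naive linear scaling: clips highlights hard
    shouldered = (gain * x) / (1 + (gain - 1) * x)  # smooth rolloff that still maps 1.0 to 1.0

    print(np.round(linear_push, 3))
    print(np.round(shouldered, 3))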

In PixInsight, I can read the raw files, and I see 120 columns of (deBayered) overscan on each side of the sensor, as well as 100 lines of overscan at the top (beginning of file). No doubt those overscan regions (some 70 columns each) between the hard black/white zones and the imaging area could be used to derive the read noise from the exposure.
Calibration pixels are common in photographic sensors. They are cropped out of the final image, but you can see them in programs like RawDigger, or in the one that you referenced, about which I have no knowledge. I don't see the relevance of that to the topic under discussion, though.
As for bias, I would imagine that Hasselblad's complete characterization of each sensor builds that into the camera processor. Or perhaps, for these photon-rich regimes, you don't even bother with bias corrections.
By bias, do you mean black point? Most raw developers don't do local black point corrections.
It seems that exposure adjustments occur prior to the nonlinear mapping of RGB values to a color space.
Again, why do you say that?
And since we are presented with histograms of 8-bit significance,
Do you mean there are 255 buckets? The number of buckets in the histogram has nothing to do with the precision of the data.
any adjustments applied directly at the histogram would necessarily be operating post-nonlinear conversion.
In Lr, the histogram represents the completely developed image, with the effect of all controls applied. It is far, far from the raw histogram.
But thanks so much for piping in there about the histogram axis. Not simply a bit-shift post ADC, but something more elaborate than I had imagined.
 
I am confused that there is so little discussion of deBayering of CFA's as an integral part of "color science" in the popular posts.
Applying colour transforms to mosaicked raw channels splits them.
Yes, but how are they split and combined?

I can immediately give you three different methods for this - 2x2 superpixel binning, bilinear interpolation, and VNG (variable number of gradients) - each of which has its own advantages and disadvantages. And VNG uses a 5x5 region for computing its gradients.

No doubt you could dream up many more methods. We are operating against a spatially under-sampled image to derive pixel values between the samples. By rights, from Mr. Nyquist, we should only be able to unambiguously define an N/2 x N/2 image from N x N pixels. Anything else is some sort of analytic continuation for super-resolution.
All of the demosaicing methods that I know of suffer from aliasing with some input images. I don't see how it could be otherwise. Demosaicing has to invent data that's not there.
And all these different methods of mixing will affect the color presentation in different ways.
You are making this way harder than it has to be. Consider the demosaicing -- and there are probably a hundred methods -- separate from the conversion of the image to a CIE color space.

--
https://blog.kasson.com
 
And since we are presented with histograms of 8-bit significance,
Do you mean there are 255 buckets? The number of buckets in the histogram has nothing to do with the precision of the data.
Ah, yes, you are so correct about that.

So where does that leave us? It is amazing to me how uninformed the photographic community is - meaning, how much information is *not* provided to them by the manufacturers. Perhaps archivists demand more transparency from them? (Maybe that's why the digital backs cost $32,000-$58,000 each?)
 
I am confused that there is so little discussion of deBayering of CFA's as an integral part of "color science" in the popular posts.
Applying colour transforms to mosaicked raw channels splits them.
Yes, but how are they split
I'm trying to say that splitting 4 colour channels into 12 makes demosaicking harder and less stable, and doesn't promise any real advantages. I've tried this approach, and others have too - the result is that the RGBE Bayer filtering scheme is all but abandoned.
 
You are making this way harder than it has to be. Consider the demosaicing -- and there are probably a hundred methods -- separate from the conversion of the image to a CIE color space.
Okay, it seems that in photography, trial and error adjustments are the rule. I am making it more difficult by trying to understand what each and every step does and when it is applied.

Good pics satisfy the customers, so give them a means to gratification. No need for all the sciency stuff.

I do apologize.
 
I'm trying to say that splitting 4 colour channels into 12 makes demosaicking harder and less stable, and doesn't promise any real advantages. I've tried this approach, and others have too - the result is that the RGBE Bayer filtering scheme is all but abandoned.
Sorry, I'm not following you... Maybe I should just sit on the sidelines here, and listen a bit longer. It seems that my Astrophysics approach is not pertinent to this arena of photography.

Thanks for piping in there!
 
For astronomical work, things are different, but for normal photography, distortion and vignetting vary at low frequencies, so correction after demosaicing works just fine.
Again, yes, I see that I mis-stated the Nyquist criterion. It is only necessary to sample at twice the highest spatial frequency present. If no content exists above some frequency, then indeed the (apparent) undersampling would be completely okay.

Thanks for giving me the elbow.
 
I'm trying to say that splitting 4 colour channels into 12 makes demosaicking harder and less stable, and doesn't promise any real advantages. I've tried this approach, and others have too - the result is that the RGBE Bayer filtering scheme is all but abandoned.
Sorry, I'm not following you...
Or vice versa, I'm not following. When you say "color science" and "deBayering" in one sentence, how do you mean?

--

http://www.libraw.org/
 
You are making this way harder than it has to be. Consider the demosaicing -- and there are probably a hundred methods -- separate from the conversion of the image to a CIE color space.
Okay, it seems that in photography, trial and error adjustments are the rule.
In all visual arts, experience gained from trial and error plays a significant role. Same as it is in science ;)
No need for all the sciency stuff.
We are trying to say there is.
 
For astronomical work, things are different, but for normal photography, distortion and vignetting vary at low frequencies, so correction after demosaicing works just fine.
Again, yes, I see that I mis-stated the Nyquist criterion. It is only necessary to sample at twice the highest spatial frequency present. If no content exists above some frequency, then indeed the (apparent) undersampling would be completely okay.

Thanks for giving me the elbow.
I see that at F/16 the circle of confusion (from diffraction alone) would measure 2.9 pixels across. The optical MTF is probably worse than that. So we are not undersampling to any significant degree.
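
For the record, the arithmetic behind that figure, assuming roughly the X2D's 3.76 µm pixel pitch and 550 nm light:

    wavelength_um = 0.55       # green light
    f_number = 16
    pixel_pitch_um = 3.76      # approximate X2D 100C pitch (my assumption)

    airy_radius_um = 1.22 * wavelength_um * f_number   # first zero of the Airy pattern
    print(airy_radius_um / pixel_pitch_um)             # ~2.9 pixels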
 
And since we are presented with histograms of 8-bit significance,
Do you mean there are 255 buckets? The number of buckets in the histogram has nothing to do with the precision of the data.
Ah, yes, you are so correct about that.

So where does that leave us? It is amazing to me how uninformed the photographic community is - meaning, how much information is *not* provided to them by the manufacturers. Perhaps archivists demand more transparency from them? (Maybe that's why the digital backs cost $32,000-$58,000 each?)
As I said earlier, Phocus is an exception. Most camera manufacturers' raw development software has negligible market penetration. Photographers use programs like Lr and C1. And both of those programs go to great lengths to hide from photographers what goes on under the covers, fearing that they'll be freaked out by the complexity.

Sounds like you're fairly technical. One way to learn a lot about this is to write your own raw developer. Matlab (which I use) and Python (which I don't) both have extensive image libraries. Matlab is the lingua franca for color science, or at least it used to be.
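
To give a flavor of how little a toy developer needs, here is a minimal numpy sketch. It assumes the RGGB mosaic has already been loaded into an array (with rawpy, LibRaw, or similar), and the white balance gains and color matrix are placeholders, not any camera's real values.

    import numpy as np

    def develop(cfa, black, white, wb_gains, cam_to_srgb):
        """Toy raw developer for an RGGB mosaic: black subtraction, white balance,
        superpixel demosaic, 3x3 color matrix, sRGB encoding."""
        x = np.clip((cfa.astype(np.float64) - black) / (white - black), 0, 1)

        # White balance on the mosaic, per Bayer position (green gain taken as 1.0).
        x[0::2, 0::2] *= wb_gains[0]      # R
        x[1::2, 1::2] *= wb_gains[2]      # B

        # Superpixel demosaic: one RGB pixel per 2x2 cell.
        rgb = np.dstack([x[0::2, 0::2],
                         0.5 * (x[0::2, 1::2] + x[1::2, 0::2]),
                         x[1::2, 1::2]])

        # Camera RGB -> output primaries (placeholder matrix), then the sRGB curve.
        rgb = np.clip(rgb @ cam_to_srgb.T, 0, 1)
        return np.where(rgb <= 0.0031308, 12.92 * rgb, 1.055 * rgb ** (1 / 2.4) - 0.055)

    # Example call with stand-in values.
    out = develop(np.random.randint(200, 60000, (8, 8)), black=200, white=65535,
                  wb_gains=(2.0, 1.0, 1.5), cam_to_srgb=np.eye(3))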
 
For astronomical work, things are different, but for normal photography, distortion and vignetting vary at low frequencies, so correction after demosaicing works just fine.
Again, yes, I see that I mis-stated the Nyquist criterion. It is only necessary to sample at twice the highest spatial frequency present. If no content exists above some frequency, then indeed the (apparent) undersampling would be completely okay.

Thanks for giving me the elbow.
I see that at F/16 the circle of confusion (from diffraction alone) would measure 2.9 pixels across. The optical MTF is probably worse than that. So we are not undersampling to any significant degree.
The thing that I was talking about being low in spatial frequency was the distortion or vignetting correction field, not the image that's being corrected.
 
Sounds like you're fairly technical. One way to learn a lot about this is to write your own raw developer. Matlab (which I use) and Python (which I don't) both have extensive image libraries. Matlab is the lingua franca for color science, or at least it used to be.
Heh! Yes, I have written many in the past... But CFA's are a bit of a new twist for me.

Lisp is my main language, and has been for more than 30 years. But I can tackle just about anything. Been computing for more than 50 years.

For a time, I served as the Sr. Scientist on the Raytheon EKV Program - first launch. Hitting a bullet with a bullet...
 
For astronomical work, things are different, but for normal photography, distortion and vignetting vary at low frequencies, so correction after demosaicing works just fine.
Again, yes, I see that I mis-stated the Nyquist criterion. It is only necessary to sample at twice the highest spatial frequency present. If no content exists above some frequency, then indeed the (apparent) undersampling would be completely okay.

Thanks for giving me the elbow.
I see that at F/16 the circle of confusion (from diffraction alone) would measure 2.9 pixels across. The optical MTF is probably worse than that. So we are not undersampling to any significant degree.
The thing that I was talking about being low in spatial frequency was the distortion or vignetting correction field, not the image that's being corrected.
Understood, very clearly. I was making a secondary point of discussion surrounding the sampling of the sensor.
 
I'm trying to say that splitting 4 colour channels into 12 makes demosaicking harder and less stable, and doesn't promise any real advantages. I've tried this approach, and others have too - the result is that the RGBE Bayer filtering scheme is all but abandoned.
Sorry, I'm not following you...
Or vice versa, I'm not following. When you say "color science" and "deBayering" in one sentence, how do you mean?
I meant that the act of deBayering, however you do that, itself introduces color artifacts, even before you get to the stage of color transformation. How much RGB at each pixel site?

So I don't see how you can separate the act of deBayering from all the other color transformations. They act in concert with each other.
 