Color Science and Post Processing Order

I just bought a Hasselblad X2D and it arrived one week ago. I am now learning about photographic post-processing.

I arrive by way of astrophysics and PixInsight processing of both filter-wheel multi-color exposure stacks and CFA image stacks from Bayer-matrix sensors.

There is no ISO setting on an astronomical camera, and I have full control over what linear and nonlinear processing steps I apply to raw images once they are corrected for flat field, read noise, and bias (and possibly deBayered).

So, now in the photon-rich environment of photography, there is ISO -- which appears to be simply a way of scaling the significant portion of the 16-bit ADC outputs to the upper 8 bits shown on histograms in programs like Phocus, Capture One, and others.
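Purely as an illustration of that mental model, here is a minimal sketch, assuming ISO acts as a simple digital gain (real cameras apply much of this gain in the analog domain before the ADC, and no particular camera or raw converter is claimed to work this way):

```python
import numpy as np

def histogram_preview(raw16, iso, base_iso=100):
    """Sketch of the 'ISO as scaling' mental model: apply a linear gain to
    16-bit raw values, then quantize to the 8-bit range that a histogram
    display typically shows. Illustrative only."""
    gain = iso / base_iso
    scaled = np.clip(raw16.astype(np.float64) * gain, 0.0, 65535.0)
    return (scaled / 65535.0 * 255.0).astype(np.uint8)
```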

I am confused that there is so little discussion of deBayering of CFAs as an integral part of "color science" in the popular posts. And there appears to be no available information from any of the camera manufacturers about how they deBayer their raw data. Likewise, there are no explicit mentions of the ordering of exposure correction, histogram adjustments, saturation changes, etc.

My impression is that Phocus reads in a Bayer'ed image from the camera and applies lens distortion and vignetting corrections (akin to my flat field corrections), and deBayering, in addition to a possible nonlinear Gamma correction, on the way to the in-memory image shown on screen, and used for export conversion.
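For concreteness, here is a sketch of one plausible ordering. I have no inside knowledge of Phocus; the step order, the helper names, and the crude demosaic are all assumptions for illustration, not Hasselblad's documented pipeline:

```python
import numpy as np

def demosaic_superpixel(bayer):
    """Crude half-resolution demosaic: average each 2x2 RGGB block.
    Real converters use far more sophisticated interpolation."""
    r = bayer[0::2, 0::2]
    g = (bayer[0::2, 1::2] + bayer[1::2, 0::2]) / 2.0
    b = bayer[1::2, 1::2]
    return np.dstack([r, g, b])

def develop_raw(bayer, flat, wb_gains, cam_to_rgb, gamma=2.2):
    """Sketch of a generic raw pipeline; the step order is an assumption.
    bayer      : 2-D array of linear RGGB CFA values, normalized to [0, 1]
    flat       : per-pixel correction for vignetting / lens falloff
    wb_gains   : (r, g, b) white-balance multipliers
    cam_to_rgb : 3x3 matrix from camera space into a working RGB space"""
    corrected = bayer / flat                        # flat-field / vignetting (linear)
    rgb = demosaic_superpixel(corrected)            # deBayer to three channels
    rgb = rgb * np.asarray(wb_gains)                # white balance (still linear)
    rgb = rgb @ np.asarray(cam_to_rgb).T            # color matrix into working space
    return np.clip(rgb, 0.0, 1.0) ** (1.0 / gamma)  # nonlinear gamma applied last
```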

However, the .FFF files remain as RGGB Bayer'ed images, and are possibly stored without lens distortion and vignetting corrections. The .FFF files also do not appear to have the Gamma correction folded in.

All these camera corrections appear to be performed by Phocus, and not by the camera body processor. I cannot say what happens with in-camera JPEG images; certainly much of that processing happens in camera, but it is of little concern to me. I am mainly interested in RAW processing.

Do any of you veterans out there know any of these details of processing? Thanks in advance!
 
Human vision uses three signals, called 'S', 'M' and 'L', that the brain combines into perceived color.

Camera sensors also produce three signals, called 'R', 'G' and 'B'.

Both sets of signals are integrated over the spectrum, essentially multiplying the spectral reflectance of the subject by the illumination spectrum and the sensor's spectral response, then integrating.

The RGB signals from the sensor are remapped into a color space, like sRGB or ProPhoto RGB.

Computer screens use additive color: three near-monochromatic light sources are mixed, which stimulates an 'S', 'M', 'L' response in human vision.

If the stimulated 'S', 'M', 'L' response is the same as the 'S', 'M', 'L' response stimulated by the original 'color' under the original illuminant, we get a metameric match.
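Numerically, both responses are the same kind of quantity: a spectrum weighted by a response curve and integrated. A minimal sketch, using invented Gaussian curves as stand-ins for the real cone and CFA sensitivities (assumed data, not measurements):

```python
import numpy as np

wl = np.arange(400.0, 701.0, 5.0)   # wavelength samples, nm

def gaussian(center, width):
    # Stand-in response curves; real cone and CFA sensitivities are not Gaussian.
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Illustrative sensitivities only
eye_lms    = np.stack([gaussian(565, 40), gaussian(540, 40), gaussian(445, 25)])
camera_rgb = np.stack([gaussian(600, 45), gaussian(530, 45), gaussian(460, 35)])

def response(reflectance, illuminant, sensitivities):
    # Multiply reflectance * illuminant * sensitivity and integrate over wavelength.
    stimulus = reflectance * illuminant
    return (sensitivities * stimulus).sum(axis=1) * 5.0   # rectangle rule, 5 nm steps

illuminant = np.ones_like(wl)        # flat 'white' illuminant
surface    = gaussian(580, 60)       # some reflectance spectrum

print("eye LMS:   ", response(surface, illuminant, eye_lms))
print("camera RGB:", response(surface, illuminant, camera_rgb))
```

Two different surface spectra that happen to produce the same LMS triplet under the same illuminant are a metameric match for the eye, even though nothing forces them to produce the same RGB triplet in a camera.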
Yep. I tried explaining that to Jim.
Go back and read what Erik wrote. It is not at variance with what I've been saying here. I would have phrased it differently but what Erik said is basically right. Erik never says the RGB signals from the sensor represent colors.

Cameras weight spectra differently than people do. There is no way to precisely map the triplets that commercial cameras capture into colors. Cameras will see some colors that are different as the same color, and will see different spectra that are the same color as different colors. Cameras don't see colors. Color is assigned during raw development or during in-camera conversion to JPEG.

There's a reason why it's called the compromise matrix.
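For what it's worth, a compromise matrix is typically derived by a least-squares fit over a set of training patches; a minimal sketch, where the patch data are placeholders standing in for measured camera triplets and their colorimetric targets:

```python
import numpy as np

def fit_compromise_matrix(camera_rgb, target_xyz):
    """Least-squares 3x3 matrix M minimizing ||camera_rgb @ M.T - target_xyz||
    over a set of training patches. camera_rgb is (N, 3) raw triplets,
    target_xyz is (N, 3) colorimetric values for the same patches."""
    M_T, *_ = np.linalg.lstsq(camera_rgb, target_xyz, rcond=None)
    return M_T.T   # xyz_estimate = M @ rgb for a single triplet

# Placeholder training data; substitute measured patch values in practice.
rng = np.random.default_rng(1)
camera_rgb = rng.uniform(0.05, 1.0, size=(24, 3))   # e.g. 24 chart patches
target_xyz = rng.uniform(0.05, 1.0, size=(24, 3))
M = fit_compromise_matrix(camera_rgb, target_xyz)
residual = camera_rgb @ M.T - target_xyz            # never exactly zero in practice
```

Because the camera's sensitivities are not a linear combination of the CIE color-matching functions, no single matrix can be exact for every spectrum; the fit only minimizes the average error over the chosen training set, which is exactly the compromise.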
This is an inevitable conclusion if one looks at how a simple camera works, and how many parameters are recorded. What is being recorded per pixel is a set of luminance values that are "mapped" onto the spectral characteristics of each dye used + the sensor characteristic. How this is not, by proxy, a defined color space is baffling to me,
It's a space, but it's not a color space. There is no color there.
if all it does is find a point in 3D SPACE where each triplet value is located, with the outer "shell" of that 3D space being defined by the dye characteristics. This is a space representing colors... hence color space.
It's a space representing weighted integrated spectra. It's not a color space.
The fact that it is a "perceived" color is irrelevant to whether it is a color space or not.
The concept of color is a psychological one. It's all about perception.

A little background about me:

From 1989 until the middle of 1995, I worked as an IBM Fellow at the Almaden Research Center south of San Jose, CA. My principal area of research for those six years was color management, color processing for digital photography, and color transformations such as gamut mapping.

--
https://blog.kasson.com
 
Jim,
How do you define color and color space then?
 
This is an inevitable conclusion if one looks at how a simple camera works, and how many parameters are recorded. What is being recorded per pixel is a set of luminance values that are "mapped" onto the spectral characteristics of each dye used + the sensor characteristic. How this is not, by proxy, a defined color space is baffling to me, if all it does is find a point in 3D SPACE where each triplet value is located, with the outer "shell" of that 3D space being defined by the dye characteristics. This is a space representing colors... hence color space.
It's not a space representing colors. It's a space representing weighted integrations of spectra.

If it represents colors, why don't raw developers convert any given raw file to the same colors? Every raw converter that I know of converts any given raw file differently from all the other raw converters.

And why do we have color profiles, which specify the mapping of the triples in raw files to colors? If the raw files truly had colors encoded in them, you'd only need to convert those triples to the working color space, and there would only be one right answer.

And why do we have to make different color profiles for different lighting conditions? If the raw triples were really colors, they'd already represent the right colors for those lighting conditions.
 
This is an inevitable conclusion if one looks at how a simple camera works, and how many parameters are recorded. What is being recorded per pixel is a set of luminance values that are "mapped" onto the spectral characteristics of each dye used + the sensor characteristic. How this is not, by proxy, a defined color space is baffling to me, if all it does is find a point in 3D SPACE where each triplet value is located, with the outer "shell" of that 3D space being defined by the dye characteristics. This is a space representing colors... hence color space.
It's not a space representing colors. It's a space representing weighted integrations of spectra.

If it represents colors, why don't raw developers convert any given raw file to the same colors?
Because they use their own custom mappings. They don't have to, but it would look like cr@p, but a uniform crap every time (assuming the same sensor, same primary colors on the CFA, and identical demosaicing algorithms), if they did not use custom mappings. So, the mapping is there to create more pleasing colors by transforming them... instead of "assigning" them.

If I have a weight sensor and it outputs current or resistivity, I can map that differently based on my LUTs, but that does not mean the signal does not represent weight measurements, albeit in its raw form.
And why do we have color profiles, which specify the mapping of the triples in raw files to colors?
Because they are mappings from one triplet to another triplet. This is done to convert one space into another... either with a set of vector coefficients or a more fine-tuned transformational matrix of coefficients.
If the raw files truly had colors encoded in them, you'd only need to convert those triples to the working color space, and there would only be one right answer.
Again, this is like arguing that a scale does not measure weight, because its sensor merely produces current/resistivity changes as the weight placed on it changes.

Seems like a purely philosophical debate, rather than a practical one, as to whether a camera captures colors or not.
 
Ok, so where does color occur per your definition? Is it in a human eye or a human brain? I am not talking about a perception of color, but rather - color itself.
Now we're into philosophy. I'm talking about color vision as described in one of the color matching experiments. It's not the best one, but CIE 1931 XYZ seems to be what most everybody uses.
 
This is an inevitable conclusion if one looks at how a simple camera works, and how many parameters are recorded. What is being recorded per pixel is a set of luminance values that are "mapped" onto the spectral characteristics of each dye used + the sensor characteristic. How this is not, by proxy, a defined color space is baffling to me, if all it does is find a point in 3D SPACE where each triplet value is located, with the outer "shell" of that 3D space being defined by the dye characteristics. This is a space representing colors... hence color space.
It's not a space representing colors. It's a space representing weighted integrations of spectra.

If it represents colors, why don't raw developers convert any given raw file to the same colors?
Because they use their own custom mappings.
If the color is stored in the raw file, why do we need mappings?
They don't have to, but it would look like cr@p,
If the color were really stored in the raw file, why would it look like excrement?
but a uniform crap every time (assuming the same sensor, same primary colors on the CFA, and identical demosaicing algorithms), if they did not use custom mappings. So, the mapping is there to create more pleasing colors by transforming them... instead of "assigning" them.
Nah...
If I have a weight sensor and it outputs current or resistivity, I can map that differently based on my LUTs, but that does not mean the signal does not represent weight measurements, albeit in its raw form.
And why do we have color profiles, which specify the mapping of the triples in raw files to colors?
Because they are mappings from one triplet to another triplet.
That's true, but they are not mappings from one color space to another.
This is done to convert one space into another...
It can't be done precisely unless both spaces have the same spectral weightings, or weightings that are a linear transform away from each other.
either with a set of vector coefficients or a more fine-tuned transformational matrix of coefficients.
You're talking about the compromise matrix. There's a reason why it's called the compromise matrix. And there is more than one compromise matrix for a camera/color space pair, depending on the compromises.
If the raw files truly had colors encoded in them, you'd only need to convert those triples to the working color space, and there would only be one right answer.
Again, this is like arguing that a scale does not measure weight, because its sensor merely produces current/resistivity changes as the weight placed on it changes.
I can't understand that at all.
Seems like a purely philosophical debate, rather than a practical one, as to whether a camera captures colors or not.
Nope. It has huge implications for color management and for people designing and using raw converters.

I raised the point earlier, but if the raw file is full of colors, then how come two metamers of the same color can have different values in the raw file?
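To make that concrete, here is a hedged sketch using the same invented Gaussian stand-ins as earlier (not real sensitivities): construct a spectral perturbation that the 'eye' integrates to exactly zero, so the two spectra are a metameric match for the eye yet produce different 'camera' triplets.

```python
import numpy as np

wl = np.arange(400.0, 701.0, 5.0)   # wavelength samples, nm

def gaussian(center, width):
    # Invented response curves; real cone and CFA sensitivities are not Gaussian.
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

eye    = np.stack([gaussian(565, 40), gaussian(540, 40), gaussian(445, 25)])  # 'L','M','S'
camera = np.stack([gaussian(600, 45), gaussian(530, 45), gaussian(460, 35)])  # 'R','G','B'

def integrate(sens, spectrum):
    # Rectangle-rule integration of sensitivity * spectrum over wavelength.
    return (sens * spectrum).sum(axis=1) * 5.0

base = gaussian(570, 70)   # one reflected-light spectrum

# Perturbation projected into the null space of the eye's sensitivities:
# the eye integrates it to exactly zero, the camera generally does not.
rng = np.random.default_rng(0)
pert = rng.normal(size=wl.size)
pert -= eye.T @ np.linalg.solve(eye @ eye.T, eye @ pert)
metamer = base + 0.2 * pert / np.abs(pert).max()

print("eye sees:   ", integrate(eye, base), integrate(eye, metamer))        # identical triplets
print("camera sees:", integrate(camera, base), integrate(camera, metamer))  # different triplets
```

The perturbation is purely mathematical and may make the example spectrum slightly non-physical, but the point stands: equal eye triplets do not force equal camera triplets.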
 
Would you like to answer my previous question on where color originates? You seem to have skipped it. You should be able to explain it without linking me to somewhere else. Seems like a pretty trivial question for you to answer, especially since it is not philosophy, right?
 
Would you like to answer my previous question on where color originates? You seem to have skipped it. You should be able to explain it without linking me to somewhere else.
You don't seem to believe a word I say. I thought those links might lead you to something you can believe.
Seems like a pretty trivial question for you to answer, especially since it is not philosophy, right?
For the purposes of this discussion, consider color as defined in the CIE 1931 color matching experiment.
 
Would you like to answer my previous question on where color originates? You seem to have skipped it. You should be able to explain it without linking me to somewhere else.
You don't seem to believe a word I say. I thought those links might lead you to something you can believe.
Only your perception. I believe it, but the way you have been dealing with color seems to have some contradictions, and I am trying to understand whether I have some misunderstanding of it, or whether you have an internal contradiction hidden somewhere in your position... I trust the words you say, but then I validate them against external sources and check whether what was said is self-contradictory. This is what I am trying to do.
Seems like a pretty trivial question for you to answer, especially since it is not philosophy, right?
For the purposes of this discussion, consider color as defined in the CIE 1931 color matching experiment.
You have avoided answering my question, Jim. I thought the ask was pretty simple, no?

"Ok, so where does color occur per your definition? Is it in a human eye or a human brain? I am not talking about a perception of color, but rather - color itself."
With this answer it would be very easy to figure out whether we can, in principle, have colors in the camera, or whether it is totally incorrect to say that.
 
Would you like to answer my previous question on where color originates? You seem to have skipped it. You should be able to explain it without linking me to somewhere else.
You don't seem to believe a word I say. I thought those links might lead you to something you can believe.
Only your perception. I believe it, but the way you have been dealing with color seems to have some contradictions, and I am trying to understand whether I have some misunderstanding of it, or whether you have an internal contradiction hidden somewhere in your position... I trust the words you say, but then I validate them against external sources and check whether what was said is self-contradictory. This is what I am trying to do.
Seems like a pretty trivial question for you to answer, especially since it is not philosophy, right?
For the purposes of this discussion, consider color as defined in the CIE 1931 color matching experiment.
You have avoided answering my question, Jim. I thought the ask was pretty simple, no?
I thought the answer was pretty simple, too. Do you understand the experiment?
"Ok, so where does color occur per your definition? Is it in a human eye or a human brain? I am not talking about a perception of color, but rather - color itself."
To a first approximation, the color matching experiment attempts to remove the brain's processing, and concentrate on what's going on in the eye. It's an experiment done on real people, so it's impossible to do that perfectly. But the spectral weightings are mostly caused by the properties of the cone cells.
With this answer it would be very easy to figure out whether we can, in principle, have colors in the camera, or whether it is totally incorrect to say that.
I don't understand what you're trying to say here.
 
To a first approximation, the color matching experiment attempts to remove the brain's processing, and concentrate on what's going on in the eye. It's an experiment done on real people, so it's impossible to do that perfectly. But the spectral weightings are mostly caused by the properties of the cone cells.
Right. So, can the output of cone cells be considered color information then?
 
To a first approximation, the color matching experiment attempts to remove the brain's processing, and concentrate on what's going on in the eye. It's an experiment done on real people, so it's impossible to do that perfectly. But the spectral weightings are mostly caused by the properties of the cone cells.
Right. So, can the output of cone cells be considered color information then?
The matching experiment is the standard.
 
To a first approximation, the color matching experiment attempts to remove the brain's processing, and concentrate on what's going on in the eye. It's an experiment done on real people, so it's impossible to do that perfectly. But the spectral weightings are mostly caused by the properties of the cone cells.
Right. So, can the output of cone cells be considered color information then?
The matching experiment is the standard.
Me: Jim, would you say it is more of a square or a circle?

Jim: That red bridge is the oldest bridge in the entire state.
 
Jim,
How do you define color and color space then?
Colour is something we see or perceive, from a scene or on an output device. Colour is how we perceive the difference between the ambient light and emitted/reflected light. This includes "achromatic colour", or shades of grey.

In raw mode, cameras (and scanners) are purely input devices; they digitize colours and provide data in a non-colorimetric space. Non-colorimetric spaces don't characterize colours in the scene. Colour reconstruction from raw is ambiguous.
 
In raw mode, cameras (and scanners) are purely input devices; they digitize colours and provide data in a non-colorimetric space. Non-colorimetric spaces don't characterize colours in the scene. Colour reconstruction from raw is ambiguous.
Agreed.
 
To a first approximation, the color matching experiment attempts to remove the brain's processing, and concentrate on what's going on in the eye. It's an experiment done on real people, so it's impossible to do that perfectly. But the spectral weightings are mostly caused by the properties of the cone cells.
Right. So, can the output of cone cells be considered color information then?
The matching experiment is the standard.
Me: Jim, would you say it is more of a square or a circle?

Jim: That red bridge is the oldest bridge in the entire state.
I can't figure out if you're just being obtuse, or if you really don't understand. Have you read about the color matching experiment? Did you understand what you read?

The output of the cone cells can't be directly measured in the color matching experiment. The results of the experiment (the positions of the knobs in a calibrated setup, after some processing) set the standard.
 
To a first approximation, the color matching experiment attempts to remove the brain's processing, and concentrate on what's going on in the eye. It's an experiment done on real people, so it's impossible to do that perfectly. But the spectral weightings are mostly caused by the properties of the cone cells.
Right. So, can the output of cone cells be considered color information then?
The matching experiment is the standard.
Me: Jim, would you say it is more of a square or a circle?

Jim: That red bridge is the oldest bridge in the entire state.
I can't figure out if you're just being obtuse, or if you really don't understand. Have you read about the color matching experiment? Did you understand what you read?

The output of the cone cells can't be directly measured in the color matching experiment. The results of the experiment (the positions of the knobs in a calibrated setup, after some processing) set the standard.
I wasn't arguing about the experiment.

What I don't understand is how arbitrarily you seem to determine where color information magically pops into existence versus where a signal measured by the camera does not have color information (hence the "consumer cameras do not measure colors" comment from you), even though that very same signal controls what colors will be assigned later via a predetermined color transformation.

In the 1931 experiment, the argument goes that an average human observer (defined by that experiment), observing a spectrum, sees "color" via human trichromatic vision, which operates in a human-vision color space defined by/in that experiment.

But then you argue that if you replace a human eye with a camera that has a similar type of vision (trichromatic, and also collapsing spectra into triplets), then the signal the camera produces would not contain color information, while the human-eye counterpart clearly does produce it...

This looks like a contradiction to me, and it is a contradiction I have been trying to resolve and understand for a while.
 
Jim,
How do you define color and color space then?
... Colour reconstruction from raw is ambiguous.
Is it not also the case with humans that hormones, time of day, surrounding colors, and other non-color-related factors affect how color is perceived by the brain? In other words, isn't color perception ambiguous as well?
 