Color Science and Post Processing Order


dbmcclain

I just bought a Hasselblad X2D and it arrived one week ago. I am now learning about photographic post-processing.

I arrive by way of astrophysics and PixInsight processing of both filter-wheel multi-color exposure stacks and CFA image stacks from Bayer-matrix sensors.

There is no ISO setting on an astronomical camera, and I have full control over what linear and nonlinear processing steps I apply to raw images once they are corrected for flat field, read noise, and bias (and possibly deBayered).

So, now in the photon-rich environment of photography, there is ISO -- which appears to be simply a way of scaling the significant portion of the 16-bit ADC outputs to the upper 8 bits shown on histograms in programs like Phocus, Capture One, and others.

I am confused that there is so little discussion of deBayering of CFAs as an integral part of "color science" in the popular posts. And there appears to be no available information from any of the camera manufacturers about how they deBayer their raw data. Likewise, there are no explicit mentions of the ordering of exposure correction, histogram adjustments, saturation changes, etc.

My impression is that Phocus reads in a Bayer'ed image from the camera and applies lens distortion and vignetting corrections (akin to my flat field corrections), and deBayering, in addition to a possible nonlinear Gamma correction, on the way to the in-memory image shown on screen, and used for export conversion.

However, the .FFF files remain as RGGB Bayer'ed images, and are possibly stored without lens distortion and vignetting corrections. The .FFF files also do not appear to have the Gamma correction folded in.

All these camera corrections appear to be performed by Phocus, and not by the camera body processor. I cannot say what happens to in-camera JPEG images. Certainly much of this happens in camera, but that is of little concern to me. I am mainly interested in RAW processing.

Do any of you veterans out there know any of these details of processing? Thanks in advance!
 
Whoa! I just took 16 frames each at ISO 64, then again at ISO 1600. Did my usual Bias Frame estimation. 1 ms electronic shutter, lens cap on.

Both of these final BIAS Frames are remarkably flat. No vertical streaks at all!

Pedestal at ISO 64 was 4094.125 with MAD 0.750.

Pedestal at ISO 1600 was 4091.533 with MAD 5.471.

Both images have low/hi reject-worthy pixels (dead and hot), but no streaks.

And the MAD seems to scale with ISO, which is what I would have expected. It won't scale exactly between these two because of the amplifier cut-in above ISO 200.

And from visual inspection before cropping, it looks like the overscan, not included in the above stats, could well serve as the Black Point estimator for Phocus.

I won't bother posting a pic of these, since they are quite featureless.
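For anyone who wants to reproduce this kind of measurement, here is a minimal Python/numpy sketch of the pedestal and MAD computation described above. It assumes the 16 lens-cap frames have already been loaded into a 3-D array of raw DNs; the loader and the synthetic stand-in data are illustrative, not X2D specifics.

import numpy as np

def bias_stats(frames):
    """frames: array of shape (n_frames, height, width) holding raw DNs."""
    master_bias = np.median(frames, axis=0)          # per-pixel median stack
    pedestal = np.median(master_bias)                # global pedestal level
    mad = np.median(np.abs(master_bias - pedestal))  # median absolute deviation
    return master_bias, pedestal, mad

# Synthetic stand-in for the 16 ISO 64 lens-cap frames:
rng = np.random.default_rng(0)
fake_frames = rng.normal(4094.0, 1.0, size=(16, 512, 512))
_, pedestal, mad = bias_stats(fake_frames)
print(f"pedestal {pedestal:.3f}, MAD {mad:.3f}")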
 
If the PGAs were anywhere near that bad, we'd see huge column variation, since there are one or two ADCs per column. Thousands of ADCs on the sensor.
Well, I must concede to having outdated knowledge.

And between your comment re: thousands of ADCs on chip, and my derived (amazingly flat) Bias Frames from my own X2D... I'm beginning to see the magic of these sensors and why they command such a high price.

Your histogram procedures applied to these sensors are totally justified by what I found. Had you had a Bias Frame that looked like my Chilean CCD's, I might have raised objections to such a procedure.
 
So from what I'm seeing, whether by intent or serendipity, the effect of ISO is to push the ADUs into the high 8 bits of the 16-bit data word. Read noise looks like it is overwhelmed by photon (Poisson) noise at all light levels, unless you manage to get really starved for photons.
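A quick back-of-envelope check of that claim, with purely illustrative numbers (the 3 e- read noise is an assumption, not an X2D measurement): photon shot noise grows as the square root of the signal, so it dominates everywhere except at very low signal levels.

import math

read_noise_e = 3.0                      # assumed read noise, electrons
for signal_e in (10, 100, 1_000, 10_000):
    shot_noise_e = math.sqrt(signal_e)  # Poisson: sigma = sqrt(N)
    total = math.hypot(shot_noise_e, read_noise_e)
    print(f"{signal_e:>6} e-: shot {shot_noise_e:6.1f} e-, "
          f"read noise is {read_noise_e / total:.0%} of the total")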

It looks like Exposure adjustments work first in the linear regime, then contrast takes over after conversion of mid-tones. Not sure about Shadow, but it might be working in the linear regime to push more of the low bits into the histogram over regions below some threshold lightness. (Don't know offhand how Recovery works.)

I swear, the pseudo-input histogram of Phocus threw me for a loop. I simply could not figure out how my changing the exposure managed to convert its input histogram into its output histogram.
 
Correction... the absolute variation from factor-of-2, from ISO 100 to 1600, was less than 1.5%, not 3% as stated before. So, even more convinced that these are digital amplifications, not analog.
You are assuming something that's not true when you assume that the PGAs can't be that good. That assumption is leading you astray.

Have you looked at the raw histograms? Those are dispositive.

And how do you explain the input referred read noise variation with ISO?

[Chart: input-referred read noise vs. ISO for the Hasselblad X2D-100c; see link below]

https://www.photonstophotos.net/Charts/RN_e.htm#Hasselblad X2D-100c_16
Yes, good point. But I would explain that as showing the noise effects of the read amplifier being multiplied digitally with left-shifts.
Digital gain does not affect input referred read noise.
At some point the inherent pixel noise, amplified, should begin to outweigh the amplifier read noise, and that's when you switch in one stage of gain ahead of the ADC and a (restarted) digital shift multiply.

But I have never seen the schematic of the sensor, and I'm only guessing based on past experience with CCD systems, not CMOS systems.

I do think PGAs can be absolutely precise when used digitally. I don't think that analog amplifiers can be that precise.
PGAs *are* analog amplifiers.
But I'm no expert on new era microcircuits. My last detailed exposure to microcircuits was back around 1984 when we, at IBM, were manufacturing our own chips. Long time ago...
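To make the input-referred read noise point concrete, here is a toy numeric model (all values are illustrative assumptions, not taken from any datasheet). With analog gain ahead of the ADC, the ADC's own noise shrinks when referred back to the input, so the measured read noise falls with ISO; with a pure digital left-shift, signal and noise scale together and the input-referred figure stays flat.

import math

sigma_pre_e   = 2.0   # assumed noise ahead of the gain stage, electrons
sigma_adc_dn  = 1.5   # assumed ADC/quantization noise, DN
e_per_dn_base = 8.0   # assumed conversion at base gain, electrons per DN

for gain in (1, 2, 4, 8, 16):
    # Analog gain: the ADC noise contribution is divided by the gain
    # when referred to the sensor input.
    analog_in_ref = math.hypot(sigma_pre_e, sigma_adc_dn * e_per_dn_base / gain)
    # Digital gain: noise and signal are multiplied together after the ADC,
    # so the input-referred total never changes.
    digital_in_ref = math.hypot(sigma_pre_e, sigma_adc_dn * e_per_dn_base)
    print(f"gain {gain:2d}x: analog {analog_in_ref:5.2f} e-, "
          f"digital {digital_in_ref:5.2f} e-")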


 
Correction... the absolute variation from factor-of-2, from ISO 100 to 1600, was less than 1.5%, not 3% as stated before. So, even more convinced that these are digital amplifications, not analog.
These are from the GFX 100S, so, same sensor, less processing between the sensor output and the raw file, and gain in 1/3 stop increments instead of one-stop ones.

https://blog.kasson.com/gfx-100s/gfx-100s-dark-field-histograms/

https://blog.kasson.com/the-last-word/gfx-100s-16-bit-dark-field-histograms/

This is not what the histograms look like when there is a lot of digital gain used.

Jim
hmmm... interesting. And offhand, I can't quite interpret your histograms.

First off, however, is how do you obtain the Black Point?
The histograms are for raw files. There is no black point subtraction performed.
I get images with scads of pepper noise in all the pixels. In there is some pedestal plus faint image signal plus pixel noise. Where is black?
Wherever you want to put it. Usually the middle of the haystack.
The way I get black from an Astro image is to take a relatively dark region, surround it with a subframe, and take its statistical moments as representative of black. That includes the effects of pedestal, terrestrial pollution gradients, etc. Then I subtract that from the image.
Feel free to do that, but black point is unrelated to the question at hand.
Next up, finding the color correction - for that I take my background reduced images and do a survey of bright stars. Statistically, those define white. (More recent improvements take those stars and do a lookup against a spectral catalog to find their true colors).

Once I have the average ratio between R:G:B for the white stars, I can then recalibrate the background subtracted image to gain true color.
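A minimal sketch of that background-and-white-star calibration, with placeholder inputs (the dark patch location and star coordinates are things the reader would supply; nothing here is PixInsight-specific):

import numpy as np

def calibrate(image, dark_patch, star_coords):
    """image: (H, W, 3) linear RGB; dark_patch: tuple of slices;
    star_coords: (rows, cols) index arrays for bright, assumed-white stars."""
    black = np.median(image[dark_patch], axis=(0, 1))   # pedestal + gradient proxy
    img = image - black
    star_rgb = img[star_coords].mean(axis=0)            # average R:G:B over stars
    scale = star_rgb[1] / star_rgb                      # normalize so G is unchanged
    return img * scale

# Hypothetical usage with random data standing in for a real frame:
rng = np.random.default_rng(1)
frame = rng.uniform(0.1, 1.0, size=(200, 200, 3))
balanced = calibrate(frame, (slice(0, 20), slice(0, 20)),
                     (np.array([50, 60, 70]), np.array([80, 90, 100])))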

Note that everything I'm doing here is with respect to R, G, and B channels. Not colors, the way you guys all talk about color.

So I can't find black until I do that background estimation. How do you do it?
 
Correction... the absolute variation from factor-of-2, from ISO 100 to 1600, was less than 1.5%, not 3% as stated before. So, even more convinced that these are digital amplifications, not analog.
These are from the GFX 100S, so, same sensor, less processing between the sensor output and the raw file, and gain in 1/3 stop increments instead of one-stop ones.

https://blog.kasson.com/gfx-100s/gfx-100s-dark-field-histograms/

https://blog.kasson.com/the-last-word/gfx-100s-16-bit-dark-field-histograms/

This is not what the histograms look like when there is a lot of digital gain used.

Jim
I just looked again at your web page graphs and read your setup for the exposures. What you are doing for your Black point is what we call the Bias Frame.

We take a ton of images with the lens cap on, really short exposure times to avoid thermal noise buildup, and then average together all those dark frames into a master Bias frame. That represents our read noise. It gets subtracted from every raw frame before doing flat field corrections.

I have never worked on single camera images before. We always had stacks of a dozen or more images. And they were dithered on the sky so that we could average out camera pixel noise and median-filter out hot pixels. You want to avoid fixed-pattern background noise.
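For completeness, a sketch of that stack-and-reject approach (sizes and the clipping threshold are illustrative): a per-pixel median knocks out hot pixels, and sigma-clipping trims remaining outliers before averaging.

import numpy as np

def master_frame(stack, kappa=3.0):
    """stack: (n_frames, H, W) of registered frames; returns a clipped mean."""
    med = np.median(stack, axis=0)
    sigma = np.std(stack, axis=0) + 1e-9              # guard against zero spread
    clipped = np.where(np.abs(stack - med) < kappa * sigma, stack, np.nan)
    return np.nanmean(clipped, axis=0)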

Here with terrestrial photography the norm is single camera images. A whole other ballgame.

So, back to your "Black Point" (my Bias frame). Your histograms are simply raw pixel histograms over these bias frames, and I do see how you would see the pedestal and the statistical read noise.

[...but our bias frames always show clear patterns in columns of the sensor. Generally the same value for any column, give or take some noise. But adjacent columns could have very different values. A very streaked image.]
Welcome to modern CMOS camera sensors.
Now what accounts for the spread as you shift ISO? (Honestly, ISO is totally unknown in Astrophotos - a sensor is a linear light bucket. We count photons. Period.)
 
Have you looked at the raw histograms? Those are dispositive.
I have looked at the actual raw pixel values, not the input histograms of Phocus. As I stated in another message, Phocus never actually shows the input data histograms. It shows what they become after it applies a little of its own magic.

But using a linear image processing app, like PixInsight, I can read those raw pixel values and do my own histograms. And they show factors of 2 between ISO steps above ISO 100, to a precision of 1.5%. That variance is most likely the result of pixel noise in my measurements, and some fluctuation in scene lighting between exposures.
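The ratio check itself is trivial; a sketch, with made-up mean DNs standing in for the measured subframe statistics:

import math

mean_dn = {100: 812.0, 200: 1630.0, 400: 3248.0, 800: 6509.0}  # illustrative values
isos = sorted(mean_dn)
for lo, hi in zip(isos, isos[1:]):
    stops = math.log2(mean_dn[hi] / mean_dn[lo])
    print(f"ISO {lo} -> ISO {hi}: {stops:.3f} stops")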

I have never seen an analog system that could run at this level of precision. More typically 5-10% variation. You might reach 1% for (former) MilStd circuits if you try really hard, and place the entire circuit in a temperature controlled oven.

But again, I'm an old fart with aging knowledge. Maybe things have changed over the past 40 years? Could be...
Are you looking at modern CMOS sensors with column ADCs?
 
Bit shifts for gain would result in histogram combing.
Under some circumstances I agree.

But step back a bit and view the system as providing a natural binning of 256 bins based on the high 8 bits in the 16 bit data word. Then the goal of ISO is to shift your ADC output into those upper 8 bits.

You would only see combing if your data were so weak that you had fewer than 8 bits of data in the low end of the word.

I might see this a lot in astrophoto data (and I typically do see histogram combing in low-level nebula images). But how often would you have this situation in terrestrial photography? Even at night you are photon rich.
The precision of the raw file is not 8 bits.
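For what it's worth, the combing referred to above is easy to demonstrate on synthetic integer data: multiply by a power of two and most histogram bins go empty, which is not what high-ISO raw histograms from these cameras look like.

import numpy as np

rng = np.random.default_rng(2)
dn = rng.poisson(200, size=100_000)                 # stand-in integer sensor data
shifted = dn * 4                                    # a 2-bit left shift
hist, _ = np.histogram(shifted, bins=np.arange(600, 1001))
print("empty bins:", int((hist == 0).sum()), "of", hist.size)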
 
The amplification steps show variations of less than 3% in a factor-of-two multiplier, and I would bet money that this is digital amplification coupled with my analysis statistics, rather than analog gain variations coupled with my statistics.

Maybe times have changed, and you can do better than 5-10% accuracy in microchip analog circuitry, but I have my doubts - especially given wide latitude in operating temperatures.

The region below ISO 200 flattens out in read noise, so I have to conclude that this region has read amplifier noise dominating over pixel noise, whereas above ISO 200, the pixel noise dominates the amplifier.
You're forgetting about dual conversion gain.
I found a 0.65 stop difference between ISO 64 and ISO 100. Elsewhere, above ISO 100, between successive ISO steps there was a 1.00 +/- 0.02 stop difference.

So the world really does continue to operate the way that I had assumed.
Nope. The PGAs are very good.
Not only are the PGAs well matched, the source followers are, too, although correlated double sampling helps with that. When I measure 600x600 pixel areas (360,000 pixels) for pixel response nonuniformity (PRNU) for a modern CMOS sensor, I get on the order of 0.2 to 0.5% rms.

That couldn't happen with 5% to 10% gain errors.
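A sketch of that kind of PRNU measurement on synthetic data (the 0.3% gain spread and the flat-field level are assumptions chosen to land in the range quoted above): average enough flats that shot noise washes out, then take the spatial spread relative to the mean level.

import numpy as np

rng = np.random.default_rng(3)
pixel_gain = rng.normal(1.0, 0.003, size=(600, 600))            # assumed 0.3% PRNU
flats = rng.poisson(20_000 * pixel_gain, size=(64, 600, 600))   # 64 flat-field frames
mean_flat = flats.mean(axis=0)                                  # shot noise averaged down
prnu_rms = mean_flat.std() / mean_flat.mean()
print(f"measured PRNU ~ {prnu_rms:.2%}")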
 
The more I think about this, the more I'm convincing myself that the demosaicing should be performed in native camera space. It is intellectually cleaner to assign colors after demosaicing when you have triplets at every location. In a perfect world you'd probably want to apply a tone curve to make the interpolation more perceptually uniform, but I'm not convinced that would offer any real advantages.
Yes, scientifically, this would be my approach too.

I'm still wrestling with the distinction between the RGB channels and "color". Granted that colors are represented as triplets of RGB, a triplet such as (r, 0, 0) still represents a color and the reading from the R channel.

I don't know how such a "color" as (r, 0, 0) would be represented in CIE space and ultimately in some gamut. But surely it would be a physically perceived color to our visual senses.

I did see, however, in reading some posts of someone who made the transition from Phase One to Fujifilm, that the lack of IR and UV filtering in front of a sensor could make the camera's impression of color become warped relative to human vision since the sensors have sensitivity where we do not, and some surfaces are brightly reflective in the near IR.
CFA dye formulation determines what spectral characteristics each dye/channel has. This determines/sets what R' G' B' actually mean in camera. After demosaicing, those R' G' B' will get converted into "real" and "known" rgb primaries+gamma curves, etc, something like sRGB or AdobeRGB. Each of those color spaces/profiles might have its own definition of "red", "green" and "blue" just like in-camera primary color channels do. In order to get correct colors, you have to convert in-camera color space into another easily displayable or rather - widely known color space.
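As a concrete (and deliberately simplified) sketch of that order of operations: demosaic first in the camera's native channel space, then apply a 3x3 matrix and a transfer curve to land in a known output space. The matrix below is a made-up placeholder; a real one comes from profiling the particular camera and CFA.

import numpy as np

CAM_TO_SRGB = np.array([[ 1.6, -0.4, -0.2],     # placeholder profiling result;
                        [-0.3,  1.5, -0.2],     # rows sum to 1 so white maps
                        [ 0.0, -0.5,  1.5]])    # to white

def camera_to_srgb(cam_rgb_linear):
    """cam_rgb_linear: (H, W, 3) demosaiced data in [0, 1], native channel space."""
    lin = np.clip(cam_rgb_linear @ CAM_TO_SRGB.T, 0.0, 1.0)
    return np.where(lin <= 0.0031308,
                    12.92 * lin,
                    1.055 * lin ** (1 / 2.4) - 0.055)           # sRGB transfer curve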
 
Hi,

Kodak was a master of these dyes. Thus making different sensors mimic some of their film types.

They even made some using CYM dyes on the CFA to allow more light thru for higher ISO capability. Those images were best processed with Kodak's own PhotoDesk or they tended to sport a yellowish cast.

You know, the more we discuss these sorts of things these days, the more I miss Kodak.

Stan
 
Hi,

Kodak was a master of these dyes. Thus making different sensors mimic some of their film types.

They even made some using CYM dyes on the CFA to allow more light thru for higher ISO capability. Those images were best processed with Kodak's own PhotoDesk or they tended to sport a yellowish cast.

You know, the more we discuss these sorts of things these days, the more I miss Kodak.

Stan
You can have my 14n :-) :-) :-)

It doesn't work, mind you. But the results are still about the same as when it was new :-)
 
The more I think about this, the more I'm convincing myself that the demosaicing should be performed in native camera space. It is intellectually cleaner to assign colors after demosaicing when you have triplets at every location. In a perfect world you'd probably want to apply a tone curve to make the interpolation more perceptually uniform, but I'm not convinced that would offer any real advantages.
Yes, scientifically, this would be my approach too.

I'm still wrestling with the distinction between the RGB channels and "color". Granted that colors are represented as triplets of RGB, a triplet such as (r, 0, 0) still represents a color and the reading from the R channel.

I don't know how such a "color" as (r, 0, 0) would be represented in CIE space and ultimately in some gamut. But surely it would be a physically perceived color to our visual senses.

I did see, however, in reading some posts of someone who made the transition from Phase One to Fujifilm, that the lack of IR and UV filtering in front of a sensor could make the camera's impression of color become warped relative to human vision since the sensors have sensitivity where we do not, and some surfaces are brightly reflective in the near IR.
CFA dye formulation determines what spectral characteristics each dye/channel has. This determines/sets what R' G' B' actually mean in camera. After demosaicing, those R' G' B' will get converted into "real" and "known" rgb primaries+gamma curves, etc, something like sRGB or AdobeRGB. Each of those color spaces/profiles might have its own definition of "red", "green" and "blue" just like in-camera primary color channels do. In order to get correct colors, you have to convert in-camera color space into another easily displayable or rather - widely known color space.
Unless the camera meets the Luther-Ives criterion— and no commercial camera does — precise conversion to CIE color is impossible. It is wrong to think of commercial cameras as having a native color space. They don’t capture colors at all. Color is added during the development process.
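One way to see "color is added during development" in practice: the matrix a raw converter uses is often derived by fitting camera triplets of known patches to their CIE values, for example by least squares, and because Luther-Ives is not satisfied the fit always carries residual error. A sketch with random placeholder data (not measurements):

import numpy as np

rng = np.random.default_rng(4)
camera_rgb = rng.uniform(0.05, 1.0, size=(24, 3))   # e.g. a 24-patch target, assumed
target_xyz = rng.uniform(0.05, 1.0, size=(24, 3))   # corresponding measured XYZ, assumed
matrix, residuals, *_ = np.linalg.lstsq(camera_rgb, target_xyz, rcond=None)
print("fitted camera->XYZ matrix:\n", matrix.T)
print("sum of squared residuals per channel:", residuals)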
 
Hi,

Ha!

I didn't want one new. Go from an F5 to an F80? Um, no. And that wasn't a Kodak sensor. What I wanted was my buddies at Kodak to insert one of their 10 MP APS-H CCDs in a 760c for me. But, no.... (Of course it isn't just a swap out the CCD operation)

I don't know if the 14 MP FF CMOS sensor used Kodak dyes or not.

Stan
 
The more I think about this, the more I'm convincing myself that the demosaicing should be performed in native camera space. It is intellectually cleaner to assign colors after demosaicing when you have triplets at every location. In a perfect world you'd probably want to apply a tone curve to make the interpolation more perceptually uniform, but I'm not convinced that would offer any real advantages.
Yes, scientifically, this would be my approach too.

I'm still wrestling with the distinction between the RGB channels and "color". Granted that colors are represented as triplets of RGB, a triplet such as (r, 0, 0) still represents a color and the reading from the R channel.

I don't know how such a "color" as (r, 0, 0) would be represented in CIE space and ultimately in some gamut. But surely it would be a physically perceived color to our visual senses.

I did see, however, in reading some posts of someone who made the transition from Phase One to Fujifilm, that the lack of IR and UV filtering in front of a sensor could make the camera's impression of color become warped relative to human vision since the sensors have sensitivity where we do not, and some surfaces are brightly reflective in the near IR.
CFA dye formulation determines what spectral characteristics each dye/channel has. This determines/sets what R' G' B' actually mean in camera. After demosaicing, those R' G' B' will get converted into "real" and "known" rgb primaries+gamma curves, etc, something like sRGB or AdobeRGB. Each of those color spaces/profiles might have its own definition of "red", "green" and "blue" just like in-camera primary color channels do. In order to get correct colors, you have to convert in-camera color space into another easily displayable or rather - widely known color space.
Unless the camera meets the Luther-Ives criterion— and no commercial camera does — precise conversion to CIE color is impossible. It is wrong to think of commercial cameras as having a native color space. They don’t capture colors at all. Color is added during the development process.
I guess that would depend on your definition of "color". Since cameras collapse spectra into a specific set of measured "brightness" values, defined by what the CFA spectral transmittance function is for each of its "primary" colors plus the spectral sensitivity of the sensor itself, they do measure sets of luma components within their assumed color space.

So, if by "color" you meant "spectrum", I suppose it is correct to conclude that non-scientific color cameras can't measure any of that precisely, as they cannot uniquely identify each possible spectrum... but neither can the human eye.

But would it not be a correct statement to make that they have some color space, due to the "assumption" made through the selection of dyes and the sensitivity of the sensor? (Albeit not a color space that can be mapped onto the original spectral multidimensional space, for obvious reasons.)
 
Hi,

Ha!

I didn't want one new. Go from an F5 to an F80? Um, no. And that wasn't a Kodak sensor. What I wanted was my buddies at Kodak to insert one of their 10 MP APS-H CCDs in a 760c for me. But, no.... (Of course it isn't just a swap out the CCD operation)

I don't know if the 14 MP FF CMOS sensor used Kodak dyes or not.

Stan
My recollection was the 14n sensor was made by some company called Fill Factory.

They should have tried harder...
 
My recollection was the 14n sensor was made by some company called Fill Factory.

They should have tried harder...
Kodak should have put an AA filter over that sensor.
 
The more I think about this, the more I'm convincing myself that the demosaicing should be performed in native camera space. It is intellectually cleaner to assign colors after demosaicing when you have triplets at every location. In a perfect world you'd probably want to apply a tone curve to make the interpolation more perceptually uniform, but I'm not convinced that would offer any real advantages.
Yes, scientifically, this would be my approach too.

I'm still wrestling with the distinction between the RGB channels and "color". Granted that colors are represented as triplets of RGB, a triplet such as (r, 0, 0) still represents a color and the reading from the R channel.

I don't know how such a "color" as (r, 0, 0) would be represented in CIE space and ultimately in some gamut. But surely it would be a physically perceived color to our visual senses.

I did see, however, in reading some posts of someone who made the transition from Phase One to Fujifilm, that the lack of IR and UV filtering in front of a sensor could make the camera's impression of color become warped relative to human vision since the sensors have sensitivity where we do not, and some surfaces are brightly reflective in the near IR.
CFA dye formulation determines what spectral characteristics each dye/channel has. This determines/sets what R' G' B' actually mean in camera. After demosaicing, those R' G' B' will get converted into "real" and "known" rgb primaries+gamma curves, etc, something like sRGB or AdobeRGB. Each of those color spaces/profiles might have its own definition of "red", "green" and "blue" just like in-camera primary color channels do. In order to get correct colors, you have to convert in-camera color space into another easily displayable or rather - widely known color space.
Unless the camera meets the Luther-Ives criterion— and no commercial camera does — precise conversion to CIE color is impossible. It is wrong to think of commercial cameras as having a native color space. They don’t capture colors at all. Color is added during the development process.
I guess that would depend on your definition of "color".
Color is a psychological phenomenon, not a physical one.
Since cameras collapse spectra into a specific set of measured "brightness" values, defined by what the CFA spectral transmittance function is for each of its "primary" colors plus the spectral sensitivity of the sensor itself, they do measure sets of luma components within their assumed color space.
They don't have a color space.
So, if by "color" you meant "spectrum",
Color does not mean spectrum.
I suppose it is correct to conclude that non-scientific color cameras can't measure any of that precisely, as they cannot uniquely identify each possible spectrum...
You don't want the camera to uniquely identify each possible spectrum. You want the camera to make the same metameric matches that a human does.
but neither can the human eye.
To identify color, the eye doesn't need to uniquely identify each possible spectrum. The eye makes metameric matches, and that's just fine.
But would it not be a correct statement to make that they have some color space,
There you go again. Real cameras don't have color spaces. They don't see color the way the eye does. Colors are assigned to camera raw triplets in processing.
due "assumption" made through selection to dyes and sensitivity of the sensor? (albeit not a color space that can be mapped onto the original spectral multidimensional space, for obvious reasons).
You might bone up on the color matching experiment and its implications, and read this:

https://blog.kasson.com/the-last-word/the-color-reproduction-problem/
Jim
 
Human vision uses three signals, called 'S', 'M' and 'L', that the brain combines into perceived color.

Camera sensors also have three signals, called 'R', 'G' and 'B'.

Both sets of signals are integrated over the spectrum, essentially multiplying the spectral reflectance of the subject by the illumination spectrum times the sensor's spectral response.

The RGB signals from the sensor are remapped into a color space, like sRGB or ProPhoto RGB.

On computer screens additive color is used: three near-monochromatic light sources are mixed, which stimulate an 'S', 'M', 'L' response in human vision.

If the stimulated 'S', 'M', 'L' response is the same as the 'S', 'M', 'L' response stimulated by the original 'color' under the original illuminant, we get a metameric match.

Best regards

Erik
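Erik's description translates directly into a small numeric model. Everything below is a crude stand-in (flat illuminant, Gaussian dye and cone curves), purely to show that both the camera and the eye reduce a spectrum to three integrals.

import numpy as np

wl = np.arange(400.0, 701.0, 10.0)                   # wavelength samples, nm

def band(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

illuminant  = np.ones_like(wl)                       # flat stand-in "daylight"
reflectance = band(600, 40)                          # a reddish surface

camera_sens = np.stack([band(600, 30), band(540, 30), band(460, 30)])  # fake R, G, B dyes
eye_sens    = np.stack([band(565, 35), band(540, 35), band(445, 25)])  # fake L, M, S cones

spectrum = reflectance * illuminant
camera_rgb = (spectrum * camera_sens).sum(axis=1) * 10.0   # d(lambda) = 10 nm
eye_lms    = (spectrum * eye_sens).sum(axis=1) * 10.0
print("camera RGB:", camera_rgb.round(1), " eye LMS:", eye_lms.round(1))

Two different spectra that produce the same eye triplet are a metameric match for the viewer; a camera whose curves differ from the eye's will not always agree on which pairs match, which is the Luther-Ives point made earlier in the thread.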
 
Human vision uses three signals, called 'S', 'M' and 'L', that the brain combines into perceived color.

Camera sensors also have three signals, called 'R', 'G' and 'B'.

Both sets of signals are integrated over the spectrum, essentially multiplying the spectral reflectance of the subject by the illumination spectrum times the sensor's spectral response.

The RGB signals from the sensor are remapped into a color space, like sRGB or ProPhoto RGB.

On computer screens additive color is used: three near-monochromatic light sources are mixed, which stimulate an 'S', 'M', 'L' response in human vision.

If the stimulated 'S', 'M', 'L' response is the same as the 'S', 'M', 'L' response stimulated by the original 'color' under the original illuminant, we get a metameric match.

Best regards

Erik
Yep. I tried explaining that to Jim.

This is an inevitable conclusion if one looks at how a simple camera works and how many parameters are recorded. What is being recorded per pixel is a set of luminance values that are "mapped" onto the spectral characteristics of each dye used plus the sensor characteristic. How this is not, by proxy, a defined color space is baffling to me, if all it does is find a point in a 3D SPACE where each triplet value is located, with the outer "shell" of that 3D space being defined by the dye characteristics. This is a space representing colors... hence, a color space.

The fact that it is a "perceived" color is irrelevant to whether it is a color space or not.
 