How does a 3D LUT map color gamuts if all it does is give an Output RGB value for a given RGB Input

The common (and perhaps overly simplified) explanation online is that 3D and 1D LUTs simply give a certain RGB output for a given RGB input. That explanation doesn't make clear how a 3D conversion LUT actually maps the color gamut of one color space to the gamut of another. Taking an example:

S-Log3 (which uses S-Gamut3 or S-Gamut3.Cine) to Rec.709.

Looking at the chromaticity primaries for S-Gamut3 vs Rec.709, we'll see that S-Gamut3 dwarfs Rec.709 with its vastly richer and deeper primaries (two of them, green and blue, go so far as to lie outside the visible spectrum).

So, in a 10-bit file captured with the S-Gamut3 profile, something with an RGB value of (1023,0,0) will give a much richer red on a screen that can actually display S-Gamut3 than something captured with a Rec.709 profile at an RGB value of (1023,0,0) and shown on a Rec.709 display.

Now, what happens when we insert the S-Gamut3 image into an NLE (Premiere or Resolve) that's displaying on a screen with a Rec.709 space?

Does the monitor receive the value (1023,0,0) and just display the reddest red it's capable of (which is less rich than what the S-Gamut3 space actually captured)? In that case I understand that it's automatically mapped: the display assigns its own colors to the code values and displays only the colors it can display. The problem is that there can be color shifts, and that is what the 3D LUT is fixing. (There is also the gamma curve of the log image, but I'm not discussing the gamma curve and the washed-out look of the log image; that part I understand. It's purely the color values and how they're interpreted that I'm interested in.)

OR

Does something tell the monitor that this (1023,0,0) is meant for S-Gamut3, which it doesn't know how to interpret, so it assigns some arbitrary RGB value within the Rec.709 color space? And this is where the 3D LUT comes in: it has to map the S-Gamut3 gamut to Rec.709, and then the monitor finally knows to display the reddest red it's capable of.
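For what it's worth, here is a rough Octave/Matlab-style sketch of what a conversion LUT amounts to under the hood. The names slog3_to_linear, M_sgamut3_to_xyz, M_xyz_to_rec709 and rec709_oetf are placeholders standing in for Sony's and the ITU's published formulas and matrices (identity/approximate stand-ins are used here just so the sketch runs); the point is only that the LUT is that fixed color transform pre-computed on a grid of input triplets, with interpolation in between.

% Placeholders for the published pieces of the S-Log3/S-Gamut3 -> Rec.709 transform.
% Substitute the real Sony decode formula and the published 3x3 matrices.
slog3_to_linear  = @(x) x;              % placeholder for Sony's S-Log3 decode curve
M_sgamut3_to_xyz = eye(3);              % placeholder for the S-Gamut3 -> XYZ matrix
M_xyz_to_rec709  = eye(3);              % placeholder for the XYZ -> Rec.709 matrix
rec709_oetf      = @(x) x .^ (1/2.4);   % rough stand-in for the display encoding curve

% Build an N x N x N x 3 table by sampling the transform on a grid of inputs.
N = 33;                                 % a typical cube size
g = linspace(0, 1, N);                  % normalized input code values
LUT = zeros(N, N, N, 3);
for r = 1:N
  for gn = 1:N
    for b = 1:N
      lin    = slog3_to_linear([g(r); g(gn); g(b)]);  % undo the log curve
      xyz    = M_sgamut3_to_xyz * lin;                % wide camera gamut -> XYZ
      rgb709 = M_xyz_to_rec709 * xyz;                 % XYZ -> Rec.709 primaries
      rgb709 = min(max(rgb709, 0), 1);                % out-of-gamut values clip here
      LUT(r, gn, b, :) = reshape(rec709_oetf(rgb709), 1, 1, 1, 3);
    end
  end
end

% Applying the LUT to one pixel is just trilinear interpolation between the
% eight surrounding grid entries, one output channel at a time.
pix = [0.8; 0.1; 0.1];                  % an example S-Log3-encoded input triplet
out = zeros(3, 1);
for c = 1:3
  out(c) = interp3(g, g, g, squeeze(LUT(:, :, :, c)), pix(2), pix(1), pix(3));
end

In other words, the gamut mapping lives in the table entries themselves: an S-Gamut3 red that Rec.709 cannot represent simply ends up at (or near) the most saturated Rec.709 red when the table is built, and neither the NLE nor the monitor needs to know what space the numbers started in.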
 
I believe White Balance as we intend it in raw conversion (e.g. applying a different gain to each of the three raw color channels so that a neutral object will show similar intensity in each) performs at least the three distinct functions below. Working back from silicon:
  1. it converts the signal in the raw data from one proportional to captured quanta (photoelectrons) to one proportional to captured energy;
  2. it partly compensates for different relative sizes and shapes in the SSFs; and
  3. it partly compensates for the illuminant, a sort of first, pre-XYZ, early stab at adaptation, L*ab/wrong Von Kries style.
Each function results in a channel-specific factor triplet that when multiplied together with the others become the wb multipliers of yore.

Some digital cameras come with a list of them pre-computed by the manufacturer for typical illuminants. Those work well as long as the photographer is in the presence of a single, standard illuminant, so every camera tries its best to understand what illuminant the scene is under in order to use the correct ones. Some take hints from the user (e.g. Nikon's Natural Light Auto mode - don't forget it's still on when you move indoors).

Because we are assuming that the system is linear, it does not really matter when or where in the pipeline the three factors that make up the multipliers are applied.
If WB is done by three multipliers, then this is equivalent to applying a diagonal matrix. If you normally do it after the color transformation, then to get the same effect with a transform before it (in RAW), you have to multiply the RAW numbers by a non-diagonal matrix, and vice versa. In other words, if they are multipliers on one side, they cannot be such on the other side.
But since the camera is already estimating the illuminant, in theory it should also know the individual factors above, so it makes sense to me that (1) and (2) be applied before projecting out of raw space. (3), I suspect, could probably be handled better elsewhere?

A question for the Colorati.

Jack
 
Another factor: why should we WB our shots? Our eyes do that automatically, right? Yet, we still do. That does not make metamers not metamers, etc., but it tells us that finding the "right" optimization is not simple.
If the image viewer were completely adapted to the original scene (and if the light were unmixed and the camera LI-ish), we wouldn't need to color balance.
I doubt it. When I enter a room with tungsten light, my eyes adapt instantly. When I am inside and there is a window with light coming from outside (at dusk), I do not see it as intense blue, I see it gray. If I look at a non-WB'ed photo in a dark room, it still looks bad regardless of the lack of ambient light with colder color temperature.
You missed the part about unmixed light.
So if I take a shot in a tungsten lit room with a camera set to daylight WB, display it on my monitor, it should look like what I see with naked eyes in the same room? It does not, it looks horribly orange.
That’s because your state of adaptation is different, and because you already messed with the WB.
Which WB is unmessed?
You messed with it when you set it to daylight.
I do not insist on daylight. An ideal camera/.../monitor would not have WB at all. Tell me which one is the "god given one", and I will test that one.
If you had a LI camera, I could tell you, but I'd have to think about it, since adaptation takes place further up the human signal processing chain.
Produce the same colors on the screen,
What does this mean?
Colorimetric color reproduction, in which the reproduced image has the same chromaticities as the original, and luminances proportional to those of the original.
trick the viewer into the same adaptation, and the color will look the same with no white balancing.
About the state of adaptation - this is the whole point. Why do my eyes adapt to the scene but not to the same scene on the screen?
That is a very complicated subject. There have been many experiments that attempted to artificially control the viewer's state of adaptation. It's not as easy as it sounds.

In a dark room, with a white or gray surround, the viewer tends to adapt to the monitor white point.
I have no objection against the monitor's white point. In fact, I calibrate my monitors with their native one knowing that it is not 5500K or whatever exactly. But then my monitors are above average in quality. I cannot adjust my vision to an un-WB'ed photo taken in tungsten light however, dark room, gray surround, whatever. I do not even have to, if the surrounding light is still tungsten.
Adaptation is more complex than you are making it out to be. As you pointed out earlier wrt mixed lighting sources, humans can adapt to multiple white points within the visual field.
BTW, WB makes a hypothetical LI sensor a non-LI one, roughly speaking.
Adaptation takes place at a higher level in the human image processing chain than where the CMFs are derived.

BTW, the commonly employed adaptation corrections, such as von Kries, CMCCAT97, CMCCAT2000, and Bradford, are approximate.
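For the curious, here is a minimal sketch of what one of those corrections looks like, using the linearized Bradford transform with the commonly tabulated matrix and white points, and illuminant A to D65 chosen purely as an example (the CMCCAT variants add a non-linearity on the blue channel, which is part of why all of these are approximations):

% Linearized Bradford chromatic adaptation: a von Kries-style per-channel
% scaling performed in a "sharpened" RGB space rather than in cone space.
M_brad = [ 0.8951  0.2664 -0.1614;
          -0.7502  1.7135  0.0367;
           0.0389 -0.0685  1.0296];
wp_A   = [1.09850; 1.00000; 0.35585];   % XYZ white of illuminant A (2-deg observer)
wp_D65 = [0.95047; 1.00000; 1.08883];   % XYZ white of D65

gain = (M_brad * wp_D65) ./ (M_brad * wp_A);  % per-channel gains in Bradford space
CAT  = inv(M_brad) * diag(gain) * M_brad;     % full 3x3 adaptation matrix

xyz_under_A = [0.30; 0.25; 0.10];             % an arbitrary example color
xyz_adapted = CAT * xyz_under_A;              % its predicted match under D65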

--
https://blog.kasson.com
 
If WB is done by three multipliers, then this is equivalent to applying a diagonal matrix. If you normally do it after the color transformation, then to get the same effect with a transform before it (in RAW), you have to multiply the RAW numbers by a non-diagonal matrix, and vice versa. In other words, if they are multipliers on one side, they cannot be such on the other side.
Yes, I was not clear. The three factors can be moved around according to the rules of linear algebra but that does not necessarily mean every term can be applied at different stages in the workflow arbitrarily.

A simplified version of the linear path from pre-white-balance, demosaiced data (raw below) to an output color space like sRGB can be expressed as a matrix multiplication as follows

[1] sRGB = M * raw,

using the row-wise matrix multiplication convention with 3xN data. M is (read it back to front)

M = M_xyzD65>sRGB * CAM_xyzCCT>xyzD65 * CCM_wbraw>xyzCCT * diag(F3) * diag(F2) * diag(F1)

F{1,2,3} are the three factor triplets from the previous post, CCT represents the XYZ coordinate of the white point, diag a diagonal matrix; the other symbols are hopefully self-explanatory.

Every term in the equation above is a 3x3 matrix. As we know matrix multiplication is generally not commutative so order matters (A*B is not the same as B*A) and we could not, for instance, switch the first two terms on the right side as-is without obtaining a different result.

They can however be combined thanks to the associative property. For instance, WB multipliers, as they are normally known to photographers, can be obtained by combining the three factors as follows (Matlab notation)

WB_mult = F1.*F2.*F3 = diag(F3) * diag(F2) * diag(F1) * [1;1;1]
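As a quick sanity check in Octave, with made-up factor triplets purely for illustration:

F1 = [1.00; 0.90; 1.10];    % made-up example triplets, not real camera data
F2 = [2.00; 1.00; 1.50];
F3 = [1.10; 1.00; 0.80];

WB_mult = F1 .* F2 .* F3;                          % element-wise combination
WB_alt  = diag(F3) * diag(F2) * diag(F1) * [1; 1; 1];
max(abs(WB_mult - WB_alt))                         % 0: the two forms agree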

So in linear workflows M can be implemented in practice as

[2] M = M_xyzD65>sRGB * CAM_xyzCCT>xyzD65 * CCM_wbraw>xyzCCT * diag(WB_mult)

Back to the order of the three factors making up the white balance multipliers. There is another useful property of matrix multiplication:

[3] (A*B)' = B'*A'

with ' indicating transpose. Since the transpose of a diagonal matrix is the original diagonal matrix (and the product of two diagonal matrices is itself diagonal, hence equal to its own transpose),

diag(F1) * diag(F2) = diag(F2) * diag(F1)

so the three factors can not only be merged but interchanged amongst each other at will. This shuffling around can also be applied to non-diagonal matrices by keeping track of transposition. For instance, if we combine all matrices other than the multipliers in [2] into a single matrix from white balanced, demosaiced raw to sRGB, M becomes

M = M_wbraw>sRGB * diag(F3) * diag(F2) * diag(F1)

which can for instance be written

M = [M_wbraw>sRGB * diag(F3)] * [diag(F2) * diag(F1)]

or applying [3]

M' = [diag(F2) * diag(F1)] * [M_wbraw>sRGB * diag(F3)]'

Applying [3] to equation [1]

sRGB' = (M * raw)' = raw' * M', and therefore

sRGB' = raw' * [diag(F2) * diag(F1)] * [M_wbraw>sRGB * diag(F3)]'

Many other permutations are possible. It then becomes clear that, as long as linearity holds, the three factors in the previous post can be split and applied either before or after projection into the color domain.
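A quick numeric check of that last form in Octave, with made-up numbers standing in for the real matrix and factors:

M_wbraw_sRGB = [ 1.6 -0.4 -0.2;      % stand-in values, not a real camera matrix
                -0.3  1.5 -0.2;
                 0.0 -0.5  1.5];
F1 = [1.00; 0.90; 1.10];  F2 = [2.00; 1.00; 1.50];  F3 = [1.10; 1.00; 0.80];
raw = rand(3, 5);                    % five random "pixels" in the 3xN convention

M    = M_wbraw_sRGB * diag(F3) * diag(F2) * diag(F1);
sRGB = M * raw;                      % reference result, as in equation [1]

% the same data through the permuted, transposed form above
sRGB_t = raw' * (diag(F2) * diag(F1)) * (M_wbraw_sRGB * diag(F3))';

max(max(abs(sRGB' - sRGB_t)))        % ~0, up to floating-point rounding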

It's a minor point but I am wondering whether the effect of F3's rudimentary adaptation-by-scaling could provide better results if handled later by the Chromatic Adaptation process (Bradford and all).

Jack

PS Incidentally, it turns out that Matlab/Octave tend to prefer Nx3 column-wise data, so the data already comes transposed and sRGB' = raw'*M' is the more efficient form to use since it possibly only requires the inexpensive transposition of a 3x3 matrix.
 
I understand the theory. I am just saying that our chain from cameras to monitors is far enough from what the theory predicts (for a good enough chain) that adaptation is not happening.
The adaptation is a function of the viewing conditions, not the image alone. Surround, field of view, clues to self luminous display or not, time spent in environment, etc. all must be controlled if a particular state of adaptation is to be achieved.
 
I understand this as well and I can play with those factors. Still not happening.
 
That is a very complicated subject. There have been many experiments that attempted to artificially control the viewer's state of adaptation. It's not as easy as it sounds.

In a dark room, with a white or gray surround, the viewer tends to adapt to the monitor white point.
Could part of this be due to the fact that almost no display technology we have available to us can come close to actually reproducing a scene, thus screwing with our adaptation/causing our brains at some level to declare "this part of the scene is a monitor, adapt to its white point"?
Yes. There are ways around this, but they are complicated. When I am at a real computer I can describe one.
Things such as LCD backlight leakage in the shadows, all colors being approximated by a mix of red, green, and blue instead of the original wavelengths, red/green/blue not being fully saturated, etc?
Those can be largely gotten around.
 
Colorimetric color reproduction, in which the reproduced image has the same chromaticities as the original, and luminances proportional to those of the original.
Note that to maintain the perceptual appearance of the scene, apparently it's not necessarily the luminances that should be proportional to those of the original but rather their respective logarithms:
https://www.bbc.co.uk/rd/publications/display-high-dynamic-range-images-varying-viewing-conditions
Maintaining the perceptual appearance is often not what the customer wants ;)

IMHO discussions on this forum tend to overstate the need for reproduction.

--
http://www.libraw.org/
 
Colorimetric color reproduction, in which the reproduced image has the same chromaticities as the original, and luminances proportional to those of the original.
Note that to maintain the perceptual appearance of the scene, apparently it's not necessarily the luminances that should be proportional to those of the original but rather their respective logarithms:
https://www.bbc.co.uk/rd/publications/display-high-dynamic-range-images-varying-viewing-conditions
Here are Hunt's six categories of color reproduction:
  • Spectral color reproduction, in which the reproduction, on a pixel-by-pixel basis, contains the same spectral power distributions or reflectance spectra as the original.
  • Exact color reproduction, in which the reproduction has the same chromaticities and luminances as those of the original.
  • Colorimetric color reproduction, in which the reproduced image has the same chromaticities as the original, and luminances proportional to those of the original.
  • Equivalent color reproduction, in which the image values are corrected so that the image appears the same as the original, even though the reproduction is viewed in different conditions than was the original.
  • Corresponding color reproduction, in which the constraints of equivalent color reproduction are relaxed to allow differing absolute illumination levels between the original and the reproduction; the criterion becomes that the reproduction looks the same as the original would have had it been illuminated at the absolute level at which the reproduction is viewed.
  • Preferred color reproduction, in which reproduced colors differ from the original colors in order to give a more pleasing result.
You are suggesting something other than what Hunt calls colorimetric color reproduction. Probably equivalent color reproduction, or maybe more likely corresponding color reproduction.

--
https://blog.kasson.com
 
Yes. There are ways around this, but they are complicated. When I am at a real computer I can describe one.
Okay, I'm on a laptop now.

About 30 years ago, I attended an SPIE conference and saw an interesting presentation on soft proofing, which was a big deal at a time when offset lithography was the most common printing technique.

The presenters asked the question, "Why doesn't soft proofing work better than it does?" They came to the conclusion that the problem wasn't the colors on the screen; it was the viewer's perception of them. (Yeah, blame the victim.) They set out to control the adaptation of the viewer. They constructed a viewing booth in which a real print could be viewed with a fixed surround and a 5000 Kelvin illuminant. The booth also contained a monitor with a surround that looked just like the print's surround. The monitor white point was set to 5000 Kelvin, and the monitor's surround -- reflected, not self-luminous -- was illuminated at 5000 Kelvin by a projector with a gobo mask for the monitor position. They matched the illumination levels on both sides.

They said the viewers reported excellent matching between the real print and the soft proof, which wasn't the case with the standard viewing conditions of the time. They said it was important to disguise the fact that the monitor was self-luminous, since humans process color that they think comes from reflected light differently than color that they think comes from a self-luminous source.
 
Any operation in a colour space can result in gamut mapping, given both colour spaces have gamuts.
Well, Iliah, you've made me think this morning (not the first time, either). My first reaction was that all color spaces have gamuts, in that they can only represent colors. But that is sophistry, I fear. There are color spaces like XYZ and CIELUV that can represent all the colors we can see. Does that mean they don't have gamuts? I guess you could say that, but it feels wrong to me.
I go by RIT definition of colour gamut:

"A gamut is defined as the range of colors that a given imaging device can display." (they include virtual imaging devices and abstract colour spaces here, and the key word is "display" - as in output)
So XYZ does have a gamut, which is the range of colors, period.
Yes.
and

"there is no such thing as a gamut for an input device".
I disagree with that, sort of. I agree that there is no gamut for devices which produce non-colorimetric outputs, like cameras writing raw files and spectrophotometers writing spectra. But cameras that produce sRGB or Adobe 1998 JPEG files have gamuts,
Yes, non-colours like UV and IR being mapped to sRGB is a part of that ;)

Mapping to a working colour space is image processor output, to me it's not the same as the camera output. When the image is processed in a device, I don't consider it an input device anymore.
which are the range of colors that they can produce. Colorimeters have gamuts.
For the non color scientists amongst us, the key words in the interesting discussion above are
  1. 'color', which means all and only tones that can be perceived by the Human Visual System; and
  2. 'mapping', which is a linear or non-linear function that moves tones from one space to the next
I am going to concentrate on linear mapping in 2 above because in theory with non-linear mapping any tone (e.g. UV and IR) could be made to fit inside the visible range, thus be erroneously considered a color.
Most of the sensitivity is in the 400 to 700 nm region.

Linear mapping is impossible because the measurement space is not colorimetric.
So do input devices have Gamuts? I am going to lay out the case in favor of it, based on the fact that Color Science is the result of inexact compromises as we all know. We just have to specify the circumstances under which the gamut is valid, just like we have to specify a CMF for a given Observer in 1 above. CMF curves are a compromise, an average response in a given set of conditions (illuminant, 2 deg vs 10 deg, etc.). The individual curves from which the average was derived create dE errors and different metamers compared to it.

To determine the Input Device Gamut we work backwards from XYZ, assuming a perfect output chain. We choose a relevant illuminant and CMF from a suitable Standard Observer and project the relative locus in the XYZ parallelepiped (ahem, space) linearly back to the camera raw parallelepiped, blocking/clipping values below zero and above full scale.

That's the gamut in camera raw space, clearly a compromise valid only for those conditions, but then again isn't everything in color science valid only under a given set of fairly stringent assumptions?

To see what the camera raw gamut looks like in colorimetric color spaces we reverse the process: start with all possible values that could be captured by the sensor in the raw file and project them linearly through the inverse of the matrix used just above to XYZ, keeping only values that fall inside the chosen Observer locus solid and discarding the rest. It is then easy to project that set of, by now, colors to other parallelepipeds (color spaces), each time clipping/blocking any color that does not fall inside the relative parallelepiped.
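A rough Octave sketch of that forward pass; M_raw_to_xyz and in_locus are placeholders to be filled in from real SSF/CMF data for the chosen illuminant and Observer:

% Sample the whole raw cube, project it linearly to XYZ, and keep only the
% values that land inside the Observer's locus solid.
M_raw_to_xyz = eye(3);                  % stand-in for the real compromise matrix
in_locus = @(xyz) all(xyz >= 0, 1);     % stand-in for a real point-in-solid test

n = 32;                                 % samples per raw channel
v = linspace(0, 1, n);
[r, g, b] = ndgrid(v, v, v);
raw = [r(:) g(:) b(:)]';                % 3 x n^3 grid of possible raw triplets

xyz  = M_raw_to_xyz * raw;              % project linearly out of camera space
keep = in_locus(xyz);                   % discard the non-colors
gamut_xyz = xyz(:, keep);               % the camera gamut, under these assumptions

Projecting gamut_xyz into sRGB, Adobe RGB, etc. is then just another matrix multiply followed by the same kind of clipping.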

For this to be valid I think there need to be two additional assumptions: just like for the individual observers from which Standard Observers are defined, SSFs are expected to be somewhat related to CMFs (for example trichromatic with somewhat similar curves and wavelength ranges); and the matrix used to project out of camera space needs to provide a good compromise response to visible colors as viewed by a typical observer.

Current consumer digital cameras typically meet the above conditions relatively well, so in my opinion they can be said to have a Gamut under a given illuminant and setup.

Arri D21 under D50, from Figure 9 in www.strollswithmydog.com/perfect-color-filter-array/

Jack
 
Linear mapping is impossible because the measurement space is not colorimetric.
Of course, linear mappings are possible. It is a different question what they do.
 
You know the context, and in that context, that is for mapping from measurement space to color space, linear mapping doesn't work.
 
Of course, linear mappings are possible. It is a different question what they do.
Touche!
 
No. In the context of color reproduction it's impossible. That's today's truth.
 
For some cameras, and for some purposes, a compromise matrix is adequate.
 
