Trying to understand gamut limitations

Ruby Rod

Senior Member
I believe some things that may or may not be true. Maybe somebody can steer me in the right direction.

1) My monitor has red, green and blue sources, making up a pixel. The location of those sources on the CIE color space forms a triangle, and there's no way to create colors outside that triangle.

2) My printer has a bunch of ink tanks. They define more of a blob in the CIE space, but the basic principle remains the same. Can't generate colors outside that blob.

3) My camera has a Bayer filter with red, green and blue filters. Same rule applies? Can you get outside the triangle?

I can think of the camera as "measurements behind filters", if that changes anything. What I don't understand is why nobody talks about the color gamut the camera is capable of reproducing. Unless there's something very special about the filters, how can the camera get much beyond Adobe RGB? Why in the world would anybody use ProPhoto RGB color space, since most of it would be wasted?

Extra thought- do those filters fade over time?
 
3) My camera has a Bayer filter with red, green and blue filters. Same rule applies? Can you get outside the triangle?

Extra thought- do those filters fade over time?
I cannot answer that question, but cameras work differently from output devices.

I would think that RGB output devices are the easiest to understand. They used to be based on 'phosphors' with well-determined wavelengths, and the gamut was the triangle spanned by those phosphors' chromaticities.

Cameras work differently: the camera has three kinds of pixels, 'r', 'g' and 'b', each sensitive to a wide range of wavelengths.

For any light falling on the sensor there will be three signals, one per channel, and those signals are mapped into some large color space, probably XYZ.

From that XYZ space the color is mapped to some smaller color space like Adobe RGB or sRGB.

Having narrow filters for 'r', 'g' and 'b' on the sensor would not work, as such a sensor would be blind to almost all colors.
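
To make that two-step mapping concrete, here is a minimal sketch. The camera-to-XYZ matrix below is invented purely for illustration (real ones are profiled per camera model); the XYZ-to-linear-sRGB matrix is the standard D65 one.

```python
import numpy as np

# Hypothetical camera-raw (r, g, b) -> XYZ matrix, invented for illustration only.
CAM_TO_XYZ = np.array([
    [0.60, 0.28, 0.12],
    [0.25, 0.65, 0.10],
    [0.05, 0.15, 0.80],
])

# Standard XYZ (D65) -> linear sRGB matrix.
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

cam_rgb = np.array([0.20, 0.55, 0.30])   # raw signals from the three channels
xyz = CAM_TO_XYZ @ cam_rgb               # first mapping: into the big XYZ space
srgb_linear = XYZ_TO_SRGB @ xyz          # second mapping: into a smaller output space
print(xyz, srgb_linear)                  # gamma encoding would follow for display
```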

Best regards

Erik
 
The photo engineer designing cameras and display devices strives to create a faithful image. Close but no cigar.

We humans see via an eye-brain system that receives electrical signals from the cone and rod cells embedded in the retina of our eyes. These signals, when passed to our brain, are interpreted into what we see.

The eye-brain combination is very adaptive. To see this adaptivity, procure a collection of colored cellophane candy wrappers (color filters). Place just one with strong color over just one eye. Now remove the filter and look about, blinking left eye then right eye. The filtered eye will undergo a distinct change as to its color balance and this change persists for a time. You will gain a better understanding of the eye-brain vision system if you perform this experiment. Note that this system changes its color perception minute by minute based on the color of the ambient light.

The filters used on the camera sensor are designed to yield three images of the vista being photographed. This is an additive system consisting of red, green and blue filters. The color and pattern of this array must adjust the color sensitivity of the image sensor, because its inherent spectral sensitivity is a poor match to human vision due to the materials that must be used to construct it.

Color TV, digital cinema, and computer monitors use the additive system. Images on paper operate using reflected light from an adjacent light source; best for that is a subtractive system using the complementary primaries yellow, magenta, and cyan. The yellow dye/pigment is excellent, the magenta is fair, and the cyan is poor. When all three overlap, black should result, but alas the result is a muddy dark gray. Black dye/pigment is added to the mix to give the blackness a kick. This system is called CMYK, where K stands for the key (black) plate.

The dyes/pigments of each system must be tweaked so that the display system presents to the eye the needed shades of red, green, and blue. Their colors are adjusted so that the most faithful possible image is viewed.

All dyes/pigments will fade in time.
 
3) My camera has a Bayer filter with red, green and blue filters. Same rule applies? Can you get outside the triangle?
Hello Rod,

In the case of digital cameras the shape is more complex than a triangle. Here are the chromaticities of a landscape captured by a Nikon D610 with 24-120mm/4:



aa3219a467e14c05a3619b1877a664b1.jpg.png

I never explored its full gamut, but you can find some more info in the articles around here.


Jack
 
I believe some things that may or may not be true. Maybe somebody can steer me in the right direction.

1) My monitor has red, green and blue sources, making up a pixel. The location of those sources on the CIE color space forms a triangle, and there's no way to create colors outside that triangle.
I think you're not talking about colors, but about chromaticity. Color is a three-dimensional phenomenon.
2) My printer has a bunch of ink tanks. They define more of a blob in the CIE space, but the basic principle remains the same. Can't generate colors outside that blob.
Printers don't use additive color. To a zeroth-order approximation, they use subtractive color, but it's really much more complicated than that.
3) My camera has a Bayer filter with red, green and blue filters. Same rule applies? Can you get outside the triangle?
Not at all. The human cone spectral sensitivities allow the construction of all colors from three spectral responses.
I can think of the camera as "measurements behind filters", if that changes anything. What I don't understand is why nobody talks about the color gamut the camera is capable of reproducing.
I've done a fair amount with that. But it depends on the algorithms used to assign color to camera responses.
Unless there's something very special about the filters, how can the camera get much beyond Adobe RGB? Why in the world would anybody use ProPhoto RGB color space, since most of it would be wasted?

Extra thought- do those filters fade over time?
I imagine that they do, but I've never measured that.
 
3) My camera has a Bayer filter with red, green and blue filters. Same rule applies? Can you get outside the triangle?
When we think of a camera’s color filter array, it is tempting to imagine that the three filters (red, green, and blue) define a kind of triangular chromaticity gamut, much like the primaries of a display. If that were true, every color the camera could record, plotted in xy or u'v', would lie within that triangle, bounded by the pure responses of the filters. But cameras do not work that way.

The three filters overlap strongly, and their shapes bear little resemblance to the idealized primaries used in human colorimetry. They are not orthogonal, and they do not form a clean basis for the space of visible spectra. When the camera captures a scene, each channel measures a weighted integral of the scene’s spectrum, and those weights are broad and interdependent. The result is that many different spectra can produce the same triplet of camera responses, and conversely, some mathematically valid triplets have no corresponding real spectrum at all.
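
One way to make the "many spectra, same triplet" point concrete is to construct a second spectrum that differs from the first only in a direction the filters cannot see. This is a toy sketch with made-up Gaussian filter curves, not measured CFA data, and the perturbed spectrum is only a linear-algebra illustration (it is not guaranteed to stay non-negative):

```python
import numpy as np

wl = np.linspace(400, 700, 31)                    # wavelengths, nm

def band(center, width):                          # broad, overlapping "filter" curves
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

F = np.stack([band(600, 50), band(540, 50), band(460, 50)])   # 3 x 31 sensitivities

spec1 = band(550, 80)                             # some scene spectrum
_, _, Vt = np.linalg.svd(F)
hidden = Vt[-1]                                   # a direction all three filters ignore
spec2 = spec1 + 0.2 * hidden                      # a different spectrum...

print(F @ spec1)                                  # ...that produces the same three
print(F @ spec2)                                  # camera responses (a metamer pair)
```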

To make a camera’s colors resemble human perception, manufacturers apply a 3×3 matrix that transforms the raw sensor signals into a standard color space such as sRGB or Adobe RGB. The coefficients of that matrix are often negative, because the only way to make the camera behave approximately colorimetrically over a range of spectra is to allow subtraction as well as addition among the channels. When this transformation is applied, it can easily push some colors outside the triangular gamut that would be implied by the filters themselves.
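
As a rough numeric illustration, here is a made-up forward matrix of the typical shape (strong diagonal, negative off-diagonal terms); it is not any manufacturer's actual profile. Applying it to a perfectly legal, non-negative raw triple can produce negative output channels, i.e. a color outside the destination gamut:

```python
import numpy as np

# Invented camera-raw -> linear sRGB "forward matrix", typical in shape only.
M = np.array([
    [ 1.80, -0.60, -0.20],
    [-0.30,  1.55, -0.25],
    [-0.05, -0.45,  1.50],
])

raw = np.array([0.05, 0.10, 0.90])   # a saturated bluish raw response, all non-negative
srgb_linear = M @ raw
print(srgb_linear)                   # first two channels come out negative: this color
                                     # lies outside the sRGB triangle
```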

In other words, the apparent gamut of a camera is not limited by its filters in any simple geometric way. The transformation to a standard color space expands it beyond the physical mixing space of the filters, allowing the camera to represent colors that could not be produced by any positive combination of its own primaries.

--
https://blog.kasson.com
 
About the CFA you asked "Extra thought- do those filters fade over time?"

At one point I had a reason to read through the patent literature on color filter arrays. Fading of the CFAs is a major concern, and many chemicals that would otherwise be desirable are not used because they would fade. Many patents therefore mention that the chemical being patented does not fade, so the implication is that the ones that make it into cameras don't fade. I have not seen any measurements of that, but at least it is nice to know it is not a factor that has been overlooked.

Your general idea that you "can't make a color outside the blob" is correct, although your wording was not precise in the scientific sense, where words like color and chromaticity have narrow definitions that differ from lay usage. I would just add that, if you read lots of posts on the various forums here on dpreview, you will see the average person is 100% sure their camera and monitor can produce all possible colors. If you question that belief, you almost always get the response that they have played with the color adjustments on their displays and seen that a huge variety of colors can be produced, which proves their point. There is such a large gap between that kind of thinking and scientific/mathematical thinking that people in the know don't engage with them much. On the other hand, there are some really knowledgeable people on these forums too.
 
I can think of the camera as "measurements behind filters", if that changes anything. What I don't understand is why nobody talks about the color gamut the camera is capable of reproducing. Unless there's something very special about the filters, how can the camera get much beyond Adobe RGB? Why in the world would anybody use ProPhoto RGB color space, since most of it would be wasted?
Not true, depending on the meaning of "use". Many, if not most, people do their editing in the ProPhoto working color space.

The camera sensor+CFA can record wavelengths outside of the CIE 1931 "horseshoe" diagram, as Jack showed above. They are reduced but not eliminated by the UV/IR blocking filter which is normally mounted on the sensor. For example, with a 720 nm filter on the lens, shots with about one second of shutter time or more can collect enough IR to get a passable image out of the camera.
 
2) My printer has a bunch of ink tanks. They define more of a blob in the CIE space, but the basic principle remains the same. Can't generate colors outside that blob.
Printers need different color models than displays. An old standby is Kubelka–Munk. When light hits a layer of paint or ink on paper, some of it is absorbed by the pigments and some is scattered back toward the surface. The Kubelka–Munk model treats that complicated process as if the light travels through the layer in only two directions, downward into the layer and upward back out again, while being partly absorbed and partly scattered at every tiny step.

K–M assumes the layer is perfectly even, thick enough that light becomes diffuse, and that the light entering it is already scattered evenly. Under those simplifications, it turns out you can describe the color of the layer using just two numbers for each colorant:
  • K, its absorption coefficient: how strongly it soaks up light.
  • S, its scattering coefficient: how strongly it bounces light around.
The ratio K/S largely determines how dark or light the color looks. A high ratio means most light is absorbed (a deep color); a low ratio means more is scattered back (a lighter tint).

When you mix two inks or pigments, you can estimate their combined K and S values from the proportions of each ingredient, then compute the resulting reflectance. This makes the model handy for predicting the appearance of mixtures or printed overlays without having to measure each one directly.
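
Here is a small sketch of that recipe under the usual opaque-layer simplification, with made-up K and S values for two hypothetical inks (in practice K and S are measured per wavelength; this uses a single pair of numbers per ink just to show the arithmetic):

```python
import math

def reflectance(K, S):
    """Kubelka-Munk reflectance of an opaque layer from its K and S."""
    ks = K / S
    return 1.0 + ks - math.sqrt(ks * ks + 2.0 * ks)

# Made-up absorption/scattering coefficients for two inks.
ink_a = {"K": 4.0, "S": 1.0}     # strongly absorbing (dark)
ink_b = {"K": 0.5, "S": 2.0}     # weakly absorbing (light)

# Mix 30% of A with 70% of B: K and S combine linearly with concentration.
c_a, c_b = 0.3, 0.7
K_mix = c_a * ink_a["K"] + c_b * ink_b["K"]
S_mix = c_a * ink_a["S"] + c_b * ink_b["S"]

print(reflectance(ink_a["K"], ink_a["S"]))   # dark ink alone
print(reflectance(ink_b["K"], ink_b["S"]))   # light ink alone
print(reflectance(K_mix, S_mix))             # predicted reflectance of the mixture
```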

Despite its simplicity, Kubelka–Munk has held up for decades because it gives reasonably good results for diffuse, matte, and homogeneous coatings: things like paints, textiles, and ordinary printed papers. It begins to fail when the surface is glossy, when the layers are very thin or transparent, or when metallic or pearlescent pigments are involved.

In short: the K–M model is a clever, compact way to describe how colorants absorb and scatter light. It isn’t a complete physical description, but it captures enough of the essential behavior to make practical predictions about color mixing and printing.

--
https://blog.kasson.com
 
My take, which has much in common with the answers already provided, though some nuances differ.

Monitors work with additive color. They emit linear combinations of the three channels with positive (more precisely, non-negative) coefficients. This forms a 3D convex cone in the infinite-dimensional space of possible spectral densities. Normalizing to the same brightness, we are left with convex linear combinations, which fill a triangle. By the way, those possible densities form a convex cone, too.
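
A quick numeric check of that triangle claim, with made-up XYZ values for three primaries: the chromaticity of any non-negative mixture works out to a convex combination of the primaries' chromaticities, so it cannot leave their triangle.

```python
import numpy as np

# Illustrative XYZ tristimulus values for three display primaries (made up).
P = np.array([
    [0.41, 0.21, 0.02],   # "red"
    [0.36, 0.72, 0.12],   # "green"
    [0.18, 0.07, 0.95],   # "blue"
])

def xy(XYZ):
    X, Y, Z = XYZ
    return np.array([X / (X + Y + Z), Y / (X + Y + Z)])

w = np.array([0.2, 0.5, 0.3])            # any non-negative channel drive levels
mix = w @ P                              # additive mixing: XYZ of the combined light

# The mixture's chromaticity equals a weighted average of the primaries'
# chromaticities, with weights w_i * (X_i + Y_i + Z_i): a convex combination,
# hence always inside the triangle spanned by xy(P[0]), xy(P[1]), xy(P[2]).
alphas = w * P.sum(axis=1)
alphas /= alphas.sum()
print(xy(mix))
print(sum(a * xy(p) for a, p in zip(alphas, P)))   # same point
```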

The sensor's CFA records a 3D projection of the infinite-dimensional cone of densities. Ideally, this would be the same projection our eyes perform. As such, it has no gamut. Our eyes do not have a gamut either, or at least that would not be a good way to describe what they do. Of course, those CFAs are not ideal.
 
I believe some things that may or may not be true. Maybe somebody can steer me in the right direction.

3) My camera has a Bayer filter with red, green and blue filters. Same rule applies? Can you get outside the triangle?
Yes. Some standard cameras' responses extend even outside the CIE horseshoe boundary. Folks have been known to put an IR filter on the lens and, with a long, long exposure, capture an actual IR image ...
 
