Color Appearance Model versus Normal Post-Processing?

I think the unique thing about HSV is that its gamut is identical to the gamut of RGB. This is critical if you want to be able to use curves and similar editing tools on the luminance/lightness/whatever channel without ever altering the colour.

All of the other commonly used colour models (HSL, LAB, CIE..., etc.) have larger gamuts than RGB. This means that a pixel may go out of the RGB gamut after editing, and it then becomes impossible to convert it back to RGB without losing something, often the colour itself.
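If it helps to see the claim concretely, here is a small sketch using Python's standard colorsys module (the starting colour and the 2x V edit are arbitrary choices): because HSV is just a re-parameterisation of the RGB cube, editing only the V channel and converting back always lands inside RGB, with hue and saturation untouched.

```python
# HSV is an invertible remapping of the RGB cube, so a V-only edit can
# never leave the RGB gamut, and hue/saturation survive the round trip.
import colorsys

r, g, b = 0.3, 0.3, 0.8                  # an arbitrary RGB colour in [0, 1]
h, s, v = colorsys.rgb_to_hsv(r, g, b)

v_edited = min(1.0, v * 2)               # a "curve" applied to V alone
r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v_edited)

h2, s2, v2 = colorsys.rgb_to_hsv(r2, g2, b2)
print((r2, g2, b2))                      # still inside the RGB cube
print(abs(h - h2), abs(s - s2))          # hue and saturation unchanged
```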
I think I understand, but which RGB are we talking about? For example: sRGB, Adobe RGB (1998), ProPhoto, CIE RGB, Adobe Wide ...

I'm probably confused because I don't normally think of a color model as having a gamut.

I do realize that editing something in a wide color space e.g. ProPhoto can leave one with colors outside a narrower e.g. sRGB which will get altered or even clipped when saved as the narrower color space.
There are color models that are coupled to RGB color spaces, such as HSL and HSV. Those inherit the gamut of the spaces they use.

There are color models that are tied to the human visual system in some way, like CIE 1976 L*a*b*. The space covered by Lab exceeds the range of visible colors. But if by gamut you mean color gamut, then Lab can represent all visible colors.
It's not the colour that is the problem, but the brightness.

For example, suppose R, G and B values range from 0 to 100. Then a colour such as (30,30,80) may have a luminance value of, say, 50 (in whatever new colour space we are converting to), where luminance may range from 0 to 100. If you double the luminance of that pixel, it gets a luminance of 100, which is just within the gamut of the new colour space, but when you try to convert back to RGB you get (60,60,160), which is not possible and gets clipped to (60,60,100), which is a different colour.

I hope that makes sense.
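The arithmetic above can be sketched in a few lines (the uniform 2x scaling stands in for whatever the luminance edit implies for the RGB channels; exact numbers depend on the colour space):

```python
# Doubling "luminance" pushes one channel past the 0..100 ceiling; clamping
# it back changes the ratio between channels, i.e. the colour itself.
def clamp(c, lo=0, hi=100):
    return max(lo, min(hi, c))

before = (30, 30, 80)
doubled = tuple(2 * c for c in before)      # (60, 60, 160): blue is out of range
after = tuple(clamp(c) for c in doubled)    # clipped to (60, 60, 100)

ratio = lambda rgb: tuple(c / max(rgb) for c in rgb)
print(doubled, after)
print(ratio(before), ratio(after))          # channel balance has shifted
```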
This is typically only an issue if your working space is sRGB (display) and the image profile space is larger (e.g. Adobe RGB or ProPhoto RGB).

You can't exceed the gamut of the profile space (it won't let you). But if you are using relative colorimetric rendering, it may clip out-of-gamut colours in the working space.

Changing to perceptual rendering will fix that issue, but more saturated colours in the working space won't match those in the profile. However, if I am outputting for the web, my target is sRGB, so I edit in sRGB (working) and use perceptual rendering. WYSIWYG.

My profile space is usually ProPhoto RGB, more for future-proofing than anything else. My screen will support sRGB, Adobe RGB and most of DCI-P3, so I have lots of working space options too.
You have completely missed the point.

Suppose you work in LAB space and apply tools like Curves or Levels to change the tone curve in the L* channel, so that hopefully you preserve hue and saturation, only changing the luminance.

This is fine in theory, but in practice you may change the L* value of a pixel such that it is still less than 100 (so still in gamut in Lab space), yet when you convert that pixel back to RGB (the underlying colour profile is irrelevant), you find that one of the RGB values exceeds 100% and so cannot be represented without clipping.
And if one looks at a color gamut in 3-D CIELAB, it can be seen that the gamut for a particular value of lightness L* shrinks both toward 100 and toward 0. If the a* and b* values are not changed, they can easily go out of gamut, as you say.
There's a big difference between using Lab mode in an editor (which is constrained to the image profile) and the size of the CIELab space, which is effectively infinite.
I use ColorThink to view the gamut of images and/or color spaces.
I don't know how RT does it though.
Me neither.
But out of gamut and desaturated are not the same thing.
I never said nor intended to imply that they were the same thing.
OK, clearly, Lab mode works differently in PS. If I increase L* and then review the a* and b* values after completing the change, they get smaller; in other words, they just become desaturated. So it's not CIE Lab, it's the profile space mapped to Lab values.
OK, thanks, that'll do me for now; I'll ponder it some more and maybe play a little more too. There do seem to be a few ways to skin the LAB cat ...
I guess it's a matter of terminological interpretation. You can define (100,100,100) in Lab space.
Is that what Adobe does? I personally follow the convention most favored in the literature: 0 to 100 for L*, -128 to +127 for a* and b*.
Fine (for an 8-bit space).

But 100,100,100 is well within those boundaries... it just doesn't exist as a colour.

So, when I try to specify it in Lab Mode (which is just RGB remapped), it will reproduce the nearest RGB equivalent, often seemingly at random, and change a* and b* accordingly to fit within gamut.
But it's outside the human colour gamut, not just the screen gamut or the RGB gamut. It's only a virtual colour, like X, Y and Z...
Obviously, which is why I often claim that "spaces" like CIELAB, etc, have no color gamut.
True, Lab space, like XYZ, can define virtual colours. That's necessary because no set of three real primaries can reproduce every visible colour with non-negative values.

It's the shape of the 3-D XYZ space mapped to 3-D Lab space, which is black at the bottom and white at the top... ;-)
To me, clipping a colour means something else, i.e. a visible colour that's outside the gamut of the colour space I'm using, so what was a gradient of red hues in Adobe RGB, or in the raw file, becomes a flat red monotone in sRGB, or CMYK.
Agreed - I call that gamut-clipping.
Sorry for the confusion.
No problem. As you say, different terminology, where many folks follow the Elephant in the Room ... ;-)
Definitely different!

In RawTherapee, I increased L* from 33% to 53% and the a* and b* changed a little but not significantly:

a* from 27 to 25

b* from 60 to 58
How fast they change depends on the point at which you exceed the maximum L* for a given primary, and on how far you move it. If you exceed 61 for a red hue, the saturation will fall off much faster.

The threshold (the point where RGB 255 0 0 becomes RGB 255 1 1) depends on the hue.

RGB 255 0 0 = L61 a127 b105

RGB 255 100 100 = L71 a101 b36
I begin to understand, re the maximum L* for (255, g, b).

Bruce's CIE Calculator shows it very clearly for each of the many RGB spaces he covers.

Thanks!
I'm not sure how setting 'clip' to 'off' helps if the result is not a visible colour. I can define all kinds of virtual colours in Lab.
It certainly is a visible color. It just requires an extra-bright red sRGB primary.
The penny dropped for me when I turned the RGB cube onto its black corner, with the white point at the apex. Then every RGB triplet at any vertical level in the cube adds up to the same number: 0 at the bottom, 127*3 in the middle, and 255*3 at the top.

The upended cube is therefore small at the bottom, expands in the middle, and contracts at the top as colours desaturate, just like Lab.

Lab is just a wonky version of the same cube adjusted for perceptual consistency.
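The picture above can be checked with a short count (a coarse 0..10 cube stands in for 0..255 just to keep the numbers small): group RGB triplets by their R+G+B "height" and the slice sizes expand from a single point at black, peak at mid-grey, and contract back to a single point at white, the same silhouette as Lab.

```python
# Stand the RGB cube on its black corner: each horizontal slice is the set
# of triplets sharing one R+G+B total. Slice sizes trace the cube's bulge.
from collections import Counter
from itertools import product

N = 10                                   # coarse stand-in for 0..255
slices = Counter(r + g + b for r, g, b in product(range(N + 1), repeat=3))

for s in (0, N // 2, 3 * N // 2, 2 * N, 3 * N):
    print(s, slices[s])                  # 1 at the ends, widest in the middle
```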
Can't argue with Bruce ... he da man!!
 
This is typically only an issue if your working space is sRGB (display) and the image profile space is larger (eg Adobe RGB or ProPhoto RGB).

You can't exceed the gamut of the profile space (it won't let you). But if you are using relative colorimetric rendering, it may clip out-of-gamut colours in the working space.

Changing to perceptual rendering will fix that issue, but more saturated colours in the working space won't match those in the profile. However, if I am outputting for the web, my target is sRGB, so I edit in sRGB (working) and use perceptual rendering. WYSIWYG.

My profile space is usually ProPhotoRGB, more for future proofing than anything else. My screen will support sRGB, Adobe RGB and most of DCI-P3, so I have lots of working space options too.
You have completely missed the point.

Suppose you work in LAB space and apply tools like Curves or Levels to change the tone curve in the L* channel, so that hopefully you preserve hue and saturation, only changing the luminance.

This is fine in theory, but in practice you may change the L* value of a pixel so that it is still less than 100% (still in gamut in LAB space), yet when you convert that pixel back to RGB space (the underlying colour profile is irrelevant), you find that one of the RGB values exceeds 100% and so cannot be represented without clipping.
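The scenario above can be sketched numerically. This is only an illustration: it assumes the RGB space is sRGB and that "LAB" means CIELAB against the D65 white point, using the standard published matrices.

```python
# Raise L* on an in-gamut blue, convert back, and one channel overshoots
# 1.0 even though L* never reached 100.

D65 = (0.95047, 1.00000, 1.08883)
RGB_TO_XYZ = [(0.4124564, 0.3575761, 0.1804375),
              (0.2126729, 0.7151522, 0.0721750),
              (0.0193339, 0.1191920, 0.9503041)]
XYZ_TO_RGB = [( 3.2404542, -1.5371385, -0.4985314),
              (-0.9692660,  1.8760108,  0.0415560),
              ( 0.0556434, -0.2040259,  1.0572252)]

def srgb_to_lab(rgb):
    # Gamma-decode, then RGB -> XYZ -> CIELAB.
    lin = [c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
           for c in rgb]
    xyz = [sum(m * c for m, c in zip(row, lin)) for row in RGB_TO_XYZ]
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = (f(v / w) for v, w in zip(xyz, D65))
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def lab_to_srgb(L, a, b):
    def f_inv(t):
        return t ** 3 if t > 6 / 29 else 3 * (6 / 29) ** 2 * (t - 4 / 29)
    fy = (L + 16) / 116
    xyz = [f_inv(v) * w
           for v, w in zip((fy + a / 500, fy, fy - b / 200), D65)]
    lin = [sum(m * c for m, c in zip(row, xyz)) for row in XYZ_TO_RGB]
    # Deliberately no clipping: out-of-gamut values are left visible.
    return [12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
            for c in lin]

L, a, b = srgb_to_lab((0.30, 0.30, 0.80))       # an in-gamut blue, L* ~ 40
brighter = lab_to_srgb(min(L + 30, 100), a, b)  # L* ~ 70: still below 100
# ...but max(brighter) now exceeds 1.0, i.e. unrepresentable in sRGB.
```

The blue channel comes back above full scale, so any real editor must clip or desaturate at this point.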
and if one looks at a color gamut in 3D CIELAB it can be seen that the gamut for a particular value of lightness L* reduces going both toward 100 and toward 0. If the a* and the b* values are not changed, they can easily go out-of-gamut, as you say.
There's a big difference between using Lab mode in an editor (which is constrained to the image profile) and the size of the CIELab space, which is effectively infinite.
I use ColorThink to view the gamut of images and/or color spaces.
I don't know how RT does it though.
Me neither.
But out of gamut and desaturated are not the same thing.
I never said nor intended to imply that they were the same thing.
OK, clearly, LAB mode works differently in PS. If I increase L and then review the a* and b* values after completing the change, they get smaller. In other words, they just become desaturated: it's not CIE Lab, it's the profile space mapped to Lab values.
Definitely different!

In RawTherapee, I increased L* from 33% to 53% and the a* and b* changed a little but not significantly:

a* from 27 to 25

b* from 60 to 58
How fast they change depends on the point when you exceed the maximum L* for a given primary and how far you move it. If you exceed 61 for a red hue, the saturation will fall off much faster.

It depends where the threshold is (when RGB 255 0 0 --> RGB 255 1 1). This will depend on the hue.

RGB 255 0 0 = L61 a127 b105

RGB 255 100 100 = L71 a101 b36
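For what it's worth, where the L* ceiling for a pure primary sits depends on which RGB space is in play. Here is a quick sanity check for sRGB (D65 white, Rec. 709 luminance weights); a wider space such as ProPhoto RGB puts pure red nearer L* 61, which may be where the figure above comes from.

```python
# L* of a colour, assuming sRGB primaries and the D65 white point.

def l_star(r, g, b):
    def lin(c):  # sRGB gamma decode
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    # Relative luminance Y from linear sRGB (Rec. 709 weights).
    y = 0.2126729 * lin(r) + 0.7151522 * lin(g) + 0.0721750 * lin(b)
    f = y ** (1 / 3) if y > (6 / 29) ** 3 else y / (3 * (6 / 29) ** 2) + 4 / 29
    return 116 * f - 16

# Pure sRGB red lands near L* = 53; white is 100 and black is 0.
```

Push L* past that ceiling while holding a* and b* and the red channel has nowhere to go but past full scale.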
I begin to understand, re: the max L* for 255,g,b.

Bruce's CIE Calculator shows it very clearly for each of the many RGB spaces he covers.

Thanks!
The penny dropped for me when I turned the RGB cube on its black corner with the white point at the apex. Then, every RGB triplet at any vertical level in the cube adds up to the same number: 0 at the bottom, roughly 127*3 in the middle, and 255*3 at the top.

The upended cube is therefore small at the bottom, expands in the middle, and contracts at the top as colours desaturate, just like Lab.

Lab is just a wonky version of the same cube adjusted for perceptual consistency.
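The tilted-cube picture fits in two lines of code: a triplet's height above the black corner is its projection onto the grey diagonal (1,1,1), i.e. proportional to R + G + B.

```python
import math

def diagonal_height(r, g, b):
    # Projection of (r, g, b) onto the unit grey axis (1,1,1)/sqrt(3).
    return (r + g + b) / math.sqrt(3)

# The three pure primaries (and any permutation of a triplet) sit at the
# same level; white sits three times higher than a single primary.
```

Equal-sum triplets landing at the same height is exactly the "every RGB triplet at any vertical level adds up to the same number" observation.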
Especially as Bruce doesn't agree that "RGB 255 0 0 = L61 a127 b105"

nor that "RGB 255 100 100 = L71 a101 b36"

We must be talking different RGBs but let's never specify those, eh?
 
It certainly is a visible color. It just requires an extra-bright red sRGB primary.
OK, neat trick. But if you converted that to XYZ then to sRGB, it would simply scale to a displayable value - eg. 1, 0.33, 0.12 (or RGB 255 84 31 or Lab 61 64 64 )



--
"A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away." Antoine de Saint-Exupery
 
Especially as Bruce doesn't agree that "RGB 255 0 0 = L61 a127 b105"
My bad, I was using ProPhotoRGB as a profile space and looking at it on sRGB. I thought I had set the file to sRGB.

But you get the idea.
 
OK, neat trick. But if you converted that to XYZ then to sRGB, it would simply scale to a displayable value - eg. 1, 0.33, 0.12 (or RGB 255 84 31 or Lab 61 64 64 )
Nope.



503ae92f7d814012989fc6be96272b3f.jpg.png
 
Nope.

503ae92f7d814012989fc6be96272b3f.jpg.png
Throw me a bone here. If the result is undisplayable in any RGB space, but it's still a visible colour, what exactly is the point?

Every XYZ space I have seen has coordinates from 0 to 0.8 (x) and 0 to 0.9 (y) so I don't even know what those XYZ coordinates refer to.

If I use the colour picker in Photoshop and ask for Lab 100 100 100 and press enter, it gives me RGB 255 146 53 and resets Lab to 72 37 64, so it is clearly keeping it within gamut. But where does this number come from?

 
The discussion in this thread has highlighted the problems with changing luminance in colour models that use luminance (or something similar) as one channel. For any colours other than shades of grey, if the luminance is raised to 100%, then the corresponding RGB colour is invalid (one of the colour channels exceeds 100%).

An interesting practical question is: what actually happens if you do this in a photo editor?

I am only familiar with GIMP, so I have tried it there.

Take this image:



67fb12c6755d4407a22ded6d3fd7dadb.jpg

If I create a white layer above the image layer and set the mode of the white layer to "Luminance", this should take the luminance from the top layer and the hue and saturation from the original image. The top layer is white, which has a luminance of 100%. White is the only colour with a luminance of 100%, so the resultant image should be all white. It is as shown below, which is clearly not all white! So, from a purely mathematical point of view, this layer mode in GIMP has been incorrectly implemented. I wonder what Photoshop does?



93c84b7bf2094d84bb1a484377099316.jpg

If, instead, I set the mode of the white top layer to "LCh Lightness", I would have thought that should also increase the lightness to 100% everywhere. However, the result is shown below. Again, it is mathematically incorrect!



b344fbfe95ab4b498d5241ce2a012f2c.jpg

If I set the top layer's mode to "HSV Value", the result should be V = 100% everywhere, which it is:



d633aa81c5cf4c53920d2e04b1b0172e.jpg

I nearly always use HSV because it has a precise mathematical definition and the operations in HSV produce exactly the results I expect.

Unfortunately, the other colour models, in practice, do not produce results that always agree with the mathematics. I strongly dislike this lack of mathematical precision.
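The "white layer in Luminance mode" experiment reduces to arithmetic. If luminance is a weighted sum of the channels, and preserving hue and saturation means keeping the channel ratios fixed, then forcing Y to 100% leaves only a uniform scale, and every non-grey colour overflows. A sketch, assuming Rec. 709 luma weights (GIMP's actual definition of "Luminance" may differ):

```python
def set_luminance(rgb, target_y):
    """Scale a colour to a target luminance while keeping the channel
    ratios (hue and saturation) fixed. No clipping, so overflow shows."""
    y = 0.2126 * rgb[0] + 0.7152 * rgb[1] + 0.0722 * rgb[2]
    return tuple(c * target_y / y for c in rgb)

def set_value(rgb, target_v):
    """The HSV counterpart: V is max(R, G, B), so scaling to a
    target_v <= 1 can never push any channel past full scale."""
    return tuple(c * target_v / max(rgb) for c in rgb)

grey = set_luminance((0.5, 0.5, 0.5), 1.0)    # -> (1, 1, 1): fine
orange = set_luminance((1.0, 0.5, 0.0), 1.0)  # red channel overflows 1.0
hsv_ok = set_value((1.0, 0.5, 0.0), 1.0)      # stays exactly in gamut
```

This is why the HSV Value experiment behaves exactly as expected while a mathematically faithful Luminance mode cannot, short of clipping or desaturating.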
 
Equally unfortunately, although I am running GIMP 2.10.8, the Help for the Modes only lists the original 21 varieties, so I can't find the formulae for the modes above.
I strongly dislike this lack of mathematical precision.
Me too.

 
If I create a white layer above the image layer and set the mode of the white layer to "Luminance", this should take the luminance from the top layer and the hue and saturation from the original image. The top layer is white, which has a luminance of 100%. White is the only colour with a luminance of 100%, so the resultant image should be all white. It is as shown below, which is clearly not all white! So, from a purely mathematical point of view, this layer mode in GIMP has been incorrectly implemented. I wonder what Photoshop does?
Photoshop produces a completely white image, as expected. So, bug in GIMP...
 
Photoshop produces a completely white image, as expected. So, bug in GIMP...
Excellent.

What about the other modes?
 
Photoshop produces a completely white image, as expected. So, bug in GIMP...
Some departures from the underlying math may not be inadvertent, but intentional. I'm sure that the way Adobe handled (still handles?) color space changes between ProPhoto RGB and Adobe RGB with the absolute colorimetric intent chosen -- changing the white point to match the new space's white point -- was a conscious decision.
 
Throw me a bone here. If the result is undisplayable in any RGB space, but it's still a visible colour, what exactly is the point?
There are storage or working color spaces that have non-physical primaries (look at ProPhoto RGB). There are some that allow negative component values, or values that exceed nominal full scale. There's no reason why working spaces shouldn't have colors that are not displayable on the editor's monitor -- we've been dealing with that situation for years. Other than performance, there's no reason why values above nominal full scale and below zero shouldn't be allowed. When I'm writing image processing code, I allow values above nominal full scale and below zero, and I use floating-point numbers so I don't have to worry about how far outside the nominal bounds the signal gets during manipulation. Clipping at an intermediate stage, in my experience, almost always produces undesirable effects; better to clip once, when mapping to the output device at the end.
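That working style can be sketched in a few lines (the function names and the exposure operation are illustrative, not any particular product's pipeline):

```python
# Keep pixels in float, let intermediates roam outside [0, 1], and clip
# exactly once when mapping to the output device.

def adjust_exposure(pixels, stops):
    gain = 2.0 ** stops
    return [tuple(c * gain for c in px) for px in pixels]

def to_output(pixels):
    # The single, final clip to the output device's range.
    return [tuple(min(1.0, max(0.0, c)) for c in px) for px in pixels]

img = [(0.4, 0.1, 0.7)]
boosted = adjust_exposure(img, 1.5)      # channels exceed 1.0 -- and survive
undone = adjust_exposure(boosted, -1.5)  # nothing was lost in between
final = to_output(boosted)               # clipped only at the very end
```

Had `adjust_exposure` clipped internally, the second call could not have recovered the original values.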
Every XYZ space I have seen has coordinates from 0 to 0.8 (x) and 0 to 0.9 (y) so I don't even know what those XYZ coordinates refer to.
The coordinates that you know, times 100. Note the Y value above. It's 100. That's not an accident.
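Both points can be checked numerically. This assumes CIELAB against the D65 white point scaled so that white has Y = 100 (the convention the calculator screenshot appears to use), plus the standard XYZ-to-linear-sRGB matrix:

```python
# Lab (100, 100, 100) -> XYZ (0-100 scale) -> linear sRGB, no clipping.

WHITE = (95.047, 100.0, 108.883)          # D65, scaled by 100
XYZ_TO_SRGB = [( 3.2404542, -1.5371385, -0.4985314),
               (-0.9692660,  1.8760108,  0.0415560),
               ( 0.0556434, -0.2040259,  1.0572252)]

def lab_to_xyz(L, a, b):
    def f_inv(t):
        return t ** 3 if t > 6 / 29 else 3 * (6 / 29) ** 2 * (t - 4 / 29)
    fy = (L + 16) / 116
    return [f_inv(v) * w
            for v, w in zip((fy + a / 500, fy, fy - b / 200), WHITE)]

xyz = lab_to_xyz(100, 100, 100)           # Y comes out at exactly 100
lin = [sum(m * c / 100 for m, c in zip(row, xyz)) for row in XYZ_TO_SRGB]
# lin[0] (red) lands around 3.7: a definite, visible colour, but one that
# needs a red primary several times brighter than sRGB's.
```

Green and blue stay inside [0, 1]; only the red channel is off the scale, which is the "extra-bright red primary" in a nutshell.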
If I use the colour picker in Photoshop and ask for Lab 100 100 100 and press enter, it gives me RGB 255 146 53 and resets Lab to 72 37 64, so it is clearly keeping it within gamut. But where does this number come from?
I am discussing the general characteristics of these models, not the details of a specific implementation.
