AdobeRGB question

Technically it is not possible to encode all colors that a human can see with just RGB.
Depends on how RGB primaries are selected.

"Wright and Guild", "CIE Standard Observer", "colour-matching functions" are possible search terms to study this particular aspect.
For example, how do you reproduce the effect of a short-wavelength blue light that excites both the blue-sensing and the red-sensing cones in our eyes with just three primary colors? Could you create sensors in a camera that correspond to the way our eyes work?
 
Technically it is not possible to encode all colors that a human can see with just RGB.
Depends on how RGB primaries are selected.

"Wright and Guild", "CIE Standard Observer", "colour-matching functions" are possible search terms to study this particular aspect.
For example, how do you reproduce the effect of a short-wavelength blue light that excites both the blue-sensing and the red-sensing cones in our eyes with just three primary colors?
You cheat. You do not reproduce the monochromatic source, you reproduce the response. You can get the same response with many different combinations of three primary colors.
Could you create sensors in a camera that correspond to the way our eyes work?
Google the "Luther-Ives condition." You could, but it is not clear that this is the ultimate goal.
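The "reproduce the response, not the spectrum" point can be sketched numerically. Everything below is hypothetical (Gaussian stand-ins for cone sensitivities, made-up narrowband primaries; none of this is real CIE data), but it shows the mechanism: solve a 3x3 linear system so that three primaries produce the same three cone signals as a broadband light.

```python
import numpy as np

# Hypothetical Gaussian stand-ins for cone sensitivities (NOT real
# CIE data), sampled across the visible range in nm.
wl = np.arange(400, 701, 5, dtype=float)

def gauss(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

S = np.stack([gauss(570, 50),    # "red"   (L) cone
              gauss(540, 45),    # "green" (M) cone
              gauss(445, 30)])   # "blue"  (S) cone

# An arbitrary broadband test light and the 3 cone signals it evokes.
target_spectrum = gauss(500, 60)
target_response = S @ target_spectrum

# Three made-up narrowband display primaries (one per column).
P = np.stack([gauss(630, 10), gauss(532, 10), gauss(465, 10)], axis=1)

# Solve a 3x3 system for primary intensities that evoke the SAME
# cone response, even though the emitted spectrum is very different.
weights = np.linalg.solve(S @ P, target_response)
match_response = S @ (P @ weights)

print(np.allclose(match_response, target_response))  # True: a metamer
```

Two physically different spectra, one identical response: that is the cheat every RGB display relies on.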
 
I kind of bought into that idea, and now I always import my RAWs as ProPhoto RGB hoping this will give more dynamic range for editing; later I export them as sRGB for putting on the internet.
Okay, go ahead. But if your ProPhoto-edited images contain colors that can't be represented in sRGB, what do you think will happen when you convert to sRGB?
Here is how I understand it; please correct me if I am wrong.

If my monitor is already set up with an sRGB color profile (in Windows color management I have the sRGB ICC profile selected), then what I am seeing on my monitor when editing ProPhoto RGB inside Photoshop (or any color-managed program) has already been converted to the sRGB color space before it hits my monitor.

Therefore, when I edit an image in any color space I am already seeing the image in sRGB, so there will not be much difference, if any, after I export it from ProPhoto RGB to sRGB.

Yet I am still taking advantage of the wider color range of ProPhoto RGB when editing the image, e.g. a Photoshop selection tool like the Magic Wand will work more precisely because the RAW image was imported into a wider color space (ProPhoto RGB).
I can't disagree with what you're saying, but my question still stands.

Another question: If you have tried editing the same images in the same way in both sRGB and ProPhoto, how often do you see a different (and presumably better) result, and if the result is better, does the improvement carry through to your final sRGB conversion?
I think you're confusing wider color gamut with wider dynamic range ...
Yes, I did not give much thought to it.
Did you know that you can edit in 16-bit and 32-bit with sRGB? But if you're happy doing what you do and it works out, I'm happy.
I don't see the advantage of limiting the color gamut to sRGB when importing a RAW image into Photoshop. Having more color data to work with is better, in my opinion.
You don't have more color data in ProPhoto - you have different color data. The advantage of sRGB for me is simplicity. I use the sRGB color space for everything, so I never have to convert for any purpose. The possible disadvantage is that the colors in some of my images can be less than ideal, but that's not a concern to me personally.
 
You don't have more color data in ProPhoto - you have different color data.
Well you get more if you use ProPhoto + 16-bit RGB channels.
The advantage of sRGB for me is simplicity. I use the sRGB color space for everything, so I never have to convert for any purpose. The possible disadvantage is that the colors in some of my images can be less than ideal, but that's not a concern to me personally.
16-bit ProPhoto is definitely better when, say, exporting from Lightroom to Photoshop and back (yes, it's 'better' mostly because of the 16 bits, not ProPhoto).

Exporting the final image is another story.
 
You don't have more color data in ProPhoto - you have different color data.
Well you get more if you use ProPhoto + 16-bit RGB channels.
Do you get any more that way than you get if you use sRGB + 16-bit RGB channels?
The advantage of sRGB for me is simplicity. I use the sRGB color space for everything, so I never have to convert for any purpose. The possible disadvantage is that the colors in some of my images can be less than ideal, but that's not a concern to me personally.
16-bit ProPhoto is definitely better when say exporting from Lightroom to Photoshop and back (yes it's 'better' mostly because of 16 bits, not ProPhoto).
As I stated earlier, one can process in 16-bit or 32-bit with sRGB. Depends on what the software offers, I suppose, but I doubt that any of the better tools are limited to 8-bit.

Exporting the final image is another story.
Yes, it's a separate consideration.
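Bit depth and color space are independent settings. A rough, self-contained sketch (not any real editor's pipeline) of why repeated edits lose more at 8-bit than at 16-bit, regardless of gamut:

```python
import numpy as np

# Rough model: apply a darkening edit and undo it repeatedly,
# quantizing to the channel's bit depth at every step. Not any real
# editor's pipeline; it just shows rounding loss vs. bit depth.
rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=100_000).astype(np.float64)

def round_trip(values, levels, gain=0.5, passes=10):
    v = values / 255.0
    for _ in range(passes):
        v = np.round(v * gain * (levels - 1)) / (levels - 1)   # darken, quantize
        v = np.round(v / gain * (levels - 1)) / (levels - 1)   # undo, quantize
        v = np.clip(v, 0.0, 1.0)
    return v * 255.0

err8  = np.abs(round_trip(original, 2**8)  - original).mean()
err16 = np.abs(round_trip(original, 2**16) - original).mean()
print(err8, err16)   # the 8-bit error is far larger
```

The extra levels at 16-bit absorb the rounding of each intermediate step, which is the real reason a 16-bit round trip between editors holds up better.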
 
Technically it is not possible to encode all colors that a human can see with just RGB.
Depends on how RGB primaries are selected.

"Wright and Guild", "CIE Standard Observer", "colour-matching functions" are possible search terms to study this particular aspect.
For example how do you reproduce the effect of a short wavelength blue light that excites both blue sensing cones and the red sensing cones in eyes with just three primary colors?
You have to stop thinking in absolute terms about colour because we simply do not see it in that way.

You can't attribute a "blue" that we see to a specific wavelength, or relate colour as we understand it directly to the visible scale of wavelengths. The blue that you see in the real world is ALWAYS a combination of wavelengths, NEVER a single wavelength, and is largely corrected for colour casts due to the colour temperature of the ambient light.

Also, we call the receptors RGB as simple labels, but that doesn't represent their true spectral sensitivity, which is much broader; the "blue" receptors actually detect some red.
Could you create sensors in a camera that correspond to the way our eyes work?
That is what the system tries to do: reproduce the colour we see. So in asking how we reproduce the short wavelengths at the blue end, you must first ask how we see them in the real world.
 
Another question: If you have tried editing the same images in the same way in both sRGB and ProPhoto, how often do you see a different (and presumably better) result, and if the result is better, does the improvement carry through to your final sRGB conversion?
I did not compare; when I started editing I learned that it's better to use ProPhoto RGB, and I have always done just that when importing RAWs.
You don't have more color data in ProPhoto - you have different color data. The advantage of sRGB for me is simplicity. I use the sRGB color space for everything, so I never have to convert for any purpose. The possible disadvantage is that the colors in some of my images can be less than ideal, but that's not a concern to me personally.
Conversion is a real headache. I need to read more and understand wide color gamut vs. bitrate before I can decide whether to keep using ProPhoto RGB or use sRGB.
 
Conversion is a real headache. I need to read more and understand about wide color gamut vs bitrate before i can make decision to keep using ProPhoto RGB or use sRGB.
Sure, you could do more reading about it. But the term in this case is bit depth, not bitrate.
 
Another question: If you have tried editing the same images in the same way in both sRGB and ProPhoto, how often do you see a different (and presumably better) result, and if the result is better, does the improvement carry through to your final sRGB conversion?
The order in which conversion and editing occur can make a visible difference for colorful images that experience channel clipping somewhere along the way. How visible that is, and how easy it is to mitigate or avoid, varies greatly (image particulars and your processing skills are important variables here), but you're pretty much always assured of doing the least amount of damage by sticking with the generally recommended workflow of working in ProPhoto and postponing your conversion to sRGB until the end. Of course, if you're outputting to a display that supports colors outside of sRGB (or to multiple output formats, such as print + online display), then the case for a wider working color space is really not arguable.

Below is an example I put together quickly to illustrate the foregoing. The outfit worn by the baby was a neutral red and the bowl a bright yellow. The scene was lit by shaded sunlight through a large wall of windows.

The raw processing was done in ACR and the subsequent edits in PS. What's shown are screen grabs from an (old) iMac whose display gamut is very close to sRGB. Descriptions of the specific editing/sequencing for the three renderings are as follows:
  • Left = In ACR, Exposure adjusted to -0.50 and Highlights to -100; all other settings at default. Some highlight-clipping warnings in the red channel for the lightest part of the bowl and bow were displayed when conversion to sRGB was the setting. After opening in PS from ACR (with sRGB as the selected conversion), a duplicate layer was created with blending mode set to Multiply at 50% Opacity.
  • Middle = Same as Left, except the conversion was done to ProPhoto (no red-channel clipping indicated in ACR). Soft proofing in PS was turned on and set to sRGB.
  • Right = Same as Left, except that the Luminance settings for red and yellow in the Color Mixer palette were slightly lowered to eliminate all red-channel clipping prior to the conversion to sRGB.
So, in essence, the Left and Right renderings had the sRGB conversion performed prior to the color/tone adjustment using the Multiply blend, and the Middle rendering had the sRGB conversion applied after the Multiply adjustment. The Multiply adjustment had the effect of pushing additional portions of the red channel in the bowl and bow to the point of clipping, although the actual color clipping is well hidden on my monitor, since the green and blue channels were still intact and the fine image noise may also help dither away any apparent clipping. Here are the screen grabs:

Fit on Screen display

100% display showing a portion of the bowls

Here are color samples taken from the same locations on the bowl and sleeve in the 100% display screen grab:

[Image: color samples]

As can clearly be seen from the color samples, in the two renderings in which the conversion to sRGB occurred before the PS edit, the colors are different from the version in which the sRGB conversion effectively occurred at the end (thanks to the soft proofing). The only other difference is the very slightly darker colors of the Right rendering, due to the luminance adjustment done in ACR to bring that version fully within the sRGB gamut.

To my eye, the middle rendering is truer and preferred, thanks to a purer and more saturated yellow and red. Of course, additional corrections could be applied to the Left and Right renderings, but those will tend to introduce compromises elsewhere, require additional effort to isolate the edits, or both.
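The "duplicate layer, Multiply, 50% opacity" step described above boils down to simple per-channel math. A minimal sketch (values normalized to [0, 1]; real editors blend in the working space's encoding, which this ignores):

```python
import numpy as np

def multiply_blend(base, opacity=0.5):
    """Duplicate layer set to Multiply blend mode at the given
    opacity; base holds channel values normalized to [0, 1]."""
    blended = base * base                       # Multiply with itself
    return (1 - opacity) * base + opacity * blended

# Darkens everything; midtones proportionally more than highlights.
vals = np.array([0.98, 0.6, 0.3])
print(multiply_blend(vals))   # 0.9702, 0.48, 0.195
```

Because the result depends nonlinearly on the channel values, applying it before or after a gamut-clipping conversion gives different numbers, which is exactly the ordering effect shown in the screen grabs.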
I think you're confusing wider color gamut with wider dynamic range ...
Yes, i did not give much thought to it.
did you know that you can edit in 16-bit and 32-bit with sRGB? But if you're happy doing what you do and it works out, I'm happy.
I don't see advantage of limiting colors gamut to sRGB when importing RAW image to Photoshop. Having more color data to work with is better in my opinion.
You don't have more color data in ProPhoto - you have different color data. The advantage of sRGB for me is simplicity.
That's not necessarily true when you're dealing with an image with color clipping in the sRGB rendering but no clipping in the ProPhoto rendering. The ProPhoto version will end up with more unique color values. This is different from the bit depth point you're making.
I use the sRGB color space for everything, so I never have to convert for any purpose. The possible disadvantage is that the colors in some of my images can be less than ideal, but that's not a concern to me personally.
So, you never print and don't care about viewing your images on modern devices that utilize wider-than-sRGB displays?
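The "more unique color values" point can be illustrated with a toy model: channel values that exceed a narrower gamut's boundary all clamp to the same maximum, collapsing distinct colors. Hypothetical numbers, arbitrary units:

```python
import numpy as np

# Toy model: one channel, arbitrary units. Values above 1.0 stand in
# for colors outside a narrow gamut; clipping collapses them all to
# the boundary, so the narrow version keeps fewer distinct values.
rng = np.random.default_rng(1)
scene = rng.uniform(0.0, 1.3, size=10_000)

narrow = np.clip(scene, 0.0, 1.0)   # "sRGB-like": clips ~23% of values
wide   = np.clip(scene, 0.0, 1.3)   # "wider gamut": nothing clips

print(np.unique(narrow).size, np.unique(wide).size)
```

The narrow rendering reports far fewer unique values, which is the clipping effect at issue (and is separate from the bit-depth question).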
 
Take a look at the graph in an earlier post. Of course most of what we see is not a single frequency unless we are looking at a laser. For that reason, it is nearly impossible to replicate the full range of what we can see with just red, green and blue. Though I guess it would be possible with a complicated translation into those three colors.
 
To my eye, the middle rendering is truer and preferred thanks to a purer and more saturated yellow and red. Of course, additional corrections could be applied to the Left and Right renderings, but those will tend to either introduce compromises elsewhere, require additional effort to isolate the edits or both.
Thanks for the answer.
You don't have more color data in ProPhoto - you have different color data.
That's not necessarily true when you're dealing with an image with color clipping in the sRGB rendering but no clipping in the ProPhoto rendering. The ProPhoto version will end up with more unique color values. This is different from the bit depth point you're making.
I meant, of course, that the available numerical range of color data is the same. Which colors are present in an image, or considered important, are both different points.
The advantage of sRGB for me is simplicity. I use the sRGB color space for everything, so I never have to convert for any purpose. The possible disadvantage is that the colors in some of my images can be less than ideal, but that's not a concern to me personally.
So, you never print
Almost never, and never with a concern about strict color accuracy.
and don't care about viewing your images on modern devices that utilize wider-than-sRGB displays?
Nope. I'll also reiterate that others can easily have different requirements and expectations.
 
Take a look at the graph in an earlier post. Of course most of what we see is not a single frequency unless we are looking at a laser. For that reason, it is nearly impossible to replicate the full range of what we can see with just red, green and blue. Though I guess it would be possible with a complicated translation into those three colors.
What we see, or the range of visible colour can be expressed in terms of the signals generated by the three types of cones in our eyes. This covers what we call the visible spectrum.

To reproduce colour in any system you must tie that to absolute values and construct a theoretical colour chart. Though it's arranged in opposites, it is theoretical: it is based on taking the visible spectrum, the limits of the wavelengths the eye responds to, and constructing an absolute chart that tends toward single wavelengths at the edges. It contains theoretical colours which we don't necessarily see.

The eye doesn't detect colour from a single photon hitting a single cone and producing a single signal. It extrapolates colour from multiple signals from a group of cones over a period of time. It works by opposites cancelling, so yellow/blue, red/green and black/white all desaturate each other; it corrects white balance by fatigue in the cells over time; and it relies on memory and assumption. Yes, you can produce single-wavelength light, but the human eye can't see it with any clarity, because the eye (though remarkable) is an evolved organic solution to the real world, not an absolute instrument for measuring theoretical colour.

It's very easy to nail colour to the absolute values that a system of colour reproduction needs and just assume we see colour in an absolute way, as it gives us an easy-to-understand framework. But what you can't do is use that chart to extrapolate real-world colour. What you should do is start with that theoretical colour in a reference environment and ask how we see or perceive it. With the right three wavelengths it's theoretically possible to "reproduce" the entire visible spectrum (but not the theoretical chart).

We see the shorter wavelengths as violet, and though the very shortest wavelengths we detect can influence the hue, we do not see or understand them as an individual colour.
 
Take a look at the graph in an earlier post. Of course most of what we see is not a single frequency unless we are looking at a laser. For that reason, it is nearly impossible to replicate the full range of what we can see with just red, green and blue. Though I guess it would be possible with a complicated translation into those three colors.
What you call a full range is not that full. It fills (a part of) a 3D space, and so does a monitor or a sensor.
 
But it is my understanding that in order to reproduce the entire range of our vision, some values of the three colors have to be negative. How is that done with RGB pixels?

See the graph in the following link.

https://en.wikipedia.org/wiki/CIE_1931_color_space#/media/File:CIE1931_RGBCMF.svg
Negative or false colors, as with the ACES AP0 color space. It's a mathematical method for representing colors. For generating real colors you need real primary colors, but it would be trivial if you had tunable narrow-band LEDs.

A full-visual-gamut monitor would just be an array of sub-pixel triples, as we currently use. The red primary can be fixed at the extreme red sensation. The blue and green pixels would need to be tunable, per sub-pixel, and each would vary along the spectral locus, the horseshoe-curve part of the xy diagram. The monitor would be driven by three RGB values to display any visible color (up to the dynamic range of each sub-pixel).
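The "negative or false colors" idea can be shown with a made-up 3x3 primary matrix: any target response has a unique three-number representation, but a component can go negative when the target lies outside what these primaries can physically mix.

```python
import numpy as np

# Made-up 3x3 matrix: each column is one primary's contribution to a
# cone/XYZ-like response (hypothetical numbers, not a real space).
M = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.7, 0.1],
              [0.0, 0.1, 0.9]])

response = np.array([0.1, 0.4, 0.8])   # some target sensation
rgb = np.linalg.solve(M, response)

print(rgb)        # the first component comes out negative here
print(M @ rgb)    # yet it reproduces the response exactly
```

A negative pixel value can't be emitted by hardware, but as a stored number it round-trips perfectly, which is why spaces like ACES AP0 can "contain" colors no display produces.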

--
http://therefractedlight.blogspot.com
 
I have come across an article (link) that states:
Adobe RGB squeezes colors into a smaller range (makes them duller) before recording them to your file. Special smart software is then needed to expand the colors back to where they should be when opening the file.

Since Adobe RGB squeezes colors into a smaller range, the full range represents a broader range of colors, if and only if you have the correct software to read it.

...

because the colors are compressed into a smaller range that there is more chroma quantization noise when the file is opened again
Could someone explain what this guy means by a smaller range? From what I know about Adobe RGB, the color range is actually larger.

It's a bad explanation.

Imagine that most people write down their measurements in inches, but you write down your measurements in feet. If someone uses your measurements but ignores the fact that you used feet, they will end up thinking the measured object is much smaller than it was. From their point of view, you "squeezed" your measurements into smaller numbers.

If you are only allowed to use a fixed range of numbers (say 1 to 32,000), then the person measuring in feet can measure larger objects, but the person measuring in inches has more accuracy.

One could make a good case that if the object being measured is small enough that inches work, you should measure in inches, and only switch to feet when the object is larger than the maximum number of inches you can use.

With typical 8-bit files, there are 16,777,216 possible values for each pixel. Each of these typically represents a point in the visible colorspace. With sRGB the points are very close together and don't cover the entire visible range. With AdobeRGB the points are spread out a bit more and cover a wider range (but still not the entire visible colorspace).

Imagine that a colorspace is a box of 16,777,216 numbered crayons. Each pixel contains the number of the crayon to use for that spot. With AdobeRGB, there is a larger difference between adjacent crayons, but the crayons include some unusual colors (like more saturated greens).

If there is a particular point in the visible colorspace and it falls within the sRGB colorspace, then you can likely get closer to specifying that point by using sRGB. If the color falls outside the sRGB colorspace, then you will get closer using AdobeRGB.

Even though the "crayons" are further apart with AdobeRGB, they are still close enough for many purposes, and therefore some people like to use AdobeRGB as it gives them the option of using colors that are a little outside the sRGB colorspace.

The disadvantage of using the AdobeRGB crayon numbers is that some software is not "color managed". Such software might look at the pixel values and use the sRGB crayon with that number, rather than the same-numbered AdobeRGB crayon. Generally, for any given number, the sRGB crayon is a bit more muted than the AdobeRGB crayon with the same number. Therefore, software which is not color managed may display muted colors for an AdobeRGB image.

Note that this does not mean that AdobeRGB images look better than sRGB images. In fact, if all the colors in an image fit into sRGB, then using sRGB will more accurately reflect the colors. However, the difference is not likely to be noticeable, unless the image will be subject to a lot of editing.
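The crayon-spacing idea reduces to simple arithmetic: the same number of code values spread over a wider range means coarser steps. The 1.4x figure below is illustrative only, not AdobeRGB's actual gamut ratio:

```python
# Toy 1-D version of the "crayon spacing" idea: the same number of
# code values spread over a wider range means coarser steps.
levels = 256                    # one 8-bit channel

narrow_range = 1.0              # stand-in for sRGB's extent
wide_range = 1.4                # stand-in for a wider gamut (illustrative)

narrow_step = narrow_range / (levels - 1)
wide_step = wide_range / (levels - 1)

print(wide_step / narrow_step)  # 1.4x coarser quantization
```

This coarseness is also why the quoted article worries about "chroma quantization noise" at 8 bits; at 16 bits per channel the steps shrink far below visibility in either space.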
 
The article is from 2006. (sRGB vs. Adobe RGB © 2006 KenRockwell.com)

It's very conditional, and way too long and deep to go into every possible situation, except to say things have changed A LOT since then.

...

I'm not saying Ken Rockwell is wrong - but digital imaging has changed at an amazing pace.
It's all of those things. The description strives to be simple but unfortunately over-simplifies, and it makes generalizations without stating the key assumptions they were built on, which confuses any "student" trying to build knowledge from Ken's writings, because they are given conclusions without enough supporting info.

For example, whether colors get "squeezed" or "expanded" into smaller or bigger gamuts depends on which "rendering intent" is used by the color management system. But Ken does not mention that, so he makes it sound like it always works the way he said. Which is, again, an oversimplification that tends to reduce education instead of increasing it.

Ken's description sounds very much like the way it all worked in the early, wild days of color management in the 2000s, when Photoshop 5 did not do it right and things got converted when they were not supposed to, and when color profiles were not inherently supported by OSs and most photo apps. Things are absolutely much different, much better, and more reliable now for color.

If there's anything you can say about the longevity and endurance of Ken Rockwell's life works, you can say that yes, we are still complaining about his errors today. :)
 