Shooting RAW: sRGB and Adobe RGB

sirhawkeye64

So I know that the Adobe RGB color space has a wider gamut than sRGB, and that when previewing RAW files taken with Adobe RGB they can sometimes appear flatter and less vibrant than those taken with sRGB (obviously I know that a RAW file doesn't add any additional data such as vibrance or saturation, like an out-of-camera JPEG might, depending on the camera's settings).

It's often been suggested to work in the widest color space and convert down as needed (i.e., converting from Adobe RGB or ProPhoto RGB to sRGB when exporting for web use/computer viewing).

But my question is: if I shoot my RAWs in Adobe RGB, will I be capturing more color data and thus have more color data available to work with in post? I know a lot of printers can also print images that are in the sRGB color space. Basically, I want to capture and maintain as much (and as accurate) color as possible out of the camera, and then deal with color spaces later in post (i.e., converting to sRGB for web).

Thoughts?

(I know there are probably some deep technical details behind this, but I'm really just asking whether it's advisable to use Adobe RGB to retain the most, and most accurate, colors. I used to use Adobe RGB on my previous cameras but at some point switched to sRGB, and this had me thinking about switching back... Most of my images are for web/online, but occasionally I will print some, so color retention matters there too.)
 
Selecting Adobe RGB or sRGB in the camera settings doesn't affect the raw data; it's used by the camera in making the JPEG embedded in the raw file, and the JPEG produced when shooting Raw+JPEG or just JPEG.
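If you want to convince yourself of this, here's a minimal sketch using the rawpy library (the file name is just a placeholder); it reads the raw mosaic directly, and the values that come back are plain sensor counts with no colour space applied yet:

Code:
import rawpy  # LibRaw wrapper for reading raw files

# Placeholder path: any raw file from your camera will do
with rawpy.imread("DSC_0001.NEF") as raw:
    mosaic = raw.raw_image  # undemosaiced Bayer data, one count per photosite
    print(mosaic.shape, mosaic.dtype)
    # Nothing here depends on the sRGB/Adobe RGB menu setting; that setting
    # only influences the embedded JPEG and the converter's default output.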
 
The colour space setting in the camera has no effect on the raw file, if I remember correctly.

So if shooting raw, it's irrelevant.

Edit: answered above already.
 
Hello,

The raw data has its own, and certainly unique, colorspace.
 
So I know that the Adobe RGB color space has a wider gamut than sRGB, and that when previewing RAW files taken with Adobe RGB they can sometimes appear flatter and less vibrant than those taken with sRGB (obviously I know that a RAW file doesn't add any additional data such as vibrance or saturation, like an out-of-camera JPEG might, depending on the camera's settings).
The raw file records the light as it falls on each pixel. It doesn't have a colour space and often contains more information than any colour space can represent.

Colour space is about how the image looks, and that depends also on the gamut of the viewing medium. Many viewing devices have a fairly narrow gamut, so sRGB is more appropriate for them. This includes a lot of computer monitors and TVs, although many now offer a wider gamut.

The camera's JPG output uses sRGB by default. The default output from your raw converter uses whatever you set in camera. If your monitor has a wide gamut you'll see all of Adobe RGB if that's what you set; but if your monitor's gamut is narrower than Adobe RGB you won't see all the colours, which is probably why your files look a bit flat.
It's often been suggested to work in the widest color space and convert down as needed (i.e., converting from Adobe RGB or ProPhoto RGB to sRGB when exporting for web use/computer viewing).
Well, yes. But it's often more convenient to work in whatever your output will be. For example, I use sRGB as standard because then I don't need to mess about when posting photos here or when viewing at home on different devices (monitor or TV).
But my question is: if I shoot my RAWs in Adobe RGB, will I be capturing more color data and thus have more color data available to work with in post?
No. Whatever you set, the raw file will hold more; it's just what you see that is controlled by the colour space.
I know a lot of printers can also print images that are in the sRGB color space. Basically, I want to capture and maintain as much (and as accurate) color as possible out of the camera, and then deal with color spaces later in post (i.e., converting to sRGB for web).
 
As others have said, raw data has no colour space.

Raw converters have an option somewhere that lets you select the image profile space. This is the space the editor will use for all its calculations, so a large space ensures that no colours go out of gamut while you edit the file. It's like working with 16-bit rather than 8-bit files.
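To make "out of gamut" concrete, here's a small numpy sketch (the two matrices are the standard published D65 ones; the test colour is just an example). A pure Adobe RGB green, converted honestly, needs negative sRGB components, which no sRGB device can show; that's exactly what clips when you edit in a space that's too small:

Code:
import numpy as np

# Standard linear-RGB <-> XYZ matrices (D65, published values)
ADOBERGB_TO_XYZ = np.array([[0.5767309, 0.1855540, 0.1881852],
                            [0.2973769, 0.6273491, 0.0752741],
                            [0.0270343, 0.0706872, 0.9911085]])
XYZ_TO_SRGB = np.array([[ 3.2404542, -1.5371385, -0.4985314],
                        [-0.9692660,  1.8760108,  0.0415560],
                        [ 0.0556434, -0.2040259,  1.0572252]])

# Pure Adobe RGB green: a colour that sits outside the sRGB gamut
adobe_green = np.array([0.0, 1.0, 0.0])
srgb_linear = XYZ_TO_SRGB @ (ADOBERGB_TO_XYZ @ adobe_green)
print(srgb_linear)  # ~[-0.40, 1.00, -0.04]; negative components = out of gamut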

ProPhoto RGB is definitely the biggest, larger than everything else, even printer CMYK, so your images are about as future-proof as you can make them.

However, the app will automatically convert from the working space to your display, so if you have an sRGB monitor the colours will be squeezed to fit. If you are working in ProPhoto, you may find it better to set the rendering intent to perceptual.

Just remember to convert to sRGB for the JPEGs you plan to publish on the web.
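If you script that last step, here's a minimal sketch with Pillow's ImageCms. Pillow only ships sRGB as a built-in profile, so the ProPhoto .icm path here is an assumption; point it at whatever profile file you actually have:

Code:
from PIL import Image, ImageCms

im = Image.open("edited.tif")  # placeholder: a file saved in your working space

prophoto = ImageCms.getOpenProfile("ProPhoto.icm")  # hypothetical profile path
srgb = ImageCms.createProfile("sRGB")               # built into Pillow

# Perceptual intent compresses the gamut smoothly rather than hard-clipping
# (whether it actually differs depends on the tables in the source profile)
web_im = ImageCms.profileToProfile(im, prophoto, srgb,
                                   renderingIntent=ImageCms.INTENT_PERCEPTUAL)
web_im.save("for_web.jpg", quality=90)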
 
So I know that the Adobe RGB color space has a wider gamut than sRGB, and that when previewing RAW files taken with Adobe RGB they can sometimes appear flatter and less vibrant than those taken with sRGB (obviously I know that a RAW file doesn't add any additional data such as vibrance or saturation, like an out-of-camera JPEG might, depending on the camera's settings).
The raw file records the light as it falls on each pixel. It doesn't have a colour space
It does have a colour space, of course!

Otherwise you could not interpret it as colour.
and often contains more information than any colour space can represent.
Wrong. It may just not contain some colours of a given colour space.

 
So I know that the Adobe RGB color space has a wider gamut than sRGB, and that when previewing RAW files taken with Adobe RGB they can sometimes appear flatter and less vibrant than those taken with sRGB
If you're viewing in a program set up to display sRGB, then Adobe RGB will look desaturated. This only means that the program wasn't told how to handle values outside the sRGB range.
(obviously I know that a RAW file doesn't add any additional data such as vibrance or saturation, like an out-of-camera JPEG might, depending on the camera's settings).

It's often been suggested to work in the widest color space and convert down as needed (i.e., converting from Adobe RGB or ProPhoto RGB to sRGB when exporting for web use/computer viewing).

But my question is: if I shoot my RAWs in Adobe RGB, will I be capturing more color data and thus have more color data available to work with in post? I know a lot of printers can also print images that are in the sRGB color space. Basically, I want to capture and maintain as much (and as accurate) color as possible out of the camera, and then deal with color spaces later in post (i.e., converting to sRGB for web).

Thoughts?

(I know there are probably some deep technical details behind this, but I'm really just asking whether it's advisable to use Adobe RGB to retain the most, and most accurate, colors. I used to use Adobe RGB on my previous cameras but at some point switched to sRGB, and this had me thinking about switching back... Most of my images are for web/online, but occasionally I will print some, so color retention matters there too.)
Adobe converts files to its own ProPhoto-based colour space to work on them, so it doesn't matter whether you set the camera to sRGB or Adobe RGB. Photoshop saves to whatever colour space you have set it up to use, usually sRGB or Adobe RGB; I've never heard of anyone saving in ProPhoto RGB.

If you want to avoid possible banding and other artefacts in post-processing, you are better advised to shoot 14-bit raw without lossy compression, so you can pull the sliders around and use your filters; a quick illustration follows at the end of this post.

Same advice as always: if you want to print high quality, use Adobe RGB if your print service recognises it. If you want to upload to the web, only bother with sRGB.
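To see why the bit-depth advice matters, a toy numpy illustration (the gradient and the three-stop boost are made up for the demo): quantise a smooth shadow ramp at 8 and at 14 bits, push it hard, and count how many distinct tonal levels survive.

Code:
import numpy as np

# A smooth, dark gradient: the bottom two stops of the tonal range
ramp = np.linspace(0.0, 0.25, 10_000)

q8  = np.round(ramp * 255)   / 255     # captured with 8-bit precision
q14 = np.round(ramp * 16383) / 16383   # captured with 14-bit precision

# "Pull the sliders around": a hard three-stop shadow boost
print(len(np.unique(np.clip(q8  * 8, 0, 1))))   # ~33 levels -> visible banding
print(len(np.unique(np.clip(q14 * 8, 0, 1))))   # ~2049 levels -> smooth tones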
 
So I know that the Adobe RGB color space has a wider gamut than sRGB, and that when previewing RAW files taken with Adobe RGB they can sometimes appear flatter and less vibrant than those taken with sRGB
If you're viewing in a program set up to display sRGB, then Adobe RGB will look desaturated. This only means that the program wasn't told how to handle values outside the sRGB range.
(obviously I know that a RAW file doesn't add any additional data such as vibrance or saturation, like an out-of-camera JPEG might, depending on the camera's settings).

It's often been suggested to work in the widest color space and convert down as needed (i.e., converting from Adobe RGB or ProPhoto RGB to sRGB when exporting for web use/computer viewing).

But my question is: if I shoot my RAWs in Adobe RGB, will I be capturing more color data and thus have more color data available to work with in post? I know a lot of printers can also print images that are in the sRGB color space. Basically, I want to capture and maintain as much (and as accurate) color as possible out of the camera, and then deal with color spaces later in post (i.e., converting to sRGB for web).

Thoughts?

(I know there are probably some deep technical details behind this, but I'm really just asking whether it's advisable to use Adobe RGB to retain the most, and most accurate, colors. I used to use Adobe RGB on my previous cameras but at some point switched to sRGB, and this had me thinking about switching back... Most of my images are for web/online, but occasionally I will print some, so color retention matters there too.)
Adobe converts files to its own ProPhoto-based colour space to work on them, so it doesn't matter whether you set the camera to sRGB or Adobe RGB. Photoshop saves to whatever colour space you have set it up to use, usually sRGB or Adobe RGB; I've never heard of anyone saving in ProPhoto RGB.

If you want to avoid possible banding and other artefacts in post-processing, you are better advised to shoot 14-bit raw without lossy compression, so you can pull the sliders around and use your filters.
OK, and I have my cameras set to 14-bit RAW, uncompressed.
Same advice as always: if you want to print high quality, use Adobe RGB if your print service recognises it. If you want to upload to the web, only bother with sRGB.
 
So I know that the Adobe RGB color space has a wider gamut than sRGB, and that when previewing RAW files taken with Adobe RGB they can sometimes appear flatter and less vibrant than those taken with sRGB (obviously I know that a RAW file doesn't add any additional data such as vibrance or saturation, like an out-of-camera JPEG might, depending on the camera's settings).
The raw file records the light as it falls on each pixel. It doesn't have a colour space
It does have a colour space, of course!

Otherwise you could not interpret it as colour.
and often contains more information than any colour space can represent.
Wrong. It may just not contain some colours of a given colour space.
A pixel generates an analog voltage proportional to the photons that strike it as they pass through the color filter array (Bayer sensor); the pixel itself doesn't know what color the light is. Colors are assigned to each pixel by the demosaicing process, which knows which filter each pixel sits under. Although a pixel receives light through only one of the R, G, or B filters, demosaicing estimates and assigns the other two colors based on neighboring pixels.

Color space assignment follows: each pixel's demosaiced colors are mapped through a tone curve into a color space, creating an image we can see.

JPG files produced in the camera, including the previews embedded in raw files, go through this whole process in-camera, including assignment of the color space when the camera offers a choice.
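For the curious, here's a minimal bilinear demosaic in numpy/scipy, just to show the estimate-from-the-neighbours idea. It assumes an RGGB layout and a mosaic that's already black-level corrected; real converters use far smarter interpolation:

Code:
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(mosaic):
    """Bilinear demosaic of an RGGB Bayer mosaic (H x W float array)."""
    r = np.zeros_like(mosaic)
    g = np.zeros_like(mosaic)
    b = np.zeros_like(mosaic)
    r[0::2, 0::2] = mosaic[0::2, 0::2]   # red photosites
    g[0::2, 1::2] = mosaic[0::2, 1::2]   # green photosites, red rows
    g[1::2, 0::2] = mosaic[1::2, 0::2]   # green photosites, blue rows
    b[1::2, 1::2] = mosaic[1::2, 1::2]   # blue photosites

    # Kernels that average whichever neighbours actually saw each colour
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    return np.dstack([convolve(r, k_rb), convolve(g, k_g), convolve(b, k_rb)])

Note the output is still in the camera's native space; assigning sRGB or Adobe RGB is a separate step after this.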


Cheers,
Doug
 
So I know that the Adobe RGB color space has a wider gamut than sRGB, and that when previewing RAW files taken with Adobe RGB they can sometimes appear flatter and less vibrant than those taken with sRGB (obviously I know that a RAW file doesn't add any additional data such as vibrance or saturation, like an out-of-camera JPEG might, depending on the camera's settings).
The raw file records the light as it falls on each pixel. It doesn't have a colour space
It does have a colour space, of course!

Otherwise you could not interpret it as colour.
and often contains more information than any colour space can represent.
Wrong. It may just not contain some colours of a given colour space.
A pixel generates an analog voltage proportional to the photons that strike it as they pass through the color filter array (Bayer sensor); the pixel itself doesn't know what color the light is. Colors are assigned to each pixel by the demosaicing process, which knows which filter each pixel sits under. Although a pixel receives light through only one of the R, G, or B filters, demosaicing estimates and assigns the other two colors based on neighboring pixels.

Color space assignment follows: each pixel's demosaiced colors are mapped through a tone curve into a color space, creating an image we can see.

JPG files produced in the camera, including the previews embedded in raw files, go through this whole process in-camera, including assignment of the color space when the camera offers a choice.

Cheers,
Doug
This is a misconception, especially the claim that you need to demosaic before you have color...

But this link explains it much better:

https://forum.luminous-landscape.com/index.php?topic=22471.0

This is a great explanation! Here are the key points (just quoting a few sentences):
  • "Nonetheless, color information is present in the raw file and does not magically appear during demosaicing."
Spot on. People think that before demosaicing there is no color, which is a misconception.
  • "The recognition that the raw file does have a color space is useful in understanding how raw files are processed."
I agree that it is important to understand this. It is written without ambiguity that raw files do have a color space...
  • "Furthermore, in the source code quoted below, Thomas Knoll refers to the “camera native space” and “camera color space”, which would indicate that he thinks that the camera does have a color space that is represented in the raw file and the camera profile."
Without a color space you can NOT have any colors. So even from the start (from the raw data), you necessarily have a color space, and colors do not appear magically.
 
As others have said, raw data has no colour space.
Maybe it's meaningless to say raw data has a "color space", but the color sensitivity of the photosites is affected by the dyes, and different sensors use different dyes. I believe this color sensitivity is what's sometimes called the native color space, from which JPEGs are created and for which raw converters are designed. Raw converters are therefore necessarily not all the same.

--
DS
 
So I know that the Adobe RGB color space has a wider gamut than sRGB, and that when previewing RAW files taken with Adobe RGB they can sometimes appear flatter and less vibrant than those taken with sRGB (obviously I know that a RAW file doesn't add any additional data such as vibrance or saturation, like an out-of-camera JPEG might, depending on the camera's settings).
The raw file records the light as it falls on each pixel. It doesn't have a colour space
It does have a colour space, of course!

Otherwise you could not interpret it as colour.
and often contains more information than any colour space can represent.
Wrong. It may just not contain some colours of a given colour space.
A pixel generates an analog voltage proportional to the photons that strike it as they pass through the color filter array (Bayer sensor); the pixel itself doesn't know what color the light is. Colors are assigned to each pixel by the demosaicing process, which knows which filter each pixel sits under. Although a pixel receives light through only one of the R, G, or B filters, demosaicing estimates and assigns the other two colors based on neighboring pixels.

Color space assignment follows: each pixel's demosaiced colors are mapped through a tone curve into a color space, creating an image we can see.

JPG files produced in the camera, including the previews embedded in raw files, go through this whole process in-camera, including assignment of the color space when the camera offers a choice.

Cheers,
Doug
This is a misconception, especially the claim that you need to demosaic before you have color...

But this link explains it much better:

https://forum.luminous-landscape.com/index.php?topic=22471.0

This is a great explanation! Here are the key points (just quoting a few sentences):
  • "Nonetheless, color information is present in the raw file and does not magically appear during demosaicing."
Spot on. People think that before demosaicing there is no color, which is a misconception.
  • "The recognition that the raw file does have a color space is useful in understanding how raw files are processed."
I agree that it is important to understand this. It is written without ambiguity that raw files do have a color space...
  • "Furthermore, in the source code quoted below, Thomas Knoll refers to the “camera native space” and “camera color space”, which would indicate that he thinks that the camera does have a color space that is represented in the raw file and the camera profile."
Without a color space you can NOT have any colors. So even from the start (from the raw data), you necessarily have a color space, and colors do not appear magically.
Digital cameras have what's called a "spectral sensitivity". The Bayer (or X-Trans) filters on the sensor array aren't filters of color; they're bandpass filters of light, chosen to approximate the tristimulus response of the eye-brain mechanism. So a raw file contains information pertinent to the construction of color, but the encoding isn't "color" until demosaicing builds the RGB triplets our eye-brain needs to assert "color".

That "color" is a construct in your head, and not in the scene or the camera sensor, is a hard thing to wrap one's head around, but it's critical to making all the color gonkulators we deal with work correctly...
 
As others have said, raw data has no colour space.
Not correct
He's correct.
I referred to "camera colorspace" in a DPReview thread some time ago and was admonished for it. Wanting to understand the angst, I did some research and determined that, yes, describing a camera's recording performance as a colorspace is not correct; 'spectral sensitivity' or 'spectral response' is the correct term.

Recently, I've been recording some spectra from my D7000 with the intent of building a more robust camera profile. What you do is push light through a small slit and onto a diffraction grating, which does the same thing as a prism: split the light into its constituent wavelengths. Take a picture of that, extract the pixels that recorded the light, and use them to develop a 'spectral sensitivity dataset'. Here's a JPEG of one of those recordings:

[Image: Nikon D7000 spectrum recording, measurement-encoded]

Full disclosure: I grayscaled the original JPEG to show how the data I work with looks. Here's the RGB-encoded version:

[Image: Nikon D7000 spectrum recording, RGB-encoded]

The difference between the two is this: the first represents what the camera measures; the second presents those measurements encoded in a way that allows your head to construct color.

In raw processing, either in-camera or in one of the fine softwares available, there's a color transform that has to take place, mapping the raw data from the camera's spectral response into a colorspace. In-camera, that's typically to Adobe RGB or sRGB to make the JPEG; in software, it's usually to a working space like ProPhoto. Either way, from then on the image has a 'colorspace'. Before that, the data represented the camera's 'spectral response'. Like it or not, that's how it works...
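That transform is, at its heart, a couple of 3x3 matrix multiplies plus a transfer curve. A minimal sketch; the camera matrix below is invented purely for illustration, where a real one comes from a camera profile or a DNG's ColorMatrix tags:

Code:
import numpy as np

# Invented camera-to-XYZ matrix, for illustration only; a real one comes from
# a camera profile built from measurements like the spectra above
CAM_TO_XYZ = np.array([[0.60, 0.25, 0.10],
                       [0.25, 0.65, 0.10],
                       [0.05, 0.10, 0.90]])

# Standard XYZ-to-linear-sRGB matrix (D65, published values)
XYZ_TO_SRGB = np.array([[ 3.2404542, -1.5371385, -0.4985314],
                        [-0.9692660,  1.8760108,  0.0415560],
                        [ 0.0556434, -0.2040259,  1.0572252]])

def camera_to_srgb(cam_rgb):
    # Map a white-balanced camera-space triplet into display sRGB
    lin = XYZ_TO_SRGB @ (CAM_TO_XYZ @ np.asarray(cam_rgb, dtype=float))
    lin = np.clip(lin, 0.0, 1.0)  # out-of-gamut values clip here
    # sRGB transfer curve: only now do the numbers mean "colour on a screen"
    return np.where(lin <= 0.0031308,
                    12.92 * lin,
                    1.055 * lin ** (1 / 2.4) - 0.055)

print(camera_to_srgb([0.5, 0.4, 0.3]))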
 
Digital cameras have what's called a "spectral sensitivity". The Bayer (or X-Trans) filters on the sensor array aren't filters of color; they're bandpass filters of light,
The bandpass filters are indeed filters of light -- in the visible spectrum, that means filters of what we call colors. I think we're now playing a semantics game, and it doesn't help the understanding of what's going on, IMO.

--
DS
 
As others have said, raw data has no colour space.
Not correct
He's correct.
I referred to "camera colorspace" in a DPReview thread some time ago and was admonished for it. Wanting to understand the angst, I did some research and determined that, yes, describing a camera's recording performance as a colorspace is not correct; 'spectral sensitivity' or 'spectral response' is the correct term.

Recently, I've been recording some spectra from my D7000 with the intent of building a more robust camera profile. What you do is push light through a small slit and onto a diffraction grating, which does the same thing as a prism: split the light into its constituent wavelengths. Take a picture of that, extract the pixels that recorded the light, and use them to develop a 'spectral sensitivity dataset'. Here's a JPEG of one of those recordings:

[Image: Nikon D7000 spectrum recording, measurement-encoded]

Full disclosure: I grayscaled the original JPEG to show how the data I work with looks. Here's the RGB-encoded version:

[Image: Nikon D7000 spectrum recording, RGB-encoded]

The difference between the two is this: the first represents what the camera measures; the second presents those measurements encoded in a way that allows your head to construct color.

In raw processing, either in-camera or in one of the fine softwares available, there's a color transform that has to take place, mapping the raw data from the camera's spectral response into a colorspace. In-camera, that's typically to Adobe RGB or sRGB to make the JPEG; in software, it's usually to a working space like ProPhoto. Either way, from then on the image has a 'colorspace'. Before that, the data represented the camera's 'spectral response'. Like it or not, that's how it works...
Interesting.

What was the bit depth you were shooting at... and would that make much difference in these shots?
 
Interesting.

What was the bit depth you were shooting at... and would that make much difference in these shots?
All my cameras deliver raw data as 16-bit integers; what I actually work with is floating point, where the 0-65535 16-bit integer range is mapped to 0.0-1.0, a convention respected internally by a number of softwares I use.

Would 8-bit make a difference? Interesting question, as the final data I use to make profiles is usually rounded to two decimal places, and it seems to work well enough not to dork up the color. There are ways to measure differences that would be telling, but right now I'm more interested in just getting the danged thing working... :D
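For anyone wondering, that integer-to-float mapping is a one-liner (the array values are arbitrary examples):

Code:
import numpy as np

raw16 = np.array([0, 8191, 32768, 65535], dtype=np.uint16)  # example counts
raw_float = raw16.astype(np.float64) / 65535.0  # 0-65535 mapped onto 0.0-1.0
print(raw_float)  # approximately [0.0, 0.125, 0.5, 1.0]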
 
As others have said, raw data has no colour space.
Not correct
He's correct.
I referred to "camera colorspace" in a DPReview thread some time ago and was admonished for it. Wanting to understand the angst, I did some research and determined that, yes, describing a camera's recording performance as a colorspace is not correct; 'spectral sensitivity' or 'spectral response' is the correct term.

Recently, I've been recording some spectra from my D7000 with the intent of building a more robust camera profile. What you do is push light through a small slit and onto a diffraction grating, which does the same thing as a prism: split the light into its constituent wavelengths. Take a picture of that, extract the pixels that recorded the light, and use them to develop a 'spectral sensitivity dataset'. Here's a JPEG of one of those recordings:

[Image: Nikon D7000 spectrum recording, measurement-encoded]

Full disclosure: I grayscaled the original JPEG to show how the data I work with looks. Here's the RGB-encoded version:

[Image: Nikon D7000 spectrum recording, RGB-encoded]

The difference between the two is this: the first represents what the camera measures; the second presents those measurements encoded in a way that allows your head to construct color.
No. For the second, you should say instead: that allows it to be displayed on a device. That is different from what you said.

But the native color space is the most accurate; unfortunately you have to transform to a colorspace that can be displayed on a device, with some losses...

You could imagine (theoretically, of course) a device able to reproduce the native colorspace of a sensor.

You just can't do better than the native colorspace! Other representations are just approximations.
In raw processing, either in-camera or in one of the fine softwares available, there's a color transform that has to take place, mapping the raw data from the camera's spectral response into a colorspace. In-camera, that's typically to Adobe RGB or sRGB to make the JPEG; in software, it's usually to a working space like ProPhoto. Either way, from then on the image has a 'colorspace'. Before that, the data represented the camera's 'spectral response'. Like it or not, that's how it works...
The values recorded by the sensor are the coordinates in the native colorspace, as explained in the link.

You have to transform between the native colorspace and a colorspace more appropriate for display on a device.

So I disagree with you.

In both colorspaces this represents a color; one is just more accurate than the other.
 
