Pentax K30 RAW shooting - sRGB or AdobeRGB?

I still contend that if all you have is the JPEG, and your output is intended to be sRGB, you're better off working with a sRGB JPEG than starting with an Adobe RGB JPEG and converting it to sRGB later.
Absolutely. If your output is 128 kbps bitrate, do not even think of recording and editing at 320 kbps.

Wait, something is wrong here... LOL
 
JPEG is always 128 kbps in this analogy
Really? And how, may I ask, do you measure fidelity?
 
It wasn't a great analogy in the first place, I'm not going to argue about it. If you want to continue to believe you're getting something for nothing, go ahead.
 
Outside your workflow, the final output is always an 8-bit JPEG, or a print with less gamut than a computer screen, as you cannot display a raw file.

The difference between out of camera JPEGs and "developing" the raw in your computer is who, where, and how the processing is done.

Raw processors show you a JPEG preview of what the final output might be; they cannot show the 14 bits per colour. Moving the sliders is just choosing what information will be kept (which is of course easier than trying to extrapolate it when processing an OOC JPEG).

In camera, the processing time and power are limited by state-of-the-art technology and the need to clear the buffer for the next shots.

However, the increase in in-camera processing power and the cumulative improvements of the processing algorithms help produce OOC JPEGs that are much better than a few years ago. The higher pixel count also helps improve the processing software, which has more information to analyse the scene and thus adapt the way it works.

The ability of modern cameras to output high-quality 4K video attests to the power of modern in-camera processors and the efficiency of their software.

The main limitations of today's OOC JPEGs are high-contrast scenes and low-light, high-ISO scenes. And of course the choice between true-to-life rendering and exaggerated ambiance for artistic purposes.

In high-contrast scenes because, up to now, the camera cannot guess whether a high-dynamic-range scene will look better with a standard tone curve, i.e. with very dark shadows and pleasing contrast in the midtones, at the risk of highlight clipping, or whether it will benefit from raising the shadows and protecting the highlights, at the risk that the final output will look dull or unnatural.
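The trade-off can be sketched numerically. In this toy model (my own illustrative stand-ins, not any camera's actual curves), a smoothstep plays the "standard" contrasty curve and a gamma lift plays the "protective" one:

```python
import numpy as np

x = np.array([0.1, 0.5, 0.95])   # shadow, midtone, near-highlight luminance

s_curve = 3 * x**2 - 2 * x**3    # "standard": extra midtone contrast, crushed shadows
lifted  = x ** 0.7               # "protective": raised shadows, flatter rendering

print(s_curve)   # shadow drops to ~0.03, highlight pushed to ~0.99
print(lifted)    # shadow lifted to ~0.20, midtone brightened to ~0.62
```

The same 0.1 shadow value ends up either nearly black or clearly visible, which is exactly the choice the camera must make blind.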

In low-light, high-ISO scenes because denoising to get a cleaner picture without washing details out is very difficult, and, today, the most advanced denoising software needs more processing power and time than is available in cameras.

But maybe tomorrow this processing will be possible in-camera, just as today lens distortion, corner fall-off, chromatic aberrations and even diffraction can be corrected automatically in cameras using lens profiles that previously could only be applied in post-production.

Thus, IMO, the main difference is not the cooker but the cook:

In camera, everything is automatically processed as the manufacturer's engineers have designed it.

You can just choose to fine-tune the output settings to your taste as regards white balance, hue, saturation, contrast, sharpening, and highlight/shadow tone rendering. It is like having dinner in a restaurant: you can choose the restaurant and thus the cook, the menu, the sauces that go with it and the wine, but you are not the cook.

Moreover, you must choose the JPEG settings before shooting; it is a one-shot, no-regrets affair, as most of the raw information will be thrown away.

Unless you shoot RAW+JPEG, as the saved RAW will allow you a second try once back home.

Processing the RAW at home is like cooking your own meal, you may really customize everything to your taste and your present mood (which may vary from day to day).

And you may spend minutes or even hours running sophisticated post-production programs, starting again and trying different settings until you find the ones that best suit this specific scene in the final JPEG output.

--
Tatouzou,
https://www.flickr.com/photos/70066783@N06/
 
Outside your workflow, the final output is always an 8-bit JPEG,
Actually, PNG in most cases, if for the web; but CMYK or ProPhoto TIFF/PDF for serious printing, TIFF in the printer's profile for round-the-corner labs, Lab TIFF/PSD for a client, and so on.
or a print that has less gamut than a computer screen
Modern printers are wider than sRGB, and CMYK sheet-fed presses using glossy paper were always wider than sRGB.
as you cannot display a raw file.
But I can, of course.
Raw processors show you a JPEG preview of what the final output might be
Not JPEG, bitmap.
they cannot show the 14 bits per colour
Does that make Photoshop's "16-bit" mode an error on Adobe's part?
In camera, everything is automatically processed as the manufacturer's engineers have designed it.
I can re-process raw files with custom settings right in my cameras.
You can just choose to fine-tune the output settings to your taste as regards white balance, hue, saturation, contrast, sharpening, and highlight/shadow tone rendering. It is like having dinner in a restaurant: you can choose the restaurant and thus the cook, the menu, the sauces that go with it and the wine, but you are not the cook.
I can ask the cook to do it the way I want.
Moreover, you must choose the JPEG settings before shooting,
Not with all cameras, some allow what I said above, and it is the trend.
Unless you shoot RAW+JPEG
Raw already have an embedded JPEG. I can simply extract it if I need it.
 
Sorry, we're talking across each other. I didn't make it clear enough - my comments were intended for working with JPEG straight out of the camera, not RAW. They also apply if there's a processing step that reduces down to 8 bits, but that's unlikely in a modern workflow.
Mark, the JPEG out of the camera comes from raw data. That's all the camera can produce initially, from the CFA. The raw is processed to create that in-camera JPEG. From high bit data. To one of two possible color spaces.

https://en.wikipedia.org/wiki/Bayer_filter
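As a sketch of what "processing from the CFA" means, here is a deliberately crude bilinear demosaic of an RGGB mosaic; real in-camera pipelines use far more sophisticated interpolation than this:

```python
import numpy as np

def demosaic_bilinear(mosaic):
    """Crude bilinear demosaic of an RGGB Bayer mosaic (even-sized H x W array).
    Each output channel averages the known samples in a 3x3 neighbourhood."""
    h, w = mosaic.shape
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)
    rgb = np.zeros((h, w, 3))
    for ch, mask in enumerate((r_mask, g_mask, b_mask)):
        plane = np.pad(np.where(mask, mosaic, 0.0), 1)
        count = np.pad(mask.astype(float), 1)
        num = sum(plane[i:i + h, j:j + w] for i in range(3) for j in range(3))
        den = sum(count[i:i + h, j:j + w] for i in range(3) for j in range(3))
        rgb[..., ch] = num / den  # every 3x3 window holds each colour at least once
    return rgb

flat = demosaic_bilinear(np.full((4, 4), 0.5))  # a flat grey scene stays flat grey
```

Two-thirds of each pixel's colour is interpolated from its neighbours; everything downstream (white balance, tone curve, colour space encoding) starts from this reconstruction.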
Yes, I realize all that. But it's a moot point if you haven't set the camera to save the RAW, because the in-camera conversion to JPEG loses information.
Yes, it most certainly does; we are in agreement! But not due to the color space selected. That's my point.

The raw to sRGB or raw to Adobe RGB conversion comes from the high bit raw data. The rendering and effect of JPEG compression is a different story.
Different color spaces incur the loss in different areas, but there will be a loss.
Agreed, but again, due to the JPEG, not the selection of the color space IF the conversion takes place high bit.
I still contend that if all you have is the JPEG, and your output is intended to be sRGB, you're better off working with a sRGB JPEG than starting with an Adobe RGB JPEG and converting it to sRGB later.
I totally agree; any color space conversion produces some rounding errors and data loss. But it's moot from high bit data. If all you need is sRGB, why would you ever use anything else or convert to a larger gamut color space (now that's pointless).

But if you need sRGB and Adobe RGB (1998), you have only one option if you're not shooting raw! Set the camera for Adobe RGB (1998) so you can use the gamut potential it provides, and convert to sRGB as needed. But yes, it's not ideal because you're 'stuck' with an 8-bit per color JPEG.
 
I think we're almost in total agreement, but I want to emphasize the consequences of choosing a color space and needing to convert later.

When I talk of the loss of going to JPEG, I'm really speaking of the loss of converting from 12 or 14 bits down to 8 bits, not the additional loss introduced by the JPEG format itself. For discussion's sake, imagine that JPEG is lossless, like PNG.

With 8 bits you only get 256 values to describe each of the primary colors. If you choose Adobe RGB, those 256 levels describe a greater range of color. I'm not sure of the exact figures, but for example the range of 235-255 might be colors that simply don't exist in sRGB. That means converting from Adobe RGB to sRGB, you're going to lose those 20 colors. But the reverse is also true! There are 20 colors in sRGB that you can't get from an Adobe RGB source. They aren't all conveniently grouped at one end of the gamut, rather they're spread throughout the sRGB color space. But they do exist, and the result will be a slight loss in quality as you convert from Adobe RGB to sRGB, even if none of the colors lie outside the sRGB gamut. As I said previously, you're most likely to see it as banding in light areas of nearly constant color, such as a sky.
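Mark's quantisation argument can be made concrete with a deliberately oversimplified one-channel model. The 1.2x span is an arbitrary illustrative ratio; a real Adobe RGB to sRGB conversion involves 3x3 matrices and different tone curves, not one linear scale:

```python
# Toy model: a "wide" 8-bit channel spans 1.2x the range of a "narrow" one.
WIDE_SPAN = 1.2  # illustrative ratio only, not the real Adobe RGB/sRGB relationship

narrow_codes = set()
for c in range(256):
    x = min(c / 255 * WIDE_SPAN, 1.0)   # decode wide value, clip to narrow range
    narrow_codes.add(round(x * 255))    # re-quantise to 8 bits

missing = 256 - len(narrow_codes)
print(f"{missing} of 256 narrow codes can never be produced")
```

Because the wide encoding's steps are coarser, some narrow codes are simply skipped when converting 8-bit to 8-bit; converting in high bit before the final quantisation is what avoids this, which is the point Andrew and Iliah make below.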

Your conclusion is absolutely true. If you require both Adobe RGB and sRGB outputs, your only choice is to start with Adobe RGB and accept the consequences.

One last thing, you seem to be well versed on this subject so you might have some insight. I'm curious as to how often a camera produces colors that are outside the sRGB gamut; my gut says it wouldn't be very often. Do you have any examples of such an image that weren't artificially generated? Thanks.
 
There are 20 colors in sRGB that you can't get from an Adobe RGB source.
That's not so. sRGB gamut is fully encompassed in AdobeRGB gamut.
I'm curious as to how often a camera produces colors that are outside the sRGB gamut
The problem starts earlier than that - one can't compute a camera to sRGB transform based on a shot of a colour target, because even small 24-patch ColorChecker falls out of sRGB gamut (cyan patch is clipped in sRGB).
 
When I talk of the loss of going to JPEG, I'm really speaking of the loss of converting from 12 or 14 bits down to 8 bits, not the additional loss introduced by the JPEG format itself.
There's no such loss (I see Iliah has addressed this too).
With 8 bits you only get 256 values to describe each of the primary colors. If you choose Adobe RGB, those 256 levels describe a greater range of color. I'm not sure of the exact figures, but for example the range of 235-255 might be colors that simply don't exist in sRGB.
OK, we start with raw data. It doesn't matter whether the scene gamut exceeds sRGB or not. There's an urban legend, started here on DPR, that if the scene's color gamut, if you will, could fit into sRGB, you should always encode into sRGB rather than a larger color space, because there's supposedly some data loss or error. I tested this and produced a video to address it:

sRGB urban legend Part 1

In this 30 minute video I'll cover:

Is there benefit or harm using a wider color gamut working space than the image data?

Should sRGB image data always be encoded into sRGB?

What are RGB working spaces and how do they differ?

What is Delta-E and how we use it to evaluate color differences.

Color Accuracy: what it really means, how we measure it!

Using Photoshop to numerically and visually see color differences when using differing working spaces.

Using ColorThink Pro and BabelColor CT&A to show the effects of differing working spaces on our data, and analyzing whether using a smaller-gamut working space is necessary.

Appendix: testing methodology, how differing raw converters encode into working spaces, capturing in-camera JPEG data and color accuracy.

Low resolution (YouTube):

High resolution:
http://digitaldog.net/files/sRGBMyths.mp4

The bottom line was that the colorimetric difference between taking a very small gamut scene (white dog on snow), captured in raw, and processing that raw to either sRGB or ProPhoto RGB was insignificant! A difference well below visual perception (below a dE of 1 by a lot) between the two options.
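For readers wondering what "below a dE of 1" means in practice: the simplest Delta-E formula, CIE76, is just the Euclidean distance between two colours in L*a*b*. The Lab values below are made-up illustrative numbers, not data from the video:

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in L*a*b*.
    A dE around 1 is roughly a just-noticeable difference."""
    return math.dist(lab1, lab2)

# Two renderings of the same patch (hypothetical Lab values):
print(delta_e_76((62.0, -2.1, 5.0), (62.3, -2.0, 5.4)))  # ~0.51: below visibility
```

Later Delta-E formulas (CIE94, CIEDE2000) weight the axes perceptually, but the idea of "difference below 1 is invisible" carries over.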
That means converting from Adobe RGB to sRGB, you're going to lose those 20 colors. But the reverse is also true!
IF we're talking about the same capture and processing, no. Watch the video and let me know if those are the conditions you're referring to.

You can forget shooting raw and worrying about the color gamut of the scene or what color gamut you encode that data into, high bit, from raw. Wide color gamut scene (lots of colorful flowers) or very low color gamut scene (white dog on snow), you can encode into a very large gamut working space like ProPhoto RGB equally.
But they do exist, and the result will be a slight loss in quality as you convert from Adobe RGB to sRGB, even if none of the colors lie outside the sRGB gamut. As I said previously, you're most likely to see it as banding in light areas of nearly constant color, such as a sky.
Again, based on my testing and some peer review from others, no. But I'm always looking for a 2nd set of eyes to see if the conclusions contain any errors.
Your conclusion is absolutely true. If you require both Adobe RGB and sRGB outputs, your only choice is to start with Adobe RGB and accept the consequences.
And with raw, doesn't matter if you skip both and go directly to something larger.
One last thing, you seem to be well versed on this subject so you might have some insight. I'm curious as to how often a camera produces colors that are outside the sRGB gamut; my gut says it wouldn't be very often. Do you have any examples of such an image that weren't artificially generated? Thanks.
Yes! And it can happen quite often. Here's a video where actual images are plotted in a 3d color gamut compared to sRGB and others:

Everything you thought you wanted to know about color gamut

A pretty exhaustive 37 minute video examining the color gamut of RGB working spaces, images and output color spaces. All plotted in 2D and 3D to illustrate color gamut.

High resolution:
http://digitaldog.net/files/ColorGamut.mov

Low Res (YouTube):

Yes, some scenes easily fit into the sRGB gamut. Many don't. And it doesn't matter which; see if, when you view both videos, you come to the same conclusions I did. IF I'm correct with my colorimetric analysis :-D

--
Andrew Rodney
Author: Color Management for Photographers
The Digital Dog
http://www.digitaldog.net
 
To Iliah Borg

Sorry, my post was not aimed at professional workflows, which I know involve controlled processes, TIFF, Adobe RGB, high-quality printers and specially calibrated monitors.

My post was rather for amateurs, like the OP, who didn't understand why there was a setting in the camera to choose between sRGB and Adobe RGB; thus I didn't mention the professional workflows.

By the way, I followed the expert discussion about the conversion between Adobe RGB and sRGB. IMO, as you use exactly the same 8 bits per color (= 256 values) to code the whole available colour space, and as sRGB is included in Adobe RGB, somewhere in the colour range the sampling interval between two consecutive colours must be wider in Adobe RGB than in sRGB, which means the conversion won't be lossless.

That is why pro workflows use formats that can handle more than 8 bits, for instance 16-bit TIFF or Adobe proprietary formats; for them, JPEG is only an output format they may use at the end of the process, for instance to show to customers.

But for amateurs, all this is out of our reach, and, anyway, we cannot share or post on the web pictures in formats that most users won't be able to open, and that most photo-sharing sites like Flickr or even DPReview won't accept.

And if we post JPEGs encoded in Adobe RGB, most people, as they use monitors calibrated for sRGB, won't see them as we intended; there will be a colour shift.
 
Sorry, my post was not aimed at professional workflows
There is very little difference between pro and non-pro workflows, and certainly no difference in colour management. Wrong workflows do not help non-pros, or pros for that matter.
My post was rather for amateurs, like the OP, who didn't understand why there was a setting in the camera to choose between sRGB and Adobe RGB,
The difference is there because if one wants to upload from the camera directly to the web he sets the camera to sRGB; and for anything else he is better off with Adobe RGB.
thus I didn't mention the professional workflows.

By the way, I followed the expert discussion about the conversion between Adobe RGB and sRGB. IMO, as you use exactly the same 8 bits per color (= 256 values) to code the whole available colour space, and as sRGB is included in Adobe RGB, somewhere in the colour range the sampling interval between two consecutive colours must be wider in Adobe RGB than in sRGB, which means the conversion won't be lossless.
You are mixing up data values and colours. Data values are 256 per channel (not 255, as 0 is a value). Colours are the gamut volume, not per channel but in total. sRGB having more device values per colour is immaterial, especially if we use a proper workflow, which includes using "16 bits". Conversion from Adobe RGB to sRGB only damages the colours that were outside the sRGB gamut to begin with.
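Iliah's point, that a high-bit conversion only damages colours outside the destination gamut, can be sanity-checked in a few lines. This sketch uses commonly published D65 matrices and plain floating point as a stand-in for "high bit"; it is an illustration, not a colour-managed pipeline:

```python
import numpy as np

# RGB -> XYZ matrices (D65, commonly cited reference values)
SRGB_M = np.array([[0.4124564, 0.3575761, 0.1804375],
                   [0.2126729, 0.7151522, 0.0721750],
                   [0.0193339, 0.1191920, 0.9503041]])
ARGB_M = np.array([[0.5767309, 0.1855540, 0.1881852],
                   [0.2973769, 0.6273491, 0.0752741],
                   [0.0270343, 0.0706872, 0.9911085]])
GAMMA = 563 / 256  # Adobe RGB (1998) encoding gamma (~2.2)

def srgb_decode(v):
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def srgb_encode(v):
    v = np.clip(v, 0, 1)
    return np.where(v <= 0.0031308, v * 12.92, 1.055 * v ** (1 / 2.4) - 0.055)

rng = np.random.default_rng(0)
srgb = rng.random((1000, 3))                      # random in-gamut sRGB colours
xyz = srgb_decode(srgb) @ SRGB_M.T                # linearise, then to XYZ
argb_lin = np.clip(np.linalg.solve(ARGB_M, xyz.T).T, 0, 1)
argb = argb_lin ** (1 / GAMMA)                    # floating-point Adobe RGB
# ...and back to sRGB:
back = srgb_encode(np.linalg.solve(SRGB_M, ((argb ** GAMMA) @ ARGB_M.T).T).T)
print(np.abs(back - srgb).max())                  # far below one 8-bit step (1/255)
```

Every sRGB colour is inside the Adobe RGB gamut, so the round trip loses essentially nothing; it is only the final 8-bit quantisation, not the choice of (high-bit) working space, that throws information away.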
DPReview won't accept.
DPR accepts PNG.
And if we post JPEGs encoded in Adobe RGB, most people, as they use monitors calibrated for sRGB, won't see them as we intended; there will be a colour shift.
You can't calibrate a monitor to sRGB without a calibration instrument in your hands, and without specialized software.
 
To Iliah Borg

Sorry, my post was not aimed at professional workflows, which I know involve controlled processes, TIFF, Adobe RGB, high-quality printers and specially calibrated monitors.

My post was rather for amateurs, like the OP, who didn't understand why there was a setting in the camera to choose between sRGB and Adobe RGB; thus I didn't mention the professional workflows.
Professional or best practice workflows?
By the way, I followed the expert discussion about the conversion between Adobe RGB and sRGB. IMO, as you use exactly the same 8 bits per color (= 256 values) to code the whole available colour space, and as sRGB is included in Adobe RGB, somewhere in the colour range the sampling interval between two consecutive colours must be wider in Adobe RGB than in sRGB, which means the conversion won't be lossless.
The difference is their color gamuts, and thus the range of colors; the encoding is equal.
That is why pro workflows use formats that can handle more than 8 bits, for instance 16-bit TIFF or Adobe proprietary formats; for them, JPEG is only an output format they may use at the end of the process, for instance to show to customers.
Every capture device I've used in decades produces high bit data. All raws.
 
Every capture device I've used in decades produces high bit data. All raws.
I quite agree. What I wanted to state is that pro workflows use 16-bit TIFF or Adobe's proprietary 16-bit formats to avoid crippling the full range of the processed 14-bit raw data before the final output.

Best practice is important for industrial processes, because the industrial workflow must be reliable and always produce identical output when starting from the same input.

But this doesn't mean that photo hobbyists must all act the same way:

IMO, what a photograph shows or tells, whether documentary, true to life or purely artistic, and how it does it by means of good framing, good use of depth of field and shutter speed, and good use of colours (or B&W), is more important than the way the camera's raw data has been processed or the ultimate quality of the lens and sensor.

And, even from the technical point of view, I have seen on the web, and even here on DPReview, too many pictures processed from raw using pro software like ACR or Lightroom that are much worse than the out-of-camera JPEGs.
 
IMO, what a photograph shows or tells, whether documentary, true to life or purely artistic, and how it does it by means of good framing, good use of depth of field and shutter speed, and good use of colours (or B&W), is more important than the way the camera's raw data has been processed or the ultimate quality of the lens and sensor.
Equipment is of secondary importance, but it's like saying "let my child be healthy; that is more important than being clever." Actually, we want both, and more.

Good shooting and processing discipline may save a lot of money spent chasing better cameras and lenses.
 
Every capture device I've used in decades produces high bit data. All raws.
I quite agree. What I wanted to state is that pro workflows use 16-bit TIFF or Adobe's proprietary 16-bit formats to avoid crippling the full range of the processed 14-bit raw data before the final output.
There's really no clipping. The data is either high bit (more than 8-bits per color) and PS processes it as such, or it's 8-bits per color, which PS of course lets you convert to, from that high bit data.

Further, even Photoshop doesn't operate in 16-bit; it just calls all high-bit data "16-bit".

The high-bit representation in Photoshop has always been "15+1" bits (32767 (which is the total number of values that can be represented by 15 bits of precision) + 1).
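In other words (a trivial arithmetic check, not Adobe's actual code): 15 bits give 32768 levels, and adding one more value makes the midpoint exactly representable.

```python
levels = 2 ** 15 + 1          # Photoshop "16-bit": values 0..32768, 32769 levels
midpoint = (levels - 1) // 2
print(levels, midpoint)       # 32769 16384: an exact mid-grey; 0..65535 has none
```

A true 16-bit range of 0..65535 has no exact midpoint (it would be 32767.5), which is one commonly given reason for the 0..32768 design.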
But this doesn't mean that photo hobbyists must all act the same way:
They can and do act differently based on their needs and, more importantly, their understanding of the workflow and its effects on the data and output.

 
IMO, what a photograph shows or tells, whether documentary, true to life or purely artistic, and how it does it by means of good framing, good use of depth of field and shutter speed, and good use of colours (or B&W), is more important than the way the camera's raw data has been processed or the ultimate quality of the lens and sensor.
Equipment is of secondary importance, but it's like saying "let my child be healthy; that is more important than being clever." Actually, we want both, and more.

Good shooting and processing discipline may save a lot of money spent chasing better cameras and lenses.
This is SO true. Basically, the sensor's "picture" does not exist without a computer to process it, whether it be the one in the camera or the one on your desk, and whether you choose to set it to "JPEG" in one or the other or something else. And the camera's computer, while optimized for processing images, is still rather tough to customize, comes with an iffy display, and is unforgiving of errors.
 
There are 20 colors in sRGB that you can't get from an Adobe RGB source.
That's not so. sRGB gamut is fully encompassed in AdobeRGB gamut.
If we were talking about a source with an infinite number of bits, that would be true. But I was talking specifically about an 8-bit source with 256 expressible levels. The gamut is fully encompassed, but there are missing values in that gamut because they simply don't exist in the source after quantization.
I'm curious as to how often a camera produces colors that are outside the sRGB gamut
The problem starts earlier than that - one can't compute a camera to sRGB transform based on a shot of a colour target, because even small 24-patch ColorChecker falls out of sRGB gamut (cyan patch is clipped in sRGB).
Obviously it's possible, since every camera manages to produce sRGB output. Perhaps they calibrate using a chart with colors that are all inside the sRGB gamut?
 
There are 20 colors in sRGB that you can't get from an Adobe RGB source.
That's not so. sRGB gamut is fully encompassed in AdobeRGB gamut.
If we were talking about a source with an infinite number of bits, that would be true. But I was talking specifically about an 8-bit source with 256 expressible levels. The gamut is fully encompassed, but there are missing values in that gamut because they simply don't exist in the source after quantization.
No colours are missing. If you disagree, demonstrate.
I'm curious as to how often a camera produces colors that are outside the sRGB gamut
The problem starts earlier than that - one can't compute a camera to sRGB transform based on a shot of a colour target, because even small 24-patch ColorChecker falls out of sRGB gamut (cyan patch is clipped in sRGB).
Obviously it's possible, since every camera manages to produce sRGB output.
By cutting corners and sacrificing colour accuracy.
 
There are 20 colors in sRGB that you can't get from an Adobe RGB source.
That's not so. sRGB gamut is fully encompassed in AdobeRGB gamut.
If we were talking about a source with an infinite number of bits, that would be true. But I was talking specifically about an 8-bit source with 256 expressible levels. The gamut is fully encompassed, but there are missing values in that gamut because they simply don't exist in the source after quantization.
No colours are missing. If disagree, demonstrate.
I agree, and testing I did (that we did discuss) showed the opposite: more device colors in Adobe RGB (1998) than sRGB, after an edit on 8-bit per color data. But the edit could play a role. The gamma encoding (when different) could play a role, I suspect. Editing in 8-bit per color will do that! Don't edit in 8 bits per color :-)

What Mark might be referring to is the idea that sRGB is theoretically a half-inflated balloon with 16.7 million dots painted on it, representing its device values. Adobe RGB (1998) would be that balloon inflated, say, 1.5x larger with air. The colorimetric distance between the dots is wider due to the wider gamut. But the device values are the same, and testing (again, subject to better review and peer review) appears to indicate that editing that data, at least with a simple Levels adjustment, leaves the larger-gamut document with more, not fewer, device values.


 
