How Important Is The Histogram . . . Really?

Not sure it's truly relevant, but... I put that here, just in case.

http://www.brucelindbloom.com/index.html?LabGamutDisplayHelp.html
It's not really relevant. ALL color spaces can be mapped in reference to the Lab gamut. And Lab is based on the perception of the human observer.
EDIT: Even if it's not in Lab, it doesn't mean it's not in the raw?
It means it's not a color!
I was just asking if a camera can capture info other than colors.
Again, raw files have no colorimetric color gamut.

http://www.color-image.com/2012/08/a-digital-camera-does-not-have-a-color-gamut/

And even color spaces that have a gamut can define numbers that are not colors (we can't see them):

http://digitaldog.net/files/ColorNumbersColorGamut.pdf

R0/G255/B0 in ProPhoto RGB is NOT a color!
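A quick way to verify that claim: push pure ProPhoto green through the commonly published ProPhoto-RGB-to-XYZ (D50) matrix and look at the chromaticity. A minimal Python sketch (the matrix values are the standard reference ones; "255" is just 1.0 in linear terms):

```python
import numpy as np

# Commonly published ProPhoto RGB (D50) linear-RGB-to-XYZ matrix
M = np.array([[0.7976749, 0.1351917, 0.0313534],
              [0.2880402, 0.7118741, 0.0000857],
              [0.0000000, 0.0000000, 0.8252100]])

X, Y, Z = M @ np.array([0.0, 1.0, 0.0])   # R0/G255/B0, i.e. pure "green"
x, y = X / (X + Y + Z), Y / (X + Y + Z)
print(round(x, 4), round(y, 4))   # 0.1596 0.8404
```

That (x, y) pair sits above the spectral locus on the CIE diagram: no physical light stimulus has that chromaticity, so the encoding is a number, not a color.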
Oh, that's cool ! I'm going into another parallel dimension :D
 
I was just asking if a camera can capture info other than colors.
Yes. If I take the dust cover / UV/IR blocking filter out of any Sigma DSLR except the SD9, I can capture wavelengths of less than 380nm and of more than 730nm. Most people can't see those - although some can. Since those wavelengths are not generally regarded as colors, they do qualify as "other info".

Here's a picture captured thus. The lens had a filter on it to block visible light:

[Image: SDIM0723-SPP-mono-adj-RT.jpg]


:-D

--
"What we've got hyah is Failyah to Communicate": 'Cool Hand Luke' 1967.
Ted
 
Not sure it's truly relevant, but... I put that here, just in case.

http://www.brucelindbloom.com/index.html?LabGamutDisplayHelp.html
It's not really relevant. ALL color spaces can be mapped in reference to the Lab gamut. And Lab is based on the perception of the human observer.
EDIT: Even if it's not in Lab, it doesn't mean it's not in the raw?
It means it's not a color!
I was just asking if a camera can capture info other than colors.
Digital cameras don't have a gamut, but rather a color mixing function. Basically, a color mixing function is a mathematical representation of a measured color as a function of the three standard monochromatic RGB primaries needed to duplicate a monochromatic observed color at its measured wavelength. Cameras don't have primaries, they have spectral sensitivities, and the difference is important because a camera can capture all sorts of different primaries. Two different primaries may be captured as the same values by a camera, and the same primary may be captured as two different values (if the spectral power distributions of the primaries are different).

A camera can capture and encode, as unique values, colors that are imaginary (not visible) to us. There are colors we can see but the camera can't capture; those are imaginary to it. Most of the colors the camera can "see", we can see as well. Some cameras can even "see colors" outside the spectral locus; however, every attempt is usually made to filter those out. Most important is the fact that cameras "see colors" inside the spectral locus differently than humans do. I know of no shipping camera that meets the Luther-Ives condition, which means that cameras exhibit significant observer metamerism compared to humans.

The camera color space differs from a common working color space in that it does not have a unique one-to-one transform to and from CIE XYZ. This is because the camera has different color filters than the human eye, and thus "sees" colors differently. Any translation from camera color space to CIE XYZ space is therefore an approximation.

The point is that if you think of camera primaries you can come to many incorrect conclusions, because cameras capture spectrally. Displays, on the other hand, create colors using primaries. Primaries are defined colorimetrically, so any color space defined using primaries is colorimetric. Native (raw) camera color spaces are almost never colorimetric, and therefore cannot be defined using primaries. The measured pixel values don't even produce a gamut until they're mapped into a particular RGB space. Before then, *all* colors are (by definition) possible.
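The observer-metamerism point can be made concrete with a toy numerical sketch. The Gaussian "sensitivities" below are invented for illustration (they are not data for any real camera, nor the CIE observer): two spectra that integrate to identical values through one set of curves integrate to different values through another.

```python
import numpy as np

# Wavelength grid, 400-700 nm in 5 nm steps
wl = np.arange(400.0, 701.0, 5.0)

def band(center, width):
    """Hypothetical Gaussian spectral sensitivity (illustrative only)."""
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

camera = np.stack([band(600, 40), band(540, 40), band(460, 40)])    # 3 x N
observer = np.stack([band(610, 25), band(550, 25), band(450, 25)])  # a different trichromat

# Build a spectral difference the camera integrates to exactly zero:
# take a narrow spike and subtract its projection onto the camera's row space.
spike = band(440, 10)
proj = camera.T @ np.linalg.solve(camera @ camera.T, camera @ spike)
invisible_to_camera = spike - proj

spd1 = np.ones_like(wl)                   # flat spectrum
spd2 = spd1 + invisible_to_camera         # a metamer -- for this camera only

same_for_camera = bool(np.allclose(camera @ spd1, camera @ spd2, atol=1e-8))
same_for_observer = bool(np.allclose(observer @ spd1, observer @ spd2, atol=1e-3))
print(same_for_camera, same_for_observer)   # True False
```

Any sensor whose curves are not a linear combination of the observer's (i.e. any camera failing the Luther-Ives condition) admits such pairs, in both directions.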
Again, raw files have no colorimetric color gamut.

http://www.color-image.com/2012/08/a-digital-camera-does-not-have-a-color-gamut/

And even color spaces that have a gamut can define numbers that are not colors (we can't see them):

http://digitaldog.net/files/ColorNumbersColorGamut.pdf

R0/G255/B0 in ProPhoto RGB is NOT a color!
Oh, that's cool ! I'm going into another parallel dimension :D
Read the URL (PDF): device values are not necessarily colors!

That's how people claim they can create 16.7 million colors (WRONG), when we humans can't see that many. An encoding of numbers isn't the same as colors we can see.
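For the record, the "16.7 million" is just the count of 8-bit encodings, three channels of 256 code values each:

```python
codes = 256 ** 3   # 8-bit RGB encodings per pixel: numbers, not visible colors
print(codes)       # 16777216
```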
 
I can just say that there is no clipped channel in the raw.
You can be told if there is clipping, yes, due to overexposure. You can't be told if there is color clipping; there's no color, per se, yet.
And may I add that if there are no clipped raw channels - and even a good bit of headroom - the Foveon conversion matrices have pretty big coefficients including the off-diagonal ones. And I'm talking camera-to-XYZ - an even bigger space than ProPhoto as of course you know.

Sensor Inputs refers to bottom, middle and top layer signals.

An early matrix for the SD9 - later ones vary.

For example, if I put equal layer exposures, with 1 EV or so of headroom, into the Sigma SD14 cam-to-XYZ matrix, I would naively expect to get something near a white point out of the matrix. But no, it gives XYZ = 0.55, 0.39, 0.91, which transforms to an x,y chromaticity of 0.3, 0.21 - placing the white point nowhere near neutral; more a light purple, I reckon.
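For anyone checking the arithmetic, x,y chromaticity is just the usual normalization of the XYZ tristimulus values:

```python
# Chromaticity from the XYZ values quoted above
X, Y, Z = 0.55, 0.39, 0.91
total = X + Y + Z
x, y = X / total, Y / total
print(round(x, 2), round(y, 2))   # 0.3 0.21
```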

Main point, since I digressed, is that lack of clipping in the raw is no guarantee that the RGB will not be clipped.
Further, the rendering settings in a raw converter play a huge role here! Take a raw capture of a scene that itself has a very wide color gamut. Yank down Saturation (or Vibrance) and it can easily be encoded into sRGB without any color clipping.
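A rough numeric illustration of that point, in Python. The XYZ-to-linear-sRGB matrix is the standard D65 one, and chromatic adaptation is ignored, so treat this as a sketch rather than a colorimetric pipeline: a very saturated green lands outside sRGB, while a desaturated (and slightly darkened) version of it fits.

```python
import numpy as np

# Standard XYZ (D65) -> linear sRGB matrix
M = np.array([[ 3.2406, -1.5372, -0.4986],
              [-0.9689,  1.8758,  0.0415],
              [ 0.0557, -0.2040,  1.0570]])

saturated = np.array([0.135, 0.712, 0.0])    # XYZ of a very saturated green
white = np.array([0.9505, 1.0000, 1.0890])   # XYZ of D65 white
desaturated = 0.2 * saturated + 0.6 * white  # pulled toward neutral, darkened

print(M @ saturated)     # negative channels: out of sRGB gamut, would clip
print(M @ desaturated)   # all channels in [0, 1]: encodes without clipping
```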
But if you,"yank" down the saturation, won't your photo look dull?
--
Andrew Rodney
Author: Color Management for Photographers
The Digital Dog
http://www.digitaldog.net


--
Scott Barton Kennelly
 
I can just say that there is no clipped channel in the raw.
You can be told if there is clipping, yes, due to overexposure. You can't be told if there is color clipping; there's no color, per se, yet.
And may I add that if there are no clipped raw channels - and even a good bit of headroom - the Foveon conversion matrices have pretty big coefficients including the off-diagonal ones. And I'm talking camera-to-XYZ - an even bigger space than ProPhoto as of course you know.

Sensor Inputs refers to bottom, middle and top layer signals.

An early matrix for the SD9 - later ones vary.

For example, if I put equal layer exposures, with 1 EV or so of headroom, into the Sigma SD14 cam-to-XYZ matrix, I would naively expect to get something near a white point out of the matrix. But no, it gives XYZ = 0.55, 0.39, 0.91, which transforms to an x,y chromaticity of 0.3, 0.21 - placing the white point nowhere near neutral; more a light purple, I reckon.

Main point, since I digressed, is that lack of clipping in the raw is no guarantee that the RGB will not be clipped.
Further, the rendering settings in a raw converter play a huge role here! Take a raw capture of a scene that itself has a very wide color gamut. Yank down Saturation (or Vibrance) and it can easily be encoded into sRGB without any color clipping.
But if you,"yank" down the saturation, won't your photo look dull?
Indeed! Well desaturated. So encode in a big, honking wide gamut color space like ProPhoto RGB.
--
Andrew Rodney
Author: Color Management for Photographers
The Digital Dog
 
I was just asking if a camera can capture info other than colors.
Yes. If I take the dust cover / UV/IR blocking filter out of any Sigma DSLR except the SD9, I can capture wavelengths of less than 380nm and of more than 730nm. Most people can't see those - although some can. Since those wavelengths are not generally regarded as colors, they do qualify as "other info".

Here's a picture captured thus. The lens had a filter on it to block visible light:
Yes, but are the values not mapped (converted) into a color space and numbers that are visible?

No question there are 'colors' (quotes on purpose) a camera can detect and record that we can't see. And then we can see them, once they are converted into numbers we can plot that fall within human vision.
 
It would not be a color. If you can't see it, it's not a color.
Really ?
Indeed!

Color is a perceptual property. So if you can't see it, it's not a color. Color is not a particular wavelength of light. It is a cognitive perception: the excitation of photoreceptors, followed by retinal processing, ending in our visual cortex, within our brains. As such, colors are defined based on perceptual experiments.
Oh well, in that case, the two circles below truly are different colours, because perceptual experiments say so.

[Image: Color-Adapting-optical-illusion-9.jpg]


Hmmm?
 
It would not be a color. If you can't see it, it's not a color.
Really ?
Indeed!

Color is a perceptual property. So if you can't see it, it's not a color. Color is not a particular wavelength of light. It is a cognitive perception: the excitation of photoreceptors, followed by retinal processing, ending in our visual cortex, within our brains. As such, colors are defined based on perceptual experiments.
Oh well, in that case, the two circles below truly are different colours, because perceptual experiments say so.

[Image: Color-Adapting-optical-illusion-9.jpg]


Hmmm?
Indeed! Take a spectrophotometer and you'll get something quite different from what we perceive. That's why eyeball calibration is iffy at best!

Colorimetry is about color perception. It is not about color appearance. It's not designed for imagery at all. It's all based on solid colors in very specific ambient and surround conditions.



--
Andrew Rodney
Author: Color Management for Photographers
The Digital Dog
http://www.digitaldog.net
 
It would not be a color. If you can't see it, it's not a color.
Really ?
Indeed!

Color is a perceptual property. So if you can't see it, it's not a color. Color is not a particular wavelength of light. It is a cognitive perception: the excitation of photoreceptors, followed by retinal processing, ending in our visual cortex, within our brains. As such, colors are defined based on perceptual experiments.
Oh well, in that case, the two circles below truly are different colours, because perceptual experiments say so.

[Image: Color-Adapting-optical-illusion-9.jpg]


Hmmm?
Indeed! Take a spectrophotometer and you'll get something quite different from what we perceive. That's why eyeball calibration is iffy at best!

Colorimetry is about color perception. It is not about color appearance.
You are invited to clarify the statement in bold.
 
I can just say that there is no clipped channel in the raw.
You can be told if there is clipping, yes, due to overexposure. You can't be told if there is color clipping; there's no color, per se, yet.
And may I add that if there are no clipped raw channels - and even a good bit of headroom - the Foveon conversion matrices have pretty big coefficients including the off-diagonal ones. And I'm talking camera-to-XYZ - an even bigger space than ProPhoto as of course you know.

Sensor Inputs refers to bottom, middle and top layer signals.

An early matrix for the SD9 - later ones vary.

For example, if I put equal layer exposures, with 1 EV or so of headroom, into the Sigma SD14 cam-to-XYZ matrix, I would naively expect to get something near a white point out of the matrix. But no, it gives XYZ = 0.55, 0.39, 0.91, which transforms to an x,y chromaticity of 0.3, 0.21 - placing the white point nowhere near neutral; more a light purple, I reckon.

Main point, since I digressed, is that lack of clipping in the raw is no guarantee that the RGB will not be clipped.
Further, the rendering settings in a raw converter play a huge role here! Take a raw capture of a scene that itself has a very wide color gamut. Yank down Saturation (or Vibrance) and it can easily be encoded into sRGB without any color clipping.
But if you,"yank" down the saturation, won't your photo look dull?
Indeed! Well desaturated.
It seems that elements of exaggeration are creeping in which might be taken literally by your respondent.
So encode in a big, honking wide gamut color space like ProPhoto RGB.
As far as I know, SPP has always encoded raw data into a wide-gamut working file, unaffected by the user's choice of working color space. I may have that wrong, as can sometimes occur :-(

My reason for commenting is that someone might take your advice as meaning "always set the working color space to ProPhoto", which I hope was not meant.

--
"What we've got hyah is Failyah to Communicate": 'Cool Hand Luke' 1967.
Ted
 
It would not be a color. If you can't see it, it's not a color.
Really ?
Indeed!

Color is a perceptual property. So if you can't see it, it's not a color. Color is not a particular wavelength of light. It is a cognitive perception: the excitation of photoreceptors, followed by retinal processing, ending in our visual cortex, within our brains. As such, colors are defined based on perceptual experiments.
Oh well, in that case, the two circles below truly are different colours, because perceptual experiments say so.

[Image: Color-Adapting-optical-illusion-9.jpg]


Hmmm?
Indeed! Take a spectrophotometer and you'll get something quite different from what we perceive. That's why eyeball calibration is iffy at best!

Colorimetry is about color perception. It is not about color appearance.
You are invited to clarify the statement in bold.
What is to clarify?

Colorimetry and dE testing are about color perception. They are not about color appearance. The reason why viewing a print is more valid than measuring it is because measurement is about comparing solid colors. Color appearance is about evaluating images and color in context, which measurement devices can't provide. Colorimetry is based on solid colors in very specific ambient and surround conditions.

Do get a copy of Fairchild's book on color appearance models.


--
Andrew Rodney
Author: Color Management for Photographers
The Digital Dog
 
I can just say that there is no clipped channel in the raw.
You can be told if there is clipping, yes, due to overexposure. You can't be told if there is color clipping; there's no color, per se, yet.
And may I add that if there are no clipped raw channels - and even a good bit of headroom - the Foveon conversion matrices have pretty big coefficients including the off-diagonal ones. And I'm talking camera-to-XYZ - an even bigger space than ProPhoto as of course you know.

Sensor Inputs refers to bottom, middle and top layer signals.

An early matrix for the SD9 - later ones vary.

For example, if I put equal layer exposures, with 1 EV or so of headroom, into the Sigma SD14 cam-to-XYZ matrix, I would naively expect to get something near a white point out of the matrix. But no, it gives XYZ = 0.55, 0.39, 0.91, which transforms to an x,y chromaticity of 0.3, 0.21 - placing the white point nowhere near neutral; more a light purple, I reckon.

Main point, since I digressed, is that lack of clipping in the raw is no guarantee that the RGB will not be clipped.
Further, the rendering settings in a raw converter play a huge role here! Take a raw capture of a scene that itself has a very wide color gamut. Yank down Saturation (or Vibrance) and it can easily be encoded into sRGB without any color clipping.
But if you,"yank" down the saturation, won't your photo look dull?
Indeed! Well desaturated.
It seems that elements of exaggeration are creeping in which might be taken literally by your respondent.
If so, they should ask for clarification.
So encode in a big, honking wide gamut color space like ProPhoto RGB.
As far as I know, SPP has always encoded raw data into a wide-gamut working file, unaffected by the user's choice of working color space. I may have that wrong, as can sometimes occur :-(
The underlying processing color space? I'd hope so. But the user has, I would expect, options to save that into various RGB working spaces, and we both know the color gamut options are vastly different in size! Unless you're telling me SPP can ONLY save the rendered data into ProPhoto RGB?
My reason for commenting is that someone might take your advice as meaning "always set the working color space to ProPhoto", which I hope was not meant.
No, it most certainly was, for master images from raw data. The highest (native) resolution, highest bit depth, widest gamut. From there, producing iterations with less resolution and color gamut (an sRGB JPEG for the web) is fine.

There are no issues encoding raw into the larger color gamut and spinning off copies as needed. There are disadvantages in encoding many images into a smaller gamut color space from the get-go: color clipping. Why would anyone do that?

--
Andrew Rodney
Author: Color Management for Photographers
The Digital Dog
http://www.digitaldog.net
 
I was just asking if a camera can capture info other than colors.
Yes. If I take the dust cover / UV/IR blocking filter out of any Sigma DSLR except the SD9, I can capture wavelengths of less than 380nm and of more than 730nm. Most people can't see those - although some can. Since those wavelengths are not generally regarded as colors, they do qualify as "other info".

Here's a picture captured thus. The lens had a filter on it to block visible light:
Yes, but are the values [not] mapped (converted) into a color space and numbers that are visible?
Obviously. My main response was to Maurice's question; the picture merely shows that the camera captured invisible wavelengths. How I made them visible on-screen is completely irrelevant to Maurice's question.
No question there are 'colors' (quotes on purpose) a camera can detect and record that we can't see.
Obviously.
And then we can see them, once they are converted into numbers we can plot that fall within human vision.
Obviously.
 
I was just asking if a camera can capture info other than colors.
Yes. If I take the dust cover / UV/IR blocking filter out of any Sigma DSLR except the SD9, I can capture wavelengths of less than 380nm and of more than 730nm. Most people can't see those - although some can. Since those wavelengths are not generally regarded as colors, they do qualify as "other info".

Here's a picture captured thus. The lens had a filter on it to block visible light:
Yes, but are the values [not] mapped (converted) into a color space and numbers that are visible?
Obviously. My main response was to Maurice's question; the picture merely shows that the camera captured invisible wavelengths. How I made them visible on-screen is completely irrelevant to Maurice's question.
No question there are 'colors' (quotes on purpose) a camera can detect and record that we can't see.
Obviously.
And then we can see them, once they are converted into numbers we can plot that fall within human vision.
Obviously.
Good, and indeed obvious to both of us.

I wanted to ensure elements of exaggeration were not creeping in which might be taken literally by your respondent. :-)

We are in agreement now, and so should others be...
 
I can just say that there is no clipped channel in the raw.
You can be told if there is clipping, yes, due to overexposure. You can't be told if there is color clipping; there's no color, per se, yet.
And may I add that if there are no clipped raw channels - and even a good bit of headroom - the Foveon conversion matrices have pretty big coefficients including the off-diagonal ones. And I'm talking camera-to-XYZ - an even bigger space than ProPhoto as of course you know.

Sensor Inputs refers to bottom, middle and top layer signals.

An early matrix for the SD9 - later ones vary.

For example, if I put equal layer exposures, with 1 EV or so of headroom, into the Sigma SD14 cam-to-XYZ matrix, I would naively expect to get something near a white point out of the matrix. But no, it gives XYZ = 0.55, 0.39, 0.91, which transforms to an x,y chromaticity of 0.3, 0.21 - placing the white point nowhere near neutral; more a light purple, I reckon.

Main point, since I digressed, is that lack of clipping in the raw is no guarantee that the RGB will not be clipped.
Further, the rendering settings in a raw converter play a huge role here! Take a raw capture of a scene that itself has a very wide color gamut. Yank down Saturation (or Vibrance) and it can easily be encoded into sRGB without any color clipping.
But if you,"yank" down the saturation, won't your photo look dull?
Indeed! Well desaturated.
It seems that elements of exaggeration are creeping in which might be taken literally by your respondent.
If so, they should ask for clarification.
So encode in a big, honking wide gamut color space like ProPhoto RGB.
As far as I know, SPP has always encoded raw data into a wide-gamut working file, unaffected by the user's choice of working color space. I may have that wrong, as can sometimes occur :-(
The underlying processing color space? I'd hope so. But the user has, I would expect, options to save that into various RGB working spaces, and we both know the color gamut options are vastly different in size!
Yes, your expectation is met in SPP.
Unless you're telling me SPP can ONLY save the rendered data into ProPhoto RGB?
I have no idea why that question was asked - it borders on being deliberately obtuse!
My reason for commenting is that someone might take your advice as meaning "always set the working color space to ProPhoto", which I hope was not meant.
No, it most certainly was, for master images from raw data. The highest (native) resolution, highest bit depth, widest gamut.
Sorry, being somewhat obtuse myself, I have no idea what a "master image" is; please explain.
From there, producing iterations with less resolution and color gamut (an sRGB JPEG for the web) is fine.
Iterations?

[There are] no issues encoding raw into the larger color gamut and spinning off copies as needed. There are disadvantages in encoding many images into a smaller gamut color space from the get-go: color clipping.
I've explained color-clipping or, more properly, gamut-clipping many, many times here and also over on CiC. So, there is no real need to warn of the dangers of going from a large gamut to a small one.
Why would anyone do that?
Here, I suspect a 'Failyah to Communicate':

Suppose someone always views on an sRGB monitor and never prints anything. Suppose that someone has no intention of ever buying a so-called wide-gamut monitor and has no intention of buying a fancy printer to sell prints or hang stuff on a wall.

(Usually, at this point, it will be said that this someone is not a Real Photographer whatever that means.)

Be that as it may, why would that someone use any working color-space other than sRGB? Best not mention round trips - SPP re-does every adjustment from scratch whenever an adjustment is made. People hate that and whine about it being "slow" . . .

--
"What we've got hyah is Failyah to Communicate": 'Cool Hand Luke' 1967.
Ted
 
I can just say that there is no clipped channel in the raw.
You can be told if there is clipping, yes, due to overexposure. You can't be told if there is color clipping; there's no color, per se, yet.
And may I add that if there are no clipped raw channels - and even a good bit of headroom - the Foveon conversion matrices have pretty big coefficients including the off-diagonal ones. And I'm talking camera-to-XYZ - an even bigger space than ProPhoto as of course you know.

Sensor Inputs refers to bottom, middle and top layer signals.

An early matrix for the SD9 - later ones vary.

For example, if I put equal layer exposures, with 1 EV or so of headroom, into the Sigma SD14 cam-to-XYZ matrix, I would naively expect to get something near a white point out of the matrix. But no, it gives XYZ = 0.55, 0.39, 0.91, which transforms to an x,y chromaticity of 0.3, 0.21 - placing the white point nowhere near neutral; more a light purple, I reckon.

Main point, since I digressed, is that lack of clipping in the raw is no guarantee that the RGB will not be clipped.
Further, the rendering settings in a raw converter play a huge role here! Take a raw capture of a scene that itself has a very wide color gamut. Yank down Saturation (or Vibrance) and it can easily be encoded into sRGB without any color clipping.
But if you,"yank" down the saturation, won't your photo look dull?
Indeed! Well desaturated.
It seems that elements of exaggeration are creeping in which might be taken literally by your respondent.
If so, they should ask for clarification.
So encode in a big, honking wide gamut color space like ProPhoto RGB.
As far as I know, SPP has always encoded raw data into a wide-gamut working file, unaffected by the user's choice of working color space. I may have that wrong, as can sometimes occur :-(
The underlying processing color space? I'd hope so. But the user has, I would expect, options to save that into various RGB working spaces, and we both know the color gamut options are vastly different in size!
Yes, your expectation is met in SPP.
Unless you're telling me SPP can ONLY save the rendered data into ProPhoto RGB?
I have no idea why that question was asked - it borders on being deliberately obtuse!
Not deliberately! The underlying color space used for raw processing, which isn't something many companies tell us about, is a colorimetric color space with a defined color gamut.

Adobe is very up front about this: the underlying color space is ProPhoto RGB with a linear gamma encoding. For ALL processing.

Now you say that SPP has a wide gamut underlying color space. I know nothing of how this product processes the data, you appear to know. Perhaps you can share with me where they discuss the underlying color space (Is it like the ACR engine, ProPhoto RGB primaries or something else)?

Despite the underlying color space for processing, a user can encode that into any RGB working space! It could be sRGB. So again, I'm not trying to be obtuse; unless you're saying that SPP ONLY encodes from raw INTO ProPhoto RGB (and it appears that isn't the case), then nothing stops a user from clipping a boatload of colors by encoding into sRGB. Is that less obtuse?
My reason for commenting is that someone might take your advice as meaning "always set the working color space to ProPhoto", which I hope was not meant.
No, it most certainly was, for master images from raw data. The highest (native) resolution, highest bit depth, widest gamut.
Sorry, being somewhat obtuse myself, I have no idea what a "master image" is; please explain.
Sigh... you work with a raw file. You set the parametric instructions to render an image. A master file would be what you render, and ideally it would have the highest resolution, bit depth, and widest color gamut. From there, you might very well edit the heck out of it in Photoshop or similar, no? So now you've spent 5 hours working on it and you want to post an iteration to the web. You resample way down and convert to sRGB. Or use all that data to make a big print, or output to a magazine, or project on a wide gamut HDTV. Is that less obtuse?
From there, producing iterations with less resolution and color gamut (an sRGB JPEG for the web) is fine.
Iterations?
Yes of course! Read the text above.

Do you render from raw for every iteration need? The web, a huge print, etc? And then retouch every time? Good workflow if you get paid by the hour!
https://www.merriam-webster.com/dictionary/iteration
[There are] no issues encoding raw into the larger color gamut and spinning off copies as needed. There are disadvantages in encoding many images into a smaller gamut color space from the get-go: color clipping.
I've explained color-clipping or, more properly, gamut-clipping many, many times here and also over on CiC. So, there is no real need to warn of the dangers of going from a large gamut to a small one.
I'm so sorry to do so. Are you certain that every reader here has read your treatise on this subject? I'm also stating, and will back up the claim, that there are no issues in initially rendering the raw into a big color gamut, then converting, if and when need be, later on.
Why would anyone do that?
Here, I suspect a 'Failyah to Communicate':
Indeed.
Suppose someone always views on an sRGB monitor and never prints anything.
What if they do? Forever and always are firm absolutes in any workflow.
Suppose that someone has no intention of ever buying a so-called wide-gamut monitor and has no intention of buying a fancy printer to sell prints or hang stuff on a wall.
Suppose they do?
(Usually, at this point, it will be said that this someone is not a Real Photographer whatever that means.)
Those are not my words. I once was a real photographer (it's how I fed my family, shooting national ads, magazines, and annual reports); you?
Be that as it may, why would that someone use any working color-space other than sRGB?
There would be zero harm in doing so and potential harm not doing so.

What happens in the coming years when sRGB goes the way of the dodo bird and the majority of people are viewing images on a wide gamut display? Did painting yourself into a corner with your data do you any good? It isn't the end of the world, no. But it's not very forward thinking.

I didn't shoot 8x10 cameras to only put a 35mm back on them!

If your workflow ideas are sound, why not shoot everything in sRGB JPEG? Good enough, right?
Best not mention round trips - SPP re-does every adjustment from scratch whenever an adjustment is made. People hate that and whine about it being "slow" . . .
Do they or do you? These discussions would be much more useful if people only spoke for themselves and not others.

--
Andrew Rodney
Author: Color Management for Photographers
The Digital Dog
http://www.digitaldog.net
 
I can just say that there is no clipped channel in the raw.
You can be told if there is clipping, yes, due to overexposure. You can't be told if there is color clipping; there's no color, per se, yet.
And may I add that if there are no clipped raw channels - and even a good bit of headroom - the Foveon conversion matrices have pretty big coefficients including the off-diagonal ones. And I'm talking camera-to-XYZ - an even bigger space than ProPhoto as of course you know.

Sensor Inputs refers to bottom, middle and top layer signals.

An early matrix for the SD9 - later ones vary.

For example, if I put equal layer exposures, with 1 EV or so of headroom, into the Sigma SD14 cam-to-XYZ matrix, I would naively expect to get something near a white point out of the matrix. But no, it gives XYZ = 0.55, 0.39, 0.91, which transforms to an x,y chromaticity of 0.3, 0.21 - placing the white point nowhere near neutral; more a light purple, I reckon.

Main point, since I digressed, is that lack of clipping in the raw is no guarantee that the RGB will not be clipped.
Further, the rendering settings in a raw converter play a huge role here! Take a raw capture of a scene that itself has a very wide color gamut. Yank down Saturation (or Vibrance) and it can easily be encoded into sRGB without any color clipping.
But if you,"yank" down the saturation, won't your photo look dull?
Indeed! Well desaturated.
It seems that elements of exaggeration are creeping in which might be taken literally by your respondent.
If so, they should ask for clarification.
So encode in a big, honking wide gamut color space like ProPhoto RGB.
As far as I know, SPP has always encoded raw data into a wide-gamut working file, unaffected by the user's choice of working color space. I may have that wrong, as can sometimes occur :-(
The underlying processing color space? I'd hope so. But the user has, I would expect, options to save that into various RGB working spaces, and we both know those color gamut options are vastly different in size!
Yes, your expectation is met in SPP.
Unless you're telling me SPP can ONLY save the rendered data into ProPhoto RGB?
I have no idea why that question was asked - it borders on being deliberately obtuse!
Not deliberately! The underlying raw color space, which isn't something many companies tell us about, is one colorimetric color space with a defined color gamut.

Adobe is very up front about this: the underlying color space is ProPhoto RGB with a linear gamma encoding. For ALL processing.
OK. No need to shout, BTW.
Now you say that SPP has a wide gamut underlying color space. I know nothing of how this product processes the data, you appear to know. Perhaps you can share with me where they discuss the underlying color space (Is it like the ACR engine, ProPhoto RGB primaries or something else)?
I wish I could, but the exact workings of SPP are a mystery to us all, I'm afraid. I believe it to be Kodak linear ROMM but that is just an opinion.
Despite the underlying color space for processing, a user can encode that into any RGB working space!
Is that so!
It could be sRGB. So again, I'm not trying to be obtuse; unless you're saying that SPP ONLY encodes from raw INTO ProPhoto RGB (and it appears that isn't the case), nothing stops a user from clipping a boatload of colors by encoding into sRGB. Is that less obtuse?
If it is still believed that I, moi, would ever say that then I am mortally wounded.
My reason for commenting is that someone might take your advice as meaning "always set the working color space to ProPhoto", which I hope was not meant.
No, it most certainly was, for master images from raw data. The highest (native) resolution, highest bit depth, widest gamut.
Sorry, being somewhat obtuse myself, I have no idea what a "master image" is, please explain.
Sigh...
That "sigh" earns you my departure from this discussion - but I'll read on for now.
You work with a raw file. You set the parametric instructions to render an image. A master file would be what you render, and ideally it would be the highest resolution, bit depth, and widest color gamut. From there, you might very well edit the heck out of it in Photoshop or similar, no? So now you've spent 5 hours working on it and you want to post an iteration to the web. You resample way down and convert to sRGB. Or use all that data to make a big print, or output to a magazine, or project on a wide gamut HDTV. Is that less obtuse?
As in most things photographic, the word "iteration" must have a different meaning to that extant in the Real World . . so be it.
From there, producing iterations with less resolution, color gamut (sRGB JPEG for the web), fine.
Iterations?
Yes of course! Read the text above.

Do you render from raw for every iteration need? The web, a huge print, etc? And then retouch every time? Good workflow if you get paid by the hour!
What?
https://www.merriam-webster.com/dictionary/iteration
[There are] no issues encoding raw into the larger color gamut and spinning off copies as needed. There are disadvantages in encoding many images into a smaller gamut color space from the get-go: color clipping.
I've explained color-clipping or, more properly, gamut-clipping many, many times here and also over on CiC. So, there is no real need to warn of the dangers of going from a large gamut to a small one.
I'm so sorry to do so.
Are you certain that every reader here has read your treatise :-D on this subject?
And here comes another rhetorical question which, being rhetorical, needs no answer.
I'm also stating and will back up the claim that [there are] no issues in initially rendering the raw into a big color gamut. Then converting, if and when need be, later on.
Nobody here says there are issues with that, or did I miss them?
Why would anyone do that?
Here, I suspect a 'Failyah to Communicate':
Indeed.
Suppose someone always views on a sRGB monitor and never prints anything.
What if they do? Forever and always are firm absolutes in any workflow.
You just don't get it, do you?
Suppose that someone has no intention of ever buying a so-called wide-gamut monitor and has no intention of buying a fancy printer to sell prints or hang stuff on a wall.
Suppose they do?
LOL
(Usually, at this point, it will be said that this someone is not a Real Photographer whatever that means.)
Those are not my words.
Never said they were.
I once was a real photographer (it's how I feed my family shooting national ads, magazines, annual reports); you?
I've never been a professional, it's a recently acquired hobby and I'm more interested in the technical side - no masterpieces ever came out of my cameras.
Be that as it may, why would that someone use any working color-space other than sRGB?
There would be zero harm in doing so and potential harm in not doing so.

What happens in the coming years when sRGB goes the way of the dodo bird and the majority of people are viewing images on wide gamut displays?
Did painting yourself in a corner with your data do you [any] good?
Here we go again.
It isn't the end of the world no. But it's not very forward thinking.
I really don't have to consider such stuff - I'm way too old.
I didn't shoot 8x10 cameras to only put a 35mm back on them!
Gosh!
If your workflow ideas are sound, why not shoot everything in sRGB JPEG? Good enough right?
Wrong and your sarcasm is unwarranted. Also very difficult on my raw-only SD10, duh.
Best not mention round trips - SPP re-does every adjustment from scratch whenever an adjustment is made. People hate that and whine about it being "slow" . . .
Do they or do you?
For god's sake you are trying my patience . .
These discussions would be much more useful if people only spoke for themselves and not others.
As I read somewhere else recently: "I'm outta here" . . ;-)

--
"What we've got hyah is Failyah to Communicate": 'Cool Hand Luke' 1967.
Ted
 
I just shot this photo of a red flower:

As shot, exported from SPP 6.4.0 to a level 11 compressed JPEG

This is how the histogram looked, upon reviewing the image:

Crop from shot of the camera.

Here's the whole scene/shot:

This is the way the scene looked from above/behind the camera.

As you can see, I was shooting at ISO 50. The histogram makes it look like I am under-exposing the scene. If I had pushed the histogram to the right, I would probably have blown the red flower. Here is how the histogram looks in SPP, with warnings:

See how the histogram looks? How about those warnings!?!?

Here is how the photo looks after I made some adjustments, to try to reduce the warnings:

After adjusting with a -.5 saturation and switch from Sunlight to Auto white balance.

Settings, after adjustment, showing the reduced warnings.

To me the warnings are B.S. I guess sometimes they are a good indicator, but, like the histogram on the back of the camera, they don't always mean anything significant.
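A clipping warning is ultimately just arithmetic over the encoded pixels. A hedged sketch of what such a warning could compute: the fraction of pixels at (or above) the encoding maximum in each channel. The tiny "image" below is made up for illustration, not taken from the flower shot:

```python
# Count, per channel, what fraction of pixels sit at the encoding ceiling.
def clip_fractions(pixels, max_val=255):
    """pixels: list of (r, g, b) tuples. Returns clipped fraction per channel."""
    n = len(pixels)
    counts = [0, 0, 0]
    for px in pixels:
        for i, c in enumerate(px):
            if c >= max_val:
                counts[i] += 1
    return [cnt / n for cnt in counts]

# Hypothetical red-flower pixels: two of four have a pegged red channel.
flower = [(255, 40, 30), (255, 60, 50), (200, 80, 60), (180, 90, 70)]
print(clip_fractions(flower))  # [0.5, 0.0, 0.0] -- red half clipped, G/B fine
```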

Which version of the red flower photo do you like better?

Here is the raw file, just in case you would like to play with it yourself:
Why are you using ISO 50?

How did you meter this image?

I assume that this is full sunlight, so the base exposure settings for ISO 50 would be 1/50 f/16. Therefore, at f/5.6, the shutter speed should be 1/400 if I am doing my math correctly.
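The Sunny 16 arithmetic above checks out: the base shutter is roughly 1/ISO at f/16, and opening up scales the time by the square of the aperture ratio to hold exposure constant. A small Python sketch of that reasoning:

```python
# Sunny 16: shutter ~ 1/ISO at f/16; scale by (N/16)^2 for other apertures.
def sunny16_shutter(iso, f_number):
    """Shutter time in seconds for full sun at the given ISO and aperture."""
    base = 1.0 / iso                    # shutter at f/16
    return base * (f_number / 16.0) ** 2

t = sunny16_shutter(50, 5.6)
print(round(1.0 / t))  # 408, i.e. about 1/400 s, matching the math above
```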

Then we have the old base ISO for any particular imager. If I recall correctly, and Ted can correct this, the base here is ISO 200. That means that is where everything works best, so to speak. (It could be ISO 100, but recall is a shifting result for me these days.)

And then there is the question of how you metered. In this image, you almost need a spot meter or to use that in the SD1. And you would meter the green foliage with a similar lighting. The old adage is green is gray, and you are metering for ND gray with a spot meter.

--
Laurence
laurence dot matson at gmail dot com


-----
"All of the erroneous aspects of analog or film photography are being magnified in this digital age. Images were always poorly composed and overstatements of a message. Now, we are pounding home the crappy message more easily by increasingly sharpening, saturating, contrasting, HDRing, and so on.
"The rule of thumb for each of those setting areas is: It should be done but not seen."
Laurence Matson
-----
 
I just shot this photo of a red flower:

As shot, exported from SPP 6.4.0 to a level 11 compressed JPEG

This is how the histogram looked, upon reviewing the image:

Crop from shot of the camera.

Here's the whole scene/shot:

This is the way the scene looked from above/behind the camera.

As you can see, I was shooting at ISO 50. The histogram makes it look like I am under-exposing the scene. If I had pushed the histogram to the right, I would probably have blown the red flower. Here is how the histogram looks in SPP, with warnings:

See how the histogram looks? How about those warnings!?!?

Here is how the photo looks after I made some adjustments, to try to reduce the warnings:

After adjusting with a -.5 saturation and switch from Sunlight to Auto white balance.

Settings, after adjustment, showing the reduced warnings.

To me the warnings are B.S. I guess sometimes they are a good indicator, but like the histogram on the back of the camera, they don't always mean anything significant.

Which version of the red flower photo do you like better?

Here is the raw file, just in case you would like to play with it yourself:
Why are you using ISO 50?
I wanted to get to the true base ISO, so there would be NO noise at all, Laurence.

;)
How did you meter this image?
I didn't. I estimated. I do that all the time, and then I chimp. If the image looks right and the histogram looks right, then I figure I got the shot . . . but I often shoot another, just in case there was a breeze that made the flower move right when the shutter fired. (I was using 2 second self-timer to shoot these.)

I don't have a meter. I shoot with all manual settings and do my best to estimate what the exposure should be, because things change so quickly. I know I should get a meter and try to use it, because I'll be using a large format camera more in the near future, but I'm hoping I can figure out what settings I need to use with my film cameras based on my digital photos. I plan to shoot both - digital first, and then film.

I find that most of my sunrise photos are shot at ISO 100 and 1/60 sec. at f7.1 or f6.3. With a 4x5 or 8x10 that could be 1/30 sec. or maybe 1/15 sec. at f11 or f16. Maybe it's going to be an expensive experimental learning experience for me, but I figure I will be able to use a digital camera as a meter of sorts.
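Using a digital camera as a meter amounts to comparing exposure values: EV at ISO 100 is log2(N²/t), adjusted for ISO. A hedged sketch checking the numbers quoted above (both settings and the conclusion are illustrative, not measured):

```python
import math

# EV referenced to ISO 100: log2(N^2 / t) minus the ISO offset.
def ev100(f_number, shutter_s, iso):
    return math.log2(f_number**2 / shutter_s) - math.log2(iso / 100.0)

digital = ev100(7.1, 1/60, 100)   # the typical sunrise reading mentioned above
film    = ev100(16.0, 1/15, 100)  # the large-format guess at f/16
print(round(digital, 1), round(film, 1))  # 11.6 11.9 -- about a third of a stop apart
```

A third of a stop is well within negotiating range for negative film, so the transfer-the-reading idea is plausible as a starting point.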

;)
I assume that this is full sunlight, so the base exposure settings for ISO 50 would be 1/50 f/16. Therefore, at f/5.6, the shutter speed should be 1/400 if I am doing my math correctly.
Wouldn't the red flower have looked way too dark then?
Then we have the old base ISO for any particular imager. If I recall correctly, and Ted can correct this, the base here is ISO 200. That means that is where everything works best, so to speak. (It could be ISO 100, but recall is a shifting result for me these days.)
But the photos look way less noisy at ISO 100 or even less noisy at ISO 50, don't they Laurence?
And then there is the question of how you metered.
What does it matter how I metered? It's funny, but I took a photography course in college, and I don't remember the professor ever even mentioning using a meter. Maybe he thought there were more important things. I haven't used a spot meter since I shot with my Canon T90 about 30 years ago. It had a 3% spot meter built into the camera, which would average up to 9 different spots. It seemed very useful, except when I used it. I found that it didn't seem to help to any great degree, but maybe I wasn't that advanced yet. The center-weighted average metering seemed to yield better results for me. Today I don't use any metering at all. Instead I have a tendency to set the camera the way I expect the exposure should be, and then I shoot, chimp, and adjust. With my Sony cameras I don't even have to do that. Instead I can see how the shot will look (to a degree) before I even press the shutter, and I just make my adjustments before shooting. I love EVF cameras!
In this image, you almost need a spot meter or to use that in the SD1. And you would meter the green foliage with a similar lighting. The old adage is green is gray, and you are metering for ND gray with a spot meter.
But If I am trying to get the exposure of the flower perfect, I should just ignore the green parts, right?
--
Laurence
laurence dot matson at gmail dot com

http://www.pbase.com/lmatson
http://www.pbase.com/sigmadslr
http://www.howardmyerslaw.com


--
Scott Barton Kennelly
 
Here is a shot from SPP with highlight recovery on. The shutter speed was chosen to push the large peak all the way to the right of the display. The shutter was 1/8 second.

Then I pushed the exposure just a little less than half a stop, using 1/5 second in the second picture. Nothing could be done to recover the color of the wall. To me this would indicate the Quattro histogram is very good at showing what to expect.

I shot some pictures at a wedding using the histogram to make sure the wedding dress was not blown out, and it worked perfectly.



 
