Why do we still apply gamma curves to linear data?

Olcay12
As far as I know, both CMOS camera sensors and LCD screens have near-linear responses. So why do we still apply TRCs to scene-referred linear data? Why do we see dark images when we look at linear raw data on our LCD screens?

PS. As noted, this is not a discussion topic but a question. If you know the answer and trust your knowledge of the subject, please reply clearly and concisely and enlighten me. Please don't direct me to some wiki pages or to any other web sites. Thanks a lot ))
 
The main reasons for applying a gamma curve are perceptual uniformity and coding efficiency. With linear encoding, a given change in the encoded value produces a greater change in perceived brightness at the low end than at the high end. This means that if you are editing brightness with a slider, as in LR or ACR, the slider loses precision at the low end. A linear encoding does not appear dark if it is viewed with a linear profile.

As to coding efficiency, a linear encoding wastes bits at the high end and has too few gradations at the low end. With a gamma curve, you can encode a higher dynamic range in a given number of bits. Although you don't want a link to a web site, Greg Ward compares the scRGB standards, which use 36 bits per pixel with a gamma curve, to 48 bits per pixel with linear encoding. I don't have time to repeat what he says in his article.
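The coding-efficiency point can be sketched numerically. This is a minimal illustration, not any specific standard's curve; the 2.2 exponent and the 0.002 shadow value are assumptions chosen for the example:

```python
# Round-trip a deep-shadow luminance through an 8-bit linear encoding
# and an 8-bit gamma-2.2 encoding, then compare the relative error.
def roundtrip_8bit(lum, gamma):
    code = round((lum ** (1.0 / gamma)) * 255)  # encode to an 8-bit code
    return (code / 255) ** gamma                # decode back to linear light

dark = 0.002                                    # deep shadow, 0..1 scale
err_linear = abs(roundtrip_8bit(dark, 1.0) - dark) / dark
err_gamma = abs(roundtrip_8bit(dark, 2.2) - dark) / dark

# Linear 8-bit coding mangles the shadow value badly (nearest code is far
# away); gamma coding preserves it to within a couple of percent.
print(f"linear: {err_linear:.1%}  gamma 2.2: {err_gamma:.1%}")
```

At the high end the situation reverses much more gently, which is why the linear encoding is said to "waste" its codes there.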

 
Just to add another point to the very good answer from Bill Janes: another reason for not simply transferring data directly from sensor to viewing screen is that the screen has far lower dynamic range than most good sensors. Of course, this isn't a reason for using gamma curves, but it means that some processing is needed; a direct transfer of the data would be very restrictive.
 
Also because of a lot of misunderstandings, like "that's how we see, so we need gamma".

Gamma coding currently serves the needs of reducing bandwidth and storage space. Those are less and less of a concern.
 
Also because of a lot of misunderstandings, like "that's how we see, so we need gamma".
Our brightness perception does not have a linear relationship with scene luminance, so it is said that gamma is there to compensate for it. Do you mean that?
 
Our brightness perception does not have a linear relationship with scene luminance, so it is said that gamma is there to compensate for it. Do you mean that?
Yes. Gamma is not compensating for the non-linearities of our perception; rather, it makes use of them to save bandwidth and space.
 
... please reply clearly, concisely and enlighten me. Please don't direct me to some wiki pages or to any other web sites.
Sometimes a little reading (of knowledgeable authors) is worth the effort for understanding:

http://www.poynton.com/PDFs/Rehabilitation_of_gamma.pdf

http://www.hpaonline.com/wp-content/uploads/2015/02/Poynton-TR2015-PUDI-pp.pdf
 
I like the part where they say:

"The main purpose of gamma correction in video, desktop graphics, prepress, JPEG, and MPEG is to code luminance or tristimulus estimates (proportional to intensity) into a perceptually-uniform domain, so as to optimize perceptual performance of a limited number of bits (such as 8 or 10) in each of the RGB colour components."

That puts very strict limits on the usefulness of gamma, though not strict enough.

We use CMYK in pre-press, by the way, and there the question of gamma becomes moot.

All in all, Mr. Poynton's bias towards gamma seems to be much less now than it was 20 years ago ;)
 
Relating to (raw-recorded) digital still photography, where at least 16-bit arithmetic is common in raw-processor operations (although camera DR does not reach that, and flare/glare limits it to around 11 EV), the Poynton/Funt paper that I linked to, and that you quoted in part above, also states (near the bottom of page 13):

This paper mainly concerns quantization of each colour component into a fairly small number of bits – say 8 or 10. Where that constraint is lifted, for example where 16 bits are available per component, then perceptual uniformity is still useful, but the ratios of luminance or tristimulus values between codes are lower than the visual threshold (even if components are coded in radiometrically linear, “linear-light” manner).

It seems that the "perceptual uniformity" (PU) concerns expressed by Poynton might be most relevant where 8-bit JPEG-encoded image data is transmitted through intermediate software/hardware stages before reaching the (luminous) display device?

Regarding still images, has modern hardware evolved to be more "perceptually uniform" in its post-processing of JPEG-encoded image data?
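The quoted Poynton/Funt passage can be checked with quick arithmetic. The ~1% contrast-detection threshold used here is a common rule-of-thumb assumption, not a figure taken from the paper:

```python
# With 16-bit linear coding, the luminance ratio between adjacent codes
# is tiny even in fairly deep shadows, i.e. well below a ~1% visual
# threshold (rule-of-thumb assumption), matching the quoted passage.
FULL_SCALE = 2 ** 16 - 1          # 65535

def step_ratio(code):
    """Relative luminance step from one linear code to the next."""
    return (code + 1) / code - 1.0

mid = step_ratio(FULL_SCALE // 2)              # mid-tone: ~0.003%
shadow = step_ratio(round(0.01 * FULL_SCALE))  # 1%-of-peak shadow: ~0.15%
print(f"mid-tone step: {mid:.5%}  shadow step: {shadow:.3%}")
```

At 8 bits the same arithmetic gives a ~0.4% step at mid-tone and a ~50% step in that shadow, which is exactly where gamma coding earns its keep.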
 
Among the media used to display photographs and other visual works, paper has the lowest contrast ratio. So does this mean we have to use gamma curves for perceptual uniformity as long as paper remains a medium for every kind of visual presentation?

But I don't know about CMYK.
 
Among the media used to display photographs and other visual works, paper has the lowest contrast ratio. So does this mean we have to use gamma curves for perceptual uniformity as long as paper remains a medium for every kind of visual presentation?
I think the (effective) bit-depth of the recorded/processed image data (which can be related to scene/print/display contrast) is the issue, more so than any later-stage compression.

The specific context, meaning the composite block diagram of the whole chain of devices, is important.
 
Mostly.
As far as I know, both the CMOS camera sensors and LCD screens have near linear responses. So, why do we still apply TRCs to scene-referred linear data?
To achieve a "look".

That "look" might be the decision of the phone manufacturer, or it might be the decision of the director of the movie, sitting next to the grading expert in a darkened room with a calibrated monitor.

It's rare that we are trying to achieve a precise, "colour accurate", "tone accurate" rendition of the scene in front of the photographer, though precise rendition is important in some applications.

What the movie director is trying to achieve is a Blu-ray disc such that someone with a decent TV, who has taken the trouble to turn off a lot of the TV's tone-mapping features and is watching the movie in a darkish room, sees it with the "look" the director intended.

The mapping from the (near) linear-light data captured by the camera front-end to the H.26[245] data recorded on the disc is simply a matter of taste.

But in choosing a tone-mapping for the recorded data, the director (implicitly or explicitly) assumes some transfer function at the end user's display device.

Maybe a BT.709 gamma curve.

The idea of "colour accurate" is generally only relative to human visual perception. Most animals which can trace their ancestry back to fish have four-colour vision, and a wider gamut than primates.

You say you don't like links, but I think this is somewhat informative: http://therefractedlight.blogspot.co.uk/2014/05/color-trek.html

I think the United Federation of Planets would probably need to consider multi-spectral imaging, given their mixed-species crews.

Here's a couple of images. Which one is right? Better?

A: [image attachment]

B: [image attachment]

Both are derived from the same RAW file.

Either might be acceptable - but for different purposes. I'm quite sure there are plenty of folk on this forum who could do a better job than me on the second version.

I - as the creator of these images - can be fairly confident that the two images look dramatically different to you, the viewer. That's partly because I can assume that your viewing hardware treats both images roughly as sRGB, with sRGB gamma.
Why do we see dark images as we look at the linear raw data on our LCD screens?
Hmm. It usually takes some special effort to display linear raw data, treating it as if it were sRGB or BT.709, or ...

LCD displays usually make some - perhaps token - effort to support sRGB gamma. If your image has not been pre-processed with this in mind, you'll get an effect like image "A".
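The darkening John describes can be sketched with the sRGB transfer functions (the piecewise curves from IEC 61966-2-1; the 0.18 mid-grey is an illustrative value):

```python
# Why linear data looks dark on an sRGB display: the display decodes
# whatever it is given as if it were already sRGB-encoded, so un-encoded
# linear values are pushed down toward black.
def srgb_encode(lin):
    """sRGB forward curve (piecewise, per IEC 61966-2-1)."""
    return 12.92 * lin if lin <= 0.0031308 else 1.055 * lin ** (1 / 2.4) - 0.055

def srgb_decode(enc):
    """Inverse curve: the light the display emits for a given code value."""
    return enc / 12.92 if enc <= 0.04045 else ((enc + 0.055) / 1.055) ** 2.4

mid_grey = 0.18                       # linear scene mid-grey (illustrative)
shown_raw = srgb_decode(mid_grey)                   # sent as-is: ~0.027, very dark
shown_encoded = srgb_decode(srgb_encode(mid_grey))  # pre-encoded: back to ~0.18
print(f"raw: {shown_raw:.3f}  encoded first: {shown_encoded:.3f}")
```

The raw case is image "A" in miniature: every mid-tone lands several times darker than intended, while the pre-encoded pipeline round-trips cleanly.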
 
It seems that the "perceptual uniformity" (PU) concerns expressed by Poynton might be most relevant where 8-bit JPEG-encoded image data is transmitted through intermediate software/hardware stages before reaching the (luminous) display device?
There is one important word, "coding", that is often missing when people speak of perceptual uniformity. Gamma companding is not about achieving a perceptually uniform look; it is about achieving perceptually uniform coding. A perceptually uniform look is linear.

Another widespread dogma is that all sensors have a linear or very-close-to-linear response. A solar cell does not. We routinely use a sensor originally designed by NIT to capture images of welding arcs; it operates in photovoltaic mode and has a native logarithmic response.
 
Thanks John.

I can see two points in your post (maybe I'm missing others).

One is related to technical necessities of the past. I think the compatibility issues in the transition from CRTs to LCDs led to gamma being adopted for LCDs as well; and since it is now baked into modern technologies, producers today still need to use gamma curves.

Secondly, gamma curves help render more pleasing results to the viewer. An exact relative mapping of scene tones onto paper or a screen would not produce a pleasing or preferred result every time.
 
gamma curves help to render more pleasing results to the viewer.
No. If you look at how colour management works, you will see that all gamma transforms in the image-processing chain cancel each other out. In wet photography this was plotted through H&D curves in quadrant diagrams.

The actual purpose of the quadrant diagram was to ensure that, for a normally exposed and processed negative printed on normally exposed and processed photographic paper, the overall gamma of the print is very close to linear and compensates only for flare.

--
http://www.libraw.org/
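The mutual-cancellation point can be sketched with pure power curves. This is a simplification; real ICC profiles use piecewise curves like sRGB's, and 2.2 is just an illustrative exponent:

```python
# In a colour-managed chain the encoding gamma and the display's decoding
# gamma are inverses, so the end-to-end system response stays linear.
GAMMA = 2.2                       # illustrative power-law exponent

def encode(lin):                  # applied when the file is written
    return lin ** (1.0 / GAMMA)

def display(enc):                 # applied by the profiled display
    return enc ** GAMMA

scene = [0.05, 0.18, 0.5, 0.9]
emitted = [display(encode(v)) for v in scene]
# The two transforms cancel: what is emitted matches the scene values.
print([round(v, 6) for v in emitted])
```

Any deliberate "look" (an S-curve, a film-like shoulder) is then something added on top of this neutral round trip, not a property of the gamma encoding itself.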
 
And, secondly, gamma curves help to render more pleasing results to the viewer. The exact relative mapping of scene tones on a paper or on a screen could not create a pleasing or a preferred visual content every time.
As Iliah said, not really. I believe you are not talking about gamma curves now but about the "S-curves" we use to achieve more pleasant results. Those curves are a kind of tone mapping, used to fit a higher DR onto our screens or onto paper. The actual gamma curves are cancelled, as said above. If there were no gamma correction you would not notice it in the overall look, but it would create problems elsewhere.
 
Never. Nature is linear, our eyes/brain are expecting linear information for perceptual uniformity.
Agree, of course.

But I also think that, since our reproductions are very small compared to real scenes, the spatial workings of our eyes and field-of-view effects may influence brightness perception differently when we attend to comparatively very small areas.

Another possible guess: since reproductions have such narrow tonal ranges compared to real scenes, maybe the luminance-compression mechanism of our eyes works differently. When looking at a photograph on paper, for example, maybe the eye/brain compression does not work as it does in nature.

Those are just guesses; I don't know.

I read a book citing, as the conclusion of Kodak engineers a century ago, that "exact mapping of scene radiances is not desirable". I cannot remember the title now; I will quote it when I find it.

By the way, my English is not so good; I hope it is understandable.
 
