# 50% Gray in RGB, Lab and Gray gamma 2.2: why are they different?

Started Jan 15, 2013 | Discussions thread
 Re: Here's Why In reply to technoid, Jan 20, 2013

technoid wrote:

Vernon D Rainwater wrote:

technoid wrote:

The basic problem is that the idea of 50% gray is one of perception. It is not objective. For instance, the so-called 18% gray card has a Lab L=50. That is approximately perceived as being a middle gray, but only 18 photons out of every 100 are reflected, which is where the 18% comes from. If you look at a gray patch that reflects 50% of the light hitting it, it looks like an extremely bright gray and appears much closer to white than black, even though it is technically right in the middle.

The sRGB value that corresponds to a patch that reflects half the light hitting it is (188, 188, 188). The purpose of the gamma curve is to approximate the human perception of light, which is highly compressed.

The relatively small variations that occur between sRGB, Gamma 2.2 (which isn't the same as the sRGB tone curve), and the more complex Lab are in large part because there is no precise way to measure human perception of brightness. They are all approximations.

I really should not post in this thread due to my limited level of understanding the details.

The OP evidently listed RGB values as reported by photo-editing software such as Photoshop. However, you have discussed human perception of brightness. How are the differences among the three sets of measured RGB values related to the "human perception of brightness" you mention, given that the differences are in the measured values themselves rather than in anyone's visual impression of the three patches?

Also, I don't see any mention of the OP using an 18% gray card to create these patches, so are you effectively saying that the patches the OP created can (or do) reflect only 18% of the light, out of a possible 100%?


Vernon...

The 50% Gray fill used by the OP simply sets the luminosity to 50% of maximum in the current colorspace. Since the tone curves of his three examples differ, the 50% point on each tone curve produces a different amount of light. Similarly, any given patch of 50% gray will yield different values when converted to a colorspace with a different gamma.

Since there is no technically valid way of exactly specifying how much reflectance represents halfway between black and white, as perceived, the commonly accepted method is to simply set the value at whatever the colorspace's tone curve produces at half of maximum. For Lab colors, that happens to be 18% reflectance at L=50. For sRGB it's around 20% reflectance. For ProPhoto RGB, easily the furthest from the pack, it's about 25% reflectance.
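A rough numeric check of these figures is possible with the standard CIE L* and sRGB (IEC 61966-2-1) formulas. This is a sketch, not anyone's reference implementation; ProPhoto is left out because its tone curve also has a linear toe segment on top of its 1.8 gamma:

```python
# Sketch: how much linear light "half of maximum" represents in each
# encoding, using the standard piecewise sRGB transfer function and
# the CIE L* formula.

def srgb_encode(linear):
    """Linear light (0..1) -> sRGB-encoded value (0..1)."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

def srgb_decode(encoded):
    """sRGB-encoded value (0..1) -> linear light (0..1)."""
    if encoded <= 0.04045:
        return encoded / 12.92
    return ((encoded + 0.055) / 1.055) ** 2.4

def lab_L_to_Y(L):
    """CIE L* (0..100) -> linear luminance Y (0..1)."""
    return ((L + 16) / 116) ** 3 if L > 8 else L / 903.3

# A patch reflecting half the light encodes to ~188 in 8-bit sRGB...
print(round(srgb_encode(0.5) * 255))        # -> 188

# ...while "half of maximum" in each encoding reflects far less light:
print(f"Lab L=50:  {lab_L_to_Y(50):.1%}")   # -> 18.4%
print(f"sRGB 0.5:  {srgb_decode(0.5):.1%}") # -> 21.4%
print(f"gamma 2.2: {0.5 ** 2.2:.1%}")       # -> 21.8%
```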

An 18% gray card has an L value of 50 and is approximately the grayness of other "50%" gray patch surfaces, which is the OP's topic. The point is that, unlike measuring light magnitude, where accuracy is just a question of engineering precision, the percentage of reflected light that corresponds to what people perceive as 50% gray cannot be pinned down exactly. Some may call a card with 22% reflectance middle gray while others might say the same for 15% or 25%. Because of this variation, there is no one tone curve or gamma that exactly matches human perception. The non-linear tone curves (the gamma 2.2 that Adobe RGB uses, the one sRGB approximates, and the one Lab deviates somewhat more from) all produce slightly different luminosities at their "50%" gray settings.

Ok, many thanks for the perfect explanation, it was really appreciated.

Frankly, I suspected that this behavior was in some way related to the different gammas of the color spaces involved, but your answer regarding the 50% gray command settling on the middle of the active color space's black-white range was definitely enlightening, and it all makes perfect sense now.

Patch n. 22 of the X-rite Colorchecker is nominally rated as "neutral 5" with sRGB R:122 G:122 B:121 and L:50.867 a:-0.153 b:0.27.

A gray patch created on screen with the command fill 50% Gray using sRGB as working/viewing space gives the following results:

- created in sRGB: R=G=B = 128
- created in Adobe 1998: R=G=B = 129
- created in Gray gamma 2.2: R=G=B = 129
- created in Lab mode: R=G=B = 119

So, it is confirmed that the Adobe 1998 and Gray gamma 2.2 values match, and the X-Rite patch n.22 is slightly different from all of these, closest to the Lab one.
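The 119 and 129 readings above can be reproduced numerically. A minimal sketch, assuming the standard sRGB transfer function and a pure 2.2 power for Adobe RGB (1998), neutral grays only, chromatic adaptation ignored:

```python
# Sketch: what a 50% fill made in Lab or Adobe RGB reads back as in sRGB.

def srgb_encode(linear):
    """Linear light (0..1) -> sRGB-encoded value (0..1)."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

def lab_L_to_Y(L):
    """CIE L* (0..100) -> linear luminance Y (0..1)."""
    return ((L + 16) / 116) ** 3 if L > 8 else L / 903.3

# 50% fill in Lab mode is L=50 -> ~18.4% linear -> 119 in 8-bit sRGB.
print(round(srgb_encode(lab_L_to_Y(50)) * 255))      # -> 119

# 50% fill in Adobe 1998 is 128 -> (128/255)^2.2 linear -> 129 in sRGB.
print(round(srgb_encode((128 / 255) ** 2.2) * 255))  # -> 129
```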

The K:50% patch has L:54; the L:50 patch has K:54%. The X-Rite patch n.22 has a K:53 (and L near 51 as seen).

So, for my convenience I think a good choice could be to stick with Adobe 1998 + Gray gamma 2.2 as working spaces, mainly because they share the same gamma curve (sRGB does not perfectly match gamma 2.2, differing somewhat in the deep shadows). This choice at least allows me to easily translate the gray R=G=B values to equivalent K values without errors.

But CIE L*a*b* has a different gamma, from what I see, and I'm still a little confused about the exact relationship between K (Gray %) and the L of L*a*b*.

It is not really a simple L = 100 - K equivalence; it is a slightly "S"-shaped curve...

The Colormunki measures the L value of a gray patch.

Now, assuming the Colormunki is perfect: if I generate a gray step wedge using equally spaced K values in Gray gamma 2.2, print the wedge, measure the L values, and plot them in a graph, I was expecting (under fully ideal theoretical conditions) to see a perfectly straight decreasing 45° line from ideal white to ideal black.

But now I realize that this is not the case: due to the slight differences in gamma between Gray gamma 2.2 and CIE L*a*b*, this line should NOT be perfectly straight. It should be slightly "S"-shaped, with deviations reaching +4 (L) in the K35%-K55% zone and -4 (L) in the K90%-K95% zone. A perfect match (L = 100 - K) occurs only near K80% = L20 (and for pure black and pure white, of course).
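These deviations can be sketched numerically. Assuming Gray gamma 2.2 is a pure 2.2 power (K% of "ink" maps straight to a gray value) and using the standard CIE L* formula:

```python
# Sketch: deviation of measured L* from the naive L = 100 - K line,
# for a wedge defined by K% values in Gray gamma 2.2.

def K_to_L(K):
    gray = 1 - K / 100          # K=0 is white, K=100 is black
    Y = gray ** 2.2             # decode Gray gamma 2.2 to linear
    if Y > 0.008856:            # CIE L* from relative luminance
        return 116 * Y ** (1 / 3) - 16
    return 903.3 * Y

for K in (35, 50, 80, 90):
    L = K_to_L(K)
    print(f"K={K:>2}%  L={L:5.1f}  naive={100 - K}  dev={L - (100 - K):+.1f}")
```

For K=50% this gives L about 53.8 (matching the K:50% patch measuring L:54 mentioned above), roughly +4 around the midtones, about -4 near K=90%, and near-zero deviation around K=80%.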

So, to better understand: if my goal is to create, for example, a simple B&W correction curve for a given printer-settings/paper setup (leaving the ICC profiles out of the task), the optimal linearization should NOT be calculated from an equally spaced K% step wedge. Instead, I should create an equally spaced L step wedge, and then expect the Colormunki readings of the printed/measured wedge to form a perfectly straight decreasing line.
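Generating such an L-equally-spaced wedge amounts to inverting the conversion above: pick equally spaced target L values, convert each back to a K% in Gray gamma 2.2. A sketch under the same assumptions (pure 2.2 power, standard CIE L* inverse):

```python
# Sketch: K% values for a step wedge that is equally spaced in L*
# rather than in K.

def L_to_K(L):
    if L > 8:                        # invert CIE L* -> linear Y
        Y = ((L + 16) / 116) ** 3
    else:
        Y = L / 903.3
    gray = Y ** (1 / 2.2)            # encode linear Y in Gray gamma 2.2
    return 100 * (1 - gray)          # K% = 100% minus the gray level

# 11 patches from L=100 (white) down to L=0 (black) in equal steps.
wedge = [round(L_to_K(L), 1) for L in range(100, -1, -10)]
print(wedge)
```

Note that L_to_K(50) comes out near 53.7, consistent with the L:50 patch reading K:54% mentioned earlier.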

Is this correct, or am I still missing something?

Many thanks in advance for the attention.

Ciao
