8 Bit or 16 bit, sRGB or Pro Photo

Started Sep 11, 2019 | Discussions thread
knickerhawk Veteran Member • Posts: 7,007
Re: Here's where 30 bit monitor path can make a BIG difference

technoid wrote:

knickerhawk wrote:

There's no point in asking other questions when you haven't answered the one I asked. Again: why don't you see any difference between the 8-bit encoded version and the 16-bit encoded version using your "high bit display path"? The synthetic gradients in your test image should be the ideal opportunity to see the superiority of the version encoded at a deeper bit depth, yet you don't see any difference even at 800% apparently. Why?

I alluded earlier to the issue of 8 bit monitors that don't have internal LUTs and are dependent on the video card's LUTs. Here's a bit more detail that may explain what's going on, and also why pixelgenius and I see no effects with our 30 bit monitors. First, let's do a little math:

What's the deltaE2000 between ProPhoto RGB (30,30,30) and (31,31,31)? Turns out it is only 0.42 with a "perfect" display.

OK, now the problem with card LUTs is that they have to skip RGB steps from time to time, unless the rare (or, without internal LUTs in the display, essentially impossible) case occurs where the LUT is perfectly monotonic and never skips or duplicates values as the 8 bit RGB values run from 0 to 255.
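The skipped-code problem follows from simple pigeonhole counting: any non-trivial correction curve quantized to the same 8-bit depth as its input has to duplicate some output codes, and therefore skip others. A minimal sketch in Python, using a made-up example curve (correcting a hypothetical native gamma-2.0 panel to a gamma-2.2 target; the exact curve doesn't matter, only that it isn't the identity):

```python
# Sketch: an 8-bit graphics-card LUT applying a hypothetical gamma
# correction (native 2.0 panel -> 2.2 target). Because input and output
# are both 8 bits, the rounded curve must land two inputs on the same
# output code somewhere, and by pigeonhole other output codes get skipped.
exponent = 2.2 / 2.0
lut = [round(255 * (i / 255) ** exponent) for i in range(256)]

used = set(lut)
skipped = sorted(set(range(256)) - used)
duplicated = 256 - len(used)

print(f"output codes skipped:    {len(skipped)} (e.g. {skipped[:5]})")
print(f"output codes duplicated: {duplicated}")
```

Any image pixel that happens to hit one of the skipped codes gets quietly shoved onto a neighboring level, which is exactly the step the deltaE numbers below are measuring.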

So let's look at the case where 31 is skipped. What's the deltaE2000 between (30,30,30) and (32,32,32)? Hey, it's still under 1, coming in at 0.85.

So it still shouldn't be a problem, right?

But the problem is much worse than that. The controller card's LUTs are calibrated independently. The R, G, and B channels don't follow each other exactly.

So let's examine the case where just the G channel gets bumped up from 30 to 31. One would expect this isn't a problem. But surprise. It is.

The deltaE2000 from (30,30,30) to (30,31,30) is … wait for it … 2.38!
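The reason the single-channel bump hurts so much more is that it pushes the color off neutral, picking up a* and b* error on top of the lightness change. A rough sketch of the comparison, assuming ProPhoto's pure 1.8 gamma (the codes here sit above the ROMM linear toe) and the D50 white point, and using the simpler CIE76 delta-E (plain Euclidean distance in Lab) rather than deltaE2000, so the numbers come out different from those quoted above, but the ordering is the same:

```python
import math

# ProPhoto (ROMM) RGB -> XYZ matrix, D50 white point (rows X, Y, Z).
M = [(0.7977, 0.1352, 0.0313),
     (0.2880, 0.7119, 0.0001),
     (0.0000, 0.0000, 0.8249)]
WHITE = (0.9642, 1.0000, 0.8249)  # D50

def prophoto_to_lab(r8, g8, b8):
    # Decode with the pure 1.8 power law; these codes are above the
    # ROMM linear toe, so the toe segment is ignored in this sketch.
    lin = [(c / 255) ** 1.8 for c in (r8, g8, b8)]
    xyz = [sum(m * c for m, c in zip(row, lin)) for row in M]
    def f(t):
        return t ** (1 / 3) if t > 216 / 24389 else (24389 / 27 * t + 16) / 116
    fx, fy, fz = (f(v / w) for v, w in zip(xyz, WHITE))
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def de76(c1, c2):
    # CIE76 delta-E: Euclidean distance in Lab.
    return math.dist(prophoto_to_lab(*c1), prophoto_to_lab(*c2))

print("uniform +1 step :", round(de76((30, 30, 30), (31, 31, 31)), 2))
print("skipped to 32   :", round(de76((30, 30, 30), (32, 32, 32)), 2))
print("G channel only  :", round(de76((30, 30, 30), (30, 31, 30)), 2))
```

Even with this cruder metric, the green-only bump produces a clearly larger error than the uniform bump, because a neutral step only moves L* while the lone-channel step also moves a* and b*.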

With in-monitor calibration (which is usually 10 bits or more per channel), even an 8 bit video card doesn't produce any missing or duplicated steps, so the actual error at the monitor is usually close to half a bit. Big improvement right there. Now, make the data path a full 10 bits/channel with no missing codes and a reasonably deep monitor LUT, and the problem is solved.
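The in-monitor fix is easy to sketch too: quantize the same kind of correction curve to a hypothetical 10-bit LUT and there are 1024 output levels for 256 input codes, so every input lands on its own distinct level and nothing is skipped or doubled. The gamma values here are made up for illustration:

```python
# Sketch: the same hypothetical gamma-2.0 -> gamma-2.2 correction curve,
# quantized once to 8 bits (video card LUT) and once to 10 bits (monitor
# internal LUT). The 10-bit version keeps all 256 input codes distinct.
exponent = 2.2 / 2.0
lut8  = [round(255 * (i / 255) ** exponent) for i in range(256)]
lut10 = [round(1023 * (i / 255) ** exponent) for i in range(256)]

print("8-bit LUT, distinct outputs: ", len(set(lut8)))   # fewer than 256
print("10-bit LUT, distinct outputs:", len(set(lut10)))  # all 256 survive
```

With every code preserved, the residual error is just the half-step quantization of the deeper LUT, which is where the "close to half a bit" figure comes from.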

Conversion of 16 bit to 8 bit neutral RGBs in this kind of system always keeps the R, G, and B values equal on the black and white balls. The largest error from rounding 16 bits down to 8 is a deltaE2000 of 0.28.
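That last point is easy to check numerically: converting a 16-bit neutral to 8 bits applies the same rounding to all three equal channels, so the result stays exactly neutral, and the rounding error is bounded by half an 8-bit step. A quick sketch, assuming plain round-to-nearest rescaling for the conversion:

```python
# Sketch: 16-bit -> 8-bit conversion by round-to-nearest rescaling.
# Equal channels get identical treatment, so neutrals stay neutral,
# and the worst-case error is under half an 8-bit step.
def to8(v16):
    return round(v16 * 255 / 65535)

worst = max(abs(v16 / 65535 - to8(v16) / 255) for v16 in range(65536))
print("worst-case rounding error:", worst * 255, "of one 8-bit step")  # <= 0.5
```

That sub-half-step error is what keeps the deltaE2000 of the 16-to-8 conversion itself down in the 0.3 range; the visible banding has to come from somewhere later in the pipeline.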

This is why pixelgenius and I don't see any visual changes.

Thanks so much for the detailed and well-written explanation. The key here for me is the surprisingly low deltaE for going from 16 to 8 bits. Looking at my monitor and seeing the obvious differences in synthetic ramps always led me to blame the quantization error itself, rather than something post-conversion in the output path to the monitor. One thing I'm still trying to wrap my head around is why the hiccup in the graphics card LUT would visibly affect the 8 bit encoded version so obviously but not the 16 bit version, even though the 16 bit version will of course trip over the same bad RGB value(s) in the LUT. Thanks again. This has been helpful.
