8 Bit or 16 Bit, sRGB or ProPhoto

Started Sep 11, 2019 | Discussions thread
knickerhawk Veteran Member • Posts: 7,012
Re: The difference is ...

pixelgenius wrote:

knickerhawk wrote:

pixelgenius wrote:

knickerhawk wrote:

pixelgenius wrote:

technoid wrote:

Ken60 wrote:

Sure, despite all this pixel (or dot) peeping... I truly think the 16 bit workflow all the way to the printer allows the ability to keep the image on screen in Photoshop and make little changes, aesthetic adjustments, and go right out to print without having to reduce the bit depth.

Another little note from this set of prints is the blue ball. If you peek at the first set of charts I posted, look at the top right blue ball, specifically at the outer area of darker tones... horrid in the 16 bit ProPhoto and quite clean in 8 bit sRGB! So much for the dedicated blue cartridge of the Pro 1000.

Anyone care to say which black ball, four posts or so back, is the 8 bit Pro and which the 16 bit Pro?

Hi Ken,

Just got some time and loaded Gamut_test_file_flat.tif in PS. It's 16 bit, ProPhoto RGB. I deselected Edit->Color Settings->Use Dither.

I duplicated it, then selected Window->Arrange->Match All so the two tabs are in the same position. This makes it easy to switch back and forth from one image to the other and see even the most subtle changes. Much better than side by side.

Then I converted the duplicate to 8 bit using Image->Mode->8 Bits/Channel. It's still in ProPhoto RGB, of course.

Switching tabs back and forth between the 16 bit and 8 bit versions, I see no difference at all. Not even subtle changes. Just nothing at all.
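For readers reproducing this outside Photoshop: the mode change technoid describes is, in essence, a per-channel quantization. A minimal numpy sketch (the ramp here is hypothetical and stands in for the gradients in the test file, not the actual Gamut_test_file_flat.tif data) shows what dropping to 8 bits costs in distinct code values:

```python
import numpy as np

# Synthetic horizontal ramp, 0.0 -> 1.0, standing in for one of the gradients.
width = 4096
ramp = np.linspace(0.0, 1.0, width)

# Encode at 16 bits and at 8 bits per channel, round-to-nearest with no
# dithering (matching the disabled "Use Dither" setting described above).
ramp16 = np.round(ramp * 65535).astype(np.uint16)
ramp8 = np.round(ramp * 255).astype(np.uint8)

print(len(np.unique(ramp16)))  # 4096 -- every column keeps its own code
print(len(np.unique(ramp8)))   # 256  -- columns collapse into wider steps
```

Fewer distinct codes means wider flat bands in the ramp; whether those bands are *visible* on screen is the question the rest of the thread argues about.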

The problem with truncating is outlined below....

Ummmm...I responded to your post, not technoid's.

And we (technoid and I) both stated the same observations. Take it to the bank, if you can accept facts.

YOU were the one who decided to truncate technoid's post when you responded halfway into his post and then deleted the remainder of it.

I answered your question and added support to my observations from another poster using ideal display equipment you don't have.

I'm seeing exactly what you report as well. I'm running a full 10-bit video path, using a SpectraView. I looked at all the balls and the gradients while at 100% in PS; they are visually identical, FWIW.

At what scaling are you doing the comparison?

READ the sentence before asking questions that were outlined.

I just want to confirm that you only looked at the image at 100% and not scaled to any larger sizes.

Read specifically what I wrote. What don't you understand about: "I looked at all balls and the gradients while at 100% in PS, they are visually identical FWIW"?

I wouldn't expect to see anything either at 100% (not for a steep gradient like these).

There's a lot about this topic you don't understand, or have the equipment (a high bit display path) to experience. Why do you continue to assume about stuff you haven't experienced? Kind of a waste of your time posting, and of our time having to provide the facts about how this actually works on a high bit display system with both high bit and non high bit image data.

At larger sizes, however, I'd expect the 8-bit banding to become increasingly visible (and it does on my antiquated 8-bit iMac).
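The arithmetic behind that expectation is straightforward (illustrative numbers only; the actual pixel dimensions of the balls in the test file aren't stated in this thread):

```python
# An 8-bit ramp spanning a hypothetical 1024-pixel-wide gradient can use at
# most 256 distinct levels, so each flat band covers ~4 screen pixels at 100%.
width_px = 1024
levels = 256
band_at_100 = width_px / levels
print(band_at_100)         # 4.0 pixels per band at 100% view

# At 800% zoom (the magnification mentioned later in the thread), each band
# is stretched 8x on screen, making its edges far easier to resolve.
zoom = 8
print(band_at_100 * zoom)  # 32.0 pixels per band at 800%
```

A 4-pixel band is near the limit of what the eye fuses into a smooth gradient; a 32-pixel band generally is not, which is consistent with banding becoming visible only when the image is scaled up.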

Unless you've experienced it, and you haven't, you're just assuming again.

I believe that's what Ken60 is seeing too when he suggests examining the balls big enough to fill half the screen, but I don't know what system he's using.

Yes, you don't know. Nor what kind of display system he's using. Assume. Nor do you have the ability by your own admission, like technoid and I, to properly view image data on-screen. So you assume. Done?

Doesn't it seem odd to you that taking a synthetically generated gradient from 16 bit to 8 bit (without dithering either) is producing no visual difference at all?
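One plausible (though unconfirmed in this thread) explanation: if the final display stage quantizes to 8 bits per channel, the two files collapse to identical on-screen codes, so no difference *can* appear. A numpy sketch, assuming simple round-to-nearest at each stage, which may not match Photoshop's exact internal math:

```python
import numpy as np

# Every possible 16-bit code value.
v16 = np.arange(65536, dtype=np.uint32)

# Path A: convert the file 16 -> 8 bit first (no dithering), then display.
path_a = np.round(v16 * 255 / 65535).astype(np.uint8)

# Path B: keep the file at 16 bits and let a final 8-bit display stage
# quantize it. Note 255/65535 == 1/257, so both paths compute round(v/257).
path_b = np.round(v16 / 65535 * 255).astype(np.uint8)

print(np.array_equal(path_a, path_b))  # True
```

Under this assumption the two previews are identical by construction. The open question, argued below, is why a genuinely 10-bit display path, which could render the 16-bit file with four times as many levels, would also show no difference.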

Not at all. Why don't you ask technoid the same question and you'll likely get the same answer. Wait, you don't have to because....

Of course, you didn't read this did you, but you decided to truncate it:

technoid wrote: So if you are seeing changes on your monitor from 16 bit to 8 bit mode something in your system setup is messed up.

As noted above, you were the one who did the truncating of technoid's post, so give yourself the stern scolding, not me.

I made it quite clear that what technoid reported matched what I saw. The two bit depths produce identical on-screen previews. And viewing it again (because you will ask) at 800 percent, just like technoid! And as you asked, we BOTH expect this, and guess what, that's exactly what both of us have seen and reported.

I have two questions for you (and technoid as well if he's interested):

  1. Why should we conclude that something is "messed up" on a system that renders pure synthetic ramps encoded at 16 bits with less visible banding than the same ramps encoded at 8 bits?

Since technoid made the statement about 'messed up', ask him. Or assume, as you appear to like to do here. Since you don't seem to understand the benefits of viewing via a high bit display path like technoid and I do, you probably don't understand or care about viewing the data ideally. technoid and I do.

My understanding's just fine; it's my computer/monitor/graphics card budget that's not ideal. Come to think of it, though, if viewing the 16-bit encoded version and the 8-bit encoded version of your test image on a correctly configured 10-bit system (which I'm sure yours is) yields no visible difference, then maybe I should just save my pennies for something else...

  2. Why have you invested in your high end full 10-bit graphics/display solution if it doesn't result in any visible difference between the 8-bit encoded version and the 16-bit encoded version of the ramp?

Indeed, you don't understand the benefits of a high bit display path. It's simple: you don't see banding due to the display path.

Any other questions that will aid in your assumptions based posting on the topic?

There's no point in asking other questions when you haven't answered the one I asked. Again: why don't you see any difference between the 8-bit encoded version and the 16-bit encoded version using your "high bit display path"? The synthetic gradients in your test image should be the ideal opportunity to see the superiority of the version encoded at a deeper bit depth, yet you don't see any difference even at 800% apparently. Why?

Because you've got quite the cheekiness barging in here and "contradicting" technoid. And me!
