Bit depth and output

aChanceEncounter
I have been tempted to try adopting JPG/HEIF-only shooting; however, I do have a question regarding tonal gradations. This conversation is focused only on color and tonal gradation. I fully appreciate other aspects such as noise and shadow recovery.

I might be wrong but this is my understanding:

JPG - 8 bit

HEIC - 10 bit

X-Trans RAW - 14 bit

GFX RAW (100 series) - 16 bit
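Those bit depths correspond to levels per channel as powers of two; a quick sketch of the arithmetic (my own illustration, not from the post):

```python
# Levels per channel implied by each bit depth listed above.
for fmt, bits in [("JPG", 8), ("HEIC", 10), ("X-Trans RAW", 14), ("GFX RAW", 16)]:
    print(f"{fmt}: {2 ** bits} levels per channel")  # 256, 1024, 16384, 65536
```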

Obviously, the greater the bit depth, the smoother the tonal gradations of colors. But the subject gets a bit more confusing to me when I think about output.

The vast majority of screens/monitors are 8 bit, so does this mean they can't faithfully show the gradations well?

High end printers can handle the 14 and 16 bit files and represent the images well.

This assumes a 'properly' exposed image that doesn't require extensive recovery in post.

Conclusion: If I will be just viewing on screens - the jpg/heic is probably fine. If I will be doing fine art prints - shoot RAW.

So, is my conclusion accurate?
 
I have been tempted to try adopting JPG/HEIF-only shooting; however, I do have a question regarding tonal gradations. This conversation is focused only on color and tonal gradation. I fully appreciate other aspects such as noise and shadow recovery.

I might be wrong but this is my understanding:

JPG - 8 bit

HEIC - 10 bit

X-Trans RAW - 14 bit

GFX RAW (100 series) - 16 bit

Obviously, the greater the bit depth, the smoother the tonal gradations of colors. But the subject gets a bit more confusing to me when I think about output.

The vast majority of screens/monitors are 8 bit, so does this mean they can't faithfully show the gradations well?
Both JPEG and HEIC compress the image, so they reduce not only the color gradation and dynamic range but also the information, potentially the fine details. So yes, if you have a display able to support 12 bits (the Dolby Vision standard?), a JPEG or HEIC will show some banding. Note that this is a different problem from color gamut; we're talking about brightness-level encoding here.
High end printers can handle the 14 and 16 bit files and represent the images well.

This assumes a 'properly' exposed image that doesn't require extensive recovery in post.

Conclusion: If I will be just viewing on screens - the jpg/heic is probably fine. If I will be doing fine art prints - shoot RAW.

So, is my conclusion accurate?
I'd say for printing and archiving, 16-bit TIFF is a better option; hopefully future display hardware will be more 12-bit capable.
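As a rough sketch of the banding point (my own toy example, assuming an idealized smooth ramp rather than a real photo), quantizing the same gradient to 8 and to 10 bits shows how many distinct tones each can hold:

```python
import numpy as np

gradient = np.linspace(0.0, 1.0, 100_000)  # idealized smooth brightness ramp

levels_8 = np.unique(np.round(gradient * 255)).size    # distinct 8-bit tones
levels_10 = np.unique(np.round(gradient * 1023)).size  # distinct 10-bit tones

# 256 vs 1024: one 10-bit step is about a quarter the size of an 8-bit step,
# so a transition that bands visibly at 8 bits can look smooth at 10.
print(levels_8, levels_10)
```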
 
I have been tempted to try adopting JPG/HEIF-only shooting; however, I do have a question regarding tonal gradations. This conversation is focused only on color and tonal gradation. I fully appreciate other aspects such as noise and shadow recovery.

I might be wrong but this is my understanding:

JPG - 8 bit

HEIC - 10 bit

X-Trans RAW - 14 bit

GFX RAW (100 series) - 16 bit

Obviously, the greater the bit depth, the smoother the tonal gradations of colors. But the subject gets a bit more confusing to me when I think about output.

The vast majority of screens/monitors are 8 bit, so does this mean they can't faithfully show the gradations well?

High end printers can handle the 14 and 16 bit files and represent the images well.

This assumes a 'properly' exposed image that doesn't require extensive recovery in post.

Conclusion: If I will be just viewing on screens - the jpg/heic is probably fine. If I will be doing fine art prints - shoot RAW.
Shoot raw for everything; 8 bits is adequate for output to screen and print.

The comparison you list above has a very fundamental problem: you're not comparing like units, and so the comparison is invalid. Imagine you're comparing different measuring sticks. That works if they're all marked off in centimeters or inches -- like units. A yardstick is 36 inches long, three times as long as a one-foot ruler -- a valid comparison. Now instead imagine that the inches on the yardstick are only 5/8 as big as some of the inches on the ruler, and the inches on the ruler are of varying size, with some bigger than others.

Let's switch the analogy to staircases. You have two staircases that climb up a steep hill. They both start at the same ground level and both reach to the top of the hill. One staircase has a total of 256 steps and no two steps are the same height. They start small, progressively get larger and then become smaller again, but the staircase gets you to the top of the hill. The other staircase has 16,384 steps that are all the same height. The distance between the first and last stair on each staircase is the same.

Photography is an exercise in reduction. We start with the tonal range of the original scene and capture a reduced segment of that. Captured in a raw file, we have more data than can possibly fit on a print or screen. Pour black ink on a sheet of white paper and you've got roughly 5 stops of tonal range. Your image will have to be stuffed into those 5 stops one way or another. Which is why 8 bits of non-linear data is, and has long been, sufficient for print. The print compresses the shadows and highlights and presents the illusion that there's more there than there really is. 256 tonal gradations, non-linearly manipulated, do the trick.

But we start the manipulation of the captured raw data with massively more data than we'll ultimately end up with. We want to manage the data reduction with lots of available slack because it gives us more control and maintains a high quality level until we're ready to commit the final reduction. Once the reduction is finished (reduced to 8 bit) we don't want to further attempt any data manipulation of consequence -- all the slack is gone.
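The "slack" point can be sketched numerically (my own toy example, with an arbitrary x**0.7 lift standing in for heavy shadow recovery): edit in high precision and reduce to 8 bits last, versus committing to 8 bits first and editing afterwards.

```python
import numpy as np

scene = np.linspace(0.0, 1.0, 50_000)  # idealized tonal ramp
lift = lambda x: x ** 0.7              # strong brightening curve (arbitrary choice)

# Keep the slack: edit in high precision (a stand-in for 14/16-bit data),
# then reduce to 8 bits at the very end.
edit_then_reduce = np.round(lift(scene) * 255)

# Slack gone: commit to 8 bits first, then try the same edit.
reduced = np.round(scene * 255) / 255
reduce_then_edit = np.round(lift(reduced) * 255)

print(np.unique(edit_then_reduce).size)  # all 256 output tones survive
print(np.unique(reduce_then_edit).size)  # fewer tones left: gaps, i.e. posterization
```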
 
I have been tempted to try adopting JPG/HEIF-only shooting; however, I do have a question regarding tonal gradations. This conversation is focused only on color and tonal gradation. I fully appreciate other aspects such as noise and shadow recovery.

I might be wrong but this is my understanding:

JPG - 8 bit

HEIC - 10 bit

X-Trans RAW - 14 bit

GFX RAW (100 series) - 16 bit
That's all true, but it's missing a critical piece of information.

JPEG and HEIF are output formats. They were developed to show final images, and they typically have a curve applied to the data. So, for instance, have a look at this fairly typical JPEG curve:

[Image: Picturemode.png -- a fairly typical JPEG tone curve]

It's only 8-bit (256 values per channel), but compresses over 8.5 stops into there, with somewhere in the region of 170 levels being devoted to the two stops on either side of middle grey.

HEIF does something similar, but is usually used for output not just on 10-bit monitors, but on 10-bit HDR monitors that go brighter and have a wider colour gamut than SDR monitors. The benefits of using a standard tone curve in 10-bit precision are rather subtle.

Raw, by comparison, is linear: a light level twice as bright is recorded at twice the value of the darker one. That's not how human vision works, so it's a wildly inefficient way of storing data. Half of your data values are being used to record the brightest stop you captured. Whereas in the JPEG example above, it's about 16 of the 256 values (6.25%).
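The "half your values in the brightest stop" point is just arithmetic on linear encoding; here's a sketch of my own (the plain 1/2.2 power law below is a simplification -- a real picture-mode curve with a highlight shoulder devotes even fewer codes to the top stop, which is how you get down to roughly 16 of 256):

```python
RAW_MAX = 2 ** 14  # 16384 code values in a 14-bit linear raw file

# Linear: each stop down from clipping spans half of the remaining codes.
for stop in range(5):
    hi, lo = RAW_MAX >> stop, RAW_MAX >> (stop + 1)
    print(f"stop {stop} below clipping: {hi - lo} codes")  # 8192, 4096, 2048, 1024, 512

# Gamma-encoded 8-bit (plain 1/2.2 power law): codes spent on the top stop.
top_stop_codes = 255 * (1 - 0.5 ** (1 / 2.2))
print(round(top_stop_codes))  # ~69 of 256 -- far less than half the range
```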

And that brightest stop also has the highest magnitude of noise, meaning there's little value in recording it in super-high precision, because even an evenly illuminated subject with uniform brightness will be recorded as a vast range of different values. This is a big part of why the number of Raw steps has little, if anything, to do with how many colours a camera captures, even though it feels like it should.

In short: you can't directly compare bit depths between linear data and gamma-encoded data. They're just not the same things at all.

(Also, there's virtually no benefit to capturing the current GFX sensors' output in 16-bit precision: they simply don't capture enough DR to make it worthwhile)

Richard - DPReview.com
 
I have been tempted to try adopting JPG/HEIF-only shooting; however, I do have a question regarding tonal gradations. This conversation is focused only on color and tonal gradation. I fully appreciate other aspects such as noise and shadow recovery.

I might be wrong but this is my understanding:

JPG - 8 bit

HEIC - 10 bit

X-Trans RAW - 14 bit

GFX RAW (100 series) - 16 bit
That's all true, but it's missing a critical piece of information.

JPEG and HEIF are output formats. They were developed to show final images, and they typically have a curve applied to the data. So, for instance, have a look at this fairly typical JPEG curve:

[Image: Picturemode.png -- a fairly typical JPEG tone curve]

It's only 8-bit (256 values per channel), but compresses over 8.5 stops into there, with somewhere in the region of 170 levels being devoted to the two stops on either side of middle grey.

HEIF does something similar, but is usually used for output not just on 10-bit monitors, but on 10-bit HDR monitors that go brighter and have a wider colour gamut than SDR monitors. The benefits of using a standard tone curve in 10-bit precision are rather subtle.

Raw, by comparison, is linear: a light level twice as bright is recorded at twice the value of the darker one. That's not how human vision works, so it's a wildly inefficient way of storing data. Half of your data values are being used to record the brightest stop you captured. Whereas in the JPEG example above, it's about 16 of the 256 values (6.25%).

And that brightest stop also has the highest magnitude of noise, meaning there's little value in recording it in super-high precision, because even an evenly illuminated subject with uniform brightness will be recorded as a vast range of different values. This is a big part of why the number of Raw steps has little, if anything, to do with how many colours a camera captures, even though it feels like it should.

In short: you can't directly compare bit depths between linear data and gamma-encoded data. They're just not the same things at all.

(Also, there's virtually no benefit to capturing the current GFX sensors' output in 16-bit precision: they simply don't capture enough DR to make it worthwhile)

Richard - DPReview.com


Ok, most of this is over my head. And I just thought the greater bit depth contributed to smoother gradations, ha.
--
Paul
 
Ok, most of this is over my head. And I just thought the greater bit depth contributed to smoother gradations, ha.
--
Paul
Would that it were so simple. ;)

If you only take one thing from my post, it should be that you can't directly compare bit depth for gamma-encoded formats (JPEG, HEIF, MOV video, etc.) with formats encoded in a linear manner (Raw).

HEIF can let you more accurately convey gradation than JPEG, especially when shown on 10-bit-capable displays.

If you take two things, it's that JPEG and HEIF are output formats, flattened down, ready for viewing, whereas Raw retains maximum flexibility so it can be processed into a wide range of possible outputs.

Richard - DPReview.com
 
