Bit depth = levels of gradient?

Started 2 months ago | Discussions
filmrescue Contributing Member • Posts: 802
Bit depth = levels of gradient?

I've been searching around for some sort of chart or list that tells me how many levels of gradient there are for different single-channel bit depths, but I'm not having a lot of luck. Can anyone fill me in or link me to the info I'm looking for? I'd like to see everything from 2 bits all the way up to 20 or so.

J A C S
J A C S Forum Pro • Posts: 14,422
Re: Bit depth = levels of gradient?
6

2 to the power of the bit depth.
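In code form that is simply 2**bits; a minimal Python sketch covering the 2-to-20-bit range asked about:

```python
# Number of single-channel code values (levels) at each bit depth.
for bits in range(2, 21):
    print(f"{bits:2d} bits -> {2 ** bits:>9,} levels")
```

So 12 bits gives 4,096 levels, 14 bits 16,384, and 16 bits 65,536 per channel, nominally; see the noise discussion further down the thread.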

OP filmrescue Contributing Member • Posts: 802
Re: Bit depth = levels of gradient?

Perfect!  Thanks so much. That will be endlessly helpful. 

bclaff Veteran Member • Posts: 8,544
Re: Bit depth = levels of gradient?

filmrescue wrote:

I've been searching around for some sort of chart or list that tells me how many levels of gradient there are for different single-channel bit depths, but I'm not having a lot of luck. Can anyone fill me in or link me to the info I'm looking for? I'd like to see everything from 2 bits all the way up to 20 or so.

The answer is not simple and is not the answer you will typically receive.
For example, with most systems there is no visible difference between 14 bits and 12 bits, even though the nominal level count certainly differs by a factor of 4.

You may want to look at what DxOMark call "Tonal Range".

What exactly are you trying to accomplish?

-- hide signature --

Bill ( Your trusted source for independent sensor data at PhotonsToPhotos )

J A C S
J A C S Forum Pro • Posts: 14,422
Re: Bit depth = levels of gradient?
2

The levels are as many as I said. How many of them are distinguishable in a typical photo, by whatever criterion, is a different question.

John Sheehy Forum Pro • Posts: 21,841
Re: Bit depth = levels of gradient?
2

J A C S wrote:

The levels are as many as I said. How many of them are distinguishable in a typical photo, by whatever criterion, is a different question.

The OP isn't factoring in noise, though, so the OP is basically applying math to myth.

OP filmrescue Contributing Member • Posts: 802
Re: Bit depth = levels of gradient?

bclaff wrote:

filmrescue wrote:

I've been searching around for some sort of chart or list that tells me how many levels of gradient there are for different single-channel bit depths, but I'm not having a lot of luck. Can anyone fill me in or link me to the info I'm looking for? I'd like to see everything from 2 bits all the way up to 20 or so.

The answer is not simple and is not the answer you will typically receive.
For example, with most systems there is no visible difference between 14 bits and 12 bits, even though the nominal level count certainly differs by a factor of 4.

You may want to look at what DxOMark call "Tonal Range".

What exactly are you trying to accomplish?

Primarily, to be able to talk intelligently about why to use a high-bit-depth capture system when capturing extremely low-contrast negatives.
Also, to be able to calculate, for instance: if the negative I'm capturing occupies 5 percent of the tonal range from absolute black to absolute white, how many levels of gradient do I have to work with using a 16-bit scanner or high-end camera vs. a 12-bit or 14-bit camera, and at what point can I expect to see posterization given the capture system?
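As a rough illustration of that 5% scenario, here is a minimal Python sketch; it assumes a linear encoding and ignores noise, which later replies argue is what actually matters:

```python
# Back-of-the-envelope only: assumes a linear encoding and ignores noise.
fraction = 0.05  # the negative occupies ~5% of the black-to-white range
for bits in (12, 14, 16):
    levels = 2 ** bits
    print(f"{bits} bits: about {levels * fraction:.0f} of {levels:,} levels fall in that 5% slice")
```

Nominally, then, a 16-bit capture has 16 times as many levels in that slice as a 12-bit one; the replies below argue that noise, not this nominal count, decides where posterization actually appears.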

The other numbers for other bit depths are primarily just because I was curious.

OP filmrescue Contributing Member • Posts: 802
Re: Bit depth = levels of gradient?
2

John Sheehy wrote:

J A C S wrote:

The levels are as many as I said. How many of them are distinguishable in a typical photo, by whatever criterion, is a different question.

The OP isn't factoring in noise, though, so the OP is basically applying math to myth.

Explain, please. This is ultimately to calculate how many levels of gradient I would have to work with given different systems for capturing extremely low-contrast film negatives. In practical use, it isn't noise that I see when capturing a low-contrast film negative with a 12-bit DSLR vs. a 16-bit scanner; it's posterization, because I don't have enough levels of gradient with a 12-bit capture for an extremely low-contrast negative. Not sure how that's a myth.

bclaff Veteran Member • Posts: 8,544
Re: Bit depth = levels of gradient?

filmrescue wrote:

bclaff wrote:

filmrescue wrote:

I've been searching around for some sort of chart or list that tells me how many levels of gradient there are for different single-channel bit depths, but I'm not having a lot of luck. Can anyone fill me in or link me to the info I'm looking for? I'd like to see everything from 2 bits all the way up to 20 or so.

The answer is not simple and is not the answer you will typically receive.
For example, with most systems there is no visible difference between 14 bits and 12 bits, even though the nominal level count certainly differs by a factor of 4.

You may want to look at what DxOMark call "Tonal Range".

What exactly are you trying to accomplish?

Primarily, to be able to talk intelligently about why to use a high-bit-depth capture system when capturing extremely low-contrast negatives.
Also, to be able to calculate, for instance: if the negative I'm capturing occupies 5 percent of the tonal range from absolute black to absolute white, how many levels of gradient do I have to work with using a 16-bit scanner or high-end camera vs. a 12-bit or 14-bit camera, and at what point can I expect to see posterization given the capture system?

The other numbers for other bit depths are primarily just because I was curious.

As John Sheehy alludes to, the answer also depends on noise, which is why I pointed you toward the DxOMark Tonal Range measure.

The DxOMark links are messed up, but here's the mathematics of it:

Think of it this way: if two values are close together, they are only visually distinguishable if they differ by enough to overcome the noise.
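As a hedged back-of-the-envelope sketch of that idea in Python (the full-well and read-noise figures below are invented for illustration, and this is not DxOMark's exact Tonal Range computation, just the same underlying idea):

```python
import math

# Illustrative only: count "distinguishable" steps by walking from black to
# clipping in increments of one noise standard deviation. The sensor numbers
# below are assumptions, not measurements of any real camera or scanner.
full_well = 60000.0   # electrons at clipping (assumed)
read_noise = 3.0      # electrons (assumed)

def noise(signal_e: float) -> float:
    # Read noise plus photon shot noise (shot-noise variance equals the signal).
    return math.sqrt(read_noise ** 2 + signal_e)

signal, steps = 0.0, 0
while signal < full_well:
    signal += noise(signal)
    steps += 1

print(f"~{steps} distinguishable steps, i.e. roughly {math.log2(steps):.1f} 'useful' bits")
```

With those assumed numbers the count comes out on the order of a few hundred steps, roughly 9 bits' worth, which is why nominal 14-bit or 16-bit level counts overstate what is visually usable.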

Getting noise information for a scanner will be problematic.

I suspect 14-bits is a good sweet spot.

-- hide signature --

Bill ( Your trusted source for independent sensor data at PhotonsToPhotos )

Ellis Vener
Ellis Vener Forum Pro • Posts: 11,313
Re: Bit depth = levels of gradient?
1

filmrescue wrote:

I've been searching around for some sort of chart or list that tells me how many levels of gradient there are for different single-channel bit depths, but I'm not having a lot of luck. Can anyone fill me in or link me to the info I'm looking for? I'd like to see everything from 2 bits all the way up to 20 or so.

I do not know if the following supplies the answer you are looking for, but each one-bit increase in bit depth doubles the number of tones per channel, the difference being how finely the gradient of intermediate tones between 0 (black) and white is divided. Keep in mind that most sensors are of the Bayer-matrix RGB type, so there are three channels: Red, Green, and Blue. The Bayer matrix is designed to model healthy human vision, and to do this there are two green-sensitive pixels for each red and blue one (a 1:2:1 R:G:B ratio), with most of the luminance information recorded by the green channel. (There are evolutionary reasons why our vision works this way.)

So, starting at 0 bits:

0 bits (1 tone: the whole image is a single value, either black or white)

1 bit (2 tones: black and white)

2 bits (4 tones: black, a dark tone, a lighter tone, and white)

3 bits (8 tones: black, white, and six intermediate tones)

4 bits (16 tones)

5 bits (32 tones)

6 bits (64 tones)

7 bits (128 tones)

8 bits (256 tones)

9 bits (512 tones)

10 bits (1024 tones)

11 bits (2048 tones)

12 bits (4096 tone possibilities)

13 bits (8192 tone possibilities)

14 bits (16,384 tone possibilities)

15 bits (32,768 tone possibilities)

16 bits (65,536 tone possibilities)

What this appears to tell you is that more bits mean a smoother slope or gradient, but that is not the entire story.

Even with a 12-, 14-, or true 16-bit image, at the lower end of the luminance range (the quarter tones) the perceptual differences are both harder to see and more susceptible to the signal being hidden by the ever-present electronic background noise of the system; hence the common advice to expose to the right, where the data are "richer", when judging exposure on a histogram. A histogram is a bar chart showing the distribution of tones in the image from left to right (black to white), per channel, with the height of each bar telling you how many pixels sit at that level. Most histograms are limited to an 8-bit (256-tone) scale.

What is also missing is the effect a color space's gamma, or a more complex tone curve, has on how you see the image contents. sRGB has a relatively complex tone curve that boosts quarter-tone values more strongly than middle tones and highlight values; Adobe RGB (1998) uses a more straightforward gamma of 2.2; and ProPhoto RGB has a gamut so large that it encompasses not only what a healthy human eye can see but also a fair chunk of "blue" colors that don't physically exist.
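To see why the encoding matters as much as the raw level count, here is a small hedged Python sketch comparing the published sRGB piecewise formula with a plain gamma 2.2; the 18% middle-gray figure is just a convenient reference point:

```python
def srgb_encode(linear: float) -> float:
    # Published sRGB piecewise curve: a short linear toe, then a 2.4-exponent
    # power segment with an offset (the "relatively complex" curve above).
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

def gamma22_encode(linear: float) -> float:
    # Plain gamma-2.2 encoding, as used by Adobe RGB (1998).
    return linear ** (1 / 2.2)

# How many of the 256 codes in an 8-bit file sit at or below middle gray,
# taking 18% linear reflectance as the reference point?
mid = 0.18
print("linear encoding:", round(mid * 255), "of 256 codes at or below middle gray")
print("gamma 2.2      :", round(gamma22_encode(mid) * 255), "of 256 codes at or below middle gray")
print("sRGB           :", round(srgb_encode(mid) * 255), "of 256 codes at or below middle gray")
```

Either gamma encoding assigns roughly 117 of the 256 codes to tones at or below middle gray, versus about 46 for a linear encoding, so the same bit depth can be spent very differently depending on the tone curve.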

Melissa RGB is a color space used only by Adobe Lightroom for previewing images. It has an sRGB-like tone response curve but the gamut of ProPhoto RGB.

Keep in mind that device-independent RGB color spaces are all just mathematical constructs; they are not connected to the real world except in that they assign digital coordinates in a three-dimensional space to colors we can see.

Finally, there are the gamut and bit depth of the devices: the camera's recording format (either the 8-bit-per-channel JPEG, or the camera-model-specific "raw" formats, which do not have a fixed bit depth or color profile as an intrinsic aspect); the gamut of the display/monitor; and, if you print, the gamut of the printer, ink, and media you are printing with.

If I haven't answered your question, or have confused the issue, I apologize. I am not sure there is a straightforward, simple answer to your question; or, if there is, I may have read your question wrong.

-- hide signature --

Ellis Vener
To see my work please visit http://www.ellisvener.com
Or on instagram @therealellisv

Ellis Vener's gear list: Nikon D850 +1 more
Ellis Vener
Ellis Vener Forum Pro • Posts: 11,313
Re: Bit depth = levels of gradient?

filmrescue wrote:

bclaff wrote:

filmrescue wrote:

I've been searching around for some sort of chart or list that tells me how many levels of gradient there are for different single-channel bit depths, but I'm not having a lot of luck. Can anyone fill me in or link me to the info I'm looking for? I'd like to see everything from 2 bits all the way up to 20 or so.

The answer is not simple and is not the answer you will typically receive.
For example, with most systems there is no visible difference between 14 bits and 12 bits, even though the nominal level count certainly differs by a factor of 4.

You may want to look at what DxOMark call "Tonal Range".

What exactly are you trying to accomplish?

Primarily, to be able to talk intelligently about why to use a high-bit-depth capture system when capturing extremely low-contrast negatives.
Also, to be able to calculate, for instance: if the negative I'm capturing occupies 5 percent of the tonal range from absolute black to absolute white, how many levels of gradient do I have to work with using a 16-bit scanner or high-end camera vs. a 12-bit or 14-bit camera, and at what point can I expect to see posterization given the capture system?

Since you are asking about scanning negatives, try this technique: in the scanning software, bring the black (left) end point up to just a few points (3-5) below where you see the mountain range of histogram data begin.

This will distribute the data better throughout the actual scan so you should get better tonal separations.
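For anyone doing the equivalent after capture rather than in the scanner driver, here is a hedged NumPy sketch of the same idea; the function name, percentile choices, and margin are illustrative assumptions, not taken from any particular scanning tool:

```python
import numpy as np

# Find where the "mountain range" of histogram data begins and ends, back off
# by a small cushion (the "few points below"), and stretch that narrow slice
# across the full output range. Assumes "scan" is a 16-bit grayscale array.
def stretch_levels(scan: np.ndarray, margin: int = 4 * 256) -> np.ndarray:
    lo = max(int(np.percentile(scan, 0.1)) - margin, 0)       # black point
    hi = min(int(np.percentile(scan, 99.9)) + margin, 65535)  # white point
    out = (scan.astype(np.float64) - lo) / (hi - lo)
    return (np.clip(out, 0.0, 1.0) * 65535).astype(np.uint16)

# Example use: stretched = stretch_levels(np.asarray(scan_image, dtype=np.uint16))
```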

The other numbers for other bit depths are primarily just because I was curious.

-- hide signature --

Ellis Vener
To see my work please visit http://www.ellisvener.com
Or on instagram @therealellisv

Ellis Vener's gear list: Nikon D850 +1 more
bclaff Veteran Member • Posts: 8,544
Re: Bit depth = levels of gradient?

No ... and you perpetuate the myth

-- hide signature --

Bill ( Your trusted source for independent sensor data at PhotonsToPhotos )

Ellis Vener
Ellis Vener Forum Pro • Posts: 11,313
Re: Bit depth = levels of gradient?

bclaff wrote:

No ... and you perpetuate the myth

That’s an enlightening response. Care to flesh it out?

-- hide signature --

Ellis Vener
To see my work please visit http://www.ellisvener.com
Or on instagram @therealellisv

Ellis Vener's gear list: Nikon D850 +1 more
John Sheehy Forum Pro • Posts: 21,841
Re: Bit depth = levels of gradient?
4

filmrescue wrote:

John Sheehy wrote:

J A C S wrote:

The levels are as many as I said. How many of them are distinguishable in a typical photo, by whatever criterion, is a different question.

The OP isn't factoring in noise, though, so the OP is basically applying math to myth.

Explain, please. This is ultimately to calculate how many levels of gradient I would have to work with given different systems for capturing extremely low-contrast film negatives.

There's no such thing as what you are implying. It is a commonly held myth, but it doesn't exist. Noise is the limitation of capture, not bit depth, unless the bit depth makes the levels so sparse in the darkest tonal ranges that the noise no longer dithers them and posterization results. If you have not approached that threshold of over-quantization relative to the noise, you have virtually analog local mean levels, and only the noise varies.

What you need is minimal noise, and you will get that by scanning multiple times, and adding the results together.

In practical use, it isn't noise that I see when capturing a low-contrast film negative with a 12-bit DSLR vs. a 16-bit scanner; it's posterization, because I don't have enough levels of gradient with a 12-bit capture for an extremely low-contrast negative. Not sure how that's a myth.

The more noise you have in your scan, the more noise you have after you've boosted the contrast. This is about noise, not levels.

I just checked the RAW values from a 12x10-pixel tile from a single RAW color channel of an ISO 100 14-bit RAW, from a very out-of-focus blank wall, and the average RAW value was about 11,500 with values ranging from about 9,500 to 14,500. No value was used twice. 120 different values, over a range of 5000 values. This was a bright tone, only a fraction of a stop below the highlight clipping point. Your model would be expecting just a few, very close tones, most likely, but it's nothing like that. The reality of RAW capture is jagged and noisy. Smooth gradients only appear through dithering.
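A hedged NumPy simulation of the same effect, using invented numbers (the gain, mean signal, and patch size are assumptions, not measurements from the camera described above):

```python
import numpy as np

rng = np.random.default_rng(0)

# A flat patch recorded with photon shot noise yields a wide spread of raw
# values, not a handful of repeated "levels". All parameters are hypothetical.
gain = 0.5                      # electrons per 14-bit DN (assumed)
mean_dn = 11500                 # mean raw value of the flat patch (assumed)
electrons = rng.poisson(mean_dn * gain, size=(12, 10))   # shot noise
patch = (electrons / gain).round().astype(int)           # back to raw DN

print("raw value range:", patch.min(), "to", patch.max())
print("unique values in 120 pixels:", np.unique(patch).size)

# Averaging N independent captures shrinks the noise by roughly sqrt(N),
# which is the multiple-scan suggestion made earlier in this post.
N = 16
stack = rng.poisson(mean_dn * gain, size=(N, 12, 10)) / gain
print(f"single-frame std: {patch.std():.0f} DN, "
      f"{N}-frame average std: {stack.mean(axis=0).std():.0f} DN")
```

In this toy run most of the 120 pixels land on distinct raw values, and averaging independent captures shrinks the spread by roughly the square root of the frame count.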

OP filmrescue Contributing Member • Posts: 802
Re: Bit depth = levels of gradient?

Thanks for your thoughtful answer. I do have negatives captured with a DSLR, maybe occupying 5% of the tonal range, where I end up with what I thought would best be described as posterization or banding, which disappears when I scan them with an actual 16-bit film scanner. So you're saying that's not coming from a lack of bit depth but from noise? Yes, of course as you bring up contrast you bring up all sorts of other issues, grain, dust, scratches, mottling...all minor imperfections become major issues.

John Sheehy Forum Pro • Posts: 21,841
Re: Bit depth = levels of gradient?
1

filmrescue wrote:

Thanks for your thoughtful answer. I do have negatives captured with a DSLR, maybe occupying 5% of the tonal range, where I end up with what I thought would best be described as posterization or banding, which disappears when I scan them with an actual 16-bit film scanner.

What file format? JPEG tends to create posterization to make more compressible images. RAW captures are rarely posterized except in near-blacks in a small number of cameras, mostly the first Exmor sensors that had too little noise at base ISO for 12 bits.

So you're saying that's not coming from a lack of bit depth but from noise?

Not posterization. Posterization is a lack of noise before quantization.

Yes, of course as you bring up contrast you bring up all sorts of other issues, grain, dust, scratches, mottling...all minor imperfections become major issues.

The fewer artifacts you get in the capture, the fewer you'll see when you boost the contrast. You could consider using the principles of astrophotography, stacking multiple exposures, to minimize any artifacts that aren't already in the film.
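A hedged Python sketch of the "lack of noise before quantization" point above (the ramp endpoints, quantizer step, and noise level are arbitrary illustration values):

```python
import numpy as np

rng = np.random.default_rng(1)

# Quantize a gentle, low-contrast ramp to a coarse set of levels, with and
# without noise added first, to show why dithering noise prevents banding.
gradient = np.linspace(0.40, 0.45, 100_000)   # a very low-contrast ramp
step = 1 / 64                                  # coarse quantizer

def quantize(x):
    return np.round(x / step) * step

clean = quantize(gradient)                                              # no dither
dithered = quantize(gradient + rng.normal(0, step / 2, gradient.size))  # noise first

print("output levels without noise:", np.unique(clean).size)
print("output levels with noise   :", np.unique(dithered).size)

# With noise, the local averages of the quantized signal still track the ramp.
blocks = gradient.reshape(1000, 100).mean(axis=1)
print(f"mean error, no noise : {np.abs(clean.reshape(1000, 100).mean(axis=1) - blocks).mean():.5f}")
print(f"mean error, dithered : {np.abs(dithered.reshape(1000, 100).mean(axis=1) - blocks).mean():.5f}")
```

Without the added noise the low-contrast ramp collapses into a handful of flat bands; with it, the banding is replaced by noise whose local averages still track the ramp, which is what dithering buys you.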

OP filmrescue Contributing Member • Posts: 802
Re: Bit depth = levels of gradient?

Ellis Vener wrote:

filmrescue wrote:

I've been searching around for some sort of chart or list that tells me how many levels of gradient there are for different single-channel bit depths, but I'm not having a lot of luck. Can anyone fill me in or link me to the info I'm looking for? I'd like to see everything from 2 bits all the way up to 20 or so.

I do not know if the following supplies the answer you are looking for, but each one-bit increase in bit depth doubles the number of tones per channel, the difference being how finely the gradient of intermediate tones between 0 (black) and white is divided. Keep in mind that most sensors are of the Bayer-matrix RGB type, so there are three channels: Red, Green, and Blue. The Bayer matrix is designed to model healthy human vision, and to do this there are two green-sensitive pixels for each red and blue one (a 1:2:1 R:G:B ratio), with most of the luminance information recorded by the green channel. (There are evolutionary reasons why our vision works this way.)

So, starting at 0 bits:

0 bits (1 tone: the whole image is a single value, either black or white)

1 bit (2 tones: black and white)

2 bits (4 tones: black, a dark tone, a lighter tone, and white)

3 bits (8 tones: black, white, and six intermediate tones)

4 bits (16 tones)

5 bits (32 tones)

6 bits (64 tones)

7 bits (128 tones)

8 bits (256 tones)

9 bits (512 tones)

10 bits (1024 tones)

11 bits (2048 tones)

12 bits (4096 tone possibilities)

13 bits (8192 tone possibilities)

14 bits (16,384 tone possibilities)

15 bits (32,768 tone possibilities)

16 bits (65,536 tone possibilities)

What this appears to tell you is that more bits mean a smoother slope or gradient, but that is not the entire story.

Even with a 12-, 14-, or true 16-bit image, at the lower end of the luminance range (the quarter tones) the perceptual differences are both harder to see and more susceptible to the signal being hidden by the ever-present electronic background noise of the system; hence the common advice to expose to the right, where the data are "richer", when judging exposure on a histogram. A histogram is a bar chart showing the distribution of tones in the image from left to right (black to white), per channel, with the height of each bar telling you how many pixels sit at that level. Most histograms are limited to an 8-bit (256-tone) scale.

What is also missing is the effect a color space's gamma, or a more complex tone curve, has on how you see the image contents. sRGB has a relatively complex tone curve that boosts quarter-tone values more strongly than middle tones and highlight values; Adobe RGB (1998) uses a more straightforward gamma of 2.2; and ProPhoto RGB has a gamut so large that it encompasses not only what a healthy human eye can see but also a fair chunk of "blue" colors that don't physically exist.

Melissa RGB is a color space used only by Adobe Lightroom for previewing images. It has an sRGB-like tone response curve but the gamut of ProPhoto RGB.

Keep in mind that device-independent RGB color spaces are all just mathematical constructs; they are not connected to the real world except in that they assign digital coordinates in a three-dimensional space to colors we can see.

Finally, there are the gamut and bit depth of the devices: the camera's recording format (either the 8-bit-per-channel JPEG, or the camera-model-specific "raw" formats, which do not have a fixed bit depth or color profile as an intrinsic aspect); the gamut of the display/monitor; and, if you print, the gamut of the printer, ink, and media you are printing with.

If I haven't answered your question, or have confused the issue, I apologize. I am not sure there is a straightforward, simple answer to your question; or, if there is, I may have read your question wrong.

Thanks so much, Ellis... you're a well-informed person. I understand most of this, but for my practical purposes I'm primarily dealing with B&W film negatives, though I do generally keep and deliver them in the sRGB color space because it's what most of my devices use and likely the client's too.
Another poster suggested that the reason I sometimes get banding when capturing very low-contrast B&W negatives with a DSLR is noise, not bit depth. I'm not fully understanding that, though. The banding goes away when I scan them with an actual film scanner at 16 bits, but maybe he's right and it's not a bit depth issue.

Yes... I already keep my exposure on the higher end, or right side, of the histogram without clipping the highlights. Should that be brought right to the limit, just short of clipping?

bclaff Veteran Member • Posts: 8,544
Re: Bit depth = levels of gradient?

Ellis Vener wrote:

bclaff wrote:

No ... and you perpetuate the myth

That’s an enlightening response. Care to flesh it out?

I think John Sheehy  is doing fine in explaining further.

-- hide signature --

Bill ( Your trusted source for independent sensor data at PhotonsToPhotos )

OP filmrescue Contributing Member • Posts: 802
Re: Bit depth = levels of gradient?

John Sheehy wrote:

filmrescue wrote:

Thanks for your thoughtful answer. I do have negatives that are captured with a DSLR that maybe occupy 5% of the tonal range where I end up with what I thought would be best described as posterization or banding that will disappear after I scanned them with a 16 bit actual film scanner.

What file format? JPEG tends to create posterization to make more compressible images. RAW captures are rarely posterized except in near-blacks in a small number of cameras, mostly the first Exmor sensors that had too little noise at base ISO for 12 bits.

I need to go back and look at what's going on in ACR, but I generally only see the JPEGs after the work is done (by someone else who works here). The original files are 12-bit NEFs and are opened as 16-bit files and saved as JPEGs. The banding is fairly uncommon and only manifests itself if the conditions are right: the image isn't overwhelmed by actual distressed-film issues, so what appears to be a digital artifact comes through in the form of banding. Perhaps the issue I'm concerned about only shows up in the save to JPEG.

So you're saying that's not coming from a lack of bit depth but from noise?

Not posterization. Posterization is a lack of noise before quantization.

Maybe I have my terminology wrong. I thought "posterization" and "banding" were interchangeable?

Yes, of course as you bring up contrast you bring up all sorts of other issues, grain, dust, scratches, mottling...all minor imperfections become major issues.

The fewer artifacts you get in the capture, the fewer you'll see when you boost the contrast. You could consider using the principles of astrophotography, stacking multiple exposures, to minimize any artifacts that aren't already in the film.

I had considered trying that at some point, but there's only so much that can be done at the price point we charge. In the end, though, you're onto something about noise, albeit in my case the biggest issue is analogue noise in the form of film grain overwhelming the image.
The original point of this post was that I'm trying to decide on a new capture camera for when we photograph these negatives with a camera rather than a scanner. A camera, because so many of them are so extremely low contrast that you can't find the frames with a preview scan on an actual film scanner, and often the negatives are so dense that a film scanner can't scan through them. For context, my company specializes in processing lost-and-found expired film; that's why we deal with so many terrible negatives.
Any thoughts on what would actually be the best thing to do our captures with? In contention right now are...
A Nikon D850 - the most affordable option

A Microbox Book2net book-scanning camera - 16-bit and multispectral, though I'm not sure whether the multispectral part will help us or not. We have test film with them now.

A Fuji GFX 100

A Phase One iXG with their DT film scanning setup - this is likely more of a pipe dream because it's hardly affordable for us, but if it does something amazing we'd probably find a way.
Most of these were chosen because they're 16-bit... You don't think we need that? I'd love your thoughts on any of these as a film capture device. You seem well informed.

bclaff Veteran Member • Posts: 8,544
Re: Bit depth = levels of gradient?
1

filmrescue wrote:

John Sheehy wrote:

filmrescue wrote:

Thanks for your thoughtful answer. I do have negatives captured with a DSLR, maybe occupying 5% of the tonal range, where I end up with what I thought would best be described as posterization or banding, which disappears when I scan them with an actual 16-bit film scanner.

What file format? JPEG tends to create posterization to make more compressible images. RAW captures are rarely posterized except in near-blacks in a small number of cameras, mostly the first Exmor sensors that had too little noise at base ISO for 12 bits.

... The original files are 12-bit NEFs and are opened as 16-bit files and saved as JPEGs.

Which camera? Uncompressed, lossless compressed, or lossy compressed?
(If lossy, that could contribute to posterization.)

... Maybe I have my terminology wrong. I thought "posterization" and "banding" were interchangeable?

Banding and posterization are different effects; you are almost certainly seeing posterization.

-- hide signature --

Bill ( Your trusted source for independent sensor data at PhotonsToPhotos )
