Tonality in Terms of Possible Gray Scale Values (Bit Depth).

flyinglentris

A 16 bit depth for camera cells is common these days and possibly 14 of those 16 bits are used for tonality. That means 64K different possible values.

It must be acceptable to most photographers, as the desire for, say, 32-bit depth is rarely voiced, though it is available in some post-production software, usually with regard to color tonality memory addressing. 32 bits, if say, only 24 were used, translates to around 16.8 million possible values. The human eye can hardly differentiate that, even over a high mp resolution.

The fineness of a photograph in black and white is strictly determined by the gray scale bit depth and the resolution of the image (mp).

The fineness of a photograph in color is determined by the bit depth and resolution of the image, but applied to possible colors, not just grayscale.

In film emulsions, gray scale and color tonality might be considered extremely fine, as the values are not discrete and limited by a fixed digital metric. The values are analog.

Photographers are more often concerned with mp resolution than with ramping up bit depth.

As it would appear that a film emulsion might produce finer gradations of tonality than the 14 bit digital discrete limit, I would like to press the question: are there photographers who are unhappy with the 14 bit tonality limit and, for fine art photography, would still prefer film or a 32 bit depth camera?

Seriously, with cameras peaking over 45mp, would a 24 bit depth tonality produce a visible and desirable increase in the fineness of photographic images?

Is it possible that future 35mm cameras will evolve to 32 bit?

What significance would a 32-bit depth play in terms of noise reduction?

What significance would a 32-bit depth play in large prints?

--
"If you are among those who believe that it has all been done already and nothing new can be achieved, you've murdered your own artistry before ever letting it live. You abort it in its fetal state. There is much that has yet to be spoken in art and composition and it grows with the passage of time. Evolving technologies, world environments and ideologies all drive change in thoughts, passion and expression. There is no way that it can all ever be done already. And therein lies the venue for the creative artist, a venue that is as diverse as the universe is unmapped and unexplored." - Quote from FlyingLentris
~
flyinglentris in LLOMA
 
A 16 bit depth for camera cells
What's a camera cell?
is common these days
Maybe on some expensive medium format cameras.
and possibly 14 of those 16 bits are used for tonality. That means 64K different possible values.
2^16 = 65,536

2^14 = 16,384

Consider that some cameras have a full-well capacity of something like 80,000 electrons, and so in order to capture all of these levels at one time, you'd need at least 17 bits. However, good analog-to-digital converters are very expensive, may not be available at high bit depths, and they don't work as well as you might hope. So much confusing business that goes into camera ISO adjustments is due to limited bit depth ADCs.
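As a quick sanity check on the arithmetic above, a few lines of Python (the 80,000-electron full-well figure is just the illustrative number used here, not a spec for any particular camera):

```python
import math

def bits_for_levels(levels: int) -> int:
    """Smallest whole number of bits that can represent `levels` distinct values."""
    return math.ceil(math.log2(levels))

print(2**16, 2**14)              # 65536 16384
print(bits_for_levels(80_000))   # 17 -> a 16-bit ADC cannot give every electron its own code
```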

I would assume that *all* of the bits are used for tonality. But unless you shoot at base ISO, with great lighting, the lower order bits are probably going to be too noisy to be of much practical use. For example, my D750 is reputed to have exceptionally low noise, but 14 bit mode is almost useless for anything much over base ISO.
It must be acceptable to most photographers, as the desire for, say, 32-bit depth is rarely voiced, though it is available in some post-production software, usually with regard to color tonality memory addressing. 32 bits, if say, only 24 were used, translates to around 16.8 million possible values. The human eye can hardly differentiate that, even over a high mp resolution.
32 bits is not useful for a final image, especially if it is printed, where you only have a dynamic range of 100:1 or maybe 300:1 at best, so 8 bits usually suffice. 10 bits per color channel is useful for high dynamic range monitors, as it reportedly can avoid banding. I have a 10 bit monitor, but I can't see any difference with my images.

How the images are gamma encoded will have an effect as well. Raw data is usually linear, so that the device values read from the camera are proportional to the light falling on the pixels, with the exception of nonlinearities where the light is dim and camera read noise overwhelms the signal, and when the sensor approaches full saturation. The trouble with linear encoding is that half of the data bits are allocated to the brightest stop of light, half of the rest to the next stop, and so forth and so on.

Log recording, often used in video, will allocate more bits to darker tones and fewer to bright, so you basically get more range from the same number of bits.
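A small sketch of the "half the codes go to the brightest stop" point, comparing a toy 14-bit linear encoding with an evenly spread log encoding (both are illustrative, not any camera's actual curve):

```python
# Codes available in each stop below clipping for a 14-bit linear raw file.
max_code = 2**14                       # 16384 possible values
for stop in range(1, 7):
    upper = max_code / 2**(stop - 1)
    lower = max_code / 2**stop
    print(f"stop {stop} below clipping: ~{int(upper - lower)} linear codes")
# stop 1: 8192 codes, stop 2: 4096, stop 3: 2048 ... the deep shadows get very few.
# A log encoding spreads codes evenly instead: 16384 / 14 ≈ 1170 codes per stop
# over a 14-stop range.
```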

But 32 bits is great for image editing, because there are lots of calculations involved in rendering an image, and it is better if those calculations are done with high precision. High numbers of bits are also essential for high dynamic range photography, where multiple images with a wide range of exposures are taken. Exposure stacking, commonly used in high-end smartphones to obtain lower effective base ISOs, needs high precision as well.

But 32 bits is also useful for regular images. For example, when converting a D750 raw file to sRGB, here are some calculations involved for an Incandescent white balance:
sRGB Red = 2.4 raw Red - 0.81 raw Green - 0.18 raw Blue

sRGB Green = -0.25 raw Red + 1.4 raw Green - 0.59 raw Blue

sRGB Blue = 0.11 raw Red - 0.87 raw Green + 4.0 raw Blue
Look at the last equation: the multiplication of the raw Blue channel by 4.0 is the same as dropping two bits of precision when using integer arithmetic. Likewise, in the middle equation, the 0.25 factor for the raw Red channel also reduces the precision for that channel by two bits. Also, summing the three values together will drop precision by nearly one bit. So even if you start with 16 bits of precision, you'll end up with less, once the calculation is done.
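To make the precision-loss point concrete, here is a toy Python comparison using two of the coefficients quoted above (the raw values are arbitrary illustrative numbers, and the "integer pipeline" is deliberately naive):

```python
# The 0.25 coefficient on raw Red in the Green equation: in integer arithmetic,
# four distinct raw values collapse to the same result, i.e. two bits are lost.
for raw_red in (1000, 1001, 1002, 1003):
    print(raw_red, int(0.25 * raw_red))               # prints 250 for all four inputs

# The 4.0 coefficient on raw Blue: the product needs two extra bits, so anything
# forced to stay in a 16-bit container either clips or is shifted back down,
# again costing two bits of effective precision.
for raw_blue in (16000, 17000, 18000):
    print(raw_blue, min(int(4.0 * raw_blue), 65535))  # 64000, 65535, 65535
```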

Another major issue is that a lot of raw processing software will truncate intermediate results, forcing values to 0 or the maximum integer value, which can lead to undesirable results with subsequent processing. I see this in Adobe software, where the blue color channel gets pegged to pure black under some conditions, and no amount of shadow boosting can bring up detail in that channel. The data is truncated at some intermediate processing step and it can not be recovered. (Raw processors which do not truncate intermediate results are said to use "unbounded" calculations: RawTherapee is one such unbounded processor.)

32 bit software does not use integers, but rather a floating point representation of data. Here, the data values can be extremely large or extremely low, with an unchanging number of bits assigned to the precision, no matter the size of the numbers. For one common 32 bit floating point data format, 23 bits are assigned to the base value or significand, one bit for the sign, and 8 bits are for the exponent. Here, multiplying or dividing numbers does not drop precision like with integer values, but rather just changes the exponent. The rounding errors associated with raw values read from a camera and subsequent calculations are practically insignificant when put in this format, a very tiny proportion compared to integer arithmetic.
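For anyone curious what that sign/exponent/significand split looks like, the standard library can pull an IEEE 754 single-precision value apart (this is generic float behaviour, nothing camera-specific):

```python
import struct

def float32_fields(x: float):
    """Split an IEEE 754 single-precision value into sign, exponent, significand bits."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF

# Multiplying by 4.0 only bumps the exponent by 2; the 23 significand bits that
# carry the precision are untouched, unlike the integer case above.
print(float32_fields(1234.0))
print(float32_fields(1234.0 * 4.0))
```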

I occasionally use RawTherapee in floating point mode, and the results are quite impressive, with fine tonal transitions and less noise and artifacts due to the processing. However, it is really slow, even with a decent processor. I would think that real-time floating point processing in cameras would be both expensive and power-hungry.
The fineness of a photograph in black and white is strictly determined by the gray scale bit depth and the resolution of the image (mp).

The fineness of a photograph in color is determined by the bit depth and resolution of the image, but applied to possible colors, not just grayscale.

In film emulsions, gray scale and color tonality might be considered extremely fine, as the values are not discrete and limited by a fixed digital metric. The values are analog.
Except for the clumping of grain!
Photographers are more often concerned with mp resolution than with ramping up bit depth.
That's kind of a shame, because really astounding results can be had if you get a really clean base image, even if the resolution isn't all that great. That's one of the secrets of modern smartphone cameras, when they do exposure stacking.
As it would appear that a film emulsion might produce finer gradations of tonality than the 14 bit digital discrete limit, I would like to press the question: are there photographers who are unhappy with the 14 bit tonality limit and, for fine art photography, would still prefer film or a 32 bit depth camera?
Under optimal conditions, film does extremely well, but I would think that ordinary digital photography greatly outperforms ordinary film photography. The evidence: where professionals and enthusiasts once used medium format, they now almost universally use full frame or even smaller formats.

I don't think that 32 bit cameras will be available any time soon, due to the unavailability of suitable analog-to-digital converters, as well as inadequate "full well capacity" or whatever the precise technological description happens to be.

However, with exposure stacking and in-camera HDR, it is entirely possible to get high dynamic range images with lots of bits of image data, albeit not in one shot. One problem is that the lens itself limits dynamic range, and so many bits of data won't really help much, if it is linearly encoded. But if floating point is used, or some log format, then we could construct an image that has the same number of bits of data for both the extreme highlights as well as the darkest shadows. The shadows would have to be exposed more of course, but this could be done automatically.
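A bare-bones sketch of what exposure stacking into a floating-point image could look like, assuming the frames are already aligned and linear; the function name, the simple clipping-weight scheme, and the toy scene are all mine, not any camera's actual pipeline:

```python
import numpy as np

def merge_exposures(frames, exposure_times, clip=0.98):
    """Merge aligned linear frames (0..1 floats) into one floating-point radiance map.

    Each frame is scaled to a common exposure by dividing by its exposure time;
    clipped highlights get zero weight so they do not pollute the average.
    """
    acc = np.zeros_like(frames[0], dtype=np.float32)
    weight = np.zeros_like(frames[0], dtype=np.float32)
    for frame, t in zip(frames, exposure_times):
        w = (frame < clip).astype(np.float32)        # ignore blown-out pixels
        acc += w * frame.astype(np.float32) / t      # per-frame radiance estimate
        weight += w
    return acc / np.maximum(weight, 1e-6)

# Toy usage: three simulated exposures of the same scene at 1/100, 1/50 and 1/25 s.
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 4.0, size=(4, 4))           # "true" relative radiance
times = [0.01, 0.02, 0.04]
frames = [np.clip(scene * t / 0.04, 0.0, 1.0) for t in times]
hdr = merge_exposures(frames, times)                 # highlights from the short frame,
                                                     # shadows from the long one
```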
Seriously, with cameras peaking over 45mp, would a 24 bit depth tonality produce a visible and desirable increase in the fineness of photographic images?
Not unless some multi-shot method is used, or some sort of technique is used that will vary the exposure on a per-pixel basis (like maybe the quanta image sensor with its jots?)

But as mentioned, the greatest benefit is in the processing. Even if you don't collect all that many bits of raw data, having lots of bits of processing can help considerably.
Is it possible that future 35mm cameras will evolve to 32 bit?
Who knows? Maybe. I don't even hear of rumors that this is being planned. Everything uses low bit depth integer arithmetic as far as I know.
What significance would a 32-bit depth play in terms of noise reduction?
Very much, if exposure stacking or in-camera HDR is used.
What significance would a 32-bit depth play in large prints?
None because that much dynamic range or precision isn't needed at all when outputting to a printer. The benefit, as mentioned, is in processing the image to its final form.
 
The word you are looking for is "gradation".

The new model of Leica Monochrom gives extremely good gradation. So does shooting with 10x8 inch film (or plates).

Don Cox
 
Let's not confuse dynamic range with gradation. The dynamic range is the difference between the lightest and darkest tones, while gradation is the number of steps in between.

You can have an image that goes from dark grey to light grey in (say) 1500 steps, and another that goes from black to white in 256 steps, or even in one step.

The dynamic range of a print on paper is low, but a print from large format film can have superb gradation.
 
A 16 bit depth for camera cells
What's a camera cell?
A photoreceptor (grayscale or color) that supplies one pixel in the image map read from the sensor.
is common these days
Maybe on some expensive medium format cameras.
and possibly 14 of those 16 bits are used for tonality. That means 64K different possible values.
2^16 = 65,536

2^14 = 16,384
Correct. My typo.
Consider that some cameras have a full-well capacity of something like 80,000 electrons, and so in order to capture all of these levels at one time, you'd need at least 17 bits. However, good analog-to-digital converters are very expensive, may not be available at high bit depths, and they don't work as well as you might hope. So much confusing business that goes into camera ISO adjustments is due to limited bit depth ADCs.
Excellent point. Banding at high ISO might well be eliminated by going to 24 bits. As far as ADC expense goes, who cares? If a technology jump is made to implement it and the process is worked into normal camera manufacturing, the cost is really not so pertinent.

Consider ... can the fineness of a 20mp sensor be improved in the bit depth dimension by going to 24 bit, so that it is very competitive with a 45mp sensor at 14 bit? And desirable over the 45mp?
I would assume that *all* of the bits are used for tonality. But unless you shoot at base ISO, with great lighting, the lower order bits are probably going to be too noisy to be of much practical use. For example, my D750 is reputed to have exceptionally low noise, but 14 bit mode is almost useless for anything much over base ISO.
The dynamic range of a sensor is not affected by bit depth. The slicing of tonality within a given dynamic range is. To be honest, I am not quite sure how a higher bit depth would affect higher-ISO noise levels in shadows, but I am interested to hear any feedback on that, possibly from existing cameras that do use 24 bit. I am not aware of any, but they may exist.
It must be acceptable to most photographers, as the desire for, say, 32-bit depth is rarely voiced, though it is available in some post-production software, usually with regard to color tonality memory addressing. 32 bits, if say, only 24 were used, translates to around 16.8 million possible values. The human eye can hardly differentiate that, even over a high mp resolution.
32 bits is not useful for a final image, especially if it is printed, where you only have a dynamic range of 100:1 or maybe 300:1 at best, so 8 bits usually suffice. 10 bits per color channel is useful for high dynamic range monitors, as it reportedly can avoid banding. I have a 10 bit monitor, but I can't see any difference with my images.
Now. What will the future be?
How the images are gamma encoded will have an effect as well. Raw data is usually linear, so that the device values read from the camera are proportional to the light falling on the pixels, with the exception of nonlinearities where the light is dim and camera read noise overwhelms the signal, and when the sensor approaches full saturation. The trouble with linear encoding is that half of the data bits are allocated to the brightest stop of light, half of the rest to the next stop, and so forth and so on.
Understood. And an increase of bit depth may draw out more shadow quality than highlight.
Log recording, often used in video, will allocate more bits to darker tones and fewer to bright, so you basically get more range from the same number of bits.

But 32 bits is great for image editing, because there are lots of calculations involved in rendering an image, and it is better if those calculations are done with high precision. High numbers of bits are also essential for high dynamic range photography, where multiple images with a wide range of exposures are taken. Exposure stacking, commonly used in high-end smartphones to obtain lower effective base ISOs, needs high precision as well.
I believe I mentioned this for post processing.
But 32 bits is also useful for regular images. For example, when converting a D750 raw file to sRGB, here are some calculations involved for an Incandescent white balance:
sRGB Red = 2.4 raw Red - 0.81 raw Green - 0.18 raw Blue

sRGB Green = -0.25 raw Red + 1.4 raw Green - 0.59 raw Blue

sRGB Blue = 0.11 raw Red - 0.87 raw Green + 4.0 raw Blue
Look at the last equation: the multiplication of the raw Blue channel by 4.0 is the same as dropping two bits of precision when using integer arithmetic. Likewise, in the middle equation, the 0.25 factor for the raw Red channel also reduces the precision for that channel by two bits. Also, summing the three values together will drop precision by nearly one bit. So even if you start with 16 bits of precision, you'll end up with less, once the calculation is done.

Another major issue is that a lot of raw processing software will truncate intermediate results, forcing values to 0 or the maximum integer value, which can lead to undesirable results with subsequent processing. I see this in Adobe software, where the blue color channel gets pegged to pure black under some conditions, and no amount of shadow boosting can bring up detail in that channel. The data is truncated at some intermediate processing step and it can not be recovered. (Raw processors which do not truncate intermediate results are said to use "unbounded" calculations: RawTherapee is one such unbounded processor.)
Are we talking about an accepted processing method that might be changed if 24 bits were used?
32 bit software does not use integers, but rather a floating point representation of data. Here, the data values can be extremely large or extremely low, with an unchanging number of bits assigned to the precision, no matter the size of the numbers. For one common 32 bit floating point data format, 23 bits are assigned to the base value or significand, one bit for the sign, and 8 bits are for the exponent. Here, multiplying or dividing numbers does not drop precision like with integer values, but rather just changes the exponent. The rounding errors associated with raw values read from a camera and subsequent calculations are practically insignificant when put in this format, a very tiny proportion compared to integer arithmetic.
Understood. And so as not to lose any lurkers on this thread, the boost from 14-bit to 32-bit in post-processing software ought to be explained, usually during merges, which you mentioned above. And here, I am foggy on whether the merge creates the higher depth by interpolation or something else. It is not something that currently emerges from the camera sensors.
I occasionally use RawTherapee in floating point mode, and the results are quite impressive, with fine tonal transitions and less noise and artifacts due to the processing. However, it is really slow, even with a decent processor. I would think that real-time floating point processing in cameras would be both expensive and power-hungry.
I don't see floating point being used when ADC comes directly from a 32 bit cell (24 bit).
The fineness of a photograph in black and white is strictly determined by the gray scale bit depth and the resolution of the image (mp).

The fineness of a photograph in color is determined by the bit depth and resolution of the image, but applied to possible colors, not just grayscale.

In film emulsions, gray scale and color tonality might be considered extremely fine, as the values are not discrete and limited by a fixed digital metric. The values are analog.
Except for the clumping of grain!
Certainly, but the graininess of a film emulsion is often a choice for a desired result.
Photographers are more often concerned with mp resolution than with ramping up bit depth.
That's kind of a shame, because really astounding results can be had if you get a really clean base image, even if the resolution isn't all that great. That's one of the secrets of modern smartphone cameras, when they do exposure stacking.
As it would appear that a film emulsion might produce finer gradations of tonality than the 14 bit digital discrete limit, I would like to press the question: are there photographers who are unhappy with the 14 bit tonality limit and, for fine art photography, would still prefer film or a 32 bit depth camera?
Under optimal conditions, film does extremely well, but I would think that ordinary digital photography greatly outperforms ordinary film photography. The evidence: where professionals and enthusiasts once used medium format, they now almost universally use full frame or even smaller formats.

I don't think that 32 bit cameras will be available any time soon, due to the unavailability of suitable analog-to-digital converters, as well as inadequate "full well capacity" or whatever the precise technological description happens to be.
Ah. And now you demonstrate that they are possible and, perhaps, will be the future. I think a motive will bring them to fruition much faster by driving a need for the missing links.
However, with exposure stacking and in-camera HDR, it is entirely possible to get high dynamic range images with lots of bits of image data, albeit not in one shot. One problem is that the lens itself limits dynamic range, and so many bits of data won't really help much, if it is linearly encoded. But if floating point is used, or some log format, then we could construct an image that has the same number of bits of data for both the extreme highlights as well as the darkest shadows. The shadows would have to be exposed more of course, but this could be done automatically.
The effect of lenses on 24 bit depth is a good point and I have considered it, but not mentioned it in my original post. I know there will be an impact, but am not certain how that may be treated as cameras evolve.
Seriously, with cameras peaking over 45mp, would a 24 bit depth tonality produce a visible and desirable increase in the fineness of photographic images?
Not unless some multi-shot method is used, or some sort of technique is used that will vary the exposure on a per-pixel basis (like maybe the quanta image sensor with its jots?)

But as mentioned, the greatest benefit is in the processing. Even if you don't collect all that many bits of raw data, having lots of bits of processing can help considerably.
Is it possible that future 35mm cameras will evolve to 32 bit?
Who knows? Maybe. I don't even hear of rumors that this is being planned. Everything uses low bit depth integer arithmetic as far as I know.
Economics currently rules these decisions and perhaps technological road blocks too. I perceive that market competition may drive the jump.
What significance would a 32-bit depth play in terms of noise reduction?
Very much, if exposure stacking or in-camera HDR is used.
What significance would a 32-bit depth play in large prints?
None because that much dynamic range or precision isn't needed at all when outputting to a printer. The benefit, as mentioned, is in processing the image to its final form.
Large format printers? Sure, a large image will usually be seen from a distance and the increase in quality will be moot in many cases. Exceptions? Detail photography, beginning with scientific, satellite and military applications? Be careful not to stifle advances in print technology by implying that nothing will change there to adapt to 24-bit image output.
 
The word you are looking for is "gradation".

The new model of Leica Monochrom gives extremely good gradation. So does shooting with 10x8 inch film (or plates).

Don Cox
I wasn't looking for the word. I just didn't use it. But yes, tonality gradation. I used the term "slicing" just now to avoid "banding", with its negative connotations. I should have used gradation.
 
While it is always better to use more bits for editing, the tonal gradations in an image are usually dithered by noise.

Even a full-frame sensor has enough noise to dither the difference between tones with 10-bit precision, even when you account for gamma encoding. A 1" sensor can dither 8-bit files quite successfully.

The only area of an image where 14-bit ADCs cause quantisation problems is in the bottom stop, which is usually too noisy to be of any practical use anyway. 16-bit ADCs make little practical difference.

Note, the JND for a human eye is just under 1% of the luminance value on a display. Even with no noise, we could not distinguish the levels on a 10-bit display with proper hybrid log-gamma encoding.
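The 1% JND figure makes the 10-bit claim easy to check with a one-liner; the 1000:1 contrast ratio below is just an illustrative display figure, not a measurement:

```python
import math

display_contrast = 1000     # illustrative display contrast ratio (1000:1)
jnd_ratio = 1.01            # adjacent tones ~1% apart are barely distinguishable

steps = math.log(display_contrast) / math.log(jnd_ratio)
print(f"~{steps:.0f} distinguishable steps")   # ~694, comfortably under 2**10 = 1024

# With a perceptually spaced (log-like) encoding, 10 bits of output therefore
# cover every step the eye can tell apart on such a display.
```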
 
While it is always better to use more bits for editing, the tonal gradations in an image are usually dithered by noise.

Even a full-frame sensor has enough noise to dither the difference between tones with 10-bit precision, even when you account for gamma encoding. A 1" sensor can dither 8-bit files quite successfully.

The only area of an image where 14-bit ADCs cause quantisation problems is in the bottom stop, which is usually too noisy to be of any practical use anyway. 16-bit ADCs make little practical difference.

Note, the JND for a human eye is just under 1% of the luminance value on a display. Even with no noise, we could not distinguish the levels on a 10-bit display with proper hybrid log-gamma encoding.
The thing I find amusing about this explanation is its relevance when there is a constant drive to increase the number of stops in dynamic range while giving no consideration to the gradation steps within an increasing DR.
 
While it is always better to use more bits for editing, the tonal gradations in an image are usually dithered by noise.

Even a full-frame sensor has enough noise to dither the difference between tones with 10-bit precision, even when you account for gamma encoding. A 1" sensor can dither 8-bit files quite successfully.

The only area of an image where 14-bit ADCs cause quantisation problems is in the bottom stop, which is usually too noisy to be of any practical use anyway. 16-bit ADCs make little practical difference.

Note, the JND for a human eye is just under 1% of the luminance value on a display. Even with no noise, we could not distinguish the levels on a 10-bit display with proper hybrid log-gamma encoding.
The thing I find amusing about this explanation is its relevance when there is a constant drive to increase the number of stops in dynamic range while giving no consideration to the gradation steps within an increasing DR.
Well, the point is that gradation in all but the bottom-most stops at the lowest ISOs has nothing at all to do with bit depth. Instead, ISO and sensor size are really what set gradation, as they set the photon shot noise.
 
While it is always better to use more bits for editing, the tonal gradations in an image are usually dithered by noise.

Even a full-frame sensor has enough noise to dither the difference between tones with 10-bit precision, even when you account for gamma encoding. A 1" sensor can dither 8-bit files quite successfully.

The only area of an image where 14-bit ADCs cause quantisation problems is in the bottom stop, which is usually too noisy to be of any practical use anyway. 16-bit ADCs make little practical difference.

Note, the JND for a human eye is just under 1% of the luminance value on a display. Even with no noise, we could not distinguish the levels on a 10-bit display with proper hybrid log-gamma encoding.
The thing I find amusing about this explanation is its relevance when there is a constant drive to increase the number of stops in dynamic range while giving no consideration to the gradation steps within an increasing DR.
Well, the point is that gradation in all but the bottom-most stops at the lowest ISOs has nothing at all to do with bit depth. Instead, ISO and sensor size are really what set gradation, as they set the photon shot noise.
Let's look at it from a different perspective - photoreceptors as photon counters. It is easier to quantify photons when there are many of them (highlights) than when they become progressively fewer (shadows), especially at the extreme shadow limits. This loss of capability to quantify photons in shadows (low light levels) can be taken as the reason gradations cannot be differentiated within the extreme shadow regions, as it may well be that no definable difference can be detected in the photon counts. The noise in shadows results from this vagueness, especially when gain is boosted in the receptors (higher ISO).

So, we can recognize a problem in the bottom stops that may not be resolvable in total.

But as we proceed upward from the bottom stop(s), is there a point where increased bit depth differentiation of the remaining stops may be valuable to generating finer detail above that point?
 
Bit depth gets into the arguments we have about different raw formats - 12 bit vs. 14 bit, for instance. You can read up on some of the older threads here about that.

For instance there are arguments about 12 bit raw vs 14 bit raw. I did some research and what I found was that for most uses there is no difference. I shoot JPG when I can get away with it and a JPG is only 8 bit, which in some cases is probably enough. When I say enough I mean enough bits to create sufficient gray scale values.

The problem is if you want to make significant tonal adjustments somewhere in the range. There are enough bits for the image until you want to take, for instance, the lower part of the tonal range and spread it out to the full range of values. Then the bits might get stretched apart enough that it starts to matter. In the raw format arguments this is sometimes given as the support for working with the larger 14 bit image files.

Therefore my take on it is that with the current dynamic range of sensors, there is no advantage to going beyond 14 bits, and certainly not beyond 16 bits.
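A tiny illustration of the "bits get stretched apart" point, using only the bottom quarter (roughly the bottom two stops) of an 8-bit range; the numbers are purely a toy, no raw converter involved:

```python
# Stretch the darkest quarter of an 8-bit range (codes 0-63) to fill 0-255.
shadow_codes = range(64)
stretched = sorted({round(v * 255 / 63) for v in shadow_codes})
print(len(stretched), "distinct output levels, spaced ~4 codes apart")

# The 64 shadow levels survive, but neighbouring output codes are now ~4 apart,
# which is where banding in a pushed shadow comes from. A 14-bit raw has 4096
# levels in that same bottom quarter, so the same push holds up far better.
```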
 
But as we proceed upward from the bottom stop(s), is there a point where increased bit depth differentiation of the remaining stops may be valuable to generating finer detail above that point?
Poisson gave us the answer to this 183 years ago. The answer is no. More gradation in the upper stops is useless. Photon shot noise is vastly higher in power than quantization noise in all but the lowest few stops of the lowest ISOs of the highest DR sensors today.
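For anyone who wants to see the Poisson point in numbers, here is shot-noise variance against quantization-noise variance (the standard step²/12 model) at several stops below clipping, reusing the illustrative 80,000-electron full well from earlier in the thread:

```python
full_well = 80_000                  # electrons at clipping (illustrative figure)
step = full_well / 2**14            # one 14-bit code in electrons, ~4.9 e-
quant_var = step**2 / 12            # quantization noise variance, ~2 e-^2

for stops_down in (1, 4, 8, 11, 13):
    signal = full_well / 2**stops_down
    shot_var = signal               # Poisson: variance equals the mean photon count
    print(f"{stops_down:>2} stops below clipping: shot/quant variance ratio ≈ {shot_var / quant_var:,.0f}")

# The ratio runs from ~20,000:1 near the top of the range down to single digits
# only in the last couple of stops, which is exactly where quantization finally matters.
```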
 
But as we proceed upward from the bottom stop(s), is there a point where increased bit depth differentiation of the remaining stops may be valuable to generating finer detail above that point?
Poisson gave us the answer to this 183 years ago. The answer is no. More gradation in the upper stops is useless. Photon shot noise is vastly higher in power than quantization noise in all but the lowest few stops of the lowest ISOs of the highest DR sensors today.
So, there is no possibility of improving fineness of image quality with higher bit depth.

END OF THREAD
 
But as we proceed upward from the bottom stop(s), is there a point where increased bit depth differentiation of the remaining stops may be valuable to generating finer detail above that point?
Poisson gave us the answer to this 183 years ago. The answer is no. More gradation in the upper stops is useless. Photon shot noise is vastly higher in power than quantization noise in all but the lowest few stops of the lowest ISOs of the highest DR sensors today.
So, there is no possibility of improving fineness of image quality with higher bit depth.

END OF THREAD
Ken is quite correct. Noise increases as slightly more than the square root of the signal.

But even if there is no noise, the human sensitivity to luminance differences is a more or less constant ratio of the overall luminance.

So, if we have a luminance of 1 nit on a display in the dark shadows, we need a difference of at least 0.01 nits to see any difference. If we look at the brightest display level, say 1000 nits, we now need a difference of 10 nits to see the same difference, or 1000X more.

So even though the noise increases, our sensitivity is being reduced even faster, and our sensitivity to hue differences is lower still.

This is why we only need 10-bits to display every visible tone we can see on any display, even an HDR display with 16 EV of contrast range, provided a log-gamma curve is applied to space them out evenly according to sensitivity, like a gamma curve does for prints.

The apparent improvement in 'tonality' with larger sensors is largely a result of using the better shadow SNR to extend the highlight range closer to film, or around 4 stops. This is only possible because...

1. Read noise is so low on modern sensors that we can meter at 9% or even lower without any fear of obnoxious shadow noise when we apply the output tone-curve.

2. On larger sensors, the SNR in midtones is higher, so underexposing by 1.5 stops (which would give us the same rolloff as a good medium format negative film) does not reveal excessive mid-tone noise when we make the same adjustment.

In addition, editing in a large colour space, like ProphotoRGB, will prevent chromaticity posterisation. A lot of issues I see in display images are due to editing in sRGB, which results in some horrible compromises with regard to colour transformations.

So editing in 16-bit or 32-bit is good. Editing in 32-bit with ProphotoRGB is better. But when it's all done, we can happily compress the file down to a 10-bit HEIC file in DCI-P3 and see no posterisation or other issues on an HDR cinema display.

So, DR and SNR are not irrelevant to tonality, but bit-depth is something of a red herring. It's not the NUMBER of tones that people rave about, it's their distribution. The closer we meter to 18%, the fewer stops we have between mid-tones and highlight clipping, which severely restricts our ability to achieve a nice distribution.
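As a footnote to the metering point, the highlight headroom for a given metering level is just a log2; the 18% and 9% figures are the ones mentioned above:

```python
import math

for meter_level in (0.18, 0.09):
    stops_to_clipping = math.log2(1 / meter_level)
    print(f"metering at {meter_level:.0%}: {stops_to_clipping:.1f} stops of highlight headroom")
# 18% leaves ~2.5 stops before clipping; metering at 9% buys one more stop,
# which is the extra highlight rolloff being described above.
```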
 
