Tonality in Terms of Possible Gray Scale Values (Bit Depth).

Started 1 month ago | Discussions thread
Mark Scott Abeln • Forum Pro • Posts: 15,564
Bits and their uses

flyinglentris wrote:

A 16 bit depth for camera cells

What's a camera cell?

is common these days

Maybe on some expensive medium format cameras.

and possibly 14 of those 16 bits are used for tonality. That means 64K different possible values.

2^16 = 65,536

2^14 = 16,384

Consider that some cameras have a full-well capacity of something like 80,000 electrons, so to capture all of those levels at once you'd need at least 17 bits. However, good analog-to-digital converters are very expensive, may not be available at high bit depths, and don't work as well as you might hope. Much of the confusing business behind camera ISO adjustments comes down to limited-bit-depth ADCs.
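A quick sanity check on that 17-bit figure, using the 80,000-electron example above (real sensors vary):

```python
import math

# Bits needed to give every electron count of an 80,000 e- full well
# its own integer code value: ceil(log2(80,000))
full_well_electrons = 80_000
bits_needed = math.ceil(math.log2(full_well_electrons))
print(bits_needed)  # → 17, since 2^16 = 65,536 falls short
```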

I would assume that *all* of the bits are used for tonality. But unless you shoot at base ISO, with great lighting, the lower order bits are probably going to be too noisy to be of much practical use. For example, my D750 is reputed to have exceptionally low noise, but 14 bit mode is almost useless for anything much over base ISO.

It must be acceptable to most photographers, as the desire for say, 32-bit depth, is rarely begged for, but is available in some post production software, usually with regard to color tonality memory addressing. 32 bits, if say, only 24 were used, translates to around 16.8 million possible values. The human eye can hardly differentiate that, even over a high mp resolution.

32 bits is not useful for a final image, especially if it is printed, where you only have a dynamic range of 100:1 or maybe 300:1 at best, so 8 bits usually suffice. 10 bits per color channel is useful for high dynamic range monitors, as it reportedly can avoid banding. I have a 10 bit monitor, but I can't see any difference with my images.
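For a rough sense of those numbers, converting a contrast ratio to stops is just a base-2 logarithm:

```python
import math

# Stops of dynamic range implied by the print contrast ratios above:
for ratio in (100, 300):
    print(f"{ratio}:1 = {round(math.log2(ratio), 1)} stops")
# 100:1 is about 6.6 stops and 300:1 about 8.2 stops — which is why a
# gamma-encoded 8-bit channel usually suffices for print output.
```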

How the images are gamma encoded has an effect as well. Raw data is usually linear, so the device values read from the camera are proportional to the light falling on the pixels, except where the light is so dim that camera read noise overwhelms the signal, and where the sensor approaches full saturation. The trouble with linear encoding is that half of the data bits are allocated to the brightest stop of light, half of the rest to the next stop, and so on.

Log recording, often used in video, will allocate more bits to darker tones and fewer to bright, so you basically get more range from the same number of bits.
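The allocation can be sketched numerically; here is 14-bit linear versus a toy log curve that gives every stop an equal share (the 14-stop span is just an illustration, not any particular camera's log format):

```python
# How a 14-bit linear encoding spreads its 16,384 code values: the
# brightest stop gets half of them, the next stop half of the rest...
BITS = 14
codes = 2 ** BITS

linear_per_stop = []
remaining = codes
for _ in range(5):                    # the brightest five stops
    half = remaining // 2
    linear_per_stop.append(remaining - half)
    remaining = half
print(linear_per_stop)                # → [8192, 4096, 2048, 1024, 512]

# A log curve spanning, say, 14 stops gives each stop an equal share:
print(codes // 14)                    # → 1170 codes per stop, bright or dark
```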

But 32 bits is great for image editing, because there are lots of calculations involved in rendering an image, and it is better if those calculations are done with high precision. High numbers of bits are also essential for high dynamic range photography, where multiple images with a wide range of exposures are taken. Exposure stacking, commonly used in high-end smartphones to obtain lower effective base ISOs, needs high precision as well.

But 32 bits is also useful for regular images. For example, when converting a D750 raw file to sRGB, here are some calculations involved for an Incandescent white balance:

sRGB Red = 2.4 raw Red - 0.81 raw Green - 0.18 raw Blue

sRGB Green = -0.25 raw Red + 1.4 raw Green - 0.59 raw Blue

sRGB Blue = 0.11 raw Red - 0.87 raw Green + 4.0 raw Blue

Look at the last equation: the multiplication of the raw Blue channel by 4.0 is the same as dropping two bits of precision when using integer arithmetic. Likewise, in the middle equation, the 0.25 factor for the raw Red channel also reduces the precision for that channel by two bits. Also, summing the three values together will drop precision by nearly one bit. So even if you start with 16 bits of precision, you'll end up with less, once the calculation is done.
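To make the bit-dropping concrete, here is a toy fixed-point sketch using the blue- and green-equation coefficients above. The 8.8 fixed-point scheme (coefficients scaled by 256, result shifted back down) is my own illustration, not how any particular raw converter is implemented:

```python
# sRGB Blue = 0.11 R - 0.87 G + 4.0 B, done in toy 8.8 fixed point.
def blue_int(r, g, b):
    return (28 * r - 223 * g + 1024 * b) >> 8

# The x4.0 factor means adjacent raw Blue codes land 4 output codes
# apart, so the bottom two bits of the output carry no information:
print(blue_int(1000, 1000, 5000) - blue_int(1000, 1000, 4999))  # → 4

# The 0.25 factor (from the green equation) goes the other way:
# four adjacent raw Red codes collapse into a single output code.
quarter = lambda r: (64 * r) >> 8   # 0.25 in 8.8 fixed point
print(sorted({quarter(r) for r in range(4000, 4004)}))  # → [1000]
```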

Another major issue is that a lot of raw processing software will truncate intermediate results, forcing values to 0 or the maximum integer value, which can lead to undesirable results with subsequent processing. I see this in Adobe software, where the blue color channel gets pegged to pure black under some conditions, and no amount of shadow boosting can bring up detail in that channel. The data is truncated at some intermediate processing step and cannot be recovered. (Raw processors which do not truncate intermediate results are said to use "unbounded" calculations: RawTherapee is one such unbounded processor.)
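A minimal sketch of why clamping intermediates is destructive. The pipeline and its numbers are invented for illustration; they are not Adobe's actual processing steps:

```python
def clip(v):
    return max(0.0, min(1.0, v))

def render(raw, truncate):
    x = raw - 0.10             # a matrix step pushes dark values negative
    if truncate:
        x = clip(x)            # truncating pipeline: clamp the intermediate
    x = (x + 0.12) * 5         # later step: offset plus a shadow boost
    return clip(x)             # the final output is always clamped

# Two different dark raw values:
print(round(render(0.02, truncate=True), 2),
      round(render(0.05, truncate=True), 2))    # → 0.6 0.6 (detail gone)
print(round(render(0.02, truncate=False), 2),
      round(render(0.05, truncate=False), 2))   # → 0.2 0.35 (detail survives)
```

Clamping only at the very end ("unbounded" processing) lets the later shadow boost recover the difference between the two raw values.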

32 bit software does not use integers, but rather a floating point representation of the data. Here, the data values can be extremely large or extremely small, with an unchanging number of bits assigned to the precision, no matter the magnitude of the numbers. For one common 32 bit floating point data format, 23 bits are assigned to the base value or significand, one bit to the sign, and 8 bits to the exponent. Here, multiplying or dividing numbers does not drop precision like with integer values, but rather just changes the exponent. The rounding errors associated with raw values read from a camera and subsequent calculations are practically insignificant when put in this format, a very tiny proportion compared to integer arithmetic.
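That layout can be poked at directly with Python's struct module; a small sketch of the standard binary32 format:

```python
import struct

def f32_fields(x):
    """Unpack IEEE 754 single precision: 1 sign, 8 exponent, 23 significand bits."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF

print(f32_fields(1.0))    # → (0, 127, 0): the exponent is stored with a +127 bias
print(f32_fields(-2.0))   # → (1, 128, 0)

def f32_next(x):
    """Next representable float32 above x (x > 0): bump the bit pattern by one."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return struct.unpack(">f", struct.pack(">I", bits + 1))[0]

# The relative step to the next representable value is roughly 2^-23 at
# every magnitude — precision does not depend on how big the number is:
for x in (0.001, 1.0, 80000.0):
    print((f32_next(x) - x) / x)   # each around 1e-7
```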

I occasionally use RawTherapee in floating point mode, and the results are quite impressive, with fine tonal transitions and less noise and artifacts due to the processing. However, it is really slow, even with a decent processor. I would think that real-time floating point processing in cameras would be both expensive and power-hungry.

The fineness of a photograph in black and white is strictly determined by the gray scale bit depth and the resolution of the image (mp).

The fineness of a photograph in color is determined by the bit depth and resolution of the image, but applied to possible colors, not just grayscale.

In film emulsions, gray scale and color tonality might be considered extremely fine as its values are not discrete and limited by a fixed digital metric. The values are analog.

Except for the clumping of grain!

Photographers are more often concerned with mp resolution than with ramping up bit depth.

That's kind of a shame, because really astounding results can be had if you get a really clean base image, even if the resolution isn't all that great. That's one of the secrets of modern smartphone cameras, when they do exposure stacking.

As it would appear that a film emulsion might produce finer gradations of tonality than the 14 bit digital discrete limit, I would like to press the question whether there are some photographers who are not happy with the 14 bit tonality limit and for fine art photography, would still prefer film, or a 32 bit depth camera?

Under optimal conditions, film does extremely well, but I would think that ordinary digital photography greatly outperforms ordinary film photography. The evidence: where professionals and enthusiasts once used medium format film, they now almost universally use full frame or even smaller digital formats.

I don't think that 32 bit cameras will be available any time soon, due to the unavailability of suitable analog-to-digital converters, as well as inadequate "full well capacity" or whatever the precise technological description happens to be.

However, with exposure stacking and in-camera HDR, it is entirely possible to get high dynamic range images with lots of bits of image data, albeit not in one shot. One problem is that the lens itself limits dynamic range, so many extra bits of data won't really help much if the data is linearly encoded. But if floating point, or some log format, is used, then we could construct an image with the same number of bits of data in the extreme highlights as in the darkest shadows. The shadows would have to be exposed more, of course, but this could be done automatically.

Seriously, with cameras peaking over 45mp, would a 24 bit depth tonality produce a visible and desirable increase in the fineness of photographic images?

Not unless some multi-shot method is used, or some sort of technique is used that will vary the exposure on a per-pixel basis (like maybe the quanta image sensor with its jots?)

But as mentioned, the greatest benefit is in the processing. Even if you don't collect all that many bits of raw data, having lots of bits of processing can help considerably.

Is it possible that future 35mm cameras will evolve to 32 bit?

Who knows? Maybe. I don't even hear of rumors that this is being planned. Everything uses low bit depth integer arithmetic as far as I know.

What significance would a 32-bit depth play in terms of noise reduction?

Very much, if exposure stacking or in-camera HDR is used.

What significance would a 32-bit depth play in large prints?

None, because that much dynamic range or precision isn't needed at all when outputting to a printer. The benefit, as mentioned, is in processing the image to its final form.
