The 16-Bit Fallacy: Why More Isn't Always Better in Medium Format Cameras

I thought the two lowest bits might carry some information with the larger sensor. I did a brief test similar to yours (link) and did not see much difference between 14 and 16 bits, single shot (IQ4 150). However, when using in-camera frame averaging, the results were much better with 16 bits, while 14 bits showed heavy posterization in deep shadows.
Interesting. Although not 16-bit, Olympus introduced something similar in 2016 on their E-M1 Mk II, which they termed HHHR, aka hand-held high resolution: stacking eight JPEG images processed in camera to gain the benefits of such stacking while hand-holding the E-M1 Mk II rather than using a tripod. HHHR appears in various subsequent Olympus and OM System cameras, as well as a few later M4/3 Panasonics.

--
Photography after all is interplay of light alongside perspective.
 
Hi,

And an 8-bit external bus to save some cost: the 8088 vs. the 8086 (so the 8088 sat in between the 8080 and the 8086). Those were interesting days.

Stan
 
Thank you for this post. Could you also post it on your blog, so I can link to it whenever the topic comes up, either on this or other forums :).
Done.
Thanks!
By "medium format sensors," I assume you refer to a 44x33 sensor, as 53.4x40 sensors have about one stop more DR.
Yes, as I said in the beginning, I was thinking about the GFX 100x and X2D. But I don't see why the P1 sensor would behave any differently with respect to what I said about precision.
While storing images in 16-bit format is not harmful, you could mention that it may involve larger raw file sizes (not with Hasselblad) and slower readouts. Slower readouts mean longer blackouts and more rolling shutter.
There is that, but I was trying to keep it simple.
+1
I have been using averaging a lot lately so I think this is a very important real and practical advantage to be aware of. Thanks
 
Thank you for the useful points to consider
 
As the GFX series drops you out of 16-bit whenever you use exposure bracketing, I have rarely had any 16-bit images to test with. I always use exposure bracketing. I am not sure if the X2D does the same thing. I can't remember whether the GFX drops you with focus bracketing as well, but I believe any bracketing series drops you to 14-bit.

For my needs, MF has always been a case of more native resolution at capture for printing later on, rather than interpolating up from smaller camera files.

Paul
 
I have been using averaging a lot lately so I think this is a very important real and practical advantage to be aware of. Thanks
If you average in post you can use any precision that pleases you.
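A quick sketch of why averaging in post escapes the capture bit depth (all numbers here are made up for illustration): as long as the noise dithers the signal across quantization steps, the floating-point mean of many 14-bit frames recovers tonal values finer than 1 DN.

```python
import random
import statistics

random.seed(0)

# Hypothetical deep-shadow signal and noise, in 14-bit DN (assumed values):
true_signal = 2.4      # mean signal, between quantization steps
read_noise = 1.5       # Gaussian read noise
frames = 256           # number of frames to average

def capture_14bit():
    """One simulated exposure, clipped and quantized to integer DN as a 14-bit ADC would."""
    analog = random.gauss(true_signal, read_noise)
    return max(0, min(16383, round(analog)))

# Averaging in float: the noise dithers the signal across quantization steps,
# so the mean recovers precision finer than one DN.
avg = statistics.mean(capture_14bit() for _ in range(frames))
print(f"one frame: {capture_14bit()} DN, average of {frames} frames: {avg:.2f} DN")
```

Each single frame can only take integer values, but the float average lands close to the true 2.4 DN, which is why the working precision in post matters more than the precision of each capture.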
 
”. . . it offers little to no advantage over 14-bit for dynamic range, tonal smoothness, or color accuracy.“

I feel a little bit like Jim Carrey here . . . LOL What would be the “little” advantage offered?

Rand
Black point accuracy.
That sounds like it can be a pretty important advantage, correct?
You can calibrate it properly in Lightroom. I’ve never found it an issue in real world photography.
 
If you average in post you can use any precision that pleases you.
Thanks, that was clear above but for now I don't bother or know how to quickly do that (though I should learn soon).
 
You can calibrate it properly in Lightroom. I’ve never found it an issue in real world photography.
I guess I didn't understand this point correctly from a simple Google search. Can you please expand a bit on what you mean by black point accuracy and how to calibrate it properly?
 
Thanks, that was clear above but for now I don't bother or know how to quickly do that (though I should learn soon).
You can do it in Ps.
 
Thanks, that was clear above but for now I don't bother or know how to quickly do that (though I should learn soon).
Apart from PS, you can also use these tools for frame averaging.

There is a Lightroom plug-in:

https://mackman.net/mergeraw/

There is a tool for Hasselblad files:

https://photography.marcoristuccia.com/hbcomposer-an-hasselblad-raw-file-composer/
 
1. Myth: 16-Bit Provides More Dynamic Range
A 16-bit file can, in theory, encode 96 dB of dynamic range versus 84 dB for 14-bit. However, the real-world dynamic range of medium format sensors is limited by photon shot noise and read noise, typically capping at around 14 stops (about 84 dB). Once quantization noise is well below the sensor's analog noise floor, increasing bit depth adds no practical dynamic range.
Jim, thank you again for another lesson. All clear except the first argument.

Isn't medium format @ ISO 100 a corner case where DR might benefit from a few extra bits? I know my FF sensor just reaches the limit of DR where 14 bits is enough.

thanks Ruud
 
Interesting. Although not 16-bit, Olympus introduced something similar in 2016 on their E-M1 Mk II, which they termed HHHR, aka hand-held high resolution: stacking eight JPEG images processed in camera to gain the benefits of such stacking while hand-holding the E-M1 Mk II rather than using a tripod. HHHR appears in various subsequent Olympus and OM System cameras, as well as a few later M4/3 Panasonics.
HHHR and HR (tripod-based) are not really frame averaging, though they do some averaging and improve noise. They stack raw files and output a "raw" or JPEG file.

While Olympus sensors read 12-bit data, they recently added a 14-bit mode for both HR options. IIRC, the 14-bit mode works better when lifting deep shadows. The higher bit option probably means that it is used internally for assembly and, therefore, produces higher IQ.
 
Apart from PS, you can also use these tools for frame averaging.

There is a Lightroom plug-in:

https://mackman.net/mergeraw/

There is a tool for Hasselblad files:

https://photography.marcoristuccia.com/hbcomposer-an-hasselblad-raw-file-composer/
Very useful, thanks both
 
Isn't medium format @ ISO 100 a corner case where DR might benefit from a few extra bits? I know my FF sensor just reaches the limit of DR where 14 bits is enough.
I believe it is 13 bits for FF, and Nikon's FF cameras use 13 bits internally (source: Thom Hogan).
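To put rough numbers on the first argument (a toy model; the read-noise figure is assumed for illustration): total noise is the quadrature sum of read noise and quantization noise (one ADC step divided by sqrt(12)), so once read noise is a few DN, the 16-bit and 14-bit engineering DR figures are nearly identical, while 12-bit starts to cost a little.

```python
import math

# Toy model with assumed numbers: read noise of a few DN dominates the
# deep shadows; quantization noise of an n-bit ADC is one step / sqrt(12).
full_scale = 2**14 - 1        # full scale expressed in 14-bit DN
read_noise_dn = 3.0           # assumed read noise, in 14-bit DN

def engineering_dr(bits, ref_bits=14):
    """Engineering DR in stops: full scale over total noise (quadrature sum)."""
    step = 2.0 ** (ref_bits - bits)          # ADC step size, in 14-bit DN
    q_noise = step / math.sqrt(12)           # quantization noise of that step
    total = math.hypot(read_noise_dn, q_noise)
    return math.log2(full_scale / total)

dr = {bits: engineering_dr(bits) for bits in (16, 14, 12)}
for bits, stops in dr.items():
    print(f"{bits}-bit: {stops:.3f} stops")
```

With these numbers, going from 14 to 16 bits buys well under a hundredth of a stop, whereas dropping to 12 bits costs a visible fraction of a stop, which matches the "quantization noise below the analog noise floor" argument.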
 
I guess I didn't understand this point correctly from a simple Google search. Can you please expand a bit on what you mean by black point accuracy and how to calibrate it properly?
In RAW development, black point accuracy means ensuring that the mapping of zero-signal (or near-zero) data is handled correctly, neither artificially crushed nor lifted. Accurate black point calibration begins with subtracting the correct digital black level, continues with linear tone handling, and ends with perceptually faithful rendering.

Here are the controls in Lr:

[screenshot of the relevant Lightroom controls]

The top control is the one you want for this.
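To illustrate what black point accuracy buys you, here is a toy example (all numbers assumed): subtracting a black level that is only a few DN wrong crushes the deepest tones and distorts the ratios between them.

```python
# Toy illustration (assumed numbers): the effect of subtracting the wrong
# digital black level before linear processing.
black_level = 512            # actual black level recorded by the camera, in DN
error = 4                    # hypothetical calibration error, in DN

def linearize(raw_dn, assumed_black):
    """Subtract the black level and clip negatives, as a raw converter would."""
    return max(0, raw_dn - assumed_black)

# Two deep-shadow tones one stop apart (4 and 8 DN above true black):
for raw in (black_level + 4, black_level + 8):
    good = linearize(raw, black_level)
    bad = linearize(raw, black_level + error)
    print(f"raw {raw}: correct {good} DN, with {error} DN error {bad} DN")
```

With the correct black level the two tones come out 4 and 8 DN, one stop apart; with a 4 DN error they come out 0 and 4 DN, so the darker tone is crushed to black and the stop relationship is destroyed. That is the kind of error the extra low-order bits (or a black point calibration) help avoid.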

--
https://blog.kasson.com
 