The 16-Bit Fallacy: Why More Isn't Always Better in Medium Format Cameras

JimKasson

I've written on this subject before, but I've not done a piece that deals with the common counterarguments. Here is one.

The Fujifilm GFX 100-series and Hasselblad X2D cameras support 16-bit RAW files. At first glance, this seems like an obvious win: more bits should mean more data, more dynamic range, and more flexibility in post-processing. But in practice, the benefits of 16-bit precision over 14-bit are negligible for photographic applications. Here are the arguments often made in favor of 16-bit capture and why they don't hold up under scrutiny.

1. Myth: 16-Bit Provides More Dynamic Range

A 16-bit file can, in theory, encode 96 dB of dynamic range versus 84 dB for 14-bit. However, the real-world dynamic range of medium format sensors is limited by photon shot noise and read noise, typically capping at around 14 stops (about 84 dB). Once quantization noise is well below the sensor's analog noise floor, increasing bit depth adds no practical dynamic range.
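To put rough numbers on that (a quick sketch using the ideal-quantizer figure of about 6.02 dB per bit; the 14-stop sensor figure is the ballpark from above, not a measurement):

```python
import math

def dynamic_range_db(bits):
    """Ideal quantizer dynamic range: 20*log10(2**bits), ~6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

sensor_db = 14 * 20 * math.log10(2)  # ~14 stops of real sensor DR, in dB

print(f"14-bit container: {dynamic_range_db(14):.1f} dB")   # 84.3 dB
print(f"16-bit container: {dynamic_range_db(16):.1f} dB")   # 96.3 dB
print(f"sensor (~14 stops): {sensor_db:.1f} dB")            # 84.3 dB
# The ~12 dB of extra container headroom sits below the sensor's noise floor.
```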

2. Myth: 16-Bit Prevents Banding in Edits

It is often claimed that more bits reduce banding in gradients during aggressive post-processing. But in RAW files, the tonal resolution of a 14-bit file already exceeds the eye's ability to detect steps, especially once converted to a working color space and edited in a 16-bit pipeline. Any banding in real workflows is usually due to limitations in output color space or lossy compression, not insufficient bit depth in the original capture. In addition, shot noise acts as a natural dither, smearing the signal across the quantization steps so that discrete levels never survive as visible bands.
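The dithering effect can be shown with a toy model (a sketch, not a sensor simulation): quantize a shallow ramp spanning only a few codes, with and without roughly 1 LSB of Gaussian noise standing in for shot noise.

```python
import random
import statistics

LSB = 1.0  # one quantization step, in arbitrary signal units
random.seed(0)

def quantize(x):
    """Round to the nearest code, like an ideal ADC."""
    return round(x)

# A shallow ramp spanning only a few codes: the worst case for banding.
ramp = [i * 4 / 999 for i in range(1000)]  # 0.0 .. 4.0 LSB

# Without noise, the ramp collapses onto a handful of codes (visible bands).
clean = [quantize(x) for x in ramp]
print("distinct codes, no noise:", len(set(clean)))  # 5

# With ~1 LSB of noise the quantization error is decorrelated, and the
# local mean tracks the ramp to well under 1 LSB: no bands survive.
def dithered_mean(level, n=20000):
    return statistics.fmean(quantize(level + random.gauss(0, LSB))
                            for _ in range(n))

errors = [abs(dithered_mean(level) - level) for level in ramp[::100]]
print(f"max error with noise dither: {max(errors):.3f} LSB")
```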

3. Myth: 16-Bit is Better for Color Grading

While more bits may benefit extreme color grading in video or scientific imagery, photographic sensors do not generate color information with 16-bit fidelity. The signal is already quantized, and color differences in the bottom 2 bits of a 16-bit file are buried in noise. Color precision is far more influenced by lens transmission, sensor design, and spectral response than bit depth.

4. Myth: 16-Bit is Needed for Future-Proofing

Some argue that 16-bit data ensures longevity in the face of evolving editing software or display technologies. But if the source data carries no meaningful information in the bottom bits, storing them is like preserving empty decimal places. 14-bit files already provide more granularity than is practically usable for current sensors.

5. Myth: Scientific or Industrial Applications Justify 16-Bit

This is true for specialized imaging tasks like fluorescence microscopy or machine vision, but those use cases have little in common with handheld photography. In those domains, exposure, temperature, and electronic noise are tightly controlled. In photography, the environment is uncontrolled and analog noise dominates.

Conclusion

The 16-bit RAW format in cameras like the GFX 100-series and Hasselblad X2D is more about marketing than measurable photographic benefit. While there is no harm in storing images in 16-bit format, it offers little to no advantage over 14-bit for dynamic range, tonal smoothness, or color accuracy. Photographers should base their expectations on physics and perceptual limits—not on file format headlines.
 
This blog post refers to an extreme case where the 16-bit file had some advantage. Can you frame these extreme cases?
 
Thank you for this post. Could you also post it on your blog, so I can link to it whenever the topic comes up, either on this or other forums :).

By "medium format sensors," I assume you refer to a 44x33 sensor, as 53.4x40 sensors have about one stop more DR.

While storing images in 16-bit format is not harmful, you could mention that it may involve larger raw file sizes (not with Hasselblad) and slower readouts. Slower readouts mean longer blackouts and more rolling shutter.
 
This blog post refers to an extreme case where the 16-bit file had some advantage. Can you frame these extreme cases?
Black point compensation. Easily fixed in Lr.
 
Fujifilm was interested in Toshiba and almost bought it, primarily for the medical division.

Fujifilm has agreements with SanDisk.

Fujifilm owns Hitachi, including the storage division.

Fujifilm is also the world's largest supplier of data tape.

So shoot 16-bit, use more storage and keep the backup tape rolling.
 
Thanks for this...
 
“. . . it offers little to no advantage over 14-bit for dynamic range, tonal smoothness, or color accuracy.”

I feel a little bit like Jim Carrey here . . . LOL What would be the “little” advantage offered?

Rand
 
Hi,

I do.

R&D didn't want to add 16 bit, knowing full well about its uselessness. Marketing wanted it as a bullet point. And that is fair enough. It sells a few extra units.

Me, I know all 16 bit storage is gonna get me is larger files where those extra bits are ratty. Well, perhaps at times Bit 15 might be slightly useful, but I can't have it without that guaranteed ratty Bit 16 tagging along.

Stan
 
Hi,

I do.

R&D didn't want to add 16 bit, knowing full well about its uselessness. Marketing wanted it as a bullet point. And that is fair enough. It sells a few extra units.

Me, I know all 16 bit storage is gonna get me is larger files where those extra bits are ratty. Well, perhaps at times Bit 15 might be slightly useful, but I can't have it without that guaranteed ratty Bit 16 tagging along.

Stan
Bit by bit we'll get to the bottom of this.
 
“. . . it offers little to no advantage over 14-bit for dynamic range, tonal smoothness, or color accuracy.”

I feel a little bit like Jim Carrey here . . . LOL What would be the “little” advantage offered?

Rand
Black point accuracy.
 
Thank you for this post. Could you also post it on your blog, so I can link to it whenever the topic comes up, either on this or other forums :).
Done.
By "medium format sensors," I assume you refer to a 44x33 sensor, as 53.4x40 sensors have about one stop more DR.
Yes, as I said in the beginning, I was thinking about the GFX 100x and X2D. But I don't see why the P1 sensor would behave any differently with respect to what I said about precision.
While storing images in 16-bit format is not harmful, you could mention that it may involve larger raw file sizes (not with Hasselblad) and slower readouts. Slower readouts mean longer blackouts and more rolling shutter.
There is that, but I was trying to keep it simple.

--
https://blog.kasson.com
 
Thank you for this post. Could you also post it on your blog, so I can link to it whenever the topic comes up, either on this or other forums :).
Done.
Thanks!
By "medium format sensors," I assume you refer to a 44x33 sensor, as 53.4x40 sensors have about one stop more DR.
Yes, as I said in the beginning, I was thinking about the GFX 100x and X2D. But I don't see why the P1 sensor would behave any differently with respect to what I said about precision.
I thought the two lowest bits may carry some information with the larger sensor. I did a brief test similar to yours (link) and did not see much difference between 14 and 16 bits, single shot (IQ4 150). However, when using in-camera frame averaging, the results were much better with 16 bits, while 14 bits showed heavy posterization in deep shadows.
While storing images in 16-bit format is not harmful, you could mention that it may involve larger raw file sizes (not with Hasselblad) and slower readouts. Slower readouts mean longer blackouts and more rolling shutter.
There is that, but I was trying to keep it simple.
+1
 
Thank you for this post. Could you also post it on your blog, so I can link to it whenever the topic comes up, either on this or other forums :).
Done.
Thanks!
By "medium format sensors," I assume you refer to a 44x33 sensor, as 53.4x40 sensors have about one stop more DR.
Yes, as I said in the beginning, I was thinking about the GFX 100x and X2D. But I don't see why the P1 sensor would behave any differently with respect to what I said about precision.
I thought the two lowest bits may carry some information with the larger sensor. I did a brief test similar to yours (link) and did not see much difference between 14 and 16 bits, single shot (IQ4 150). However, when using in-camera frame averaging, the results were much better with 16 bits, while 14 bits showed heavy posterization in deep shadows.
That makes sense, since you’re averaging out the read noise. But that’s implementation dependent. You could easily produce 16 bit averaged files from 14 bit input files.
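A toy version of that point (hypothetical numbers, not a model of the IQ4): average many 14-bit quantizations of a noisy deep-shadow level, and the mean carries sub-LSB precision that only a wider container, such as 16 bits, can hold.

```python
import random
import statistics

random.seed(1)

TRUE_LEVEL = 2.3   # deep-shadow signal, in 14-bit LSBs (between two codes)
READ_NOISE = 1.0   # rms read noise, in 14-bit LSBs
FRAMES = 256

# Each frame is captured and quantized to an integer 14-bit code.
frames = [round(TRUE_LEVEL + random.gauss(0, READ_NOISE))
          for _ in range(FRAMES)]

# Averaging 256 frames cuts the noise by sqrt(256) = 16x, i.e. roughly
# 4 extra bits of usable precision -- worth storing in a wider container.
recovered = statistics.fmean(frames)
print(f"recovered level: {recovered:.2f} LSB (true level {TRUE_LEVEL})")
```

Whether the camera actually widens the container for averaged output is, as noted, implementation dependent.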
While storing images in 16-bit format is not harmful, you could mention that it may involve larger raw file sizes (not with Hasselblad) and slower readouts. Slower readouts mean longer blackouts and more rolling shutter.
There is that, but I was trying to keep it simple.
+1
 
The only advantage I surmised was that if you push two adjacent colour values around once or several times, they would stay apart and not irreversibly become one - and that's why HB took up 16 bit to move the output from the sensor into their liking for colour. Though that was a guess, and just theoretical on my part.

My main question is more fundamental: why do computers often follow 8-bit, 16-bit, 32-bit, 64-bit? Is there some advantage to this neat doubling?
 
The only advantage I surmised was that if you push two adjacent colour values around once or several times, they would stay apart and not irreversibly become one - and that's why HB took up 16 bit to move the output from the sensor into their liking for colour. Though that was a guess, and just theoretical on my part.

My main question is more fundamental: why do computers often follow 8-bit, 16-bit, 32-bit, 64-bit? Is there some advantage to this neat doubling?
Note that those are powers of 2 (2³, 2⁴, 2⁵, 2⁶). Thus they align naturally with binary architecture.

/Bill
 
So here is the practical question: what is the recommended RAW mode on the GFX series (100RF) that preserves high IQ while saving space?
 
The only advantage I surmised was that if you push two adjacent colour values around once or several times, they would stay apart and not irreversibly become one - and that's why HB took up 16 bit to move the output from the sensor into their liking for colour. Though that was a guess, and just theoretical on my part.

My main question is more fundamental: why do computers often follow 8-bit, 16-bit, 32-bit, 64-bit? Is there some advantage to this neat doubling?
Note that those are powers of 2 (2³, 2⁴, 2⁵, 2⁶). Thus they align naturally with binary architecture.

/Bill
Right, so there’s an advantage for raw files to also align with that?
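One way to see the alignment advantage: 16-bit samples map directly onto whole bytes and machine words, while odd widths like 14 bits must be bit-packed and then shifted and masked apart on every read. A small sketch with made-up sample values:

```python
import struct

samples = [0x1ABC, 0x2DEF, 0x0123, 0x3FFF]  # four 14-bit sensor codes

# 16-bit-aligned storage: one struct call each way, trivially seekable.
aligned = struct.pack("<4H", *samples)
assert struct.unpack("<4H", aligned) == tuple(samples)
print(len(aligned), "bytes, 16-bit aligned")  # 8

# Bit-packed 14-bit storage: saves a byte, but needs shift-and-mask work
# to get each sample back out.
bits = 0
for s in samples:
    bits = (bits << 14) | s
packed = bits.to_bytes((14 * len(samples) + 7) // 8, "big")
print(len(packed), "bytes, bit-packed")  # 7
```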
 
On the question of the neat doubling of bit sizes in computer architecture, I don't think it was historically that neat. To begin with, the bit length of internal registers wasn't always the same as the bit length of memory addresses. For example, the early IBM PCs had 16-bit internal registers and 20-bit physical addresses.
 
