The 16-Bit Fallacy: Why More Isn't Always Better in Medium Format Cameras

Jim, thank you for that explanation. "I think" I understand! I have questions.

Does this change with larger/better sensors?
Larger, no, assuming same pixel pitch and technology.

Better, quite possibly.
Meaning, if a sensor larger than 44x33 comes along with better tech and more efficiency (whatever that means technically), would it benefit from 16-bit capture?

The question also goes in the opposite direction... How much benefit do 14bit files offer over 12bit, or is that dependent on sensor size?
At base ISO with state of the art CMOS sensors, 14 bits offers real advantages over 12 bits. At higher ISOs, not so much. Above ISO 800 or so, not at all.
M43 cameras use 12 bits. I am unsure whether using 14 bits would give them any advantage, as 12 bits seems enough for the max DR that their sensors can provide.
In other words, does bit depth have an optimal value for each sensor size, say 14bit for 44x33, 12bit for 36x24, etc.
As I said above, no.
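
To put rough numbers on the base-ISO versus high-ISO point above, here is a minimal sketch; the full-well and read-noise figures are illustrative assumptions, not measurements of any particular camera:

```python
import math

# Illustrative assumptions, not measurements of any specific camera
full_well = 50_000                 # electrons at base ISO
base_iso = 100
read_noise = {100: 3.0, 200: 2.5, 400: 2.0, 800: 1.5, 1600: 1.5}   # e-, assumed

for iso, rn in read_noise.items():
    clip = full_well * base_iso / iso        # input-referred clipping point, e-
    for bits in (12, 14):
        lsb = clip / 2**bits                 # input-referred electrons per DN
        q = lsb / math.sqrt(12)              # RMS quantization noise
        total = math.hypot(rn, q)            # read + quantization noise combined
        print(f"ISO {iso:>4}, {bits}-bit: LSB {lsb:5.2f} e-, "
              f"read+quant noise {total:4.2f} e- (read alone {rn:.1f} e-)")
```

With these assumed numbers, 12-bit quantization adds a visible noise penalty at base ISO but is negligible by ISO 800, which matches the answer above.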
 
The sensor in the GFX 100x and the X2D behaves like those in smaller format cameras that use the same pixel aperture. So, no.
Of course. Thank you, Jim.

But then the other way around is also true: sensors with big pixels (the Sony A7S series, 12 MP full frame) could benefit from extra bits.

But they are marketed for video, where you need speed (read-out) over granularity (extra bits). So it doesn't make sense either (to answer my own question).
 
I suppose the question is: at what point will the improvement in DR provided by an increase in bit depth be such that it can't be ignored, and will the resulting file size be too large to be of any use to most photographers?
I've written on this subject before, but I've not done a piece that deals with the common counterarguments. Here is one.

The Fujifilm GFX 100-series and Hasselblad X2D cameras support 16-bit RAW files. At first glance, this seems like an obvious win: more bits should mean more data, more dynamic range, and more flexibility in post-processing. But in practice, the benefits of 16-bit precision over 14-bit are negligible for photographic applications. Here are the arguments often made in favor of 16-bit capture and why they don't hold up under scrutiny.

1. Myth: 16-Bit Provides More Dynamic Range
A 16-bit file can, in theory, encode 96 dB of dynamic range versus 84 dB for 14-bit. However, the real-world dynamic range of medium format sensors is limited by photon shot noise and read noise, typically capping at around 14 stops (about 84 dB). Once quantization noise is well below the sensor's analog noise floor, increasing bit depth adds no practical dynamic range.
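
As a concrete illustration of where the quantization noise sits relative to the analog noise floor, here is a minimal sketch; the full-well and read-noise values are assumptions chosen to be representative, not measured figures for the GFX or X2D:

```python
import math

# Assumed, representative values; not measurements of any specific sensor
full_well = 50_000      # electrons at base ISO
read_noise = 3.0        # electrons RMS

dr_stops = math.log2(full_well / read_noise)       # engineering DR of the sensor
dr_db = 20 * math.log10(full_well / read_noise)
print(f"Sensor DR ~ {dr_stops:.1f} stops ({dr_db:.0f} dB)")

for bits in (14, 16):
    lsb = full_well / 2**bits                      # electrons per ADC step
    q_noise = lsb / math.sqrt(12)                  # RMS quantization noise
    total = math.hypot(read_noise, q_noise)
    print(f"{bits}-bit: step {lsb:.2f} e-, quantization noise {q_noise:.2f} e-, "
          f"read+quantization noise {total:.2f} e-")
```

With these numbers the 14-bit quantization noise is already well under the read noise, so the two extra bits change the total noise floor by roughly a tenth of an electron in this example.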

2. Myth: 16-Bit Prevents Banding in Edits
It is often claimed that more bits reduce banding in gradients during aggressive post-processing. But in RAW files, the tonal resolution of a 14-bit file already exceeds the eye's ability to detect steps, especially once converted to a working color space and edited in a 16-bit pipeline. Any banding in real workflows is usually due to limitations in output color space or lossy compression, not insufficient bit depth in the original capture. In addition, shot noise acts as a dither, smearing out the quantization steps.
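
The dithering effect of shot noise can also be checked directly: quantize a smooth, noisy shadow gradient at 14 and at 16 bits and compare the error introduced. This is a sketch with assumed noise levels, not a model of any camera's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

full_well = 50_000                             # assumed full well, electrons
signal = np.linspace(50, 500, 1_000_000)       # smooth deep-shadow gradient, e-

# Photon shot noise plus a few electrons of assumed read noise
noisy = rng.poisson(signal).astype(float) + rng.normal(0.0, 3.0, signal.size)

analog_noise = np.std(noisy - signal)
for bits in (14, 16):
    lsb = full_well / 2**bits
    quantized = np.round(noisy / lsb) * lsb
    q_err = np.sqrt(np.mean((quantized - noisy) ** 2))
    print(f"{bits}-bit: quantization adds {q_err:.2f} e- RMS "
          f"on top of {analog_noise:.1f} e- of shot+read noise")
```

Because the analog noise is an order of magnitude larger than either quantization step, the steps are fully dithered and cannot produce visible banding in the capture itself.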

3. Myth: 16-Bit is Better for Color Grading
While more bits may benefit extreme color grading in video or scientific imagery, photographic sensors do not generate color information with 16-bit fidelity. The signal is already quantized, and color differences at the bottom 2 bits of a 16-bit file are buried in noise. Color precision is far more influenced by lens transmission, sensor design, and spectral response than bit depth.

4. Myth: 16-Bit is Needed for Future-Proofing
Some argue that 16-bit data ensures longevity in the face of evolving editing software or display technologies. But if the source data carries no meaningful information in the bottom bits, storing them is like preserving empty decimal places. 14-bit files already provide more granularity than is practically usable for current sensors.

5. Myth: Scientific or Industrial Applications Justify 16-Bit
While 16-bit precision is justified for specialized imaging tasks like fluorescence microscopy or machine vision, these use cases have little in common with handheld photography. In those domains, exposure, temperature, and electronic noise are tightly controlled. In photography, the environment is uncontrolled and analog noise dominates.

Conclusion
The 16-bit RAW format in cameras like the GFX 100 series and Hasselblad X2D is more about marketing than measurable photographic benefit. While there is no harm in storing images in 16-bit format, it offers little to no advantage over 14-bit for dynamic range, tonal smoothness, or color accuracy. Photographers should base their expectations on physics and perceptual limits, not on file format headlines.
 
In the communications theory course I taught to wet-behind-the-ears grad students, one of the problems I gave was to show that, in white Gaussian noise, using only one-bit quantization (the sign) one loses less than 3 dB in matched filter detection performance compared to infinite dynamic range. One could also chip away at that loss through oversampling (at one bit). The second part of the problem was to develop the loss curves for sample rate vs. signal bandwidth. The results seem counterintuitive, but many a spread-spectrum receiver in the early days was based on this concept.

Clearly this was the RF case, where shot noise, the primary noise source in imaging sensors, was not an issue. Most RF engineers (assuming white noise) would put 1-2 bits in the noise floor to allow for integration gain.

In image sensors, shot noise is the limiting factor, and from Jim's analysis it seems that 16 bits is overkill in most cases.
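
For anyone who wants to see that result numerically, here is a small Monte Carlo sketch of matched-filter detection with full-precision versus sign-only samples; the per-sample SNR and integration length are arbitrary assumptions, not the original course problem:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 256            # samples integrated per decision
trials = 20_000
amp = 0.1          # signal amplitude; unit-variance noise gives -20 dB per-sample SNR

s = np.ones(n)                                   # known signal
x = amp * s + rng.normal(0.0, 1.0, (trials, n))  # received: signal + white Gaussian noise

stat_full = x @ s                 # matched filter on full-precision samples
stat_1bit = np.sign(x) @ s        # matched filter on one-bit (sign) samples

def out_snr(y):
    return y.mean() ** 2 / y.var()

loss_db = 10 * np.log10(out_snr(stat_full) / out_snr(stat_1bit))
print(f"SNR loss from one-bit quantization: {loss_db:.2f} dB (low-SNR theory: ~1.96 dB)")
```

The loss stays under 3 dB, consistent with the 2/pi hard-limiter result described above.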
 
Jim, thank you for that explanation. "I think" I understand! I have questions.

Does this change with larger/better sensors?
Larger, no, assuming same pixel pitch and technology.

Better, quite possibly.
Meaning, if a sensor larger than 44x33 comes along with better tech and more efficiency (whatever that means technically), would it benefit from 16-bit capture?

The question also goes in the opposite direction... How much benefit do 14bit files offer over 12bit, or is that dependent on sensor size?
At base ISO with state of the art CMOS sensors, 14 bits offers real advantages over 12 bits. At higher ISOs, not so much. Above ISO 800 or so, not at all.
M43 cameras use 12 bits. I am unsure whether using 14 bits would give them any advantage, as 12 bits seems enough for the max DR that their sensors can provide.
Thanks for that.

I have no experience with M43 cameras.
In other words, does bit depth have an optimal value for each sensor size, say 14bit for 44x33, 12bit for 36x24, etc.
As I said above, no.
 
Are these bit depths the quantization of the measurements of each raw photosite at the point of capture or each channel of the resulting RGB pixel after demosaicking?
The former.
 
Jim, thank you for that explanation. "I think" I understand! I have questions.

Does this change with larger/better sensors?
Larger, no, assuming same pixel pitch and technology.

Better, quite possibly.
Meaning, if a sensor larger than 44x33 comes along with better tech and more efficiency (whatever that means technically), would it benefit from 16-bit capture?

The question also goes in the opposite direction... How much benefit do 14bit files offer over 12bit, or is that dependent on sensor size?
At base ISO with state of the art CMOS sensors, 14 bits offers real advantages over 12 bits. At higher ISOs, not so much. Above ISO 800 or so, not at all.
M43 cameras use 12 bits. I am unsure whether using 14 bits would give them any advantage, as 12 bits seems enough for the max DR that their sensors can provide.
Thanks for that.

I have no experience with M43 cameras.
In other words, does bit depth have an optimal value for each sensor size, say 14bit for 44x33, 12bit for 36x24, etc.
As I said above, no.
And hence no experience of Panny's clever and convenient automatic pixel shift raws with motion compensation, which is a shame (for me) as I would have loved to see your analysis of those.
 
M43 cameras use 12 bits. I am unsure whether using 14 bits would give them any advantage, as 12 bits seems enough for the max DR that their sensors can provide.
The first couple of generations, such as my Pany G1, were 10-bit raw. Later generations are nearly all 12-bit raw. The Pany GH5s, GH6, and GH7 have 14-bit raw. The latest OM-3 has 14-bit raw in HR and HHHR modes, as does the OM-1 Mk II, as far as I read online.

I haven't chanced upon any DPR posts comparing 12-bit vs 14-bit HR/HHHR on the OM-1 Mk II or OM-3.

--
Photography after all is interplay of light alongside perspective.
 
Thanks for that.

I have no experience with M43 cameras.
And hence no experience of Panny's clever and convenient automatic pixel shift raws with motion compensation, which is a shame (for me) as I would have loved to see your analysis of those.
I have a soft spot for m4/3. My first-ever mirrorless, in 2010, was an Oly E-P1; I adapted various manual lenses to it and took thousands of photographs with them.

This January 2025, when I returned to photography, it was because of an m4/3 camera. I had lost the desire for photography for several years.

IBIS in my m4/3 E-PL7, released in 2014 (just £70 including lens on eBay), enabled this 1-sec handheld shot I took in February this year after returning to photography.


1 sec handheld with IBIS on my E-PL7, released 2014. February this year. Waiting at the Light Fantastic.


--
Photography after all is interplay of light alongside perspective.
 

 
M43 cameras use 12 bits. I am unsure whether using 14 bits would give them any advantage, as 12 bits seems enough for the max DR that their sensors can provide.
The first couple of generations, such as my Pany G1, were 10-bit raw. Later generations are nearly all 12-bit raw. The Pany GH5s, GH6, and GH7 have 14-bit raw. The latest OM-3 has 14-bit raw in HR and HHHR modes, as does the OM-1 Mk II, as far as I read online.
The sensor readout from the OM-1 and OM-3 is 12 bits. The 14 bits are used internally for the high-res assembly.

Interestingly, the G9 II uses 16-bit output, but it seems wasted, even though it has the highest PDR of all m43 cameras (it also has the lowest base ISO, 100 vs 200).
I haven't chanced upon any DPR posts comparing 12-bit vs 14-bit HR/HHHR on the OM-1 Mk II or OM-3.
I posted about my findings, without presenting details:

 
Jim

I don't fully agree with you (but you are right on many things).

OK, just to introduce myself: I have been working for a while at a company which manufactures expensive large CCD and CMOS sensors (also infrared sensors) and cameras for scientific applications (astronomy, physics, spectroscopy, X-rays, biology...).

CMOS is not only Sony.

In the last two years we have designed and made an sCMOS sensor which has a real 17 bits of dynamic range, with 1 e- readout noise and 130,000 e- full well capacity. CCDs are even better, with 2 e- readout noise and more than 300,000 e- FWC. Cameras with 18-bit ADCs are available. These cameras have more than 100 dB of dynamic range.

Photon shot noise is not an issue and is not taken into account when calculating the signal-to-noise ratio (dynamic range), as that is FWC/RN. For me it is in practice a bit worse, as I consider the ability to see small levels against non-saturated high levels. So I take the minimum as a S/N of 3 to 5x to be able to see low light without saturating highlights.

Anyway... for consumer formats like 4/3, APS-C, full frame, and MF, a lot of sensors are made by Sony, or by camera manufacturers like Canon, or in the past Nikon, with some sensors developed by them but manufactured at the TowerJazz foundry (or equivalent) for high-end DSLRs. Fuji makes some sensors as well, as does Sigma for the Foveon ones (I forget some...).

12, 14, 16 bits... that's right, 16 bits on a DSLR (or hybrid) is useless and purely a marketing matter, as it is impossible on Sony CMOS to get the maximum full well capacity with the minimum readout noise. So you can get, for example, 60 ke- with 4 e- RN, which should give you 14 bits maximum, whereas the 1 e- RN is only achieved with something like 15 ke-, which is also 14 bits of theoretical DR.
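
For reference, the arithmetic behind those figures, using the numbers quoted in this post rather than the specs of any particular product, looks like this:

```python
import math

def dr_bits(full_well_e, read_noise_e):
    """Engineering dynamic range expressed in bits: log2(full well / read noise)."""
    return math.log2(full_well_e / read_noise_e)

print(f"sCMOS, 130 ke- FWC, 1 e- RN:        {dr_bits(130_000, 1):.1f} bits")  # ~17.0
print(f"CCD, 300 ke- FWC, 2 e- RN:          {dr_bits(300_000, 2):.1f} bits")  # ~17.2
print(f"Consumer CMOS, 60 ke- FWC, 4 e- RN: {dr_bits(60_000, 4):.1f} bits")   # ~13.9
print(f"Consumer CMOS, 15 ke- FWC, 1 e- RN: {dr_bits(15_000, 1):.1f} bits")   # ~13.9
```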
I worked with low dynamic range cameras in the past, including the Sony A7S with 11-bit compressed RAW and a lot of 14-bit cameras.

The consumer market is different from the scientific market. Volume, prices, and innovations (color rendition, AF, ergonomics...) are the key features against the competition.

OK, I am an astronomer and also a photographer (I have switched between manufacturers: Nikon, Canon, Sony, Panasonic Lumix, now Fuji), and I have always tested sensor capabilities.

Now, as I am less in the art photography area, I decided to sell my Lumix S1R + optics to get a GFX100RF, as I already know the Sony sensor from my cooled astronomy camera. Are 16 bits a game changer? No. (I am right now doing some tests on 14 vs 16 bits with the same images taken in the same conditions, but the results will be close, very close.)

I will say 16 bits is fine because you fully fill the 2 bytes in a file :-D

I know the Sony IMX461 (the one inside the GFX100, including the RF) very well, along with its capabilities in astronomy in the monochrome version. (I am just not enthusiastic about the real hardware 2x2 binning, as it is only a half binning, called in reality Charge Domain Binning, since it is not like CCDs. Anyway, multiple-binning technology on CMOS is in development right now.)
 
The scope of my post was limited to the sensor in the GFX 100x and X2D cameras.
 
The scope of my post was limited to the sensor in the GFX 100x and X2D cameras.
OK, I understood it like this, but 16 bits will arrive (soon) on some cameras (not only IMX465-sensor-based cameras).
And when the RN gets to less than one LSB of a 14-bit ADC, there will be a new recommendation. This recommendation is for the situation in which X2D and GFX 100x users find themselves today.

There are all kinds of tricks that you can use with astro cameras that don't apply to real-world photography with the X2D and GFX 100x.
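
Jim's criterion can be put in numbers; the full-well figure below is an assumed round number for illustration, not a measured value for either camera:

```python
# When read noise drops below one LSB of a 14-bit ADC, a 15th or 16th bit starts to matter.
full_well = 50_000                    # assumed full well capacity, electrons
lsb_14bit = full_well / 2**14         # ~3.05 electrons per 14-bit step
print(f"14-bit LSB = {lsb_14bit:.2f} e-; "
      "more bits pay off once read noise falls well below this.")
```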
 
If I understood the post, it is recorded at 14 bits and then the hardware in the camera converts it to 16 bits? Did I get it right? If I did, what are the current cameras that record real 16-bit information, allowing a real DR of 16 bits?
 
If I understood the post, it is recorded at 14 bits and then the hardware in the camera converts it to 16 bits? Did I get it right? If I did, what are the current cameras that record real 16-bit information, allowing a real DR of 16 bits?
You can tell the camera whether it should read 14 or 16 bits from the sensor. GFX saves 14-bit readouts as 14-bit raw data, while X2D saves them as 16-bit raw data.
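
One way to check what a given raw file actually contains is to inspect the raw values themselves. This is a sketch assuming the rawpy library and a hypothetical file path; 14-bit data stored in a 16-bit container shows up either as values that never exceed 16383 or as values whose bottom two bits are always zero:

```python
import numpy as np
import rawpy   # assumes rawpy (LibRaw) is installed; "example.fff" is a hypothetical path

with rawpy.imread("example.fff") as raw:
    data = raw.raw_image.copy()          # the undemosaicked sensor values

print("max raw value:", data.max())      # ~16383 suggests unscaled 14-bit data
low2 = np.bincount((data & 0b11).ravel().astype(np.int64), minlength=4)
print("low-2-bit histogram:", low2)      # all counts in bin 0 suggests zero-padded data
```

Black-level offsets shift the exact maximum, so treat the output as indicative rather than definitive.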
 
