GFX100S 14-bit vs 16-bit

Hi, thanks for your test. I remember a test I saw with a Hasselblad a few years ago; the capability to discern red tones and hues seemed significantly enhanced in 16-bit.
I don't understand why the analog-to-digital converter would care what color filter is in front of the pixel it is quantizing.
This just came to mind when I saw the close-up of the red book fabric that looks different in 14- and 16-bit in your test.

Maybe it would be more telling to try an object with a subtle colour gradient that spans a wide range, and to do a split-image comparison.
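For illustration, a minimal numpy sketch (purely hypothetical numbers, not from any real test) of the kind of split comparison I have in mind: quantize a smooth, wide-range gradient at 14 and 16 bits and butt the two halves together, so any banding difference would sit side by side.

```python
# Hypothetical gradient target: quantize a smooth ramp at 14 and 16 bits
# and join the halves as a split image. Sizes and values are made up.
import numpy as np

H, W = 512, 4096
ramp = np.linspace(0.0, 1.0, W)        # linear-light gradient, 0..1
target = np.tile(ramp, (H, 1))

def quantize(x, bits):
    """Round to the nearest of 2**bits levels, returned on the 0..1 scale."""
    levels = 2**bits - 1
    return np.round(x * levels) / levels

q14 = quantize(target, 14)
q16 = quantize(target, 16)

# Split comparison: top half 14-bit, bottom half 16-bit.
split = np.vstack([q14[: H // 2], q16[H // 2 :]])
print("split image shape:", split.shape)

print("max 14-bit quantization error:", np.abs(q14 - target).max())  # ~3e-5 of full scale
print("max 16-bit quantization error:", np.abs(q16 - target).max())  # ~8e-6 of full scale
```

Whether either step is visible against real sensor noise is exactly what a test like yours would have to show.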

It could also be that the 33x44 sensor is still not where the larger Hasselblad sensors were previously.

I'm still trying to find this test (unfortunately I can't remember where I saw it).
100S test:

 
The real difference between bit depths can be seen when shooting scenes with very high DR. It's largely unrelated to the camera model.

Modern camera with 14-bit RAW

52256833835_f6e4cf5be2_k.jpg


Almost 18-year-old camera with 16-bit RAW

50667340001_45c8c5f114_h.jpg


The same place. Almost the same lighting conditions. The same photographer - me. 🙂

Moreover, I can say my skills have improved since the first shot was made.

For such scenes, 16-bit RAW is preferable.

In general situations it is very difficult to see the difference between 12 and 16 bits. As a result, I very often shoot with an old A900 and its 12-bit RAW, since I like its colors. 🤷‍♂️🙂

PS: I have no idea how to fix the post (I tried but couldn't, sorry 🙏)
 
The real difference between bit depths can be seen when shooting scenes with very high DR. It's largely unrelated to the camera model. [...] Almost 18-year-old camera with 16-bit RAW [...] For such scenes, 16-bit RAW is preferable.
I am not sure what issue you see.

I don't know which 18-year-old CCD camera you used. Just as information, the Phase One backs of yore had 14-bit files that were blown up to 16 bits in raw conversion. With the IQ3100, Phase One introduced a file format that held 16 bits of data.

The A900 is a bit of a corner case, where the deep shadows can be blotchy. That may have to do with the handling of near blacks.

The A900 also has lossy compression on raw files, with one part of the coding yielding possible artifacts on high-contrast edges.
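For what it's worth, here is a rough sketch of the kind of block min/max plus coarse-delta coding that has been described for Sony's lossy raw format; the block size, bit counts, and values below are assumptions for illustration, not the exact A900 bitstream.

```python
# Hypothetical sketch of block min/max + coarse-delta coding, in the spirit of
# the lossy scheme reported for Sony raw files; not the exact A900 format.
import numpy as np

def encode_block(block, delta_bits=7):
    """Store min/max exactly; code the rest as delta_bits-bit steps between them."""
    lo, hi = int(block.min()), int(block.max())
    step = max(1, (hi - lo) >> delta_bits)   # step size grows with block contrast
    codes = (block - lo) // step
    return lo, hi, step, codes

def decode_block(lo, hi, step, codes):
    return np.clip(lo + codes * step, lo, hi)

rng = np.random.default_rng(0)

# Flat block: tiny range, so the coarse deltas lose essentially nothing.
flat = rng.integers(2000, 2016, 16)
# High-contrast edge inside one block: range of thousands of counts.
edge = np.concatenate([rng.integers(100, 116, 8), rng.integers(15000, 15016, 8)])

for name, block in [("flat", flat), ("edge", edge)]:
    out = decode_block(*encode_block(block))
    print(name, "max error:", int(np.abs(out - block).max()))
# flat -> 0 counts; edge -> on the order of (block range / 2**7) counts
```

That is why the artifacts show up mainly on high-contrast edges: only blocks that straddle a large tonal jump get quantized coarsely.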

Phase One backs may have tended to encourage underexposure. Exposure is of course determined by the photographer, but both the histogram on my P45+ and the default processing in Capture One sort of lure the photographer into exposing well below saturation, and that can protect the highlights.



This compares extreme darks between Phase One P45+, Sony A7rII and Sony Alpha 900. Note that both Phase One and Alpha 900 are noisy, but the A900 is blotchy. May depend on 12 bits, or not.





The ETTR exposed highlight part of the same images. I can see very little difference, if any.

The subject here was a set with two subjects: one was illuminated by a studio strobe with a grid, and the global illumination came from another strobe at low power in a different room.

The table was covered with a black cloth. I would think the luminance range was about 15 stops, based on HDR.

Exposure was based on careful comparisons of raw data using RawDigger, and should be within 0.1 or maybe 0.2 EV on the highlights.

Best regards

Erik



--
Erik Kaffehr
Website: http://echophoto.dnsalias.net
Magic tends to disappear in controlled experiments…
Gallery: http://echophoto.smugmug.com
Articles: http://echophoto.dnsalias.net/ekr/index.php/photoarticles
 
The real difference between bit depths can be seen when shooting scenes with very high DR. It's largely unrelated to the camera model. [...] Almost 18-year-old camera with 16-bit RAW [...] For such scenes, 16-bit RAW is preferable.
No. A camera or digital back can perform the analog-to-digital conversion at whatever number of bits its designer wants, but past a certain point, all you're getting is false precision. At some point, there's no detectable signal, only noise. So if the real analog sensor signal is only meaningful to, say, 13-bit precision--and I'm willing to bet good money that no eighteen-year-old regular camera or back has anything like 16 bits' worth of real signal--you can use a 16-bit converter, but the three least significant bits only reflect noise, not signal. For that matter, you could use a 24-bit analog-to-digital converter--but you'd still only have 13 bits of actual signal.
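To put toy numbers on that (a sketch, not a model of any particular camera): add analog noise of about one 13-bit step to a signal, then digitize it at 13, 16, and 24 bits. The extra codes past the noise floor buy essentially nothing.

```python
# Toy illustration of false precision: once the analog noise is about one
# 13-bit step, digitizing at 16 or 24 bits recovers no additional signal.
import numpy as np

rng = np.random.default_rng(0)
true_signal = 0.37                      # arbitrary level; full scale = 1.0
noise_sigma = 1.0 / 2**13               # analog noise ~ one 13-bit LSB
samples = true_signal + rng.normal(0.0, noise_sigma, 1_000_000)

for bits in (13, 16, 24):
    lsb = 1.0 / 2**bits
    digitized = np.round(samples / lsb) * lsb
    rms_err = np.sqrt(np.mean((digitized - true_signal) ** 2))
    print(f"{bits:2d}-bit ADC: RMS error = {rms_err:.2e} "
          f"(~{rms_err / noise_sigma:.2f}x the analog noise)")
# 16-bit and 24-bit come out essentially identical; even 13-bit is only a few
# percent worse, because the analog noise dominates the quantization step.
```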

Pictures taken with different cameras / digital backs can look different for many reasons, and you might prefer one to another for many reasons. But the conclusion that one looks subjectively (to you) better than another because of the 16-bit output (if it's even that, as Erik raised!) from an eighteen-year-old camera / digital back is a leap unsupported by data or science.
 
52256833835_f6e4cf5be2_k.jpg




If I could convince some young beautiful model to stand there like that and I didn't have to pose her (I hate yelling at people to move their hand this way and that), I think I might actually start shooting people again.

That girl must be a ballerina. She is up on pointe. That right hip is thrust up and out in a very unnatural way... But I like it.

If you look around on the street, you can often see women leaning up against walls posing like that. It is a normal sight with the cowgirls in Texas.

Would I be a creep if I, like, snuck up and shot one of these?



--
Greg Johnson, San Antonio, Texas
 
a leap unsupported by data or science.
Those leaps are popular on photography forums. ;)
How dare you! All of my proclamations are supported by science and math. Jim clears all of my posts, so you can just naturally assume that if I post it on this forum, it is scientific fact.

Rob, you need to catch up and get with the program here on the Forum.

Rob - I can't find that post where you or someone posted a new YouTube video of tilt-shift demos with the 30TS...
 
How dare you! All of my proclamations are supported by science and math. Jim clears all of my posts, so you can just naturally assume that if I post it on this forum, it is scientific fact. [...]
The good news for you, Greg, is that sensor technology and performance seem to be reaching a point where there actually can be a benefit to 16-bit sensor precision, versus 14-bit. I think Jim has blogged on this, but I don't have a link handy. My understanding is that the appearance of 'real' 16-bit raw files resulted mainly from two developments: (1) the use of CMOS sensors instead of CCD and (2) the move of the analog-to-digital converters onto the sensor chips themselves, instead of having to operate through an analog connection to an off-sensor converter.

I'm sure some people around here remember similar discussions about digital audio in the 1980s and early 1990s. The new 16-bit CDs got massaged in a variety of ways, including some players using 18- or 20-bit digital-to-analog converters. And then there were early low-end-pro / prosumer digital recorders where you used an actual regular VCR to record a digital signal that was sometimes only 14-bit. I'm talking about before the dedicated ADATs: you had a box that digitized an analog audio signal and then converted it back out, formatted as a TV signal, fed to an outboard regular VCR, masquerading as composite video. Ah, the wonders of ancient technology.
 
The real difference between bit depths can be seen when shooting scenes with very high DR. It's largely unrelated to the camera model. [...] Almost 18-year-old camera with 16-bit RAW [...] For such scenes, 16-bit RAW is preferable.
In an 18-year-old camera with 16-bit raw precision, the three or four LSBs are going to be noise.

 
[...] you had a box that digitized an analog audio signal and then converted it back out, formatted as a TV signal,
We engineers call those things modems.
fed to an outboard regular VCR, masquerading as composite video. Ah, the wonders of ancient technology.
 
[...] you had a box that digitized an analog audio signal and then converted it back out, formatted as a TV signal,
We engineers call those things modems.
Yep. The MOdulator-DEModulator. I got my first one IIRC for Christmas 1982, for sending or receiving 300 bits per second (baud) over a regular telephone line. I suspect it was around 1986 when I read a magazine (probably High Fidelity) review of a (maybe JVC or Yamaha?) digital audio recording / playback box that must have handled a little over 1,234,800 bits per second,* to be recorded (or played back) as a TV signal.

*IIRC, it was a stereo recorder, 14 bits per channel, at 44.1 kHz--so like a CD, but with 2 bits lower precision. I'm sure there must have been a little data recorded beyond the bare audio signals.
fed to an outboard regular VCR, masquerading as composite video. Ah, the wonders of ancient technology.
 
The good news for you, Greg, is that sensor technology and performance seem to be reaching a point where there actually can be a benefit to 16-bit sensor precision, versus 14-bit. [...]
I am very interested in this because we all decided here on this Board 4 years ago that 16 was unnecessary, so I've never shot it. But I might start shooting 16 based on some of this discussion. I have plenty of storage and more than enough computational power.

I remember Manzur said that he was going to shoot 16 because you never know what future software will make use of. I sort of wish I had shot 16 these past few years with GFX.

Jim, seriously ... should I start shooting at 16? File size will almost double to 200 MB.
 
I am very interested in this because we all decided here on this Board 4 years ago that 16 was unnecessary, so I've never shot it. But I might start shooting 16 based on some of this discussion. I have plenty of storage and more than enough computational power.

I remember Manzur said that he was going to shoot 16 because you never know what future software will make use of. I sort of wish I had shot 16 these past few years with GFX.

Jim, seriously ... should I start shooting at 16?
No.
File size will almost double to 200 MB.
 
*IIRC, it was a stereo recorder, 14 bits per channel, at 44.1 kHz--so like a CD, but with 2 bits lower precision. I'm sure there must have been a little data recorded beyond the bare audio signals.
If I remember right, the early CD standard allowed 14 and 16 bit precision, and early CDs were often 14 bits. In those days, building a 16-bit fast ADC was quite difficult.
 
The good news for you, Greg, is that sensor technology and performance seem to be reaching a point where there actually can be a benefit to 16-bit sensor precision, versus 14-bit. [...]
Hi,

The number of useful bits is essentially the engineering DR expressed in EV.

DR is normally full well capacity / readout noise. Modern CMOS may reach something like 2-3 e- of readout noise. Now, 14 bits can encode 16384 values, so if the full well capacity is more than say 2.5 * 16384 ≈ 41,000 e-, more bits may be useful. I recall that FWC on 3.8 micron pixels is around 35-40 ke-. Almost there...

Larger pixels like 100 MP 54x41 mm may have higher FWC, but readout noise will increase with FWC. I think the Phase One IQ 3150 may have benefited from 14+ bits.
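To make that arithmetic explicit, here is a small sketch using only the round numbers above, plus one made-up larger-pixel case (not a measured camera):

```python
# Rough "useful bits" arithmetic: engineering DR = full well capacity / read
# noise, and the useful bit depth is roughly log2 of that ratio.
import math

cases = {
    "3.8 um pixel, ~38 ke- FWC, 2.5 e- read noise": (38_000, 2.5),
    "hypothetical larger pixel, 80 ke- FWC, 4 e- read noise": (80_000, 4.0),
}

for name, (fwc, read_noise) in cases.items():
    dr_ev = math.log2(fwc / read_noise)   # engineering DR in EV / stops
    print(f"{name}: DR ≈ {dr_ev:.1f} EV -> about {math.ceil(dr_ev)} useful bits")
# 38,000 / 2.5 = 15,200:1 -> ~13.9 EV, so 14 bits is just about enough;
# the larger-pixel line is only an illustration of when 14+ bits could help.
```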

Going from 14 bits to 16 bits would increase scanning time by a factor of four, but I don't really see that on '16-bit sensors'. I guess that 16 bits doesn't actually mean 16 bits, but more like 'more than 14 bits', which would translate into 15 bits.

Best regards

Erik
 
Going from 14 bits to 16 bits would increase scanning time by a factor of four, but I don't really see that on '16-bit sensors'. I guess that 16 bits doesn't actually mean 16 bits, but more like 'more than 14 bits', which would translate into 15 bits.
The GFX 100x and X2D take twice as long to scan a frame at 16-bit precision as at 14-bit precision. They both use column ADCs with a single-ramp design. Running the ADC at more than a quarter of the 14-bit rate doesn't mean that you don't get 16 bits, but it does increase the read noise.
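As a back-of-the-envelope check on the timing (the 2x figure is the one reported above; everything else is generic single-slope arithmetic, not camera-specific data):

```python
# Single-slope (single-ramp) ADC timing: a conversion has to count through
# 2**bits codes, so a 16-bit conversion needs 4x the counts of a 14-bit one.
counts_14 = 2**14
counts_16 = 2**16
print("count ratio, 16-bit / 14-bit:", counts_16 / counts_14)   # 4.0

# If the observed frame readout only takes 2x as long at 16 bits (as reported
# above for these cameras), the ramp clock must run roughly 2x faster.
observed_time_ratio = 2.0               # placeholder matching the post
clock_speedup = (counts_16 / counts_14) / observed_time_ratio
print("implied ramp-clock speed-up:", clock_speedup)            # ~2x
```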
 
Wow, thanks for the detailed reply, Erik! It makes me even more annoyed that Fujifilm has it set to 16-bit by default. Besides marketing, I can see no other reason for them to do that.
Even if you have it set to 16, it will reset it for you if you are using drive modes.

Bottom line: you can keep it at 16, and if you shoot bursts or basically use any other drive mode it goes down. In your sunrise/sunset shots you will get the 16 bits, since you will probably be in single-shot mode.

As for Jim shooting 16, maybe we can start a poll.
 
*IIRC, it was a stereo recorder, 14 bits per channel, at 44.1 kHz--so like a CD, but with 2 bits lower precision. I'm sure there must have been a little data recorded beyond the bare audio signals.
If I remember right, the early CD standard allowed 14 and 16 bit precision, and early CDs were often 14 bits. In those days, building a 16-bit fast ADC was quite difficult.
And probably only a very small fraction of the material to be (often re-) released on CD had more than 14 bits of real signal. Remember the SPARS codes?
 
I am very interested in this because we all decided here on this Board 4 years ago that 16 was unnecessary, so I've never shot it. But I might start shooting 16 based on some of this discussion. I have plenty of storage and more than enough computational power.

I remember Manzur said that he was going to shoot 16 because you never know what future software will make use of. I sort of wish I had shot 16 these past few years with GFX.
If the two lowest bits in raw files are mainly noise, no future software can make better use of them.
Jim, seriously ... should I start shooting at 16? File size will almost double to 200 MB.
 
The number of useful bits is essentially the engineering DR expressed in EV. [...] I guess that 16 bits doesn't actually mean 16 bits, but more like 'more than 14 bits', which would translate into 15 bits.
Before the technical side gets too ... messed up, maybe it would be best to say that we're at just the tip* of the area where more than 14-bit precision in digital camera raw files can be, in certain circumstances, useful. My sense is that at this point, any advantages are likely to be quite subtle / modest, and apply only under limited circumstances. But it seems the technology has progressed from the nonsense marketing claims of 16-bit precision with CCD-sensor medium format digital backs to an arguable, situational, and slight--but arguably real--advantage with some of the newest, larger sensors. Do you agree?

*For those whose minds are less afflicted by a juvenile sense of humor and/or whose familiarity with certain English slang and nuance is too low, I apologize for this joke.
 
