The 16-Bit Fallacy: Why More Isn't Always Better in Medium Format Cameras

1. Myth: 16-Bit Provides More Dynamic Range

A 16-bit file can, in theory, encode 96 dB of dynamic range versus 84 dB for 14-bit.
Oh, the joys of linear DACs...
ADCs?
Hm. Sideways speak I, forgive me you will? ;-)
Each bit doubles the range and is thus 1 stop. A dB (decibel) is the far less intuitive, perception-based unit; for a linear encoding, dynamic range in dB is 20*log10(2^bits). CMOS image sensors are basically (analog, charge-based) photon counters. That makes them inherently digital: there really is a unit charge that corresponds to absorbing 1 photon. So, let's stick with calling things by stops and bits rather than dB. ;-)
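For concreteness, here is a quick sketch (Python, nothing camera-specific) showing where the 96 dB / 84 dB figures and the one-stop-per-bit rule come from:

import math

for bits in (14, 16):
    stops = bits                          # each bit doubles the range: 1 stop per bit
    db = 20 * math.log10(2 ** bits)       # the perception-style dB figure
    print(f"{bits}-bit linear: {stops} stops, {db:.1f} dB")

# 14-bit linear: 14 stops, 84.3 dB
# 16-bit linear: 16 stops, 96.3 dB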

The big problem is that, outside of some scientific applications, photography isn't about counting photons, but creating a model of scene appearance using photon arrival rate as the sampling mechanism. The naturally stochastic variation in photon emission rate from any source makes these counts an inherently imprecise way to sample scene appearance, and humans are log-sensitive to scene brightness. Thus, a linear ADC is a lousy way to encode values within a dynamic range: half the representable values get used to represent gradations in the brightest stop, and only two values are within the darkest stop. Even if you have 2^15 values to encode the gradations within the brightest stop of a 16-bit reading, that doesn't imply you have 1 part in 2^15 accuracy in sampling the scene brightness...
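To make the "half the values in the top stop" point concrete, a minimal sketch counting how an ideal 16-bit linear ADC spends its code values per stop:

full_scale = 2 ** 16
for stop in range(1, 17):
    hi = full_scale >> (stop - 1)     # top of this stop
    lo = full_scale >> stop           # bottom of this stop (half the signal)
    print(f"stop {stop} below clipping: {hi - lo} code values")

# stop 1: 32768 code values, stop 2: 16384, ... stop 15: 2, stop 16: 1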
Using a linear ADC makes correlated double sampling, pixel response nonlinearity correction, and other such calibration methods easier than using a logarithmic or power-law ADC.
I have nothing against attempting to make ADCs linear, I'm just saying you probably shouldn't be trusting all values to be good to the LSB. Even if the ADC is magically great, photon shot noise still limits accuracy.
Agreed.
Incidentally, this is where the so-called single-pixel imaging cameras get really sketchy. They use a DMD to select which pixel locations will contribute to the sum measured by a single high-quality sensel that is potentially linear to more than 16 bits: each measurement is essentially producing a presumed-linear equation with the number of unknowns equal to the number of pixels summed. They solve for the individual contributions over many pattern samplings (i.e., a system of many equations), and generally use compressive sampling methods to recover more than one pixel location's value per sample measured. However, photon shot noise effectively dynamically varies the contribution from each pixel site, violating the assumption that the weights of contributions from different pixel locations are known and constant (typically assumed to be 0 or 1)...
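A toy numpy sketch of that objection (pattern sizes and photon rates are made up, and it uses a fully determined system rather than compressive recovery, just to isolate the shot-noise effect on the assumed 0/1 weights):

import numpy as np

rng = np.random.default_rng(0)
n = 64                                            # pixels = measurements, for simplicity
x_true = rng.uniform(50.0, 500.0, n)              # "true" photon rates at each pixel site
A = rng.integers(0, 2, (n, n)).astype(float)      # DMD patterns: 0/1 selection weights

y_ideal = A @ x_true                              # the presumed-linear measurements
y_shot = np.array([rng.poisson(x_true[A[i] > 0]).sum() for i in range(n)], dtype=float)

x_from_ideal = np.linalg.lstsq(A, y_ideal, rcond=None)[0]
x_from_shot = np.linalg.lstsq(A, y_shot, rcond=None)[0]
print("RMS error, noiseless model:", np.sqrt(np.mean((x_from_ideal - x_true) ** 2)))
print("RMS error, with shot noise:", np.sqrt(np.mean((x_from_shot - x_true) ** 2)))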

BTW, the QIS (Quanta Image Sensor) work from Fossum and the TDCI (time domain continuous imaging) work that I've done don't necessarily have these shot-noise issues. QIS jots literally try to count each photon as a separate event, although they may combine them in sloppy ways to reduce output bandwidth. TDCI computes a model of noise and then selectively combines samples over time to reduce noise when the model determines that the scene content at that pixel site probably hasn't changed.
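A crude sketch of the QIS idea (jot rate, field count, and geometry are all invented for illustration): each jot field is a 1-bit "did at least one photon arrive?" exposure, and averaging many fields recovers the rate.

import numpy as np

rng = np.random.default_rng(1)
rate = 0.05                                   # mean photons per jot per field (hypothetical)
fields = rng.poisson(rate, (4000, 32, 32))    # photon arrivals over 4000 very short exposures
jots = (fields > 0)                           # each jot only reports "saw a photon" (1 bit)
estimate = jots.mean(axis=0)                  # average of the binary fields
print(estimate.mean())                        # ~0.049, i.e. ~1 - exp(-rate), close to 0.05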
I'm excited at the prospect of those technologies appearing in consumer cameras.
I am dizzy….!!
 
Well that blows what I thought was my understanding of noise out of the water; and also what I thought was my understanding of what the ISO knob actually did (and which I thought we had agreed upon in previous discussions), too. And replaces it with...nothing.

I had to google "summation in quadrature" and I can't say my brief reading leaves me any better off.

Back to the drawing board :-(
 
My sources are dumbed down rather than quantum mechanics, but it appears I made at least one error: shot noise is indeed something inherent in the QM nature of light. But it is a random fluctuation in the arrival of photons over time at a particular photosite (i.e. the deviation of the number of photons captured from what might be expected from a (supposed) steady flow of photons).

My error is that I was thinking the randomness was caused by a randomness in the accuracy of the impact point on the sensor (i.e. photons hitting the wrong photosite) but it appears to be more a stuttering in the arrival time rather than the arrival point.

I'm not sure that makes a difference to my conclusions: shot noise is controlled by capturing a larger number of photons so the randomness is evened out and approaches the result we would expect from a naive steady-flow concept.

Everything I read about read noise continues to indicate it is error introduced in the measurement of the number of electrons present in each well during the process of digitising i.e. it is noise introduced by the camera electronics.

I can't work out exactly which bit (or all of it) of my 'explanation' you were referring to with your "no"...
 
Hi,

The next step in the progression. :)

Stuff always shows up in some Government application, then an industrial one. Eventually, it gets down to commercial. And, finally, consumer.

So, look at what Industrial is using today to see what we will be getting tomorrow.

Stan
 
My sources are dumbed down rather than quantum mechanics, but it appears I made at least one error: shot noise is indeed something inherent in the QM nature of light. But it is a random fluctuation in the arrival of photons over time at a particular photosite (i.e. the deviation of the number of photons captured from what might be expected from a (supposed) steady flow of photons).

My error is that I was thinking the randomness was caused by a randomness in the accuracy of the impact point on the sensor (i.e. photons hitting the wrong photosite) but it appears to be more a stuttering in the arrival time rather than the arrival point.

I'm not sure that makes a difference to my conclusions: shot noise is controlled by capturing a larger number of photons so the randomness is evened out and approaches the result we would expect from a naive steady-flow concept.

Everything I read about read noise continues to indicate it is error introduced in the measurement of the number of electrons present in each well during the process of digitising i.e. it is noise introduced by the camera electronics.

I can't work out exactly which bit (or all of it) of my 'explanation' you were referring to with your "no"...
Analogue gain may refer to the relation between the charge in the pixel and the voltage across the pixel.

There is always some amplification between the pixel and the ADC. My understanding is that it is partially a bit of the pixel design, but there is also a programmable gain amplifier (PGA) between the pixel readout line and the ADC in many designs.



That PGA may be usable to increase ISO.



Older CMOS designs often used off-sensor ADCs, and those sensors were helped at high ISOs by increasing amplification before the ADC.

With modern designs, the ADCs have moved to both ends of the columns of the sensor. Those ADCs have less noise in their analogue signal paths, so they can get by with less amplification.



Comparing old and new generations of CMOS sensors: the Nikon D5 uses an off-chip ADC while the Z9 uses column ADCs. Around ISO 400, part of the pixel is disconnected from the photodiode, which reduces the capacitance of the pixel. This yields a higher voltage into the readout amplifier, thus reducing noise. But doing that decreases the full-well capacity. At high ISOs the full well would not be utilized anyway, so this is a good strategy to improve dynamic range.

Best regards,
Erik
--
Erik Kaffehr
Website: http://echophoto.dnsalias.net
Magic tends to disappear in controlled experiments…
Gallery: http://echophoto.smugmug.com
Articles: http://echophoto.dnsalias.net/ekr/index.php/photoarticles
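A toy model of Erik's point above about amplification ahead of the ADC (all electron numbers below are hypothetical): noise added after the amplifier is divided by the gain when referred back to the pixel, so a noisy off-chip ADC benefits from analogue gain much more than a quiet column ADC does.

import math

def input_referred_read_noise(gain, pre_amp_noise_e, post_amp_noise_e):
    # Independent noise sources add in quadrature; post-amplifier noise is
    # divided by the analogue gain when referred back to the pixel.
    return math.hypot(pre_amp_noise_e, post_amp_noise_e / gain)

for gain in (1, 4, 16):
    old = input_referred_read_noise(gain, 2.0, 10.0)   # noisy off-chip ADC path
    new = input_referred_read_noise(gain, 2.0, 3.0)    # quieter column-ADC path
    print(f"gain {gain:2d}: off-chip {old:.2f} e-, column {new:.2f} e-")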
 
Shot noise is the noise that is unavoidably generated by converting photons to electrons on the sensor. It is a consequence of quantum mechanics. To a first approximation, root mean square shot noise is equal to the square root of the mean number of photons counted.
...
I don't see what the Heisenberg Uncertainty Principle has to do with shot noise, which is a result of counting photons.

And this is significant because it happens before the ADC stage and therefore isn't reduced by analogue gain.
Shot noise isn't just from "converting photons to electrons on the sensor." The quantum probability density function stuff applies to all aspects of photons: light itself is noisy.

For example, let's say you are trying to detect the brightness of a pixel's area of green scene content. Well, you aren't going to have anything to detect unless a photon with a roughly green wavelength happens to hit that spot in the scene such that it is reflected (perhaps guided by optics) toward the corresponding sensel and absorbed by the sensel to create a unit of charge. It's all about quantum probability densities...

Think of shot noise as granularity artifacts due to quantization at the quantum mechanical level.

You can see shot noise if you do coin tosses because heads/tails is also a quantized probabilistic phenomenon. A fair coin should have the "ground truth" value that it is precisely 50% heads. However, over a very small number of coin tosses, you'll usually see a larger deviation from 50% in the measured fraction of heads than if you sample over a large number of coin tosses. Ironically, while the probability of measuring close to 50% increases with the number of tosses, the probability that you measure the coin as being precisely 50% heads decreases with more tosses because there are so many more ways it can be close without being precisely 50%.
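A little simulation of that coin-toss point (the trial counts are chosen arbitrarily): landing within 2% of 50% becomes more likely with more tosses, while hitting exactly 50% becomes less likely.

import random

random.seed(0)

def heads_fraction(n):
    return sum(random.random() < 0.5 for _ in range(n)) / n

for n in (10, 100, 10_000):
    trials = [heads_fraction(n) for _ in range(500)]
    near_half = sum(abs(f - 0.5) <= 0.02 for f in trials) / len(trials)
    exactly_half = sum(f == 0.5 for f in trials) / len(trials)
    print(f"{n:6d} tosses: within 2% of half {near_half:.2f}, exactly half {exactly_half:.3f}")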

In fact, let me use that analogy to explain my TDCI (time domain continuous imaging) research. TDCI looks at the variation across a time sequence of samplings and basically says "as long as the probabilities don't suggest that the coin being flipped has been changed, I can get a better approximation to the ground truth value by averaging more samples." For example, if you want to know the value of a rather dark pixel over a 1/1000s exposure, TDCI says that you can average all readings immediately before and after that 1/1000s for which the readings are within shot noise bounds of what was measured during the 1/1000s. In other words, if the scene hasn't changed, keep averaging so that you compute the rate over way more photon arrivals than happened in that 1/1000s.
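And a heavily simplified sketch of that idea (not the real TDCI algorithm; the rates, threshold, and the gaussian stand-in for shot noise are all made up): keep folding in time-adjacent readings while they stay within shot-noise bounds of the running estimate, and stop when the scene appears to have changed.

import math, random

random.seed(2)

def tdci_estimate(samples, ref, k=2.0):
    total, count = samples[ref], 1
    for step in (-1, 1):                      # walk backward, then forward in time
        i = ref + step
        while 0 <= i < len(samples):
            mean = total / count
            if abs(samples[i] - mean) > k * math.sqrt(max(mean, 1.0)):
                break                         # outside shot-noise bounds: scene probably changed
            total += samples[i]
            count += 1
            i += step
    return total / count

true_rates = [20.0] * 30 + [200.0] * 10       # a dark pixel, then the scene brightens
samples = [random.gauss(r, math.sqrt(r)) for r in true_rates]
print("single 1/1000 s reading:", round(samples[15], 1))
print("TDCI-style estimate    :", round(tdci_estimate(samples, 15), 1))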
 
The next step in the progression. :)

Stuff always shows up in some Government application, then an industrial one. Eventually, it gets down to commercial. And, finally, consumer.

So, look at what Industrial is using today to see what we will be getting tomorrow.
Actually, all three new approaches I mentioned first showed up as university research projects. You know -- the kind of stuff the federal government is in the process of defunding. TDCI was funded in part by NSF. :-(
 
I think silverEagle is slightly confused about what is going on.

There isn't any conversion; the IMX461 sensor in the GFX100 cameras can record and output 16 bits. The problem is that the read noise and shot noise are so high that the extra dynamic range is stuck in "tall grass", so you can't see it.

Solution is to crack open your camera and add active cooling to the sensor. Add a copper heat sink and/or some liquid cooling to the sensor. That will help regain some of that dynamic range. Cutting back that tall noise grass.
You are correct,

I hope you and Jim can explain it to me like you would explain it to a 4-year-old child so I understand: is noise generated by the heat in the circuitry of the sensor? And what is the difference between shot noise and read noise? (What is the ultimate sensor on the market now that has the lowest level of both?)
Read noise is the noise that is present independent of the signal level. Cooling the sensor can reduce some kinds of read noise. Shot noise is the noise that is unavoidably generated by converting photons to electrons on the sensor. It is a consequence of quantum mechanics. To a first approximation, root mean square shot noise is equal to the square root of the mean number of photons counted.
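To put numbers on those two definitions (the read noise value here is hypothetical), and to show the "summation in quadrature" mentioned earlier: independent noise sources combine as the square root of the sum of their squares, and shot noise takes over once the signal is large.

import math

read_noise_e = 3.0                               # electrons RMS, signal-independent (made-up value)
for photons in (10, 1_000, 100_000):
    shot_noise = math.sqrt(photons)              # RMS shot noise = sqrt(mean count)
    total = math.sqrt(read_noise_e ** 2 + shot_noise ** 2)   # summation in quadrature
    print(f"{photons:7d} photons: total noise {total:7.1f} e-, SNR {photons / total:7.1f}")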
There I was thinking I understood the difference...

I thought read noise was a summation of all the noise added by the camera's electronics, and that this is significant because it happens after the ADC process (and therefore analogue gain can help reduce it), while shot noise is basically a result of the Uncertainty Principle, whereby photons that in a classical world would definitely end up hitting a particular photosite can, because of Heisenberg, end up somewhere else, contaminating the accuracy of the signal. And this is significant because it happens before the ADC stage and therefore isn't reduced by analogue gain.
Have a look at this:

 
Hi,

Yes. Shame, that....

Once Research has a handle on something new, someone in the Governmental world sees a way to use that and then we get the first product to utilize same.

But if there are no Bucks, there is no Buck Rogers....

Stan
 
Total tangent but seeing that we're on the subject, one of the distinctive things about the place I work (University of Waterloo) is that you own your own Intellectual Property. People who discover the thing can commercialize the thing.
 
Fujifilm was interested in Toshiba, almost bought it. Primarily for the medical division.

Fujifilm has agreements with SanDisk.

Fujifilm owns Hitachi, including the storage division.

Fujifilm is also the world's largest supplier of data tape.

So shoot 16-bit, use more storage and keep the backup tape rolling.
My 14-bit and 16-bit raw files from the GFX (Uncompressed) are of the same size. Isn't it the same case with Hasselblad?
 
Hi,

I do.

R&D didn't want to add 16 bit, full well knowing all about the uselessness. Marketing wanted it as a bullet point. And that is Fair Enough. It sells a few extra units.

Me, I know all 16 bit storage is gonna get me is larger files where those extra bits are ratty. Well, perhaps at times Bit 15 might be slightly useful, but I can't have it without that guaranteed ratty Bit 16 tagging along.

Stan
But the 16-bit files aren't any larger than the 14-bit files in my case. Not sure about Hasselblad.
 
Well that blows what I thought was my understanding of noise out of the water; and also what I thought was my understanding of what the ISO knob actually did (and which I thought we had agreed upon in previous discussions), too. And replaces it with...nothing.

I had to google "summation in quadrature" and I can't say my brief reading leaves me any better off.
Here you go: https://www.dpreview.com/forums/post/68266261
Back to the drawing board :-(
 
Hi,

Nice!

All my IP was owned by whatever company I was working for at the time. They did pay some rewards, the amount of which was dependent on how much said IP was worth to them.

But, to be fair, all that IP came about from trying to make something actually work. And I was getting paid for that, win or lose. But I hate to lose. ;)

So whatever bonus came about was fine by me. And I have even had some unexpected bonuses along the way after I no longer worked for a given company. This, because the IP was licenced and said company was getting regular payments for use of the IP from other outfits.

Stan
 
Hi,

And Lossless Compression is the way I want to run. So, I don't want to pay for 16 bit with extra size for two bits worth of ratty data. Just set those to Zero and ignore the rattiness and save the space.

Stan
 
My 14-bit and 16-bit raw files from the GFX (Uncompressed) are of the same size. Isn't it the same case with Hasselblad?
Processing and storage hardware and protocols don't operate on bits - they operate on "words" which are composed of "bytes." Memory is addressed in bytes. That has not always been true. Some early digital computers, e.g. the IBM 1401, were variable word length machines where the programmer had to set word marks in memory and there was only a single chunk of (core) memory.

The size of a byte has changed over time. Today a byte is 8 bits (ASCII). Early on, the basic character data was BCD encoding, which was based on a six-bit code. Some famous big machines were BCD based, e.g. the IBM 7090 and 7094, which were 36-bit machines. The System 360 was the first IBM system based on ASCII (8-bit bytes). The heyday CDC "supercomputers" were BCD machines, with the 6600 and 7600 using 60-bit words, and similarly for the Cyber series. Of course, today ASCII is the standard, with processors and memory architectures based on multiples of 16-bit words.

Data was addressed and passed as a multiple of the 6-bit BCD unit, 12 or 18 bits in this case. That carried over when ASCII replaced BCD, and the basic structure became two bytes, or a 16-bit word.

So while for image capture a 14-bit quantization in the ADC seems to be sufficient to capture the sensor DR in most cases, the data organization most likely does not change by selecting 14 bits over 16 - a 16-bit word is most likely still used. In many digital RF receivers, 12-bit ADCs are common. In that case it makes sense to pack four samples into three 16-bit words for acquisition and storage and unpack on a per-process basis. However, for a 14-bit ADC, the overhead is not worth the small amount of memory saved. Using 16 bits also buys some flexibility in gain control. In digital RF systems, sampling the noise floor is always a design goal, with normally at least one bit and often two sampling the noise floor, since weak narrowband signals can live under the wideband noise floor and be brought out by low-pass filtering.

We once had a requirement for a VLF and ELF receiver based on a B-field antenna to use a 24-bit I/Q ADC, where one 24-bit complex sample (two 24-bit real values) was packed into three 16-bit words and the data was unpacked on a process-by-process basis.

So there may be two questions to consider. One is that 14 bits is sufficient to match the sensor DR, and the extra two bits are more or less sampling the noise. The second is that the memory and storage space saved may not be that great, and if one decides they want some sort of integration process on an image, there might be some information in the last two bits that will be brought out by the process.

The lossless compressed file on the GFX100 is approximately 104 MB for 14 bit and 130 MB for 16 bit, so the compressed 16-bit file is approximately 25% larger. However, there is the expense of having to decompress the file on a process-by-process basis in the computer.

For general photography, Jim has demonstrated that 14 bits is sufficient with the current sensors. On the other hand, there is not much of a penalty involved in using 16-bit capture, and on rare occasions it might be useful depending on the intended follow-on processing. However, one should know when that is the case in advance.
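For what it's worth, here is a minimal sketch of the kind of packing described above: four 12-bit samples packed into three 16-bit words (values and bit layout are purely illustrative).

def pack4x12(s0, s1, s2, s3):
    # Four 12-bit samples -> three 16-bit words (48 bits total), one possible layout.
    w0 = (s0 << 4) | (s1 >> 8)
    w1 = ((s1 & 0xFF) << 8) | (s2 >> 4)
    w2 = ((s2 & 0xF) << 12) | s3
    return w0 & 0xFFFF, w1 & 0xFFFF, w2 & 0xFFFF

def unpack4x12(w0, w1, w2):
    s0 = w0 >> 4
    s1 = ((w0 & 0xF) << 8) | (w1 >> 8)
    s2 = ((w1 & 0xFF) << 4) | (w2 >> 12)
    s3 = w2 & 0xFFF
    return s0, s1, s2, s3

samples = (0xABC, 0x123, 0xFFF, 0x045)
assert unpack4x12(*pack4x12(*samples)) == samples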
 
Good summary, but the 360 character representation was EBCDIC, not ASCII.
 
Ah, I remember (vaguely) EBCDIC and Packed Decimal formats from my time as a mainframe programmer working with the IBM machines the Met Office in Bracknell used as front ends to their supercomputer. We actually worked with data files byte by byte using hexadecimal and had to count the characters in the file formats by hand to work out which bytes to process....We used a program called NCC Filetab, if anyone remembers that. My job was to total up transaction records in accounting files and make sure they added up to the numbers declared in the accounts, then use a form of random sampling called Monetary Unit Sampling to select transactions for our auditors to go into the organisations and test.

I'll never forget working til midnight one Friday night desperately trying to get the Inland Revenue General Account to add up while friends fed me beer from the bar that operated on Friday evenings. I doubt the beer helped. I remember the IR accounting file spanned 30 tapes.

Those were the days.
 
Indeed - I started my IT days on a System 370, I think it was, playing around with PL/1 or COBOL and CICS, with a bit of Easytrieve and FOCUS thrown in.

Those were the days indeed
 
