14 Bit Advantage

Kevin Johnson

Hello,

I am looking at either getting a 1DSIII or for the same money upgrading an old 1D to a 1DIIn and getting a 1DSII.

Ignoring the extra pixels, can anyone comment on the advantage of 14 bits?

I know the 1DIII has 14 bits; can anyone comment on how obvious this is compared to the 1DIIn?

I will mainly be using the camera for weddings, and landscape work, with some portraits.

I have just purchased the 85 1.2 II, and want to be able to use it as it was meant to be used (on full frame).

I am not really interested in the 5D (unless they do a Nikon and include the AF from the 1 series in the 5DII, then it could be interesting).

Sorry if this has been asked a thousand times, but the search is down.

If anyone can post some raw samples to show the differences, I would be eternally grateful.

thanks
 
None of the modern DSLR sensors have the dynamic range to produce 14 bits of usable image data. Moreover, none of them can even fill 12 bits of data. For this reason there's no advantage in using a 14-bit ADC over a 12-bit ADC. It is purely a marketing feature.

Probably the only situation where 14-bit image processing might produce better images than 12-bit processing is when the in-camera image processing is implemented badly (i.e. suffers greatly from cumulative arithmetic errors in sloppily written firmware) and the inherently more "precise" 14-bit engine might compensate for the programming errors to some degree. Of course, this can (and will) only be camera- and firmware-version-specific, not applicable in general.

Also you have to keep in mind that there are quite a few people out there who managed to convince themselves that 14-bit cameras do produce better results than 12-bit ones. Don't be surprised if you don't see any difference in the results yourself, or if the difference doesn't have anything to do with the "bitness" of the camera.
 
None of the modern DSLR sensors have the dynamic range to produce 14 bits of usable image data. Moreover, none of them can even fill 12 bits of data. For this reason there's no advantage in using a 14-bit ADC over a 12-bit ADC. It is purely a marketing feature.
You must have missed all the discussions about this issue. An analogy works best in this case. Consider the dynamic range of a sensor to be a loaf of bread. The more bits in the ADC, the thinner the slices in that loaf. It will not change the size of the loaf.
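To make the analogy concrete, here is a minimal Python sketch (normalized units, nothing camera-specific is assumed): the full-scale range stays fixed, and only the quantization step changes with bit depth.

```python
# The "loaf" is the fixed full-scale range of the sensor; the ADC bit depth
# only sets the "slice" thickness (the quantization step), not the range.

def adc_step(bits, full_scale=1.0):
    """Quantization step of an ADC spanning the same full-scale range."""
    return full_scale / (2 ** bits)

# 14 bits cuts slices 4x thinner, but the loaf is the same size.
print(adc_step(12))  # step with a 12-bit ADC
print(adc_step(14))  # step with a 14-bit ADC, 4x finer
```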

--
Best regards,
Jonathan Kardell
'Enlightenment is nowhere near as much fun as I thought it would be. :)'
 
The low end of the range is determined simply by how few photons the sensor can distinguish from whatever level of signal noise the sensor calls zero. The high end of the range is determined by how many photons saturate the sensor. Those values do change as technology improves, but (mostly) separately from the 12 bit to 14 bit shift.

With more bits, you do get an increased ability to heavily manipulate the photo without "posterization". This also helps somewhat with retaining editability after the conversion from the linear space of the raw sensor to the gamma 2.2 space used with sRGB and AdobeRGB, and helps even more in converting to ProphotoRGB. But, the effects are usually subtle.

Presuming the signal-to-noise ratio in shadow areas is in keeping with the change from 12 bits to 14 bits, the 14-bit sensors could have a quite noticeable increase in ability to maintain detail when you heavily brighten shadow areas. I don't know whether that presumption is warranted.
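One way to see the posterization effect numerically is a small Python sketch (the 1% shadow ramp and the 16x push are made-up illustration numbers, and sensor noise is deliberately ignored):

```python
# Sketch of posterization under heavy brightening: quantize a dim shadow
# ramp at 12 and 14 bits, apply a crude 4-stop push, and count how many
# distinct tones survive. More bits leave more distinct steps.

def distinct_tones_after_push(bits, push=16, samples=4096):
    levels = 2 ** bits
    seen = set()
    for i in range(samples):
        x = i / samples * 0.01               # ramp over the bottom 1% of range
        q = round(x * (levels - 1))          # ADC quantization
        seen.add(min(q * push, levels - 1))  # heavy brightening in post
    return len(seen)

print(distinct_tones_after_push(12), distinct_tones_after_push(14))
```

The 14-bit version ends up with roughly four times as many distinct tones in the pushed shadows, which is the "smoother gradation" effect in its simplest form.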
--
http://www.pbase.com/spencer_walker
 
While I understand the theory of this, I am not sure how this advantage would come through all the way to the printer. My assumption (read: AFAIK) is that printers are not 14-bit, thus at "print time" these subtle values would need to be converted to a lower-bit representation anyhow.

As for viewing on-screen, I doubt that my LCDs can support 14-bit color; perhaps my CRT...
 
While I understand the theory of this, I am not sure how this advantage would come through all the way to the printer. My assumption (read: AFAIK) is that printers are not 14-bit, thus at "print time" these subtle values would need to be converted to a lower-bit representation anyhow.
As for viewing on-screen, I doubt that my LCDs can support 14-bit color; perhaps my CRT...
The primary benefit of greater bit depth (provided the quality of the ADC is equal or superior) is finer tonality. In its crudest form, limitations in bit depth show up as "posterization" effects.

In the original post, the question came up about how "obvious" a difference there would be. In practice the differences can be subtle. The largest benefits come to those who like to process their photos heavily. The finer gradations allow a smoother end result. In general, the larger bit depth will contribute to the overall quality of the images in subtle, but desirable ways. Does it render 12-bit cameras useless? Of course not. It is not a deal-breaker situation, in my opinion, but larger bit depth is desirable.
--
Best regards,
Jonathan Kardell
'Enlightenment is nowhere near as much fun as I thought it would be. :)'
 
I would ignore the 14 bit thing and look at the cameras you would have, either a 1DsII plus 1DIIn vs 1D and 1DsIII.

The 1DsII and 1DIIn have the same interface, so moving from one to the other would make life easy (I have a 5D and 1DIIn and find moving between them a pain when working quickly). They also share the same battery and charger, so that is another advantage.

The 1DsIII and 1D are different in their interface, and you also have to carry different batteries and chargers.

If one of the 1Ds cameras fails at a wedding, would the 1D camera be able to cover? The 1DIIn has double the pixels of the 1D and is a lot better in the noise department.

If landscapes are the big thing, what is the value of the 22MP of the 1DsIII over the 16MP of the 1DsII?

If it were me, I would go for the 1DsII and 1DIIn, unless I could upgrade the 1D to a 1DIII shortly after getting the 1DsIII.
 
1. The 1DIIn sensor (the 1D upgrade) is 4 years old, and it is no comparison to the 5D in terms of high-ISO performance, which you're going to need more than you think.
2. Live View is great for weddings and landscapes in difficult situations.
3. 22 megapixels gives you greater capability to crop a frame in post-processing.
4. The new chip processes much faster than the old. You don't really need 8.5 fps.
5. Larger storage capacity support: you can use 16 GB CF or SD cards with the newer body.
6. 14-bit over 12-bit advantage.
7. ISO 6400.
8. Dynamic range improvement.
9. Improved NR.
10. Dust cleaning.
 
AndreyT's answer is spot on. Bit depth larger than the dynamic range of the camera produces no discernible advantage (except, perhaps, to astrophotographers who stack tens of identical images to tease signal out of the noise). A camera with a twelve-stop DR and a 14-bit ADC -- the case of the Mk3 -- simply fills the extra two bits with random values, which have no utility in post-processing whatsoever. Because sensors are linear devices, the twelve stops of DR fill the first twelve bits of pixel data, and the last two are under the noise floor. There are no smoother tonal gradients; they are totally masked by the noise.

If you want to see the effect of bit depth on recovering smooth tonal gradients from a noisy signal, look at this demo:

http://theory.uchicago.edu/~ejm/pix/20d/posts/tests/noisegradient4.jpg

This is a tonal gradient of 16 levels on the 0-255 scale, to which noise of width 4 levels has been added. Then the bit depth was successively lowered -- first 8-bit, then 7-bit, etc., on down to 3-bit. The contrast was then stretched by a factor of 16 to make the result more apparent (and to mimic an extreme PP levels adjustment). The 8-bit sample has two bits more tonal depth than the DR, the 7-bit sample has one more bit than the DR, the 6-bit sample has depth equal to the DR, and so on. So you tell me if you can see any advantage to bit depth beyond dynamic range, or whether it's only a problem for smooth tonal gradients when the bit depth is less than the DR (as in the 5-bit, 4-bit, and 3-bit samples).
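For those who want to play with the idea numerically rather than visually, here is a rough Python sketch along the same lines (assumptions: Gaussian noise of sigma 4 levels on the 0-255 scale, and averaging many samples per level as a stand-in for the eye averaging over an area of the gradient):

```python
import random

random.seed(42)

def max_recovery_error(bits, noise_sigma=4.0, trials=400):
    """Worst-case error in recovering the 16 true gradient levels from noisy
    samples requantized to `bits` bits, averaging `trials` samples per level
    (a stand-in for the eye averaging over an area of the gradient)."""
    step = 2 ** (8 - bits)
    worst = 0.0
    for true in range(8, 256, 16):               # centers of the 16 steps
        acc = 0.0
        for _ in range(trials):
            noisy = true + random.gauss(0.0, noise_sigma)
            acc += round(noisy / step) * step    # requantize to coarser depth
        worst = max(worst, abs(acc / trials - true))
    return worst

for bits in (8, 6, 3):
    print(bits, round(max_recovery_error(bits), 2))
```

As long as the bit depth meets or exceeds the DR (8-bit and 6-bit here), the noise dithers the quantization and the gradient is recovered cleanly; well below the DR (3-bit), levels collapse onto the coarse steps and large errors appear.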
--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
None of the modern DSLR sensors have the dynamic range to produce 14 bits of usable image data. Moreover, none of them can even fill 12 bits of data. For this reason there's no advantage in using a 14-bit ADC over a 12-bit ADC. It is purely a marketing feature.
You must have missed all the discussions about this issue. An analogy works best in this case. Consider the dynamic range of a sensor to be a loaf of bread. The more bits in the ADC, the thinner the slices in that loaf. It will not change the size of the loaf.
Whenever you cut something in reality, you can only cut it as thin as its basic unit. Once you cut a molecule (or the smallest group of molecules that together make bread "bread"), it stops being that something.

So, if you set up more cuts than the actual individual number of basic units, you're gonna end up with more cuts than necessary. Your cuts are gonna be fundamentally superfluous and there's nothing you can do about it (unless you make the loaf BIGGER!)

All evidence seems to point to this in the 14-bit saga. It seems that the accuracy of 14 bits is more than the accuracy the A/D and sensor can give. Read up on the 40D forum for very interesting discussions (as always, there is also some noise). Look up John Sheehy's posts, and Victor Engel has also made interesting comments. There are also a handful of other people that I forget, but most or all of them that work with DCRAW or IRIS say basically the same.
 
What about a situation where you failed to expose to the right and maybe underexposed by a stop or two? Would you not have more bits of information (detail) in the mids and highlights to work with? With 14 bits, the stop containing 4096 levels is now two stops to the left of the histogram instead of at the far right. Even better now if you expose to the right.
None of the modern DSLR sensors have the dynamic range to produce 14 bits of usable image data. Moreover, none of them can even fill 12 bits of data. For this reason there's no advantage in using a 14-bit ADC over a 12-bit ADC. It is purely a marketing feature.
 
What about a situation where you failed to expose to the right and maybe underexposed by a stop or two? Would you not have more bits of information (detail) in the mids and highlights to work with? With 14 bits, the stop containing 4096 levels is now two stops to the left of the histogram instead of at the far right. Even better now if you expose to the right.
The issue is independent of exposure; it is simply a property of the sensor no matter how you expose.

A common misconception is that the advantage of exposing to the right stems from the higher number of tonal steps available there. In fact, the proper reason to expose to the right is that this maximizes the signal-to-noise ratio of the image captured by the sensor. Bit depth doesn't enter into it, so long as bit depth exceeds the S/N ratio (which it invariably does for Canon DSLRs, whether 12-bit or 14-bit).
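A toy shot-noise model makes the point (assuming purely photon-limited noise, where S/N = sqrt(photons); the photon counts are illustrative):

```python
import math

def shot_noise_snr(photons):
    """S/N of a photon-limited signal: N / sqrt(N) = sqrt(N)."""
    return photons / math.sqrt(photons)

# Exposing one stop to the right doubles the captured photons...
under = shot_noise_snr(10_000)
ettr = shot_noise_snr(20_000)

# ...improving S/N by sqrt(2) -- a gain that has nothing to do with
# whether the ADC then digitizes the result to 12 or 14 bits.
print(round(ettr / under, 3))
```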

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
I just don't buy the statement that there is no advantage and that it's all marketing.
What about a situation where you failed to expose to the right and maybe underexposed by a stop or two? Would you not have more bits of information (detail) in the mids and highlights to work with? With 14 bits, the stop containing 4096 levels is now two stops to the left of the histogram instead of at the far right. Even better now if you expose to the right.
The issue is independent of exposure, it is simply a property of the
sensor no matter how you expose.

A common misconception is that the advantage of exposing to the right
stems from the higher number of tonal steps available there. In
fact, the proper reason to expose to the right is that this maximizes
the signal-to-noise ratio of the image captured by the sensor. Bit
depth doesn't enter into it, so long as bit depth exceeds the S/N
ratio (which it invariably does for Canon DSLR's whether 12-bit or
14-bit).

 
Here's one statement regarding this issue by Canon, plus an example.

http://web.canon.jp/imaging/eosd/eos1dsm3/html/08.html

The Canon white paper goes into more detail.

It sounds like there is more than marketing hype that one will get from 14 bit. Yet, on my monitor I can't really tell much difference between the two shots Canon uses to make its point.

Given my experience with Lightroom, I hope that I'll see a meaningful difference when I process and then crop my photographs for printing, particularly where I'm filling in shadow and cropping beyond 50%.

Perhaps the real answer is to just bite the bullet and use the sort of studio lighting and framing techniques I notice whenever I come across the pros shooting outdoor TV or movies.

While I doubt that 14 bit will make up that sort of difference in technique, hope springs eternal that it will make up at least some of the difference.

Given my experience with the 5D, I have significant confidence that Canon will deliver on its claims, even given the ambiguity of the example Canon has provided.
 
Yeah, I have to laugh when I see that bit of marketing hype. Highlights are the LAST place you should expect an advantage from 14-bit tonal depth. The photon shot noise is tens or even more than a hundred 14-bit raw levels in highlight regions, totally swamping the least significant bits with random fluctuations. If they are to make any difference at all to an image, it will be in the deepest shadows, where the noise is least (somewhat counterintuitively, the noise grows with the light intensity, just not as fast as the intensity, so the S/N ratio improves with increasing signal; S/N is lowest in shadows, but the absolute magnitude of the noise is least there as well).

BTW note the little disclaimer at the bottom right of the linked page: "Differences between the two photos have been exaggerated for illustration purposes." Made up out of whole cloth is more like it. Hah!
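A back-of-the-envelope calculation shows the magnitude involved (the 50,000-electron full-well figure is an assumption for illustration, not a measured value for any particular camera):

```python
import math

# Rough magnitude check: how many 14-bit raw levels does photon shot noise
# span in a bright highlight?
full_well = 50_000                   # electrons at sensor saturation (assumed)
gain = full_well / 2 ** 14           # electrons per 14-bit raw level
highlight = full_well / 2            # a highlight one stop below clipping
shot_noise_e = math.sqrt(highlight)  # photon shot noise, in electrons
shot_noise_dn = shot_noise_e / gain  # ...expressed in 14-bit raw levels

# Tens of raw levels of pure randomness: the bottom bits carry no detail here.
print(round(shot_noise_dn, 1))
```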
--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
No, you must have missed all the discussions about what the term "dynamic range" really means. Dynamic range is not just the size of the loaf. Dynamic range describes both the length of the loaf and the thinness of slices that can be made from that loaf before they start to tear. The dynamic range of any modern loaf (i.e. sensor) is not enough to make 2^12 slices out of it. If you try to slice it thinner than that, the slices will mash and tear.

Once again, because of the noise, no modern sensor produces more than 12 bits of information (which is exactly what dynamic range really tells us). Trying to extract 14 bits of information out of it will achieve absolutely nothing.
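The "size of the loaf" in bits can be estimated directly (the full-well and read-noise figures below are plausible assumptions, not measurements of any specific sensor):

```python
import math

def dr_bits(full_well_e, read_noise_e):
    """Engineering dynamic range, in bits: log2(full well / read noise)."""
    return math.log2(full_well_e / read_noise_e)

# With assumed figures of 50,000 e- full well and 13 e- read noise, the
# dynamic range lands just under 12 bits -- so a 12-bit ADC already spans
# it, and the extra two bits of a 14-bit ADC digitize noise.
print(round(dr_bits(50_000, 13), 2))
```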
 
No, it gives you no advantage whatsoever. The main idea of the technique of "exposing to the right" is to get a better S/N ratio. This is completely unrelated to how many bits you use to digitize the data. A secondary, significantly less important target of this technique is to cover more discrete ADC levels. But for that secondary part, there's no point in trying to cover more than 2^12 levels, since, once again, even if exposed for its best possible S/N ratio, no modern sensor can produce more than 12 bits of information. Digitizing it with a 14-bit ADC achieves nothing.
 
But I often play with my RAW files trying to get the best possible sunset. Very often I suffer from ugly light and color gradations way more noticeable than Canon's example. And this problem is apparent on many other occasions where you play with the levels and amplify some rays of light or luminous spots.

If 14-bit sampling can help avoid that, I will bite the bullet immediately.

--
Ludo from Paris
Tankers of tools, thimbles of talent
BestOf http://ludo.smugmug.com/gallery/1158249
 
