The truth about 14-bit

Adding noise will destroy any posterization that is present. It is a post-processing trick used to eliminate posterization by adding noise when converting to a different bit depth.

Aren't you just destroying any real bit-depth differences by adding the noise? How about a slowly changing monochromatic gradient at low ISO, for instance? Noise isn't two bits of depth - 2 bits is 4x the number of tones, 16K vs 4K - that surely isn't in the noise.

Not disagreeing, just asking?
--
Gene (aka hawkman) - Walk softly and carry a big lens

Please visit my wildlife galleries at:
http://www.pbase.com/gaocus
http://hawkman.smugmug.com/gallery/1414279

 
Haven't you picked a subject that is least affected by the manipulation you have done - randomly generated cloud textures, taking 1 bit away, then replacing it with a random bit? Why would you expect to see anything under that circumstance?

Take a smooth gradient and chop off a couple of bits, replacing them with zeros.
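Something like this rough sketch would make the test concrete (assuming NumPy; the 12-bit ramp and the two-bit truncation are just illustrative choices, not anything taken from this thread):

import numpy as np

# Build a slowly changing 12-bit gradient, then zero out the lowest two bits.
gradient = np.linspace(0, 4095, 4096).astype(np.uint16)   # smooth 12-bit ramp
truncated = gradient & np.uint16(0xFFFC)                   # chop off the bottom 2 bits, replace with 0

# Count how many distinct tones survive in each version.
print(len(np.unique(gradient)), "levels before truncation")   # 4096
print(len(np.unique(truncated)), "levels after truncation")   # 1024

Stretched across a wide ramp, the truncated version should show banding that the full 12-bit version does not.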

Gene

--
Gene (aka hawkman) - Walk softly and carry a big lens

Please visit my wildlife galleries at:
http://www.pbase.com/gaocus
http://hawkman.smugmug.com/gallery/1414279

 
Well, that is indeed the $64,000 question; I fully understand that.
But whatever the sensor records, a 14-bit ADC will convert it more
faithfully - that was my point.
I would agree with "at least as faithfully". And the distinction here
is precisely what we're contending.
We understand where we disagree, and that is a big improvement on another discussion I'm having at the moment! ;-)
 
I posted this on another thread. Reference this analysis on sensor performance:

http://www.clarkvision.com/imagedetail/digital.sensor.performance.summary/

It has to do with the true DR response of the sensor. For a sensor like the 5D, where the well capacities are huge (because of the sensel size), the true DR of the sensor can exceed 14 stops.

Sensor DR = full-well capacity / read noise = 80,000 / 3.7 ≈ 21,600

To represent each quantization increment you need at least 14 bits (2^14 = 16,384).

According to this article, the 12-bit A/D converter limits the dynamic range of the 5D sensor by 2 full stops, and with a 14-bit A/D converter it would gain back roughly that amount.
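Just to spell out the arithmetic (a quick Python sketch; the 80,000 e- full well and 3.7 e- read noise are the figures quoted above, treated here as illustrative):

import math

full_well = 80_000    # electrons (figure quoted above)
read_noise = 3.7      # electrons (figure quoted above)

dr_ratio = full_well / read_noise          # ~21,600 distinguishable levels
dr_stops = math.log2(dr_ratio)             # ~14.4 stops of dynamic range

print(f"DR ratio: {dr_ratio:,.0f}")
print(f"DR stops: {dr_stops:.1f}")
print(f"Stops clipped by a 12-bit ADC: {dr_stops - 12:.1f}")        # ~2.4
print(f"Stops clipped by a 14-bit ADC: {max(dr_stops - 14, 0):.1f}") # ~0.4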

But this is the 5D. For the 40D the full well capacities will be much less. For the MIII, it could be 10-20% less than the MII depending on how much improvement was made on read noise.

So perhaps for cameras with large pixel densities, there will be minimal improvement using 14 bits. That could be why you never saw a significant improvement on the MIII.
 
And what about the "highlight priority" mode? Might it not be handy
to have an extra bit or two available to digitize things if we turn
the gain of the PGAs down a stop before doing the A/D conversion when
shooting in HP mode?
Yes, I agree with you regarding "highlight priority" mode and the "might" part of your second sentence. IMO the extra resolution "should" be a VERY good thing; the problem is I'm frustrated that I can't see any of that goodness when looking at the m3 versus m2 files I discussed above in one of my posts.

Regards,

Joe Kurkjian, Pbase Supporter

http://www.pbase.com/jkurkjia/original



SEARCHING FOR A BETTER SELF PORTRAIT
 
I used to design extensively with 8, 10, and 12-bit D/A and A/D converters. It's been a few years, but there are generally various grades of a chip available - for example, the precision and linearity vary with the price of the chip - in other words, a 12-bit lower-grade (cheaper) device may not give better accuracy than a mil-spec 10-bit, although the resolution is better in the 12-bit device, though somewhat useless at the low end of its range. Then why not just use the 10-bit? It depends. If you are going to start doing some math with that 10-bit data, you may wish that you had started out with the 12-bit device. Audiophiles have to deal with this same issue where digital processors are involved, and for that matter audio editors as well, 16-bit vs 24-bit, etc.
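As a rough illustration of the "doing math with it" point (a sketch assuming NumPy; the helper name, the 3-stop shadow boost, and the cutoff are made-up parameters for the example, not anything from this thread):

import numpy as np

analog = np.linspace(0.0, 1.0, 100_000)   # idealized analog signal, 0..1

def shadow_tones(bits):
    levels = 2 ** bits
    quantized = np.round(analog * (levels - 1)) / (levels - 1)   # the ADC step
    boosted = np.clip(quantized * 8.0, 0.0, 1.0)                 # push the shadows up 3 stops in post
    out8 = np.round(boosted * 255).astype(np.uint8)              # final 8-bit output
    return len(np.unique(out8[analog < 0.125]))                  # distinct tones left in the shadows

print("10-bit capture:", shadow_tones(10))   # roughly half the available 8-bit levels
print("12-bit capture:", shadow_tones(12))   # fills all of them

The low-bit capture leaves gaps once you start pushing the data around, even though both versions look the same before any editing.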

Regarding these cameras, all things being equal the 14-bit's improvements to IQ may in fact be quite noticeable under certain conditions. HOWEVER (disclaimer) - testing of the production camera will be the final proof (or disproof) of how wisely the designers have come by these extra bits. In theory this could add up to 4 times the information of the 12-bit system if the chip quality scales, or less than that if the 14-bit system does a much poorer job at the low end of its range than the 12-bit does at the low end of its own.
Maybe the old 12-bit ones were also a bit dodgy in the last bit or two? Then wouldn't this be better, since we'd at least get a perfect 12 bits, even if not 14?
--
-Dennis W.
Austin, Texas

 
I posted my views on this in another thread dealing with this very same subject.
http://forums.dpreview.com/forums/read.asp?forum=1019&message=24438663

I have never seen any convincing evidence that humans, especially adults, can tell the difference between 16-bit 44kHz and 24-bit 96kHz or 192kHz audio. All I've read is ravings about the "high resolution" thing that border on the supernatural, about "ultra frequencies" and such; some even mention non-human animal hearing to support their case. I'd like to see a properly done double-blind test on this.

I think studios use high-resolution, high-sample-rate audio because they edit - the same reason we do 16-bit editing in Photoshop.

But also, the thing is that even if some people could tell the difference, they would be a minority, and furthermore it would require thousands in excellent audio equipment, so it's a minority of a minority. To me, consumer "high resolution" audio was dead from the start, no matter the "format war". What are your thoughts?
At one time 16-bit audio ruled and everyone thought extra bits were not needed.

Then eventually everyone wanted more bits. The next big step was 20-bit audio, with a theoretical dynamic range of 120 dB. Professional studios only get about 110 dB S/N using high-end sources, so 120 dB S/N was as much as would ever be needed. 110 dB is like being in the front row of a rock concert, and 0 dB is the threshold of human hearing.

But when 24-bit systems came out they sounded better than 20-bit systems. It has been found that our ears can pull out information that is buried in the noise by several bits. Could you hear it on every recording? No. Most "rock" music is virtually indistinguishable between 16-bit and 24-bit, but on other music it was obvious. And now 24-bit audio can represent every level of dynamics the human ear can detect, but we are still moving toward 32-bit.
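For reference, the dynamic-range figures quoted above follow straight from the bit depth - each bit of linear PCM is worth about 6 dB. A quick check in Python (the helper name is just for illustration):

import math

def theoretical_dr_db(bits):
    # Each bit doubles the number of levels, i.e. adds 20*log10(2) ≈ 6.02 dB.
    return 20 * math.log10(2 ** bits)

for bits in (16, 20, 24):
    print(f"{bits}-bit: ~{theoretical_dr_db(bits):.0f} dB")
# 16-bit: ~96 dB, 20-bit: ~120 dB, 24-bit: ~144 dB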

Sample rates for audio have gone from 48 kHz to 96 kHz, and now 192 kHz is the professional norm. That is 4x what is theoretically necessary to capture every frequency of perfect human hearing.

Until we have the 40D in our hands and the RAW software has been
optimized to handle the 14-bit data, we can't make any determinations.

I think Canon feels there is a value to it. It is such an obscure
spec that there isn't any real marketing value in it, except amongst
us geeks. But maybe they are just paving the way for future sensors
that have a high DR.

mike
--
 
You may be right about the extra two bits being lost in noise, but your experiment isn't the right one.

Right now you take your 12-bit image and squish it into 8 bits because 8 bits is all your screen can display and all your printer can print.

Everybody knows that if you take an 8-bit image and do things to it like increasing the contrast, you'll get posterization. If you take your 12-bit image and do the same things, but to a greater extent, you will ALSO get posterization. Guaranteed. If you take a shot with a 14-bit sensor and do the same image manipulations, does the posterization go away? Yes it does. Even if the extra two bits are just noise.

If the extra two bits are really just noise, you can get the same improvement by taking your posterized image and adding your own noise. It will look better, but there will be no added real information. The same process is used in many types of printing, where a limited number of ink levels is compensated for by putting ink dots in a pattern or a random distribution to simulate in-between shades.
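A small sketch of that idea (assuming NumPy; the 16-level posterization is just an exaggerated example):

import numpy as np

rng = np.random.default_rng(0)
ramp = np.linspace(0.0, 1.0, 2000)

posterized = np.round(ramp * 15) / 15          # crush a smooth ramp to 16 levels: visible bands
step = 1.0 / 15
dithered = np.clip(posterized + rng.uniform(-step / 2, step / 2, ramp.size), 0.0, 1.0)

# The banded version has only 16 distinct values; the dithered one has thousands,
# but they still scatter around the same 16 "real" levels - no information was added.
print(len(np.unique(posterized)), len(np.unique(dithered)))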

--
Robb

 
OK, so which image in the fourth post in this thread has
the higher bit depth, and can you pull out the extra info
contained in this extra bit?
--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
I looked at the tiffs and I couldn't tell.

But with it being B&W, all the colors are at the same levels, which could affect the results. All pixel-to-pixel transitions are very gradual. There is no data being fed through a Bayer algorithm.

I'm not saying there is definitely going to be a difference, but trying to explain the difference between 16-bit audio and 24-bit audio can be tricky as well. The explanations are usually "more airy", "more open", "more depth", or "better sound stage".

--
http://www.pbase.com/chibimike
 
What specifically are you pointing to there? There are many aspects discussed.
It's all useful. But the first two sections, "Introduction: RAW conversion" (the explanation of gamma) and "Human vision and tonal levels", are essential reading and commendably concise.
But these sections have little to do with the information content
of linear raw files; much more to do with latitude of converted
files after gamma correction etc. What I am concerned about
is the information content of the linear raw file, and what bit
depth is sufficient to encompass the image data recorded
by the sensor. How does the link address that? Naively these
two sections have to do with what happens far down the
image processing pipeline.

I am willing to believe that all current raw converters
have a non-optimal treatment of 12-bit images, and
that 14-bit images force the converters into a more
optimal method of processing a linear raw file.
What I find hard to believe is that the same end
result could not be achieved by padding out a 12-bit
file to 14 bits and running it through the same conversion
process, and along the way saving lots of storage space
on flash cards and hard drives.

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
I still found it hard to tell a difference after applying a strong
contrast curve. Unless the dynamic range exceeds 11 stops or so, I
doubt the extra two bits are going to make any difference whatsoever.
You would do just as well taking a 12-bit image and adding two
randomly generated bits.
The second image is darker. For example, the "dark blob" near the bottom middle is much darker to me in image two. I cannot tell you which is which, but I can clearly see a difference. Both images are likely sufficiently good, but which is more accurate?

I see possibilities in more values for the extreme areas, but that is probably a question about the sensor rather than the number of bits.
 
Bit depth, I think, is very hard to tell, since the higher-bit-depth A/D and D/A converters usually also have lower jitter, which is very noticeable. Nobody makes a 16-bit ultra-low-jitter converter.

Sampling frequency, I think yes, I could, if the recording chain is up to the task. I think the interaction of multiple tones, complex reflections, and complex overtones produces frequencies well above 20 kHz, and these will be captured at the higher sample rates.

mike
I posted my views on this in another thread dealing with this very
same subject.
http://forums.dpreview.com/forums/read.asp?forum=1019&message=24438663

I have never seen any convincing evidence that humans, especially
adults, can tell the difference between 16-bit 44kHz and 24-bit 96kHz or
192kHz audio. All I've read is ravings about the "high resolution"
thing that border on the supernatural, about "ultra frequencies" and
such, some even mention non-human animal hearing to support their
case. I'd like to see a properly done double-blind test on this.

I think studios use high-resolution, high-sample-rate audio because they edit - the same reason we do 16-bit editing in Photoshop.

But also, the thing is that even if some people could tell the
difference, they would be a minority, and furthermore it would
require thousands in excellent audio equipment, so it's a minority of
a minority. To me, consumer "high resolution" audio was dead from the
start, no matter the "format war". What are your thoughts?
--
http://www.pbase.com/chibimike
 
Jak, my older monitor is a Professional Series PF225f Viewsonic... I am not seeing banding. However, I checked it out on my son's older Graphics Series Viewsonic and I did see banding.

I plan on ditching my new LCD for this wide-gamut LCD because it reproduces 96% of the Adobe RGB color space, so it can display most colors in a photograph taken in Adobe RGB mode.

http://www.eizo.com/products/graphics/cg241w/index.asp

;) Guess you will have to put both of those on the table during the next summit.

By the way, Photoshop converts 16-bit-per-channel images to 8-bit-per-channel images by adding half a bit of noise, which can sometimes help visually eliminate mild banding problems.

http://www.reindeergraphics.com/index.php?option=com_content&task=view&id=204&Itemid=150
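I don't know exactly how Photoshop implements it, but the general idea would be something like this sketch (assuming NumPy; noise on the order of half an output step is added before rounding down to 8 bits):

import numpy as np

rng = np.random.default_rng(0)
img16 = np.linspace(0, 65535, 4096).astype(np.uint16)   # stand-in 16-bit gradient

plain8 = np.round(img16 / 257.0).astype(np.uint8)        # straight conversion: hard band edges
noise = rng.uniform(-0.5, 0.5, img16.shape)              # roughly half a bit of output-level noise
dither8 = np.clip(np.round(img16 / 257.0 + noise), 0, 255).astype(np.uint8)  # dithered conversion

# Same 256 possible output levels either way, but the noise scatters pixels near the
# band boundaries so the transitions are no longer clean lines.
print(np.count_nonzero(plain8 != dither8), "of", img16.size, "pixels land on a different level")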
Jak,

Do you have a monitor and video card capable of displaying most of the AdobeRGB gamut? If not, it noticeably changes the way the images appear.
Video card, no problem.

LCD monitor, I seriously doubt the gamut is large enough for the
entire AdobeRGB gamut. I spent a LOT of time testing monitors to get
one that actually displayed the lower bits after calibration. FWIW
my LCD monitor is a Viewsonic VP930. BTW, my CRT display is a
Viewsonic G790.

Both my LCD and CRT monitors clearly show Mach banding for both the
1DmkII and 1DmkIII files (that is why we used a CPL, to create every
possible opportunity for banding). Problem is, and this was a REAL
surprise to me, there is no improvement (i.e. reduction) in banding
when you look at the 14-bit 1DmkIII file (note, IMO not even a slight
teeny tiny hint of improvement).

Regards,

Joe Kurkjian, Pbase Supporter

http://www.pbase.com/jkurkjia/original



SEARCHING FOR A BETTER SELF PORTRAIT
 
Adding noise will destroy any posterization that is present. It is a post-processing trick used to eliminate posterization by adding noise when converting to a different bit depth.

Aren't you just destroying any real bit-depth differences by adding the noise? How about a slowly changing monochromatic gradient at low ISO, for instance? Noise isn't two bits of depth - 2 bits is 4x the number of tones, 16K vs 4K - that surely isn't in the noise.
Yes, that is the point. I don't see the difference between recording two extra bits of noise under the noise floor of the sensor in the least significant bits of a 14-bit raw file, and adding two bits of dither to a conventional 12-bit raw file. If you want extra steps to play with, multiply the raw data by four and add a random integer from zero to three. You'll have 16K levels in either case, and no more information in one file than in the other.
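In code, the recipe is literally this (a sketch assuming NumPy and a 12-bit raw array; the array here is just random stand-in data):

import numpy as np

rng = np.random.default_rng(0)
raw12 = rng.integers(0, 4096, size=(4, 6), dtype=np.uint16)   # stand-in 12-bit raw values

# Multiply by four and add a random integer from zero to three.
padded14 = raw12 * 4 + rng.integers(0, 4, size=raw12.shape, dtype=np.uint16)

print(raw12.max() <= 4095, padded14.max() <= 16383)   # 12-bit range in, 14-bit range out

Sixteen thousand levels either way, but the padded file carries no information the 12-bit one didn't.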

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
Jak, my older monitor is a Professional Series PF225f Viewsonic... I am not seeing banding. However, I checked it out on my son's older Graphics Series Viewsonic and I did see banding.
Are you referring to the two files shot by "FredLord"?
I plan on ditching my new LCD for this wide-gamut LCD because it reproduces 96% of the Adobe RGB color space, so it can display most colors in a photograph taken in Adobe RGB mode.

http://www.eizo.com/products/graphics/cg241w/index.asp
That should work nicely for you!
;) Guess you will have to put both of those on the table during the
next summit.

By the way, Photoshop converts 16-bit-per-channel images to 8-bit-per-channel images by adding half a bit of noise, which can sometimes help visually eliminate mild banding problems.
I didn't know that; thanks for the input. BTW, where did you find that information?
http://www.reindeergraphics.com/index.php?option=com_content&task=view&id=204&Itemid=150
Jak,

Do you have a monitor and video card capable of displaying most of the AdobeRGB gamut? If not, it noticeably changes the way the images appear.
Video card, no problem.

LCD monitor, I seriously doubt the gamut is large enough for the
entire AdobeRGB gamut. I spent a LOT of time testing monitors to get
one that actually displayed the lower bits after calibration. FWIW
my LCD monitor is a Viewsonic VP930. BTW, my CRT display is a
Viewsonic G790.

Both my LCD and CRT monitors clearly show Mach banding for both the
1DmkII and 1DmkIII files (that is why we used a CPL, to create every
possible opportunity for banding). Problem is, and this was a REAL
surprise to me, there is no improvement (i.e. reduction) in banding
when you look at the 14-bit 1DmkIII file (note, IMO not even a slight
teeny tiny hint of improvement).

Regards,

Joe Kurkjian, Pbase Supporter
 
I ignored this branch of this thread for a while because I thought that "Audo" might have been some photo magazine or forum of which I was unaware. It just occurred to me that the word might have been meant to be "audio".

Anyhow, here's something to look at, then.

OK, set aside for a moment the discussion of 14-bit versus 12-bit resolution, and move on to the discussion of 192 kHz vs 48 kHz sample rate.

Now think about the endless whining on here about how going to a higher pixel resolution doesn't help beyond wherever it is that you happen to feel like setting the bar at any given moment - right now, some people are screaming that 8 megapixels is the limit and anything beyond this will degrade the image, not improve it.

But look at what increasing the sampling rate for audio signals does for us: it's not that we really care about capturing frequencies higher than about 20 kHz, it's that we want to avoid needing a really nasty, sharp AA filter. By increasing the sampling rate to what at first seems a ridiculously high frequency, we are now able to use an AA filter that has a more gradual roll-off characteristic and which therefore does less damage to the phase information at audible frequencies.

A lot of the "grit" that we hear, and the localization that we DON'T hear, in digital audio recordings is due to the AA filter totally hosing the phase of audible frequencies in the original signal. Sure, the frequency response may be "perfect" with a 44.1 kHz recording, but does it sound "real"? Nope. There's more to it than just flat frequency response to a sine wave from a test generator, as it turns out.

I guess this is a long way of saying that by going to a much higher number of pixels, we can alter the optical AA filters to cause less blurring and still avoid aliasing. And in so doing, we get a sharper picture even if the noise levels remain the same and our lenses don't get any better.

As has been said on here before: Oversampling has its benefits.

A bit OT for this thread, but since it came up....

--
Jim H.
 
