The truth about 14-bit

Well... everyone knows that an amp that goes to 11 is better... Clearly 14 would be even better than better... I guess it takes musicians like the guys in Spinal Tap to fully understand this concept. :)
I think Canon feels there is a value to it.
Oh, yeah, they do.

The marketing value is there all right. It's an easy number to
remember and compare for the average Joe. Is 12 better or 14? Of course
it must be 14!

We've seen it before with the stupid megapixel war.
--
http://www.pbase.com/kowalow
 
I agree you can't determine that by simple inspection. But an
understanding of the maths will lead you to the conclusion that the
14-bit ADC is likely to be playing a part.
No, please, not the math.

During the conversion, MAX is the maximum value available.

MAX(8 bits) = 255
MAX(12 bits) = 4095
MAX(14 bits) = 16383

Now, multiply by 0.461 (the standard level of digital output):

MAX(8 bits) * 0.461 ~ 118
MAX(12 bits) * 0.461 ~ 1888
MAX(14 bits) * 0.461 ~ 7553

This value corresponds to an 18% neutral gray subject, referenced against 100% reflectance. The Standard Output Sensitivity (S) is given by

S = 10/Hm

Hm is the focal-plane exposure that maps to the standard output level.

Now, IF I understand correctly, I have my starting point for 18% gray at 7553 rather than 1888, assuming I have proper exposure (based on the CIPA DC-004 standard, as implemented by the Canon 40D).
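As a quick sanity check, here is that arithmetic as a small Python sketch, simply taking the 0.461 output fraction above at face value (whether the 40D's raw data actually places mid-gray there is exactly the open question):

```python
# Where an 18% gray subject lands for different ADC bit depths,
# using the 0.461 standard-output fraction quoted above.
STD_OUTPUT_FRACTION = 0.461

for bits in (8, 12, 14):
    max_code = 2**bits - 1                       # 255, 4095, 16383
    gray = round(max_code * STD_OUTPUT_FRACTION)
    print(f"{bits:2d}-bit: max = {max_code:5d}, 18% gray ~ {gray}")
```

which prints 118, 1888 and 7553 respectively.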

I do not, unfortunately, have the math used to translate the data off the sensor into these values, so take what follows as speculation.
 
My Musical Fidelity A3-24 can output at many different sample rates, and the difference between 44.1 kHz and 192 kHz is plain as day.

But I'm not sure that's as relevant here, as the reason is often jitter reduction and less destructive filters. To avoid aliasing, 44.1 kHz needs to go from full signal at 20 kHz to lower than -100 dB at 22.05 kHz, requiring filters with hundreds of dB/octave of rolloff.

Much as I prefer to listen to my gear over arguments in Stereophile/TAS/etc., I'd prefer to see the results from a 40D over theoretical arguments.
 
I posted this on another thread. Reference this analysis on sensor
performance:

http://www.clarkvision.com/imagedetail/digital.sensor.performance.summary/

It has to do with the true DR response of the sensor. For a sensor
like the 5D, where the well capacities are huge (because of the sensel
size), the true DR of the sensor can exceed 14 stops.

Sensor DR = full well capacity / read noise = 80,000/3.7 ~ 21,600 (about 14.4 stops)

so to represent each quantization increment you need at least 14 bits
(2^14 = 16,384; a 12-bit ADC gives only 4,096 levels).

According to this article the 12 bit A/D converter is limiting the
dynamic range of the 5D sensor by 2 full stops, and that if it had a
14 bit A/D converter it would benefit by this amount.
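Here's a back-of-the-envelope version of that calculation in Python, using the 5D figures quoted above (the full well and read noise numbers are the article's, not anything I've measured):

```python
import math

full_well = 80_000    # electrons, per the clarkvision figures quoted above
read_noise = 3.7      # electrons RMS

dr_ratio = full_well / read_noise        # ~21,600 distinguishable levels
dr_stops = math.log2(dr_ratio)           # ~14.4 stops

print(f"DR: {dr_ratio:,.0f}:1  ({dr_stops:.1f} stops)")
print(f"Stops clipped by a 12-bit ADC: {max(dr_stops - 12, 0):.1f}")
print(f"Stops clipped by a 14-bit ADC: {max(dr_stops - 14, 0):.1f}")
```

which gives roughly 2.4 stops lost at 12 bits and only about 0.4 at 14 bits, consistent with the "2 full stops" claim.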

But this is the 5D. For the 40D the full well capacities will be much
less. For the MIII, it could be 10-20% less than the MII depending on
how much improvement was made on read noise.

So perhaps for cameras with high pixel densities, there will be
minimal improvement using 14 bits. That could be why you never saw a
significant improvement on the MIII.
Indeed it does depend on the true DR of the sensor.
The analysis in

http://www.openphotographyforums.com/forums/showpost.php?p=31694&postcount=31

as well as the rest of that thread seems to indicate that the real world
DR of the 1D3 is about 11.6 stops. I would expect that of the 40D
to be somewhat less. My understanding is that the clarkvision analysis
overestimates the S/N by overestimating the max signal, while the
OPF results in the above link are taken from real-world 1D3 raw data.

--
emil
http://theory.uchicago.edu/~ejm/pix/20d/
 
A good introduction to some of the quantization issues can be found here:

http://www.clarkvision.com/imagedetail/digital.sensor.performance.summary/index.html

As you can see from one of the graphs in there, even the 20D/30D sensor can in theory benefit from a 14-bit A/D converter, if amplifier noise is sufficiently controlled. It is likely the 40D does slightly better than the 20D here; I'm assuming that the improved microlens efficiency will have been paired with a deeper well capacity (sufficiently so that the slightly smaller photosite pitch won't cancel the effect).

Anyway, I think it's not unreasonable to expect that under realistic circumstances, there will be useful info in the 2 least-significant bits (although probably not for all ISO settings).

When that's the case, those extra 1 or 2 bits will translate to almost 1 or 2 stops of extra dynamic range. One advantage is that you'd then have more detail left in deep shadow areas. You can then manipulate your tone curves to bring that to an observable level in the output (print or screen).

You're most likely not going to see any significant gains using standard or quasi-standard curves (like the one used by the camera to map sensor data to JPG).
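To illustrate the shadow-detail point numerically, here's a toy Python sketch: a smooth deep-shadow gradient, quantized as a 12-bit and a 14-bit ADC would, then pushed hard with a tone curve and mapped to 8-bit output. It deliberately ignores sensor noise (which, as discussed elsewhere in this thread, changes the picture considerably), so treat it as an upper bound on what the extra bits can buy:

```python
import numpy as np

def to_8bit(linear, adc_bits, push_stops=4):
    """Quantize a linear signal at the ADC, lift the shadows, map to 8-bit output."""
    codes = np.round(linear * (2**adc_bits - 1))       # ADC quantization
    linear_q = codes / (2**adc_bits - 1)
    pushed = np.clip(linear_q * 2**push_stops, 0, 1)   # ~4-stop shadow push
    return np.round(pushed ** (1 / 2.2) * 255)         # simple gamma to 8 bits

shadow = np.linspace(0.0005, 0.004, 2000)              # deep-shadow gradient, noise-free

for bits in (12, 14):
    levels = len(np.unique(to_8bit(shadow, bits)))
    print(f"{bits}-bit capture -> {levels} distinct output levels in the pushed shadows")
```

The 12-bit capture ends up with roughly a third as many distinct output levels as the 14-bit one, i.e. visibly coarser steps in the pushed shadows.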

--
Daveed
http://vandevoorde.com
 
Sampling frequency: I think yes, I could, if the recording chain is up
to the task. I think the interaction of multiple tones, complex
reflections, and complex overtones produces frequencies well above
20 kHz, and those will be captured at the higher sampling rates.
Right, of course, I wasn't denying that ultra high frequencies can be captured (and reproduced), but what I'm skeptical of is that regular people, especially mature adults, can tell the difference. I would like to try myself, but I'm short several thousand dollars for the equipment that purportedly can make me tell the difference. I would like to see some well-made tests of this; I haven't seen or read any.
 
You may be right about the extra two bits being lost in noise but
your experiment isn't the right one.

Right now you take your 12-bit image and squish it into 8 bits
because 8 bits is all your screen can display and all your printer
can print.
You've just made an assumption that is false. What I do when I test these things is to work an image in 16 bit mode, add a layer above the layer I'm interested in and use levels to expand the range of tones I'm interested in inspecting.
Everybody knows that if you take an 8-bit image and do things to it
like increasing the contrast you'll get posterization. If you take
your 12-bit image and do the same things, but to a greater extent, you
will ALSO get posterization. Guaranteed. If you take a shot with a
14-bit sensor and do the same manipulation, does the posterization go
away? Yes it does. Even if the extra two bits are just noise.
So then 14-bit A/D does not present any advantage. All that's needed is to pad the extra 2 bits with random noise. I have no argument there.
If the extra two bits are really just noise you can get the same
improvement by taking your posterized image and adding your own
noise.
Same as what? A 14 bit image with true data down to the 14th bit? I don't think so. A 14 bit image that doesn't show posterization? Sure.
It will look better, but there will be no added real
information.
Right. If the extra 2 bits provide true information, there will be posterization if you truncate to 12 bits. Just like I said. The raw converter can add noise to the least significant bits if desired. If they don't, maybe they should. Hey, remember the old style plotters? They used random noise in their x,y position directives. This way they could position the pen more precisely. Random noise can be very useful, indeed.
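For what it's worth, here's a tiny Python sketch of that dithering idea (the plotter trick applied to quantization): a smooth ramp quantized coarsely turns into a staircase, but adding about one LSB of random noise before quantizing trades the staircase for noise whose local average tracks the original signal again:

```python
import numpy as np

rng = np.random.default_rng(0)

ramp = np.linspace(0, 4, 100_000)        # smooth signal spanning 4 coarse steps

plain = np.round(ramp)                   # hard quantization -> staircase
dithered = np.round(ramp + rng.uniform(-0.5, 0.5, ramp.size))   # dither, then quantize

def local_avg(err, n=501):
    """Moving average, standing in for the eye/print blurring nearby pixels together."""
    return np.convolve(err, np.ones(n) / n, mode="same")

print("distinct levels, plain:", len(np.unique(plain)))          # 5 steps
print("mean |error|, plain:            ", np.abs(plain - ramp).mean())
print("mean |error|, dithered+averaged:", np.abs(local_avg(dithered - ramp)).mean())
```

The averaged error of the dithered version comes out roughly an order of magnitude smaller than the plain staircase error, even though no real information was added per sample.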

--
http://www.pbase.com/victorengel/

 
First off, I'm a fan of 14-bit resolution (assuming the DR and noise
of the sensor support 14 bits and the A/D conversion is accurate to
+/- 1/2 LSB). I got REALLY REALLY REALLY excited about the 1DmkIII
because of the 14-bits and high ISO noise improvement; the noise
improvement was "real" but the advantage of the extra two bits, for
some reason, can't be seen in the area (clear sky transitioning
from light to dark) where I had hoped to see an "obvious"
improvement. IMO the extra two bits "should be" a big help as you
map from RAW to RGB; however, if you can't see the difference you
have to question the utility of the feature.
So the upshot is that from here on out shooting RAW we can expect an extra, what, like 17% "marketing byte bonus" on our memory cards? Fun...

--
-CW
 
The problem with sampling at so close to double the highest frequency we want to capture is that to eliminate aliasing (and believe me, you MUST eliminate aliasing or you'll be really sorry) you need analog audio filters that pass 20 Hz to 20 kHz flat, but then somehow reduce the amplitude of a 22.05 kHz tone to -100 dB.
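To put a rough number on how steep that filter has to be, here's a quick Butterworth order estimate in Python (the 0.5 dB passband ripple spec is my assumption, not anything from the post):

```python
import math

f_pass, f_stop = 20_000.0, 22_050.0   # flat to 20 kHz, -100 dB by the 44.1 kHz Nyquist limit
a_pass, a_stop = 0.5, 100.0           # dB; the 0.5 dB passband spec is assumed

# Standard Butterworth order estimate
n = math.log10((10**(a_stop / 10) - 1) / (10**(a_pass / 10) - 1)) \
    / (2 * math.log10(f_stop / f_pass))

print(f"Butterworth order needed: ~{math.ceil(n)}")                    # ~129
print(f"Transition width: {math.log2(f_stop / f_pass):.2f} octaves")   # ~0.14
```

Roughly a 130th-order analog filter: 100 dB of attenuation squeezed into about a seventh of an octave, which is where the "hundreds of dB/octave" figure earlier in the thread comes from.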

That's not a trivial task and the main problem is that the phase of signals anywhere near the limit of audibility can get so "wound up" that it makes a real mess of things.

It turns out that people locate the direction from which sounds come largely by hearing first arrivals. If you step on a twig while creeping through the forest, that "snap" is easy to locate on because it has a nice, sharp rise. Sort of a nice step or "square" shape to it.

That's just perfect for people or animals to hear and immediately be able to know what direction to look. The deer immediately knows right where you are and turns its head right on target the first time.

We use the same thing, along with the bizarre shape of the outer ear (which acts as a comb filter), to give us directional cues. Music recorded and/or played back on a 44.1 kHz system will need AA filters that are so sharp that they mess up the phase information and thus confuse our ear/brain system.

This is part of why early digital audio can have a harsh, gritty, and "unreal" sound to it compared to listening to a good quality vinyl record on a decent playback system. A good recording played back on a decent system lets you better hear where things are coming from.

Going to higher sampling rates eases the design of the AA filters necessary and allows us to capture the audible frequencies without totally messing up the phase information.

It's not that we actually hear things over 20 kHz necessarily, but we can certainly hear the nasty results of an analog filter that passes everything up to that point "flat" and then manages to roll off so sharply that we don't get aliasing when recording at 44.1 kHz.

--
Jim H.
 
Thanks for explaining it like that, it does make sense. You mentioned old recordings, though, but what about new 44.1 or 48 kHz recordings, made with top-notch equipment? I mean, is this AA limitation inherent to the frequencies used, or the equipment used? Could something be done in post-processing, like Creative Labs and others claim to do? I'm sorry if I sounded so skeptical, but I guess you realize how much BS runs the audio world.
 
This way they could position the pen more precisely. Random noise can
be very useful, indeed.
Not sure how directly this applies to the 40D, haha, but the paddlefish uses noise to increase its detection abilities. It makes an interesting read, though.
 
I looked at the TIFF versions too, and I can still easily see a different character to each, whatever that is worth, if anything. Both, throughout the entire image, have a different look.
 
A good introduction to some of the quantization issues can be found here:

http://www.clarkvision.com/imagedetail/digital.sensor.performance.summary/index.html

As you can see from one of the graphs in there, even the 20D/30D
sensor can in theory benefit from a 14-bit A/D converter, if
amplifier noise is sufficiently controlled.
A contrary result based on actual 1D3 images is to be found at

http://www.openphotographyforums.com/forums/showpost.php?p=31694&postcount=31

The surrounding thread is also rather illuminating.

--
emil
http://theory.uchicago.edu/~ejm/pix/20d/
 
There's more snake oil in the high-end audio game than in just about any other field I've seen. But I've also found that sometimes people are hearing things that are for real, even if the explanation sounds really far out.

What I meant by old digital recordings was the spate of initial CDs released when CD players first came onto the mass market. There are a few reasons why some of those first CDs sound so terrible.

And if I might digress, I'll mention something that is interesting to me, if not entirely on topic:

When CDs first came out, I bought what was alleged to be a very good player and connected it to my existing system. I also "pooged it" (some of you will know what that's all about - basically, just doing some hot-rod modifications). And of course, I bought some CDs of familiar records so that I'd have something to play on that new gizmo. I fully expected these CDs to sound better than my LPs of the same material. After all, CDs were "perfect" (that's what the ads said, right?).

But the thing I found was that they sounded harsh and irritating. There was something about them that you just couldn't put your finger on, but which literally ran you out of the room. I'd listen to an entire symphony at realistic volume levels off of an LP and be totally relaxed and not want to leave. I'd put another on, and then another. It was easy to spend hours just listening to music.

But if I put a CD on, what I often found was that for some reason, I'd find myself someplace else in the house. I'd be up in the kitchen or in my office or whatever. I had, for some reason, just gotten up from my chair and left the listening room without really thinking about it.

And strangely, this was also true for our cats. Where I could sit for hours with a cat on my lap listening to LPs, with others just sleeping in the room, if I put on a CD of the same material (same record, same label...), the cats would leave the room fairly soon. There was just something about the CD and/or the CD player that was (at least on a subconscious level) irritating.

Later, after people finally admitted that these first CDs and players were NOT in fact perfect (gee, if they're all perfect, how come some players or CDs sound better than others? Is one a better perfect and one a worse perfect?), I started to read some good explanations about why those first CDs were so awful (and still are, since many have not been re-done - although a lot have).

First, when a master tape is created from which the lacquer master for an LP is cut, the engineers had a very good feel for what the entire chain from master tape right through to home-playback would do to the sound. So they altered the response on the master tape to account for that. Thus, things were, indeed very "bright" sounding on the master tape to account for all of the roll-off that would occur through the rest of the steps. This made the home playback just about right if it was done properly.

But in a rush to make a whole lot of CDs, the record companies just grabbed those tapes and digitally reproduced them without re-equalizing them. Nasty!

Also (and this is the real topic here), the original digital recording systems had to use the state-of-the-art electronics of the time. And at that time, making an A/D converter that could sample at 44.1 kHz with 16 bits of resolution was a heck of a feat. So that's what they had to work with. Thus, for those early digital recordings, they had to use these very sharp analog filters to chop off any signals above 1/2 the sampling frequency (the Nyquist limit), or you'd get aliasing, which creates "birdies" or beat frequencies that can be easily heard.

And making an analog filter that sharp is tough. And even if you get it to work, frequencies well below the cutoff point will have their phase altered by a lot. So we had nastiness on a number of levels.

Getting to your actual point:

What they can do now is to record using A/D converters operating at much higher sampling rates - say 192 kHz. That lets them use a very gentle analog filter that doesn't need to knock off frequencies even close to our range of hearing. Thus, important phase information (to us) is kept intact.

Then, they can take that digital data and perform digital filtering and "noise shaping" and goodies like that, which allow them to create a 44.1 kHz standard CD but without having to mess up the phase information (because the original capture was done at the higher rate and subsequent filtering was done numerically and not in the analog domain).
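A minimal sketch of that capture-high-then-decimate idea, using scipy (I've used a hypothetical 176.4 kHz capture rather than 192 kHz purely so the ratio to 44.1 kHz is an integer):

```python
import numpy as np
from scipy.signal import decimate

fs_capture = 176_400                        # 4 x 44.1 kHz capture rate
t = np.arange(fs_capture) / fs_capture      # one second of signal
x = np.sin(2 * np.pi * 1_000 * t)           # stand-in for the recorded audio

# Digital anti-alias filtering (linear-phase FIR) plus downsampling to 44.1 kHz,
# instead of a brick-wall analog filter at capture time.
y = decimate(x, 4, ftype="fir", zero_phase=True)

print(len(x), "samples in ->", len(y), "samples out (44.1 kHz)")
```

The steep filtering happens in the digital domain, where it can be made linear-phase, which is the whole point of the paragraph above.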

Then, very often, our CD players also upsample the data from that CD before generating the analog signal. Again, having a higher sampling rate come out of the D/A converters in the player allows for a more gentle analog filter on that end of things (or maybe none at all). So it is possible to have pretty good sound from a well done CD and a good CD player even if the data is encoded onto the CD at 44.1 KHz.

It's probably better to just stick with the higher sampling rate right up until the player does its D/A conversion, but 44.1 kHz as a "transport medium" can be pretty darn good. And standard CDs only hold so much information, so if we want an hour of stereo music, we're kind of stuck with 44.1 kHz at 16 bits.

My stepson, who has amazingly acute hearing, was amazed when I played some LPs for him. He'd never heard any before and had only heard digitized audio. He immediately found that he preferred my old records to most of what he was used to. But then again, most of what any of us hear is very low-end gear. A really good digital playback system can sound good.

But the masses never hear that stuff. The fact that MP3s are so popular shows that quality is not what sells. And that was obvious years ago when pre-recorded cassettes outsold LPs. People value convenience over quality, for the most part, when it comes to music listening.

--
Jim H.
 
I was replying to this paragraph of yours:

"All you have to do to verify this is to do some pixel peeping. If using 14 bits were significant, then truncating at 12 bits would result in posterization. Until someone can show me an example of such posterization, I'll continue to believe there is no value to 14 bits. I haven't seen it yet, but that doesn't mean it doesn't exist."

Perhaps you were unclear about what you actually meant, but ANY image will show posterization if you push it far enough. Give me any 12-bit image and I'll show you a processed version that shows posterization.

Note also that having a 14-bit image, even with random data in the least significant bits, will eliminate that posterization. I'm not disagreeing with you (though I'm not agreeing either -- it should be tested) that the extra two bits don't necessarily have useful data in them. I'm disagreeing with the quoted paragraph. At the least it doesn't say what you mean.

--
Robb

 
I own a 1D Mark II, and I can't afford to upgrade to the Mark III.

Also, the auto focus issues with the Mark III make me happy I have the Mark II.

But after about 3 months of looking at the Mark III images ... I think there's some slight improvement. You don't always see it, but sometimes it's there ... I think, but I'm not sure.

And I'm not sure it's the 14-bits either. But maybe something.
 
So the implication is that the 1DMkIII (and presumably the 40D) has a stop or so (maybe 2?) more DR at the low end.

I wonder if that top end opens up when you've used the Highlight Tone Priority mode? Why would it be there, but grayed out, if it had no use?

Curiouser and curiouser ;-)

--
Jim H.
 
