D3 dynamic range?

It also turns out that the raw files were shot in 12-bit mode on the D3. It will be interesting to see if the readout methodology of the camera differs enough in 14-bit mode to change the DR appreciably. If it does not, then the extra two bits are going to be as useless as they are on the Canon 1DMk3.
I'll take smoother gradients in sunsets & skintones and less banding. It may not affect DR, but the added bits will definitely affect the IQ of the shot - equally important, and useful for sure...

--
http://www.arizonadigitalphotography.com - finally up, give a look

http://www.davidlakephotos.com - wedding site in the works...
 
Just a few days ago, I had a chance to test a D3 directly against my S5 Pro and D200..

On the DR issue, I find that the DR of the D3 is about the same as the 300% setting on my S5 Pro.. This is how I tried it: I chose a particularly contrasty scene with a black suit on a chair and, in the background, a few bright lights illuminating some cameras on display. The D3 was set to aperture priority at f/5.6, high Active D-Lighting, matrix metering, exposure bias +0.7, with the 17-35mm at around 26mm. The S5 Pro was also set to aperture priority, f/5.6, EV +0.7, matrix metering, with the Tamron 17-50 at 17mm, and I shot at 100%, 133% and all the way up to 400% DR.

Pictures were compared with the clipping warning on. The amount of clipping was almost the same between 300% DR on the S5 Pro and the D3. When these two shots were examined, the amount of detail in both was almost the same (mind you, when reviewing images on the S5 you can't zoom in to the same extent as on the D3, but when zoomed to the same extent, the detail appeared almost identical)..

According to the Nikon technician who assisted me, the preproduction model will have more noise and hence worse DR with Active D-Lighting (the one is supposed to enhance the other).. He asked me to come back and re-examine the final release model once they have one, and not to draw early conclusions.. So, we will see...

But one thing I noticed is that when I crank up the DR on my S5, the picture looks a bit flat without post-processing, while the ones from the D3 are still very nicely contrasty..
Hmm.. Sudden gush of NAS..
--
Lenz
D200 & S5 pro
 
This is the first head-to-head comparison of DR between the D3 and S5 that I have seen.

I ordered a D3, and my new S5 is also coming today! The reason I ordered an S5 after the D3 is that I believe the S5 should have more DR. However, my question is: did you compare the raw files between these cameras? I mean the raw from the D3 and the S5? By my understanding, Active D-Lighting is just in-camera post-processing, which cannot increase the real DR of the D3, something like D-Lighting in Capture NX.

If the DR of the D3 is really that close to the S5's, there is no reason for me to have an S5, so to me this is very important information.
 
I am no mathematician, but I was not aware that the bit depth had much to do with the DR. I think one thread has pointed out that 0-100% is the same regardless, and the bit depth is effectively how many slices you get between those extremes... thus smoother gradation and therefore colour accuracy. But DR?

The only way DR can be increased, surely, is for a pixel to be able to handle a much wider range of brightness values without saturation, clipping or noise. Fuji decided, even with its relatively large pixels (6MP), that it needed extra small insensitive ones to truly extend the ability of the sensor at the highlight end. No other manufacturer has yet matched this design for DR, and as a Fuji user, it definitely works!

Clearly this is borne out by Nikon's method of D-Lighting: it does not extend the DR in any linear way but does what film used to do and compresses the shadow and highlight information into the DR that it has by creating an "S"-shaped curve. This may of course have the benefit of a more film-like look, not such a bad thing and possibly even a selling point for some.

This may be an effective method of ensuring that shadow detail is recorded with low noise and highlights are still recorded with detail...but it is not, by definition, extended DR. This was confirmed by Nikon staff at Vision 2007 last week.
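
As an illustration only -- a toy curve, certainly not Nikon's actual D-Lighting algorithm -- an "S"-shaped tone curve of the sort described might look like this in Python:

import numpy as np

def s_curve(x, strength=4.0):
    # map linear [0,1] input through a sigmoid with a film-like toe and shoulder
    s = 1.0 / (1.0 + np.exp(-strength * (x - 0.5)))
    lo = 1.0 / (1.0 + np.exp(strength * 0.5))   # curve value at x = 0
    hi = 1.0 / (1.0 + np.exp(-strength * 0.5))  # curve value at x = 1
    return (s - lo) / (hi - lo)  # renormalize so 0 maps to 0 and 1 maps to 1

print(np.round(s_curve(np.linspace(0, 1, 5)), 3))
# [0. 0.197 0.5 0.803 1.] -- midtone contrast up, shadow/highlight detail compressed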

Having said that, I am sure the D3, with its FF sensor and large pixels, combined with D-Lighting and low noise at high ISO, will still be a force to be reckoned with!
 
Sorry to say that I was unable to compare the maximum capability of the RAW files, because the D3's RAW files are not final output yet and the Nikon technician would not allow me to load the images onto my Mac.. All I could do was compare the images on the cameras' LCDs, look at the highlight/shadow clipping, and compare them against the various DR settings of the S5.. I shot all pictures in RAW-only format. My D200 (set to a custom curve to minimize highlight clipping) clipped at around 166%.. Not too bad though..
Hope this helps..
Thinking of selling off my S5 (only 6 months old :-
--
Lenz
D200 & S5 pro
 
It also turns out that the raw files were shot in 12-bit mode on the D3. It will be interesting to see if the readout methodology of the camera differs enough in 14-bit mode to change the DR appreciably. If it does not, then the extra two bits are going to be as useless as they are on the Canon 1DMk3.
I'll take smoother gradients in sunsets & skintones and less
banding. It may not affect DR, but the added bits will definitely
affect the IQ of the shot - equally important, and useful for sure...
Yes this is a common misconception. It is a fact that adding bits beyond the dynamic range of the sensor conveys no advantage. See another post in this thread:

http://forums.dpreview.com/forums/read.asp?forum=1021&message=25612187

If the bit depth exceeds the DR, the additional bits under the noise floor are random. They do nothing to smooth tonal gradients. Sunsets (highlights) and skintones (midtones) are least affected -- the noise is tens up to about a hundred raw levels in these exposure zones, and will be totally unaffected by the difference between 12-bit and 14-bit capture. Banding is part of sensor read noise and is certainly lower on the D3 than its predecessors, as it is on the latest Canon offerings relative to their predecessors; but this is not correlated with the bit depth of capture.
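
A quick simulation makes the point (the numbers are illustrative, not D3 measurements). Quantize a smooth gradient whose read noise is about 1.2 twelve-bit levels once with 12-bit steps and once with 14-bit steps; the error against the true gradient is set by the noise, not the step size:

import numpy as np

rng = np.random.default_rng(0)
signal = np.linspace(100.0, 110.0, 100_000)          # smooth tonal gradient, 12-bit units
noisy = signal + rng.normal(0.0, 1.2, signal.size)   # read noise ~1.2 levels (assumed)

q12 = np.round(noisy)           # 12-bit quantization (step = 1 level)
q14 = np.round(noisy * 4) / 4   # 14-bit quantization (step = 1/4 level)

print(np.std(q12 - signal))     # ~1.23
print(np.std(q14 - signal))     # ~1.20 -- essentially the same gradient fidelity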

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
I did a more careful analysis of the shadow noise of the sample raws, leading to a more precise measurement of the DR at ISO 200, and the result is 11.7 stops (again according to the engineering definition of dynamic range = max signal/read noise).

I also analyzed
ISO 1600: DR is 10.2 stops
ISO 3200: DR is 9.5 stops

For reference, these numbers are nearly identical to those of the Canon 1Dmk3 found using the same DR definition, see for instance

http://www.pages.drexel.edu/~par24/rawhistogram/Mk3Test.html
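
For concreteness, here is the arithmetic behind those figures. The read-noise values below are back-solved from the stops quoted above (assuming a 12-bit scale with max signal 4095); they are not independent measurements:

import math

def dr_stops(max_signal, read_noise):
    # engineering definition: DR = max signal / read noise, expressed in stops
    return math.log2(max_signal / read_noise)

print(dr_stops(4095, 1.23))  # ~11.7 stops (ISO 200)
print(dr_stops(4095, 3.48))  # ~10.2 stops (ISO 1600)
print(dr_stops(4095, 5.66))  # ~9.5 stops (ISO 3200)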

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
A small point, but isn't the distribution in the extreme shadows
Poisson rather than Gaussian?
My understanding (but I'm no expert here and will defer to anyone
more qualified) is that read noise is more or less Gaussian; it is on
Canon cameras, see for instance

http://www.pages.drexel.edu/~par24/rawhistogram/Mk3Test.html

(I'm not sure what the funny dips are on the sides of the
distribution there, but it's roughly gaussian). Canon imparts a
bias voltage so that their blackpoint is at some nonzero raw level
(1024 on the 14-bit 1D3 and 40D, 128 on most earlier 12-bit cameras);
this is good because the deep shadows are not distorted by clipping.
In Nikons the blackpoint is set to zero, and the negative voltage
side of the noise distribution is clipped, distorting the deepest
shadows. In my more accurate second measurement, I evaluated the
noise a little above zero exposure at various points, plotted the
results and extrapolated to zero exposure. This avoids the problem
of clipped black point which underestimates the noise.

As an aside, the photon shot noise that dominates at higher exposure
levels is Poisson distributed, but as soon as it's a few hundred
photoelectrons it's also for all practical purposes Gaussian, due to
the central limit theorem.
Emil,

I'm not an expert either, but perhaps a little dialog will clarify the situation. I took my cue on the Poisson distribution from Norman Koren's Imatest site. Look down the page to the green box.

http://www.imatest.com/docs/noise.html

With counts near zero, the normal distribution cannot be used, or else you can get negative counts. The normal distribution is continuous, whereas counts and pixel values are discrete and are best handled with the Poisson distribution. This is discussed briefly in the Wikipedia article; look for section 6.4.1

http://en.wikipedia.org/wiki/Normal_distribution

For a large number of counts, the central limit theorem applies and one can use the normal distribution.

I did the following tests with a Nikon D200. I took two dark frames with the lens cap on the camera at the highest shutter speed. I then used the astronomical program Iris to obtain the raw image without demosaicing and saved the files as FITS. I then used the freeware program ImageJ to analyze the files. The raw histogram of one file is shown on the left. The values appear clipped at zero, as you suggested. One can obtain the read noise by subtracting two bias frames. To avoid obtaining negative numbers, one can add a constant amount to one frame before the subtraction, as Roger Clark recommends on his web site. I added 50 to all the pixel values in one file and subtracted the images. The results are shown on the right. The standard deviation on the right is for two frames, and to obtain the SD for one frame, one divides by the square root of 2. As can be seen, the SDs are nearly equal after the correction (1.23 vs 1.21), indicating that no data were lost in the first frame. I don't think that the camera CCD produces any negative voltages that are clipped by the ADC. The read noise is in ADUs.
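
In Python the same two-dark-frame procedure looks roughly like this (a sketch only: the file names are hypothetical, and numpy/astropy stand in for Iris and ImageJ):

import numpy as np
from astropy.io import fits

f1 = fits.getdata("dark1.fits").astype(np.float64)  # two dark frames saved as FITS
f2 = fits.getdata("dark2.fits").astype(np.float64)

diff = (f1 + 50.0) - f2            # offset one frame so the difference stays positive
sd_pair = np.std(diff)             # SD of the difference of two frames
sd_single = sd_pair / np.sqrt(2)   # SD for a single frame, in ADU
print(sd_single)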



--
Bill Janes
 
I am no mathematician, but I was not aware that the bit depth had
much to do with the DR. I think one thread has pointed out that
0-100% is the same regardless, and the bit depth is effectively how
many slices you get between those extremes... thus smoother gradation
and therefore colour accuracy. But DR?
Your assumption is not true. For some background reading, please refer to figure 8 and the accompanying text on Roger Clark's web site:

http://www.clarkvision.com/imagedetail/digital.sensor.performance.summary/

Norman Koren has a table showing the maximum dynamic ranges for various bit depths. As the table shows, 10 bits can quantify 10 f/stops. If you extended the table, you would see that 12 bits can quantify 12 f/stops, and so on.

http://www.normankoren.com/digital_tonality.html

--
Bill Janes
 
You think wrong.

The A/D bit depth has no effect on the dynamic range of the sensor.
If a system can handle 50 lux to 50000 lux in one exposure without
the blacks blocking or the light greys burning, it can do it
regardless of whether you slice the 50-50000 lux scale into 100 or
100000 slices.
The ADC does have an effect on the dynamic range. DR can be limited
by quantization or noise. In the first case, digital capture is
linear and the scale is linear. Therefore, 14 bits can represent
values in the range of 1..16384 and for 12 bits the range is 1..4096.
These values correspond to 14 and 12 f/stops respectively.
It depends on the mapping. 14 bits just means more gradations. The quality of the input signal is the key to the DR, though the designers might decide not to map the complete range of the signal to the output. After all, you could represent 100 stops of information with an 8-bit signal; how else do you think the Fuji S5 gets more DR in the image than the D200 when both are using 8-bit JPEGs?
In most practical situations DR is limited by noise and electronics
engineers define DR as:
full well capacity/read noise, both expressed in electrons.

Read noise can be affected by the ADC (analog to digital converter).
The ideal SNR of an ADC equals 6.02N+1.76 dB, where N is the number
of bits. Going from an ADC of 12 bits to 14 bits can give 2
additional stops of DR. Of course, using a 14 bit ADC in front of a
sensor with a low DR will not help, since the DR is limited by the
sensor in this case rather than the ADC.
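
To make the formula concrete, here is just the arithmetic above, using 20*log10(2) = 6.02 dB per stop to convert the result back to stops:

import math

db_per_stop = 20 * math.log10(2)   # ~6.02 dB per factor of 2
for n_bits in (12, 14):
    snr_db = 6.02 * n_bits + 1.76  # ideal ADC SNR
    print(n_bits, round(snr_db, 2), round(snr_db / db_per_stop, 2))
# 12 -> 74.00 dB ~ 12.3 stops; 14 -> 86.04 dB ~ 14.3 stops: two extra stops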

--
Bill Janes
Sounds sensible.

Nikon were boasting that the D300 uses ADCs on the sensor chip, rather than off-chip, which reduces noise (since the signal is converted from analogue as soon as possible). Apparently this is how Canon have been doing it for some time, and it is an advantage of CMOS.

Anyway, the lower noise of the D3 reported by testers might suggest a higher dynamic range, though whether that allows better highlight rendition is unclear.
 
A small point, but isn't the distribution in the extreme shadows
Poisson rather than Gaussian?
My understanding (but I'm no expert here and will defer to anyone
more qualified) is that read noise is more or less Gaussian; it is on
Canon cameras, see for instance

http://www.pages.drexel.edu/~par24/rawhistogram/Mk3Test.html
[snip]
Emil,

I'm not an expert either, but perhaps a little dialog will clarify
the situation. I took my cue on the Poisson distribution from Norman
Koren's Imatest site. Look down the page to the green box.

http://www.imatest.com/docs/noise.html

With counts near zero, the normal distribution can not be used or
else you can get negative counts. The normal distribution is
continuous, whereas counts and pixel values are discrete and are best
handled with the Poisson distribution. This is discussed briefly on
the Wikipedia article. Look for section 6.4.1

http://en.wikipedia.org/wiki/Normal_distribution

For a large number of counts, the central limit theorem applies and
one can use the normal distribution.
I think the point is that while photon shot noise has Poisson statistics, the read noise is roughly Gaussian -- random fluctuations in voltage, which can be positive or negative. So while the photon count must of course be positive, it gets converted to a voltage when the sensor is read, and read noise fluctuations get added which can in principle send the voltage negative. If one clips all negative voltages to zero, the distribution is truncated and you underestimate the noise (if one does the math, it turns out that a half-gaussian distribution has a width that is 1.66 times narrower than the full, double-sided gaussian). Canons apply a bias voltage before quantization, and so you can see the negative values that would have been clipped to zero had the bias not been applied. In the link I gave above, the 1D3 blackframe histogram is shown; the bias is 1024 raw levels and is at the center of the distribution. All the part of the histogram below 1024 would have got clipped off had this bias not been applied.
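
If you want to check that 1.66 factor yourself, a one-line Monte Carlo will do it (pure statistics, no camera data involved):

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 1_000_000)      # full two-sided gaussian read noise
print(np.std(x) / np.std(np.abs(x)))     # ~1.66, i.e. 1/sqrt(1 - 2/pi)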
I did the following tests with a Nikon D200. I took two dark frames
with the lens cap on the camera at the highest shutter speed. I then
used the astronomical program Iris to obtain the raw image without
demosaicing and saved the files as FITs. I then used the freeware
program ImageJ to analyze the files. The raw histogram of one file is
shown on the left. The values appear clipped at zero as you
suggested. One can obtain the read noise by subtracting two bias
frames. To avoid obtaining negative numbers, one can add a constant
amount to one frame before the subtraction as Roger Clark recommends
on his web site. I added 50 to all the pixel values in one file and
subtracted the images. The results are shown on the right. The
standard deviation on the right is for two frames, and to obtain the
SD for 1 frame, one divides by the square root of 2. As can be seen,
the SDs are nearly equal after the correction (1.23 vs 1.21),
indicating that no data were lost in the first frame. I don't think
that the camera CCD produces any negative voltages that are clipped
by the ADC. The read noise is in ADUs.

http://bjanes.smugmug.com/photos/221485299-O.gif
Your methodology is impeccable, but the question is how to interpret the result. I would say that when you subtracted the two clipped distributions, you would of course have obtained values that were both positive and negative equally, and that is why your distribution of the difference frame is symmetrical about the mean. But since the distribution you started with is 1.66 times narrower than the true read noise, the difference frame, even after compensating by the sqrt(2) (which btw is only appropriate for Gaussians, not half-Gaussians; but the error is negligible), is still off by the same factor of 1.66. The noise will be underestimated by this factor.

What I did with the non-blackframe image samples was to take small uniform samples from the shadow parts of the image. You could see the noise decreasing faster very near zero due to this clipping phenomenon, so I excluded the few blackest samples (those for which the raw level was not much more than the std dev of the patch) and extrapolated the noise in near-zero-signal patches to zero level to get the noise used in the DR calculation. Once one is far enough above zero signal that the noise distribution is not clipped, the noise measurement will be unaffected by the clipping.
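
A sketch of that extrapolation (the patch numbers below are invented, and the fit assumes the usual model noise^2 = read_noise^2 + signal/gain):

import numpy as np

# mean raw level and measured noise of several uniform shadow patches (made up)
signal = np.array([20.0, 40.0, 80.0, 160.0, 320.0])
noise = np.array([1.72, 1.99, 2.44, 3.16, 4.24])

# fit noise^2 as a linear function of signal; the intercept is the read noise squared
slope, intercept = np.polyfit(signal, noise**2, 1)
print(np.sqrt(intercept))   # ~1.4 ADU with these numbers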

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
You think wrong.

The A/D bit depth has no effect on the dynamic range of the sensor.
If a system can handle 50 lux to 50000 lux in one exposure without
the blacks blocking or the light greys burning, it can do it
regardless of whether you slice the 50-50000 lux scale into 100 or
100000 slices.
The ADC does have an effect on the dynamic range. DR can be limited
by quantization or noise. In the first case, digital capture is
linear and the scale is linear. Therefore, 14 bits can represent
values in the range of 1..16384 and for 12 bits the range is 1..4096.
These values correspond to 14 and 12 f/stops respectively.
It depends on the mapping. 14 bits just means more gradations. The
quality of the input signal is the key to the DR, though the
designers might decide not to map the complete range of the signal to
the output. After all, you could represent 100 stops of information
with an 8-bit signal; how else do you think the Fuji S5 gets more DR
in the image than the D200 when both are using 8-bit JPEGs?
The sensor is a linear device, and so the raw data is linear -- twice the light intensity yields twice the raw value. This property is in contrast to the jpeg, which has had nonlinear tone mappings applied to the raw data, such as gamma correction and tone curves, and so there is no direct relationship between the 8-bit depth of jpeg and the dynamic range of the image when it was recorded.

But since the sensor is linear, bits and stops are correlated -- doubling the light intensity amounts to shifting the signal up one bit in the raw data. When the signal drops below the noise level, the practical value of using extra bits to encode the sensor data is almost nil, since the extra bits beyond the DR are effectively random.

It's a one-way street -- bit depth larger than the dynamic range in stops is wasteful, since the extra bits are random and unusable; bit depth smaller than the dynamic range throws away useful information about the scene as recorded.
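
A trivial illustration of the bits/stops correspondence in a linear encoding:

# each doubling of the light shifts a linear raw value up one bit,
# so a 12-bit linear file can separate at most 12 doublings (stops)
raw = 0b000001101001   # an arbitrary 12-bit linear raw value
for stops in range(3):
    print(stops, format(raw << stops, "012b"))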

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
It depends on the mapping. 14 bits just means more gradations. The
quality of the input signal is the key to the DR, though the
designers might decide not to map the complete range of the signal to
the output. After all, you could represent 100 stops of information
with an 8-bit signal; how else do you think the Fuji S5 gets more DR
in the image than the D200 when both are using 8-bit JPEGs?
That is true, but all digital cameras that I am aware of map linear raw files, and that limits DR to 1 f/stop per bit of the ADC. If you are using gamma-encoded images, then the coding is more efficient.
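
A rough check of how much more efficient gamma coding is (assuming a plain 2.2 gamma, which real JPEG pipelines only approximate):

import math

linear_stops = math.log2(255 / 1)              # 8-bit linear: ~8 stops
gamma_stops = math.log2(1 / (1 / 255) ** 2.2)  # 8-bit gamma 2.2: ~17.6 stops
print(linear_stops, gamma_stops)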

--
Bill Janes
 
I think the point is that while photon shot noise has Poisson
statistics, the read noise is roughly Gaussian -- random fluctuations
in voltage, which can be positive or negative. So while the photon
count must of course be positive, it gets converted to a voltage when
the sensor is read, and read noise fluctuations get added which can
in principle send the voltage negative. If one clips all negative
voltages to zero, the distribution is truncated and you underestimate
the noise (if one does the math, it turns out that a half-gaussian
distribution has a width that is 1.66 times narrower than the full,
double-sided gaussian). Canons apply a bias voltage before
quantization, and so you can see the negative values that would have
been clipped to zero had the bias not been applied. In the link I
gave above, the 1D3 blackframe histogram is shown; the bias is 1024
raw levels and is at the center of the distribution. All the part of
the histogram below 1024 would have got clipped off had this bias not
been applied.
Emil,

Your explanation is reasonable, but I don't know enough about CCDs to know if they output negative voltage and the ADC clips it to zero. Do you have any references describing this behavior?
Your methodology is impeccable, but the question is how to interpret
the result. I would say that when you subtracted the two clipped
distributions, you would of course have obtained values that
were both positive and negative equally and that is why your
distribution of the difference frame is symmetrical about the mean.
But since the distribution you started with is 1.66 times narrower
than the true read noise, the difference frame even after
compensating by the sqrt 2 (which btw is only appropriate for
gaussians, not half-gaussians; but the error is negligible) is still
off by the same factor 1.66. The noise will be underestimated by
this factor.
Again, your explanation is eminently reasonable. During the subtraction, I would get no negative values, since I added a sufficiently large bias to prevent negative numbers, as Roger Clark recommends on his web site. In that example he was using a Canon camera, so the voltage bias you mention would have been applied. He has also reported on Nikon cameras, but did not describe the methodology.

http://www.clarkvision.com/imagedetail/evaluation-1d2/index.html

Here is another reference on determining read noise by subtraction of dark frames. They do not add a positive bias before subtracting and make no mention of clipped negative values.

http://www.qsimaging.com/ccd_noise_measure.html

I would appreciate your comments and analysis. It has always puzzled me why my bias frames appear clipped, but the subtraction of two frames after adding a bias to one shows a bell-shaped curve.

--
Bill Janes
 
Raw converter software sets the black point at 128 rather than 0.
Then saturation is at 4095-128=3967, which is 11.95 stops. You lose
essentially nothing, but don't distort the noise spectrum at the
bottom end. Nikon should do this biasing, it's good engineering.
Actually, I think you are incorrectly calculating dynamic range. Dynamic range refers to the ratio of the brightest and dimmest levels, not the arithmetic difference. If I had used a 14-bit converter for exactly the same situation, the corresponding numbers would be 16383 and 512, since the sensor outputs are linear and the A/D outputs are still linear. If I used your method, 16383-512 would be about 13.95 stops. Obviously, just changing the number of bits used by the A/D converter would not change the dynamic range of the sensor (except that using too few could limit the standard measurement of DR). Of course, the ratio of 4095/128 is still quite close to 16383/512, so the DR remains the same independent of the number of bits.

--
Leon
http://homepage.mac.com/leonwittwer/landscapes.htm
 
Raw converter software sets the black point at 128 rather than 0.
Then saturation is at 4095-128=3967, which is 11.95 stops. You lose
essentially nothing, but don't distort the noise spectrum at the
bottom end. Nikon should do this biasing, it's good engineering.
Actually, I think you are incorrectly calculating dynamic range. The
dynamic range refers to the ratio of the brightest and the dimmest
levels and not the arithmetic difference. If I had used a 14 bit
converter for exactly the same situation, the corresponding
numbers would be 16383 and 512 since the sensor outputs are linear
and the A/D outputs are still linear. If I used your method,
16383-512 would be about 13.95 stops. Obviously, just changing the
number of bits used by the A/D converter would not change the dynamic
range of the sensor (except that using too few could limit the
standard measurement of DR). Of course, the ratio of 4095/128 is
still quite close to 16383/512 so the DR remains the same independent
of the number of bits.
What is linearly encoded is the raw values between the offset and saturation. My understanding is that the voltage comes off the sensor read, a bias voltage is added to it, and it is then quantized in the ADC.

Your calculation is correct if properly interpreted. Going to 14-bit does increase the capacity of the encoding to capture dynamic range by two stops. That does not mean, however, that you necessarily get that dynamic range when using that encoding of a signal from a camera; the camera noise is the limiting factor. Good DSLRs now have a read noise at ISO 200 of a little over one raw level in 12-bit units, and so 12-bit encoding gives twelve stops of DR. Using a 14-bit encoding would mean the noise gets multiplied by four just as the signal does; so the signal is four times larger, the noise is four times larger, and the DR, which is the ratio, is unchanged. So the extra two bits don't buy you anything here.
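
In code, the scaling argument is just this (the signal and noise values are assumed, roughly a twelve-stop camera):

import math

signal_12, noise_12 = 4095.0, 1.2    # in 12-bit units
signal_14, noise_14 = signal_12 * 4, noise_12 * 4   # same data re-encoded in 14 bits
print(math.log2(signal_12 / noise_12))  # ~11.7 stops
print(math.log2(signal_14 / noise_14))  # identical -- the ratio is unchanged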

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
I think the point is that while photon shot noise has Poisson
statistics, the read noise is roughly Gaussian -- random fluctuations
in voltage, which can be positive or negative. So while the photon
count must of course be positive, it gets converted to a voltage when
the sensor is read, and read noise fluctuations get added which can
in principle send the voltage negative. If one clips all negative
voltages to zero, the distribution is truncated and you underestimate
the noise (if one does the math, it turns out that a half-gaussian
distribution has a width that is 1.66 times narrower than the full,
double-sided gaussian). Canons apply a bias voltage before
quantization, and so you can see the negative values that would have
been clipped to zero had the bias not been applied. In the link I
gave above, the 1D3 blackframe histogram is shown; the bias is 1024
raw levels and is at the center of the distribution. All the part of
the histogram below 1024 would have got clipped off had this bias not
been applied.
Emil,

Your explanation is reasonable, but I don't know enough about CCDs to
know if they output negative voltage and the ADC clips it to zero. Do
you have any references describing this behavior?
Just the same sources you already know -- Roger Clark's work and analyses by John Sheehy and others. But it is a fact that voltage fluctuations are as likely to be in the negative direction as in the positive direction; I can't envision any physical effect that would lead the noise sources in the electronics to have only one sign of voltage. It makes no sense.
Your methodology is impeccable, but the question is how to interpret
the result. I would say that when you subtracted the two clipped
distributions, you would of course have obtained values that
were both positive and negative equally and that is why your
distribution of the difference frame is symmetrical about the mean.
But since the distribution you started with is 1.66 times narrower
than the true read noise, the difference frame even after
compensating by the sqrt 2 (which btw is only appropriate for
gaussians, not half-gaussians; but the error is negligible) is still
off by the same factor 1.66. The noise will be underestimated by
this factor.
Again, your explanation is eminently reasonable. During the
subtraction, I would get no negative values, since I added a
sufficiently large bias to prevent negative numbers as Roger Clark
recommends on his web site. In the example, he was using a Canon
camera, so the voltage bias you mention would have been applied. He
has also reported on Nikon cameras, but did not describe the
methodology.
I would appreciate your comments and analysis. It has always puzzled
me why my bias frames appear clipped, but the subtraction of two
frames after adding a bias to one shows a bell shaped curve.
That one is easy -- you are drawing numbers from the same probability distribution and subtracting them. It doesn't matter if the probability distribution itself is a Gaussian, half-Gaussian, Poisson, or whatever. Imagine you subtracted the bias frames x-y, and I subtracted the two bias frames y-x. Your distribution would be the negative of mine. But all the bias frames are statistically the same -- there is no preferred order in which to subtract them. Thus the distribution must be symmetrical about zero. Then you add the bias, and it will be symmetrical about the bias value.
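
You can verify this with a quick simulation (arbitrary noise level, no camera involved): subtract two independent blackpoint-clipped frames and the result is symmetric about zero.

import numpy as np

rng = np.random.default_rng(0)
frame1 = np.maximum(rng.normal(0, 1.2, 1_000_000), 0)  # clipped at the black point
frame2 = np.maximum(rng.normal(0, 1.2, 1_000_000), 0)
diff = frame1 - frame2
print(np.mean(diff), np.mean(diff**3))  # both ~0: the distribution is symmetric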

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
You can do a 12-stop luminance range with a 4-bit A/D converter. The converter does not affect the sensor's capability to capture differences in brightness across that range.

However, you will lose a tremendous amount of data between the 0% and 100% signal levels, as the 4 bits can't cope with the incremental changes in between. We could cut off everything outside the 0-1 stop and 11-12 stop portions of the range, and the 4-bit converter would perform brilliantly over the 12-stop span of the luminance scale. Maybe even better than a 32-bit converter :-)
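
A sketch of that thought experiment: sixteen linear levels stretched over a 12-stop (4096:1) range leave steps of roughly 256 linear units each, so the whole bottom of the scale collapses.

import numpy as np

scene = 2.0 ** np.linspace(0, 12, 13)   # one sample per stop, 1 to 4096
q4 = np.round(scene / 4096 * 15)        # quantize to 4 bits (codes 0..15)
print(q4)   # the bottom ~8 stops all collapse into codes 0 and 1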

The converter has to match the imaging unit's output capabilities, not undermine them. Overkill bit depths should not be used for marketing purposes either.

The logic in communicating the meaning of 14-bit conversion is a little backwards. It makes utilising a larger DR possible as the sensors get better. A 16-bit converter still does not give a D2X a 16-stop dynamic range.

(now you go and moderate away this message...)
 
Sorry Bill, I don't believe it!!

The bit depth is all about the number of colours. The pixel's ability to respond to a brightness range is something else.

The assumption that 12 bits = 12 f/stops and 14 bits = 14 f/stops is very clearly flawed.

The Fujis are capable of around 12 f/stops at best, with a 14-bit colour depth; the new Nikons and Canons are claiming nothing like this. Note how the DR drops on all DSLRs as the ISO is pushed... same bit depth, though?

Take the example of shooting in monochrome: are you saying the DR is any different? Or say you are shooting JPEG, which is 8-bit regardless: same DR? Well, actually not quite; you usually get around 1 f/stop more in RAW, depending on the processor.
 
