D600 High ISO in DX

Started Nov 23, 2012 | Questions thread
bobn2
Forum Pro • Posts: 26,181
Re: Clarkvision.com analysis
In reply to Leo360, Nov 23, 2012

Leo360 wrote:

bobn2 wrote:

Leo360 wrote:

From what I have seen on Bill Claff's charts, the D600 (DX mode) dynamic range outperforms the D7000 at all ISOs. I have no reason to think it will be any different with the D5200.

Leo

For those interested in the subject of sensor performance with all the gory details, please, read the sensor section at the clarkvision.com. Highly recommended read!

Only highly recommended if you want to end up getting all kinds of stuff wrong. Which seems to be what you have done. Particularly, the section on the effects of pixel size is extremely confused.


Bob

And how exactly is it confused?

Where to start? I've done this so many times. If I was sensible I would just keep a bookmark, but I'm not that organised. OK, going from the top:

Dynamic range is defined in this document and elsewhere on this site as:

  • Dynamic Range = Full Well Capacity (electrons) / Read Noise (electrons)
Dynamic range is always bandwidth dependent, so if you want to define DRs so as to compare one with another, you need to declare the bandwidth.
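To make the bandwidth point concrete, here is a toy calculation (all full well and read noise numbers are made up for illustration): per-pixel DR makes the big pixel look better, but once both sensors are normalised to the same output size (same bandwidth), the gap can vanish.

```python
import math

def pixel_dr_stops(fwc, read_noise):
    """Engineering DR of a single pixel, in stops: log2(FWC / read noise)."""
    return math.log2(fwc / read_noise)

def image_dr_stops(fwc, read_noise, n_pixels, ref_pixels=8e6):
    """DR normalised to a common output size (a fixed bandwidth).
    Averaging k pixels down to one adds signal as k and noise as sqrt(k),
    so the normalised DR gains a factor sqrt(k)."""
    k = n_pixels / ref_pixels
    return math.log2(fwc * k / (read_noise * math.sqrt(k)))

# Hypothetical same-size sensors:
#  - 8 MP of big pixels:  FWC 60000 e-, read noise 5 e-
#  - 32 MP of small pixels: FWC 15000 e-, read noise 2.5 e-
print(pixel_dr_stops(60000, 5))         # per-pixel: big pixels ahead by 1 stop
print(pixel_dr_stops(15000, 2.5))
print(image_dr_stops(60000, 5, 8e6))    # same bandwidth: the two are equal
print(image_dr_stops(15000, 2.5, 32e6))
```

With these (assumed) numbers, the one-stop per-pixel advantage disappears entirely at a common output size, which is exactly why the bandwidth has to be declared before any comparison means anything.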
The larger pixel has the potential to collect more light.
Well, no, it collects more light only because it occupies more area. Pixel size does not fundamentally affect the amount of light collected if the overall collection area is fixed.
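A quick sketch of the fixed-area point (the photon flux figure is invented, the sensor dimensions are the usual 36x24 mm): total light is flux times sensor area, and dividing that area into more or fewer pixels leaves the total untouched.

```python
# Total light collected is set by sensor area and exposure, not pixel count.
flux = 1000                       # photons per square micron (assumed figure)
sensor_area_um2 = 36000 * 24000   # 36x24 mm sensor, in square microns
total_photons = flux * sensor_area_um2

for n_pixels in (8e6, 24e6, 48e6):
    per_pixel = total_photons / n_pixels
    # per-pixel count changes, but per_pixel * n_pixels is always the same
    print(f"{n_pixels/1e6:.0f} MP: {per_pixel:.0f} photons/pixel, "
          f"{per_pixel * n_pixels:.3e} photons total")
```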
As pixels become smaller, signal level drops and read noise becomes more dominant, and that limits small pixel performance.
That is simply wrong, since input-referred read noise is inversely proportional to conversion gain, which is strongly linked to geometry (smaller = higher conversion gain).
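The conversion-gain argument in numbers (both the downstream noise voltage and the gain figures here are hypothetical): input-referred read noise is the downstream voltage noise divided by the conversion gain, so a smaller photodiode with lower capacitance and hence higher microvolts-per-electron gain refers the same voltage noise to fewer electrons.

```python
def input_referred_noise(voltage_noise_uv, conversion_gain_uv_per_e):
    """Read noise in electrons = output voltage noise / conversion gain."""
    return voltage_noise_uv / conversion_gain_uv_per_e

# Assumed figures: the same 200 uV of downstream voltage noise,
# two pixel designs with different conversion gains.
print(input_referred_noise(200, 50))    # big pixel,   50 uV/e-  -> 4.0 e-
print(input_referred_noise(200, 100))   # small pixel, 100 uV/e- -> 2.0 e-
```

So under these assumptions the smaller pixel has the *lower* read noise in electrons, the opposite of the claim being rebutted.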
The apparent image quality is given by the Full sensor Apparent Image Quality, or FSAIQ metric.
This is nonsense, the FSAIQ is something Roger made up, there is no evidence whatsoever that this metric has any connection to perceived 'apparent image quality'.
While the quantum efficiency of sensors in digital cameras has not really changed much, other factors that have improved include: fill factor (the fraction of a pixel that is sensitive to light), higher transmission of the filters over the sensor, better micro lenses, lower read noise, and lower fixed pattern noise.
This is contradictory. Quantum efficiency is at least partially dependent on fill factor, so if fill factor has risen then so must QE, as it has. For instance, the 5DIII has about double the quantum efficiency of the original 5D, with the 5DII neatly at the mid point between them.
I'm a bit bored with this now, but there follows loads of graphs each of which is a bit silly or misleading.
Figure 1, 'Digital cameras: sensor full well', no it isn't, it's pixel full well. Unsurprisingly bigger pixels have a bigger FWC, but that tells us nothing about the overall full well capacity of the sensor.
Figure 2, 'Digital cameras, signal to noise ratio on an 18% gray card at 100 ISO', again, it isn't the camera's SNR, it is at a pixel level, so each of the cameras is compared at a different bandwidth with respect to final image size. The graph tells us very little.
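The same bandwidth problem, applied to that SNR chart (signal levels here are invented for illustration): a shot-noise-limited SNR compared per pixel favours the big pixel, but averaged to a common output size the quarter-area pixel catches up exactly.

```python
import math

def snr_db(signal_e):
    """Shot-noise-limited SNR in dB: 20*log10(S / sqrt(S)) = 10*log10(S)."""
    return 20 * math.log10(signal_e / math.sqrt(signal_e))

# Hypothetical same-size sensors, 18% grey exposure:
#  - big pixel:   FWC 60000 e-, so 18% grey gives 10800 e-
#  - small pixel: quarter the area (4x the pixels), 18% grey gives 2700 e-
big, small, k = 60000 * 0.18, 15000 * 0.18, 4
print(snr_db(big))         # per-pixel comparison favours the big pixel
print(snr_db(small))
print(snr_db(small * k))   # averaged to the same output size: identical
```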
Figure 4, 'Sensor dynamic range'. Same thing applies.
Figure 5a, where we compare cameras with radically different sensor sizes and ascribe the (pixel scale) DR differences to the pixel size.
Same for figure 5b.
Figure 6. Exactly what is the point? (and in any case, this has nothing to do with 'ISO')
No, I really have to stop now, but the whole article goes on in the same vein. I'm sure Roger is a very decent fellow, but this article is confused, muddled and misleading from start to finish.

Roger has a more accessible essay, "Does Pixel Size Matter?", which is also recommended reading for those who believe (not you) that read noise scales linearly with pixel area.

Leo

Just as misleading. I counsel people against drawing any conclusions unless they have the background to see the flaws. Once again, the examples compare cameras with radically different sensor sizes and then ascribe the observed differences to pixel size.

The overall point is this: Roger (and you) is obsessed with comparing things at the pixel sampling frequency, which means making comparisons over different bandwidths. That produces nonsense results, particularly if what you are interested in is which camera looks better when you take like-for-like photos with the two being compared.


Bob
