D600 High ISO in DX

Started Nov 23, 2012 | Questions thread
Leo360
Senior Member • Posts: 1,031
Re: Clarkvision.com analysis
In reply to bobn2, Nov 23, 2012

bobn2 wrote:

Leo360 wrote:

bobn2 wrote:

Leo360 wrote:

Leo360 wrote:

So far, from what I have seen on Bill Claff's charts, the D600 (DX mode) dynamic range outperforms the D7000 at all ISOs. I have no reason to think it will be any different with the D5200.

Leo

For those interested in the subject of sensor performance with all the gory details, please read the sensor section at clarkvision.com. A highly recommended read!

Only highly recommended if you want to end up getting all kinds of things wrong, which seems to be what you have done. In particular, the section on the effects of pixel size is extremely confused.


Bob

And how exactly is it confused?

Where to start? I've done this so many times. If I was sensible I would just keep a bookmark, but I'm not that organised. OK, going from the top:

Dynamic range is defined in this document and elsewhere on this site as:

  • Dynamic Range = Full Well Capacity (electrons) / Read Noise (electrons)
Dynamic range is always bandwidth dependent, so if you want to define DRs so as to compare one with another, you need to declare the bandwidth.
Agreed. One way to normalize to a common bandwidth is to resample the higher-bandwidth image to match the lower-bandwidth one. Hence, down-sampling.
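The bookkeeping behind that definition and the bandwidth point can be sketched in a few lines of Python. The full-well and read-noise figures below are made up for illustration, not measurements of any camera:

```python
import math

def dynamic_range_stops(full_well_e, read_noise_e):
    """DR per the definition above: full well / read noise, in stops."""
    return math.log2(full_well_e / read_noise_e)

# Hypothetical per-pixel figures (electrons):
dr_per_pixel = dynamic_range_stops(40000, 5)   # ~13 stops

# Down-sampling by averaging n pixels into one raises the effective
# full well by n while uncorrelated read noise grows only by sqrt(n),
# so per-output-pixel DR gains 0.5 * log2(n) stops:
n = 4  # e.g. 2x2 binning
dr_binned = dynamic_range_stops(40000 * n, 5 * math.sqrt(n))
```

With n = 4 the binned DR comes out exactly one stop higher, which is the bandwidth effect Bob is pointing at: the same hardware scores differently depending on the sampling rate at which you evaluate it.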

The larger pixel has the potential to collect more light.
Well, no, it doesn't, except in so far as it occupies more area; pixel size does not fundamentally affect the amount of light collected if the overall collection area is fixed.
Not if the sensors contain different numbers of pixels. Remember, we are comparing the same DX sensor size with 10 MP (D600 in DX crop) versus 24 MP (D5200/D3200). Given the same amount of light hitting the sensor, each D600 pixel will receive about 2.4 times the photon count of a D5200 pixel.
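A back-of-the-envelope sketch of that per-pixel difference (the photon total is arbitrary; only the ratio matters):

```python
# Same total light hits the same DX sensor area in both cases.
photons_on_sensor = 1e9   # arbitrary illustrative total

photons_per_pixel_10mp = photons_on_sensor / 10e6   # D600 in DX crop
photons_per_pixel_24mp = photons_on_sensor / 24e6   # D5200/D3200

ratio = photons_per_pixel_10mp / photons_per_pixel_24mp
print(f"A 10 MP pixel receives {ratio:.1f}x the photons of a 24 MP pixel")
```

Note that the total light collected by the sensor is identical in both cases; only its division among pixels changes, which is exactly Bob's point about fixed collection area.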

As pixels become smaller, signal level drops and read noise becomes more dominant, and that limits small pixel performance.
That is simply wrong, since input-referred read noise is inversely proportional to conversion gain, which is strongly linked to geometry (smaller = higher conversion gain).

Agreed. Then you should correct the Wikipedia page you quoted, which states exactly the opposite: it says that read noise "scales down" with pixel area (it should say "scales up"). The interplay between full-well capacity (which scales up with area) and conversion gain (which scales down) also affects dynamic range and noise performance.
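That conversion-gain argument is easy to sketch. All numbers below are hypothetical, chosen only to show the direction of the effect: a fixed amount of downstream voltage noise, referred back through the conversion gain, corresponds to fewer electrons when the gain is higher.

```python
# Illustrative only: figures are hypothetical, not measured camera data.
downstream_noise_uv = 250.0     # fixed downstream amplifier/ADC noise (uV)

cg_large = 30.0                 # conversion gain, uV per electron, big pixel
cg_small = 90.0                 # higher conversion gain, small pixel

# Input-referred read noise in electrons = voltage noise / conversion gain:
read_noise_large_e = downstream_noise_uv / cg_large   # ~8.3 e-
read_noise_small_e = downstream_noise_uv / cg_small   # ~2.8 e-
```

So, contrary to the quoted claim, the smaller pixel ends up with *lower* read noise in electrons for the same downstream electronics.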
The apparent image quality is given by the Full sensor Apparent Image Quality, or FSAIQ metric.
This is nonsense. The FSAIQ is something Roger made up; there is no evidence whatsoever that this metric has any connection to perceived 'apparent image quality'.
While the quantum efficiency of sensors in digital cameras has not really changed much, other factors that have improved include: fill factor (the fraction of a pixel that is sensitive to light), higher transmission of the filters over the sensor, better micro lenses, lower read noise, and lower fixed pattern noise.
This is contradictory: quantum efficiency is at least partially dependent on fill factor, so if fill factor has risen then so must QE, as it has. For instance, the 5DIII has about double the quantum efficiency of the original 5D, with the 5DII neatly at the midpoint between them.
I'm a bit bored with this now, but there follows loads of graphs each of which is a bit silly or misleading.
...Just as misleading. I counsel people against drawing any conclusions unless they have the background to see the flaws. Once again the examples compare cameras with radically different sensor sizes and then ascribe the observed differences to pixel size.

Moreover, Roger predicts that 5 µm is the optimum pixel pitch for any CMOS sensor. Bob, what is your take on that?

The overall point is this: Roger (and you) is obsessed with comparing things at the pixel sampling frequency, which means making comparisons over different bandwidths. That produces nonsense results, particularly if what you are interested in is which of two like photos, taken with the two cameras being compared, looks better.

I cannot speak for Roger, but I am NOT comparing across pixel sampling frequencies. On the contrary, I re-sample the higher-frequency image (think down-sampling) to a common sampling rate and then compare at the same reference frequency. And down-sampling, when performed properly, tends to improve SNR.
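That last point is easy to demonstrate with a quick numpy sketch on a flat synthetic frame with Gaussian noise (illustrative, not camera data): averaging 2x2 blocks of uncorrelated samples cuts the noise by sqrt(4) = 2, roughly doubling SNR at the lower sampling rate.

```python
import numpy as np

rng = np.random.default_rng(0)

signal = 100.0
# Synthetic flat frame: constant signal plus Gaussian noise, sigma = 10.
noisy = signal + rng.normal(0.0, 10.0, size=(1024, 1024))

# Down-sample by averaging 2x2 blocks (a simple proper resampling):
binned = noisy.reshape(512, 2, 512, 2).mean(axis=(1, 3))

snr_before = signal / noisy.std()
snr_after = signal / binned.std()
print(f"SNR before: {snr_before:.1f}, after 2x2 binning: {snr_after:.1f}")
```

This only holds when the noise is uncorrelated between pixels and the resampling is done properly (no sharpening or aliasing in the resample step), which is the caveat behind "when performed properly" above.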

Leo
