# D600 High ISO in DX

Started Nov 23, 2012 | Questions thread
Re: Roger's optimum pixel pitch of 5um

Leo360 wrote:

I am comparing one 6x6um pixel to an aggregated output of four 3x3um adjacent pixels. Think of it in this way. You have a 36MP FX sensor but the marketing department demands a 9MP camera. instead of creating the new sensor an existing 36MP sensor is taken and the readings from a cluster of 4 adjacent pixels are combined together producing a 9MP output. A competitor comes with a true 9MP camera. The question is which one has better SNR and DR? Of course, this is a stupid example but it illustrates what I mean by aggregation. This aggregation happens in one image. I am sorry, I do not understand what it all has to do with repeated observations you keep mentioning.

You really don't know the answer to that by now? Just compare the E-M5 sensor with the D600 sensor and you have your answer. Not quite a factor of four in total number of pixels but it gets close to a factor of three:

- E-M5: 3.73 μm, QE 53%

- D600: 5.9 μm, QE 53%

Same QE, thus the same photon shot noise. As I illustrated in a previous post in this thread (http://forums.dpreview.com/forums/post/50338850), the E-M5 has a slightly lower read noise for a given sensor area (I compared it for a 37.3 x 37.3 μm area because that divides nicely into 100 E-M5 pixels and 62 D600 pixels, but the same principle applies for any area you might choose).

Thus, an FX camera made up of E-M5 pixels would have 62 MP; aggregating the output of such a sensor down to 24 MP would yield the same photon shot noise and even slightly better read noise than the D600 sensor.
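A quick Monte Carlo sketch of the point about aggregation: summing four small pixels covering the same area as one big pixel preserves the photon shot noise (Poisson counts add), while the read noise of the small pixels adds in quadrature. The photon count and read-noise figures below are hypothetical illustration values, not measured E-M5/D600 data.

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 200_000
mean_photons = 400        # photons hitting the whole 6x6 um area (assumed flux)
read_noise_big = 3.0      # e- RMS for the big pixel (hypothetical)
read_noise_small = 1.4    # e- RMS per small pixel (hypothetical)

# One big pixel: Poisson shot noise plus a single dose of read noise
big = rng.poisson(mean_photons, trials) + rng.normal(0, read_noise_big, trials)

# Four small pixels covering the same area, summed after readout:
# shot noise is identical (Poisson sums), read noise adds in quadrature
small = (rng.poisson(mean_photons / 4, (trials, 4)).sum(axis=1)
         + rng.normal(0, read_noise_small, (trials, 4)).sum(axis=1))

print("big pixel SNR:   ", big.mean() / big.std())
print("binned 2x2 SNR:  ", small.mean() / small.std())
```

With these numbers the binned quad's effective read noise is sqrt(4) x 1.4 ≈ 2.8 e-, slightly below the big pixel's 3.0 e-, so the aggregated output is marginally cleaner, matching the comparison above.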

If there is a specific reason why 2D Fourier analysis and the Petersen-Middleton (PM) theorem are not applicable in this particular case, please provide some more details or a link instead of just "swap space for time".

Just use your mathematical intuition. Yes, 2D (space) vs. 1D (time) might trip you up at first, but since almost all light is uncorrelated in space (the photons are created by random phenomena), any pixel array sampling light on a 2D grid is essentially collecting uncorrelated samples and can thus be treated as a 1D series of samples.
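The "2D grid is just a 1D stream of uncorrelated samples" claim is easy to check numerically: generate a 2D grid of independent Poisson photon counts, read it out as a flat 1D sequence, and verify the sequence shows no correlation between neighbouring samples. The grid size and mean count are arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(1)
grid = rng.poisson(100, size=(256, 256)).astype(float)  # 2D "sensor" of i.i.d. photon counts
series = grid.ravel()                                   # read out as a 1D sample stream

x = series - series.mean()
lag1 = (x[:-1] * x[1:]).mean() / x.var()  # autocorrelation at lag 1
print("lag-1 autocorrelation:", lag1)     # ~0: the 1D stream is as uncorrelated as the 2D grid
```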

Thanks, Bob. I also find Roger's analysis flaky. But seriously speaking, there should be some sort of bound on how small a pixel can become, and an optimum size for a given technology like CCD or CMOS. It cannot be smaller than the wavelength, I guess. :-)

For current technology, certainly: optical elements (like microlenses) will show strongly wavelength-dependent behaviour once they get down to the size of the wavelength. But we already have 1 μm pixels in phone cameras, and that is still quite a long way off for FX cameras. We would be getting close to a gigapixel (GP) sensor at that pitch.
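The "close to a GP sensor" remark follows from simple arithmetic: at a 1 μm pitch, each pixel occupies about 1 μm², so a full-frame (36 x 24 mm) sensor would hold on the order of 864 MP.

```python
# Back-of-envelope: how many 1 um pixels fit on an FX (36 x 24 mm) sensor?
width_um, height_um = 36_000, 24_000   # FX dimensions in micrometres
pixels = width_um * height_um          # 1 um pitch -> one pixel per um^2
print(pixels / 1e6, "MP")              # 864.0 MP, approaching a gigapixel
```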
