D600 High ISO in DX

Started Nov 23, 2012 | Questions thread
Leo360 Senior Member • Posts: 1,141
Re: Roger's optimum pixel pitch of 5um

noirdesir wrote:

Leo360 wrote:

I am comparing one 6x6um pixel to the aggregated output of four adjacent 3x3um pixels. Think of it this way. You have a 36MP FX sensor, but the marketing department demands a 9MP camera. Instead of creating a new sensor, the existing 36MP sensor is used and the readings from each cluster of 4 adjacent pixels are combined, producing a 9MP output. A competitor comes out with a true 9MP camera. The question is which one has better SNR and DR. Of course, this is a stupid example, but it illustrates what I mean by aggregation. This aggregation happens within one image. I am sorry, I do not understand what it has to do with the repeated observations you keep mentioning.

You really don't know the answer to that by now? Just compare the E-M5 sensor with the D600 sensor and you have your answer. Not quite a factor of four in total number of pixels, but it gets close to a factor of three:

- E-M5: 3.73 μm, QE 53%

- D600: 5.9 μm, QE 53%

Same QE, thus the same photon shot noise.

Yes, the photon shot noise per unit area is the same. The shot noise per single pixel reading is not (the pixels have different areas and therefore different photon counts). Again, Bill Claff's charts give the D600 a one-stop advantage in dynamic range over the E-M5.
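As a rough numerical sketch of that point (the per-area photon flux below is an assumed placeholder; the pitches and QE are the figures quoted above):

```python
import numpy as np

# Sketch: photon shot noise per pixel vs. per unit sensor area.
# Pixel pitches and QE are the figures quoted above; the photon flux is an
# assumed placeholder, not a measured exposure.
flux = 2000.0          # photons per um^2 falling on the sensor (assumed)
qe = 0.53              # same QE for both sensors

for name, pitch_um in [("E-M5", 3.73), ("D600", 5.9)]:
    area = pitch_um ** 2                # pixel area in um^2
    electrons = flux * area * qe        # mean signal per pixel
    shot_noise = np.sqrt(electrons)     # Poisson: std dev = sqrt(mean)
    print(f"{name}: {electrons:7.0f} e-/pixel, shot noise {shot_noise:5.1f} e-, "
          f"SNR {electrons / shot_noise:5.1f}")

# Per unit area the two are identical: any patch collects the same total
# electron count, so its sqrt(N) shot noise (and SNR) is the same.
patch = 37.3 ** 2                       # the 37.3 x 37.3 um patch discussed below
total = flux * patch * qe
print(f"37.3 x 37.3 um patch: SNR = {np.sqrt(total):.1f} for either sensor")
```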

As I illustrated in a previous post in this thread (http://forums.dpreview.com/forums/post/50338850), the E-M5 has a slightly lower read noise for a given sensor area (I compared it for a 37.3 x 37.3 μm area because that divides nicely into 100 E-M5 pixels and about 40 D600 pixels, but the same principle applies for any area you might choose).

This is a nice example.

Thus, an FX sensor made up of E-M5 pixels would have 62 MP; aggregating the output of such a sensor down to 24 MP would yield the same photon shot noise and an even slightly better read noise than the D600 sensor.

Is it the read-noise std. dev. that should be scaled with the pixel area, or the variance? In the latter case the read noise per common 37.3x37.3 μm uber-pixel would be the same (I used your summation formula but applied the scaling to the variances instead of the std. dev.). Also, you are neglecting the dark current, which gets stronger as more pixels are combined.
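To make the quadrature point concrete, here is a small sketch; the per-pixel read-noise and dark-current figures are made-up placeholders, not measurements of either camera:

```python
import numpy as np

# Sketch: aggregating four small pixels into one "uber-pixel".
# Independent read noises combine in quadrature: the variances add,
# not the standard deviations. All noise figures are assumed placeholders.
read_noise_small = 3.0   # e- rms per small pixel (assumed)
read_noise_big = 5.0     # e- rms per native large pixel (assumed)
n = 4                    # four small pixels cover one large pixel

aggregate_rms = np.sqrt(n * read_noise_small ** 2)   # = 6.0 e-, not 12.0 e-
print(f"aggregated read noise: {aggregate_rms:.1f} e- "
      f"(vs {read_noise_big:.1f} e- for the native large pixel)")

# Dark current accumulates per pixel, so the aggregate's dark shot noise
# also grows in quadrature with the number of readings combined.
dark_rate = 0.1          # e-/s per small pixel (assumed)
t_exp = 30.0             # exposure time in seconds (assumed)
dark_noise = np.sqrt(n * dark_rate * t_exp)
print(f"dark-current shot noise of the aggregate: {dark_noise:.2f} e-")
```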

If there is a specific reason why 2D Fourier analysis and the Petersen-Middleton (PM) theorem are not applicable in this particular case, please provide some more details or a link instead of just "swap space for time".

Just use your mathematical intuition. Yes, 2D (space) vs. 1D (time) might trip you up at first, but since almost all light is uncorrelated in space (the photons are created by random phenomena), any pixel array sampling light on a 2D grid is essentially collecting uncorrelated samples and can thus be treated as a 1D series of samples.

Square lattices are good (yet not optimal) samplers and should be all right for practical purposes. The optimal 2-D sampling lattice for isotropically (circularly) band-limited signals is actually hexagonal.
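For reference, the standard Petersen-Middleton density comparison for a circularly band-limited signal works out like this (nothing sensor-specific, just the lattice geometry):

```python
import math

# Petersen-Middleton density comparison for a circularly band-limited signal
# (radial cutoff B). A square lattice needs a sampling rate of 2B along each
# axis; a hexagonal lattice packs the spectral replicas more tightly and gets
# away with sqrt(3)/2 of that sample density.
B = 1.0                                  # radial band limit, arbitrary units
square_density = (2 * B) ** 2            # samples per unit area, square lattice
hex_density = (math.sqrt(3) / 2) * square_density

print(f"square lattice: {square_density:.3f} samples/unit area")
print(f"hex lattice   : {hex_density:.3f} samples/unit area")
print(f"hexagonal sampling saves about "
      f"{100 * (1 - hex_density / square_density):.1f}% of the samples")
```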

Thanks, Bob. I also find Roger's analysis flaky. But seriously speaking, there should be some sort of bound on how small a pixel can become and what its optimum size is for a given technology like CCD or CMOS. It cannot be smaller than the wavelength, I guess. :-)

For current technology, certainly, as optical elements (like microlenses) will show strongly wavelength-dependent behaviour once they get down to the size of the wavelength. But we already have 1 μm pixels in phone cameras, and that is still quite a long way to go for FX cameras. We would be getting close to a gigapixel (GP) sensor with that.

A mind-boggling 864 MP FX sensor!
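The arithmetic behind that number, for a 36 x 24 mm FX frame and the pitches mentioned in this thread:

```python
# Pixel counts for a 36 x 24 mm FX frame at the pitches mentioned in this thread.
fx_area_um2 = 36_000 * 24_000            # sensor area in um^2

for pitch_um in (5.9, 3.73, 1.0):
    mpix = fx_area_um2 / pitch_um ** 2 / 1e6
    print(f"{pitch_um:4.2f} um pitch -> {mpix:6.1f} MP")
# 5.90 um ->  24.8 MP (D600-class)
# 3.73 um ->  62.1 MP (E-M5 pixels on an FX frame)
# 1.00 um -> 864.0 MP
```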

Leo
