D600 High ISO in DX

Started Nov 23, 2012 | Questions thread
Leo360 Senior Member • Posts: 1,141
Re: Roger's optimum pixel pitch of 5um

bobn2 wrote:

Leo360 wrote:

bobn2 wrote:

Leo360 wrote:

The resolution of a digital image depends (among other things) on your sampling rate (Nyquist theorem) and pixel size has A LOT to do with it.

Yes, but you were talking of the resolution of a pixel. There is no 'resolution' at a pixel level, only when you look at areas containing many pixels.

I think you misunderstood what I said. Resolution of a single sample makes no sense and I never talked that way.

You talked thusly:

'This is why pixel peeping reveals more noise-per-pixel for smaller photosites. The price to pay is reduced resolution.'

"Pixel peeping" is a sarcastic expression I use when comparison is done across different bandwidth and, thus, is apple-to-oranges. Now I understand, that my wording was taken for its face value which is unfortunate. Everything I wrote before or after is an attempt to transform signal to a common Nyquist sampling rate and then compare sample-per-sample which I think will be more like apple-to-apple comparison.

For a fair comparison we have to establish a common bandwidth, i.e. a common sampling rate.

Not 'i.e. common sampling rate'. You don't have to resample to observe over a common bandwidth; you just observe at the same size, or normalise to the same bandwidth. It is about bandwidth, not resampling.

I am sorry, but I want to keep the human observer out of it. The sampling rate determines the reproducible bandwidth, and this is true for multi-dimensional signals as well. Re-sampling at a lower rate is a way to modify a signal's spectral density. For example, down-sampling (aka decimation) is used to cut off higher frequencies and obtain a lower-resolution signal. (Again, all of the above is not limited to 1-D signals.) The moment one brings both signals to a common bandwidth, one can compare their signal-to-noise characteristics.
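As a rough illustration of what I mean by bringing signals to a common bandwidth before comparing them, here is a minimal Python sketch (my own, not anything from a real camera pipeline); the decimation factor of 2, the flat-field test image, and the use of scipy.signal.decimate are assumptions purely for the example.

```python
import numpy as np
from scipy.signal import decimate

def to_common_bandwidth(fine_img, factor):
    """Low-pass filter and down-sample a 2-D image by an integer factor
    along each axis, so it can be compared sample-by-sample with a
    natively coarser-sampled image of the same scene."""
    out = decimate(fine_img, factor, axis=0, zero_phase=True)
    out = decimate(out, factor, axis=1, zero_phase=True)
    return out

# Hypothetical example: a finely sampled noisy "flat field"
rng = np.random.default_rng(0)
fine = 1000.0 + rng.normal(0.0, 30.0, size=(512, 512))

coarse_equiv = to_common_bandwidth(fine, 2)   # now 256x256
print("per-sample noise std before:", fine.std())
print("per-sample noise std after :", coarse_equiv.std())
```

The point of the sketch is only that the per-sample noise statistics change once the higher-resolution signal has been band-limited to the coarser grid, which is why comparing raw per-pixel numbers across different pitches is misleading.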

You brought it up. You talked about matching a 4x4 micron pixel to a 6x6 micron one. You are comparing the SNR of a single pixel with that of another, and that makes no sense. When I point that out, you propose that you shoot repeatedly and observe the same pixel over repeated observations. All that makes sense is comparing noise over equal areas of equal-sized output images, not comparing a 4x4 pixel with a 6x6 one. Making multiple observations over time has nothing to do with it.

I am comparing one 6x6um pixel to the aggregated output of four adjacent 3x3um pixels. Think of it this way. You have a 36MP FX sensor, but the marketing department demands a 9MP camera. Instead of creating a new sensor, the existing 36MP sensor is used and the readings from each cluster of 4 adjacent pixels are combined, producing a 9MP output. A competitor comes out with a true 9MP camera. The question is: which one has better SNR and DR? Of course, this is a stupid example, but it illustrates what I mean by aggregation. The aggregation happens within one image. I am sorry, I do not understand what any of this has to do with the repeated observations you keep mentioning.
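To make the aggregation concrete, here is a small Monte-Carlo sketch of the comparison I have in mind. The shot-noise-plus-read-noise model and the specific numbers (signal level, read noise) are my own assumptions for illustration, not measurements of any real sensor.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 200_000

signal_per_big_pixel = 4000.0   # assumed mean photoelectrons in one 6x6um pixel
read_noise_e = 3.0              # assumed read noise (e-) per read, same for both

# One "big" pixel: one Poisson draw plus one read.
big = rng.poisson(signal_per_big_pixel, n_trials) \
      + rng.normal(0.0, read_noise_e, n_trials)

# Four "small" pixels covering the same area: each collects 1/4 of the light
# and contributes its own read noise; their outputs are summed (aggregated).
small = rng.poisson(signal_per_big_pixel / 4.0, (n_trials, 4)) \
        + rng.normal(0.0, read_noise_e, (n_trials, 4))
aggregated = small.sum(axis=1)

for name, x in [("one 6x6um pixel", big),
                ("sum of four 3x3um pixels", aggregated)]:
    print(f"{name}: SNR = {x.mean() / x.std():.1f}")
```

Under these assumptions the shot-noise contribution is identical (same total light over the same area), and the difference comes from the four read-noise terms adding in quadrature in the aggregated case.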

Or, if one wants to reconstruct 6x6 samples out of 4x4 ones, a re-sampling rate of 2/3 can be used, thanks to the PM theorem (Nyquist for more than one dimension).
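A sketch of that rational 2/3 re-sampling, assuming a separable per-axis implementation with scipy.signal.resample_poly (my choice of tool for the example, not something prescribed by the theorem):

```python
import numpy as np
from scipy.signal import resample_poly

def resample_2_3(img):
    """Re-sample a 2-D image to 2/3 of its original sampling rate along
    each axis (e.g. re-expressing a 4um-pitch grid on a 6um-pitch grid),
    using a polyphase anti-aliasing filter per axis."""
    out = resample_poly(img, up=2, down=3, axis=0)
    out = resample_poly(out, up=2, down=3, axis=1)
    return out

rng = np.random.default_rng(2)
fine = rng.normal(1000.0, 30.0, size=(600, 600))   # hypothetical 4um-pitch data
coarse = resample_2_3(fine)
print(fine.shape, "->", coarse.shape)               # (600, 600) -> (400, 400)
```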

I just replied with the textbook definition of how one samples stochastic processes.

Without understanding. Most textbooks talk about variation in the temporal dimension, since they deal with time-varying signals. In photography we have a spatially varying signal, and you didn't spot that you just need to swap space for time.

Well, I do not think that getting personal is that cute. You don't know who I am or what my area of expertise is, which happens to be stochastic signal processing, including multi-dimensional random fields (though not in imaging).

If there is a specific reason why 2-D Fourier analysis and the Petersen-Middleton (PM) theorem are not applicable in this particular case, please provide some more details or a link instead of just "swap space for time".

Absent any other specific information, I have no reason to doubt that I can re-sample an image with a higher Nyquist frequency to one with a lower Nyquist frequency by means of the PM theorem (a direct generalization of Shannon-Nyquist). And there are well-known ways to calculate the SNR after such a linear transformation.
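For what it's worth, here is the kind of calculation I mean, as a minimal sketch under the usual assumption of independent, identically distributed noise per sample: for a linear combination y = sum_k h_k * x_k, the noise variance scales by sum_k h_k^2. The filter taps and noise level below are made up for the illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
h = np.array([0.25, 0.5, 0.25])          # hypothetical resampling/filter taps
sigma = 10.0                             # per-sample noise std before filtering

# Analytic prediction: var_out = var_in * sum(h_k^2)
predicted_std = sigma * np.sqrt(np.sum(h**2))

# Monte-Carlo check on pure noise
x = rng.normal(0.0, sigma, size=(100_000, h.size))
y = x @ h
print("predicted output std:", predicted_std)
print("measured  output std:", y.std())
```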

Anyway, Roger claims that 5um is the optimum pixel pitch for a CMOS sensor. Should one take it seriously? If yes, then we have a winner -- the D800.

I don't take nonsense seriously, even if by chance it gives a sensible result. The reason the D800 is tops is that it has the smallest pixels of any FF camera, not because they happen to be 5 microns.


Bob

Thanks, Bob. I also find Roger's analysis flaky. But seriously speaking, there should be some sort of bound on how small a pixel can become, and some optimum size for a given technology like CCD or CMOS. It cannot be smaller than the wavelength, I guess. :-)

Leo
