D600 High ISO in DX

Started Nov 23, 2012 | Questions
Leo360 Senior Member • Posts: 1,141
Roger's optimum pixel pitch of 5um

bobn2 wrote:

Leo360 wrote:

The resolution of a digital image depends (among other things) on your sampling rate (Nyquist theorem) and pixel size has A LOT to do with it.

Yes, but you were talking of the resolution of a pixel. There is no 'resolution' at a pixel level, only when you look at areas containing many pixels.

I think you misunderstood what I said. Resolution of a single sample makes no sense and I never talked that way.

With proper down-sampling (bicubic, etc.) to the same level of detail, one can hope to recover the SNR by effectively combining the outputs of multiple smaller pixels into an aggregate one, but doing so does not entirely compensate for the read-noise increase.

The 'downsampling' argument is a red herring. All that is required is to look at the images produced the same size.

It is not a red herring. Producing images at the same size with the same dpi means that we have to resample the output of the hi-res camera to match the sampling rate of the low-res camera. Otherwise your prints will be of different sizes (the dpi is the same, right?)

That is untrue. You do not resample the output of a high res camera to match the sampling rate of a low res camera. Generally you resample it to match the sampling rate demanded by the output image. You will generally resample all cameras so.

For a fair comparison we have to establish a common bandwidth, i.e. common sampling rate. Hence, resampling. For the sake of argument let this common rate be the sampling rate of the lower-res camera (D600 in DX). I resample the D3200 image down to match it.

Correct me if I am wrong, but I think that you will have to jump through several hoops to match the SNR of a 4x4 um pixel to that of a 6x6 um one.

You cannot measure SNR in a single pixel in a single photograph. Now think on the implications of that.

-- hide signature --

Bob

Did I say anything about a single photograph? To measure pixel-reading statistics, one shoots repeatedly in a controlled environment with constant light, exposure, etc. You get the mean, variance, probability distribution, and so on in due course of the statistical analysis.
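A minimal numpy sketch of that procedure (the photon count and read noise below are made-up values, not measurements):

    import numpy as np

    rng = np.random.default_rng(1)
    frames = 200                          # identical exposures (assumed)
    mean_photons, read_noise = 1000, 5.0  # hypothetical per-pixel values

    # One pixel's raw readings across many frames: Poisson photon count
    # plus Gaussian read noise.
    readings = rng.poisson(mean_photons, frames) + rng.normal(0.0, read_noise, frames)

    print("sample mean    :", readings.mean())
    print("sample variance:", readings.var(ddof=1))
    # Expected variance ~ mean_photons + read_noise**2 = 1025.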

This now becomes thoroughly artificial: you make multiple observations of the same pixel over multiple exposures. It is not even guaranteed that the noise so observed would match the noise observed in the spatial domain. A thoroughly artificial result of no interest.

-- hide signature --

Bob

Bob, you brought up "single pixel in a single photograph", not me. I just replied with the textbook definition of how one samples stochastic processes.

Anyway, Roger claims that 5um is the optimum pixel pitch for a CMOS sensor. Should one take it seriously? If yes, then we have a winner -- the D800.

Leo

 Leo360's gear list:Leo360's gear list
Nikon D5100 Nikon D750 Nikon AF-S DX Nikkor 18-200mm f/3.5-5.6G ED VR II Nikon AF-S Nikkor 24-120mm F4G ED VR Nikon AF-S Nikkor 50mm f/1.8G
bobn2
bobn2 Forum Pro • Posts: 62,183
Re: Clarkvision.com analysis
1

Leo360 wrote:

bobn2 wrote:

Leo360 wrote:

bobn2 wrote:

Leo360 wrote:

Leo360 wrote:

From what I have seen on Bill Claff's charts, the D600 (DX mode) dynamic range outperforms the D7000 at all ISOs. I have no reason to think it will be any different with the D5200.

Leo

For those interested in the subject of sensor performance with all the gory details, please read the sensor section at clarkvision.com. Highly recommended read!

Only highly recommended if you want to end up getting all kinds of stuff wrong. Which seems to be what you have done. In particular, the section on the effects of pixel size is extremely confused.

And how exactly is it confused?

Where to start? I've done this so many times. If I was sensible I would just keep a bookmark, but I'm not that organised. OK, going from the top:

Dynamic range is defined in this document and elsewhere on this site as:

  • Dynamic Range = Full Well Capacity (electrons) / Read Noise (electrons)
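As a quick illustration of that definition (toy numbers, not measured values):

    import math

    full_well, read_noise = 60_000, 6.0        # electrons; illustrative only
    dr = full_well / read_noise
    print(f"DR = {dr:.0f}:1, i.e. {math.log2(dr):.1f} stops")  # ~13.3 stops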

Moreover, Roger predicts that 5um is the optimum pixel pitch for any CMOS sensor. Bob, what is your take on it?

That is one of his most amusing results. He predicts that 5μm is the optimum pitch simply because he decided that was what he would do. Here is the definition of FSAIQ:

FSAIQ = StoN18 * MPix / 20.0 = sqrt(0.18*Full well electrons) * Mpix / 20.0,

where StoN18 is the signal-to-noise delivered by the sensor on an 18% gray target, assuming a 100% reflective target just saturates the sensor, and Mpix is the number of megapixels. StoN18 is computed from pixel performance before Bayer de-mosaicing: indicative of the true performance of each pixel.
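Plugging the quoted definition into a few lines makes the point below concrete; the full-well and Mpix values are hypothetical, with full well assumed to scale in proportion to pixel area:

    import math

    def fsaiq(full_well_electrons, mpix):
        ston18 = math.sqrt(0.18 * full_well_electrons)  # SNR on 18% gray
        return ston18 * mpix / 20.0

    # Same sensor area and QE, different pixel counts:
    print(fsaiq(60_000, 24))  # larger pixels  -> ~125
    print(fsaiq(24_000, 60))  # smaller pixels -> ~197; more pixels scores higher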


So, FSAIQ is given by pixel size times QE (Full well electrons) times pixel count, which is sensor area times QE. Then Roger says proudly

The model closely predicts performance for all modern cameras

Not surprising, since all the model says is that the bigger the area and the higher the QE the better.

However, that result doesn't suit Roger, who goes on to fudge the FSAIQ curves by including 'diffraction'. The FSAIQ as defined above has no element for 'diffraction', but Roger puts it in the curve anyway. He does that by limiting the Mpix figure according to a 'diffraction limit' (another phenomenon that does not exist), chosen at arbitrary f-numbers so as to give the 5 micron peak. It is a completely bogus result, imposed simply because it was the result he believed in, and he fudged the figures to produce it. Utter nonsense.

The overall point is this: Roger (and you) is obsessed with comparing things at the pixel sampling frequency, which means making comparisons over different bandwidths. That produces nonsense results, particularly if what you are interested in is which looks better when you take like photos from the two cameras being compared.

I cannot speak for Roger but I am NOT comparing across pixel sampling frequencies.

Yes you are. You might not think so, but by seeking to compare DR or SNR 'at the pixel level' that is exactly what you are doing. Pixel size determines sampling frequency, which determines recorded bandwidth but not observed bandwidth.

On the contrary, I am re-sampling the higher freq. image (think down-sampling) to a common sampling rate and then comparing at the same reference frequency. And down-sampling (when performed properly) tends to improve SNR.

Down-sampling is unnecessary; all that is necessary is to observe the images at the same size. For instance, an A3 print is 18MP on a Canon 300 ppi printer. On an Epson 360 ppi printer it is 25MP. So if I took, for instance, a 1DX image, I could print it at A3 on a Canon without significant resampling. I could do the same with a D600 on an Epson. If I compared the prints, the noise would be substantially the same, without any resampling having happened, simply because it is the viewing size and the acuity of the viewer's eye that determine the viewing bandwidth, so long as the output device pixels are below the limit of acuity.
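The A3 arithmetic, spelled out (A3 is 297 x 420 mm):

    MM_PER_INCH = 25.4
    w_in, h_in = 420 / MM_PER_INCH, 297 / MM_PER_INCH   # A3 in inches

    for name, ppi in (("Canon", 300), ("Epson", 360)):
        mp = (w_in * ppi) * (h_in * ppi) / 1e6
        print(f"{name} at {ppi} ppi: {mp:.1f} MP")
    # -> ~17.4 MP at 300 ppi and ~25.1 MP at 360 ppi, matching the
    #    18 MP / 25 MP figures above.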

-- hide signature --

Bob

bobn2
bobn2 Forum Pro • Posts: 62,183
Re: Roger's optimum pixel pitch of 5um
1

Leo360 wrote:

bobn2 wrote:

Leo360 wrote:

The resolution of a digital image depends (among other things) on your sampling rate (Nyquist theorem) and pixel size has A LOT to do with it.

Yes, but you were talking of the resolution of a pixel. There is no 'resolution' at a pixel level, only when you look at areas containing many pixels.

I think you misunderstood what I said. Resolution of a single sample makes no sense and I never talked that way.

You talked thusly:

'This is why pixel peeping reveals more noise-per-pixel for smaller photosites. The price to pay is reduced resolution.'

So, to evaluate noise you are looking 'per pixel'. Now, when you talk about 'resolution' you are no longer talking 'per pixel', or are you? You can't include 'per pixel' in the one and not be talking 'per pixel' with respect to the other. (Apart from, of course, as we learn below, there being no noise 'per pixel'.)

For a fair comparison we have to establish a common bandwidth, i.e. common sampling rate.

Not 'i.e. common sampling rate'. You don't have to resample to observe over a common bandwidth, you just observe the same size, or normalise to the same bandwidth. It is about bandwidth, not resampling.

Hence, resampling. For the sake of argument let this common rate be the sampling rate of the lower-res camera (D600 in DX). I resample the D3200 image down to match it.

Correct me if I am wrong, but I think that you will have to jump through several hoops to match the SNR of a 4x4 um pixel to that of a 6x6 um one.

You cannot measure SNR in a single pixel in a single photograph. Now think on the implications of that.

-- hide signature --

Bob

Did I say anything about a single photograph? To measure pixel-reading statistics, one shoots repeatedly in a controlled environment with constant light, exposure, etc. You get the mean, variance, probability distribution, and so on in due course of the statistical analysis.

This now becomes thoroughly artificial: you make multiple observations of the same pixel over multiple exposures. It is not even guaranteed that the noise so observed would match the noise observed in the spatial domain. A thoroughly artificial result of no interest.

-- hide signature --

Bob

Bob, you brought up "single pixel in a single photograph", not me.

You brought it up. You talked about matching a 4x4 micron pixel to a 6 by 6 one. You are comparing the SNR in a single pixel with another one, and that makes no sense. When I point that out, you propose that you shoot repeatedly and observe the same pixel over repeated exposures. All that makes sense is comparing noise over equal areas of equal-sized output images, not comparing a 4x4 pixel with a 6x6 one. Making multiple observations over time has nothing to do with it.

I just replied with the textbook definition of how one samples stochastic processes.

Without understanding. Most textbooks are talking about variation in the temporal dimension, since they are talking about time varying signals. In photography we have a spatially varying signal, and you didn't spot that you just need to swap space for time.

Anyway, Roger claims that 5um is the optimum pixel pitch for a CMOS sensor. Should one take it seriously? If yes, then we have a winner -- the D800.

I don't take nonsense seriously, even if by chance it gives a sensible result. The reason the D800 is the tops is because it has the smallest pixels of an FF camera, not because they happen to be 5 micron.

-- hide signature --

Bob

Leo360 Senior Member • Posts: 1,141
Re: Clarkvision.com analysis

bobn2 wrote:

Leo360 wrote:

I cannot speak for Roger but I am NOT comparing across pixel sampling frequencies.

Yes you are. You might not think so, but by seeking to compare DR or SNR 'at the pixel level' that is exactly what you are doing. Pixel size determines sampling frequency, which determines recorded bandwidth but not observed bandwidth.

No, I defer SNR and DR comparisons to after the re-sampling and do not compare them for naked pixels. And down-sampling done properly tends to increase the SNR by rejecting higher frequency noise. See below.

On the contrary, I am re-sampling the higher freq. image (think down-sampling) to a common sampling rate and then comparing at the same reference frequency. And down-sampling (when performed properly) tends to improve SNR.

Down-sampling is unnecessary; all that is necessary is to observe the images at the same size. For instance, an A3 print is 18MP on a Canon 300 ppi printer. On an Epson 360 ppi printer it is 25MP. So if I took, for instance, a 1DX image, I could print it at A3 on a Canon without significant resampling. I could do the same with a D600 on an Epson. If I compared the prints, the noise would be substantially the same, without any resampling having happened, simply because it is the viewing size and the acuity of the viewer's eye that determine the viewing bandwidth, so long as the output device pixels are below the limit of acuity.

-- hide signature --

Bob

I want to keep viewing acuity out of it because it is yet another low-pass filter and the problem is already complicated enough.

I want to compare signal and noise per sample (trying to avoid the word 'pixel') when sampling on both cameras is normalized to the same bandwidth (the same sampling rate). The sensor size is the same as well. IIRC, this is what you proposed yourself in one of your earlier replies in this thread. If I know the signal and noise properties of samples at a different sampling rate (different pixel pitch) AND I know the frequency response of the down-sampler, I can calculate the effective SNR and DR after re-sampling to a lower common rate. This is a standard technique in multirate signal processing. Doing so establishes the same effective bandwidth for both images and allows one to compare effective SNR per sample, apples-to-apples. This method allows for automatic, DxOMark-style comparisons without bringing a human eye into the picture.
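A sketch of the calculation I have in mind, assuming white noise and a flat signal (the kernel below is an arbitrary example of an anti-alias filter): for a linear down-sampling filter h, the signal scales by sum(h) while the noise power scales by sum(h^2).

    import numpy as np

    h = np.array([1.0, 3.0, 3.0, 1.0])   # assumed anti-alias kernel
    h /= h.sum()                          # unity DC gain

    # SNR gain for white input noise after filtering:
    snr_gain = h.sum() / np.sqrt(np.sum(h ** 2))
    print(f"SNR improvement: {snr_gain:.2f}x")   # ~1.79x for this kernel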

Leo

 Leo360's gear list:Leo360's gear list
Nikon D5100 Nikon D750 Nikon AF-S DX Nikkor 18-200mm f/3.5-5.6G ED VR II Nikon AF-S Nikkor 24-120mm F4G ED VR Nikon AF-S Nikkor 50mm f/1.8G
bobn2
bobn2 Forum Pro • Posts: 62,183
Re: Clarkvision.com analysis
1

Leo360 wrote:

bobn2 wrote:

Leo360 wrote:

I cannot speak for Roger but I am NOT comparing across pixel sampling frequencies.

Yes you are. You might not think so, but by seeking to compare DR or SNR 'at the pixel level' that is exactly what you are doing. Pixel size determines sampling frequency, which determines recorded bandwidth but not observed bandwidth.

No, I defer SNR and DR comparisons to after the re-sampling and do not compare them for naked pixels. And down-sampling done properly tends to increase the SNR by rejecting higher frequency noise. See below.

On the contrary, I am re-sampling the higher freq. image (think down-sampling) to a common sampling rate and then comparing at the same reference frequency. And down-sampling (when performed properly) tends to improve SNR.

Down-sampling is unnecessary; all that is necessary is to observe the images at the same size. For instance, an A3 print is 18MP on a Canon 300 ppi printer. On an Epson 360 ppi printer it is 25MP. So if I took, for instance, a 1DX image, I could print it at A3 on a Canon without significant resampling. I could do the same with a D600 on an Epson. If I compared the prints, the noise would be substantially the same, without any resampling having happened, simply because it is the viewing size and the acuity of the viewer's eye that determine the viewing bandwidth, so long as the output device pixels are below the limit of acuity.

I want to keep viewing acuity out of it because it is yet another low-pass filter and the problem is already complicated enough.

I want to compare signal and noise per sample (trying to avoid the word 'pixel') when sampling on both cameras is normalized to the same bandwidth (the same sampling rate). The sensor size is the same as well. IIRC, this is what you proposed yourself in one of your earlier replies in this thread. If I know the signal and noise properties of samples at a different sampling rate (different pixel pitch) AND I know the frequency response of the down-sampler, I can calculate the effective SNR and DR after re-sampling to a lower common rate. This is a standard technique in multirate signal processing. Doing so establishes the same effective bandwidth for both images and allows one to compare effective SNR per sample, apples-to-apples. This method allows for automatic, DxOMark-style comparisons without bringing a human eye into the picture.

Leo

I think that you are getting so far away from what you originally said that we are now discussing on shifting sand. So long as you don't want to make 'per pixel' comparisons, I won't argue with that. However, trying to take the 'human eye' out of photography is rather futile (what would the point be?). What we are trying to do when we decide which objective quantitative measurements to make is to choose those which allow us fairly readily to make some predictions about what a human eye would see. In a real viewing situation the bandwidth is, or should be, limited by the acuity of the viewer's eye. How we choose to simulate that limited bandwidth when we make quantitative comparisons is a methodological issue. For instance, DxO does not resample to obtain their normalised results; they make a calculation based on a normalised output resolution.

-- hide signature --

Bob

JimPearce
JimPearce Veteran Member • Posts: 9,201
Nikon says otherwise...

It's a different sensor.

-- hide signature --

Jim

 JimPearce's gear list:JimPearce's gear list
Nikon D7100 Nikon D500
Leo360 Senior Member • Posts: 1,141
Re: Roger's optimum pixel pitch of 5um

bobn2 wrote:

Leo360 wrote:

bobn2 wrote:

Leo360 wrote:

The resolution of a digital image depends (among other things) on your sampling rate (Nyquist theorem) and pixel size has A LOT to do with it.

Yes, but you were talking of the resolution of a pixel. There is no 'resolution' at a pixel level, only when you look at areas containing many pixels.

I think you misunderstood what I said. Resolution of a single sample makes no sense and I never talked that way.

You talked thusly:

'This is why pixel peeping reveals more noise-per-pixel for smaller photosites. The price to pay is reduced resolution.'

"Pixel peeping" is a sarcastic expression I use when comparison is done across different bandwidth and, thus, is apple-to-oranges. Now I understand, that my wording was taken for its face value which is unfortunate. Everything I wrote before or after is an attempt to transform signal to a common Nyquist sampling rate and then compare sample-per-sample which I think will be more like apple-to-apple comparison.

For a fair comparison we have to establish a common bandwidth, i.e. common sampling rate.

Not 'i.e. common sampling rate'. You don't have to resample to observe over a common bandwidth, you just observe the same size, or normalise to the same bandwidth. It is about bandwidth, not resampling.

I am sorry, I want to keep the human observer out of it. The sampling rate determines the reproducible bandwidth. This is true for multi-dimensional signals as well. Re-sampling at a lower rate is a method of modifying a signal's spectral density. For example, down-sampling (aka decimation) is used to cut off higher frequencies to obtain a lower-resolution signal. (Again, all of the above is not limited to 1D signals.) The moment one brings both signals to a common bandwidth, one can compare their signal-to-noise characteristics.

You brought it up. You talked about matching a 4x4 micron pixel to a 6 by 6 one. You are comparing the SNR in a single pixel with another one, and that makes no sense. When I point that out, you propose that you shoot repeatedly and observe the same pixel over repeated exposures. All that makes sense is comparing noise over equal areas of equal-sized output images, not comparing a 4x4 pixel with a 6x6 one. Making multiple observations over time has nothing to do with it.

I am comparing one 6x6um pixel to the aggregated output of four 3x3um adjacent pixels. Think of it this way: you have a 36MP FX sensor, but the marketing department demands a 9MP camera. Instead of creating a new sensor, the existing 36MP sensor is taken and the readings from each cluster of 4 adjacent pixels are combined, producing a 9MP output. A competitor comes out with a true 9MP camera. The question is: which one has better SNR and DR? Of course, this is a stupid example, but it illustrates what I mean by aggregation. This aggregation happens in one image. I am sorry, I do not understand what any of it has to do with the repeated observations you keep mentioning.
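A quick numpy sketch of exactly this aggregation (the photon and read-noise numbers are made up for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    trials = 100_000
    photons_big = 4000                  # photons hitting one big pixel (assumed)
    read_big, read_small = 5.0, 4.0     # per-pixel read noise in e- (assumed)

    big = rng.poisson(photons_big, trials) + rng.normal(0, read_big, trials)
    small = (rng.poisson(photons_big / 4, (trials, 4))
             + rng.normal(0, read_small, (trials, 4)))
    binned = small.sum(axis=1)          # 4-pixel cluster -> one aggregate sample

    print("SNR, true big pixel:", big.mean() / big.std())
    print("SNR, 2x2 aggregate :", binned.mean() / binned.std())
    # Shot noise matches (same photon total); the aggregate's read noise is
    # sqrt(4)*4 = 8 e-, so it loses slightly unless the small pixels read
    # at 2.5 e- or better.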

Or, if one wants to reconstruct a 6x6 sample out of 4x4 ones, a re-sampling rate of 2/3 can be used, thanks to the PM theorem (Nyquist for more than one dimension).

I just replied with the textbook definition of how one samples stochastic processes.

Without understanding. Most textbooks are talking about variation in the temporal dimension, since they are talking about time varying signals. In photography we have a spatially varying signal, and you didn't spot that you just need to swap space for time.

Well, I do not think that going personal is that cute. You don't know who I am or what my area of expertise is, which happens to be stochastic signal processing, including multi-dimensional random fields (though not in imaging).

If there is a specific reason why 2D Fourier analysis and the Petersen-Middleton (PM) theorem are not applicable in this case, please provide some more details or a link instead of just "swap space for time".

Absent any other specific info, I see no reason why I cannot re-sample an image with a higher Nyquist frequency to one with a lower Nyquist frequency by means of the PM theorem (a direct generalization of Shannon-Nyquist). And there are well-known ways to calculate the SNR after such a linear transformation.

Anyway, Roger claims that 5um is the optimum pixel pitch for a CMOS sensor. Should one take it seriously? If yes, then we have a winner -- the D800.

I don't take nonsense seriously, even if by chance it gives a sensible result. The reason the D800 is the tops is because it has the smallest pixels of an FF camera, not because they happen to be 5 micron.

-- hide signature --

Bob

Thanks, Bob. I also find Roger's analysis flaky. But seriously speaking, there should be some sort of bound on how small a pixel can become and on its optimum size for a given technology like CCD or CMOS. It cannot be smaller than the wavelength, I guess. :-)

Leo

 Leo360's gear list:Leo360's gear list
Nikon D5100 Nikon D750 Nikon AF-S DX Nikkor 18-200mm f/3.5-5.6G ED VR II Nikon AF-S Nikkor 24-120mm F4G ED VR Nikon AF-S Nikkor 50mm f/1.8G
John Motts Veteran Member • Posts: 5,738
Re: Nikon says otherwise...

Well, if the OP wasn't confused when he first asked the question, I'm pretty sure he is now!

bobn2
bobn2 Forum Pro • Posts: 62,183
Re: Roger's optimum pixel pitch of 5um

Not going to bother with most of this, because you are beginning to feel that the discussion is 'going personal', and it will only get more so, so just to skip to the end...

Leo360 wrote:

bobn2 wrote:

Leo360 wrote:

bobn2 wrote:

Leo360 wrote:

The resolution of a digital image depends (among other things) on your sampling rate (Nyquist theorem) and pixel size has A LOT to do with it.

Yes, but you were talking of the resolution of a pixel. There is no 'resolution' at a pixel level, only when you look at areas containing many pixels.

I think you misunderstood what I said. Resolution of a single sample makes no sense and I never talked that way.

You talked thusly:

'This is why pixel peeping reveals more noise-per-pixel for smaller photosites. The price to pay is reduced resolution.'

"Pixel peeping" is a sarcastic expression I use when comparison is done across different bandwidth and, thus, is apple-to-oranges. Now I understand, that my wording was taken for its face value which is unfortunate. Everything I wrote before or after is an attempt to transform signal to a common Nyquist sampling rate and then compare sample-per-sample which I think will be more like apple-to-apple comparison.

For a fair comparison we have to establish a common bandwidth, i.e. common sampling rate.

Not 'i.e. common sampling rate'. You don't have to resample to observe over a common bandwidth, you just observe the same size, or normalise to the same bandwidth. It is about bandwidth, not resampling.

I am sorry, I want to keep the human observer out of it. The sampling rate determines the reproducible bandwidth. This is true for multi-dimensional signals as well. Re-sampling at a lower rate is a method of modifying a signal's spectral density. For example, down-sampling (aka decimation) is used to cut off higher frequencies to obtain a lower-resolution signal. (Again, all of the above is not limited to 1D signals.) The moment one brings both signals to a common bandwidth, one can compare their signal-to-noise characteristics.

You brought it up. You talked about matching a 4x4 micron pixel to a 6 by 6 one. You are comparing the SNR in a single pixel with another one, and that makes no sense. When I point that out, you propose that you shoot repeatedly and observe the same pixel over repeated exposures. All that makes sense is comparing noise over equal areas of equal-sized output images, not comparing a 4x4 pixel with a 6x6 one. Making multiple observations over time has nothing to do with it.

I am comparing one 6x6um pixel to the aggregated output of four 3x3um adjacent pixels. Think of it this way: you have a 36MP FX sensor, but the marketing department demands a 9MP camera. Instead of creating a new sensor, the existing 36MP sensor is taken and the readings from each cluster of 4 adjacent pixels are combined, producing a 9MP output. A competitor comes out with a true 9MP camera. The question is: which one has better SNR and DR? Of course, this is a stupid example, but it illustrates what I mean by aggregation. This aggregation happens in one image. I am sorry, I do not understand what any of it has to do with the repeated observations you keep mentioning.

Or, if one wants to reconstruct a 6x6 sample out of 4x4 ones, a re-sampling rate of 2/3 can be used, thanks to the PM theorem (Nyquist for more than one dimension).

I just replied with the textbook definition of how one samples stochastic processes.

Without understanding. Most textbooks are talking about variation in the temporal dimension, since they are talking about time varying signals. In photography we have a spatially varying signal, and you didn't spot that you just need to swap space for time.

Well, I do not think that going personal is that cute. You don't know who I am or what my area of expertise is, which happens to be stochastic signal processing, including multi-dimensional random fields (though not in imaging).

If there is a specific reason why 2D Fourier analysis and the Petersen-Middleton (PM) theorem are not applicable in this case, please provide some more details or a link instead of just "swap space for time".

Absent any other specific info, I see no reason why I cannot re-sample an image with a higher Nyquist frequency to one with a lower Nyquist frequency by means of the PM theorem (a direct generalization of Shannon-Nyquist). And there are well-known ways to calculate the SNR after such a linear transformation.

Anyway, Roger claims that 5um is the optimum pixel pitch for a CMOS sensor. Should one take it seriously? If yes, then we have a winner -- the D800.

I don't take nonsense seriously, even if by chance it gives a sensible result. The reason the D800 is the tops is because it has the smallest pixels of an FF camera, not because they happen to be 5 micron.

Thanks, Bob. I also find Roger's analysis flaky.

Then why did you thoroughly recommend it? That kind of flakiness pervades the whole thing.

But seriously speaking, there should be some sort of bound on how small a pixel can become and on its optimum size for a given technology like CCD or CMOS. It cannot be smaller than the wavelength, I guess. :-)

Leo

I don't think there is a bound, but as they become smaller, you start to use them differently. For instance, Eric Fossum's speculative work on deep sub-diffraction pixels.

-- hide signature --

Bob

Leo360 Senior Member • Posts: 1,141
Re: Roger's optimum pixel pitch of 5um

bobn2 wrote:

Not going to bother with most of this, because you are beginning to feel that the discussion is 'going personal', and it will only get more so, so just to skip to the end...

Bob, with all due respect. First you are using mildly insulting language, and when the other party gets offended, you use this fact as an excuse to dodge the hard questions. I find this conversation very illuminating (and a nice break from D600 oil/dust). If you have time, please have a second look.

Leo

 Leo360's gear list:Leo360's gear list
Nikon D5100 Nikon D750 Nikon AF-S DX Nikkor 18-200mm f/3.5-5.6G ED VR II Nikon AF-S Nikkor 24-120mm F4G ED VR Nikon AF-S Nikkor 50mm f/1.8G
Leo360 Senior Member • Posts: 1,141
Re: Nikon says otherwise...

JimPearce wrote:

It's a different sensor.

Thanks! I did not know that Nikon is using two different 24MP DX sensors in their entry-level cameras.

Leo

 Leo360's gear list:Leo360's gear list
Nikon D5100 Nikon D750 Nikon AF-S DX Nikkor 18-200mm f/3.5-5.6G ED VR II Nikon AF-S Nikkor 24-120mm F4G ED VR Nikon AF-S Nikkor 50mm f/1.8G
noirdesir Forum Pro • Posts: 13,632
Re: Roger's optimum pixel pitch of 5um
2

Leo360 wrote:

I am comparing one 6x6um pixel to the aggregated output of four 3x3um adjacent pixels. Think of it this way: you have a 36MP FX sensor, but the marketing department demands a 9MP camera. Instead of creating a new sensor, the existing 36MP sensor is taken and the readings from each cluster of 4 adjacent pixels are combined, producing a 9MP output. A competitor comes out with a true 9MP camera. The question is: which one has better SNR and DR? Of course, this is a stupid example, but it illustrates what I mean by aggregation. This aggregation happens in one image. I am sorry, I do not understand what any of it has to do with the repeated observations you keep mentioning.

You really don't know the answer to that by now? Just compare the E-M5 sensor with the D600 sensor and you have your answer. Not quite a factor of four in total number of pixels but it gets close to a factor of three:

- E-M5: 3.73 μm, QE 53%

- D600: 5.9 μm, QE 53%

Same QE, thus the same photon shot noise. As I illustrated in a previous post in this thread (http://forums.dpreview.com/forums/post/50338850), the E-M5 has a slightly lower read noise for a given sensor area (I compared it for a 37.3 x 37.3 μm area because that divides nicely into 100 E-M5 pixels and 40 D600 pixels, but the same principle applies for any area you might choose).

Thus, an FX camera made up of E-M5 pixels would have 62 MP; aggregating the output of such a sensor down to 24 MP would yield the same photon shot noise and even slightly better read noise than the D600 sensor.
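The quadrature arithmetic behind that claim, with placeholder read-noise values rather than measured ones:

    import math

    # Independent per-pixel read noise adds in quadrature over an area.
    def area_read_noise(per_pixel_e, n_pixels):
        return per_pixel_e * math.sqrt(n_pixels)

    print(area_read_noise(3.0, 100))  # 100 small pixels: 30.0 e- per area
    print(area_read_noise(5.0, 40))   # 40 large pixels : ~31.6 e- per area
    # Smaller pixels can come out ahead per unit area if their individual
    # read noise is low enough.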

If there is a specific reason why 2D Fourier analysis and the Petersen-Middleton (PM) theorem are not applicable in this case, please provide some more details or a link instead of just "swap space for time".

Just use your mathematical intuition. Yes, 2D (space) vs. 1D (time) might trip you up at first, but since almost all light is uncorrelated in space (meaning the photons are created by random phenomena), any pixel array sampling light on a 2D grid is essentially taking uncorrelated samples and thus can be seen as a 1D series of samples.

Thanks, Bob. I also find Roger's analysis flaky. But seriously speaking, there should be some sort of bound on how small a pixel can become and on its optimum size for a given technology like CCD or CMOS. It cannot be smaller than the wavelength, I guess. :-)

For current technology, certainly, as optical elements (like microlenses) will show strong wavelength-dependent behaviour once they get down to the size of the wavelength. But we already have 1 μm pixels in phone cameras; that is still quite a long way to go for FX cameras. We would be getting close to a GP sensor with that.

Leo360 Senior Member • Posts: 1,141
Re: Roger's optimum pixel pitch of 5um

noirdesir wrote:

Leo360 wrote:

I am comparing one 6x6um pixel to the aggregated output of four 3x3um adjacent pixels. Think of it this way: you have a 36MP FX sensor, but the marketing department demands a 9MP camera. Instead of creating a new sensor, the existing 36MP sensor is taken and the readings from each cluster of 4 adjacent pixels are combined, producing a 9MP output. A competitor comes out with a true 9MP camera. The question is: which one has better SNR and DR? Of course, this is a stupid example, but it illustrates what I mean by aggregation. This aggregation happens in one image. I am sorry, I do not understand what any of it has to do with the repeated observations you keep mentioning.

You really don't know the answer to that by now? Just compare the E-M5 sensor with the D600 sensor and you have your answer. Not quite a factor of four in total number of pixels but it gets close to a factor of three:

- E-M5: 3.73 μm, QE 53%

- D600: 5.9 μm, QE 53%

Same QE, thus the same photon shot noise.

Yes, the photon shot noise per unit area is the same. The shot noise per single pixel reading is not (the pixels have different areas and therefore different photon counts). Again, Bill Claff's charts give the D600 a one-stop advantage in dynamic range over the E-M5.

As I illustrated in a previous post in this thread (http://forums.dpreview.com/forums/post/50338850), the E-M5 has a slightly lower read noise for a given sensor area (I compared it for a 37.3 x 37.3 μm area because that divides nicely into 100 E-M5 pixels and 40 D600 pixels, but the same principle applies for any area you might choose).

This is a nice example.

Thus, an FX camera made up of E-M5 pixels would have 62 MP; aggregating the output of such a sensor down to 24 MP would yield the same photon shot noise and even slightly better read noise than the D600 sensor.

Is it the read-noise std. dev. that should scale with the pixel area, or the variance? In the latter case the read noise per common 37.3x37.3um uber-pixel would be the same (I used your summation formula but applied the scaling to the variances instead of the std. dev.). Also, you are neglecting the dark current, which gets stronger as more pixels are combined.

If there is a specific reason why 2D Fourier analysis and the Petersen-Middleton (PM) theorem are not applicable in this case, please provide some more details or a link instead of just "swap space for time".

Just use your mathematical intuition. Yes, 2D (space) vs. 1D (time) might trip you up at first, but since almost all light is uncorrelated in space (meaning the photons are created by random phenomena), any pixel array sampling light on a 2D grid is essentially taking uncorrelated samples and thus can be seen as a 1D series of samples.

Square lattices are good (yet not optimal) samplers and should be all right for practical purposes. The optimal 2-D sampler for uncorrelated signals is actually a hexagonal lattice.

Thanks, Bob. I also find Roger's analysis flaky. But seriously speaking, there should be some sort of bound on how small a pixel can become and on its optimum size for a given technology like CCD or CMOS. It cannot be smaller than the wavelength, I guess. :-)

For current technology, certainly, as optical elements (like microlenses) will show strong wavelength-dependent behaviour once they get down to the size of the wavelength. But we already have 1 μm pixels in phone cameras; that is still quite a long way to go for FX cameras. We would be getting close to a GP sensor with that.

A mind-boggling 864MP FX sensor!

Leo

 Leo360's gear list:Leo360's gear list
Nikon D5100 Nikon D750 Nikon AF-S DX Nikkor 18-200mm f/3.5-5.6G ED VR II Nikon AF-S Nikkor 24-120mm F4G ED VR Nikon AF-S Nikkor 50mm f/1.8G
noirdesir Forum Pro • Posts: 13,632
Re: Roger's optimum pixel pitch of 5um
1

Leo360 wrote:

noirdesir wrote:

Leo360 wrote:

I am comparing one 6x6um pixel to the aggregated output of four 3x3um adjacent pixels. Think of it this way: you have a 36MP FX sensor, but the marketing department demands a 9MP camera. Instead of creating a new sensor, the existing 36MP sensor is taken and the readings from each cluster of 4 adjacent pixels are combined, producing a 9MP output. A competitor comes out with a true 9MP camera. The question is: which one has better SNR and DR? Of course, this is a stupid example, but it illustrates what I mean by aggregation. This aggregation happens in one image. I am sorry, I do not understand what any of it has to do with the repeated observations you keep mentioning.

You really don't know the answer to that by now? Just compare the E-M5 sensor with the D600 sensor and you have your answer. Not quite a factor of four in total number of pixels but it gets close to a factor of three:

- E-M5: 3.73 μm, QE 53%

- D600: 5.9 μm, QE 53%

Same QE, thus the same photon shot noise.

Yes, the photon shot noise per unit area is the same. The shot noise per single pixel reading is not (the pixels have different areas and therefore different photon counts). Again, Bill Claff's charts give the D600 a one-stop advantage in dynamic range over the E-M5.

A single pixel does not have any noise. Display a single pixel fullscreen on your monitor and tell me whether you can see any noise in it. And we are comparing here different pixel sizes for the same sensor area. The D600 and the E-M5 don't have the same sensor area (the former is four times as large as the latter). On top of that, the E-M5 sensor does not keep its read noise quite as low down to base ISO as the D600, but that is seen as an ADC problem, not one of the pixel.

Thus, an FX camera made up of E-M5 pixels would have 62 MP; aggregating the output of such a sensor down to 24 MP would yield the same photon shot noise and even slightly better read noise than the D600 sensor.

Is it the read-noise std. dev. that should scale with the pixel area, or the variance? In the latter case the read noise per common 37.3x37.3um uber-pixel would be the same (I used your summation formula but applied the scaling to the variances instead of the std. dev.). Also, you are neglecting the dark current, which gets stronger as more pixels are combined.

As far as I understand it, the std. dev. is the thing that should scale with the area. And dark current is a second-order effect for the pixel sizes we are discussing here.

Leo360 Senior Member • Posts: 1,141
Re: Roger's optimum pixel pitch of 5um

noirdesir wrote:

A single pixel does not have any noise. Display a single pixel fullscreen on your monitor and tell me whether you can see any noise in it.

Saying that a single pixel does not have any noise in it is not quite accurate. There is a mean number of photons that is supposed to hit this pixel under a given exposure. But the actual number of photons registered will differ from the mean by the photon shot noise (Poisson distribution of the photon count). On top of it, it is contaminated by read noise, which makes the photon-to-electron conversion noisy. So the value of this pixel reading will not be exactly right. In your experiment with a single pixel filling the whole screen, you will have a solid color (no spatial variation) but with the intensity slightly off. I think what you are trying to say is that one cannot observe spatial noise variability (from one pixel to another) when measuring only a single pixel.
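A quick numpy illustration of the Poisson point (the exposure level is an assumed value):

    import numpy as np

    rng = np.random.default_rng(2)
    mean_photons = 900                      # assumed mean arrival count

    draws = rng.poisson(mean_photons, 1_000_000)
    print("std of counts :", draws.std())                 # ~ sqrt(900) = 30
    print("per-pixel SNR :", draws.mean() / draws.std())  # ~ 30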

As far as I understand it, the std. dev. is the thing that should scale with the area. And dark current is a second-order effect for the pixel sizes we are discussing here.

I have to think about it more, for it sounds counter-intuitive at first glance. With Gaussian random processes, the variance is the parameter that scales and combines. I am not sure that it is applicable here, though.
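A quick numerical check of which quantity combines, assuming independent Gaussian read noise: summing N independent draws adds variances, so the std. dev. grows as sqrt(N), not N.

    import numpy as np

    rng = np.random.default_rng(3)
    read_noise, n_pixels = 4.0, 16

    summed = rng.normal(0.0, read_noise, (1_000_000, n_pixels)).sum(axis=1)
    print("std of sum :", summed.std())   # ~ 4 * sqrt(16) = 16
    print("variance   :", summed.var())   # ~ 16 * 4**2   = 256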

Leo

 Leo360's gear list:Leo360's gear list
Nikon D5100 Nikon D750 Nikon AF-S DX Nikkor 18-200mm f/3.5-5.6G ED VR II Nikon AF-S Nikkor 24-120mm F4G ED VR Nikon AF-S Nikkor 50mm f/1.8G
nizar ghosn
nizar ghosn Regular Member • Posts: 328
Re: D600 High ISO in DX

3D Gunner wrote:

That depends on your point of view. The relation of noise to signal at the pixel level is the same in FX and DX.

... absolutely right ...!!!

-- hide signature --
 nizar ghosn's gear list:nizar ghosn's gear list
Fujifilm X-T1
John Motts Veteran Member • Posts: 5,738
Re: D600 High ISO in DX

nizar ghosn wrote:

3D Gunner wrote:

That depends on your point of view. The relation of noise to signal at the pixel level is the same in FX and DX.

... absolutely right ...!!!

May be absolutely right, but it's meaningless in practice.

Leif Goodwin Senior Member • Posts: 1,390
Re: pixel pitch and SNR
1

Leo360 wrote:

bobn2 wrote:

Leo360 wrote:

There are two different things here. There is photon count per pixel and number of photons collected per unit area.

Those are indeed two different things.

The latter does not depend on the pixel pitch but the former does. And the SNR per pixel gets larger with more photons collected by that pixel (the relative photon shot noise gets weaker). For the same exposure, larger pixels capture more photons and thus have higher SNR. This is why pixel peeping reveals more noise-per-pixel for smaller photosites.

Indeed, but that is of little relevance to what we are actually trying to do in photography, which is to make a picture that we can look at.

The price to pay is reduced resolution.

The resolution is identical if you look at individual pixels, because a pixel just describes the value of the light where it is. There is only 'resolution' when you look at an area, and if you want to compare 'resolution' it makes sense to compare the same area (or equivalent areas when magnified to the size of the final image). So, the bottom line is that 'resolution' makes no sense at the pixel level, and nor, in terms of image quality, does the SNR.

The resolution of a digital image depends (among other things) on your sampling rate (Nyquist theorem) and pixel size has A LOT to do with it.

With proper down-sampling (bicubic, etc.) to the same level of detail, one can hope to recover the SNR by effectively combining the outputs of multiple smaller pixels into an aggregate one, but doing so does not entirely compensate for the read-noise increase.

The 'downsampling' argument is a red herring. All that is required is to look at the images produced the same size.

It is not a red herring. Producing images at the same size with the same dpi means that we have to resample the output of the hi-res camera to match the sampling rate of the low-res camera. Otherwise your prints will be of different sizes (the dpi is the same, right?)

From the point of view of someone who wants to take a photograph, all that matters is the noise at a given output (print) size. Empirically, it would appear that noise is roughly independent of pixel density and depends only on sensor size. In other words, the noise per unit area is roughly constant. Empirically, a difference is seen at high ISO (above 1600), where high-pixel-density sensors seem to fall apart. Presumably this is because some components of noise are independent of pixel size, and hence increasing the number of pixels increases that component as a proportion of the total noise.
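A toy model of that last point (all numbers assumed): the per-area shot noise is fixed by the light, while a fixed per-pixel read-noise component grows with the pixel count N covering the same area.

    import math

    def area_snr(photons_per_area, read_e, n_pixels):
        noise = math.sqrt(photons_per_area + n_pixels * read_e ** 2)
        return photons_per_area / noise

    for n in (16, 64, 256):
        print(n, round(area_snr(2000, 3.0, n), 1))
    # SNR per area: 43.2, 39.4, 30.5 -- the read-noise share grows with
    # pixel count, and matters most when photons are few (high ISO).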

-- hide signature --

______________________________
Warning: this forum may contain nuts.

 Leif Goodwin's gear list:Leif Goodwin's gear list
Nikon D200 Nikon D600 Nikon AF-S Nikkor 300mm f/4D ED-IF Nikon AF-S Micro-Nikkor 60mm F2.8G ED Nikon AF Micro-Nikkor 60mm f/2.8D +4 more
bobn2
bobn2 Forum Pro • Posts: 62,183
Re: Roger's optimum pixel pitch of 5um
2

Leo360 wrote:

bobn2 wrote:

Not going to bother with most of this, because you are beginning to feel that the discussion is 'going personal', and it will only get more so, so just to skip to the end...

Bob, with all due respect. First you are using mildly insulting language,

I used no 'mildly insulting language'. All I said was that you had applied the standard textbook response without understanding. Sorry, that is what you did.

and when the other party gets offended, you use this fact as an excuse to dodge the hard questions.

I have dodged nothing. I have no interest in answering the (not so) hard questions if the other party decides to take offence at the discussion, especially since they had said earlier 'I think you misunderstood what I said', which is by your own standards 'mildly insulting language'. Have the discussion on open terms or not at all.

I find this conversation very illuminating (and a nice break from D600 oil/dust). If you have time, please, have a second look.

Maybe I'll come back to your post, but for the while I'm going to engage in the continuing discussion, which might be more productive.

-- hide signature --

Bob

bobn2
bobn2 Forum Pro • Posts: 62,183
Re: Roger's optimum pixel pitch of 5um
1

Leo360 wrote:

noirdesir wrote:

A single pixel does not have any noise. Display a single pixel fullscreen on your monitor and tell me whether you can see any noise in it.

Saying that a single pixel does not have any noise in it is not quite accurate.

It is exactly accurate.

There is a mean number of photons that is supposed to hit this pixel under a given exposure.

No there is not. The process of photons hitting pixels is itself a random process. The noise is in the light, not the pixel. Even if the pixel had 100% efficiency and zero read noise, which means that it would count every photon that struck it, there would still be noise. Nothing in nature dictates that there is a mean number that 'should' hit a pixel. In fact, nature dictates precisely that the incidence of photons is randomised.

But the actual number of photons registered will differ from the mean by the photon shot noise (Poisson distribution of the photon count).

The number of photons registered by it should correspond to the number of photons which hit it. Shot noise is not a matter of pixels incorrectly counting the number of photons (that is read noise).

On top of it, it is contaminated by read noise, which makes the photon-to-electron conversion noisy. So the value of this pixel reading will not be exactly right.

Read noise is not 'on top of it'. Read noise is why the value of the pixel reading will not be exactly right.

In your experiment with a single pixel filling the whole screen, you will have a solid color (no spatial variation) but with the intensity slightly off. I think what you are trying to say is that one cannot observe spatial noise variability (from one pixel to another) when measuring only a single pixel.

Temporal noise variability (i.e. from one frame to the next) is of no interest to the still photographer. It is the spatial variability (or at least the integration of spatial and temporal variability over the exposure time) that we are interested in.

As far as I understand it, the std. dev. is the thing that should scale with the area. And dark current is a second-order effect for the pixel sizes we are discussing here.

I have to think about it more, for it sounds counter-intuitive at first glance. With Gaussian random processes, the variance is the parameter that scales and combines. I am not sure that it is applicable here, though.

The random process is occurring in the light, not the pixel.

-- hide signature --

Bob
