more pixels are better!

Started Dec 14, 2008 | Discussions
ejmartin Veteran Member • Posts: 6,274
Re: SNR vs exposure

Graystar wrote:

bobn2 wrote:

I didn't say depends solely on sensor size and efficiency - that
would be silly. The floor of the DR is the read noise. Double that
between cameras (and it varies much more than that) and you have your
stop in DR.

So you're saying what...that the LX3 doesn't do any better in
highlights than the G10? It's all shadow detail?

I would say that the LX3 is better in both shadow detail and SNR in highlights. The LX3 full well capacity is higher, and its read noise is lower, even when normalized to a per area basis.
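To make the per-area point concrete, here is a minimal sketch in Python using made-up figures (not actual LX3/G10 measurements): it compares two hypothetical sensors over the same patch of silicon, where the saturation signal adds linearly with pixel count while read noise adds in quadrature.

```python
import math

# Made-up figures, NOT real LX3/G10 data -- the point is the normalization.
patch_um2 = 1000.0                                   # compare over the same sensor area
cameras = {
    "camera A": {"pitch_um": 2.0, "full_well_e": 8000.0, "read_noise_e": 4.0},
    "camera B": {"pitch_um": 1.7, "full_well_e": 5000.0, "read_noise_e": 2.5},
}

for name, c in cameras.items():
    pixels = patch_um2 / c["pitch_um"] ** 2          # pixels covering the patch
    sat = c["full_well_e"] * pixels                  # saturation signal adds linearly
    floor = c["read_noise_e"] * math.sqrt(pixels)    # read noise adds in quadrature
    print(name, "area-normalized DR:", round(math.log2(sat / floor), 2), "stops")
```

With these assumed numbers the smaller-pixel camera B comes out ahead over equal area, despite the smaller per-pixel full well.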

John Sheehy Forum Pro • Posts: 27,202
Re: SNR vs exposure

Graystar wrote:

So you're saying what...that the LX3 doesn't do any better in
highlights than the G10? It's all shadow detail?

When true, that's just another way of saying that the ISO "sensitivity" is stated conservatively for the camera.

The same photosites, with the same readout/ADC circuitry, could be used in two different cameras, one metered at a base ISO of 80 with very little highlight headroom but lots of footroom, and another at ISO 200 with lots of headroom and not much footroom. They would both have the same DR.
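As a rough numerical sketch of that point (assumed figures for one hypothetical sensor, not any particular camera): the hardware fixes the total DR, and the ISO calibration merely chooses where metered middle gray sits inside it, trading headroom for footroom.

```python
import math

# One hypothetical sensor behind two ISO calibrations (assumed figures).
full_well = 20_000.0   # electrons at saturation
read_noise = 5.0       # electrons
print("engineering DR:", round(math.log2(full_well / read_noise), 1), "stops")

# The ISO rating decides what fraction of full well metered middle gray uses.
for label, gray_fraction in [("'ISO 80' metering", 0.25), ("'ISO 200' metering", 0.10)]:
    gray = gray_fraction * full_well
    headroom = math.log2(full_well / gray)    # highlight stops above middle gray
    footroom = math.log2(gray / read_noise)   # usable shadow stops below it
    print(label, "headroom:", round(headroom, 1), "footroom:", round(footroom, 1),
          "total:", round(headroom + footroom, 1))
```

The totals come out identical, which is the point: the two calibrations only split the same dynamic range differently.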


John

ejmartin Veteran Member • Posts: 6,274
Re: Are those figures for "print" or "screen"? (nt)

Les Olson wrote:

I disagree. For any quantity the correct approach is to compare
first the actual value. Adjusted, normalised, or standardised values
can be compared IF they serve a purpose, and it does not matter
whether you are talking about "seasonally adjusted unemployment" or
normalised SNR. The trap, as in this case, is to use normalisations
that encode ideas about the right answer.
--

Except that comparing values of a quantity X that depends on another quantity Y at two different values of Y is quite silly, unless one knows (as is the case here) how to translate a measured value X(Y1) into its value X(Y2) for the purpose of comparing to another measurement of X at Y2.

Furthermore, it is not as though the noise at a fixed scale cannot be measured. It is merely that it is more convenient to measure the noise at the pixel level and use well established properties of pixel noise to infer its value at any other scale.

ejmartin Veteran Member • Posts: 6,274
Re: Are those figures for "print" or "screen"? (nt)

ejmartin wrote:

Les Olson wrote:

I disagree. For any quantity the correct approach is to compare
first the actual value. Adjusted, normalised, or standardised values
can be compared IF they serve a purpose, and it does not matter
whether you are talking about "seasonally adjusted unemployment" or
normalised SNR. The trap, as in this case, is to use normalisations
that encode ideas about the right answer.
--

Except that comparing values of a quantity X that depends on another
quantity Y at two different values of Y is quite silly, unless one
knows (as is the case here) how to translate a measured value X(Y1)
into its value X(Y2) for the purpose of comparing to another
measurement of X at Y2.

Furthermore, it is not as though the noise at a fixed scale cannot be
measured. It is merely that it is more convenient to measure the
noise at the pixel level and use well established properties of pixel
noise to infer its value at any other scale.

Just to amplify on this a bit, the noise power as a function of spatial scale (frequency in lines/picture height -- finer scales to the right, coarser scales to the left) looks like this for a 40D (red points) and 50D (blue points):

[graph not reproduced: noise power spectra of the 40D and 50D]
They are the same for all practical purposes at any given spatial scale. The std dev of noise per pixel, however, is proportional to the square root of the area under the curve, and so is higher for the 50D than the 40D, simply because the 50D has greater resolution (image data at higher spatial frequencies), not because it has more noise at any fixed spatial scale. Measurements of pixel-level noise that do not take into account the scale dependence of noise are a naive and misleading presentation of noise characteristics.
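The relation between the per-pixel std dev and the area under the power curve is just Parseval's theorem, and is easy to check on synthetic white noise (a sketch, not real camera data):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(0.0, 2.0, size=(512, 512))  # synthetic uniform patch: pure noise

F = np.fft.fft2(img - img.mean())
power = np.abs(F) ** 2 / img.size ** 2       # normalized 2D noise power spectrum

# Parseval: the pixel variance equals the noise power summed over all spatial
# frequencies, so std dev = sqrt(area under the power spectrum).
print(np.var(img), power.sum())              # the two numbers agree
```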

Graystar Veteran Member • Posts: 8,373
Re: SNR vs exposure

ejmartin wrote:

I would say that the LX3 is better in both shadow detail and SNR in
highlights. The LX3 full well capacity is higher, and its read noise
is lower, even when normalized to a per area basis.

Well capacity, eh? That sounds a lot like you're saying DR is affected by pixel size.

But Bob says...

bobn2 wrote:

That is wrong, there is no evidence supporting the 'DR depends on
pixels size' argument, because it is physically incorrect.

Maybe you guys can get together and work it out and get back to me with the right answer.

ejmartin Veteran Member • Posts: 6,274
Re: SNR vs exposure

Graystar wrote:

ejmartin wrote:

I would say that the LX3 is better in both shadow detail and SNR in
highlights. The LX3 full well capacity is higher, and its read noise
is lower, even when normalized to a per area basis.

Well capacity, eh? That sounds a lot like you're saying DR is affected
by pixel size.

Please note the phrase "when normalized to a per area basis". If you don't like the phrase "well capacity", feel free to substitute "electron density at saturation", which doesn't refer to individual pixels.

But Bob says...

bobn2 wrote:

That is wrong, there is no evidence supporting the 'DR depends on
pixels size' argument, because it is physically incorrect.

Maybe you guys can get together and work it out and get back to me
with the right answer.

Nothing needs reconciling, since there is no disagreement. When DR is normalized to a per area basis, it is largely independent of pixel size for a given technology.

It so happens in this case the LX3 has better technology.

Any reason for the snarky tone?

Les Olson Senior Member • Posts: 2,081
Re: Are those figures for "print" or "screen"? (nt)

I do not follow what you mean by "pixel noise" in this context. A single pixel can only have its SNR estimated over time, by taking lots of output values from a series of separate exposures and calculating their mean and SD. That is interesting, but a lot of work, so the usual approach is to take output values from lots of pixels all exposed at once and calculate the mean and SD of their outputs. This is what DxO do and those are the values I worked with.

The estimated SNR cannot vary with the number of exposures, or pixels, you use to estimate it, once you have enough to be confident that the sample mean and SD are close to the population mean and SD. The mean and SD of the outputs (ie, SNR) of a suitably large sample of pixels (or exposures for a single pixel) must have a very low probability of being different from (ie, for practical purposes must be the same as) the mean and SD of the outputs (ie, SNR) of all the pixels (or all the exposures the camera will ever make for a single pixel).

In summary, unless the Central Limit Theorem has stopped applying chez vous, the idea that you could have an "SNR per sensor area" is wrong (it is the same as saying that your camera's SNR increases every time you make an exposure because the total amount of light it has received increases - which is not to be confused with saying that SNR increases when you prolong a single exposure because the amount of light received increases).

In any case, there is an alternative: calculate the ratio of the SNR at 10% reflectance to the SNR at 1% reflectance, which takes account of anything there is to take account of. The multiple regression using pixel pitch and camera date as predictors of this ratio is practically identical to the regression using them as predictors of the component SNRs.
--
2 November 1975.

'... But just as I possess history,
it possesses me; I am illuminated by it:
but what use is the light?'

ejmartin Veteran Member • Posts: 6,274
Re: Are those figures for "print" or "screen"? (nt)

Les Olson wrote:

I do not follow what you mean by "pixel noise" in this context.

Std dev of pixel values in a uniform tonality region in an image.

The estimated SNR cannot vary with the number of exposures, or
pixels, you use to estimate it, once you have enough to be confident
that the sample mean and SD are close to the population mean and SD.
The mean and SD of the outputs (ie, SNR) of a suitably large sample
of pixels (or exposures for a single pixel) must have a very low
probability of being different from (ie, for practical purposes must
be the same as) the mean and SD of the outputs (ie, SNR) of all the
pixels (or all the exposures the camera will ever make for a single
pixel).

In summary, unless the Central Limit Theorem has stopped applying
chez vous the idea that you could have an "SNR per sensor area" is
wrong (it is the same as saying that your camera's SNR increases
every time you make an exposure because the total amount of light it
has received increases - which is not to be confused with saying that
SNR increases when you prolong a single exposure because the amount
of light received increases).

Noise is properly characterized not by a single number; it has a spectral power distribution as a function of spatial frequency. It is therefore important, when comparing images from cameras with different pixel counts, to take into account that one has image data out to higher spatial frequencies than the other.

For instance, in the graph I presented above, the Nyquist frequency of the 40D was at 209 on the horizontal axis, while the Nyquist frequency of the 50D was at 256. The noise power spectra are clearly the same, out to the limit of resolution of the 40D; the 50D noise spectrum then continues out to the limit of its resolution.

The std dev of uniform tonality patches is a particular average over the power distribution -- the square root of the area under the power distribution curve. Because the 50D has more resolution, and only because of that, the std dev of noise in uniform tonality patches is higher for the 50D.

One way to see that the std dev is measuring a scale dependent quantity is to resample the 50D image to the dimensions of the 40D. This throws away all the image information at fine scales, and it also throws away the noise power at those scales. A good downsampling algorithm (eg Lanczos) quite faithfully reproduces the noise power spectrum of the 40D from the 50D after downsampling, and the std devs are quite close. This is not because downsampling has reduced the noise at any particular spatial frequency -- the noise power at fixed spatial frequency differs very little after downsampling -- rather it is because downsampling has removed some frequencies from the spectrum altogether, and the noise power at those frequencies has been removed along with it.
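Here is a quick numerical version of that experiment, on synthetic white noise rather than real 50D files (a sketch; exact numbers depend on the resampling filter). Downsampling to 3/4 linear size discards the top band of spatial frequencies, so an ideal low-pass would keep (3/4)^2 of the white-noise power, and the std dev would drop to about 3/4 of its original value:

```python
import numpy as np
from PIL import Image

rng = np.random.default_rng(1)
big = rng.normal(128.0, 8.0, size=(1024, 1024))  # stand-in for the higher-MP patch

im = Image.fromarray(big.astype(np.float32))
small = np.asarray(im.resize((768, 768), Image.LANCZOS))  # 3/4 linear size

# Lanczos is close to an ideal low-pass, so the ratio comes out near 0.75.
print(big.std(), small.std(), small.std() / big.std())
```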

In any case, there is an alternative: calculate the ratio of the SNR
at 10% reflectance to the SNR at 1% reflectance, which takes account
of anything there is to take account of. The multiple regression
using pixel pitch and camera date as predictors of this ratio is
practically identical to the regression using them as predictors of
the component SNRs.
--

The slope of the SNR of RAW data is a measure of something else entirely; indirectly and imprecisely, it is an indicator of the photon-gathering efficiency per pixel. The precise way to measure the latter is to plot the noise variance per pixel as a function of mean signal in raw levels; the inverse of the slope is the number of photons gathered per raw level.

Again, to convert this to a meaningful statistic, one should divide by the pixel area to get a sense of the efficiency of the sensor.
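For illustration, a small simulated photon-transfer measurement (Poisson-distributed synthetic data with an assumed gain, not an analysis of real raw files): for shot-noise-limited data the variance is proportional to the mean, and the fitted slope recovers the gain.

```python
import numpy as np

rng = np.random.default_rng(2)
gain = 0.4   # assumed: raw levels per electron, "unknown" to the fit below

means, variances = [], []
for electrons in (200, 500, 1000, 2000, 5000, 10000):
    raw = gain * rng.poisson(electrons, size=100_000)  # shot-noise-limited patch
    means.append(raw.mean())
    variances.append(raw.var())

# For Poisson light, variance = gain * mean, so the slope of the photon
# transfer curve is the gain, and its inverse is photons per raw level (2.5 here).
slope = np.polyfit(means, variances, 1)[0]
print("photons per raw level:", 1.0 / slope)
```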

Iliah Borg Forum Pro • Posts: 29,605
a non-answer

I guess you got all sorts of theoretical answers

In practice you can't say one way or another without comparing results from particular cameras. That is because you can't find two sensors of the same size, manufactured using the same sensor technology, but having a different number of pixels.

Les Olson Senior Member • Posts: 2,081
Re: Are those figures for "print" or "screen"? (nt)

ejmartin wrote:

Noise is properly characterized not by a single number; it has a
spectral power distribution as a function of spatial frequency. It
is therefore important, when comparing images from cameras with
different pixel counts, to take into account that one has image data
out to higher spatial frequencies than the other.

Don't you mean "has image data at higher spatial frequencies if there are higher spatial frequencies in the image"? But a uniform tonality patch has zero modulation - that is what uniform tonality means. An image of such a patch does not contain data out to higher spatial frequencies, and it has only one SNR. If you want to argue that SNR at zero modulation is not the metric we need that's fine, but it is the current standard.


2 November 1975.

'... But just as I possess history,
it possesses me; I am illuminated by it:
but what use is the light?'

ejmartin Veteran Member • Posts: 6,274
Re: Are those figures for "print" or "screen"? (nt)

Les Olson wrote:

ejmartin wrote:

Noise is properly characterized not by a single number; it has a
spectral power distribution as a function of spatial frequency. It
is therefore important, when comparing images from cameras with
different pixel counts, to take into account that one has image data
out to higher spatial frequencies than the other.

Don't you mean "has image data at higher spatial frequencies if there
are higher spatial frequencies in the image"? But a uniform tonality
patch has zero modulation - that is what uniform tonality means. An
image of such a patch does not contain data out to higher spatial
frequencies, and it has only one SNR. If you want to argue that SNR
at zero modulation is not the metric we need that's fine, but it is
the current standard.

How pedantic do you want me to be? I do not mean "has image data at higher spatial frequencies if there are higher spatial frequencies in the image"; there is always data up to the Nyquist frequency in an image. The amplitude of the signal at high spatial frequency can be small if there is no fine scale detail. Such is the case for a region of uniform tonality.

No photographic image of any scene has zero modulation, due to statistical fluctuations in the flux of light during an exposure. This is true even when an object being imaged has absolutely uniform reflectivity. The phenomenon is often called photon shot noise. Photon shot noise is uncorrelated from pixel to pixel in the RAW data, and therefore its spatial frequency distribution is entirely governed by this statistical randomness. So even if the signal has no high frequency content, the noise does (unless of course noise reduction filtering has been applied).

To illustrate some of the issues involved in characterizing the noise by the std dev of the pixel values, here are some patches having what are typically described as uniform tonality (the left one is an image of a square of a GM colorchecker, the other two are upsamplings of the left one by factors of 1.5 and 2); the contrast of each patch was adjusted so that the std dev of tonal values is the same for each:

[patch images not reproduced]

Here are the noise power spectra (128 is Nyquist for each):

[spectra graphs not reproduced]

They look quite different, because the power distribution of noise is quite different for each -- the upsampled ones have little or no noise power above the original Nyquist frequency (2/3 of 128 for the 3/2 upsampling, 1/2 of 128 for the 2x upsampling), and they arrange for the std dev to be the same by having more noise power at lower frequencies. The original (fine-grained) patch has the characteristic noise power spectrum of uncorrelated noise, linearly rising toward the Nyquist frequency.
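That spectral signature is easy to reproduce on synthetic noise (a sketch, not the actual colorchecker crops): radially binned, white noise shows the linear rise toward Nyquist, while a 2x Lanczos upsample of it has essentially no power above the original Nyquist frequency.

```python
import numpy as np
from PIL import Image

def radial_power(img):
    """Noise power summed in rings of equal radial frequency (cycles/image)."""
    F = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    p = np.abs(F) ** 2 / img.size ** 2
    y, x = np.indices(img.shape)
    r = np.hypot(y - img.shape[0] // 2, x - img.shape[1] // 2).astype(int)
    return np.bincount(r.ravel(), weights=p.ravel())

rng = np.random.default_rng(3)
patch = rng.normal(128.0, 4.0, size=(256, 256))
up = np.asarray(Image.fromarray(patch.astype(np.float32))
                .resize((512, 512), Image.LANCZOS))

orig = radial_power(patch)  # rises roughly linearly out to Nyquist (bin 128)
ups = radial_power(up)      # collapses above bin ~128, half the new Nyquist of 256
print(ups[160:200].sum() / orig[60:100].sum())  # tiny: little power past old Nyquist
```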

Noise reduction has an effect very similar to the above -- it acts locally on the image, and therefore removes noise power only at the highest frequencies, resulting in a "blotchy" appearance, as it is unable to remove strong noise power at low frequencies.

If one is going to state which image has more noise, one should state what spatial frequency is being referred to. The coarse grained images have less noise at high frequency, and more noise at low frequency. Similar issues exist when comparing cameras of different pixel count, as I demonstrated in an earlier post in this subthread.

ejmartin Veteran Member • Posts: 6,274
Re: a non-answer

Iliah Borg wrote:

I guess you got all sorts of theoretical answers

Nothing wrong with the use of well-established theory.

In practice you can't say one way or another without comparing
results from particular cameras. That is because you can't find two
sensors of the same size, manufactured using the same sensor
technology, but having different number of pixels.

You mean, like the D40 and the D40X? G9 and G10? 40D and 50D?

Iliah Borg Forum Pro • Posts: 29,605
Re: a non-answer

I guess you got all sorts of theoretical answers

Nothing wrong with the use of well-established theory.

I have a problem with your use of the word "use"

bobn2 Forum Pro • Posts: 72,009
Re: a non-answer

Iliah Borg wrote:

I guess you got all sorts of theoretical answers

In practice you can't say one way or another without comparing
results from particular cameras. That is because you can't find two
sensors of the same size, manufactured using the same sensor
technology, but having different number of pixels.

You can get close, such as the Nikon D40 & D60 or the Sony A200 and A350. But I get the sense of what you say: comparing two cameras you don't want, just to demonstrate a bit of theory, is less interesting than comparing two cameras you might be interested in. The problem is, a lot of people seem to want to make decisions on the basis of 'figures of merit', and some of these 'figures of merit' don't mean what their proposers say they do.


Bob

bobn2 Forum Pro • Posts: 72,009
Re: a non-answer

Iliah Borg wrote:

I guess you got all sorts of theoretical answers

Nothing wrong with the use of well-established theory.

I have a problem with your use of the word "use"

It's interesting to speculate on how designers go about the process of designing a camera. Presumably, they are engineers, familiar with the theory that we discuss here, and presumably that theory drives their design decisions (well, those that the marketing people have left open). So we have a proposal here that, when it comes to evaluating the results of their efforts, we leave the theory behind and rely on subjective evaluation and the word of experts? That's exactly what happened in the esoteric audio world. You end up with propositions that if you can't actually hear the difference between a $20000 amp and a $1000 one, it's because your listening skills aren't good enough (even though quantitative evaluation shows that the differences are below the level of audibility). No-one's willing to own up to not having good listening skills, so the market gets completely distorted by a few opinionated reviewers.

If it goes that way in photography, we'll find manufacturers charging thousands of dollars for insignificant differences and a whole industry of 'expert' reviewers telling us that we just have to be refined and clever enough and we'll understand what those thousands of dollars are for. Anyone who points out that the emperor's attire is rather skimpy will be told 'how can you trash a camera you haven't actually used - all you've done is commented on measurements people have made - must be a troll'.
I hope things don't go that way.


Bob

bobn2 Forum Pro • Posts: 72,009
Re: a non-answer

ejmartin wrote:

Iliah Borg wrote:

I guess you got all sorts of theoretical answers

Nothing wrong with the use of well-established theory.

In practice you can't say one way or another without comparing
results from particular cameras. That is because you can't find two
sensors of the same size, manufactured using the same sensor
technology, but having different number of pixels.

You mean, like the D40 and the D40X? G9 and G10? 40D and 50D?

Interesting about the 50D. Canon claimed it as a new generation sensor, and trumpeted the gapless microlenses, but it does seem to perform much the same as the previous generation.


Bob

Graystar Veteran Member • Posts: 8,373
Re: SNR vs exposure

ejmartin wrote:

Graystar wrote:

ejmartin wrote:

I would say that the LX3 is better in both shadow detail and SNR in
highlights. The LX3 full well capacity is higher, and its read noise
is lower, even when normalized to a per area basis.

Well capacity, eh? That sounds a lot like you're saying DR is affected
by pixel size.

Please note the phrase "when normalized to a per area basis".

Sorry. I just thought that the "even" before the "when" meant that normalization per area was optional, as in "this is the result, and it is still the result even when..." But apparently that's not the case.

Any reason for the snarky tone?

Dunno...might be the confusing answers, as in the "even when" above, the taking of words out of context, people reading the phrase “Contrary to conventional wisdom, higher resolution actually compensates for noise” and somehow confusing the word “compensate” with “improves upon”, the seeming continual refusal to determine image quality by actually checking the quality of an image...or I could have just woken up on the wrong side of the bed. Who knows.

J1000 Senior Member • Posts: 1,339
Re: more pixels are better!

Maybe it's just me, but I find all that talk of photons to be obscuring the answer to this question.

To the original poster: The answer to the "are more pixels better" question is very simple: Look at the photos! It's a completely subjective decision. As you know, increasing the pixel count requires various compromises. There's no need to know exactly what they are, because they manifest themselves visually. If you like the output then good! If you don't, then bad.

Here are the things to look out for:

  • High-resolution files with low-def details (inadequate lens perhaps, or inadequate processing)

  • Noise (generally speaking, the higher the megapixel count, the higher the noise)

  • Noise-reduction artifacts (the higher the noise, the more NR needed)

It's not just about losing detail due to noise reduction, it's about watching new details appear that don't belong in the picture (artifacts).

And no, it does not double your dynamic range. Even if you have twice the pixels, on an individual level they are going to interpret the range of light the same way. So if one pixel can interpret light on a 1-5 scale, then adding another pixel simply does the same action twice. The color is no more precise that way either, because it's representing a different point on the image plane.

bobn2 Forum Pro • Posts: 72,009
Re: more pixels are better!

J1000 wrote:

Maybe it's just me, but I find all that talk of photons to be
obscuring the answer to this question.

To the original poster: The answer to the "are more pixels better"
question is very simple: Look at the photos! It's a completely
subjective decision. As you know, increasing the pixel count requires
various compromises. There's no need to know exactly what they are,
because they manifest themselves visually. If you like the output
then good! If you don't, then bad.

Hardly useful for people looking for generalisations. Generalisations are useful, in that they create a short cut to decision making. However, to be useful they have to be rooted in reality, and the only way to do that is to use science for what it's for: distinguishing the general from the particular, thereby allowing one to deal with classes of things rather than dealing with everything case by case.

Here are the things to look out for:

  • High-resolution files with low-def details (inadequate lens perhaps, or inadequate processing)

Or camera shake, or poor focus, or...

  • Noise (generally speaking, the higher the megapixel count, the higher the noise)

See the discussion on the use of science in making valid generalisations. When you don't, you end up saying silly things, like a generalisation which suggests that a D3x produces more noise than a digicam.

  • Noise-reduction artifacts (the higher the noise, the more NR needed)

Or other misleading generalisations which suggest you might need NR without even stopping to think about the size of image that you might be trying to produce.

It's not just about losing detail due to noise reduction, it's about
watching new details appear that don't belong in the picture
(artifacts).

One of us is getting confused here, and I don't think it's me.

And no, it does not double your dynamic range. Even if you have twice
the pixels, on an individual level they are going to interpret the
range of light the same way. So if one pixel can interpret light on a
1-5 scale, then adding another pixel simply does the same action
twice. The color is no more precise that way either, because it's
representing a different point on the image plane.

Spoken like a man who doesn't understand what's going on. Have a look at Emil's posts above, and instead of complaining about people talking about photons, try to understand it.
--
Bob

Graystar Veteran Member • Posts: 8,373
Re: Are those figures for "print" or "screen"? (nt)

Les Olson wrote:

For any quantity the correct approach is to compare
first the actual value. Adjusted, normalised, or standardised values
can be compared IF they serve a purpose, and it does not matter
whether you are talking about "seasonally adjusted unemployment" or
normalised SNR. The trap, as in this case, is to use normalisations
that encode ideas about the right answer.

I agree with this completely. The question, “which sensor produces the best image quality” should be answered by comparing the actual values in question... images produced by those sensors. And I can't think of any better normalizing process that encodes the ideas about the right answer than to compare two prints of the same size.

The OP was wondering if a high MP compact ever could, under ideal conditions and through the use of brute megapixel power, produce a print that would “rival a smaller count SLR” (and if not, why.) The answer is flat out no, it can't. The simple fact is that at all print sizes, the image from the small-sensored camera will always appear to have more noise than the same image from the large-sensored camera...regardless of the pixel counts of the two sensors. That's a sensor issue. Also, diffraction places limits on how large the small-sensored image can be enlarged. The large-sensored image will have greater acutance. That's a sensor/lens issue. Both of these issues, and others, combine to give the large-sensored image better image quality all the time.

The only way a small-sensored camera will ever produce an image that will rival a large-sensored camera is if it's a picture of a white wall.
