Errors in Luminous Landscape resolution article?

Ehrik

http://www.luminous-landscape.com/tutorials/resolution.shtml

"For instance, you can put 60 million of pixels into a 35mm sensor, but only a diffraction-limited lens at f/5.6 would take advantage of it."

and

"The price to pay is in the form of huge files, and comparatively low signal to noise ratios (which translates to noise, narrower dynamic range, poorer tonal variability"

The first seems to go against the notion of ever diminishing returns, but no
"wall" when it comes to so called "diffraction limits".

And the second seems to be the old pixel density fallacy.

Maybe I've misunderstood the conclusions from the discussions on these topics which have been going on in this forum. Or I've lost orientation in their rather long article. Please comment!

Just my two oere
Erik from Sweden
 
It all depends on how you look at the pixels. If you expect 1 of the 60 million pixels to do the same job as 1 of 10 million...then what was said is correct. But if you start treating pixels like dots from an inkjet printer, where you never really look at one in isolation but combine them (either into larger pixels, or by viewing them so small that they blend together), then the noise and diffraction arguments fail.

There are a few widely varying viewpoints on dynamic range, so until the full-frame, 200 MP sensor comes along and we can actually test it, each person just has to decide for themselves. Personally I think the assessment is correct when it comes to dynamic range, but that may change with some of the newer sensor designs (if they ever make it into cameras).
 
http://www.luminous-landscape.com/tutorials/resolution.shtml

"For instance, you can put 60 million of pixels into a 35mm sensor,
but only a diffraction-limited lens at f/5.6 would take advantage of
it."

"The price to pay is in the form of huge files, and comparatively low
signal to noise ratios (which translates to noise, narrower dynamic
range, poorer tonal variability"
I don't know why they chose f/5.6, but I think their point is that, because of diffraction, resolution is directly proportional to aperture, so the bigger the aperture the better the resolution (see http://www.microscopyu.com/tutorials/java/imageformation/airyna/index.html ). The catch is that aberrations are hard to control at large apertures, and are usually a worse problem than diffraction until you get to small apertures. However, the idea that diffraction only occurs above f/11 is a myth.

The idea that pixel size does not affect signal to noise ratio is an oversimplification bordering on myth, for two reasons.

(1) Photon (AKA shot) noise is equal to the square root of photon flux, so (obviously) the higher the photon flux the higher the ratio of flux to its square root. But camera sensors are photon noise limited only when light is very bright, and most of the time they are read noise limited. Read noise is a per pixel noise, so smaller pixels must have lower signal to noise ratios.

(2) Because photon noise is equal to the square root of photon flux, the highest possible signal to noise ratio for a sensor is the square root of the full well capacity. Smaller pixels have lower full well capacity (because they have smaller wells) and lower S/N. You can get around that by binning pixels, but only by giving up resolution. In the extreme case, if you treat the whole sensor as a single pixel, S/N is determined by sensor area - but you do not have an image.
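
To make the arithmetic behind (1) and (2) concrete, here is a minimal Python sketch. The full well capacities and the 5 e- read noise are made-up illustrative values, not measurements of any real sensor:

import math

def snr(signal_e, read_noise_e):
    # S/N for a pixel collecting signal_e electrons: shot noise is
    # sqrt(signal), and read noise adds in quadrature.
    return signal_e / math.sqrt(signal_e + read_noise_e**2)

# Illustrative values: a "big" pixel with 40,000 e- full well capacity,
# a "small" pixel with 10,000 e-, and 5 e- read noise for both.
big_fwc, small_fwc, read_noise = 40_000, 10_000, 5.0

# (2) Best-case S/N, at saturation, is roughly sqrt(full well capacity):
print(snr(big_fwc, read_noise))    # ~200 (sqrt(40,000))
print(snr(small_fwc, read_noise))  # ~100 (sqrt(10,000))

# (1) In dim light the read noise takes over:
print(snr(20, read_noise))         # ~3.0 instead of the ideal sqrt(20) = 4.5

# Binning four small pixels recovers the big pixel's photon statistics,
# but the read noise is paid four times (added in quadrature):
print((4 * small_fwc) / math.sqrt(4 * small_fwc + 4 * read_noise**2))  # ~199.8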

How is dynamic range separable from signal to noise ratio?

--
'Tell me Mr Clarke, have you ever heard of Bishop Barclay?'
'Of course I have. His brother used to be captain of India.'
 
I don't know why they chose f/5.6...
Because that's where MTF...and hence resolution...is usually the
greatest. It's easily seen by reviewing the lens reviews on DPR.
The charts (they are not actually MTFs) show that resolution (line pairs/picture height) is greatest at about f/5.6 because the image-degrading effects of aberrations fall as the lens is stopped down while the image-degrading effects of diffraction increase, and for most lenses the best balance lies there. That is not the same thing as saying that a lens is diffraction limited at f/5.6; "diffraction limited" means that the lens is aberration free. If a lens is diffraction limited, larger apertures mean higher resolution without limit, so why choose f/5.6? It may be that f/5.6 gives a cutoff frequency exactly the same as the Nyquist frequency of a 36x24mm 60MP sensor, but someone else will have to work it out.
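
For what it's worth, a quick Python sketch of that last calculation, assuming green light (550 nm) and taking the lens limit as the incoherent diffraction cutoff 1/(lambda*N) - a generous criterion, since contrast is already very low well before the cutoff:

import math

wavelength_mm = 550e-6                      # assumed green light
pitch_mm = math.sqrt(36.0 * 24.0 / 60e6)    # ~0.0038 mm for a 60 MP 36x24 sensor
nyquist = 1 / (2 * pitch_mm)                # ~132 lp/mm

for f_number in (5.6, 8, 11, 14):
    cutoff = 1 / (wavelength_mm * f_number)    # diffraction MTF reaches zero here
    print(f"f/{f_number}: cutoff {cutoff:.0f} lp/mm, sensor Nyquist {nyquist:.0f} lp/mm")

# f/5.6 gives a cutoff of ~325 lp/mm, well above the ~132 lp/mm Nyquist limit;
# the two only meet at roughly f/14. So the article's f/5.6 presumably comes
# from a stricter (higher-contrast) resolution criterion than the cutoff.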

--
'Tell me Mr Clarke, have you ever heard of Bishop Barclay?'
'Of course I have. His brother used to be captain of India.'
 
The idea that pixel size does not affect signal to noise ratio is an
oversimplification bordering on myth, for two reasons.

(1) Photon (AKA shot) noise is equal to the square root of photon
flux, so (obviously) the higher the photon flux the higher the ratio
of flux to its square root.
Yes, so it's the same per area independent of pixel size so long as
the downsizing doesn't make the pixels less efficient per area (which
seems to be the case for a wide range of pixel sizes and current
technology.)
But camera sensors are photon noise
limited only when light is very bright, and most of the time they are
read noise limited. Read noise is a per pixel noise, so smaller
pixels must have lower signal to noise ratios.
But that depends on how the read noise scales with pixel size, and
in practice, it seems that for a broad range of pixel sizes, it scales in a way
that doesn't increase the image noise when the pixels get smaller.

(And that is disregarding the benefit of smaller pixels giving a finer-grained
noise that is more pleasant to look at.)
(2) Because photon noise is equal to the square root of photon flux,
the highest possible signal to noise ratio for a sensor is the square
root of the full well capacity. Smaller pixels have lower full well
capacity (because they have smaller wells) and lower S/N. You can
get around that by binning pixels, but only by giving up resolution.
But the comparison is to bigger pixels, which also give up resolution!

Just my two oere
Erik from Sweden
 
The charts (they are not actually MTFs)
They’re MTF-50 charts, which is what I was referring to.
show that resolution (line
pairs/picture height) is greatest at about f/5.6 because the effects
of aberration in degrading the image are falling as aperture
increases but the effects of diffraction in degrading the image are
increasing, and for most lenses the best balance is there.
Right. That’s the reason they used f/5.6 as the reference point.
That is
not the same thing as saying that a lens is diffraction limited at
f/5.6. That means that the lens is aberration free.
You didn’t read carefully enough. The charts are theoretical. The authors say...

“Table 1 reports the maximum resolution for a diffraction-limited (aberration free) lens at different apertures and for different levels of contrast”

And then later say...

“At this point you know that diffraction-limited lenses aren’t the normal case. Only a few top-class lenses approach the resolutions presented in the Table 1, and even that only by stopping down to medium aperture values.”
...so why choose f/5.6?
The authors say...

“Consider f/5.6 as a reference, although it is really difficult to find a diffraction-limited lens at that aperture[11]. Diffraction-limited resolutions for f/8 or f/11 aperture values are more realistic for mass-produced lenses.”

In the quest to answer who out-resolves who (lens or sensor) the authors are simply trying to give the lens the best possible advantage that is within the realm of reality.
 
The idea that pixel size does not affect signal to noise ratio is an
oversimplification bordering on myth, for two reasons.

(1) Photon (AKA shot) noise is equal to the square root of photon
flux, so (obviously) the higher the photon flux the higher the ratio
of flux to its square root.
Yes, so it's the same per area independent of pixel size so long as
the downsizing doesn't make the pixels less efficient per area (which
seems to be the case for a wide range of pixel sizes and current
technology.)
But camera sensors are photon noise
limited only when light is very bright, and most of the time they are
read noise limited. Read noise is a per pixel noise, so smaller
pixels must have lower signal to noise ratios.
But that depends on how the read noise scales with pixel size, and
in practice, it seems that for a broad range of pixel sizes, it scales in a way
that doesn't increase the image noise when the pixels get smaller.
Read noise is a per pixel noise, and it is added to the photon noise and the dark noise for each pixel. So, because the signal cannot be bigger than the full well capacity, the S/N of small pixels must be lower than the S/N of big pixels unless read noise and dark noise are less for small pixels, and they are not.
(2) Because photon noise is equal to the square root of photon flux,
the highest possible signal to noise ratio for a sensor is the square
root of the full well capacity. Smaller pixels have lower full well
capacity (because they have smaller wells) and lower S/N. You can
get around that by binning pixels, but only by giving up resolution.
But the comparison is to bigger pixels, which also give up resolution!
Bigger pixels give up theoretical resolution but not actual resolution, because of the limitations of lenses, which is exactly the point Luminous Landscape was making.

The limitations on printed images are also important. Printing a 10 x 8 at 300 dpi means 7.2MP. A D700 has 12MP on 864 square mm = 72 square microns each (roughly, ignoring waste space). An E520 has 10MP on 216 square mm of sensor = 22 square microns each. If you bin the pixels in the E520 in 2 x 2 blocks, so they are a bit bigger than the pixels in the D700, you only have 2.5MP to print with and your 10 x 8 can only be at 180dpi.
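
A small Python sketch of that print arithmetic, using the rounded sensor areas from the paragraph above:

import math

def pixel_area_um2(megapixels, sensor_area_mm2):
    return sensor_area_mm2 * 1e6 / (megapixels * 1e6)

def print_dpi(megapixels, width_in, height_in):
    # Largest uniform dpi that the pixel count can support on this print size.
    return math.sqrt(megapixels * 1e6 / (width_in * height_in))

print(pixel_area_um2(12, 864))   # D700: ~72 square microns per pixel
print(pixel_area_um2(10, 216))   # E520: ~22 square microns per pixel
print(print_dpi(7.2, 10, 8))     # 300 dpi: a 10 x 8 at 300 dpi needs 7.2 MP
print(print_dpi(10 / 4, 10, 8))  # E520 binned 2x2 (2.5 MP): ~177 dpi, i.e. about 180
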
--
'Tell me Mr Clarke, have you ever heard of Bishop Barclay?'
'Of course I have. His brother used to be captain of India.'
 
But camera sensors are photon noise
limited only when light is very bright, and most of the time they are
read noise limited. Read noise is a per pixel noise, so smaller
pixels must have lower signal to noise ratios.
But that depends on how the read noise scales with pixel size, and
in practice, it seems that for a broad range of pixel sizes, it scales
in a way that doesn't increase the image noise when the pixels get smaller.
Do you have evidence for this? It's true that both read noise and pixel size have decreased over time, but sensor technology has been improving. Roger Clark tested several cameras, and concluded that there is no real trend with pixel pitch:

http://www.clarkvision.com/imagedetail/digital.sensor.performance.summary/index.html#read_noise

--
Alan Martin
 
(2) Because photon noise is equal to the square root of photon flux,
the highest possible signal to noise ratio for a sensor is the square
root of the full well capacity. Smaller pixels have lower full well
capacity (because they have smaller wells) and lower S/N. You can
get around that by binning pixels, but only by giving up resolution.
In the extreme case, if you treat the whole sensor as a single pixel,
S/N is determined by sensor area - but you do not have an image.
Fortunately, there is an alternative, which is used by most digital cameras: algorithmic noise reduction. Like downsampling, this gives up resolution - but only for low-contrast detail. So I think the image-quality argument for larger pixels (on the same sensor size) comes down to read noise, not photon noise.

Incidentally, one of the main reasons for pixel binning - that is, combining pixels at the level of stored electrons, rather than digitally as in downsampling - is to reduce read noise. It works quite nicely on monochrome (or Foveon) sensors, but so far the attempts to use it on Bayer sensors haven't worked so well.

--
Alan Martin
 
http://www.luminous-landscape.com/tutorials/resolution.shtml

"For instance, you can put 60 million of pixels into a 35mm sensor,
but only a diffraction-limited lens at f/5.6 would take advantage of it."
That's based on 2 pixels per Airy disk diameter, which is much too pessimistic. The pixel pitch of such a sensor would be 3.8 µm. For comparison, the Canon G9 has a pixel pitch of 1.85 µm, and while its output is quite soft at f/5.6, it still has more detail than any 3-megapixel camera.

This isn't because of the Bayer filters - modern demosaicing algorithms obtain good monochrome resolution in spite of them. (But the related need for anti-aliasing filters to reduce moiré patterns does reduce potential resolution.)

It has more to do with the diffraction-limited MTF, and the fact that the Airy disk is measured to the first minimum, rather than by a more conventional measure like FWHM (full width at half maximum).
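
To put numbers on that (assuming ~550 nm green light and the standard Airy-pattern constants: first minimum at 1.22*lambda*N, FWHM of the central peak about 1.03*lambda*N):

wavelength_um, f_number = 0.55, 5.6

airy_diameter = 2.44 * wavelength_um * f_number   # ~7.5 um to the first minimum
fwhm = 1.03 * wavelength_um * f_number            # ~3.2 um at half maximum

print(airy_diameter / 2)   # ~3.8 um - exactly the 60 MP pixel pitch quoted above
print(fwhm)                # the spot is effectively much narrower than 7.5 um
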
The first seems to go against the notion of ever diminishing returns,
but no "wall" when it comes to so called "diffraction limits".
Resolution limitations usually combine by addition of reciprocal squares, so improving any factor produces some gain, though with diminishing returns, as you say.

But pixel pitch is different. The MTF due to diffraction alone drops to zero at a finite resolution. Two pixels per cycle at that resolution is the Nyquist limit, beyond which smaller pixels capture no additional detail.

(Actually, that's only true when sinc filters are used for reconstruction; when using better-behaved filters, some margin beyond Nyquist is needed.)
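
Here is a toy Python illustration of the reciprocal-squares rule, with a hypothetical 80 lp/mm lens limit (the numbers are made up, and the rule itself is only a rough approximation to real MTF behaviour):

import math

def combined(*limits_lp_mm):
    # 1/R_total^2 = sum of 1/R_i^2
    return 1 / math.sqrt(sum(1 / r**2 for r in limits_lp_mm))

lens = 80.0
for sensor in (60, 120, 240, 1000):
    print(sensor, round(combined(lens, sensor), 1))
# 60 -> 48.0, 120 -> 66.6, 240 -> 75.9, 1000 -> 79.7: finer pixels always help
# a bit, but the total only creeps toward the 80 lp/mm lens limit.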

A Bayer sensor doesn't reach the point of zero marginal return until the "blue" filter sub-grid reaches the Nyquist limit. (This would greatly simplify demosaicing, as each sub-grid could be reconstructed separately.)

--
Alan Martin
 
Read noise is a per pixel noise, and it is added to the photon noise
and the dark noise for each pixel. So, because the signal cannot be
bigger than the full well capacity, the S/N of small pixels must be
lower than the S/N of big pixels unless read noise and dark noise are
less for small pixels, and they are not.
Yes, but the article wasn't talking about per-pixel S/N, it was talking about the whole image.
Many smaller pixels, each with a lower S/N, can equal or beat fewer pixels
with a higher S/N!
Bigger pixels give up theoretical resolution but not actual
resolution, because of the limitations of lenses, which is exactly
the point Luminous Landscape was making.
Well, my view is that aberrations + diffraction blur the image that is
projected onto the sensor. From that point on, coarse pixels will add
more blur to that image than finer pixels will.
The limitations on printed images are also important. ...
As they present their conclusion, one gets the impression it's an absolute
limit, not a limit relative to viewing size. If it is the latter, then my criticism
would instead be that their article is unclearly presented and will
leave many readers with the wrong conclusion.

Just my two oere
Erik from Sweden
 
But camera sensors are photon noise
limited only when light is very bright, and most of the time they are
read noise limited. Read noise is a per pixel noise, so smaller
pixels must have lower signal to noise ratios.
But that depends on how the read noise scales with pixel size, and
in practice, it seems that for a broad range of pixel sizes, it scales
in a way that doesn't increase the image noise when the pixels get smaller.
Do you have evidence for this?
No, but the tests I've seen seem to point in this direction, see below.
It's true that both read noise and
pixel size have decreased over time, but sensor technology has been
improving. Roger Clark tested several cameras, and concluded that
there is no real trend with pixel pitch:
Clark has been criticised for some aspects of his methodology and presentation,
e.g. for not considering that some cameras clip their raw data at the dark end,
which makes the read noise numbers he computes invalid.

Also, compacts tend to be calibrated with up to a stop less headroom than
DSLRs, thus their ISOs are a stop faster. This should be factored in when
comparing these figures.

Finally, when comparing per area, smaller pixels gain from the noise-averaging
effect.

And, noise-strength-per-area aside, having finer-grained noise is beneficial
to the image quality.

See John Sheehy's demonstration here for example:
http://forums.dpreview.com/forums/read.asp?forum=1018&message=28607494

Just my two oere
Erik from Sweden
 
Great post, just the kind of replies I hoped for!
"For instance, you can put 60 million of pixels into a 35mm sensor,
but only a diffraction-limited lens at f/5.6 would take advantage of it."
That's based on 2 pixels per Airy disk diameter, which is much too
pessimistic.
I also felt the figures seemed unreasonable.

...
The first seems to go against the notion of ever diminishing returns,
but no "wall" when it comes to so called "diffraction limits".
Resolution limitations usually combine by addition of reciprocal
squares
Great! That's what I've felt intuitively, but I've not seen it stated
anywhere.
, so improving any factor produces some gain, though with
diminishing returns, as you say.

But pixel pitch is different. The MTF due to diffraction alone drops
to zero at a finite resolution. Two pixels per cycle at that
resolution is the Nyquist limit, beyond which smaller pixels capture
no additional detail.
Important point, that I had missed. Thanks.
...

A Bayer sensor doesn't reach the point of zero marginal return until
the "blue" filter sub-grid reaches the Nyquist limit. (This would
greatly simplify demosaicing, as each sub-grid could be reconstructed
separately.)
Yes.

Just my two oere
Erik from Sweden
 
http://www.luminous-landscape.com/tutorials/resolution.shtml

"For instance, you can put 60 million of pixels into a 35mm sensor,
but only a diffraction-limited lens at f/5.6 would take advantage of
it."
It SHOULD read "...would take FULL advantage of it."
and

"The price to pay is in the form of huge files, and comparatively low
signal to noise ratios (which translates to noise, narrower dynamic
range, poorer tonal variability"
It SHOULD add... compared to a sensor with fewer pixels using the same technology.
The first seems to go against the notion of ever diminishing returns,
but no
"wall" when it comes to so called "diffraction limits".
There is a "wall" when it comes to diffraction. A lens CANNOT resolve more than this theoretical limit. HOWEVER, the more resolution the sensor has (more pixels) the better it can "resolve" the circle of confusion. Also, when one speaks of a diffraction limited lens, one is speaking about a lens that is optically perfect and limited ONLY by diffraction. In other words an ordinary lens will be diffraction limited at f5.6 while an excellent lens will be diffraction limited at f4 and this will be at a higher resolution.
And the second seems to be the old pixel density fallacy.

Maybe I've misunderstood the conclusions from the discussions on
these topics which have been going on in this forum. Or I've lost
orientation in their rather long article. Please comment!

Just my two oere
Erik from Sweden
 
I think we're talking about two different things here. John Sheehy's example is pushed from 100 ISO, where most of the 400D's additive noise is ADC noise (or other post-amplification noise).

I wasn't counting that as "read noise" - I was following Roger Clark's terminology from the page I linked, where "read noise" refers to just the pre-amplification component, which becomes dominant at high ISOs.

Small pixels do have a potential advantage in ADC noise, when their output is resampled for comparison at the same pixel dimensions, because their smaller dynamic range is less demanding on the ADC.
Also, compacts tend to be calibrated with up to a stop less headroom
than DSLRs, thus their ISOs are a stop faster. This should be
factored in when comparing these figures.
By "ISOs are a stop faster", do you mean that at a given ISO the scaling factor from exposure in lux-seconds to RAW data numbers (normalized to the same full scale) is a factor of 2 larger? (This does not constitute an error in the stated ISO numbers.)

If so, you're right: that is a design decision rather than a consequence of pixel size, and it does help with ADC noise. (However, ADC noise isn't such an issue for compacts because their pre-amplification dynamic range is relatively small.)
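
A purely illustrative sketch of what "a stop less headroom" means in raw terms (all numbers invented for the example):

full_scale_dn = 16383                 # pretend both cameras write 14-bit raw

dslr_saturation_exposure = 1.0        # exposure (arbitrary lux-second units) that just clips
compact_saturation_exposure = 0.5     # clips one stop earlier = one stop less headroom

def raw_dn(exposure, saturation_exposure):
    return min(full_scale_dn, exposure / saturation_exposure * full_scale_dn)

# The same exposure lands twice as high in the compact's raw file:
print(raw_dn(0.25, dslr_saturation_exposure))     # ~4096
print(raw_dn(0.25, compact_saturation_exposure))  # ~8192
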
Clark has been criticised for some aspects of his methodology and
presentation. E.g. for not considering that some cameras clip
their raw data at the dark end, which will make the read noise numbers
he computes invalid.
There are probably some methodological flaws - this kind of testing involves lots of assumptions, some of them hard to verify. But he does deal with that particular issue:
http://www.clarkvision.com/imagedetail/evaluation-1d2/howtotakedata.html

--
Alan Martin
 
Incidentally, one of the main reasons for pixel binning - that is,
combining pixels at the level of stored electrons, rather than
digitally as in downsampling - is to reduce read noise. It works
quite nicely on monochrome (or Foveon) sensors, but so far the
attempts to use it on Bayer sensors haven't worked so well.
According to http://www.microscopyu.com/tutorials/java/digitalimaging/signaltonoise/index.html read noise on CCD sensors comes mainly from the on-chip pre-amplifier. Binning reduces read noise relative to full well capacity by increasing FWC, but how does binning reduce read noise per pixel (binned or not as the case may be), if it comes from the pre-amp?

--
'Tell me Mr Clarke, have you ever heard of Bishop Barclay?'
'Of course I have. His brother used to be captain of India.'
 
All right, I give in; I have done the calculations.

The Rayleigh criterion separation at the sensor at f/5.6 is 3.3 microns for blue light and 4.4 microns for red light. A diffraction limited lens means one in which aberrations do not prevent points this far apart being resolved. A 60MP 36 x 24 sensor has photosites 14.4 square microns (assuming the whole sensor is taken up with photosites), so if the photosites are square they are 3.8 microns wide and the Nyquist limit for the sensor is way fewer line-pairs/mm than the lens can resolve. At f/8 the Rayleigh criterion separation is 6.3 microns for red light, so the sensor and the lens are neck and neck.
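
For anyone who wants to check the arithmetic, a short Python version (I am assuming roughly 480 nm for "blue" and 650 nm for "red", since the wavelengths are not stated):

import math

def rayleigh_um(wavelength_nm, f_number):
    return 1.22 * wavelength_nm * 1e-3 * f_number

print(rayleigh_um(480, 5.6))   # ~3.3 um, blue at f/5.6
print(rayleigh_um(650, 5.6))   # ~4.4 um, red at f/5.6
print(rayleigh_um(650, 8))     # ~6.3 um, red at f/8

pitch_um = math.sqrt(36000 * 24000 / 60e6)   # 36 x 24 mm sensor, 60 MP
print(pitch_um)                              # ~3.8 um photosite width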

So that is why f/5.6 in the original article. If you had a lens which was diffraction limited at f/1.4 the Rayleigh criterion separation would be 0.8 microns for blue light and 1.1 microns for red light, so it would be worth having a 36 x 24 sensor with 175MP!
--
'Tell me Mr Clarke, have you ever heard of Bishop Barclay?'
'Of course I have. His brother used to be captain of India.'
 
Incidentally, one of the main reasons for pixel binning - that is,
combining pixels at the level of stored electrons, rather than
digitally as in downsampling - is to reduce read noise. It works
quite nicely on monochrome (or Foveon) sensors, but so far the
attempts to use it on Bayer sensors haven't worked so well.
According to
http://www.microscopyu.com/tutorials/java/digitalimaging/signaltonoise/index.html
read noise on CCD sensors comes mainly from the on-chip pre-amplifier.
Binning reduces read noise relative to full well capacity by increasing
FWC, but how does binning reduce read noise per pixel (binned or not
as the case may be), if it comes from the pre-amp?
I was considering binning as an alternative to downsampling. A 2x2 binned group of pixels would have the same read noise as a single pixel, but half as much as the summed output of a 2x2 group of individually converted pixels.
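
In numbers, taking a hypothetical 5 e- of read noise per readout:

import math

read_noise = 5.0                           # per conversion, illustrative figure

# Hardware binning: the charge from 4 photosites is summed on-chip and
# read out once, so the read noise is paid a single time.
binned = read_noise                        # 5 e-

# Downsampling: each photosite is read out separately, and summing the
# four outputs adds their read noises in quadrature.
summed = math.sqrt(4) * read_noise         # 10 e-

print(binned, summed)                      # the binned group has half the read noise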

--
Alan Martin
 
