Detail Man wrote:
Bobn2 wrote:
David Rosser wrote:
Bobn2 wrote:
olliess wrote:
Bobn2 wrote:
olliess wrote:
quadrox wrote:
In the end that means that medium format and large format only hold an advantage as long as DSLR lenses are not diffraction limited.
I think it's correct to say that we're getting to the point where DSLRs are significantly affected by diffraction, and may even be said to be diffraction-limited at some apertures.
I don't think it's correct to say that, or at least, not meaningful. We are still well short of pixel counts such that the resolution of the lens is the limit over the whole aperture range.
I didn't say "over the whole aperture range," I said "at some apertures."
But they've always been diffraction-limited at the same 'some apertures' - it is a property of the lens, not the camera. Look at this lens resolution graph (taken from DxOmark in the good old days when they gave full MTF)
Here we have the same lens on a 12MP and a 24MP FF camera. The downward slope to the right is where the lens is becoming 'diffraction limited' - that is, its resolution is limited by diffraction rather than aberrations. Notice that the slope starts at the same place. The only difference is that the 24MP camera is extracting more from the diffraction-limited lens. The 24MP is no more 'diffraction limited' than the 12MP.
In any case, the lens mostly gives its best resolution (which present cameras cannot entirely capture) when it is aberration-limited, not diffraction-limited.
Sure.
This whole idea of cameras being 'diffraction limited' due to high pixel count is bogus. Diffraction is a property of the lens, not the camera.
The camera isn't diffraction limited DUE to higher pixel counts. It's diffraction that is limiting your ability to gain much more from higher pixel counts.
Then we still have some way to go before diffraction is the limiting factor on our camera systems at moderate apertures. As the above shows, even at the smallest apertures, increased pixel count is still yielding noticeably more.
--
Bob
Bob, it might help a lot if you explained that the MTF curves in the figure above are composite curves - the MTF of the lens at some fixed number of lp/mm at each aperture, multiplied by the MTF of the sensor at that same lp/mm figure. Once you understand that, you realise that unless you have a sensor with infinite resolution, the combined MTF is always going to improve with increasing sensor resolution.
Thank you for explaining that, so that I don't have to.
Of course I have never seen a sensor MTF curve - the only hint is a Zeiss paper which suggests that sensor MTF falls linearly from 100% at zero frequency to 0% at the Nyquist frequency. Now, if anybody still tested lenses independently of camera bodies like they used to (I still have the EROS200 lens test data for my 55mm f/3.5 Micro-Nikkor that R.G.Lewis produced prior to selling me the lens), you could use the lens data and the combined data as produced by DxOmark to estimate the sensor MTF.
Perhaps as this site now uses DxOmark data Amazon could afford to buy DxOmark the modern equivalent of EROS200.
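The composite-curve point can be sketched in a few lines of Python. This is only an illustration, assuming the linear Zeiss-style sensor-MTF approximation mentioned above; the lens MTF value and the Nyquist figures (roughly those of 12MP and 24MP full-frame sensors) are made-up round numbers, not measurements:

```python
def sensor_mtf(lp_mm, nyquist_lp_mm):
    # Zeiss-style linear approximation: 100% at zero frequency, 0% at Nyquist
    return max(0.0, 1.0 - lp_mm / nyquist_lp_mm)

def combined_mtf(lens_mtf, lp_mm, nyquist_lp_mm):
    # Composite curve: lens MTF multiplied by sensor MTF at the same lp/mm
    return lens_mtf * sensor_mtf(lp_mm, nyquist_lp_mm)

# Same lens (assumed MTF 0.5 at 40 lp/mm) on a ~12MP FF body (Nyquist ~59 lp/mm)
# versus a ~24MP FF body (Nyquist ~83 lp/mm) - illustrative figures only
mtf_12mp = combined_mtf(0.5, 40.0, 59.0)
mtf_24mp = combined_mtf(0.5, 40.0, 83.0)
```

With any finite Nyquist frequency the sensor factor is below 1.0, so raising sensor resolution always lifts the composite curve, exactly as described above.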
Indeed, you don't see too many experimentally determined lens (without camera) or sensor MTF curves.
There are some AA filter curves here. The 'quartz' (i.e. birefringent) curve is not a bad approximation to the theory, which is simply the Fourier transform of the box PSF that such a filter should give.
There are also some theoretical curves for a sensor without AA filter, due to Bart van der Wolf, here. These assume the 100% aperture function that 100% microlenses should give.
Interesting (theoretical) graph by Bart there. I don't see any mention by him in this post of his:
Reply #103, July 13, 2010, 01:45:39 PM
http://www.luminous-landscape.com/forum/index.php?topic=44733.100
... about a micro-lens model being added to the calculations - but the curves differ from the straight sinc function expected for a single photosite aperture. They have a bit of a gradual "tail" before crossing through zero magnitude - reminiscent of the function associated with diffraction through a circular aperture.
After a fair amount of consultation with the interesting and knowledgeable Frans van den Bergh (the developer of MTF Mapper), I have used this spatial frequency domain model provided by him [for the spatial-domain convolution of photosite aperture and optical ("AA") filter, expressed below as an equivalent trigonometric restatement of the product of the individual spatial frequency transforms]:
Photosite Aperture (100% Fill) convolved with Optical Lowpass Filter assembly:

MTF(f) = Absolute Value ( ( A * Sin (pi * B * f) / (pi * B * f) ) + ( C * Sin (pi * D * f) / (pi * D * f) ) )
where: A = 1/2 + Offset; B = 1 + 2 * Offset; C = 1/2 - Offset; D = 1 - 2 * Offset;
and f is the dimensionless product of Spatial Frequency multiplied by Photosite Aperture (100% Fill).
Notes:
Offset of 0.250000 yields first zero response at 1.000000 times the Spatial Sampling Frequency.
Offset of 0.333333 yields first zero response at 0.750000 times the Spatial Sampling Frequency.
Offset of 0.375000 yields first zero response at 0.666667 times the Spatial Sampling Frequency.
Offset of 0.400000 yields first zero response at 0.625000 times the Spatial Sampling Frequency.
Offset of 0.500000 yields first zero response at 0.500000 times the Spatial Sampling Frequency.
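The combined response above is easy to evaluate numerically. Here is a short Python transcription of the trigonometric form (my own, not MTF Mapper code), which can be used to check the first-zero figures in the notes:

```python
import math

def sinc(x):
    # sin(pi*x)/(pi*x), with the removable singularity at x = 0 handled
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def combined_mtf(f, offset):
    """Photosite aperture (100% fill) convolved with the OLPF assembly,
    per the product-of-transforms restatement quoted above.
    f is spatial frequency times photosite aperture (dimensionless)."""
    A = 0.5 + offset
    B = 1.0 + 2.0 * offset
    C = 0.5 - offset
    D = 1.0 - 2.0 * offset
    return abs(A * sinc(B * f) + C * sinc(D * f))
```

For example, combined_mtf(1.0, 0.25), combined_mtf(0.75, 1/3), and combined_mtf(0.5, 0.5) all evaluate to (numerically) zero, matching the corresponding table entries.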
From:
http://www.dpreview.com/forums/post/51323901
.
Here is what the spatial frequency transform (MTF) looks like for the photosite aperture:
And here is what it looks like for an optical ("AA") filter with a zero at Nyquist:
Multiplying the spatial frequency transforms (MTFs) yields a response with a zero at the Nyquist spatial frequency, but which also contains additional lobes with peak magnitudes decreasing in inverse proportion to spatial frequency (f). "AA" filters are thus "periodic" filters (as opposed to true "low-pass" filters). Their spatial-frequency-selective "roll-off" is actually a result of the sinc [ sin(f)/f ] function of the photosite aperture itself.
.
Regarding the validity of individual-photosite methods of analysis as they relate to physical reality:
fvdbergh2501 wrote:
Detail Man wrote:
Have come to better understand that it is only of limited significance to consider the convolution of an individual aperture with diffraction patterns that extend beyond its perimeter - as what really occurs on the surface of an image-sensor is the spatial convolution of the diffraction patterns from incoming beams of light with 2-D pulse-trains of photosite apertures (with spatial repetition rate corresponding to the pixel pitch, and duty cycle proportional to the fill factor), all further complicated by the addition of an OLPF assembly.
Hmmm. As far as I understand it, those two views are identical under certain conditions. As discussed above, MTF Mapper represents the "incoming image" as a Cartesian plane with a white background and a black target rectangle. These are defined mathematically, i.e., no discretization to pixel boundaries involved at this level. This is equivalent to considering an infinite number of rays impinging on the sensor.
To measure the response of our sensor, all we have to do is to compute the integral of the "incoming image" multiplied by the system PSF, i.e., the convolution of the "incoming image" and the system PSF. This convolution only yields a single (scalar) result. The key concept is to realize that the "system PSF" I refer to here is a specific PSF centered on a specific photosite. (The PSF could vary from one photosite to the next, as it indeed does when an astigmatic lens is considered, but for simplicity we can assume that the PSF is the same for all photosites). We just have to repeat this process at every photosite to obtain the image captured by our sensor.
So rather than following the incoming light, and modelling how it is spread by diffraction and OLPFs in the "forward" direction, we run the process in reverse. Thus, each photosite PSF is instead multiplied and integrated over the entire incoming image. Because our incoming image is defined mathematically, we can do this to any required degree of accuracy, essentially simulating the "infinite number of rays" view. Since this demonstrably works, it implies that the system response is indeed fully defined by considering the PSF from the viewpoint of a single photosite.
The real catch, it would seem, is in what the input image is. In other words, the usefulness of all this analysis comes from observing the response to a known input. If both the "incoming image" and the photosite PSF are known, then the photosite-centric view is identical to the multiple-photosite pulse-train view. Because the step-edge input is so widely used, we have become accustomed to working with only the system PSF in relative comparisons.
http://www.dpreview.com/forums/post/51301707
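Frans's photosite-centric procedure can be sketched in a 1-D toy model (my own construction, not MTF Mapper code): define the "incoming image" mathematically as a step edge, then, for each photosite, integrate the image multiplied by a box PSF centred on that photosite:

```python
def photosite_response(center, psf_halfwidth, edge_pos=0.0, n=4000):
    """Integral of the incoming image (0.0 left of the edge, 1.0 right of it)
    multiplied by a uniform box PSF centred on one photosite (1-D sketch)."""
    step = 2.0 * psf_halfwidth / n
    total = 0.0
    for i in range(n):
        x = center - psf_halfwidth + (i + 0.5) * step
        total += 1.0 if x > edge_pos else 0.0
    return total / n

# Repeating the single-photosite computation at every pitch position
# reproduces the sampled edge profile the sensor would capture.
profile = [photosite_response(p, psf_halfwidth=1.5) for p in range(-4, 5)]
```

The edge blurs over roughly the PSF width, and the whole sampled image falls out of repeated single-photosite integrals - which is the point being made: the system response is fully defined by the PSF viewed from a single photosite, run in "reverse".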
.
Regarding whether the point at which diffraction extinction begins to affect the upper spatial frequencies of the composite system spatial frequency (MTF) response is presently being approached:
Assume a "fudge factor" (K) for a Bayer-arrayed, CFA image-sensor (which is likely a bit wider still, due to the fact that most de-mosaicing algorithms utilize wider than 2x2 arrays in their interpolations).
I calculate the "critical" (minimum) Photosite Aperture dimension to be:
Pa = ( Pa / ( K * Pp ) ) * ( Fz ) * ( W * N )
where:
Pa is the Photosite Aperture;
K is the Bayer-arrayed and de-mosaiced "fudge factor";
Pp is the Photosite Pitch;
Fz is the fraction of the spatial sampling frequency (the reciprocal of the Photosite Pitch) at which the first zero-magnitude response occurs in the composite of the Optical ("AA") Filter combined (convolved in the spatial domain, multiplied in the spatial frequency domain) with the Photosite Aperture;
W is the Wavelength;
N is the F-Ratio of the lens-system.
.
Solving for the simple case of a 100% Fill Factor (Photosite Aperture equals Photosite Pitch), setting the value of K to a conservative value of 2, and setting the value of Fz to 1/2 (the strongest possible "AA Filter", resulting in a zero-magnitude response at the Nyquist spatial frequency), the identity presented above simplifies to the following form:
Pa = ( W * N ) / 4
Re-arranging to solve for the maximum F-Ratio (N) as a function of Wavelength (W) and Photosite Aperture (Pa):
N = ( 4 * Pa ) / W
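The pair of identities can be checked with a few lines of Python (assuming, as above, K = 2, Fz = 1/2, and 100% fill factor):

```python
def n_max(photosite_aperture_m, wavelength_m):
    # N = (4 * Pa) / W : maximum F-ratio under the simplified identity above
    return 4.0 * photosite_aperture_m / wavelength_m

def pa_min(f_number, wavelength_m):
    # Inverse: Pa = (W * N) / 4, the "critical" minimum photosite aperture
    return wavelength_m * f_number / 4.0
```

For 700 nm light this gives Nmax of about 26.9 for a 4.7 micron photosite and about 21.3 for a 3.73 micron photosite, matching the worked figures that follow; pa_min(22.627, 700e-9) returns roughly 3.96 microns, the figure in the closing paragraph.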
.
Assigning a value of 700 nm for the Wavelength, and the 4.7 micron Photosite Pitch of the Nikon D800:
Nmax = ( 4 * 4.7 E-6 ) / ( 700 E-9 ) = 26.86
Note that in a case where no optical ("AA") filtering exists at all (as the D800E model may or may not actually represent), Nmax is reduced by a factor of 2, to a value of 13.43. On the other hand, in such a case, diffraction through a circular aperture opening (as modelled here) functions as an "anti-aliasing" filter that is indeed more effective than optical filtering (because it does not exhibit a periodic spatial frequency magnitude response).
.
Assigning a value of 700 nm for the Wavelength, and the 3.73 micron Photosite Pitch of the Oly E-M5:
Nmax = ( 4 * 3.73 E-6 ) / ( 700 E-9 ) = 21.31
.
Thus, it does appear that (for lens-system F-Numbers of 22.627) decreasing photosite dimensions below 3.960 microns may begin to present some limitations at the upper bounds of the composite system spatial frequency (MTF) response. Please note that this does not constitute a "hard and fast limit" at anything other than the very highest spatial frequencies.
DM ...