Relationship between Sensor Size and Diffraction

Bobn2 wrote:
Zlik wrote:
Bobn2 wrote:
olliess wrote:
Zlik wrote:

I think this whole discussion can be summarized into two non-mutually-exclusive claims:
  • One person (Bobn2) says that the higher megapixel camera will always produce better, or at worst equal, resolution compared to the low megapixel camera, which is supported by your graph: the red graph is at all points higher than the blue graph.
  • One person (olliess) says that the higher megapixel camera will start to lose its resolution potential sooner and faster than the lower megapixel camera, which is again supported by your graph: the downwards slope starts sooner and declines faster for the red graph.
Thanks for summarizing so succinctly.

Seems like a good time to step away and let calmer heads prevail. ;)
It is not true that the 'higher megapixel camera will start to lose its resolution potential sooner'; it starts to lose its resolution potential at exactly the same point.
This is true, in theory, but in practice, you will never be able to tell the difference in resolution from f/5.6 to f/8 on a 4MP full frame sensor because the curve is extremely flat around the high point (f/5.6). So yes, f/8 might have 98% of the theoretical resolution achieved on that sensor at f/5.6, and that's why you can say that the diffraction starts at the same point, but visually, the diffraction will start to show later on a low MP camera. On a 100MP camera, you will easily see a difference one stop down from optimal aperture, but on a 4MP full frame camera, you won't see any difference between f/5.6 and f/8, which is why I say that on a low MP camera, you can sometimes stop down a little beyond the theoretical optimal aperture without losing any resolution visually.
That demonstrates immediately that the 'diffraction limit' theory is wrong. It predicts that the smaller pixels will show diffraction limiting earlier, and that cannot be demonstrated in the real world. Furthermore, the decline of resolution (the 'diffraction limit') is not where the theory predicts it to be. So, in brief, the theory does not hold up to experiment.

--
Bob
Your point above has some validity, but it still doesn't excuse the invention of a completely arbitrary 'limit', because there is nothing quantitative about your argument that would suggest that diffraction becomes 'visible' at some particular point.

I get impatient with McHugh and his disciples just because I have seen so many people completely bamboozled into thinking that high MP cameras produce lower resolution images than low MP ones above this 'diffraction limit'. Now I am aware that he's not actually saying that, but the problem is that what he is saying is of no consequence at all, so it's quite natural for people to take it as saying something of consequence, which turns out to be wrong.

The simple truth. There is no 'diffraction limit' with respect to sensor resolution.

The more complex truth. Higher MP sensors get closer to the lens' limits of performance than lower MP sensors, and this might mean that a high MP system is more often lens limited rather than sensor limited, particularly at small apertures when the lens is diffraction limited (and at large apertures when it is aberration limited). Nonetheless, a high MP system will extract more resolution from any given lens than will a low MP system, in all circumstances.
...the ideal situation, in terms of IQ, is to have a lens with as few aberrations as possible and a sensor with so many pixels that lens aberrations or diffraction act as the AA filter, so long, of course, as the sensor with that many pixels is at least as efficient as a sensor with fewer pixels.
 
Bobn2 wrote:
Useful sources and a nice piece of work, thanks.
Thank you much, Bob. Your recognition means a lot to me. Please note that I have found here:

http://www.dpreview.com/forums/thread/3475094

... that as diffraction affects the composite system spatial frequency (MTF) response, the tangible advantages of smaller sized photosites do indeed (gradually) diminish, and asymptotically approach no significant advantage at all (in the extreme). Thus, some limits do exist.

DM ... :P
 
Detail Man wrote:
Bobn2 wrote:
David Rosser wrote:
Bobn2 wrote:
olliess wrote:
Bobn2 wrote:
olliess wrote:
quadrox wrote:

In the end that means that medium format and large format only hold an advantage as long as DSLR lenses are not diffraction limited.
I think it's correct to say that we're getting to the point where DSLRs are significantly affected by diffraction, and may even be said to be diffraction-limited at some apertures.
I don't think it's correct to say that, or at least, not meaningful. We are still well short of pixel counts such that the resolution of the lens is the limit over the whole aperture range.
I didn't say "over the whole aperture range," I said "at some apertures."
But they've always been diffraction limited at the same 'some apertures' - it is a property of the lens, not the camera. Look at this lens resolution graph (taken from DxOmark in the good old days when they gave full MTF)



Here we have the same lens on a 12MP and 24MP FF camera. The downward slope to the right is where the lens is becoming 'diffraction limited' - that is, its resolution is limited by diffraction rather than aberrations. Notice that the slope starts at the same place. The only difference is that the 24MP camera is extracting more from the diffraction limited lens. The 24MP is no more 'diffraction limited' than the 12MP.
In any case, mostly, the lens is giving its best resolution (which present cameras cannot entirely capture) when it is aberration, not diffraction limited.
Sure.
This whole idea of cameras being 'diffraction limited' due to high pixel count is bogus. Diffraction is a property of the lens, not the camera.
The camera isn't diffraction limited DUE to higher pixel counts. It's diffraction that is limiting your ability to gain much more from higher pixel counts.
Then we still have some way to go before the diffraction is the limiting factor on our camera systems at moderate apertures. As the above shows, even at the smallest apertures, increased pixel count is still yielding noticeably more.

--
Bob
Bob, it might help a lot if you explained that the MTF curves in the figure above are composite curves - the MTF for the lens at some fixed number of lp/mm at each aperture multiplied by the MTF of the sensor at that same lp/mm figure. When you understand that, you realise that, unless you have a sensor with infinite resolution, the combined MTF is always going to improve with increasing sensor resolution.
Thank you for explaining that, so that I don't have to.
Of course I have never seen a sensor MTF curve - the only hint is a Zeiss paper which suggests that sensor MTF falls linearly from 100% at 0 frequency to 0% at the Nyquist frequency. Now, if anybody still tested lenses independent of camera bodies like they used to (I still have the EROS200 lens test data for my 55mm f/3.5 micro Nikkor that R.G. Lewis produced prior to selling me the lens), you could use the lens data and the combined data as produced by DxOmark to estimate the sensor MTF.

Perhaps, as this site now uses DxOmark data, Amazon could afford to buy DxOmark the modern equivalent of EROS200.
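To make the composite-curve point concrete, here is a minimal Python sketch (mine, not from the thread) that multiplies a diffraction-limited lens MTF by the linear sensor MTF suggested by that Zeiss paper. The 8.4 and 6.0 micron pitches standing in for 12MP and 24MP full-frame sensors, and the 550 nm wavelength, are illustrative assumptions:

import numpy as np

def diffraction_mtf(freq_lpmm, f_number, wavelength_mm=550e-6):
    # diffraction-limited MTF of a circular aperture; cutoff frequency = 1 / (wavelength * N)
    s = np.clip(freq_lpmm * wavelength_mm * f_number, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(s) - s * np.sqrt(1.0 - s * s))

def sensor_mtf(freq_lpmm, pitch_mm):
    # linear fall-off from 100% at zero frequency to 0% at Nyquist, per the Zeiss suggestion above
    return np.clip(1.0 - freq_lpmm * 2.0 * pitch_mm, 0.0, 1.0)

freqs = np.array([20.0, 40.0, 60.0])  # lp/mm
for n in (5.6, 11.0, 22.0):
    for label, pitch in (("12MP", 8.4e-3), ("24MP", 6.0e-3)):  # pitch in mm (assumed)
        combined = diffraction_mtf(freqs, n) * sensor_mtf(freqs, pitch)
        print(f"f/{n:<4} {label}:", np.round(combined, 2))

At every frequency and aperture the smaller pitch gives the higher combined MTF: diffraction narrows the gap at f/22 but never reverses it, which is the behaviour of the red and blue curves discussed above.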
Indeed, you don't see too many experimentally determined lens (without camera) or sensor MTF curves.

There are some AA filter curves here

the 'quartz' (i.e. birefringent) is not a bad approximation to the theory, which is simply the Fourier transform of the box PSF that such a filter should give.

There are also some theoretical curves for a sensor without AA filter due to Bart van der Wolf here. These assume the 100% aperture function that 100% microlenses should give.
Interesting (theoretical) graph by Bart there. I don't see any mention by him in this post of his:

Reply #103 on: July 13, 2010, 01:45:39 PM

http://www.luminous-landscape.com/forum/index.php?topic=44733.100

... about a micro-lens model being added to the calculations - but the curves differ from the straight sinc function expected for a single photosite aperture. They have a bit of a gradual "tail" before crossing through zero magnitude - reminiscent of the chat function associated with diffraction through a circular aperture.

After a fair amount of consultation with the interesting and knowledgeable Frans van den Bergh (the developer of MTF Mapper), I have used this spatial frequency domain model provided by him [for the spatial domain convolution of photosite aperture and optical ("AA") filter, expressed below as an equivalent trigonometric restatement of the product of the individual spatial frequency transforms]:

Photosite Aperture (100% Fill) convolved together with Optical Lowpass Filter assembly:

Absolute Value ( ( A * Sin (pi * B * f) / (pi * B * f) ) + ( C * Sin (pi * D * f) / (pi * D * f) ) )

where: A = 1/2 + Offset; B = 1 + 2 * Offset; C = 1/2 - Offset; D = 1 - 2 * Offset

and f is the dimensionless product of Spatial Frequency multiplied by Photosite Aperture (100% Fill).

Notes:

Offset of 0.250000 yields first zero response at 1.000000 times the Spatial Sampling Frequency.

Offset of 0.333333 yields first zero response at 0.750000 times the Spatial Sampling Frequency.

Offset of 0.375000 yields first zero response at 0.666667 times the Spatial Sampling Frequency.

Offset of 0.400000 yields first zero response at 0.625000 times the Spatial Sampling Frequency.

Offset of 0.500000 yields first zero response at 0.500000 times the Spatial Sampling Frequency.

From: http://www.dpreview.com/forums/post/51323901
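As a quick check on the expression above, here is a small Python sketch (my own, not from the linked post) that evaluates the quoted formula and confirms the first-zero positions listed in the Notes:

import numpy as np

def aperture_plus_olpf_mtf(f, offset):
    # |A*sinc(B*f) + C*sinc(D*f)|, with np.sinc(x) = sin(pi*x)/(pi*x) and
    # f expressed as a fraction of the spatial sampling frequency (cycles per photosite pitch)
    a, b = 0.5 + offset, 1.0 + 2.0 * offset
    c, d = 0.5 - offset, 1.0 - 2.0 * offset
    return np.abs(a * np.sinc(b * f) + c * np.sinc(d * f))

first_zeros = {0.25: 1.0, 1.0 / 3.0: 0.75, 0.375: 2.0 / 3.0, 0.4: 0.625, 0.5: 0.5}
for offset, fz in first_zeros.items():
    print(f"Offset {offset:.6f}: MTF at {fz:.6f} * fs = {aperture_plus_olpf_mtf(fz, offset):.1e}")

Each printed value is zero to within floating-point error; evaluating the same function beyond its first zero shows the additional lobes described below.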

.

Here is what the spatial frequency transform (MTF) looks like for photosite aperture:



Here is what the spatial frequency transform (MTF) looks like for an optical ("AA") filter with a zero at Nyquist:



Multiplying the spatial frequency transforms (MTFs) yields a combined response with a zero at the Nyquist spatial frequency, but which also contains additional lobes with peak magnitudes decreasing in inverse proportion to spatial frequency (f). "AA" filters are thus "periodic" filters (as opposed to "low-pass" filters). Their spatial frequency selective "roll-off" is actually a result of the sinc [ Sin(pi * f) / (pi * f) ] function of the photosite aperture itself.

.

Regarding the validity of methods of individual photosite analysis as it relates to existing reality:
fvdbergh2501 wrote:
Detail Man wrote:

Have come to better understand that it is only of limited significance to consider the convolution of an individual aperture with diffraction patterns that extend beyond its perimeter - as what really occurs on the surface of an image-sensor is the spatial convolution of the diffraction patterns that exist from incoming beams of light together with 2-D pulse-trains of photosite apertures (with spatial repetition rate corresponding to the pixel-pitch, and duty-cycle proportional to the fill-factor), all even further complicated by the addition of an OLPF assembly.
Hmmm. As far as I understand it, those two views are identical under certain conditions. As discussed above, MTF Mapper represents the "incoming image" as a Cartesian plane with a white background and a black target rectangle. These are defined mathematically, i.e., no discretization to pixel boundaries involved at this level. This is equivalent to considering an infinite number of rays impinging on the sensor.

To measure the response of our sensor, all we have to do is to compute the integral of the "incoming image" multiplied by the system PSF, i.e., the convolution of the "incoming image" and the system PSF. This convolution only yields a single (scalar) result. The key concept is to realize that the "system PSF" I refer to here is a specific PSF centered on a specific photosite. (The PSF could vary from one photosite to the next, as it indeed does when an astigmatic lens is considered, but for simplicity we can assume that the PSF is the same for all photosites). We just have to repeat this process at every photosite to obtain the image captured by our sensor.

So rather than following the incoming light, and modelling how it is spread by diffraction and OLPFs in the "forward" direction, we run the process in reverse. Thus, each photosite PSF is instead multiplied and integrated over the entire incoming image. Because our incoming image is defined mathematically, we can do this to any required degree of accuracy, essentially simulating the "infinite number of rays" view. Since this demonstrably works, it implies that the system response is indeed fully defined by considering the PSF from the viewpoint of a single photosite.

The real catch, it would seem, is in what the input image is. In other words, the usefulness of all this analysis comes from observing the response to a known input. If both the "incoming image" and the photosite PSF are known, then the photosite-centric view is identical to the multiple-photosite pulse-train view. Because the step-edge input is so widely used, we have become accustomed to working with only the system PSF in relative comparisons.
http://www.dpreview.com/forums/post/51301707
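A minimal Python sketch of that "reverse", photosite-centric procedure, under toy assumptions (a 1-D step-edge scene and a Gaussian stand-in for the lumped diffraction + OLPF + aperture PSF; this is not MTF Mapper's actual code):

import numpy as np

def scene(x):
    # analytic "incoming image": a dark/bright step edge at x = 0 (x in units of the photosite pitch)
    return np.where(x < 0.0, 0.1, 0.9)

def system_psf(x, sigma=0.7):
    # stand-in system PSF: diffraction, OLPF and aperture lumped into a Gaussian of assumed width
    return np.exp(-0.5 * (x / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def sample_sensor(n_photosites=12):
    centres = np.arange(n_photosites) - n_photosites / 2.0 + 0.5
    x = np.linspace(-10.0, 10.0, 4001)  # fine grid for the numerical integral
    dx = x[1] - x[0]
    # each photosite's output is the integral of the scene weighted by the PSF centred on it
    return centres, [np.sum(system_psf(x - c) * scene(x)) * dx for c in centres]

for c, v in zip(*sample_sensor()):
    print(f"photosite at {c:+.1f} pitches: {v:.3f}")

Refining the integration grid changes nothing material, which is the point being made: once the scene is defined analytically, the per-photosite PSF view reproduces the "infinite number of rays" result.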

.

Regarding whether we are presently approaching the point at which diffraction extinction has some effect upon the upper spatial frequencies of the composite system spatial frequency (MTF) response:

Assuming a "fudge factor" (K) for a Bayer-arrayed, CFA image-sensor (which is likely a bit wider still, due to the fact that most de-mosaicing algorithms utilize wider than 2x2 arrays in their interpolations).

I calculate the "critical" (minumum) Photosite Aperture dimension to be:

Pa = ( Pa / ( K * Pp ) ) * ( Fz ) * ( W * N )

where:

Pa is Photosite Aperture;

K is the Bayer-arrayed and de-mosaiced "fudge factor";

Pp is Photosite Pitch;

Fz is the fraction of the spatial sampling frequency (which is equal to the reciprocal of Photosite Pitch) at which the first zero magnitude response occurs in the composite Optical ("AA") Filter combined (convolved in the spatial domain, multiplied in the spatial frequency domain) with the Photosite Aperture;

W is the Wavelength;

N is the F-Ratio of the lens-system.

.

Solving for the simple case of a 100% Fill Factor (Photosite Aperture equals Photosite Pitch), setting the value of K to a conservative value equal to 2, and setting the value of Fz to 1/2 (the strongest possible "AA" filter, resulting in a zero magnitude response at the Nyquist spatial frequency), the identity presented above simplifies to the following form:

Pa = ( W * N ) / 4

Re-arranging to solve for the maximum F-Ratio (N) as a function of Wavelength (W) and Photosite Aperture (Pa):

Nmax = ( 4 * Pa ) / W

.


Assigning a value of 700 nm for Wavelength, and the 4.7 Micron Photosite Pitch of the Nikon D800:

Nmax = ( 4 * 4.7 E-6 ) / ( 700 E-9 ) = 26.86

Note that in a case where no optical ("AA") filtering exists at all (as the D800E model may or may not actually represent), Nmax is reduced by a factor of 2 to a value of 13.43. On the other hand, in such a case, diffraction through a circular aperture opening (as modelled here) functions as an "anti-aliasing" filter that is indeed more effective than optical filtering (because it does not exhibit a periodic spatial frequency magnitude response).

.

Assigning a value of 700 nm for Wavelength, and the 3.73 Micron Photosite Pitch of the Oly E-M5:

Nmax = ( 4 * 3.73 E-6 ) / ( 700 E-9 ) = 21.31
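A one-function Python check of the Nmax identity above, with K = 2 and Fz = 1/2 as assumed (the pitches and the 700 nm wavelength are the values quoted):

def n_max(pitch_m, wavelength_m=700e-9, k=2.0, fz=0.5):
    # Pa = (1/K) * Fz * W * N at 100% fill, solved for N; with K = 2 and Fz = 1/2 this is 4 * Pa / W
    return pitch_m * k / (fz * wavelength_m)

print(round(n_max(4.7e-6), 2))   # Nikon D800 pitch   -> 26.86
print(round(n_max(3.73e-6), 2))  # Olympus E-M5 pitch -> 21.31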

.


Thus, it does appear that (for lens-system F-Numbers of 22.627), decreasing Photosite dimensions below 3.960 Microns (may) begin to present some limitations at the upper bounds of the composite system spatial frequency (MTF) response. Please note that this does not constitute a "hard and fast limit" at anything other than the very highest spatial frequencies.

DM ... :P
Useful sources and a nice piece of work, thanks.

--
Bob
 
Detail Man wrote:
Mark Scott Abeln wrote:
quadrox wrote:

Will Lenses for larger formats generally be sharper relative to sensors size than lenses for smaller sensors?

My intuition says no, there won't be much of a difference. And if my intuition is correct, then I am really wondering where the supposed superior resolution for larger formats is coming from. I appreciate any answers that make this clear to me!
Diffraction occurs along edges. A big lens with a big opening will have less diffraction than a small lens with a small opening.

It is a simple matter of geometry. A lens with a diameter d will have a surface area of π(d/2)^2, while its circumference (which is the size of the edge) is πd. The relative amount of diffraction of a lens is therefore the circumference divided by the area, which is 4/d. The relative amount of diffraction is therefore inversely proportional to the lens diameter: a small lens will have more diffraction, while a large lens will have less.

A big sensor of a given f/stop will have a larger lens for any given angle of view compared to a smaller sensor, therefore the big sensors will typically have less diffraction and potentially greater sharpness.
I think that all that matters when it comes to diffraction is the F-Ratio - the lens-system Focal Length divided by the diameter of the entrance-pupil (the size that the mechanical aperture appears to be when looking into the front element of the lens-system from the outside) - multiplied by the Wavelength. I don't think that the aperture formed by the lens-system front element matters.
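To put numbers on that, a tiny Python sketch of the Airy-disk relation referenced in this exchange (first-minimum radius = 1.21967 * Wavelength * F-Ratio), which contains no reference to the physical size of the lens; the 550 nm wavelength is an illustrative assumption:

def airy_radius_um(f_number, wavelength_nm=550.0):
    # radius of the first Airy minimum at the sensor, in microns
    return 1.21967 * (wavelength_nm / 1000.0) * f_number

for n in (2.8, 5.6, 11.0, 22.0):
    print(f"f/{n:<4} -> first Airy minimum at about {airy_radius_um(n):.2f} microns")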
Mark,

I would like to avail myself of (and learn from) your clearly deeper understanding (relative to my own) regarding matters of lens-system design and characteristics. In a subsequent post, you stated:

If you take the general formula for diffraction within a circular aperture (see here ), and simplify it to the limiting case of theta -> zero — which had better be close to zero otherwise we wouldn’t get an image — we get the same 4/d (multiplied by a factor of the wavelength) as calculated above.

Real lenses especially of complicated design, as well as noncircular apertures would certainly generate more diffraction than the formula suggests. But we ought to expect that real lens designs, as much as possible, attempt to approximate a simple circular aperture, otherwise we may not get a distinct image at all, and we’d possibly also get bad looking bokeh too. I’ve never seen a square lens nor a triangular one. But this is a simple demonstration that we ought to expect much less diffraction in larger lenses. We sometimes find that compact point-and-shoot cameras are diffraction limited even while wide open, while normal large format lenses have little diffraction even when stopped down greatly.


My question is whether my expressed assumption (quoted above) that the mechanical aperture opening (let's assume of a circular shape for simplicity in this case) within a lens-system is typically dominant in determining diffraction effects (where the simplification of the Airy disk radius dimensions being proportional to 1.21967 times the product of Wavelength multiplied by F-Ratio is given), ...

... or whether the apertures formed by the dimensions of larger lens element(s) (also) typically come into play in what are numerically significant amounts - such that the observable and measurable entrance-pupil diameter does not (necessarily) constitute an accurate dimension by which the numerical value of the F-Ratio can (from Focal Length divided by entrance-pupil diameter) reliably be calculated ?

If such is the case, what kind of magnitude of percentage errors might you expect to be possibly the case (dependent upon factors such as the diameters of lens-system elements) ?

Thank you for your time and consideration,

DM ... :P
 
Iliah Borg wrote:
Tony Beach wrote:
Mark Scott Abeln wrote:
Iliah Borg wrote:
In the long run it would seem probable that medium/large format would become irrelevant - depending on exactly how much current sensors and lenses can be improved.
One of the reasons why MF/LF may never become irrelevant is front and rear standard movements.
Tilts, shifts, and swings as Iliah mentioned, once an essential part of the photographer’s toolkit, are now extremely rare.
Yeah, converging lines are something that always bother me, especially for landscapes.
Imagine you are shooting holding your camera over your head, in the crowd. Shifting the lens up and tilting it down gives very nice perspective for the shot and much better depth of field.
It's been a while, but I have tried exaggerating the converging lines by shifting the lens as you suggest. In your scenario it would be almost as if you are looking straight down at the crowd. It would be nice to have a screen that tilts for this approach, and it would be nice to be able to do it with a smaller camera (LF seems practically out of the question).

Being able to be further away from the intended focus plane by shifting does increase DOF, and that is something I also like about having that capability, though I always take into account that perspective changes based on where the camera is and not based how much the lens is shifted (so composition first, then worry about DOF, at least for me).
 
quadrox wrote:

In the end that means that medium format and large format only hold an advantage as long as DSLR lenses are not diffraction limited. In the long run it would seem probable that medium/large format would become irrelevant - depending on exactly how much current sensors and lenses can be improved.
More lpph will always give an advantage to the larger format when it comes to resolution.
 
Imagine you are shooting holding your camera over your head, in the crowd. Shifting the lens up and tilting it down gives very nice perspective for the shot and much better depth of field.
In your scenario it would be almost as if you are looking straight down at the crowd.
It depends on the amount of tilt and the focal length; with an 85mm you can get an actual view of the person(s) going through the crowd.
It would be nice to have a screen that tilts for this approach,
Someday they will make usable goggles ;)
 
Detail Man wrote:

... or whether the apertures formed by the dimensions of larger lens element(s) (also) typically come into play in what are numerically significant amounts - such that the observable and measurable entrance-pupil diameter does not (necessarily) constitute an accurate dimension by which the numerical value of the F-Ratio can (from Focal Length divided by entrance-pupil diameter) reliably be calculated ?
Deeper understanding? Ummmm…

Well, it appears that it does not, and the problem gets worse with more lens complexity. Simply looking into a lens when it is stopped down slightly shows that light will likely get diffracted not only by the aperture, but also by other stuff downstream. By what percentage? Hard to say, but certainly by a photographically significant amount. Since the lens no longer appears optically circular at wide apertures, we ought to expect some interesting phenomena, especially towards the corners of the image, where the lens looks like a mandorla or a vesica piscis at those angles.

As you stop down the lens, most or all of the extra junk is lost, giving us a more circular lens — or at least a regular polygon — at that f/stop.

I think that largely for this reason lenses look sharper when stopped down by a stop or two.

--
http://therefractedlight.blogspot.com
 
Bobn2 wrote:

You have eaten McHugh's nonsense whole and now won't give it up. He and you are confused, not me.
Since I suppose that's possible, I'll try and limit my response to points of clarification.
"Pick a threshold for a "significant" decrease in resolution. For argument's sake, I'll pick 0.80 relative to the maximum, which I'm pretty sure would be noticeable."
'For argument's sake' means 'arbitrary'. You have presented no argument as to why 0.8 is significant, nor why 0.8 of different quantities would be equally significant. So, it is an arbitrary number, picked out of a hat, and arguments based on it mean nothing.
The number is a dimensionless ratio of two quantities with the same units, both of which are measured resolutions.

The number 0.8 was chosen based on anecdotal reports that people can reliably see 15--20% more linear resolution. You yourself seemed to agree that differences of 17% are approaching "unnoticeable". Hence, I chose 0.8 so that I could illustrate the rest of the argument. The exact value chosen isn't essential to the concept.
Let's see your 'math' convolving the Bayer/AA filter with the Airy disc. Have you done it, or are you just talking about it?
I have indeed done it, which is what I'd expect anyone to do before talking about it.
the sensor with higher resolution loses more resolution due to diffraction. Not just in absolute resolution, but also in percentage of maximum resolution, meaning that the higher resolution sensor is not just "losing more because it had more to give."
It is exactly losing more because it had more to give.
The higher-resolution sensor is losing MORE than an amount proportional to its greater resolution.
Your 'limit' defines something that cannot be observed, that is arbitrarily defined for no cogent reason.
Once a threshold is specified, the "limit," as defined, can be "observed" because it can be determined unambiguously from a set of measurements. The choice of 0.8 has been explained several times now. If it turns out we find a better way of specifying the threshold (e.g., double blind testing or human visual measurements), then we can certainly use a more useful threshold than 0.8.
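For what such a calculation might look like (a Python sketch of my own, not either poster's actual math): combine a diffraction-limited lens MTF with the photosite-aperture-plus-OLPF response quoted earlier in the thread, take MTF50 as the resolution measure, and find where it falls below 0.8 of its own peak. The two pitches and the 550 nm wavelength are illustrative stand-ins for a lower- and a higher-MP sensor:

import numpy as np

def diffraction_mtf(f_lpmm, n, wavelength_mm=550e-6):
    # diffraction-limited MTF of a circular aperture
    s = np.clip(f_lpmm * wavelength_mm * n, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(s) - s * np.sqrt(1.0 - s * s))

def sensor_mtf(f_lpmm, pitch_mm):
    # 100% fill photosite aperture convolved with a Nyquist-zero OLPF: |sinc(2 * f * pitch)|
    return np.abs(np.sinc(2.0 * f_lpmm * pitch_mm))

def mtf50(n, pitch_mm):
    f = np.linspace(0.1, 400.0, 8000)
    sys = diffraction_mtf(f, n) * sensor_mtf(f, pitch_mm)
    return f[np.argmax(sys < 0.5)]  # first frequency at which the system MTF drops below 0.5

apertures = np.arange(2.8, 22.1, 0.1)
for label, pitch in (("low-MP ", 8.4e-3), ("high-MP", 4.2e-3)):  # pitch in mm (assumed)
    res = np.array([mtf50(n, pitch) for n in apertures])
    below = apertures[np.argmax(res < 0.8 * res.max())]
    print(f"{label}: peak MTF50 {res.max():.0f} lp/mm, below 80% of its own peak from about f/{below:.1f}")

On these assumptions the higher-MP sensor resolves more at every aperture, yet its MTF50 drops below 80% of its own peak at a wider aperture than the lower-MP sensor's does - which is essentially the pair of non-mutually-exclusive claims summarized at the top of the thread.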
First of all, it seems inconsistent to argue against thresholds and then invoke DOF, since DOF is only meaningful when a "threshold" has been defined for what is "acceptably sharp."
I didn't argue for any particular threshold for DOF; choose your own CoC.
You can choose your own threshold for diffraction "limiting" just as you choose your own CoC.

In fact, the CoC is a good example of how a "threshold" works. There's no sudden moment where objects in the scene go "out of focus." Once we define a CoC, however, there is a threshold for talking about "in focus" and "out of focus" zones.
Secondly, since the effects of defocus and diffraction are combined, it follows that a smaller aperture could, at least in principle, result in less maximum sharpness AND less DOF.
Also not true. You will only get 'maximum sharpness' at the point of focus, and if the lens is essentially diffraction limited, that sharpness will be defined by diffraction plus the pixellation blur (which decreases as pixel density is increased).
It's possible for an image to be acceptably sharp NOWHERE, even at the point of focus (e.g., if diffraction is severe enough that NO points are smaller than the CoC). I believe there is an example of such a photo (shot at f/22) posted elsewhere in this thread.
If you had truly done the 'math' to convolve the diffraction PSF with the AA PSF you would know that there is no such 'limit' at which the diffraction blur suddenly becomes visible.
I have never said that the onset of diffraction was a "sudden" phenomenon. What I have said is that it can be useful to set a threshold for an "acceptable" resolution loss to diffraction, just as a CoC is a threshold for an "acceptable" amount of defocus.
If there is a 'threshold', it is not a proportion of the maximum MTF50. What it is, is an output resolution relative to a given output image size - that is, it depends on how big you view the image and not on the peak resolution of the lens. Defining it in terms of peak resolution of the lens is absurd.
The advantage of defining a threshold in terms of percentage is that it scales with the peak resolution of the lens-sensor system and also with the output image size.
The charts you showed suggest that a sensor array with higher spatial resolution is proportionally MORE affected by diffraction.
No, it suggests that a sensor array with higher spatial resolution gets closer to capturing the full resolution given by the lens.
This does not contradict what I said above.
Were you truly 'familiar' with the maths you would know that and understand what I was saying straight away.
...
I would be wary of using the term 'stands to reason' when your arguments are so devoid of it.
...
Check back and see who started the 'little personal jabs'. It wasn't me. So if comical they are, the joke is on you. Especially since everyone who actually knows anything about the topic of diffraction knows you are talking garbage.
I agree, there must be some kind of cosmic joke going on here. To think that, years after doing a Ph.D. on detecting mutual refraction and nonlinear interactions between one kind of waves, using another kind of waves, I'm still being told that I don't understand diffraction... ;)
 