# FZ200 Diffraction Limit - Panasonic Tech Service

Ron Tolmie wrote:

Jerry:

What the equation states is that if you want to achieve a given angular resolution then there is a corresponding aperture dimension that will produce that resolution so long as some other factor (like optical aberrations) does not override the consideration.

Looking at it the other way around, if the aperture diameter is 3mm and the lens is intended to be used for imaging a "normal" field of view (equivalent to what you get with a 50mm lens on a full frame camera) then the image will be sharp, and it doesn't matter what the focal length of the lens is. You might have a different opinion on what constitutes a "sharp" image, but in that case the dimension might be a little bigger or a little smaller than 3mm, but the point is that there is a particular diameter that will satisfy your objective.

You have not responded to the following inquiries addressed to you (posted on this thread):

http://www.dpreview.com/forums/post/52070220

http://www.dpreview.com/forums/post/52064518

Depth of Field is inversely proportional to the diameter of the entrance-pupil (the virtual aperture as seen looking into the front element of the lens-system) - which is the "aperture diameter" used in the derivation of F-Ratio ("F-Number").

Note that the "virtual aperture" is not the same physical size as the actual mechanical aperture opening within the lens-system, and its measurable physical diameter changes in the case of a variable Focal Length lens-system.

A pin-hole camera can achieve a very high numerical value of Depth of Field - but always at the price of (also) very significant loss in the spatial frequency (MTF) response magnitude due to diffraction effects. Deep DOF and low lens diffraction effects (upon MTF response) are mutually exclusive.

The intensity of the Fraunhofer diffraction pattern of a circular aperture (the Airy pattern) is given by the squared modulus of the Fourier transform of the circular aperture:

I(theta) = I0 * [ 2 * J1( k*a*sin(theta) ) / ( k*a*sin(theta) ) ]^2

where:

I0 is the maximum intensity of the pattern at the Airy disc center,

J1 is the Bessel function of the first kind of order one,

k = 2*Pi / W is the wavenumber (W being the wavelength),

a is the radius of the aperture, and

theta is the angle of observation, i.e. the angle between the axis of the circular aperture and the line between aperture center and observation point.

k*a*sin(theta) ~= Pi*q / ( W * N ), where q is the radial distance from the optics axis in the observation (or focal) plane and

N = R / d (d = 2a is the aperture diameter, R is the observation distance) is the f-number of the system.

http://en.wikipedia.org/wiki/Airy_disk#Mathematical_details
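As a purely illustrative aside (mine, not part of the original posts), the bracketed Airy expression [ 2*J1(x)/x ]^2 can be evaluated with nothing but the Python standard library, using a standard integral representation of the Bessel function:

```python
import math

def bessel_j1(x):
    """Bessel function of the first kind, order one, via the integral
    representation J1(x) = (1/pi) * integral_0^pi cos(tau - x*sin(tau)) dtau,
    evaluated with the trapezoid rule (the integrand's derivative vanishes
    at both endpoints, so the trapezoid rule converges very quickly)."""
    n = 2000
    h = math.pi / n
    total = 0.5 * (math.cos(0.0) + math.cos(math.pi - x * math.sin(math.pi)))
    for i in range(1, n):
        tau = i * h
        total += math.cos(tau - x * math.sin(tau))
    return total * h / math.pi

def airy_relative_intensity(x):
    """I(theta)/I0 = [2*J1(x)/x]^2, with x = k*a*sin(theta)."""
    if abs(x) < 1e-6:
        return 1.0          # limit of 2*J1(x)/x as x -> 0
    return (2.0 * bessel_j1(x) / x) ** 2

print(airy_relative_intensity(0.0))      # 1.0 at the pattern center
print(airy_relative_intensity(3.8317))   # ~0 at the first dark ring
```

The first zero of J1 at x ~ 3.8317 corresponds to the familiar Airy-disc radius of 1.22*W*N in the focal plane.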

.

Cameras like the FZ200 and ZS20 have pushed the choice of sensor size right down to the point where diffraction is a critically important design consideration (although certainly not the only one!). They work well, especially if you apply some simple post processing. I printed up a batch of 11x14" ZS20 prints this afternoon and they were sharp enough to satisfy me. However, I did not print any of the images that had used the longest telephoto settings because they were not sharp enough for my tastes.

I would argue that these cameras are operating right at the limit of what is practical. Examining the impact of diffraction is one of the most basic considerations that determine whether they perform satisfactorily - or whether we need to revert to using much larger sensors like M4/3 or full frame.

What the equation implies is that we really only need to make modest changes in the aperture size (and hence the camera size) to get away from the diffraction limitation. A really big sensor may offer other advantages, such as wider ISO settings and a wider choice of apertures, but I am happy to give up those advantages if it means that I can put a wide-zoom camera in my pocket.

My sincere apologies. "What one sees" is clearly not "what one gets" with the DPReview editor. In the quoted version below, I have re-inserted the characters lost in the DPReview display system.

Detail Man wrote:

Ron Tolmie wrote:

What the equation states is that if you want to achieve a given angular resolution then there is a corresponding aperture dimension that will produce that resolution so long as some other factor (like optical aberrations) does not override the consideration.

Looking at it the other way around, if the aperture diameter is 3mm and the lens is intended to be used for imaging a "normal" field of view (equivalent to what you get with a 50mm lens on a full frame camera) then the image will be sharp, and it doesn't matter what the focal length of the lens is. You might have a different opinion on what constitutes a "sharp" image, but in that case the dimension might be a little bigger or a little smaller than 3mm, but the point is that there is a particular diameter that will satisfy your objective.

You have not responded to the following inquiries addressed to you (posted on this thread):

http://www.dpreview.com/forums/post/52070220

http://www.dpreview.com/forums/post/52064518

Depth of Field is inversely proportional to the diameter of the entrance-pupil (the virtual aperture as seen looking into the front element of the lens-system) - which is the "aperture diameter" used in the derivation of F-Ratio ("F-Number").

Note that the "virtual aperture" is not the same physical size as the actual mechanical aperture opening within the lens-system, and its measurable physical diameter changes in the case of a variable Focal Length lens-system.

A pin-hole camera can achieve a very high numerical value of Depth of Field - but always at the price of (also) very significant loss in the spatial frequency (MTF) response magnitude due to diffraction effects. Deep DOF and low lens diffraction effects (upon MTF response) are mutually exclusive.

The intensity of the Fraunhofer diffraction pattern of a circular aperture (the Airy pattern) is given by the squared modulus of the Fourier transform of the circular aperture:

I(theta) = I0 * [ 2 * J1( k*a*sin(theta) ) / ( k*a*sin(theta) ) ]^2

where:

I0 is the maximum intensity of the pattern at the Airy disc center,

J1 is the Bessel function of the first kind of order one,

k = 2*Pi / W is the wavenumber (W being the wavelength),

a is the radius of the aperture, and

theta is the angle of observation, i.e. the angle between the axis of the circular aperture and the line between aperture center and observation point.

k*a*sin(theta) ~= Pi*q / ( W * N ), where q is the radial distance from the optics axis in the observation (or focal) plane and

N = R / d (d = 2a is the aperture diameter, R is the observation distance) is the f-number of the system.

DM ...

Bridge Cameras

As Detail Man has indicated, there are various factors that affect resolution. The two most important factors, however, are the diffraction limit of the lens and the resolution of the sensor. At low F-numbers, it is the pixel density of the sensor that limits resolution. Around f/3.5 to f/4.0, the lens resolution and the sensor resolution become comparable. When you get to higher F-numbers, it is the lens diffraction that limits the resolution. So, perhaps the Panasonic tech people were trying to indicate that diffraction effects begin to dominate once the F-number gets above approximately 4. This is the same for all of the small-sensor cameras.
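A rough back-of-envelope illustration of that crossover (my own sketch; the 550 nm wavelength and 1.5 micron pixel pitch are assumed values typical of these small sensors) compares the Airy-disc first-zero diameter, 2.44*W*N, with the pixel pitch:

```python
# Airy-disc first-zero diameter: d = 2.44 * wavelength * f-number
wavelength = 550e-9    # assumed mid-visible wavelength, metres
pitch = 1.5e-6         # assumed small-sensor pixel pitch, metres

for n in (2.8, 4.0, 5.6, 8.0):
    d = 2.44 * wavelength * n
    print(f"f/{n}: Airy diameter {d*1e6:.2f} um = {d/pitch:.1f} pixels")
```

By f/4 the Airy disc already spans several pixels on such a sensor, which is consistent with diffraction beginning to dominate around that aperture.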

DSLRs

It is interesting that the resolution of large-sensor cameras (including DSLRs with full-frame or APS-C size sensors) is dominated by the sensor until you get to very long focal lengths. (Focal length enters the problem because of the interplay of the lens and sensor resolutions.) Because of the low pixel density on a DSLR, it requires a very expensive long-focal-length lens to match the overall resolution of a bridge camera. There is an article that I wrote on this topic with some simple equations in the appendix for calculating the approximate 9% MTF resolution, including the lens and sensor components and their combination:

http://www.dpreview.com/articles/4110039430/detail-of-sx3040-vs-compact-slr

The equations are for white light and are not as detailed as using Modulation Transfer Functions (MTF), but they are much simpler and I have found them to match what I see in tests for telephoto, macro and "telemacro" situations.

Other Related Articles and Postings:

My article on Telemacro using the same equations:

http://www.dpreview.com/articles/8819494033/macro-vs-telemacro-with-sx3040

My article on Macro Test Targets:

http://www.dpreview.com/articles/5039116594/my-search-for-an-inexpensive-macro-test-target

More on Macro Test Targets (Challenge: How Small Can You See):

http://forums.dpreview.com/forums/post/50011528

Stephen Barrett wrote:

Bridge Cameras

As Detail Man has indicated, there are various factors that affect resolution. The two most important factors, however, are the diffraction limit of the lens and the resolution of the sensor. At low F-numbers, it is the pixel density of the sensor that limits resolution. Around f/3.5 to f/4.0, the lens resolution and the sensor resolution become comparable. When you get to higher F-numbers, it is the lens diffraction that limits the resolution. So, perhaps the Panasonic tech people were trying to indicate that diffraction effects begin to dominate once the F-number gets above approximately 4. This is the same for all of the small-sensor cameras.

DSLRs

It is interesting that the resolution of large-sensor cameras (including DSLRs with full-frame or APS-C size sensors) is dominated by the sensor until you get to very long focal lengths. (Focal length enters the problem because of the interplay of the lens and sensor resolutions.) Because of the low pixel density on a DSLR, it requires a very expensive long-focal-length lens to match the overall resolution of a bridge camera. There is an article that I wrote on this topic with some simple equations in the appendix for calculating the approximate 9% MTF resolution, including the lens and sensor components and their combination:

http://www.dpreview.com/articles/4110039430/detail-of-sx3040-vs-compact-slr

The equations are for white light and are not as detailed as using Modulation Transfer Functions (MTF), but they are much simpler and I have found them to match what I see in tests for telephoto, macro and "telemacro" situations.

Other Related Articles and Postings:

My article on Telemacro using the same equations:

http://www.dpreview.com/articles/8819494033/macro-vs-telemacro-with-sx3040

My article on Macro Test Targets:

http://www.dpreview.com/articles/5039116594/my-search-for-an-inexpensive-macro-test-target

More on Macro Test Targets (Challenge: How Small Can You See):

http://forums.dpreview.com/forums/post/50011528

Stephen,

Thank you for your interesting post. Will be having a look at your published articles with interest.

The spatial frequency at which the combination of Photosite aperture/pitch and the optical low-pass ("AA") filter reaches its first zero response has a significant effect - a sensor with no "AA" filter is most "sensitive" to higher F-Numbers and Wavelengths, while stronger "AA" filters are less "sensitive".

Additionally since even the crudest de-mosaicing algorithms combine 2x2 Bayer-arrayed photosites, a "fudge factor" on the order of (at least) 2 seems reasonable to add to the model.

My limited understanding of more sophisticated and more commonly used de-mosaicing algorithms is that they interpolate photosite image-data wider than 2x2 (and up to the 4x4 photosite spatial periodicity of color-filtered Bayer-arrayed photosites).

As a result, my calculations (below) take into account the first zero magnitude spatial frequency of the (convolution of) Photosite aperture/pitch combined with the optical low-pass ("AA") filter, as well as adding a (de-mosaicing-related) "fudge factor" with a conservative value of 2.

.

I calculate the "critical" (minimum) Photosite Aperture dimension to be:

Pa = ( Pa / ( K * Pp ) ) * ( Fz ) * ( W * N )

where:

Pa is Photosite Aperture;

K is the Bayer-arrayed and de-mosaiced "fudge factor";

Pp is Photosite Pitch;

Fz is the fraction of the spatial sampling frequency (which is equal to the reciprocal of Photosite Pitch) at which the first zero magnitude response occurs in the composite Optical ("AA") Filter combined (convolved in the spatial domain, multiplied in the spatial frequency domain) with the Photosite Aperture;

W is the Wavelength;

N is the F-Ratio of the lens-system.

.

Solving for the simple case of a 100% Fill Factor (Photosite Aperture equals Photosite Pitch), setting the value of K to a conservative value of 2, and setting the value of Fz to 1/2 (the strongest possible "AA Filter", resulting in a zero magnitude response at the Nyquist spatial frequency), the identity presented above simplifies to the following form:

Pa = ( W * N ) / 4

Re-arranging to solve for the maximum F-Ratio (N) as a function of Wavelength (W) and Photosite Aperture (Pa):

Nmax = ( 4 * Pa ) / W

http://www.dpreview.com/forums/post/51858399

.

For the DMC-FZ200, Pa ~ 1.5 Microns. For a worst-case Wavelength (W) of 700 nm, it appears that (in the base case) diffraction "extinction" is not an issue until F=8.571 (which is, in fact, above the maximum F-Number setting of F=8.0 on the FZ200).

The above case is for an optical low-pass ("AA") filter yielding a zero response at the Nyquist frequency (1/2 of the spatial sampling frequency) itself; a more likely situation is one where the optical low-pass ("AA") filter yields a zero response at (around) 2/3 of the spatial sampling frequency. In that case, applying the above calculations results in a maximum value of F=5.714.

http://www.dpreview.com/forums/post/52069420
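The base-case figure can be reproduced in a couple of lines (a sketch of mine, using the simplified identity Nmax = ( 4 * Pa ) / W from above):

```python
def nmax(pa, w):
    """Nmax = (4 * Pa) / W: the simplified base case with K = 2,
    100% fill factor, and Fz = 1/2 (AA-filter zero at Nyquist)."""
    return 4.0 * pa / w

# DMC-FZ200: photosite aperture ~1.5 um, worst-case wavelength 700 nm
print(round(nmax(1.5e-6, 700e-9), 3))   # -> 8.571
```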

.

It also seems important to take into account the specific Focal Length of the FZ200 lens-system being considered - as net composite system spatial frequency (MTF) response characteristics are (also) going to depend on optical lens-system aberrations at various Focal Lengths.

DM ...

Thanks Detail Man,

Your approach is much more sophisticated than mine, which is quite crude.

I don't really know anything about the low-pass filters in cameras, or about de-mosaicing algorithms or details about Bayer arrays. What I have called "sensor resolution" also has an implicit "fudge factor" of 2, so maybe my "sensor resolution" could be considered to include some of these factors that you mention.

Combining my "lens resolution" and "sensor resolution" in quadrature has some cited precedent, but I have seen an argument ( http://www.normankoren.com/Tutorials/MTF.html ) that they should be combined linearly. For now though, I have kept the quadrature combination because it seems to match the resolutions that I see in my tests for a variety of situations (telephoto, macro & telemacro). In particular, the linear combination of factors predicts that the camera should not be able to resolve things that it can resolve, whereas the quadrature combination seems to work well. Because of the quadrature combination, only the larger of the two is noticeable when one is much larger than the other. The smaller factor only becomes noticeable when it grows to a size that is comparable to the larger one. This seems to match what people report seeing. For example, people do not report any diffraction effects at short focal lengths but, as focal length is increased so that images on the sensor are spread out over more pixels, the limits of lens resolution become apparent rather suddenly. The same thing seems to happen with change of aperture.
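The dominance behaviour described above is easy to see with two made-up blur sizes (illustrative numbers only, not measurements):

```python
import math

def combine_quadrature(a, b):
    """Root-sum-of-squares combination of two blur sizes."""
    return math.sqrt(a * a + b * b)

def combine_linear(a, b):
    """Simple arithmetic-sum combination, for comparison."""
    return a + b

# When one blur is much larger, quadrature is dominated by it ...
print(combine_quadrature(1.0, 10.0))  # ~10.05: the small term barely registers
# ... whereas a linear sum always adds the full smaller term.
print(combine_linear(1.0, 10.0))      # 11.0
# With comparable blurs, both combinations show a clear effect.
print(combine_quadrature(8.0, 10.0))  # ~12.81
```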

Most of us are probably unwilling or unable to deal with MTF functions and Bessel Functions, demosaicing algorithms etc. Is it possible to derive a simpler formula that combines several factors in order to compute resolution? Perhaps it would have to be calibrated for each camera + lens combination. The formulas that I have proposed seem to work well for my camera, but I don't really know about other cameras. Are these formulas reasonable, even if they are crude? Can they be corrected or refined? Any insight that you have on this would be much appreciated.

Stephen Barrett wrote:

Thanks Detail Man,

Your approach is much more sophisticated than mine, which is quite crude.

I don't really know anything about the low-pass filters in cameras, or about de-mosaicing algorithms or details about Bayer arrays. What I have called "sensor resolution" also has an implicit "fudge factor" of 2, so maybe my "sensor resolution" could be considered to include some of these factors that you mention.

Combining my "lens resolution" and "sensor resolution" in quadrature has some cited precedent, but I have seen an argument ( http://www.normankoren.com/Tutorials/MTF.html ) that they should be combined linearly. For now though, I have kept the quadrature combination because it seems to match the resolutions that I see in my tests for a variety of situations (telephoto, macro & telemacro). In particular, the linear combination of factors predicts that the camera should not be able to resolve things that it can resolve, whereas the quadrature combination seems to work well. Because of the quadrature combination, only the larger of the two is noticeable when one is much larger than the other. The smaller factor only becomes noticeable when it grows to a size that is comparable to the larger one. This seems to match what people report seeing. For example, people do not report any diffraction effects at short focal lengths but, as focal length is increased so that images on the sensor are spread out over more pixels, the limits of lens resolution become apparent rather suddenly. The same thing seems to happen with change of aperture.

Have seen Koren's statement before. Here is a thread started by one of the most knowledgeable posters on these forums (Marianne Oelund), and with some others (including Falk Lumo) posting. Marianne recommends combining the variances (squares) as you already do:

http://www.dpreview.com/forums/thread/3360636

http://www.dpreview.com/forums/post/50576749

If you have not already seen it, you would probably be interested in Falk Lumo's paper here:

http://www.falklumo.com/lumolabs/articles/sharpness/

PDF format of the very same text and graphics (much easier on the eyes to read):

http://www.falklumo.com/lumolabs/articles/sharpness/ImageSharpness.pdf

See the paper's Section 2.3 (Defocus) - which shows how much more complicated things get mathematically regarding the composite MTFs if/when any focusing-error exists. Important to know.

While Lumo is describing focus-error at the film/sensor plane locations, it seems to me that the "defocus" (resulting in an estimated Circle of Confusion diameter on the film/sensor plane) which results from human visual perception limitations - at a specified viewing-size (usually 10 Inches in the largest dimension) and viewing-distance (usually 25 cm) - would produce something of a similar mathematical model. Complex, because the human visual perception "contrast sensitivity function" is variable between persons, changes with aging, is light-level dependent, and represents a "band-pass" response of its own.

For more about common COC standards, read pages 1-4 of "Depth of Field in Depth" here:

http://www.largeformatphotography.info/articles/DoFinDepth.pdf

For (a bit) about the human visual "contrast sensitivity functions" effects, see:

http://www.bobatkins.com/photography/technical/mtf/mtf4.html

This thread (about human perceptions of "sharpness") and its references may also interest you:

http://www.dpreview.com/forums/thread/3135840

Most of us are probably unwilling or unable to deal with MTF functions and Bessel Functions, demosaicing algorithms etc. Is it possible to derive a simpler formula that combines several factors in order to compute resolution?

I don't think so (other than the combining of variances of the individual space-domain dimensions). The formulas for Diffraction, and for Photosite Aperture convolved with optical low-pass filters are not highly complicated. All one needs to do is to set the relevant parameters, and then multiply those two functions together (as real numbers in the spatial frequency domain).
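A minimal, self-contained sketch of that frequency-domain multiplication (my own illustration: it uses the textbook ideal-lens diffraction MTF and a sinc-shaped photosite-aperture MTF, and leaves out the "AA" filter and all aberrations):

```python
import math

def diffraction_mtf(f, wavelength, n):
    """Ideal circular-aperture diffraction MTF; cutoff at 1/(wavelength*N)."""
    s = f * wavelength * n          # normalized spatial frequency
    if s >= 1.0:
        return 0.0
    return (2.0 / math.pi) * (math.acos(s) - s * math.sqrt(1.0 - s * s))

def photosite_mtf(f, aperture):
    """|sinc| response of a square photosite aperture."""
    x = math.pi * f * aperture
    return 1.0 if x == 0 else abs(math.sin(x) / x)

def composite_mtf(f, wavelength, n, aperture):
    """Multiply the individual MTFs in the spatial frequency domain."""
    return diffraction_mtf(f, wavelength, n) * photosite_mtf(f, aperture)

# FZ200-like numbers: 1.5 um photosites, 550 nm light, f/4
for cyc_per_mm in (0, 100, 200, 300):
    f = cyc_per_mm * 1000.0         # cycles/mm -> cycles/m
    print(cyc_per_mm, round(composite_mtf(f, 550e-9, 4.0, 1.5e-6), 3))
```

The product can never exceed either factor alone, which is why combining the responses in the spatial frequency domain is so convenient.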

Here is a post where those identities are shown (with constants for different "strength" AA filters):

http://www.dpreview.com/forums/post/51323901

The assumption in the above identities is that Photosite Pitch equals Photosite Aperture (Fill Factor = 100%). For making rough calls as to when Diffraction MTF "extinction" effects impact the net composite spatial frequency (MTF) response, I added some other things:

I assume a "fudge factor" (K) for a Bayer-arrayed, CFA image-sensor (which is likely a bit wider still, due to the fact that most de-mosaicing algorithms utilize wider than 2x2 arrays in their interpolations).

I calculate the "critical" (minimum) Photosite Aperture dimension to be:

Pa = ( Pa / ( K * Pp ) ) * ( Fz ) * ( W * N )

where:

Pa is Photosite Aperture;

K is the Bayer-arrayed and de-mosaiced "fudge factor";

Pp is Photosite Pitch;

Fz is the fraction of the spatial sampling frequency (which is equal to the reciprocal of Photosite Pitch) at which the first zero magnitude response occurs in the composite Optical ("AA") Filter combined (convolved in the spatial domain, multiplied in the spatial frequency domain) with the Photosite Aperture;

W is the Wavelength;

N is the F-Ratio of the lens-system.

.

Solving for the simple case of a 100% Fill Factor (Photosite Aperture equals Photosite Pitch), setting the value of K to a conservative value of 2, and setting the value of Fz to 1/2 (the strongest possible "AA Filter", resulting in a zero magnitude response at the Nyquist spatial frequency), the identity presented above simplifies to the following form:

Pa = ( W * N ) / 4

Re-arranging to solve for the maximum F-Ratio (N) as a function of Wavelength (W) and Photosite Aperture (Pa):

Nmax = ( 4 * Pa ) / W

http://www.dpreview.com/forums/post/51858399

.

I use an Excel-type spreadsheet (which a friend created, and I have modified for my specific purposes) to perform the (basic) MTF calculations, and to create graphs with multiple individual plots within those graphs. I can email it to you if you want (PM me an email address if so). You would need to figure out on your own what is going on where with the individual variables used in the calculations, and then make modifications appropriate for your desired utilizations. There is no "users manual", and it has been a while since I messed around with it.

Perhaps it would have to be calibrated for each camera + lens combination. The formulas that I have proposed seem to work well for my camera, but I don't really know about other cameras. Are these formulas reasonable, even if they are crude?

They are crude. Whether they are "reasonable" depends entirely on the context, it would seem.

Can they be corrected or refined? Any insight that you have on this would be much appreciated.

One needs to be able to estimate the Photosite Aperture and the "strength" of the optical low-pass ("AA") filter. The Diffraction identity that I use is for an ideal lens (with a circular aperture opening).

Optical lens-system aberrations cannot be modelled using frequency domain multiplications (ray-tracing in the spatial domain has to be used instead), and such aberrations can only reduce the magnitude of the MTF responses.

So, the "ideal" identities are useful for comparing the interplay of diffraction, optical filtering, and photosite responses - but lens-aberrations, de-focusing, camera-motion, and de-mosaicing are all going to (also) affect results (at the RAW image-data level, prior to further processing).

Other (also) relevant factors (at the RAW image-data level) may well be optical characteristics of the (entire) optical "filter stack" [other than the birefringent ("AA") element], any micro-lens assemblies present, and optical properties of the semiconductor materials themselves. I have read that when Fill Factor is not equal to 100%, the optical properties of the semiconductor "diffusion layers" cannot be dealt with in the spatial frequency domain in a simple manner.

The greater our knowledge increases, the more our ignorance unfolds.

- John F. Kennedy

DM ...

Source: http://www.cg.tuwien.ac.at/research/theses/matkovic/node20.html

From: http://www.dpreview.com/forums/post/51534113

From my searches relating to a human "contrast sensitivity" model, this one is typical, and has an associated mathematical identity for use in modelling. Note its inevitable generality, however.

See Pages 41-45 (and the graphic on Page 45) of this paper:

... for more specific information regarding human perceptual CSFs for various cases/conditions.

DM ...

Stephen Barrett wrote:

Thanks Detail Man,

Your approach is much more sophisticated than mine, which is quite crude.

I don't really know anything about the low-pass filters in cameras, or about de-mosaicing algorithms or details about Bayer arrays. What I have called "sensor resolution" also has an implicit "fudge factor" of 2, so maybe my "sensor resolution" could be considered to include some of these factors that you mention.

Combining my "lens resolution" and "sensor resolution" in quadrature has some cited precedent, but I have seen an argument ( http://www.normankoren.com/Tutorials/MTF.html ) that they should be combined linearly. For now though, I have kept the quadrature combination because it seems to match the resolutions that I see in my tests for a variety of situations (telephoto, macro & telemacro). In particular, the linear combination of factors predicts that the camera should not be able to resolve things that it can resolve, whereas the quadrature combination seems to work well. Because of the quadrature combination, only the larger of the two is noticeable when one is much larger than the other. The smaller factor only becomes noticeable when it grows to a size that is comparable to the larger one. This seems to match what people report seeing. For example, people do not report any diffraction effects at short focal lengths but, as focal length is increased so that images on the sensor are spread out over more pixels, the limits of lens resolution become apparent rather suddenly. The same thing seems to happen with change of aperture.

Most of us are probably unwilling or unable to deal with MTF functions and Bessel Functions, demosaicing algorithms etc. Is it possible to derive a simpler formula that combines several factors in order to compute resolution? Perhaps it would have to be calibrated for each camera + lens combination. The formulas that I have proposed seem to work well for my camera, but I don't really know about other cameras. Are these formulas reasonable, even if they are crude? Can they be corrected or refined? Any insight that you have on this would be much appreciated.

Hi Stephen,

Though I can't recall if I posted a comment in a related thread, when it was drawn to my attention last year I read the following article with interest:

http://www.dpreview.com/articles/4110039430/detail-of-sx3040-vs-compact-slr

I too have adopted a relatively simple approach to assessing and measuring the resolution of a digital camera as described in my FZ50 report which is available for download as a 6 MB PDF file from here.

As discussed in Section 2 of that report, due to the effect of the edges of the lines of a black and white grid partially overlapping adjacent pixels, the resolution of a line pair, i.e. one black line and one white line, requires three pixels, i.e. 1.5 pixels per line width. Consequently the maximum resolution of a digital camera can be estimated with reasonable accuracy by dividing the number of pixels in the height of the sensor by 1.5.

Thus for the FZ200 which has a 4000 x 3000 pixel sensor the maximum resolution would be estimated to be 2000 lines per picture height, LPH. That value is within 5% and 10% respectively of the vertical resolution values for the JPEG and RAW images in the DPR FZ200 review.
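Jimmy's 1.5-pixels-per-line rule reduces to one line of arithmetic (values from the FZ200 example above):

```python
pixels_high = 3000           # FZ200 sensor height in pixels
pixels_per_line_width = 1.5  # ~3 pixels per black/white line pair

lph = pixels_high / pixels_per_line_width
print(lph)        # 2000.0 lines per picture height (LPH)
print(lph / 2)    # 1000.0 line pairs per picture height
```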

As discussed in that report, due to the effect of diffraction the resolution is reduced from the maximum value as the aperture is reduced. In addition, for large apertures, including the maximum value, there is some loss of resolution due to several factors, including shape imperfections and off-axis effects. Both of these effects can be seen in the following image.

For compact travel cameras such as the TZ30 (ZS20), in which the maximum aperture of the lens has been restricted to limit its physical size, the loss of resolution at maximum aperture may not be present. See for example the images included in my TZ30 report here.

I hope you will find my alternative approach of some interest.

Jimmy

J C Brown

Thank you, Detail Man, for your detailed response and for the wealth of references.

It is going to take me some time to absorb all of this.

J C Brown wrote:

Thus for the FZ200 which has a 4000 x 3000 pixel sensor the maximum resolution would be estimated to be 2000 lines per picture height, LPH. That value is within 5% and 10% respectively of the vertical resolution values for the JPEG and RAW images in the DPR FZ200 review.

Just to clarify, when you say "2000 lines per picture height", I assume that this means the same thing as 1000 line-pairs per picture height (1000 white and 1000 black).

Thank you for your comments, Dr. Brown.

I have downloaded your paper and will read it with interest.

Stephen

Stephen Barrett wrote:

Thank you, Detail Man, for your detailed response and for the wealth of references.

It is going to take me some time to absorb all of this.

You are welcome. The bit about calculating the square-root of the sum of the squares of "Blur diameters" works for Gaussian distributions. The Standard Deviation of the convolution (which is what is happening in the spatial domain) of two Gaussian distributions is equal to the square-root of the sum of the squares of the individual Standard Deviations (which relate to the "Blur diameters") of the individual Gaussian distributions.
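That property is easy to verify empirically: summing two independent Gaussian variables is equivalent to convolving their distributions, and the result's Standard Deviation is the root-sum-of-squares of the individual ones. A quick Monte Carlo sketch (the sigma values are arbitrary choices of mine):

```python
import math
import random

random.seed(42)            # deterministic for repeatability
s1, s2 = 2.0, 3.0          # standard deviations of the two "blurs"
n = 200_000

# Adding independent Gaussian samples == convolving the two distributions.
samples = [random.gauss(0.0, s1) + random.gauss(0.0, s2) for _ in range(n)]
mean = sum(samples) / n
std = math.sqrt(sum((x - mean) ** 2 for x in samples) / n)

print(round(std, 2))                       # sample estimate
print(round(math.sqrt(s1**2 + s2**2), 2))  # 3.61 = sqrt(2^2 + 3^2)
```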

While a Gaussian distribution can be made to fit the inner main-lobe of an Airy disk function fairly closely (only), the side-lobes of the Airy disk extend quite a bit farther outwards. Therefore, the Bessel function of the Airy disk pattern differs enough from a Gaussian that the approximation is poor in the outer tails of the (space-domain) Point Spread Function, abbreviated as PSF (and correspondingly at low spatial frequencies in the MTF).

In fact, experiments have indicated that some photon patterns are most closely described by a particular type of Gamma function. The deeper one goes, the more complex things appear.

The MTF of diffraction through a circular aperture is not Gaussian. Its so-called "Chinese hat" function slopes downward nearly linearly - except near its right-most position on the X-axis.

Where it comes to the MTF of a Photosite Aperture, that is a "sinc" [ sin(x)/(x) ] function. The addition of an optical low-pass ("AA") filter represents a mathematical product of two "sinc" functions that form a zero-magnitude point at some sub-multiple of the spatial sampling frequency.

And it seems that any focusing error complicates things still further, significantly attenuating the MTF magnitude when the COC = 3 * Wavelength * F-Number, and profoundly attenuating MTF magnitude when approaching and surpassing COC = 5 * Wavelength * F-Number. Further, that function is not a Gaussian function either, and it makes the rest look relatively trivial numerically.

What it comes down to is that radially-symmetric Gaussian distributions are so much easier to compute with (as compared to any other functional descriptions) that the idea of "Gaussian" PSFs being something "close" to reality in optical systems seems to have become inculcated in the minds of many.

At any rate, the idea of simply calculating a scaled arithmetic sum of blur-diameters seems likely to generate greater errors. My vote would be to compute in the spatial frequency (MTF) domain. The results are then directly in the "units" of interest (magnitude as a function of spatial frequency).

The arguments about "sharpness" are endless. The overall shape of the net composite MTF response matters (much more comprehensive than a single data-point where MTF=50%, etc.), and it appears to be the integral of that MTF curve over a couple of critical "octaves" of spatial frequency that (coupled with the individual perceptual CSFs of individual viewers) forms our dominant impressions.

DM ...

Stephen Barrett wrote:

J C Brown wrote:

Thus for the FZ200 which has a 4000 x 3000 pixel sensor the maximum resolution would be estimated to be 2000 lines per picture height, LPH. That value is within 5% and 10% respectively of the vertical resolution values for the JPEG and RAW images in the DPR FZ200 review.

Just to clarify, when you say "2000 lines per picture height", I assume that this means the same thing as 1000 line-pairs per picture height (1000 white and 1000 black).

Thank you for your comments, Dr. Brown.

I have downloaded your paper and will read it with interest.

From my direct experience in asking Jimmy the very same question, it is true that "lines" is (there) implying line-pairs (per image height in this case). Bottom line, it takes 3 or more individual photosites to reproduce a single line-pair made up of alternating dark/light illumination.

The number of line-pairs recordable is equal to (at most) 2/3 of the spatial sampling elements; and

The number of dark lines recordable is equal to (at most) 1/3 of the spatial sampling elements.

(From Jimmy's excellent paper)

DM ...

Stephen Barrett wrote:

J C Brown wrote:

Just to clarify, when you say "2000 lines per picture height", I assume that this means the same thing as 1000 line-pairs per picture height (1000 white and 1000 black).

Thank you for your comments, Dr. Brown.

I have downloaded your paper and will read it with interest.

Stephen

Hi Stephen,

Thanks very much for your kind remarks. Your understanding that 2000 lines per picture height corresponds to 1000 line pairs per picture height (1000 white and 1000 black) is correct. Consequently, for the 24 mm height of a 3000 pixel high full frame sensor, a vertical resolution of 2000 lines per picture height (LPH) would correspond to a resolution of 41.67 line pairs per mm.
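The unit conversion in that last sentence can be written out as a one-liner (the 24 mm sensor height is the full-frame figure used above):

```python
def lph_to_lp_per_mm(lph, sensor_height_mm):
    """Convert a resolution in lines per picture height (LPH) to line pairs
    per mm: two lines (one black, one white) make one line pair."""
    return (lph / 2.0) / sensor_height_mm

print(lph_to_lp_per_mm(2000, 24.0))  # ~41.67 lp/mm
```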

As discussed in my FZ50 report, based on a "Yes" "No" "Yes" response a 3000 pixel high sensor would be expected to be able to resolve 1500 pairs of alternate black and white lines.

The human eye with its ability to interact with the brain forms a control system which enables the eye to "home in" on the black and white lines bringing them into register with the "rods and cones" of the retina which acts as a sensor.

Unlike the human eye a digital camera doesn't have a control system to bring the black and white lines into register with the pixels on the sensor. Consequently the position of the edges of the lines in relation to the edges of the pixels will be entirely random with a very low probability that the edges of the black and white lines will be in exact register with the pixels.

If they do occur, a zero overlap will result in alternate black and white pixels, while a 50% overlap will result in all of the pixels reading a uniform 50% grey. All of the other possible overlap proportions will result in adjacent pixels alternating between light and dark grey, with shades that depend on the percentage overlap.
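The registration effect described above can be sketched with a toy model: pixels of unit width integrating a black/white line pair whose line width equals the pixel width, shifted by some phase. This is an idealised illustration only (no noise, no optical blur):

```python
import random

def pixel_values(phase):
    """Responses of two adjacent pixels sampling a line pair whose line
    width equals the pixel width, with the grid shifted by `phase` in
    [0, 1). White line = 1, black line = 0; each pixel integrates the
    pattern over its width."""
    return (1.0 - phase, phase)

def contrast(phase):
    a, b = pixel_values(phase)
    return abs(a - b)

print(contrast(0.0))   # edges in register: full black/white alternation
print(contrast(0.5))   # 50% overlap: both pixels read a uniform 50% grey
random.seed(1)
print(contrast(random.random()))  # random registration: somewhere between
```

Since the phase is effectively random in a real shot, recorded contrast near the sensor's limiting frequency varies from full to zero, which is exactly why the tapered lines in the DPR test chart are hard to judge.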

There is very clear evidence of that behaviour in the images of the tapered black and white lines of the resolution test chart used in the resolution measurements presented in the DPR camera reviews. IMHO that effect makes it rather difficult to assess the resolution of a camera from an image of that test chart.

It was that effect combined with the realisation that the arrangement of the red, green and blue pixels in the Bayer matrix was likely to affect the resolution of different colours that led me to design the colour resolution test chart described in my FZ50 report.

My experience with using that chart, which uses a single letter of each colour in each row, led me to conclude that I should be able to make more accurate and reliable measurements with a chart which used groups of eleven letters of each colour, with each of the eleven letters raised above the preceding one by a step of 0.1 of the thickness of the lines in the smallest letters.

Although they are too large to comply with the requirements of the DPR gallery, copies of that chart and a black and white version of it are available for download here.

Jimmy

J C Brown

So in summary, all of this theory was to say that the lens and sensor combination of the FZ200 is diffraction limited to around F3.5. At higher stop values, fine detail (resolution) begins to taper off due to diffraction of light at the aperture blades. Does that sum it up properly?

If that's the case, how is it possible to get a long depth of field without loss of resolution with any bridge camera? Granted that a bridge camera isn't designed specifically for landscape and architectural shots, it does seem to do a pretty good job of them from what I've seen. I suppose it is fair to say that in such wide-angle shots, fine detail resolution is far less important anyway.

Sometimes I feel like 2/3'rds Rice Krispies. Past "Snap" and "Crackle" but just shy of "Pop".

Detail Man wrote:

Stephen Barrett wrote:

J C Brown wrote:

Thank you for your comments, Dr. Brown.

I have downloaded your paper and will read it with interest.

From my direct experience in asking Jimmy the very same question, it is true that "lines" is (there) implying line-pairs (per image height in this case). Bottom line, it takes 3 or more individual photosites to reproduce a single line-pair made up of alternating dark/light illumination.

The number of line-pairs recordable is equal to (at most) 2/3 of the spatial sampling elements; and

The number of dark lines recordable is equal to (at most) 1/3 of the spatial sampling elements.

(From Jimmy's excellent paper)

DM ...

Hi DM,

Thanks very much for responding to Stephen's question and for your very complimentary remarks.

While I am in complete agreement with almost all of your reply, it seems to me that in your first sentence, "..., it is true that "lines" is (there) implying line-pairs (per image height in this case).", "line-pairs" should read "lines".

Since we were last in touch I've had the opportunity to use my new chart to carry out a test with two DSLRs, one with a 21 MP full frame sensor and one with an 8 MP DX sensor. The highest resolution for both corresponded to the 1.5 pixel line; however, as expected, I could recognise a few 1.4 pixel Es in both images and one or two 1.3 pixel Es in the full frame image.

Based on these results I still regard my 1.5 pixel/line "rule of thumb" as a fairly accurate guide to the resolution of any digital camera which employs a Bayer matrix sensor. As stated in my earlier response to Stephen, the resolution values at and close to maximum aperture will be reduced as a result of a variety of factors, and due to the effects of diffraction the resolution will be reduced as the F/No is increased.
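The rule of thumb as stated can be expressed in one line; this is just an arithmetic restatement of the empirical rule, not a derivation:

```python
def max_lines_per_picture_height(pixels_high, pixels_per_line=1.5):
    """Empirical "1.5 pixels per line" rule of thumb: a Bayer sensor that is
    `pixels_high` pixels tall resolves about pixels_high / 1.5 lines per
    picture height."""
    return pixels_high / pixels_per_line

print(max_lines_per_picture_height(3000))  # FZ200's 3000-pixel height: 2000.0 LPH
```

This matches the earlier estimate of 2000 LPH for the FZ200's 4000 x 3000 sensor.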

Jimmy

J C Brown

MoreGooderPhotos wrote:

So in summary, all of this theory was to say that the lens and sensor combination of the FZ200 is diffraction limited to around F3.5. At higher stop values, fine details (resolution) begins to taper off due to diffraction of light due to the aperture blades. Does that sum it up properly?

Not according to my calculations. For the FZ200, I get around F=5.6 as the point (at 700 nm worst-case wavelength, and with a typical-strength "AA" filter assembly) where the MTF of the lens-system diffraction begins to actually limit the highest spatial frequencies of the composite spatial frequency (MTF) response:

Detail Man wrote:

For the DMC-FZ200, Pa ~ 1.5 microns. For a worst-case Wavelength (W) of 700 nm, it appears that (in the base case) diffraction "extinction" is not an issue until F=8.571 (which exists, in fact, above the maximum F-Number adjustment value of F=8.0 for the FZ200).

The above case is for an optical low-pass ("AA") filter yielding a zero response at the Nyquist frequency (1/2 of the spatial sampling frequency) itself; a more likely situation is one where the AA filter yields a zero response at (around) 2/3 of the spatial sampling frequency. In that case, applying the above calculations results in a maximum value equal to F=5.714.

http://www.dpreview.com/forums/post/52081911

The MTF of the lens-system diffraction (itself) is always decreasing as the mathematical product of F-Number multiplied by Wavelength increases. These effects will also attenuate the composite system MTF magnitudes existing at higher spatial frequencies (though not "extincting" them, as is calculated for above).
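As a rough cross-check, the textbook incoherent diffraction cutoff 1/(Wavelength * F-Number) can be compared with the sensor's Nyquist frequency. Note this is not DM's exact calculation, which folds in AA-filter assumptions; the 1.5 um pitch and 700 nm worst-case wavelength are carried over from the thread as assumptions:

```python
def diffraction_cutoff(wavelength_um, f_number):
    """Spatial frequency (cycles/um) at which the incoherent diffraction MTF
    of a circular aperture reaches zero: nu_c = 1 / (wavelength * N)."""
    return 1.0 / (wavelength_um * f_number)

def nyquist(pitch_um):
    # sensor Nyquist frequency: half of the spatial sampling frequency
    return 1.0 / (2.0 * pitch_um)

# FZ200-like numbers: ~1.5 um pitch, worst-case 700 nm wavelength
for N in (2.8, 4.0, 5.6, 8.0):
    print(N, diffraction_cutoff(0.7, N), nyquist(1.5))
```

Under this simplified model the cutoff drops below Nyquist somewhere between f/4 and f/5.6 at 700 nm; the higher extinction F-numbers quoted above reflect the additional AA-filter modelling described in the linked post.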

At what point an F-Number both partially reduces (some) optical aberrations (by varying amounts, depending on the aberration) and also passes the upper ranges of spatial frequencies, so that individual eyes may (individually, and subjectively) deem the sharpness "optimum", is not an easy number to "nail down" (as it essentially involves subjective human perceptual judgments).

On the other hand, a state of a lens-system being "diffraction limited" (which is not the same thing as a "diffraction limit" !) is defined as the F-Number (at a particular Focal Length) at which point the measured spatial resolution begins to decrease (instead of increase) with increasing F-Number.

The Focal Length (and the amount of optical lens-aberrations) that we are talking about also matters.

If that's the case, how is it possible to get a long depth of field without loss of resolution with any bridge camera? Granted, a bridge camera isn't designed specifically for landscapes and architectural shots, it does seem to do a pretty good job of them by what I've seen. I suppose that it is fair to say that in such wide angle shot, fine detail resolution is far less important anyway.

Detail Man wrote:

The MTF of diffraction through a circular aperture is not Gaussian. Its so-called "Chinese hat" function slopes downward nearly linearly - except near its right-most position on the X-axis.

When it comes to the MTF of a Photosite Aperture, that is a "sinc" [ sin(x)/x ] function. The addition of an optical low-pass ("AA") filter represents a mathematical product of two "sinc" functions that forms a zero-magnitude point at some sub-multiple of the spatial sampling frequency.

My vote would be to compute in the spatial frequency (MTF) domain. The results are then directly in the "units" of interest (magnitude as a function of spatial frequency).

Thanks for the explanations, Detail Man. I will PM you to take you up on your kind offer to send an MTF spreadsheet.

J C Brown wrote:

As discussed in that report, due to the effect of diffraction the resolution is reduced from the maximum value as the aperture is reduced. In addition, for large apertures, including the maximum value, there is some loss of resolution due to several factors, including shape imperfections and off-axis effects. Both of these effects can be seen in the following image.

For compact travel cameras such as the TZ30 (ZS20), in which the maximum aperture of the lens has been restricted to limit its physical size, the loss of resolution at maximum aperture may not be present. See for example the images included in my TZ30 report here.

I hope you will find my alternative approach of some interest.

Jimmy

J C Brown

Dear Dr. Brown,

I am reading your report and learning a lot from it.

I find your graph of resolution for different colours to be surprising because I would have expected red to give the poorest resolution rather than the best. In your paper, you say:

Though the difference is not great, it is clear that the resolution is highest for the red and black Es, with magenta and blue slightly lower, followed by green then by yellow and cyan which show the poorest resolution. Part of that variation may however be due to differences in the relative intensities of the individual colours on the printed chart, which would of course depend on the accuracy with which my Canon printer printed the colours.

If aspects such as intensity and the particular shade or hue can move "red" from worst to best, how can the graph be used? My interpretation of the graph is that the average resolution decreases from approx 1800 LP/picture height at f/3 to about 1350 at f/11 and that there are colour/ intensity / hue / tint variations of approximately +/- 200. Would that be a fair interpretation if you were taking a picture of, say, a red flower and could not assess what kind of red it was?

Stephen Barrett wrote:

J C Brown wrote:

As discussed in that report, due to the effect of diffraction the resolution is reduced from the maximum value as the aperture is reduced. In addition, for large apertures, including the maximum value, there is some loss of resolution due to several factors, including shape imperfections and off-axis effects. Both of these effects can be seen in the following image.

For compact travel cameras such as the TZ30 (ZS20), in which the maximum aperture of the lens has been restricted to limit its physical size, the loss of resolution at maximum aperture may not be present. See for example the images included in my TZ30 report here.

I hope you will find my alternative approach of some interest.

Jimmy

J C Brown

Dear Dr. Brown,

I am reading your report and learning a lot from it.

I find your graph of resolution for different colours to be surprising because I would have expected red to give the poorest resolution rather than the best. In your paper, you say:

Though the difference is not great, it is clear that the resolution is highest for the red and black Es, with magenta and blue slightly lower, followed by green then by yellow and cyan which show the poorest resolution. Part of that variation may however be due to differences in the relative intensities of the individual colours on the printed chart, which would of course depend on the accuracy with which my Canon printer printed the colours.

If aspects such as intensity and the particular shade or hue can move "red" from worst to best, how can the graph be used? My interpretation of the graph is that the average resolution decreases from approx 1800 LP/picture height at f/3 to about 1350 at f/11 and that there are colour/ intensity / hue / tint variations of approximately +/- 200. Would that be a fair interpretation if you were taking a picture of, say, a red flower and could not assess what kind of red it was?

Thanks for your comments. Stephen

I'm pleased to hear that you are finding my report useful, but as I have no idea why you would expect red to give the poorest resolution, I'm puzzled by your statement "I find the graph of resolution for the different colours surprising".

In your quotation from my report, marked in italics in your post, the significance of the second sentence is simply that as I am working in a domestic environment I don't have access to the calibrated sensors which would be required to check the accuracy of the six colours used in the chart.

As my Canon printer uses cyan, magenta and yellow inks to print these colours plus red, green and blue, I have to rely on the accuracy with which my printer can reproduce the colours I used to create the chart (using the numerical values in Photoshop Elements) and on my visual assessment of that accuracy.

As any inaccuracies in the colour of the printed letters are in my opinion likely to be fairly small I wouldn't expect them to have a significant effect on the validity of any test results derived from them.

While I regard your assessment of the average values of the chart as fair I don't understand the significance of your question about assessing the kind of red when taking a picture of a red flower.

My main purpose in developing the colour resolution test chart was to allow the variation of resolution with colour to be assessed and compared with that of other cameras. The chart is not intended for use in assessing the colour accuracy of a camera.

I hope that I've managed to provide a satisfactory answer to your questions.

Jimmy

J C Brown

Detail Man wrote:

Ron Tolmie wrote:

Ian:

The angular resolution (in arc sec) is inversely proportional to the aperture diameter:

sin (angle) = 1.22 x wavelength/aperture diameter

The focal length is not a factor in determining the angular resolution. If the lenses in a view camera and a miniature camera have the same angular resolution they will appear to be equally sharp. If the aperture diameter is 3mm and the lens has no optical aberrations then the image will be acceptably sharp, and it doesn't matter if the focal length of the lens is 4mm or 4000mm. If the aperture diameter is 1mm the image will be soft, irrespective of the focal length.

In comparing lenses that have the same focal length (such as the 50mm lenses used in FF cameras) it is a common practice to resort to a linear measurement (lines per mm) because it is easier to make such measurements. Unfortunately that leads to confusion if you try to compare lenses that have different focal lengths. If you have two lenses of equal quality (i.e. the same angular resolution) but different focal lengths, the short FL lens will deliver more lines per mm even though it is not any better than the long FL lens. To make a useful comparison of the lenses you would need to normalise the l/mm number by the focal length.

My statement about diffraction as a function of the aperture diameter was correct. If you want to make comparisons between measurements made at different focal lengths then you need to measure the angular resolution, not the l/mm. Otherwise you end up in a state of confusion.
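The point about angular versus linear resolution can be checked numerically. This is a sketch assuming the Rayleigh criterion and an aberration-free lens; the 550 nm wavelength and the 4 mm / 400 mm focal lengths are illustrative choices:

```python
def angular_resolution_rad(wavelength_m, aperture_diameter_m):
    """Rayleigh criterion for a circular aperture: theta = 1.22 * lambda / D.
    Depends only on the aperture diameter, not on focal length."""
    return 1.22 * wavelength_m / aperture_diameter_m

def lines_per_mm(wavelength_m, aperture_diameter_m, focal_length_mm):
    """Linear resolution at the focal plane implied by the angular figure:
    a smaller focal length maps the same angle onto a smaller distance."""
    theta = angular_resolution_rad(wavelength_m, aperture_diameter_m)
    return 1.0 / (theta * focal_length_mm)

# Same 3 mm aperture on a 4 mm lens and a 400 mm lens: identical angular
# resolution, but l/mm differs by the focal-length ratio (100x here).
short = lines_per_mm(550e-9, 3e-3, 4.0)
long_ = lines_per_mm(550e-9, 3e-3, 400.0)
print(short, long_, short / long_)
```

Note that, under this model, scaling the l/mm figure by the focal length (multiplying, in these units) is what recovers a focal-length-independent, angular comparison.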

Hi Ron,

Have a look at the mathematical approximations and the corresponding identities in this section:

http://en.wikipedia.org/wiki/Airy_disk#Cameras

What do you think? It seems (to me) to make sense (for small angles) to restate the correct identity you quoted above in a form with units of distance:

Distance = (Wavelength) * (F-Number)

If you disagree, I would be very interested in learning from you, as I must have missed something.

The derivation assumes that a (symmetrical, thin) lens is focused at "infinity". The Effective Focal Length being the Focal Length (for "infinity" focus) multiplied by [ 1 + M/P ] (where M is Image Magnification, and P is the Pupillary Magnification factor), it seems that this is the form one might want to use for analyzing notably "close-up" shooting conditions.

DM ...

Hi DM,

I recently used a spreadsheet to calculate, for the FZ200, the angular resolution in milliradians and the diameter of the Airy disc in microns and in relation to pixel height, for a range of focal lengths and apertures. The results obtained for 35 mm equivalent focal lengths of 25 mm, 50 mm and 600 mm are shown in the following table, which I thought might be of interest to you and to other members of this forum.

As I don't have access to the correct information the table includes F/Nos which may exceed the maximum values available at each of the chosen focal lengths.

While these results clearly show that the diameter of the Airy disc is directly proportional to the F/No and independent of the focal length I can see nothing in the data to suggest that there is anything special about an aperture with a diameter of 3 mm as suggested by Ron Tolmie.
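The key relationship behind such a table can be sketched in a few lines. This is a hedged reconstruction, not Jimmy's actual spreadsheet; the ~1.5 um pitch and 550 nm wavelength are assumed values:

```python
def airy_diameter_um(wavelength_um, f_number):
    """Diameter of the Airy disc to its first minimum: 2.44 * wavelength * N.
    It depends only on the F-Number, not on the focal length, since the
    F-Number already encodes the focal-length-to-aperture ratio."""
    return 2.44 * wavelength_um * f_number

pitch_um = 1.5  # approximate FZ200 photosite pitch (assumption)
for N in (2.8, 4.0, 5.6, 8.0):
    d = airy_diameter_um(0.55, N)
    print(f"f/{N}: Airy diameter {d:.2f} um = {d / pitch_um:.1f} pixel pitches")
```

Because the focal length cancels out of the F-Number, nothing in this relationship singles out any particular aperture diameter such as 3 mm, which is consistent with the conclusion above.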

Jimmy

J C Brown
