J A C S wrote:

alanr0 wrote:

J A C S wrote:

alanr0 wrote:

<snip>

Regarding truncation of the integration region:

For a diffraction-limited lens, all those small contributions lead to a "Chinese Hat" modulation transfer function, which is essentially conical at the origin. The MTF drops almost linearly at low spatial frequencies, with a singularity in the Laplacian at the origin. In contrast, the Gaussian MTF has zero slope at the origin, and much less attenuation of low spatial frequencies.
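The contrast between the two MTF shapes at the origin is easy to check numerically. Here is a small sketch under my own conventions (frequency normalized to the diffraction cutoff, an illustrative Gaussian width; not anything from the paper): the diffraction-limited "Chinese hat" MTF for a circular aperture has a finite negative slope of -4/pi at zero frequency, while the Gaussian MTF is flat there.

```python
import numpy as np

def chat_mtf(nu):
    """Diffraction-limited MTF of a circular aperture; nu = frequency / cutoff."""
    nu = np.clip(nu, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(nu) - nu * np.sqrt(1.0 - nu**2))

def gauss_mtf(nu, sigma=0.5):
    """MTF of a Gaussian PSF (illustrative width)."""
    return np.exp(-2.0 * (np.pi * sigma * nu)**2)

h = 1e-4  # one-sided finite difference at the origin
slope_chat = (chat_mtf(h) - chat_mtf(0.0)) / h    # close to -4/pi
slope_gauss = (gauss_mtf(h) - gauss_mtf(0.0)) / h  # close to 0
print(slope_chat, slope_gauss)
```

The finite slope at the origin is the 1-D trace of the conical behaviour (and of the singular Laplacian) mentioned above.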

As scary as this hat looks, its singularity is due to signals which can be ignored. Truncating any infinite part of the PSF, no matter how small it is, will make the hat nice and round.

But now we have a measurement which depends too much on arbitrarily selected test conditions and calculation parameters.

It does already. Your PSF is truncated already. You do not have an infinite sensor.

I don't have a problem with the PSF extending over the entire sensor.

The large extent of the PSF is a key to my objection to the Buzzi and Guichard paper. Their theorem 1 requires that the blur kernel is a subset of the Schwartz space, and I do not believe this is valid in any meaningful way for the jinc² function.

Of course not, and we knew that already. On the other hand, it can be approximated by such functions as closely as you want. The problem is that the second moment blows up under such an approximation, but we know that, since the integral is divergent.

That is rather the point. The second moment is the "Unique blur measure".

You are the one proposing that the PSF should be truncated at some radius, such as 4, 10 or 20 times the Airy disk radius to give a reduced second moment and modify the zero frequency singularity of the Laplacian of the MTF.

The blur measure of the Gaussian in my figures (1 to 4) is close to unity, provided the truncation radius is 5 or greater. In the same figures, the Airy disk has a central peak with a similar width at half-maximum intensity. If we truncate at 12 mm radius, the blur measure of the Airy disk is 2715 times larger than that of the Gaussian.

Yes, so? 12 mm is too extreme. This is another way of saying that the integral is divergent, but we knew that. Most of what you integrate to get this is so small that it should be discarded. My suggestion was to truncate over the smallest disk where the difference does not make a visible difference most of the time, and that would not be 1/2 of the height of the sensor.

But where?

Is there a truncation radius which "does not make a visible difference" for a useful range of real-world blur kernels?

Ideally, if one finds Gaussian and Airy disk PSFs delivering similar subjective blur, one would wish them to have similar blur metrics.

Yes, if that was the goal of that paper. But it was not; or at least the theorem does not imply anything like that.

Suppose this is the case. If we now double the radius of both, one would expect the subjective blur to scale in a similar fashion in each case,

It does.

with the second moment scaling as the square of the radius. Unfortunately, at constant integration radius, while the blur metric of the Gaussian would quadruple, we would get a smaller proportional increase in second moment for the Airy disk.

??? By a finite radius - you mean you integrate a truncated kernel? Then neither one will give you an exact nice multiple. If not, they both scale in the same way.

Keep the size of the truncated kernel constant, and apply the same radial scaling to both Gaussian and Airy disk PSF.

Provided the truncation radius is more than 5 or 6 times the standard deviation of the Gaussian PSF, its blur measure is substantially independent of the integration limit (figure 4 above).

For the Airy disk, this is not the case.

Of course, the integral is divergent. We established that already.

The central peak contributes around 61% of the blur measure of a Gaussian with equal zeroth moment, and the outer rings make an additional contribution to the second moment which increases more or less linearly (monotonically, with oscillations) as the truncation radius increases.
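A quick asymptotic sketch of my own (using the standard large-argument form of J1, with r in units of the jinc argument) shows where the roughly linear growth with superimposed oscillations comes from:

```latex
I(r) = \left[\frac{2 J_1(r)}{r}\right]^2, \qquad
J_1(r) \sim \sqrt{\frac{2}{\pi r}}\,\cos\!\left(r - \tfrac{3\pi}{4}\right)
\;\Rightarrow\;
I(r) \sim \frac{8}{\pi r^{3}}\cos^{2}\!\left(r - \tfrac{3\pi}{4}\right),

\int_0^{R} I(r)\, r^{3}\, dr \;\approx\; C + \frac{8}{\pi}\int^{R} \cos^{2}\!\left(r - \tfrac{3\pi}{4}\right) dr \;\approx\; C + \frac{4}{\pi}\,R .
```

So the second-moment integrand oscillates about a constant (4/pi), the truncated second moment grows linearly in R, while the zeroth moment converges.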

For fixed truncation radius 25, start with Airy radius 2.7 and Gaussian sdev=1, then scale Airy disk and Gaussian radii by the same size factor in each line of the table below:

size  | gauss_blur | airy_blur | ratio airy_blur / gauss_blur
1.000 |    1.00000 |   5.72476 | 5.72476
2.000 |    4.00000 |  11.44821 | 2.86205
3.000 |    9.00000 |  17.85249 | 1.98361
4.000 |   16.00000 |  24.74498 | 1.54656
5.000 |   24.99874 |  30.44975 | 1.21805
6.000 |   35.94080 |  39.77807 | 1.10677
7.000 |   48.38571 |  41.78761 | 0.86364
8.000 |   61.14776 |  46.74666 | 0.76449

The ratio of the Airy blur measure to the Gaussian blur measure decreases when both PSFs are expanded radially by the same factor.

EDIT: Did you normalize the Airy blur to unit area?

Airy_blur = second moment of PSF / zeroth moment of PSF

Assuming that's what you mean. PSF truncated at radius 25, which is 25 micron radius for f/4 imaging at 555 nm.
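For anyone who wants to reproduce the qualitative behaviour, here is a numpy-only sketch of that measure (my own implementation, with the radius in units of the Gaussian sigma and of the jinc argument respectively, not in the scaled units of the table above): the Gaussian measure settles once the truncation radius passes about 5 sigma, while the Airy measure keeps growing roughly linearly.

```python
import numpy as np

def trap(y, x):
    """Trapezoidal integration along the last axis (numpy-only)."""
    return np.sum((y[..., 1:] + y[..., :-1]) * np.diff(x) / 2.0, axis=-1)

def bessel_j1(x):
    """J1 via its integral representation, vectorized over x."""
    theta = np.linspace(0.0, np.pi, 2001)
    return trap(np.cos(theta - np.atleast_1d(x)[:, None] * np.sin(theta)), theta) / np.pi

def blur_measure(psf, R, n=4000):
    """Second moment / zeroth moment of a radially symmetric PSF truncated at R."""
    r = np.linspace(1e-6, R, n)
    p = psf(r)
    return trap(p * r**3, r) / trap(p * r, r)

gauss = lambda r: np.exp(-r**2 / 2.0)          # Gaussian PSF, sigma = 1
airy = lambda r: (2.0 * bessel_j1(r) / r)**2   # jinc^2, first zero near r = 3.83

g5, g25 = blur_measure(gauss, 5), blur_measure(gauss, 25)  # both close to 2 sigma^2
a25, a50 = blur_measure(airy, 25), blur_measure(airy, 50)  # grows with R
print(g5, g25, a25, a50)
```

The Gaussian values agree to within a fraction of a percent, while doubling the truncation radius roughly doubles the Airy measure, which is the divergence discussed above.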

Good. The Gaussian is "almost" compactly supported while jinc^2 is not and leads to a divergent integral. We knew that. On the other hand, with size 8, your truncation radius is comparable to the Airy radius and it is obviously too small.

I am not sure what you are trying to prove. I did not write this paper. The measure is a logical consequence of the assumptions. Good or bad, it is what it is. Even you said that one of the properties of that measure (not an assumption, but a consequence of them) is reasonable. Take it as a negative result if you do not like the outcome. It tells you that you must give up some of the assumptions then.

We do seem to be running out of things to disagree on.

For me, the most obvious point of contention is whether there is a sensible truncation radius which gives a useful measure of blur over a range of blur kernels. For DSLR photography that should include essentially diffraction-limited images between f/4 and f/22 (and preferably between f/2.8 and f/32), moderate levels of defocus and spherical aberration, and corner aberrations for low end travel zooms and wide angle lenses at maximum aperture.

We need to cope with Airy disks whose radius varies by at least a factor of 6, and preferably more to accommodate defocus and optical aberrations.
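As a back-of-envelope check of my own on that factor, the first-zero Airy radius is 1.22 lambda N; at 555 nm this also reproduces the 2.7 micron radius used for f/4 above.

```python
wavelength_um = 0.555  # green light, as in the f/4 example above

for N in (2.8, 4, 8, 16, 22, 32):
    r = 1.22 * wavelength_um * N  # first-zero Airy radius in microns
    print(f"f/{N}: Airy radius {r:.2f} um")

# f/22 vs f/4 spans a factor of 5.5; f/32 vs f/2.8 spans about 11.4
```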

A moderately large fixed truncation radius is fine once the blur looks sufficiently close to Gaussian - in which case truncation may not be necessary, as the size of the integration region is no longer critical. This can arise either because something close to a genuine Gaussian blur has been applied and dominates, or because multiple blurs have been applied so that the central limit theorem kicks in.
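The central-limit-theorem point can be illustrated directly (a minimal 1-D sketch of my own, not from the paper): convolving a pill-box kernel with itself a few times drives its excess kurtosis from the uniform value of about -1.2 toward the Gaussian value of 0.

```python
import numpy as np

box = np.ones(101)
box /= box.sum()  # 1-D pill-box kernel, unit area

def excess_kurtosis(k):
    """Excess kurtosis of a discrete density; 0 for a Gaussian."""
    x = np.arange(len(k), dtype=float)
    x -= np.sum(x * k)          # center on the mean
    m2 = np.sum(x**2 * k)
    m4 = np.sum(x**4 * k)
    return m4 / m2**2 - 3.0

k = box.copy()
for _ in range(4):              # apply the same pill-box blur four more times
    k = np.convolve(k, box)
    k /= k.sum()                # renormalize against float drift

print(excess_kurtosis(box), excess_kurtosis(k))  # about -1.2 vs about -0.24
```

Since cumulants add under convolution, five pill-boxes cut the excess kurtosis by a factor of five; a few more passes and the kernel is Gaussian for all practical purposes.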

It seems to me that if you give up the Schwartz class assumption (which is obviously too strong; a finite second moment should be enough), it follows that there is no such measure satisfying those conditions. This is interesting to know. The proof should go along these lines: if such a measure exists over the Schwartz class, it is the second moment; but then, for more general blur kernels, it would be infinite.

Fair point. A large part of my concern is that they do not address these issues.

Buzzi and Guichard make it clear in the introduction that their metric is intended to apply to optical imaging chains in general, and to cameras specifically.

They present it as an objective measure which is related to the perceived level of blur.

They assume without any attempt at justification that the blur kernel will have a finite second moment.

"The transform of a point through an imaging chain is expected to have essentially a finite size. Thus we can assume that the kernel has finite moments of all orders."

As agreed above, this is invalid for an Airy disk PSF. I suspect it will be equally invalid for most photographic images where there is an abrupt drop in intensity at the edge of the exit pupil.

I don't claim to fully understand all the details of their mathematical presentation, so I may be missing something in the extension to more general blur kernels.

In the conclusion they state:

"But our result is independent of any assumption on the blur kernel (which we do not need to estimate and which may be far from Gaussian). The uniqueness affords us a bridge between our mathematical approach and psychophysical facts".

I may be over-interpreting here, but there seems to be an unstated implication of a direct relationship between kernel variance and perceived blur, independent of the shape of the blur kernel. This clearly does not hold where the second moment integral diverges, and it is questionable even for shapes such as pill-box and cone.

The second moment blur measure may be unique (where it exists), but the correlation between it and the perceived blur level will depend on the profile of the blur kernel.

They further claim that perceived blur is especially sensitive to attenuation of low frequencies. As far as I can tell, this conclusion is not supported by the experiments they describe.

My understanding is that perceived sharpness is dominated by moderate spatial frequencies. Indeed, the human visual system is notoriously insensitive to the lowest spatial frequencies.

Their "low pass filter" leaves low to mid frequencies intact up to 20% of maximum spatial frequency.

Their "Gaussian filter" has little impact at low frequencies, and strongly attenuates frequencies around 20% and higher.

Their "high pass filter" is more of a mid-band attenuator. It follows the Gaussian blur spectral response to 20% of maximum frequency, reaches a minimum at 30%, then rises, but never exceeds unity at high frequencies.

I did try to dig up more about how the BxU measure is calculated, but couldn't find anything on the DxO site. There is an overview here, but the link to DxO Analyser is broken.

Imaging resource has some background, confirming that the target patches used are fairly small, but with little detail.

Cheers.