junk data, models, and measurements

Started Aug 22, 2018 | Discussions
J A C S Forum Pro • Posts: 15,236
Re: BxU blur estimate: PSF variance

AiryDiscus wrote:

J A C S wrote:

I do not know, and this is an honest answer. According to AD, you can barely see anything beyond the 4th ring of the Airy disk. His estimate of the essential support is even smaller than what I would think, and yet he sounds very angry.

I am not very angry, but I wish you would not twist my words. It is not that there is no energy beyond the fourth ring of the Airy disk. It is that, unless your system has spectacularly low wavefront error, the PSF has been sufficiently reshaped by that wavefront error that the region beyond the fourth ring is smooth and lacks ripples.

OK, can you show me an actual image where I can see those ripples?

Now, what is actually relevant is what effect truncating this blur kernel to, say, 4, 10, 20, etc. Airy disks would have on "typical images", whatever that means. We rarely observe the PSF; we see a convolution with it. And again, I have no definitive answer to this question, but from what I have seen and measured, it should be negligibly small.

I do not understand what you are saying. Is your intent:

(1) to understand how natural images come to be

(2) to measure something

These are not congruent goals. For (1) you do not necessarily care about the PSF. For (2), depending on what you want to measure, you very much care about the PSF.

Strictly speaking you never observe the PSF, since a Kronecker delta object suspended in vacuum does not exist in the real world.

I thought I was the one who was supposed to be anal about what is exact and what is approximate.

My goal is to understand the effect of the "tail" of the PSF on natural images. So it is (1) and (2). The experiment with a dark sky and bright objects around is an attempt to use an extreme but still natural image where I might see something.

Are you suggesting we choose a different integration limit for each measurement?

Not different, but finite. Yes, if you want to push the limits, you can bring up the f/16 case. There is some limit which should work reasonably well up to any real-life f-stop, say f/22 or so.

It is bad science (TM) to impose arbitrary restrictions on measurement ranges. If we were a steering committee for ISO or some equivalent organization, I would vote against your proposal.

Fortunately, I am not proposing what you think I am. BTW, feet and meters are arbitrary.

With a fixed radius, if we truncate outside the 4th ring at F/4, we may only capture 2 rings at F/8, with an even tighter crop at F/16. The same lens might be aberration-limited at F/2, with a completely different PSF shape. How much defocus can we tolerate with a fixed radius?

See above. Choose some large enough radius and fix it.

The notion of "large enough" is very poorly defined.

That is why I used it.

Fig. 4. Dependence of second moment calculation on outer radius of integration for Gaussian PSF and Airy disk.

All this growth will disappear because it comes from multiplying small values which have to be truncated by large ones (the weight x^2+y^2) and then integrating, which introduces another large factor. This is, roughly speaking, noise amplification.

Not to the extent you imply when one considers more usual images.

You object to integrating over the outer rings of the Airy disk PSF where the intensity falls below 1/4000 of the peak, but that is missing the point. We are usually more interested in lines and edges when it comes to perceived sharpness. We calculate the line spread function by integrating the point spread function along a line. For a broader feature, we integrate contributions over the width of the feature. At a distance where the PSF intensity is 10 stops down from the centre of an isolated point source, we might find that the intensity is a percent or so at the same distance from a bright line, rising to a few percent at the same distance from a step rise in intensity.

OK, this is the point I expected somebody to make. I did not see this when I was writing the text above. Now - about the image I mentioned a few days ago. There was a house with Xmas lights, quite a few blown highlights, and a really dark sky. Yet, I was able to measure 2^(-12) below saturation or so, and I posted my observations then. Iliah, who started the thread (his point was the significance of the flare), measured EV=-11 on his image. This is a bit below the typical DR of a normally processed image and will generically overlap with a much brighter object.

I have raised my objection time and time again to these methods of measuring things. You must apply more rigor to your method. Consider that the best scatterometer money can buy has a dynamic range of 1e13, and is limited by air scatter across short distances < 1m.

http://thescatterworks.com/tts

Why would I do that? What is wrong with my method? I am looking for the order of magnitude. Two times more or less would not make a big difference.

In "typical" images, I expect the flare and scattering by the air (which can be thought of as a part of the image) to play much more significant role than, say, the 20th Airy disk due to diffraction.

So are you arguing that your "finite" function is even less finite?

Actually, you did.

For a diffraction-limited lens, all those small contributions lead to a "Chinese Hat" modulation transfer function, which is essentially conical at the origin. The MTF drops almost linearly at low spatial frequencies, with a singularity in the Laplacian at the origin. In contrast, the Gaussian MTF has zero slope at the origin, and much less attenuation of low spatial frequencies.
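For reference, the diffraction-limited ("Chinese hat") MTF of a circular aperture, with cutoff frequency ν_c = 1/(λN), and the contrasting Gaussian MTF are (standard results, written out here for concreteness):

```latex
% Diffraction-limited MTF for a circular aperture, cutoff \nu_c = 1/(\lambda N):
\mathrm{MTF}_{\mathrm{diff}}(\nu)
  = \frac{2}{\pi}\left[\arccos\frac{\nu}{\nu_c}
    - \frac{\nu}{\nu_c}\sqrt{1-\left(\frac{\nu}{\nu_c}\right)^{2}}\,\right]
  \;\approx\; 1-\frac{4}{\pi}\,\frac{\nu}{\nu_c}
  \qquad (\nu \ll \nu_c)
% versus the Gaussian MTF, which has zero slope at the origin:
\mathrm{MTF}_{\mathrm{gauss}}(\nu) = e^{-2\pi^{2}\sigma^{2}\nu^{2}}
  \;\approx\; 1-2\pi^{2}\sigma^{2}\nu^{2}
```

The non-zero slope of the first expansion is what makes the Laplacian singular at the origin.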

As scary as this hat looks, its singularity is due to signals which can be ignored.

Bad form.

Agree. I prefer Western hats.

Truncating any infinite part of the PSF, no matter how small it is, will make the hat nice and round.

And then you don't have a good estimate of the transfer function of what you're measuring.

You do not have it anyway.

I find such dependence on the details of the measurement and calculation region deeply unsatisfactory for what is supposed to be a "unique blur measure".

It might be, or it might not, but there is no apparent conflict. One can argue that you can fix the size of the PSF to some reasonable value and then you get your measure. Then you fix it to another reasonable one, and you get a different measure. So what? It might be just like switching from meters to feet, or not.

There is an exact translation between m and ft. There is not an exact translation between different window sizes for this because it depends on the signal in question.

No, it depends on how I fix the two cutoffs. Exactly as with m and ft.

It is unique once you agree on that approximation,

If you assume (know a priori) the PSF you are measuring, there is no reason to do the measurement. I would never agree to that approximation.

You do not need to know the PSF, you need to know a certain bound.
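Before moving on, a minimal numerical sketch of the point of contention (my own toy calculation; I assume a jinc² profile with its first zero at r ≈ 1.22 in units of λN, and a Gaussian of roughly matched width). The truncated second moment of the Airy pattern keeps growing with the cutoff radius, while the Gaussian's saturates:

```python
# Minimal sketch: normalized second moment of a truncated PSF as a function
# of the truncation radius R.  The Airy (jinc^2) moment grows roughly
# linearly in R; the Gaussian moment saturates once R exceeds a few sigma.
import numpy as np
from scipy.special import j1

r = np.linspace(1e-6, 100.0, 1_000_000)

x = np.pi * r                          # scaled so the first zero sits at r ~ 1.22
airy = (2.0 * j1(x) / x) ** 2
gauss = np.exp(-r**2 / (2 * 0.42**2))  # roughly matched full width at half maximum

for name, psf in (("airy", airy), ("gauss", gauss)):
    w = psf * 2.0 * np.pi * r          # radial area weight for a circular PSF
    for R in (5, 10, 20, 50, 100):
        m = r <= R
        m2 = (r[m]**2 * w[m]).sum() / w[m].sum()
        print(f"{name:5s} R={R:3d}  second moment = {m2:8.3f}")
```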

J A C S Forum Pro • Posts: 15,236
Re: Dynamic range for point, line & step subjects

alanr0 wrote:

J A C S wrote:

alanr0 wrote:

<snip>

The second moment integral increases steadily with outer radius, with no obvious choice of cut-off point,

Yes, it does. See above and my earlier posts.

let alone one which gives a variance comparable with that of the Gaussian - which has an almost identical full width at half maximum.

Good but why is that a problem? See below.

Are you suggesting we choose a different integration limit for each measurement?

Not different, but finite. Yes, if you want to push the limits, you can bring up the f/16 case. There is some limit which should work reasonably well up to any real-life f-stop, say f/22 or so.

With a fixed radius, if we truncate outside the 4th ring at F/4, we may only capture 2 rings at F/8, with an even tighter crop at F/16. The same lens might be aberration-limited at F/2, with a completely different PSF shape. How much defocus can we tolerate with a fixed radius?

See above. Choose some large enough radius and fix it.

Fig. 4. Dependence of second moment calculation on outer radius of integration for Gaussian PSF and Airy disk.

All this growth will disappear because it comes from multiplying small values which have to be truncated by large ones (the weight x^2+y^2) and then integrating, which introduces another large factor. This is, roughly speaking, noise amplification.

Not to the extent you imply when one considers more usual images.

You object to integrating over the outer rings of the Airy disk PSF where the intensity falls below 1/4000 of the peak, but that is missing the point. We are usually more interested in lines and edges when it comes to perceived sharpness. We calculate the line spread function by integrating the point spread function along a line. For a broader feature, we integrate contributions over the width of the feature. At a distance where the PSF intensity is 10 stops down from the centre of an isolated point source, we might find that the intensity is a percent or so at the same distance from a bright line, rising to a few percent at the same distance from a step rise in intensity.

OK, this is the point I expected somebody to make. I did not see this when I was writing the text above. Now - about the image I mentioned a few days ago. There was a house with Xmas lights, quite a few blown highlights, and a really dark sky. Yet, I was able to measure 2^(-12) below saturation or so, and I posted my observations then. Iliah, who started the thread (his point was the significance of the flare), measured EV=-11 on his image. This is a bit below the typical DR of a normally processed image and will generically overlap with a much brighter object.

In "typical" images, I expect the flare and scattering by the air (which can be thought of as a part of the image) to play much more significant role than, say, the 20th Airy disk due to diffraction.

Let's look at intensities in the neighbourhood of point, line and step sources, against an ideal black background.

Fig. 5. Point and line spread functions plus step response for diffraction-limited imaging at F/4 and 555 nm.

Now on a log scale for a closer look at the shadow region.

Fig. 6. Point and line spread functions plus step response for diffraction-limited imaging at F/4 and 555 nm.

For the point spread function, the intensity of the 5th diffraction ring is 0.0004, roughly 11 stops below the peak intensity. Descending into the noise.

For the line spread function, the intensity of the 5th ring is around 0.003 (-8.4 stops).

The step response at the same radius is 0.009 (-7 stops), which should be measurable with useful accuracy.

The step response shows a smooth fall in intensity out to 50 microns and beyond. 0.002 of peak intensity is certainly measurable with a back-lit target such as Jim Kasson's razor blade, though it will be difficult to measure reflective targets printed on paper (I don't know if vantablack toner is available yet for laser printers).

I am not sure what line and step responses are, but those numbers look small enough to me, and this is the 5th ring only. What is your point?

Regarding truncation of the integration region:

For a diffraction-limited lens, all those small contributions lead to a "Chinese Hat" modulation transfer function, which is essentially conical at the origin. The MTF drops almost linearly at low spatial frequencies, with a singularity in the Laplacian at the origin. In contrast, the Gaussian MTF has zero slope at the origin, and much less attenuation of low spatial frequencies.

As scary as this hat looks, its singularity is due to signals which can be ignored. Truncating any infinite part of the PSF, no matter how small it is, will make the hat nice and round.

But now we have a measurement which depends too much on arbitrarily selected test conditions and calculation parameters.

It does already. Your PSF is truncated already. You do not have an infinite sensor.

Ideally if one finds Gaussian and Airy Disk PSF's delivering similar subjective blur, one would wish them to have similar blur metrics.

Yes, if that was the goal of that paper. But it was not; or at least the theorem does not imply anything like that.

Suppose this is the case. If we now double the radius of both, one would expect the subjective blur to scale in a similar fashion in each case,

It does.

with the second moment scaling as the square of the radius. Unfortunately, at constant integration radius, while the blur metric of the Gaussian would quadruple, we would get a smaller proportional increase in second moment for the Airy disk.

??? By a fixed radius - you mean you integrate a truncated kernel? Then neither one will give you an exact, nice multiple. If not, they both scale in the same way.

Not what I am looking for in a "Unique blur measure"

It is unique with certain properties.

alanr0 Senior Member • Posts: 2,237
Re: Dynamic range for point, line & step subjects

J A C S wrote:

alanr0 wrote:

J A C S wrote:

alanr0 wrote:

<snip>

The second moment integral increases steadily with outer radius, with no obvious choice of cut-off point,

Yes, it does. See above and my earlier posts.

let alone one which gives a variance comparable with that of the Gaussian - which has an almost identical full width at half maximum.

Good but why is that a problem? See below.

Are you suggesting we choose a different integration limit for each measurement?

Not different, but finite. Yes, if you want to push the limits, you can bring up the f/16 case. There is some limit which should work reasonably well up to any real-life f-stop, say f/22 or so.

With a fixed radius, if we truncate outside the 4th ring at F/4, we may only capture 2 rings at F/8, with an even tighter crop at F/16. The same lens might be aberration-limited at F/2, with a completely different PSF shape. How much defocus can we tolerate with a fixed radius?

See above. Choose some large enough radius and fix it.

Fig. 4. Dependence of second moment calculation on outer radius of integration for Gaussian PSF and Airy disk.

All this growth will disappear because it comes from multiplying small values which have to be truncated by large ones (the weight x^2+y^2) and then integrating, which introduces another large factor. This is, roughly speaking, noise amplification.

Not to the extent you imply when one considers more usual images.

You object to integrating over the outer rings of the Airy disk PSF where the intensity falls below 1/4000 of the peak, but that is missing the point. We are usually more interested in lines and edges when it comes to perceived sharpness. We calculate the line spread function by integrating the point spread function along a line. For a broader feature, we integrate contributions over the width of the feature. At a distance where the PSF intensity is 10 stops down from the centre of an isolated point source, we might find that the intensity is a percent or so at the same distance from a bright line, rising to a few percent at the same distance from a step rise in intensity.

OK, this is the point I expected somebody to make. I did not see this when I was writing the text above. Now - about the image I mentioned a few days ago. There was a house with Xmas lights, quite a few blown highlights, and a really dark sky. Yet, I was able to measure 2^(-12) below saturation or so, and I posted my observations then. Iliah, who started the thread (his point was the significance of the flare), measured EV=-11 on his image. This is a bit below the typical DR of a normally processed image and will generically overlap with a much brighter object.

In "typical" images, I expect the flare and scattering by the air (which can be thought of as a part of the image) to play much more significant role than, say, the 20th Airy disk due to diffraction.

Let's look at intensities in the neighbourhood of point, line and step sources, against an ideal black background.

Fig. 5. Point and line spread functions plus step response for diffraction-limited imaging at F/4 and 555 nm.

Now on a log scale for a closer look at the shadow region.

Fig. 6. Point and line spread functions plus step response for diffraction-limited imaging at F/4 and 555 nm.

For the point spread function, the intensity of the 5th diffraction ring is 0.0004, roughly 11 stops below the peak intensity. Descending into the noise.

For the line spread function, the intensity of the 5th ring is around 0.003 (-8.4 stops).

The step response at the same radius is 0.009 (-7 stops), which should be measurable with useful accuracy.

The step response shows a smooth fall in intensity out to 50 microns and beyond. 0.002 of peak intensity is certainly measurable with a back-lit target such as Jim Kasson's razor blade, though it will be difficult to measure reflective targets printed on paper (I don't know if vantablack toner is available yet for laser printers).

I am not sure what line and step responses are, but those numbers look small enough to me, and this is the 5th ring only. What is your point?

Line spread function is the intensity profile measured normal to an extended line source.

Step response (also referred to as Edge Spread Function) is the convolution of the line spread function with a step function. This simulates the image used for a slanted edge MTF measurement.

My intent was to quantify the impact of the Airy disk's PSF for a subject with extended uniformly bright and dark areas, prompted by your concerns regarding the relative importance of diffraction and flare.
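To make these definitions concrete, here is a rough numerical sketch of the three responses at the radius of the 5th bright ring (a pure jinc² PSF at F/4 and 555 nm; my reconstruction, not the exact calculation behind Figs. 5-6):

```python
# Rough sketch: point, line and step responses derived from the same Airy PSF.
# F/4 at 555 nm: first Airy zero at 1.22 * lambda * N ~ 2.71 microns.
import numpy as np
from scipy.special import j1

lam, N = 0.555, 4.0                          # wavelength (um) and f-number
x = np.linspace(-40.0, 40.0, 2001)           # microns, 0.04 um sampling
X, Y = np.meshgrid(x, x, sparse=True)
arg = np.pi * np.hypot(X, Y) / (lam * N) + 1e-12
psf = (2.0 * j1(arg) / arg) ** 2             # Airy intensity pattern

lsf = psf.sum(axis=0)                        # integrate the PSF along a line
esf = np.cumsum(lsf)                         # integrate the LSF for the edge

r5 = 17.96 * lam * N / np.pi                 # ~12.7 um: 5th bright ring
i = np.argmin(np.abs(x - r5))                # r5 on the bright-ring side
k = np.argmin(np.abs(x + r5))                # r5 into the dark side of the edge
print("point:", psf[1000, i] / psf[1000].max())   # of order 4e-4 (~ -11 stops)
print("line :", lsf[i] / lsf.max())               # of order 3e-3 (~ -8 stops)
print("step :", esf[k] / esf[-1])                 # of order 1e-2 (~ -7 stops)
```

The grid truncates the line integration at ±40 µm, so the tails are slightly underestimated, but the point/line/step ordering comes out clearly.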

Regarding truncation of the integration region:

For a diffraction-limited lens, all those small contributions lead to a "Chinese Hat" modulation transfer function, which is essentially conical at the origin. The MTF drops almost linearly at low spatial frequencies, with a singularity in the Laplacian at the origin. In contrast, the Gaussian MTF has zero slope at the origin, and much less attenuation of low spatial frequencies.

As scary as this hat looks, its singularity is due to signals which can be ignored. Truncating any infinite part of the PSF, no matter how small it is, will make the hat nice and round.

But now we have a measurement which depends too much on arbitrarily selected test conditions and calculation parameters.

It does already. Your PSF is truncated already. You do not have an infinite sensor.

I don't have a problem with the PSF extending over the entire sensor.

The large extent of the PSF is a key to my objection to the Buzzi and Guichard paper. Their theorem 1 requires that the blur kernel belong to the Schwartz space, and I do not believe this is valid in any meaningful way for the jinc² function.

You are the one proposing that the PSF should be truncated at some radius, such as 4, 10 or 20 times the Airy disk radius to give a reduced second moment and modify the zero frequency singularity of the Laplacian of the MTF.

The blur measure of the Gaussian in my figures (1 to 4) is close to unity, provided the truncation radius is 5 or greater. In the same figures, the Airy disk has a central peak with a similar width at half-maximum intensity. If we truncate at 12 mm radius, the blur measure of the Airy disk is 2715 times larger than that of the Gaussian.

Ideally if one finds Gaussian and Airy Disk PSF's delivering similar subjective blur, one would wish them to have similar blur metrics.

Yes, if that was the goal of that paper. But it was not; or at least the theorem does not imply anything like that.

Suppose this is the case. If we now double the radius of both, one would expect the subjective blur to scale in a similar fashion in each case,

It does.

with the second moment scaling as the square of the radius. Unfortunately, at constant integration radius, while the blur metric of the Gaussian would quadruple, we would get a smaller proportional increase in second moment for the Airy disk.

??? By a fixed radius - you mean you integrate a truncated kernel? Then neither one will give you an exact, nice multiple. If not, they both scale in the same way.

Keep the size of the truncated kernel constant, and apply the same radial scaling to both Gaussian and Airy disk PSF.

Provided the truncation radius is more than 5 or 6 times the standard deviation of the Gaussian PSF, its blur measure is substantially independent of the integration limit (figure 4 above).

For the Airy disk, this is not the case. The central peak contributes around 61% of the blur measure of the Gaussian with equal zeroth moment, and the outer rings make an additional contribution to the second moment which increases more or less linearly (monotonically, with oscillations) as the truncation radius increases.

For fixed truncation radius 25, start with Airy radius 2.7 and Gaussian sdev=1, then scale Airy disk and Gaussian radii by the same size factor in each line of the table below:

size | gauss_blur | airy_blur | ratio airy_blur / gauss_blur
1.000 | 1.00000 | 5.72476 | 5.72476
2.000 | 4.00000 | 11.44821 | 2.86205
3.000 | 9.00000 | 17.85249 | 1.98361
4.000 | 16.00000 | 24.74498 | 1.54656
5.000 | 24.99874 | 30.44975 | 1.21805
6.000 | 35.94080 | 39.77807 | 1.10677
7.000 | 48.38571 | 41.78761 | 0.86364
8.000 | 61.14776 | 46.74666 | 0.76449

Ratio of Airy blur to Gauss blur measure decreases when both PSF's are expanded radially by the same factor.
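For anyone who wants to reproduce the flavour of this table, here is a sketch of the calculation as I reconstruct it (my assumptions: the blur measure is the per-axis variance, i.e. half the radial second moment over the zeroth moment, which reproduces gauss_blur = sdev² inside the truncation radius; the exact Airy numbers may differ slightly):

```python
# Sketch: blur measure of truncated Gaussian and Airy PSFs, truncation radius
# fixed at 25 while both kernels are scaled radially by the same size factor.
import numpy as np
from scipy.special import j1

r = np.linspace(1e-6, 25.0, 250_000)         # radial grid up to the cutoff
w = 2.0 * np.pi * r                          # area weight for circular symmetry

def blur(psf):
    # per-axis variance: half the radial second moment over the zeroth moment
    return 0.5 * np.sum(r**2 * psf * w) / np.sum(psf * w)

print("size  gauss_blur  airy_blur   ratio")
for size in range(1, 9):
    gauss = np.exp(-r**2 / (2.0 * size**2))  # sdev = 1 * size
    xa = 3.8317 * r / (2.7 * size)           # first Airy zero at 2.7 * size
    airy = (2.0 * j1(xa) / xa) ** 2
    g, a = blur(gauss), blur(airy)
    print(f"{size:4d}  {g:10.5f}  {a:9.5f}  {a/g:7.5f}")
```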

J A C S Forum Pro • Posts: 15,236
Re: Dynamic range for point, line & step subjects

alanr0 wrote:

J A C S wrote:

alanr0 wrote:

J A C S wrote:

alanr0 wrote:

<snip>

The second moment integral increases steadily with outer radius, with no obvious choice of cut-off point,

Yes, it does. See above and my earlier posts.

let alone one which gives a variance comparable with that of the Gaussian - which has an almost identical full width at half maximum.

Good but why is that a problem? See below.

Are you suggesting we choose a different integration limit for each measurement?

Not different, but finite. Yes, if you want to push the limits, you can bring up the f/16 case. There is some limit which should work reasonably well up to any real-life f-stop, say f/22 or so.

With a fixed radius, if we truncate outside the 4th ring at F/4, we may only capture 2 rings at F/8, with an even tighter crop at F/16. The same lens might be aberration-limited at F/2, with a completely different PSF shape. How much defocus can we tolerate with a fixed radius?

See above. Choose some large enough radius and fix it.

Fig. 4. Dependence of second moment calculation on outer radius of integration for Gaussian PSF and Airy disk.

All this growth will disappear because it comes from multiplying small values which have to be truncated by large ones (the weight x^2+y^2) and then integrating, which introduces another large factor. This is, roughly speaking, noise amplification.

Not to the extent you imply when one considers more usual images.

You object to integrating over the outer rings of the Airy disk PSF where the intensity falls below 1/4000 of the peak, but that is missing the point. We are usually more interested in lines and edges when it comes to perceived sharpness. We calculate the line spread function by integrating the point spread function along a line. For a broader feature, we integrate contributions over the width of the feature. At a distance where the PSF intensity is 10 stops down from the centre of an isolated point source, we might find that the intensity is a percent or so at the same distance from a bright line, rising to a few percent at the same distance from a step rise in intensity.

OK, this is the point I expected somebody to make. I did not see this when I was writing the text above. Now - about the image I mentioned a few days ago. There was a house with Xmas lights, quite a few blown highlights, and a really dark sky. Yet, I was able to measure 2^(-12) below saturation or so, and I posted my observations then. Iliah, who started the thread (his point was the significance of the flare), measured EV=-11 on his image. This is a bit below the typical DR of a normally processed image and will generically overlap with a much brighter object.

In "typical" images, I expect the flare and scattering by the air (which can be thought of as a part of the image) to play much more significant role than, say, the 20th Airy disk due to diffraction.

Let's look at intensities in the neighbourhood of point, line and step sources, against an ideal black background.

Fig. 5. Point and line spread functions plus step response for diffraction-limited imaging at F/4 and 555 nm.

Now on a log scale for a closer look at the shadow region.

Fig. 6. Point and line spread functions plus step response for diffraction-limited imaging at F/4 and 555 nm.

For the point spread function, the intensity of the 5th diffraction ring is 0.0004, roughly 11 stops below the peak intensity. Descending into the noise.

For the line spread function, the intensity of the 5th ring is around 0.003 (-8.4 stops).

The step response at the same radius is 0.009 (-7 stops), which should be measurable with useful accuracy.

The step response shows a smooth fall in intensity out to 50 microns and beyond. 0.002 of peak intensity is certainly measurable with a back-lit target such as Jim Kasson's razor blade, though it will be difficult to measure reflective targets printed on paper (I don't know if vantablack toner is available yet for laser printers).

I am not sure what line and step responses are, but those numbers look small enough to me, and this is the 5th ring only. What is your point?

Line spread function is the intensity profile measured normal to an extended line source.

Step response (also referred to as Edge Spread Function) is the convolution of the line spread function with a step function. This simulates the image used for a slanted edge MTF measurement.

My intent was to quantify the impact of the Airy disk's PSF for a subject with extended uniformly bright and dark areas, prompted by your concerns regarding the relative importance of diffraction and flare.

Regarding truncation of the integration region:

For a diffraction-limited lens, all those small contributions lead to a "Chinese Hat" modulation transfer function, which is essentially conical at the origin. The MTF drops almost linearly at low spatial frequencies, with a singularity in the Laplacian at the origin. In contrast, the Gaussian MTF has zero slope at the origin, and much less attenuation of low spatial frequencies.

As scary as this hat looks, its singularity is due to signals which can be ignored. Truncating any infinite part of the PSF, no matter how small it is, will make the hat nice and round.

But now we have a measurement which depends too much on arbitrarily selected test conditions and calculation parameters.

It does already. Your PSF is truncated already. You do not have an infinite sensor.

I don't have a problem with the PSF extending over the entire sensor.

The large extent of the PSF is a key to my objection to the Buzzi and Guichard paper. Their theorem 1 requires that the blur kernel belong to the Schwartz space, and I do not believe this is valid in any meaningful way for the jinc² function.

Of course not, and we knew that already. On the other hand, it can be approximated by such functions as closely as you want. The problem there is that the second moment blows up under such an approximation, but we know that, since the integral is divergent.

You are the one proposing that the PSF should be truncated at some radius, such as 4, 10 or 20 times the Airy disk radius to give a reduced second moment and modify the zero frequency singularity of the Laplacian of the MTF.

The blur measure of the Gaussian in my figures (1 to 4) is close to unity, provided the truncation radius is 5 or greater. In the same figures, the Airy disk has a central peak with a similar width at half-maximum intensity. If we truncate at 12 mm radius, the blur measure of the Airy disk is 2715 times larger than that of the Gaussian.

Yes, so? 12 mm is too extreme. This is another way of saying that the integral is divergent, but we knew that. Most of what you integrate to get this is so small that it should be discarded. My suggestion was to truncate over the smallest disk outside of which the tail does not make a visible difference, most of the time - and that would not be 1/2 of the height of the sensor.

Ideally if one finds Gaussian and Airy Disk PSF's delivering similar subjective blur, one would wish them to have similar blur metrics.

Yes, if that was the goal of that paper. But it was not; or at least the theorem does not imply anything like that.

Suppose this is the case. If we now double the radius of both, one would expect the subjective blur to scale in a similar fashion in each case,

It does.

with the second moment scaling as the square of the radius. Unfortunately, at constant integration radius, while the blur metric of the Gaussian would quadruple, we would get a smaller proportional increase in second moment for the Airy disk.

??? By a fixed radius - you mean you integrate a truncated kernel? Then neither one will give you an exact, nice multiple. If not, they both scale in the same way.

Keep the size of the truncated kernel constant, and apply the same radial scaling to both Gaussian and Airy disk PSF.

Provided the truncation radius is more than 5 or 6 times the standard deviation of the Gaussian PSF, its blur measure is substantially independent of the integration limit (figure 4 above).

For the Airy disk, this is not the case.

Of course, the integral is divergent. We established that already.

The central peak contributes around 61% of the blur measure of the Gaussian with equal zeroth moment, and the outer rings make an additional contribution to the second moment which increases more or less linearly (monotonically, with oscillations) as the truncation radius increases.

For fixed truncation radius 25, start with Airy radius 2.7 and Gaussian sdev=1, then scale Airy disk and Gaussian radii by the same size factor in each line of the table below:

size | gauss_blur | airy_blur | ratio airy_blur / gauss_blur
1.000 | 1.00000 | 5.72476 | 5.72476
2.000 | 4.00000 | 11.44821 | 2.86205
3.000 | 9.00000 | 17.85249 | 1.98361
4.000 | 16.00000 | 24.74498 | 1.54656
5.000 | 24.99874 | 30.44975 | 1.21805
6.000 | 35.94080 | 39.77807 | 1.10677
7.000 | 48.38571 | 41.78761 | 0.86364
8.000 | 61.14776 | 46.74666 | 0.76449

Ratio of Airy blur to Gauss blur measure decreases when both PSF's are expanded radially by the same factor.

EDIT: Did you normalize the Airy blur to unit area?

Good. The Gaussian is "almost" compactly supported while jinc^2 is not and leads to a divergent integral. We knew that. On the other hand, with size 8, your truncation radius is comparable to the Airy radius and it is obviously too small.

I am not sure what you are trying to prove. I did not write this paper. The measure is a logical consequence of the assumptions. Good or bad, it is what it is. Even you said that one of the properties of that measure (not an assumption, but something that follows from them) is reasonable. Take it as a negative result, if you do not like the outcome. It tells you that you must give up some of the assumptions then.

It seems to me that if you give up the Schwartz class assumption (which is obviously too strong; a finite second moment should be enough), it follows that there is no such measure satisfying those conditions. This is interesting to know. The proof should go along these lines: if such a measure exists over the Schwartz class, it is the second moment; but then, for more general blur kernels, it would be infinite.
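For the record, the linear divergence follows directly from the standard large-argument Bessel asymptotics (in units where the first Airy zero is at r ≈ 1.22):

```latex
% Large-r tail of the Airy intensity and the truncated second moment:
I(r) = \left(\frac{2 J_1(\pi r)}{\pi r}\right)^{\!2}
     \sim \frac{8\cos^{2}\!\big(\pi r - \tfrac{3\pi}{4}\big)}{\pi^{4} r^{3}}
     \qquad (r \to \infty),
\qquad
\int_{0}^{R} r^{2} I(r)\, 2\pi r \,\mathrm{d}r
     \sim \int^{R} \frac{16\cos^{2}(\pi r - 3\pi/4)}{\pi^{3}}\,\mathrm{d}r
     \approx \frac{8R}{\pi^{3}} \;\to\; \infty .
```

So the second moment of the truncated kernel grows linearly in the cutoff radius, which is exactly the behaviour of Fig. 4.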

OP AiryDiscus Senior Member • Posts: 1,872
Re: MTF software

J A C S wrote:

I misread that. I was talking about the need to differentiate the blur of the edge.

It is noisy, but the window size does not matter enough to care about.
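A toy illustration of why that differentiation step is noisy (synthetic logistic edge, additive noise; all numbers are made up for illustration):

```python
# Toy illustration: differentiating a noisy edge profile (ESF -> LSF)
# amplifies high-frequency noise, even when the edge itself looks clean.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-10.0, 10.0, 2001)               # 0.01 spacing
esf = 1.0 / (1.0 + np.exp(-2.0 * x))             # smooth synthetic edge
esf_noisy = esf + rng.normal(0.0, 1e-3, x.size)  # edge SNR ~ 1000

lsf = np.gradient(esf_noisy, x)                  # the derivative step
noise = lsf[:500].std()                          # flat region: pure noise
print(f"LSF peak {lsf.max():.2f}, noise {noise:.3f}")  # SNR collapses to ~7
```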

alanr0 Senior Member • Posts: 2,237
Buzzi and Guichard & BxU blur measure

J A C S wrote:

alanr0 wrote:

J A C S wrote:

alanr0 wrote:

<snip>

Regarding truncation of the integration region:

For a diffraction-limited lens, all those small contributions lead to a "Chinese Hat" modulation transfer function, which is essentially conical at the origin. The MTF drops almost linearly at low spatial frequencies, with a singularity in the Laplacian at the origin. In contrast, the Gaussian MTF has zero slope at the origin, and much less attenuation of low spatial frequencies.

As scary as this hat looks, its singularity is due to signals which can be ignored. Truncating any infinite part of the PSF, no matter how small it is, will make the hat nice and round.

But now we have a measurement which depends too much on arbitrarily selected test conditions and calculation parameters.

It does already. Your PSF is truncated already. You do not have an infinite sensor.

I don't have a problem with the PSF extending over the entire sensor.

The large extent of the PSF is a key to my objection to the Buzzi and Guichard paper. Their theorem 1 requires that the blur kernel belong to the Schwartz space, and I do not believe this is valid in any meaningful way for the jinc² function.

Of course not, and we knew that already. On the other hand, it can be approximated by such functions as closely as you want. The problem there is that the second moment blows up under such an approximation, but we know that, since the integral is divergent.

That is rather the point. The second moment is the "Unique blur measure".

You are the one proposing that the PSF should be truncated at some radius, such as 4, 10 or 20 times the Airy disk radius to give a reduced second moment and modify the zero frequency singularity of the Laplacian of the MTF.

The blur measure of the Gaussian in my figures (1 to 4) is close to unity, provided the truncation radius is 5 or greater. In the same figures, the Airy disk has a central peak with a similar width at half-maximum intensity. If we truncate at 12 mm radius, the blur measure of the Airy disk is 2715 times larger than that of the Gaussian.

Yes, so? 12 mm is too extreme. This is another way of saying that the integral is divergent, but we knew that. Most of what you integrate to get this is so small that it should be discarded. My suggestion was to truncate over the smallest disk outside of which the tail does not make a visible difference, most of the time - and that would not be 1/2 of the height of the sensor.

But where?

Is there a truncation radius which "does not make a visible difference" for a useful range of real-world blur kernels?

Ideally if one finds Gaussian and Airy Disk PSF's delivering similar subjective blur, one would wish them to have similar blur metrics.

Yes, if that was the goal of that paper. But it was not; or at least the theorem does not imply anything like that.

Suppose this is the case. If we now double the radius of both, one would expect the subjective blur to scale in a similar fashion in each case,

It does.

with the second moment scaling as the square of the radius. Unfortunately, at constant integration radius, while the blur metric of the Gaussian would quadruple, we would get a smaller proportional increase in second moment for the Airy disk.

??? By a fixed radius - you mean you integrate a truncated kernel? Then neither one will give you an exact, nice multiple. If not, they both scale in the same way.

Keep the size of the truncated kernel constant, and apply the same radial scaling to both Gaussian and Airy disk PSF.

Provided the truncation radius is more than 5 or 6 times the standard deviation of the Gaussian PSF, its blur measure is substantially independent of the integration limit (figure 4 above).

For the Airy disk, this is not the case.

Of course, the integral is divergent. We established that already.

The central peak contributes around 61% of the blur measure of the Gaussian with equal zeroth moment, and the outer rings make an additional contribution to the second moment which increases more or less linearly (monotonically, with oscillations) as the truncation radius increases.

For fixed truncation radius 25, start with Airy radius 2.7 and Gaussian sdev=1, then scale Airy disk and Gaussian radii by the same size factor in each line of the table below:

size | gauss_blur | airy_blur | ratio airy_blur / gauss_blur
1.000 | 1.00000 | 5.72476 | 5.72476
2.000 | 4.00000 | 11.44821 | 2.86205
3.000 | 9.00000 | 17.85249 | 1.98361
4.000 | 16.00000 | 24.74498 | 1.54656
5.000 | 24.99874 | 30.44975 | 1.21805
6.000 | 35.94080 | 39.77807 | 1.10677
7.000 | 48.38571 | 41.78761 | 0.86364
8.000 | 61.14776 | 46.74666 | 0.76449

Ratio of Airy blur to Gauss blur measure decreases when both PSF's are expanded radially by the same factor.

EDIT: Did you normalize the Airy blur to unit area?

Airy_blur = second moment of PSF / zeroth moment of PSF

Assuming that's what you mean. PSF truncated at radius 25, which is a 25 micron radius for F/4 imaging at 555 nm.

Good. The Gaussian is "almost" compactly supported while jinc^2 is not and leads to a divergent integral. We knew that. On the other hand, with size 8, your truncation radius is comparable to the Airy radius and it is obviously too small.

I am not sure what you are trying to prove. I did not write this paper. The measure is a logical consequence of the assumptions. Good or bad, it is what it is. Even you said that one of the properties of that measure (not an assumption, but something that follows from them) is reasonable. Take it as a negative result, if you do not like the outcome. It tells you that you must give up some of the assumptions then.

We do seem to be running out of things to disagree on.

For me, the most obvious point of contention is whether there is a sensible truncation radius which gives a useful measure of blur over a range of blur kernels. For DSLR photography that should include essentially diffraction-limited images between f/4 and f/22 (and preferably between f/2.8 and f/32), moderate levels of defocus and spherical aberration, and corner aberrations for low end travel zooms and wide angle lenses at maximum aperture.

We need to cope with Airy disks whose radius varies by at least a factor of 6, and preferably more to accommodate defocus and optical aberrations.

A moderately large fixed truncation radius is fine once the blur looks sufficiently close to Gaussian - in which case truncation may not be necessary, as the size of the integration region is no longer critical. This can arise either because something close to a genuine Gaussian blur has been applied and dominates, or because multiple blurs have been applied so that the central limit theorem kicks in.

It seems to me that if you give up the Schwartz class assumption (which is obviously too strong; a finite second moment should be enough), it follows that there is no such measure satisfying those conditions. This is interesting to know. The proof should go along these lines: if such a measure exists over the Schwartz class, it is the second moment; but then, for more general blur kernels, it would be infinite.

Fair point. A large part of my concern is that they do not address these issues.

Buzzi and Guichard make it clear in the introduction that their metric is intended to apply to optical imaging chains in general, and to cameras specifically.

They present it as an objective measure which is related to the perceived level of blur.

They assume without any attempt at justification that the blur kernel will have a finite second moment.

"The transform of a point through an imaging chain is expected to have essentially a finite size. Thus we can assume that the kernel has finite moments of all orders."

As agreed above, this is invalid for an Airy disk PSF.  I suspect it will be equally invalid for most photographic images where there is an abrupt drop in intensity at the edge of the exit pupil.

I don't claim to fully understand all the details of their mathematical presentation, so I may be missing something in the extension to more general blur kernels.

In the conclusion they state:

"But our result is independent of any assumption on the blur kernel (which we do not need to estimate and which maybe far from Gaussian). The uniqueness affords us a bridge between our mathematical approach and psychophysical facts".

I may be over-interpreting here, but there seems to be an unstated implication of a direct relationship between kernel variance and perceived blur, independent of the shape of the blur kernel. This clearly does not hold where the second moment integral diverges, and it is questionable even for shapes such as pill-box and cone.

The second moment blur measure may be unique (where it exists), but the correlation between it and the perceived blur level will depend on the profile of the blur kernel.

They further claim that perceived blur is especially sensitive to attenuation of low frequencies.  As far as I can tell, this conclusion is not supported by the experiments they describe.

My understanding is that perceived sharpness is dominated by moderate spatial frequencies. Indeed, the human visual system is notoriously insensitive to the lowest spatial frequencies.

Their "low pass filter" leaves low to mid frequencies intact up to 20% of maximum spatial frequency.

Their "Gaussian filter" has little impact at low frequencies, and strongly attenuates frequencies around 20% and higher.

Their "high pass filter" is more of a mid-band attenuator. It follows the Gaussian blur spectral response to 20% of maximum frequency, reaches a minimum at 30%, then rises, but never exceeds unity at high frequencies.

I did try to dig up more about how the BxU measure is calculated, but couldn't find anything on the DxO site.  There is an overview here, but the link to DxO Analyser is broken.

Imaging Resource has some background, confirming that the target patches used are fairly small, but with little detail.

Cheers.


Alan Robinson

alanr0 Senior Member • Posts: 2,237
Microcontrast or Texture blur

At the risk of derailing - or is it re-railing the discourse ...

We don't seem to have a good definition of microcontrast, but there is an IEEE standard for texture blur. Is this the same thing?

CPIQ is a standard for Camera Phone Image Quality, which is not something I have taken much interest in, but DxO and others are investing resources into this.

IEEE P1858 CPIQ Overview

Some background here: Cao: Dead leaves model

DxO, EI 2018: Quantitative measurement of contrast, texture, color, and noise for digital photography of high dynamic range scenes

Imatest: Spilled Coins, Dead Leaves, and Random Chart Analysis

DxO Scientific publications

And finally, Is BxU still used?

Enjoy.


Alan Robinson

Joofa Senior Member • Posts: 2,655
Re: Buzzi and Guichard & BxU blur measure

alanr0 wrote:

They assume without any attempt at justification that the blur kernel will have a finite second moment.

The blur kernel also represents the pdf of the position of a photon in the image plane. With this understanding, the second moment measures the "average spread", and the usual calculation yields an infinite average spread for the diffraction pattern. However, IMHO, that is not the right way to look at the state of affairs. The sensor is limited in physical size. Only those photons that make their way to the sensor are part of an image. Hence, it appears to me that the right pdf to use here is the conditional pdf - conditioned on the event that the photon landed on the sensor. This pdf is essentially a truncated version of the original pdf, appropriately normalized - and it does have a finite second moment.
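In symbols (my notation): if p is the photon-position pdf and S the sensor region, then

```latex
% pdf conditioned on the photon landing inside the (bounded) sensor region S:
p_S(\mathbf{x}) \;=\; \frac{p(\mathbf{x})\,\mathbf{1}_{S}(\mathbf{x})}
                           {\int_{S} p(\mathbf{u})\,\mathrm{d}\mathbf{u}},
\qquad
\int \|\mathbf{x}\|^{2}\, p_S(\mathbf{x})\,\mathrm{d}\mathbf{x} \;<\; \infty
\quad\text{because } S \text{ is bounded.}
```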

Also, on a different line of thought, this whole thread is actually a waste of time. There is no point writing message after message on the MTF interpretation of a blur measure when it does not apply to natural images in a practical way, given the current tools that are out there. One can argue that MTF is what the authors of the papers sought in one way or another, but then one should be able to see that, as said, MTF is not the right tool for analyzing natural images.

BobORama Senior Member • Posts: 2,722
Re: BxU blur estimate: PSF variance

AiryDiscus wrote:

J A C S wrote:

I do not know, and this is an honest answer. According to AD, you can barely see anything beyond the 4th ring of the Airy disk. His estimate of the essential support is even smaller than what I would think, and yet he sounds very angry.

I am not very angry, but I wish you would not twist my words. It is not that there is no energy beyond the fourth ring of the Airy disk. It is that, unless your system has spectacularly low wavefront error, the PSF has been sufficiently reshaped by that wavefront error that the region beyond the fourth ring is smooth and lacks ripples.

In my earlier years I hand-figured telescope primaries, some as large as 18" in diameter. Then China, eBay, and marriage ruined everything. Anyway... when testing with Airy patterns, the transverse cross section of the Airy pattern is (much) more diagnostic. You can have a completely "normal"-looking Airy pattern at the focal plane which is really messed up when viewed as a transverse cross section. The process was to rack focus through a few microns of travel and look for asymmetries in the transverse cross section, as well as radial asymmetry of the pattern.

J A C S Forum Pro • Posts: 15,236
Re: Buzzi and Guichard & BxU blur measure

alanr0 wrote:

Of course not, and we knew that already. On the other hand, it can be approximated by such functions as closely as you want. The problem there is that the second moment blows up under such an approximation, but we know that, since the integral is divergent.

That is rather the point. The second moment is the "Unique blur measure".

Incorrect quote.

You are the one proposing that the PSF should be truncated at some radius, such as 4, 10 or 20 times the Airy disk radius to give a reduced second moment and modify the zero frequency singularity of the Laplacian of the MTF.

The blur measure of the Gaussian in my figures (1 to 4) is close to unity, provided the truncation radius is 5 or greater. In the same figures, the Airy disk has a central peak with a similar width at half-maximum intensity. If we truncate at 12 mm radius, the blur measure of the Airy disk is 2715 times larger than that of the Gaussian.

Yes, so? 12 mm is too extreme. This is another way of saying that the integral is divergent, but we knew that. Most of what you integrate to get this is so small that it should be discarded. My suggestion was to truncate over the smallest disk outside of which the tail does not make a visible difference, most of the time - and that would not be 1/2 of the height of the sensor.

But where?

Is there a truncation radius which "does not make a visible difference" for a useful range of real-world blur kernels?

I think the best way is to fix a threshold and choose the radius according to that. It will vary with the f-stop, yes. On the other hand, when you measure things, this is what you do anyway - noise, errors, etc.
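As a sketch of what such a threshold could look like (my own toy criterion, using the classical encircled-energy formula EE(x) = 1 - J0²(x) - J1²(x) with x = πr/(λN); the 99% level is arbitrary):

```python
# Sketch: derive the truncation radius from an energy threshold instead of
# fixing a length.  The radius grows with the f-number, and grows fast as
# epsilon shrinks, because the encircled energy converges only like 1/r.
import numpy as np
from scipy.special import j0, j1

def radius_for_threshold(lam, N, eps=0.01, r_max=2000.0, n=2_000_000):
    r = np.linspace(1e-9, r_max, n)              # microns
    x = np.pi * r / (lam * N)
    ee = 1.0 - j0(x)**2 - j1(x)**2               # encircled energy, Airy pattern
    return r[np.searchsorted(ee, 1.0 - eps)]

for N in (4, 8, 16, 22):
    R = radius_for_threshold(lam=0.555, N=N)
    print(f"f/{N:<2d}: radius capturing 99% of the energy ~ {R:6.0f} um")
```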

We do seem to be running out of things to disagree on.

For me, the most obvious point of contention is whether there is a sensible truncation radius which gives a useful measure of blur over a range of blur kernels. For DSLR photography that should include essentially diffraction-limited images between f/4 and f/22 (and preferably between f/2.8 and f/32), moderate levels of defocus and spherical aberration, and corner aberrations for low end travel zooms and wide angle lenses at maximum aperture.

See above.

We need to cope with Airy disks whose radius varies by at least a factor of 6, and preferably more to accommodate defocus and optical aberrations.

A moderately large fixed truncation radius is fine once the blur looks sufficiently close to Gaussian - in which case truncation may not be necessary, as the size of the integration region is no longer critical. This can arise either because something close to a genuine Gaussian blur has been applied and dominates, or because multiple blurs have been applied so that the central limit theorem kicks in.

Again - the choice should be dictated by the impact this makes on the image. Instead of choosing a radius (on the horizontal axis), choose a level (on the vertical one) which will give you the radius for each kernel.

Note that I am not suggesting that this is a good or a bad measure; neither do the authors. I am just saying that, to good precision, the PSF can be assumed to be of finite (and not huge) support. This is one of their assumptions, and I see nothing wrong with it as a good approximation. It turns out that adding a small but long (still finite) "tail" might increase the measure significantly due to the r^2 factor, without making much visible difference. As I said, it is what it is. If you think that their conditions are reasonable, that is the measure, like it or not. It is unique with those properties.

It seems to me that if you give up the Schwartz class assumption (which is obviously too strong; a finite second moment should be enough), it follows that there is no such measure satisfying those conditions. This is interesting to know. The proof should go along these lines: if such a measure exists over the Schwartz class, it is the second moment; but then, for more general blur kernels, it would be infinite.

Fair point. A large part of my concern is that they do not address these issues.

Buzzi and Guichard make it clear in the introduction that their metric is intended to apply to optical imaging chains in general, and to cameras specifically.

They present it as an objective measure which is related to the perceived level of blur.

This is implicit. They criticize measures based on the very high frequencies and perform some experiments, but they never say that their measure is better. They create that impression but do not claim it directly.

They assume without any attempt at justification that the blur kernel will have a finite second moment.

There are no infinite things around us (well, some models of the universe are infinite in a way but short of that, we deal with finite things).

"The transform of a point through an imaging chain is expected to have essentially a finite size. Thus we can assume that the kernel has finite moments of all orders."

As agreed above, this is invalid for an Airy disk PSF. I suspect it will be equally invalid for most photographic images where there is an abrupt drop in intensity at the edge of the exit pupil.

Yes, there are theorems for that. But all those are models/approximations and should not be taken literally. The same goes for the assumption of finite size.

I don't claim to fully understand all the details of their mathematical presentation, so I may be missing something in the extension to more general blur kernels.

They do not have such an extension. I was just saying what could be done. BTW, I glanced through the proof but did not check it carefully.

In the conclusion they state:

"But our result is independent of any assumption on the blur kernel (which we do not need to estimate and which maybe far from Gaussian). The uniqueness affords us a bridge between our mathematical approach and psychophysical facts".

I may be over-interpreting here, but there seems to be an unstated implication of a direct relationship between kernel variance and perceived blur, independent of the shape of the blur kernel. This clearly does not hold where the second moment integral diverges, and it is questionable even for shapes such as pill-box and cone.

When they say "any assumption", they still mean that their basic ones are in place but no other ones. One of them is finite momenta.

The second moment blur measure may be unique (where it exists), but the correlation between it and the perceived blur level will depend on the profile of the blur kernel.

Again, they do not make a claim that this is related to the perceived blur level (but leave that impression).

They further claim that perceived blur is especially sensitive to attenuation of low frequencies. As far as I can tell, this conclusion is not supported by the experiments they describe.

They do not make that precise. By "low" they may mean "not too high". This can be seen in many applied math papers, or even pure ones. There might be words or statements which are not precise, but what matters are the theorems. In applied/engineering papers, words matter more (and there may be no theorems).

My understanding is that perceived sharpness is dominated by moderate spatial frequencies. Indeed, the human visual system is notoriously insensitive to the lowest spatial frequencies.

Their "low pass filter" leaves low to mid frequencies intact up to 20% of maximum spatial frequency.

Their "Gaussian filter" has little impact at low frequencies, and strongly attenuates frequencies around 20% and higher.

Their "high pass filter" is more of a mid-band attenuator. It follows the Gaussian blur spectral response to 20% of maximum frequency, reaches a minimum at 30%, then rises, but never exceeds unity at high frequencies.

I did try to dig up more about how the BxU measure is calculated, but couldn't find anything on the DxO site. There is an overview here, but the link to DxO Analyser is broken.

Imaging Resource has some background, confirming that the target patches used are fairly small, but with little detail.

Cheers.

I also think that the middle frequencies have the most impact but I do not want to make that a precise statement. In general, I think that what microcontrast is should be left to our senses.

Jack Hogan Veteran Member • Posts: 7,124
Noise and MTF Slanted Edge Measurements

AiryDiscus wrote:

fvdbergh2501 wrote:

AiryDiscus wrote: I have always assumed that e.g. read noise does reduce the MTF, since you can assume e.g. spatially Gaussian read noise has a Gaussian FT, which can be baked into or divided out of an MTF curve.

Do you not agree?

Hmm. I have only tested the impact of noise on MTF50, way back in 2013. Those experiments did show that noise did not appear to reduce mean MTF50 over repeated measurements, although the variance shot up with increased noise.

Maybe I did not test with a large enough noise magnitude?

I see the impact on a comparable scale to what I had in mind (back of napkin, maybe 2% at "MTF50 frequencies"). The folks I am familiar with doing this sort of modeling usually use a Gaussian of width (e.g.) 10% of the pixel width, and they are modeling worse sensors than what is in a D7000.

Anecdotally/empirically, I have noticed that in a slanted edge measurement the impact of shot noise in the bright part of the edge overwhelms any read noise potentially apparent on the dark side. If the edge is captured a number of times and an MTF derived each time, the noise seems to present as undulations of the linear MTF curve, oscillating around a mean, sometimes producing lower values, sometimes higher, at any given spatial frequency (MTF50, say), as Frans suggests.

The mean of these oscillations presumably represents the noiseless MTF curve. Intuitively, because of superposition, in the simple applications I typically see (captures with good technique, near the center, at working f-numbers for landscapes), there appears to be little difference in the resulting mean MTF curve between averaging the actual slanted edge captures and then computing an aggregate MTF, versus computing the MTF of each capture and then averaging the resulting MTF curves.
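A quick synthetic check of that intuition (a toy logistic edge with additive noise, not real capture data; the 32-capture count and noise level are arbitrary):

```python
# Toy check: average 32 noisy edge captures then compute one MTF, versus
# computing an MTF per capture and averaging the MTFs.  At low and mid
# frequencies the two means agree closely; deep in the noise floor the
# per-capture route is biased upward because magnitudes cannot average out.
import numpy as np

def mtf(esf, dx):
    lsf = np.gradient(esf, dx)           # edge -> line spread function
    spec = np.abs(np.fft.rfft(lsf))
    return spec / spec[0]                # normalize to DC

rng = np.random.default_rng(1)
x = np.linspace(-8.0, 8.0, 1024)
dx = x[1] - x[0]
edge = 1.0 / (1.0 + np.exp(-10.0 * x))   # smooth synthetic edge
caps = edge + rng.normal(0.0, 5e-4, (32, x.size))

mtf_of_mean = mtf(caps.mean(axis=0), dx)
mean_of_mtfs = np.mean([mtf(c, dx) for c in caps], axis=0)
diff = np.abs(mtf_of_mean - mean_of_mtfs)[:50]   # low/mid band only
print("max low/mid-band difference:", diff.max())  # of order 1e-3
```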

At the higher spatial frequencies - the typically low-energy ones above Nyquist - noise is sometimes apparent in the MTF curve as ringing at a well-defined frequency. I have never investigated where that frequency comes from or what it correlates with. Perhaps it could tell us something about the nature of the noise, or perhaps it is just a byproduct of the methods used. Any ideas, anyone?

Jack

Joofa Senior Member • Posts: 2,655
Re: Noise and MTF Slanted Edge Measurements

Jack Hogan wrote:

[SNIP on that MTF thingy ...]

Any ideas, anyone?

Yes, what is the MTF of this image?

What is the MTF of a natural image?

BTW, can MTF solve world hunger, find a cure for cancer, and arrange a space trip to the nearest star? Unbelievable that you guys keep on believing in the magical properties of MTF.

J A C S Forum Pro • Posts: 15,236
Re: Noise and MTF Slanted Edge Measurements

Joofa wrote:

Jack Hogan wrote:

[SNIP on that MTF thingy ...]

Any ideas, anyone?

Yes, what is the MTF of this image?

What is the MTF of a natural image?

Not really a direct answer to your question, but here is a paper which studied the MTF of the cat's eye:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1330735/

It appears that the optical resolution is 3-5 times worse than the human one, and the optics still outresolve the "sensor" by a lot. Those poor cats must see the world aliased.

J A C S Forum Pro • Posts: 15,236
Re: Dynamic range for point, line & step subjects

Since I mentioned a RAW file where I can measure a 12-stop difference in DR: here you can find DNG files from the new Canon:

https://www.canonrumors.com/come-play-with-the-canon-eos-r-raw-files/

FRO_0007.dng is an image of a dark room with a bright window in the middle, and plenty of areas with locally averaged G values 1 RAW unit above the black level (=511), or even 0.7. The bright window has an average of ~5600. This is a 12-13 stop difference.
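The arithmetic behind that estimate, for anyone checking (assuming the ~5600 window average is a raw level and the signal is measured above the 511 black point):

```python
# Back-of-envelope: stops between the bright window and the darkest patches.
import math

black = 511
window = 5600 - black                # window average above black (assumed)
for dark in (1.0, 0.7):              # locally averaged G, RAW units above black
    print(f"{math.log2(window / dark):.1f} stops")   # ~12.3 and ~12.8
```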
