ON MTF, Shift Invariance and Aliasing

The pedant's view: :-)
A naive question: a full photography system includes a display (or a print) and a conversion back to continuous space.

If we consider the system as a black box and analyze the full path, light to light, does the MTF definition still make mathematical sense? In theory, one could take a tiny light meter and measure the output light at any point, continuously.
Yes, I believe it still does; there are just more components in the system, now extending all the way to the retina. Of course this introduces more variables (standard rendering and monitor?) and creates more opportunities for non-linearities and aliasing.

We get around the display issue by specifying that we try to measure the performance of the hardware only, leaving the downstream stages unaccounted for.

FWIW since shortly after the second world war, the vast majority of single metrics for perceived sharpness have tended to be integrals of System MTF curves, more recently weighted by the MTF of the human visual system, otherwise known as the Contrast Sensitivity Function (they obviously assume an immaterial contribution by the output device).

Starting in the '60s there was MTF area, then MTF-based SQF, and variations on the theme. In the last couple of decades smartphone manufacturers defined and popularized the similar CPIQ Acutance, though incrementally better performance can apparently be provided by Visual Strehl based on the OTF instead of the MTF (VSOTF, thanks OE).
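For a concrete feel, here is a toy version of such a single metric: a CSF-weighted integral of a System MTF. The Gaussian MTF and the band-pass CSF below are made-up placeholders, not the official CPIQ acutance or SQF formulas.

```python
import numpy as np

# Toy sharpness metric: integrate a System MTF weighted by a CSF.
# Both curves are illustrative placeholders, NOT the official CPIQ
# acutance or SQF definitions.
f = np.linspace(0.01, 50.0, 500)        # spatial frequency, cycles/degree
mtf = np.exp(-(f / 20.0) ** 2)          # assumed Gaussian system MTF
csf = f * np.exp(-f / 8.0)              # crude band-pass CSF model
csf /= csf.max()

# Uniform grid, so the integration weights cancel in the ratio.
acutance = float(np.sum(mtf * csf) / np.sum(csf))
```

A sharper system (wider MTF) pushes the ratio toward 1; heavy blur pulls it toward 0.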

Jack
 
We can calculate/model/measure a digital imaging System's MTF, including lens, filter stack and effective pixel aperture,
Great.
right up to delta sampling,
Yup.
this last bit reminding us that the System MTF tells us nothing about aliasing?
This seems completely bonkers to me.

The MTF in front of the delta sampling is exactly what tells us how much of an aliasing problem we have. We need to know the sampling rate too. Or express the MTF in terms of the sampling rate.

Say we're sampling an analogue audio signal. We have a fancy high-order analogue filter in front of a sample-and-hold, with a pass-band up to 20 kHz and a stop-band above 24 kHz. We can make a good estimate of the MTF in front of delta sampling (including a contribution from the aperture time of the s/h).

There is a sense in which this doesn't tell us anything about aliasing, because we haven't said anything about the input signal or the sampling rate.

If we're sampling at 48 kHz, we don't have a problem with aliasing.
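The folding arithmetic can be sketched in a few lines (a hypothetical helper, just for illustration):

```python
# An input frequency f folds down to |f - round(f/fs)*fs| after
# sampling at rate fs (all frequencies in Hz).
def alias(f_hz, fs_hz):
    return abs(f_hz - round(f_hz / fs_hz) * fs_hz)

# With fs = 48 kHz, only input above 28 kHz can fold back into the
# 0-20 kHz pass-band, and the filter's stop-band already starts at
# 24 kHz, so aliasing is controlled.
print(alias(25_000, 48_000))  # a 25 kHz leak-through folds to 23 kHz
print(alias(30_000, 48_000))  # 30 kHz folds to 18 kHz
```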
Yes, 'nothing' is perhaps too strong a word in this context because a digital System MTF curve usually comes with some units, at a minimum cycles per set or cycles per sample. Though even then, was the sample shifted or unshifted? ;-)

We could say that System MTF by itself does not tell us much about visible aliasing without knowing more about at least the sensor (monochrome, Bayer, shifted, other?) and the subject (uniform fog, a fabric, an edge, monochrome or chromatic?).

But we usually do know something about the target, say a knife edge, and the sensor, say an unshifted Bayer, so using this a priori information we can make some educated guesses as to potential aliasing. The more the information, the better the guesses.

Jack
 
The pedant's view: :-)

Back to basics: a linear operator is called translation invariant if it commutes with translations (sometimes we just say that it commutes with translations, nothing more). To be more precise, we think of an operator mapping functions on R^n to functions on R^n (in our case, n = 2). It does not even make sense to talk about translation invariance if the operator maps one space to another, like a plane to a lattice. It can map a lattice to itself, though.

If it is linear and translation invariant (plus some continuity), then it is a convolution. This is actually an if-and-only-if condition.

If it is a convolution, then on the Fourier side it is a multiplier, often called a Fourier multiplier. Again, this is an if-and-only-if condition. The absolute value of that multiplier, rescaled accordingly, is called the MTF in optics and some other applied fields, but it rarely if ever gets mentioned in math.

A map from the plane to a lattice cannot possibly be translation invariant. On the Fourier side, we have the continuous FT of the ideal image and the DFT of the samples. The map between them is linear, but it does not make sense to call it a multiplier, with or without an MTF. It is not a multiplication, say, followed by a discretization. Aliasing basically chops up the FT and moves the pieces around.
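A quick numerical sanity check of that chain in the discrete, periodic setting (where it does hold): circular convolution on a lattice becomes pointwise multiplication under the DFT, and the operator commutes with cyclic shifts.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
h = rng.standard_normal(64)

# Convolution theorem: the DFT turns circular convolution
# into a (Fourier) multiplier.
conv = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))
direct = np.array([sum(x[m] * h[(n - m) % 64] for m in range(64))
                   for n in range(64)])
assert np.allclose(conv, direct)

# Translation invariance: shifting the input shifts the output.
shifted = np.real(np.fft.ifft(np.fft.fft(np.roll(x, 5)) * np.fft.fft(h)))
assert np.allclose(shifted, np.roll(conv, 5))
```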
Very well said, JACS; it puts the discussion on a solid theoretical footing.
 
The pedant's view: :-)
A naive question: a full photography system includes a display (or a print) and a conversion back to continuous space.

If we consider the system as a black box and analyze the full path, light to light, does the MTF definition still make mathematical sense? In theory, one could take a tiny light meter and measure the output light at any point, continuously.
Short answer: only if there is no aliasing.

Long answer: you want sampling respecting the Nyquist limit (some amount of oversampling helps), and you want interpolation respecting that as well. The sinc kernel is a possible choice but it is quite impractical. With some dose of oversampling, you can use a more localized kernel. My favorite one, requiring 2x oversampling, is Lanczos-3, which is a good approximation to it.

The interpolation needs to be convolution based. See this short paper, for example. That means multiplying a chosen kernel by the sample values, moving it around, and summing up the results. Sinc is such; so are linear and even nearest-neighbor interpolation, but those two create aliasing even if there was none in the data. Bilinear or bicubic are not. On the other hand, with some oversampling, they get close enough.

The typical demosaicing algorithms are quite complex, so all bets are off.
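A minimal sketch of such convolution-based interpolation with the Lanczos-3 kernel; the test tone and positions are made up for illustration.

```python
import numpy as np

# Lanczos-3 kernel: sinc(x) * sinc(x/3) on |x| < 3, zero elsewhere.
def lanczos3(x):
    x = np.asarray(x, dtype=float)
    # np.sinc is the normalized sinc: sin(pi x) / (pi x)
    return np.where(np.abs(x) < 3, np.sinc(x) * np.sinc(x / 3), 0.0)

def interpolate(samples, t):
    # samples[n] taken at integer positions n; evaluate at real t
    n = np.arange(len(samples))
    return float(np.sum(samples * lanczos3(t - n)))

# A tone at 0.1 cycles/sample (comfortably below Nyquist, and inside
# the roughly 2x-oversampled regime mentioned above) reconstructs well.
n = np.arange(32)
samples = np.cos(2 * np.pi * 0.1 * n)
est = interpolate(samples, 10.5)
true = np.cos(2 * np.pi * 0.1 * 10.5)
print(abs(est - true))  # small reconstruction error
```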
 
The pedant's view: :-)
A naive question: a full photography system includes a display (or a print) and a conversion back to continuous space.

If we consider the system as a black box and analyze the full path, light to light, does the MTF definition still make mathematical sense? In theory, one could take a tiny light meter and measure the output light at any point, continuously.
Short answer: only if there is no aliasing.
Well understood, no aliasing.

So, if we go along the light-to-sensor-to-display-to-light pipeline, the MTF becomes meaningless after the conversion at the sensor because shift invariance is broken. Then the MTF regains its original meaning after the display stage. Doesn't that look strange?
 
The pedant's view: :-)
A naive question: a full photography system includes a display (or a print) and a conversion back to continuous space.

If we consider the system as a black box and analyze the full path, light to light, does the MTF definition still make mathematical sense? In theory, one could take a tiny light meter and measure the output light at any point, continuously.
Short answer: only if there is no aliasing.
Well understood, no aliasing.

So, if we go along the light-to-sensor-to-display-to-light pipeline, the MTF becomes meaningless after the conversion at the sensor because shift invariance is broken. Then the MTF regains its original meaning after the display stage. Doesn't that look strange?
Careful with the display: it is discrete, and it often has a weird array. Our vision blurs it to some extent, though.

You can recover, at least theoretically and to a close approximation, the projected image, which to a good approximation has an MTF. This is the beauty of the sampling theorem, assuming no undersampling, of course. It does look strange the first time you encounter this theory, but it is true.
 
Careful with the display: it is discrete, and it often has a weird array. Our vision blurs it to some extent, though.
Sure, we assumed that the display's actual resolution is high enough to be free of aliasing.
You can recover, at least theoretically and to a close approximation, the projected image, which to a good approximation has an MTF. This is the beauty of the sampling theorem, assuming no undersampling, of course. It does look strange the first time you encounter this theory, but it is true.
When saying "recover", do you mean that the MTF ceases to exist after the sensor, does not exist in the pipeline between sensor and display, and is then re-created after the display?
 
Careful with the display: it is discrete, and it often has a weird array. Our vision blurs it to some extent, though.
Sure, we assumed that the display's actual resolution is high enough to be free of aliasing.
You can recover, at least theoretically and to a close approximation, the projected image, which to a good approximation has an MTF. This is the beauty of the sampling theorem, assuming no undersampling, of course. It does look strange the first time you encounter this theory, but it is true.
When saying "recover", do you mean that the MTF ceases to exist after the sensor, does not exist in the pipeline between sensor and display, and is then re-created after the display?
Basically, yes.

For example, the MTF frequently extends beyond Nyquist, and even beyond the sampling rate.

That cannot be represented after sampling.
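A two-line demonstration: a 0.3 cy/px cosine and its 0.7 cy/px alias give identical samples on an integer grid, so the part of the MTF beyond Nyquist leaves no trace in the data.

```python
import numpy as np

# cos(2*pi*0.7*n) = cos(2*pi*n - 2*pi*0.3*n) = cos(2*pi*0.3*n) at integer n
n = np.arange(16)
low = np.cos(2 * np.pi * 0.3 * n)    # below Nyquist
high = np.cos(2 * np.pi * 0.7 * n)   # its alias above Nyquist
assert np.allclose(low, high)
```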
 
Basically, yes.

For example, the MTF frequently extends beyond Nyquist, and even beyond the sampling rate.

That cannot be represented after sampling.
I understand what you are saying.

So, if one wants to estimate the MTF of the Voyager 1 camera, there is no way to do it because we can only get sampled data from it, right? And there is no way to bring it back to measure its lens separately.
 
Basically, yes.

For example, the MTF frequently extends beyond Nyquist, and even beyond the sampling rate.

That cannot be represented after sampling.
I understand what you are saying.

So, if one wants to estimate the MTF of the Voyager 1 camera, there is no way to do it because we can only get sampled data from it, right? And there is no way to bring it back to measure its lens separately.
What you say is true only if you care about the MTF beyond the Nyquist frequency. There are plenty of point sources visible where the spacecraft is located. Many samples should provide data that can be used to estimate the MTF up to 0.5 cy/px.
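A hypothetical sketch of that estimate in 1-D, with a made-up Gaussian PSF standing in for a star image; real data would also need centroiding, background subtraction and averaging over many sources.

```python
import numpy as np

x = np.arange(-16, 17)                # pixel positions around the source
sigma = 1.2                           # assumed PSF width in pixels
psf = np.exp(-x**2 / (2 * sigma**2))
psf /= psf.sum()                      # unit area, so MTF(0) = 1

# The MTF estimate is the normalized magnitude of the PSF's spectrum,
# meaningful here only up to the Nyquist frequency near 0.5 cy/px.
mtf = np.abs(np.fft.rfft(psf))
freq = np.fft.rfftfreq(len(psf))      # cycles per pixel, 0 up to ~0.5
```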
 
Basically, yes.

For example, the MTF frequently extends beyond Nyquist, and even beyond the sampling rate.

That cannot be represented after sampling.
I understand what you are saying.

So, if one wants to estimate the MTF of the Voyager 1 camera, there is no way to do it because we can only get sampled data from it, right? And there is no way to bring it back to measure its lens separately.
What you say is true only if you care about the MTF beyond the Nyquist frequency. There are plenty of point sources visible where the spacecraft is located. Many samples should provide data that can be used to estimate the MTF up to 0.5 cy/px.
Well, John Vickers and JACS say that there is no MTF in the sampled domain. It can only be measured in the continuous domain.

BTW, is the problem limited only to spatial sampling? For example, if the continuous light intensity is digitized to, say, 12 bits, does that present a problem too?
 
Basically, yes.

For example, the MTF frequently extends beyond Nyquist, and even beyond the sampling rate.

That cannot be represented after sampling.
I understand what you are saying.

So, if one wants to estimate the MTF of the Voyager 1 camera, there is no way to do it because we can only get sampled data from it, right? And there is no way to bring it back to measure its lens separately.
What you say is true only if you care about the MTF beyond the Nyquist frequency. There are plenty of point sources visible where the spacecraft is located. Many samples should provide data that can be used to estimate the MTF up to 0.5 cy/px.
Well, John Vickers and JACS say that there is no MTF in the sampled domain. It can only be measured in the continuous domain.
You can estimate the System MTF accurately up to just before sampling by using, for instance, the slanted-edge method, which gets around some of the sampling issues we talked about.
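A toy sketch of the slanted-edge idea, assuming a Gaussian system blur (real implementations such as MTF Mapper add edge fitting, noise handling and corrections): the slight tilt makes each row sample the edge at a different sub-pixel phase, so projecting pixels onto the edge normal yields an oversampled edge spread function whose derivative and spectrum give the MTF.

```python
import math
import numpy as np

H, W, slope, sigma = 64, 64, 0.1, 1.0    # slope and blur are assumptions

# Synthesize a blurred, slightly tilted edge (erf = Gaussian-blurred step).
img = np.empty((H, W))
for y in range(H):
    for x in range(W):
        d = x - (W / 2 + slope * y)       # signed distance to the edge
        img[y, x] = 0.5 * (1 + math.erf(d / (sigma * math.sqrt(2))))

# Project pixel centres onto the edge normal; bin at 1/4 pixel to build
# a 4x oversampled edge spread function (ESF).
ys, xs = np.mgrid[0:H, 0:W]
dist = (xs - (W / 2 + slope * ys)) / math.sqrt(1 + slope**2)
bins = np.round(dist * 4).astype(int)
esf = np.array([img[bins == b].mean() for b in range(-40, 40)])

lsf = np.diff(esf)                        # line spread function
lsf *= np.hanning(len(lsf))               # window to tame the ends
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                             # normalize so MTF(0) = 1
freq = np.fft.rfftfreq(len(lsf), d=0.25)  # cycles per original pixel
```

Because of the 4x binning, freq runs well past 0.5 cy/px, which is how the method sees the MTF "just before" the delta sampling a real sensor would apply.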
 
Basically, yes.

For example, the MTF frequently extends beyond Nyquist, and even beyond the sampling rate.

That cannot be represented after sampling.
I understand what you are saying.

So, if one wants to estimate the MTF of the Voyager 1 camera, there is no way to do it because we can only get sampled data from it, right? And there is no way to bring it back to measure its lens separately.
What you say is true only if you care about the MTF beyond the Nyquist frequency. There are plenty of point sources visible where the spacecraft is located. Many samples should provide data that can be used to estimate the MTF up to 0.5 cy/px.
Well, John Vickers and JACS say that there is no MTF in the sampled domain. It can only be measured in the continuous domain.
I think that may be the difference between a mathematician and an engineer.

You ask a mathematician, “What’s 2 + 2?”
They say, “4.”

You ask a physicist,
They say, “It’s 4, give or take a small measurement error.”

You ask an engineer,
They say, “It’s 4, within tolerances.”

You ask an accountant,
They say, “What do you want it to be?”

You ask a lawyer,
They say, “Let’s negotiate.”

You ask a statistician,
They say, “The median is 4, but I’ll need a larger sample size to be sure.”
BTW, is the problem limited only to spatial sampling? For example, if the continuous light intensity is digitized to, say, 12 bits, does that present a problem too?
 
Careful with the display: it is discrete, and it often has a weird array. Our vision blurs it to some extent, though.
Sure, we assumed that the display's actual resolution is high enough to be free of aliasing.
You can recover, at least theoretically and to a close approximation, the projected image, which to a good approximation has an MTF. This is the beauty of the sampling theorem, assuming no undersampling, of course. It does look strange the first time you encounter this theory, but it is true.
When saying "recover", do you mean that the MTF ceases to exist after the sensor, does not exist in the pipeline between sensor and display, and is then re-created after the display?
Assuming no aliasing, all the information about the projected image, as convolved with the active area of the photosites, is contained in the samples, including the MTF of the continuous image you eventually restore.
 
Careful with the display: it is discrete, and it often has a weird array. Our vision blurs it to some extent, though.
Sure, we assumed that the display's actual resolution is high enough to be free of aliasing.
You can recover, at least theoretically and to a close approximation, the projected image, which to a good approximation has an MTF. This is the beauty of the sampling theorem, assuming no undersampling, of course. It does look strange the first time you encounter this theory, but it is true.
When saying "recover", do you mean that the MTF ceases to exist after the sensor, does not exist in the pipeline between sensor and display, and is then re-created after the display?
Assuming no aliasing, all the information about the projected image, as convolved with the active area of the photosites, is contained in the samples, including the MTF of the continuous image you eventually restore.
On this, the mathematician and the engineer can agree. By the way, Harry Nyquist was an engineer, and Shannon could be considered both.

--
https://blog.kasson.com
 
Basically, yes.

For example, the MTF frequently extends beyond Nyquist, and even beyond the sampling rate.

That cannot be represented after sampling.
I understand what you are saying.

So, if one wants to estimate the MTF of the Voyager 1 camera, there is no way to do it because we can only get sampled data from it, right? And there is no way to bring it back to measure its lens separately.
What you say is true only if you care about the MTF beyond the Nyquist frequency. There are plenty of point sources visible where the spacecraft is located. Many samples should provide data that can be used to estimate the MTF up to 0.5 cy/px.
Well, John Vickers and JACS say that there is no MTF in the sampled domain. It can only be measured in the continuous domain.
I think that may be the difference between a mathematician and an engineer.

You ask a mathematician, “What’s 2 + 2?”
They say, “4.”
How about they say: "0? 1? -1? 4? What kind of algebraic system are we talking about here?"
 
In measurement we fulfil the first requirement by using a good sinusoidal target like a knife edge
?? Step function.
Which can be written as a sum of an infinite number of sinusoids. I think that's what Jack was getting at.
Of course, but if that's how you reckon it, any shape is sinusoidal. He said he was concerned with terminology.

A sinusoidal function?
 
In measurement we fulfil the first requirement by using a good sinusoidal target like a knife edge
?? Step function.
Which can be written as a sum of an infinite number of sinusoids. I think that's what Jack was getting at.
Seems like some of your interlocutors' time might be better spent reading than writing:

Aliasing and the slanted-edge method: what you have to know

From: https://mtfmapper.blogspot.com/

.

PS - If there "is no MTF in sampled domain", describe a continuous method of measurement.

.

This here bold and foxy approach looks like it might possibly be a groovy "gain of function":

"Improvements on sampling of point spread function in OTF measurement" (2022)
 
