Limits of resolution, a simple demo

Nyquist is concerned mainly with the idea of perfect sampling/reconstruction of a perfectly bandlimited waveform. As long as the bandlimiting and reconstruction filters are perfect (i.e. infinite-length sin(x)/x functions), one can reconstruct a bandwidth < B by sampling at 2*B. More accurately, I think that Nyquist was mainly concerned with the inverse problem, of what rate of telegraph pulses could be resolved in a band-limited analog wire, but that turns out to be the same math.
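As a concrete, hedged illustration of that idea (all rates and lengths here are invented for the demo, and the sinc filters are necessarily truncated): sampling a single 1 Hz tone above 2*B and rebuilding it with the sin(x)/x interpolation formula recovers it almost exactly, away from the ends of the finite sample window.

```python
import numpy as np

# Illustrative sketch only: a bandlimited "signal" (one 1 Hz sinusoid),
# sampled above the Nyquist rate and rebuilt with truncated sinc filters.
fs = 2.5                               # sampling rate, > 2*B
B = 1.0                                # bandwidth: a single 1 Hz tone
n = np.arange(-200, 201)               # a long but finite run of samples
x_s = np.sin(2 * np.pi * B * n / fs)   # the sample values

# Whittaker-Shannon reconstruction on a fine grid near t = 0,
# far from the ends of the sample window
t = np.linspace(-2.0, 2.0, 401)
x_hat = np.array([np.sum(x_s * np.sinc(fs * ti - n)) for ti in t])
x_true = np.sin(2 * np.pi * B * t)
err = np.max(np.abs(x_hat - x_true))   # tiny: reconstruction is near-exact
print(err)
```

The residual error is purely from truncating the infinite sinc sum; it shrinks as the sample window grows.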

Anyway, a camera sensor is a 2-D thing with a "sampling rate" given by the sensel pitch (for achromatic content), and different "sampling rates" for the green and the red/blue channels (assuming a Bayer color filter). It is far from the point sampler described in DSP classes; it is more like a 2-D sample-and-hold or box-car prefiltered sampler due to the sensel area, further complicated by the OLPF (if present).
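A hedged 1-D toy model of that sensel-aperture effect (the pitch, frequency, and sensel count are invented for the illustration): averaging the scene over each sensel's width before sampling, i.e. a box-car prefilter, attenuates a sinusoidal scene by roughly sinc(f * pitch) relative to ideal point sampling.

```python
import numpy as np

# Toy 1-D model: 64 sensels of pitch 1.0 with 100% fill factor.
# Each sensel averages the scene across its full width (box-car aperture).
f = 0.4                           # scene frequency, cycles per sensel pitch
x = np.arange(64000) / 1000.0     # 1000 sub-positions per sensel
scene = np.cos(2 * np.pi * f * x)

boxcar = scene.reshape(64, 1000).mean(axis=1)  # aperture-averaged samples
point = scene[500::1000]                       # ideal point samples at centers

# The aperture acts as a sinc-shaped MTF: amplitude ratio ~ sinc(f * pitch)
ratio = np.ptp(boxcar) / np.ptp(point)
print(ratio, np.sinc(f))          # the two agree closely
```

This is the "box-car prefiltered sampler" behavior: detail near Nyquist is recorded, but attenuated by the aperture, before any OLPF is considered.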

Photographically relevant scenes are not arbitrary. Rather, I think they tend to be (spatially) lowpass and contain "structure". When sampled by a sensor that is a "suboptimal" Nyquist sampler, with a flawed optical lens in between, I would think that optimal AI/ML-based processing is the only sensible way to extract as much scene-related info from the raw file as possible.
 
Photographically relevant scenes are not arbitrary. Rather, I think they tend to be (spatially) lowpass and contain "structure". When sampled by a sensor that is a "suboptimal" Nyquist sampler, with a flawed optical lens in between, I would think that optimal AI/ML-based processing is the only sensible way to extract as much scene-related info from the raw file as possible.
*plausibly infer*
 
Now... if I wanted to check that loaves coming out of a bakery were the correct weight, the only way to be sure every loaf was the correct weight is to weigh every single one. But we can weigh some proportion and have a confidence level that all are OK.
If someone told me that loaf weight is a function and Nyquist proves that loaf weight can be perfectly assessed by testing a subset of loaves, I would protest that's false. I can't even offer a Fermatesque 'I can prove it, but not in the space I have'. Much less to the person who cites Fourier and the rest to prove that they can't possibly be wrong.
In math, picket signs do not work, proofs do.

You remind me of the guy who sees a giraffe in the zoo for the first time, and says: "there is no such animal!"
 
Photographically relevant scenes are not arbitrary. Rather, I think they tend to be (spatially) lowpass and contain "structure". When sampled by a sensor that is a "suboptimal" Nyquist sampler, with a flawed optical lens in between, I would think that optimal AI/ML-based processing is the only sensible way to extract as much scene-related info from the raw file as possible.
*plausibly infer*
Agreed. I guess you could say something similar about speculative debayer processing. It is possible to find special scenes where the processing fails, but for typical scenes, the speculative processing produces a result that is more similar to the true scene (and/or the user's expectations and preferences) than a plain Nyquist-inspired linear debayer could achieve.
 
This is what an unnamed AI known to be good with code came up with in less than a minute, without any input from me whatsoever other than the following prompt, followed by instantiating the class as-is in MATLAB:

write a matlab class that demonstrates signal sampling and reconstruction using a Lanczos filter. The signal is a step function. There are 256 samples with the step at sample 129. Plot the original continuous signal, the samples, the reconstructed signal

It chose Lanczos with the 'a' parameter equal to 5.

It could use some fine-tuning (e.g. specifying the units), but in the old days it would have taken me a couple of hours. I am kinda blown away, but also a little bit sad.
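For readers without MATLAB, here is a hedged Python sketch of the same experiment (this is not the AI-generated class itself, just an analogue, and it prints summary numbers instead of plotting to stay dependency-free): 256 samples of a unit step with the step at sample 129, reconstructed at 10x oversampling with a Lanczos kernel, a = 5. The kernel's negative lobes produce the under/overshoot near the transition.

```python
import numpy as np

def lanczos(x, a=5):
    # Lanczos kernel: sinc(x) * sinc(x/a) for |x| < a, zero elsewhere
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

n = np.arange(256)
samples = (n >= 128).astype(float)     # unit step at sample 129 (1-based)

t = np.linspace(0.0, 255.0, 2551)      # 10x oversampled output grid
recon = np.array([np.sum(samples * lanczos(ti - n)) for ti in t])

# Flat regions are hit exactly at sample positions; near the step the
# reconstruction rings below 0 and above 1 (the thread's under/overshoot)
print(recon.min(), recon.max())
```

Swapping the kernel (Lanczos-2, Lanczos-3, plain truncated sinc) changes the extent and amplitude of the ringing.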

Jack
 
I did read it earlier - although I struggled to understand some of it. What I took from it was that we agree (a) if the lens and sensor are balanced the whole setup is more effective than a great lens and poor sensor or poor lens and great sensor. (b) There is a point where we could add more pixels and get an improvement

but the improvement is too small to justify doing so.
I actually made a stronger statement than that.
But the area where we agree is weaker than each of us has said.
I don't understand your point about agreement. I don't see any agreement here. Don't confuse issues about reconstruction techniques with the correctness of the Nyquist sampling theorem. Everything I've said about that theorem is true.
After that point it's "how many angels can dance on the head of a pin" territory ; I'm in the "you can always fit another angel on" camp,
Please provide your reasoning, and why Nyquist and Shannon were wrong. Remember: “Extraordinary claims require extraordinary evidence.”
To the best of my limited understanding they were correct about the number of samples needed to avoid aliasing when sampling a signal of a given frequency.
Then why do you insist that finer pitch is better without regard for the lens, without limit?
Now... if I wanted to check that loaves coming out of a bakery were the correct weight, the only way to be sure every loaf was the correct weight is to weigh every single one. But we can weigh some proportion and have a confidence level that all are OK.
If someone told me that loaf weight is a function
I don't know what "loaf weight is a function" means.
and Nyquist proves that loaf weight can be perfectly assessed by testing a subset of loaves, I would protest that's false. I can't even offer a Fermatesque 'I can prove it, but not in the space I have'. Much less to the person who cites Fourier and the rest to prove that they can't possibly be wrong.
You have constructed a situation to which the Nyquist sampling theorem does not apply, and now you're complaining that it doesn't apply.
Or take another area I work with, queues: jobs arrive at random (with a probability function) and they take a random time to service (again with a probability function). I can draw a chart of the queue length. Does Nyquist tell me how often I need to check the queue length to say what the mean and median length are over a day?
That's just silly.
There are lots of these sampling problems. If I want to know the RPM of an engine I'd expect Nyquist to help with sampling frequency. If I want to know the amount of fuel used over a race, Nyquist doesn't tell me how often to check the flow rate. I just know if I check often enough the error will be too small to worry about.

So there are cases where Nyquist applies, and cases where he doesn't, yes? I think a lot of the questions we want to answer in photography fall into the "not well formed with respect to the sampling theorem" category - I can't find the words to quote verbatim. The continuously varying grey scale was one. "How well can we separate hairs as they go out of focus" was another.
I already posted an analysis of a continuously varying gray scale. You said it doesn't have frequencies associated with it. I posted a Fourier analysis that showed that it did.
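For the curious, a hedged sketch of what such a Fourier analysis looks like (the 256-sample length is an arbitrary choice, not the one from the earlier post): a linear gray ramp, with its mean removed, has a perfectly well-defined DFT whose magnitude is nonzero and falls off with frequency.

```python
import numpy as np

ramp = np.linspace(0.0, 1.0, 256)              # continuously varying gray scale
spectrum = np.abs(np.fft.rfft(ramp - ramp.mean()))

# Bin 0 (DC) is ~0 after mean removal; the higher bins carry the ramp's
# frequency content, largest at low frequencies and decaying upward
print(spectrum[:6])
```

The DFT treats the ramp as one period of a sawtooth, whose coefficients fall off like 1/k: low frequencies dominate, but every bin is nonzero.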
Proofs of where Nyquist does [not] apply? Not sure anyone has anything useful to add to the discussion. If you could prove he applies here I wouldn't understand it.
So your reaction to your unwillingness or inability to understand the sampling theorem, which requires an understanding of the frequency domain, is to reject its conclusions based on what?
while others will say there is expertise (not specifically about angels or pins) which can be used to give a number.
One is right and one is wrong. This is not something that can be settled by opinions.
OK, how many angels can dance on the head of a pin, then? :-) The question is not whether the Theorem is right or wrong, but whether a photograph is, in practice, a signal with a set of frequencies for which the sole question is aliasing.
You keep moving the goalposts. No one ever said the sole question was aliasing. But that's the question that applies to your assertion about needing to sample infinitely finely to get all the detail captured that a lens can lay down on the sensor.
 
In 1960, Claude Shannon asked the following question: "To what extent are functions which are confined to a finite bandwidth also concentrated in the time domain?" David Slepian went to work to answer that.

David Slepian of Bell Labs, along with colleagues, wrote a series of papers in the 1960s investigating better reconstruction of Paley-Wiener (PW) signals, as the real world clearly does not satisfy the conditions of the Shannon-Nyquist sampling theorem: the space of functions that are both time limited and band limited is the trivial space {0}.

In their original paper, Slepian and Pollak considered the operator that time limits and then band limits the time-limited signal. That is the subspace of signals that matters in sampling. The Shannon sampling theory establishes a special function basis for the Paley-Wiener functions in the form of sinc functions, which give a reconstruction of the waveforms with the coefficients being samples taken at a sufficiently high rate.

Slepian and Pollak solved the eigenvalue problem for the operation of time limiting and band limiting (it turns out to be a convolution operator on L^2), which generates a complete orthogonal sequence of eigenvalues and eigenfunctions for reconstructing this space of functions. They can be shown to arise from a Sturm-Liouville-type boundary value problem, similar to the Bessel functions, etc. Interestingly enough, these special functions are associated with solutions of the wave equation in prolate spheroidal coordinates, and are known as Prolate Spheroidal Wave Functions (PSWF).



Slepian extended the results to multidimensional signals (dimension two being images).


In the mid-1970s Slepian extended his results to discrete signals, that is, signals already in digital form.


During the period from 1975 through 1979 my agency and AT&T were working closely on a problem that we jointly faced. Bell Labs worked closely with my office and IDA in Princeton to apply these concepts to an important problem we were facing at the time, as a putative solution. During that time I had an apartment in Princeton and an office at IDA, where I normally spent a week or two a month.

It turned out that these "sampling concepts" worked much better than the existing signal processing approach, at the expense of vastly expanded computational horsepower. In 1979 AT&T made significant changes to the structure of their microwave relay systems, which allowed encryption of important data and solved the problem at hand. That approach mitigated the need for the Prolate approach we were developing in parallel with Bell Labs. Concurrently, Aaron Wyner of Bell Labs proposed an approach for speech encryption that was based on the use of Prolates.


The application of Prolates in DSP is still an active area of investigation.


A good review of the work and the current state of the art is available. Prolates have proved invaluable in certain applications.



I've always wondered why the PSWF sequence of functions hasn't gotten more play. This functional basis of the actual function space dealt with in DSP would eliminate many of the practical issues faced by the traditional approach: undershoot, overshoot, the need to oversample, aliasing, etc. Interestingly enough, the Kaiser window (a.k.a. the Slepian window) is related to the PSWF.
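SciPy exposes the discrete-time Slepian sequences directly, so the Kaiser relationship mentioned above is easy to inspect. A hedged sketch (the window length and time-bandwidth product are my choices, and beta = pi*NW is the usual rough rule of thumb, not an exact identity):

```python
import numpy as np
from scipy.signal.windows import dpss, kaiser

M, NW = 128, 3.0
slepian0 = dpss(M, NW)            # first discrete prolate spheroidal sequence
kai = kaiser(M, beta=np.pi * NW)  # Kaiser window, designed to approximate it

# Normalize both to unit peak and compare their shapes
slepian0 = slepian0 / slepian0.max()
kai = kai / kai.max()
print(np.corrcoef(slepian0, kai)[0, 1])   # very close to 1
```

Kaiser's window was proposed precisely because the zeroth Slepian sequence was expensive to compute at the time; the modified-Bessel form is a cheap, close stand-in.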
 
Cool Jim. A couple of questions to better understand:

What Lanczos did you use, and how did you normalize it? It looks more energetic than the typical ones I am used to seeing, both in over/undershoots and duration. Ripples for Lanczos-a start showing up 'a' pixels before the transition (7 here?).

Also how much do you oversample the convolution and reconstructed signal (I have used 9x in the past)?

It would seem that the rise of the reconstructed signal is not centered on pixel 20. Do you also oversample the samples, and center them on pixel 20.0? 100% fill factor, perfect square pixels?
Pixel aperture was off for that run.
What down-res algorithm would you use to get back to 1x, and how would that look compared to the shown ideal (top and/or middle plot)?

Maybe it would be useful to also plot the |FFT| of the 1x reconstructed signal on the last plot, to show differences.

Nice work!

Jack
I'll email you the code. Any suggestions you have will be welcome.



Jim

 
Now... if I wanted to check that loaves coming out of a bakery were the correct weight, the only way to be sure every loaf was the correct weight is to weigh every single one. But we can weigh some proportion and have a confidence level that all are OK.
If someone told me that loaf weight is a function and Nyquist proves that loaf weight can be perfectly assessed by testing a subset of loaves, I would protest that's false. I can't even offer a Fermatesque 'I can prove it, but not in the space I have'. Much less to the person who cites Fourier and the rest to prove that they can't possibly be wrong.
In math, picket signs do not work, proofs do.

You remind me of the guy who sees a giraffe in the zoo for the first time, and says: "there is no such animal!"
TBH, that's quite rude.

I feel like the guy who is told there is no such thing as a horse, only a special case of the zebra (if you sample the pattern of the horse's stripes correctly). Or, to stay with your example, someone who says it is related to the okapi, with a crowd yelling "It's a camelopard!"

There are an infinite number of possible numbers between zero and one. If someone brings a theory along from another domain that says he can write them all down, what's your response?

So if the light falling on a sensor goes from light to dark over the width of the sensor, can you put a number on how many divisions are needed to record that transition perfectly?

Imagine if there were another sensor beside it where the level would come up from dark back to light, and another beyond it where it would go down again. So if we sampled at 1/2 the frequency, would we have perfection if we took one sample per sensor? Of course not, we'd just know there was no aliasing.

Still not sure. Is a cycle wider than the sensor too reductio ad absurdum? Imagine a plot of brightness as a simple wave that completes 3 cycles over the sensor width. How many samples does the theorem say are needed? And what would a picture with that number look like - perfect reproduction?
 
Still not sure. Is a cycle wider than the sensor too reductio ad absurdum? Imagine a plot of brightness as a simple wave that completes 3 cycles over the sensor width. How many samples does the theorem say are needed? And what would a picture with that number look like - perfect reproduction?
Perfect

Imperfect

Even less perfect.

--
https://blog.kasson.com
 
I've always wondered why the PSWF sequence of functions hasn't gotten more play. This functional basis of the actual function space dealt with in DSP would eliminate many of the practical issues faced by the traditional approach: undershoot, overshoot, the need to oversample, aliasing, etc. Interestingly enough, the Kaiser window (a.k.a. the Slepian window) is related to the PSWF.
Maybe because the PSWF functions are not localized. There are many interpolation kernels one can use for recovery from samples, not just Lanczos. They are all well localized because of practical considerations.
 
There are an infinite number of possible numbers between zero and one. If someone brings a theory along from another domain that says he can write them all down, what's your response?
I'd say that this is not a well-formulated question. We have the Axiom of Choice on one hand, but on the other, the (presumably real) numbers between 0 and 1 form an uncountable set, so he cannot write them down one after the other even if he had infinite time.
So if the light falling on a sensor goes from light to dark over the width of the sensor, can you put a number on how many divisions are needed to record that transition perfectly?
You mean, in a linear way, and I know that? I need two points.
Imagine if there were another sensor beside it where the level would come up from dark back to light, and another beyond it where it would go down again. So if we sampled at 1/2 the frequency, would we have perfection if we took one sample per sensor? Of course not, we'd just know there was no aliasing.
I do not understand the question.
Still not sure. Is a cycle wider than the sensor too reductio ad absurdum? Imagine a plot of brightness as a simple wave that completes 3 cycles over the sensor width. How many samples does the theorem say are needed? And what would a picture with that number look like - perfect reproduction?
See my earlier post on how to deal with the final span of the sensor. With a hypothetical 24 MP sensor, it would be so close to perfect that you would not be able to tell the difference visually at all. The only possibly noticeable errors would be at the edges, which are routinely cropped a bit anyway. With an infinite virtual sensor and an infinite sinusoidal wave, it would be absolutely perfect.
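A hedged numerical check of this, with the sizes shrunk drastically so it runs in a blink (a real sensor has thousands of samples across, which drives the interior error far lower): a 3-cycle sinusoid across a 24-sample span, sinc-reconstructed from its samples, matches the original closely mid-span, with the residual error concentrated near the ends of the finite window.

```python
import numpy as np

N = 24                          # samples across the "sensor" (toy size)
n = np.arange(N)
f = 3.0 / N                     # 3 cycles over the full width
samples = np.sin(2 * np.pi * f * n)

t = np.linspace(0.0, N - 1.0, 20 * N)    # fine reconstruction grid
recon = np.array([np.sum(samples * np.sinc(ti - n)) for ti in t])
err = np.abs(recon - np.sin(2 * np.pi * f * t))

mid = slice(len(t) // 3, 2 * len(t) // 3)   # middle third of the span
print(err[mid].max(), err.max())  # small mid-span; edges dominate the error
```

With f = 3/24 the signal sits at a quarter of the Nyquist limit, so the only error source is the truncation of the sinc sums at the sensor edges.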
 
I did read it earlier - although I struggled to understand some of it. What I took from it was that we agree (a) if the lens and sensor are balanced the whole setup is more effective than a great lens and poor sensor or poor lens and great sensor. (b) There is a point where we could add more pixels and get an improvement

but the improvement is too small to justify doing so.
I actually made a stronger statement than that.
But the area where we agree is weaker than each of us has said.
I don't understand your point about agreement. I don't see any agreement here. Don't confuse issues about reconstruction techniques with the correctness of the Nyquist sampling theorem. Everything I've said about that theorem is true.
OK. Let's do this simple thought experiment.

There is a pattern of light and dark falling on the sensor, and it follows a simple wave form: 3 cycles from darker to lighter and back over the full width of the sensor.

Nyquist tells you how many samples you need to record those three cycles without aliasing. 3 cycles per image width so Nyquist says sample 6 times. So a sensor six pixels wide will do the job. Definitely no aliasing, just as Nyquist tells us.

Now, are we seeing any problem with printing our six-pixel-wide image?

I said you'll never get the reproduction perfect (except in the purely theoretical case of a perfect sensor). You rocked up with "Infidel! How dare you question the prophet Nyquist?" And we've wasted hours ever since. Respectfully, if anyone has conflated reconstruction techniques and the sampling theorem, it wasn't me. What I have said is that Nyquist tells you six samples cover a three-cycle signal with no aliasing, but doesn't tell you that you can use a six-pixel-wide sensor to reproduce a 3-cycle image perfectly.
After that point it's "how many angels can dance on the head of a pin" territory ; I'm in the "you can always fit another angel on" camp,
Please provide your reasoning, and why Nyquist and Shannon were wrong. Remember: “Extraordinary claims require extraordinary evidence.”
To the best of my limited understanding they were correct about the number of samples needed to avoid aliasing when sampling a signal of a given frequency.
Then why do you insist that finer pitch is better without regard for the lens, without limit?
Imagine I had the worst possible lens that could only resolve 3 line pairs per image width and work from there.
Now... if I wanted to check that loaves coming out of a bakery were the correct weight, the only way to be sure every loaf was the correct weight is to weigh every single one. But we can weigh some proportion and have a confidence level that all are OK.
If someone told me that loaf weight is a function
I don't know what "loaf weight is a function" means.
and Nyquist proves that loaf weight can be perfectly assessed by testing a subset of loaves, I would protest that's false. I can't even offer a Fermatesque 'I can prove it, but not in the space I have'. Much less to the person who cites Fourier and the rest to prove that they can't possibly be wrong.
You have constructed a situation to which the Nyquist sampling theorem does not apply, and now you're complaining that it doesn't apply.
No, I'm not complaining. I'm saying that here is a problem domain, it relates to sampling, and if someone quoted Nyquist we'd all say they were wrong... Out of curiosity, do you read to the end first?
Or take another area I work with, queues: jobs arrive at random (with a probability function) and they take a random time to service (again with a probability function). I can draw a chart of the queue length. Does Nyquist tell me how often I need to check the queue length to say what the mean and median length are over a day?
That's just silly.
Indeed. But is a photograph more like a waveform in a signal (where Nyquist applies), or more like the chart of my queue length (where it would be silly)? Because if you can show me it's the former, there'll be a thwack on my forehead and a shout of DOH! that can be heard for miles.
There are lots of these sampling problems. If I want to know the RPM of an engine I'd expect Nyquist to help with sampling frequency. If I want to know the amount of fuel used over a race, Nyquist doesn't tell me how often to check the flow rate. I just know if I check often enough the error will be too small to worry about.

So there are cases where Nyquist applies, and cases where he doesn't, yes? I think a lot of the questions we want to answer in photography fall into the "not well formed with respect to the sampling theorem" category - I can't find the words to quote verbatim. The continuously varying grey scale was one. "How well can we separate hairs as they go out of focus" was another.
I already posted an analysis of a continuously varying gray scale. You said it doesn't have frequencies associated with it. I posted a Fourier analysis that showed that it did.
I either missed, or couldn't understand, the Fourier analysis, but I posed the question to someone else. If the frequency is very, very low, say one cycle is two sensors wide, do we get perfect reproduction with one sample per sensor?
Proofs of where Nyquist does [not] apply? Not sure anyone has anything useful to add to the discussion. If you could prove he applies here I wouldn't understand it.
So your reaction to your unwillingness or inability to understand the sampling theorem, which requires an understanding of the frequency domain, is to reject its conclusions based on what?
As I said right at the outset: we end up in a dialogue of the deaf. You insist that the sampling theorem tells you the number of pixels required to guarantee perfect reproduction of any possible image. When I say such a thing is not possible, and it's a misuse of a correct theorem to say it is, you bluster to try to suggest I'm saying the theorem is wrong in general. I'm not.
while others will say there is expertise (not specifically about angels or pins) which can be used to give a number.
One is right and one is wrong. This is not something that can be settled by opinions.
OK, how many angels can dance on the head of a pin, then? :-) The question is not whether the Theorem is right or wrong, but whether a photograph is, in practice, a signal with a set of frequencies for which the sole question is aliasing.
You keep moving the goalposts. No one ever said the sole question was aliasing. But that's the question that applies to your assertion about needing to sample infinitely finely to get all the detail captured that a lens can lay down on the sensor.
OK. Does the theorem tell us (a) the number of samples required for perfect reproduction of the image, or (b) the number to prevent aliasing?

Last try on the infinite-resolution (or, better expressed, "perfect") thing.

In your terms, think of sampling a square wave. It's 100 or 0, and what I know as the "duty cycle" is 50% (in case I'm using the term wrongly: 50% of the time at 100, 50% of the time at 0).
The frequency is 1Hz.
You sample at 2Hz and each sample is the average value over 0.1s
Now... there is a chance that the transition comes during your 0.1 seconds, and if it lands exactly in the middle you get 50/50/50/50 for all your samples. If it is a bit offset you might get 70/30/70/30, and if it misses the boundary you get 100/0/100/0. Simple enough to calculate the average.

Then we increase the duty cycle to 90% (so 90% at 100 and 10% at 0). Now there is a high probability that all samples fall in the 90% and we get 100/100/100/100. The pulses are happening at 1 Hz, but when you decompose the signal into its constituents it's not a simple 1 Hz signal. The only way to detect the shorter time at 0 reliably is to sample more frequently.

And then we go to a duty cycle of 99% and so on....

Even if we make our samples 0.5 seconds long, at 99% we're going to see 100, 98, 100, which is too small a difference to count as resolved.

Now if you think of the 100% times as the stars in the example we used before, and the 0% times as the gaps, then as the gaps get smaller we need more samples to detect them, regardless of how far apart the stars are. As the gap tends to zero, the number of samples needed to reliably detect it tends to infinity. And if the boundary between resolved and not resolved in the image is when the gap is zero width...
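This thought experiment is easy to simulate. A hedged sketch (the function name and phase values are mine; the 1 Hz square wave, 2 Hz sampling, and 0.1 s averaging aperture are from the description above): with a 90% duty cycle and an unlucky phase, every box-average sample reads 100 and the narrow 0 interval is missed entirely.

```python
import numpy as np

def box_samples(duty, phase=0.0, fs=2.0, aperture=0.1, seconds=4.0):
    """2 Hz samples of a 1 Hz square wave; each sample averages `aperture` s."""
    t = np.arange(0.0, seconds, 1e-4)          # fine time grid
    x = 100.0 * (((t + phase) % 1.0) < duty)   # square wave: 100 or 0
    starts = np.arange(0.0, seconds, 1.0 / fs)
    return np.array([x[(t >= s) & (t < s + aperture)].mean() for s in starts])

print(box_samples(0.5, phase=0.25))  # 50% duty: samples alternate 100 / 0
print(box_samples(0.9, phase=0.05))  # 90% duty, unlucky phase: all read 100
```

The point the simulation makes is the one argued above: as the duty cycle moves away from 50%, the signal's harmonic content extends well beyond 1 Hz, and 2 Hz sampling can no longer distinguish it from a constant.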

I'm tired. I'm grumpy. And I've had people shouting at me for things I haven't done, while wasting your time and my own. If you have a kindergarten-level explanation of why this is wrong, thanks. If we can metaphorically shake hands and say we've been talking at cross purposes because I was talking about something other than what you thought I was smashing, fine. Otherwise you can find your own words for "James, sad to say you're too stupid to understand your own stupidity" and save yourself pages of proof.
 
I did read it earlier - although I struggled to understand some of it. What I took from it was that we agree (a) if the lens and sensor are balanced the whole setup is more effective than a great lens and poor sensor or poor lens and great sensor. (b) There is a point where we could add more pixels and get an improvement

but the improvement is too small to justify doing so.
I actually made a stronger statement than that.
But the area where we agree is weaker than each of us has said.
I don't understand your point about agreement. I don't see any agreement here. Don't confuse issues about reconstruction techniques with the correctness of the Nyquist sampling theorem. Everything I've said about that theorem is true.
OK. Let's do this simple thought experiment.

There is a pattern of light and dark falling on the sensor, and it follows a simple wave form: 3 cycles from darker to lighter and back over the full width of the sensor.

Nyquist tells you how many samples you need to record those three cycles without aliasing. 3 cycles per image width so Nyquist says sample 6 times. So a sensor six pixels wide will do the job. Definitely no aliasing, just as Nyquist tells us.

Now, are we seeing any problem with printing our six-pixel-wide image?
You do not. You reconstruct an image with whatever pixel count your printer requires using the Whittaker–Shannon interpolation formula, or better yet one of its better-localized versions, assuming some oversampling. Then you print that.

The reconstructed image is not your samples plotted as squares. The samples serve as data for the reconstruction.
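A hedged 1-D sketch of that workflow (the signal, sample count, and 8x factor are invented for illustration): take well-sampled data, interpolate it onto the output device's finer grid with the Whittaker-Shannon formula, and the upsampled curve tracks the underlying signal closely away from the window edges.

```python
import numpy as np

def ws_interp(samples, up):
    # Whittaker-Shannon interpolation onto an `up`-times finer grid
    n = np.arange(len(samples))
    t = np.arange(len(samples) * up) / up
    return np.array([np.sum(samples * np.sinc(ti - n)) for ti in t])

f = 0.1                                       # cycles/sample: well below Nyquist
samples = np.sin(2 * np.pi * f * np.arange(64))
fine = ws_interp(samples, 8)                  # 8x "printer resolution" grid

truth = np.sin(2 * np.pi * f * np.arange(len(fine)) / 8)
err = np.abs(fine - truth)
print(err[128:-128].max())   # small away from the ends of the finite window
```

The samples are data; the printed image comes from this interpolation step, not from replicating each sample as a square block.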
 
So if the light falling on a sensor goes from light to dark over the width of the sensor, can you put a number on how many divisions are needed to record that transition perfectly?
Yes.

Perfect

Perfect

Imperfect
I don't know what those are showing, because it looks like they're saying that provided there are more than a couple of samples over the entire transition, the reproduction is perfect. Experience does suggest otherwise.
 
Still not sure. Is a cycle wider than the sensor too reductio ad absurdum? Imagine a plot of brightness as a simple wave that completes 3 cycles over the sensor width. How many samples does the theorem say are needed? And what would a picture with that number look like - perfect reproduction?
Perfect
The labels on the charts above and below appear to be the same. It's impossible for the uninitiated to get the point you're making.

Imperfect

Even less perfect.
 
So if the light falling on a sensor goes from light to dark over the width of the sensor, can you put a number on how many divisions are needed to record that transition perfectly?
Yes.

Perfect

Perfect

Imperfect
I don't know what those are showing, because it looks like they're saying that provided there are more than a couple of samples over the entire transition, the reproduction is perfect.
That is an oversimplification of what is going on. Look at the yellow line.
Experience does suggest otherwise.
Post a counterexample.

 
Still not sure. Is a cycle wider than the sensor too reductio ad absurdum? Imagine a plot of brightness as a simple wave that completes 3 cycles over the sensor width. How many samples does the theorem say are needed? And what would a picture with that number look like - perfect reproduction?
Perfect
The labels on the charts above and below appear to be the same. It's impossible for the uninitiated to get the point you're making.
Look at the f-stops.


 
I always wonder why movies look so sharp :)

Maybe switching a consumer camera to 8K and post-processing to assemble a hi-res image is a viable alternative to still-image pixel-shift tricks or even to heavy tripods :)
Motion sharpening. Your own eye makes it appear sharp.

Lack of aliasing and having smooth bokeh also help. Still frames from a motion picture are actually rather blurry, especially considering the low shutter speeds used.
 
