Limits of resolution, a simple demo

I did read it earlier, although I struggled to understand some of it. What I took from it was that we agree (a) if the lens and sensor are balanced, the whole setup is more effective than a great lens with a poor sensor or a poor lens with a great sensor; (b) there is a point where we could add more pixels and get an improvement, but the improvement is too small to justify doing so.
I actually made a stronger statement than that.
But the area where we agree is weaker than each of us has said.
I don't understand your point about agreement. I don't see any agreement here. Don't confuse issues about reconstruction techniques with the correctness of the Nyquist sampling theorem. Everything I've said about that theorem is true.
OK. Let's do this simple thought experiment.

There is a pattern of light and dark falling on the sensor, and it follows a simple waveform: 3 cycles from darker to lighter and back across the full width of the sensor.

Nyquist tells you how many samples you need to record those three cycles without aliasing. 3 cycles per image width so Nyquist says sample 6 times. So a sensor six pixels wide will do the job. Definitely no aliasing, just as Nyquist tells us.
No, the Fourier transform of a three cycle burst contains frequencies above the fundamental of the burst sine wave.
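This point about the burst spectrum can be checked numerically. The sketch below (my own construction, not code from the thread) embeds a 3-cycle sine in a dark surround and inspects its spectrum; the finite burst has energy well above its 3 cycles/width fundamental:

```python
import numpy as np

# A 3-cycle sine burst embedded in a dark (zero) surround, sampled finely.
# A finite burst is an infinite sine times a rectangular window, so its
# spectrum is the sine's line convolved with a sinc: energy spreads above
# the 3 cycles/width fundamental.
n = 1024                                   # fine samples per image width
t = np.linspace(0, 1, n, endpoint=False)
burst = np.sin(2 * np.pi * 3 * t)          # 3 cycles across one image width
signal = np.concatenate([np.zeros(n), burst, np.zeros(n)])  # 3 widths total
spec = np.abs(np.fft.rfft(signal))
spec /= spec.max()                         # normalize to the main peak
freqs = np.arange(spec.size) / 3.0         # bin spacing = 1/3 cycle per width
high = spec[freqs > 6.0]                   # well above the fundamental
print(f"peak relative amplitude above 6 cycles/width: {high.max():.3f}")
```

So six samples per width are not enough for this finite pattern; the band-limited hypothesis of the sampling theorem is not met by a burst.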
Now, are we seeing any problem with printing our six-pixel-wide image?

I said you'll never get the reproduction perfect (except in the purely theoretical case of a perfect sensor). You rocked up with "Infidel! How dare you question the prophet Nyquist."
Did I ever use the word infidel? I don't think so.
And we've wasted hours ever since. Respectfully, if anyone has conflated reconstruction techniques and the sampling theorem, it wasn't me. What I have said is that Nyquist tells you six samples cover a three-cycle signal with no aliasing, but doesn't tell you that you can use a six-pixel-wide sensor to reproduce a 3-cycle image perfectly.
You are making a false assumption. See above.
After that point it's "how many angels can dance on the head of a pin" territory; I'm in the "you can always fit another angel on" camp,
Please provide your reasoning, and why Nyquist and Shannon were wrong. Remember: “Extraordinary claims require extraordinary evidence.”
To the best of my limited understanding they were correct about the number of samples needed to avoid aliasing when sampling a signal of a given frequency.
Then why do you insist that finer pitch is better without regard for the lens, without limit?
Imagine I had the worst possible lens that could only resolve 3 line pairs per image width and work from there.
That's supposed to help me? Your claim would be that I need an infinite number of samples to resolve that perfectly.
Now... if I wanted to check that loaves coming out of a bakery were the correct weight, the only way to be sure every loaf was the correct weight is to weigh every single one. But we can weigh some proportion and have a confidence level that all are OK.
If someone told me that loaf weight is a function
I don't know what "loaf weight is a function" means.
and Nyquist proves that loaf weight can be perfectly assessed testing a subset of loaves, I would protest that's false. I can't even offer a Fermatesque 'I can prove it, but not in the space I have'. Much less to the person who cites Fourier and the rest to prove that they can't possibly be wrong.
You have constructed a situation to which the Nyquist sampling theorem does not apply, and now you're complaining that it doesn't apply.
No, I'm not complaining. I'm saying that here is a problem domain, it relates to sampling, and if someone quoted Nyquist we'd all say they were wrong... Out of curiosity, do you read to the end first?
Or another area I work with, queues; jobs arrive at random (with a probability function) and they take a random time to service (again with a probability function). I can draw a chart of the queue length. Does Nyquist tell me how often I need to check the queue length to say what the mean and median length are over a day ?
That's just silly.
Indeed. But is a photograph more like a waveform in a signal (where Nyquist applies), or more like the chart of my queue length (where it would be silly)? Because if you can show me it's the former, there'll be a thwack on my forehead and a shout of DOH! that can be heard for miles.
The former is the situation for which the sampling theorem was created. The only difference between the 1-D situations I have been showing (because they are easier to understand) and the 2-D situations is the extra dimension. The same math applies.
There are lots of these sampling problems. If I want to know the RPM of an engine I'd expect Nyquist to help with sampling frequency. If I want to know the amount of fuel used over a race, Nyquist doesn't tell me how often to check the flow rate. I just know if I check often enough the error will be too small to worry about.
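For what it's worth, the queue example can be made concrete with a simulation. This sketch is my own toy model (Poisson arrivals, exponential service, i.e. an M/M/1 queue; none of the parameters come from the thread): it polls the queue at fixed ticks and the mean-length estimate settles down as the number of polls grows, which is a confidence question, not a Nyquist one:

```python
import random

# Toy model (my construction, not from the thread): an M/M/1 queue,
# Poisson arrivals at rate lam, exponential service at rate mu, polled
# at regular ticks. The mean-length estimate is a statistics question
# (more polls -> tighter confidence), not a Nyquist question.
random.seed(1)

def simulate_polls(n_polls=200_000, lam=0.8, mu=1.0):
    """Return the number of jobs in the system at each 1-second poll."""
    q, polls = 0, []
    next_arrival = random.expovariate(lam)
    next_departure = float("inf")
    next_poll = 0.0
    while len(polls) < n_polls:
        if next_poll <= min(next_arrival, next_departure):
            polls.append(q)                 # just record; queue unchanged
            next_poll += 1.0
        elif next_arrival <= next_departure:
            q += 1                          # a job arrives
            if q == 1:                      # server was idle: start service
                next_departure = next_arrival + random.expovariate(mu)
            next_arrival += random.expovariate(lam)
        else:
            q -= 1                          # a job finishes
            next_departure = (next_departure + random.expovariate(mu)
                              if q else float("inf"))
    return polls

polls = simulate_polls()
for n in (100, 10_000, 200_000):
    est = sum(polls[:n]) / n
    print(f"{n:>7} polls -> estimated mean jobs in system: {est:.2f}")
```

For this load (rho = 0.8) the long-run mean number in the system is rho/(1 - rho) = 4, and the polled estimate approaches it statistically; no finite sampling rate gives it "perfectly".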

So there are cases where Nyquist applies, and where he doesn't, yes? I think a lot of questions we want to answer in photography fall into the "not well formed with respect to the sampling theorem" category (I can't find the words to quote verbatim). The continuously varying grey scale was one. The "how well can we separate hairs as they go out of focus" is another.
I already posted an analysis of a continuously varying gray scale. You said it doesn't have frequencies associated with it. I posted a Fourier analysis that showed that it did.
I either missed, or couldn't understand, the Fourier analysis, but I posed the question to someone else. If the frequency is very, very low, say one cycle is two sensors wide, do we get perfect reproduction with one sample per sensor?
No. We need at least two. For a DC stimulus, one would suffice.
Proofs of where Nyquist does [not] apply? I'm not sure anyone has anything useful to add to the discussion. If you could prove he applies here, I wouldn't understand it.
So your reaction to your unwillingness or inability to understand the sampling theorem, which requires an understanding of the frequency domain, is to reject its conclusions based on what?
As I said right at the outset: we end up in a dialogue of the deaf. You insist that the sampling theorem tells you the number of pixels required to guarantee perfect reproduction of any possible image. When I say such a thing is not possible, and it's a misuse of a correct theorem to say it is, you bluster to try to suggest I'm saying the theorem is wrong in general. I'm not.
Then precisely, and quantitatively state your assertion in a way that is testable.
while others will say there is expertise (not specifically about angels or pins) which can be used to give a number.
One is right and one is wrong. This is not something that can be settled by opinions.
OK, how many angels can dance on the head of a pin then? :-) The question is not whether the theorem is right or wrong, but whether a photograph is, in practice, a signal with a set of frequencies for which the sole question is aliasing.
You keep moving the goalposts. No one ever said the sole question was aliasing. But that's the question that applies to your assertion about needing to sample infinitely finely to get all the detail captured that a lens can lay down on the sensor.
OK. Does the theorem tell us (a) the number of samples required for perfect reproduction of the image, or (b) the number to prevent aliasing?
If there's no aliasing, we have the information we need to perfectly reconstruct the image, ignoring quantizing and noise.
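The "(b) implies (a)" claim can be illustrated directly. This sketch is mine (the 1 Hz sine and 2.5 Hz sampling rate are arbitrary choices): it rebuilds the signal from its samples by Whittaker-Shannon sinc interpolation, and with no aliasing the interior error is tiny, coming only from truncating the sample set:

```python
import numpy as np

# Whittaker-Shannon reconstruction (my sketch; the 1 Hz sine and 2.5 Hz
# rate are arbitrary choices). With sampling above Nyquist there is no
# aliasing, and sinc interpolation recovers the signal; the only error
# comes from truncating the (formally infinite) sample set.
fs = 2.5                                   # samples per second, > 2 x 1 Hz
n = np.arange(-200, 201)                   # finite window of sample indices
samples = np.sin(2 * np.pi * n / fs)       # the 1 Hz sine, sampled
t = np.linspace(-5, 5, 1001)               # dense grid, far from the edges
recon = np.array([np.sum(samples * np.sinc(fs * ti - n)) for ti in t])
err = np.max(np.abs(recon - np.sin(2 * np.pi * t)))
print(f"max reconstruction error on [-5, 5]: {err:.2e}")
```

The residual error shrinks as more samples are kept, which is exactly the truncation caveat raised elsewhere in the thread.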
Last try on the infinite-resolution (or better expressed "perfect") thing.

In your terms, think of sampling a square wave.
You do know that a square wave has an unbounded frequency domain representation, do you not? So you have selected an example where you can't sample at the Nyquist frequency. Now, filter that square wave with a perfect lens of finite aperture, and we can.
It's 100 or 0, and what I know as the "duty cycle" is 50% (in case I'm using the term wrongly: 50% of the time at 100, 50% of the time at 0)
The frequency is 1Hz.
You sample at 2Hz
2 Hz is too low, even if it's a sine wave. I pointed that out earlier. You need to sample slightly above the Nyquist frequency. That's what the sampling theorem says.
and each sample is the average value over 0.1s
Now... there is a chance that the transition comes during your 0.1 seconds. If it is exactly in the middle you get 50/50/50/50 for all your samples; if it is a bit offset you might get 70/30/70/30; and if it misses the boundary you get 100/0/100/0. Simple enough to calculate the average.

Then we increase the duty cycle to 90% (so 90% at 100 and 10% at 0). So there is a high probability that all samples fall in the 90% and we get 100/100/100/100. The pulses are happening at 1Hz, but when you decompose the signal into its constituents it's not a simple 1Hz signal. The only way to detect the shorter time at 0 reliably is to sample more frequently.

And then we go to a duty cycle of 99% and so on....

Even if we make our samples 0.5 seconds, at 99% we're going to see 100, 98, 100, which is too small a difference to count as resolved.
As I've pointed out before, you have constructed a situation that doesn't obey the sampling theorem, and now you're complaining that the samples don't provide enough information to do the reconstruction. Is that a surprise?
Now if you think of the 100% times as the stars in the example we used before, and the 0% times as the gaps between them: as the gaps get smaller we need more samples to detect them, regardless of how far apart the stars are. As the gap tends to zero, the number of samples needed to reliably detect it tends to infinity. And if the boundary between resolved and not resolved in the image is when the gap is zero width...
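The duty-cycle experiment is easy to simulate. The sketch below is my own code (the 0.2 s sample phase is an arbitrary choice that happens to miss the gap, as the post above anticipates); it box-averages a 1 Hz pulse train at 2 samples per second:

```python
import numpy as np

# Numerical version of the thought experiment above (my code; the 0.2 s
# sample phase is an arbitrary choice). A 1 Hz, 0-to-100 pulse train is
# sampled at 2 Hz, each sample being a 0.1 s box average.
def box_samples(duty, rate=2.0, width=0.1, seconds=4.0, fine=100_000):
    """Box-average a 1 Hz pulse train with the given duty cycle."""
    t = np.linspace(0.0, seconds, fine, endpoint=False)
    wave = np.where((t % 1.0) < duty, 100.0, 0.0)
    starts = np.arange(0.2, seconds - width, 1.0 / rate)  # sample phase 0.2 s
    return [wave[(t >= s) & (t < s + width)].mean() for s in starts]

for duty in (0.5, 0.9, 0.99):
    print(f"duty {duty:4.0%}: {[round(v) for v in box_samples(duty)]}")
```

At 50% duty the samples alternate 100/0; at 90% and 99% every window lands inside the bright portion and the record is all 100s, so the gap is invisible at this rate. (As noted above, a pulse train is not band-limited, so the sampling theorem's hypothesis is not met here.)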

I'm tired. I'm grumpy. And I've had people shouting at me for things I haven't done while wasting your time and my own. If you have a kindergarten-level explanation of why this is wrong, thanks. If we can metaphorically shake hands and say we've been talking at cross purposes because I was talking about something other than what you thought I was, smashing. Otherwise you can find your own words for "James, sad to say you're too stupid to understand your own stupidity" and save yourself pages of proof.
You made an assertion early in the thread. I have posted examples that disproved it. You have offered no objections to my examples, but continue to construct mental exercises where the input is undersampled, and then claim that they disprove something about what happens when the input is properly sampled.

And now you are calling me names.

--
https://blog.kasson.com
 
There are an infinite number of possible numbers between zero and one. If someone brings a theory along from another domain that says he can write them all down, what's your response?
I'd say that this is a not well formulated question. We have the Axiom of Choice on one hand, but all (presumably real) numbers between 0 and 1 form an uncountable set, so he cannot write them down one after the other even if he had infinite time.
So, if we go from light to dark, are there not an uncountable set of brightness values? (Before we think about recording an image; obviously the numbers in a file are finite.)
So if the light falling on a sensor goes from light to dark over the width of the sensor, can you put a number on how many divisions are needed to record that transition perfectly?
You mean, in a linear way, and I know that? I need two points.
The left of the sensor registers whatever the maximum value is; the right of the sensor records noise level. We can adequately record that in 256 steps. But how many to perfectly record it? The transition might be linear or a curve; the recording needs to capture that.
Imagine there were another sensor beside it: the level would come up from dark back to light, and on another beyond it would go down again. So if we sample at 1/2 the frequency, would we have perfection if we took one sample per sensor? Of course not; we'd just know there was no aliasing.
I do not understand the question.
The last case was one sensor where the change in brightness does not appear to be cyclical; it just goes from light to dark. But suppose this was part of a cycle which is multiple sensors wide. So if we had two sensors side by side we'd see the dark going back to light, a full cycle; 4 sensors = 2 cycles, etc.
Still not sure. Is a cycle wider than the sensor too reductio ad absurdum? Imagine the plot of brightness is a simple wave that completes 3 cycles over the sensor width. How many samples does the theorem say? And what would a picture with that number look like: perfect reproduction?
See my post earlier on how to deal with the final span of the sensor. With a hypothetical 24MP sensor, it would be so perfect that you would not be able to tell the difference visually at all. The only possible noticeable errors would be at the edges, which are routinely cropped a bit anyway. With an infinite virtual sensor and an infinite sinusoidal wave, it would be absolutely perfect.
Ah. Hang on. Finite is so good it might as well be infinite, but you only get absolute perfection when it's infinite? Didn't someone say "If that were the case, Harry Nyquist was wrong."

(not my quote, Nyquist and I weren't on first name terms)
 
There are an infinite number of possible numbers between zero and one. If someone brings a theory along from another domain that says he can write them all down, what's your response?
I'd say that this is a not well formulated question. We have the Axiom of Choice on one hand, but all (presumably real) numbers between 0 and 1 form an uncountable set, so he cannot write them down one after the other even if he had infinite time.
So, if we go from light to dark, are there not an uncountable set of brightness values? (Before we think about recording an image; obviously the numbers in a file are finite.)
You are confusing the data in the file with the data after reconstruction. JACS has pointed this out before.
So if the light falling on a sensor goes from light to dark over the width of the sensor, can you put a number on how many divisions are needed to record that transition perfectly?
You mean, in a linear way, and I know that? I need two points.
The left of the sensor registers whatever the maximum value is; the right of the sensor records noise level. We can adequately record that in 256 steps. But how many to perfectly record it?
Now you are off into quantizing. That was not part of the original discussion, or part of the assertion that you made at the top of this thread.
The transition might be linear or a curve; the recording needs to capture that.
Imagine there were another sensor beside it: the level would come up from dark back to light, and on another beyond it would go down again. So if we sample at 1/2 the frequency, would we have perfection if we took one sample per sensor? Of course not; we'd just know there was no aliasing.
I do not understand the question.
The last case was one sensor where the change in brightness does not appear to be cyclical; it just goes from light to dark. But suppose this was part of a cycle which is multiple sensors wide. So if we had two sensors side by side we'd see the dark going back to light, a full cycle; 4 sensors = 2 cycles, etc.
Still not sure. Is a cycle wider than the sensor too reductio ad absurdum? Imagine the plot of brightness is a simple wave that completes 3 cycles over the sensor width. How many samples does the theorem say? And what would a picture with that number look like: perfect reproduction?
See my post earlier on how to deal with the final span of the sensor. With a hypothetical 24MP sensor, it would be so perfect that you would not be able to tell the difference visually at all. The only possible noticeable errors would be at the edges, which are routinely cropped a bit anyway. With an infinite virtual sensor and an infinite sinusoidal wave, it would be absolutely perfect.
Ah. Hang on. Finite is so good it might as well be infinite, but you only get absolute perfection when it's infinite? Didn't someone say "If that were the case, Harry Nyquist was wrong."

(not my quote, Nyquist and I weren't on first name terms)
 
There are an infinite number of possible numbers between zero and one. If someone brings a theory along from another domain that says he can write them all down, what's your response?
I'd say that this is a not well formulated question. We have the Axiom of Choice on one hand, but all (presumably real) numbers between 0 and 1 form an uncountable set, so he cannot write them down one after the other even if he had infinite time.
So, if we go from light to dark, are there not an uncountable set of brightness values? (Before we think about recording an image; obviously the numbers in a file are finite.)
You are missing the main point again. You do not need to recover the intermediate values. You recover them using the additional information that the signal is band-limited.
So if the light falling on a sensor goes from light to dark over the width of the sensor, can you put a number on how many divisions are needed to record that transition perfectly?
You mean, in a linear way, and I know that? I need two points.
The left of the sensor registers whatever the maximum value is; the right of the sensor records noise level. We can adequately record that in 256 steps. But how many to perfectly record it? The transition might be linear or a curve; the recording needs to capture that.
As I said, "and I know that." More generally, we know that the signal is band limited.
Imagine there were another sensor beside it: the level would come up from dark back to light, and on another beyond it would go down again. So if we sample at 1/2 the frequency, would we have perfection if we took one sample per sensor? Of course not; we'd just know there was no aliasing.
I do not understand the question.
The last case was one sensor where the change in brightness does not appear to be cyclical; it just goes from light to dark. But suppose this was part of a cycle which is multiple sensors wide. So if we had two sensors side by side we'd see the dark going back to light, a full cycle; 4 sensors = 2 cycles, etc.
This is not a band-limited signal, assuming that it is piecewise linear (a zig-zag graph).
Still not sure. Is a cycle wider than the sensor too reductio ad absurdum? Imagine the plot of brightness is a simple wave that completes 3 cycles over the sensor width. How many samples does the theorem say? And what would a picture with that number look like: perfect reproduction?
See my post earlier on how to deal with the final span of the sensor. With a hypothetical 24MP sensor, it would be so perfect that you would not be able to tell the difference visually at all. The only possible noticeable errors would be at the edges, which are routinely cropped a bit anyway. With an infinite virtual sensor and an infinite sinusoidal wave, it would be absolutely perfect.
Ah. Hang on. Finite is so good it might as well be infinite, but you only get absolute perfection when it's infinite? Didn't someone say "If that were the case, Harry Nyquist was wrong."
??? The theorem requires infinitely many samples. There are versions with finitely many, and with error estimates.

As I said earlier all models are approximate. We teach that basically on day one. Still, we sent men to the Moon and probes to Mars based on them.
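The finite-sample point can be illustrated numerically. This sketch is my own construction (a 1 Hz sine sampled at 2.5 Hz): it reconstructs the value at one instant from only N samples on each side, and the error is nonzero for every finite N but shrinks as N grows, matching the "versions with error estimates" mentioned above:

```python
import numpy as np

# My sketch: a 1 Hz sine sampled at 2.5 Hz, reconstructed at one instant
# from only N samples on each side. The truncation error is nonzero for
# every finite N and shrinks as N grows.
def truncated_recon(t, n_terms, fs=2.5):
    center = int(np.floor(t * fs))
    n = np.arange(center - n_terms, center + n_terms + 1)
    return np.sum(np.sin(2 * np.pi * n / fs) * np.sinc(fs * t - n))

t0 = 0.3
errors = [abs(truncated_recon(t0, N) - np.sin(2 * np.pi * t0))
          for N in (5, 50, 500)]
for N, e in zip((5, 50, 500), errors):
    print(f"N = {N:3d} samples each side: |error| = {e:.2e}")
```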
 
Last try on the infinite-resolution (or better expressed "perfect") thing.

In your terms, think of sampling a square wave. It's 100 or 0, and what I know as the "duty cycle" is 50% (in case I'm using the term wrongly: 50% of the time at 100, 50% of the time at 0)
The frequency is 1Hz.
You sample at 2Hz and each sample is the average value over 0.1s
Now... there is a chance that the transition comes during your 0.1 seconds. If it is exactly in the middle you get 50/50/50/50 for all your samples; if it is a bit offset you might get 70/30/70/30; and if it misses the boundary you get 100/0/100/0. Simple enough to calculate the average.

Then we increase the duty cycle to 90% (so 90% at 100 and 10% at 0). So there is a high probability that all samples fall in the 90% and we get 100/100/100/100. The pulses are happening at 1Hz, but when you decompose the signal into its constituents it's not a simple 1Hz signal. The only way to detect the shorter time at 0 reliably is to sample more frequently.

And then we go to a duty cycle of 99% and so on....

Even if we make our samples 0.5 seconds, at 99% we're going to see 100, 98, 100, which is too small a difference to count as resolved.
With a lens, there is going to be diffraction, which will limit the frequency response. The example that you've proposed requires infinite frequency response.
 
I'm tired. I'm grumpy. And I've had people shouting at me for things I haven't done while wasting your time and my own. If you have a kindergarten-level explanation of why this is wrong, thanks. If we can metaphorically shake hands and say we've been talking at cross purposes because I was talking about something other than what you thought I was, smashing. Otherwise you can find your own words for "James, sad to say you're too stupid to understand your own stupidity" and save yourself pages of proof.
There is no one participating in this thread who is agreeing with your assertion about an infinite number of samples being required to reconstruct a band limited signal. However, WRT stupidity and me, I'm not the only person disagreeing with you. JACS, Jack, and Truman are all knowledgeable and don't support your position. Do you think that is a coincidence?
 
Here is a quick demo of a reconstruction with Lanczos3. The function sampled is a sum of several sine functions; the highest frequency is π. When the sampling rate is just 5% over Nyquist, the sinc interpolation (not shown) does well, but it shows some errors near the endpoints because everything is truncated, as in an actual photo. Lanczos3 shows aliasing everywhere, but it needs (approximately) 2x Nyquist (a sampling rate, not a frequency), so this is expected:


The signal is in blue, the Lanczos3 reconstruction is in red. Sampling rate 1.05 x Nyquist.

Now, get to 2xNyquist. You see one color only because the reconstruction is virtually perfect.


The signal is in blue, the Lanczos3 reconstruction is in red. Sampling rate 2 x Nyquist.

The code was written by this human, not by AI, which explains the bland look of the output...
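For readers who want to reproduce something similar, here is a minimal Lanczos3 reconstruction along the same lines (my own sketch, not the poster's code; the particular sum of sines is invented, with highest frequency π as in the demo):

```python
import numpy as np

# A minimal Lanczos3 reconstruction in the spirit of the demo above
# (my own sketch, not the poster's code; the particular sum of sines
# is invented, with highest frequency pi as described).
def lanczos3(x):
    """Lanczos kernel, a = 3: sinc(x) * sinc(x/3) for |x| < 3, else 0."""
    return np.where(np.abs(x) < 3, np.sinc(x) * np.sinc(x / 3), 0.0)

def signal(t):
    return np.sin(0.3 * t) + 0.5 * np.sin(1.7 * t) + 0.3 * np.sin(np.pi * t)

dt = 0.5                                    # 2x the Nyquist sampling rate
n = np.arange(-40, 401)                     # sample indices, margin at ends
t = np.linspace(0.0, 100.0, 2000)           # dense reconstruction grid
recon = np.array([np.sum(signal(n * dt) * lanczos3(ti / dt - n)) for ti in t])
err = np.max(np.abs(recon - signal(t)))
print(f"max Lanczos3 reconstruction error at 2x Nyquist: {err:.3f}")
```

At this 2x rate the red and blue curves of the demo above would overlap; raising dt toward 1.0 (a rate only just above Nyquist) makes the Lanczos3 error grow, which is the behaviour shown in the first figure.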
 

Nyquist tells you how many samples you need to record those three cycles without aliasing. 3 cycles per image width so Nyquist says sample 6 times. So a sensor six pixels wide will do the job. Definitely no aliasing, just as Nyquist tells us.
No, the Fourier transform of a three cycle burst contains frequencies above the fundamental of the burst sine wave.
Ah... now we might be converging on something
Now, are we seeing any problem with printing our six-pixel-wide image?

I said you'll never get the reproduction perfect (except in the purely theoretical case of a perfect sensor). You rocked up with "Infidel! How dare you question the prophet Nyquist."
Did I ever use the word infidel? I don't think so.
Of course you didn't; you didn't use "prophet" either. But I felt I was being attacked with great zeal. Caricature for illustration purposes seems to travel poorly. But you have expert knowledge and use it in a way which makes others (well, me anyway) feel you are belittling them. In the end you might need to think of it as trying to teach a pig to sing...
And we've wasted hours ever since. Respectfully, if anyone has conflated reconstruction techniques and the sampling theorem, it wasn't me. What I have said is that Nyquist tells you six samples cover a three-cycle signal with no aliasing, but doesn't tell you that you can use a six-pixel-wide sensor to reproduce a 3-cycle image perfectly.

You are making a false assumption. See above.
I made an observation: one can't get perfect reproduction of an arbitrary image. You said Nyquist says you can. And we could have put dozens of hours to better use since.
After that point it's "how many angels can dance on the head of a pin" territory; I'm in the "you can always fit another angel on" camp,
Please provide your reasoning, and why Nyquist and Shannon were wrong. Remember: “Extraordinary claims require extraordinary evidence.”
To the best of my limited understanding they were correct about the number of samples needed to avoid aliasing when sampling a signal of a given frequency.
Then why do you insist that finer pitch is better without regard for the lens, without limit?
Imagine I had the worst possible lens that could only resolve 3 line pairs per image width and work from there.
That's supposed to help me? Your claim would be that I need an infinite number of samples to resolve that perfectly.
It's explained below.
Or another area I work with, queues; jobs arrive at random (with a probability function) and they take a random time to service (again with a probability function). I can draw a chart of the queue length. Does Nyquist tell me how often I need to check the queue length to say what the mean and median length are over a day ?
That's just silly.
Indeed. But is a photograph more like a waveform in a signal (where Nyquist applies), or more like the chart of my queue length (where it would be silly)? Because if you can show me it's the former, there'll be a thwack on my forehead and a shout of DOH! that can be heard for miles.
The former is the situation for which the sampling theorem was created. The only difference between the 1-D situations I have been showing (because they are easier to understand) and the 2-D situations is the extra dimension. The same math applies.
OK, is the data to digitize in photography like a waveform, or like the trace of a queue?
OK. Does the theorem tell us (a) the number of samples required for perfect reproduction of the image, or (b) the number to prevent aliasing?
If there's no aliasing, we have the information we need to perfectly reconstruct the image, ignoring quantizing and noise.
Ah, that takes me back to A-level pure maths with mechanics, where the mechanics involved weights attached to massless ropes passing over frictionless, massless pulleys, to move things along a frictionless surface in a vacuum.
Last try on the infinite-resolution (or better expressed "perfect") thing.

In your terms, think of sampling a square wave.
You do know that a square wave has an unbounded frequency domain representation, do you not? So you have selected an example where you can't sample at the Nyquist frequency. Now, filter that square wave with a perfect lens of finite aperture, and we can.
I know, because when I was doing A levels I thought I could learn to play synthesizers, that you can take a square wave or sawtooth wave and apply a filter to it and get a different waveform, because you filtered out other frequencies. And this seemed like witchcraft, because if the wave was x Hz, where did the other frequencies come from?
It's 100 or 0, and what I know as the "duty cycle" is 50% (in case I'm using the term wrongly: 50% of the time at 100, 50% of the time at 0)
The frequency is 1Hz.
You sample at 2Hz
2 Hz is too low, even if it's a sine wave. I pointed that out earlier. You need to sample slightly above the Nyquist frequency. That's what the sampling theorem says.
and each sample is the average value over 0.1s
Now... there is a chance that the transition comes during your 0.1 seconds. If it is exactly in the middle you get 50/50/50/50 for all your samples; if it is a bit offset you might get 70/30/70/30; and if it misses the boundary you get 100/0/100/0. Simple enough to calculate the average.

Then we increase the duty cycle to 90% (so 90% at 100 and 10% at 0). So there is a high probability that all samples fall in the 90% and we get 100/100/100/100. The pulses are happening at 1Hz, but when you decompose the signal into its constituents it's not a simple 1Hz signal. The only way to detect the shorter time at 0 reliably is to sample more frequently.

And then we go to a duty cycle of 99% and so on....

Even if we make our samples 0.5 seconds, at 99% we're going to see 100, 98, 100, which is too small a difference to count as resolved.
As I've pointed out before, you have constructed a situation that doesn't obey the sampling theorem, and now you're complaining that the samples don't provide enough information to do the reconstruction. Is that a surprise?
The flip side is that these situations I cite are what happens when we take photographs and ask questions like "my lens formed this image; how much detail can I see in the transitions between in focus and out of focus?"

But, in fixating on the squareness of the wave, you missed my point. Go back to the question of resolving two stars. We say they are resolved if we can see light, dark, light.

If we can't "see the dark bit" they aren't resolved (and that definition is necessarily loose). So... I get two posts, place them in front of a black background, and fix white targets on them. I take pictures and ask "can I see two targets, or do they blur into one?". Is the sensor resolution I need to tell whether I can see the gap between the targets dependent on how far apart my posts are, or on how much clear space there is between the two targets? Because what I tried to say with duty cycles was that as the space between the targets gets smaller, the number of samples needed to be sure of finding the gap depends on the gap size. I think you would say that a well-resolved image (50% duty cycle, if a square wave worked) has a different mix of frequencies to a badly resolved one (10% duty cycle, if it worked); the badly resolved one in effect has higher-frequency harmonics (as synth-playing me would have known them), and at the limit of resolution those continue ad infinitum.

If we don't think in terms of perfect lenses forming perfectly focused images, or illumination on the sensor arranged to suit the theorem or not, and think instead of a photo of someone's head with shallow D.o.F., there's some point in the image where the gap between the hairs goes from "just there" to "not there". Does the captured image enable us to resolve right up to that point? (Which is one working definition of its being perfect.) Not with any existing sensor, but we get closer and closer as resolution increases. I'm never going to have a proof that the cases Nyquist doesn't tell us about show that we never get to perfection. And if you have a proof that the cases he does tell us about shows that we do, I'm never going to be able to understand it. If you think I'm arguing a Zeno's-paradox type case (Achilles gets closer and closer but never passes the tortoise), I do allow for the possibility of being wrong. Unfortunately your ability to explain and my ability to understand don't allow us to resolve it.

(A similar kind of thing: what gets learned depends on how good the teacher is and how able the student is. Better students can learn from worse teachers, but there comes a point where the material is not taught at all, at which point the student would need to be infinitely good to learn it. Similarly, if the student is incapable of grasping anything, it needs an infinitely good teacher. If this hypothesis is correct, should the best teachers be given the most able or least able students? [Answer: depends on the objective. If it is to maximise "A" grades, yes; if it is to maximise passes, no.])
Now if you think of the 100% times as the stars in the example we used before, and the 0% times as the gaps between them: as the gaps get smaller we need more samples to detect them, regardless of how far apart the stars are. As the gap tends to zero, the number of samples needed to reliably detect it tends to infinity. And if the boundary between resolved and not resolved in the image is when the gap is zero width...

I'm tired. I'm grumpy. And I've had people shouting at me for things I haven't done while wasting your time and my own. If you have a kindergarten-level explanation of why this is wrong, thanks. Or we can metaphorically shake hands and say we've been talking at cross purposes, because I was talking about something other than what you thought I was smashing. Otherwise you can find your own words for "James, sad to say, you're too stupid to understand your own stupidity" and save yourself pages of proof.
You made an assertion early in the thread. I have posted examples that disproved it. You have offered no objections to my examples, but continue to construct mental exercises where the input is undersampled, and then claim that they disprove something about what happens when the input is properly sampled.
OK. I'm trying to show that a shrinking "gap" between poorly resolved details leads to under-sampling. There is such a thing as properly sampled input, but as we near the limit of resolution in the image we can't avoid under-sampling. And it follows that if we could say when we get proper sampling and when under-sampling, things would become massively easier to follow.
And now you are calling me names.
I really hope not, or the names have been things like "expert". My peak ability to understand what you grasp was about 40 years ago, and I was probably unable to understand it even then. (I had a year out between my Pure Maths with Mechanics A level and my Computer Science degree. We had to do a pure maths unit intended for students doing Mathematics degrees who had done Pure & Applied (mechanics or stats) in one year instead of the normal two, then spent the second year on further pure, and gone straight to university before they could forget it. I was in the upper quartile for maths, but not in the upper decile.)
Your standing as a subject matter expert on the theorem is unquestioned. Your ability to explain to a dullard like me, without emphasising that gap in our knowledge... not so much*. I hope I can make that distinction and say "hey, that's not the kindest way to speak to me" without resorting to name calling. If that's where I've ended up, then I owe you an apology, happy to give it here and now.

* I was trying to find a quote earlier ... the search ran two quotes together to produce something which might be of use to both of us.

"Have the courage to be ignorant of a great number of things, in order to avoid the calamity of being ignorant of everything. He not only overflowed with learning, but stood in the slop."
 
Ah, that takes me back to A level pure maths with mechanics, where the mechanics involved weights attached to massless ropes passing over frictionless massless pulleys, to move things along a frictionless surface in a vacuum.
This is part one of lecture one: the harmonic oscillator, a fundamental concept in math and physics. It is usually followed by the damped harmonic oscillator, within the same lecture, modeling friction and air resistance. The second lecture is usually the forced (damped or not) harmonic oscillator which introduces the fundamental notion of resonance. Then we go on to analyze the pendulum as an example of a nonlinear ODE approximated by a linear harmonic oscillator when the oscillations have small amplitudes but it can be analyzed without linearizing.
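That first-lecture material condenses to a few lines of code. A minimal sketch (semi-implicit Euler with arbitrary parameters of my choosing, not anything from this thread) of the damped harmonic oscillator x'' + 2ζωx' + ω²x = 0:

```python
def simulate_damped_oscillator(omega=2.0, zeta=0.1, x0=1.0, v0=0.0,
                               dt=1e-3, steps=20000):
    """Integrate x'' + 2*zeta*omega*x' + omega**2 * x = 0 with
    semi-implicit Euler; return the trajectory of x."""
    x, v = x0, v0
    xs = []
    for _ in range(steps):
        v += (-2 * zeta * omega * v - omega**2 * x) * dt
        x += v * dt
        xs.append(x)
    return xs

xs = simulate_damped_oscillator()
# With damping, the envelope shrinks: late peaks are smaller than early ones.
assert max(abs(x) for x in xs[-5000:]) < max(abs(x) for x in xs[:5000])
```

Adding a sinusoidal drive term to the velocity update is all it takes to explore the forced oscillator and resonance mentioned above.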

See below for an AI generated answer to the question of the importance of the harmonic oscillator that you are trying to ridicule:

The harmonic oscillator is important because it serves as a universal model for oscillatory systems in both classical and quantum physics, from the vibrations of atoms to light waves. Its mathematical simplicity allows for exact solutions, providing a foundational tool for understanding complex phenomena and developing technologies like quantum computing, spectroscopy, and electronics. By approximating real-world systems near equilibrium, the harmonic oscillator helps physicists derive crucial principles about energy, motion, and wave-particle duality.

Why the harmonic oscillator is important:
  • Universal Model: Any system with a stable equilibrium point can be approximated as a harmonic oscillator for small displacements, making it applicable to a vast range of physical phenomena.
  • Quantum Mechanics Insights: It's a key system in quantum theory for understanding quantization, wave-particle duality, and energy transitions, providing insights into atomic and molecular behavior.
  • Exact Analytical Solutions: The harmonic oscillator is one of the few quantum systems for which exact analytical solutions can be found, serving as a vital pedagogical tool and a reference for more complex systems.
  • Building Blocks for Fourier Analysis: The sinusoidal nature of harmonic oscillations provides the building blocks for understanding and analyzing any periodic function using Fourier analysis.
Applications in Technology:
  • Spectroscopy: Understanding molecular vibrations (modeled as harmonic oscillators) is crucial for techniques that probe material properties.
  • Quantum Computing: Harmonic oscillator concepts are applied in developing quantum computing technologies by harnessing quantum phenomena in systems like photons in cavities.
  • Electronic Circuits: The principles of harmonic oscillation are used in designing and understanding electronic components like LC oscillators and other electronic devices that generate periodic signals.
Examples in Nature:
  • Molecular Vibrations: The oscillations of atoms within molecules can be effectively modeled as harmonic oscillators, especially at lower energy levels.
  • Electromagnetic Fields: Photons, the particles of light, can be described as modes of the electromagnetic field that behave like harmonic oscillators.
  • Solid-State Physics: The vibrations of atoms in a crystal lattice (phonons) can also be understood using the framework of the harmonic oscillator.
 
Hi James,

With this thread the OP (Erik) wanted to demonstrate that with current kit a smaller pixel aperture will typically improve a given imaging system's resolution, all else being equal, though with diminishing returns. He provided a good example of that, with the smaller pixel producing higher MTF at a specified spatial frequency, proving his point. Folks applauded him for it, all agreed, bravo.

I came to this thread in the middle of the exchange and did not read through it first, usually a mistake. From what I can tell the point of contention stems from this phrase from your very first post in reply to an applauder (I underlined the offending words):
But everything I've learnt says in theory to fully resolve the image of any lens needs infinite sensor resolution
This is demonstrably false, and you have been offered the theoretical background, references and examples to understand why that is by several people, should you choose to pursue it further (some of them used to teach this stuff). Don't take it personally, it's just a fact not a judgement, we are all here to learn. And other than a little exasperation transpiring here and there I have not seen any name calling, so no worries.

If the explanations were not clear and you have specific additional questions about them, feel free to ask. One at a time and concisely though, otherwise the discussion becomes dispersive and people lose interest. This is usually more productive than simply repeating one's point again and again.

Jack
 
Here is another demo, this time with the sinc interpolation kernel, the way it is in the original theorem. Sampling and reconstruction have been done 50% to the left and right of the shown window, to get closer to the original theorem, again. The original signal is in blue, the reconstruction is in red.




undersampling by just 5%: a disaster




oversampling by just 5%: a beauty
 

Here is another demo, this time with the sinc interpolation kernel, the way it is in the original theorem. Sampling and reconstruction have been done 50% to the left and right of the shown window, to get closer to the original theorem, again. The original signal is in blue, the reconstruction is in red.


undersampling by just 5%: a disaster


oversampling by just 5%: a beauty
Good demo. Can you give me the specs for the sine waves (omega, phase, amplitude for each?) so that I can attempt to reproduce your test and try other reconstruction methods?

--
 
Copy and paste from Maple:



It has four discrete frequencies (because I am too lazy to add more). The interval shown is [0,50] but the computations are done on [-25, 75].

🦒
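For anyone wanting to reproduce the demo without Maple, here is a minimal Python sketch of Whittaker-Shannon (sinc) reconstruction over [0, 50] with samples taken on [-25, 75]. The four amplitude/frequency/phase triples below are placeholders of my own, not the actual Maple values (those are only in the attached screenshot):

```python
import math

# Hypothetical stand-ins for the four components (amplitude, omega, phase);
# the real Maple values are in the attached screenshot.
COMPONENTS = [(1.0, 0.8, 0.0), (0.7, 1.3, 0.5), (0.5, 1.9, 1.1), (0.3, 2.6, 2.2)]

def signal(t):
    return sum(a * math.sin(w * t + p) for a, w, p in COMPONENTS)

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(t, pitch, lo=-25.0, hi=75.0):
    """Whittaker-Shannon interpolation from samples spaced `pitch` apart
    on [lo, hi]; compare against signal(t) on the window [0, 50]."""
    n0 = int(math.ceil(lo / pitch))
    n1 = int(math.floor(hi / pitch))
    return sum(signal(n * pitch) * sinc(t / pitch - n) for n in range(n0, n1 + 1))

# Highest angular frequency here is 2.6 rad/unit, so the Nyquist pitch is pi/2.6.
nyquist_pitch = math.pi / 2.6
for label, pitch in [("oversampled by 5%", 0.95 * nyquist_pitch),
                     ("undersampled by 5%", 1.05 * nyquist_pitch)]:
    err = max(abs(reconstruct(t, pitch) - signal(t))
              for t in (i * 0.25 for i in range(201)))
    print(label, err)
```

At the sample instants the reconstruction matches the signal exactly, by construction; between samples the comparison above is where the 5% under/over-sampling difference shows up.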
 

Hi James,

From what I can tell the point of contention stems from this phrase from your very first post in reply to an applauder (I underlined the offending words):
But everything I've learnt says in theory to fully resolve the image of any lens needs infinite sensor resolution
Perhaps I was loose with the words. You have no idea how much I regret wandering into the MF Talk forum where this started and saying anything. It's odd: I've been on DPR for 22 years (23?), almost entirely in the Pentax forum, where good manners are rather better enforced.
This is demonstrably false, and you have been offered the theoretical background, references and examples to understand why that is by several people, should you choose to pursue it further (some of them used to teach this stuff). Don't take it personally, it's just a fact not a judgement, we are all here to learn. And other than a little exasperation transpiring here and there I have not seen any name calling, so no worries.
Thank you for the last part. I've taken exception to the way some people have spoken to me so I should leave you to it from now on.
When I've said "But what about X?" I've had "the sampling theorem doesn't fit that". So if there has been a demonstration that it is applicable to photographs in general, I've been too stupid to understand it, as 3 or 4 posters here - whose presence seems motivated not by a desire to spread knowledge but by a desire to show their superiority - have delighted in pointing out.
If the explanations were not clear and you have specific additional questions about them, feel free to ask. One at a time and concisely though, otherwise the discussion becomes dispersive and people lose interest. This is usually more productive than simply repeating one's point again and again.
Not sure that there's any point, as usual folks will repeat the usual things... But.

When we do things like D.o.F. calculations we treat the finest detail as an infinitely small point. In fact diffraction means that, even if the lens were perfect, there is a minimum point size set by the aperture. If we assume a perfect lens with a huge aperture we can calculate the smallest point any lens can resolve. OK so far?
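That smallest point can be put in numbers. A quick sketch of the two textbook figures for a diffraction-limited lens (the Airy first-zero radius r = 1.22·λ·N and the incoherent MTF cutoff 1/(λ·N); the example wavelength and f-number are just illustrative choices):

```python
def airy_first_zero_radius_um(wavelength_um, f_number):
    """Radius of the first dark ring of the Airy pattern in the focal
    plane: r = 1.22 * lambda * N (small-angle approximation)."""
    return 1.22 * wavelength_um * f_number

def diffraction_cutoff_lp_per_mm(wavelength_um, f_number):
    """Incoherent diffraction-limited MTF cutoff, 1/(lambda * N),
    converted to line pairs per millimetre."""
    return 1000.0 / (wavelength_um * f_number)

# Green light (0.55 um) at f/8: spot radius ~5.4 um, cutoff ~227 lp/mm.
r = airy_first_zero_radius_um(0.55, 8)
cutoff = diffraction_cutoff_lp_per_mm(0.55, 8)
```

The cutoff frequency is finite, which is the band limit that diffraction imposes on the image before the sensor ever sees it.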

If we imagine a pattern of these points on a negative they'd look like this

• • • •

So we can treat the brightness as a wave form. The wavelength of that wave form is two minimal dots. If we can sample that and get light-bit, dark-bit, light-bit, we can sample anything with a finite number of samples. Thank you Mr Nyquist and good night.

Except. When the image is poorly resolved, our minimum-size dots have smaller gaps between them. So if we have stars very close together they're more like this.

••••

(I can't do bigger dots with the same wavelength.) If the gap between them is very small, sampling that was sufficient for the first example is not enough. And the sampling experts should - I think - be saying "of course, that wave form has the same fundamental frequency (four • characters per width) but it contains higher-frequency components, so it needs more samples; if you don't put in extra samples, you'll be under-sampled". Instead I get rants about dissing Nyquist and references to Physics 101 classes.

I think there is a minimum theoretical dot width, but no minimum theoretical dot separation. If we say a separation of zero is unresolved but an infinitesimal separation is resolved, then we need infinitely many samples to find that separation. The problem ultimately isn't the sampling theorem, nor even my take that photographs in general aren't signals with a frequency, but the assumption that the smallest dot gives a frequency that - if you could sample at it - would give you perfection.
 
You are confusing dots, and distances between them, with the band limit as the highest frequency present in a function/image. An equally spaced sequence of dots (modeled as deltas) in fact has a spectrum consisting of such deltas as well (Poisson summation formula), very far from being band-limited. A single dot (a delta) is as far as one can get from a band-limited image and still consider it is an idealized image - the spectrum is flat. Fortunately (or unfortunately) diffraction takes care of that, and our images have a band limit dictated by the diffraction limit.
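The Poisson-summation point can be checked numerically. A tiny sketch with a naive DFT (16 samples; nothing here comes from the thread's own examples): a single delta has a perfectly flat spectrum, and an equally spaced train of deltas has a spectrum that is itself a delta train:

```python
import cmath

def dft(x):
    """Naive O(n^2) DFT, enough for a tiny demo."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

# A single "dot" (discrete delta): every frequency bin has the same
# magnitude, i.e. the spectrum is flat and not band-limited at all.
delta = [0.0] * 16
delta[0] = 1.0
mags = [abs(c) for c in dft(delta)]
assert max(mags) - min(mags) < 1e-9

# An equally spaced train of deltas (spacing 4 in 16 samples): the
# spectrum is again a train of deltas, at every 4th frequency bin.
comb = [1.0 if k % 4 == 0 else 0.0 for k in range(16)]
comb_mags = [abs(c) for c in dft(comb)]
assert all(abs(m - (4.0 if j % 4 == 0 else 0.0)) < 1e-9
           for j, m in enumerate(comb_mags))
```

Both spectra have energy all the way up to the highest representable frequency, which is the sense in which dots and dot trains are "very far from being band-limited".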
 
You are confusing dots, and distances between them, with the band limit as the highest frequency present in a function/image. An equally spaced sequence of dots (modeled as deltas) in fact has a spectrum consisting of such deltas as well (Poisson summation formula), very far from being band-limited. A single dot (a delta) is as far as one can get from a band-limited image and still consider it is an idealized image - the spectrum is flat. Fortunately (or unfortunately) diffraction takes care of that, and our images have a band limit dictated by the diffraction limit.
Which is why I have been posting examples with an ideal lens at specific wavelengths and f-stops, to get diffraction into the examples.
 
You are confusing dots, and distances between them, with the band limit as the highest frequency present in a function/image. An equally spaced sequence of dots (modeled as deltas) in fact has a spectrum consisting of such deltas as well (Poisson summation formula), very far from being band-limited. A single dot (a delta) is as far as one can get from a band-limited image and still consider it is an idealized image - the spectrum is flat. Fortunately (or unfortunately) diffraction takes care of that, and our images have a band limit dictated by the diffraction limit.
No... the individual words make sense, but only your exclusive group can understand the sentences.

Can we do a simple true or false? For two pairs of different-size dots with the same distance between their centres, does detecting the small separation between larger dots need more samples than detecting a large separation between smaller dots? I.e. if I have LLLDDDLLL it needs fewer samples to get a D than LLLLDLLLL.
 
Can we do a simple true or false? For two pairs of different-size dots with the same distance between their centres, does detecting the small separation between larger dots need more samples than detecting a large separation between smaller dots? I.e. if I have LLLDDDLLL it needs fewer samples to get a D than LLLLDLLLL.
Define "dots" of a given size. Are they small uniform disks? If so, they are not band limited regardless of how many you have, and at what distance they are to each other. In our images, they would be softened by diffraction and would look like (if they are really small) Airy disks.
 
Can we do a simple true or false? For two pairs of different-size dots with the same distance between their centres, does detecting the small separation between larger dots need more samples than detecting a large separation between smaller dots? I.e. if I have LLLDDDLLL it needs fewer samples to get a D than LLLLDLLLL.
Define "dots" of a given size. Are they small uniform disks? If so, they are not band limited regardless of how many you have, and at what distance they are to each other. In our images, they would be softened by diffraction and would look like (if they are really small) Airy disks.
I think that part of the problem is that Mr. O'Neill doesn't understand what the words "band limited" mean. Since he has shown no interest in learning about the Fourier transform or the frequency domain, I think this is unlikely to change.
 
