Diffraction Limit Discussion Continuation
bobn2 wrote:
Where your methodology falls flat is that the 'experiment' you're performing is fictitious, it's not based on real numbers, nor is it based on the theory.
A real-world experiment could be to shoot at f-stops from wide open to f/22 with, for example, a 5D and a D800 (using a good lens, the same one if possible), and see at which f-stop the diffraction starts to be 'clearly visible' at 100% view. My guess is that the resolution-limiting effect of diffraction will become visible at a larger aperture on the D800.
Great Bustard wrote:
Jonny Boyd wrote:
It tells you plenty. You have made a universal claim that diffraction always causes peak sharpness at the same aperture, regardless of pixel count. While I agree that that is mathematically correct...
Not merely "mathematically correct", but supported by all the lens (system) tests.
My point was that this isn't always experienced in practice if the difference cannot be perceived. I'm not disputing that the peak is there, I'm just disputing whether it always makes a visible difference in an image, so potentially, as far as the eye can tell, there can be a plateau in resolution instead of a peak. It's like looking at a piece of ground and saying 'it looks flat,' then someone comes along with a laser and tells you where the highest point is. There might actually be a peak, but for practical purposes, there may as well not be.
I'm not making any claims about which lens/camera combinations this will apply to, I'm just saying that potentially this can happen.
...I also believe that a drop in resolution will in some cases only become noticeable at a smaller aperture for a low-res sensor than for a high-res one.
When the drop in resolution becomes apparent depends on many factors, not the least of which is how large you display the photo.
Obviously. I'm dealing with a situation where you print at the same size and view from the same distance so that we can simply determine whether pixel count makes a difference to when diffraction limits resolution, when all else is equal. Hasn't that been the stated assumption in plenty of people's posts: 'all else being equal'?
In any case, we also all agree that the sensor with more pixels will have higher resolution, all else equal.
I don't think anyone has ever suggested otherwise, so I don't know why it keeps being brought up.
Thus, since a given lens peaks at a particular aperture regardless of the pixel count of the sensor, and a sensor with more pixels will always resolve more than a sensor with fewer pixels (all else equal), then in what sense does "diffraction limit" have any meaning,
In the post you've just replied to, I'm discussing the issue of where diffraction begins to visibly limit resolution. Mathematically (used as an antonym of 'visibly'), diffraction always limits resolution at the same aperture for a lens, regardless of sensor pixel count. But the difference in resolution between apertures may be so small as to not be visible, meaning that diffraction doesn't visibly limit resolution until smaller apertures (higher f-numbers) than the mathematical peak.
other than how Bob characterized it:
http://www.dpreview.com/forums/post/53154169
The 'limit' is just a bogus idea. McHugh has taken a well-defined optical term (a 'diffraction limited' system is one so good that diffraction is the only limit on its performance), turned it inside out, and made it into something senseless.
Sometimes Bob talks nonsense. Or disagrees with someone without justifying why.
For instance he insists that neither the lens nor the sensor limit the resolution of the final image, implying that using a better sensor always gets a better image. The reality is that the lens and sensor both put limits on the resolution of an image. Increasing the resolution of one allows you to get progressively closer to the limit of the other, but never to exceed it.
That's all I'm claiming.
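For concreteness, the combination rule at issue here (each component caps the result, and improving one component approaches but never exceeds the other's limit) can be sketched with made-up numbers. This is the common quadrature rule of thumb, not a claim about any real lens or sensor:

```python
import math

def system_resolution(*component_resolutions):
    # Rule-of-thumb combination: 1/R_sys^2 = sum of 1/R_i^2.
    # Only an approximation (roughly justified for Gaussian-like blurs),
    # not a precision model of a real lens/sensor/printer chain.
    return 1 / math.sqrt(sum(1 / r ** 2 for r in component_resolutions))

# Made-up example: lens 80 lp/mm, sensor 60 lp/mm, printer 100 lp/mm.
combined = system_resolution(80, 60, 100)
print(round(combined, 1))  # 43.3, below the weakest component (60 lp/mm)
```

Raising the sensor figure pushes the total closer to the lens's 80 lp/mm but never past it, which is the 'limit' behaviour described above.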
In what way are sensors with more pixels any more "diffraction limited" than sensors with fewer pixels? That when viewing at 100% on a computer monitor you can see the resolution fall faster from the peak aperture, even though the peak aperture is the same, regardless of the pixel count, and the sensor with the higher pixel count has greater resolution?
That's effectively what I've said many, many times. And it's not just at 100% on the computer monitor. It's reality.
If I can find one instance where numbers demonstrate it, then I am correct.
"A number multiplied by itself is always the same number. For example, 1x1 = 1." So because I found a single instance where numbers demonstrate the claim, does that make the claim correct?
You've got things backwards. You're making the universal claim. My claim is that there is an exception to the universal claim. If I find an exception, then I'm right.
If I said "A number multiplied by itself is always the same number' and you said 'no, there are exceptions,' then you'd only need one example of an exception to be proved correct and disprove the universal claim.
The universal claim I'm disproving is that peak visible resolution always occurs at the same aperture.
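The disputed notion of a 'visible' peak versus a plateau can be made concrete with a toy sketch. Every number below, including the 5% visibility threshold, is an invented assumption for illustration only:

```python
# Toy sketch of 'visibly limited': find the first f-stop past the peak where
# resolution has fallen more than a visibility threshold below the peak.
# All figures, including the 5% threshold, are made up for illustration.

apertures = [2.8, 4, 5.6, 8, 11, 16, 22]

def first_visible_drop(resolutions, threshold=0.05):
    peak = max(resolutions)
    peak_f = apertures[resolutions.index(peak)]
    for f, r in zip(apertures, resolutions):
        if f > peak_f and r < peak * (1 - threshold):
            return f
    return None  # no visible drop anywhere in the tested range

high_res = [95, 100, 98, 90, 78, 62, 48]               # pronounced peak at f/4
low_res = [49.0, 49.5, 49.4, 49.2, 48.9, 48.0, 46.5]   # near-plateau

print(first_visible_drop(high_res))  # 8: drop visible soon after the f/4 peak
print(first_visible_drop(low_res))   # 22: 'visible limit' arrives much later
```

Both made-up curves peak at f/4 mathematically; only the f-stop at which the drop crosses the (assumed) visibility threshold differs.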
Steen Bay wrote:
bobn2 wrote:
Where your methodology falls flat is that the 'experiment' you're performing is fictitious, it's not based on real numbers, nor is it based on the theory.
A real-world experiment could be to shoot at f-stops from wide open to f/22 with, for example, a 5D and a D800 (using a good lens, the same one if possible), and see at which f-stop the diffraction starts to be 'clearly visible' at 100% view. My guess is that the resolution-limiting effect of diffraction will become visible at a larger aperture on the D800.
Potentially. Though you'd really want cameras with a wider range of pixel counts. Ideally covering 2 orders of magnitude of linear resolution, or 4 orders of magnitude of pixel count. It would be a struggle to meet those conditions and keep everything else equal.
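The arithmetic behind that range: pixel count scales with the square of linear resolution, so spanning two orders of magnitude in linear resolution requires four orders of magnitude in pixel count.

```python
linear_factor = 100             # 2 orders of magnitude in linear resolution
pixel_factor = linear_factor ** 2
print(pixel_factor)             # 10000, i.e. 4 orders of magnitude in pixel count
```

In practice that would mean comparing something like a 0.0036 MP sensor with a 36 MP one, all else equal, which is indeed hard to arrange.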
Steen Bay wrote:
bobn2 wrote:
Where your methodology falls flat is that the 'experiment' you're performing is fictitious, it's not based on real numbers, nor is it based on the theory.
A real-world experiment could be to shoot at f-stops from wide open to f/22 with, for example, a 5D and a D800 (using a good lens, the same one if possible), and see at which f-stop the diffraction starts to be 'clearly visible' at 100% view. My guess is that the resolution-limiting effect of diffraction will become visible at a larger aperture on the D800.
That would be an experiment, but why would you be at all interested in comparing '100% view' for a 36 MP and a 12 MP camera? As to your guess, you are welcome to guess anything you like.
Bob
Jonny Boyd wrote:
bobn2 wrote:
Jonny Boyd wrote:
bobn2 wrote:
I would not aggrandise it with the word 'theoretical'. There is no theory behind this setup, merely arbitrariness.
There's plenty of theory Bob, all explained in this post and previous posts in this thread. It's just using the equation for determining the resolution of a system with multiple components that each have limited resolution themselves.
That isn't 'theory'. The equation you're using is itself a 'rule of thumb', based on the idea that the component MTFs are roughly Gaussian. For a start, we don't know what you mean by 'resolution'. Are you taking MTF50, or what?
A while ago I was getting the impression that those who think there is no diffraction limit regarded this equation as the golden rule so I was happy to use it.
Please link to where I've ever said that equation is a golden rule. You will not find it, because I have never said that.
When it produces results you don't like, you want to get rid of it.
I never had it in the first place.
As far resolution, it would be the number of line pairs that can be distinguished per unit length.
Not sure that equation gives that resolution; I would need to think about that one.
We're all agreed that it's a good equation
It depends what you mean by a 'good equation'; it's a decent approximation for some purposes.
And which purposes would they be and not be? You're being very vague.
It's a decent approximation if you want to work out the MTF of a lens on one camera, knowing its MTF on another. Better than a wild guess, anyway.
and as Anders requested, I'm working out the implications. I took a while to carefully explain my methodology, so if you'd like to contribute usefully here you could begin by highlighting where you think my methodology falls flat, rather than spouting off with an unsubstantiated opinion.
Where your methodology falls flat is that the 'experiment' you're performing is fictitious, it's not based on real numbers, nor is it based on the theory.
It's working out the implications of the equation Anders is so fond of, as he requested I do. In what way is it inapplicable in this situation?
As I said, the experiment you are doing is fictitious.
As for arbitrariness, the numbers I used are quite deliberate. I used lens resolution numbers that give a curve with a shape broadly the same as a standard MTF curve for a lens. Similar real ones have been shown plenty of times in this thread and others. The printer resolution was chosen to be less than the lens, but not so low that it would dominate the final resolution. The sensor resolutions were chosen to cover the range of scenarios from sensor-limited output, to lens-limited, and stuff in between. You'll notice the sensor numbers scale logarithmically for precisely this reason.
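That setup can be sketched directly. The numbers below are invented in the same spirit (they are not the poster's actual figures): a lens curve peaking at f/4, a printer resolution below the lens peak but not dominant, and logarithmically spaced sensor resolutions:

```python
import math

# Invented numbers in the spirit of the thread's model.
lens = {2.8: 90, 4: 100, 5.6: 95, 8: 80, 11: 62, 16: 45, 22: 32}  # lp/mm, made up
printer = 150                     # below the lens peak, not dominant
sensors = [3, 30, 300]            # logarithmically spaced: sensor- to lens-limited

def combine(*rs):
    # Rule-of-thumb combination: 1/R_sys^2 = sum of 1/R_i^2 (an approximation).
    return 1 / math.sqrt(sum(1 / r ** 2 for r in rs))

for s in sensors:
    curve = {f: combine(r, s, printer) for f, r in lens.items()}
    peak_f = max(curve, key=curve.get)
    drop = 1 - curve[22] / curve[peak_f]
    print(f"s={s}: peak at f/{peak_f}, drop from peak to f/22 = {drop:.1%}")
```

With these particular invented inputs the combined peak stays at f/4 for every sensor, while the relative drop from the peak to f/22 ranges from well under 1% at s = 3 to tens of percent at s = 300; the exact percentages mean nothing beyond illustrating that shape.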
In short, your numbers were chosen to get the result that you wanted to demonstrate.
Rather than merely making an assertion, how about you show your reasoning? What is unrealistic about the shape of the resolution curve? What is wrong with the sensor resolution numbers I picked? As I've already stated, they effectively cover all the possibilities because they range from situations where resolution is sensor limited to situations where it is lens limited.
They are not real numbers, they are ones that you made up. It's not me that has to show my reasoning.
so real-world examples of sensors, lenses, and printers may have more or less pronounced behaviour, depending on actual resolution. My model also assumes that the percentage drop in relative resolution that becomes noticeable would be the same at every absolute resolution. It may be that at higher absolute resolutions a given change in relative resolution becomes noticeable sooner or later. I'm not sure about that one.
This is a particularly futile exercise. Either do the theory or work the real numbers.
As I said, this is a simple application of an equation we all agree on.
A rule-of-thumb equation run with made-up numbers.
And again I ask, why is the equation not applicable here and what is wrong with the numbers?
The numbers are made-up; do I have to repeat that? As for the equation, it's a rule of thumb, not a precision model of what actually happens, so the fact that you see something using that equation fed with made-up numbers doesn't mean that it exists.
Working made-up numbers tells you absolutely nothing.
It tells you plenty. You have made a universal claim that diffraction always causes peak sharpness at the same aperture, regardless of pixel count. While I agree that that is mathematically correct, I also believe that a drop in resolution will in some cases only become noticeable at a smaller aperture for a low-res sensor than a large one. That's all I'm claiming. If I can find one instance where numbers demonstrate it, then I am correct.
Your numbers do not at all demonstrate 'that a drop in resolution will in some cases only become noticeable at a smaller aperture for a low-res sensor than a large one', since your criterion for what is 'noticeable' was plucked from thin air, as were the numbers that you used.
Do you deny that there is both a greater absolute and greater relative drop in resolution between peak aperture and lower apertures for higher resolution cameras than lower resolution cameras?
Most of the ones I've seen follow that pattern. They also follow the pattern that the high resolution cameras give more resolution than the low resolution ones with the same lens, at any f-number.
Does it not logically follow from this that all else being equal (printing size, viewing distance, etc.), a sufficiently resolution sensor will show no perceptible difference in quality due to diffraction at apertures where a higher resolution sensor will show a drop (but still retain far greater overall detail)?
A sufficiently low resolution sensor, I suppose you mean. Yes, but what of it? If you put a diffusing filter on the front, you'll probably see no drop at all with f-number. Why is that worth spending all that effort on? It's pretty obvious that if you degrade the resolution enough, you'll level it down.
In any case, on what is your assumption based that a 5% change in resolution is noticeable? Do you even know that the noticeability simply scales? Maybe there's a threshold? Maybe it depends on viewing size?
I wasn't assuming different viewing sizes, because my interest was in the relationship between diffraction effects and pixel count. Introducing another variable would be unhelpful.
You haven't any real variables in any case, just fictitious ones.
So what if they're fictitious examples? It doesn't mean that they're unrealistic.
They may or may not be. If they're fictitious you can't claim that they demonstrate anything real.
Anyway, my point can be demonstrated trivially by looking at the f/4 and f/22 results for s = 3 and s = 300. For s = 3, there is a relative drop in sharpness of 0.1%. For s = 300, the drop is going to be 35.4%. The s = 3 drop is unnoticeable.
You haven't even defined 'sharpness' properly, nor do you have any data on what is 'noticeable'.
I know that the human eye has limits on how well it can perceive changes in sharpness. If both the relative and absolute changes in resolution are sufficiently low when stopping down, then the eye will not perceive them. That's so obvious that I don't know why you're disputing it.
I'm not disputing it, what I'm disputing is whether your bogusly quantitative demonstration demonstrates anything useful.
If the s = 300 drop isn't noticeable, then that sensor still has its peak sharpness well after the peak aperture. If the drop is noticeable, then it has a peak sharpness at a lower aperture than the s = 3 sensor. Either way, my point is made.
I can't see your point: in every case the peak sharpness is at f/4, because that's where you put it. All that changes is the height of the peak, not where it is.
My point is that if you can't see any visible change in sharpness between f/4 and f/22 i.e. they are indistinguishable, then the perceived sharpness will plateau across those apertures, rather than peaking at f/4.
Which is different to saying that it shifts.
So while the s = 300 sensor will show a drop in sharpness between f/4 and f/22 due to diffraction, the s = 3 sensor won't. Diffraction begins to limit sharpness for the s = 300 sensor at f/4, whereas it won't for the s = 3 sensor until f/22 or later.
No, the resolution starts to drop due to diffraction at the same f-number; it's just that the slope is flatter (going to very flat for a very low resolution camera).
I'm wondering why you feel it necessary to work so hard to find some meaning for 'diffraction limit' without being able to demonstrate that such a definition is even useful. What have you got invested in there being a 'diffraction limit'?
How about we discuss the subject at hand rather than speculating about people's motives?
It's a fair point; you've produced some impressive graphs of fictitious numbers. That must have taken you a fair amount of time and effort, but in the end they show nothing. And they are bogusly quantitative, so why is it worth your time and effort to make something bogusly quantitative?
How are they bogusly quantitative?
Because they look quantitative (having numbers and percentages and all that stuff) but the numbers are all made up.
I clearly and repeatedly said that the numbers were illustrative only and didn't correlate with any particular lenses or sensors. They are however useful and realistic numbers in terms of demonstrating what happens when you change sensor resolution and look to see if there is any visible change in resolution between apertures.
I was quite careful to state all of that so I don't appreciate you constantly suggesting that I'm trying to be deceptive. There's nothing helpful about that attitude.
Particularly the second graph is highly bogus, because it shows '100%' at the same level, when the '100%' is of different things.
How is it bogus?
Because it's 100% of a different thing.
Bob
Jonny Boyd wrote:
The universal claim I'm disproving is that peak visible resolution always occurs at the same aperture.
You didn't disprove that the peak resolution always occurs at the same aperture, all else equal. At best, you've said that if the resolution is low enough and/or the photo is displayed small enough, there will be a large range of apertures where the loss of resolution, whether due to lens aberrations for apertures wider than the peak aperture or due to diffraction for apertures narrower than the peak aperture, will not be noticed, none of which has anything, whatsoever, to do with being "diffraction limited".
Steen Bay wrote:
bobn2 wrote:
Where your methodology falls flat is that the 'experiment' you're performing is fictitious, it's not based on real numbers, nor is it based on the theory.
A real-world experiment could be to shoot at f-stops from wide open to f/22 with, for example, a 5D and a D800 (using a good lens, the same one if possible), and see at which f-stop the diffraction starts to be 'clearly visible' at 100% view. My guess is that the resolution-limiting effect of diffraction will become visible at a larger aperture on the D800.
...the first thing we'd need to do is define "clearly visible". The fact of the matter is that the aperture of max resolution will be the same, regardless of pixel count, all else equal, but that resolution would fall off more quickly with the higher MP photo, although it will always be more detailed.
The question, then, is how does the term "diffraction limited" fit into all this? Everything points to the term not fitting into it at all.
Great Bustard wrote:
Jonny Boyd wrote:
The universal claim I'm disproving is that peak visible resolution always occurs at the same aperture.
You didn't disprove that the peak resolution always occurs at the same aperture, all else equal. At best, you've said that if the resolution is low enough and/or the photo is displayed small enough, there will be a large range of apertures where the loss of resolution either due to lens aberrations for apertures wider than the peak aperture or due to diffraction for apertures more narrow than the peak aperture, will not be noticed, none of which has anything, whatsoever, to do with being "diffraction limited".
+1. The formula Jonny uses can be justified under some assumptions, like a Gaussian blur. But it proves him wrong, as the plots illustrate (the formula, too, but I guess formulas are out of fashion nowadays).
Sorry, couldn't resist.
Admittedly the cubic fits are problematic. But it's almost as silly to fit a quadratic and discuss significance, treating this as a noisy sampling process. These are engineering measurements with fairly high precision, so presumably the issues with the quadratic fit are due to systematic problems with the model, which we'd expect from first principles, not from underdetermination.
Probably best to stick to just eyeballing the raw data and be satisfied with its 1-stop interval. Or build a model with a sensible functional form and fit it to a larger sample of body/lens combinations.
You'd be 'comparing' different image sizes. This would be one more situation where '100% viewing' is impossible and/or useless.
Steen Bay wrote:
A real-world experiment could be to shoot at f-stops from wide open to f/22 with, for example, a 5D and a D800 (using a good lens, the same one if possible), and see at which f-stop the diffraction starts to be 'clearly visible' at 100% view. My guess is that the resolution-limiting effect of diffraction will become visible at a larger aperture on the D800.
bobn2 wrote:
Jonny Boyd wrote:
bobn2 wrote:
Jonny Boyd wrote:
bobn2 wrote:
I would not aggrandise it with the word 'theoretical'. There is no theory behind this setup, merely arbitrariness.
There's plenty of theory Bob, all explained in this post and previous posts in this thread. It's just using the equation for determining the resolution of a system with multiple components that each have limited resolution themselves.
That isn't 'theory'. The equation you're using is itself a 'rule of thumb', based on the idea that the component MTFs are roughly Gaussian. For a start, we don't know what you mean by 'resolution'. Are you taking MTF50, or what?
A while ago I was getting the impression that those who think there is no diffraction limit regarded this equation as the golden rule so I was happy to use it.
Please link to where I've ever said that equation is a golden rule. You will not find it, because I have never said that.
It's an impression from a group of people, as I just told you, not a quote from any one individual.
We're all agreed that it's a good equation
It depends what you mean by a 'good equation'; it's a decent approximation for some purposes.
And which purposes would they be and not be? You're being very vague.
It's a decent approximation if you want to work out the MTF of a lens on one camera, knowing its MTF on another. Better than a wild guess, anyway.
That's pretty much what I've done: used it to compare the sharpness of different representative sensors using the same lens.
and as Anders requested, I'm working out the implications. I took a while to carefully explain my methodology, so if you'd like to contribute usefully here you could begin by highlighting where you think my methodology falls flat, rather than spouting off with an unsubstantiated opinion.
Where your methodology falls flat is that the 'experiment' you're performing is fictitious, it's not based on real numbers, nor is it based on the theory.
It's working out the implications of the equation Anders is so fond of, as he requested I do. In what way is it inapplicable in this situation?
As I said, the experiment you are doing is fictitious.
Hypothetical numbers aren't necessarily unrealistic. I've explained that the curve for the resolution of the lens at different apertures is similar to that of real lenses, and the range of sensor resolutions I've used is sufficiently broad to represent all situations. If you think the numbers are unrepresentative of reality, then it would save a lot of time to say why.
As for arbitrariness, the numbers I used are quite deliberate. I used lens resolution numbers that give a curve with a shape broadly the same as a standard MTF curve for a lens. Similar real ones have been shown plenty of times in this thread and others. The printer resolution was chosen to be less than the lens, but not so low that it would dominate the final resolution. The sensor resolutions were chosen to cover the range of scenarios from sensor-limited output, to lens-limited, and stuff in between. You'll notice the sensor numbers scale logarithmically for precisely this reason.
In short, your numbers were chosen to get the result that you wanted to demonstrate.
Rather than merely making an assertion, how about you show your reasoning? What is unrealistic about the shape of the resolution curve? What is wrong with the sensor resolution numbers I picked? As I've already stated, they effectively cover all the possibilities because they range from situations where resolution is sensor limited to situations where it is lens limited.
They are not real numbers, they are ones that you made up. It's not me that has to show my reasoning.
I've given my reasoning for why they are realistic and representative numbers. You haven't pointed out any flaws in my reasoning or given any reasoning of your own.
so real-world examples of sensors, lenses, and printers may have more or less pronounced behaviour, depending on actual resolution. My model also assumes that the percentage drop in relative resolution that becomes noticeable would be the same at every absolute resolution. It may be that at higher absolute resolutions a given change in relative resolution becomes noticeable sooner or later. I'm not sure about that one.
This is a particularly futile exercise. Either do the theory or work the real numbers.
As I said, this is a simple application of an equation we all agree on.
A rule-of-thumb equation run with made-up numbers.
And again I ask, why is the equation not applicable here and what is wrong with the numbers?
The numbers are made-up; do I have to repeat that?
I was the first one to say they were made up, so you don't need to tell me something I was the first to say. My question to you, which you've failed to answer, is what makes the numbers unrealistic or unrepresentative?
As for the equation, it's a rule of thumb, not a precision model of what actually happens, so the fact that you see something using that equation fed with made-up numbers doesn't mean that it exists.
I'm not looking for precision results. I'm showing that lower resolution sensors experience a smaller absolute and relative drop in resolution as you reduce the aperture, compared with high resolution sensors. The general shape of the curves is what matters, not great precision in the numbers. So again I ask, why is this not a suitable rule of thumb to use for this situation?
Working made-up numbers tells you absolutely nothing.
It tells you plenty. You have made a universal claim that diffraction always causes peak sharpness at the same aperture, regardless of pixel count. While I agree that that is mathematically correct, I also believe that a drop in resolution will in some cases only become noticeable at a smaller aperture for a low-res sensor than a large one. That's all I'm claiming. If I can find one instance where numbers demonstrate it, then I am correct.
Your numbers do not at all demonstrate 'that a drop in resolution will in some cases only become noticeable at a smaller aperture for a low-res sensor than a large one', since your criterion for what is 'noticeable' was plucked from thin air, as were the numbers that you used.
Do you deny that there is both a greater absolute and greater relative drop in resolution between peak aperture and lower apertures for higher resolution cameras than lower resolution cameras?
Most of the ones I've seen follow that pattern.
Great, so you agree that reality matches the results my model gives. Fantastic.
They also follow the pattern that the high resolution cameras give more resolution than the low resolution ones with the same lens, at any f-number.
That's a candidate for redundant statement of the year, given the number of times it has been affirmed by just about everyone. As far as I can tell, that fact has never been in doubt. Certainly I've stated it on numerous occasions myself.
Does it not logically follow from this that all else being equal (printing size, viewing distance, etc.), a sufficiently resolution sensor will show no perceptible difference in quality due to diffraction at apertures where a higher resolution sensor will show a drop (but still retain far greater overall detail)?
A sufficiently low resolution sensor, I suppose you mean.
Yes.
Yes, but what of it? If you put a diffusing filter on the front, you'll probably see no drop at all with f-number.
Exactly.
Why is that worth spending all that effort on? It's pretty obvious that if you degrade the resolution enough, you'll level it down.
Every time I've said that, you or someone else has said 'no, that's nonsense.' Now you're agreeing with me.
All I've been saying here is that with a low enough resolution sensor, the drop in resolution across apertures will not be visible, even though it technically occurs. The obvious implication of that is that diffraction isn't limiting resolution in any practical sense until much smaller apertures for a low resolution camera. That's obvious enough that I don't know why you keep arguing against it.
In any case, on what is your assumption based that a 5% change in resolution is noticeable? Do you even know that the noticeability simply scales? Maybe there's a threshold? Maybe it depends on viewing size?
I wasn't assuming different viewing sizes, because my interest was in the relationship between diffraction effects and pixel count. Introducing another variable would be unhelpful.
You haven't any real variables in any case, just fictitious ones.
So what if they're fictitious examples? It doesn't mean that they're unrealistic.
They may or may not be. If they're fictitious you can't claim that they demonstrate anything real.
Why not? If they're representative and realistic, then of course they give an idea of what really happens.
Anyway, my point can be demonstrated trivially by looking at the f/4 and f/22 results for s = 3 and s = 300. For s = 3, there is a relative drop in sharpness of 0.1%. For s = 300, the drop is going to be 35.4%. The s = 3 drop is unnoticeable.
You haven't even defined 'sharpness' properly, nor do you have any data on what is 'noticeable'.
I know that the human eye has limits on how well it can perceive changes in sharpness. If both the relative and absolute changes in resolution are sufficiently low when stopping down, then the eye will not perceive them. That's so obvious that I don't know why you're disputing it.
I'm not disputing it, what I'm disputing is whether your bogusly quantitative demonstration demonstrates anything useful.
You've already agreed with me that it produces similar results to real-world examples. All I was taking out of it was that lower resolution sensors have smaller relative and absolute drops in sharpness when you stop down, so if you take a sufficiently low resolution sensor, then the drop in resolution as you stop down from the peak aperture will be so small as to be unnoticeable, and a noticeable drop in resolution won't occur until much later, meaning that as far as the eye can tell, you get peak sharpness across a range of apertures, not just at the actual peak.
If the s = 300 drop isn't noticeable, then that sensor still has its peak sharpness well after the peak aperture. If the drop is noticeable, then it has a peak sharpness at a lower aperture than the s = 3 sensor. Either way, my point is made.
I can't see your point: in every case the peak sharpness is at f/4, because that's where you put it. All that changes is the height of the peak, not where it is.
My point is that if you can't see any visible change in sharpness between f/4 and f/22 i.e. they are indistinguishable, then the perceived sharpness will plateau across those apertures, rather than peaking at f/4.
Which is different to saying that it shifts.
The point at which resolution perceptibly drops shifts to the edge of the plateau. You are still getting peak resolution, as far as the eye can tell, at smaller apertures. That's all I mean. Do you agree with that?
So while the s = 300 sensor will show a drop in sharpness between f/4 and f/22 due to diffraction, the s = 3 sensor won't. Diffraction begins to limit sharpness for the s = 300 sensor at f/4, whereas it won't for the s = 3 sensor until f/22 or later.
No, the resolution starts to drop due to diffraction at the same f-number; it's just that the slope is flatter (going to very flat for a very low resolution camera).
In a technical sense, yes, I've stated that fairly clearly. But here I'm talking about perceived resolution. What is so hard to understand about that?
Technically peak aperture occurs at one fixed aperture. We both agree on that. Therefore technically resolution begins to drop at the same point. We both agree on that.
What I'm also saying is that the drop is so slight for low resolution sensors that you do not perceive the drop until higher f-numbers, so the practical resolution drop doesn't occur until higher f-numbers. Do you agree with that?
I'm wondering why you feel it necessary to work so hard to find some meaning for 'diffraction limit' without being able to demonstrate that such a definition is even useful. What have you got invested in there being a 'diffraction limit'?
How about we discuss the subject at hand rather than speculating about people's motives?
It's a fair point: you've produced some impressive graphs of fictitious numbers. That must have taken you a fair amount of time and effort, but in the end they show nothing. However, they are bogusly quantitative, so why was it worth your time and effort to make something bogusly quantitative?
How are they bogusly quantitative?
Because they look quantitative (having numbers and percentages and all that stuff) but the numbers are all made up.
Numbers, by definition, look quantitative. These numbers are used to illustrate a point. They are representative of reality.
I clearly and repeatedly said that the numbers were illustrative only and didn't correlate with any particular lenses or sensors. They are however useful and realistic numbers in terms of demonstrating what happens when you change sensor resolution and look to see if there is any visible change in resolution between apertures.
I was quite careful to state all of that so I don't appreciate you constantly suggesting that I'm trying to be deceptive. There's nothing helpful about that attitude.
The second graph in particular is highly bogus, because it shows '100%' at the same level when the '100%' is of different things.
How is it bogus?
Because it's 100% of a different thing.
I was asking whether the drop in sharpness relative to the peak aperture for a sensor happens any quicker or slower for different resolution sensors. That required me to plot the resolution of each sensor at different apertures, compared to the resolution of that sensor at peak aperture. Hence the % plotting. When you compare sensors to see which drops relative resolution quickest, it makes perfect sense to put them on the same plot. I'm baffled as to why you would think that is wrong.
Consider a different example. If I wanted to compare how quickly three cars could reach their top speed, I would plot speed as a percentage of the top speed for each car and put those plots on the same chart. 100% represents a different speed for each car, but if all you're concerned with is relative acceleration, that's fine.
For the lens/sensor situation, even if you plot the absolute drop in resolution compared to peak, you will get a chart that shows the same general result: resolution declines quicker for high res sensors.
If you look at the DxO data in the charts in the first post of this thread, you'll see exactly the same thing for the D800 and D3. The higher resolution camera loses resolution faster in relative and absolute terms (while of course retaining greater overall resolution).
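The model behind these plots can be sketched directly. This is a minimal sketch, not the actual spreadsheet used for the graphs: it assumes lens and sensor blur combine in quadrature (the Gaussian-blur justification mentioned later in the thread), and the lens resolution figures per f-number are invented placeholders, in the same spirit as the illustrative s = 3 and s = 300 sensors above.

```python
import math

def system_resolution(lens_res, sensor_res):
    """Combine lens and sensor resolution in quadrature (Gaussian-blur
    assumption): 1/R_sys^2 = 1/R_lens^2 + 1/R_sensor^2."""
    return 1.0 / math.sqrt(1.0 / lens_res**2 + 1.0 / sensor_res**2)

# Hypothetical diffraction-limited lens resolution, falling as the
# aperture narrows (illustrative numbers only, like those in the thread).
lens_res_by_fstop = {4: 100.0, 8: 80.0, 16: 50.0, 22: 35.0}

for s in (3.0, 300.0):  # low-res vs high-res sensor
    peak = system_resolution(lens_res_by_fstop[4], s)
    for n, l in sorted(lens_res_by_fstop.items()):
        r = system_resolution(l, s)
        print(f"s={s:5.0f}  f/{n:<2}  {100 * r / peak:6.2f}% of peak")
```

With these made-up numbers the s = 3 sensor stays above 99% of its peak all the way to f/22, while the s = 300 sensor falls to under half of its peak: both peak at f/4, but only the high-resolution sensor shows a drop a viewer could plausibly notice.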
Just another Canon shooter wrote:
Great Bustard wrote:
Jonny Boyd wrote:
The universal claim I'm disproving is that peak visible resolution always occurs at the same aperture.
You didn't disprove that the peak resolution always occurs at the same aperture, all else equal. At best, you've said that if the resolution is low enough and/or the photo is displayed small enough, there will be a large range of apertures where the loss of resolution (due to lens aberrations at apertures wider than the peak, or to diffraction at apertures narrower than the peak) will not be noticed, none of which has anything, whatsoever, to do with being "diffraction limited".
+1. The formula Jonny uses can be justified under some assumptions, like a Gaussian blur. But it proves him wrong, as the plots illustrate (the formula does too, but I guess formulas are out of fashion nowadays).
And where exactly does it prove me wrong? It would be helpful if you elucidated rather than making bald assertions.
Jonny Boyd wrote:
It tells you plenty. You have made a universal claim that diffraction always causes peak sharpness at the same aperture, regardless of pixel count. While I agree that that is mathematically correct, I also believe that a drop in resolution will in some cases only become noticeable at a smaller aperture for a low res sensor than a large one. That's all I'm claiming. If I can find one instance where numbers demonstrate it, then I am correct.
With no official standard for what diffraction limit means, anyone is free to come up with one that fits the desired result. Some of those definitions will be more useful than others. You and bobn2 have different definitions and will never be able to agree who is right.
For the record, this is the same thing as when people argue over equivalence using different equivalence relation.
Ulric wrote:
Jonny Boyd wrote:
It tells you plenty. You have made a universal claim that diffraction always causes peak sharpness at the same aperture, regardless of pixel count. While I agree that that is mathematically correct, I also believe that a drop in resolution will in some cases only become noticeable at a smaller aperture for a low res sensor than a large one. That's all I'm claiming. If I can find one instance where numbers demonstrate it, then I am correct.
With no official standard for what diffraction limit means, anyone is free to come up with one that fits the desired result. Some of those definitions will be more useful than others. You and bobn2 have different definitions and will never be able to agree who is right.
No one is 'right': so long as someone defines what they say, they can say what they mean.
For the record, this is the same thing as when people argue over equivalence using different equivalence relation.
I do have an argument over the whole McHugh 'diffraction limit' definition. My arguments are:
i) I can't see anything that would sensibly be called a 'limit'.
ii) A 'limit' that occurs at different magnifications for different cameras doesn't seem much use.
iii) Even if it were well founded and clearly demonstrable, it obviously has not been well articulated, since very many people, on reading McHugh's site and discussions arising from it, come away with the impression that high MP cameras give worse results than low MP cameras at small apertures, due to this diffraction limit. It is a common confusion, caused in no small part by that site and by people who promulgate this bogus 'diffraction limit' idea. For instance, to give but one example:
http://www.dpreview.com/forums/post/52021604
Bob
robert1955 wrote:
You'd be 'comparing' different image sizes. This would be one more situation where '100% viewing' is impossible and/or useless.
OK, then let's upsample the 5D to 36 MP and compare at 100% view, or make 45x30" prints of the images. It's perfectly possible to compare cameras with different MP counts at the same output size. In DPR's comparison tool we can compare images downsampled to approx. 8 MP ('print') and approx. 3 MP ('web'). It would be nice if there were also a 'large print' option where the images were upsampled to, for example, 64 MP (or even more, if we want to compare to the IQ180). That would also be useful.
bobn2 wrote:
[snip]
No one is 'right': so long as someone defines what they say, they can say what they mean.
[snip]
I do have an argument over the whole McHugh 'diffraction limit' definition. My arguments are:
i) I can't see anything that would sensibly be called a 'limit'.
ii) A 'limit' that occurs at different magnifications for different cameras doesn't seem much use.
iii) Even if it were well founded and clearly demonstrable, it obviously has not been well articulated, since very many people, on reading McHugh's site and discussions arising from it, come away with the impression that high MP cameras give worse results than low MP cameras at small apertures, due to this diffraction limit. It is a common confusion, caused in no small part by that site and by people who promulgate this bogus 'diffraction limit' idea. For instance, to give but one example:
http://www.dpreview.com/forums/post/52021604
Bob
I define the "limit" imposed by diffraction with an equation...
The f-number at which diffraction begins to inhibit a desired print resolution, expressed in lp/mm, at an anticipated enlargement factor, can be calculated as:
(Equation 1)
f-number "limit" = 1 / desired print resolution / anticipated enlargement factor / 0.00135383
Thus...
The greater the desired print resolution, the smaller the f-number one must use to support an anticipated enlargement factor. (Consider the formula: the f-number must go down when desired print resolution goes up.) With all else being equal, the photographer who desires a final print resolution no higher than 2 lp/mm can use larger f-numbers than a photographer who desires a final print resolution of 4 lp/mm, before diffraction will begin to inhibit their respective desired print resolutions at the same anticipated enlargement factor.
The greater the enlargement factor, the smaller the f-number one must use to avoid inhibiting a desired print resolution. (Consider the formula: the f-number must go down when enlargement factor goes up.) With two cameras having the same pixel count but different sensor dimensions, producing like-sized prints, the camera with the smaller sensor requires greater enlargement to achieve the final print dimensions, and thus the f-number used with the smaller sensor must be smaller, to make the Airy disk diameters at the sensor plane smaller, before the greater enlargement factor is applied to produce a like-sized print.
It's analogous to CoC diameters: small sensors (or film formats), suffering greater enlargement factors to achieve a given print size, require smaller CoC diameters for DoF calculations than do larger sensors (or film formats). Smaller sensors and film formats similarly require smaller Airy disk diameters (use of smaller f-numbers) to withstand the greater enlargement factors required to achieve a given print size.
The formula for calculating the maximum CoC diameter one should use for DoF calculations is:
(Equation 2)
CoC (mm) = viewing distance (cm) / desired final-image resolution (lp/mm) for a 25 cm viewing distance / enlargement / 25
Source: http://en.wikipedia.org/wiki/Circle_of_confusion
If we assume that your "desired final-image resolution (lp/mm)" has already incorporated your concerns for viewing distance, we can reduce the CoC calculation to this:
(Equation 3)
CoC (mm) = viewing distance (cm) / desired final-image resolution (lp/mm) / enlargement
Look familiar? (See Equation 1, above.)
Yes, enlargement factor and desired print resolution are the only variables affecting the selection of a CoC diameter for DoF calculations (assuming you have considered viewing distance when specifying your desired print resolution). And they are the only variables affecting the fNumber at which diffraction will begin to inhibit a desired resolution.
Comparing Equations 1 and 3, we can see they differ by just one divisor, the constant 0.00135383, and thus, as long as we are using Equations 2 or 3 for calculating the maximum permissible CoC diameter used for DoF calculations, we can reduce Equation 1 to this:
(Equation 4)
f-number "limit" = CoC / 0.00135383
Which might raise the question: how is the constant derived? See David Jacobson's Lens Tutorial at http://photo.net/learn/optics/lensTutorial and search for "0.00135383".
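For readers who don't want to chase the tutorial link, a quick numerical sanity check is possible. Reading the constant as the Airy disk diameter per unit f-number (2.44 × λ at λ ≈ 555 nm) is an editorial gloss, not necessarily Jacobson's exact derivation:

```python
# Sanity check (assumption, not Jacobson's own derivation): the constant
# 0.00135383 mm is very close to 2.44 * lambda for green light
# (lambda ~ 555 nm), i.e. the Airy disk diameter per unit f-number.
# On that reading, Equation 4 says diffraction begins to inhibit the
# desired resolution once the Airy disk diameter reaches the
# permissible CoC.
wavelength_mm = 555e-6          # 555 nm expressed in mm
airy_factor = 2.44 * wavelength_mm
print(airy_factor)              # within about 0.03% of 0.00135383
```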
With both CoC (defocus) and f-number "limit" (diffraction) calculations relying on user-specification of a "desired print resolution," further reading is available here: http://www.dpreview.com/forums/post/40100820
Mike Davis
http://www.AccessZ.com
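Mike's Equations 1 and 4 are straightforward to put into code. This is a minimal sketch; the example numbers (5 lp/mm desired print resolution, 8x enlargement) are hypothetical, chosen only to show that the two forms agree:

```python
K = 0.00135383  # mm per unit f-number (see Jacobson's lens tutorial)

def f_number_limit(print_res_lpmm, enlargement, k=K):
    """Equation 1: f-number at which diffraction begins to inhibit the
    desired print resolution at the anticipated enlargement factor."""
    return 1.0 / (print_res_lpmm * enlargement * k)

def f_number_limit_from_coc(coc_mm, k=K):
    """Equation 4: the same limit expressed via the permissible CoC."""
    return coc_mm / k

# Hypothetical example: 5 lp/mm desired in an 8x enlargement.
# The equivalent CoC (Equation 3, viewing distance folded into the
# desired resolution) is 1 / (5 * 8) = 0.025 mm.
n1 = f_number_limit(5.0, 8.0)                    # about f/18.5
n2 = f_number_limit_from_coc(1.0 / (5.0 * 8.0))  # identical by construction
print(round(n1, 1), round(n2, 1))
```

Note, as Bob observes below the original post, that pixel size appears nowhere: only the desired print resolution and the enlargement factor enter the calculation.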
Mike Davis wrote:
[snip]
I define the "limit" imposed by diffraction with an equation...
[snip]
Mike Davis
http://www.AccessZ.com
That seems eminently sensible. I note pixel size doesn't come into it.
Bob
Thank you, Bob.
Mike Davis
http://www.AccessZ.com
Mike Davis wrote:
Yes, enlargement factor and desired print resolution are the only variables affecting the selection of a CoC diameter for DoF calculations (assuming you have considered viewing distance when specifying your desired print resolution). And they are the only variables affecting the fNumber at which diffraction will begin to inhibit a desired resolution.
What about the resolution of the sensor/system? I think it's necessary to take the maximum potential resolution into account too if we want to know at which point the resolution will be visibly affected, both when calculating DoF and the 'diffraction limit' at 100% view (equivalent to a large print). I guess approx. 2x pixel size/pitch is 'appropriate' in both cases.
TomFid wrote:
Sorry, couldn't resist.
Admittedly the cubic fits are problematic. But it's almost as silly to fit a quadratic and discuss significance, treating this as a noisy sampling process. These are engineering measurements with fairly high precision, so presumably the issues with the quadratic fit are due to systematic problems with the model, which we'd expect from first principles, not from underdetermination.
Probably best to stick to eyeballing the raw data, and be satisfied with its 1-stop interval. Or build a model with a sensible functional form and fit it to a larger sample of body/lens combinations.
I'm hoping that the phrasing I used ("Even if you were to restrict yourself to a quadratic . . .") is not so subtle as to make you think I was suggesting it seriously. Please disabuse me of that hope if you thought otherwise.
As to inventing a curve by eyeballing the data, the results are completely without value. Indeed, the best estimate we have for a maximum is the maximum we have, not one based on speculation. You can then do some half-hearted Bayesian modification by saying that you have a diffuse prior that, when taken into account, would suggest that the maximum could lie in a range around the data maximum, maybe even 1 stop. But there's no value in eyeballing a curve.
gollywop
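gollywop's point about fitted curves can be made concrete with a small pure-Python sketch. The sharpness numbers are made up, and the fit is ordinary least squares via the normal equations: when the true response has a plateau near the peak and a steep, asymmetric diffraction fall-off, the quadratic's vertex can land more than a stop away from the sample maximum.

```python
def quad_fit(xs, ys):
    """Least-squares fit of y = a*x^2 + b*x + c via the normal equations;
    returns (a, b), which is all we need to locate the vertex."""
    n = len(xs)
    s1 = sum(xs); s2 = sum(x * x for x in xs)
    s3 = sum(x**3 for x in xs); s4 = sum(x**4 for x in xs)
    t0 = sum(ys); t1 = sum(x * y for x, y in zip(xs, ys))
    t2 = sum(x * x * y for x, y in zip(xs, ys))
    def det(m):  # determinant of a 3x3 matrix
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det([[s4, s3, s2], [s3, s2, s1], [s2, s1, n]])
    a = det([[t2, s3, s2], [t1, s2, s1], [t0, s1, n]]) / d
    b = det([[s4, t2, s2], [s3, t1, s1], [s2, t0, n]]) / d
    return a, b

# Hypothetical sharpness readings at one-stop intervals, with a plateau
# near the peak and a steep diffraction fall-off at the narrow end.
stops = [2, 3, 4, 5, 6, 7]
sharpness = [70, 100, 99, 97, 80, 50]
a, b = quad_fit(stops, sharpness)
vertex = -b / (2 * a)  # the fitted parabola's "peak" aperture
data_peak = stops[sharpness.index(max(sharpness))]
print(f"data peak at stop {data_peak}, quadratic vertex at {vertex:.2f}")
```

With these numbers the data maximum is at stop 3 but the fitted vertex lands near stop 4.1: the misfit comes from the wrong functional form, not from measurement noise, which is exactly the systematic problem raised above.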

