Relationship between Sensor Size and Diffraction

David Rosser wrote:
Bobn2 wrote:
olliess wrote:
Bobn2 wrote:
olliess wrote:
quadrox wrote:

In the end that means that medium format and large format only hold an advantage as long as DSLR lenses are not diffraction limited.
I think it's correct to say that we're getting to the point where DSLRs are significantly affected by diffraction, and may even be said to be diffraction-limited at some apertures.
I don't think it's correct to say that, or at least, not meaningful. We are still well short of pixel counts such that the resolution of the lens is the limit over the whole aperture range.
I didn't say "over the whole aperture range," I said "at some apertures."
But they've always been diffraction limited at the same 'some apertures' - it is a property of the lens, not the camera. Look at this lens resolution graph (taken from DxOmark in the good old days when they gave full MTF)



Here we have the same lens on a 12MP and 24MP FF camera. The downward slope to the right is where the lens is becoming 'diffraction limited' - that is, its resolution is limited by diffraction rather than aberrations. Notice that the slope starts at the same place. The only difference is that the 24MP camera is extracting more from the diffraction limited lens. The 24MP is no more 'diffraction limited' than the 12MP.
In any case, mostly, the lens is giving its best resolution (which present cameras cannot entirely capture) when it is aberration limited, not diffraction limited.
Sure.
This whole idea of cameras being 'diffraction limited' due to high pixel count is bogus. Diffraction is a property of the lens, not the camera.
The camera isn't diffraction limited DUE to higher pixel counts. It's diffraction that is limiting your ability to gain much more from higher pixel counts.
Then we still have some way to go before the diffraction is the limiting factor on our camera systems at moderate apertures. As the above shows, even at the smallest apertures, increased pixel count is still yielding noticeably more.

--
Bob
Bob it might help a lot if you explained that the MTF curves in the figure above are composite curves - the MTF for the lens at some fixed number of lp/mm at each aperture multiplied by the MTF of the sensor at that same lp/mm figure. When you understand that, you realise that unless you have a sensor with infinite resolution, the combined MTF is always going to improve with increasing sensor resolution.

Of course I have never seen a sensor MTF curve - the only hint is a Zeiss paper which suggests that sensor MTF falls linearly from 100% at zero frequency to 0% at the Nyquist frequency. Now if anybody still tested lenses independent of camera bodies like they used to (I still have the EROS200 lens test data for my 55mm f/3.5 micro Nikkor that R.G. Lewis produced prior to selling me the lens), you could use the lens data and the combined data as produced by DxOmark to estimate the sensor MTF.
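(A minimal sketch in Python of the system-MTF idea Dave describes, using the linear sensor-MTF model he attributes to the Zeiss paper; the lens figure and pixel pitches below are invented placeholders, not measured data.)

```python
# System MTF = lens MTF x sensor MTF at the same spatial frequency.
# Sensor model (per the Zeiss suggestion): falls linearly from 100% at
# zero frequency to 0% at the Nyquist frequency.

def sensor_mtf(freq_lpmm, pitch_mm):
    nyquist = 1.0 / (2.0 * pitch_mm)
    return max(0.0, 1.0 - freq_lpmm / nyquist)

freq = 40.0              # evaluate everything at 40 lp/mm
lens_mtf_at_40 = 0.70    # hypothetical lens MTF at that frequency

for pitch in (0.0085, 0.006):   # roughly 12 MP and 24 MP FF pitches
    system = lens_mtf_at_40 * sensor_mtf(freq, pitch)
    print(f"pitch {pitch * 1e3:.1f} um: system MTF at {freq:.0f} lp/mm = {system:.2f}")

# The product is always below the lens MTF for a finite-resolution sensor,
# and it always rises as the pitch shrinks - which is Dave's point.
```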

Perhaps as this site now uses DxOmark data Amazon could afford to buy DxOmark the modern equivalent of EROS200.

--
Dave
http://www.david-rosser.co
I should have said system curves rather than composite curves.

--
Dave
 
Zlik wrote:
If you want to print huge, and DoF is not an issue for the biggest part of the frame at f/5.6, you are better off shooting at f/5.6 instead of f/8 with the 36 MP camera, whereas the best possible result with the 12 MP camera would be at f/8, and not f/5.6.
Sigh ...

For the umpteenth time: no! No! NO!

If, for the same lens and the same sensor format, the result with the 36 MP camera at the plane of focus is best at f/5.6 then the result with the 12 MP camera will also be best at f/5.6. The aperture where the optimum happens to be does not, repeat: NOT depend on the pixel size. Got it?

Of course the loss at f/8 may possibly be clearly noticeable with the 36 MP camera and hardly noticeable with the 12 MP camera. So you may possibly shoot at f/8 with the 12 MP camera and possibly not see any significant loss—but still, the result will 1) not be better than with the 12 MP camera at f/5.6, and 2) not be better than with the 36 MP camera at f/8. If depth-of-field did play a role in the image composition then you'd be better off with f/8, of course (and maybe still better with f/11 or even f/16) ... but then, the same would be true for the 36 MP camera.
 
The OP asked if there is a relationship. DM et al say that there is; others jumble up the subject with Airy disks, Nyquist, etc. etc.

The relationship from my own research is quite simple. It is the "zero contrast" frequency of the lens at a given aperture versus the sampling frequency of the sensor (reciprocal of the pixel pitch). Everything else evolves from there.

A key is staying in the spatial frequency domain ('domain' being a fancy way of saying 'using the same units'), as already said by some previously in this and other threads.

For every aperture setting in a perfect lens there is a "zero contrast" frequency (Vo) at which the lens MTF is 0%. For an aperture of f/5.6 it is Vo = 322 cycles/mm.

For a sensor of 5 µm pitch, which has 200 lines/mm, I make the maximum sampling frequency (Vf) 100 cycles/mm.

The relationship asked for is Vf/Vo = 0.31 in my example above.

From that relationship the perfect MTF of a lens/sensor combination can be calculated: 0.61, or 61%, for the example above.

Note too that the relationship is dimensionless (a fraction without units), which makes it generally applicable - as preferred by engineers.
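(A quick check of Ted's arithmetic - a minimal sketch assuming 555 nm light and the standard formula for the MTF of an ideal circular aperture:)

```python
import math

def diffraction_mtf(s):
    """MTF of an aberration-free circular aperture at normalized
    spatial frequency s = Vf / Vo, for 0 <= s <= 1."""
    return (2.0 / math.pi) * (math.acos(s) - s * math.sqrt(1.0 - s * s))

wavelength_mm = 555e-6   # 555 nm green light (assumed)
f_number = 5.6
pitch_mm = 0.005         # 5 um pixel pitch

Vo = 1.0 / (wavelength_mm * f_number)   # "zero contrast" cutoff frequency
Vf = 1.0 / (2.0 * pitch_mm)             # sensor Nyquist frequency

s = Vf / Vo
print(f"Vo = {Vo:.0f} cy/mm, Vf = {Vf:.0f} cy/mm, Vf/Vo = {s:.2f}")
print(f"MTF at Nyquist = {diffraction_mtf(s):.2f}")
# -> Vo = 322 cy/mm, Vf = 100 cy/mm, Vf/Vo = 0.31, MTF = 0.61
```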

Armed with the above, this set of graphs might now make sense:

50mmlenses.gif


--
Regards,
Ted http://kronometric.org
SD9, SD10, EF-500, GH1.
 
Steen Bay wrote:

If there is a 'problem' with increasing pixel count, then I think that it's more the uneven amount of aberrations across the frame at large/moderate apertures, than it is the unavoidable diffraction at small apertures. Take for example a lens that already on a 12mp FF camera has a considerably higher resolution in the center of the image at e.g. f/5.6 than it has in the corners. If we use such a lens on e.g. a 100mp FF camera, then the center resolution would be much higher than it was on the 12mp camera, but the corner resolution would increase much less (expressed as a percentage), meaning that the relative difference between center and corner resolution would become even greater than it already was (on a 12mp camera).
This is pure silliness. Show me a photo that looks worse when taken from a D800 than when taken from the D700 based on the D800 having a greater relative difference between central resolution and edge resolution.
 
quadrox wrote:
Great Bustard wrote:
quadrox wrote:

I have never quite understood how larger formats are supposed to allow for higher resolution, but I have now realized a good way to phrase my question so I might get good answers.

As far as I understand it, the following points are generally accepted to be true when comparing sensors with identical resolution:
  1. All else being equal, smaller sensors will suffer more from diffraction when using a given f-stop due to smaller pixels.
  2. All else being equal, smaller sensors have more DOF at a given f-stop.
My main question is now, will the two effects above cancel out for equivalent photos? That is:

Will equivalent photos taken at the same resolution, with the same FOV and DOF have different amounts of diffraction?

My intuition says that they will have the same amount of diffraction. If that is correct, I see one alternative aspect where larger formats may yield higher resolution: Lens sharpness.

Once more, it is my understanding that:
  1. Lenses for smaller sensors are easier to make sharper than lenses for larger sensors...
  2. ... But because larger sensors (having same resolution) will have larger pixels, this effect is again reduced
So therefore my second question is:

Will lenses for larger formats generally be sharper relative to sensor size than lenses for smaller sensors?

My intuition says no, there won't be much of a difference. And if my intuition is correct, then I am really wondering where the supposed superior resolution for larger formats is coming from. I appreciate any answers that make this clear to me!
This should just about cover it:

http://www.josephjamesphotography.com/equivalence/index.htm#diffraction

Diffraction softening is unavoidable at any aperture, and worsens as the lens is stopped down. However, other factors mask the effects of the increasing diffraction softening: the increasing DOF and the lessening lens aberrations. As the DOF increases, more and more of the photo is rendered "in focus", making the photo appear sharper. In addition, as the aperture narrows, the aberrations in the lens lessen. For wide apertures, the increasing DOF and lessening lens aberrations far outweigh the effects of diffraction softening. At small apertures, the reverse is true. In the interim (usually around a two stop interval), the two effects roughly cancel each other out, and the balance point for the edges typically lags the balance point for the center by around a stop (the edges usually suffer greater aberrations than the center). In fact, it is not uncommon for diffraction softening to be dominant right from wide open for lenses slower than f/5.6 equivalent on FF, and thus these lenses are sharpest wide open (for the portions of the scene within the DOF, of course).
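(A toy model may make the trade-off concrete. The numbers below are illustrative assumptions, not from the essay: aberration blur shrinking as the lens is stopped down, diffraction blur growing with the f-number, the two added in quadrature.)

```python
import math

ABERRATION_COEFF = 60.0   # hypothetical; sets how soft the lens is wide open
AIRY_COEFF = 1.35         # Airy first-null diameter ~ 2.44 * 0.555 um * N

def blur_um(n):
    aberration = ABERRATION_COEFF / n   # falls as the aperture narrows
    diffraction = AIRY_COEFF * n        # grows as the aperture narrows
    return math.hypot(aberration, diffraction)

for n in (2.8, 4, 5.6, 8, 11, 16, 22):
    print(f"f/{n:>4}: blur ~ {blur_um(n):5.1f} um")

# The minimum sits where the two terms are equal (~f/6.7 here), and the curve
# is shallow for about a stop either side before one effect dominates - the
# "roughly cancel ... around a two stop interval" behavior described above.
```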

However, the relationship between diffraction softening and pixel density is largely misunderstood. For a given sensor size and lens, more pixels always result in more detail. As we stop down and the DOF deepens, we reach a point where we begin to lose detail due to diffraction softening. As a consequence, photos made with more pixels will begin to lose their detail advantage earlier and quicker than images made with fewer pixels, but they will always retain more detail. Eventually, the additional detail afforded by the extra pixels becomes trivial (most certainly by f/32 on FF). See here for an excellent example of the effect of pixel size on diffraction softening.

In terms of cross-format comparisons, all systems suffer the same from diffraction softening at the same DOF. This does not mean that all systems resolve the same detail at the same DOF, as diffraction softening is but one of many sources of blur (lens aberrations, motion blur, large pixels, etc.). However, the more we stop down (the deeper the DOF), the more diffraction becomes the dominant source of blur. By the time we reach the equivalent of f/32 on FF (f/22 on APS-C, f/16 on mFT and 4/3), the differences in resolution between systems are trivial.
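(A quick numerical check of the cross-format claim - a sketch assuming 555 nm light and nominal sensor heights:)

```python
AIRY_K = 2.44            # Airy first-null diameter = 2.44 * wavelength * N
WAVELENGTH_MM = 555e-6   # green light (assumed)

formats = {              # name: (crop factor, nominal sensor height in mm)
    "FF at f/32":    (1.0, 24.0),
    "APS-C at f/21": (1.5, 16.0),
    "mFT at f/16":   (2.0, 12.0),
}

for name, (crop, height_mm) in formats.items():
    n = 32.0 / crop                          # equivalent f-number (same DOF)
    airy_mm = AIRY_K * WAVELENGTH_MM * n     # Airy diameter on the sensor
    print(f"{name}: Airy = {airy_mm * 1e3:.1f} um = "
          f"{airy_mm / height_mm * 100:.3f}% of frame height")

# All three lines print the same percentage: at the same DOF, diffraction
# blurs the same fraction of the frame regardless of format.
```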
Ah, thank you. I must somehow have missed it when I read your essay the last time. I did not remember it discussing diffraction at all. Thanks!
I'm glad you found it useful! The part about diffraction is "hidden" in the section on DOF.
 
Zlik wrote:
Allan Olesen wrote:
Zlik wrote:
Olaf Ulrich wrote:
quadrox wrote:
Or the other way around, it is interesting to know how far one can stop down before image quality degrades significantly.
This is interesting to know indeed. And the answer depends on lens quality and sensor size, but not at all on pixel size.
You are right, but if you take away the "significantly" in the sentence above, then pixel size plays a role too. Example:

For a given lens, if a 12MP full frame camera doesn't lose any resolution (because it's already very low) when going from f/5.6 to f/8, then you could say that "one can stop down to f/8 on that camera without degrading image quality", whereas if a 36MP full frame sensor loses even a little bit of resolution (going from very high to high), I would say that "stopping down to f/8 already degrades image quality (compared to what is possible on that sensor)", and it wouldn't matter if the results at f/8 with the 36MP camera are better than the results at f/8 with the 12MP camera.

Or put in another way:

If you want to print huge, and DoF is not an issue for the biggest part of the frame at f/5.6, you are better off shooting at f/5.6 instead of f/8 with the 36MP camera, whereas the best possible result with the 12MP camera would be at f/8, and not f/5.6. And it doesn't matter if even f/11 on the 36MP would be better than anything possible with the 12MP sensor.
For a sensor with infinite resolution I will expect the most detailed photo at the aperture which has the best balance between lens aberrations (which are worst at large apertures) and diffraction (which is worst at small apertures).

This is the aperture where the most detailed image is being projected onto the sensor.

I hope we can agree on the above.
Agreed.
Now, can you explain why a low MP sensor should move this balance to another optimal aperture? The aperture where the most detailed image is being projected onto the sensor is still the same, independent of pixel count.
The lower MP doesn't move this balance. The low resolution may mask any resolution advantage of said optimal aperture compared to neighboring apertures, in which case stopping down slightly away from the optimal aperture would produce the same resolution but with more DoF, which is preferable in some situations.
One of the qualifications for your statement was: DoF is not an issue for the biggest part of the frame at f/5.6.
 
Najinsky wrote:
Donald B wrote:

not having ever understood diffraction, can someone post some photos showing the effect.

cheers don
Note how diffraction effects, through the choice of F/22 on a m4/3 sensor, render the image unusable.

9b539f46e2c544ebb0e24707bbe206d4.jpg
...is a photo that has lost considerable resolution due to diffraction softening, but one that is successful despite the lower resolution and/or because the greater DOF mattered more than the loss of resolution.

It's not unlike me posting a photo at ISO 2500:

Canon 6D + 50 / 1.2L @ f/2.8, 1/60, ISO 2500

in a discussion about noise with the implication that photos in low light do not suffer more noise than photos in good light.

Of course, display size, viewing distance, and visual acuity all play major roles in determining what is "good enough".
 
Najinsky wrote:

Perhaps it's the Feynman influence, his genius was in taking complexity and distilling it into what really matters.
Yes, Feynman’s approach is a big influence on me, and I realized that if I needed three pages of equations to illustrate an explanation, then I was probably doing things the wrong way: rather, the explanation should be as simple as needed, but not simpler. I am also influenced by G.K. Chesterton, who used methods similar to Feynman's, but applied to culture.

I think Feynman often had an intuitive understanding of the ‘big picture’ whenever he solved a problem, or a grasp of basic principles. Intellectually, he was more of a classicist or a scholastic than a modernist, which probably is due to his upbringing. In his famed lectures on physics, he taught energy as a first principle, and this is superior to the standard course of teaching mechanics first. If you understand energy first, then the principles of mechanics neatly become consequences of energy, and become intuitively understandable.

Mathematics education these days seems to require too much memorization and not enough understanding of what the math is doing. I’ve seen folks who were great with math, but they treated it like a black box, doing ten pages of calculations when an analysis of the problem could produce an easy result in one paragraph. But perhaps I’m just lazy — back when I was studying physics I called it “the principle of least work.”

--

 
Great Bustard wrote:
Steen Bay wrote:

If there is a 'problem' with increasing pixel count, then I think that it's more the uneven amount of aberrations across the frame at large/moderate apertures, than it is the unavoidable diffraction at small apertures. Take for example a lens that already on a 12mp FF camera has a considerably higher resolution in the center of the image at e.g. f/5.6 than it has in the corners. If we use such a lens on e.g. a 100mp FF camera, then the center resolution would be much higher than it was on the 12mp camera, but the corner resolution would increase much less (expressed as a percentage), meaning that the relative difference between center and corner resolution would become even greater than it already was (on a 12mp camera).
This is pure silliness. Show me a photo that looks worse when taken from a D800 than when taken from the D700 based on the D800 having a greater relative difference between central resolution and edge resolution.
I'm just pointing out (because I don't see it mentioned that often) that a higher MP count also means that the resolution (at large/moderate apertures) becomes more uneven across the frame than it is with the same lens on a camera with a lower MP count. Whether you consider that as an 'issue' or not, that'll depend on your preferences and shooting style (know that you don't, but some people do worry about soft corners ;-) ).
 
Great Bustard wrote:

It's not unlike me posting a photo at ISO 2500:

Canon 6D + 50 / 1.2L @ f/2.8, 1/60, ISO 2500

in a discussion about noise with the implication that photos in low light do not suffer more noise than photos in good light.

Of course, display size, viewing distance, and visual acuity all play major roles in determining what is "good enough".
That’s a great shot.

--
 
Olaf Ulrich wrote:
Zlik wrote:
If you want to print huge, and DoF is not an issue for the biggest part of the frame at f/5.6, you are better off shooting at f/5.6 instead of f/8 with the 36 MP camera, whereas the best possible result with the 12 MP camera would be at f/8, and not f/5.6.
Sigh ...

For the umpteenth time: no! No! NO!

If, for the same lens and the same sensor format, the result with the 36 MP camera at the plane of focus is best at f/5.6 then the result with the 12 MP camera will also be best at f/5.6. The aperture where the optimum happens to be does not, repeat: NOT depend on the pixel size. Got it?
I never said optimum aperture changed depending on pixel size. I just said that in some cases, stopping down a little bit past optimum aperture wouldn't result in loss of resolution (= flatter curve near optimum aperture), in which case the bigger DoF obtained will actually benefit the picture. Is that so difficult to understand? See this graph:

TS560x560~1747345.jpg


Optimum aperture is clearly around f/5.6, and yes, regardless of pixel pitch.

But, you can clearly see that with the 5D, going from f/5.6 to f/8 doesn't (within margins of error) visibly reduce resolution. That curve would be even flatter for a 6MP sensor. And even if the mathematical high point is still at f/5.6 even on a 1MP sensor, the curve would be so flat that stopping down one or two stops wouldn't affect resolution, and that's why I said that in those cases, stopping down might even benefit the image quality because of the bigger DoF.

"Sigh..."
Of course the loss at f/8 may possibly be clearly noticeable with the 36 MP camera and hardly noticeable with the 12 MP camera.
Exactly
So you may possibly shoot at f/8 with the 12 MP camera and possibly not see any significant loss
No significant loss, but increased DoF.
—but still, the result will 1) not be better than with the 12 MP camera at f/5.6,
Again, DoF...
and 2) not be better than with the 36 MP camera at f/8.
Of course not, and I never claimed it did, and it doesn't change the fact that the 12MP camera at f/8 with more DoF could produce better overall image quality than the 12MP camera at f/5.6. Did you read my other responses to the thread, minutes before the post you replied to?
If depth-of-field did play a role in the image composition then you'd be better off with f/8, of course
Exactly my point.
(and maybe still better with f/11 or even f/16)
That's different because now it's a matter of compromise (slightly reducing the overall sharpness for more parts within DoF) whereas what I say is that to some extent, on low resolution cameras, you can stop down slightly without even sacrificing resolution in the parts that are within DoF.
... but then, the same would be true for the 36 MP camera.
No, because stopping down on the 36MP camera would significantly reduce resolution in the part that is within DoF.
 
quadrox wrote:
Bobn2 wrote:
This camera 'diffraction limit' nonsense comes from McHugh, and when you find someone completely confused about diffraction, it is generally because they have been reading that site. Diffraction is not the only topic it's completely wrong about, but it is most spectacularly wrong about this subject. Even Ken Rockwell's site is a better source, if you must, if less technical looking.
But it doesn't convey that information. Why did you choose 0.8 times the peak - there is no logic to that. In any case, where the peak is (in your case) depends on the lens, not the camera. In McHugh's case, his limit is arbitrary and just silly. A complete nonsense. It's a coincidence that yours and his coincide. Anyway, look at the curves again:
As I am not McHugh I can not vouch for this, but my clear understanding is that the mentioned article is assuming ideal lenses that are diffraction limited - and it explores the effects of diffraction on cameras with different pixel sizes.
And gets it entirely wrong. The idea that when the Airy disk is over two pixels wide diffraction suddenly becomes visible is absurd, and could only be written by someone struggling to conceptualise how the effects of diffraction would interact with the sampling of a digital sensor without having the physical or mathematical framework to deal with it. The truth is that Airy discs do not conveniently align themselves with pixel centroids so that they have no visibility until they become big enough to cover more than one pixel. In fact, except in astrophotography, you never see an isolated Airy disc in the wild; what you get is an image which is constructed by the translation of every infinitesimal point into an Airy disc. That equals a slightly blurred image. The sampling function of the sensor and its AA filter adds in another blur, which can also be quantified. The resultant blur is the convolution of the point spread functions of these two blurs. That operation doesn't give a result in which diffraction blur suddenly becomes visible when the Airy disc gets big enough. As the MTF functions of real lenses show, what you get is a gradually increasing composite blur of the diffraction and sensor blur, with no defined 'diffraction limit'. It is a nonsense, totally unsupported by any real evidence.
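(In the frequency domain the convolution Bob describes becomes a simple product of MTFs. A minimal sketch, assuming 555 nm light, 100% fill-factor square pixels, and approximate 12 MP / 24 MP FF pitches:)

```python
import numpy as np

WAVELENGTH_MM = 555e-6  # green light (assumed)

def diffraction_mtf(v, n):
    """MTF of an ideal circular aperture at spatial frequency v (cy/mm)."""
    s = np.clip(v * WAVELENGTH_MM * n, 0.0, 1.0)  # v relative to the cutoff
    return (2 / np.pi) * (np.arccos(s) - s * np.sqrt(1 - s ** 2))

def pixel_mtf(v, pitch_mm):
    """MTF of a 100% fill-factor square pixel: |sinc(v * pitch)|."""
    return np.abs(np.sinc(v * pitch_mm))

v = np.array([10.0, 30, 50, 70, 90, 110])     # spatial frequencies, cy/mm
for pitch in (0.0085, 0.006):                 # ~12 MP and ~24 MP FF pitches
    system = diffraction_mtf(v, 8.0) * pixel_mtf(v, pitch)
    print(f"pitch {pitch * 1e3:.1f} um @ f/8:", np.round(system, 2))

# Both pitches share exactly the same diffraction term; the finer pitch just
# multiplies it by a higher sensor MTF. The combined roll-off is smooth at
# every frequency - nothing 'switches on' when the Airy disc spans two pixels.
```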
For diffraction limited lenses, the lenses play absolutely no role in the diffraction effects on the image.
What? The diffraction effects are entirely down to the lens. What you mean is that the performance of the lens is effectively defined by diffraction, rather than optical aberrations, and that is true by definition, it is what 'diffraction limited' means.
For a given aperture the diffraction effect will be exactly the same no matter what lens you choose. But when pixel peeping the pixel size of the camera WILL make a difference, as smaller pixels will show diffraction effects earlier than larger pixels.
Not true. Smaller pixels show the onset of diffraction limiting at the same f-number as larger pixels. You are repeating a prime myth that McHugh has promulgated. It is baloney.
In essence, the question the article tries to address is how much sensor resolution you can have before it becomes pointless due to diffraction, and how this changes with different apertures.
I don't know which question the article was trying to address, but it is rubbish from beginning to end. The truth is that increases in pixel density continue to give resolution increases deep into diffraction limiting. There is at some point an area of diminishing returns, but it isn't anywhere close to where his calculator predicts. Not at 24MP nor at 36MP at f/16 on FF, nor the same pixel count and DOF on any other format.

Don't try and defend what he says, it is just nonsense.
I am sure you can acknowledge that there is nothing wrong with that?
There is plenty wrong with it - see above.
Edit:

To summarize: the Nyquist Limit is real,
yes
and having infinite pixel resolution will not yield better image quality because of diffraction limits.
we are nowhere close to infinite pixel resolution or even effectively infinite, and his 'diffraction limit' fails to predict the point at which increases in pixel density stop producing increases in resolution.
Therefore it is interesting to discuss just how small pixels can be made (when considering a given aperture for taking images) before no improvements in image quality will be gained.
That is an interesting discussion, but McHugh's 'diffraction limit' will not give you the answer.
Or the other way around, it is interesting to know how far one can stop down before image quality degrades significantly.
You know that - you can look at the lens' MTF function and find the peak, and you know that if you stop down further than that, image quality degrades significantly. It has nothing to do with the pixel density of the camera.
 
Zlik wrote:I never said optimum aperture changed depending on pixel size. I just said that in some cases, stopping down a little bit past optimum aperture wouldn't result in loss of resolution ...
Did you or didn't you say this: "... whereas the best possible result with the 12 MP camera would be at f/8, and not f/5.6."

And that's wrong. Please make sure you remember—and understand—your own statements properly.

Now you've switched to talking about acceptable results (as opposed to best-possible results), and you're also including the need of more depth-of-field in your considerations ... which you didn't before.
 
Steen Bay wrote:
Great Bustard wrote:
Steen Bay wrote:

If there is a 'problem' with increasing pixel count, then I think that it's more the uneven amount of aberrations across the frame at large/moderate apertures, than it is the unavoidable diffraction at small apertures. Take for example a lens that already on a 12mp FF camera has a considerably higher resolution in the center of the image at e.g. f/5.6 than it has in the corners. If we use such a lens on e.g. a 100mp FF camera, then the center resolution would be much higher than it was on the 12mp camera, but the corner resolution would increase much less (expressed as a percentage), meaning that the relative difference between center and corner resolution would become even greater than it already was (on a 12mp camera).
This is pure silliness.
He is actually right. He never said it would look worse than the lower MP camera, just that within the frame of the high MP camera, differences between center and corner sharpness would be higher than on the low MP camera. Simplified hypothetical example: with the 12MP camera: center resolution: 7MP, corner resolution: 5MP; with the 100MP camera: center sharpness: 70MP, corner sharpness: 20MP. Even though the 100MP sensor would produce a much sharper image, difference between center and corners would be bigger.
Show me a photo that looks worse when taken from a D800 than when taken from the D700 based on the D800 having a greater relative difference between central resolution and edge resolution.
But I agree that it would not be a "problem" ;), just a fact.
I'm just pointing out (because I don't see it mentioned that often) that a higher MP count also means that the resolution (at large/moderate apertures) becomes more uneven across the frame than it is with the same lens on a camera with a lower MP count. Whether you consider that as an 'issue' or not, that'll depend on your preferences and shooting style (know that you don't, but some people do worry about soft corners ;-) ).
 
Olaf Ulrich wrote:
Zlik wrote:I never said optimum aperture changed depending on pixel size. I just said that in some cases, stopping down a little bit past optimum aperture wouldn't result in loss of resolution ...
Did you or didn't you say this: "... whereas the best possible result with the 12 MP camera would be at f/8, and not f/5.6."
Yes that's exactly what I say. "Best possible result" doesn't mean optimum aperture.
And that's wrong. Please make sure you remember—and understand—your own statements properly.
Again, check the graph. There is a point where for "hypothetical low MP camera A", going 1 stop past optimum aperture does not result in lower resolution. Do we agree on that?
Now you've switched to talking about acceptable results (as opposed to best-possible results), and you're also including the need of more depth-of-field in your considerations ... which you didn't before.
I repeat: for camera A, stopping down will produce the absolute best result (no loss in resolution compared to optimum aperture but more DoF).
 
Great Bustard wrote:
Najinsky wrote:
Donald B wrote:

not having ever understood diffraction, can someone post some photos showing the effect.

cheers don
Note, how diffraction effects through the choice of F/22 on a m.4/3 sensor renders the image unusable.

9b539f46e2c544ebb0e24707bbe206d4.jpg
...is a photo that has lost considerable resolution due to diffraction softening, but that the photo is successful despite the lower resolution and/or because the greater DOF mattered more than the loss of resolution.

It's not unlike me posting a photo at ISO 2500:

Canon 6D + 50 / 1.2L @ f/2.8, 1/60, ISO 2500

in a discussion about noise with the implication that photos in low light do not suffer more noise than photos in good light.

Of course, display size, viewing distance, and visual acuity all play major roles in determining what is "good enough".
That's a really sweet capture, I like it a lot (but do wish that stray arm wasn't there).

The intent of my photo was primarily to satisfy Donald's request for a photo that showed diffraction softening. The comment about it being unusable was to provoke thoughts about how unusable it is, something which is more subjective based on how one uses their images.

Following on from Mark's comment about sharpening, it can be sharpened up and I could happily get a 4-6MP type image out of it for printing (and for retina display), though I wouldn't post that image here as it would reveal a multitude of sins that printing/retina hides.

For regular web and HD TV slideshow use a 1.5MP image hides most of the sharpening sins, at least for casual observation:



9a6e357e445b4b7a9f8fd59e13ca25b7.jpg


I took a whole series of these at different apertures for a previous discussion related to diffraction and sharpness, but most are archived offline at the moment.

However, here's the same subject from a different angle at F/11:



8d5b94199afe4101960ff8c12e97960c.jpg


At F/11 there is still a little diffraction but it's not really worth a mention. At full size, this image is notably sharper in the focal plane, but the shallower DOF counteracts it somewhat in that it leads to a much flatter and less interesting subject, though I still like it (which is why I kept it around). At 1.5MP the sharpness differences with the first are much less significant, while the DOF advantage of the first is still conveyed.

Like others, I'm finding the discussion interesting, but I especially liked Donald's request because it brings it all back to this; what does it really mean for our photography. Although I appreciate that isn't the primary point under discussion.

-Najinsky
 
Mark Scott Abeln wrote:
Najinsky wrote:

Perhaps it's the Feynman influence, his genius was in taking complexity and distilling it into what really matters.
Yes, Feynman’s approach is a big influence on me, and I realized that if I needed three pages of equations to illustrate an explanation, then I was probably doing things the wrong way: rather, the explanation should be as simple as needed, but not simpler. I am also influenced by G.K. Chesterton, who used methods similar to Feynman's, but applied to culture.
That is OK for natural systems, but not for systems created by humans who are not following Feynman's line of logic. Things get more complicated when a system is composed of randomly chosen components and its behavior is not very predictable under random photographic conditions.
 
olliess wrote:
Bobn2 wrote:
olliess wrote:

I found "invoking" his site useful to this discussion, since it got you to make some substantive comments beyond "I don't agree, your method conveys no information."
This camera 'diffraction limit' nonsense comes from McHugh, and when you find someone completely confused about diffraction, it is generally because they have been reading that site.
The arguments you presented in response to McHugh reveal a lot more about your level of confusion than his. You continue to be confused in your present post:
It only shows that the confused stick together. You have eaten McHugh's nonsense whole and now won't give it up. He and you are confused, not me.
Why did you choose 0.8 times the peak - there is no logic to that.
You quoted the passage where I explained this (the very first time around) quite a number of times, but apparently never fully read/understood it:

"Pick a threshold for a "significant" decrease in resolution. For argument's sake, I'll pick 0.80 relative to the maximum, which I'm pretty sure would be noticeable."
'For argument's sake' means 'arbitrary'. You have presented no argument as to why 0.8 is significant, nor why 0.8 of different quantities would be equally significant. So, it is an arbitrary number, picked out of a hat, and arguments based on it mean nothing.
In McHugh's case, his limit is arbitrary and just silly. A complete nonsense. It's a coincidence that yours and his coincide.
In my original post, I did say it was a coincidence. When you work out the math, however (e.g., by convolving the Bayer/AA filter with the Airy disk), you'll see why the numbers are close and also why they seem reasonable.
Let's see your 'math' convolving the Bayer/AA filter with the Airy disc. Have you done it, or are you just talking about it?
Anyway, look at the curves again:



And another set



Notice that the position of the peak does not depend on the pixel count.
You're still arguing about something which nobody is disputing: that the graphs show the peak to be at the same aperture no matter what sensor is used.
Good, so where is this 'diffraction limit'?
What I'm arguing about is what the graphs ALSO show: that the sensor with higher resolution loses more resolution due to diffraction. Not just in absolute resolution, but also in percentage of maximum resolution, meaning that the higher resolution sensor is not just "losing more because it had more to give."
It is exactly losing more because it had more to give. In any case, there is still no 'limit', which you and McHugh say there should be. The point is that as you increase pixel density, you get closer and closer to extracting all that the lens can give, including both aberration and diffraction blurring. We are still some way off extracting all that lenses can give with current pixel densities, even at f/16 on FF (or the same DOF on other formats).
Note also that there is no sudden drop when the 'limit' is reached.

Sorry, your 0.8 rule is as useless as McHugh's calculator.
This is why a "limit" needs to be defined using a "threshold." This is a standard way of defining limits, ranges, differences, etc. in science and engineering measurement.

If you don't understand the reasons why measurements are done this way, then ask. Just repeating that it's useless is, well, useless.
This is ultimately the point. Your 'limit' defines something that cannot be observed and is arbitrarily defined for no cogent reason. It is nonsense, and worse than nonsense: it confuses people into thinking that high pixel densities produce images with more diffraction blur than lower pixel count sensors. So, it is practically useless and leads people into believing something that is not true - worse than useless.
If such an aperture exists, then I know I will have to begin trading away maximum sharpness if I want more DOF.
Such an aperture doesn't exist. You can learn the peak aperture from lens to lens, go past that and you're below 'maximum' sharpness. And you trade away maximum sharpness for DOF always, that is a fact of life.
First of all, it seems inconsistent to argue against thresholds and then invoke DOF, since DOF is only meaningful when a "threshold" has been defined for what is "acceptably sharp."
I didn't argue for any particular threshold for DOF - choose your own CoC - but what I said is always true whatever DOF decision you make: you trade DOF for maximum sharpness.
Secondly, since the effects of defocus and diffraction are combined, it follows that a smaller aperture could, at least in principle, result in less maximum sharpness AND less DOF.
Also not true. You will only get 'maximum sharpness' at the point of focus, and if the lens is essentially diffraction limited, that sharpness will be defined by diffraction plus the pixellation blur (which decreases as pixel density is increased)
... if the airy disk is 2-3 pixels or larger, diffraction is limiting resolution,
If one wants to be as technically precise as you seem to like, diffraction is always "limiting" resolution, because it will reduce any resolution "measurement" whether the airy disk is 0.2 pixels or 2.0 pixels wide. The question is, when does it start to limit resolution in a meaningful way?
Could you explain the use of the word 'meaningful' in this context?
In this context it means "in a visually noticeable way."
Let's see your perceptual evidence that 0.8 of whatever happened to be the maximum is 'visually noticeable'. You have none. So, once again, nonsense.
Are you are arguing that a visual impact will be noticed at some point beyond that where resolution is measurably affected?
Resolution is measurably affected anywhere past the peak for that lens... There is no requirement for the Airy disc to have any particular relationship to pixel size. That is a fiction, and this 'diffraction limit' as calculated by that calculator does not exist.
I'm guessing what you're trying to say here is that there is no special relationship between the Airy disk size in pixel widths and a visually noticeable limit.
What I'm trying to say is what I said.
I don't know whether McHugh got his 2--3 pixel limit empirically,
Why defend it then? Yet another arbitrary figure.
but it seems reasonable
Why is it 'reasonable'? Let's have some hard perceptual evidence that an Airy disc 2-3 pixels wide will suddenly produce 'visible diffraction'. You can't cite any because there is none.
when you work out the interaction between the Bayer/AA filtering and the Airy disk of about that size and then look at what happens to the MTF50.
How does that show this to be 'reasonable'? All you are saying is that McHugh's guess (as to a phenomenon that does not exist) is similar to your guess (as to a phenomenon which does not exist), which is not surprising, since your guess is directly inspired by McHugh's guess. If you had truly done the 'math' to convolve the diffraction PSF with the AA PSF you would know that there is no such 'limit' at which the diffraction blur suddenly becomes visible. If there were, there would also be a limit at which lens aberrations suddenly become 'visible', yet I don't see anyone talking about the aberration limit.
(even if the convolution pf point spread ficntions worked in a simple pixel by pixel way, which it doesn't).
The "convolution pf point spread ficntions?" Is this a technical term?
Yes. It's the maths that you need to do if you want to calculate the combined effect of diffraction and pixellation - take the PSF for each and convolve them.
I'm familiar with the maths.
You don't give any evidence of being so.
What you original said was a jumble.
No, I had some typos: I typed 'pf' for 'of' and 'ficntions' instead of 'functions'. Other than that it was not at all a jumble; it just says 'even if the convolution of point spread functions worked in a simple pixel by pixel way, which it doesn't', which is not at all jumbled. It just says that the convolution of the point spread functions (which is what you need to do to find the combined effect) doesn't work on a simple pixel by pixel basis, which indeed it doesn't. Were you truly 'familiar' with the maths you would know that and understand what I was saying straight away.
Now that you've clarified, I can guess that you meant "convolution of point spread functions." See the above.
That was such a hard guess to make.
Your example doesn't show anything about when diffraction becomes "visible." To do that you need to show us how you determined the threshold for visibility.
Which will depend on the size you present the image. There is no reason at all to think that it becomes 'visible' when it drops to 0.8 of its peak, or any other arbitrary proportion.
You agreed yourself in an earlier post that a resolution difference of 17% was probably "not that noticeable." Pick 0.85 or a higher threshold if that feels safer to you.
If there is a 'threshold' it is not a proportion of the maximum MTF50. What it is is an output resolution relative to a given output image size - that is, it depends on how big you view the image, not on the peak resolution of the lens. Defining it in terms of peak resolution of the lens is absurd.
...as the diffraction limit is approached, the first signs will be a loss of resolution in green and pixel-level luminosity....

As diffraction progressively blurs an image the Bayer artifacts become smaller not larger because the diffraction high pass filters the optical signal meaning that the Bayer array increasingly oversamples.
Diffraction low pass filters the optical signal. If it high passed the optical signal, it would increase the resolution, in which case we'd love it. Read up on the MTF of the Airy disk pattern.
Sorry, mistype - substitute 'low pass filter' - it's like having an extra AA filter; it mitigates the effects McHugh claims it emphasises.
The charts you showed suggest that a sensor array with higher spatial resolution is proportionally MORE affected by diffraction.
No, it suggests that a sensor array with higher spatial resolution gets closer to capturing the full resolution given by the lens.
Since the array of green pixels has higher resolution than the red and blue arrays, respectively, it stands to reason that green (and luminosity) would be more affected, so what he is saying seems correct, although I'm not sure how much you'd notice this.
I would be wary of using the term 'stands to reason' when your arguments are so devoid of it. What matters is not how far off the peak of whatever lens it is, but whether it is sampling above the Nyquist limit for the applied signal, and as diffraction moves the Nyquist limit down the Bayer sampling becomes more and more securely oversampled. So far from what he is saying being 'correct' it is tosh, like most of what he says.
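(A quick check of the oversampling point - a sketch assuming 5 µm pixels and 555 nm light:)

```python
WAVELENGTH_MM = 555e-6   # green light (assumed)
PITCH_MM = 0.005         # 5 um pixels (assumed)

nyquist = 1 / (2 * PITCH_MM)   # 100 cy/mm for this pitch
for n in (5.6, 8, 11, 16, 22, 32):
    cutoff = 1 / (WAVELENGTH_MM * n)   # diffraction zero-contrast frequency
    tag = "oversampled" if cutoff < nyquist else "detail above Nyquist"
    print(f"f/{n:>4}: lens cutoff {cutoff:4.0f} cy/mm vs Nyquist {nyquist:.0f} -> {tag}")

# Past about f/18 the diffraction cutoff falls below Nyquist: the lens then
# delivers nothing the sensor cannot sample, so stopping down acts like an
# ever stronger AA filter rather than creating new sampling artifacts.
```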
In the sense that differences between luminosity and red/blue resolution produce "artifacts," then sure, diffraction mitigates the artifacts. So would a stronger AA filter, or maybe just a more blurry lens.
You can't have it both ways.
From the example you showed, a smaller pixel (higher resolution) was associated with more reduction of resolution (both relative and absolute) due to diffraction. So he's actually right.
No, it's cobblers.
I can't help it if you don't agree with what your own data show.
It's cobblers. My data doesn't show that rectangular pixels show 'more diffraction' in one direction than another; what they would show is that rectangular pixels capture less resolution in one direction than another, and that applies to any image. You might as well say they show more camera shake in one direction than another, more lens aberrations in one direction than another, more defocus in one direction than another and so on. It's cobblers.
The form below calculates the size of the airy disk and assesses whether the camera has become diffraction limited.

As said previously, a camera does not become 'diffraction limited'.
Re-stating what you previously believed doesn't make it any more true. Certainly, the arguments you've presented don't support your view.
And hanging on to McHugh's nonsense doesn't make that any more true, either. You cannot pick a 'diffraction limit' where either you or McHugh claims it is from the MTF curve.

The limit does not exist. His specious but pretty diagrams don't make it exist.
If you want to measure effects then you're probably going to need to define thresholds.
Why? You only define 'thresholds' where they exist. Let's define a 'threshold' of 40 MPH and say that a 2CV is as fast as a Ferrari. Absurd reasoning, isn't it?
This is not rocket science.
Certainly it isn't. It's mumbo jumbo pseudo science.
I can plot the MTF curve of the (Bayer filtered) sensor and identify some threshold, such as the MTF50. Then, I can multiply it with an MTF for ideal diffraction for various pixel widths and look at how much the MTF50 has shifted. Then you can compare to your threshold for the change in MTF50.
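(For concreteness, a minimal sketch of the MTF50-shift procedure being described; the Gaussian stand-in for the Bayer/AA sensor MTF is an assumption, not measured data:)

```python
import numpy as np

WAVELENGTH_MM = 555e-6   # green light (assumed)

def diffraction_mtf(v, n):
    s = np.clip(v * WAVELENGTH_MM * n, 0.0, 1.0)
    return (2 / np.pi) * (np.arccos(s) - s * np.sqrt(1 - s ** 2))

def sensor_mtf(v, pitch_mm):
    # Gaussian stand-in reaching 0.5 at Nyquist (assumed, not measured)
    nyquist = 1 / (2 * pitch_mm)
    return np.exp(-np.log(2) * (v / nyquist) ** 2)

def mtf50(pitch_mm, n):
    v = np.linspace(1, 400, 4000)
    combined = diffraction_mtf(v, n) * sensor_mtf(v, pitch_mm)
    return v[np.argmin(np.abs(combined - 0.5))]   # frequency nearest MTF = 0.5

for pitch in (0.0085, 0.006):    # ~12 MP and ~24 MP FF pitches (approx.)
    wide, narrow = mtf50(pitch, 4.0), mtf50(pitch, 16.0)
    print(f"pitch {pitch * 1e3:.1f} um: MTF50 {wide:.0f} -> {narrow:.0f} cy/mm "
          f"from f/4 to f/16 ({narrow / wide:.0%} retained)")

# In this model the finer pitch keeps the higher absolute MTF50 at every
# aperture while losing a larger fraction of its own f/4 figure - it
# quantifies the disputed change, but where to call it a 'limit' is a choice.
```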
Exactly the principle of GIGO. Define a meaningless threshold and you get a meaningless result.
On the other hand, if you insist that no threshold is "safe" (zero tolerance for diffraction!), then go ahead and shoot at optimum aperture all the time (or just ignore diffraction).
No, just know how much diffraction will degrade your image and decide whether it is acceptable to you. The acceptability would depend on whether you want to use the image for the web or for large prints, and on your own standards of sharpness. 80% of what you might have had is absurd, and leads people to the notion that they are better off losing 20% of a little than keeping 80% of a lot. Silly, silly reasoning, and it leads to the widespread misunderstandings we see all over these forums. You and McHugh should just stop it.
It's your perfect right to hang on to this nonsense if you want, but it's still balls, and I for one will do what I can to properly inform less gullible photographers.
Go for it. Just don't keep misreading (or failing to read) stuff and then spend pages arguing about how wrong it is.
I have read and understood it and it is nonsense.
As to whether the arguments I've presented support my view, well I wouldn't expect someone who thinks you can find a 'limit' by applying an arbitrary quotient to understand reasoned argument.
The little personal jabs are actually kind of comical.
Check back and see who started the 'little personal jabs'. It wasn't me. So if comical they are, the joke is on you. Especially since everyone who actually knows anything about the topic of diffraction knows you are talking garbage.

--
Bob
 
olliess wrote:
Zlik wrote:

I think this whole discussion can be summarized into two claims which are not mutually exclusive:
  • One person (Bobn2) says that the higher megapixel camera will always produce better or at worst equal resolution than the low megapixel camera, which is supported by your graph: the red graph is at all points higher than the blue graph.
  • One person (olliess) says that the higher megapixel camera will start to lose its resolution potential sooner and faster than the lower megapixel camera, which is again supported by your graph: the downwards slope starts sooner and declines faster for the red graph.
Thanks for summarizing so succinctly.

Seems like a good time to step away and let calmer heads prevail. ;)
It is not true that the 'higher megapixel camera will start to lose its resolution potential sooner'; it starts to lose its resolution potential at exactly the same point. That immediately demonstrates that the 'diffraction limit' theory is wrong. It predicts that the smaller pixels will show diffraction limiting earlier, and that cannot be demonstrated in the real world. Furthermore, the decline of resolution (the 'diffraction limit') is not where the theory predicts it to be. So, in brief, the theory does not hold up to experiment.
 
Steen Bay wrote:
Great Bustard wrote:
Steen Bay wrote:

If there is a 'problem' with increasing pixel count, then I think that it's more the uneven amount of aberrations across the frame at large/moderate apertures, than it is the unavoidable diffraction at small apertures. Take for example a lens that already on a 12mp FF camera has a considerably higher resolution in the center of the image at e.g. f/5.6 than it has in the corners. If we use such a lens on e.g. a 100mp FF camera, then the center resolution would be much higher than it was on the 12mp camera, but the corner resolution would increase much less (expressed as a percentage), meaning that the relative difference between center and corner resolution would become even greater than it already was (on a 12mp camera).
This is pure silliness. Show me a photo that looks worse when taken from a D800 than when taken from the D700 based on the D800 having a greater relative difference between central resolution and edge resolution.
I'm just pointing out (because I don't see it mentioned that often) that a higher MP count also means that the resolution (at large/moderate apertures) becomes more uneven across the frame than it is with the same lens on a camera with a lower MP count. Whether you consider that as an 'issue' or not, that'll depend on your preferences and shooting style (know that you don't, but some people do worry about soft corners ;-) ).
Soft corners are a non-issue -- the corners of a higher MP capture will resolve at least as much detail as the corners of a lower MP capture.

What you are saying is that the photo will look worse because the center resolves proportionally higher than the corners with more pixels, even though the whole of the photo made with more pixels resolves more, including the corners.

It is this that I am calling "silly".
 
