# Diffraction Limit Discussion Continuation

Started Feb 21, 2014 | Discussions
 Re: Enlargement factor has been ignored In reply to s_grins, Feb 22, 2014

s_grins wrote:

Great Bustard wrote:

s_grins wrote:

Mike Davis wrote:

If two sensors are the same size, but one has more pixels than the other, there's some probability that sooner or later, the guy equipped with the higher pixel count is going to make a larger print than the guy who has the lower pixel count on the same size sensor. Using the higher number of pixels on the same size sensor to produce a larger print causes the Airy disk diameters at the sensor, for any given f-Number, to suffer more magnification in the final print than they would in a smaller print, and thus a greater likelihood of diffraction inhibiting a desired print resolution. The photographer has to shrink the Airy disks at both the sensor and in the final print (after enlargement), by opening up so that his desired print resolution is not compromised by the larger Airy disks that come with the greater enlargement factor.

Do you really think that the Airy disk reveals itself on the (final, of course) print as a disk?

The Airy Disk "reveals itself" in terms of the blur it introduces into the photo, where the blur is a function of the relative size of the Airy Disk to the photo, not the absolute size:

http://www.josephjamesphotography.com/equivalence/index.htm#diffraction

For the same color and f-ratio, the Airy Disk will have the same diameter, but span a smaller portion of a larger sensor than a smaller sensor, thus resulting in less diffraction softening in the final photo. On the other hand, for the same color and DOF, the Airy Disk spans the same proportion of all sensors, and thus the effect of diffraction softening is the same for all systems at the same DOF.

Let's work an example using green light (λ = 530 nm = 0.00053mm). The diameter of the Airy Disk at f/8 is 2.44 · 0.00053mm · 8 = 0.0103mm, and the diameter of the Airy Disk at f/4 is half as much -- 0.0052mm. For FF, the diameter of the Airy Disk represents 0.0103mm / 43.3mm = 0.024% of the sensor diagonal at f/8 and 0.0052mm / 43.3mm = 0.012% of the diagonal at f/4. For mFT (4/3), the diameter of the Airy Disk represents 0.0103mm / 21.6mm = 0.048% at f/8 and 0.0052mm / 21.6mm = 0.024% at f/4.

Thus, at the same f-ratio, we can see that the diameter of the Airy Disk represents half the proportion of a FF sensor as mFT (4/3), but at the same DOF, the diameter of the Airy Disk represents the same proportion of the sensor. In other words, all systems will suffer the same amount of diffraction softening at the same DOF and display dimensions.
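For anyone who wants to check the arithmetic, here's a short Python sketch of the numbers above (green light, λ = 530 nm; diagonals of 43.3mm for FF and 21.6mm for mFT):

```python
# Quick check of the Airy disk arithmetic above (green light, 530 nm).
# Diameters in mm; sensor diagonals: 43.3 mm (FF), 21.6 mm (mFT 4/3).

WAVELENGTH_MM = 0.00053  # 530 nm

def airy_diameter(f_number):
    """Diameter of the Airy disk (first minimum): 2.44 * wavelength * N."""
    return 2.44 * WAVELENGTH_MM * f_number

def fraction_of_diagonal(f_number, diagonal_mm):
    """Airy disk diameter as a fraction of the sensor diagonal."""
    return airy_diameter(f_number) / diagonal_mm

for name, diagonal in (("FF", 43.3), ("mFT", 21.6)):
    for n in (8, 4):
        print(f"{name} f/{n}: {airy_diameter(n):.4f} mm = "
              f"{100 * fraction_of_diagonal(n, diagonal):.3f}% of diagonal")
```

At the same f-ratio, the mFT percentage comes out twice the FF percentage; at the same DOF (f/8 FF vs f/4 mFT), the percentages match.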

However, the system that began with more resolution will always retain more resolution, but that resolution advantage will asymptotically vanish as the DOF deepens. In absolute terms, the earliest we will notice the effects of diffraction softening is when the diameter of the Airy Disk exceeds that of a pixel (two pixels for a Bayer CFA), but, depending on how large the photo is displayed, we may not notice until the diameter of the Airy Disk is much larger.

Typically, the effects of diffraction softening do not even begin to become apparent until f/11 on FF (f/7.1 on APS-C and f/5.6 on mFT -- 4/3), and start to become strong by f/22 on FF (f/14 on APS-C and f/11 on mFT -- 4/3). By f/32 on FF (f/22 on APS-C, f/16 on mFT -- 4/3) the effects of diffraction softening are so strong that there is little difference in resolution between systems, regardless of the lens, sensor size, or pixel count.
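The format scaling above is just the crop factor applied to the f-number: dividing the FF f-number by the crop factor gives the aperture with equivalent DOF, and therefore (per the argument above) equivalent diffraction softening. A minimal sketch, using nominal crop factors of 1.5 for APS-C and 2.0 for mFT, so the results land near, not exactly on, the stops quoted above:

```python
# Same-DOF (hence same diffraction softening) aperture across formats:
# divide the FF f-number by the crop factor. Crop factors are nominal.

def equivalent_f_number(f_ff, crop_factor):
    """f-number giving the same DOF as f_ff on full frame."""
    return f_ff / crop_factor

for f_ff in (11, 22, 32):
    print(f"f/{f_ff} FF ~ f/{equivalent_f_number(f_ff, 1.5):.1f} APS-C "
          f"~ f/{equivalent_f_number(f_ff, 2.0):.1f} mFT")
```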

We can now summarize the effects of diffraction softening as follows:

• There is no "diffraction limit" except when resolution falls to zero.
• There is a point when the effects of diffraction softening will become the dominant source of blur, and this point will vary from lens to lens as well as where in the frame we are looking (e.g. center vs edges, where the edges typically lag a stop behind the center).
• All else equal, more pixels will always resolve more detail.
• All systems suffer the same diffraction softening at the same DOF, but do not necessarily resolve the same detail at the same DOF, as diffraction softening is merely one of many forms of blur (e.g. lens aberrations, motion blur, large pixels, etc.).
• As the DOF deepens, all systems asymptotically lose detail, and by f/32 on FF (f/22 on APS-C, f/16 on mFT -- 4/3), the differences in resolution between systems are trivial, regardless of the lens, sensor size, or pixel count.

Thank you for the education.

But there is one thing that bothers me. Diffraction reveals itself in my camera as a dull image with excessive CA. I have not seen any blur caused by diffraction. Probably I use the wrong lens.

I don't know how the aperture and/or pixel size relate to CA, but here's an example of how diffraction affects the photo:

http://www.dpreview.com/forums/post/30148342

and here's a very nice demonstration:

http://www.lensrentals.com/blog/2013/03/overcoming-my-fentekaphobia

By the way, for those who feel diffraction is not a big deal, this comes with an added plus: they need not be all that concerned over pixel count or how sharp their lens is, either.

 Re: Enlargement factor has been ignored In reply to Great Bustard, Feb 22, 2014

Great Bustard wrote:

[snip]

Agreed because I'm right

-- hide signature --

Camera in bag tends to stay in bag...

 Re: Diffraction Limit Discussion Continuation In reply to Jonny Boyd, Feb 22, 2014

Jonny Boyd wrote:

The diffraction limit exists, it's just not well understood.

If you (or anybody else) tried to define what the diffraction limit is, there would be no need to argue about whether it exists. Thousands of threads here exist for the only reason that different people mean different things when they use the same words.

 Re: Diffraction Limit Discussion Continuation In reply to Great Bustard, Feb 22, 2014

Great Bustard wrote:

richarddd wrote:

bobn2 wrote:

Great Bustard wrote:

bobn2 wrote:

Great Bustard wrote:

The way I see it is that other sources of blur are usually of considerable more importance than diffraction. The main reason to consider diffraction is to avoid stopping down unnecessarily, such as shooting a landscape at f/16 when everything is easily within the DOF at f/8.

Many photographers seem to stop down far more than they need to in order to get their whole subject within the DOF.

I completely agree with this statement, and that is the point that Sergey often raises.

Partially, it's focussing on the wrong point. Sometimes, just not thinking about how much DOF they actually want.

For a portrait it's easy to choose a focus point...

The far eye, right?

...but for deep DOF should one focus at Merklinger's infinity or conventional wisdom's 1/3 (or is it 1/2) of the way or something else? Opinions clearly differ.

Whatever gets the portions of the scene you want within the DOF. Truth be told, for a static scene, I shoot a shot with a guess, then check by chimping, and adjust as necessary. Super sophisticated, I know, but I'm smart like that.

Hey, that's the way Strad built his fiddles (or at least that's what he told me) and Taskin built his harpsichords (or at least that's what he told my wife).

-- hide signature --

gollywop

 Re: Diffraction Limit Discussion Continuation In reply to Just another Canon shooter, Feb 22, 2014

Just another Canon shooter wrote:

Jonny Boyd wrote:

The diffraction limit exists, it's just not well understood.

If you (or anybody else) tried to define what the diffraction limit is, there would be no need to argue about whether it exists. Thousands of threads here exist for the only reason that different people mean different things when they use the same words.

Usually the reason a definition isn't advanced is that a meaningful one doesn't exist. There are some concepts that exist only in someone's step-skipping imagination.

-- hide signature --

gollywop

 Re: Diffraction Limit Discussion Continuation In reply to Anders W, Feb 22, 2014

Anders W wrote:

Jonny Boyd wrote:

I said that at low resolutions it's more of a plateau than a peak, so you effectively get the same resolution at smaller apertures.

No you didn't say that. You said the peak would occur at different apertures depending on sensor resolution (just as Cambridge in Colour). Do you want me to look up the specific posts for you?

Don't think Cambridge in Colour says that. As I read it he's talking about the point at which diffraction will start to become clearly visible at 100% view (with a good lens).

"Most will find that the f-stop given in the "diffraction limits extinction resolution" field tends to correlate well with the f-stop values where one first starts to see fine detail being softened. All other pages of this website therefore use this as the criterion for determining the diffraction-limited aperture."

http://www.cambridgeincolour.com/tutorials/diffraction-photography-2.htm

The calculator says f/7.3 for a 16mp mFT camera and f/4.9 for a 36mp mFT camera. Sounds quite reasonable to me, as a (rather gross) rule of thumb.
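CiC doesn't publish the calculator's formula on that page, but a criterion of "Airy disk diameter ≈ 2.5 pixel widths" reproduces both of those numbers to within about a tenth. The 2.5 factor here is my assumption, reverse-engineered from the quoted outputs, not a documented constant:

```python
import math

# Hypothetical reconstruction of the CiC "diffraction-limited aperture":
# flag diffraction once the Airy disk spans about 2.5 pixel widths.
# The 2.5 factor is an assumption fitted to the f/7.3 and f/4.9 figures above.

WAVELENGTH_MM = 0.00053  # green light, 530 nm
MFT_WIDTH_MM = 17.3      # mFT sensor width
MFT_ASPECT = 4 / 3

def pixel_pitch_mm(megapixels):
    """Pixel pitch of an mFT sensor with the given pixel count."""
    width_px = math.sqrt(megapixels * 1e6 * MFT_ASPECT)
    return MFT_WIDTH_MM / width_px

def diffraction_limited_aperture(megapixels, disk_to_pixel=2.5):
    """f-number at which the Airy disk spans disk_to_pixel pixel widths."""
    return disk_to_pixel * pixel_pitch_mm(megapixels) / (2.44 * WAVELENGTH_MM)

print(f"16 MP mFT: f/{diffraction_limited_aperture(16):.1f}")
print(f"36 MP mFT: f/{diffraction_limited_aperture(36):.1f}")
```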

 Cambridge in Colour In reply to Steen Bay, Feb 22, 2014

Steen Bay wrote:

[snip]

P.S. - Maybe I should also have quoted his conclusion :

"Thus far, you're probably thinking, "diffraction more easily limits resolution as the number of camera megapixels increases, so more megapixels must be bad, right?" No — at least not as far as diffraction is concerned. Having more megapixels just provides more flexibility. Whenever your subject matter doesn't require a high f-stop, you have the ability to make a larger print, or to crop the image more aggressively. Alternatively, a 20MP camera that requires an f-stop beyond its diffraction limit could always downsize its image to produce the equivalent from a 10MP camera that uses the same f-stop (but isn't yet diffraction limited).

Regardless, the onset of diffraction is gradual, and its limiting f-stop shouldn't be treated as unsurpassable. Diffraction is just something to be aware of when choosing your exposure settings, similar to how one would balance other trade-offs such as noise (ISO) vs shutter speed. While calculations can be a helpful at-home guide, the best way to identify the optimal trade-off is to experiment — using your particular lens and subject."

Again, sounds quite reasonable to me. Don't quite understand what all the fuss is about.

 Re: Diffraction Limit Discussion Continuation In reply to Anders W, Feb 22, 2014

Anders W wrote:

Jonny Boyd wrote:

Anders W wrote:

Jonny Boyd wrote:

There's nothing there that I hadn't already said to you in other ways.

There most certainly is: The recognition that the point along the aperture range where peak image resolution occurs is independent of sensor resolution.

I never denied that.

Yes you did. Do you want me to look up the specific posts where you denied it or are you going to acknowledge it voluntarily?

There was a post (Re: Nope.) where I said that peak resolution for a lens affected only by diffraction would be wide open. Other optical effects would limit that. The lens itself will have an inherent peak aperture, and I made the point that as you stop down and the resolution of the lens drops, cameras become less and less able to take advantage.

The way I described it was oversimplified because I forgot to take into account the fact that you still gain an advantage from having a sensor resolution higher than the lens resolution. If you had three sensors with resolutions s_1 > s_2 > s_3, then when the lens is stopped down enough to reduce its resolution to about 3 * s_2, you get no meaningful advantage from using s_1 instead of s_2, though you're still better off using either of them than s_3. Stop down so that the lens resolution is about 3 * s_3 and there's no meaningful difference between the resolutions of the sensors. That's due to diffraction, so that's the practical limit imposed by diffraction if you're trying to decide whether it's worth using a higher resolution camera or not. That seemed to me to be the practical point of asking where diffraction causes a limitation.

As I said, in that particular post I forgot to account for higher resolution sensors being useful up to a point and left out the factor of 3, but otherwise the broad point I was making was still correct.

In this post (Re: So, what are the m4/3 diffraction limits?) I said that for a sensor with much lower resolution than the resolution of the lens, there would be a plateau of sharpness, rather than a peak. I've modelled that and put it on a chart which demonstrates exactly that. I'm guessing the disagreement here is over the definition of peaks and plateaus, so I probably should have clarified that I wasn't talking about perfectly flat sharpness, but rather a peak so spread out, with minimal drop across multiple f-stops (certainly much flatter than the curve for higher res images), that it's virtually indistinguishable from flat. If system resolution only drops 1% between peak aperture and f/22, then that's a plateau compared to a system where resolution drops 40% between peak aperture and f/22.

In retrospect I should have taken more time to spell out exactly what I meant by plateau.
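The plateau-vs-peak contrast can be sketched numerically with the quadrature model used in this thread, r = (1/l^2 + 1/s^2)^(-1/2), treating the lens as purely diffraction limited (l = 1/(λN) lp/mm). All the numbers below are illustrative assumptions, not measurements of any real lens or sensor:

```python
# Plateau vs peak: combine a diffraction-limited lens with two sensors of very
# different resolution and compare the drop from f/4 to f/22. All numbers are
# illustrative; the lens is idealised as l(N) = 1 / (wavelength * N) lp/mm.

WAVELENGTH_MM = 0.00053  # green light, 530 nm

def lens_resolution(f_number):
    """Diffraction cutoff (lp/mm) of an otherwise perfect lens."""
    return 1 / (WAVELENGTH_MM * f_number)

def system_resolution(f_number, sensor_lpmm):
    """Quadrature combination of lens and sensor resolution."""
    return (1 / lens_resolution(f_number) ** 2 + 1 / sensor_lpmm ** 2) ** -0.5

for sensor in (30, 200):  # low-res vs high-res sensor (lp/mm)
    peak, stopped = system_resolution(4, sensor), system_resolution(22, sensor)
    print(f"sensor {sensor} lp/mm: f/4 -> {peak:.0f}, "
          f"f/22 -> {stopped:.0f} ({100 * (1 - stopped / peak):.0f}% drop)")
```

With these inputs the low-res sensor loses only a few percent across the whole range (a plateau), while the high-res sensor loses more than half its peak resolution (a pronounced peak).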

I said that at low resolutions it's more of a plateau than a peak, so you effectively get the same resolution at smaller apertures.

No you didn't say that. You said the peak would occur at different apertures depending on sensor resolution (just as Cambridge in Colour). Do you want me to look up the specific posts for you?

I haven't read everything on the CiC site, but the bit I agreed with, which I thought was in dispute, was when diffraction begins to degrade detail, which is a different discussion from the question of when it limits you so much that you may as well use a lower resolution camera. CiC is right that diffraction becomes noticeable at wider apertures for higher resolution sensors and therefore degrades their detail sooner. As I've repeatedly said in many posts, that's not the same as saying that those higher resolution cameras don't still have a detail advantage.

Substantively, I have only two comments: That peak sharpness will occur at exactly rather than approximately the same aperture and that "my/our" side is hardly the one to blame for any conceptual or terminological misunderstandings.

Anders, I avoided assigning blame to anyone and put it down to misunderstanding.

Yes I saw that. So I pointed out what was missing.

Don't be an ass in response.

I am not being an ass. You decidedly are by calling me one for absolutely no good reason.

You felt it necessary to assign blame and point fingers when I had hoped the conversation could have a fresh start.

Look! A number of us took time to teach you (I don't apologize for the expression) what things are actually like.

Your first reply to me basically ignored the lengthy post I'd written to try and work out where the disagreement originated, and instead of engaging with any of the points, you basically said 'Here's a formula that shows why you're wrong. Go work out why.' Maybe you didn't mean it that way, but it came across as rather arrogant with the lack of engagement and disinclination to explain things.

We were rewarded by all sorts of insults.

Really? Where? I suggested that your first reply was unhelpful for reasons that I've outlined again here, and elsewhere suggested that you didn't understand diffraction and disagreed with the laws of physics (Re: So, what are the m4/3 diffraction limits?). In retrospect that was probably a misunderstanding over the word limit. I was talking about diffraction limiting the amount of detail that could be resolved while you were saying that higher res sensors would always improve detail. I thought you meant that you could get more detail than what diffraction effects limit you to i.e. the system resolution could exceed the diffraction-imposed lens resolution, but presumably you actually meant that you could always get closer and closer to that limit? Similarly you thought I was saying that higher resolution sensors stop adding detail at a point, when I meant that there's a level of detail that no sensor can exceed and the gains you get, at a certain point, are so negligible that you can ignore them.

In so far as I may have misunderstood you or failed to express myself clearly, I apologise.

Now we are somehow made out as the guilty party just because you are not man enough to stand up and say you made a mistake.

I've stated quite clearly several times now that I regard the problems in the previous thread as largely being down to misunderstanding and had no interest in pointing the finger at anyone, preferring to make a fresh start. You're the one who thought it necessary to point the finger and assign blame, so I find your comments here rather ironic.

Where was I dismissive about the idea as I spelled it out above? Please provide specific references (the post/posts you have in mind and the passage/passages in those posts).

I'm not interested in dissecting the previous discussion.

For pretty obvious reasons.

Yes, the reason I stated: I would rather move forward with the discussion and add to the forum, rather than dig up the past and waste people's time. I'm sure other commenters would rather read about photography than our personal disagreements, so can we get back to the actual discussion?

 Re: Diffraction Limit Discussion Continuation In reply to Anders W, Feb 22, 2014

Anders W wrote:

Jonny Boyd wrote:

I said that at low resolutions it's more of a plateau than a peak, so you effectively get the same resolution at smaller apertures.

No you didn't say that. You said the peak would occur at different apertures depending on sensor resolution (just as Cambridge in Colour). Do you want me to look up the specific posts for you?

I forgot to add the following quotes from CiC:

Diffraction thus sets a fundamental resolution limit that is independent of the number of megapixels, or the size of the film format. It depends only on the f-number of your lens, and on the wavelength of light being imaged. One can think of it as the smallest theoretical "pixel" of detail in photography. Furthermore, the onset of diffraction is gradual; prior to limiting resolution, it can still reduce small-scale contrast by causing airy disks to partially overlap.

In practice, the diffraction limit doesn't necessarily bring about an abrupt change; there is actually a gradual transition between when diffraction is and is not visible. Furthermore, this limit is only a best-case scenario when using an otherwise perfect lens; real-world results may vary.

This should not lead you to think that "larger apertures are better," even though very small apertures create a soft image; most lenses are also quite soft when used wide open (at the largest aperture available). Camera systems typically have an optimal aperture in between the largest and smallest settings; with most lenses, optimal sharpness is often close to the diffraction limit, but with some lenses this may even occur prior to the diffraction limit. These calculations only show when diffraction becomes significant, not necessarily the location of optimum sharpness (see camera lens quality: MTF, resolution & contrast for more on this).

An MTF of 1.0 represents perfect contrast preservation, whereas values less than this mean that more and more contrast is being lost — until an MTF of 0, where line pairs can no longer be distinguished at all. This resolution limit is an unavoidable barrier with any lens; it only depends on the camera lens aperture and is unrelated to the number of megapixels. The figure below compares a perfect lens to two real-world examples:

The aperture corresponding to the maximum MTF is the so-called "sweet spot" of a lens, since images will generally have the best sharpness and contrast at this setting. On a full frame or cropped sensor camera, this sweet spot is usually somewhere between f/8.0 and f/16, depending on the lens. The location of this sweet spot is also independent of the number of megapixels in your camera.

I don't think CiC says what you think it says, if you believe it claims that peak aperture depends on the sensor resolution rather than just the lens resolution.

 Re: Diffraction Limit Discussion Continuation In reply to Steen Bay, Feb 22, 2014

Steen Bay wrote:

Anders W wrote:

Jonny Boyd wrote:

I said that at low resolutions it's more of a plateau than a peak, so you effectively get the same resolution at smaller apertures.

No you didn't say that. You said the peak would occur at different apertures depending on sensor resolution (just as Cambridge in Colour). Do you want me to look up the specific posts for you?

Don't think Cambridge in Colour says that. As I read it he's talking about the point at which diffraction will start to become clearly visible at 100% view (with a good lens).

Then he should have been very clear that's what he means.

"Most will find that the f-stop given in the "diffraction limits extinction resolution" field tends to correlate well with the f-stop values where one first starts to see fine detail being softened.

See, he makes no mention of 'clearly visible' nor '100% view'. Moreover, 'clearly visible at 100% view' is a ridiculous condition: who looks at their images at 100% view? (And in any case, the reproduction size of '100% view' is entirely dependent on the monitor or other medium you use to look at it.) As a criterion for everyday photography, it's useless.

All other pages of this website therefore use this as the criterion for determining the diffraction-limited aperture."

http://www.cambridgeincolour.com/tutorials/diffraction-photography-2.htm

The calculator says f/7.3 for a 16mp mFT camera and f/4.9 for a 36mp mFT camera. Sounds quite reasonable to me, as a (rather gross) rule of thumb.

What 'sounds reasonable' to you is not generally a good indicator of what is useful. Even if we take your condition above ('clearly visible at 100% view') and then add in the other riders that we'd need to make any sense of it ('assuming 100% view was on a 72 ppi monitor viewed at 60 cm'), his calculator still doesn't calculate it.

-- hide signature --

Bob

 Re: Cambridge in Colour In reply to Steen Bay, Feb 22, 2014

Steen Bay wrote:

Steen Bay wrote:

Anders W wrote:

Jonny Boyd wrote:

I said that at low resolutions it's more of a plateau than a peak, so you effectively get the same resolution at smaller apertures.

No you didn't say that. You said the peak would occur at different apertures depending on sensor resolution (just as Cambridge in Colour). Do you want me to look up the specific posts for you?

Don't think Cambridge in Colour says that. As I read it he's talking about the point at which diffraction will start to become clearly visible at 100% view (with a good lens).

"Most will find that the f-stop given in the "diffraction limits extinction resolution" field tends to correlate well with the f-stop values where one first starts to see fine detail being softened. All other pages of this website therefore use this as the criterion for determining the diffraction-limited aperture."

http://www.cambridgeincolour.com/tutorials/diffraction-photography-2.htm

The calculator says f/7.3 for a 16mp mFT camera and f/4.9 for a 36mp mFT camera. Sounds quite reasonable to me, as a (rather gross) rule of thumb.

P.S. - Maybe I should also have quoted his conclusion :

"Thus far, you're probably thinking, "diffraction more easily limits resolution as the number of camera megapixels increases, so more megapixels must be bad, right?" No — at least not as far as diffraction is concerned. Having more megapixels just provides more flexibility. Whenever your subject matter doesn't require a high f-stop, you have the ability to make a larger print, or to crop the image more aggressively. Alternatively, a 20MP camera that requires an f-stop beyond its diffraction limit could always downsize its image to produce the equivalent from a 10MP camera that uses the same f-stop (but isn't yet diffraction limited).

Regardless, the onset of diffraction is gradual, and its limiting f-stop shouldn't be treated as unsurpassable.

So don't use the word 'limit' which suggests the opposite.

Diffraction is just something to be aware of when choosing your exposure settings, similar to how one would balance other trade-offs such as noise (ISO) vs shutter speed.

So don't scare people into thinking there is some 'limit' in the first place.

While calculations can be a helpful at-home guide, the best way to identify the optimal trade-off is to experiment — using your particular lens and subject."

Which says 'my calculator is nonsense, don't take any notice'. The trouble is, people do.

Again, sounds quite reasonable to me. Don't quite understand what all the fuss is about.

He added that after the first wave of criticism of the ridiculous 'diffraction limit calculator'. Stating something ridiculous and then adding a rider saying 'don't take any notice' is pretty feeble. Just get rid of the ridiculousness.

-- hide signature --

Bob

 Re: Diffraction Limit Discussion Continuation In reply to knickerhawk, Feb 22, 2014

Here's a further thought on practical limitations imposed by diffraction.

If you print a photo, at what point does diffraction reduce the perceived resolution for cameras with different numbers of pixels?

Mathematically of course, diffraction will reduce image quality at the same aperture for every camera. However this will not always produce a perceptible decline in quality so the practical limit may be different to the absolute technical limit.

The following will make use of resolution numbers that are made up for the purposes of illustration to demonstrate the point, rather than to make a declaration about the use of any particular combination of lens, sensor, and printer. This is a demonstration of principle, rather than an examination of any particular set up.

The resolution of a final image, r, will be determined by the lens resolution l, the sensor resolution s, and the printer resolution p. That's somewhat simplifying things, but since the printer resolution will remain constant here, we can assume that any other factors affecting resolution are included in that constant.

r = 1 / ( 1/l^2 + 1/s^2 + 1/p^2 )^(1/2)

If we use a subset of the lens and sensor resolutions in my earlier example, and take p = 100, then we get the following results:

Naturally this looks very similar to earlier resolution charts.

Now if we turn to consider the issue of when a difference in quality will be noticeable, it would be helpful perhaps to look at relative quality differences instead of absolute, so we'll now look at a chart of print resolution relative to the resolution of a print produced from an image taken at the peak aperture (f/4).

Remember this is resolution relative to the resolution at peak aperture, so the lowest res sensor, which is relatively unaffected by diffraction, remains very close to 100% relative resolution at every aperture, but will, in absolute terms, be much worse quality than the highest res sensor which shows the biggest changes in relative resolution.

We now need a cut-off point for when a change in resolution will be noticeable. If we assume that a 5% change in resolution is noticeable (i.e. a difference becomes noticeable when resolution drops below 95% of peak resolution), then we see that diffraction only starts to limit the perceived quality of a print at smaller apertures for lower res sensors.

s = 1, 3, or 10 are never perceptibly limited by diffraction; s = 30 is limited at f/22; s = 100 at f/11; s = 300, 1000, 3000 at f/8.

Again, this is for a purely theoretical setup so real world examples of sensors, lenses, and printers may have more or less pronounced behaviour, depending on actual resolution. My model also assumes that the percentage drop in relative resolution that becomes noticeable would be the same for every absolute resolution. It may be that at higher absolute resolutions a change in relative resolution would be noticeable at a higher or lower resolution. I'm not sure about that one.
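The model above can be sketched in a few lines of code. All the numbers below are hypothetical, chosen only to illustrate the principle (as the post itself says), not to describe any real lens, sensor, or printer:

```python
import math

def system_res(l, s, p=100.0):
    """Final-print resolution from lens (l), sensor (s) and printer (p),
    combining the component resolutions in quadrature:
        r = 1 / sqrt(1/l^2 + 1/s^2 + 1/p^2)
    """
    return 1.0 / math.sqrt(1.0 / l**2 + 1.0 / s**2 + 1.0 / p**2)

# Hypothetical lens resolution by f-number: peaks at f/4, then falls off
# as diffraction grows (the shape loosely mimics a stopped-down MTF curve).
lens_res = {4: 200.0, 8: 150.0, 11: 110.0, 16: 75.0, 22: 55.0}

for s in (3, 30, 300):  # log-spaced sensor resolutions, as in the post
    peak = system_res(lens_res[4], s)
    for f_number, l in lens_res.items():
        rel = system_res(l, s) / peak  # resolution relative to the f/4 peak
        mark = "  <-- below the 95% cut-off" if rel < 0.95 else ""
        print(f"s={s:3d}  f/{f_number:<2}  {rel:6.1%}{mark}")
```

With these made-up numbers the lowest-res sensor (s = 3) stays above 99% relative resolution at every aperture, while the s = 300 sensor crosses the 95% cut-off well before f/22, which is the "practical limit moves with pixel count" behaviour described above.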

 Re: Diffraction Limit Discussion Continuation In reply to bobn2, Feb 22, 2014

bobn2 wrote:

Steen Bay wrote:

Anders W wrote:

Jonny Boyd wrote:

I said that at low resolutions it's more of a plateau than a peak, so you effectively get the same resolution at smaller apertures.

No you didn't say that. You said the peak would occur at different apertures depending on sensor resolution (just as Cambridge in Colour). Do you want me to look up the specific posts for you?

Don't think Cambridge in Colour says that. As I read it he's talking about the point at which diffraction will start to become clearly visible at 100% view (with a good lens).

Then he should have been very clear that's what he means.

"Most will find that the f-stop given in the "diffraction limits extinction resolution" field tends to correlate well with the f-stop values where one first starts to see fine detail being softened.

See, he makes no mention of 'clearly visible' nor '100% view'. Moreover, clearly visible at 100% view is a ridiculous condition: who looks at their images at 100% view? And in any case, the reproduction size of '100% view' is entirely dependent on the monitor or other medium you use to look at it. As a criterion for everyday photography, it's useless.

Actually he does:

Use the following calculator to estimate when diffraction begins to influence an image. This only applies for images viewed on-screen at 100%; whether this will be apparent in the final print also depends on viewing distance and print size. To calculate this as well, please visit: diffraction limits and photography.

As evidenced in pretty much every photography forum in existence, people like to pixel-peep, even when minor details at 100% don't all show up in real world viewing.

All other pages of this website therefore use this as the criterion for determining the diffraction-limited aperture."

http://www.cambridgeincolour.com/tutorials/diffraction-photography-2.htm

The calculator says f/7.3 for a 16mp mFT camera and f/4.9 for a 36mp mFT camera. Sounds quite reasonable to me, as a (rather gross) rule of thumb.

What 'sounds reasonable' to you is not generally a good indicator of what is useful.

Yet you were quick to say that CiC is ridiculous because you can't imagine the real world usefulness.

Even if we take your condition above ('clearly visible at 100% view') and then add in the other riders that we'd need to make any sense of it ('assuming 100% view was on a 72 ppi monitor viewed at 60 cm') his calculator still doesn't calculate it.

Actually it does. I refer you again to the quote above, in particular the link to his advanced calculator. Click on the 'Advanced' button and you can input those sorts of parameters.

 Re: Diffraction Limit Discussion Continuation In reply to bobn2, Feb 22, 2014

bobn2 wrote <original post heavily snipped>:

See, he makes no mention of 'clearly visible' nor '100% view'. Moreover, clearly visible at 100% view is a ridiculous condition: who looks at their images at 100% view? And in any case, the reproduction size of '100% view' is entirely dependent on the monitor or other medium you use to look at it. As a criterion for everyday photography, it's useless.

Who looks at their images at 100%? Armies of pixel peepers, a group which correlates all too well with many who post here.

What 'sounds reasonable' to you is not generally a good indicator of what is useful.

It's the modern trend. Individual instinct trumps evidence. Look at statistics for the number who don't believe in evolution or who believe the sun revolves around the earth, etc., etc.

richarddd's gear list:richarddd's gear list
Olympus OM-D E-M5 Panasonic Lumix G Vario 7-14mm F4 ASPH Olympus M.Zuiko Digital ED 40-150mm 1:4-5.6 R Olympus M.Zuiko Digital 45mm F1.8 Samyang 7.5mm F3.5 UMC Fisheye MFT +3 more
 Re: Diffraction Limit Discussion Continuation In reply to Jonny Boyd, Feb 22, 2014

Jonny Boyd wrote:

bobn2 wrote:

Steen Bay wrote:

Anders W wrote:

Jonny Boyd wrote:

I said that at low resolutions it's more of a plateau than a peak, so you effectively get the same resolution at smaller apertures.

No you didn't say that. You said the peak would occur at different apertures depending on sensor resolution (just as Cambridge in Colour). Do you want me to look up the specific posts for you?

Don't think Cambridge in Colour says that. As I read it he's talking about the point at which diffraction will start to become clearly visible at 100% view (with a good lens).

Then he should have been very clear that's what he means.

"Most will find that the f-stop given in the "diffraction limits extinction resolution" field tends to correlate well with the f-stop values where one first starts to see fine detail being softened.

See, he makes no mention of 'clearly visible' nor '100% view'. Moreover, clearly visible at 100% view is a ridiculous condition: who looks at their images at 100% view? And in any case, the reproduction size of '100% view' is entirely dependent on the monitor or other medium you use to look at it. As a criterion for everyday photography, it's useless.

Actually he does:

Use the following calculator to estimate when diffraction begins to influence an image. This only applies for images viewed on-screen at 100%; whether this will be apparent in the final print also depends on viewing distance and print size. To calculate this as well, please visit: diffraction limits and photography.

OK, my apologies to him. He tells people it will be useless. Still, in any case the calculator still doesn't actually tell you 'when diffraction begins to influence an image' because his theory of how diffraction and pixellation interact is just plain wrong.

As evidenced in pretty much every photography forum in existence, people like to pixel-peep, even when minor details at 100% don't all show up in real world viewing.

Liking something and it actually being useful are completely different things.

All other pages of this website therefore use this as the criterion for determining the diffraction-limited aperture."

http://www.cambridgeincolour.com/tutorials/diffraction-photography-2.htm

The calculator says f/7.3 for a 16mp mFT camera and f/4.9 for a 36mp mFT camera. Sounds quite reasonable to me, as a (rather gross) rule of thumb.

What 'sounds reasonable' to you is not generally a good indicator of what is useful.

Yet you were quick to say that CiC is ridiculous because you can't imagine the real world usefulness.

What is the 'real world usefulness'?

Even if we take your condition above ('clearly visible at 100% view') and then add in the other riders that we'd need to make any sense of it ('assuming 100% view was on a 72 ppi monitor viewed at 60 cm') his calculator still doesn't calculate it.

Actually it does. I refer you again to the quote above, in particular the link to his advanced calculator. Click on the 'Advanced' button and you can input those sorts of parameters.

It doesn't - as I said, the theory on which it operates is wrong.


Bob

 Re: Diffraction Limit Discussion Continuation In reply to Jonny Boyd, Feb 22, 2014

Jonny Boyd wrote:

Here's a further thought on practical limitations imposed by diffraction.

If you print a photo, at what point does diffraction reduce the perceived resolution for cameras with different numbers of pixels?

Mathematically of course, diffraction will reduce image quality at the same aperture for every camera. However this will not always produce a perceptible decline in quality so the practical limit may be different to the absolute technical limit.

The following will make use of resolution numbers that are made up for the purposes of illustration to demonstrate the point, rather than to make a declaration about the use of any particular combination of lens, sensor, and printer. This is a demonstration of principle, rather than an examination of any particular set up.

The resolution of a final image, r, will be determined by the lens resolution l, the sensor resolution s, and the printer resolution p. That's somewhat simplifying things, but since the printer resolution will remain constant here, we can assume that any other factors affecting resolution are included in that constant.

r = 1 / ( 1/l^2 + 1/s^2 + 1/p^2 )^(1/2)

If we use a subset of the lens and sensor resolutions in my earlier example, and take p = 100, then we get the following results:

Naturally this looks very similar to earlier resolution charts.

Now if we turn to consider the issue of when a difference in quality will be noticeable, it would be helpful perhaps to look at relative quality differences instead of absolute, so we'll now look at a chart of print resolution relative to the resolution of a print produced from an image taken at the peak aperture (f/4).

Remember this is resolution relative to the resolution at peak aperture, so the lowest res sensor, which is relatively unaffected by diffraction, remains very close to 100% relative resolution at every aperture, but will, in absolute terms, be much worse quality than the highest res sensor which shows the biggest changes in relative resolution.

We now need a cut-off point for when a change in resolution will be noticeable. If we assume that a 5% change in resolution is noticeable (i.e. a difference becomes noticeable when resolution drops below 95% of peak resolution), then we see that diffraction only starts to limit the perceived quality of a print at smaller apertures for lower res sensors.

s = 1, 3, or 10 are never perceptibly limited by diffraction; s = 30 is limited at f/22; s = 100 at f/11; s = 300, 1000, 3000 at f/8.

Again, this is for a purely theoretical setup

I would not aggrandise it with the word 'theoretical'. There is no theory behind this setup, merely arbitrariness.

so real world examples of sensors, lenses, and printers may have more or less pronounced behaviour, depending on actual resolution. My model also assumes that the percentage drop in relative resolution that becomes noticeable would be the same for every absolute resolution. It may be that at higher absolute resolutions a change in relative resolution would be noticeable at a higher or lower resolution. I'm not sure about that one.

This is a particularly futile exercise. Either do the theory or work the real numbers. Working made-up numbers tells you absolutely nothing. In any case, on what do you base your assumption that a 5% change in resolution is noticeable? Do you even know that the noticeability simply scales? Maybe there's a threshold? Maybe it depends on viewing size?

I'm wondering why you feel it necessary to work so hard to find some meaning for 'diffraction limit' without being able to demonstrate that such a definition is even useful. What have you got invested in there being a 'diffraction limit'?


Bob

 Re: Diffraction Limit Discussion Continuation In reply to bobn2, Feb 22, 2014

bobn2 wrote:

Jonny Boyd wrote:

Here's a further thought on practical limitations imposed by diffraction.

If you print a photo, at what point does diffraction reduce the perceived resolution for cameras with different numbers of pixels?

Mathematically of course, diffraction will reduce image quality at the same aperture for every camera. However this will not always produce a perceptible decline in quality so the practical limit may be different to the absolute technical limit.

The following will make use of resolution numbers that are made up for the purposes of illustration to demonstrate the point, rather than to make a declaration about the use of any particular combination of lens, sensor, and printer. This is a demonstration of principle, rather than an examination of any particular set up.

The resolution of a final image, r, will be determined by the lens resolution l, the sensor resolution s, and the printer resolution p. That's somewhat simplifying things, but since the printer resolution will remain constant here, we can assume that any other factors affecting resolution are included in that constant.

r = 1 / ( 1/l^2 + 1/s^2 + 1/p^2 )^(1/2)

If we use a subset of the lens and sensor resolutions in my earlier example, and take p = 100, then we get the following results:

Naturally this looks very similar to earlier resolution charts.

Now if we turn to consider the issue of when a difference in quality will be noticeable, it would be helpful perhaps to look at relative quality differences instead of absolute, so we'll now look at a chart of print resolution relative to the resolution of a print produced from an image taken at the peak aperture (f/4).

Remember this is resolution relative to the resolution at peak aperture, so the lowest res sensor, which is relatively unaffected by diffraction, remains very close to 100% relative resolution at every aperture, but will, in absolute terms, be much worse quality than the highest res sensor which shows the biggest changes in relative resolution.

We now need a cut-off point for when a change in resolution will be noticeable. If we assume that a 5% change in resolution is noticeable (i.e. a difference becomes noticeable when resolution drops below 95% of peak resolution), then we see that diffraction only starts to limit the perceived quality of a print at smaller apertures for lower res sensors.

s = 1, 3, or 10 are never perceptibly limited by diffraction; s = 30 is limited at f/22; s = 100 at f/11; s = 300, 1000, 3000 at f/8.

Again, this is for a purely theoretical setup

I would not aggrandise it with the word 'theoretical'. There is no theory behind this setup, merely arbitrariness.

There's plenty of theory Bob, all explained in this post and previous posts in this thread. It's just using the equation for determining the resolution of a system with multiple components that each have limited resolution themselves. We're all agreed that it's a good equation and, as Anders requested, I'm working out the implications. I took a while to carefully explain my methodology, so if you'd like to contribute usefully here you could begin by highlighting where you think my methodology falls flat, rather than spouting off with an unsubstantiated opinion.

As for arbitrariness, the numbers I used are quite deliberate. I used lens resolution numbers that give a curve with a shape broadly the same as a standard MTF curve for a lens. Similar real ones have been shown plenty of times in this thread and others. The printer resolution was chosen to be less than the lens, but not so low that it would dominate the final resolution. The sensor resolutions were chosen to cover the range of scenarios from sensor-limited output, to lens-limited, and stuff in between. You'll notice the sensor numbers scale logarithmically for precisely this reason.

so real world examples of sensors, lenses, and printers may have more or less pronounced behaviour, depending on actual resolution. My model also assumes that the percentage drop in relative resolution that becomes noticeable would be the same for every absolute resolution. It may be that at higher absolute resolutions a change in relative resolution would be noticeable at a higher or lower resolution. I'm not sure about that one.

This is a particularly futile exercise. Either do the theory or work the real numbers.

As I said, this is a simple application of an equation we all agree on.

Working made-up numbers tells you absolutely nothing.

It tells you plenty. You have made a universal claim that diffraction always causes peak sharpness at the same aperture, regardless of pixel count. While I agree that that is mathematically correct, I also believe that a drop in resolution will in some cases only become noticeable at a smaller aperture for a low res sensor than a large one. That's all I'm claiming. If I can find one instance where numbers demonstrate it, then I am correct.

In any case, on what do you base your assumption that a 5% change in resolution is noticeable? Do you even know that the noticeability simply scales? Maybe there's a threshold? Maybe it depends on viewing size?

I was assuming different viewing sizes because my interest was in the relationship between diffraction effects and pixel count. Introducing another variable would be unhelpful. Anyway, my point can be demonstrated trivially by looking at the f/4 and f/22 results for s = 3 and s = 300. For s = 3, there is a relative drop in sharpness of 0.1%. For s = 300, the drop is going to be 35.4%. The s = 3 drop is unnoticeable. If the s = 300 drop isn't noticeable, then that sensor still has its peak sharpness well after the peak aperture. If the drop is noticeable, then it has a peak sharpness at a lower aperture than the s = 3 sensor. Either way, my point is made.

I'm wondering why you feel it necessary to work so hard to find some meaning for 'diffraction limit' without being able to demonstrate that such a definition is even useful. What have you got invested in there being a 'diffraction limit'?

How about we discuss the subject at hand rather than speculating about people's motives?

 Re: Diffraction Limit Discussion Continuation In reply to Jonny Boyd, Feb 22, 2014

Jonny Boyd wrote:

It tells you plenty. You have made a universal claim that diffraction always causes peak sharpness at the same aperture, regardless of pixel count. While I agree that that is mathematically correct...

Not merely "mathematically correct", but supported by all the lens (system) tests.

...I also believe that a drop in resolution will in some cases only become noticeable at a smaller aperture for a low res sensor than a large one.

When the drop in resolution becomes apparent depends on many factors, not the least of which is how large you display the photo. In any case, we also all agree that the sensor with more pixels will have higher resolution, all else equal.

Thus, since a given lens peaks at a particular aperture regardless of the pixel count of the sensor, and a sensor with more pixels will always resolve more than a sensor with fewer pixels (all else equal), then in what sense does "diffraction limit" have any meaning, other than how Bob characterized it:

http://www.dpreview.com/forums/post/53154169

The 'limit' is just a bogus idea. McHugh has taken a well defined optical term - a 'diffraction limited' system is one so good that diffraction is the only limit on its performance - turned it inside out and made it into something senseless.

That's all I'm claiming.

In what way are sensors with more pixels any more "diffraction limited" than sensors with fewer pixels?  That when viewing at 100% on a computer monitor you can see the resolution fall faster from the peak aperture, even though the peak aperture is the same, regardless of the pixel count, and the sensor with the higher pixel count has greater resolution?

If I can find one instance where numbers demonstrate it, then I am correct.

"A number multiplied by itself is always the same number.  For example, 1x1 = 1."  So because I found a single instance where numbers demonstrate the claim, does that make the claim correct?

Just saying that we have to be careful with language as sloppy language can lead to misinterpretations, just like the term "diffraction limited".

 Re: Diffraction Limit Discussion Continuation In reply to Jonny Boyd, Feb 22, 2014

Jonny Boyd wrote:

bobn2 wrote:

Jonny Boyd wrote:

Here's a further thought on practical limitations imposed by diffraction.

If you print a photo, at what point does diffraction reduce the perceived resolution for cameras with different numbers of pixels?

Mathematically of course, diffraction will reduce image quality at the same aperture for every camera. However this will not always produce a perceptible decline in quality so the practical limit may be different to the absolute technical limit.

The following will make use of resolution numbers that are made up for the purposes of illustration to demonstrate the point, rather than to make a declaration about the use of any particular combination of lens, sensor, and printer. This is a demonstration of principle, rather than an examination of any particular set up.

The resolution of a final image, r, will be determined by the lens resolution l, the sensor resolution s, and the printer resolution p. That's somewhat simplifying things, but since the printer resolution will remain constant here, we can assume that any other factors affecting resolution are included in that constant.

r = 1 / ( 1/l^2 + 1/s^2 + 1/p^2 )^(1/2)

If we use a subset of the lens and sensor resolutions in my earlier example, and take p = 100, then we get the following results:

Naturally this looks very similar to earlier resolution charts.

Now if we turn to consider the issue of when a difference in quality will be noticeable, it would be helpful perhaps to look at relative quality differences instead of absolute, so we'll now look at a chart of print resolution relative to the resolution of a print produced from an image taken at the peak aperture (f/4).

Remember this is resolution relative to the resolution at peak aperture, so the lowest res sensor, which is relatively unaffected by diffraction, remains very close to 100% relative resolution at every aperture, but will, in absolute terms, be much worse quality than the highest res sensor which shows the biggest changes in relative resolution.

We now need a cut-off point for when a change in resolution will be noticeable. If we assume that a 5% change in resolution is noticeable (i.e. a difference becomes noticeable when resolution drops below 95% of peak resolution), then we see that diffraction only starts to limit the perceived quality of a print at smaller apertures for lower res sensors.

s = 1, 3, or 10 are never perceptibly limited by diffraction; s = 30 is limited at f/22; s = 100 at f/11; s = 300, 1000, 3000 at f/8.

Again, this is for a purely theoretical setup

I would not aggrandise it with the word 'theoretical'. There is no theory behind this setup, merely arbitrariness.

There's plenty of theory Bob, all explained in this post and previous posts in this thread. It's just using the equation for determining the resolution of a system with multiple components that each have limited resolution themselves.

That isn't 'theory'. The equation you're using is itself a 'rule of thumb', based on the idea that the component blurs simply add in quadrature. For a start, we don't know what you mean by 'resolution'. Are you taking MTF50, or what?

We're all agreed that it's a good equation

It depends what you mean by a 'good equation' - it's a decent approximation for some purposes.

and as Anders requested, I'm working out the implications. I took a while to carefully explain my methodology, so if you'd like to contribute usefully here you could begin by highlighting where you think my methodology falls flat, rather than pouting off with an unsubstantiated opinion.

Where your methodology falls flat is that the 'experiment' you're performing is fictitious, it's not based on real numbers, nor is it based on the theory.

As for arbitrariness, the numbers I used are quite deliberate. I used lens resolution numbers that give a curve with a shape broadly the same as a standard MTF curve for a lens. Similar real ones have been shown plenty of times in this thread and others. The printer resolution was chosen to be less than the lens, but not so low that it would dominate the final resolution. The sensor resolutions were chosen to cover the range of scenarios from sensor-limited output, to lens-limited, and stuff in between. You'll notice the sensor numbers scale logarithmically for precisely this reason.

In short, your numbers were chosen to get the result that you wanted to demonstrate.

so real world examples of sensors, lenses, and printers may have more or less pronounced behaviour, depending on actual resolution. My model also assumes that the percentage drop in relative resolution that becomes noticeable would be the same for every absolute resolution. It may be that at higher absolute resolutions a change in relative resolution would be noticeable at a higher or lower resolution. I'm not sure about that one.

This is a particularly futile exercise. Either do the theory or work the real numbers.

As I said, this is a simple application of an equation we all agree on.

A rule of thumb equation run with made-up numbers.

Working made-up numbers tells you absolutely nothing.

It tells you plenty. You have made a universal claim that diffraction always causes peak sharpness at the same aperture, regardless of pixel count. While I agree that that is mathematically correct, I also believe that a drop in resolution will in some cases only become noticeable at a smaller aperture for a low res sensor than a large one. That's all I'm claiming. If I can find one instance where numbers demonstrate it, then I am correct.

Your numbers do not at all demonstrate that 'a drop in resolution will in some cases only become noticeable at a smaller aperture for a low res sensor than a large one', since your criterion for what is 'noticeable' was plucked from thin air, as were the numbers that you used.

In any case, on what do you base your assumption that a 5% change in resolution is noticeable? Do you even know that the noticeability simply scales? Maybe there's a threshold? Maybe it depends on viewing size?

I was assuming different viewing sizes because my interest was in the relationship between diffraction effects and pixel count. Introducing another variable would be unhelpful.

You haven't any real variables in any case, just fictitious ones.

Anyway, my point can be demonstrated trivially by looking at the f/4 and f/22 results for s = 3 and s = 300. For s = 3, there is a relative drop in sharpness of 0.1%. For s = 300, the drop is going to be 35.4%. The s = 3 drop is unnoticeable.

You haven't even defined 'sharpness' properly, nor do you have any data on what is 'noticeable'.

If the s = 300 drop isn't noticeable, then that sensor still has its peak sharpness well after the peak aperture. If the drop is noticeable, then it has a peak sharpness at a lower aperture than the s = 3 sensor. Either way, my point is made.

I can't see your point. In every case the peak sharpness is at f/4 because that's where you put it. All that changes is the height of the peak, not where it is.

I'm wondering why you feel it necessary to work so hard to find some meaning for 'diffraction limit' without being able to demonstrate that such a definition is even useful. What have you got invested in there being a 'diffraction limit'.

How about we discuss the subject at hand rather than speculating about people's motives?

It's a fair point. You've produced some impressive graphs of fictitious numbers. That must have taken you a fair amount of time and effort, but in the end they show nothing. However, they are bogusly quantitative, so why is it worth your time and effort to make something bogusly quantitative?

The second graph in particular is highly bogus, because it shows '100%' at the same level when each '100%' is a percentage of a different thing.


Bob

 Re: Diffraction Limit Discussion Continuation In reply to bobn2, Feb 22, 2014

bobn2 wrote:

Jonny Boyd wrote:

bobn2 wrote:

I would not aggrandise it with the word 'theoretical'. There is no theory behind this setup, merely arbitrariness.

There's plenty of theory Bob, all explained in this post and previous posts in this thread. It's just using the equation for determining the resolution of a system with multiple components that each have limited resolution themselves.

That isn't 'theory'. The equation you're using is itself a 'rule of thumb', based on the idea that the component blurs simply add in quadrature. For a start, we don't know what you mean by 'resolution'. Are you taking MTF50, or what?

A while ago I was getting the impression that those who think there is no diffraction limit regarded this equation as the golden rule, so I was happy to use it. When it produces results you don't like, you want to get rid of it. As for resolution, it would be the number of line pairs that can be distinguished per unit length.

We're all agreed that it's a good equation

It depends what you mean by a 'good equation' - it's a decent approximation for some purposes.

And which purposes would they be and not be? You're being very vague.

and as Anders requested, I'm working out the implications. I took a while to carefully explain my methodology, so if you'd like to contribute usefully here you could begin by highlighting where you think my methodology falls flat, rather than pouting off with an unsubstantiated opinion.

Where your methodology falls flat is that the 'experiment' you're performing is fictitious, it's not based on real numbers, nor is it based on the theory.

It's working out the implications of the equation Anders is so fond of, as he requested I do. In what way is it inapplicable in this situation?

As for arbitrariness, the numbers I used are quite deliberate. I used lens resolution numbers that give a curve with a shape broadly the same as a standard MTF curve for a lens. Similar real ones have been shown plenty of times in this thread and others. The printer resolution was chosen to be less than the lens, but not so low that it would dominate the final resolution. The sensor resolutions were chosen to cover the range of scenarios from sensor-limited output, to lens-limited, and stuff in between. You'll notice the sensor numbers scale logarithmically for precisely this reason.

In short, your numbers were chosen to get the result that you wanted to demonstrate.

Rather than merely making an assertion, how about you show your reasoning? What is unrealistic about the shape of the resolution curve? What is wrong with the sensor resolution numbers I picked? As I've already stated, they effectively cover all the possibilities because they range from situations where resolution is sensor limited to situations where it is lens limited.

so real world examples of sensors, lenses, and printers may have more or less pronounced behaviour, depending on actual resolution. My model also assumes that the percentage drop in relative resolution that becomes noticeable would be the same at every absolute resolution. It may be that at higher absolute resolutions a change in relative resolution would become noticeable at a higher or lower threshold. I'm not sure about that one.

This is a particularly futile exercise. Either do the theory or work the real numbers.

As I said, this is a simple application of an equation we all agree on.

A rule of thumb equation run with made-up numbers.

And again I ask, why is the equation not applicable here and what is wrong with the numbers?

Working made-up numbers tells you absolutely nothing.

It tells you plenty. You have made a universal claim that diffraction always causes peak sharpness at the same aperture, regardless of pixel count. While I agree that that is mathematically correct, I also believe that a drop in resolution will in some cases only become noticeable at a smaller aperture for a low res sensor than for a high res one. That's all I'm claiming. If I can find one instance where numbers demonstrate it, then I am correct.

Your numbers do not at all demonstrate that 'a drop in resolution will in some cases only become noticeable at a smaller aperture for a low res sensor than for a high res one', since your criterion for what is 'noticeable' was plucked from thin air, as were the numbers that you used.

Do you deny that there is both a greater absolute and greater relative drop in resolution between peak aperture and lower apertures for higher resolution cameras than lower resolution cameras?

Does it not logically follow from this that, all else being equal (printing size, viewing distance, etc.), a sufficiently low resolution sensor will show no perceptible difference in quality due to diffraction at apertures where a higher resolution sensor will show a drop (but still retain far greater overall detail)?

In any case, on what do you base your assumption that a 5% change in resolution is noticeable? Do you even know that the noticeability simply scales? Maybe there's a threshold? Maybe it depends on viewing size?

I was assuming different viewing sizes because my interest was in the relationship between diffraction effects and pixel count. Introducing another variable would be unhelpful.

You haven't any real variables in any case, just fictitious ones.

So what if they're fictitious examples? It doesn't mean that they're unrealistic.

Anyway, my point can be demonstrated trivially by looking at the f/4 and f/22 results for s = 3 and s = 300. For s = 3, there is a relative drop in sharpness of 0.1%. For s = 300, the drop is going to be 35.4%. The s = 3 drop is unnoticeable.

You haven't even defined 'sharpness' properly, nor do you have any data on what is 'noticeable'.

I know that the human eye has limits on how well it can perceive changes in sharpness. If both the relative and absolute changes in resolution are sufficiently low when stopping down, then the eye will not perceive them. That's so obvious that I don't know why you're disputing it.

If the s = 300 drop isn't noticeable, then that sensor still has its peak sharpness well after the peak aperture. If the drop is noticeable, then it has its peak sharpness at a lower aperture than the s = 3 sensor. Either way, my point is made.

I can't see your point - in every case the peak sharpness is at f/4 because that's where you put it. All that changes is the height of the peak. Not where it is.

My point is that if you can't see any visible change in sharpness between f/4 and f/22, i.e. they are indistinguishable, then the perceived sharpness will plateau across those apertures, rather than peaking at f/4. So while the s = 300 sensor will show a drop in sharpness between f/4 and f/22 due to diffraction, the s = 3 sensor won't. Diffraction begins to limit sharpness for the s = 300 sensor at f/4, whereas it won't for the s = 3 sensor until f/22 or later.

I'm wondering why you feel it necessary to work so hard to find some meaning for 'diffraction limit' without being able to demonstrate that such a definition is even useful. What have you got invested in there being a 'diffraction limit'?

How about we discuss the subject at hand rather than speculating about people's motives?

It's a fair point: you've produced some impressive graphs of fictitious numbers. That must have taken a fair amount of time and effort, but in the end they show nothing. However, they are bogusly quantitative, so why was it worth your time and effort to make something bogusly quantitative?

How are they bogusly quantitative? I clearly and repeatedly said that the numbers were illustrative only and didn't correlate with any particular lenses or sensors. They are however useful and realistic numbers in terms of demonstrating what happens when you change sensor resolution and look to see if there is any visible change in resolution between apertures.

I was quite careful to state all of that so I don't appreciate you constantly suggesting that I'm trying to be deceptive. There's nothing helpful about that attitude.

The second graph in particular is highly bogus, because it shows '100%' at the same level when the '100%' refers to different things.

How is it bogus? I clearly explained that it shows the sharpness of the image relative to the peak for that sensor. Hence me saying:

Remember this is resolution relative to the resolution at peak aperture, so the lowest res sensor, which is relatively unaffected by diffraction, remains very close to 100% relative resolution at every aperture, but will, in absolute terms, be much worse quality than the highest res sensor which shows the biggest changes in relative resolution.

The aim was to show that relative sharpness decreases quicker for a high res sensor than a low res sensor, which shouldn't be remotely controversial because the resolution of an image from a low res sensor is dominated by the sensor's resolution, while the resolution of an image from a high res sensor is dominated by diffraction effects.

Both in relative and absolute terms, diffraction has a greater effect on higher resolution sensors.
