Understanding Diffraction: Why It Matters for Photographers

"Pixel size does impact diffraction."

The picture in the article you cited shows the exact opposite of your claim.

The diffracted spot size got larger when the lens was stopped down. The grid of pixels stayed the same.

The article was more carefully worded to say the visibility of the effects of diffraction are affected by pixel size.
Right, when we pixel peep (viewing at 100% or more), the smaller pixel size translates to zooming in further on the higher-res photo than on the lower-res image. As a result, diffraction becomes more pronounced. When an equitable comparison is made - same-size images viewed at the same distance (same magnification) - the image made with the higher-res sensor shows more detail.
 
📷 What is Diffraction?

Diffraction happens when light waves bend as they pass through a small aperture.
Apertures do not really "bend" light waves, but I guess this is a good enough layman's explanation.
🔍 How Sensor Pixel Size Comes Into Play

The smaller the pixel pitch (distance between pixels), the more that diffraction blur affects the image. High‑resolution sensors with tiny pixels will show diffraction softening earlier than lower‑resolution sensors.

Pixel pitch and diffraction thresholds for Fujifilm X‑series bodies:

Camera Model | Resolution | Pixel Pitch (approx.) | Diffraction Noticeable
X‑T5 / X‑H2 | 40 MP | ~3.03 µm | f/5.6–f/8
X‑T4 / X‑T3 / X‑H2S | 26 MP | ~3.74 µm | f/8–f/11
X‑T2 | 24 MP | ~3.92 µm | f/8–f/11
X‑T1 | 16 MP | ~4.76 µm | f/11–f/13

Cameras with larger pixels (like the X‑T1) can be stopped down further before diffraction visibly softens fine detail. Higher‑resolution models like the X‑T5 produce more detail overall but reveal diffraction earlier.
Most of this is fundamentally wrong, and what can be salvaged needs to be formulated properly.
Is the concept right that pixel size impacts diffraction?
Not in terms of cycles/µm or cy/picture height for the same size sensor, but in terms of cy/pixel, yes.
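
To put rough numbers on that, here's a quick sketch (mine, not the article's): it uses the standard first-zero Airy diameter of about 2.44 × λ × N, assumes green light at 550 nm, and takes the approximate pitches from the table above.

```python
# Rough comparison of Airy-disk diameter to pixel pitch.
# Assumes the standard first-zero Airy diameter d ≈ 2.44 * λ * N
# and green light at 550 nm; pitches are the approximate values from the table.
WAVELENGTH_UM = 0.55  # 550 nm, in micrometers

def airy_diameter_um(f_number, wavelength_um=WAVELENGTH_UM):
    """First-zero diameter of the Airy pattern, in micrometers."""
    return 2.44 * wavelength_um * f_number

pitches_um = {"X-T5 / X-H2": 3.03, "X-T4 / X-T3 / X-H2S": 3.74,
              "X-T2": 3.92, "X-T1": 4.76}

for f in (4, 5.6, 8, 11, 16):
    d = airy_diameter_um(f)
    ratios = ", ".join(f"{name}: {d / p:.1f}x pitch" for name, p in pitches_um.items())
    print(f"f/{f}: Airy diameter ≈ {d:.1f} µm ({ratios})")
```

The Airy diameter depends only on the aperture and wavelength; the pixel pitch only changes how finely that blur is sampled, which is the cy/pixel point.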

--
https://blog.kasson.com
 
"Pixel size does impact diffraction."

The picture in the article you cited shows the exact opposite of your claim

The diffracted spot size got larger when the lens was stopped down. The grid of pixels stayed the same.

The article was more carefully worded to say the visibility of the effects of diffraction are affected by pixel size.
Right, when we pixel peep (viewing at 100% or more), the smaller pixel size translates to zooming in more to the higher res photo than the lower res image. As a result, diffraction becomes more pronounced. When an equitable comparison of images of made - same size images viewed at the same distance (same magnification) - the image made with the higher res sensor shows more detail.
I need to keep this in mind when I try to explain why (all else being the same) a 60mpx FF sensor isn't noisier than a 15mpx FF sensor

--

Sherm

Sherms flickr page

P950 album

P900 album RX10iv album
OM1.2 150-600 album
 
"Pixel size does impact diffraction."

The picture in the article you cited shows the exact opposite of your claim

The diffracted spot size got larger when the lens was stopped down. The grid of pixels stayed the same.

The article was more carefully worded to say the visibility of the effects of diffraction are affected by pixel size.
Right, when we pixel peep (viewing at 100% or more), the smaller pixel size translates to zooming in more to the higher res photo than the lower res image. As a result, diffraction becomes more pronounced. When an equitable comparison of images of made - same size images viewed at the same distance (same magnification) - the image made with the higher res sensor shows more detail.
I need to keep this in mind when I try to explain why (all else being the same) a 60mpx FF sensor isn't noisier than a 15mpx FF sensor
Well, the 60MP sensors are not leading edge for read noise, so they are actually a little noisier than sensors around 15MP for current technology; the difference will seem to be much greater with 1:1 pixel views on a monitor, though. There is no photon noise disadvantage to higher pixel counts, and high pixel densities have no quantum efficiency disadvantage in the current range of pixel sizes.
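
A toy simulation makes the photon-noise point concrete (a sketch, not real sensor data: one large pixel versus four small pixels covering the same area, with Poisson photon counts at an arbitrary exposure level):

```python
# Toy check: photon (shot) noise for one big pixel vs. four small pixels
# covering the same area, summed. The photon count is an arbitrary assumption.
import numpy as np

rng = np.random.default_rng(0)
trials = 200_000
mean_photons = 10_000  # photons falling on the big pixel's area per exposure

big = rng.poisson(mean_photons, trials)                          # one large pixel
small = rng.poisson(mean_photons / 4, (trials, 4)).sum(axis=1)   # four small pixels, summed

for name, x in (("1 big pixel", big), ("4 small pixels summed", small)):
    print(f"{name}: SNR ≈ {x.mean() / x.std():.1f}")  # both come out near sqrt(10000) = 100
```

Summing the four small pixels gives the same shot-noise SNR as the single big pixel; the per-pixel read noise is the technology-dependent part noted above.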
 
"Pixel size does impact diffraction."

The picture in the article you cited shows the exact opposite of your claim

The diffracted spot size got larger when the lens was stopped down. The grid of pixels stayed the same.

The article was more carefully worded to say the visibility of the effects of diffraction are affected by pixel size.
Right, when we pixel peep (viewing at 100% or more), the smaller pixel size translates to zooming in more to the higher res photo than the lower res image. As a result, diffraction becomes more pronounced. When an equitable comparison of images of made - same size images viewed at the same distance (same magnification) - the image made with the higher res sensor shows more detail.
I need to keep this in mind when I try to explain why (all else being the same) a 60mpx FF sensor isn't noisier than a 15mpx FF sensor
Well, the 60MP sensors are not leading edge for read noise, so they are actually a little noisier than sensors around 15MP for current technology; the difference will seem to be much greater with 1:1 pixel views on a monitor, though. There is no photon noise disadvantage to higher pixel counts, and high pixel densities have no quantum efficiency disadvantage in the current range of pixel sizes.
Got it. Thank you

--

Sherm

Sherms flickr page

P950 album

P900 album RX10iv album
OM1.2 150-600 album
 
Been fine tuning this brief article with ChatGPT
It's tough getting wording to say exactly the right thing. ChatGPT generally doesn't do that.
When you stop down a lens to achieve more depth of field, you’re also introducing a fundamental optical effect called diffraction.
For instance, here. Most of the time the wording in your article seems to imply that diffraction is an On/Off effect. It is not. Diffraction is a property of light passing edges, and the way we record images, all light has passed an edge, so there's always some diffraction. Stopping down a lens does not "introduce" diffraction. Stopping down a lens changes the diffraction.
Diffraction happens when light waves bend as they pass through a small aperture.
Another poor wording example. Photons act as waves when passed through apertures. But just as a water wave doesn't "bend" when it goes through an opening, neither does light. The opening disrupts the light, introducing a wave effect, which redirects some of the light. Call it a spreading effect.
The smaller the opening (higher f‑stop), the more light spreads out, creating a larger blur circle called the Airy disk.
And see here, you used "spreads."
The smaller the pixel pitch (distance between pixels), the more that diffraction blur affects the image. High‑resolution sensors with tiny pixels will show diffraction softening earlier than lower‑resolution sensors.
"Affects" and "will show" become problematic here. Technically, diffraction hits the same sized sensor the same way. The image is the "same." However, you're sampling the diffraction impact (spreading) better with smaller pixels. Whether that would be seen by the image viewer or not depends a lot on the magnification at which the image is reproduced, though. Moreover, if we're talking a print, most of the "fine resolution" from inkjet printers comes from dithering, ink spread, and other things, which has its own way of masking what's happening in the actual capture data.
Pixel pitch and diffraction thresholds for Fujifilm X‑series bodies:
"Thresholds." Again, the implication of On/Off.
Camera Model | Resolution | Pixel Pitch (approx.) | Diffraction Noticeable
"Noticeable." Hmm. Doesn't that require a definition of how the data is being viewed? Magnification, display/print density, a whole bunch of things, and if you've got your image processor (or camera) set with any sharpening/noise reduction, that, too, would come into play.
Cameras with larger pixels (like the X‑T1) can be stopped down further before diffraction visibly softens fine detail.
Again, no definitions that allow us to verify that "visibly softens."
Higher‑resolution models like the X‑T5 produce more detail overall but reveal diffraction earlier.
"Earlier" is a very wrong word here. Does 40mp reveal diffraction at noon, and 24mp reveal it at 2pm?
Stopping down increases depth of field, which brings more of the scene into focus.
"Focus" only happens on a single plane in an image. Both diffraction impact and depth of field are about perceptions. Can you perceive the actual Airy disk (I generally say no, at least not until you're at 2x the photosite size in a Bayer sensor)? When do you perceive something as being "sufficiently in focus?" The Zeiss DoF algorithm that most people use is one theory; there are competing theories.
many photographers aim for a sweet spot aperture (f/4–f/8 on high‑MP APS‑C)
"Many" is the problem here. I don't know those "many," and the "many" I've worked with would say something different.
where sharpness and DOF balance out. For extreme DOF without diffraction softening, focus stacking is the best solution.
Technically, focus stacking is capturing multiple focus planes and interpolating between them. And how did that happen without diffraction? ;~) Again with the On/Off implication.
When you view images at 100% on a high‑resolution monitor, any loss of micro‑contrast from diffraction is obvious.
For decades now on dpreview we've argued about the terms "micro contrast" and its stand-ins. What exactly is that, and who defined it?
But prints are seen at lower resolution (usually 200–300 dpi) and at greater viewing distances.
My 5K monitor is about the size of a 24" print. Are you saying I view my monitor closer than I do my 24" print? Funny point: I don't remember putting a loupe up to my monitor, but I do remember using it on my prints ;~).
As a result:
  • Mild diffraction at f/8–f/11 is rarely visible in print, even at large sizes.
Suddenly diffraction isn't On/Off, but comes in Mild and Strong values? Where would we find the definitions of those?
  • The “softness” you see when zoomed in disappears when the image is downsampled for printing.
Now we're using "softness" instead of "blur." And why am I downsampling for printing?
✅ Takeaways for Photographers
  • Diffraction is unavoidable – it’s a law of physics.
Yes.
  • Cameras with larger pixels (X‑T1, X‑T2) are more forgiving at small apertures.
"Forgiving" is the problematic word here. Technically, large pixels may be large enough so that the Airy disc falls completely on an individual pixel.
  • High‑resolution cameras like X‑T5 reveal diffraction earlier but provide more detail overall.
Again with that "earlier" wording. I've written it for decades now: I'll always take more sampling. What additional sampling produces may have declining visual impacts, but I'd still want more sampling rather than less. It gives me a more accurate data set to start from.
  • Use f/4–f/8 for maximum sharpness on high‑MP APS‑C sensors.
Simply don't agree. Part of that has to do with the use of the word "sharpness."
  • With proper post‑processing, f/11 or even f/13 shots can still produce sharp, detailed prints.
  • Primes don’t change diffraction physics, but because they are usually sharper, they can still produce better small‑aperture images than zooms.
"Sharp" keeps getting used here. Yet we haven't talked about what sharpening does with blur and anti-aliasing. Hmm, maybe it introduces micro contrast (tongue sharply in cheek ;~).
  • Don’t panic about mild diffraction – prints hide it much better than screens.
Way too generic a construct. Most photos these days are being viewed on phones, maybe tablets. Both are small screens normally held at arm's length, yet with high pixel density ("Retina displays"). Moreover, they're using striped arrays, not changeable pixel values. All kinds of variables are seeping in.
Personal thought: Maybe I should break out my X‑T1 more often and try it with the newer lenses?
I keep finding my 6mp D100 images taken 20 years ago hold up really, really well. At least the ones where I was paying close attention to what I was doing. Boy do they have a lot of micro contrast (just kidding ;~).

Yes, I've been nit-picky harsh here. Generalizing any photographic topic is no easy chore. I get it wrong myself often enough to be embarrassed and having to fix things on my sites pretty much every month. The problem is that a lot of these generalizations end up myths that everyone believes are sacrosanct, and then they keep getting repeated.

Which brings us to ChatGPT and the other AI engines. Grossly simplified, they're pattern recognizers and repeaters. So when articles get written that use language loosely and less than accurately, the AI engines eventually scrape that into their models and we get even more repetition of the same language downstream. I'm finding more and more that I have to question the answers I get from an LLM AI engine.
 
Thanks for sharing your insights about AI. I've been thinking a lot of the same things. Even as I am finding it quite useful.
 
Thanks for sharing your insights about AI. I've been thinking a lot of the same things. Even as I am finding it quite useful.
If I have a dataset that I want to plot in Matlab, ChatGPT is where I start. The code it gives me isn't perfect, but it saves me a lot of time.
 
Similar story. For her ME degree, my daughter was taking a robotics class last spring. She calls me at ten in the morning one day and says a program using an Arduino microcontroller is due at midnight and she has no clue where to start. She has never programmed an Arduino. The professor has provided about ten modules that do specific functions like turning on LEDs, rotating servos, and reading switches, and they are using an emulator. The logic of what the program is supposed to do is pretty convoluted. I am annoyed such a complicated assignment would be given as the first one they get for the class.

I've done a lot of Arduino programming, so after lunch I take on the project, and by about six o'clock I have it done, so I call her to walk her through it. And she tells me she finished it at two o'clock. One of her classmates showed her how you can just ask ChatGPT to write the missing code to meet the required behavior using the supplied modules. And that is how the professor expected people to complete the assignment.
 
Thanks for sharing your insights about AI. I've been thinking a lot of the same things. Even as I am finding it quite useful.
No doubt AI is useful. But its usefulness requires care. I've found that pretty much every time I fire up a chatbot to just answer some questions, it gets something clearly wrong. It's like having a really good assistant that you need to do constant double checking on.

Most of my programming friends are using AI helpers in some fashion. But all of them have started using some very specific procedures in doing so.

The specific thing that I don't think works the way anyone believes it does is this: (A) you ask a question and (B) AI gives you the correct answer. In that particular AB system, there seems to be a real problem with pattern matching: B will always reflect A, so you'd better have been very careful with your question.

I've found that having the chatbot ask me questions provides better responses most of the time. To put that in a programming context: (A) provide an end goal (e.g. function code), (B) have AI ask you questions that would help it fulfill that goal, and (C) once you feel that it understands clearly what you need, then ask for the function code.
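
In code form, that goal, questions, then final-request pattern looks roughly like this (a minimal sketch assuming the OpenAI Python client; the model name, the goal, and the answers are made-up placeholders, not a recommendation):

```python
# Sketch of the goal -> questions -> final-request pattern described above.
# Assumes the OpenAI Python client (reads OPENAI_API_KEY from the environment);
# the model name, goal, and answers below are made-up placeholders.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder; use whatever model you actually have access to

# Step A/B: state the goal and ask the model to ask its questions first.
messages = [{"role": "user", "content":
             "Goal: write a function that parses my sensor log files into a table. "
             "Before writing any code, ask me the questions you need answered "
             "to do this well."}]
reply = client.chat.completions.create(model=MODEL, messages=messages)
print(reply.choices[0].message.content)  # the model's clarifying questions

# Step C: answer the questions (repeat as needed), then ask for the code.
messages += [
    {"role": "assistant", "content": reply.choices[0].message.content},
    {"role": "user", "content": "My answers: the logs are CSV, one row per reading, ..."},
    {"role": "user", "content": "Okay, now take all my responses to your questions "
                                "into account while writing the function."},
]
final = client.chat.completions.create(model=MODEL, messages=messages)
print(final.choices[0].message.content)  # the generated code
```

The same pattern works in a plain chat window; the point is just that the "write it" request comes only after the question-and-answer rounds.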
 
Thanks for the programming tips. Much appreciated. I found even just having my development environment (Visual Studio) open with my current program makes the answers from Copilot much more useful.

I have been wondering if it would be worth it to pay for some kind of more advanced AI helper. Any thoughts on that would be appreciated.
 
Thanks for the programming tips. Much appreciated. I found even just having my development environment (Visual Studio) open with my current program makes the answers from Copilot much more useful.
Right. All context tends to help an LLM AI engine. The more context you provide before getting to the nitty-gritty "make it" stage, the better. That's why I tell the chatbot to ask me questions that will help it fulfill my goal before asking it to actually do the work. I'll keep it doing so until I'm sure that all the major things that might intersect with the goal are going to be considered. My final prompt is usually something along the lines of "okay, now take all my responses to your questions into account while..."

The other thing I learned about AI engines while developing a new game is this: you can have them Monte Carlo simulate playing the game many times if they know all the game rules/pieces. I learned that a few of the parameters needed tweaking, as the game kept getting played the same way much of the time. Another interesting prompt was "how do you beat another player at this game?"
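
To illustrate what I mean by Monte Carlo simulating a game, here's a toy version with a made-up dice-versus-coins race standing in for the real rules (not my actual game):

```python
# Toy Monte Carlo: play a made-up two-player race many times and check how
# often each side wins. A lopsided result is the "keeps getting played the
# same way" signal that a rule or parameter needs tweaking.
import random

def play_once(target=20):
    """Player A rolls one die per turn; player B flips two coins (0/1 each)."""
    a = b = 0
    while True:
        a += random.randint(1, 6)
        if a >= target:
            return "A"
        b += random.randint(0, 1) + random.randint(0, 1)
        if b >= target:
            return "B"

trials = 100_000
wins = {"A": 0, "B": 0}
for _ in range(trials):
    wins[play_once()] += 1
print({name: count / trials for name, count in wins.items()})
```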
I have been wondering if it would be worth it to pay for some kind of more advanced AI helper. Any thoughts on that would be appreciated.
Maybe. It sort of depends upon how much speed and quantity you need. That's mostly what paying extra for helps (faster responses, ability to get more responses). Some of the AI engines have more "advanced" models available for hire, too, but I'm not sure I'm the person to ask about that. I have noticed that the more "trained" an LLM is, the more likely it will hallucinate at some point. As with Machine Learning, it seems that training past a certain point doesn't net you better results, it just starts complicating the results. Thus, I've stuck to the basic, general models.

One thing I'm about to experiment with is devoting a computer to a local AI engine that only lives off my data, and doesn't go off looking for new inputs elsewhere. I know that there are both programmers and organizations that are doing that same experiment, but don't know the answer as to whether that is better or worse than just using the general engines available publicly.

What a local, private engine does, though, is keep your queries and chats private, without informing the company that provides the engine. Again, consider the context of designing a game from scratch. I don't want a public AI model knowing everything about my game design before it's even released to the public. That increases the likelihood that someone else typing "design me a game that does..." gets my game design, since it's now a very well known pattern to the LLM.
 
Similar story. For her ME degree, my daughter was taking a robotics class last spring. She calls me at ten in the morning one day and says a program using an Arduino microcontroller is due at midnight and she has no clue where to start. She has never programmed an Arduino. The professor has provided about ten modules that do specific functions like turning on LEDs, rotating servos, and reading switches, and they are using an emulator. The logic of what the program is supposed to do is pretty convoluted. I am annoyed such a complicated assignment would be given as the first one they get for the class.

I've done a lot of Arduino programming, so after lunch I take on the project, and by about six o'clock I have it done, so I call her to walk her through it. And she tells me she finished it at two o'clock. One of her classmates showed her how you can just ask ChatGPT to write the missing code to meet the required behavior using the supplied modules. And that is how the professor expected people to complete the assignment.
In the late 2000s/early 2010s, I was a civil engineering student with an interest in photography who had been a Physics 1/2 TA. I took an optical engineering course as an elective (Physics 2 was the only prerequisite), thinking I'd like to know more about how optics work and that I could take advantage of being at school and learning about my favorite hobby. The professor jumped into the subject and immediately even the physics majors were lost. He gave us several homework assignments that he expected us to solve in Matlab, even though most of us had no idea how to use it. Then after we submitted those assignments, he ran through his Matlab scripts to show us how "easy" it was. I ended up taking that class Pass/Fail and learned much less than I had hoped. I'm sure part of it was that I wasn't a physics major, but some professors just have no idea how to clearly communicate concepts in a way that students can understand it, and to then build assignments that test the students' understanding, rather than their proficiency with tools that aren't directly related to the subject. Now of course, with AI, I could have zero understanding of Matlab and still get by. But I question how much the students are really learning about the subject they're studying in a similar way that I didn't learn much by failing to write those Matlab scripts.
 
In the late 2000s/early 2010s, I was a civil engineering student with an interest in photography who had been a Physics 1/2 TA. I took an optical engineering course as an elective (Physics 2 was the only prerequisite), thinking I'd like to know more about how optics work and that I could take advantage of being at school and learning about my favorite hobby. The professor jumped into the subject and immediately even the physics majors were lost. He gave us several homework assignments that he expected us to solve in Matlab, even though most of us had no idea how to use it. Then after we submitted those assignments, he ran through his Matlab scripts to show us how "easy" it was. I ended up taking that class Pass/Fail and learned much less than I had hoped. I'm sure part of it was that I wasn't a physics major, but some professors just have no idea how to clearly communicate concepts in a way that students can understand it, and to then build assignments that test the students' understanding, rather than their proficiency with tools that aren't directly related to the subject. Now of course, with AI, I could have zero understanding of Matlab and still get by.
Uh, not really. AI makes errors, and you have to know how to code to find and fix them.
But I question how much the students are really learning about the subject they're studying in a similar way that I didn't learn much by failing to write those Matlab scripts.
 
Uh, not really. AI makes errors, and you have to know how to code to find and fix them.
But I question how much the students are really learning about the subject they're studying in a similar way that I didn't learn much by failing to write those Matlab scripts.
My interactions with current students say otherwise, but to each their own. I'm guessing the types of problems you're trying to solve and how closely the work is being reviewed come into play. "Still get by" is a very low bar to clear. These aren't highly complex research assignments we're talking about.
 
All good points. In this particular case I suspect the situation was something of a setup, with the provided modules being debugged and amenable to use with AI. It turned out to be the only software homework assignment in the class, so I am not sure what the professor was intending to communicate with it. Possibly it was only in there so the class would count toward fulfilling the accreditation requirements for the university ME program. My daughter did say she learned a lot from the class overall and told me about what was covered. Some of those things were skills I have found lacking in the last three ME grads our company has hired.
 
OT. My former brother-in-law was an ME for Lockheed MSC. They had an ME basketball team called the Tri-dots. ME joke.
 
High‑resolution sensors with tiny pixels will show diffraction softening earlier....

...A prime lens cannot delay when diffraction begins.
Since you're trying to write a clear explanation, you should realize that time has nothing to do with what you're trying to say. Words such as "earlier" and "delay" and "when" are not the words to use. This is a glaring error in the presentation. If you still think it's all right, I have a question: which comes first, f/16 or f/4? And what time is f/32?

I'll leave other comments to other people, and I'm sure other people have already commented.
 
AI makes errors, and you have to know how to code to find and fix them.
^^^^^ This!

And it's not just errors, per se, but poor practices.
I was a Smalltalk programmer before I became a Matlab programmer, and I've had some success in getting ChatGPT 4o to code in the style that I like. It occasionally slips back into a more C++-like style, but it takes correction well.

On a related topic, I hate to write the long introductory comments at the beginning of each class and each method. It seems so obvious to me while I'm coding, but I know when my future self reads the code in a few years, it won't be. ChatGPT is very good about writing explanatory comments with copious usage examples. I have to go clean it up a bit, but it's a huge timesaver.
 
