Been fine-tuning this brief article with ChatGPT
It's tough getting wording to say exactly the right thing. ChatGPT generally doesn't do that.
When you stop down a lens to achieve more depth of field, you’re also introducing a fundamental optical effect called diffraction.
For instance, here. Most of the time the wording in your article seems to imply that diffraction is an On/Off effect. It is not. Diffraction is a property of light passing edges, and given the way we record images, all light has passed an edge, so there's always some diffraction. Stopping down a lens does not "introduce" diffraction. Stopping down a lens changes the diffraction.
Diffraction happens when light waves bend as they pass through a small aperture.
Another poor wording example. Photons act as waves when passing through apertures. But just as a water wave doesn't "bend" when it goes through an opening, neither does light. The opening disrupts the light, introducing a wave effect, which redirects some of the light. Call it a spreading effect.
The smaller the opening (higher f‑stop), the more light spreads out, creating a larger blur circle called the Airy disk.
And see here, you used "spreads."
The smaller the pixel pitch (distance between pixels), the more that diffraction blur affects the image. High‑resolution sensors with tiny pixels will show diffraction softening earlier than lower‑resolution sensors.
"Affects" and "will show" become problematic here. Technically, diffraction hits the same sized sensor the same way. The image is the "same." However, you're sampling the diffraction impact (spreading) better with smaller pixels. Whether that would be seen by the image viewer or not depends a lot on the magnification at which the image is reproduced, though. Moreover, if we're talking a print, most of the "fine resolution" from inkjet printers comes from dithering, ink spread, and other things, which has its own way of masking what's happening in the actual capture data.
Pixel pitch and diffraction thresholds for Fujifilm X‑series bodies:
"Thresholds." Again, the implication of On/Off.
Camera Model | Resolution | Pixel Pitch (approx.) | Diffraction Noticeable
"Noticeable." Hmm. Doesn't that require a definition of how the data is being viewed? Magnification, display/print density, a whole bunch of things, and if you've got your image processor (or camera) set with any sharpening/noise reduction, that, too, would come into play.
Cameras with larger pixels (like the X‑T1) can be stopped down further before diffraction visibly softens fine detail.
Again, no definitions that allow us to verify that "visibly softens."
Higher‑resolution models like the X‑T5 produce more detail overall but reveal diffraction earlier.
"Earlier" is a very wrong word here. Does 40mp reveal diffraction at noon, and 24mp reveal it at 2pm?
Stopping down increases depth of field, which brings more of the scene into focus.
"Focus" only happens on a single plane in an image. Both diffraction impact and depth of field are about perceptions. Can you perceive the actual Airy disk (I generally say no, at least not until you're at 2x the photosite size in a Bayer sensor)? When do you perceive something as being "sufficiently in focus?" The Zeiss DoF algorithm that most people use is one theory; there are competing theories.
many photographers aim for a sweet spot aperture (f/4–f/8 on high‑MP APS‑C)
"Many" is the problem here. I don't know those "many," and the "many" I've worked with would say something different.
where sharpness and DOF balance out. For extreme DOF without diffraction softening, focus stacking is the best solution.
Technically, focus stacking is capturing multiple focus planes and interpolating between them. And how did that happen without diffraction? ;~) Again with the On/Off implication.
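Just so we're talking about the same thing: the simplest stacking implementations pick (or blend) per pixel from whichever frame is locally sharpest, and every one of those frames was still diffracted. A toy sketch, assuming the frames are already aligned and ignoring focus breathing:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, gaussian_filter

def naive_focus_stack(frames):
    """frames: list of 2-D float arrays of the same scene at different
    focus distances. Per pixel, keep the frame with the most local detail."""
    stack = np.stack(frames)                          # (n, H, W)
    # crude sharpness proxy: |Laplacian of Gaussian|, lightly smoothed so
    # the per-pixel frame choice isn't too noisy
    energy = np.stack([gaussian_filter(np.abs(gaussian_laplace(f, 1.0)), 2.0)
                       for f in frames])
    best = np.argmax(energy, axis=0)                  # (H, W) index map
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

Real tools add alignment, scale compensation, and smarter blending, but none of that removes the diffraction present in each source frame.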
When you view images at 100% on a high‑resolution monitor, any loss of micro‑contrast from diffraction is obvious.
For decades now on dpreview we've argued about the terms "micro contrast" and its stand-ins. What exactly is that, and who defined it?
But prints are seen at lower resolution (usually 200–300 dpi) and at greater viewing distances.
My 5K monitor is about the size of a 24" print. Are you saying I view my monitor closer than I do my 24" print? Funny point: I don't remember putting a loupe up to my monitor, but I do remember using it on my prints ;~).
As a result:
- Mild diffraction at f/8–f/11 is rarely visible in print, even at large sizes.
Suddenly diffraction isn't On/Off, but comes in Mild and Strong values? Where would we find the definitions of those?
- The “softness” you see when zoomed in disappears when the image is downsampled for printing.
Now we're using "softness" instead of "blur." And why am I downsampling for printing?
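If you actually want to reason about print visibility, the back-of-envelope looks something like this. Every input is an assumption for illustration: sensor width, f/11, a ~24-inch print, the viewing distances, and the rough one-arcminute figure for visual acuity. Keep in mind the first-minimum Airy diameter overstates the visible blur, since most of the energy sits in a much smaller central core; this shows how the variables trade off, not a verdict.

```python
import math

# Assumptions, not measurements: APS-C sensor width, f/11, green light,
# a ~24-inch-wide print, and ~1 arcminute for what an eye can resolve.
SENSOR_WIDTH_MM = 23.5
AIRY_UM = 2.44 * 0.55 * 11          # first-minimum diameter at f/11

print_width_mm = 610                 # roughly a 24-inch-wide print
magnification = print_width_mm / SENSOR_WIDTH_MM
blur_on_print_mm = (AIRY_UM / 1000) * magnification

for viewing_distance_mm in (500, 1000, 2000):
    eye_limit_mm = viewing_distance_mm * math.tan(math.radians(1 / 60))
    print(f"view at {viewing_distance_mm / 1000:.1f} m: eye resolves ≈ "
          f"{eye_limit_mm:.2f} mm, Airy diameter on print ≈ "
          f"{blur_on_print_mm:.2f} mm")
```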

Takeaways for Photographers
- Diffraction is unavoidable – it’s a law of physics.
Yes.
- Cameras with larger pixels (X‑T1, X‑T2) are more forgiving at small apertures.
"Forgiving" is the problematic word here. Technically, large pixels may be large enough so that the Airy disc falls completely on an individual pixel.
- High‑resolution cameras like X‑T5 reveal diffraction earlier but provide more detail overall.
Again with that "earlier" wording. I've written it for decades now: I'll always take more sampling. What additional sampling produces may have declining visual impacts, but I'd still want more sampling rather than less. It gives me a more accurate data set to start from.
- Use f/4–f/8 for maximum sharpness on high‑MP APS‑C sensors.
Simply don't agree. Part of that has to do with the use of the word "sharpness."
- With proper post‑processing, f/11 or even f/13 shots can still produce sharp, detailed prints.
- Primes don’t change diffraction physics, but because they are usually sharper, they can still produce better small‑aperture images than zooms.
"Sharp" keeps getting used here. Yet we haven't talked about what
sharpening does with blur and anti-aliasing. Hmm, maybe it introduces micro contrast (tongue sharply in cheek ;~).
- Don’t panic about mild diffraction – prints hide it much better than screens.
Way too generic a construct. Most photos these days are being viewed on phones, maybe tablets. Both of which are small screens normally held at arms length, yet with high pixel density ("Retina displays"). Moreover, they're using striped arrays, not changeable pixel values. All kinds of variables are seeping in.
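And since I promised it above, here's what "sharpening" actually does, as a minimal numpy/scipy sketch of an unsharp mask: it steepens edges, which reads as more local contrast. It does not recover detail the diffraction pattern and the anti-aliasing treatment never delivered to the photosites.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, radius=1.5, amount=1.0):
    """Classic unsharp mask: add back the difference between the image
    and a blurred copy, which exaggerates contrast around edges."""
    blurred = gaussian_filter(img, sigma=radius)
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)

# Toy 1-D example: a soft edge gets steeper after sharpening.
x = np.linspace(0, 1, 200)
soft_edge = 1 / (1 + np.exp(-(x - 0.5) * 30))        # smooth step
sharpened = unsharp_mask(soft_edge, radius=3, amount=1.5)
print("max slope before:", np.max(np.diff(soft_edge)))
print("max slope after: ", np.max(np.diff(sharpened)))
```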
Personal thought: Maybe I should break out my XT1 more often and try with the newer lenses?
I keep finding my 6mp D100 images taken 20 years ago hold up really, really well. At least the ones where I was paying close attention to what I was doing. Boy do they have a lot of micro contrast (just kidding ;~).
Yes, I've been nit-picky and harsh here. Generalizing any photographic topic is no easy chore. I get it wrong myself often enough to be embarrassed and to have to fix things on my sites pretty much every month. The problem is that a lot of these generalizations end up myths that everyone believes are sacrosanct, and then they keep getting repeated.
Which brings us to ChatGPT and the other AI engines. Grossly simplified, they're pattern recognizers and repeaters. So when articles get written that use language loosely and less than accurately, the AI engines eventually scrape that into their model and we get even more repetition of the same language downstream. I'm finding more and more that I have to question the answers I get from an LLM AI engine.