Worse than a sharp image of a fuzzy concept?

knickerhawk

We all know Adams' aphorism that there is nothing worse than a sharp image of a fuzzy concept. But what about the opposite - a fuzzy image of a sharp concept? What would be the visible (if any) differences between the two? And since this is a photographic science and technology forum, I'm not asking for a philosophical discussion. I'm asking about the visible (if any) differences between equal system resolution achieved by using a lens with poor resolution on a camera with a high resolution sensor and vice versa (i.e., a lens with good resolution mated to a camera with a low resolution sensor). Where are we likely to see visible differences? Further, how will making the mismatches more extreme while keeping overall system resolution equal affect the answer, if at all?

The one obvious thing that comes to mind is aliasing and moire that should be more visible on the system with a sharp lens and a lower resolution sensor. Feel free to elaborate on that, but since I think I already have a decent grasp of that subject, I'm really more interested in other possible IQ issues. Thanks.
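To make the aliasing point concrete for anyone who wants to experiment, here's a minimal sketch of the two mismatches on a zone-plate target (assumes Python with NumPy and SciPy; the blur and decimation factors are arbitrary, chosen only to exaggerate the effect):

```
# Minimal sketch (assumes NumPy + SciPy): compare "sharp lens + low-res
# sensor" with "soft lens + high-res sensor" on a zone plate, where
# aliasing shows up as spurious low-frequency rings.
import numpy as np
from scipy.ndimage import gaussian_filter

N = 1024
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
r2 = (x ** 2 + y ** 2).astype(float)
zone_plate = 0.5 + 0.5 * np.cos(np.pi * r2 / N)  # local frequency rises with radius

# Sharp lens on a low-res sensor: no optical low-pass, coarse sampling.
# Everything beyond the new Nyquist limit folds back as false detail.
sharp_lens_low_res = zone_plate[::8, ::8]

# Soft lens on a high-res sensor: heavy optical blur, finer sampling.
# High frequencies are simply gone; nothing folds back.
soft_lens_high_res = gaussian_filter(zone_plate, sigma=4.0)[::2, ::2]
```

Viewed side by side, the first shows strong false ring patterns (moire); the second just fades smoothly to grey.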
 
What would be the visible (if any) differences between the two?
Here are some ridiculous examples:

[two example images]

I recently took this photo of a local shop window at night.

What's pretty obvious is that a direct comparison won't necessarily be easy if distinctive aberrations and distortion are present in the "bad lens" photo. There won't be a direct correspondence between the images thanks to distortion, and the color defects would have to be weighted somehow, taking human color vision into account. Flare would also make the comparison more ambiguous.

My instinct would be to treat lens defects and resolution issues separately as much as possible.

--
 
In retrospect, perhaps the “bad lens” photo would have to be corrected first, and the low resolution image smoothly upsampled, and then an MTF analysis could more easily be done. There still will be qualitative differences.
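The upsampling step, at least, is cheap to prototype (a sketch assuming SciPy; the 4x factor is arbitrary):

```
# Sketch of the "smoothly upsampled" step (assumes SciPy; the 4x factor
# is arbitrary). order=3 gives cubic-spline interpolation.
import numpy as np
from scipy.ndimage import zoom

low_res_image = np.random.rand(100, 100)     # stand-in for the low-res capture
upsampled = zoom(low_res_image, 4, order=3)  # -> 400 x 400
```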

--
http://therefractedlight.blogspot.com
 
In retrospect, perhaps the “bad lens” photo would have to be corrected first, and the low resolution image smoothly upsampled, and then an MTF analysis could more easily be done. There still will be qualitative differences.
Thanks, Mark, for the replies. Your observation above points out another aspect of the issue: to the extent each system resolution strategy has negative IQ consequences relative to the other one, which strategy's negative IQ elements are more easily managed/corrected/avoided? I recognize that this additional question really broadens the inquiry, perhaps to the point of being unanswerable.
 
For dreamlike images that provoke memories, unsharp images often do better than tack-sharp ones. The brain has more freedom to fill in the missing information. The concept may be about remembering, feeling and longing - or something else.

Perception psychology is part of the scientific universe.

Unsharp images of a well-focused concept can work wonders. Artists have known this for a long time. Happily there are different schools of photography outside the tack-sharp and well-focused world of Ansel Adams. That may be one kind of difference.
 
[...] I'm asking about the visible (if any) differences between equal system resolution achieved by using a lens with poor resolution on a camera with a high resolution sensor and vice versa (i.e., a lens with good resolution mated to a camera with a low resolution sensor). [...]
I have the equipment that could test the issue: a Sigma DSLR that can shoot with pixels binned 2x2 on-sensor or full res., and some lenses that are "good" and not-so-good.

I also have a slant-edge target and QuickMTF, which can display system MTF and edge spread.
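For the curious, the core of the slanted-edge method fits in a few lines. A toy sketch (assumes NumPy + SciPy), glossing over the windowing and normalization a real tool like QuickMTF would apply:

```
# Toy slanted-edge MTF: project pixels onto quarter-pixel bins by their
# distance from the edge to build an oversampled edge-spread function,
# differentiate to a line-spread function, then take its FFT magnitude.
import numpy as np
from scipy.ndimage import gaussian_filter

# Synthesize a slightly slanted edge blurred by a known PSF.
N = 128
yy, xx = np.mgrid[0:N, 0:N]
slope = 0.1                                   # roughly a 5.7 degree slant
edge = (xx > N / 2 + slope * yy).astype(float)
img = gaussian_filter(edge, sigma=1.5)

# Signed distance from the edge (measured along x, for simplicity).
dist = xx - (N / 2 + slope * yy)

# Quarter-pixel binning -> 4x oversampled ESF.
bins = np.round(dist * 4).astype(int)
esf = np.array([img[bins == b].mean() for b in range(-40, 40)])

lsf = np.diff(esf)
lsf /= lsf.sum()                              # so MTF(0) = 1
mtf = np.abs(np.fft.rfft(lsf))
freqs = np.fft.rfftfreq(lsf.size, d=0.25)     # cycles per pixel
```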

However, I am at a complete loss as to how one might "equalize overall system resolution" - is it possible to explain that in considerably more detail?

--
what you got is not what you saw ...
 
[...] However, I am at a complete loss as to how one might "equalize overall system resolution" - is it possible to explain that in considerably more detail?
My probably overly simplistic understanding is that since System MTF = Camera MTF x Lens MTF, you should be able to use your Sigma + various lens combos with binning turned on and off. I'd assume that your results would reasonably model what I had in mind by "system resolution". We can then discuss which spatial frequencies and what degree of curve similarity should count as "similar". I suppose that's part of the broader underlying question I'm asking concerning visible differences. Does this make sense or am I off my rocker for pursuing this question?
 
My probably overly simplistic understanding is that since System MTF = Camera MTF x Lens MTF, you should be able to use your Sigma + various lens combos with binning turned on and off. <>
I might be able to do something with a spreadsheet that I made a long time ago. For a given pixel pitch ("resolution"), spatial frequency, f-number, and lens quality, I can figure a system MTF.

It may be that I could juggle something for your lens-quality and pixel-pitch opposites to somehow equalize system MTF, thereby saving wear and tear on my camera ...

Voila:

[spreadsheet screenshot]

I could keep the f-number and spatial frequency constant and see if it is possible to vary the pixel pitch and lens degradation to get the same system MTF numbers. I have my doubts but haven't tried it yet.
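For anyone following along without the spreadsheet, here is a minimal Python sketch of one common way to model it (an assumption, not necessarily the spreadsheet's exact formulas): system MTF as the product of diffraction, residual lens blur treated as a Gaussian, and a 100%-fill square pixel aperture.

```
# Minimal sketch of a common system-MTF model (an assumption, not
# necessarily the spreadsheet's exact formulas). Lengths in mm,
# spatial frequency f in lp/mm.
import numpy as np

def mtf_diffraction(f, fnum, wavelength=550e-6):
    # Aberration-free circular aperture; fc is the diffraction cutoff.
    fc = 1.0 / (wavelength * fnum)
    s = np.clip(f / fc, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(s) - s * np.sqrt(1.0 - s ** 2))

def mtf_gaussian_lens(f, sigma_mm):
    # Residual aberrations lumped into a Gaussian blur of spatial std sigma.
    return np.exp(-2.0 * (np.pi * sigma_mm * f) ** 2)

def mtf_pixel(f, pitch_mm):
    # 100%-fill-factor square pixel aperture; np.sinc(x) = sin(pi x)/(pi x).
    return np.sinc(f * pitch_mm)

def mtf_system(f, fnum, sigma_mm, pitch_mm):
    return (mtf_diffraction(f, fnum) * mtf_gaussian_lens(f, sigma_mm)
            * mtf_pixel(f, pitch_mm))

# f/5.6, 50 lp/mm, 10 um pitch, near-perfect lens (0.5 um blur): ~0.51
print(mtf_system(50.0, 5.6, 0.0005, 0.010))
```

Note that sigma here is in mm; it need not correspond to the spreadsheet's sigma units.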

What think?

--
what you got is not what you saw ...
 
[...] And since this is a photographic science and technology forum, I'm not asking for a philosophical discussion. I'm asking about the visible (if any) differences between equal system resolution achieved by using a lens with poor resolution on a camera with a high resolution sensor and vice versa (i.e., a lens with good resolution mated to a camera with a low resolution sensor). [...]
With due respect, your proposition is very interesting but does not have anything to do with Adams' statement. If we think of "statements", then "sharpening" one implies making guesses, at best educated ones, perhaps elaborating on our own about questions that the original statement didn't even raise.

Resolving a fuzzy statement is more similar to AI sharpening of fuzzy images, whether they are blurred by poor focusing or motion. That seems to attract all the buzz nowadays!
 
[...] I could keep the f-number and spatial frequency constant and see if it is possible to vary the pixel pitch and lens degradation to get the same system MTF numbers. I have my doubts but haven't tried it yet.
Here we go ....

At f/5.6 and Vf of 50 lp/mm:

Low res with a high quality lens - I set pixel pitch to 10 µm and lens sigma** to 0.1 and got a system MTF of 0.51.

High res with an adjusted lens quality - I set pixel pitch to 5 µm and it took a lens sigma of 0.84 to get the same MTF of 0.51.

** Lens sigma is 0.1 for an almost perfect lens; 0.84 is pretty poor.

Of course, equal system MTF does not necessarily mean equal image quality ...
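To put a number on the same juggling with the earlier sketch (same assumed model, so the sigma units differ from the spreadsheet's), a bisection finds the lens blur that makes the fine-pitch system match the coarse-pitch one at a single frequency:

```
# Same assumed model as the earlier sketch: bisect for the Gaussian lens
# sigma (mm) that makes the fine-pitch system match the coarse-pitch
# system's MTF at one frequency. MTF falls monotonically with sigma.
def equalizing_sigma(f, fnum, sigma_good_mm, pitch_coarse_mm, pitch_fine_mm):
    target = mtf_system(f, fnum, sigma_good_mm, pitch_coarse_mm)
    lo, hi = 0.0, 0.1                  # sigma bracket in mm
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if mtf_system(f, fnum, mid, pitch_fine_mm) > target:
            lo = mid                   # too little blur, MTF still too high
        else:
            hi = mid
    return 0.5 * (lo + hi)

# f/5.6, 50 lp/mm: 10 um pitch + 0.5 um blur vs 5 um pitch -> ~2.7 um blur
print(equalizing_sigma(50.0, 5.6, 0.0005, 0.010, 0.005))
```

That converges to roughly 2.7 µm of equivalent Gaussian blur here - and, as the caveat above says, matching MTF at one frequency says nothing about the rest of the curve, let alone aliasing.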
 
[...] Of course, equal system MTF does not necessarily mean equal image quality ...
Thanks, but I'm not sure how this advances us on the original question, which was weighing the visible negative aspects of each way to get to that equal system state. Or put another way: how to think about the trade-off of lens aberrations vs aliasing/moire, since those are at least the major factors at play here. Perhaps there's no good objective way to do what I'm trying to do here. Maybe it just comes down to personal preference and specific shooting scenarios.
 
Thanks, but I'm not sure how this advances us on the original question, which was weighing the visible negative aspects of each way to get to that equal system state.
Oh.
Or put another way: how to think about the trade-off of lens aberrations vs aliasing/moire, since those are at least the major factors at play here.
I understand. My own example for a poor lens with low res would be my SD9, with a pixel pitch of 9.12 µm, which equates to a nominal Nyquist of about 55 lp/mm. Let's take diffraction as an "aberration", perhaps simulating an inability of the lens to focus properly. With that combination, diffraction does not become noticeable on my monitor at 100% zoom until f/16+. However, stair-casing is quite noticeable on sharp edges when viewed at 100% zoom or more. The SD9 has no AA filter at the sensor.

I also own the opposite: a Lumix G9 (3.9 µm pixel pitch) with an excellent 12-35mm Panasonic lens. With full-size captures, diffraction becomes noticeable at a much, much lower f-number - as one might expect.

If I were to analyze and compare two shots at say f/8 the SD9 would win on a diffraction blur basis. As to detail quality, the G9 would win (more pixels for a given detail size).
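Those crossover points are roughly what a crude Airy-disk rule of thumb predicts. A sketch (plain Python; the two-pixel visibility threshold and 550 nm wavelength are assumptions, not measurements):

```
# Rough rule of thumb (an assumption, not a measurement): call diffraction
# "noticeable at 100% zoom" once the Airy-disk diameter 2.44 * lambda * N
# spans about two pixels, then solve for the f-number N.
WAVELENGTH_MM = 550e-6

def visible_fnumber(pitch_mm, pixels_spanned=2.0):
    return pixels_spanned * pitch_mm / (2.44 * WAVELENGTH_MM)

print(visible_fnumber(0.00912))   # ~13.6 for the SD9's 9.12 um pitch
print(visible_fnumber(0.0039))    # ~5.8 for a 3.9 um pitch
```

That puts the 9.12 µm pitch at about f/13.6 and the 3.9 µm pitch at about f/5.8, consistent with the observations above.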
Perhaps there's no good objective way to do what I'm trying to do here.
I agree, maybe because of the general terminology in the OP. As I have done above, it seems necessary to examine a particular aspect of image quality rather than all of them at once.
Maybe it just comes down to personal preference and specific shooting scenarios.
My preference is the low res 3.4MP sensor because I only view on a 24" 2K monitor and I do not print.

If I need superb detail and excellent optical stabilization (I have shaky hands) out comes the 20MP G9.

--
what you got is not what you saw ...
 
A different take: So after my mom "passed" we were looking for photos. My uncle - the technophile with his ridiculously expensive early digital cameras - had kept every shot he ever took, including a bunch of 0.3 to 1.3 MP images from the late 1980s and early 1990s. These images were better than most of the surviving film, except some 35mm images. Polaroids, 126, 110, Disc - all have degraded significantly. So at the time the digital images were worse in every respect except one: immutability.

As for your real question: Given sensibly "identical" images, it would be best to have a high resolution sensor sampling a horribly defective lens. Digital techniques like deconvolution and AI-based methods are allowing for better and better image reconstruction. All those shots you tossed because they were "blurry" or not quite focused may some day be improved or reconstructed. The more pixels, the more data going into that workflow. So this is one reason never to delete anything. I would argue that a good lens sampled at low resolution captures less information than a high resolution lens capture with fewer pixels.

Now that I have cast my lot, I'll read the responses to see how badly I guessed! LOL!

-- Bob
http://bob-o-rama.smugmug.com -- Photos
http://www.vimeo.com/boborama/videos -- Videos
 
<> I would argue that a good lens sampled at low resolution captures less information than a high resolution lens capture with fewer pixels.
Sorry, I don't fully understand the argument.

With fewer pixels even than "low resolution"?
 
<> I would argue that a good lens sampled at low resolution captures less information than a high resolution lens capture with fewer pixels.
Sorry, I don't fully understand the argument.

With fewer pixels even than "low resolution"?
Ugh, I misspoke. Dyslexia, I don't recommend it. I had intended to say that a higher resolution capture of a poorly performing lens would be better than a low resolution capture of a high performing lens, with the former lending itself to image processing techniques that might work less well with fewer pixels.

Where the OP scenario falls apart is the subjectivity of what "equal" image quality means. But let's say it is a scenario where a reasonable passerby would say "yeah, those look nearly identical", much like getting eyeglasses: the optician asks which is better, "A" or "B", and you cannot decide.

The real question posed is whether modern or future image processing techniques can model and correct optical issues to a greater extent than methods to scale and otherwise enhance a "perfectly captured" lower resolution image. My money is on the former when it comes to numerical methods.
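As a concrete instance of those numerical methods, here is a minimal deconvolution sketch (assumes SciPy and a recent scikit-image; the Gaussian PSF is a stand-in, since real lens PSFs vary across the frame and with color):

```
# Minimal deconvolution sketch (assumes SciPy + a recent scikit-image).
# The Gaussian PSF is a stand-in: real lens PSFs vary across the frame
# and with color, and normally have to be estimated, not assumed known.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import data, restoration

sharp = data.camera().astype(float) / 255.0
blurred = gaussian_filter(sharp, sigma=3.0)   # simulate a poor lens

# Build the matching PSF by blurring a unit impulse, then deconvolve.
impulse = np.zeros((25, 25))
impulse[12, 12] = 1.0
psf = gaussian_filter(impulse, sigma=3.0)
restored = restoration.richardson_lucy(blurred, psf, num_iter=30)
```

It works embarrassingly well here only because the PSF is known exactly; estimating it from the image itself is the hard part, and that is where the AI-based methods come in.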

When it comes to an AI creating a pastiche, I think either image would be equally good. Today you can ask an AI to make a passable painting of the Pope as a turtle, styled like a Vermeer - and that is a lot less input data than either image would provide.


-- Bob
http://bob-o-rama.smugmug.com -- Photos
http://www.vimeo.com/boborama/videos -- Videos
 
For dreamlike images that provoke memories, unsharp images often do better than tack-sharp ones. The brain has more freedom to fill in the missing information. The concept may be about remembering, feeling and longing - or something else.
I like that. It explains why many photos that I considered not the best exposure- or focus-wise are liked by my audience. They evoke a memory of that moment and of what someone said or did to create it.
Perception psychology is part of the scientific universe.

Unsharp images of a well-focused concept can work wonders. Artists have known this for a long time. Happily there are different schools of photography outside the tack-sharp and well-focused world of Ansel Adams. That may be one kind of difference.
 
Nothing illustrates a sharp but fuzzy image better than a SOOC JPEG from a recent smartphone.

The level of sharpening and denoising guarantees a feeling of sharpness when your eyes fall on it. However, if you zoom in or otherwise scrutinize the image, you'd quickly realize you are staring into nothingness.

As a comparison, take the same scene shot through an old prime lens wide open: the details will have low contrast too, but they are clearer to the eye.
 
