Resolution, noise and print size

Ah, so JIMD, or whatever. This magic, unpublished, no-one-can-see-the-formula metric, which we should take at face value like DxO?
You brought JIMD into the discussion. I did not even mention it originally.
You say MTF is useless. I say: suggest an alternative. You reference JIMD, though not by name.
One person saying it seems to be more sensitive than other options in ImageJ is not a strong sales pitch.
It is not just one person using it. I just provided a link to the comments of one person from years ago. But, apparently you were not satisfied.
Well, who else uses it? Anyone worth caring about? Government agencies, major companies, etc.? Many agencies (e.g. NASA) publish lists of the sorts of tools they use, so NDAs would not be a blanket excuse.
E.g. the slanted-edge MTF plugin for ImageJ is not very good,
Who cares about the slanted-edge method when it can't do things of a practical nature that are needed and important? :-)

http://forum.luminous-landscape.com/index.php?topic=107311.msg893452#msg893452

BTW, there are several plugins under the general JIMD umbrella. Another, related one, JISR, measures sharpness between two images as a ratio:

http://forum.luminous-landscape.com/index.php?topic=60585.msg489070#msg489070
so MTF from ImageJ is not a good measure of MTF,
Which has nothing to do with my ImageJ plugin.
and if poor metrics e.g. MTF50 are pulled from the measurement, the sensitivity is even worse.
I'm not talking about MTF50 here. I know you don't like the author of Imatest, or MTF50. But he has created software that a large number of people use to good purpose.
My thoughts on MTF50 are separate to my opinions of Imatest.
In one of the threads you link, you berate someone for questioning JIMD ("I find it interesting that you don't know the internals of JIDM but are quick to jump to a conclusion of 'anyone's guess'."), yet you yourself wrote it and refuse to share anything about how it works, or even what is involved (Fourier analysis, wavelet decomposition, etc.).

So please answer the following:

1. What is JIMD based on? This can be broad and categorical.

2. What are the minimum and maximum values it may take?

3. What difference is significant?

4. Is it linear, logarithmic, power law, or some other scale?

Or, propose a well-documented and peer-reviewed alternative to MTF, and make a case for its superiority. JIMD fails the documented and peer-reviewed criteria. Do not suggest alternatives that also fail them.

You continuously call MTF antiquated and not worth looking at - why? Because it can't be applied to natural images? Simply because it's pretty old? Make a case, don't just state as fact that it's old and not worth using.
 
Imatest say that MTF50 is a good way to test the sharpness of different cameras and lenses. This is something that lens testing websites have taken to heart, as they normally use this measure.

This is useful for those testing lenses and cameras. It is also useful for people working out how large they can print - in good light - from particular lens and camera combinations, while still preserving detail on close viewing. Imatest specifically provide guidance on the issue of print size here, suggesting for example that a resolution of 150 line widths per inch on the print is a good standard to aim for: http://www.imatest.com/docs/sharpness/
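As a back-of-the-envelope illustration of that guidance (a sketch only: the 150 LW/inch target is Imatest's example above, and the 2400 LW/PH MTF50 figure is made up for illustration, not a measurement of any particular camera):

```python
# Largest print that still meets a target print resolution, given a
# camera/lens MTF50 expressed in line widths per picture height (LW/PH).
def max_print_height_inches(mtf50_lw_per_ph, target_lw_per_inch=150):
    """Print height at which the measured MTF50 just meets the target."""
    return mtf50_lw_per_ph / target_lw_per_inch

# Hypothetical system measuring 2400 LW/PH at MTF50:
print(max_print_height_inches(2400))   # -> 16.0 inches of print height
```

On that criterion, prints taller than about 16 inches from such a system would start to look soft on very close inspection.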

In reality, however, the resolution of fine detail is greatly influenced by noise. A large number of my photographs (and I am sure I am not alone in this) are taken in low-light conditions. In such conditions, I cannot print nearly as large as I can for photographs taken in good light, as the prints are spoiled either by visible noise or by detail smudged by noise reduction.

Based on my experience (Fuji X-T2), I run out of resolution from my camera and lenses when printing 24 inches wide, however good my technique and however favourable the conditions. The results can still look good from a distance, but don't stand up to very close inspection. That seems to match the Imatest recommendations. However, to my surprise, noise doesn't seem to be a particular issue at this size up to ISO 800, despite noise being clearly visible when pixel peeping at that setting.

At ISO 3200 and 6400, on the other hand - just two or three stops higher - I can only make decent medium-sized prints before noise becomes the limiting factor; the fine detail is just obscured. Prints at ISO 12800 or more only look good at very small sizes - so small that I rarely bother to use this setting at all.

However, this is subjective stuff. What I cannot find is any way of relating resolution, noise levels and print size objectively.

Imatest mention noise and noise reduction a lot on their site, but mainly in the context of preventing it from influencing the test results. I learnt today (from another thread) that you can't work out the effect of noise by rerunning MTF tests in low-light conditions, as MTF50 is not significantly affected by noise, though noise does make it harder to get an accurate reading: http://mtfmapper.blogspot.co.uk/2013/01/effects-of-iso-on-mtf50-measurements.htm

Is there a way to conceptualise the impact of noise on resolution and print size? Or is it just a question of personal experience?

Is noise just one factor affecting the resolution of detail, or does it act as an absolute ceiling on resolution? In my experience it seems to have a slight impact up to a certain point and a rapidly increasing impact thereafter, to the point where noise is the only important factor, but I am interested in why it behaves this way.

Will a higher-pixel-count camera still resolve more than a lower-pixel-count camera with the same sensor size and lens when light is sufficiently low? Or does the per-area noise 'drown out' the fine detail to the same extent on each?

Is there a point at which noise means high resolution lenses cease to be an advantage?
The difficult bit is that the impact of noise on detail is progressive. It starts on small, low-contrast detail that's already at the limit of visibility, so MTF50 is not going to be very different at ISO 200 or ISO 400.

But if you SHARPEN the image, you rely on the low-contrast detail above MTF50 to increase apparent sharpness, and that detail is already mostly buried in the noise. So you end up sharpening noise.

Resolution is determined by pixel count, but the ISO limit is determined by sensor size. In other words, a 32MP image has twice as many pixels as a 16MP image, but if we print at 36x24 instead of 24x16, we roughly double the print area and increase noise visibility by about a stop. If you decide your full-frame noise limit is ISO 400 at 24x16, then it is ISO 200 at 36x24.
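As a rough sketch of that scaling rule (taking the ISO 400 at 24x16 reference point from the example above; real-world limits are of course subjective):

```python
# Sketch of the "acceptable ISO scales inversely with print area" rule.
# Reference point: a (subjective) full-frame limit of ISO 400 at 24x16.
def iso_limit(print_w, print_h, ref_w=24, ref_h=16, ref_iso=400):
    """Acceptable ISO at a given print size, holding noise visibility constant."""
    area_ratio = (print_w * print_h) / (ref_w * ref_h)
    return ref_iso / area_ratio

print(iso_limit(24, 16))   # 400.0, the reference itself
print(iso_limit(36, 24))   # ~178; the post rounds the 2.25x area to "a stop", i.e. ISO 200
```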
It's a good point that noise scales with print area, whereas resolution is normally measured in a linear fashion. That would explain my experience that ISO 800 is OK at 24 inches wide, but ISO 3200 needs to be about 12 inches wide (two stops less light, a quarter of the print area).

Also, it would explain why there comes a point where noise simply ceases to be an issue at all, and only resolution matters.

Your figures are much more conservative than mine, but based on my rule of thumb that ISO 800 is OK at 24 inches wide, ISO 200 images could be printed 48 inches wide and show similar noise levels. At that point, however, resolution is limiting the print size, not noise.
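That rule of thumb can be written down directly; a minimal sketch using the ISO 800 / 24-inch reference point quoted above (the numbers are the poster's subjective limits, not measurements):

```python
import math

# Two stops more ISO -> a quarter of the print area -> half the print width.
# Reference point: ISO 800 judged acceptable at 24 inches wide.
def max_print_width(iso, ref_iso=800, ref_width=24):
    """Widest acceptable print, keeping noise visibility per unit of print area constant."""
    return ref_width * math.sqrt(ref_iso / iso)

print(max_print_width(800))    # 24.0 inches
print(max_print_width(3200))   # 12.0 inches (two stops up, half the width)
print(max_print_width(200))    # 48.0 inches (two stops down; resolution limits first)
```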
Or, you could increase linear resolution by about 1.4x (a factor of √2) and print at 24x16 to improve sharpness, but to see the full benefit of the extra sharpness you would have to reduce ISO by one stop. Once again, you are limited to ISO 200.

The easiest thing is to determine your own subjective limit - say APS-C at ISO 200 for a 24x16 print from 16MP - and adjust accordingly.
That's more or less what I've done. It's interesting that there doesn't seem to be much by way of objective testing of the effect of noise on resolution, however.
Yes, this is true, but it's also very hard to test objectively. The perception of sharpness is complicated. Sometimes noise makes prints look sharper ;-)
 
You continuously call MTF antiquated and not worth looking at - why? Because it can't be applied to natural images? Simply because it's pretty old? Make a case, don't just state as fact that it's old and not worth using.
I only said 'antiquated'. I did not say 'not worth looking at'. That is your imagination.

I think I have stated the case a number of times before that MTF (slanted edge, or whichever variant you pick) is not always easy and straightforward to apply to natural images. The thing that got me looking into alternatives was the need for an automated method with no human intervention at all: say I have a bunch of images, possibly of different dimensions, resolutions, etc., and I just want to sort them by sharpness or detail without involving myself in the process. That was the motivation.

The above-mentioned need is a photographic use case that comes up in these forums a number of times, as opposed to very lab-like testing of MTFs. I think there is even a thread going on right now about how to compare images of different sizes - something to that effect.

Also, on a separate but kind of related note, the Fourier basis from which the MTF concept comes may not always be the best for many tasks. An overcomplete basis, which could include the Fourier basis as a subset, could be used instead. Many domains, say compressed sensing, use such concepts. And I have mentioned before that in that context, IMHO, a newer meaning of 'MTF' would need to be understood.

--
Dj Joofa
http://www.djjoofa.com
 
Well, I think you answered your own question. Noise decreases resolution: if your image has nothing but noise, then it has no signal, and hence no resolution.

In my experience, when I'm shooting with extreme underexposure, I can shoot a lousy lens wide open with hardly any depth of field, and the final image will hardly improve with better optics. But sometimes a gritty look is desirable.

For sure, you could apply the MTF methodology under less-than-pristine conditions, which I think is counterproductive, as MTF is usually used as a means of determining what's possible under the best of circumstances. If you are looking for the greatest sharpness for making a large print, then by all means use the best technique.

Doing an MTF measurement under noisy conditions is problematic, although it can be done. One problem is that the signal-to-noise ratio changes with tone, so in principle you'll have less resolution in the shadows than in the highlights, although the usual methods of measuring MTF don't directly capture this variation. Also, you may find that a high-key image at a particular ISO setting will look fine while a low-key image looks terribly noisy.

Another problem is that noise varies with color, and this depends on your camera model, white balance, lighting conditions, subject, camera profile, raw processing settings, and raw processor; in other words, it is excessively complicated to measure definitively.

Noise reduction can reduce resolution, and some methods generate false detail, which will give misleading results.

But measuring all this would require changing the resolution measurement methodology and metrics. The popular slanted-edge method of estimating resolution and contrast uses black-and-white squares, but teasing out how a camera system responds under varying conditions within an image would perhaps require slanted edges of a variety of colors and tones, which would lead to information overload as well as results that are only repeatable for very specific conditions, and so would not be generally useful or reliable.
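For readers who haven't met it, the slanted-edge idea referred to above is roughly: project a slightly tilted edge onto the edge normal to build an oversampled edge-spread function, differentiate it into a line-spread function, and Fourier-transform that into an MTF estimate. A heavily simplified sketch follows; it assumes the edge angle is already known and the region contains a single straight, near-vertical edge, whereas real tools (Imatest, MTF Mapper, sfrmat) fit the edge and apply corrections omitted here:

```python
import numpy as np

def slanted_edge_mtf(roi, edge_angle_deg, oversample=4):
    """Toy slanted-edge MTF estimate from a 2-D ROI containing one near-vertical edge."""
    rows, cols = roi.shape
    slope = np.tan(np.radians(edge_angle_deg))
    x = np.arange(cols)
    # Signed distance of every pixel from the (assumed straight) edge, in pixels.
    dist = np.concatenate([x - slope * r for r in range(rows)])
    vals = roi.astype(float).ravel()
    # Bin the samples into an oversampled edge-spread function (ESF).
    bins = np.round(dist * oversample).astype(int)
    bins -= bins.min()
    counts = np.bincount(bins)
    esf = np.bincount(bins, weights=vals) / np.maximum(counts, 1)
    lsf = np.diff(esf)                     # line-spread function
    lsf = lsf * np.hanning(lsf.size)       # window to tame noise at the tails
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]                    # normalised so MTF(0) = 1
```

The point of the tilt is simply that it lets many rows sample the edge at slightly different sub-pixel phases, which is what makes the oversampled ESF possible.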

Generally speaking, more megapixels (even with inferior lenses) and better lens sharpness (even with low-megapixel cameras) are usually desirable for many, but not all, subjects, and there does not seem to be an obvious upper limit on either, although the law of diminishing returns does apply.
 
You continuously call MTF antiquated and not worth looking at - why? Because it can't be applied to natural images? Simply because it's pretty old? Make a case, don't just state as fact that it's old and not worth using.
I only said 'antiquated'. I did not say 'not worth looking at'. That is your imagination.
Surely you understand the implication of calling something "antiquated."
I think I have stated the case a number of times before that MTF (slanted edge, or whichever variant you pick) is not always easy and straightforward to apply to natural images.
Well, pixelated imaging systems are not shift invariant, so the use of LSI theory for characterizing them is a bit shaky to begin with when the signal's periodicity approaches the pixel spacing. Still, there have been numerous studies over the last 40 or so years correlating MTF with images, by both machine and human vision, and they have all found that it describes the performance of a system well.
The thing that got me looking into alternatives was the need for an automated method with no human intervention at all: say I have a bunch of images, possibly of different dimensions, resolutions, etc., and I just want to sort them by sharpness or detail without involving myself in the process. That was the motivation.
Lots of ways to do that: entropy, just looking at how peaky the histogram is, etc. Is JIMD based on any of those? Or is it just a total black box?
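For illustration only, one such simple, fully automatic measure is the variance of a Laplacian-filtered image; this is a generic sketch, not a claim about what JIMD computes internally, and comparing images of different sizes fairly would still require some normalisation, which is the hard part:

```python
import numpy as np
from PIL import Image

def laplacian_variance(path):
    """Crude detail/sharpness score: variance of a 4-neighbour Laplacian."""
    img = np.asarray(Image.open(path).convert("L"), dtype=float)
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return lap.var()   # note: np.roll wraps at the borders; fine for a sketch

def sort_by_detail(paths):
    """Sort image files from most to least apparent detail, unattended."""
    return sorted(paths, key=laplacian_variance, reverse=True)
```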
The above-mentioned need is a photographic use case that comes up in these forums a number of times, as opposed to very lab-like testing of MTFs. I think there is even a thread going on right now about how to compare images of different sizes - something to that effect.
I think people would find that if they took all the time and energy they spent deliberately trying to avoid learning about MTF with any nuance and devoted that to gaining more detailed understanding of it, they would not be so "resistive" to it.
Also, on a separate but kind of related note, the Fourier basis from which the MTF concept comes may not always be the best for many tasks.
Sure. It's a very, very good one for optics though. To suggest otherwise without detailed, rigorous critique is flying in the face of 70 or so years of fine optical engineering.
An overcomplete basis, which could include the Fourier basis as a subset, could be used instead.
Why add complexity? What are you trying to achieve?
Many domains, say compressed sensing, use such concepts.
Oh boy, more red herrings to lose the point of the thread in!
 
And unless you photograph finely spaced sine-wave gratings, your object probably doesn't have a Fourier spectrum with a lot of energy at the camera's Nyquist frequency anyway, so even if the system transfer function there is, in an ideal case, 0.6, when the object spectrum is only 0.05 that 0.6 'transmission' doesn't much matter.
Would any sharp edge not produce significant energy at Nyquist?
 
Sure, but put some defocus in the image and those get obliterated pretty quickly. "Natural" images, which are the focus of this thread, are rarely as flat and as well focused as an MTF measurement scene. And the sharpness of the edge itself is often questionable, unless you're a commercial knife photographer :)
 
How about you develop software that can take natural images of any size, resolution, aspect ratio, etc., and sort them by a measure of detail using MTF, without any human intervention at all?

Until then, no hypothetical musings. OK.

--
Dj Joofa
http://www.djjoofa.com
 
How about you develop software that can take natural images of any size, resolution, aspect ratio, etc., and sort them by a measure of detail using MTF, without any human intervention at all?

Until then, no hypothetical musings. OK.
That isn't even the point of this thread? It's not the point of MTF either?

Now, contextually, your argument reads like "MTF is antiquated, and can't do things it wasn't designed to do, so it's bad and best to not talk about it."

You can use MTF-like analysis though. Just look at the Fourier spectrum of the images and find the ones that have the most energy at high frequencies. They will contain the most detail by the conventional understanding of the word "detail."
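A minimal sketch of that suggestion; the 0.25 cycles/pixel cutoff is arbitrary, the file names are placeholders, and a practical version would need to handle differing image sizes and brightness more carefully:

```python
import numpy as np
from PIL import Image

def high_freq_energy_fraction(path, cutoff=0.25):
    """Fraction of (mean-removed) spectral energy above a radial frequency cutoff."""
    img = np.asarray(Image.open(path).convert("L"), dtype=float)
    img -= img.mean()                            # drop the DC term
    power = np.abs(np.fft.fft2(img)) ** 2
    fy = np.fft.fftfreq(img.shape[0])[:, None]   # cycles/pixel, vertical
    fx = np.fft.fftfreq(img.shape[1])[None, :]   # cycles/pixel, horizontal
    radial = np.hypot(fx, fy)
    return power[radial > cutoff].sum() / power.sum()

# Placeholder file names, ranked most detailed first:
ranked = sorted(["img_a.jpg", "img_b.jpg"], key=high_freq_energy_fraction, reverse=True)
```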
 
I never said not to talk about MTF. I have just pointed out a case where it is not applicable: figuring out detail in natural images in an automatic process, without human involvement. That's all. The rest is just your projection and imagination. And that process is a very necessary and useful one for many photographic use cases, for which MTF just can't be used directly without a lot of trouble. See, MTF is OK for lab-like measurement of lenses. But sorting natural images of various sizes, form factors, etc. automatically? Well, not so much...

And I can't be against MTF in general, as you are making it out to be. It is just a transfer function. Heck, I'm an EE, and I have been playing with transfer functions of various kinds all my life, MTF or otherwise.
 
Your first words in this thread:

MTF is an outdated and old concept. Not a modern one. So lets just not talk about it.
 
Does it say not to talk about it ever, as in the projection you seem to be implying here?

This is all in the context of image resolution and detail measurement in an automated process. Why is that so difficult to understand?

--
Dj Joofa
http://www.djjoofa.com
 
However, I'm still not sure that more pixels have the same effect (for a given sensor size). After all, more pixels then means smaller pixels, and smaller pixels are more susceptible to noise. You can offset that noise by averaging across the greater number of pixels in a given sensor area, but you then end up with the same amount of noise for a given sensor area and given output size. Once noise becomes the limiting factor on print size, I would have thought you can't print larger from a sensor with more pixels but the same physical size: the limitation is on a per-sensor-area basis, rather than a per-pixel basis.
Noise isn't uniform across an image; higher relative amounts of noise are going to be found in the shadows, and so a higher resolution camera may very well capture more acceptable details in the highlight areas even though its shadow detail is a noisy mess. For me, that was often the case with small-sensor early digital cameras: I simply did some noise reduction in the shadows or did HDR to capture less noisy shadow details.
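For the shot-noise component at least, the per-sensor-area intuition in the question is easy to check with a tiny idealised simulation (read noise, Bayer sampling and the shadow/highlight variation described above are all ignored here):

```python
import numpy as np

# Same light, same sensor area: one sensor with N big pixels vs one with 4N
# quarter-size pixels, the latter binned 2x2 back to N output pixels.
rng = np.random.default_rng(0)
photons_per_big_pixel = 100            # arbitrary low-light exposure

big = rng.poisson(photons_per_big_pixel, size=(1000, 1000))
small = rng.poisson(photons_per_big_pixel / 4, size=(2000, 2000))
binned = small.reshape(1000, 2, 1000, 2).sum(axis=(1, 3))   # 2x2 binning

print(big.mean() / big.std())        # SNR per output pixel, low-MP sensor (~10)
print(binned.mean() / binned.std())  # essentially the same for the high-MP sensor
```

At matched output size the per-area noise is the same; whether the extra pixels still buy visible detail then depends on where the finest detail sits relative to the noise, which is the subjective part.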

--
http://therefractedlight.blogspot.com
 
Yes, that makes sense. Thanks for the input.
 
