What is "detail"?

In this and other fora, we often read that item A captures better detail than item B. But detail is rarely qualified or even measured (except in my posts :-D ).

It turns out that the subject has been explained by none other than Don Williams of Kodak in a paper published here.

A précis transcript:

Two items are required for defining the MTF: (1) a measure of spatial detail, called frequency, and (2) a fundamental measure for determining how that detail is preserved, called modulation transfer [contrast ratio].

1) Spatial detail can be measured by the spatial frequency content of a given feature. ... The higher the frequency, the greater the detail, the greater the number of cycles per unit distance, and the more closely spaced the lines become.

2) ... how well the input modulation [contrast] is preserved after being imaged or in some way acted upon ...

Note that "greater" detail is not necessarily better. Because two parameters are involved, not one, both MTF and spatial frequency must be considered when judging the goodness of detail.

It is no coincidence that one measure of image quality is the so-called MTF50, which fixes one of the two parameters at a 50% contrast ratio. It tells us, somewhat arbitrarily, that image contrast equal to half the scene contrast is "good enough". Therefore, we could say that if the spatial frequency at which MTF50 occurs for item A is higher than that for item B, then item A is "better at capturing detail".

However, fixing one parameter and considering only one value of spatial frequency, out of the continuum from zero to the sensor's maximum sampling rate, cannot tell the whole story.
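As a rough illustration of how an MTF50 figure is read off a measured curve, here is a minimal sketch; the Gaussian-shaped curve and the helper name `mtf50` are stand-ins of my own, not from any particular tool:

```python
# Sketch: locating MTF50 on a sampled MTF curve by linear interpolation.
# The Gaussian-shaped curve below is a toy stand-in for real measured data.
import math

def mtf50(freqs, mtf):
    """Return the frequency (cy/px) at which MTF first falls to 0.5."""
    for i in range(1, len(mtf)):
        if mtf[i] <= 0.5:
            # linear interpolation between the two bracketing samples
            f0, f1 = freqs[i - 1], freqs[i]
            m0, m1 = mtf[i - 1], mtf[i]
            return f0 + (0.5 - m0) * (f1 - f0) / (m1 - m0)
    return None  # curve never drops to 50%

# toy MTF: exp(-(f/0.3)^2), sampled from 0 to Nyquist (0.5 cy/px)
freqs = [i / 100 for i in range(51)]
mtf = [math.exp(-(f / 0.3) ** 2) for f in freqs]
print(mtf50(freqs, mtf))  # ~0.25 cy/px
```

The point of the sketch is that MTF50 is a single number extracted from a whole curve; two very different curve shapes can share the same MTF50.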

I continue to believe that the shape of an MTF vs. frequency curve contributes greatly to the appearance of detail in an image and that an MTF50 up there close to the Nyquist frequency of the sensor is not necessarily a Good Thing.
This is fine for B&W but doesn't work for color sensors like Bayer since they interpolate color.
Mike, which part of the original post doesn't work for color sensors like Bayer?

Some time ago I tested the MTF of the SD9 and Bayer G9 where the color MTFs were shown in addition to the grayscale Y' response:

[MTF charts for the SD9 and G9; best viewed at original size.]

Does anything there relate to MTF somehow "failing to work" for the Bayer?
Just know what my tests revealed. Try it yourself and see.
 
A Bayer can measure a higher MTF than a Foveon, but the Foveon can still "see" more colors than a Bayer can. I've checked this myself with an SD15 vs. NEX-7 comparison shot of the old USAF chart, where the 24MP Bayer had a much higher LMP reading; yet when photographing a flower, the Foveon could see subtle adjacent colors that were totally missing in the Bayer shot. For lack of a better way to put it, you can say that Bayer sensors are partially color blind.
If you still have those images, I'd love to compare them by extracting the hue maps and comparing for the degree of pixelation (of hues). Hopefully they are 16-bit??
Unfortunately, they're on an HD that died some years ago (along with all of my original SD10 and SD14 raws, before I understood the importance of having a backup drive). But they were all 16-bit TIFFs.

And I also made 16x24 (center-cropped) prints of both for comparison, and the SD15 prints still showed more color detail and looked better than the NEX-7 prints.

Tests can be good, but I'll always trust my eyes first.
 
Just know what my tests revealed. Try it yourself and see.
Gee thanks. I'll have to go and find "a flower" and see what happens.

--
Pedantry is hard work, but someone has to do it ...
 
If you still have those images, I'd love to compare them by extracting the hue maps and comparing for the degree of pixelation (of hues). Hopefully they are 16-bit??
Unfortunately, they're on an HD that died some years ago (along with all of my original SD10 and SD14 raws, before I understood the importance of having a backup drive).
You have my sympathy; been there and done that ...
And I also made 16x24 (center-cropped) prints of both for comparison, and the SD15 prints still showed more color detail and looked better than the NEX-7 prints.
OK.
Tests can be good, but I'll always trust my eyes first.
My eyes are pretty poor, which is why I prefer numbers, graphs, just-noticeable differences and so forth.

 
I continue to believe that the shape of an MTF vs. frequency curve contributes greatly to the appearance of detail in an image and that an MTF50 up there close to the Nyquist frequency of the sensor is not necessarily a Good Thing.
I agree!

This might be why Foveon images look so much more real to me, pixel for pixel.

In 1:1:1 Foveon, MTF50 at Nyquist would be due entirely to the lens, since the sensor has complete isolation between adjacent pixels.

In Bayer, a given "sensel" is used in the determination of data that makes up several adjacent "pixels" (of the same dimensions as the sensels), so the sensor itself degrades the MTF, in addition to the lens. Even downsizing cannot eliminate this effect, since it is "baked-in" at the original size in the demosaicing in the first place.

We have seen that modifying Bayer demosaicing can improve the MTF, thanks to a recent experiment by Ted:

https://www.dpreview.com/forums/post/65000409
In this experiment, sensel data is used only once and output resolution is an image with 1/4 the usual number of pixels (no sharing of sensel data as in regular Bayer demosaic processes).

This also applies to color resolution in the output image. The experimental image has color resolution equal to Foveon.
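The "use each sensel only once" idea can be sketched like this: collapse every RGGB quad to a single RGB pixel (averaging the two greens), giving an output with 1/4 the pixels and no sharing of sensel data between output pixels. This is a simplification of the principle, not necessarily Ted's exact procedure:

```python
# Sketch: "superpixel" demosaic - each RGGB quad becomes one RGB pixel,
# so every sensel is used exactly once and the output has 1/4 the pixels.
# (An illustration of the idea; not necessarily Ted's exact method.)

def superpixel_demosaic(bayer):
    """bayer: 2D list of raw values in RGGB layout; returns 2D list of (R,G,B)."""
    h, w = len(bayer), len(bayer[0])
    out = []
    for y in range(0, h - 1, 2):
        row = []
        for x in range(0, w - 1, 2):
            r = bayer[y][x]
            g = (bayer[y][x + 1] + bayer[y + 1][x]) / 2  # average the two greens
            b = bayer[y + 1][x + 1]
            row.append((r, g, b))
        out.append(row)
    return out

raw = [[10, 20, 10, 20],
       [30, 40, 30, 40],
       [10, 20, 10, 20],
       [30, 40, 30, 40]]
print(superpixel_demosaic(raw))  # 2x2 output; every pixel is (10, 25.0, 40)
```

Because no output pixel borrows data from a neighbouring quad, luma and chroma resolution coincide, which is the Foveon-like property being discussed.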

I'm trying to understand if lens limitations and Bayer sensor limitations can be told apart, and I guess this isn't possible.

Maybe the sensor performance of Foveon would be like a brick wall filter, and remain consistently very high, then drop off or alias quickly above Nyquist. On the other hand a Bayer response would remain high until a point where detail is several pixels in diameter, then drop off or alias more slowly as Nyquist is passed. And, this would be quite visible in images, I think.

From my own experience, downsizing a regular Bayer or other CFA image will improve the appearance of (image quality of) small details, but can't quite match Foveon rendering. I still wish I could routinely demosaic my Bayer images as Ted did above.

However, downsizing a regular CFA image to 1/2 or (better) 1/4 original pixels makes the image "good enough" for me. And, I've basically left my Sigmas behind now, because of this.

I did a test a good while back comparing Bayer and Foveon, using a Fuji X-T100 with 23mm F1.4 lens and an sd Quattro with 24mm F1.4 lens. Downsizing both to 1/4 original pixels gave me pretty much the same level of detail (basically "good enough" for me). But the greater dynamic range of the Bayer sealed the deal in terms of overall image quality, in spite of its smaller range of color differentiation.

See: https://www.dpreview.com/galleries/1438043515/albums/x-t100-vs-sd-quattro
 
Tests can be good, but I'll always trust my eyes first.
Of course, correlation doesn't prove causation... it could be that the supposed subtle colour rendition of the SD15 was an artefact that just happens to look good to you. How could you tell the difference?
 
Pixel shift might give you the best of both worlds as it removes demosaicking.

I don't actually understand the pixel shift mode in my G9. Rather than four shots removing the mosaic and producing a Foveon-like image with the same 20MP pixel count as the standard file, it outputs an 80MP file based on eight shots. I assume that is some combination of mosaic removal and super-resolution techniques, but it's not clear to me what is going on, especially as the output is a "raw" file (which can't strictly be a correct description, but it goes into raw converters and appears to be treated as a standard raw).
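The basic 4-shot case can at least be modelled: the sensor is displaced by one sensel pitch between exposures, so every scene position is sampled through an R, a B, and two G filters, yielding full RGB per pixel with no demosaic interpolation. This is a toy model of the principle only, not Panasonic's actual 8-shot pipeline:

```python
# Toy model of 4-shot pixel shift: each scene position is sampled through
# R, G, G and B filters across the four exposures, so no interpolation
# between neighbouring sites is needed. (Not Panasonic's real pipeline.)

CFA = [["R", "G"], ["G", "B"]]  # RGGB pattern

def combine_pixel_shift(shots):
    """shots: list of (dy, dx, frame); frame[y][x] is the raw value recorded
    at scene position (y, x) with the sensor shifted by (dy, dx) sensels."""
    h, w = len(shots[0][2]), len(shots[0][2][0])
    out = [[{} for _ in range(w)] for _ in range(h)]
    for dy, dx, frame in shots:
        for y in range(h):
            for x in range(w):
                color = CFA[(y + dy) % 2][(x + dx) % 2]
                d = out[y][x]
                # two of the four shots contribute G at each site; average them
                d[color] = (d[color] + frame[y][x]) / 2 if color in d else frame[y][x]
    return [[(d["R"], d["G"], d["B"]) for d in row] for row in out]

# a flat gray scene: every filter reads 100
flat = [[100, 100], [100, 100]]
shots = [(0, 0, flat), (0, 1, flat), (1, 0, flat), (1, 1, flat)]
print(combine_pixel_shift(shots))  # every pixel -> (100, 100, 100)
```

On this model the output pixel count equals the sensor's, which is why the G9's 80MP output must involve additional half-pixel offsets for super-resolution on top of the mosaic removal.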

--
DPReview gallery: https://www.dpreview.com/galleries/0286305481
Website: http://www.whisperingcat.co.uk/ (2018 - website revived!)
Flickr: http://www.flickr.com/photos/davidmillier/ (very old!)
 
I'm trying to understand if lens limitations and Bayer sensor limitations can be told apart, and I guess this isn't possible.
Use a test original that shows coloured moire effects. Those colours are not caused by the lens but by the sensor.

A Foveon photo will show only monochrome moire.
Don
 
Pixel shift might give you the best of both worlds as it removes demosaicking.
But do you get the fine detail on fast-moving subjects, such as flowers, that you can get with a Foveon sensor?
Don
 
My G9 has something called Mode 2 which provides motion compensation. It worked when a jogger ran across my frame. I'm not sure if the price is a loss of detail in the moving area but it seems likely.
 
I would say that "detail" is the highest spatial frequency that gives 100% transfer, before the function starts to drop down to 50% and below.

That is, 100% of the maximum amplitude found lower down in the modulation frequency range.

Don
An interesting measure.

However, most of the curves I see drop below 100% immediately above zero frequency. For example:

Image courtesy of Bob Atkins: [graph of MTF vs. spatial frequency]

In some curves, usually for an over-sharpened image, the MTF rises above 100% then falls back below 100% at some frequency - so there could be two values for "detail" in such an image.

Still, the basis of the idea is sound enough because, as implied, some folks prefer high contrast transfer at low detail. Hence my comment about the shape of the curve.
Perhaps I should have said 96%.

These MTF curves are designed to show the behaviour at high spatial frequencies. The X axis is a bit misleading in that the "ordinary" patches in the image (things that would not be called "detail" or "resolution") are all crowded together at the left hand side.

A graph with a log X axis would be more like what we see when looking at a whole photo, but it would be less good for showing the response at high frequencies.

Don Cox
 
Perhaps I should have said 96%.
Perhaps.
A graph with a log X axis would be more like what we see when looking at a whole photo, but it would be less good for showing the response at high frequencies.
A bit like some histograms where a log X-axis is available ...
Which is probably why Granger et al. came up with the Subjective Quality Factor in the 1970s, combining human vision with MTF. It is well explained by Bob Atkins here:

http://www.bobatkins.com/photography/technical/mtf/mtf4.html

 
Yes, that graph makes my point.

I think you need two graphs to see what's going on -- one with a linear X scale for fine detail, and one with a log scale to put it in proportion (and to show the effect of lens flare).

Don
 
Fortunately, QuickMTF also gives the linear frequency value to several significant figures. So, although it looks a bit cramped on the graph, your 96% comes out at 0.1 cy/px or 388.8 LPH.

Even 99% MTF shows as 0.025 cy/px or 97.2 LPH.
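For anyone checking those numbers, the conversion is simply lines per picture height = cycles/px × 2 line widths per cycle × picture height in pixels; a 1944 px picture height (inferred from the quoted figures, not stated by QuickMTF) reproduces both values:

```python
# Conversion between spatial frequency units:
# LPH (line widths per picture height) = cy/px * 2 (lines per cycle) * height in px.
# The 1944 px height is inferred from the figures quoted above.

def cy_per_px_to_lph(cy_per_px, picture_height_px):
    return cy_per_px * 2 * picture_height_px

print(cy_per_px_to_lph(0.1, 1944))    # 388.8
print(cy_per_px_to_lph(0.025, 1944))  # 97.2
```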



 
Use a test original that shows coloured moire effects. Those colours are not caused by the lens but by the sensor.

A Foveon photo will show only monochrome moire.
Yes, I forgot this. In real-life shooting, though, I have seldom seen color moire; I think it is very rare.

The other part of this is that most of my shooting these days is with Fuji X-trans sensors, which resist color moire. In the past I owned a Fuji X-T100 (Bayer sensor), and months of use showed me Bayer moire only very seldom, maybe in 1% of my shots if I knew where to look for it.

Rainbow moire is maybe the most extreme case of color inaccuracy due to the Bayer CFA. Lesser instances of the moire effect might abound in almost every CFA image, reducing color accuracy and texture detail while being otherwise invisible.
 
I would say that "detail" is the highest spatial frequency that gives 100% transfer, before the function starts to drop down to 50% and below.

That is, 100% of the maximum amplitude found lower down in the modulation frequency range.

Don
An interesting measure.

However, most of the curves I see drop below 100% immediately above zero frequency. For example:

[MTF curve graph, image courtesy of Bob Atkins]

In some curves, usually for an over-sharpened image, the MTF rises above 100% then falls back below 100% at some frequency - so there could be two values for "detail" in such an image.
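That reading of "detail" can be sketched against sampled curve data. Below, detail is taken as the highest frequency at which the MTF is still within a small tolerance of the curve's own maximum; the two curves (one plain, one over-sharpened so it peaks above 1.0) are made-up illustrative samples, not measured data.

```python
def detail_frequency(freqs, mtf_values, tol=0.01):
    """Highest frequency at which the MTF is still within `tol` of the
    curve's own maximum (the '100% of the maximum amplitude' reading).

    Scans from the high-frequency end downward, so an over-sharpened
    curve that rises above 1.0 reports the outermost near-peak point.
    """
    peak = max(mtf_values)
    for f, m in zip(reversed(freqs), reversed(mtf_values)):
        if m >= peak - tol:
            return f
    return freqs[0]

freqs     = [0.0, 0.05, 0.10, 0.15, 0.20, 0.30, 0.40, 0.50]
plain     = [1.0, 0.99, 0.96, 0.90, 0.80, 0.55, 0.30, 0.10]
sharpened = [1.0, 1.05, 1.10, 1.05, 0.95, 0.60, 0.30, 0.08]

print(detail_frequency(freqs, plain))      # 0.05: drops off almost at once
print(detail_frequency(freqs, sharpened))  # 0.1: the boosted peak sits further out
```

Measuring against the curve's own maximum rather than a fixed 1.0 sidesteps the two-crossings ambiguity: the over-sharpened curve then yields a single (outward-shifted) answer.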

Somewhat belatedly, this page from Jack Hogan agrees with your feeling that lower frequencies are more important, especially when viewing an image at a proper distance:

https://www.strollswithmydog.com/mtf50-perceived-sharpness/

See under heading "MTF50 not relevant at Standard Distance" ...

--
Pedantry is hard work, but someone has to do it ...
 
It's mainly seen on fabrics such as clothes worn at weddings.

Don
 
The easiest way to provoke and test colour moire, IMO, is to shoot a cityscape with lots of modern buildings: railings, window frames, blinds, and other repeating man-made objects.

A wide scene will almost certainly show moire and luminance aliasing all over the place. It's a subject with so many elements at different sizes and orientations that it is inevitable.
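The luminance half of that aliasing is easy to demonstrate: a pattern finer than the Nyquist limit, sampled once per pixel, produces exactly the same samples (up to sign) as a much coarser pattern. Colour moire is the same effect occurring independently in the sparsely sampled colour planes of a CFA. The frequencies below are arbitrary illustrative values.

```python
import math

def sample(freq, n_samples):
    """Sample a sinusoid of `freq` cycles/pixel at one sample per pixel."""
    return [math.sin(2 * math.pi * freq * n) for n in range(n_samples)]

# One sample per pixel means the Nyquist limit is 0.5 cycles/pixel.
fine  = sample(0.9, 16)  # detail finer than Nyquist (0.9 cy/px)
alias = sample(0.1, 16)  # the low frequency it masquerades as

# Above Nyquist, the samples are indistinguishable (up to sign) from a
# much coarser pattern -- the luminance analogue of moire:
for a, b in zip(fine, alias):
    assert abs(a + b) < 1e-9  # each fine sample equals minus the alias sample
print("0.9 cy/px sampled at 1 sample/px aliases to 0.1 cy/px")
```

A wide cityscape simply supplies thousands of such above-Nyquist patterns at once, which is why some of them inevitably land on an aliasing frequency.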
 
I took a DP2 (not M) street shot and ran it through SPP4 with no adjustments. As expected there was some sharpening [by SPP] of the original slightly soft raw image.

I then exported the same raw from RawDigger as an RGB and edited the Contrast by Detail Levels (only) in RawTherapee, so as to emphasize the Granger detail range while leaving the 1-pixel level untouched. This accords with Don's mention of MTF values at lower frequencies.

The Result: [two comparison images, best viewed at original size]

Comments/questions welcome.

--
Pedantry is hard work, but someone has to do it ...
 
As a Sigma camera user, I like the detail provided in the Sigma camera images. But the problem is that the proper level of "detail" complements (in a good way) the image concerned. It is meaningless on its own, or worse still, an utter distraction unless it primarily supports "the image" itself. It's kind of fun to take a picture of a forest scene and then examine the leaves, etc. Fun photo images are... fun. Are they great? Few images are great. Detail is at most only part of what goes into making a good image.

Gate Bois (aka Billy Noel) makes excellent images with great detail. He works hard to get that right. The images must have the underlying/supporting detail, and be interesting images too. Frankly, I think GB could do (just) as well without the detail from Sigma. I'm still glad he likes and uses Sigma cameras. It makes me feel better about my choice of Sigma cameras.

In my view detail is related to sharpening, in the sense that having excessive (noticeable) sharpening/detail is not helpful to the image, as a whole. The goal of the photo image is to be a "thing" in itself. It is not the thing: It is not the sunset or the chair or the pretty girl. That is one lesson from Magritte.

Defining "detail" has some value, I think, but I still think that the main issue/limitation with the Sigma cameras has to do with the higher (color) contrast of the resulting image itself. Bayer images are ...more useful? Easier to work with? More flexible in terms of ISO? And higher dynamic range ("DR")?

Arguing over or discussing the definition and purpose of detail has some minimal value, assuming it's so wet or hot or dark outside that the main goal of the image maker is to get back inside as soon as possible. On the other hand, the Sigma detail is kind of, shall we say, addictive. You see it a few times and you want to keep seeing it.

Richard
 
