Elements of image quality...

Erik Kaffehr

This is intended to be a small summary of the aspects that go into image quality. The discussion ignores color and things that cannot be measured.

The major factors are:
  • Photons - that is the light forming the image
  • Lens OTF - that is the information the lens transfers to the sensor
  • Diffraction - that is the reduction of lens OTF due to the physical diameter of the aperture.
  • The OLP-filter - the OLP filter reduces lens OTF to match sensor resolution
  • Defocus - the loss of OTF due to the subject being out of focus
MTF is normally used instead of OTF. MTF is the absolute value of the OTF, just so scientists stay happy.

Photons create the image

Photons are quanta of light; light arrives in quanta. The pixels capture a significant part of the incoming photons and convert the energy of each photon into a free electron that is collected in the photodiode as charge. Each pixel can detect a maximum number of incoming photons; that is called the Full Well Capacity (FWC). It is an important parameter of the pixel and it is normally measured in electron charges.

The arrival of photons is a random phenomenon, so in absolutely even light the distribution of captured electron charges, each representing a photon, will follow a Poisson distribution.

The interesting aspect of a Poisson distribution is that the standard deviation for N counted photons will be the square root of N.

So, increasing the photon count by a factor of four doubles the signal-to-noise ratio, that is, it cuts the relative noise in half. The practical significance of this is that we want to capture as many photons as possible.
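A minimal sketch of this (assuming NumPy; the photon counts are arbitrary illustrative values): simulate evenly lit patches at two photon levels and check that the noise is about sqrt(N), so quadrupling the count doubles the SNR.

import numpy as np

rng = np.random.default_rng(0)

# Shot noise only: photon arrivals per pixel follow a Poisson distribution.
for mean_photons in (1_000, 4_000):                  # 4x more light in the second case
    patch = rng.poisson(mean_photons, size=100_000)  # an evenly lit patch of pixels
    noise = patch.std()
    snr = patch.mean() / noise
    print(f"N = {mean_photons}: noise ≈ {noise:.1f} "
          f"(sqrt(N) = {np.sqrt(mean_photons):.1f}), SNR ≈ {snr:.1f}")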

This shows a histogram of a ColorChecker, exposed around 1EV under saturation. Note the highest peak is narrow and the peaks get lower and wider for the darker patches. The widening of the peaks is the noise.

Now, check this ColorChecker that is exposed around 6EV under saturation. The peaks get much wider. The lowest patches and the cardboard surrounding the patches blend into one another.

So reducing exposure increases noise. This part of the noise is usually dominant. Increasing ISO has no effect on photon statistics.

Photon statistics depend a lot on sensor size, but not so much on pixel size. How many photons a sensor can detect depends on its surface area and photodiode technology. Whether those photons are distributed over 25 MP or 100 MP matters little.
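A back-of-the-envelope sketch of that point (the full-well density is a made-up, illustrative figure; only the ratios matter): the per-pixel SNR changes with pixel count, but the picture-level photon total, and hence the picture-level SNR, does not.

import math

electrons_per_mm2 = 2.0e9          # assumed charge collected per mm^2 (illustrative)
sensor_area_mm2 = 36 * 24          # full-frame sensor area

total_electrons = electrons_per_mm2 * sensor_area_mm2
print(f"picture-level SNR ≈ {math.sqrt(total_electrons):,.0f} (independent of pixel count)")

for pixels in (25e6, 100e6):
    per_pixel = total_electrons / pixels
    print(f"{pixels/1e6:.0f} MP: {per_pixel:,.0f} e-/pixel, per-pixel SNR ≈ {math.sqrt(per_pixel):.0f}")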

Lens OTF/MTF

The illustrations here are taken from Brandon Dube's article:

https://www.lensrentals.com/blog/2017/10/the-8k-conundrum-when-bad-lenses-mount-good-sensors/

If we shot an image of a very small, distant light source, like a distant star, through a perfect lens, it would look like this:

Note the central spot, surrounded by small concentric circles. The small circles are called 'Airy rings' and they are a diffraction pattern caused by the aperture of the lens.


Now, lenses are not perfect, they have aberrations:

This is a lens affected by astigmatism and coma.


Optical engineers try to eliminate these aberrations. Most aberrations increase moving away from the optical axis, and they usually decrease when stopping down.

A real spot image may look like this.


Such a spot image is called a Point Spread Function (PSF).

There is a mathematical way of describing a PSF, the OTF (Optical Transfer Function), which is the Fourier transform of the PSF. The OTF is a complex-valued function; normally lenses are characterized by the absolute value of the OTF, which is called MTF. Both OTF and MTF are two-dimensional, but MTF is normally measured in just two directions.
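A minimal sketch of that relationship (assuming NumPy and a synthetic Gaussian PSF rather than measured lens data): the MTF is the modulus of the 2-D Fourier transform of the PSF.

import numpy as np

n = 256                                    # samples per side of the PSF grid
x = np.arange(n) - n // 2
xx, yy = np.meshgrid(x, x)

sigma = 3.0                                # assumed PSF width, in samples
psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
psf /= psf.sum()                           # normalize so that MTF(0) = 1

otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))   # complex OTF
mtf = np.abs(otf)                                            # MTF = |OTF|

print(mtf[n // 2, n // 2:n // 2 + 6].round(3))               # MTF along one frequency axis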

Let's take this image:

Two pairs of vertical bars, both equally bright.

Now blur it with a Gaussian blur of radius 5.

We see that both line pairs are blurred, but the large one still keeps its contrast while the smaller, tighter one has lost much of its contrast.

A mathematician would say that we have convolved the image with a Gaussian PSF with a radius of 5.
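A minimal sketch of that effect (assuming NumPy and SciPy, with synthetic bar patterns): the same Gaussian 'PSF' barely touches the contrast of wide bars but removes most of the contrast of narrow, tightly spaced ones.

import numpy as np
from scipy.ndimage import gaussian_filter1d

def contrast(signal):
    """Michelson contrast of a 1-D pattern."""
    return (signal.max() - signal.min()) / (signal.max() + signal.min())

x = np.arange(4000)
coarse = 0.5 + 0.5 * np.sign(np.sin(2 * np.pi * x / 400))   # wide bars
fine = 0.5 + 0.5 * np.sign(np.sin(2 * np.pi * x / 20))      # narrow, tightly spaced bars

for name, bars in (("coarse", coarse), ("fine", fine)):
    blurred = gaussian_filter1d(bars, sigma=5)               # convolve with a Gaussian 'PSF'
    print(f"{name}: contrast {contrast(bars):.2f} -> {contrast(blurred):.2f}")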

Now let's take some real test samples:

This is a small crop of a decently sharp image shot on my Hasselblad 555/ELD with a 39 MP P45+ back, at 1:1 view.

This is the same image but shot with a Softar II filter.

The wedges allow us to calculate the MTF for each combo of lens, filter and sensor:

The upper figure shows the MTF. The blue line shows the MTF of the lens and the red one the MTF of the lens + the Softar II.

So, what these curves show is how much contrast is lost at varying levels of detail: coarse detail on the left, fine detail on the right. The vertical line shows the resolution limit of the sensor.

Resolution is the limit where we can still tell things apart. We can look at the fine details of the small resolution chart:

These show two crops at 200%; they should be viewed at original size below. The structures within the black frame are resolved in both images, but in the left image they are resolved with high contrast and in the right one with low contrast.

After viewing the picture at actual size we can notice that the image on the left has color artifacts while the one on the right has low contrast but almost no artifacts.

In this case resolution is limited by the sensor. The MTF we measured was above zero at Nyquist, which causes aliasing artifacts.

We could say that the image on the left shows a lens with high resolution and good MTF for small detail, while the one on the right has good resolution with poor MTF for small detail.

It may be said that the 'slanted wedges' here are too small and not sharp enough for good MTF calculations, but it still makes a good demo.

OLP filter

As discussed above, if the lens has too much MTF at the Nyquist limit, aliasing will occur. It may seem to accentuate the perception of sharpness, as it yields false detail in what normally would be blurred. But with a 'Bayer' filter in front of the sensor, that false detail arises in different positions for different colors, so we can get 'color moiré'.

To reduce that, cameras with large pixels mostly have an Optical Low Pass (OLP) filter. It is normally implemented using a four-way beam splitter:



Photograph of a point light source, without an OLP and with an OLP filter that was taken off a sensor.


The OLP filter acts as a softening filter that has most effect on the high frequencies, near Nyquist. OLP filters were optional for some early MFD cameras, but they were expensive and they reduce sharpness, so later MFDs dropped OLP filtering. (For the Mamiya ZD, the normal IR filter was 1000 US$ and the OLP filter 3000 US$, as I recall.)
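A rough sketch of why the effect is concentrated near Nyquist (this assumes the common idealized model of the four-dot filter: per axis, two spots separated by one pixel pitch, giving an MTF of |cos(π·p·f)|; real filters will differ in detail):

import numpy as np

pitch_um = 6.0                              # assumed pixel pitch in microns
nyquist = 1.0 / (2 * pitch_um)              # cycles per micron

f = np.linspace(0, nyquist, 5)              # spatial frequencies up to Nyquist
olp_mtf = np.abs(np.cos(np.pi * pitch_um * f))   # idealized beam-splitter MTF per axis

for fi, m in zip(f, olp_mtf):
    print(f"{fi / nyquist:4.0%} of Nyquist: OLP MTF ≈ {m:.2f}")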

Diffraction

Diffraction is a property of light. When light passes through a hole, a diffraction pattern emerges. Here is a nice diffraction pattern from Cambridge in Colour:

The diameter of the Airy pattern, as it is normally called, increases with f-number. So, when we stop down, the Airy pattern gets larger.


This shows a 3.9 micron pixel grid with an Airy pattern for f/5.6. It may seem that this would not affect image quality, but even apertures like f/5.6 have a small effect on sharpness.

Stopping down to f/16 will have a significant effect on sharpness.
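A minimal sketch of the sizes involved (assuming green light at 0.55 µm and the 3.9 µm pixel pitch of the illustration above; the first-null diameter of the Airy pattern is about 2.44·λ·N):

wavelength_um = 0.55        # ~green light
pixel_pitch_um = 3.9        # pixel grid from the illustration above

for f_number in (5.6, 11, 16):
    airy_um = 2.44 * wavelength_um * f_number      # diameter of the first dark ring
    print(f"f/{f_number}: Airy diameter ≈ {airy_um:.1f} um ≈ {airy_um / pixel_pitch_um:.1f} pixels")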

If we recall MTF, as discussed earlier, each Point Spread Function (PSF) has a corresponding MTF. Each part of the system has its own MTF, and the MTF of the system is the product of the MTFs of its parts.
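A minimal sketch of that multiplication (illustrative textbook models, assuming NumPy; none of these curves are measured data): a diffraction MTF, a rough Gaussian 'aberration' MTF and a square pixel-aperture MTF multiplied into a system MTF.

import numpy as np

wavelength_mm = 0.00055                     # green light
f_number = 5.6
pitch_mm = 0.0039                           # 3.9 um pixel

f = np.array([10.0, 50.0, 100.0, 150.0])    # spatial frequency, cycles/mm

cutoff = 1.0 / (wavelength_mm * f_number)   # diffraction cutoff frequency
s = np.clip(f / cutoff, 0.0, 1.0)
mtf_diffraction = (2 / np.pi) * (np.arccos(s) - s * np.sqrt(1 - s**2))

mtf_lens = np.exp(-(f / 150.0) ** 2)        # assumed, purely illustrative aberration blur
mtf_pixel = np.abs(np.sinc(f * pitch_mm))   # square pixel aperture (np.sinc includes pi)

mtf_system = mtf_diffraction * mtf_lens * mtf_pixel
for fi, m in zip(f, mtf_system):
    print(f"{fi:5.0f} cy/mm: system MTF ≈ {m:.2f}")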

This shows the MTF of my Hasselblad Sonnar f/4 CFE at different apertures, on the P45+ back. Quite obviously, f/5.6 is the best of the apertures tested. Just as an example, it seems that it would outperform my Sony A7rII with my Sony 90/2.8 G at both f/5.6 and f/8, but it would be a bit behind the Sony at f/11.

By and large, observers mostly don't notice diffraction before stopping down to f/16 or f/22. Part of the explanation is that human vision is most sensitive to low frequencies, around 600 cy/PH, and in that range the effect of diffraction is not so large.

Depth of field and defocus

I would think that it is easiest to understand DoF by looking at the image plane. This is the best illustration I have seen:

Here we have three objects: one beyond the focusing distance (1), one at the focusing distance (2) and one in front of the focusing distance (3). All cast a converging beam behind the lens, but only the beam from the object in focus converges at the focal plane; the other two converge in front of or behind it. So the out-of-focus points will render as blobs or 'bokeh balls', normally called circles of confusion (CoC). Closing the aperture cuts down the diameter of the beam, so the points do not come into focus, but the 'blob' or 'CoC' gets smaller.


Stopping down the aperture two stops cuts the diameter of the 'blob', 'CoC' or 'bokeh ball' in half.

Now the question arises: what is sharp enough? Tradition says that a 'blob' diameter of 1/1500 of the image diagonal will be regarded as acceptably sharp. A 24x36 mm camera has an image diagonal of 43 mm; 43/1500 -> about 0.029 mm, that is around 0.03 mm, and that figure is used in normal DoF calculations. A 33x44 mm sensor has a diagonal of 55 mm, so we would end up with 55/1500 -> 0.037 mm.

But our sensors may have pixels of 4-6 microns! A 24 MP full-frame camera has 6 micron pixels. Our 0.03 mm CoC covers 706 square microns while a pixel is 36 square microns, so 19.6 pixels fit within that CoC. So, acceptable DoF corresponds to 24/19.6 -> about 1.2 MP.

If we did the same calculation for a 50 MP 44x33 mm sensor, the figures would come out about the same.
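A worked sketch of this arithmetic (the function name is just for illustration; the 1/1500 convention is the one used above and all figures are approximate):

import math

def effective_dof_megapixels(width_mm, height_mm, sensor_mp):
    """How many 'effective MP' a CoC of diagonal/1500 corresponds to."""
    coc_mm = math.hypot(width_mm, height_mm) / 1500
    coc_area_um2 = math.pi * (coc_mm * 1000 / 2) ** 2
    pixels_wide = math.sqrt(sensor_mp * 1e6 * width_mm / height_mm)
    pixel_pitch_um = width_mm * 1000 / pixels_wide
    pixels_per_coc = coc_area_um2 / pixel_pitch_um ** 2
    return coc_mm, sensor_mp / pixels_per_coc

for label, w, h, mp in (("24x36 mm, 24 MP", 36, 24, 24), ("33x44 mm, 50 MP", 44, 33, 50)):
    coc, eff_mp = effective_dof_megapixels(w, h, mp)
    print(f"{label}: CoC ≈ {coc:.3f} mm, 'acceptably sharp' detail ≈ {eff_mp:.1f} MP")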

So, we may need to keep in mind that stopping down doesn't bring things into focus; it just makes the blur smaller.

If we stop down too much, diffraction will come into play, and we may need to raise ISO to stop motion.

So, what should we do?

We may need to keep things in balance. To get the best image quality, it makes sense to use base ISO and expose as high as possible without burning out the highlights.

We would also try to keep the aperture near the middle of the range, to keep diffraction at bay.

After that we need to manage sharpness. There is only one plane of focus. Placing the main object in focus may make a lot of sense. After that we may need to stop down to get acceptable sharpness.
But if we stop down too much we will lose some sharpness, and perhaps we will need a high ISO to keep motion blur at bay.

Best regards

Erik

--
Erik Kaffehr
Website: http://echophoto.dnsalias.net
Magic uses to disappear in controlled experiments…
Gallery: http://echophoto.smugmug.com
Articles: http://echophoto.dnsalias.net/ekr/index.php/photoarticles
 
This is intended to small summary of the aspects going into image quality. The discussion ignores color and things unmeasurable.
I would have been more specific about color, and why it is being ignored; I would have said "things not being measured, not yet measurable or unmeasurable"
The major factors are:
I would have said "The remaining major factors are" to underline the above.
 
This is intended to small summary of the aspects going into image quality. The discussion ignores color and things unmeasurable.
I would have been more specific about color, and why it is being ignored; I would have said "things not being measured, not yet measurable or unmeasurable"
The major factors are:
I would have said "The remaining major factors are" to underline the above.
Hi Tex,

Thanks for the comments. The reason I wanted to avoid color is that it involves many complex issues.

Best regards

Erik
 
About RGB histogram spread: the reason why RGB histograms spread at -6 EV comes from the manufacturer's own tone correction applied to the sensor data and from the raw converter's pre-visualization. This can be shown by looking at how much the histogram values move when using the exposure compensation sliders. I could observe that the histogram distribution gets more or less compressed depending on the level.

About lenses: I could observe that most lenses resolve no more than about 50 line pairs per mm (at 50% contrast) in the center, without the effect of diffraction. At a 50% contrast threshold (the value used for drawing MTF charts), diffraction is only a small contributor unless the lens is stopped down to more than f/16. So, [ignoring shooting conditions], I conclude some key considerations for myself:

1) - full frame is essentially not diffraction limited, but still mostly limited by lens quality (so I guess that's why we see full-frame lenses quite large and complex..., compared to old large-format film lenses that are very small, while large format still resolves much more than 24x36 FF).

2) - medium format crop (e.g GFX50, GFX100, X1D, 645Z, 33x44mm sensor) can be diffraction limited, in some cases, but still not quite diffraction limited (f/16 still works well for getting large DoF with wide angle). Still limited by lens MTF, without much impact of diffraction.

3) - medium format (e.g. Phase One, 40x54 mm sensors): unless shooting portraits, e.g. when shooting landscape, lens MTF is balanced against diffraction impact (e.g. stopping the lens down to f/22 for a landscape image).

So going larger than 40x54 mm makes no sense for digital, for landscape/architecture shooting, because the system becomes mostly diffraction limited, and the only way to cross that barrier is via camera movements (tilting the lens relative to the sensor plane).

And if my conclusion is right, it explains why camera makers mostly hang on to full frame and put lots of effort into lenses (complex and large lens designs, favoring fast apertures and lots of corrections), going against how camera systems were designed in the film era (i.e. very large cameras with small, slow lenses).
 
Hi,
About RGB histrogram spread. The reason why RGB histograms spread at -6ev comes from manufacturer own tone correction applied to sensor data and applied by the the raw converter pre-visualization. This can be shown by looking at how much histogram values move when using exposure compensation sliders. I could observe that histogram distribution get more or less compressed depending on level.
The data I show is raw data, so it is not processed. I am pretty sure that it is just the laws of physics.
About lenses, I could observe that most lenses resolve no more than about 50 line pairs per mm (at 50% contrast), in the center, without effect of diffraction. At 50% contrast threshold (the value used for drawing MTF charts), diffraction is a small contributor only unless the lens is stopped down to more than f/16. So, [ignoring shoot conditions], I conclude for myself some key considerations:
I would be careful with MTF 50 data. How are those data points measured? Are they based on raw data or, for instance, on default processing?

Looking at MTF 50 data is quite misleading, in the sense that MTF 50 is relevant for sharpness, but you would not want anything like 50% MTF at the pixel level, or you will have severely aliased data.

I would like to have a sensor that yields around 10% MTF at Nyquist with the best lenses I have.

Most of the lenses I have tested perform best at f/5.6, or so. So, diffraction certainly plays a role.
1) - full frame is essentially not diffraction limited, but still mostly limited by lens quality (so I guess that's why we see full frame lenses quite large and complex..., compared to old large format film lenses that are very small while still having large format resolve much more than 24x36 FF).
My most used lens is the Sigma 24-105/4 Art.

This is LensRentals' MTF data measured on an optical bench; 50 lp/mm is clearly above 0.5.

2) - medium format crop (e.g GFX50, GFX100, X1D, 645Z, 33x44mm sensor) can be diffraction limited, in some cases, but still not quite diffraction limited (f/16 still works well for getting large DoF with wide angle). Still limited by lens MTF, without much impact of diffraction.
https://blog.kasson.com/the-last-word/focus-shift-loca-of-fuji-1102-on-gfx/


Jim Kasson has tested most Fuji GFX lenses. The plot above shows MTF at apertures from f/2 to f/11. Best performance is around 3500 cy/PH at f/2.8 while f/11 reaches only around 1800 cy/PH.
3) - medium format (e.g Phase One, 40x54mm sensors), unless shooting portraits, e.g shooting landscape, lens MTF balanced with diffraction impact (e.g stopping down the lens to f/22 for a landscape image).
No, I am 100% sure that is wrong.
So going larger than 40mmx54 makes no sense for digital, for landscape / architecture shooting, because the system become mostly diffraction limited, and the only way to cross that barrier is via camera movement (tilting lenses % sensor plane).
Landscape subjects are often distant, so you can shoot at any aperture. My own experience is with Hasselblad Zeiss lenses. I used to shoot them at f/11 on my P45+ back, but it seems that the Sonnars have their best aperture at f/5.6. Achieving perfect focus is not easy, though.

I use tilts quite often with my Sony A7rII.
And if my conclusion is right, it explains why camera makers mostly hang on full frame and put lots of efforts in lenses (complex and large lens designs, favoring fast apertures and lots of corrections).
I guess that large apertures sell. Makers could build high quality lenses with smaller apertures, but they would still be expensive to make. Just as an example, Zeiss makes the Loxia line for Sony E-mount and they are not very fast but compact. The new designs seem to be extremely good.

Personally, I have Voigtlander 65/2 Apo Macro on order, will see how that works.

Best regards

Erik

--
Erik Kaffehr
Website: http://echophoto.dnsalias.net
Magic uses to disappear in controlled experiments…
Gallery: http://echophoto.smugmug.com
Articles: http://echophoto.dnsalias.net/ekr/index.php/photoarticles
 
About RGB histrogram spread. The reason why RGB histograms spread at -6ev comes from manufacturer own tone correction applied to sensor data and applied by the the raw converter pre-visualization.
Not true. It comes mostly from shot noise. There is no raw converter involved in creating the histograms that Erik posted.
This can be shown by looking at how much histogram values move when using exposure compensation sliders.
RawDigger does not have such sliders. Amplifying shadows will amplify shot noise, too.
I could observe that histogram distribution get more or less compressed depending on level.

About lenses, I could observe that most lenses resolve no more than about 50 line pairs per mm (at 50% contrast), in the center, without effect of diffraction. At 50% contrast threshold (the value used for drawing MTF charts), diffraction is a small contributor only unless the lens is stopped down to more than f/16.
Not true. With good lenses on axis, diffraction is a strong contributor at f/5.6.
So, [ignoring shoot conditions], I conclude for myself some key considerations:

1) - full frame is essentially not diffraction limited, but still mostly limited by lens quality (so I guess that's why we see full frame lenses quite large and complex...,
Not true. See above.
compared to old large format film lenses that are very small while still having large format resolve much more than 24x36 FF).
It is true that slower lenses are more likely to be diffraction-limited.
2) - medium format crop (e.g GFX50, GFX100, X1D, 645Z, 33x44mm sensor) can be diffraction limited, in some cases, but still not quite diffraction limited (f/16 still works well for getting large DoF with wide angle). Still limited by lens MTF, without much impact of diffraction.
Definitely not true.


Jim
 
About RGB histrogram spread. The reason why RGB histograms spread at -6ev comes from manufacturer own tone correction applied to sensor data and applied by the the raw converter pre-visualization.
Not true. It comes mostly from shot noise. There is no raw converter involved in creating in the histograms that Erik posted.
This can be shown by looking at how much histogram values move when using exposure compensation sliders.
RawDigger does not have such sliders. Amplifying shadows will amplify shot noise, too.
I could observe that histogram distribution get more or less compressed depending on level.

About lenses, I could observe that most lenses resolve no more than about 50 line pairs per mm (at 50% contrast), in the center, without effect of diffraction. At 50% contrast threshold (the value used for drawing MTF charts), diffraction is a small contributor only unless the lens is stopped down to more than f/16.
Not true. With good lenses on axis, diffraction is a strong contributor at f/5.6.
So, [ignoring shoot conditions], I conclude for myself some key considerations:

1) - full frame is essentially not diffraction limited, but still mostly limited by lens quality (so I guess that's why we see full frame lenses quite large and complex...,
Not true. See above.
compared to old large format film lenses that are very small while still having large format resolve much more than 24x36 FF).
It is true that slower lenses are more likely to be diffraction-limited.
2) - medium format crop (e.g GFX50, GFX100, X1D, 645Z, 33x44mm sensor) can be diffraction limited, in some cases, but still not quite diffraction limited (f/16 still works well for getting large DoF with wide angle). Still limited by lens MTF, without much impact of diffraction.
Definitely not true.

https://blog.kasson.com/gfx-100/a-visual-look-at-gfx-100-diffraction-blur/

Jim
About the histogram: no camera saves sensor data directly into a raw file; the image sensor data readout is processed by the camera's image processor to create the raw file data. That is why none of the three camera brands using exactly the same MF sensor render colors exactly the same way, even from raw.

Without a fixed micro-contrast target, one can always say that diffraction is a strong contributor. That's why the topic can become controversial. For a 50% contrast drop relative to the maximum (e.g. at 10 lp/mm), the very best lenses resolve 70 lp/mm at their very best settings, while the sensor of a 36 MP camera such as the D810 resolves more than 100 lp/mm at nearly 100% contrast. As shown above, one of the best lenses out there shows a ~40% drop of micro-contrast at 50 lp/mm, in the very center of the lens image circle. The diffraction-only hypothesis is theoretical only and practically flawed. Practically, if we take a smartphone, even with 100 Mpixels and f/2, and stack lots of frames to eliminate the noise completely, the resulting image is still nowhere near what comes from a large sensor, which is evidence that diffraction plays a negligible role up until medium/large format. The advantage of images is that we can compare them side by side with our own eyes, and what we see is far more evidence than a partial theoretical model. If I use a theoretical model, I check the model against real results; doing so tells me whether my model works reasonably well or not. If actual results don't match the model's predictions, then it means my model isn't good enough.

In my opinion, the matter is not to know how much of it is diffraction, but to know at what point the drop of micro-contrast from diffraction cancels out the increase of micro-contrast that we get from increasing the size of the sensor while keeping the same pixel pitch.
 
About RGB histrogram spread. The reason why RGB histograms spread at -6ev comes from manufacturer own tone correction applied to sensor data and applied by the the raw converter pre-visualization.
Not true. It comes mostly from shot noise. There is no raw converter involved in creating in the histograms that Erik posted.
This can be shown by looking at how much histogram values move when using exposure compensation sliders.
RawDigger does not have such sliders. Amplifying shadows will amplify shot noise, too.
I could observe that histogram distribution get more or less compressed depending on level.

About lenses, I could observe that most lenses resolve no more than about 50 line pairs per mm (at 50% contrast), in the center, without effect of diffraction. At 50% contrast threshold (the value used for drawing MTF charts), diffraction is a small contributor only unless the lens is stopped down to more than f/16.
Not true. With good lenses on axis, diffraction is a strong contributor at f/5.6.
So, [ignoring shoot conditions], I conclude for myself some key considerations:

1) - full frame is essentially not diffraction limited, but still mostly limited by lens quality (so I guess that's why we see full frame lenses quite large and complex...,
Not true. See above.
compared to old large format film lenses that are very small while still having large format resolve much more than 24x36 FF).
It is true that slower lenses are more likely to be diffraction-limited.
2) - medium format crop (e.g GFX50, GFX100, X1D, 645Z, 33x44mm sensor) can be diffraction limited, in some cases, but still not quite diffraction limited (f/16 still works well for getting large DoF with wide angle). Still limited by lens MTF, without much impact of diffraction.
Definitely not true.

https://blog.kasson.com/gfx-100/a-visual-look-at-gfx-100-diffraction-blur/

Jim
About histogram: no camera saves sensor data directly into a raw file, the image sensor data readout is processed by the camera image processor to create the raw file data. That is why none of the three cameras brands using exactly the same MF sensor can render colors exactly the same way, even from RAW.
I don't think you have presented any evidence for that. Just someone having said something doesn't mean it is true.

Here is a comparison of the GFX 50S with the Pentax 645Z on the left, and another of the GFX 50S with the Sony A7rIV; the comparisons are split squares, with the GFX 50S top left.



And here are the Delta E-values.

I would say the cameras produce identical colors.
Without a fixed target of micro-contrast, one can always say that diffraction is a strong contributor. That's why the topic can become controversial. For 50% contrast drop relative to the max (e.g 10 lp/mm), the very best lenses resolve 70 lp/mm at their very best settings, while the sensor of a 36Mp camera such as the D810 resolves more than 100 lp/mm at nearly 100% contrast. As shown above, one of the best lenses out there show ~40% drop of micro-contrast at 50lp/mm, in the very center of the lens image circle. The diffraction only hypothesis is theoretical only and practically flawed. Practically, if we take a smartphone, even with 100Mpixel and f2 and stack lots of frame to eliminate the noise completely, the resulting image result is still nowhere near what comes from a large sensor, which show evidence that diffraction plays a negligible role up until medium/large format. The advantage of images is that we can compare them side by side with our own eyes, and what we see is far more evidence that a partial theoretical model. If I use a theoretical model, I check the model with real results, doing so tells me if my model works reasonably well or not. If actual results don't show a significant difference against model prediction, then it means my model isn't good enough.
Sorry, what you write makes no sense. You can either measure lens MTF on an optical bench, or you can measure system MTF, typically using a slanted-edge target.

MTF 50 is a measurement of sharpness, not resolution. The lens, OLP filter and sensor each have their own MTF. The MTF of the sensor is solely decided by the pixel aperture.

Best regards

Erik

--
Erik Kaffehr
Website: http://echophoto.dnsalias.net
Magic uses to disappear in controlled experiments…
Gallery: http://echophoto.smugmug.com
Articles: http://echophoto.dnsalias.net/ekr/index.php/photoarticles
 
 
Hi,
About RGB histrogram spread. The reason why RGB histograms spread at -6ev comes from manufacturer own tone correction applied to sensor data and applied by the the raw converter pre-visualization. This can be shown by looking at how much histogram values move when using exposure compensation sliders. I could observe that histogram distribution get more or less compressed depending on level.
The data I show was raw data, so it is not processed. I am pretty sure that it is just laws of physics.
About lenses, I could observe that most lenses resolve no more than about 50 line pairs per mm (at 50% contrast), in the center, without effect of diffraction. At 50% contrast threshold (the value used for drawing MTF charts), diffraction is a small contributor only unless the lens is stopped down to more than f/16. So, [ignoring shoot conditions], I conclude for myself some key considerations:
I would be careful with MTF 50 data. How are those data points measured. Is it based on raw data or for instance from default processing.

Looking at MTF 50 data is quite misleading, in the sense that MTF 50 is relevant for sharpness but you would not want anything like 50MTF at the pixel level else you will have severely aliased data.

I would like to have a sensor that yields around 10% MTF at Nyquist with the best lenses I have.

Most of the lenses I have tested perform best at f/5.6, or so. So, diffraction certainly plays a role.
1) - full frame is essentially not diffraction limited, but still mostly limited by lens quality (so I guess that's why we see full frame lenses quite large and complex..., compared to old large format film lenses that are very small while still having large format resolve much more than 24x36 FF).
My most used lens is the Sigma 24-105/4 Art.

This is LensRentals MTF data measured on optical bench 50 lp/mm is clearly above 0.5.

2) - medium format crop (e.g GFX50, GFX100, X1D, 645Z, 33x44mm sensor) can be diffraction limited, in some cases, but still not quite diffraction limited (f/16 still works well for getting large DoF with wide angle). Still limited by lens MTF, without much impact of diffraction.
https://blog.kasson.com/the-last-word/focus-shift-loca-of-fuji-1102-on-gfx/


Jim Kasson has tested most Fuji GFX lenses. The plot above shows MTF at apertures from f/2 to f/11. Best performance is around 3500 cy/PH at f/2.8 while f/11 reaches only around 1800 cy/PH.
3) - medium format (e.g Phase One, 40x54mm sensors), unless shooting portraits, e.g shooting landscape, lens MTF balanced with diffraction impact (e.g stopping down the lens to f/22 for a landscape image).
No, 100% sure that is wrong.
So going larger than 40mmx54 makes no sense for digital, for landscape / architecture shooting, because the system become mostly diffraction limited, and the only way to cross that barrier is via camera movement (tilting lenses % sensor plane).
Landscape is oftant distant, so you can shoot at any aperture. My own experience is with Hasselblad Zeiss lenses. I used to shoot them at f/11 on my P45+ back. But it seems that the Sonnars have best aperture at f/5.6. But achieving perfect focus is not easy.

I use tilts quite often with my Sony A7rII.
And if my conclusion is right, it explains why camera makers mostly hang on full frame and put lots of efforts in lenses (complex and large lens designs, favoring fast apertures and lots of corrections).
I guess that large apertures sell. Makers could build high quality lenses with smaller apertures, but they would still be expensive to make. Just as an example, Zeiss makes the Loxia line for Sony E-mount and they are not very fast but compact. The new designs seem to be extremely good.

Personally, I have Voigtlander 65/2 Apo Macro on order, will see how that works.

Best regards

Erik
Thank you for the comments, well appreciated, especially the two charts about the Sigma lens and the maximum resolution measurement for the Fuji lens at f/2.8.

I have found this earlier post about diffraction impact: https://www.dpreview.com/forums/post/51894737 . Looking at the third chart, which shows the resolution impact of diffraction only: at f/16 (light blue curve) it still resolves 42 lp/mm, as opposed to the Sigma lens, which resolves 30 lp/mm (50% contrast) in the corners at wide aperture; here the lens is more limiting than diffraction, even at f/16. If we use an f/11 or f/8 aperture on that Sigma lens, diffraction is negligible, so it is clearly the lens glass that limits the micro-contrast rather than the diffraction effect of the aperture.
 
Yes, I agree. As your original post pointed out, the resulting image quality comes from the contribution of multiple sources of imperfection; diffraction is only one of them. Hence, using a diffraction-only model plus the lens optical center gives only a partial view of the impact on image quality.

What are the benefits and the conclusion of the post?

(to me it looks more like a "self-talk" kind of post)
 
Thank you for the comments, well appreciated, especially the two charts about the sigma lens and the max resolution measurement from the Fuji lens at f2.8.

I have found this earlier post about diffraction impact: https://www.dpreview.com/forums/post/51894737 . Looking at the third chart showing the resolution impact of diffraction only , at f16 (light blue curve) still resolves 42 lp/mm as opposed to the sigma lens that resolve 30 lp/mm (50% contrast) in corners with wide aperture, here the lens is more limiting than the diffraction even at f16. If we use f/11 or f/8 aperture on that sigma lens, diffraction is negligible, so it is clearly the lens glass that limits the micro-contrast rather than the diffraction effect of the aperture.
Quantitative analysis indicates otherwise, in general.



Those posts include the effect of defocusing as well as pixel aperture and diffraction.

Jim
 
Reading your curves, based on models, I can see that for the GFX 50S at 60 mm, focused at 30 m, at f/16, the diffraction blur is roughly equal to the average defocus blur between 30 m and infinity. The observation almost matches the initial comment I wrote: for landscape, larger than medium format doesn't make sense, because diffraction becomes a larger contributor to blur and the system cost increases. On full frame, diffraction is still negligible for landscape; lens edge quality is the #1 contributor :-) Perhaps my view doesn't match yours, but I like to come to a simple key conclusion that is easier to remember.
 
Reading your curves, based on models, I can see that the GFX50s at 60mm, focused at 30m, aperture f/16, the diffraction blur is roughly equal to de-focus blur average between 30m and infinity. The observation almost matches the initial comment I wrote: for landscape, larger than medium format doesn't make sense because diffraction become a larger contributor to blur and the system cost increases. On full frame, diffraction is still negligible for landscape, lens edge quality is #1 contributor :-) Perhaps my view doesn't match yours, but I like to come to a simple key conclusion, easier to remember.
I am not debating DOF blur vs diffraction blur with you in this thread.
 
About RGB histrogram spread. The reason why RGB histograms spread at -6ev comes from manufacturer own tone correction applied to sensor data and applied by the the raw converter pre-visualization.
Not true. It comes mostly from shot noise. There is no raw converter involved in creating in the histograms that Erik posted.
This can be shown by looking at how much histogram values move when using exposure compensation sliders.
RawDigger does not have such sliders. Amplifying shadows will amplify shot noise, too.
I could observe that histogram distribution get more or less compressed depending on level.

About lenses, I could observe that most lenses resolve no more than about 50 line pairs per mm (at 50% contrast), in the center, without effect of diffraction. At 50% contrast threshold (the value used for drawing MTF charts), diffraction is a small contributor only unless the lens is stopped down to more than f/16.
Not true. With good lenses on axis, diffraction is a strong contributor at f/5.6.
So, [ignoring shoot conditions], I conclude for myself some key considerations:

1) - full frame is essentially not diffraction limited, but still mostly limited by lens quality (so I guess that's why we see full frame lenses quite large and complex...,
Not true. See above.
compared to old large format film lenses that are very small while still having large format resolve much more than 24x36 FF).
It is true that slower lenses are more likely to be diffraction-limited.
2) - medium format crop (e.g GFX50, GFX100, X1D, 645Z, 33x44mm sensor) can be diffraction limited, in some cases, but still not quite diffraction limited (f/16 still works well for getting large DoF with wide angle). Still limited by lens MTF, without much impact of diffraction.
Definitely not true.

https://blog.kasson.com/gfx-100/a-visual-look-at-gfx-100-diffraction-blur/

Jim
About histogram: no camera saves sensor data directly into a raw file, the image sensor data readout is processed by the camera image processor to create the raw file data.
Processed in what way? I'll give you double-sampling, analog gain, muxes, ADCs, stuck pixel mapping, dead pixel mapping, dual conversion gain, but none of those will spread the histogram.

PDAF interpolation will, but it's extremely limited in its effect on the histogram.
That is why none of the three cameras brands using exactly the same MF sensor can render colors exactly the same way, even from RAW.
Now you've introduced color. That's most affected by the CFA dyes and pigments, as well as the hot mirror.
Without a fixed target of micro-contrast, one can always say that diffraction is a strong contributor.
I don't understand that sentence.
That's why the topic can become controversial. For 50% contrast drop relative to the max (e.g 10 lp/mm), the very best lenses resolve 70 lp/mm at their very best settings, while the sensor of a 36Mp camera such as the D810 resolves more than 100 lp/mm at nearly 100% contrast. As shown above, one of the best lenses out there show ~40% drop of micro-contrast at 50lp/mm, in the very center of the lens image circle. The diffraction only hypothesis is theoretical only and practically flawed. Practically, if we take a smartphone, even with 100Mpixel and f2 and stack lots of frame to eliminate the noise completely, the resulting image result is still nowhere near what comes from a large sensor, which show evidence that diffraction plays a negligible role up until medium/large format. The advantage of images is that we can compare them side by side with our own eyes, and what we see is far more evidence that a partial theoretical model. If I use a theoretical model, I check the model with real results, doing so tells me if my model works reasonably well or not. If actual results don't show a significant difference against model prediction, then it means my model isn't good enough.

In my opinion, the matter is not to know how much is diffraction, but to know at what point the drop of micro-contrast from diffraction cancels out the increase of micro-contrast that we get from increasing the size of the sensor, while keeping the same pixel pitch.
That never happens at the narrow stops you're talking about on axis if the lenses are good and equivalent. Diffraction remains constant at picture level, and the larger format lens gets better off axis, because it's slower. The increased sampling frequency of the larger sensor of the same pitch is always an improvement, but the amount of that improvement gets smaller and smaller as the lenses are stopped down.
 
About RGB histrogram spread. The reason why RGB histograms spread at -6ev comes from manufacturer own tone correction applied to sensor data and applied by the the raw converter pre-visualization.
Not true. It comes mostly from shot noise. There is no raw converter involved in creating in the histograms that Erik posted.
This can be shown by looking at how much histogram values move when using exposure compensation sliders.
RawDigger does not have such sliders. Amplifying shadows will amplify shot noise, too.
I could observe that histogram distribution get more or less compressed depending on level.

About lenses, I could observe that most lenses resolve no more than about 50 line pairs per mm (at 50% contrast), in the center, without effect of diffraction. At 50% contrast threshold (the value used for drawing MTF charts), diffraction is a small contributor only unless the lens is stopped down to more than f/16.
Not true. With good lenses on axis, diffraction is a strong contributor at f/5.6.
So, [ignoring shoot conditions], I conclude for myself some key considerations:

1) - full frame is essentially not diffraction limited, but still mostly limited by lens quality (so I guess that's why we see full frame lenses quite large and complex...,
Not true. See above.
compared to old large format film lenses that are very small while still having large format resolve much more than 24x36 FF).
It is true that slower lenses are more likely to be diffraction-limited.
2) - medium format crop (e.g GFX50, GFX100, X1D, 645Z, 33x44mm sensor) can be diffraction limited, in some cases, but still not quite diffraction limited (f/16 still works well for getting large DoF with wide angle). Still limited by lens MTF, without much impact of diffraction.
Definitely not true.

https://blog.kasson.com/gfx-100/a-visual-look-at-gfx-100-diffraction-blur/

Jim
About histogram: no camera saves sensor data directly into a raw file, the image sensor data readout is processed by the camera image processor to create the raw file data.
Processed in what way? I'll give you double-sampling, analog gain, muxs, ADC's, stuck pixel mapping, dead pixel mapping, dual conversion gain, but none of those will spread the histogram.

PDAF interpolation will but it's extremely limited in its effect on the histo.
What we see is that SNR decreases with reduced exposure. That is simple statistics. What is shown is a simple way of making it visible.

What matters here is that most of the noise is simply the result of photon arrival statistics. Camera electronics cannot do anything about it, except possibly hiding it using noise reduction.

But noise reduction is probably better applied in post-processing, which uses more powerful hardware and can process the image slowly.

Also, noise reduction can easily be detected using an FFT. It seems that the Canon EOS R5 applies NR at base ISO, according to Bill Claff's data, but as far as I know they are pretty much alone.
That is why none of the three cameras brands using exactly the same MF sensor can render colors exactly the same way, even from RAW.
Now you've introduced color. That's most affected by the CFA dyes and pigments, as well as the hot mirror.
What Jim says is true. My experiment has shown that the Pentax 645Z, Fujifilm GFX and Sony A7rIV produce the same color, well below the JND, without any other manipulation than individually generated DCP profiles. Profile generation is an automatic process.

I think that this is simply 100% proof that 'pentaust' is wrong.
Without a fixed target of micro-contrast, one can always say that diffraction is a strong contributor.
I don't understand that sentence.
The statement does not make sense.



We can clearly see that diffraction has a huge effect on MTF, especially at high frequencies.

But it can of course be argued that we can compensate for it by sharpening.
That's why the topic can become controversial. For 50% contrast drop relative to the max (e.g 10 lp/mm), the very best lenses resolve 70 lp/mm at their very best settings, while the sensor of a 36Mp camera such as the D810 resolves more than 100 lp/mm at nearly 100% contrast. As shown above, one of the best lenses out there show ~40% drop of micro-contrast at 50lp/mm, in the very center of the lens image circle. The diffraction only hypothesis is theoretical only and practically flawed. Practically, if we take a smartphone, even with 100Mpixel and f2 and stack lots of frame to eliminate the noise completely, the resulting image result is still nowhere near what comes from a large sensor, which show evidence that diffraction plays a negligible role up until medium/large format. The advantage of images is that we can compare them side by side with our own eyes, and what we see is far more evidence that a partial theoretical model. If I use a theoretical model, I check the model with real results, doing so tells me if my model works reasonably well or not. If actual results don't show a significant difference against model prediction, then it means my model isn't good enough.

In my opinion, the matter is not to know how much is diffraction, but to know at what point the drop of micro-contrast from diffraction cancels out the increase of micro-contrast that we get from increasing the size of the sensor, while keeping the same pixel pitch.
That never happens at the narrow stops you're talking about on axis if the lenses are good and equivalent. Diffraction remains constant at picture level, and the larger format lens gets better off axis, because it's slower. The increased sampling frequency of the larger sensor of the same pitch is always an improvement, but the amount of that improvement gets smaller and smaller as the lenses are stopped down.
My experience is that the drop in sharpness, say going from f/5.6 to f/8 and sometimes from f/4 to f/5.6, is quite noticeable in unsharpened images. Once we sharpen we can increase sharpness, but sharpening almost unavoidably also sharpens noise and introduces artifacts. So it is better to make an optimal image and use little sharpening than to make a less sharp image and apply a lot of sharpening.

The way I see it, larger formats make most sense for careful work. I would assume that we buy into medium format with the intention of improving image quality, and in doing that it may be useful to understand how things work.

Just to say, I am not sure about Jim's reasoning about medium format lenses being better, because they are slower, when being stopped down.

But, I would agree that the MFD lenses I have owned were generally usable at maximum apertures while that may not always have been the case with 24x36 mm lenses.

Best regards

Erik

--
Erik Kaffehr
Website: http://echophoto.dnsalias.net
Magic uses to disappear in controlled experiments…
Gallery: http://echophoto.smugmug.com
Articles: http://echophoto.dnsalias.net/ekr/index.php/photoarticles
 
I think we tend to overcomplicate things.

Diffraction is not something that happens all of a sudden as the lens aperture is stopped down further; diffraction acts as a weak low-pass filter that progressively washes out image micro-contrast as the lens diaphragm is closed further.

The imperfections of the lens glass also involve some dispersion, a drop of detail contrast as details get finer and finer.

The digital sampling pitch of the image sensor creates a resolution barrier beyond which no further detail is encoded, or rather it is encoded as aliasing.

Therefore, it is incorrect to claim that lenses resolve 800 Mpixels by looking at the diffraction effect only, since the optics have no hard upper limit of resolution; they keep capturing further image detail until the contrast reaches zero, because the optics and diffraction are continuous processes, while the digital sensor sampling process is discrete.

However, if I enlarge two images, one taken with a 50 Mpixel FF camera and a fast lens, and one from a GFX 50 with an equivalent aperture, the GFX 50 image still contains better-defined details (more acuity, more micro-contrast). That effect comes from the lens optical path acting (diffraction aside) as a low-pass spatial filter, and that optical low-pass effect is reduced when the sensor size increases.

That is why, if equivalence is set (equivalent aperture, equivalent focal length, and the same pixel count), the medium format system consistently produces better-looking images than the smartphone, with both images enlarged to the same size for viewing. And that is why reality shows evidence that the diffraction-only way of thinking is flawed.
 
I think we tend to over complicate things.

Diffraction is no something that happens all of a sudden as lens aperture is being stopped down further, diffraction act as a weak low pass filter that progressively wash out image micro-contrast, as the lens diaphragm is being closed further.

The lens optical glass imperfections also involves some dispersion, drop of detail contrast as details get finer and finer.

The digital sampling pitch of the image sensor creates a resolution barrier beyond which no further detail is encoded or else it still is encoded as aliasing.

Therefore, it is incorrect to claim that lenses resolve 800Mpixels by looking at the diffraction effect only, since there is no upper limit of resolution that can capture further image detail until contrast is zero, because the optics and diffraction are continuous processes, while the digital sensor sampling process is continuous.

However, if I enlarge two images, one taken with a 50Mpixel FF camera and fast lens, and one image from a GFX50 with equivalent aperture, the GFX50 image still contains better defined details (more acuity, more micro-contrast). That effect comes from the lens optical path acting (diffraction aside) as low pass spatial filter, and that optical low pass effect is reduced when the sensor size increases.

That is why, if equivalence is set (equivalent aperture, equivalent focal length, and same pixel count), the medium format system consistently produces better looking images than the smartphone images , both images enlarged at the same size for viewing. And that is why reality show evidence that the diffraction only model of thinking is flawed.
Hi,

As you say, there is no reason to overcomplicate things.

Just to say, there is little reason to compare a cell phone to a medium format camera.

The GFX 50 is a special case, BTW, as it has undersized microlenses. That means that it essentially captures detail like a 100 MP camera, but with the sampling frequency of a 50 MP camera. That results in extensive aliasing:



Twice the MP almost eliminates aliasing.



Now, look at my measured data on the Sonnar 150. It has something like 45% MTF at Nyquist (around 2700 cy/PH) at f/5.6. That leads to extensive aliasing. At f/11 it may be around 22% at Nyquist, still aliasing a lot. The MTF curve above Nyquist is not reliable, but it seems feasible that it would reach 10% at, say, 6000 cy/PH. That would correspond to 2 * 6000 * 2 * 6000 * 49/37 -> around 190 MP. So my 5-element Sonnar 150, made in 2003 but developed many years earlier, would need around 190 MP for proper rendition at f/5.6.

Now, look at the f/11 line: it crosses 10% (0.1) at around 3700 cy/PH. So at f/11 the lens would need around 72 MP for near aliasing-free rendition.
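A minimal sketch of the arithmetic behind those megapixel estimates (the 49/37 factor is the width-to-height ratio used in the calculation above; the helper name is just for illustration):

def megapixels_for_nyquist(cy_per_ph, aspect_w=49, aspect_h=37):
    """Pixel count needed so that cy_per_ph lands at the Nyquist frequency."""
    pixels_high = 2 * cy_per_ph                  # two samples per cycle
    pixels_wide = pixels_high * aspect_w / aspect_h
    return pixels_high * pixels_wide / 1e6

for label, cy_ph in (("f/5.6, ~10% MTF at ~6000 cy/PH", 6000),
                     ("f/11, ~10% MTF at ~3700 cy/PH", 3700)):
    print(f"{label}: ~{megapixels_for_nyquist(cy_ph):.0f} MP")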

I would say that Jim's calculations are proper. Also, he has done a lot of gear testing.

Best regards

Erik

--
Erik Kaffehr
Website: http://echophoto.dnsalias.net
Magic uses to disappear in controlled experiments…
Gallery: http://echophoto.smugmug.com
Articles: http://echophoto.dnsalias.net/ekr/index.php/photoarticles
 
I think we tend to over complicate things.
My objection to many of the things you have written in this thread is not that they are too simple, it's that they are objectively wrong. I have pointed out those places where I think you are in error. I have received little in the way of on-point response to those.
Diffraction is no something that happens all of a sudden as lens aperture is being stopped down further, diffraction act as a weak low pass filter that progressively wash out image micro-contrast, as the lens diaphragm is being closed further.
Leave out "weak" and I agree with that.
The lens optical glass imperfections also involves some dispersion, drop of detail contrast as details get finer and finer.
I agree with that, too.
The digital sampling pitch of the image sensor creates a resolution barrier beyond which no further detail is encoded or else it still is encoded as aliasing.
Right. But you left out the sampling aperture.
Therefore, it is incorrect to claim that lenses resolve 800Mpixels by looking at the diffraction effect only,
Now that's where you are putting up a straw man. My 800 MP estimate was not based on looking at diffraction only, but on modeling the Otus 85. And it turns out that, since I did that work, I've determined that the Otus 85 is actually better at f/2, f/2.8 and f/4 than I gave it credit for, due to deficiencies in the slanted-edge target that I used for the original work. So if I were to do that modeling over again, I'd come up with a higher number.
since there is no upper limit of resolution that can capture further image detail until contrast is zero, because the optics and diffraction are continuous processes, while the digital sensor sampling process is continuous.
The 800 MP number was not based on contrast vs frequency, but on control of aliasing errors.
However, if I enlarge two images, one taken with a 50Mpixel FF camera and fast lens, and one image from a GFX50 with equivalent aperture, the GFX50 image still contains better defined details (more acuity, more micro-contrast).
And more aliasing.
That effect comes from the lens optical path acting (diffraction aside) as low pass spatial filter, and that optical low pass effect is reduced when the sensor size increases.

That is why, if equivalence is set (equivalent aperture, equivalent focal length, and same pixel count), the medium format system consistently produces better looking images than the smartphone images , both images enlarged at the same size for viewing. And that is why reality show evidence that the diffraction only model of thinking is flawed.
Again, you talk about a diffraction-only model. I have not been using such a model. Who in this thread are you accusing of that?

As to my take on format advantages, have a look here:

https://blog.kasson.com/the-last-word/format-size-and-image-quality/

Jim

--
https://blog.kasson.com
 
I think we tend to over complicate things.
My objection to many of the things you have written in this thread is not that they are too simple, it's that they are objectively wrong. I have pointed out those places where I think you are in error. I have received little in the way of on-point response to those.
I'm having a problem with people who say that someone else is WRONG. There are so many parameters, and often some parameters are left out of discussions, making someone believe that others are wrong, while they aren't necessarily wrong, just have different things in mind (plus English isn't my mother tongue). I appreciate explanations rather than being bluntly told that I'm wrong. Who is more wrong than whom is the question. Or should we, instead of pointing out who is wrong, point out what is wrong in the thinking?
Diffraction is no something that happens all of a sudden as lens aperture is being stopped down further, diffraction act as a weak low pass filter that progressively wash out image micro-contrast, as the lens diaphragm is being closed further.
Leave out "weak" and I agree with that.
Sure, at some point, increasing diffraction isn't weak anymore. I agree and apologize for my improper formulation.
The lens optical glass imperfections also involves some dispersion, drop of detail contrast as details get finer and finer.
I agree with that, too.
The digital sampling pitch of the image sensor creates a resolution barrier beyond which no further detail is encoded or else it still is encoded as aliasing.
Right. But you left out the sampling aperture.
For simplification, I neglect pixel aperture, as I believe it kicks in when pixels become about 1 um in size; there is no APS-C, full-frame or medium-format camera at that pixel pitch, so it would be hard to see the effect in a real image with a real camera. Perhaps we can see this effect in compact cameras and phones.
Therefore, it is incorrect to claim that lenses resolve 800Mpixels by looking at the diffraction effect only,
Now that's where you are putting up a straw man. My 800 MP estimate was not based on looking at diffraction only, but on modeling the Otus 85. And it turns out that, since I've done that work, I've determined that the Otus 85 is actually better at f/2 and f/2.8 and f/4 than I gave it credit for, due to deficiencies in the slanted edge target that I used for the original work. SO if I were to do that modeling over again, I'd come up with a higher number.
Thanks for clarifying the criteria that led to the 800 MP conclusion. This refers back to one of my comments about a contrast + lp/mm reference when we want to assess the impact of diffraction vs dispersion from glass elements vs pixel pitch. My earlier point was about having the exact same contrast reference value (e.g. 50%), so as to compare the various effects on the same basis, a basis of contrast and cycles per mm. For instance (I use diffraction again, because that's what comes to my mind easily), comparing diffraction that renders 20% contrast at 50 lp/mm to a lens optical MTF at 50 lp/mm (50% contrast), to 80% pixel contrast on the sensor, would lead to an incorrect comparison. So it's important, in my opinion, to use a common resolution baseline when we compare the relative effects of the three parameters.
since there is no upper limit of resolution that can capture further image detail until contrast is zero, because the optics and diffraction are continuous processes, while the digital sensor sampling process is continuous.
The 800 MP number was not based on contrast vs frequency, but on control of aliasing errors.
The previous comment also applies here. I believe that, without going up to 800 Mpixels, there are lower resolutions that would already mitigate aliasing errors well. Maybe 3 um would be a pretty good trade-off between sensor performance and mitigation of aliasing; the GFX 100 is almost there...
However, if I enlarge two images, one taken with a 50Mpixel FF camera and fast lens, and one image from a GFX50 with equivalent aperture, the GFX50 image still contains better defined details (more acuity, more micro-contrast).
And more aliasing.
Sure.
That effect comes from the lens optical path acting (diffraction aside) as low pass spatial filter, and that optical low pass effect is reduced when the sensor size increases.

That is why, if equivalence is set (equivalent aperture, equivalent focal length, and same pixel count), the medium format system consistently produces better looking images than the smartphone images , both images enlarged at the same size for viewing. And that is why reality show evidence that the diffraction only model of thinking is flawed.
Again, you talk about a diffraction-only model. I have not been using such a model. WHo in this thread are you accusing of that?

As to my take on format advantages, have a look here:

https://blog.kasson.com/the-last-word/format-size-and-image-quality/

Jim
I referred to diffraction often, because it was well modeled, and lots of curves were posted here to illustrate comments.
 
