what is your take on DxOMark tests usefulness?

Can we agree that blur occurs when photons from a single point in the scene do not all arrive at the same pixel?

Can we also agree that no lens is perfect enough to direct all the photons from a point in the scene to a single point on the sensor? (Note that was a point on the sensor, not a pixel on the sensor. A pixel has dimensions, a point does not.)

A lens casts a sharp image when a sufficient portion of the photons arriving from a single point are directed to a single pixel. However, those photons may have been directed to different points on the pixel.

Does it not follow that when the pixels are larger, a lens can have a greater amount of error and still get a large enough portion of the photons from a single point to a single pixel?
The problem with your analysis is that, with the possible exception of some astrophotography use cases, we aren't interested in just sampling the photons from a single point. We are interested in sampling from a multitude of neighboring points. Thus, while the larger pixel has a better chance of capturing more of the photons from a single point source, it also has a better chance of capturing more photons from neighboring point sources as well. Depending on the particular image being projected onto the sensor, the result will not be less apparent blur but, rather, greater image blur or aliasing for fine detail in the scene.

If you have any lingering doubts, consider the following comparison of a low quality lens (the Oly 15mm body cap lens) when tested on 20mp, 16mp and 12mp cameras. As you can see, the bigger pixels of the EPM1 deliver the worst performance despite the lens having a "[great] amount of error" and its pixels capturing a larger "portion of the photons from a single point" per your analysis.



[Image: DxOMark sharpness comparison of the Olympus 15mm body cap lens on 20MP, 16MP, and 12MP bodies]
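If you want to poke at the sampling argument numerically, here is a minimal sketch. It is not DxOMark's method; the lens blur, detail frequency, and pixel pitches are all assumed, roughly Four Thirds-like numbers.

```python
# Minimal sketch (illustrative numbers only, not DxOMark's method): combine a
# Gaussian lens MTF with the pixel-aperture MTF for a few pixel pitches and
# see how the response to fine detail changes as pixels get bigger.
import numpy as np

sigma_um = 2.0                   # assumed lens blur (Gaussian PSF sigma, microns)
detail_lp_per_mm = 120.0         # assumed "fine detail" spatial frequency
f = detail_lp_per_mm / 1000.0    # cycles per micron

def system_mtf(pitch_um):
    lens = np.exp(-2 * (np.pi * sigma_um * f) ** 2)   # Gaussian PSF -> Gaussian MTF
    pixel = abs(np.sinc(f * pitch_um))                # box pixel aperture -> sinc MTF
    return lens * pixel

for pitch in (3.3, 3.8, 4.3):    # very roughly 20MP / 16MP / 12MP Four Thirds pitches
    nyquist = 1000.0 / (2 * pitch)                    # lp/mm
    note = "aliased" if detail_lp_per_mm > nyquist else "resolved"
    print(f"{pitch:.1f} um pixels: MTF at {detail_lp_per_mm:.0f} lp/mm = "
          f"{system_mtf(pitch):.2f}, Nyquist = {nyquist:.0f} lp/mm ({note})")
```

Bigger pixels both lower the contrast of fine detail and pull the Nyquist limit down toward it, which is exactly what the comparison above shows.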
 
Truly an interesting debate.

Average scores of several samples of the same lens would be more credible as an accurate representation of the quality of the lens design. Conversely, a low score from a single sample (as far as we know, DXOMark only tests one of each lens) might indicate a quality control problem by the manufacturer, particularly if other reviewers have documented better results.
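As a toy illustration of the sample-variation point (the score and spread below are invented numbers, not DxO data):

```python
# Toy illustration with invented numbers: with copy-to-copy variation, the one
# copy a reviewer happens to test can sit well away from the design's average.
import numpy as np

rng = np.random.default_rng(1)
true_design_score = 36.0      # hypothetical "design quality" of the lens
copy_variation_sd = 3.0       # hypothetical spread between individual copies

samples = rng.normal(true_design_score, copy_variation_sd, size=5)
print("score of the one tested copy:", round(samples[0], 1))
print("average score of five copies:", round(samples.mean(), 1))
```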

Like many of us, I am indeed interested in comparative reviews such as 85mm/1.4 vs. 85mm/1.8, particularly when the price differential is so large. In cases like this, it is reassuring that the performance of the two lenses as observed by DXOMark (and others) is so similar. In the end, it is hard to criticize the work of DXOMark when they provide their information, even if imperfect, for all to see free of charge.

I do like the point the OP makes regarding owners of gear defending their purchases. I'm guilty of that.
 
Can we agree that blur occurs when photons from a single point in the scene do not all arrive at the same pixel?

Can we also agree that no lens is perfect enough to direct all the photons from a point in the scene to a single point on the sensor? (Note that was a point on the sensor, not a pixel on the sensor. A pixel has dimensions, a point does not.)

A lens casts a sharp image when a sufficient portion of the photons arriving from a single point are directed to a single pixel. However, those photons may have been directed to different points on the pixel.

Does it not follow that when the pixels are larger, a lens can have a greater amount of error and still get a large enough portion of the photons from a single point to a single pixel?
The problem with your analysis is that, with the possible exception of some astrophotography use cases, we aren't interested in just sampling the photons from a single point. We are interested in sampling from a multitude of neighboring points. Thus, while the larger pixel has a better chance of capturing more of the photons from a single point source, it also has a better chance of capturing more photons from neighboring point sources as well. Depending on the particular image being projected onto the sensor, the result will not be less apparent blur but, rather, greater image blur or aliasing for fine detail in the scene.

If you have any lingering doubts, consider the following comparison of a low quality lens (the Oly 15mm body cap lens) when tested on 20mp, 16mp and 12mp cameras. As you can see, the bigger pixels of the EPM1 deliver the worst performance despite the lens having a "[great] amount of error" and its pixels capturing a larger "portion of the photons from a single point" per your analysis.
All that is very true. The case I would counter with is that when you look at DxOMark measurements for sharpness of FX lenses on 24MP FX and DX bodies, the lens is sharper on the FX body, despite throwing away the less sharp edge of the image circle when testing on DX.

What's going on here is that there are two different factors affecting our perception of sharpness. One is the concentration of photons from a single point into a single pixel and the other is the separation of photons from separate points into separate pixels. Using the same lens with different pixel counts in the same sensor area illustrates your point. Using the same lens with the same pixel count in different sensor areas illustrates mine.

[Image: DxOMark acutance comparison of the same lens measured on FX and DX bodies]

As you can see, the difference in acutance from the effect I am discussing is greater than the difference in acutance from the effect you are discussing.
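To put some rough numbers on the same-pixel-count, different-sensor-area case (the blur diameter below is an arbitrary assumption, and the sensor dimensions are nominal):

```python
# Back-of-the-envelope sketch with assumed numbers: the same physical blur spot
# from the lens, expressed per pixel and per picture height, on 24MP FX vs DX.
blur_um = 12.0                                          # assumed lens blur circle diameter

sensors = {
    "24MP FX (approx. 36.0 x 24.0 mm)": (24000.0, 4000),   # height in um, rows of pixels
    "24MP DX (approx. 23.5 x 15.6 mm)": (15600.0, 4000),
}

for name, (height_um, rows) in sensors.items():
    pitch = height_um / rows
    print(f"{name}: pitch = {pitch:.1f} um, blur = {blur_um / pitch:.1f} px, "
          f"blur = {100 * blur_um / height_um:.3f}% of picture height")
```

The same 12 µm of blur spans more pixels and a larger fraction of the frame on DX, which is the per-picture-height view of the drop.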
 
Can we agree that blur occurs when photons from a single point in the scene do not all arrive at the same pixel?

Can we also agree that no lens is perfect enough to direct all the photons from a point in the scene to a single point on the sensor? (Note that was a point on the sensor, not a pixel on the sensor. A pixel has dimensions, a point does not.)

A lens casts a sharp image when a sufficient portion of the photons arriving from a single point are directed to a single pixel. However, those photons may have been directed to different points on the pixel.

Does it not follow that when the pixels are larger, a lens can have a greater amount of error and still get a large enough portion of the photons from a single point to a single pixel?
The problem with your analysis is that, with the possible exception of some astrophotography use cases, we aren't interested in just sampling the photons from a single point. We are interested in sampling from a multitude of neighboring points. Thus, while the larger pixel has a better chance of capturing more of the photons from a single point source, it also has a better chance of capturing more photons from neighboring point sources as well. Depending on the particular image being projected onto the sensor, the result will not be less apparent blur but, rather, greater image blur or aliasing for fine detail in the scene.

If you have any lingering doubts, consider the following comparison of a low quality lens (the Oly 15mm body cap lens) when tested on 20mp, 16mp and 12mp cameras. As you can see, the bigger pixels of the EPM1 deliver the worst performance despite the lens having a "[great] amount of error" and its pixels capturing a larger "portion of the photons from a single point" per your analysis.
All that is very true. The case I would counter with is that when you look at DxOMark measurements for sharpness of FX lenses on 24MP FX and DX bodies, the lens is sharper on the FX body, despite throwing away the less sharp edge of the image circle when testing on DX.
By "throwing away" (i.e., crop) the outer portion of the image circle you're throwing away much more than just the "less sharp edge of the image circle" in your cross format comparison. You're essentially magnifying the lens' aberrations. For some reason lot's of forum participants fail to grasp this fact even though they understand that the same thing is happening on the sensor side when you "throw away" (i.e. crop) the outer portion of a fullframe image in Photoshop to mimic what happens on cropped sensor. Perhaps if we talked about "cropping" the image circle of a lens, the point would be more easily intuited.

As I noted in my response to beatboxa, a good focal reducer like the ones produced by Metabones will actually INCREASE resolution for a given lens when used on a crop sensor relative to use of the lens on a fullframe camera. Instead of "cropping" the image circle and effectively magnifying the lens' aberrations, you're now using the full image circle and shrinking the lens aberrations.
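Here is the same idea as rough arithmetic (the blur circle, reducer magnification, and pixel pitch are assumptions; a real reducer also adds some aberrations of its own):

```python
# Rough sketch with assumed numbers (not a Metabones spec): a 0.71x focal
# reducer shrinks the full-frame lens's projected blur spot relative to a
# fixed APS-C pixel pitch.
blur_ff_um = 12.0        # assumed blur circle of the FF lens used natively
reducer = 0.71           # nominal magnification of a typical focal reducer
apsc_pitch_um = 3.9      # assumed pitch of a 24MP APS-C sensor

for setup, blur in (("FF lens on APS-C, no reducer", blur_ff_um),
                    ("FF lens on APS-C + 0.71x reducer", blur_ff_um * reducer)):
    print(f"{setup}: blur = {blur:.1f} um = {blur / apsc_pitch_um:.1f} pixels")
```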
What's going on here is that there are two different factors affecting our perception of sharpness. One is the concentration of photons from a single point into a single pixel and the other is the separation of photons from separate points into separate pixels. Using the same lens with different pixel counts in the same sensor area illustrates your point. Using the same lens with the same pixel count in different sensor areas illustrates mine.
Except you aren't using the "same lens" in that you're not using the same image circle generated by the lens. You're using a "cropped" image circle with magnified aberrations. Thus, the disadvantage has nothing to do with the smaller pixels on the cropped sensor and everything to do with the blurrier image being projected onto the cropped sensor.
[Image: DxOMark acutance comparison of the same lens measured on FX and DX bodies]

As you can see, the difference in acutance from the effect I am discussing is greater than the difference in acutance from the effect you are discussing.
 
Can we agree that blur occurs when photons from a single point in the scene do not all arrive at the same pixel?

Can we also agree that no lens is perfect enough to direct all the photons from a point in the scene to a single point on the sensor? (Note that was a point on the sensor, not a pixel on the sensor. A pixel has dimensions, a point does not.)

A lens casts a sharp image when a sufficient portion of the photons arriving from a single point are directed to a single pixel. However, those photons may have been directed to different points on the pixel.

Does it not follow that when the pixels are larger, a lens can have a greater amount of error and still get a large enough portion of the photons from a single point to a single pixel?
The problem with your analysis is that, with the possible exception of some astrophotography use cases, we aren't interested in just sampling the photons from a single point. We are interested in sampling from a multitude of neighboring points. Thus, while the larger pixel has a better chance of capturing more of the photons from a single point source, it also has a better chance of capturing more photons from neighboring point sources as well. Depending on the particular image being projected onto the sensor, the result will not be less apparent blur but, rather, greater image blur or aliasing for fine detail in the scene.

If you have any lingering doubts, consider the following comparison of a low quality lens (the Oly 15mm body cap lens) when tested on 20mp, 16mp and 12mp cameras. As you can see, the bigger pixels of the EPM1 deliver the worst performance despite the lens having a "[great] amount of error" and its pixels capturing a larger "portion of the photons from a single point" per your analysis.
All that is very true. The case I would counter with is that when you look at DxOMark measurements for sharpness of FX lenses on 24MP FX and DX bodies, the lens is sharper on the FX body, despite throwing away the less sharp edge of the image circle when testing on DX.
By "throwing away" (i.e., crop) the outer portion of the image circle you're throwing away much more than just the "less sharp edge of the image circle" in your cross format comparison.
Yes.
You're essentially magnifying the lens' aberrations.
It would be more correct to say you are enlarging the effect of the aberrations but even that is not right.

The effect of an aberration is to direct a photon to a point other than the point to which it ideally should be directed. If the effect of the aberration were being magnified, the displacement of the photon across the sensor would be a larger absolute distance. That doesn't happen. The same lens continues to displace photons the same distance across the DX sensor as it did across the FX sensor. However, that same distance may put the photon in the correct pixel on the FX sensor but in an incorrect pixel on the DX sensor, since the pixels on the 24MP DX sensor are smaller than on the 24MP FX sensor. Only in the case where the displacement is large enough to cause the photon to miss the target pixel on the FX sensor is the effect of the aberration enlarged when you use a smaller sensor with the same pixel count.

To put it another way, the size of the pixel determines what portion of photons displaced by aberrations end up in the wrong pixel. That is neither enlargement nor magnification. It is selection.
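The "selection" framing is easy to check with a quick Monte Carlo. The displacement spread and pixel pitches below are assumptions, and real pixels are not perfectly square bins, so treat it as a sketch only:

```python
# Monte Carlo sketch of the "selection" idea, with assumed numbers: the lens
# displaces each photon by the same physical distance on either sensor, but
# smaller pixels turn more of those displacements into wrong-pixel hits.
import numpy as np

rng = np.random.default_rng(0)
sigma_um = 2.0            # assumed RMS photon displacement due to aberrations
n = 200_000

for label, pitch in (("24MP FX, ~6.0 um pixels", 6.0),
                     ("24MP DX, ~3.9 um pixels", 3.9)):
    aim = rng.uniform(0.0, pitch, size=(n, 2))            # ideal landing point in the pixel
    land = aim + rng.normal(0.0, sigma_um, size=(n, 2))   # same physical displacement
    wrong = np.any((land < 0.0) | (land >= pitch), axis=1)
    print(f"{label}: {100 * wrong.mean():.0f}% of photons land in a neighbouring pixel")
```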
...

As I noted in my response to beatboxa, a good focal reducer like the ones produced by Metabones will actually INCREASE resolution for a given lens when used on a crop sensor relative to use of the lens on a fullframe camera.
That is because it essentially compresses the spread of light over a smaller area, so any photons displaced by a lens aberration are displaced a shorter distance.
...
What's going on here is that there are two different factors affecting our perception of sharpness. One is the concentration of photons from a single point into a single pixel and the other is the separation of photons from separate points into separate pixels. Using the same lens with different pixel counts in the same sensor area illustrates your point. Using the same lens with the same pixel count in different sensor areas illustrates mine.

Except you aren't using the "same lens" in that you're not using the same image circle generated by the lens.
You are using a subset of the same image circle and dividing that subset over the same number of pixels that the full image circle was divided amongst on the FX sensor.
You're using a "cropped" image circle with magnified aberrations.
No. You have to consider the stage at which the digitization, the sampling at a given frequency, takes place.
Thus, the disadvantage has nothing to do with the smaller pixels on the cropped sensor and everything to do with the blurrier image being projected onto the cropped sensor.
No, the image being projected onto the cropped sensor is exactly the same image that is being projected onto the non-cropped sensor. Some of that image misses the smaller sensor. The difference is that the same image is being divided among more pixels, so aberrations that had no effect on sharpness on the FX sensor (those photons that hit the wrong spot, but the spot was still on the correct pixel) now have an effect because they are now hitting the wrong pixel. It is not the case that the effect of every photon redirection is being magnified/enlarged/amplified.
[Image: DxOMark acutance comparison of the same lens measured on FX and DX bodies]

As you can see, the difference in acutance from the effect I am discussing is greater than the difference in acutance from the effect you are discussing.
 
Sidenote: most people are usually getting more resolution on their FF than their APS-C because the lenses don't need to resolve as sharply for a larger pixel to register the signal.
It's not because of differences in pixel size. It's because you are using a smaller area of the image circle the lens was designed for and, as a result, magnifying the aberrations of the lens. Use a high quality focal reducer from Metabones on the APS-C camera and the resolution of the FF lens goes back up despite the fact that the pixel size of the APS-C camera remains unchanged.
See my prior post here.

You're making several assumptions in your statement beyond the post I was responding to, and nothing you wrote contradicts what I wrote.
See the text I've bolded in your original post. The issue isn't related to larger pixels somehow being better able to "register the signal."
Yes it is--pixel size is a part of resolution. You are assuming the use of a high-quality FF lens on a crop-frame body. And again, nothing contradicts what I wrote. The link I provided gives even more detail than the above, including use of the center portion of a FF lens on a crop frame body.
Just an observation.

The APSC lenses need to have a resolution that is crop factor times the resolution of a similar FF lens to give equivalent sharpness. If the FF lens has a resolution of 40 lpmm, then a similar APSC lens needs to have 60 lpmm resolution.

So, for a similar sensor MP rating, using an APSC lens on an APSC body will give sharpness roughly equivalent to a FF lens on a FF body. But using a FF lens on the APSC body means one is using a lower-resolution lens on the cropped body, and so the result should not match the FF lens and body combo.
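The lp/mm arithmetic works out like this, using nominal frame heights (real DX is closer to 15.6 mm than 16 mm):

```python
# Quick arithmetic for the lp/mm claim above, using nominal frame heights.
ff_lp_per_mm = 40.0
crop_factor = 1.5
ff_height_mm = 24.0
apsc_height_mm = ff_height_mm / crop_factor       # 16 mm nominal

ff_lp_per_ph = ff_lp_per_mm * ff_height_mm        # line pairs per picture height
needed_apsc_lp_per_mm = ff_lp_per_ph / apsc_height_mm
print(f"FF: {ff_lp_per_ph:.0f} lp/picture-height -> APS-C needs "
      f"{needed_apsc_lp_per_mm:.0f} lp/mm for the same detail")   # 960 -> 60
```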
 
The APSC lenses need to have a resolution that is crop factor times the resolution of a similar FF lens to give equivalent sharpness. If the FF lens has a resolution of 40 lpmm, then a similar APSC lens needs to have 60 lpmm resolution.

So, for a similar sensor MP rating, using an APSC lens on an APSC body will give sharpness roughly equivalent to a FF lens on a FF body.
Just because it needs to have that resolution does not mean that it has it.
 
[...]

I thought I had seen it all, but this post is a new achievement...
 
Sidenote: most people are usually getting more resolution on their FF than their APS-C because the lenses don't need to resolve as sharply for a larger pixel to register the signal.
It's not because of differences in pixel size. It's because you are using a smaller area of the image circle the lens was designed for and, as a result, magnifying the aberrations of the lens. Use a high quality focal reducer from Metabones on the APS-C camera and the resolution of the FF lens goes back up despite the fact that the pixel size of the APS-C camera remains unchanged.
See my prior post here.

You're making several assumptions in your statement beyond the post I was responding to, and nothing you wrote contradicts what I wrote.
See the text I've bolded in your original post. The issue isn't related to larger pixels somehow being better able to "register the signal."
Yes it is--pixel size is a part of resolution. You are assuming the use of a high-quality FF lens on a crop-frame body. And again, nothing contradicts what I wrote. The link I provided gives even more detail than the above, including use of the center portion of a FF lens on a crop frame body.
Just an observation.

The APSC lenses need to have a resolution that is crop factor times the resolution of a similar FF lens to give equivalent sharpness. If the FF lens has a resolution of 40 lpmm, then a similar APSC lens needs to have 60 lpmm resolution.

So, for a similar sensor MP rating, using an APSC lens on an APSC body will give sharpness roughly equivalent to a FF lens on a FF body. But using a FF lens on the APSC body means one is using a lower-resolution lens on the cropped body, and so the result should not match the FF lens and body combo.
True fact.
 
By "throwing away" (i.e., crop) the outer portion of the image circle you're throwing away much more than just the "less sharp edge of the image circle" in your cross format comparison.
Yes.
You're essentially magnifying the lens' aberrations.
It would be more correct to say you are enlarging the effect of the aberrations but even that is not right.

The effect of an aberration is to direct a photon to a point other than the point to which it ideally should be directed. If the effect of the aberration were being magnified, the displacement of the photon across the sensor would be a larger absolute distance. That doesn't happen. The same lens continues to displace photons the same distance across the DX sensor as it did across the FX sensor. However, that same distance may put the photon in the correct pixel on the FX sensor but in an incorrect pixel on the DX sensor, since the pixels on the 24MP DX sensor are smaller than on the 24MP FX sensor. Only in the case where the displacement is large enough to cause the photon to miss the target pixel on the FX sensor is the effect of the aberration enlarged when you use a smaller sensor with the same pixel count.

To put it another way, the size of the pixel determines what portion of photons displaced by aberrations end up in the wrong pixel. That is neither enlargement nor magnification. It is selection.
But, of course, the effect of the aberration is enlarged or magnified or whatever synonym you want to apply here! Let's do a little thought experiment inspired by your "name." Dip your finger in some paint and then dab it once on the center of the front element of an FX lens. Now, if you used this severely "aberrated" lens mounted on a DX and then a FX body to frame and shoot a resolution chart, in which shot will the effect of this "aberration" be larger?
...

As I noted in my response to beatboxa, a good focal reducer like the ones produced by Metabones will actually INCREASE resolution for a given lens when used on a crop sensor relative to use of the lens on a fullframe camera.
That is because it essentially compresses the spread of light over a smaller area, so any photons displaced by a lens aberration are displaced a shorter distance.
Continuing our experiment, add a focal reducer to the DX setup, reframe as necessary and reshoot the resolution chart. Is the effect of the "aberration" changed?

The rest of your response is just a repeat of the same conceptual error.

P.S. My suggestion is that, in the future, you stay away from thinking about the applicable physics of photography on the individual photon and pixel level and, instead, think about it at the image level. Your mode of thinking is what trips lots of people up when they're trying to consider the effects of noise and diffraction as well. Hope this helps!
 
How do they translate to real life, for you? I'm not questioning their credibility, which I have no reason to doubt, but rather asking whether they influence your lens purchase decisions, to what extent, and whether you feel the results are compatible with your own experiences.

Here is an example, Zeiss Planar 85mm f/1.4 vs. Nikkor 85mm f/1.4G on a Nikon D810 body:

https://www.dxomark.com/Lenses/Comp...KKOR-85mm-f14G-on-Nikon-D810__338_963_388_963

The Nikkor has a score of 42, the Zeiss gets 31. It is a big gap. I happen to own both (selling the Nikkor currently) and I don't see a difference except in character and subjective attributes like bokeh. I'm still going to conduct more tests; I'm curious on this one.

The "angry photographer" has a youtube review praising the zeiss over the nikkor 85mm f1.4g and he usually (in my opinion) is very accurate and credible. In all the forums and stuff I've read (even before deciding getting the zeiss) people seem to say both are optically great and talk about the obvious manual vs auto focus and subjective atributes.

So I'm kind of curious how such a big score gap doesn't translate into real-life perceptions.
I have never cared about their opinions, findings, or test charts. Never will either. I'm only interested in real-world photos. If I can't see the performance for myself through test shots, sample photos, etc., then I take their conclusions with a grain of salt.

I could post dozens of examples of lenses that they rate highly but, when I've shot them and seen the actual images, I draw completely different conclusions; but since this thread is almost maxed out, there's little point.

Nonetheless, whether you like DXO or find them irrelevant, either way I'd never just use one website to draw conclusions about anything. Research, research, research!
 
Let's do a little thought experiment inspired by your "name." Dip your finger in some paint and then dab it once on the center of the front element of an FX lens. Now, if you used this severely "aberrated" lens mounted on a DX and then a FX body to frame and shoot a resolution chart, in which shot will the effect of this "aberration" be larger?
You seem to be under the mistaken impression that if you do something to the centre of the front element it only affects the centre of the image circle.

It isn't an aberration. It doesn't deflect the photon path; it blocks it. The most significant effect will be identical on both sensors: it will darken the whole frame in a nearly uniform manner, just as stopping down darkens the whole frame. Its secondary effect will be a mirror image of the effect of stopping down, since it blocks the centre of the lens instead of its edges. The whole frame will become less sharp. On both sensors the sharpness loss will probably be greater in the centre than at the edges.
Continuing our experiment, add a focal reducer to the DX setup, reframe as necessary and reshoot the resolution chart. Is the effect of the "aberration" changed?
The change will be roughly the same as adding a focal reducer to a stopped down lens.
P.S. My suggestion is that, in the future, you stay away from thinking about the applicable physics of photography on the individual photon and pixel level and, instead, think about it at the image level. Your mode of thinking is what trips lots of people up when they're trying to consider the effects of noise and diffraction as well. Hope this helps!
It is very unhelpful. If you don't understand the physics, you are more likely to misunderstand what happens at the whole image level.
 
But as they just give wholesale metrics, don't evaluate AF motors or build quality, and mix in sensors, I tend to just use them as a rough guesstimate rather than the be-all and end-all of lens metrics.

Lensrentals evaluations are by far the best I have ever seen
 
But as they just give wholesale metrics, don't evaluate AF motors or build quality, and mix in sensors, I tend to just use them as a rough guesstimate rather than the be-all and end-all of lens metrics.

Lensrentals evaluations are by far the best I have ever seen
I'd suggest that the fact that DxOMark reports output attached to specific bodies makes their measurements more useful, not less. Nobody uses a lens without a body. A wonderfully sharp lens on a small, low-pixel-count sensor will produce a less sharp image than a middle-of-the-road lens on a large-sensor, high-pixel-count body.

Lensrentals' optical bench tests are great for comparing performance on the same sensor, but not as much help for comparing lenses intended for use on different bodies. They'll tell you that lens A is sharper than lens B, but how will you tell whether lens A on a K-1 is sharper than lens B on a 5DS R? They are great for choosing a lens for a body you already have or know you will get, but not nearly as much help in choosing a system.
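As a rough sketch of why pairing lenses with bodies matters: the quadrature combination below is a common rule of thumb, not DxOMark's actual model, and the lens figures are invented.

```python
# Rule-of-thumb sketch with invented numbers (not DxOMark's model): combine a
# lens resolution figure with the sensor's Nyquist limit in quadrature to get
# a crude "system" resolution per picture height.
def system_lp_per_ph(lens_lp_per_mm, sensor_height_mm, pixels_tall):
    lens = lens_lp_per_mm * sensor_height_mm      # lens limit, lp per picture height
    sensor = pixels_tall / 2.0                    # sensor Nyquist, lp per picture height
    return (lens ** -2 + sensor ** -2) ** -0.5    # quadrature combination heuristic

combos = {
    "excellent lens (80 lp/mm) on a 12MP 1-inch sensor": (80.0, 8.8, 3000),
    "average lens (45 lp/mm) on a 42MP full-frame body": (45.0, 24.0, 5304),
}
for name, args in combos.items():
    print(f"{name}: ~{system_lp_per_ph(*args):.0f} lp/picture-height")
```

The weaker lens on the bigger, denser sensor comes out ahead, which is the kind of cross-system comparison the on-body scores make visible.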

Each approach has its strengths and weaknesses.
 
I realize this thread is a few months old, but I have been perplexed by the presentation of DxOMark's sharpness measurements. The transformation of a sharpness score into a color seems bizarre, especially when the difference between 12 P-Mpix and the maximum possible P-Mpix surely must be significant, yet it is lost when both are represented by the same color. Everything from 12 onward is lumped into the same color.

They have the data for sharpness scores at every case tested, why don't they make it available? A table would be fine.

Is there some way to access the actual scores that comprise their silly color-coded charts?

--
https://www.flickr.com/gp/143821723@N06/sRBm53
 
DXO's testing methodology is a consistent approach to measuring samples of a wide variety of lenses. While individual samples can vary, the information shown in the measurements tab provides insights into lenses and comparisons that are unavailable elsewhere. Individual reviews and impressions can be useful as well, but those are even less definable than DXO results.

Summary scores and comparisons of lenses on different camera bodies can be misleading. Taking these provisos into account, DXOmark information can be a useful addition to photo comparisons.
 
DXO's testing methodology is a consistent approach to measuring samples of a wide variety of lenses. While individual samples can vary, the information shown in the measurements tab provides insights into lenses and comparisons that are unavailable elsewhere. Individual reviews and impressions can be useful as well, but those are even less definable than DXO results.
What they do is test one sample of each lens, like any other testing site except Lens Rentals and, occasionally, TDP. In the end, they scramble the data and present it in some mystical way.
 
I realize this thread is a few months old, but I have been perplexed by the presentation of DxOMark's sharpness measurements. The transformation of a sharpness score into a color seems bizarre, especially when the difference between 12 P-Mpix and the maximum possible P-Mpix surely must be significant, yet it is lost when both are represented by the same color. Everything from 12 onward is lumped into the same color.

They have the data for sharpness scores at every case tested, why don't they make it available? A table would be fine.

Is there some way to access the actual scores that comprise their silly color-coded charts?

--
https://www.flickr.com/gp/143821723@N06/sRBm53
Yes. "Measurements - Sharpness - Profiles" Shows a graph of acutance (as a %) across the frame for each tested aperture/focal length setting tested. Not measured in terms of their "dumbed down" "P-Mpix" scores.
Acutance - another mystical metric.
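It is less mystical than it looks. A classical edge-based acutance is easy to write down; the sketch below uses the old Higgins-Jones style definition, which is not necessarily DxO's exact (perceptually weighted) formula.

```python
# One classical way to put a number on acutance (Higgins-Jones style; not
# necessarily DxOMark's exact formula): the mean squared gradient across an
# edge, normalised by the edge's total rise.
import numpy as np

def edge_acutance(edge_profile, spacing=1.0):
    g = np.gradient(edge_profile, spacing)        # local slope of the edge profile
    rise = edge_profile.max() - edge_profile.min()
    return float(np.mean(g ** 2) / rise) if rise else 0.0

x = np.linspace(-5, 5, 201)
sharp_edge = 1.0 / (1.0 + np.exp(-4.0 * x))       # steep (sharp) transition
soft_edge = 1.0 / (1.0 + np.exp(-1.0 * x))        # gentle (blurred) transition
dx = x[1] - x[0]
print(edge_acutance(sharp_edge, dx) > edge_acutance(soft_edge, dx))   # True
```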
 
