Perceptual MPix score by DxOMark/DPreview ?

Started Dec 18, 2012 | Discussions thread
OP blue_skies Forum Pro • Posts: 11,305
Re: Perceptual MPix score by DxOMark/DPreview ?

Erik Magnuson wrote:

A single metric only has value if its scope isn't too broad.

The scope of a metric also depends on how you choose to interpret it. I expect that for a single camera body, it will correlate pretty well with user impressions of overall lens sharpness. But you do need to remember the limits of precision, i.e. it's only reported to integer units. So you really cannot tell if a value of 15 was 14.5 rounded up and 14 was 14.4 rounded down. It's also not clear if copy variation is taken into account. So IMHO, you must consider values +/- 1 as essentially identical.
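The rounding ambiguity can be made concrete with a short sketch. This is purely illustrative; the 14.5/14.4 values are hypothetical, and I'm assuming conventional round-half-up to the integer units that get published:

```python
import math

def reported(score: float) -> int:
    """Round-half-up to the integer units the published score uses (assumed)."""
    return math.floor(score + 0.5)

# Two hypothetical underlying measurements, 0.1 apart:
a, b = 14.5, 14.4

print(reported(a))   # 15
print(reported(b))   # 14
print(abs(a - b))    # 0.1 underlying difference, yet a 1-point gap in the report
```

A reported one-point gap can thus mask a negligible real difference, which is why treating +/- 1 as identical is the safe reading.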

All we know is that it's a weighted average across the field based on MTF measurements. We are also told the weighting is based on human perceptual biases. This could be as simple as using MTF50 (like almost every other review site) and weighting it 60% center, 30% at the rule-of-thirds lines, and 10% at the edge. Or it could be more complex, using both MTF50 and MTF10, horizontal, vertical, and diagonal, with a more continuous weighting scheme.
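The simple version of such a scheme can be sketched in a few lines. To be clear, the actual DxOMark formula is unpublished; the MTF50 readings, zones, and 60/30/10 weights below are all hypothetical, just to show what a perceptually weighted field average could look like:

```python
# Hypothetical MTF50 readings (lp/mm) at three field zones -- made-up numbers.
mtf50 = {"center": 48.0, "thirds": 40.0, "edge": 28.0}

# Hypothetical perceptual weights: 60% center, 30% thirds, 10% edge.
weights = {"center": 0.6, "thirds": 0.3, "edge": 0.1}

# Weighted average across the field.
score = sum(mtf50[zone] * weights[zone] for zone in weights)
print(score)  # 43.6
```

The more complex variant would just extend the same sum over more measurement points (MTF10 as well as MTF50, multiple orientations) with a finer-grained weight map.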

So when you come up with a score like the MPix score, how is that going to affect my photography?

The intended use would be to allow you to include/exclude lenses from consideration based on relative sharpness. You could set a minimum threshold (e.g. don't consider anything < 10) or simply decide between the 3 highest scoring choices based on other factors.
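That include/exclude workflow is trivial to express. The lens names and scores below are invented for illustration, not actual DxOMark data:

```python
# Hypothetical scores, not real measurements.
scores = {"Lens A": 15, "Lens B": 9, "Lens C": 13, "Lens D": 12, "Lens E": 8}

MIN_SCORE = 10  # don't consider anything below this threshold

# Step 1: exclude lenses under the threshold.
candidates = {name: s for name, s in scores.items() if s >= MIN_SCORE}

# Step 2: shortlist the three highest-scoring lenses,
# then decide among them on other factors (price, size, AF, ...).
shortlist = sorted(candidates, key=candidates.get, reverse=True)[:3]
print(shortlist)  # ['Lens A', 'Lens C', 'Lens D']
```

Given the ±1 precision discussed above, ties and near-ties within the shortlist should be treated as equivalent on sharpness alone.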

I already see discrepancies with it, where the older 10 MP Pentax K10D is testing better than the new 16 MP sensor.

At this point in time, I'd treat it like other lens test data and NOT compare scores tested on different bodies.

With lenses I don't mind seeing one number, but you need a few different categories, like landscape, portrait natural light, portrait studio light, etc.

DxOMark has these on their website. But they are just different (unknown) weightings of the same data.

Manufacturers are always trying to game scores and figures to their favor.

Which may be why DxOMark does not reveal the exact scoring formula.



DPreview uses a single scoring number for cameras,

Car&Driver uses a single scoring number for cars.

You can't say we aren't used to reducing everything to its simplest form.

As with DPreview and Car&Driver, the ratings are relative to products in the same class; compared across classes, they can give the wrong impression.

But when I read these, and other, posts, what everyone always wants to see is "which lens is best", as an absolute. Not as a composite (build quality, materials, durability), not as a relative ranking (this lens is 0.4% better than that lens), but as "this lens wins".

It is what gave Leica a market - people pay good money for 'best-in-class'.

Take an example: comparing the 24mm Sony Zeiss E with the 30mm Sigma E. Let's say the S30 even outclasses the E24 (see the lensrentals discussion); this would be reflected in a single higher number for the S30 than for the E24.

What does that number mean? Do we compare only at the same focal length? Only center resolution? Or do we let an 'artist's' interpretation carry the meaning?

I find lens ratings useful for learning the good and bad aspects of a lens. But I don't take the ratings as absolutes. Even if they are truly correct, the delta may not be worth the inconvenience (more bulk, more heft, lack of AF, a higher price, and so on).

I get the itchy feeling that DxOMark may not be revealing its formula yet because they are testing the waters and expecting some backlash. Revealing less data is more forgiving than revealing more.


