DxOMark -- why "only" 71 points?

Why are the m43 sensors 20+ points behind the best sensors like the D600? Is it because the pixels on a FF sensor are larger and can be manufactured to have better dynamic range? Or does Nikon simply make better sensors than Sony/Olympus?
Looks like many people are disappointed by DXO measurements.
So what evidence can you point to in support of that impression? Most MFT users who participated in the original thread on the subject seemed quite pleased. See here:

http://forums.dpreview.com/forums/readflat.asp?forum=1041&message=42579101
I guess it's simply the result of unreasonable expectations, or should I say irrational exuberance? Even DPR gave an 80% score to the OM-D and 79% to the NEX-5N, while the DxO measurements simply confirmed what was reasonable to expect in the first place: that the 1.5x larger NEX sensor would provide approximately 1.5x better results in every category.
So what exactly does "1.5x better" mean? What categories are you talking about? And on what do you base your expectations? Please be specific.
What it means is pretty clear: the results scale with the sensor size.
 
No. The point I am trying to make is that while the area difference has implications of exactly the kind you indicate for photon noise, it does not have such implications when it comes to read noise and thus DR and shadow noise. This is an important point, since it is usually missing in standard discussions involving cross-format comparisons.
Yeah, but neither position is fully right. One cannot look solely at photon shot noise in the darker areas, but looking only at read noise is also incomplete. Bill Claff has reported that for several cameras he studied, using his definition of DR, the photon shot noise was higher than the read noise at his lower-end cut-off (one being higher does not mean the other is irrelevant). Essentially, the only relevant comparison is that of full SNR curves like the one Marianne has produced:
http://forums.dpreview.com/forums/read.asp?forum=1021&message=40756917
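
For anyone who wants to see what such a curve looks like without digging through the graphs, here is a minimal sketch of the usual model behind it (shot noise plus read noise only, with made-up full-well and read-noise numbers, so treat it as an illustration rather than data for any real camera):

import math

def snr_db(signal_e, read_noise_e):
    """Per-pixel SNR in dB for a mean signal of signal_e electrons,
    modelling only Poisson shot noise plus Gaussian read noise
    (no PRNU, dark current or quantisation)."""
    noise = math.sqrt(signal_e + read_noise_e ** 2)
    return 20 * math.log10(signal_e / noise)

# Two hypothetical sensors with the same full well but different read noise
full_well = 25_000  # electrons at clipping (assumed value)
for read_noise in (3.0, 12.0):
    print(f"read noise = {read_noise:.0f} e-")
    for stops_down in range(0, 13, 2):
        signal = full_well / 2 ** stops_down
        print(f"  {stops_down:2d} stops below clipping: {snr_db(signal, read_noise):5.1f} dB")

Near the highlights the two hypothetical sensors are practically identical because shot noise dominates; the read-noise difference only opens a gap in the deep shadows, which is exactly why the two positions above emphasise different parts of the curve.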
 
Why are the m43 sensors 20+ points behind the best sensors like the D600? Is it because the pixels on a FF sensor are larger and can be manufactured to have better dynamic range? Or does Nikon simply make better sensors than Sony/Olympus?
Looks like many people are disappointed by DXO measurements.
So what evidence can you point to in support of that impression? Most MFT users who participated in the original thread on the subject seemed quite pleased. See here:

http://forums.dpreview.com/forums/readflat.asp?forum=1041&message=42579101
I guess it's simply the result of unreasonable expectations, or should I say irrational exuberance? Even DPR gave an 80% score to the OM-D and 79% to the NEX-5N, while the DxO measurements simply confirmed what was reasonable to expect in the first place: that the 1.5x larger NEX sensor would provide approximately 1.5x better results in every category.
So what exactly does "1.5x better" mean? What categories are you talking about? And on what do you base your expectations? Please be specific.
What it means is pretty clear: the results scale with the sensor size.
I realized that you meant something like that but asked you to be specific. You chose not to be. I can see why.
 
First, I'd be interested in taking a second look at the images where you had color shift in the corners with the G1 (using the 14-45 if I recall correctly).
I will have to dig those up at some point. Don't have a pro flickr account, so those got nuked the last time I went through to clear out space.
Yes, I noticed that. I found our prior exchange where you posted them but the image (or images) were gone. See here:

http://forums.dpreview.com/forums/readflat.asp?forum=1041&thread=41287373&page=1
You have all three, so would you be willing to take some simple test shots, similar to those I last posted in that thread, using for example your 45/1.8, preferably wide open?
Actually, I "only" have the GF1/GF2/G1/GH2 on hand... the E-M5 is a camera I am all but sold on, but I will probably wait until the snow season starts (after the new year), and getting RRS brackets / spare batteries is something I still have to tackle (not that easy from Japan).
OK. I thought I read somewhere that you bought the E-M5 but must have mixed that up somehow. But what you say about the RRS bracket and spare batteries surprises me a bit. Why would it be more difficult to get those in Japan than elsewhere? If anything, I would have guessed the opposite.

Even though you don't have the E-M5, I'd be very interested in seeing directly comparable samples from what you have, especially the G1 and the GH2. That would help settle the question of whether this is a Pany-versus-Oly difference or whether some Panys are at least partly affected. As far as I can tell from the experiments I can perform myself, the E-M5 is strongly affected by the purpleness problem and the G1 not at all. So the question is where the GH2 ends up: in exactly the same camp as the G1, or somewhere in between the G1 and the E-M5.
 
Why are the m43 sensors 20+ points behind the best sensors like the D600? Is it because the pixels on a FF sensor are larger and can be manufactured to have better dynamic range? Or does Nikon simply make better sensors than Sony/Olympus?
Looks like many people are disappointed by DXO measurements.
So what evidence can you point to in support of that impression? Most MFT users who participated in the original thread on the subject seemed quite pleased. See here:

http://forums.dpreview.com/forums/readflat.asp?forum=1041&message=42579101
I guess it's simply the result of unreasonable expectations, or should I say irrational exuberance? Even DPR gave an 80% score to the OM-D and 79% to the NEX-5N, while the DxO measurements simply confirmed what was reasonable to expect in the first place: that the 1.5x larger NEX sensor would provide approximately 1.5x better results in every category.
So what exactly does "1.5x better" mean? What categories are you talking about? And on what do you base your expectations? Please be specific.
What it means is pretty clear: the results scale with the sensor size.
I realized that you meant something like that but asked you to be specific. You chose not to be. I can see why.
You seem to want to argue about something but won't say what, and I'm not going to waste my time playing this game.
 
DPR assessed the whole camera, not the sensor alone.

We're getting close to the stage where the latest sensors are good enough for anything but the most specialised purposes.

Therefore the score reflected DPR's opinion of the capabilities of the camera features and functionality.

Another few years and this obsession with sensors will have faded and we will be back to the film era when cameras were assessed purely on their features and usability for their target market.
 
No. The point I am trying to make is that while the area difference has implications of exactly the kind you indicate for photon noise, it does not have such implications when it comes to read noise and thus DR and shadow noise. This is an important point, since it is usually missing in standard discussions involving cross-format comparisons.
Yeah, but neither position is fully right. One cannot look solely at photon shot noise in the darker areas, but looking only at read noise is also incomplete. Bill Claff has reported that for several cameras he studied, using his definition of DR, the photon shot noise was higher than the read noise at his lower-end cut-off (one being higher does not mean the other is irrelevant). Essentially, the only relevant comparison is that of full SNR curves like the one Marianne has produced:
http://forums.dpreview.com/forums/read.asp?forum=1021&message=40756917
I am not saying you shouldn't look at other things. Rather I am saying that DR is my preferred simplification, if I have to choose a single one.

What Bill reports is due to the special definition of DR he uses. The higher you set the noise floor, the more photon noise will dominate on all levels of the remaining dynamic range.
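
To put numbers on that (a rough sketch with assumed sensor values, not anything Bill has published):

import math

full_well = 25_000   # electrons at clipping (assumed)
read_noise = 3.0     # electrons (assumed)
crossover = read_noise ** 2   # signal where shot noise equals read noise

def stops(a, b):
    return math.log2(a / b)

# Engineering floor: signal equal to the read noise (SNR of 1 against read noise alone)
eng_floor = read_noise
# Higher floor: total SNR of 10, i.e. solve S / sqrt(S + r^2) = 10 for S
r2 = read_noise ** 2
snr10_floor = (100 + math.sqrt(100 ** 2 + 4 * 100 * r2)) / 2

for name, floor in (("floor at the read noise", eng_floor),
                    ("floor at SNR = 10", snr10_floor)):
    dr = stops(full_well, floor)
    shot_dominated = stops(full_well, max(crossover, floor))
    print(f"{name}: DR = {dr:4.1f} stops, shot-noise-dominated share = {100 * shot_dominated / dr:3.0f}%")

With the floor at the read noise itself, the bottom stop and a half of the range is read-noise limited; push the floor up to SNR = 10 and everything that remains is shot-noise limited, which is the effect I mean.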

Consider the discussion between Great Bustard and me to which I link below:

http://forums.dpreview.com/forums/read.asp?forum=1041&message=41640150

As you can see, I show that for the GH1, which happened to be the example one of us picked, when used at ISO 3200, the point at which photon noise becomes more important than read noise is only 4.8 stops down from the clipping point and 2.7 stops above the noise floor as conventionally defined, i.e. a point well into the midtones rather than the darkest shadows of a normal image.
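
The arithmetic behind figures of that kind is simple enough to write down; here is a sketch with assumed illustrative inputs (not the measured GH1 data from the linked thread):

import math

def crossover_stops(full_well_e, read_noise_e):
    """Shot noise sqrt(S) equals the read noise r at S = r**2 electrons.
    Returns how far that point sits below clipping and above the
    conventional (engineering) noise floor, in stops."""
    crossover = read_noise_e ** 2
    below_clipping = math.log2(full_well_e / crossover)
    above_floor = math.log2(crossover / read_noise_e)
    return below_clipping, above_floor

# Assumed high-ISO inputs, chosen only for illustration
full_well_at_high_iso = 1200.0   # electrons at clipping after analog gain
read_noise = 6.5                 # electrons

down, up = crossover_stops(full_well_at_high_iso, read_noise)
print(f"shot noise overtakes read noise {down:.1f} stops below clipping")
print(f"and only {up:.1f} stops above the conventional noise floor")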

Further, I encourage you to perform the following experiment. Compare, using the DPR "compare RAW" tool, some cameras tested by DxO that differ significantly in terms of high-ISO sensor performance. Establish how large the difference is in ISO terms, i.e., try to find a pair of reasonably high ISO settings (one for the better camera and one for the worse) such that the images are of approximately the same quality as far as noise is concerned (where I am sure you will find that the noise is most bothersome in the shadows).

This may be complicated by the fact that the sensor resolution varies, so try to the extent possible to keep that factor constant or to somehow take it into account. Ideally, one should do here what I did in my D800 versus E-M5 comparison posted above, i.e., process identically from the RAWs in both cases and keep final display resolution the same. But that requires a bit of work.

Note how big the difference between the two ISOs is. Now go to DxO and check the difference a) in "sports" scores and b) in DR at the same ISO within the relevant part of the DR curve. When looking at the DR curves, you might also want to check how close the DR is at the two ISOs where you found the two cameras to show roughly the same noise level. Repeat for at least a small sample of camera pairs.

Then tell me which of the two DxO differences better match the difference you found by looking at the images. I am pretty sure that you will find, as I have, that the difference in DR better matches your perceptual impressions than the one in "sports" scores, and that the DR at the two ISOs where you found the two cameras to be roughly the same is roughly on a par.

Obviously all three differences (the two in DxO numbers and the one you found by looking at the images) are likely to be in the same direction, so that's not the point here. The point is the magnitude: how big is the difference measured in these three ways, and which pairs of measures are better aligned with each other in that regard?
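
If it helps, here is how I would put the two DxO differences on a common "stops" scale before making that comparison (a sketch with made-up numbers; the "sports" score is an ISO value, so its ratio converts to stops, while DR is already quoted in EV):

import math

# Hypothetical DxO-style figures for two cameras (made-up numbers)
cam_a = {"low_light_iso": 2000, "dr_ev_at_iso1600": 10.3}
cam_b = {"low_light_iso":  800, "dr_ev_at_iso1600":  9.1}

sports_diff = math.log2(cam_a["low_light_iso"] / cam_b["low_light_iso"])
dr_diff = cam_a["dr_ev_at_iso1600"] - cam_b["dr_ev_at_iso1600"]

print(f"'sports' score difference : {sports_diff:.2f} stops")
print(f"DR difference at ISO 1600 : {dr_diff:.2f} stops")

Compare each of those two gaps against the ISO offset at which the DPR comparison images look equally noisy to you, and see which one lines up better.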
 
Why are the m43 sensors 20+ points behind the best sensors like the D600? Is it because the pixels on a FF sensor are larger and can be manufactured to have better dynamic range? Or does Nikon simply make better sensors than Sony/Olympus?
Looks like many people are disappointed by DXO measurements.
So what evidence can you point to in support of that impression? Most MFT users who participated in the original thread on the subject seemed quite pleased. See here:

http://forums.dpreview.com/forums/readflat.asp?forum=1041&message=42579101
I guess it's simply the result of unreasonable expectations, or should I say irrational exuberance? Even DPR gave an 80% score to the OM-D and 79% to the NEX-5N, while the DxO measurements simply confirmed what was reasonable to expect in the first place: that the 1.5x larger NEX sensor would provide approximately 1.5x better results in every category.
So what exactly does "1.5x better" mean? What categories are you talking about? And on what do you base your expectations? Please be specific.
What it means is pretty clear: the results scale with the sensor size.
I realized that you meant something like that but asked you to be specific. You chose not to be. I can see why.
You seem to want to argue about something but won't say what, and I'm not going to waste my time playing this game.
My point is that you don't know what you are talking about. Apparently you are unable to even exemplify what you mean.
 
Why are the m43 sensors 20+ points behind the best sensors like the D600? Is it because the pixels on a FF sensor are larger and can be manufactured to have better dynamic range? Or does Nikon simply make better sensors than Sony/Olympus?
Nikon use Sony sensors in their top models.
Not in the top model. The D4 uses a Nikon sensor. In fact, Nikon sensors currently top and tail the DSLR range.

--
Bob
 
No. The point I am trying to make is that while the area difference has implications of exactly the kind you indicate for photon noise, it does not have such implications when it comes to read noise and thus DR and shadow noise. This is an important point, since it is usually missing in standard discussions involving cross-format comparisons.
Yeah, but neither position is fully right. One cannot look solely at photon shot noise in the darker areas, but looking only at read noise is also incomplete. Bill Claff has reported that for several cameras he studied, using his definition of DR, the photon shot noise was higher than the read noise at his lower-end cut-off (one being higher does not mean the other is irrelevant). Essentially, the only relevant comparison is that of full SNR curves like the one Marianne has produced:
http://forums.dpreview.com/forums/read.asp?forum=1021&message=40756917
I am not saying you shouldn't look at other things. Rather I am saying that DR is my preferred simplification, if I have to choose a single one.

What Bill reports is due to the special definition of DR he uses. The higher you set the noise floor, the more photon noise will dominate on all levels of the remaining dynamic range.

Consider the discussion between Great Bustard and me to which I link below:

http://forums.dpreview.com/forums/read.asp?forum=1041&message=41640150

As you can see, I show that for the GH1, which happened to be the example one of us picked, when used at ISO 3200, the point at which photon noise becomes more important than read noise is only 4.8 stops down from the clipping point and 2.7 stops above the noise floor as conventionally defined, i.e. a point well into the midtones rather than the darkest shadows of a normal image.

Further, I encourage you to perform the following experiment.
I encourage myself to trust my judgement and to wait for somebody to do quantitative measurements instead of relying on visual judgements, which I only trust when integrated over a large enough sample (and I consider one test scene and one person, i.e. myself, very far from large enough), in particular when said sample is imperfect with regard to exposure consistency.

And I actually went down that road and compared the full SNR curves from DxO for different cameras, to replicate Marianne's graphs for other cameras. But they are a pain to use, as one cannot access the numbers directly, which is necessary to scale them properly; i.e., one either scales images of the curves and pastes them together, or manually reads the values out of the graph.
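
The scaling step itself is at least straightforward once values have been read off the graphs; a sketch, assuming DxO's usual 8 MP "print" reference (the dB readings below are made up):

import math

def to_print_snr_db(snr_screen_db, sensor_mp, reference_mp=8.0):
    # Downsampling N megapixels to the reference averages noise over
    # N/reference pixels, improving SNR by sqrt(N/reference),
    # i.e. adding 10*log10(N/reference) dB.
    return snr_screen_db + 10 * math.log10(sensor_mp / reference_mp)

# Made-up readings taken off two screen-mode curves at the same grey level
print(f"16 MP camera: {to_print_snr_db(32.0, 16):.1f} dB at the reference size")
print(f"36 MP camera: {to_print_snr_db(30.0, 36):.1f} dB at the reference size")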
 
No. The point I am trying to make is that while the area difference has implications of exactly the kind you indicate for photon noise, it does not have such implications when it comes to read noise and thus DR and shadow noise. This is an important point, since it is usually missing in standard discussions involving cross-format comparisons.
Yeah, but neither position is fully right. One cannot look solely at photon shot noise in the darker areas, but looking only at read noise is also incomplete. Bill Claff has reported that for several cameras he studied, using his definition of DR, the photon shot noise was higher than the read noise at his lower-end cut-off (one being higher does not mean the other is irrelevant). Essentially, the only relevant comparison is that of full SNR curves like the one Marianne has produced:
http://forums.dpreview.com/forums/read.asp?forum=1021&message=40756917
I am not saying you shouldn't look at other things. Rather I am saying that DR is my preferred simplification, if I have to choose a single one.

What Bill reports is due to the special definition of DR he uses. The higher you set the noise floor, the more photon noise will dominate on all levels of the remaining dynamic range.

Consider the discussion between Great Bustard and me to which I link below:

http://forums.dpreview.com/forums/read.asp?forum=1041&message=41640150

As you can see, I show that for the GH1, which happened to be the example one of us picked, when used at ISO 3200, the point at which photon noise becomes more important than read noise is only 4.8 stops down from the clipping point and 2.7 stops above the noise floor as conventionally defined, i.e. a point well into the midtones rather than the darkest shadows of a normal image.

Further, I encourage you to perform the following experiment.
I encourage myself to trust my judgement and to wait for somebody to do quantitative measurements instead of relying on visual judgements, which I only trust when integrated over a large enough sample (and I consider one test scene and one person, i.e. myself, very far from large enough), in particular when said sample is imperfect with regard to exposure consistency.
I have done this exercise over a large enough sample to trust my observations. And as far as persons and their perceptions are concerned, my own are the ones I primarily care about. I want my pictures to look good to myself. If they look good to others too, that's fine, but still secondary, especially since these others may be rather different from one case to another.

Personally, I find it quite important to find my own linkages between the numbers cranked out by places like DxO and my own visual perceptions of image quality. I like the numbers because once I know how to interpret them, it saves me the more time-consuming trouble of inspecting, quite carefully, a large number of image samples. I do the latter too, from time to time, but I don't do it unless I am really interested in the camera in question and want to verify that this particular case is no exception to the rule in how the numbers translate to visible image quality.
And I actually went down that road and compared the full SNR curves from DxO for different cameras, to replicate Marianne's graphs for other cameras. But they are a pain to use, as one cannot access the numbers directly, which is necessary to scale them properly; i.e., one either scales images of the curves and pastes them together, or manually reads the values out of the graph.
Yes, doing the exercise I proposed is much easier.
 
Further, I encourage you to perform the following experiment.
I encourage myself to trust my judgement and to wait for somebody to do quantitative measurements instead of relying on visual judgements, which I only trust when integrated over a large enough sample (and I consider one test scene and one person, i.e. myself, very far from large enough), in particular when said sample is imperfect with regard to exposure consistency.
I have done this exercise over a large enough sample to trust my observations. And as far as persons and their perceptions are concerned, my own are the ones I primarily care about. I want my pictures to look good to myself.
I of course get satisfaction from my own photos, but I get more satisfaction if other people like them.
And I actually went down that road and compared the full SNR curves from DxO for different cameras, to replicate Marianne's graphs for other cameras. But they are a pain to use, as one cannot access the numbers directly, which is necessary to scale them properly; i.e., one either scales images of the curves and pastes them together, or manually reads the values out of the graph.
Yes, doing the exercise I proposed is much easier.
But much less precise and much more biased.
 
Further, I encourage you to perform the following experiment.
I encourage myself to trust my judgement and to wait for somebody to do quantitative measurements instead of relying on visual judgements, which I only trust when integrated over a large enough sample (and I consider one test scene and one person, i.e. myself, very far from large enough), in particular when said sample is imperfect with regard to exposure consistency.
I have done this exercise over a large enough sample to trust my observations. And as far as persons and their perceptions are concerned, my own are the ones I primarily care about. I want my pictures to look good to myself.
I of course get satisfaction from my own photos, but I get more satisfaction if other people like them.
OK. Your choice. But where do you find good data on how the average perception of the particular people to whom you show your pictures correlates with numbers like those produced by DxO? And even if you had such information, how would it help you when you PP your images (as I guess you do)? I for the most part rely on my own eyes here, although I might occasionally ask my wife's opinion.
And I actually went down that road and compared the full SNR curves from DxO for different cameras, to replicate Marianne's graphs for other cameras. But they are a pain to use, as one cannot access the numbers directly, which is necessary to scale them properly; i.e., one either scales images of the curves and pastes them together, or manually reads the values out of the graph.
Yes, doing the exercise I proposed is much easier.
But much less precise and much more biased.
In what ways would it be less precise and more biased? As far as I can see, there is neither less precision nor more bias built into the procedure I proposed. It's just a way of checking which of two alternative DxO measures is better attuned to your perceptions.
 
I hear you, but I think your analysis may be darker than it needs to be.

People seem to sometimes get confused about DxO: this is NOT AN INDEPENDENT LAB, BUT A COMMERCIAL VENTURE.

These guys are out there to sell their software, and attempt to use the tests as a marketing tool for this purpose. And it seems to work well for them because each time they publish a couple results it magically gets on all the photography web sites - well at least all the geeky sites where people discuss technical details rather than real photography... ;-)

Given this, I am certain that it would be easy for any of the big guns to pay DxO to:

(1) manage the timing of results for any competitor camera that might present a risk to their product line

(2) worse, tweak the black box so that it makes their cameras look better. Currently it's very clear that the black box gives a premium to absolute resolution, for example, irrespective of much more important things like AF repeatability and precision (*)

This being said: I don't think any of the big guns could have done (1) above to hurt the E-M5: this camera will probably mostly sell to people who already own m43 lenses. And I think they'd really think twice before going for (2) above, because if found out they would look bad, and DxO would look even worse.

CONCLUSION: DxO is just a black box, and they also never mention a margin of error, when that margin is probably at least 10%, i.e. all cameras between 63 and 77 points are probably similar. But the guys at DxO seem to be very skilled at selling their largely useless stuff. Good for them.

(*) I realise that DxO say they're testing mostly the sensors, but they publish their results based on the cameras built around those sensors.
 
I've just finished responding to another post of yours over on the Canon 1D forum. Both reveal you to have a juvenile view of how industry and commerce work.
I hear you, but I think your analysis may be darker than it needs to be.

People seem to sometimes get confused about DxO: this is NOT AN INDEPENDENT LAB, BUT A COMMERCIAL VENTURE.

These guys are out there to sell their software, and attempt to use the tests as a marketing tool for this purpose. And it seems to work well for them because each time they publish a couple results it magically gets on all the photography web sites - well at least all the geeky sites where people discuss technical details rather than real photography... ;-)
Which promotes their expertise and sales of DxO tools. Good business.
Given this, I am certain that it would be easy for any of the big guns to pay DxO to:

(1) manage the timing of results for any competitor camera that might present a risk to their product line

(2) worse, tweak the black box so that it makes their cameras look better. Currently it's very clear that the black box gives a premium to absolute resolution, for example, irrespective of much more important things like AF repeatability and precision (*)
That would be a silly thing to do, because the point of independents like DxO is that their USPs depend on them being independent. All the major camera manufacturers make their own processing tools. One big attraction of DxO or Adobe is that one set of tools will give consistent treatment across a range of manufacturers. To maintain that, these companies have to collaborate equally with all the manufacturers - once they are seen to be giving 'special treatment' to one or another, the cooperation of the others will be prejudiced. Thus, while allowing themselves to be paid off might give a short-term gain, it would be very poor for their overall business.

--
Bob
 
As soon as anyone uses insults, or even slightly insulting comments, such as "juvenile views", it means that they are running out of rational arguments.

Or worse, that they have vested interests...

In any case:
  • fact is that DxO do not publish margins of error, and that these margins are likely at least 10 per cent, and thus most of the score differences they highlight and so skillfully make buzz about are meaningless
  • fact is that DxO's black box is not credible. Suffice it to remember that when they first introduced medium format cameras in their rankings, they came out quite low. When photographers protested that this was ridiculous and certainly did not reflect real life, DxO just tweaked the black box so they would come up on top. Ridiculous.
  • fact is that both Adobe and DxO have repeatedly proved to be behind the new-camera curve, not providing the consistent processing you mention
Good for DxO if they're managing to create all that excitement with their largely phoney rankings - but these have very little to do with actual photography.

Back to the "juvenile" comment - you should perhaps be more careful, because perhaps the person you are trying to slightly insult has more business experience, more experience of regularly meeting CEOs around the world, and more experience investing in various sectors than you ever will?
 
As soon as anyone uses insults, or even slightly insulting comments, such as "juvenile views", it means that they are running out of rational arguments.
Or that your views are juvenile.
Or worse, that they have vested interests...
And that kind of suggestion would back that view up.
In any case:
  • fact is that DxO do not publish margins of error,
like every other consumer test site.
and that these margins are likely at least 10 per cent
On what do you base that estimate? What is its margin of error?
and thus most of the score differences they highlight and so skillfully make buzz about are meaningless
Not meaningless, just that they have a margin of error. Different thing.
  • fact is that DxO's black box is not credible. Suffice it to remember that when they first introduced medium format cameras in their rankings, they came out quite low. When photographers protested that this was ridiculous and certainly did not reflect real life, DxO just tweaked the black box so they would come up on top. Ridiculous.
Your statement is ridiculous. The MF backs that scored low still score low; new MF backs have scored better. Look just a little and it's easy to see why they didn't score well in proportion to their size relative to state-of-the-art FF CMOS sensors: much higher read noise and much lower QE (rough numbers are sketched at the end of this post).
  • fact is that both Adobe and DxO have repeatedly proved to be behind the new-camera curve, not providing the consistent processing you mention
Wouldn't know, don't use either, but if that statement is as reliable as most of your 'facts', I'll discount it.
Good for DxO if they're managing to create all that excitement with their largely phoney rankings - but these have very little to do with actual photography.
A juvenile position to take. By that logic, the absence of a published margin of error would make all consumer camera and lens tests phony. The idea that SNR, DR, or colour depth have little to do with actual photography is silly.
Back to the "juvenile" comment - you should perhaps be more careful, because perhaps the person you are trying to slightly insult has more business experience, more experience of regularly meeting CEOs around the world, and more experience investing in various sectors than you ever will?
In this case, I very much doubt it. The way you write and the quality of your analysis speaks for itself. Juvenile.
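
Since it keeps coming up, here is a rough numeric sketch of the MF point above, using assumed, purely illustrative sensor figures rather than DxO's measurements:

import math

def engineering_dr_stops(full_well_e, read_noise_e):
    # Per-pixel engineering DR: log2(full well / read noise)
    return math.log2(full_well_e / read_noise_e)

# Assumed illustrative figures (not DxO data): an older CCD medium format
# back versus a modern full-frame CMOS sensor
mf_ccd  = {"full_well": 60_000, "read_noise": 13.0, "qe": 0.30}
ff_cmos = {"full_well": 75_000, "read_noise":  3.0, "qe": 0.50}

for name, s in (("MF CCD", mf_ccd), ("FF CMOS", ff_cmos)):
    dr = engineering_dr_stops(s["full_well"], s["read_noise"])
    print(f"{name:8s}: DR ~ {dr:4.1f} stops per pixel, QE ~ {s['qe']:.0%}")

Higher read noise and lower QE eat into exactly the metrics DxO scores, which can more than offset the area advantage of the larger back.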

--
Bob
 
Well, check those technical measurements... the OM-D has a good sensor, yes, but it is not up to the latest FF sensors, though it almost matches APS-C. I wonder what readings DxO would get from the newer slew of 16MP Sony sensors (NEX-5R, NEX-6, K-5IIs).

--
Franka
Franka,

As soon as the new or better sensor technology is adopted for APS-C and FF sensors, the gap will be back in place.

Also, with a larger sensor the lens would better resolve smaller detail at the same viewing angle and equivalent focus distance.
It seems like a long game... :-)
Leo
 
Well, check those technical measurements... the OM-D has a good sensor, yes, but it is not up to the latest FF sensors, though it almost matches APS-C. I wonder what readings DxO would get from the newer slew of 16MP Sony sensors (NEX-5R, NEX-6, K-5IIs).

--
Franka
Franka,

As soon as the new or better sensor technology is adopted for APS-C and FF sensors, the gap will be back in place.
What makes you think that new or better sensor technology would be adopted for APS-C and FF but not for MFT? Or do you think that MFT currently has some technology advantage that the other formats have yet to get? If so, on what basis?
Also, with a larger sensor the lens would better resolve smaller detail at the same viewing angle and equivalent focus distance.
What makes you think that?
It seems like a long game... :-)
Leo
 
