As I have also pointed out, Bob, the lack of uncertainties ("margins of error") makes comparisons and reuse of their data problematic.
Sure, but no more problematic than any other comparison site - none of them publish error margins. One would not expect them to; it is not part of common practice for consumer information sites. So we are left with a situation where DxO is held uniquely culpable for not publishing error margins, while sites such as DPR also publish none (and moreover have quite glaring procedural faults in their tests). It seems inconsistent that DxO is singled out.
When you are looking at data to 0.1 EV resolution then the uncertainties should really back that up. If not, then you have implicit error bars of unknown magnitude in all of the DxO data (and your sensorgen information).
And every other source of data. That includes the data coming from people like Emil Martinec and Bill Claff.
That would be about my guesstimate, too, assuming a tightly controlled test setup. Somewhere between 5 and 20 percent.
If you want a guesstimate, that seems way too large. In the case of the sensor measurements, most of them relate to the SNR at some illumination level. A laboratory illuminant should be controllable to a fraction of a percent. If the SNR is taken using a 100 pixel square, that is a sample size of 10,000 - which will give a confidence interval for the SD (noise) of a couple of percent. So my guesstimate would be something smaller than 5%, which is 0.07 stops.
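For what it's worth, the back-of-envelope behind that "couple of percent" can be sketched (my own illustration, not DxO's actual procedure) from the standard large-sample result that the sampling SD of a sample SD is roughly sigma/sqrt(2n):

```python
import math

# Rough sketch, not DxO's method: sampling error of a noise (SD)
# estimate taken from n independent pixels. For large n, the SD of
# the sample SD is about sigma / sqrt(2n), so the relative 95%
# confidence half-width is roughly 1.96 / sqrt(2n).

def sd_rel_ci_halfwidth(n, z=1.96):
    """Approximate relative 95% CI half-width for a sample SD of n values."""
    return z / math.sqrt(2 * n)

n = 100 * 100  # a 100x100-pixel patch, as in the example above
rel = sd_rel_ci_halfwidth(n)
print(f"n = {n}: noise known to about +/- {rel:.1%}")  # ~ +/- 1.4%
# Expressed in stops, a ~1.4% error on the noise is tiny:
print(f"in stops: {math.log2(1 + rel):.3f} EV")        # ~ 0.020 EV
```

So with a 10,000-pixel sample, the statistical part of the error is well under the 5% (0.07-stop) guesstimate; illuminant control is the larger practical question.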
We also have little idea of "native" variability for these cameras. Again, my guesstimate would be around 0.2 EV or lower (for higher-end equipment).
You mean sample-to-sample variation. I have no idea - again, that is a weakness for all these sites that rely on single-sample tests, which is all of them.
I recall reading about DxO "tweaking" results before.
That doesn't make it true. Many people wish to discredit DxO for their own reasons.
Then there are the erroneous Canon EF 70-200mm f/2.8L IS II USM results, which they did revisit after enough complaints -- but did they really believe that a new design much-lauded by every other review site was worse than the original?
They haven't changed their view. They still give the MkII a DxOMark score of 17 and the original 21, so the story that they 'revisited' it is bunkum. The DxO overall scores should be taken with a pinch of salt anyway - as they say, 'The DxOMark Score is the performance of a lens (with a camera body) for its best focal length and aperture combination. It does not show how the lens behaves over its entire focal range.' So it is quite possible that the MkII is better because it has a more balanced performance over the whole range. Of course, Canon fanboys - like Olympus fanboys and any other fanboys - don't bother to read or think about what the numbers mean; if they don't see their pet product performing best, they just throw the toys out of the pram.
That depends on how high that uncertainty is.
The idea that SNR, DR or colour depth have little to do with actual photography is independent of the quality of the measurements of those things.
If you have a "14.0 EV" camera and a "13.0 EV" camera, but your uncertainty for each result is 0.5 EV, then the actual delta could plausibly be anywhere from roughly 0.7 to 1.3 EV, or wider still (the interval depends on the assumed distribution, e.g. Gaussian or rectangular).
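A quick Monte Carlo sketch of how the two error bars combine - purely illustrative, and assuming each "+/- 0.5 EV" figure is a Gaussian standard deviation (other readings of the error bar give different intervals):

```python
import random
import statistics

# Hypothetical illustration: comparing a "14.0 EV" and a "13.0 EV"
# camera when each figure carries an independent 0.5 EV Gaussian
# uncertainty. Not based on any published DxO error data.
random.seed(1)
trials = 100_000
deltas = [random.gauss(14.0, 0.5) - random.gauss(13.0, 0.5)
          for _ in range(trials)]
mean = statistics.fmean(deltas)
sd = statistics.stdev(deltas)
print(f"delta = {mean:.2f} +/- {sd:.2f} EV")
# The combined SD comes out near sqrt(0.5**2 + 0.5**2) ~ 0.71 EV:
# independent uncertainties add in quadrature, not linearly.
```

The exact interval quoted for the delta therefore depends entirely on what the 0.5 EV is taken to represent (a standard deviation, a half-width of a rectangular distribution, etc.).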
Another case: if you have a "13.2 +/- 0.5 EV" camera and a "13.0 +/- 0.1 EV" camera, then you may wish to buy the camera with the tighter uncertainty, to be surer of getting at least 12.9 EV.
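The decision logic there can be made concrete - again a hypothetical sketch, treating each "+/-" figure as a Gaussian standard deviation:

```python
from statistics import NormalDist

# Illustrative only: probability that each hypothetical camera
# actually delivers at least 12.9 EV, if the quoted +/- figures
# are Gaussian standard deviations.
cam_a = NormalDist(mu=13.2, sigma=0.5)
cam_b = NormalDist(mu=13.0, sigma=0.1)
p_a = 1 - cam_a.cdf(12.9)
p_b = 1 - cam_b.cdf(12.9)
print(f"P(A >= 12.9 EV) = {p_a:.2f}")  # ~ 0.73
print(f"P(B >= 12.9 EV) = {p_b:.2f}")  # ~ 0.84
# The lower-rated but tighter camera B is the safer bet against the
# 12.9 EV floor - which is exactly the point made above.
```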
Sure, but you have now thrown out every single photographic test site, not just DxO. They have their uses; you just need to be aware of their limitations. Since none of them publish error margins, there is no way you could make that decision. In any case, it is juvenile to get obsessive about ranking cameras. Really, all that matters to an individual is whether a camera (or lens) is right for them. These sites offer some reassurance that you are not getting an absolute dog.
In this case, I very much doubt it. The way you write and the quality of your analysis speaks for itself. Juvenile.
Well, your view of DxOMark represents a conflict of interest.
We've been doing quite well recently. If we want to continue to do so, you need to understand how offensive comments like that are, especially given your previous posturing on the issue. There is no conflict of interest. I make no commercial gain from Sensorgen, nor does my professional reputation depend on it. I produced it purely because people were asking for such a thing and I could do it. It costs me hardly anything to run (one reason why the server is so slow), and it's sufficient for me that it is useful for people who want to use it. If you don't, then don't. DxO is convenient in that it is the largest and best-constructed source of data. No available source of data publishes error margins, and I am not going to go into testing the cameras myself (in that case, it certainly would have to go commercial).
Overlooking the qualitative component of the "magic numbers" -- the uncertainties -- is an odd thing for someone with a science background to do.
I'm in the same camp as Emil Martinec there:
http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/
He's a better scientist than I was or will ever be, so I'm not too worried. I'm certainly not at all worried about what you think.
--
Bob