A6000 DxO Marked

So if I translate what this means in practice: the A6000 basically gives you no advantage over a 3.5-year-old Nikon D7000, unless you do a lot of low-light / high-ISO shooting. And it gives you a DIS-advantage if you do a lot of landscapes. Frankly I'd have expected better.
Low light happens often: indoor shooting without flash, all the time. It's nice to be able to go to ISO 1600, 3200, or even 6400 and get good results. It makes up for the generally slower lenses, and the result is a more compact system.

And then all the other advantages of the A6000 keep mounting. Much better video. Much better autofocus. In-camera panorama stitching. Better HDR because it takes the bracketed shots faster. 11 fps full-resolution bursts. Articulating screen (and not the clumsy sideways kind). Countless adapters allowing legacy lenses from just about every brand and lens mount. Low-cost focal reducers that perform pretty decently. The list goes on and on.

I say the A6000 is quite the leap forward.
P.S. The best-scoring APS-C camera on DxO is the Nikon D5200, at 84.
I think the A6000 is much better overall.
 
Hi, is the Sony Alpha ILCE-6000's HDMI out "clean"? Does it output 4:2:2 at 10-bit?
 
This score of 82 for an APS-C sensor even matches the 82 of the full-frame Canon 6D overall, though it's not a big improvement over the NEX-7.

--
Dave
Hi Dave,

I must say I'm rather unimpressed by the score.

So the A6000 scores 82 overall?

Well, just for a reality check: I own a Nikon D7000, which uses a Sony sensor and came out in late 2010, i.e. about 3.5 years ago. That camera scored 80 on DxO, just 2 points less than the brand-new A6000. Furthermore, it is likely that such a small difference falls within the margin of error, i.e. there is NO actual difference!
The D7000 is much larger. That means Nikon can afford to use more powerful amplifiers (and thus lower noise) and lower-noise power supplies while keeping temperatures low. More room in general allows for a cleaner design of the support circuitry. The A6000 should mainly be compared to other mirrorless or previous-generation NEX cameras. The smaller body is more difficult to cool, and live view adds warming. Perhaps the only thing to be impressed about is that Sony has overcome the disadvantages inherent in such a small body.

Plus, improvements in other areas, such as faster readout speed, higher pixel count (which drives faster data rates), and better video, naturally compete with DxO's favored figures of merit.
Frankly I'd have expected better.
Judging from the trend scatter plot at DxO, the A6000 appears to be almost exactly what you should expect. I.e., it's neither particularly remarkable nor poor (perhaps better than average, but not by much), particularly in light of the fact that other sensor features are also being scaled up (better video, higher pixel count, faster frame rate, etc.), and those features don't figure into DxO scores.

Bart
--
 
Overall score is pointless.

The key to appreciating it is a 24 MP sensor with phase-detect pixels that just scored at the top of the DxO sensor ratings. Even these numbers by themselves mean little, but they do indicate that you should get very good results at ISO 1600.

Then there are things such as DR and color depth. Those are measured at base ISO, and for the most part just about any camera in this class from Sony or Nikon over the last four years makes them a non-issue. I would be more interested in this: if ISO 1600 is now very good noise-wise, how are DR and color depth at that particular ISO for the same exposure?

Not even the "best" measured numbers tell the whole story, much less an "overall score".
 
>So if I translate this in what it means in practice: the A6000 basically gives you no advantage
Except black frame insertion multi-exposure within the camera, which clearly produces superior results in low light for static shots.
Can you please explain what you mean by "black frame insertion multi-exposure?" Not familiar with that term, and Google's not helping.

Thanks for clarifying some of the math, here!
 
Where did you pull that 1/3-stop loss of light from? That's mirrorless, not the Sony SLT. There is no such 1/3-stop loss of light on mirrorless with on-sensor phase detection. I think your comment is nonsense. Please show me the evidence: pictures, or the wording from Sony. Thanks. Otherwise your comment is dismissed.
http://www.dxomark.com/Reviews/Sony...X-6-versus-Sony-NEX-7-Incremental-Improvement
That link has nothing to do with the claim that there's a 1/3 stop light loss for PDAF on a sensor... in fact, it shows an OSPDAF-equipped sensor (A6000) performing 1/3 stop BETTER than a sensor of the exact same resolution without OSPDAF (NEX-7)!

I'm in agreement with naththo... I've seen zero evidence to support OSPDAF causing anywhere near that level of light loss. I think someone is applying one set of facts to a totally incompatible situation. They have probably read that an SLT mirror (or DSLR mirror/prism) diverts 30% of the light to the AF, so they assume 30% is some sort of magical formula for what an AF system needs for its exclusive use. This is a totally different situation, though... the PDAF sensors in those cameras are not on the chip, so light that goes to them can't be used to create the image. OSPDAF sensors ARE on the chip, so they are exposed to the same light that regular pixels are exposed to... they don't need a special, dedicated allocation of 30% of the light sent only to them... as if that were possible.

As someone else mentioned, CDAF requires 100% of the light (all the image info) in order to work, yet you're still able to use that image info to actually record an image file to your SD card. When you're working with stuff on the sensor, the purposes the available light can be used for are no longer mutually exclusive.
 
>- the A6000 gets Portrait 24.1 bits vs. 23.5 bits of colour depth. A tiny 2.6 percent difference.
Eh, the difference is in bits :-)
The difference on a linear scale is ~52%.
We can argue about the limits of human visual acuity and the minimal noticeable difference, but let's do the math right, shall we?
>- the A6000 gets 13.1 EVs of dynamic range, vs. 13.9 EVs on the D7000. Which is 5.8 percent
>LESS than a 3.5 year old camera...
Again, the difference is 0.8 EV, and 1 EV is one full stop (an exposure step, i.e. a doubling of exposure). That's almost one full stop, which is drastically different from a 5.8% improvement. You have to think in terms of the scales and not try to linearly compare non-linear scales. It makes no sense unless you really know what you are doing :-)
>- finally the A6000 gets low light ISO of 1347 vs 1167 on the D7000. That's a sizeable, and >significant 15 percent difference over the D7000. So there was clear progress on this front,

Actually, that is a linear scale (ISO sensitivity for a decent S/N). A 25% difference on that scale represents roughly 1/3 EV and is often not discernible in basic use.
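The conversions being argued over in this exchange can be sketched in a few lines of Python; the DxO figures are the ones quoted above, and the helper names are mine:

```python
import math

def bits_to_linear_ratio(delta_bits):
    """A color-depth difference of d bits corresponds to a 2**d ratio
    of distinguishable tonal levels on a linear scale."""
    return 2 ** delta_bits

def ev_delta(a, b):
    """EV values are already in stops (log2 of exposure); just subtract."""
    return a - b

def iso_ratio_to_ev(iso_a, iso_b):
    """Low-light ISO is a linear quantity; convert its ratio to stops."""
    return math.log2(iso_a / iso_b)

# Color depth: 24.1 vs 23.5 bits -> ~52% more tonal levels, not "2.6%"
print(bits_to_linear_ratio(24.1 - 23.5))   # ~1.52

# Dynamic range: 13.9 vs 13.1 EV -> 0.8 stops, not "5.8%"
print(ev_delta(13.9, 13.1))                # ~0.8

# Low-light ISO: 1347 vs 1167 -> about 1/5 of a stop
print(iso_ratio_to_ev(1347, 1167))         # ~0.21
```

The point of working in stops is that exposure scales are logarithmic, so percentage comparisons of the raw numbers understate or misstate the real difference.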

>So if I translate this in what it means in practice: the A6000 basically gives you no advantage
Except black frame insertion multi-exposure within the camera, which clearly produces superior results in low light for static shots.
Not to mention other "slight" differences in other features of the cameras.
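For the curious, the benefit of in-camera multi-frame stacking for static scenes is easy to simulate: averaging N frames of the same scene reduces random noise by roughly sqrt(N). A minimal NumPy sketch (the signal and noise numbers are arbitrary illustrative values, not measurements from any camera):

```python
import numpy as np

rng = np.random.default_rng(0)

def stacked_noise(n_frames, signal=100.0, read_noise=10.0, size=100_000):
    """Average n_frames noisy exposures of the same static scene and
    return the residual noise (standard deviation) of the result."""
    frames = signal + rng.normal(0.0, read_noise, size=(n_frames, size))
    return frames.mean(axis=0).std()

single = stacked_noise(1)
stacked = stacked_noise(8)
print(single / stacked)   # ~ sqrt(8) ≈ 2.8: eight frames cut noise ~2.8x
```

This only works for static subjects, which is exactly the caveat in the post above: any subject motion between frames turns the averaging into blur.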
>Frankly I'd have expected better.
I also expected better. Guess we are approaching the limits of physics with current CMOS designs.
A color sensitivity of 22bits is excellent, and differences below 1 bit are barely noticeable.

A value of 12 EV is excellent, with differences below 0.5 EV usually not noticeable.

A difference in low-light ISO of 25% represents 1/3 EV and is only slightly noticeable.

Which is why, to me, other things are MUCH more important (e.g. IBIS, 1/8000 vs 1/4000 max shutter, available lenses, etc.).

And why, if focus and FPS are your need, I would (currently) recommend the A6000, and other cameras (including M4/3) for users with different requirements, where IQ would only be a minor factor (if at all).
 
Also, the D7000 sensor gets hot easily. Turn on live view / video and you can see hot pixels in no time. I used to own one. It uses the first generation of the 16 MP Sony sensor, which was infamous for that problem.
 
The fact that it does this with on-sensor phase detect (normally robbing a good third of a stop), and with PDAF covering a much larger area of the sensor, is nothing short of amazing.
Where'd you get this third-stop number? It's my understanding that the PD pixels are a very small portion of the total, in the 1% range.

Bart

--
http://bhimages.zenfolio.com
Technically more than a third of a stop, since traditionally AF uses about 30% of the light.
Okay, a half stop. Whatever. Point is, it's huge.

Where are you getting these large numbers? What do you mean by "traditional"? Are you talking about the Sony SLT or a split mirror in a DSLR? I don't think there's anything traditional about on-sensor AF. The technology is still changing quite rapidly.

Bart

--
http://bhimages.zenfolio.com
I am getting these numbers from how PDAF in a DSLR/SLT works: they both direct about 30% (half a stop) of the light coming through the lens to PDAF. I would expect about the same, at least a third of a stop, with the "new" approach.

Or, do you have a different number in mind?
I didn't have any specific number in mind. I was just wondering where you got yours.
As I have said a couple of times: from the way PDAF has worked for nearly three decades now.
On a DSLR, it only takes that much because the other 70% is needed for the OVF. And then the mirror flips up and the sensor gets all of the light.
Eh, so you're assuming that 30% directed to PDAF module is an afterthought, leftover? I would say, it is by design. And don't forget yet another path in DSLRs: metering module.
The SLT steals the 30% all the time which is a disadvantage of it relative to the SLR although I suppose one could also design the mirror to flip up in an SLT.
What has this got to do with anything? There are compromises with EVERY design; one chooses a design that gives up something to gain something else, INCLUDING a mirrorless camera, which is really the point of this discussion. There is no free lunch.
The on sensor PDAF is fundamentally different because, well, it's on the sensor.
Fundamentally different in the way light is being received for AF. You're assuming that it uses less light to do the same thing that DSLRs/SLTs need 30% of the light for. Perhaps you're assuming it is significantly less (less than 20% or a third stop?).
In terms of conservation of information, yeah it makes sense you'd need the same amount of light to get the same performance. In the case of CDAF, you need all the light for AF, but it turns out you can re-use it for the image, so there's no harm done. It's possible that for on-sensor PDAF at least some of that light can be re-used for the image as well. I think that's the case for the Canon version of on-sensor PDAF.

So the real thing I'm trying to get at isn't how much the AF needs to do its work. The real question is, how much is available to make the image.
You can't have one without the other. You have to deal with both (no light is created either way).
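One way to put numbers on this whole argument: the exposure cost of diverting (or masking off) a fraction of the incoming light is just -log2(1 - fraction). A quick sketch; the ~1% PD-pixel coverage used below is an assumption for illustration, not a published spec:

```python
import math

def light_loss_ev(diverted_fraction):
    """Stops of exposure lost when a given fraction of incoming light
    never reaches the imaging pixels."""
    return -math.log2(1.0 - diverted_fraction)

# SLT-style pellicle mirror diverting ~30% of ALL the light:
print(light_loss_ev(0.30))    # ~0.51 stops, i.e. about half a stop

# On-sensor PDAF, worst case: masked PD pixels fully opaque and
# covering ~1% of the sensor area (assumed figure):
print(light_loss_ev(0.01))    # ~0.014 stops, i.e. negligible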
 
Low dynamic range is expected.

DxOMark records 12.42 EV in per-pixel (screen) mode, which is already beyond the theoretical limit of a 12-bit ADC.
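The "12-bit limit" referenced here is the naive per-pixel bound: an ideal N-bit ADC can only distinguish 2^N - 1 steps between full scale and one LSB, so its engineering dynamic range can't exceed about N stops per pixel. (DxO's screen figure exceeding it presumably reflects noise averaging in their measurement; that interpretation is mine, not the poster's.) A sketch of the bound:

```python
import math

def adc_dr_limit_ev(bits):
    """Upper bound on per-pixel engineering DR for an ideal N-bit ADC:
    the ratio of full scale (2**bits - 1 codes) to the smallest step
    (1 LSB), expressed in stops (log2)."""
    return math.log2(2 ** bits - 1)

print(adc_dr_limit_ev(12))   # ~12.0 EV
print(adc_dr_limit_ev(14))   # ~14.0 EV
```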
 
1/2 stop (the areal difference between m43 and APS-C is about 1/2 stop). Check this:

http://j.mp/1hiOKIX

Notice the Nex lens is 1/2 stop slower.

I have the 40-150. It is a sweet lens, less than half the cost of the NEX equivalent.
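For reference, the "areal difference" claim is easy to check from nominal sensor dimensions (the figures below are approximate active-area sizes, not official specs); it comes out closer to 2/3 of a stop than 1/2:

```python
import math

# Approximate active-area dimensions in mm (nominal figures).
SENSORS = {
    "Sony APS-C": (23.5, 15.6),
    "Micro Four Thirds": (17.3, 13.0),
}

def area(dims):
    w, h = dims
    return w * h

ratio = area(SENSORS["Sony APS-C"]) / area(SENSORS["Micro Four Thirds"])
print(ratio)                # ~1.63x the area
print(math.log2(ratio))     # ~0.70 stops of light-gathering difference
```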
Smaller sensor, smaller lens, inferior image quality.

If size is the key here, then we might as well compare the Nikon V1 with a 30-100mm lens against the Olympus PEN... it is smaller, with a 1" sensor whose image quality is inferior to M43.
Looking at these comparisons, I don't think it's that clear cut:

http://www.dpreview.com/reviews/ima...1&x=-0.08716825132627709&y=-0.915036406945894

http://www.dpreview.com/reviews/ima...1&x=0.5929647685834335&y=-0.21809331939641075

Or are there better comparisons out there we can refer to?
DxO's testing and ratings must be completely wrong, then. I can't believe how they try to fool people by stating that the E-M1 sensor ratings are not in the same playing field as the newer APS-C sensors.
From what I understand, a large portion of the DxO score is accounted for by noise performance, even at high ISOs. Most of the inroads in sensor technology have allowed smaller sensors to take usable shots at higher ISOs that were previously only possible on larger sensors.

For example, my bottom of the barrel E-PM2 has a DXO score of 72. The Canon 7D scores 66 while the 5D with a sensor 4 times larger scores a 71. But even the NEX F3 which came out the same year as the E-PM2 only has a DXO score of 73. Even the RX100 matches the 66 of the APS-C 7D.

Does a bigger sensor automatically result in better performance? At ISOs between 100 and 3200 which most people use, I don't think the differences are that large or even noticeable to most which is why I linked to the DPReview tests.
So you are saying that based on a DPreview test chart, or because you own and use both systems?
I've actually been using a Fuji X-E1 with the 18-55 kit lens for the past week. Although image quality is great, AF and overall performance don't match those of my E-PM2, to the extent that I'm missing shots of our active toddler. The JPEG NR and the need for a different RAW processor to get great results have made me reconsider keeping it. I'm also getting comparable results with my Sigma 30 and 60 for MFT.

In case I return the Fuji, I'm looking at an E-M10 or an A6000 with kit lens and maybe the NEX Sigmas.
I, on the other hand, can say that I bought into the E-M1 hype and purchased it when it was released last year with the 12-40 kit. It went back 3-4 days later.

IMO, there is just something about the E-M1 images that I did not like. Honestly, it is pretty much impossible to describe. If I had to try, it is basically flatness / blown shadows / color punch / grain / detail all mixed together? In no way am I trying to be sarcastic or funny; I just cannot for the life of me describe in one word what I did not like about them. They were just lacking something.

Not saying it is a bad camera or that it takes bad photos. I just happened to like the punchy, "3d" photos of my NEX-6 a lot more.
With the right lenses and post processing, I don't think there's much of a difference:


But that's just me and camera choice IS subjective.
 
Yes, I saw this already; looks very good. The little wonder fits.



--
Hynek








 
On a DSLR, it only takes that much because the other 70% is needed for the OVF. And then the mirror flips up and the sensor gets all of the light.
Eh, so you're assuming that 30% directed to PDAF module is an afterthought, leftover? I would say, it is by design. And don't forget yet another path in DSLRs: metering module.
The SLT steals the 30% all the time which is a disadvantage of it relative to the SLR although I suppose one could also design the mirror to flip up in an SLT.
What has this got to do with anything? There are compromises with EVERY design; one chooses a design that gives up something to gain something else, INCLUDING a mirrorless camera, which is really the point of this discussion. There is no free lunch.
The on sensor PDAF is fundamentally different because, well, it's on the sensor.
Fundamentally different in the way light is being received for AF. You're assuming that it uses less light to do the same thing that DSLRs/SLTs need 30% of the light for. Perhaps you're assuming it is significantly less (less than 20% or a third stop?).
That's right. It only blocks some (all?) of the light for a small number of pixels. For SLT, you are forced to divert 30% of the light over the entire area.
In terms of conservation of information, yeah it makes sense you'd need the same amount of light to get the same performance. In the case of CDAF, you need all the light for AF, but it turns out you can re-use it for the image, so there's no harm done. It's possible that for on-sensor PDAF at least some of that light can be re-used for the image as well. I think that's the case for the Canon version of on-sensor PDAF.

So the real thing I'm trying to get at isn't how much the AF needs to do its work. The real question is, how much is available to make the image.
You can't have one without the other. You have to deal with both (no light is created either way).
The PDAF sensors might also be able to register light intensity as a side benefit.

But worst case, even if they completely cover some pixels, they block significantly less light than diverting 30% across the entire sensor for a small number of AF points.
 
From the Imaging Resource website I downloaded both the A6000 ISO 3200 NR0 shot and the NEX-7 ISO 3200 NR1 (low) shot. Unfortunately there is no NR0 for the NEX-7.

I compared them in Photoshop and they are very close, although the NEX-7 still wins on saturation: it retains more of it under both colour and luminance NR, while the A6000 loses some saturation at ISO 3200. Unfortunately the A6000 images have not been properly tested yet by Imaging Resource; I noticed they haven't done a custom WB, since these are first shots only. The A6000 is somehow a little darker than the NEX-7, but again these are just first shots, so we have to wait for the full review.

The best headline feature of the A6000 is much faster AF thanks to phase detection, which the NEX-7 doesn't have, so the NEX-7 focuses more slowly. That's the only improvement I can see in the A6000. I'm not confident about upgrading from my existing NEX-7, since the difference in raw files worries me a bit. So I can't really judge which is best while the A6000 only has first shots.
 
Interesting. I would say that the menu system is another big improvement as well as the usability of the EVF in low light.
 