Comparison of processing between the Pixel 2 XL and Sony A6000

By the way, even when you brighten the shadows of the DRO 5 A6000 image, it looks worse than the Pixel 2, likely due to vignetting of the Sony lens (and because the A6000 doesn't have the latest APS-C sensor). The color noise is awful.

I even think that my Nexus 5X might perform a little bit better at base ISO than the Pixel 2 due to the larger sensor, but when you use Night Sight, the Pixel 2 should be better at base ISO.
Yes, I had a Nexus 5X and I agree that camera might have been a bit better. It was the first to use the new Google algorithms, but it was slow. Unfortunately my 5X bootlooped, like many/most of them eventually do.
 
Yup, I've done the test too, Pixel 1 vs 80D and more recently Pixel 3 vs a7 III. However I think the scene you chose is an outlier. Most scenes do not have such wild DR to capture. Such a scene is the Pixel's strength.

On an a6000 I'd have shot that with highlight-priority metering and +1 EV of exposure compensation, then pulled the highlights down in post and pulled the shadows and midtones up.

However I'd be willing to bet the default profile in DxO PhotoLab would produce an image from your raw that looked just like the Pixel 2's only with more detail. DxO's smart lighting algorithm is pretty excellent with no user input.

Here's an a7 III vs Pixel 3 Night Sight (basically HDR+ without any time constraints) shot in a scene that plays to the Pixel 3's strengths. The a7 III shot was just processed through the standard preset in DxO PhotoLab 2. If you look closely the Sony shot is clearly better, but at web sizes they look fairly interchangeable.

0beb7de4defb424db34754624376d13d.jpg

0f5f26dc3bd748f0add48c2ea7052d67.jpg

Also, as far as getting that Pixel-style tile & merge tech in camera, computing power needs to catch up first. The a6000 can't read images off its sensor fast enough to do what the Pixel does (the Pixel uses a stacked BSI sensor like the a9, and only has 12MP to deal with on a physically smaller sensor). The growth in processing time going from 12MP to 24MP isn't a simple 2x linear increase; it's more like 4x. You need both fast readout from the sensor and an SoC that can sustain image processing of multiple 24MP frames without making the rest of the camera unusable.
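The "12MP to 24MP is ~4x, not 2x" claim can be made concrete with a toy cost model. This is my own back-of-the-envelope assumption about how tile align-and-merge work might scale, not a measurement of Google's actual pipeline:

```python
# Toy cost model for burst tile-align-and-merge (illustrative assumption,
# NOT a measurement of any real HDR+ implementation). If total work is
# roughly tiles * alignment-search area, then doubling the megapixels
# doubles the tile count, and if the alignment search radius scales with
# linear resolution, the search area roughly doubles as well, giving
# ~4x total work rather than a simple 2x.
def relative_merge_cost(megapixels: float, base_mp: float = 12.0) -> float:
    tiles = megapixels / base_mp    # tile count grows linearly with MP
    search = megapixels / base_mp   # search area assumed ~linear in MP
    return tiles * search

print(relative_merge_cost(12))  # 1.0 (baseline, Pixel-class 12MP)
print(relative_merge_cost(24))  # 4.0 (a6000-class 24MP)
```

Under these assumptions a 24MP sensor needs roughly four times the per-burst compute of a 12MP one, which is why readout speed alone isn't enough.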
I have DxO and may repeat my test out of curiosity if I can find the time.

But of course the inconvenience of PP can't be ignored.
 
Yup, I've done the test too, Pixel 1 vs 80D and more recently Pixel 3 vs a7 III. However I think the scene you chose is an outlier. Most scenes do not have such wild DR to capture. Such a scene is the Pixel's strength.

On an a6000 I'd have shot that with highlight-priority metering and +1 EV of exposure compensation, then pulled the highlights down in post and pulled the shadows and midtones up.

However I'd be willing to bet the default profile in DxO PhotoLab would produce an image from your raw that looked just like the Pixel 2's only with more detail. DxO's smart lighting algorithm is pretty excellent with no user input.

Here's an a7 III vs Pixel 3 Night Sight (basically HDR+ without any time constraints) shot in a scene that plays to the Pixel 3's strengths. The a7 III shot was just processed through the standard preset in DxO PhotoLab 2. If you look closely the Sony shot is clearly better, but at web sizes they look fairly interchangeable.

0beb7de4defb424db34754624376d13d.jpg

0f5f26dc3bd748f0add48c2ea7052d67.jpg

Also, as far as getting that Pixel-style tile & merge tech in camera, computing power needs to catch up first. The a6000 can't read images off its sensor fast enough to do what the Pixel does (the Pixel uses a stacked BSI sensor like the a9, and only has 12MP to deal with on a physically smaller sensor). The growth in processing time going from 12MP to 24MP isn't a simple 2x linear increase; it's more like 4x. You need both fast readout from the sensor and an SoC that can sustain image processing of multiple 24MP frames without making the rest of the camera unusable.
Again, it's hugely impressive what the top phones can achieve; I just wish they'd look at their processing algorithms. If you look at the crops below, the Pixel hasn't quite got the Sony's rendering of detail. That said, it is getting better as OEMs tweak their processing to look more natural, but it still has a way to go.



Pixel 3



Sony



--
Jostian
 
However I'd be willing to bet the default profile in DxO PhotoLab would produce an image from your raw that looked just like the Pixel 2's only with more detail. DxO's smart lighting algorithm is pretty excellent with no user input.
OK, I took the two shots again (to be sure the lighting/time of day had not changed).

I used DxO Smart Lighting on the RAW, keeping all of the other settings at their defaults.

This is MUCH closer. The DxO result is, I'd say, a hair too green, and the Pixel is too warm, but from a DR and exposure POV they are quite close. The Pixel did better at protecting the outside portion of the photo from blowout, but DxO did slightly better at raising the shadows in the dark areas.

Still, my point stands that the Pixel 2 is really impressive in what it does automatically, with no PP. And it is really excellent at protecting against blowouts.

I hope you have all found this thread interesting, I certainly have.





Pixel 2 XL, SOOC



Sony A6000, RAW with DxO defaults and Smart Lighting set to STRONG
 
Still, my point stands that the Pixel 2 is really impressive in what it does automatically, with no PP. And it is really excellent at protecting against blowouts.
Night Sight mode would get even better white balance and sharpness.
 
By the way, even when you brighten the shadows of the DRO 5 A6000 image, it looks worse than the Pixel 2, likely due to vignetting of the Sony lens (and because the A6000 doesn't have the latest APS-C sensor). The color noise is awful.

I even think that my Nexus 5X might perform a little bit better at base ISO than the Pixel 2 due to the larger sensor, but when you use Night Sight, the Pixel 2 should be better at base ISO.
Yes, I had a Nexus 5X and I agree that camera might have been a bit better. It was the first to use the new Google algorithms, but it was slow. Unfortunately my 5X bootlooped, like many/most of them eventually do.
The mainboard of my Nexus died and the battery was also 99% dead, but LG repaired it.
 
However I'd be willing to bet the default profile in DxO PhotoLab would produce an image from your raw that looked just like the Pixel 2's only with more detail. DxO's smart lighting algorithm is pretty excellent with no user input.
OK, I took the two shots again (to be sure the lighting/time of day had not changed).

I used DxO Smart Lighting on the RAW, keeping all of the other settings at their defaults.

This is MUCH closer. The DxO result is, I'd say, a hair too green, and the Pixel is too warm, but from a DR and exposure POV they are quite close. The Pixel did better at protecting the outside portion of the photo from blowout, but DxO did slightly better at raising the shadows in the dark areas.

Pixel 2 XL, SOOC

Sony A6000, RAW with DxO defaults and Smart Lighting set to STRONG
The Sony DxO result is unusable in my opinion. There is a ton of green color noise on the left side of the image (including a green arc in the top-left corner). This happens when the shadows are too dark for the sensor (a newer APS-C sensor would likely perform better), and shadows that dark can be caused by a lens with heavy vignetting. The Pixel 2 has much better shadows in this case, so technically the Pixel 2 shows better dynamic range here.
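The vignetting-plus-shadow-lift effect described here is easy to simulate. A minimal NumPy sketch with made-up falloff and noise numbers (not measurements of the A6000 or its lens): optically darkened corners need more digital gain when you lift the shadows, so the noise there is amplified the most.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative numbers only: a lens with strong vignetting darkens the
# corners optically; "raising the shadows" in post applies digital gain,
# which amplifies noise most exactly where the vignetting was strongest.
h = w = 128
yy, xx = np.mgrid[0:h, 0:w]
r = np.hypot(yy - h / 2, xx - w / 2) / (h / 2)
vignette = 1.0 - 0.6 * np.clip(r, 0, 1) ** 2  # ~1.3 stops falloff in corners

scene = 50.0 * vignette                        # captured (vignetted) signal
noisy = scene + rng.normal(0.0, 3.0, scene.shape)
lifted = noisy / vignette                      # lift shadows back to flat

corner = lifted[:16, :16]
center = lifted[h//2 - 8:h//2 + 8, w//2 - 8:w//2 + 8]
print(f"center noise sigma: {center.std():.2f}")
print(f"corner noise sigma: {corner.std():.2f}")  # much larger after the lift
```

The mean brightness ends up the same everywhere, but the corner noise sigma is roughly 2.5x the center's in this sketch, which is consistent with the green corner noise in the DxO result.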
 
However I'd be willing to bet the default profile in DxO PhotoLab would produce an image from your raw that looked just like the Pixel 2's only with more detail. DxO's smart lighting algorithm is pretty excellent with no user input.
OK, I took the two shots again (to be sure the lighting/time of day had not changed).

I used DxO Smart Lighting on the RAW, keeping all of the other settings at their defaults.

This is MUCH closer. The DxO result is, I'd say, a hair too green, and the Pixel is too warm, but from a DR and exposure POV they are quite close. The Pixel did better at protecting the outside portion of the photo from blowout, but DxO did slightly better at raising the shadows in the dark areas.

Pixel 2 XL, SOOC

Sony A6000, RAW with DxO defaults and Smart Lighting set to STRONG
The Sony DxO result is unusable in my opinion. There is a ton of green color noise on the left side of the image (including a green arc in the top-left corner). This happens when the shadows are too dark for the sensor (a newer APS-C sensor would likely perform better), and shadows that dark can be caused by a lens with heavy vignetting. The Pixel 2 has much better shadows in this case, so technically the Pixel 2 shows better dynamic range here.
Yes, which is amazing.

Another poster commented that "most pictures don't have as large a DR as this," implying that this wasn't a reasonable test case. But read that another way: what they were really saying is that the Pixel has better DR than an APS-C camera! And the A6000 picture is from a RAW.
 
Yes, which is amazing.

Another poster commented that "most pictures don't have as large a DR as this," implying that this wasn't a reasonable test case. But read that another way: what they were really saying is that the Pixel has better DR than an APS-C camera! And the A6000 picture is from a RAW.
The Pixel does not have better dynamic range than APS-C cameras. I've been playing with the DNG files from the Pixel 3 (which are stacked from multiple shots) and compared them to the raw files from the Nokia 9 (five or so shots merged into one), the Canon M100 and the Sony A7R III - the Pixel has the least detail that you can recover in post processing.

What the Pixel 2 is doing is exposure stacking, where it takes different exposures and merges them together for the final shot - but that's not dynamic range.

Dynamic range example: https://leicarumors.com/2019/03/12/check-out-the-leica-q2-dynamic-range.aspx/
 
Yes, which is amazing.

Another poster commented that "most pictures don't have as large a DR as this," implying that this wasn't a reasonable test case. But read that another way: what they were really saying is that the Pixel has better DR than an APS-C camera! And the A6000 picture is from a RAW.
The Pixel does not have better dynamic range than APS-C cameras. I've been playing with the DNG files from the Pixel 3 (which are stacked from multiple shots) and compared them to the raw files from the Nokia 9 (five or so shots merged into one), the Canon M100 and the Sony A7R III - the Pixel has the least detail that you can recover in post processing.

What the Pixel 2 is doing is exposure stacking, where it takes different exposures and merges them together for the final shot - but that's not dynamic range.

Dynamic range example: https://leicarumors.com/2019/03/12/check-out-the-leica-q2-dynamic-range.aspx/
It's dynamic range from the Pixel's default processing versus that of the A6000 in my example case.

Of course the raw of a single Pixel image isn't going to have more DR than an APS-C sensor. But the examples above speak for themselves - the protection from blowout on the Pixel, with its auto stacking of multiple underexposed images, does better in this example case than the A6000.

And for those of you who don't have Pixels, the auto processing is pretty much instantaneous.
 
It's dynamic range from the Pixel's default processing versus that of the A6000 in my example case.

Of course the raw of a single Pixel image isn't going to have more DR than an APS-C sensor. But the examples above speak for themselves - the protection from blowout on the Pixel, with its auto stacking of multiple underexposed images, does better in this example case than the A6000.

And for those of you who don't have Pixels, the auto processing is pretty much instantaneous.
You don't seem to understand... the Pixel 3 stacks multiple raw images into one raw image to maximize raw image data, and it still has less dynamic range than one shot from an APS-C camera. I have played with the files in a raw editor and it's not even close.

What you're seeing is a good auto-exposure setting, but what happens if you want to keep the shadow detail? All you're showing is that the default on the a6000 is to preserve shadow detail, while the default on the Pixel 2 is to obliterate it via exposure stacking. You could take the worst sensor in the world and, if you stack enough shots, get the same exposure as your Pixel.

I would suggest you go use your A6000 in the real world and you will see a world of difference versus a dark room with a bright window out front. That said, the Pixel 2 is a great phone, so enjoy. I've said my piece - I'm out!
 
It's dynamic range from the Pixel's default processing versus that of the A6000 in my example case.
I would suggest you go use your A6000 in the real world and you will see a world of difference vs a dark room with a bright window out front.
When did a dark room with a bright window stop being real-world usage? It is real world to take party pictures of guests in the living room, without flash, and without staging the shot by closing the curtains to even out the indoor lighting against the outside light.
 
Yes, which is amazing.

Another poster commented that "most pictures don't have as large a DR as this," implying that this wasn't a reasonable test case. But read that another way: what they were really saying is that the Pixel has better DR than an APS-C camera! And the A6000 picture is from a RAW.
What the Pixel 2 is doing is exposure stacking, where it takes different exposures and merges them together for the final shot - but that's not dynamic range.
The Pixel 2 doesn't combine different exposures - that's a common misunderstanding. Google combines identical exposures; each frame is the same. Combining multiple frames reduces shadow noise, which is why Google HDR+ raw files have better dynamic range than a single-exposure DNG file from a Google phone.
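The shadow-noise claim is easy to check numerically. A minimal sketch of averaging identical exposures (illustrative signal and noise numbers, not a model of any specific sensor): the signal is the same in every frame while the noise is independent, so averaging N frames cuts the noise sigma by roughly sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal sketch of HDR+-style merging of IDENTICAL exposures
# (illustrative numbers only). The scene level is identical in every
# frame; per-frame noise is independent, so averaging N frames reduces
# the noise sigma by ~sqrt(N) - which is where the extra usable shadow
# dynamic range comes from.
signal = 20.0   # flat, dim scene level (DN)
sigma = 5.0     # per-frame noise sigma (DN)
N = 8           # frames in the burst

frames = [signal + rng.normal(0.0, sigma, (64, 64)) for _ in range(N)]
merged = np.mean(frames, axis=0)

print(f"single frame noise: {np.std(frames[0]):.2f}")  # ~5.0
print(f"merged noise:       {np.std(merged):.2f}")     # ~5.0/sqrt(8) ~ 1.8
```

An 8-frame burst gains close to 1.5 stops of shadow SNR in this sketch, without any exposure bracketing.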
 
However I'd be willing to bet the default profile in DxO PhotoLab would produce an image from your raw that looked just like the Pixel 2's only with more detail. DxO's smart lighting algorithm is pretty excellent with no user input.
OK, I took the two shots again (to be sure the lighting/time of day had not changed).

I used DxO Smart Lighting on the RAW, keeping all of the other settings at their defaults.

This is MUCH closer. The DxO result is, I'd say, a hair too green, and the Pixel is too warm, but from a DR and exposure POV they are quite close. The Pixel did better at protecting the outside portion of the photo from blowout, but DxO did slightly better at raising the shadows in the dark areas.

Pixel 2 XL, SOOC

Sony A6000, RAW with DxO defaults and Smart Lighting set to STRONG
The Sony DxO result is unusable in my opinion. There is a ton of green color noise on the left side of the image (including a green arc in the top-left corner). This happens when the shadows are too dark for the sensor (a newer APS-C sensor would likely perform better), and shadows that dark can be caused by a lens with heavy vignetting. The Pixel 2 has much better shadows in this case, so technically the Pixel 2 shows better dynamic range here.
Yes, which is amazing.

Another poster commented that "most pictures don't have as large a DR as this," implying that this wasn't a reasonable test case. But read that another way: what they were really saying is that the Pixel has better DR than an APS-C camera! And the A6000 picture is from a RAW.
My point wasn't that it isn't a reasonable test case, more that in the real world you wouldn't run into this situation all that often. In this particular case the Pixel managed to outperform the a6000, but that wouldn't be the case in the majority of photos. And yes, obviously in some circumstances a Pixel can have DR competitive with an APS-C sensor - hell, even full frame.

You have options to let the a6000 produce a better result in this scene too. You could use the popup flash and point it at the ceiling to add light to the scene. You could expose to maximize the sensor's DR. You can overexpose the highlights more (highlight-priority metering and +1.3 EV exposure compensation should do the trick) and pull them back in post. You'd have more information in the shadows doing this, and PRIME can handle the color noise. Obviously this is more effort than just pressing the shutter on a Pixel (try Night Sight; the result will be even better), but it is doable.

The color temperature issues are due to Sony's cold, reptilian auto white balance (largely corrected on newer cameras; my a7 III is much better than my a6500 was). Set the auto white balance shift to +2 amber and +1.5 magenta and you'll be happier with the results. It's a more Canon-esque white balance.
 
It's dynamic range from the Pixel's default processing versus that of the A6000 in my example case.
I would suggest you go use your A6000 in the real world and you will see a world of difference vs a dark room with a bright window out front.
When did a dark room with a bright window stop being real-world usage? It is real world to take party pictures of guests in the living room, without flash, and without staging the shot by closing the curtains to even out the indoor lighting against the outside light.
A room with all the lights turned off? The only light present in this scene seems to originate from the bright window. How commonplace is that? Most people have lights on in their homes. Also as a photographer wouldn't you try to control the light a bit? Maybe not shoot a horribly backlit subject if you could avoid it by standing elsewhere? Or use a fill flash?
 
It's dynamic range from the Pixel's default processing versus that of the A6000 in my example case.

Of course the raw of a single Pixel image isn't going to have more DR than an APS-C sensor. But the examples above speak for themselves - the protection from blowout on the Pixel, with its auto stacking of multiple underexposed images, does better in this example case than the A6000.

And for those of you who don't have Pixels, the auto processing is pretty much instantaneous.
You don't seem to understand... the Pixel 3 stacks multiple raw images into one raw image to maximize raw image data, and it still has less dynamic range than one shot from an APS-C camera. I have played with the files in a raw editor and it's not even close.

What you're seeing is a good auto-exposure setting, but what happens if you want to keep the shadow detail? All you're showing is that the default on the a6000 is to preserve shadow detail, while the default on the Pixel 2 is to obliterate it via exposure stacking. You could take the worst sensor in the world and, if you stack enough shots, get the same exposure as your Pixel.

I would suggest you go use your A6000 in the real world and you will see a world of difference versus a dark room with a bright window out front. That said, the Pixel 2 is a great phone, so enjoy. I've said my piece - I'm out!
If you want to PP, that may be true. But with any kind of default processing, the Pixel is far better.
 
You have options to allow the a6000 to produce a better result in this scene too. You could use the popup flash and point it at the ceiling to add light to the scene.
So you are saying to use a flash to compensate for the A6000's lack of DR versus the Pixel?
You could expose to maximize the sensor's DR.
You are saying that in-camera DRO, or HDR with 6 stops between frames, isn't enough exposure to maximize the sensor's DR?
You can overexpose the highlights more (highlight priority metering and +1.3 EV exp comp should do the trick) and pull them back in post.
Lots of work there versus the phone.
You'd have more information in the shadows doing this, and PRIME can handle the color noise. Obviously this is more effort than just pressing the shutter on a Pixel (try Night Sight; the result will be even better), but it is doable.
Look, I have the A6000 and I totally buy into the overall superiority of an ILC. But you can't deny the phone's impressive capabilities thanks to computational photography.
The color temperature issues are due to Sony's cold reptilian auto white balance (largely corrected on newer cameras, my a7 III is much better than my a6500 was). Set the auto white balance to +2 amber and +1.5 magenta and you'll be happier with the results. It's a more Canon-esque white balance.
Thanks for that hint, I may try that.
 
It's dynamic range from the Pixel's default processing versus that of the A6000 in my example case.
I would suggest you go use your A6000 in the real world and you will see a world of difference vs a dark room with a bright window out front.
When did a dark room with a bright window stop being real-world usage? It is real world to take party pictures of guests in the living room, without flash, and without staging the shot by closing the curtains to even out the indoor lighting against the outside light.
A room with all the lights turned off? The only light present in this scene seems to originate from the bright window. How commonplace is that? Most people have lights on in their homes. Also as a photographer wouldn't you try to control the light a bit? Maybe not shoot a horribly backlit subject if you could avoid it by standing elsewhere? Or use a fill flash?
The room is not that dark; it's just that outside is quite bright.

You are making excuses about the room lighting rather than discussing the DR differences between the cameras. There are plenty of very high-contrast pictures we take: sun/shade, backlight, white clouds in the sky over deeply shadowed objects, etc. Plenty of high-DR cases in real life.
 
