HDR on Canon M50 vs iPhone XS

cromwell143

Hi Everyone,

New to digital photography/video and trying to understand why, at least to my eye, the dynamic range on my iPhone's photos and video time lapses looks so much better than the stuff coming out of my Canon M50.

Wondering if there's something I'm doing wrong. Here's an example of what I'm seeing:

This is a photo I shot on my Canon M50 using a 22mm f/2 Canon lens. Whether I'm shooting in auto or manual, it seems like I have to pick between exposing for the clouds or for the land.

Shot with Canon M50

This is the same view, same moment, shot with my iPhone XS.

Shot on iPhone XS - same time of day

TIA!
 
Your iPhone is processing the image for HDR: it takes several different exposures and merges them. You can do the same with your M50. I'm sure it has an HDR setting which will do that. Or you can shoot a multi-shot bracket in RAW and merge the frames in an HDR program (or use Lightroom's merge functionality). Or you can shoot a single RAW and adjust the highlights and shadows in post-processing. Or you can do that with the JPEG. I took your JPEG and made some quick adjustments. This is the result:



[attached: the M50 JPEG after quick adjustments]

Bear in mind that this is the worst of the options available to you with the M50, and it still isn't bad. If you really want to take advantage of the full dynamic range of a camera, you should shoot RAW.
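For the curious, the merge step those HDR programs perform can be sketched in a few lines. This is a toy, Mertens-style exposure fusion on grayscale pixel values in the 0..1 range; it is not what Lightroom actually does internally, just the general idea of weighting each frame by how well-exposed each pixel is:

```python
import math

def exposure_fusion(frames):
    """Blend several exposures of the same scene into one image.

    frames: list of equally sized grayscale images, pixel values in 0..1.
    Each pixel is weighted by how well-exposed it is (values near mid-gray
    0.5 get the highest weight), then the weighted frames are averaged.
    """
    fused = []
    for pixels in zip(*frames):  # same pixel position across all frames
        # Well-exposedness weight: a Gaussian centered on mid-gray.
        weights = [math.exp(-((v - 0.5) ** 2) / (2 * 0.2 ** 2)) for v in pixels]
        total = sum(weights)
        fused.append(sum(w * v for w, v in zip(weights, pixels)) / total)
    return fused

# Three made-up "exposures" of a high-contrast 3-pixel scene.
under = [0.02, 0.10, 0.45]   # exposed for the sky: the land is crushed
mid   = [0.20, 0.50, 0.85]
over  = [0.55, 0.90, 0.99]   # exposed for the land: the sky is blown
print(exposure_fusion([under, mid, over]))
```

The fused result keeps each pixel close to whichever exposure rendered it best, which is why the merged shot holds both clouds and land.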

--
As the length of a thread approaches 150, the probability that someone will make the obvious "it's not the camera, it's the photographer" remark approaches 1.
Alastair
 
The dedicated camera industry is getting its tukus kicked by smartphones. I can already parrot the typical responses to a post like this here at DPR: they will go over the fine details of why the M50 photo is far superior to the one from the smartphone. What they don't see is that you took the shot with the smartphone and walked away with a photo that is more than acceptable to the overwhelming majority of people. Sure, we can go back to a computer and spend time tweaking the M50 photo to look better, but is it worth doing that for every shot we take? I don't think so, for most of us and for most photos. People here are enthusiasts, but the ILC makers are giving up on a lot of potential buyers by not making the image-processing side the much less cumbersome affair that most people want, and receive, from a smartphone.

This is why I have been advocating for a long while that Canon, Sony, Nikon, Fuji, Panasonic, Olympus, etc. need to put modes in their MILC cameras that compete with those found in smartphones. It would give people who are familiar with smartphones more incentive to buy a MILC while taking advantage of benefits such as lens quality, focus tracking, frame rates, and low-light noise. The tedium of having to learn image-processing software and use a computer to make a photo look good is what turns most people away from MILCs, after size and cost differences are considered. The ILC market is going to be under continual assault by smartphones, and ILC manufacturers had better start matching smartphone image processing in their models or suffer an eventual defeat over the long term.
 
ILC makers are giving up on a lot of potential buyers by not making the image processing side a much less cumbersome affair which most people want, and receive, from a smartphone.

Canon, Sony, Nikon, Fuji, Panasonic, Olympus etc. need to put modes in their MILC cameras that compete with those found in smartphones.
Very true!

There should be an option giving the best of both worlds.
 
the dedicated camera industry is getting its tukus kicked by smartphones. [...]
Wow! Can you share what adjustments you made?
 
Very true! There should be an option giving the best of both worlds.
I know this in spades. I have a Pixel 2 and an M6/M50, and I get out-of-the-camera results similar to what is displayed in the OP. I shoot 100% raw, and I find that if I use CC 2018 ACR and click "Auto", ACR will pull up shadows and pull down highlights to make the M image look like the out-of-the-camera Pixel 2 image.

Here is a thread I posted a while ago that shows examples.

Yes, it is a pain to have to do this with each image. I use the Pixel 2 as much as I can so I don't have to waste time thrashing with M raw files. But the Ms are better when I need better. I'm not going to get rid of my Ms.

For now, anyway. But I'd switch in a heartbeat to an ILC system (or P&S with optical zoom) that used Apple- or Google-level computational photography technology.

Wayne
 
I'd switch in a heartbeat to an ILC system (or P&S with optical zoom) that used Apple or Google level computational photography technology.
I continually scratch my head over why they don't do this. They need to make every camera they sell emulate smartphones. It doesn't take a single thing away from what these cameras currently do; it only adds a much needed, and wanted, capability. It is quite amazing to me how rigid in design philosophy camera makers have become. They seem to look down on the very technology that is slowly making them go extinct. At some point a smartphone maker will take all the knowledge and manufacturing expertise it has developed regarding photography and make an extinction-level small camera for these dinosaur camera companies to deal with.
 
the dedicated camera industry is getting its tukus kicked by smartphones. [...]
Wow! Can you share what adjustments you made?
I think this question is for me, although you quoted Mike's post. In Lightroom, I increased exposure by about 0.4 of a stop, pulled up shadows by 85 (on the 100-point scale), reduced highlights by 30, and added some clarity, vibrance, and a little dehaze. It took 15 seconds.

I find it easier to shoot RAW and use preset adjustments. Lightroom allows you to define adjustments for specific cameras and ISO levels, to be applied on import. With most of my shots, I don't need to make any adjustments at all over and above the ones I have programmed in.

I understand why Mike wants camera manufacturers to incorporate the same kind of processing as phone manufacturers. I would never use it myself, but I'm sure others would. For me, once I've customized the adjustments that Lightroom applies on import, I'm much happier with the results than with what I get from my iPhone 8, which does a really nice job as phones go. But I am an enthusiast photographer. I learned everything on manual film SLRs back in the '70s, loaded my own film, and did my own developing. Processing is a large part of the fun for me, though I spend much less time on it now than I did when I first started shooting RAW on my original Digital Rebel. That's because the image quality from modern ILCs is so high, and the output from Lightroom so good and so easily customizable.
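Those slider moves map loosely onto simple tone-curve math. The sketch below is purely illustrative and is not Adobe's actual algorithm: exposure is treated as a linear gain, the shadows slider lifts only the dark tones, and the highlights slider pulls down only the bright ones:

```python
def adjust(v, exposure_stops=0.4, shadows=85, highlights=-30):
    """Rough, illustrative stand-ins for Lightroom-style sliders.

    v: pixel value in 0..1. The defaults mirror the adjustments described
    above (+0.4 stop exposure, shadows +85, highlights -30).
    """
    v = min(1.0, v * 2 ** exposure_stops)          # exposure: linear gain
    v = v + (shadows / 100) * 0.5 * (1 - v) ** 4   # shadows: lift dark tones only
    v = v + (highlights / 100) * 0.5 * v ** 4      # highlights: pull bright tones down
    return max(0.0, min(1.0, v))

# Dark land pixels come up, blown cloud pixels come down, midtones barely move.
print(adjust(0.05), adjust(0.50), adjust(0.95))
```

The `(1 - v) ** 4` and `v ** 4` terms are what keep each slider local to its end of the tonal range, which is the behavior the real sliders approximate.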
 
It is quite amazing to me how rigid in design philosophy camera makers have become. They seem to look down on the very technology that is slowly making them go extinct. At some point a smartphone maker will take all the knowledge and manufacturing expertise it has developed regarding photography and make an extinction-level small camera for these dinosaur camera companies to deal with.
Good analogy.

We haven't heard much from Sony lately. While all the other ILC companies are praising themselves for putting bog-standard FF sensors in mirrorless bodies and slightly tweaked lenses on new mounts, maybe Sony has teamed up with Google or Apple, and the next thing we see will be an RX100-CP (Computational Photography edition) that uses computational photography to produce images better than FF. And maybe an A7-CP that produces images better than medium format.

Much as I like my Canon equipment I am dismayed at how crude it all is compared to what I experience with my Pixel 2. Yep we are strolling through Jurassic Park now.

Wayne
 
Hi Everyone,
New to digital photography/video and trying to understand why, at least to my eye, the dynamic range on my iPhone's photos and video time lapses looks so much better than the stuff coming out of my Canon M50. [...]

What you're noticing is that the Apple camera on your phone (the top model, by the way) is "exposing better". In truth, Apple's software is merging data from multiple frames, then lowering highlights, altering color hues (based on an algorithm that enhances blue skies), and raising shadow detail to extend dynamic range. The end result is an image you might have produced yourself with a combination of Lightroom and Photoshop. In fact, I took your M50 image, altered the levels, colors, hues, and tonal range, and was able to match your iPhone XS image quite accurately in less than two minutes, with just one difference: the M50 image was softer in the mid distance, where noise reduction left clarity in the trees and foliage less defined.
.

APPLE iPhone Cameras...
That iPhone XS is the current pinnacle of phone-camera technology, but it's also terribly flawed. I don't say this as someone who despises Apple: I've been an avid Apple user since the 90s, and the late Steve Jobs occasionally used my work to highlight new tech when opening a Keynote address. I've been using Apple's handheld devices since they first came onto the market (technically even earlier than that). The lifespan of that expensive little toy is estimated to be just two years; I don't consider that a good investment. If you want a telephoto zoom or reliable performance for professional work, you'll be in for a disappointment. But for brightly lit daylight shots you'll get some nice wide images with it. If you attempt to shoot in very low light, you'd be better off with an OPPO or a Pixel phone. And if you want to create shots with synthetic bokeh, you might be in for a shock (see below).

In your samples, you noted that the HDR on your new iPhone was pulling more hidden detail from the shadows. The reason is that your phone is more aggressively processing the multiple images it takes in order to reveal that detail. It also has something to do with your M50 HDR shot not being exposed properly, with too few frames captured. The iPhone has been carefully programmed to produce the best HDR of any modern smartphone camera. It still suffers in low light, and it also suffers lens flare more than any 'real' camera when light enters from the side of the lens. For that price you might have considered a full-frame camera like the EOS R, or a DSLR with an FF sensor and a nice cheap low-light lens. You can properly expose a nearly identical shot with your EOS M camera if you are willing to take the time to figure out how. But your iPhone makes this a no-brainer exercise in automatic photography, so why not use it for those tricky shots?
.
A good example is the two shots directly below. What I discovered was that the iPhone handled backlit scenes better than dedicated cameras, unless you take the time to prepare an HDR shot with your camera beforehand.
.


iPhone 6S - not taken with HDR

EOS M + EF-M 22mm f/2 lens


iPhone 6S


EOS M + EF-M 22mm f/2 lens

.


The tiny little lens on my iPhone 6S - considered one of the first decent camera phones in the Apple lineup at the time.


Given suitable lighting, decent pictures are possible with modern smartphone cameras.


EOS 6D + EF 85mm f/1.2L lens (different location and time)


EOS M + EF-M 28mm Macro lens (note the ring-light reflection)

Bad Bokeh - allegedly from the iPhone XS

Bad Bokeh from the iPhone - the iPhone rendered out the glass itself!!!

.
I often enjoyed taking pictures with my iPhone to compare the results with those from a DSLR or mirrorless camera. Each year a new phone model tends to be released with a few tweaks and a more advanced set of processing algorithms. The most impressive images I've seen for low-light (night) street shots came from the Pixel, the best results indoors in extremely low light came from the OPPO, and the best results I've seen in daylight and HDR have come from the newest iPhone XS.
.
Next year you'll see another model that will (allegedly) take a better picture than the existing phones. The trick is to update your phone based only on the features you need rather than buying into the sales pitch. But even now, a quality subcompact digital camera with a 1" sensor or larger can usually take a better picture than the average phone camera; make it a 1.5" sensor in a compact digital camera and things get interesting. And APS-C is vastly superior if you add post-processing noise reduction into the equation.
.

iPhone 6S (handheld at night) - NOTE: The complete loss of detail.


EOS M + EF-M 22mm f/2 lens (handheld at night) - NOTE: the detail captured.

iPhone 6S

EOS M + EF-M 22mm f/2 lens - not my best food shot... but the same subject


EOS 6D - in-camera HDR used


EOS M6 + 11-22mm lens - in-camera HDR used

.
The shots above show the HDR capabilities of the EOS 6D and EOS M6 using simple in-camera settings. The two shots below show the general results that distinguish my current iPhone from the EOS M with a quality lens. The difference used to be one of weight and size, but now the difference is more about price. It wouldn't be hard to show you images that the Apple isn't capable of, but for wide shots of day-to-day subjects the Apple takes a nice picture, and the newer models handle low light much better than earlier ones.

Having a nice, modern phone is an advantage; it's just another handy tool to have on hand should you need it. More people will leave their cameras at home as these smartphones get even smarter, and smartphones are really the only threat to camera sales these days. One upside is that as people become interested in photography (triggered by using their new phones), they sometimes look to more dedicated devices. That's why cameras like ours will always have a following, no matter how smart the camera phones get. But they are indeed limited by the tiny optics they use.
.


iPhone 6S


EOS M + EF 24mm f/1.4L lens

--
Regards,
Marco Nero.
 

The iPhone is using computational photography to give it an edge, extending its dynamic range. That iPhone photo is actually multi-frame stacking with HDR processing (all done in less than a second thanks to the speed of the A12 processor). Naturally its image, color, and dynamic range are superior to any camera's single straight-out shot. Traditionalists will argue that's cheating; the practical realist in me says I don't care, so long as I get the better image.

Your photo shows why consumers have abandoned dedicated cameras in favor of smartphones. This trend is accelerating: as smartphone CPUs improve by leaps and bounds year over year, they open up endless possibilities for instant photo processing.

Bottom line is this:
  • Consumers judge a camera by its final JPEG: it's the final image that matters.
  • Enthusiasts judge a camera by its technical merits, then process the image on a computer.
  • They don't see eye to eye, and for most people (the consumers) the smartphone image looks better and is therefore, to them, the better camera.




Hi Everyone, New to digital photography/video and trying to understand why, at least to my eye, the dynamic range on my iPhone's photos and video time lapses looks so much better than the stuff coming out of my Canon M50. [...]
 
My M50 has an HDR "art-vivid" mode that would've fixed that for you.
 
Some cameras may do better than others with the blending, or have more or fewer controls to adjust the amount of exposure difference between the shots, but you do have a function on your M50 that you could have used to help deal with that scene.

Canon's instruction:

"When shooting a scene having bright and dark areas. When you take a picture, three continuous shots will be taken at different exposures. The loss of detail in highlights and shadows will be reduced in the final image.

1. Set the power to <ON>.

2. Tap [P], and then select [HDR Backlight Control]."

This would perform the same type of processing as the phone did: taking three shots, merging the highlights, mids, and shadows together, and delivering one final JPEG. It wouldn't require any post-processing, just like the phone.

In-camera multiple-frame stacking for noise and for HDR has been around on cameras for about 9-10 years now...before it even got to phones.
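The "for noise" half of that claim is easy to demonstrate: averaging N frames of the same scene cuts random sensor noise by roughly the square root of N. A quick simulation with made-up pixel values and noise levels (illustrative only, not any camera's actual pipeline):

```python
import random
import statistics

random.seed(0)

SIGMA = 0.05     # per-shot noise level (arbitrary, for illustration)
N = 9            # frames in the stack

# Simulate 1000 pixels, each read N times with Gaussian sensor noise.
# The true value is 0.4 for every pixel, so any deviation is pure noise.
single_errors = []
stacked_errors = []
for _ in range(1000):
    readings = [0.4 + random.gauss(0, SIGMA) for _ in range(N)]
    single_errors.append(abs(readings[0] - 0.4))          # one shot
    stacked_errors.append(abs(sum(readings) / N - 0.4))   # averaged stack

print(statistics.mean(single_errors))   # roughly sigma-sized error
print(statistics.mean(stacked_errors))  # roughly sigma/sqrt(N): about 3x smaller
```

With 9 frames the averaged stack lands about three times closer to the true value than any single shot, which is exactly why both phones and cameras stack frames in low light.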
 
Some cameras may do better than others with the blending, or have more or less controls to adjust the amount of exposure difference between the shots, but you do have a function on your M50 that you could have used to help deal with that scene. [...]

In-camera multiple-frame stacking for noise and for HDR has been around on cameras for about 9-10 years now...before it even got to phones.
I think where the smartphone has an advantage is that it takes the multiple photos much closer together in time, while the M50 spreads them apart. That makes it harder to keep the photos sharp when they're merged, especially when hand-holding the camera.
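Phone stackers compensate for that hand-shake by aligning the frames before merging them. Here is a toy 1-D version of the idea: try every small offset and keep the one where the two frames agree best. (Real cameras align full 2-D images, often helped by gyro data; that detail is an assumption on my part, not something from Canon's documentation.)

```python
def best_shift(reference, moved, max_shift=3):
    """Estimate how far `moved` is offset relative to `reference`
    by testing every shift in [-max_shift, max_shift] and keeping
    the one with the smallest mean squared difference over the
    overlapping region."""
    best, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        overlap = [
            (reference[i], moved[i + s])
            for i in range(len(reference))
            if 0 <= i + s < len(moved)
        ]
        err = sum((a - b) ** 2 for a, b in overlap) / len(overlap)
        if err < best_err:
            best, best_err = s, err
    return best

# A 1-D "scanline": the second exposure is the same scene shifted right by 2,
# as if the camera moved between shots.
ref   = [0, 0, 1, 5, 9, 5, 1, 0, 0, 0]
moved = [0, 0, 0, 0, 1, 5, 9, 5, 1, 0]
print(best_shift(ref, moved))  # finds the +2 offset
```

Once the offset is known, the stacker shifts each frame back into register before averaging, so the merge stays sharp even handheld.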
 
Hi Everyone,
New to digital photography/video and trying to understand why, at least to my eye, the dynamic range on my iPhone's photos and video time lapses looks so much better than the stuff coming out of my Canon M50.
Cromwell, given this is your very first post on DPReview, and given you say you are new to this, I'm a little bit suspicious about your knowledge of things like "dynamic range". That said, here are my comments:

As phones, Apple's and Google's products are essentially the same, so they use photography to differentiate competitively. They have huge technical and financial resources available to perform "computational photography" in the device. Canon and Sony (as big as they are) do not have those same resources and are lagging behind. In theory, though, the camera makers could offer features that would exceed what those phones can do.

Until Canonikon catch up in computational photography, phones will generally produce better results without post-processing (as you demonstrated). Because cameras have larger lenses and sensors, you can still exceed what phones can do, but only with post-processing.
 
That's one of the main issues "cameras" have: bad image processing. As a result, smartphones will give you better 12-megapixel out-of-camera wide-angle JPEGs.

You can forget Canon's in-camera HDR: it is very slow, it likely can't handle movement, and you get neither an HDR raw nor the individual raws.

The Canon M50 has a mode called Highlight Tone Priority. If you choose this mode, the result should be a tiny bit better. Other cameras have similar, and probably slightly better, modes than Canon's Highlight Tone Priority: Fuji's DR mode and its option to brighten the shadows, Sony's DRO for brightening shadows, Panasonic's iDynamic, etc. But none of them can automatically adjust the shadows (or the exposure) as well as Google or Apple smartphones, and in extreme low-light conditions Huawei phones are even better than Google or Apple.

Canon's sensor might have slightly more dynamic range, but this doesn't mean the JPEGs will show more visible dynamic range without editing. And even when you edit, the usable dynamic range of the raw files can be significantly reduced by a lens with heavy vignetting (like the 15-45mm).
 
Canon's sensor might have slightly more dynamic range, but this doesn't mean the JPEGs will show more visible dynamic range without editing. And even when you edit, the usable dynamic range of the raw files can be significantly reduced by a lens with heavy vignetting (like the 15-45mm).
This correlates with what I reported earlier in this thread. I was comparing out-of-the-camera Pixel 2 JPEGs with M6 raw files (shadows pushed, highlights pulled), and found that both had about the same dynamic range. In the thread linked above (starting here) I also exceeded the dynamic range of both (a dark interior room and a bright sunlit landscape out a window), and they both fell apart at about the same place. I was using my 15-45mm lens, which derates the dynamic range of the M6 raws, as you describe.

I never did anything with Canon HDR or Highlight Tone Priority. I just used straight M6 raw files.

Wayne
 
Here is the reality: though cameras really do produce technically better images, it doesn't matter. Can cameras with 1" and APS-C sensors do better? Absolutely.

However, smartphones produce good pictures EASILY, pictures that are more than good enough for most people. AND it fits in your pocket, AND it easily lets you post pictures online, AND it makes phone calls, AND texts, AND gets email, etc.

Does your camera do that?

Frankly, I have always hated messing around with Lightroom and Photoshop. It's a hassle, something else whose ins and outs I have to learn to make nice pictures. Smartphones eliminate this automatically, and there are apps that do a lot of the work for you. Who needs Lightroom (or Photoshop)?

I have an app on my phone that will take any picture and make it look like any of 40 historic films, including Kodak, Fuji, and Agfa stocks. I regularly take pictures with my phone and make them look like Kodachrome. People are mesmerized by the colors.

Up until the last year, I had dismissed using a smartphone on the assumption that it produced grainy, crappy images. Boy, did I have a rude awakening. The pictures I am seeing now are astounding. Granted, if you start zooming in and pixel peeping, they are not as good as a regular camera today. But viewed full size on a computer or phone, they look great. Now that I can see what these phones can do, I want a multiple-lens phone like the Pixel.

Convenience and ease of use trump pretty much everything else. That's why smartphones are becoming so popular for photography. At this point, no matter what the camera companies do, it is going to be very tough to ever win back the regular consumer photography market.
 
Google's Pixel 3 camera is a game changer: a top-notch camera with algorithms that blow away anything the iPhone offers. Canon et al. need to include this type of processing for the JPEG output and leave RAW for those "special" images.

For those interested:

Two podcasts available on iTunes go into this in depth. A DPREVIEW expert goes into the Pixel 3 processing algorithms on The New Screen Savers podcast #178. The Pixel 2 is reviewed on The New Screen Savers #179. View the video versions if you can.
 
Wow. Thanks to everyone for the amazing responses!


Seems like there are some settings I can change to make the Canon better for this particular shooting situation. But overall the consensus seems to be that unless I'm willing to get into post-processing or have a very specific shooting need, the high-end phone cameras are going to provide a much better time-to-average-quality output ratio.

If I hadn't bought this camera with the intention of shooting video as well, I might be putting my new M50 on eBay right now.


Thanks!
 
