iPhone 11 Pro vs 1DXii

Started 2 months ago | Discussions thread
This thread is locked.
stevo23 Forum Pro • Posts: 22,871
Re: iPhone 11 Pro vs 1DXii

Dexter75 wrote:

The Davinator wrote:

Dexter75 wrote:

stevo23 wrote:

Any camera can take multiple images, and the photographer can process them to his liking; it's very easy to make truly superior results. I think what you're actually saying is that you don't feel like tapping into the superior IQ of an ILC because you don't feel like post-processing.

Sure, you can do that if you want to spend an hour in post doing what the iPhone can do with smart HDR in a fraction of a second. I’ve been a pro photographer for 15 years, I spend plenty of time in post. No, my iPhone won’t replace my camera, especially for my studio work, but it’s plenty capable of most everything else. I will be using it for a few natural light portrait shoots coming up though.

If you are spending an hour in post doing an HDR stack, you know nothing about being a photographer. I do multi-shot HDR with a Fuji 50R in less than a minute...with results that slaughter any phone. Same with any DSLR.

Cool, let me know when your Fuji is capable of this without the need for any LR.

Smart HDR

Smart HDR is used for effects like Portrait Mode, but also for all images on the ultra-wide camera, bright images on the wide-angle, and the very brightest images on the telephoto, like outside in the middle of the day, where the dynamic range can be high and handling harsh contrasts while preserving and pulling maximum detail from highlights and shadows is paramount. Now in its second generation, it's been machine-learned to recognize people and process them as people, to maintain the best highlights, shadows, and skin tones possible.
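The merge step that copy describes can be sketched in a few lines. This is a toy illustration using a simple well-exposedness weight (pixels near mid-gray count most), not Apple's actual pipeline, which is proprietary and machine-learned:

```python
import numpy as np

def fuse_exposures(frames):
    """Fuse bracketed exposures with a simple well-exposedness weight.

    frames: list of float arrays in [0, 1], all the same shape (H, W).
    A crude stand-in for Smart HDR's merge step.
    """
    stack = np.stack(frames)                       # (N, H, W)
    # Weight pixels near mid-gray highest (Gaussian around 0.5),
    # so blown highlights and crushed shadows contribute little.
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    weights /= weights.sum(axis=0, keepdims=True)  # normalize per pixel
    return (weights * stack).sum(axis=0)

# Example: dark, mid, and bright renditions of the same flat patch.
dark = np.full((2, 2), 0.1)
mid = np.full((2, 2), 0.5)
bright = np.full((2, 2), 0.95)
fused = fuse_exposures([dark, mid, bright])
```

Because the mid frame sits closest to mid-gray, the fused result leans heavily toward it while still pulling a little shadow and highlight information from the other two.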

Deep Fusion

Deep Fusion, currently in beta, is used for mid- to low-light images, like being inside. The process is … smarter than smart. It takes four standard exposures and four short exposures before capture, and then a long exposure at capture. Then it takes the best of the short exposures, the one closest to the time of capture with minimal motion and zero shutter lag, and that gets fed to the machine learning system.
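Picking the "best" short exposure can be approximated by a sharpness score; here is a toy version using the variance of a discrete Laplacian as a proxy for "minimal motion" (an assumption, since Apple hasn't published its selection criterion):

```python
import numpy as np

def sharpness(img):
    """Variance of a discrete Laplacian: higher means more fine detail,
    a common proxy for 'least motion blur' when ranking frames."""
    lap = (-4 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return lap.var()

def pick_reference(short_frames):
    """Choose the short exposure with the most detail, standing in
    for Deep Fusion's 'minimal motion' reference pick."""
    return max(short_frames, key=sharpness)

# Example: a blurred-flat frame vs. a detailed checkerboard frame.
flat = np.full((8, 8), 0.5)
checker = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)
best = pick_reference([flat, checker])
```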

So just how does that machine learning work?

Next, it takes the best three of the standard exposures and the long exposure, and fuses that into a single, synthetic long exposure. That gets fed into the machine learning system as well.
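The "synthetic long exposure" step amounts to merging several frames so noise averages out. A minimal sketch, assuming a plain per-pixel mean (the real merge is presumably alignment-aware and weighted):

```python
import numpy as np

def synthetic_long(standard_frames, long_frame):
    """Average the best standard frames with the true long exposure.
    Independent noise cancels when frames are averaged, which is the
    point of building a 'synthetic long' frame for tone data."""
    stack = np.stack(standard_frames + [long_frame])
    return stack.mean(axis=0)

# Example: three noisy standard frames plus one noisy long frame.
rng = np.random.default_rng(0)
frames = [0.5 + 0.1 * rng.standard_normal((16, 16)) for _ in range(3)]
long_f = 0.5 + 0.1 * rng.standard_normal((16, 16))
merged = synthetic_long(frames, long_f)
```

Averaging four independently noisy frames roughly halves the noise standard deviation, which is why the merged frame supplies cleaner tone data than any single input.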

Then, both 12 MP images are pushed through a series of neural networks that compare each pixel from each image, within the full context of the image, to put together the best possible tone and detail data for the image: the highest texture with the lowest noise. It understands faces and skin, skies and trees, wood grain and cloth weave, and optimizes for all of it. The result is a single output image built from all those input images, with the best possible sharpness, detail, and color accuracy.
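As a rough illustration of what per-pixel fusion of a detail frame and a tone frame could look like, here is a toy version that takes texture from the sharp reference where local variance is high and tone from the synthetic long frame elsewhere. Apple's actual networks are far more elaborate; the weighting scheme here is purely an assumption for illustration:

```python
import numpy as np

def local_variance(img, k=1):
    """Per-pixel variance over a (2k+1)x(2k+1) neighborhood,
    computed from shifted copies of an edge-padded image."""
    h, w = img.shape
    padded = np.pad(img, k, mode="edge")
    patches = [padded[dy:dy + h, dx:dx + w]
               for dy in range(2 * k + 1) for dx in range(2 * k + 1)]
    return np.stack(patches).var(axis=0)

def fuse_detail_and_tone(short_ref, synthetic_long):
    """Toy per-pixel fusion: keep detail from the sharp short frame
    where local texture is high, tone from the clean long frame
    elsewhere (assumed weighting, not Apple's)."""
    w = local_variance(short_ref)
    w = w / (w.max() + 1e-8)          # normalize texture weight to [0, 1]
    return w * short_ref + (1 - w) * synthetic_long

# Example: textured reference frame fused with a smooth tone frame.
texture = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)
smooth = np.full((8, 8), 0.5)
out = fuse_detail_and_tone(texture, smooth)
```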

Can’t argue with neural networks, it’s impossible.
