D800E vs. 5D3: Diminishing Returns or Reversal of Returns?

Started Jul 19, 2013 | Discussions thread
Rick Knepper (OP) • Forum Pro • Posts: 15,622
Re: Scaling images with BiCubic

The_Suede wrote:

I just thought to add that the BiCubic algorithm in Photoshop is a 4x4 convolution window. And it's not a very GOOD 4x4 - meaning that it has an optimum "working window" between 75% and 45% scale of original.

The window is the area in the original that the algorithm "looks at" when creating a new pixel for the resampled image. So when you scale to 25% or less, you're actually not even getting all the original pixels included into the resampled image.

Scaling 36MP to ~2.2MP is right on the edge - a ~24% scale, meaning that you will quite probably get aliasing in the finished result. You will at the very least NOT get optimal results regarding detail sharpness.
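The skipped-pixel effect described above can be sketched numerically. This is a hypothetical model, assuming a fixed 4x4 convolution window where each output sample reads 4 consecutive source samples per axis: once the scale drops below 25%, the stride between output samples in source coordinates exceeds 4, so some source pixels are never read at all.

```python
import numpy as np

def fraction_sampled(n_in, scale):
    """Fraction of source pixels ever read by a fixed 4-tap window at a given scale."""
    n_out = int(round(n_in * scale))
    touched = np.zeros(n_in, dtype=bool)
    for j in range(n_out):
        x = (j + 0.5) / scale - 0.5       # output pixel centre in source coordinates
        i0 = int(np.floor(x)) - 1         # 4-tap window spans floor(x)-1 .. floor(x)+2
        touched[max(i0, 0):min(i0 + 4, n_in)] = True
    return touched.mean()

# 7360 is the D800's horizontal pixel count.
print(fraction_sampled(7360, 0.50))   # 1.0 -- every source pixel contributes
print(fraction_sampled(7360, 0.24))   # below 1.0 -- some pixels are never read
```

At 50% scale the windows overlap and cover everything; at 24% they leave gaps, which is exactly where aliasing creeps in.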

I agree with this but reductions of this magnitude are sometimes a necessary evil.

Doing ANY kind of large change in PS Bicubic is a bad idea if you want to preserve information. This means (regarding the ORIGINAL thread discussion material!) that an 1800x1200px end format might actually give better results with a 22MP camera than with a 36MP camera.

Thank you!!

IF you do the transform in one step. Do both cameras in two steps, with an intermediate at maybe 3600x2400px, and the result may actually change. This depends on how sharp the original is, and how you apply sharpening in the workflow.

This is very interesting. I've always felt that any reduction in PS results in some degradation, at least that is what my eyes have told me, lying or not, even though such reductions remain a necessary evil.

Kind of off-topic: my general workflow involves a sharpening routine that requires a starting image 1.66 times larger than the intended final dimensions (in this case 1800x1200). I avoid multiple bicubic reductions in PS by adding a reduction to the conversion step in ACR, so the resulting TIFF comes out at 2988x1992, ready for a one-time bicubic reduction to the final dimensions. Am I kidding myself that I am avoiding a stage of degradation? Is the reduction in ACR actually a bicubic algorithm?

But generally - for 1800x1200px, general signal theory says that anything more than twice the end resolution should give almost identical results. Add in a bit of slack for the raw-converter interpolation engine, and say another 25%.
That's still not more than 2.2MP x 4 x 1.25^2 = 13.75MP original.
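The arithmetic above checks out; with the exact 1800x1200 target it comes to 13.5MP, and the quoted 13.75MP follows from rounding the target up to 2.2MP first:

```python
target_mp = 1800 * 1200 / 1e6        # 2.16 MP final image
nyquist_mp = target_mp * 2 ** 2      # twice the linear resolution = 4x the pixels
with_slack = nyquist_mp * 1.25 ** 2  # +25% linear slack for raw-converter interpolation
print(round(with_slack, 2))          # 13.5
```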

So, given a good raw converter, the original 5D "mk1" should give almost identical images as the D800 at 1800x1200px scale. If all scaling operations are made in a sensible way...

All of this seems quite sensible (I am sure that it is a condensed summary of signal theory). I want to ask about the characterization "almost identical". This is an itch I need to scratch and perhaps you could help me.

Since a 36 MP sensor has more pixels along each horizontal and vertical line than a 21 MP sensor of the same physical surface area, it seems a fair assumption that finer (or more) detail is captured. Assuming you agree, that extra detail doesn't just disappear during reduction, does it? The "extra" detail may lose its own resolution along with the rest of the detail that a 21 MP sensor would also have captured, as pixels are thrown away. However, reductions also seem to work the same way increases in viewing distance do. To me, those are two distinct ways that additional detail can become "irrelevant". Am I traveling on any kind of right path here?

The question is, are the extra details really gone? I say they are not. Not all of them. If I want to discern these details, I decrease the viewing distance.

As I mentioned in the OP (and you seem to suggest a similar thing, but I want to be crystal clear about what I think you are suggesting), there is a point where a given reduction harms the higher-MP image more than the lower-MP image. Do you believe there is a tipping point, under any reasonable and responsible reduction scheme, where the image that starts out with fewer MP ends up with more resolution than the image with more MP, assuming one can see the details?

We keep characterizing an 1800x1200 image as small. This size image, viewed on my monitor, is like holding up a 13x19 paper print 18" away or closer. That's really quite large enough to see these "minute" differences quite readily.

Therefore, I am not sure how we can describe a 5D image as identical to a D800 image, at least not when both are reduced to 1800x1200. Maybe I've looked at so many images at this size that I have trained my eye to discern detail at it.


On the raws presented here, I get very similar - almost perfectly identical - results when the images are at ~6MP, that is, 3000x2000px. Given a double-blind ABX test with images as similar as those, I would probably fail to identify either camera correctly.

Identifying a camera is one thing, but I think identifying the image with the better resolution and/or more/finer detail can be pointed out easily. The result just might not be what one would expect.

So, one sensible conclusion that can be drawn from your experiment is that the advice in the OP, that one should test their own particular application to see how much of the so-called diminishing returns applies, is valid?


Rick Knepper, photographer, non-professional, shooting for pleasure, check my profile for gear list and philosophy. TJ said, "Every generation needs a new revolution".

Rick Knepper's gear list:
Pentax 645Z Canon EOS 5DS R Fujifilm GFX 50S Pentax smc D FA 645 55mm F2.8 AL (IF) SDM AW Pentax DA645 28-45mm F4.5 +8 more