Downsizing images and initial image quality

Mark S Abeln

This is a question perhaps for the more theoretically inclined or very experienced readers....

If you are targeting your images for Internet display -- with final dimensions of, say, 500 pixels on a side -- then naively I'd think that much of what goes into image quality is lost upon downsizing. Many high-quality, sharp pixels, once downsized, should be equivalent to fewer, lower-quality pixels -- right? There is a common opinion that high-quality equipment is wasted for web display -- is this opinion correct?

But that is not what I see. Images taken with good technique and equipment to emphasize sharpness generally look much better downsized than those taken with less care. For example, when I use my sharp macro lens, the final image (even when greatly reduced in size) looks far sharper than what I get by merely moving in close with my ordinary zoom lens. The same goes when I stitch photos into a panorama -- the sharpness holds up even when the result is greatly reduced in size.

Medium-format digital images look like medium-format images, even when greatly reduced in size. The perceived sharpness blows away images taken with tiny sensors.

So how does this fit with Nyquist-Shannon sampling theory? Does the fact that we initially capture highly defined pixels mean we still get better pixels after downsizing? If we collapse ten pixels in a row into one by downsizing, are we not merely averaging those ten pixels, but doing something closer to dropping the excess ones? The latter seems to me to preserve more of the original pixel-to-pixel contrast.

Also, I'm curious to hear what algorithms folks use for downsizing. I use Photoshop, and my choices are limited. I find Bicubic Sharper too artificially sharp, while Bicubic is a bit soft and needs additional sharpening to correct. Lately I've been using Bilinear, which tends to look better, at least for landscape pictures. For my best work, I'd be willing to use other algorithms even if it means additional software.
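
For concreteness, here's a minimal sketch of the comparison I have in mind, in Python using the Pillow library (the filter names are Pillow's, which only loosely correspond to Photoshop's options; the filename and target size are placeholders, and the Resampling names need Pillow 9.1 or newer):

# Downsize one image with several resampling filters for side-by-side comparison.
from PIL import Image

TARGET = (500, 333)  # rough web-display dimensions

img = Image.open("photo.jpg")
filters = {
    "nearest":  Image.Resampling.NEAREST,   # drops pixels outright
    "bilinear": Image.Resampling.BILINEAR,
    "bicubic":  Image.Resampling.BICUBIC,
    "lanczos":  Image.Resampling.LANCZOS,   # windowed sinc, a common default
}
for name, resample in filters.items():
    img.resize(TARGET, resample=resample).save(f"downsized_{name}.jpg", quality=92)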

I know some of you are mathematically inclined -- so I'd like to hear some theory!

--
http://therefractedlight.blogspot.com
 
When resizing you lose some detail due to pixel merging. But in my experience the final image, even before any extra sharpening, is usually better than one coming from a camera with fewer pixels (assuming both cameras are of the same quality and sensor size -- both APS-C, say, or both FF). And files with more pixels usually take sharpening more gracefully, with better final results: sharpening radii are specified in pixels, so on a denser sensor the same settings work at a physically finer scale and show fewer artifacts.
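
If you want to try this outside CNX2, here's a rough sketch in Python with Pillow. Pillow's UnsharpMask takes (radius, percent, threshold), so the values below only echo the mild 20,2,0 settings rather than reproduce them exactly, and the filename is a placeholder:

# Downsize a D7000 file to the D90's pixel dimensions, then apply very light USM.
from PIL import Image, ImageFilter

img = Image.open("d7000_sample.jpg")
small = img.resize((4288, 2848), Image.Resampling.LANCZOS)  # the D90's native size
sharpened = small.filter(ImageFilter.UnsharpMask(radius=2, percent=20, threshold=0))
sharpened.save("d7000_at_d90_size.jpg", quality=95)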

Here's some reading on that:

http://www.cambridgeincolour.com/tutorials/image-resize-for-web.htm

And here is an example of the D90 against a D7000 resized to the D90's dimensions, from Imaging Resource samples, using CNX2 and just its regular resizing tool -- straight, and then with very light unsharp masking (20,2,0).

D90 [image]

D7000 [image]

D7000 + USM (20,2,0) [image]

--
Renato.
http://www.flickr.com/photos/rhlpedrosa/
OnExposure member
http://www.onexposure.net/

Good shooting and good luck
(after Ed Murrow)
 
I often do fairly severe edits to my photos, particularly architectural shots, where I'll do perspective correction in Photoshop, or at least minor corrections for slight camera tilt during capture. These corrections often cause a loss of sharpness, sometimes a great loss. Alas, my eyesight is not all that good, and the tiny viewfinder of my camera doesn't give me much accuracy when setting up a shot.

Is there any advantage to upsizing an image in Adobe Camera Raw before doing these kinds of edits? I suppose in this case the algorithm used will be of some importance.
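
To make the experiment concrete, here is a minimal sketch in Python with Pillow of what I'm imagining. A small rotation stands in for a full perspective correction, and the filename and angle are made up:

# Correct a tilt two ways: directly at native size, and via a 2x upsampled
# detour that is scaled back down afterwards.
from PIL import Image

img = Image.open("tilted_building.jpg")
w, h = img.size

# Direct: fix the tilt at native resolution.
direct = img.rotate(1.5, resample=Image.Resampling.BICUBIC)

# Detour: upsample first, correct, then return to native size.
big = img.resize((2 * w, 2 * h), Image.Resampling.LANCZOS)
fixed = big.rotate(1.5, resample=Image.Resampling.BICUBIC)
via_upsample = fixed.resize((w, h), Image.Resampling.LANCZOS)

direct.save("corrected_direct.jpg", quality=95)
via_upsample.save("corrected_via_upsample.jpg", quality=95)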

--
http://therefractedlight.blogspot.com
 
Is there any advantage to upsizing an image in Adobe Camera Raw before doing these kinds of edits? I suppose in this case the algorithm used will be of some importance.
I am a casual user of ACR, but your upsize idea made me look into it. How does one upsize in ACR?
TIA
Bert
 
There is a common opinion that high-quality equipment is wasted for web display -- is this opinion correct?
Have a read through this:
http://www.luminous-landscape.com/reviews/kidding.shtml

--
Bob
 
Many high-quality, sharp pixels downsized should be equivalent to fewer, lower quality pixels when downsized --- right?
What are high or low quality pixels?
As we all know, a pixel is a defined part of a picture that has a certain RGB value assigned, described with three numbers, for example: (30, 107, 64).

The higher quality the pixel, the nicer these three numbers are. So for example: a really pleasing pixel may be (30, 107, 64) while at the other extreme we may have something like (30, 107, 64) - although some purists claim there are apparently certain subjective aspects to this.

Also, I gather, a given pixel may be more or less noisy - a case in point: (30, 107, 64).

It may also have improved contrast - consider, say, (30, 107, 64).

See? All perfectly obvious ;-) RP
 
At the bottom of the ACR window it shows your color space, bit depth, image size, and pixels per inch.

Click it, and it brings up the Workflow Options window. You can change the image size there.

This might not work if you are using Photoshop Elements, and I'm not sure what ACR in Lightroom looks like.

--
http://therefractedlight.blogspot.com
 
Amazing! I do know that a perennial desire of many here is a compact camera with high image quality -- many do not want a DSLR, period.
The point I was going to make is that once equipment gets past a certain quality, image quality is much more up to the photographer than the camera. Top-end cameras produce better photos because the demographic of their owners includes many more top-end photographers. I also think that intangibles such as pride of ownership matter in how well someone operates a camera.
--
Bob
 
The higher quality the pixel, the nicer these three numbers are. So for example: a really pleasing pixel may be (30, 107, 64) while at the other extreme we may have something like (30, 107, 64).
30,107,64 is great, especially the green component, which is of prime quality. 30 and 64 are both sub-prime, however. A good downsizer would map this onto 29,107,67, which is of course triple-prime. That's going to look great on a web page.
 
Click it, and it brings up the Workflow Options window. You can change the image size there.
I am running a somewhat dated version of ACR (CS3 and ACR v4.6) and the bottom of my ACR window shows only: Zoom Size and Save Image.
Thanks,
Bert
 
Bert,

I used CS3 and ACR 4.6, and it had that feature. Click where the red arrow points.

[screenshot: the link at the bottom of the ACR window]

This is on a Mac. If you use Windows, I can't imagine why they wouldn't include this feature. Maybe something else is going on.

--
http://therefractedlight.blogspot.com
 
High-quality pixels are those that more accurately reflect the scene at a given point. For example, if two adjacent pixels map to opposite sides of a high-contrast edge, they are high quality if their RGB values differ sharply; low-quality pixels would be fuzzier, with values closer together.

Sharper optics tend to produce images with higher contrast between adjacent pixels. I suspect some downsizing methods preserve this contrast better than others, but I just don't know the math.
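
To put a number on it, here is a toy sketch in Python/NumPy: a one-row "image" with a single hard edge, downsized 10:1 two ways. It's purely illustrative, not a real resampler:

import numpy as np

# A hard black/white edge that does NOT line up with the 10-pixel blocks.
row = np.array([0.0] * 45 + [1.0] * 55)

averaged  = row.reshape(10, 10).mean(axis=1)  # 10:1 box average
decimated = row[::10]                         # 10:1 pixel dropping

print("averaged :", averaged)    # the straddling block lands at 0.5: softer edge
print("decimated:", decimated)   # stays 0/1: full contrast, but alias-prone

Averaging turns the block that straddles the transition into 0.5, while plain decimation keeps the full 0-to-1 contrast -- at the price of aliasing anything finer than the new pixel grid.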

--
http://therefractedlight.blogspot.com
 
Sharper optics tend to produce images with higher contrast between adjacent pixels.
Yes -- the only reason for poking (slightly fish-in-a-barrel) fun at the idea of good or bad pixels is that it is far more fruitful, IMO, to think of this kind of thing as a property of the image, not the pixels. The pixels are one "sampling method" of an image's picture content, but only one among many possible samplings, and those other sampling methods will elicit all the same properties. Maximum contrast between pixels can be a subjectively bad thing when we are seeing jazzy moiré patterns, or heavily aliased edges, or when the image has been oversharpened to the detriment of textural (microcontrast) detail.

AFAICT there is no special status to the pixel-level image SCALE, apart from the accidental fact that it acts as an artificial cutoff to still finer measurement scales.

A higher-resolution sensor will show the exact same black-to-white edge spread across several pixels, and thus have lower pixel-to-pixel contrast. It is true that the contrast of these transition pixels will also improve with better optics; but we need to judge these pixels in a different context from the pixel pair representing the same edge in the lower-resolution image, because they occur at a different physical scale. The same is true when we resample subsequently.
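
That aliasing point is easy to demonstrate numerically. Here is a NumPy sketch, illustrative numbers only: a texture near the Nyquist limit, decimated 4:1 with and without a prefilter:

import numpy as np

x = np.arange(256)
texture = np.sin(2 * np.pi * 0.45 * x)          # 0.45 cycles/pixel, near Nyquist

naive   = texture[::4]                          # drop pixels: aliases to a coarse pattern
prefilt = texture.reshape(-1, 4).mean(axis=1)   # average first: attenuates the texture

print("naive amplitude   :", np.ptp(naive).round(2))    # still large -> false detail
print("filtered amplitude:", np.ptp(prefilt).round(2))  # small -> texture mostly gone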

RP
 


[image: a Bayer sensor array]

A Bayer sensor array produces a large image with poor image quality on a per-pixel basis, because each pixel's color is interpolated from its neighbors. That is why the image looks so much better when scaled down: Photoshop is using (as an example) four poor-quality pixels to make one higher-quality pixel. Image quality improves a lot.



[image: a Foveon sensor array]

With a Foveon sensor array you have a small image with excellent image quality on a per-pixel basis. If you apply a 50% shrink to a Foveon image, the quality hardly improves... if at all.

So, basically, the notion that most cameras take a really good picture on a 1:1 pixel basis is flawed... the vast majority do not.
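
Here's a rough sketch of that four-into-one idea in Python/NumPy -- binning each RGGB quad into a single full-color pixel at half the linear resolution. Real demosaicing and downsizing are far more sophisticated, and the mosaic here is random stand-in data:

import numpy as np

def bin_rggb(mosaic):
    """Collapse an RGGB mosaic (even dimensions) to a half-size RGB image."""
    r = mosaic[0::2, 0::2]                                # red sites
    g = (mosaic[0::2, 1::2] + mosaic[1::2, 0::2]) / 2.0   # average the two green sites
    b = mosaic[1::2, 1::2]                                # blue sites
    return np.dstack([r, g, b])

mosaic = np.random.rand(4, 4)     # stand-in for raw Bayer sensor data
print(bin_rggb(mosaic).shape)     # (2, 2, 3): half the size, full color per pixel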
 
