Downsizing images and initial image quality

Mark Scott Abeln wrote:

Many high-quality, sharp pixels, when downsized, should be equivalent to fewer, lower-quality pixels --- right?
Actually better.

A higher-resolution image, when downsampled properly to a lower resolution, should result in higher quality than an image acquired directly at that lower resolution.
So how does this fit into sampling theory according to Nyquist and Shannon?
There is a very strong connection here, though it does not come directly from Shannon.

Downsampling using linear methods is often not properly implemented. That is why one has to use heuristics-based sharpening for a pleasing effect. However, sampling theory does provide some notions for determining "optimal sharpening" filters. For example, displayed below is the "optimal" sharpening filter for a simple 2x2 linear interpolation downsampling.
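The referenced filter image is not reproduced here, but as a rough sketch of the pipeline being described (a 2x2 averaging downsample followed by a sharpening step), something like the following numpy/scipy snippet applies. The sharpening kernel values are a generic placeholder, not the derived "optimal" filter from the figure.

```python
# Illustration only: 2x2 linear (averaging) downsample, then a sharpening pass.
import numpy as np
from scipy.ndimage import convolve

img = np.random.rand(512, 512)            # stand-in for a grayscale image

# 2x2 averaging downsample: filter with a 2x2 box, keep every 2nd sample
box = np.full((2, 2), 0.25)
small = convolve(img, box, mode="reflect")[::2, ::2]

# Placeholder sharpening kernel (sums to 1, so overall brightness is unchanged);
# the actual "optimal" kernel would be derived from the response of the 2x2 average
sharpen = np.array([[ 0.0, -0.25,  0.0],
                    [-0.25,  2.0, -0.25],
                    [ 0.0, -0.25,  0.0]])
small_sharpened = convolve(small, sharpen, mode="reflect")
```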


Does the fact that we are initially capturing highly-defined pixels mean we still get better pixels after downsizing?
Yes, if you compare the resulting lower-resolution target image to an image acquired directly at the target resolution.
So if we eliminate ten pixels in a row by downsizing, we aren't merely doing an average of those ten pixels, but instead are doing something closer to dropping the excess pixels? The latter seems to me to preserve more original pixel contrast.
Basically, with better methods you can have a sharper cut-off filter than a simple averaging filter.
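To make "sharper cut-off" concrete, here is a small numpy comparison of the frequency response of a 2-tap averaging filter versus a longer windowed-sinc low-pass designed for the same factor-of-2 reduction; the tap count and Hamming window are arbitrary illustrative choices.

```python
import numpy as np

factor = 2                    # downsampling factor
fc = 0.5 / factor             # new Nyquist frequency: 0.25 cycles/sample

# Simple 2-tap averaging ("box") filter
box = np.array([0.5, 0.5])

# Windowed-sinc low-pass with the same cutoff; 33 taps and a Hamming window
# are arbitrary illustrative choices
N = 33
n = np.arange(N) - (N - 1) / 2
sinc_lp = 2 * fc * np.sinc(2 * fc * n) * np.hamming(N)
sinc_lp /= sinc_lp.sum()      # normalize to unity gain at DC

freqs = np.fft.rfftfreq(1024)
H_box = np.abs(np.fft.rfft(box, 1024))
H_sinc = np.abs(np.fft.rfft(sinc_lp, 1024))

# A frequency above the new Nyquist aliases after 2x decimation unless suppressed
k = np.argmin(np.abs(freqs - 0.375))
print("gain at 0.375 cycles/sample: box = %.3f, windowed sinc = %.4f"
      % (H_box[k], H_sinc[k]))
```

The averaging filter still passes a large fraction of the energy above the new Nyquist frequency, which shows up as aliasing after decimation; the windowed sinc falls off much more steeply.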
Also, I'm curious to see what algorithms folks use for downsizing. I use Photoshop and my choices are limited. I find Bicubic Sharper to look too artificially sharp, while Bicubic is a bit soft and needs additional sharpening to correct. Lately I've been using Bilinear, which tends to look better, at least for landscape pictures. For my best work, I'd be willing to use other algorithms even if it means I have to use additional software.
My impression is that Photoshop uses filters designed for interpolation (upsampling) for downsampling, without proper pre-filtering. See the image above for one way they could have derived the "optimal sharpening" filters.

Joofa

--
Dj Joofa
 
OK, so a higher-quality image will retain its higher quality even after reduction in size. I suspected this was true, even if it didn't make immediate sense.
Downsampling using linear methods is often not properly implemented. That is why one has to use heuristics-based sharpening for a pleasing effect. However, sampling theory does provide some notions for determining "optimal sharpening" filters. For example, displayed below is the "optimal" sharpening filter for a simple 2x2 linear interpolation downsampling.

Interesting. It appears to enhance pixels diagonal to the target pixel.
My impression is that Photoshop uses filters designed for interpolation (upsampling) for downsampling, without proper pre-filtering. See the image above for one way they could have derived the "optimal sharpening" filters.
I found this Wikipedia article on downsampling ( http://en.wikipedia.org/wiki/Downsampling ), which led me to more references. The important quote is:
"Since downsampling reduces the sampling rate, we must be careful to make sure the Shannon-Nyquist sampling theorem criterion is maintained. If the sampling theorem is not satisfied then the resulting digital signal will have aliasing.

"To ensure that the sampling theorem is satisfied, a low-pass filter is used as an anti-aliasing filter to reduce the bandwidth of the signal before the signal is downsampled; the overall process (low-pass filter, then downsample) is called decimation."
Which is a fancy way of saying that the image has to be blurred before making it smaller. How much blur, and what kind of blur to use, is still problematic to me (the article recommends a sinc filter, but I don't know how to implement it), but this makes sense. I've long been familiar with the Nyquist sampling theorem:
http://en.wikipedia.org/wiki/Shannon-Nyquist_sampling_theorem
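The decimation recipe from the quote (low-pass filter, then subsample) can be sketched in a few lines of scipy; a Gaussian stands in here for the ideal sinc filter, and the sigma heuristic is only a starting point to experiment with.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decimate(img, factor, sigma=None):
    """Low-pass filter, then keep every `factor`-th sample in each direction.

    A Gaussian stands in for the ideal sinc anti-aliasing filter; the default
    sigma is a rough heuristic, not a derived optimum.
    """
    if sigma is None:
        sigma = factor / 2.0            # scale the blur with the reduction factor
    blurred = gaussian_filter(img, sigma=sigma)
    return blurred[::factor, ::factor]

img = np.random.rand(1200, 1800)        # stand-in for a grayscale image
small = decimate(img, factor=4)         # 300 x 450 result
```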

The following source has a great description of the issues here, as well as a highly critical test target to test resizing algorithms:
http://www.xs4all.nl/~bvdwolf/main/foto/down_sample/down_sample.htm

Playing around with the examples given there quickly showed me, in a very obvious way, some problems with the various Photoshop downsizing algorithms that I've used. Now clearly, I get fairly good results most of the time, but some images are problematic and I really didn't know why they resized poorly. As I've upgraded my camera equipment, I've noticed downsizing problems I didn't experience before. I like nice crisp images, but sharpening after downsizing has always been problematic for me, with some approaches giving excessive artifacts: I sometimes get images that are fairly soft throughout, with excessive halos around sharp edges. For my best work, I retouch after all this processing, but that is time-consuming, and I have a project where I have to reduce hundreds of images; they need to be of top quality without taking too much of my time.

Now since I just use Photoshop, my choices are limited, but I think that I'll first use a little Gaussian blur before doing a downsize, experimenting with the critical ring target I mentioned above to see what radius of blur is appropriate for a given downsize amount. Next I will evaluate other software packages that implement quality resampling methods. The most obvious to me is ImageMagick ( http://www.imagemagick.org ), which is free and will work on my Mac; it is a command-line utility, but that's fine with me. I also have Gimp, which may have some decent resize utilities. Ultimately, it would be nice to get a good image resizing plugin for Photoshop.
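For reference, that pre-blur-then-resize experiment looks roughly like this when scripted with the Pillow library instead of Photoshop; the blur-radius formula is just a guess to be tuned against the ring target, and the file names are placeholders.

```python
from PIL import Image, ImageFilter

def preblur_and_resize(path, new_width):
    im = Image.open(path)
    factor = im.width / new_width
    # Guessed starting point: blur radius grows with the reduction factor;
    # tune this against the concentric-ring test target.
    radius = max(0.0, 0.5 * (factor - 1))
    blurred = im.filter(ImageFilter.GaussianBlur(radius=radius))
    new_height = round(im.height * new_width / im.width)
    return blurred.resize((new_width, new_height), Image.BILINEAR)

small = preblur_and_resize("rings_test_target.tif", 400)   # hypothetical file
small.save("rings_400px.tif")
```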

Although ImageMagick offers many algorithms, they recommend Lanczos for downsizing and Mitchell for upsizing photographic images.

My major consideration will be integrating the resize process into my workflow; as I mentioned, I'll have hundreds of photos which need to be resized to dimensions that I won't know until quite late in the production process. Once I get the final image dimensions, I'll have to do exact resizing and then final sharpening.
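One way to script that batch step is to drive ImageMagick's convert tool from Python, along the lines of the sketch below; the folder names, target dimensions, and unsharp parameters are placeholders to be filled in once the final sizes are known.

```python
import subprocess
from pathlib import Path

SRC = Path("masters")          # hypothetical folder of full-size images
DST = Path("web_versions")     # hypothetical output folder
TARGET = "1200x800"            # placeholder: final dimensions come later

DST.mkdir(exist_ok=True)
for src in sorted(SRC.glob("*.tif")):
    out = DST / (src.stem + ".jpg")
    # -filter Lanczos selects the downsizing filter ImageMagick recommends;
    # the -unsharp values are only a starting point for final sharpening.
    # (Newer ImageMagick releases invoke this as "magick" instead of "convert".)
    subprocess.run([
        "convert", str(src),
        "-filter", "Lanczos",
        "-resize", TARGET,
        "-unsharp", "0x0.75+0.75+0.008",
        "-quality", "92",
        str(out),
    ], check=True)
```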

Thanks to everyone!

--
http://therefractedlight.blogspot.com
 
That only works well if you are resizing by exact integer multiples, and even then only for certain types of images: it would work poorly for photographs, especially on upsizing, since it would severely pixelate. (Video game designers use some quite esoteric upsizing algorithms to ensure that upsized graphics scale well, particularly when emulating legacy video games on new platforms.)

What the ImageMagick documentation says about box resizing:
"This causes all sorts of artifacts and Moiré or Aliasing effects when both shrinking images and enlarging."
This is not suitable for photographs, especially if there are any patterns in the image, since it would generate artifacts.

--
http://therefractedlight.blogspot.com
 
Actually, the 'Box' resize filter works for all kinds of pictures, and it doesn't produce Moiré effects if you do an exact integer reduction (binning). And why not downscale in that manner if your main priority is not to worsen the quality?
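For what it's worth, exact integer binning is easy to express directly; a small numpy sketch (the image and factor are placeholders):

```python
import numpy as np

def bin_image(img, k):
    """Average non-overlapping k x k blocks (exact integer reduction)."""
    h, w = (img.shape[0] // k) * k, (img.shape[1] // k) * k
    img = img[:h, :w]                        # trim so dimensions divide evenly
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

img = np.random.rand(3000, 4500)             # stand-in for a grayscale image
small = bin_image(img, 3)                    # 1000 x 1500 result
```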

Upscaling, of course, is a completely different issue.
 
Mark Scott Abeln wrote:

OK, so a higher-quality image will retain its higher quality even after reduction in size. I suspected this was true, even if it didn't make immediate sense.
Yes, it is true.
Which is a fancy way of saying that the image has to be blurred before making it smaller. How much blur, and what kind of blur to use, is still problematic to me (the article recommends a sinc filter, but I don't know how to implement it), but this makes sense.
The amount of blur can be calculated for a particular algorithm. I did provide a picture of the blur amount and shape for a 2x2 linear downsampling in my previous message.
Playing around with the examples given there quickly showed me, in a very obvious way, some problems with the various Photoshop downsizing algorithms that I've used. Now clearly, I get fairly good results most of the time, but some images are problematic and I really didn't know why they resized poorly.
Because, in my understanding, Photoshop does not provide the appropriate pre-blur for its chosen methods of downsampling. For many images it won't matter, but for some, perhaps where the downsampling ratio is high, problems will start to appear.
Now since I just use Photoshop, my choices are limited, but I think that I'll first use a little Gaussian blur before doing a downsize, experimenting with the critical ring target I mentioned above to see what radius of blur is appropriate for a given downsize amount.
Yes, in my understanding that is the source of the problem with Photoshop. One has to provide an external blurring step, which should have been taken care of as part of a proper downsampling operation.

Joofa

--
Dj Joofa
 
