State of the art in upsizing algorithms?

Started Jun 9, 2014 | Discussions thread
Detail Man • Forum Pro • Posts: 17,171
Re: State of the art in upsizing algorithms?

Mark Scott Abeln wrote:

hjulenissen wrote:

The baseline for image scaling is working in some desirable domain (gamma vs linear, colour space, ...) and choosing the appropriate linear filter (some time vs frequency trade-off, for instance lanczos2/3). For some applications you might want to do post sharpening tightly integrated with the scaling operation. The ImageMagick link that you found is an excellent resource.
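
For what it's worth, a minimal sketch of that kind of plain linear-filter resize, assuming Pillow is available (the file names are placeholders):

```python
# Plain Lanczos resampling; Pillow's LANCZOS filter is a three-lobe
# Lanczos kernel, one of the linear filters mentioned above.
from PIL import Image

img = Image.open("source.tif")                    # placeholder input
w, h = img.size
up = img.resize((2 * w, 2 * h), resample=Image.LANCZOS)
up.save("upsized.tif")
```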

Dr. Robidoux, the author of the paper, gave me the link himself.

One striking finding in the link is that while downsampling ought to be done in a linear space, upsampling should instead be done in a gamma-corrected space, or in another gamma-like space such as Lab.
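
A minimal sketch of the "downsample in linear light" part of that finding, assuming numpy and Pillow; the plain 2.2 power is only an approximation of the exact sRGB transfer curve, and the file names are placeholders:

```python
import numpy as np
from PIL import Image

src = np.asarray(Image.open("source.tif").convert("RGB"), dtype=np.float64) / 255.0
linear = src ** 2.2                                   # approximate sRGB -> linear

def resize_channel(ch, size):
    """Resize one float channel with a Lanczos-3 kernel."""
    return np.asarray(Image.fromarray(ch.astype(np.float32), mode="F")
                      .resize(size, Image.LANCZOS))

new_size = (src.shape[1] // 2, src.shape[0] // 2)     # Pillow wants (width, height)
down = np.dstack([resize_channel(linear[..., c], new_size) for c in range(3)])
out = (np.clip(down, 0.0, 1.0) ** (1 / 2.2) * 255).round().astype(np.uint8)  # back to sRGB
Image.fromarray(out).save("downsized.tif")
```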

Now I’m not quite sure how this finding ought to be applied: should demosaicing be done after gamma correction, if I am ultimately upsizing an image?

From my reading and experimentation, it seems that demosaicing ought to be done rather late in the image processing pipeline. One manual for dcraw makes the case that it ought to be done after white balance, and for the same reasons it seems prudent to do it after color correction. But after gamma also?

One technique that I found very useful, back in 2008 with Photoshop CS3, was using ACR to produce a larger image (about 11 MP) from the 6 MP images from my old D40. I found that I could do a lot of geometric image transformation, including perspective correction, barrel-distortion removal, defringing, etc., without losing too much detail or adding too much softening, and the perception of sharpness could be restored with good downsampling algorithms and sharpening. Note that this was a terribly slow ordeal on my old iMac with 1 GB of memory, but well worth it.

When I used to shoot JPG only with earlier cameras, and used the (8-bit) PaintShop Pro 9.01 as an editor to post-process those JPGs, I would use PSP's up-sampling (Bicubic, with fixed parameters at the time) to create BMPs (not needing the alpha channels that TIFF image files include) at an integer multiple of the original JPG's pixel dimensions, and perform the editing on that copy.

I found that a great deal of the nasty-looking (too often spiky and prominent) artifacts caused by tweaking the various PSP controls were significantly smoothed, and thus attenuated, by the low-pass filtering that occurs upon down-sampling. I would usually apply some USM after down-sampling, then use PSP's export-to-JPG function, which, similar to Adobe's various offerings, turns out not to allow a JPG Quality Factor of 100%, though it is possible to specify the chroma sub-sampling mode in versions of PSP.
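
A rough sketch of that down-sample / USM / export-to-JPG sequence, using Pillow rather than PSP; the file names and parameter values are placeholders only:

```python
from PIL import Image, ImageFilter

big = Image.open("edited_big.bmp")                      # the up-sampled working copy
w, h = big.size
small = big.resize((w // 3, h // 3), Image.LANCZOS)     # down-sampling low-pass filters the artifacts
sharp = small.filter(ImageFilter.UnsharpMask(radius=1.0, percent=80, threshold=2))
# Unlike the exports mentioned above, Pillow exposes both the quality factor
# and the chroma sub-sampling (0 = 4:4:4, 1 = 4:2:2, 2 = 4:2:0).
sharp.save("final.jpg", quality=95, subsampling=0)
```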

(Whether or not in-process cropping was performed), I would at times select whole-number up-sampling (followed by down-sampling) ratios to use in that processing. Down-sampling (decimating only) by an odd integer such as 3 seems to collapse 3x3 arrays of pixels into the image data of a single pixel, although this is impractical to achieve once cropping has been done in editing.
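
A minimal sketch of collapsing 3x3 pixel blocks into single pixels with a simple box average, assuming numpy and Pillow; whether any given decimation actually averages the blocks this way is my assumption, and the file names are placeholders:

```python
import numpy as np
from PIL import Image

img = np.asarray(Image.open("big.tif").convert("RGB"), dtype=np.float64)
h, w, c = img.shape
h, w = h - h % 3, w - w % 3                                # trim so both dimensions divide by 3
blocks = img[:h, :w].reshape(h // 3, 3, w // 3, 3, c)      # group pixels into 3x3 tiles
small = blocks.mean(axis=(1, 3)).round().astype(np.uint8)  # each 3x3 tile -> one pixel
Image.fromarray(small).save("small.tif")
```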

Later, using the (16-bit processing) PSP X4.3 (which I generally only use for its 16-bit USM in my RAW>JPG workflow), similar positive effects on RGB histogram smoothness can be achieved.

If a (TIF or JPG) source image has a smaller pixel size than my desired size, I still up-sample at the TIF level prior to editing with DxO Optics Pro and/or PSP X4.3, then down-sample using a 16-bit GUI application that implements some particular Lanczos-3 algorithm (whose results appear identical, in some eyeball tests, to Lanczos-3 down-sampling of a TIF using RT 4.x).

I don't think that the application I use performs (linearized) re-sampling. I don't know about RT 4.x's re-sampling processes; that would be good to know. Considering up-sampling only, it appears (in that particular case) that gamma-corrected processing may give better results?

B-Spline, Gaussian, etc. (at least in the 8-bit XnView, which has an excellent export-to-JPG that allows straightforward user control of Quality Factor as well as chroma sub-sampling) yield very soft results. As a result, I use the (unspecified flavor of) 16-bit Lanczos-3 described above for up/down-sampling.
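
For eyeball comparisons, a small sketch that resizes with each of Pillow's available filters; Pillow does not expose B-Spline or Gaussian kernels, so this is only an analogue of the comparison above, and the file name is a placeholder:

```python
from PIL import Image

img = Image.open("source.tif")
w, h = img.size
for name in ("NEAREST", "BILINEAR", "BICUBIC", "LANCZOS"):
    out = img.resize((2 * w, 2 * h), resample=getattr(Image, name))
    out.save(f"upsized_{name.lower()}.tif")   # compare softness/ringing by eye
```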

PSP (X4, anyway) uses a Bicubic resampler (with a "sharpness/softness" adjustment slider that does not appear to have very much effect), and (I think) the Adobe apps still utilize their own "in-house" Bicubic variants.

The "better than linear filtering" commercial applications that are available seems to often rely on some kind of edge-adaptive algorithm, meaning that sharp edges (e.g. text) can be blown up sharp and smooth.

I think that there are some applications of "non-local" methods for denoising/upscaling that might benefit certain kinds of images (i.e. exploiting image self-similarity).
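
For example, a minimal sketch of one such self-similarity-based method, OpenCV's non-local-means denoiser; the parameter values and file names are placeholders:

```python
import cv2

noisy = cv2.imread("upsized.png")
# Non-local means averages similar patches found across the search window,
# exploiting the image's self-similarity.
clean = cv2.fastNlMeansDenoisingColored(noisy, None, h=6, hColor=6,
                                        templateWindowSize=7, searchWindowSize=21)
cv2.imwrite("denoised.png", clean)
```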

My photograph is of a large, historic, top-quality mosaic, and so has lots of brightly colored triangles and trapezoids separated by thin lines of mortar. This is a subject that could really benefit from an algorithm that detects and preserves edges.

There is that commercial package (Perfect Resize 8) which upsizes using fractal decomposition. I’ll see if this might work well also.

Another direction I’ve been thinking of is using software to create a Scalable Vector Graphics or SVG representation of my image — IF it can detect the edges well enough and crisply enough, then I can blend the perfectly resized vector graphics with the upsampled image. Seems plausible, but I don’t know if anyone’s tried this or what software might work best.
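
One hedged sketch of that idea, assuming the potrace command-line tracer is installed; the threshold, the file names, and the final blending step are all placeholders:

```python
import subprocess
from PIL import Image

# Threshold to a 1-bit bitmap, then let potrace trace it to SVG.
mono = (Image.open("mosaic.tif").convert("L")
        .point(lambda p: 255 if p > 128 else 0).convert("1"))
mono.save("edges.pbm")
subprocess.run(["potrace", "-s", "edges.pbm", "-o", "edges.svg"], check=True)
# Rasterising edges.svg at the target print size and blending it with the
# up-sampled photo would be a separate step.
```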

For my own humble needs (18MP APS-C, A2 printer), I have found that the sensel grid is seldom a significant limitation (but lens quality/focus can be).

If I had known that my photograph would be purchased, I’d have done a more heroic effort in my photography. But as it happens, I did use good technique (heavy tripod, manual live view focus, mirror lockup and remote shutter release) and a good lens, because I did want a quality final image for my own use.

My concern is that the final image will be printed at 100 DPI or maybe less (they haven’t yet told me the final dimensions). Certainly I’ve had good success with even lower resolution large prints, albeit heavily processed — and most civilians aren’t pixel peepers — but I do want even better results this time.

It sounds like the (composite) "down-sampling" (including presentation effects) could dominate?

DM
