State of the art in upsizing algorithms?

Started Jun 9, 2014 | Discussions thread
The_Suede Contributing Member • Posts: 652
Re: State of the art in upsizing algorithms?

JimKasson wrote:

I've long thought that was the way to go, but I haven't seen any evidence of commercial tools working that way. Demosaicing involves building a model of the scene based on sparse samples, and that model should be valuable in producing an upsampled result; it seems a shame to throw it away.

Take a look at some of the frequency-domain methods in this survey paper:

If you're adventurous, you could use one of the frequency domain methods and effectively do your resampling when converting back to the xy domain. However, my experience with image frequency domain processing is quite limited, and I have seen minor divergence from expected results, even with 64-bit floating point precision.
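To make that concrete: upsampling "for free" on the way back from the frequency domain amounts to zero-padding the centred spectrum before the inverse transform, which is equivalent to ideal sinc interpolation. A minimal numpy sketch (the function name and the energy renormalisation factor are my own choices, not anything from a specific tool):

```python
import numpy as np

def fft_upsample(img, factor):
    """Upsample a 2-D image by zero-padding its centred spectrum.

    Equivalent to ideal sinc interpolation; the resampling happens
    as part of the inverse transform back to the xy domain.
    """
    h, w = img.shape
    H, W = h * factor, w * factor
    spec = np.fft.fftshift(np.fft.fft2(img))        # move DC to the centre
    pad_h, pad_w = (H - h) // 2, (W - w) // 2
    big = np.zeros((H, W), dtype=complex)
    big[pad_h:pad_h + h, pad_w:pad_w + w] = spec    # embed the old spectrum
    out = np.fft.ifft2(np.fft.ifftshift(big)).real
    return out * factor * factor                    # renormalise for the size change

# Sanity check: a constant image must stay constant after upsampling.
flat = np.full((8, 8), 0.5)
up = fft_upsample(flat, 2)
```

The small divergences mentioned above show up here too: the `.real` cast quietly discards tiny imaginary residue that accumulates even in 64-bit floats.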

If you stay in the space domain, there's a possible win in not going to 16-bit integer precision between demosaicing and resampling. That win should be more significant as the scaling methods and sharpening filters get more complex.
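The cost of that intermediate integer step is easy to quantify. A small sketch (the gain values are arbitrary stand-ins for demosaicing and resampling stages) comparing an all-float pipeline against one that rounds to 16-bit integers between stages:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.random(10000)              # stand-in for linear scene data in [0, 1)

def to_u16(x):
    """Quantise to a 16-bit integer buffer."""
    return np.round(x * 65535.0).astype(np.uint16)

# All-float pipeline: demosaic-stage gain, then resampling-stage gain.
full_float = (signal * 0.9) * 1.1

# Pipeline that rounds to 16-bit integers between the two stages.
intermediate = to_u16(signal * 0.9).astype(np.float64) / 65535.0
with_round = intermediate * 1.1

err = np.abs(full_float - with_round).max()   # worst-case quantisation error
```

One round trip costs up to about half an LSB (~8e-6 here); each additional quantised intermediate adds its own half-LSB, which is why the win grows with more complex scaling and sharpening chains.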



The data accuracy is never better than 14 bits of precision, so only recursive algorithms - where rounding errors can accumulate and resonate - are really affected by the choice of floating point. And since you have to run maths functions anyway, float isn't that big a performance concern on modern processors.


One of the things about the sparse sampling is that it is discrete in nature, set in a fixed pattern of sparsity (the Bayer pattern).

The Bayer pattern is the densest sampling lattice anywhere in the chain, so it (or rather, a mathematically perfect re-modelling of the underlying scene from it) defines the maximum resolution you can hope to achieve.
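The per-channel sample densities behind that limit are easy to show. A small sketch of an RGGB Bayer mosaic (mask layout per the standard RGGB convention):

```python
import numpy as np

def bayer_masks(h, w):
    """Boolean per-channel sample masks for an RGGB Bayer mosaic."""
    r = np.zeros((h, w), bool); r[0::2, 0::2] = True   # red on even rows/cols
    b = np.zeros((h, w), bool); b[1::2, 1::2] = True   # blue on odd rows/cols
    g = ~(r | b)                                       # green fills the rest
    return r, g, b

r, g, b = bayer_masks(8, 8)
# Green carries half the samples; red and blue a quarter each.
# Everything beyond those densities has to come from modelling, not data.
```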

You don't actually GAIN anything by trying to merge the re-population of the sparse samples with the scaling, since your limitation is the original sparse pattern. What you could gain is some computational efficiency - IF you managed to write sufficiently streamlined code for it. But I doubt that is possible, given the problems I already have just keeping the data pipeline fed on modern processors.

What I guess you COULD do with great success is minimize the scaling aliasing - meaning a slightly blurry output image, but without nasty jaggies. You still won't "gain" any real resolution.
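The trade-off shows up even in one dimension. A sketch contrasting nearest-neighbour upscaling (jaggy: only the original values, with hard jumps) against linear interpolation (smoother: intermediate grey values across the edge):

```python
import numpy as np

edge = np.array([0., 0., 0., 1., 1., 1.])   # a 1-D step edge

# Nearest-neighbour 2x: sharp but jaggy - no intermediate values at all.
nn = np.repeat(edge, 2)

# Linear 2x: the edge is spread over intermediate values - slightly
# blurry, but without the hard staircase jump.
x_new = np.linspace(0, len(edge) - 1, 2 * len(edge))
lin = np.interp(x_new, np.arange(len(edge)), edge)
```

In 2-D the same effect is what turns diagonal staircase artifacts into smooth (if softer) edges.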


Well-written code spends about 30% of its time waiting for data to process. As long as you keep your working dataset in L2/L3, it doesn't make (much of) an access-speed difference which thread you're in, if the code is optimized.
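Keeping the working set cache-resident usually means tiling. A sketch of the pattern (the tile size and the stand-in per-pixel operation are illustrative assumptions; a 256x256 float32 tile is ~256 KiB, comfortably inside a typical L2):

```python
import numpy as np

def process_tiled(img, tile=256):
    """Process an image tile by tile so each tile stays cache-resident.

    The per-tile operation here is a trivial gain; in a real pipeline
    it would be the demosaic/resample/sharpen kernel chain.
    """
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            block = img[y:y + tile, x:x + tile]     # numpy clips at the edges
            out[y:y + tile, x:x + tile] = block * 0.25
    return out

img = np.arange(300 * 300, dtype=np.float32).reshape(300, 300)
res = process_tiled(img)
```

Fusing the whole kernel chain inside the tile loop is what keeps the 30% stall figure from getting worse as the chain grows.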
