State of the art in upsizing algorithms?

Started Jun 9, 2014 | Discussions thread
hjulenissen Senior Member • Posts: 2,269
Re: State of the art in upsizing algorithms?

Mark Scott Abeln wrote:

hjulenissen wrote:

If this model (or something equally good) can be built from the demosaiced image, then one could equally well do the processing in stages. The question is whether there is some unique information that needs to be propagated between the traditional blocks.

The unique information would be the actual values read from the mosaic.

Let me try to rephrase what I was trying to say:

The intuitive approach to a raw development pipeline is something à la what is done today: a stepwise "reversal" of the limitations imposed by the camera and its capture mechanisms. For a radically different (more complex) algorithm, where the blocks are made "aware" of each other or merged, to have any merit, there must be some information _besides_ the demosaiced pixels that can benefit subsequent stages.
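
To make that bottleneck concrete, here is a minimal sketch (the stage names and the trivial placeholder bodies are mine, not any real converter): in a staged pipeline, the only thing a block ever receives is the pixel array produced by the previous block, so anything not encoded in those pixels cannot help later stages.

import numpy as np

# Toy stand-ins for the usual blocks; each takes an image array and
# returns an image array, and knows nothing about the other blocks.
def demosaic(img):  return img   # placeholder for CFA interpolation
def denoise(img):   return img   # placeholder for noise reduction
def sharpen(img):   return img   # placeholder for sharpening
def resample(img):  return img   # placeholder for up/downsizing

def staged_pipeline(raw, stages=(demosaic, denoise, sharpen, resample)):
    img = raw
    for stage in stages:
        img = stage(img)   # information flows forward only via this array
    return img

out = staged_pipeline(np.zeros((4, 6)))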

There are common algorithms that assume correlations between color channels, which is a rather perilous assumption in many cases, I’d think, particularly where there are fine color differences that need to be preserved.

There is ample evidence that the color-difference channels _tend_ to be spatially smooth for natural scenes. Well-known exceptions include bird feathers.

http://en.wikipedia.org/wiki/YCbCr
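
For what it's worth, the claim is easy to check on your own images. A minimal sketch, using the full-range BT.601 coefficients from the linked article (the function name and the crude gradient-energy measure are just my choices); on typical natural images Cb and Cr come out markedly smoother than Y:

import numpy as np

def ycbcr_gradient_energy(rgb):
    # rgb: float array of shape (H, W, 3) with values in 0..1
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b
    def energy(ch):  # mean absolute horizontal + vertical gradient
        return np.abs(np.diff(ch, axis=0)).mean() + np.abs(np.diff(ch, axis=1)).mean()
    return {"Y": energy(y), "Cb": energy(cb), "Cr": energy(cr)}

# e.g. with an 8-bit image loaded into 'img': ycbcr_gradient_energy(img / 255.0)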

I wonder if changing the transform basis vectors to square waves might help? That might eliminate some artifacts. But I’m not a mathematician.

I think of a finite-length DFT as a linear weighting applied to the input sequence. The DFT has some nice properties (it is orthogonal, orthonormal with the right scaling, and it can be computed cheaply via the FFT). Further, it can be interpreted in terms of "frequencies", which is a lot more intuitive than the "seemingly random correlator XYZ" one might get from running some kind of PCA or ICA decomposition.
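
As a small illustration of both points (the DFT written out as the matrix it is, and the square-wave basis asked about above, which is essentially a Walsh-Hadamard transform), here is a numpy sketch; the size and normalization are chosen only for the demonstration:

import numpy as np

n = 8
k = np.arange(n)
# The length-n DFT as an explicit matrix: row k holds the complex-exponential
# weights applied to the input sequence. With 1/sqrt(n) scaling it is unitary.
F = np.exp(-2j * np.pi * np.outer(k, k) / n) / np.sqrt(n)
print(np.allclose(F.conj().T @ F, np.eye(n)))   # True: orthonormal basis

# A "square wave" basis in the same spirit: the Sylvester/Walsh-Hadamard matrix.
H = np.array([[1.0]])
while H.shape[0] < n:
    H = np.block([[H, H], [H, -H]])
H /= np.sqrt(n)
print(np.allclose(H.T @ H, np.eye(n)))          # True: also orthonormal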

I am not so convinced that applying some linear transform is magically going to make the problem simple. Then again, I could be wrong.

I would rather think that a problem description/solution that explicitly modelled 1) the signals that we expect to encounter (possibly adapted to the actual image), and 2) the analysis carried out in the human visual system, would hold more promise. Now, understanding the HVS is not something that I pretend to do, and last I heard, science struggles with it as well.
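
As a toy illustration of point 1, here is a 1D analogue (entirely my own construction, not anyone's actual algorithm) where reconstruction from incomplete samples is posed as explicit estimation under a stated signal model, namely a quadratic smoothness prior:

import numpy as np

# Observe every other sample of a signal and estimate the rest by minimizing
#   ||A x - y||^2 + lam * ||D x||^2
# where the second term encodes the assumption "the signal is smooth".
n = 64
rng = np.random.default_rng(0)
x_true = np.cumsum(rng.normal(size=n))      # a smooth-ish test signal
obs = np.arange(0, n, 2)                    # indices that were actually measured
A = np.eye(n)[obs]                          # sampling operator
y = A @ x_true
D = np.diff(np.eye(n), axis=0)              # finite-difference operator
lam = 1.0
x_hat = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y)
print("RMS error:", np.sqrt(np.mean((x_hat - x_true) ** 2)))

Point 2 would amount to replacing the simple quadratic error with something perceptually motivated, which is exactly the part that is hard.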

-h
