State of the art in upsizing algorithms?

Started Jun 9, 2014 | Discussions
Re: Frequency domain

Mark Scott Abeln wrote:

My experience with frequency domain processing was disappointing, but I admit that I really don’t remember most of the math, except vaguely. Certainly, loss of precision is important, and I’m seeing a number of places where lack of precision in general harms the final image. Perhaps this is a good opportunity to experiment with extended precision arithmetic in Mathematica or whatever else is out there — since I’m concerned with a one-off image, long processing times are hardly an issue. However, I do need speed for the typical large project that I do a couple times a year.

I am in general suspicious of assuming viewing distances, but, if you do that, one potential advantage of frequency domain approaches is the ability to throw the human contrast sensitivity functions into the mix.

With respect to precision, you're using Octave, right? I use Matlab, which I think is very similar. In Matlab, you have to work at it to get the code to use anything less than 64-bit floating point.
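As a quick illustration of why the 64-bit default matters (Python/NumPy here as a stand-in for Matlab/Octave; the 100-iteration gamma round-trip pipeline is contrived, purely to make the drift visible):

```python
import numpy as np

# Run the same repeated gamma encode/decode round-trip in float32 and
# float64, then compare how far each drifts from the original values.
reference = np.linspace(0.0, 1.0, 1001, dtype=np.float64)
x64 = reference.copy()
x32 = reference.astype(np.float32)

for _ in range(100):
    x32 = (x32 ** 2.2) ** (1 / 2.2)  # stays float32 throughout
    x64 = (x64 ** 2.2) ** (1 / 2.2)

err32 = float(np.max(np.abs(x32 - reference)))
err64 = float(np.max(np.abs(x64 - reference)))
print(err64 < err32)  # double precision drifts far less
```

The single-precision error accumulates on the order of 1e-7 per operation, which is already within a level or two of an 8-bit quantization step after enough passes.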

Jim

JimKasson's gear list:
Nikon Z7 Fujifilm GFX 100 Sony a7R IV Sony a9 II +1 more
Re: Frequency domain

JimKasson wrote:

With respect to precision, you're using Octave, right? I use Matlab, which I think is very similar. In Matlab, you have to work at it to get the code to use anything less than 64-bit floating point.

It appears that Octave uses 64-bit floating point also.

Mark Scott Abeln's gear list:
Nikon D200 Nikon D7000 Nikon D750 Nikon AF-S DX Nikkor 35mm F1.8G Nikon AF Nikkor 50mm f/1.8D +2 more
Projection of convex sets?

There are some papers on this, and also some examples of a time-optimized version.

It's an iterative solution, where the information you mentioned in the previous post is successively merged into each other while minimizing a standard error function.

It's hard to make this kind of algorithm "sharp", since you need good rejection of noise to avoid ringing in the solution. What you basically get is a very high quality result (good image shape correlation, high PSNR) but with slightly lower HF detail contrast.

It normally responds well to sharpening and further scaling / supersampling though.
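The alternating-projection idea can be sketched in a few lines (a toy Python/NumPy sketch; the two constraint sets here, a box-blur smoothness stand-in and block-mean data consistency, are illustrative choices, not the published algorithms):

```python
import numpy as np

def pocs_upscale(low_res, factor=2, iterations=50):
    """Toy POCS-style upscaling: alternately project the estimate onto
    (1) a smoothness set and (2) the set of images whose block averages
    reproduce the observed low-resolution pixels."""
    h, w = low_res.shape
    high = np.kron(low_res, np.ones((factor, factor)))  # initial guess

    for _ in range(iterations):
        # Projection 1: smoothness (a simple 3x3 box blur as a stand-in
        # for a real band-limitation constraint).
        padded = np.pad(high, 1, mode='edge')
        high = sum(padded[dy:dy + h * factor, dx:dx + w * factor]
                   for dy in range(3) for dx in range(3)) / 9.0

        # Projection 2: data consistency -- shift each factor x factor
        # block so its mean equals the observed low-res pixel exactly.
        blocks = high.reshape(h, factor, w, factor)
        means = blocks.mean(axis=(1, 3))
        blocks += (low_res - means)[:, None, :, None]
        high = blocks.reshape(h * factor, w * factor)
    return high

lr = np.array([[0.0, 1.0], [1.0, 0.0]])
hr = pocs_upscale(lr, factor=2, iterations=20)
# Data consistency holds: block means of the result equal the input.
print(np.allclose(hr.reshape(2, 2, 2, 2).mean(axis=(1, 3)), lr))
```

Because the data-consistency projection runs last, the output always honors the low-resolution observations; the smoothness projection is what suppresses the ringing mentioned above, at the cost of some HF contrast.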

Re: State of the art in upsizing algorithms ?

Calling on "armchair experts" to give feedback on possible improvements to the "sigmoidization" approach to image resampling: http://www.imagemagick.org/discourse-server/viewtopic.php?f=1&t=25736

(Negative comments, as in "horrid artifacts of type XXX in situation YYY make this unusable" are welcome.)
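For readers unfamiliar with the approach, here is a minimal Python/NumPy sketch of the sigmoidization round-trip (the contrast curve follows the ImageMagick-style scaled-logistic formula; the identity `resize` at the end is only a sanity check, and any linear resampling filter would go in its place):

```python
import numpy as np

def sigmoidal(x, contrast=10.0, midpoint=0.5):
    """ImageMagick-style sigmoidal contrast on values in [0, 1]:
    a logistic curve rescaled so 0 maps to 0 and 1 maps to 1."""
    s = lambda v: 1.0 / (1.0 + np.exp(contrast * (midpoint - v)))
    lo, hi = s(0.0), s(1.0)
    return (s(x) - lo) / (hi - lo)

def inverse_sigmoidal(x, contrast=10.0, midpoint=0.5):
    """Exact inverse of sigmoidal(), i.e. the +sigmoidal-contrast step."""
    lo = 1.0 / (1.0 + np.exp(contrast * midpoint))
    hi = 1.0 / (1.0 + np.exp(contrast * (midpoint - 1.0)))
    v = lo + x * (hi - lo)
    return midpoint - np.log(1.0 / v - 1.0) / contrast

def sigmoidized_resize(img, resize):
    """Resample 'through' the sigmoidized space: flatten the contrast,
    resample with any linear filter, then restore the contrast."""
    flat = inverse_sigmoidal(img)
    return sigmoidal(resize(flat))

# Round-trip sanity check with an identity "resize".
x = np.linspace(0.0, 1.0, 11)
print(np.allclose(sigmoidized_resize(x, lambda a: a), x))
```

The point of the detour is that halos and ringing produced by the filter land in the flattened space, where the subsequent contrast restoration compresses them near the extremes.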

Re: State of the art in upsizing algorithms ?
bizi clop's gear list:
Canon PowerShot G2 Sigma DP2 Merrill Canon PowerShot SX50 HS
Re: State of the art in upsizing algorithms ?

bizi clop wrote:

BTW, do you know this method? I think it's quite amazing:

http://www.cs.huji.ac.il/~raananf/projects/lss_upscale/index.html

That’s pretty impressive. It does have a bit of a fake look about it — as does Genuine Fractals — but it is rather clean and sharp. For my purposes — an image of a mosaic — I think it might be worth investigating, since I’ll have lots of edges that I’d like to keep sharp. The detail within the mosaic tesserae themselves is fairly uniform.

Mark Scott Abeln's gear list:
Nikon D200 Nikon D7000 Nikon D750 Nikon AF-S DX Nikkor 35mm F1.8G Nikon AF Nikkor 50mm f/1.8D +2 more
Re: State of the art in upsizing algorithms ?

You can find some Matlab code there, but I don't know how complete it is.

bizi clop's gear list:
Canon PowerShot G2 Sigma DP2 Merrill Canon PowerShot SX50 HS
Re: Frequency domain

JimKasson wrote:

I am in general suspicious of assuming viewing distances, but, if you do that, one potential advantage of frequency domain approaches is the ability to throw the human contrast sensitivity functions into the mix.

With respect to precision, you're using Octave, right? I use Matlab, which I think is very similar. In Matlab, you have to work at it to get the code to use anything less than 64-bit floating point.

Jim


Judging by the smoothness of the CSF, I would assume that one could do simple filter-banks (e.g. split into 3 bands using compact kernels, i.e. highly overlapping) and get those kinds of benefits while still working (a matter of semantics) in the spatial domain.

I remember lectures on e.g. JPEG2k where the lack of frequency selectivity in the transforms struck me (as a more audio-centric guy). The same can be said for state-of-the-art image scaling: image processing seems to "like" compact filter kernels that give little frequency selectivity and little pre/post-ringing. (The linear-phase requirement goes without saying.)
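The band-split idea above can be sketched with compact Gaussian kernels (Python/NumPy; the band weights at the end are illustrative placeholders, not values from a real contrast sensitivity function):

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with a compact truncated kernel."""
    radius = max(1, int(3 * sigma))
    t = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (t / sigma) ** 2)
    k /= k.sum()
    # Convolve rows, then columns (reflect padding preserves length).
    rows = np.apply_along_axis(
        lambda r: np.convolve(np.pad(r, radius, mode='reflect'), k, 'valid'),
        1, img)
    return np.apply_along_axis(
        lambda c: np.convolve(np.pad(c, radius, mode='reflect'), k, 'valid'),
        0, rows)

def three_band_split(img, sigmas=(1.0, 4.0)):
    """Split into low/mid/high bands; by construction the (highly
    overlapping) bands sum back to the input exactly."""
    low = gaussian_blur(img, sigmas[1])
    midlow = gaussian_blur(img, sigmas[0])
    return low, midlow - low, img - midlow

img = np.random.default_rng(0).random((16, 16))
low, mid, high = three_band_split(img)
print(np.allclose(low + mid + high, img))  # perfect reconstruction

# CSF-inspired reweighting (weights here are made up for illustration):
weighted = 1.0 * low + 1.2 * mid + 0.9 * high
```

Because the split is a sum of spatial-domain convolutions, the per-band weighting stays entirely in the spatial domain, as suggested above.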

-h

Re: State of the art in upsizing algorithms ?

Mark Scott Abeln wrote:

...

That’s pretty impressive. It does have a bit of a fake look about it — as does Genuine Fractals — but it is rather clean and sharp.

...

Just blend with the result of enlarging with a filter better at preserving texture (bicubic, even).
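In NumPy terms the blend is just a weighted average (a sketch; the two inputs stand for any two same-size enlargements, and `alpha` is a hypothetical mixing weight to tune by eye):

```python
import numpy as np

def blend(edge_friendly, texture_friendly, alpha=0.6):
    """Weighted blend of two enlargements of the same image, e.g. a
    clean-edged superresolution result with a texture-preserving
    bicubic one."""
    a = np.asarray(edge_friendly, dtype=np.float64)
    b = np.asarray(texture_friendly, dtype=np.float64)
    return alpha * a + (1.0 - alpha) * b

ones = np.full((2, 2), 1.0)
zeros = np.full((2, 2), 0.0)
print(float(blend(ones, zeros, alpha=0.6)[0, 0]))  # 0.6
```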

Re: State of the art in upsizing algorithms ?

Here is the result of

BLUR=1.414213562373 && CONTRAST=12 && INFLEXION=80 && magick input.png -colorspace RGB +sigmoidal-contrast $CONTRAST,$INFLEXION% -define filter:blur=$BLUR -filter lanczos -distort Resize 300% -sigmoidal-contrast $CONTRAST,$INFLEXION% -colorspace sRGB output.png

3x enlargement using sigmoidized elliptical weighted averaging with an EWA Lanczos stretched to minimize jaggies

Re: State of the art in upsizing algorithms ?

With less halo, thanks to the use of the EWA RobidouxSoft cubic filter, but more aliasing (using Elliptical Weighted Averaging with the standard smoothing cubic B-spline, a.k.a. Spline in ImageMagick, reduces the aliasing):

CONTRAST=25 && INFLEXION=85 && magick input.png -colorspace RGB +sigmoidal-contrast $CONTRAST,$INFLEXION% -define filter:c=.1601886205085204 -filter Cubic -distort Resize 300% -sigmoidal-contrast $CONTRAST,$INFLEXION% -colorspace sRGB output.png

3x enlargement with sigmoidized EWA filtering with the RobidouxSoft Keys cubic

Re: State of the art in upsizing algorithms ?

Note that the "original" kitchen test image used for the above examples was sharpened. Enlarging sharpened images brings up issues that are not otherwise present.

Re: State of the art in upsizing algorithms ?

Nicolas Robidoux wrote:

Note that the "original" kitchen test image used for the above examples was sharpened. Enlarging sharpened images brings up issues that are not otherwise present.

Is "sharpened" (in this context) similar to "aliased" or "non-Nyquist"? Screenshots of text may have high contrast and various degrees of aliasing (see the image linked below).

It seems that font designers/text renderers have different philosophies when it comes to trading off position, weight, contrast, etc. in downsampling a high-resolution vectorized prototype to a low-resolution pixelized output. One might say that they use different filtering strategies (often hand-tuning on a letter-by-letter basis).

If sharpening were a simple high-pass filter with no clipping, then it would be an invertible function, and image scaling ought to tolerate it, right? But sharpening is commonly signal-dependent, and high-contrast edges will often be clipped, introducing aliasing.
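The invertibility point can be made concrete: a linear, shift-invariant high-boost filter whose frequency response never reaches zero can be undone exactly by dividing in the frequency domain (toy 1-D Python/NumPy sketch with a circular 3-tap blur; real sharpeners are rarely this well-behaved):

```python
import numpy as np

def sharpen(signal, amount=0.8):
    """High-boost sharpen: x + amount * (x - blur(x)),
    using a circular 3-tap moving-average blur."""
    blur = (np.roll(signal, 1) + signal + np.roll(signal, -1)) / 3.0
    return signal + amount * (signal - blur)

def unsharpen(sharpened, amount=0.8):
    """Exact inverse: divide by the filter's frequency response,
    which is 1 + amount * (1 - B(k)) and never zero for amount >= 0."""
    n = len(sharpened)
    blur_kernel = np.zeros(n)
    blur_kernel[[0, 1, -1]] = 1.0 / 3.0
    response = 1.0 + amount * (1.0 - np.fft.fft(blur_kernel))
    return np.real(np.fft.ifft(np.fft.fft(sharpened) / response))

x = np.sin(np.linspace(0.0, 6.0, 64))
print(np.allclose(unsharpen(sharpen(x)), x))
```

Once the sharpened signal is clipped to the display range, however, the operation is no longer linear and this exact inverse no longer exists, which is the practical problem described above.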

For something like a "vector-like" mosaic, it would seem like the original scene lends itself to non-linear edge-tracking algorithms, where "smooth & sharp" edges are more important than low-level textures?

-h

Re: State of the art in upsizing algorithms ?

hjulenissen wrote:

Nicolas Robidoux wrote:

Note that the "original" kitchen test image used for the above examples was sharpened. Enlarging sharpened images brings up issues that are not otherwise present.

Is "sharpened" (in this context) similar to "aliased" or "non-Nyquist"? ...

All I meant was that the "kitchen" image, like other test images used by the "self-training superresolution upsampling" researchers to demo their methods, shows the haloing typical of sharpening, and that both the sharpness and haloing (and lack of noise) are not typical of an image resampled early in a raw toolchain. This makes it more difficult to judge the effectiveness of a resampling method in the use case of the original poster.

Context is everything: Mathias Rauen, the developer of the madVR video renderer, found out that it's not easy to make sigmoidization work reliably with video images. I suspect that chroma subsampling (and the use of a color space with a very loose connection to RGB), compression, and the willy-nilly sharpening of many DVDs get in the way.

Re: State of the art in upsizing algorithms ?

Nicolas Robidoux wrote:

Context is everything: Mathias Rauen, the developer of the madVR video renderer, found out that it's not easy to make sigmoidization work reliably with video images. I suspect that chroma subsampling (and the use of a color space with a very loose connection to RGB), compression, and the willy-nilly sharpening of many DVDs get in the way.

Interesting.

Would you want to do video scaling in RGB anyway, as opposed to some luma/chroma representation?

CbCr downsampling, compression errors, and source sharpening represent loss of information that one cannot regain, but they should be somewhat similar to what one might expect from an out-of-camera JPEG? (Most JPEG encoders employ chroma subsampling at moderate and low bitrates.)

The representation used in video might be different from still images (primaries, effective gamma, color matrixing), but these differences are in principle invertible (one can re-represent the signal in some other form).

I think that moving images are a lot more sensitive to aliasing than still images, and that one can live with slightly less sharp edges as long as they don't creep randomly in this and that direction.

-h

Re: State of the art in upsizing algorithms ?

hjulenissen wrote:

...

I think that moving images are a lot more sensitive to aliasing than still images, and that one can live with slightly less sharp edges as long as they don't creep randomly in this and that direction.

-h

Mathias tried pushing things through RGB.

If I remember correctly, what killed his attempt were unexpected results with anime source material, among other things, and no immediate "huge win" with "regular" DVD source, compared to EWA LanczosSharp used in combination with a fairly mature AR (Anti-Ring) limiter.

Re: State of the art in upsizing algorithms ?

Nicolas Robidoux wrote:

Note that the "original" kitchen test image used for the above examples was sharpened. Enlarging sharpened images brings up issues that are not otherwise present.

You have lots of great insights. Thanks!

I shall keep this in mind — I usually have some sharpening turned on in raw conversion.

Mark Scott Abeln's gear list:
Nikon D200 Nikon D7000 Nikon D750 Nikon AF-S DX Nikkor 35mm F1.8G Nikon AF Nikkor 50mm f/1.8D +2 more
Re: State of the art in upsizing algorithms ?

I am experimenting with new methods of resampling images, and I must say that I quite like what a new prototype does: EWA Lanczos (Jinc-windowed Jinc 3-lobe) through a gamma 2.5 color space (not that gamma 2 and 3 are that different).
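In sketch form (Python/NumPy; the encode-then-resample direction and the linear-light input are my assumptions about the described pipeline, and the identity `resize` is just a round-trip check):

```python
import numpy as np

def resize_in_gamma(linear_img, resize, gamma=2.5):
    """Encode linear-light values into a gamma-2.5 space, resample
    there with any linear filter, then decode back to linear light."""
    encoded = np.clip(linear_img, 0.0, None) ** (1.0 / gamma)
    return resize(encoded) ** gamma

# Identity resize round-trips exactly (up to float rounding).
x = np.linspace(0.0, 1.0, 5)
print(np.allclose(resize_in_gamma(x, lambda a: a), x))
```

The choice of exponent mostly shifts where the filter's averaging happens on the tone curve, which is consistent with the remark that gamma 2 and 3 are not that different in practice.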
