# 16-shot pixel shift shooting -- Unusable gimmick?

Started 5 months ago | Discussions thread
Re: 16-shot pixel shift shooting -- Unusable gimmick?

fieldray wrote:

JimKasson wrote:

fieldray wrote:

Pixel sampling issues are some of the most frustrating (sometimes impossible!) things for me to remove. The truth for me is that the 4-shot pixel shift is genuinely useful for dealing with color aliasing, but I have not encountered a situation where spatial sampling with a 60 MP array limits the effectiveness of my photography, so the 16-shot pixel shift is fun to play with but will not likely help my photography. This reminds me of some of the f#·λ/p (Q) arguments we used to have designing space optics! Q = 2 was our holy grail, as there was by definition no aliasing, but from a practical point of view Q = 1 or Q = 1.2 was a better optimization. On some of our FLIRs with big pixels compared to the optics cutoff, sub-pixel shifting analogous to the 16-shot function on the Sony camera made a big difference in the final product.

Q = 2 means two samples per Sparrow distance, right? To do that with a Bayer CFA will take about two binary orders of magnitude of decrease in pitch at reasonable f-stops. Or we could just stop way down...

Q = 2 for a Sony a7R IV sensor is about f/13 if you figure on a center wavelength of 0.55 µm. So based on our old arguments, the optimum f-number for practical use might be about f/8. It turns out that non-linear image restoration / super-resolution algorithms like to use Nyquist sampling (Q = 2), but for pixel peeping, image detail is really getting soft at that point. Bayer sampling doesn't change the sampling cutoff, but it puts a dent in the higher-frequency MTF.
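The arithmetic behind those numbers can be sketched as a quick back-of-envelope check (this is my gloss on the thread, not something either poster wrote; the 3.76 µm a7R IV pitch and 0.55 µm wavelength are the values assumed above):

```python
# Back-of-envelope Q-factor check for the numbers quoted in the thread.
# Q = lambda * f_number / pitch; Q = 2 samples the diffraction cutoff at Nyquist.

PITCH_UM = 3.76    # Sony a7R IV pixel pitch in microns (assumed from the thread)
LAMBDA_UM = 0.55   # center wavelength in microns (assumed from the thread)

def q_factor(f_number, pitch_um=PITCH_UM, wavelength_um=LAMBDA_UM):
    """Q for a given f-number, pitch, and wavelength."""
    return wavelength_um * f_number / pitch_um

def f_number_for_q(q, pitch_um=PITCH_UM, wavelength_um=LAMBDA_UM):
    """Invert: the f-number that yields a given Q for this pitch."""
    return q * pitch_um / wavelength_um

print(f"Q = 2 at f/{f_number_for_q(2.0):.1f}")   # ~f/13.7, i.e. "about f/13"
print(f"Q at f/8: {q_factor(8.0):.2f}")          # ~1.17, near the Q ~ 1.2 practical optimum
```

This reproduces both figures in the post: Q = 2 lands near f/13, and the "practical" f/8 sits close to the Q ≈ 1.2 optimum mentioned earlier.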

The green diagonal sampling pitch is the same, but the green horizontal and vertical pitch changes by a factor of two, and all the red and blue pitches change by a factor of two. You could argue that, if you had a priori knowledge that the subject were monochromatic, the pitch doesn't change.
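To make those pitch claims concrete, here is a small sketch (my own illustration, in units of the base pixel pitch p) comparing the Bayer per-channel sampling pitches with a full pixel-shift capture where every site records all three colors:

```python
# Per-channel sampling pitches, in units of the base pixel pitch p.
# Bayer: greens sit on a checkerboard, red/blue on a 2p grid.
# Pixel shift: every photosite has all three colors (pitch p everywhere).
import math

p = 1.0  # base pixel pitch

bayer = {
    "green horizontal/vertical": 2 * p,        # greens alternate along rows/columns
    "green diagonal": math.sqrt(2) * p,        # checkerboard nearest neighbor
    "red/blue horizontal/vertical": 2 * p,
}
pixel_shift = {
    "green horizontal/vertical": p,            # factor-of-two improvement
    "green diagonal": math.sqrt(2) * p,        # unchanged, as stated above
    "red/blue horizontal/vertical": p,         # factor-of-two improvement
}

for key in bayer:
    print(f"{key}: {bayer[key]:.2f} p -> {pixel_shift[key]:.2f} p")
```

The green diagonal pitch stays at √2·p in both cases, while the green horizontal/vertical and the red/blue pitches each halve, matching the statement above.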

As for the two binary orders of magnitude, I think that one binary order of magnitude (a factor of 2) would be plenty. My experience in the technical world is that depth of focus over the field of view becomes the dominant challenge if you are trying to do diffraction-limited imagery much below f/4 or f/5. Thanks for the discussion!
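The depth-of-focus point can be illustrated with the common quarter-wave (Rayleigh) tolerance for the image-side depth of focus, roughly ±2λN². This is my own sketch of the standard formula, not a calculation from the thread:

```python
# Image-side depth of focus at the diffraction limit, using the common
# quarter-wave (Rayleigh) tolerance: delta_z ~ +/- 2 * lambda * N^2.
# Illustrates why holding focus across a field gets hard below f/4-f/5.

LAMBDA_UM = 0.55  # center wavelength in microns (assumed)

def depth_of_focus_um(f_number, wavelength_um=LAMBDA_UM):
    """Approximate +/- image-side focus tolerance in microns."""
    return 2 * wavelength_um * f_number ** 2

for n in (2.8, 4, 5.6, 8):
    print(f"f/{n}: +/- {depth_of_focus_um(n):.1f} um")
```

At f/8 the tolerance is about ±70 µm, but by f/2.8 it has shrunk to under ±10 µm, which is very hard to hold flat across a wide field.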

I agree with almost all of what you have to say, but here's a counterexample to the notion that f/8 on a 3.76 µm Bayer CFA sensor will eliminate aliasing:

https://blog.kasson.com/gfx-100/a-visual-look-at-gfx-100-diffraction-blur/

Of course, many subjects will not produce visible aliasing at f/8. But fabric often will.

You are entirely correct about DOF. Then you have to stack, use deconvolution, or take extreme measures like this:

This is shown with my GFX 50R, but works fine with an a7R IV.

Jim
