Can pixel shift increase resolution?

I understand that image quality increases with pixel-shift. However, I wonder if it ever increases resolution.

Jim Kasson's analysis of the a7RIV shows that resolution does not increase with pixel shift: Does pixel-shift increase resolution?

Frans van den Bergh wrote that pixel shifting does not increase resolution (link).
If you mean pixel shift as capturing all three color channels at all pixel locations - it depends, for instance on scene content, but in practice it makes very little difference. And in fact even in humans there is no penalty for trichromacy, meaning that a monochromat has about the same measured acuity as a trichromat.

Frans is right that in ideal conditions the slanted-edge method will not show any difference in resolution, even though there may be an improvement in the effective Nyquist frequency, and hence in aliasing.
 
Jim is right to start with the definition of resolution. I disagree with his conclusion, though. In particular, aliasing limits resolution.

About the MTF (I did not read the second link carefully, though): it is pretty meaningless in this context. If you take every 4th G pixel, for example, you would get the same MTF, but it would hurt the resolution.

Also, the proof is in the pudding: the test scene demonstrates the increased resolution pretty well. One may argue that the demosaicing is not optimal, but for purely practical purposes, whatever the reason, the improvement is there.
 
Thank you for the responses.

It seems clear that pixel-shift improves image quality and that we can discern more details. What would an increased resolution add to the image quality?
 
It seems clear that pixel-shift improves image quality and that we can discern more details.
Another thing pixel shift does is increase total exposure and improve the signal-to-noise ratio. Sixteen pixel-shift images would reduce the effective exposure index to 1/16th and the noise to 25% of a single shot's.

Extra clean images in many cases enlarge better than higher resolution but noisier images, although I’m not quite sure how to quantify the tradeoff.
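
To put rough numbers on that (a toy sketch, assuming identical frames with uncorrelated noise; the ISO 1600 starting point is only an example):

import math

def stacked_noise(n_frames, single_shot_ei=1600):
    # Averaging n identical exposures: total light goes up n times, so the
    # effective exposure index drops to EI/n, while uncorrelated shot/read
    # noise drops by 1/sqrt(n) relative to a single frame.
    effective_ei = single_shot_ei / n_frames
    relative_noise = 1 / math.sqrt(n_frames)
    return effective_ei, relative_noise

print(stacked_noise(16))  # -> (100.0, 0.25): EI/16 and 25% of the single-shot noise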
 
It seems clear that pixel-shift improves image quality and that we can discern more details.
Another thing pixel shift does is increase total exposure and improve the signal-to-noise ratio. Sixteen pixel-shift images would reduce the effective exposure index to 1/16th and the noise to 25% of a single shot's.

Extra clean images in many cases enlarge better than higher resolution but noisier images, although I’m not quite sure how to quantify the tradeoff.
Yes, reduced noise is an essential part of the pixel-shift benefits. It is especially beneficial when combined with handheld pixel shift, since handheld shooting often occurs at higher ISOs. For example, Olympus m43 cameras show almost three stops of improvement at higher ISOs (P2P).

For me, reduced noise is a more significant benefit than reduced aliasing.
 
It seems clear that pixel-shift improves image quality and that we can discern more details.
Another thing pixel shift does is increase total exposure and improve the signal-to-noise ratio. Sixteen pixel-shift images would reduce the effective exposure index to 1/16th and the noise to 25% of a single shot's.

Extra clean images in many cases enlarge better than higher resolution but noisier images, although I’m not quite sure how to quantify the tradeoff.
Yes, reduced noise is an essential part of the pixel-shift benefits. It is especially beneficial when combined with handheld pixel shift, since handheld shooting often occurs at higher ISOs. For example, Olympus m43 cameras show almost three stops of improvement at higher ISOs (P2P).

For me, reduced noise is a more significant benefit than reduced aliasing.
Of course, you can just expose longer or stack images without pixel shift as well, if you want lower noise. "High ISO" means very little in this context.
 
If you mean pixel shift as capturing all three color channels at all pixel locations - it depends, for instance on scene content, but in practice it makes very little difference.
It makes a substantial difference. First, there is the benefit of a longer synthetic exposure created by stacking four images, which reduces the impact of shot noise in the final merged image.

Red and blue are sampled at twice the frequency and green at 1.4x the frequency. So seams between different colors are captured more distinctly.
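
Roughly where those factors come from, treating it purely as sampling-grid geometry on a standard Bayer layout (a back-of-the-envelope sketch, ignoring demosaicing):

import math

pitch = 1.0  # photosite pitch, arbitrary units

# Single Bayer exposure: R and B sit on square grids of pitch 2p, while G sits
# on a checkerboard whose nearest neighbours are sqrt(2)*p apart.
bayer_pitch = {"R": 2 * pitch, "G": math.sqrt(2) * pitch, "B": 2 * pitch}

# 4-shot pixel shift samples every colour at every photosite, i.e. at pitch p.
for colour, single_shot in bayer_pitch.items():
    gain = single_shot / pitch  # per-colour sampling-frequency gain
    print(colour, round(gain, 2))  # R 2.0, G 1.41, B 2.0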

As if that were not enough, it all but eliminates color aliasing / moiré.

For any static subject where you can use a tripod or camera stand, it is always worth the effort.

As for slanted-edge tests, and so on and so forth, and general sophistry on the subject, just look at the studio scene comparison on DPR with and without it enabled. Don't accept our pontifications on the subject, mine included; DPR gives you abundant evidence of the practical effect and value of this feature.

As an example, here is a 400% crop of a micrograph of a flatbed scanner's linear CCD sensor; you can see the blue and green color filters. Left is a standard capture, right is pixel shift (this is from a Pentax K-1) - it is very visibly effective. It's not a gimmick; I use it wherever it is practical.

[attached image: standard capture (left) vs. pixel shift (right)]


-- Bob
http://bob-o-rama.smugmug.com -- Photos
http://www.vimeo.com/boborama/videos -- Videos
 
It seems clear that pixel-shift improves image quality and that we can discern more details.
Another thing pixel shift does is increase total exposure and improve the signal-to-noise ratio. Sixteen pixel-shift images would reduce the effective exposure index to 1/16th and the noise to 25% of a single shot's.

Extra clean images in many cases enlarge better than higher resolution but noisier images, although I’m not quite sure how to quantify the tradeoff.
Yes, reduced noise is an essential part of the pixel-shift benefits. It is especially beneficial when combined with handheld pixel shift, since handheld shooting often occurs at higher ISOs. For example, Olympus m43 cameras show almost three stops of improvement at higher ISOs (P2P).

For me, reduced noise is a more significant benefit than reduced aliasing.
Of course, you can just expose longer or stack images without pixel shift as well, if you want lower noise. "High ISO" means very little in this context.
When shooting handheld, I cannot "just expose longer." I can stack and align images in post, but that is cumbersome and does not reduce aliasing.

Since the maximum exposure is limited (handheld), the low-exposure noise becomes less of an issue when shooting pixel-shift.

I am unsure if I can call handheld high-resolution modes pixel-shift, as it is more a camera-shift that generates the high-resolution file. The issues are the same, though.
 
It seems clear that pixel-shift improves image quality and that we can discern more details.
Another thing pixel shift does is increase total exposure and improve the signal-to-noise ratio. Sixteen pixel-shift images would reduce the effective exposure index to 1/16th and the noise to 25% of a single shot's.

Extra clean images in many cases enlarge better than higher resolution but noisier images, although I’m not quite sure how to quantify the tradeoff.
Yes, reduced noise is an essential part of the pixel-shift benefits. It is especially beneficial when combined with handheld pixel shift, since handheld shooting often occurs at higher ISOs. For example, Olympus m43 cameras show almost three stops of improvement at higher ISOs (P2P).

For me, reduced noise is a more significant benefit than reduced aliasing.
Of course, you can just expose longer or stack images without pixel shift as well, if you want lower noise. "High ISO" means very little in this context.
When shooting handheld, I cannot "just expose longer."
I did say "or".
I can stack and align images in post, but that is cumbersome and does not reduce aliasing.
I also said: "if you want lower noise".

Now, about reducing aliasing - stacking would reduce that, too. PhotoAcute worked well for increasing resolution and lowering noise, but it is no longer supported. You can do stacking in PS as well.
Since the maximum exposure is limited (handheld), the low-exposure noise becomes less of an issue when shooting pixel-shift.
But it takes longer to expose than doing it in one shot. It is good for reducing handheld vibration but not for reducing motion blur in the scene.
I am unsure if I can call handheld high-resolution modes pixel-shift, as it is more a camera-shift that generates the high-resolution file. The issues are the same, though.
 
It seems clear that pixel-shift improves image quality and that we can discern more details.
Another thing pixel shift does is increase total exposure and improve the signal-to-noise ratio. Sixteen pixel-shift images would reduce the effective exposure index to 1/16th and the noise to 25% of a single shot's.

Extra clean images in many cases enlarge better than higher resolution but noisier images, although I’m not quite sure how to quantify the tradeoff.
Yes, reduced noise is an essential part of the pixel-shift benefits. It is especially beneficial when combined with handheld pixel shift, since handheld shooting often occurs at higher ISOs. For example, Olympus m43 cameras show almost three stops of improvement at higher ISOs (P2P).

For me, reduced noise is a more significant benefit than reduced aliasing.
Handheld pixel shift, aka multiframe superresolution, is a whole other animal, since hand vibration can give you subpixel information.

https://arxiv.org/abs/1905.03277
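
As a crude illustration of the idea - not the pipeline in the paper, which uses robust kernel regression - here is a toy shift-and-add onto a 2x grid, assuming the subpixel offsets are already known from registration (hypothetical helper, numpy only):

import numpy as np

def shift_and_add(frames, offsets, scale=2):
    # frames: list of equally sized 2D arrays; offsets: (dy, dx) per frame in
    # low-res pixels. Real pipelines estimate these offsets by registration.
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    hits = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, offsets):
        # nearest high-res bin for every low-res sample of this frame
        hy = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
        hx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, (hy, hx), frame)
        np.add.at(hits, (hy, hx), 1.0)
    return acc / np.maximum(hits, 1)  # bins that are never hit stay at zero

Half-pixel offsets such as (0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5) mimic the sub-pixel part of a 16-shot tripod mode; random hand-shake offsets give irregular but still sub-pixel coverage, which is what these algorithms exploit.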
 
It seems clear that pixel-shift improves image quality and that we can discern more details.
Another thing pixel shift does is increase total exposure and improve the signal-to-noise ratio. Sixteen pixel-shift images would reduce the effective exposure index to 1/16th and the noise to 25% of a single shot's.

Extra clean images in many cases enlarge better than higher resolution but noisier images, although I’m not quite sure how to quantify the tradeoff.
Yes, reduced noise is an essential part of the pixel-shift benefits. It is especially beneficial when combined with handheld pixel shift, since handheld shooting often occurs at higher ISOs. For example, Olympus m43 cameras show almost three stops of improvement at higher ISOs (P2P).

For me, reduced noise is a more significant benefit than reduced aliasing.
Handheld pixel shift, aka multiframe superresolution, is a whole other animal, since hand vibration can give you subpixel information.

https://arxiv.org/abs/1905.03277
Thank you for the link to the interesting article.

Could you elaborate on why handheld pixel shift is something completely different? E.g., the Olympus tripod pixel-shift shifts eight times in one-micron increments (source DPR). Therefore, the pixel-shift movements may not be in whole pixel increments.
 
For me, reduced noise is a more significant benefit than reduced aliasing.
In that case you can also just take multiple captures and combine them later; no need for the hassle of setting up pixel-shift mode. There can be other advantages to this approach - see below.
Handheld pixel shift, aka multiframe superresolution, is a whole other animal, since hand vibration can give you subpixel information.

https://arxiv.org/abs/1905.03277
Yes, this is what piccure or PhotoAcute do, and it can be worthwhile, though computationally expensive. They never seem to have gotten real traction.
Thank you for the link to the interesting article.

Could you elaborate on why handheld pixel shift is something completely different? E.g., the Olympus tripod pixel-shift shifts eight times in one-micron increments (source DPR). Therefore, the pixel-shift movements may not be in whole pixel increments.
The short answer is that you are effectively shrinking the pixel pitch, though pixel aperture remains unchanged. Smaller pitch = higher effective Nyquist frequency, and potentially higher resolution and lower aliasing, depending on the scene.
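
In numbers (a trivial sketch; 3.76 µm is roughly the pitch of an a7RIV-class sensor, used here only as an example):

def nyquist_lp_per_mm(pitch_um):
    # Sampling at pitch p resolves at most one cycle per 2p -> 1000/(2p) lp/mm.
    return 1000 / (2 * pitch_um)

print(nyquist_lp_per_mm(3.76))      # ~133 lp/mm at the native pitch
print(nyquist_lp_per_mm(3.76 / 2))  # ~266 lp/mm if the sampling pitch is halved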

The old Olympus E-M5II had an 8-capture shift mode that also sampled the image in between pixels, effectively halving the sampling pitch. It did not last long, though, so I assume there were diminishing returns:

https://www.strollswithmydog.com/olympus-e-m5-ii-high-res-40mp-shot-mode/

The slanted-edge method does not pick up the additional resolution because it already supersamples the edge itself - and the filtering action of pixel aperture, which it does pick up, remains unchanged between single shot and shift. Note how Ken takes me to task on this very issue in the comments; I was just starting to work through the MTF framework at the time, so there are some inaccuracies in the article. The slight improvement shown is indeed due to external factors.

Jack
 
If you mean pixel shift as capturing all three color channels at all pixel locations - it depends, for instance on scene content, but in practice it makes very little difference.
It makes a substantial difference. First, there is the benefit of a longer synthetic exposure created by stacking four images, which reduces the impact of shot noise in the final merged image.

Red and blue are sampled at twice the frequency and green at 1.4x the frequency. So seams between different colors are captured more distinctly.

As if that were not enough, it all but eliminates color aliasing / moiré.

For any static subject where you can use a tripod or camera stand, it is always worth the effort.

As for slanted-edge tests, and so on and so forth, and general sophistry on the subject, just look at the studio scene comparison on DPR with and without it enabled. Don't accept our pontifications on the subject, mine included; DPR gives you abundant evidence of the practical effect and value of this feature.

As an example, here is a 400% crop of a micrograph of a flatbed scanner's linear CCD sensor; you can see the blue and green color filters. Left is a standard capture, right is pixel shift (this is from a Pentax K-1) - it is very visibly effective. It's not a gimmick; I use it wherever it is practical.
Hi Bob, I mostly agree with you per my posts in this thread, though I take a more nuanced view of the practical benefits for your average photographer. For context see:

https://www.strollswithmydog.com/bayer-cfa-effect-on-sharpness/

As a fun counterexample, which of these two unsharpened images from raw files captured with the same camera, lens and setup shows higher resolution and less aliasing? One is pixel shift, the other not.

[attached image: the two unsharpened crops side by side]

For those of you who figured it out, how hard did you have to look and what gave it away?

Jack
 
Is the slanted-edge measurement really an estimate of the optical system's response that specifically tries to work around the limitations of the pixel grid, and is thus not much affected by pixel-shift tech?
 
Is the slanted-edge measurement really an estimate of the optical system's response that specifically tries to work around the limitations of the pixel grid, and is thus not much affected by pixel-shift tech?
Right, -h - it effectively reconstitutes a good approximation of the continuous response of the imaging system around the center of the edge and in the direction perpendicular to it (see for instance https://www.strollswithmydog.com/the-slanted-edge-method/ ).

The orange line is the inferred Edge Spread Function, that is the continuous intensity profile of the response of the imaging system around the center of the edge in the direction perpendicular to it.

Continuous means that pixel pitch is no longer a variable (though pixel aperture still is).
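
For the curious, the core of the projection step fits in a few lines (a condensed sketch assuming the edge angle is already known; real implementations, like the one described at the link above, also fit the edge, window the data, and derive the MTF from this ESF):

import numpy as np

def supersampled_esf(roi, edge_angle_deg, bins_per_pixel=4):
    # Project every pixel centre of the slanted-edge ROI onto the edge normal.
    # Because the edge crosses each row at a slightly different phase, the
    # projected distances fall on many sub-pixel positions, so the binned
    # profile is a quasi-continuous Edge Spread Function: pixel pitch drops
    # out, while pixel-aperture blur remains baked into the samples.
    h, w = roi.shape
    ys, xs = np.mgrid[0:h, 0:w]
    theta = np.deg2rad(edge_angle_deg)  # edge tilt away from vertical
    dist = (xs - w / 2) * np.cos(theta) - (ys - h / 2) * np.sin(theta)
    idx = np.round(dist * bins_per_pixel).astype(int)
    idx -= idx.min()
    counts = np.bincount(idx.ravel())
    sums = np.bincount(idx.ravel(), weights=roi.ravel())
    return sums / np.maximum(counts, 1)  # differentiate -> LSF, FFT -> MTF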

Jack
 
I understand that image quality increases with pixel-shift. However, I wonder if it ever increases resolution.

Jim Kasson's analysis of the a7RIV shows that resolution does not increase with pixel shift: Does pixel-shift increase resolution?

Frans van den Bergh wrote that pixel shifting does not increase resolution (link).
It depends on how you define resolution...

Does it improve MTF measurements of slanted black and white edges? No.

Does it increase the perceived sharpness of colour images? Yes.

Is there a convenient measurement for that? No, or at least not one that we see very often. In a world where most images are in colour, this seems to be an oversight.
 
If you mean pixel shift as capturing all three color channels at all pixel locations - it depends, for instance on scene content, but in practice it makes very little difference.
It makes a substantial difference. First, there is the benefit of a longer synthetic exposure created by stacking four images, which reduces the impact of shot noise in the final merged image.

Red and blue are sampled at twice the frequency and green at 1.4x the frequency. So seams between different colors are captured more distinctly.

As if that were not enough, it all but eliminates color aliasing / moiré.

For any static subject where you can use a tripod or camera stand, it is always worth the effort.

As for slanted-edge tests, and so on and so forth, and general sophistry on the subject, just look at the studio scene comparison on DPR with and without it enabled. Don't accept our pontifications on the subject, mine included; DPR gives you abundant evidence of the practical effect and value of this feature.

As an example, here is a 400% crop of a micrograph of a flatbed scanner's linear CCD sensor; you can see the blue and green color filters. Left is a standard capture, right is pixel shift (this is from a Pentax K-1) - it is very visibly effective. It's not a gimmick; I use it wherever it is practical.
Hi Bob, I mostly agree with you per my posts in this thread, though I take a more nuanced view of the practical benefits for your average photographer. For context see:

https://www.strollswithmydog.com/bayer-cfa-effect-on-sharpness/

As a fun counterexample, which of these two unsharpened images from raw files captured with the same camera, lens and setup shows higher resolution and less aliasing? One is pixel shift, the other not.

[attached image: the two unsharpened crops side by side]

For those of you who figured it out, how hard did you have to look and what gave it away?

Jack
This is misleading on so many levels. Would you say which image/camera you used and would you post the whole images?
 
This is misleading on so many levels. Would you say which image/camera you used and would you post the whole images?
:-) Not misleading at all in the context of my posts in this thread. I will post the info once there have been a few tries; my point is that it is not that obvious - despite this being the resolution-and-aliasing-nightmare portion of DPR's studio scene.

What do you think, which is which? And why?
 
This is misleading on so many levels. Would you say which image/camera you used and would you post the whole images?
:-) Not misleading at all in the context of my posts in this thread. I will post the info once there have been a few tries; my point is that it is not that obvious - despite this being the resolution-and-aliasing-nightmare portion of DPR's studio scene.
It is misleading for the following reasons (some of them overlap):
  • You know what you want to achieve. You chose the rendering for a specific crop of a specific scene to give results similar to the pixel shift.
  • You can just render a B&W image and avoid having the color aliasing displayed as such.
  • You know that the original scene is B&W and can use a rendering designed to work well in that specific case.
  • You can always make the better image worse (knowingly or not) and then say - see, they are the same.
  • The pixel-shifted image is actually rendered at a higher MP count. If you downscale it for a comparison, the result depends on how you do it, and you lose some of the extra resolution.
What do you think, which is which? And why?
The one on the right has more pronounced patterns, but both have wrong (aliased) ones here and there. Since the main point is increased color resolution, and this crop is monochromatic - and you knew that - this is like testing the tweeters in your speakers with a bass guitar solo (well, I am exaggerating a bit).

Now, show us the whole images, with color info, and tell us which is which.
 
