Can pixel shift increase resolution?

I guess it depends on what you mean by 'resolving the Nyquist limit'; I don't know if I would use the system MTF50 frequency as a gauge for that.

Even so, a system with MTF50 beyond 0.5 c/p would show horrible aliasing and would not be desirable imho - that's why there used to be AA filters and I wish there still were. Perhaps better ones, like Canon's Hi-Res.

One of my biggest complaints about the Z7 above is aliasing: paraphrasing Jim K, at first you don't notice it and your life is bliss; but once you start seeing it, it becomes hard to ignore and soon you see it everywhere. In a natural scene it may present like an oversharpened image, though with no sharpening applied.

Jack
MTF measurement has its own challenges; for me, aliasing doesn't mean it resolves it.
Aliasing is a sensor issue, not a lens issue.
Yes, the measurement involves the whole system, so to say whether it exceeds the limit or not is arbitrary.
Your sample, though, was under the limit.
Which limit? Nyquist is 0.5 cy/px and aliasing of colour detail occurs at 0.25 cy/px.

Do you mean MTF50? That's an entirely arbitrary measurement. The normal 'limit' of visible detail is usually regarded as MTF10, but this too is rather ambiguous.
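(For reference, the two figures above follow from the sampling geometry; a short sketch, assuming a standard Bayer layout with pixel pitch p and frequencies in cycles per full-resolution pixel:)

    f_{\mathrm{Nyq,\,mono}} = \frac{1}{2p} = 0.5\ \mathrm{cy/px},
    \qquad
    f_{\mathrm{Nyq,\,R,B}} = \frac{1}{2\,(2p)} = 0.25\ \mathrm{cy/px}

The R and B planes of a Bayer CFA are each sampled on a grid of pitch 2p, so colour detail can start to alias at half the monochrome Nyquist frequency.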
Yes, that is an arbitrary measurement.

When I say 'does it matter', I mean there is an improvement in image quality. Whether this means resolution or something else does not matter, as long as it is an improvement, which it is.
 
To state the obvious, all other things being equal (including the number of captures) the answer depends on the scene, including its chromaticity and the direction of detail. It's not all or nothing, it's a continuum with many peaks and valleys throughout the image, from full degradation (half the sampling pitch) to virtually nothing.
This is true only if you know the chromaticity in advance, and we don't.
Ah, how to take advantage of what we know is a different question altogether that we shall leave as an exercise for the reader. Suffice it to say that most advanced demosaicing algorithms map out image raw data locally based on a-priori knowledge and apply different strategies depending on content. For instance, knowledge of the green quincunx layout and the direction of detail: see Dubois' letter, directional LMMSE, AMaZE and most others.

For example off the top of my head I think it would not be too hard to figure out neutral portions of an image and apply the best algorithm for that there. Or, once we understand that when R and B are about the same in a neighborhood aliasing will be close to monochrome's in the vertical and horizontal directions, take advantage of this fact if there is such detail there. Same diagonally where (R+B) is about equal to 2G (see the article on CFA sharpness linked earlier for context).

Left: Where the R and B color planes are about the same, baseband luma can reach monochrome Nyquist almost unscathed at the cardinal points, therefore with detail running in the horizontal and vertical directions. Right: same in the diagonal direction when R plus B are about equal to 2G.

But I haven't really given it any thought and I don't know if it would be worth the effort. Though the theory shows us what would be possible with smart enough algorithms.
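As a rough sketch of the 'neutral portions' idea (not anything from the article: a hypothetical helper, assuming a white-balanced RGGB mosaic loaded as a 2-D numpy array), one could flag tiles where the local R, G and B means agree and let a demosaicer treat those as monochrome-like:

    import numpy as np

    def neutral_mask(raw, box=8, threshold=0.05):
        """Return a boolean map (one value per box x box tile of the mosaic)
        that is True where the white-balanced RGGB mosaic `raw` is locally
        near-neutral, i.e. local R, G, B means agree to within `threshold`."""
        h = raw.shape[0] // box * box
        w = raw.shape[1] // box * box
        r = raw[0:h:2, 0:w:2]                              # RGGB layout assumed
        g = 0.5 * (raw[0:h:2, 1:w:2] + raw[1:h:2, 0:w:2])
        b = raw[1:h:2, 1:w:2]
        s = box // 2                                       # CFA planes are half resolution

        def tile_mean(plane):
            return plane.reshape(plane.shape[0] // s, s,
                                 plane.shape[1] // s, s).mean(axis=(1, 3))

        r_m, g_m, b_m = tile_mean(r), tile_mean(g), tile_mean(b)
        luma = (r_m + 2.0 * g_m + b_m) / 4.0
        spread = np.maximum(np.abs(r_m - g_m),
                            np.maximum(np.abs(b_m - g_m), np.abs(r_m - b_m)))
        return spread < threshold * np.maximum(luma, 1e-9)

Inside the True tiles a converter could, for example, fall back to a luma-oriented interpolation; everything else would go through the usual colour-aware path.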

Jack
 
Today most cameras do not resolve the Nyquist limit of the sensor anyway due to limitations in the lens
I can't tell precisely what you mean by that sentence.

If you mean that real lenses can't deliver an MTF of nearly unity at the Nyquist frequency of most sensors, then I agree, but don't see the relevance.
Yes, that is what I meant.
If you mean that real lenses don't have sufficient contrast at the Nyquist frequency of most sensors to cause visible aliasing with high-frequency subjects, then I disagree strongly.
Of course there is aliasing; you can see it, so it is there.
To talk about the system I'm most familiar with, all of the Fujifilm GF lenses are capable of aliasing at some f-stops on axis with the GFX 100x.

One way to think of lens and sensor resolution is to consider the balance between the two:

https://blog.kasson.com/the-last-word/whats-your-q/
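(As an aside, that balance can be boiled down to a single number. A minimal sketch, assuming the common definition Q = wavelength x f-number / pixel pitch with Q = 2 as critical sampling; the article's exact formulation may differ, and the numbers below are illustrative rather than measured:)

    # Back-of-the-envelope Q calculation (illustrative numbers, not measurements).
    wavelength_um = 0.53      # mid-green light
    f_number = 5.6
    pixel_pitch_um = 3.76     # roughly the pitch of a 100 MP 44x33 mm sensor

    q = wavelength_um * f_number / pixel_pitch_um
    print(round(q, 2))        # ~0.79: well below 2, so the sensor undersamples
                              # the lens at this aperture and aliasing is possible on axis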
Pixel shift cures this problem, so it delivers a benefit in that respect but also others.

So, in the end, does it matter whether resolution was increased or not, which is the OP's question? The important part is that overall image quality is improved, even if the so-called high-resolution shot does not deliver more resolution (but it makes usable the resolution you can resolve without issues).

--

instagram http://instagram.com/interceptor121
My flickr sets http://www.flickr.com/photos/interceptor121/
Youtube channel http://www.youtube.com/interceptor121
Underwater Photo and Video Blog http://interceptor121.com
Deer Photography workshops https://interceptor121.com/2021/09/26/2021-22-deer-photography-workshops-in-woburn/
 
Pixel shift cures this problem
Sometimes. Sometimes there is still aliasing. More often there are artifacts.
so it delivers a benefit in that respect but also others
This fact that pixel shift delivers a benefit sometimes is not in dispute. The nature of the benefit is.
So, in the end, does it matter whether resolution was increased or not, which is the OP's question? The important part is that overall image quality is improved, even if the so-called high-resolution shot does not deliver more resolution (but it makes usable the resolution you can resolve without issues).
I'll bet you're in the camp that calls all lenses that have aberrations that are not radially symmetric "decentered". Or as Roger Cicala once put it: "Calling everything that’s wrong with a lens decentering is like calling every reason your car won’t start ‘out of gas.’"

The point of the distinction between aliasing and MTF is to allow photographers to understand what they will -- and won't -- get from pixel shift.
 
The point of the distinction between aliasing and MTF is to allow photographers to understand what they will -- and won't -- get from pixel shift.
Pixel shift will give me better image quality. Whether or not it improves resolution measured with a set of monochrome targets is not important to a photographer, only to a scientist or an engineer, as we don't shoot test patterns; well, at least I do not!
 
Pixel shift will give me better image quality. Whether or not it improves resolution measured with a set of monochrome targets is not important to a photographer, only to a scientist or an engineer, as we don't shoot test patterns; well, at least I do not!
You have come to a forum about science and technology to trumpet the position that controlled testing is not useful. There is an assumption in your statement that nothing that is useful in real-world photography can be learned from such tests. I consider that perspective provably ludicrous. Can we learn nothing about real world shadow performance from photon transfer curves? Can we learn nothing about lens performance from optical bench tests? Should Imatest go out of business? Are the people who spend a quarter of a million bucks for an optical bench fools?

<scratching head>

Jim

--
https://blog.kasson.com
 
If I compare the output of a 50MP sensor with the output of a 50MP pixel-shifted sensor, would the difference in resolution be noticeable?
 
If I compare the output of a 50MP sensor with the output of a 50MP pixel-shifted sensor, would the difference in resolution be noticeable?
Yes, you can see it yourself with the naked eye. Some cameras can produce both a non-high-res and a high-res image from the pixel shift; just try printing them or looking at them at the same scale.
 
Pixel shift cures this problem
Sometimes. Sometimes there is still aliasing. More often there are artifacts.
Are you thinking of (subject) motion artifacts?
I have seen artifacts caused by camera motion, subject motion, and atmospheric effects.
Thanks.

IMO:

With handheld superresolution (which relies on slight camera movements), faster readout speeds (less motion between the shots), and improvements in motion artifact removal, the results of multi-frame superresolution seem to be improving and becoming more practical.

The main benefit is improved image quality, less so resolution.
 
If I compare the output of a 50MP sensor with the output of a 50MP pixel-shifted sensor, would the difference in resolution be noticeable?
Yes, you can see it yourself with the naked eye. Some cameras can produce both a non-high-res and a high-res image from the pixel shift; just try printing them or looking at them at the same scale.
If you do this test, be sure to sharpen both images the same for your baseline test. This is a little tricky to accomplish, since the sampling frequency for the pixel shift image is twice that of the non-shifted image. To equalize noise, stack and average as many single-shot captures as there were shots in the pixel-shifted series. Be aware that some pixel shift combining software has non-defeatable sharpening.
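A minimal sketch of that noise-equalising baseline, assuming the single-shot frames have already been demosaiced and aligned, and using placeholder file names (any image I/O library would do):

    import numpy as np
    import imageio.v3 as iio   # or rawpy/tifffile; pick whatever you already use

    # Hypothetical file names: 16 aligned, identically processed single-shot frames,
    # to be compared against a 16-shot pixel-shift composite.
    frames = [iio.imread(f"single_{i:02d}.tif").astype(np.float64) for i in range(16)]

    # Averaging N statistically independent frames cuts random noise by sqrt(N),
    # matching the noise advantage of the 16-shot pixel-shift file.
    baseline = np.mean(frames, axis=0)

    iio.imwrite("baseline_avg.tif", baseline.astype(np.float32))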
 
You have come to a forum about science and technology to trumpet the position that controlled testing is not useful. There is an assumption in your statement that nothing that is useful in real-world photography can be learned from such tests. I consider that perspective provably ludicrous. Can we learn nothing about real world shadow performance from photon transfer curves? Can we learn nothing about lens performance from optical bench tests? Should Imatest go out of business? Are the people who spend a quarter of a million bucks for an optical bench fools?

<scratching head>

Jim
Sorry, I did not mean to offend your effort, but I know where the OP is coming from.

My take is that MTF50 is a measure of sharpness, not resolution, and when it goes overboard it is because somewhere down the line things were overcooked. Without sharpening and other tricks I cannot see how a grid of 2000 lines can resolve 2500.

This is consistent with what Imatest, one of your sources, says.

It suggests a reference system at 0.314 c/p; this is to give a look that does not appear sharpened, and most systems are designed not to go overboard. Of course there are cases where it does go overboard, as it seems from your Nikon example.

The benefit of pixel shift is that it allows you to push this value higher without artefacts, so the image does look perceptually better, which is what the last graph of your article says in bullet 1.

With a Bayer-CFA sensor, there are additional advantages to pixel-shift shooting, but they are hard to quantify simply.
  • The reduced aliasing in 16-shot images will allow more sharpening to be used before aliasing artifacts become objectionable.
  • If the demosaicing algorithm used for the single-shot image softens edges, that will not be a problem with the 16-shot image.
  • False-color artifacts will be much less of an issue in 16-shot (or even 4-shot) pixel shift images.
In terms of quantification, SNR improves by the square root of the number of pixel-shift shots, i.e. log2(sqrt(N)) stops; for a 16-shot series that is log2(sqrt(16)) = 2 stops, which is visible to the naked eye. From a photographic point of view, this is what matters.
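Spelling that arithmetic out (a trivial sketch, just to show where the 2-stop figure comes from, and assuming the shots' noise is statistically independent):

    import math

    def snr_gain_stops(n_shots):
        # Combining n independent exposures improves SNR by sqrt(n);
        # expressed in stops (factors of two) that is log2(sqrt(n)).
        return math.log2(math.sqrt(n_shots))

    print(snr_gain_stops(16))   # 2.0 stops for a 16-shot series
    print(snr_gain_stops(4))    # 1.0 stop for a 4-shot series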

Again, sorry, I did not mean to sound rude or dismissive.
 
Sorry, I did not mean to offend your effort, but I know where the OP is coming from.
To clarify, the OP is a (retired) scientist and engineer, and very much interested in all kinds of new knowledge.
 
Sorry, I did not mean to offend your effort, but I know where the OP is coming from.
For the most part (neglecting phase effects and a few other things), lenses and sensors are linear systems. That means that if any set of inputs can be decomposed into a set of independent signals, the response of the system to the sum of those decomposed signals will be the sum of the system responses to each of the signals taken in turn.

That insight is the basis of determining the system response to simple, well-characterized inputs, and using that information to characterize the system, thus providing insight into the behavior of the system when presented with more complex stimuli.

Thus, even if you don't shoot charts, you can get useful information from analyses of camera/lens system behavior that are based on the results obtained from shooting charts.
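A toy one-dimensional illustration of that superposition property, with a made-up kernel standing in for the lens/pixel-aperture response (just a sketch of the principle, nothing measured):

    import numpy as np

    rng = np.random.default_rng(0)
    psf = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
    psf /= psf.sum()                      # stand-in for a lens/pixel-aperture PSF

    edge = np.repeat([0.0, 1.0], 50)      # a step edge, as on a test chart
    texture = rng.standard_normal(100)    # "complex" scene content

    lhs = np.convolve(edge + texture, psf, mode="same")   # response to the sum
    rhs = (np.convolve(edge, psf, mode="same")
           + np.convolve(texture, psf, mode="same"))      # sum of the responses

    print(np.allclose(lhs, rhs))          # True: the system response to the sum
                                          # equals the sum of the responses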
 
I think there is a disconnect between numbers and perception. This is not the fault of engineers as they are usually aware of the purpose and limitations of such measurements, but these limitations are not widely understood elsewhere.

MTF50 for instance is not easily applied to a random colour image. Any such perceptual estimate would have to account for any number of variables, such as the structure and colour of the image content, the demosaicing method, the processing applied, and the viewing medium.

This could render detail at that frequency as anything from invisible to very clear.

And I really would like to see some comparative studies of colour resolution in different situations - different chromaticity axes, threshold vs. supra-threshold, etc. I have seen a lot of research, but none of it related to cameras, only to human vision.

Yet most of us shoot colour images.
 
To state the obvious, all other things being equal (including the number of captures) the answer depends on the scene, including its chromaticity and the direction of detail. It's not all or nothing, it's a continuum with many peaks and valleys throughout the image, from full degradation (half the sampling pitch) to virtually nothing.
This is true only if you know the chromaticity in advance, and we don't.
Ah, how to take advantage of what we know is a different question altogether that we shall leave as an exercise for the reader. Suffice it to say that most advanced demosaicing algorithms map out image raw data locally based on a-priori knowledge and apply different strategies depending on content.
Not knowledge, guesses.
For instance, knowledge of the green quincunx layout and the direction of detail: see Dubois' letter, directional LMMSE, AMaZE and most others.
They do know the layout of the sensor, sure. The rest is a guess.
For example off the top of my head I think it would not be too hard to figure out neutral portions of an image and apply the best algorithm for that there.
It would be impossible. One can make guesses however. Funny enough, we both found color info in that crop but we discarded it because we were biased.
Or, once we understand that when R and B are about the same in a neighborhood aliasing will be close to monochrome's in the vertical and horizontal directions, take advantage of this fact if there is such detail there. Same diagonally where (R+B) is about equal to 2G (see the article on CFA sharpness linked earlier for context).

Left: Where the R and B color planes are about the same, baseband luma can reach monochrome Nyquist almost unscathed at the cardinal points, therefore with detail running in the horizontal and vertical directions. Right: same in the diagonal direction when R plus B are about equal to 2G.
Plots of this kind should never have been published in that paper.
But I haven't really given it any thought and I don't know if it would be worth the effort. Though the theory shows us what would be possible with smart enough algorithms.
Actually, the theory is pretty clear - it is impossible. Now, about the guessing: it goes back at least 80 years. Various forms of regularization, each one based on its own a-priori/statistical assumptions, compressed sensing, and now ML (which is actually quite old as well). They all "work" when they do, unless they fail; and each specialist has very convincing publications and examples proving that their method beats all the others.
 
Sure JACS, 'educated guesses' would be better wording than 'knowledge'. Though as you say all demosaicers have to make such guesses by definition. Surprisingly (?), in general photographers appear to be pretty happy about the guesses that they make: excluding AI, algorithms in commercial raw converters seem to have remained relatively stable over the last decade, a question of diminishing returns I suspect.

As to the plots, I took a cue from the Alleyson and Dubois work referenced in the article and I think they are valid for the purpose of illustrating the effect of a Bayer CFA in the frequency domain somewhat intuitively. If I were to generate them today I would do it slightly differently in order to eliminate a couple of additional variables I didn't think about at the time.

Jack
 