Can pixel shift increase resolution?

But how can we measure colour MTF in a useful way (ie something akin to Delta-E) and how visible is this in comparison to luminance MTF?
I measure MTF on each of the raw channels, and for the past year or so I've been doing that on both sides of and through the focal plane. That gives lots of information about LoCA, LaCA, and various aberrations. However, I don't see much value in merging those together and converting to a CIE space.
Perhaps 57 is wondering about the perceptual side of things. I have often heard it said that the HVS takes sharpness cues from its monochromatic channel, and that it is much less sensitive to it in the color difference channels.

That makes good intuitive sense and seems to match personal experience, though it would be nice to see some data to support the feeling.
There are the CSF curves:
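The chart itself isn't reproduced here, but as a rough numerical stand-in, this is the often-cited Mannos-Sakrison (1974) approximation to the luminance CSF; treat the model choice and its constants as an assumption, not necessarily the curves originally posted:

```python
# Minimal sketch: Mannos-Sakrison (1974) luminance CSF approximation.
# f is spatial frequency in cycles/degree; sensitivity peaks near 8 cy/deg
# and falls off steeply at higher frequencies.
import numpy as np

def csf_luminance(f):
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

for f in [0.5, 1, 2, 4, 8, 16, 32, 60]:
    print(f"{f:5.1f} cy/deg -> relative sensitivity {csf_luminance(f):.3f}")
```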

 
But how can we measure colour MTF in a useful way (ie something akin to Delta-E) and how visible is this in comparison to luminance MTF?
I measure MTF on each of the raw channels, and for the past year or so I've been doing that on both sides of and through the focal plane. That gives lots of information about LoCA, LaCA, and various aberrations. However, I don't see much value in merging those together and converting to a CIE space.
Wasn't suggesting you did, per se. Only asking how we can judge colour as opposed to luminance resolution. ie how does colour discrimination improve in the frequency domain with pixel shift, and how does it compare to human sensitivity?
I just posted this link in another response, but it may be useful in discussing color sharpness:

https://blog.kasson.com/the-last-word/contrast-sensitivity-functions-and-photography/

Chroma doesn't have to be sharp in order for the image to be perceived as sharp.
 
But how can we measure colour MTF in a useful way (ie something akin to Delta-E) and how visible is this in comparison to luminance MTF?
I measure MTF on each of the raw channels, and for the past year or so I've been doing that on both sides of and through the focal plane. That gives lots of information about LoCA, LaCA, and various aberrations. However, I don't see much value in merging those together and converting to a CIE space.
Perhaps 57 is wondering about the perceptual side of things. I have often heard it said that the HVS takes sharpness cues from its monochromatic channel,
Not entirely, apparently.

https://www.sciencedirect.com/science/article/pii/S0042698911000526
and that it is much less sensitive to it in the color difference channels. That makes good intuitive sense and seems to match personal experience, though it would be nice to see some data to support the feeling.
Less sensitive, yes - but not as much as once assumed.

But even if it were less sensitive, that doesn't mean that some improvement would not be clearly visible at normal viewing distances.

Intuitively, looking at quite low res Foveon images, I think the issue is underestimated.

Then there is the issue of demosaicing...
 
But how can we measure colour MTF in a useful way (ie something akin to Delta-E) and how visible is this in comparison to luminance MTF?
I measure MTF on each of the raw channels, and for the past year or so I've been doing that on both sides of and through the focal plane. That gives lots of information about LoCA, LaCA, and various aberrations. However, I don't see much value in merging those together and converting to a CIE space.
Wasn't suggesting you did, per se. Only asking how we can judge colour as opposed to luminance resolution. ie how does colour discrimination improve in the frequency domain with pixel shift, and how does it compare to human sensitivity?
I just posted this link in another response, but it may be useful in discussing color sharpness:

https://blog.kasson.com/the-last-word/contrast-sensitivity-functions-and-photography/

Chroma doesn't have to be sharp in order for the image to be perceived as sharp.
No, but there isn't always a significant luma contrast in coloured patterns.

I was aware of the original Mullen work on chroma CSF, but this seems to have been superseded.

https://www.sciencedirect.com/science/article/pii/S0042698911000526

Note the chart in Fig 5.
 
But how can we measure colour MTF in a useful way (ie something akin to Delta-E) and how visible is this in comparison to luminance MTF?
I measure MTF on each of the raw channels, and for the past year or so I've been doing that on both sides of and through the focal plane. That gives lots of information about LoCA, LaCA, and various aberrations. However, I don't see much value in merging those together and converting to a CIE space.
Wasn't suggesting you did, per se. Only asking how we can judge colour as opposed to luminance resolution. ie how does colour discrimination improve in the frequency domain with pixel shift, and how does it compare to human sensitivity?
I just posted this link in another response, but it may be useful in discussing color sharpness:

https://blog.kasson.com/the-last-word/contrast-sensitivity-functions-and-photography/

Chroma doesn't have to be sharp in order for the image to be perceived as sharp.
That's why a painting can combine lines drawn with a pen (giving a sharp luminance signal) with quite loose washes of watercolour. This technique is widely used in illustrations to children's books, from Beatrix Potter's "Peter Rabbit" series onward.

Don Cox
 
But how can we measure colour MTF in a useful way (ie something akin to Delta-E) and how visible is this in comparison to luminance MTF?
I measure MTF on each of the raw channels, and for the past year or so I've been doing that on both sides of and through the focal plane. That gives lots of information about LoCA, LaCA, and various aberrations. However, I don't see much value in merging those together and converting to a CIE space.
Wasn't suggesting you did, per se. Only asking how we can judge colour as opposed to luminance resolution. ie how does colour discrimination improve in the frequency domain with pixel shift, and how does it compare to human sensitivity?
I just posted this link in another response, but it may be useful in discussing color sharpness:

https://blog.kasson.com/the-last-word/contrast-sensitivity-functions-and-photography/

Chroma doesn't have to be sharp in order for the image to be perceived as sharp.
That's why a painting can combine lines drawn with a pen (giving a sharp luminance signal) with quite loose washes of watercolour. This technique is widely used in illustrations to children's books, from Beatrix Potter's "Peter Rabbit" series onward.
It's also why PhotoCD and many video encodings of color work and worked.
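To see that concretely, here is a minimal sketch (plain NumPy on a synthetic two-tone image; the BT.601 luma weights and a simple box blur stand in for the chroma subsampling a real codec would use) showing that softening only the color-difference planes leaves the luma edge, and with it perceived sharpness, intact:

```python
# Toy demo: blur only the chroma (color-difference) planes of a sharp
# two-color edge and show that the luma edge survives untouched.
# BT.601 luma weights; a box blur stands in for chroma subsampling.
import numpy as np

# synthetic image: a sharp vertical edge between two colors
img = np.zeros((64, 64, 3))
img[:, :32] = (0.9, 0.2, 0.2)   # reddish left half
img[:, 32:] = (0.2, 0.7, 0.2)   # greenish right half

# RGB -> luma + two color-difference channels
r, g, b = img[..., 0], img[..., 1], img[..., 2]
y = 0.299 * r + 0.587 * g + 0.114 * b
cb, cr = b - y, r - y

def blur_rows(x, k=9):
    # crude horizontal box blur, standing in for chroma subsampling
    kernel = np.ones(k) / k
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, x)

cb_soft, cr_soft = blur_rows(cb), blur_rows(cr)

# rebuild RGB with the original sharp luma but softened chroma
r2 = cr_soft + y
b2 = cb_soft + y
g2 = (y - 0.299 * r2 - 0.114 * b2) / 0.587
out = np.clip(np.stack([r2, g2, b2], axis=-1), 0.0, 1.0)

# luma of the result equals the original luma, so the edge is still sharp
y2 = 0.299 * out[..., 0] + 0.587 * out[..., 1] + 0.114 * out[..., 2]
print("max luma difference after blurring chroma:", np.abs(y - y2).max())
```

The reconstruction preserves luma exactly by construction (barring clipping), so the edge survives however much the chroma planes are softened.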
 
But how can we measure colour MTF in a useful way (ie something akin to Delta-E) and how visible is this in comparison to luminance MTF?
I measure MTF on each of the raw channels, and for the past year or so I've been doing that on both sides of and through the focal plane. That gives lots of information about LoCA, LaCA, and various aberrations. However, I don't see much value in merging those together and converting to a CIE space.
Perhaps 57 is wondering about the perceptual side of things. I have often heard it said that the HVS takes sharpness cues from its monochromatic channel,
Not entirely, apparently.

https://www.sciencedirect.com/science/article/pii/S0042698911000526
and that it is much less sensitive to it in the color difference channels. That makes good intuitive sense and seems to match personal experience, though it would be nice to see some data to support the feeling.
Less sensitive, yes - but not as much as once assumed.

But even if it were less sensitive, that doesn't mean that some improvement would not be clearly visible at normal viewing distances.

Intuitively, looking at quite low res Foveon images, I think the issue is underestimated.
I don't have any experience with Foveon images, but I have a lot of experience with Betterlight images, which also capture all three raw color planes at the same location. I think the reason the Betterlight 144MP backs punch so far above their weight is the relative lack of aliasing wrt Bayer-CFA images, not sharpness.
Then there is the issue of demosaicing...
 
But how can we measure colour MTF in a useful way (ie something akin to Delta-E) and how visible is this in comparison to luminance MTF?
I measure MTF on each of the raw channels, and for the past year or so I've been doing that on both sides of and through the focal plane. That gives lots of information about LoCA, LaCA, and various aberrations. However, I don't see much value in merging those together and converting to a CIE space.
Wasn't suggesting you did, per se. Only asking how we can judge colour as opposed to luminance resolution. ie how does colour discrimination improve in the frequency domain with pixel shift, and how does it compare to human sensitivity?
I just posted this link in another response, but it may be useful in discussing color sharpness:

https://blog.kasson.com/the-last-word/contrast-sensitivity-functions-and-photography/

Chroma doesn't have to be sharp in order for the image to be perceived as sharp.
That's why a painting can combine lines drawn with a pen (giving a sharp luminance signal) with quite loose washes of watercolour. This technique is widely used in illustrations to children's books, from Beatrix Potter's "Peter Rabbit" series onward.

Don Cox
It depends what you mean by "sharp". Clearly, outlining an edge will sharpen it.

But a sharp image does not have to contain any lines at all, just a boundary. It can be a colour or a luminance boundary.
 
It depends what you mean by "sharp". Clearly, outlining an edge will sharpen it.

But a sharp image does not have to contain any lines at all, just a boundary. It can be a colour or a luminance boundary.
Here's a visual demo of how the eye responds to chroma changes at varying spatial frequency.

https://blog.kasson.com/the-last-word/chromaticity-csfs/

You can see how it works with your own eyes.

Jim
 
It depends what you mean by "sharp". Clearly, outlining an edge will sharpen it.

But a sharp image does not have to contain any lines at all, just a boundary. It can be a colour or a luminance boundary.
The rules of European heraldry specify which colors or 'tinctures' in arms can be directly adjacent and which ones need a dividing line between them. The tinctures are divided into colours, which are dark, and metals which are light. No colour can be placed next to another colour, and no metal next to another metal without a dividing line. But it is OK to place metals and colours adjacent. While the history of this "rule of tincture" is unclear, it seems to have been observed from the beginning, and certainly it makes the patterns more visible.

Are you familiar with the optical illusion that occurs when two saturated colors, especially of similar brightness, but very different hue are placed adjacent to each other? This can cause an optical "vibration": red and green is a classic combination. It seems that these combinations make edge detection difficult or even ambiguous, perhaps by the reduction of perceived resolution.

This vibration seems related to the phenomenon of chromostereopsis, where it appears that one color is at a different depth than another adjacent color. I once went to an art exhibit which took advantage of chromostereopsis: the artworks were mainly illuminated with ultraviolet, and the resulting UV excitation colors were probably close to spectral, giving a very strong effect.
 
It depends what you mean by "sharp". Clearly, outlining an edge will sharpen it.

But a sharp image does not have to contain any lines at all, just a boundary. It can be a colour or a luminance boundary.
The rules of European heraldry specify which colors or 'tinctures' in arms can be directly adjacent and which ones need a dividing line between them. The tinctures are divided into colours, which are dark, and metals which are light. No colour can be placed next to another colour, and no metal next to another metal without a dividing line. But it is OK to place metals and colours adjacent. While the history of this "rule of tincture" is unclear, it seems to have been observed from the beginning, and certainly it makes the patterns more visible.
Thank you for that!
 
It depends what you mean by "sharp". Clearly, outlining an edge will sharpen it.

But a sharp image does not have to contain any lines at all, just a boundary. It can be a colour or a luminance boundary.
Here's a visual demo of how the eye responds to chroma changes at varying spatial frequency.

https://blog.kasson.com/the-last-word/chromaticity-csfs/

You can see how it works with your own eyes.

Jim
I can see how it works with a sine-wave grating.

But that research was done in 1985 by Kathy Mullen.

The role of double-opponent neurons was discovered later. Threshold measurements of sine waves do not activate double-opponent neurons, but sharp-edged patterns, like Snellen letters, do; in that case there is a band-pass response with a higher peak frequency.

So it's not particularly representative of my response to image detail above threshold. I would be interested in your response to the paper I cited.
 
It depends what you mean by "sharp". Clearly, outlining an edge will sharpen it.

But a sharp image does not have to contain any lines at all, just a boundary. It can be a colour or a luminance boundary.
The rules of European heraldry specify which colors or 'tinctures' in arms can be directly adjacent and which ones need a dividing line between them. The tinctures are divided into colours, which are dark, and metals which are light. No colour can be placed next to another colour, and no metal next to another metal without a dividing line. But it is OK to place metals and colours adjacent. While the history of this "rule of tincture" is unclear, it seems to have been observed from the beginning, and certainly it makes the patterns more visible.

Are you familiar with the optical illusion that occurs when two saturated colors, especially of similar brightness, but very different hue are placed adjacent to each other? This can cause an optical "vibration": red and green is a classic combination. It seems that these combinations make edge detection difficult or even ambiguous, perhaps by the reduction of perceived resolution.

This vibration seems related to the phenomenon of chromostereopsis, where it appears that one color is at a different depth than another adjacent color. I once went to an art exhibit which took advantage of chromostereopsis: the artworks were mainly illuminated with ultraviolet, and the resulting UV excitation colors were probably close to spectral, giving a very strong effect.
And yet it seems to work fine in art deco posters...

https://images.squarespace-cdn.com/...5U83476A8/Art+Deco+Travel+Poster?format=1500w
 
Thank you for the responses.

It seems clear that pixel-shift improves image quality and that we can discern more detail. What would increased resolution add to the image quality?


Single exposure vs. 16X pixel shift on Sony, using LR.

Acutance would be identical, but the single-shot image loses sharpness near Nyquist, due to interpolation. There is a lot of detail the single image cannot resolve.



[ATTACH alt="Both images, Image height 30" at 180 PPI. "]2978670[/ATTACH]
Both images, Image height 30" at 180 PPI.



[ATTACH alt="Both images, image height 30" at 180 PPI. "]2978672[/ATTACH]
Both images, image height 30" at 180 PPI.

Best regards

Erik

--
Erik Kaffehr
Website: http://echophoto.dnsalias.net
Magic tends to disappear in controlled experiments…
Gallery: http://echophoto.smugmug.com
Articles: http://echophoto.dnsalias.net/ekr/index.php/photoarticles
 

I understand that image quality increases with pixel-shift. However, I wonder if it ever increases resolution.

Jim Kasson's analysis of the a7R IV shows that resolution does not increase with pixel shift: Does pixel-shift increase resolution?

Frans van den Bergh wrote that pixel shifting does not increase resolution (link).
It can, but there are aspects of pixel shift that you might not know about. The need for multiple exposures makes it a poor fit for moving subjects and scenes with movement. Based on multiple sources of information, pixel shift is more a way to decrease noise than to increase resolution. Interestingly, after posting a question about whether or not the lens needs to resolve more with pixel shift than without it, I saw various responses. Not everyone agreed that the lens needs to be better, so that is a question I've been asking that hasn't been adequately answered for me. It's also possible to get more resolution by assembling a large image from multiple smaller ones, which might be better, but that's a different technique.
 
I understand that image quality increases with pixel-shift. However, I wonder if it ever increases resolution.

Jim Kasson's analysis of the a7R IV shows that resolution does not increase with pixel shift: Does pixel-shift increase resolution?

Frans van den Bergh wrote that pixel shifting does not increase resolution (link).
It can, but there are aspects of pixel shift that you might not know about. The need for multiple exposures makes it a poor fit for moving subjects and scenes with movement. Based on multiple sources of information, pixel shift is more a way to decrease noise than to increase resolution. Interestingly, after posting a question about whether or not the lens needs to resolve more with pixel shift than without it, I saw various responses. Not everyone agreed that the lens needs to be better, so that is a question I've been asking that hasn't been adequately answered for me. It's also possible to get more resolution by assembling a large image from multiple smaller ones, which might be better, but that's a different technique.
Thank you for your response. I am aware that motion artifacts limit the applicability of pixel-shift, though some cameras have started adding in-camera motion-artifact removal. Unfortunately, that feature comes at the cost of some of the resolution and other related benefits.

AFAIK, the consensus is that pixel-shift contributes to image quality mainly by reducing noise and false colors.
 
AFAIK, the consensus is that pixel-shift contributes to image quality mainly by reducing noise and false colors.
You cannot separate false colors from (lack of) resolution in general.
 
... after posting a question about whether or not the lens needs to resolve more with pixel shift than without it, I saw various responses. Not everyone agreed that the lens needs to be better, so that is a question I've been asking that hasn't been adequately answered for me.
Of course the better the lens the better the 'sharpness' in both shifted and unshifted captures because - taking the Modulation Transfer Function as input to the sharpness metric of choice - imaging system MTF is the product of the individual component MTFs, which can be simplified for this discussion as follows:

(1) MTF_sys(f) = [ MTF_lens(f) · MTF_pixaperture(f) ] ** diraccomb(pf)

with f spatial frequency, · the element-wise product, ** 2D convolution, and diraccomb a 2D train of Dirac delta functions arranged in a rectangular grid. The deltas are 1/p apart in the frequency domain, with p the pixel pitch (they are of course p apart in the spatial domain, at the center of each pixel).

Sampling is performed by the 2D delta train in combination with pixel aperture, which is the effective sampling area of the pixel, usually assumed to be something close to a square with 100% fill factor.

In a half-pixel shifted capture, pixel aperture does not change (the pixels are physically what they are no matter how many times we use them). On the other hand, the spacing of the Dirac deltas in the spatial domain goes from p to p/2, which means that the deltas of the diraccomb in the frequency domain above will be spaced 2 cycles per pixel pitch apart instead of the original 1 c/p. This doubles Nyquist and the range of unaliased frequencies, with the consequent benefits mentioned upthread - this is where the value of half-pixel shift resides.

There is no need for a better lens to get such benefits. It is obvious from (1) that if MTF_lens improves, so does overall system sharpness - in both the unshifted and shifted cases. Of course, with a better lens there could potentially be more aliased information to bring back into the fold in the latter.
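To make (1) concrete, here is a minimal 1D numerical sketch; the Gaussian lens MTF and the 100% fill-factor square pixel aperture are stand-in assumptions for illustration, not measurements:

```python
# Minimal 1D sketch of equation (1). Sampling at pitch 'pitch' replicates the
# pre-sampling MTF at multiples of 1/pitch; the value of the first replica
# folded back onto a given frequency is a proxy for the aliasing there.
import numpy as np

p = 1.0                         # pixel pitch; frequencies below in cycles/p
f = np.linspace(0.0, 2.0, 401)  # spatial frequency axis

mtf_lens = np.exp(-(f / 0.45) ** 2)   # stand-in lens MTF (Gaussian falloff)
mtf_pix = np.abs(np.sinc(f * p))      # square pixel aperture of width p
pre_sampling = mtf_lens * mtf_pix     # the bracketed term in (1)

for pitch, label in [(p, "unshifted"), (p / 2, "half-pixel shift")]:
    nyquist = 1.0 / (2.0 * pitch)
    # first alias replica is centered at 1/pitch; evaluate it at 0.5 c/p
    alias = np.interp(abs(1.0 / pitch - 0.5), f, pre_sampling)
    print(f"{label:17s} Nyquist = {nyquist:.2f} c/p, "
          f"alias amplitude folded onto 0.5 c/p = {alias:.6f}")
```

Note that the pixel-aperture term uses the same width p in both cases - only the sampling pitch changes. Halving it moves the first replica from 1 c/p out to 2 c/p, so the energy it folds onto 0.5 c/p drops by orders of magnitude, which is the unaliased-bandwidth benefit described above.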

Jack

PS More here

https://www.strollswithmydog.com/resolution-model-digital-cameras-iii/
 
Most of the adjacent colors in that poster differ greatly either in value or saturation, and that generally is OK according to design rules. But notice that the "Bermuda" text and the woman's hat, both similar in value and saturation to the surrounding sky, are outlined in white, as the old design rules would require.

The one major exception to the rule of avoiding the adjacency of similar values and saturations is the sun. I wonder if that was intended? Perhaps it produces a slight visual vibration, whatever that may be?

The rules and practices of the graphic arts may be a more reliable source for examining physiological phenomena than much of the fine arts, which often assert that "there are no rules," or that "rules are meant to be broken."

Take a look at this infographic: graphic-design-rules.

Note rule #5, "Avoid colors that clash". It is an interesting scientific puzzle: do certain color combinations reliably clash across observers, what exactly is perceived as a clash, can the clashing be quantified, are there optical consequences of clashing, and what are the underlying physiological reasons for clashing? The underlying reasons for other rules in that infographic have scientific interest as well.

I admit this is pretty far removed from the OP. However, I wonder how certain color combinations offer different perceived contrast and resolution than others: for example, consider the ability of a human to visually resolve a sine wave pattern made up not of black and white but of different pairs of colors, or the ability of a CFA to resolve the same and how pixel shift may change that.
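For anyone who wants to experiment, here is a minimal sketch of such a target: a red/green sine grating with (nominally) constant BT.601 luma, so that only chroma varies across the cycle. The weights and the constant-luma trick are illustrative assumptions; a real psychophysics target would need a linearized, calibrated display:

```python
# Sketch of a (nominally) isoluminant red/green sine grating: luma constant,
# chroma oscillating. BT.601 weights; assumes a linear display for simplicity.
import numpy as np

width, height, cycles = 512, 128, 8
phase = 0.5 + 0.5 * np.sin(2 * np.pi * cycles * np.arange(width) / width)

target_y = 0.25                      # constant luma level, keeps r, g in gamut
r = target_y / 0.299 * phase         # red carries the grating
g = target_y / 0.587 * (1 - phase)   # green in counter-phase
row = np.stack([r, g, np.zeros_like(r)], axis=-1)
grating = np.tile(row[None, :, :], (height, 1, 1))

# verify: luma is flat while the R-G chroma signal swings through the cycle
luma = (0.299 * grating[..., 0] + 0.587 * grating[..., 1]
        + 0.114 * grating[..., 2])
print("luma std:", luma.std())
print("R-G peak-to-peak:", np.ptp(grating[..., 0] - grating[..., 1]))
```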
 
