Can pixel shift increase resolution?

Most of the adjacent colors in that poster differ greatly in either value or saturation, which is generally fine according to design rules. But notice that the "Bermuda" text and the woman's hat, similar in value and saturation to the surrounding sky, are both outlined in white, as the old design rules would require.

The one major exception to the rule against placing similar values and saturations next to each other is the sun. I wonder whether that was intended? Perhaps it produces a slight visual vibration, whatever that may be.

The rules and practices of the graphic arts may be a more reliable source for examining physiological phenomena than much of the fine arts, which often assert that "there are no rules," or that "rules are meant to be broken."

Take a look at this infographic: graphic-design-rules.

Note rule #5, "Avoid colors that clash". It is an interesting scientific puzzle: do certain color combinations reliably clash across observers, what exactly is perceived as a clash, can the clashing be quantified, are there optical consequences of clashing, and what are the underlying physiological reasons for clashing? The underlying reasons for other rules in that infographic have scientific interest as well.

I admit this is pretty far removed from the OP. However, I wonder how certain color combinations offer different perceived contrast and resolution than others: for example, consider the ability of a human to visually resolve a sine wave pattern made up not of black and white but of different pairs of colors, or the ability of a CFA to resolve the same and how pixel shift may change that.
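For anyone who wants to try this at home, here is a minimal numpy sketch (my own illustration, not from any paper) that builds the kind of test target I mean: a sine-wave grating that oscillates between two chosen colours instead of black and white. The colour pairs, cycle count, and image size are arbitrary assumptions.

```python
import numpy as np

def colour_grating(c0, c1, cycles=8, size=256):
    """Return a (size, size, 3) image blending colour c0 -> c1 sinusoidally along x."""
    x = np.linspace(0, 2 * np.pi * cycles, size)
    w = (np.sin(x) + 1) / 2                      # 0..1 modulation along x
    c0, c1 = np.asarray(c0, float), np.asarray(c1, float)
    row = c0[None, :] * (1 - w)[:, None] + c1[None, :] * w[:, None]
    return np.tile(row[None, :, :], (size, 1, 1))

# Black/white grating vs. an (approximately) red/green pair at the same frequency:
bw = colour_grating([0, 0, 0], [1, 1, 1])
rg = colour_grating([1, 0, 0], [0, 1, 0])
```

Viewing the two gratings at increasing distance (or shrinking `cycles` into finer patterns) gives a crude feel for how much sooner the colour-only grating washes out than the luminance one.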
It's been done many times, though usually on Lab chromaticity axes - i.e. red/green and yellow/blue, which are opponent colours.

Here's an example, often quoted in the literature...

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1193381/pdf/jphysiol00580-0382.pdf

The problem with that analysis is that it only provides a threshold value, which is not strong enough to stimulate the colour-opponent cells in the PVC that detect colour edges.

Our impression of hue contrast is heightened by sharp edges, which actually increases our saturation perception and creates a band-pass response.

In other words, colour discrimination is not as easily separated from luminance as was once assumed.


In general terms, the graphical rules on colour matching are very well known and have been around a long time. For example, I found this on a home decorating site!

https://www.canva.com/colors/color-wheel/

I suspect the chromostereopsis effect you describe with certain colour combinations has a lot to do with opponent-colour neurons and the grouping of L and M cone clusters - which have quite close spectral sensitivities - with (relatively sparse) blue-cone input in the LGN.

In other words, opponent pairs are either SM/L or SL/M. These produce the four primary opponent hues (R,G,Y,B) in which R is not a spectral colour but a magenta hue.

L is never perceived uniquely, so what we call 'red' is actually SL, and what we call blue is actually SM. This causes a degree of colour dithering in colours away from our peak sensitivity (R and B) as opposed to those where L or M dominate.

Combining two of these colours - both using S cone inputs, could have interesting consequences. Try creating a square of L50 a127 b0 on a background of L50 a0 b-127. It's quite hard to look at... ;-)
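If you want to reproduce that patch without a Lab-aware editor, here's a rough numpy sketch of the conversion (my own illustration: D65 white point, hard gamut clipping, so it's only an approximation - those a*/b* values are far outside the sRGB gamut).

```python
import numpy as np

def lab_to_srgb(lab):
    """Approximate CIELAB -> sRGB (D65), with crude clipping of out-of-gamut values."""
    L, a, b = np.moveaxis(np.asarray(lab, float), -1, 0)
    fy = (L + 16) / 116
    fx, fz = fy + a / 500, fy - b / 200
    d = 6 / 29
    finv = lambda t: np.where(t > d, t**3, 3 * d * d * (t - 4 / 29))
    X, Y, Z = 0.95047 * finv(fx), 1.0 * finv(fy), 1.08883 * finv(fz)
    M = np.array([[ 3.2406, -1.5372, -0.4986],
                  [-0.9689,  1.8758,  0.0415],
                  [ 0.0557, -0.2040,  1.0570]])
    rgb = np.tensordot(np.stack([X, Y, Z], -1), M.T, axes=1)
    rgb = np.clip(rgb, 0, 1)                      # crude gamut clipping
    return np.where(rgb <= 0.0031308, 12.92 * rgb, 1.055 * rgb**(1 / 2.4) - 0.055)

img = np.empty((256, 256, 3))
img[:] = [50, 0, -127]                            # isoluminant blue background
img[64:192, 64:192] = [50, 127, 0]                # isoluminant red square
srgb = lab_to_srgb(img)                           # save or display this to view it
```

Because of the gamut clipping the on-screen patch won't be exactly isoluminant, but it's close enough to show the effect.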

There are all kinds of other anomalies. My favourite is the Helmholtz-Kohlrausch effect.

There are also differences in perceived brightness. Coloured lights are perceived as more intense than white ones with the same luminous intensity. This has consequences in terms of our visual contrast response to emissive media as opposed to reflective media.

Back to the OP, the advantage of 1-pixel shifting is the full R and B response at each pixel location. This not only reduces moiré; as I mentioned above, sharper colour edges also trigger double-opponent neurons and have a disproportionate effect on colour resolution.
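As a toy illustration of why the 4-shot cycle gives full colour at each site (an idealised model of my own: RGGB mosaic, no noise, no motion, perfect 1-pixel shifts):

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random((64, 64, 3))                   # "true" full-colour scene

def bayer_channel(y, x):
    """CFA colour index at (row y, col x) for an RGGB mosaic: 0=R, 1=G, 2=B."""
    return [[0, 1], [1, 2]][y % 2][x % 2]

shifts = [(0, 0), (0, 1), (1, 1), (1, 0)]         # the usual 4-shot cycle
merged = np.zeros_like(scene)
for dy, dx in shifts:
    for y in range(scene.shape[0]):
        for x in range(scene.shape[1]):
            c = bayer_channel(y + dy, x + dx)     # CFA colour seen after the shift
            merged[y, x, c] = scene[y, x, c]      # sensor moves, scene stays put

assert np.allclose(merged, scene)                 # every pixel got true R, G and B
```

Over the four offsets, every scene position is visited by an R, a B, and two G photosites, which is exactly the "no demosaicing needed" property under discussion.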

Hence my contention that Foveon sensors look sharper than they should based on traditional colour CSF predictions. Not sure Jim agrees with me though ;-)

--
"A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away." Antoine de Saint-Exupery
 
But how can we measure colour MTF in a useful way (i.e. something akin to Delta-E), and how visible is it in comparison to luminance MTF?
I measure MTF on each of the raw channels, and for the past year or so I've been doing that on both sides of and through the focal plane. That gives lots of information about LoCA, LaCA, and various aberrations. However, I don't see much value in merging those together and converting to a CIE space.
Perhaps 57 is wondering about the perceptual side of things. I have often heard it said that the HVS takes sharpness cues from its monochromatic channel,
Not entirely, apparently.

https://www.sciencedirect.com/science/article/pii/S0042698911000526
and that it is much less sensitive to it in the color difference channels. That makes good intuitive sense and seems to match personal experience, though it would be nice to see some data to support the feeling.
Less sensitive, yes - but not as much as once assumed.

But even if it were less sensitive, that doesn't mean that some improvement would not be clearly visible at normal viewing distances.

Intuitively, looking at quite low res Foveon images, I think the issue is underestimated.
I don't have any experience with Foveon images, but I have a lot of experience with Betterlight images, which also capture all three raw color planes at the same location. I think the reason the Betterlight 144MP backs punch so far above their weight is the relative lack of aliasing compared with Bayer-CFA images, not sharpness.
Not convinced ;-) I think it applies well below aliasing frequencies.

It's really hard to put colour perception down to a single factor as far as I can tell.
 
I would think that perception of color is almost orthogonal to perception of detail.

Best regards

Erik
 
Not as it turns out. Check the link I added above.

The PVC contains many double-opponent colour neurons that detect colour edges even in the absence of luminance differences.
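A crude caricature of such a cell, for illustration only: a difference-of-Gaussians (centre-surround) filter applied to an R-G opponent signal. It responds strongly at an isoluminant red/green border even though the summed luminance is flat everywhere. The filter sizes and the opponent definition are my own simplifying assumptions.

```python
import numpy as np

def gaussian_blur_1d(v, sigma):
    """Blur a 1-D signal with a normalised Gaussian kernel."""
    r = int(3 * sigma)
    k = np.exp(-np.arange(-r, r + 1)**2 / (2 * sigma**2))
    return np.convolve(v, k / k.sum(), mode="same")

# Isoluminant step: red on the left, green on the right, R+G constant throughout.
red = np.r_[np.full(100, 1.0), np.full(100, 0.0)]
green = 1.0 - red
opponent = red - green                            # R-G opponent channel
luminance = red + green                           # flat: no luminance edge at all

# Centre (narrow) minus surround (wide) blur = double-opponent-style response:
response = gaussian_blur_1d(opponent, 2) - gaussian_blur_1d(opponent, 6)
```

The response is essentially zero in the uniform regions and peaks near the colour boundary, despite `luminance` carrying no edge for a purely achromatic detector to find.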
 
Not convinced ;-) I think it applies well below aliasing frequencies.
Any evidence for that? Any explanation as to why? Any reason why Betterlight images wouldn't exhibit the same properties?
It's really hard to put colour perception down to a single factor as far as I can tell.
I'm certainly not doing that.
 
Any evidence for that? Any explanation as to why? Any reason why Betterlight images wouldn't exhibit the same properties?
No, I'm sure they do look better for all the same reasons.
Sorry, you only mentioned aliasing. I just thought there may be other reasons too.

I can't prove it, but the images from the DP2 Quattro are interesting. Compared here with another APS-C camera and two high-res FF cameras. Not twice the resolution, I grant you, but the Quattro has only one 20 MP layer and two 5 MP layers.
 
57even wrote: I can't prove it, but the images from the DP2 Quattro are interesting. Compared here with another APS-C camera and two high-res FF cameras. Not twice the resolution, I grant you, but the Quattro has only one 20 MP layer and two 5 MP layers.
Mushy, probably in part to counteract the heavy aliasing that often gives Sigmas their interesting look - in combination with the more challenging color model.


The Quattro cameras are indeed interesting because they are yet another demonstration that the luminance/central channel is dominant as far as the perception of sharpness is concerned. Other such demonstrations, of course, are Bayer CFAs and the multitude of image-compression schemes that perform chroma subsampling, including the ubiquitous JPEG.

Jack
 
Mushy, probably in part to counteract the heavy aliasing that often gives Sigmas their interesting look - in combination with the more challenging color model.
Mushy here, but not everywhere. I can't find any images of isoluminant colour patterns to test it with. All your examples have luminance contrast, but to be fair, so did mine.

True, I chose a bad example in the Quattro. Kind of a side issue anyway which rather distracted from my original question.
The Quattro cameras are indeed interesting because they are yet another demonstration that the luminance/central channel is dominant as far as the perception of sharpness is concerned. Other such demonstrations, of course, are Bayer CFAs and the multitude of image-compression schemes that perform chroma subsampling, including the ubiquitous JPEG.
I am not claiming that luminance isn't dominant. I don't think I said that anywhere.

I'm just saying that colour resolution is not as low as it's often assumed to be, and can even be band-pass in some situations. This is because the majority of colour-sensing neurons in the PVC are edge detectors, but they have a higher activation threshold.

My inference was that achieving full colour information at each pixel would improve colour acuity at the relevant frequencies more than we might predict, provided the contrast is raised above that activation threshold.

I referenced a paper in my previous post to Jim and would be interested in his opinion about how it would present itself in images. I may well be missing something that makes it irrelevant.

But until we can compare images of isoluminant colour patterns with and without pixel shift I guess we will never know for sure.
 
About the Sigma: many Sigma owners are proud that their cameras "resolve" those fine lines in our favorite part of the studio scene. A quick comparison with the 150 MP Phase One, with the Sigma upscaled, reveals how fake that is. Roughly speaking, it "resolves" every second line (plus classical aliasing), but hey, it looks sharp!

An "honest" rendering would just blur them at that resolution, which is what cameras with an AA filter and a similar pixel count do.

SD1 crop, upscaled, from the "RAW" image


Phase One crop
 
