Pixel shift and spatially-correlated noise.

John Sheehy

Much discussion about pixel shift has centered on aliasing and resolution (with color resolution benefiting most, of course, with a CFA).

I thought it would be useful to point out that pixel shift, besides lowering the quantity of noise compared to a single exposure with the same analog gain, should also greatly reduce or eliminate some of the more egregious forms of correlated noise and other spatially-correlated artifacts. Think of the "endless maze" pattern that results when the two green channels in a CFA have different sensitivities and/or gain (in the case of the original Canon 7D, the two greens even have different color responses!). The discrepancies this causes, as long as they are fixed from shot to shot, disappear completely with a 4-position CFA-nulling shift, because every green location now sums pixel data from both green channels, with both biases included. A crafty conversion program that knows the spectral response differences could even distinguish greens from reds, and greens from blues, more reliably, since two slightly different greens now fully cover the sensor.
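Here is a minimal numerical sketch of what I mean, with made-up gains rather than any real camera's data: give G1 and G2 slightly different responses, and the 4-position sum carries both responses at every scene location, so the parity-correlated pattern vanishes.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 8
scene_g = 1000 + rng.normal(0, 5, (H, W))      # "true" green light at each scene pixel

gain = np.zeros((H, W))                        # green gain of each sensor site (RGGB)
gain[0::2, 1::2] = 1.00                        # G1 sites
gain[1::2, 0::2] = 1.03                        # G2 sites, 3% hotter (invented number)
# R and B sites stay 0 because only green samples are tracked here
print(np.ptp(gain[gain > 0]))                  # 0.03: the mismatch present in one frame

green_sum = np.zeros((H, W))
for dy, dx in [(0, 0), (0, 1), (1, 0), (1, 1)]:   # the four shift positions
    # with the sensor displaced by (dy, dx), scene pixel (y, x) is read by sensor
    # site (y - dy, x - dx); rolling the gain map expresses that in scene coordinates
    site_gain = np.roll(gain, shift=(dy, dx), axis=(0, 1))
    green_sum += site_gain * scene_g

# every scene pixel now got exactly one G1 and one G2 sample, so the ratio to the
# true signal is (1.00 + 1.03) everywhere and the spatial pattern is gone
print(np.ptp(green_sum / scene_g))             # ~0.0 (2.03 at every pixel)
```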

Far too often, when the usefulness of some feature is critiqued, it is wrongly compared to something that was already doing pretty well and not pushed to its visible limits. Pixel shift, while limited in practice to static cameras and subjects to keep things simple, has great potential for artifact reduction beyond fixing the poor red and blue resolution, and the egregious single-channel aliasing, of single CFA exposures.
 
I believe the K1 and D810 share similar sensors. Below are examples of a single exposure with the D810 vs four combined exposures using pixel shift with the K1 (so that the total exposure time is the same as for the single D810 exposure):

[image: D810 single exposure vs K1 four-shot pixel shift]
 
I think the K-1 is closer to the D800E (better high ISOs than the D810), plus Pentax possibly denoises its raw files at high ISO.

For this kind of comparison I prefer to use

- the "Low Light" shots, because DPReview usually pays more attention to matching exposures there than with "Daylight"

- the area around the grey "DIGITAL PHOTOGRAPHY REVIEW" letters .. a denoised raw file will show up as blur and low letter definition ;)

https://www.dpreview.com/reviews/im...x=-0.07483535090580681&y=-0.16335298784240002

--
Ilias
 
Your comparison does show that the D800E sensor is more comparable. The interesting thing here is the effect that pixel shift has in that lighting:



[image: low-light comparison, with and without pixel shift]

Even comparing the K1 to itself at ISO 6400, the difference in color between normal and pixel shift is rather striking. Let's drop it down a couple of stops:



[image: the same comparison a couple of stops lower]

Interesting, isn't it?
 
I believe the K1 and D810 share similar sensors. Below are examples of a single exposure with the D810 vs four combined exposures using pixel shift with the K1 (so that the total exposure time is the same as for the single D810 exposure):
It is clear that the D810 has a lot more luminance noise reduction applied; the luminance noise is sharper in the K1, so this presentation is not truly equitable. It looks like the K1 only has a raw-to-RGB color space conversion applied, and is otherwise pixel-literal.

I really wish that these comparators were possible with literally no NR, and even no intelligent demosaicing. I can see with my own eyes what kind of noise is easily cleaned without destroying detail.

I don't see how this relates to my point about cancelling periodic artifacts at the Nyquist. The D810 does not show such an artifact. It would be necessary to use a camera with a known G1/G2 difference to demonstrate what I was talking about.

What drives me crazy is what manufacturers can get away with in terms of spoonfeeding us features. If a camera and a subject are stabilized enough to use a slow pixel shift, it should be able to repeat the shift over and over for very low effective ISOs, output at perhaps 16 bits of data. You can repeat it yourself, but then you are given a huge load of data that you may not want to handle, and limited options in combining them.
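As a rough sketch of that integration idea, with assumed numbers (16 frames, 200 electrons per pixel, 3 e- read noise, none of them any particular camera's figures): the summed signal grows N times while the combined shot and read noise grows only about sqrt(N) times, and the sum quickly needs a deeper container than a single 14-bit frame.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16                                   # number of repeated exposures (assumed)
signal_e = 200.0                         # mean electrons per pixel per exposure (assumed)
read_e = 3.0                             # read noise, electrons per read (assumed)
pixels = 100_000

frames = (rng.poisson(signal_e, (N, pixels)).astype(float)
          + rng.normal(0.0, read_e, (N, pixels)))

single = frames[0]
summed = frames.sum(axis=0)

print(single.mean() / single.std())      # SNR of one frame, about 14
print(summed.mean() / summed.std())      # SNR of the 16-frame sum, about 4x higher
# the summed values are ~16x larger than a single frame's, which is why the result
# wants a deeper container (e.g. 16-bit or float) than the individual 14-bit raws
```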
 
Even comparing the K1 to itself at ISO 6400, the difference in color between normal and pixel shift is rather striking. Let's drop it down a couple of stops:

Interesting, isn't it?
No; it is a conversion error, and nothing more. No color conversion is equitable unless the colors and contrast are the same, with the same lens.

Whoever does this at DPReview is probably accepting biased converter defaults.

Then there is the issue of lens contrast, which also makes these comparisons not comparisons of sensors, but of systems including specific lenses.

Scene DR confounds sensor DR comparisons of a common, equally-exposed scene, but it does not confound exposure latitude, which is really what DR should be about in a sensor.
 
I don't see how this relates to my point about cancelling periodic artifacts at the Nyquist. The D810 does not show such an artifact. It would be necessary to use a camera with a known G1/G2 difference to demonstrate what I was talking about.
When you say "cancelling" do you really mean averaging to reduce the error?
What drives me crazy is what manufacturers can get away with in terms of spoonfeeding us features. If a camera and a subject are stabilized enough to use a slow pixel shift, it should be able to repeat the shift over and over for very low effective ISOs, output at perhaps 16 bits of data. You can repeat it yourself, but then you are given a huge load of data that you may not want to handle, and limited options in combining them.
I guess you know I am a big fan of digital integration. But why repeat the shift over and over again? Why not shoot N images at shift position 1, then shift and shoot N at position 2 etc? Unless perhaps you want to average out aim position "noise"
 
Unless perhaps you want to average out aim position "noise"
This has always bothered me about pixel shift. Using cameras with mechanical shutters, I find it is the exception that multiple shots truly align at the pixel level even without pixel shift. Even with a camera literally mounted on a block of concrete, modern autofocus optics often have enough internal play that things can shift by an easily detectable fraction of a pixel, just from the vibration induced by the shutter and/or noise in the lens focus and stabilization controls. For example, mounting a typical Canon PowerShot (which I use fleets of under CHDK) and capturing a sequence of shots, I still often find significant pixel-level misalignment just from vibration induced by the in-lens leaf shutter or some other wiggle within the power-driven lens assembly. Fortunately, on my Sony E/FE bodies, I often use old manual lenses which are heavy and tightly toleranced, making shot-to-shot alignment closer to perfect -- especially using electronic shutter.

Given this, I've often wondered if shooting a burst and doing subpixel alignment (e.g., classic superresolution processing) directly on the raw data wouldn't provide at least comparable image quality to using pixel shift. Alternatively, I wonder how much improvement pixel shift could give if the processing for it did subpixel alignment directly on the raw data instead of simply assuming that the shifts perfectly align. Unfortunately, I'm not aware of any consumer camera with pixel shift that allows user-written code to control the shifting, which would be the right way to test this. If Sony hadn't removed the PlayMemories support (and hence OpenMemories programming access) from the A7RIII, I probably would have bought one and played with this by now....
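For anyone wanting to experiment, here is a bare-bones sketch of the kind of subpixel shift estimate I mean. It is generic textbook phase correlation, not any camera's pixel-shift firmware; the function name and the suggestion of feeding it one green plane of the raw are mine.

```python
import numpy as np

def phase_corr_shift(ref, img):
    """Estimate the relative (dy, dx) translation between two same-size 2-D arrays
    by phase correlation, refined to subpixel precision with a 3-point parabolic
    fit around the correlation peak.  (Check the sign convention on synthetic data
    before trusting it for alignment.)"""
    spec = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    spec /= np.abs(spec) + 1e-12                     # keep only the phase
    corr = np.fft.ifft2(spec).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)

    shift = []
    for ax, n in enumerate(corr.shape):
        idx = list(peak)
        vals = []
        for d in (-1, 0, 1):                         # peak and its wrapped neighbours
            idx[ax] = (peak[ax] + d) % n
            vals.append(corr[tuple(idx)])
        denom = vals[0] - 2.0 * vals[1] + vals[2]
        frac = 0.0 if denom == 0 else 0.5 * (vals[0] - vals[2]) / denom
        s = peak[ax] + frac
        shift.append(s - n if s > n / 2 else s)      # map into (-n/2, n/2]
    return tuple(shift)

# On Bayer raws, feed in a single green plane so the CFA pattern itself doesn't
# dominate the correlation, e.g. g = raw[0::2, 1::2]; a shift measured on that
# plane is then in units of *two* sensor pixels.
```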

Has anyone measured how much position error happens in "real world" use of pixel shift?

PS: A secondary issue is what the AA filters do in pixel shift. I think Pentax might have had the right idea using the sensor movement to fake an AA filter.... However, I would expect that an AA filter will tend to hide some degree of pixel alignment error when doing pixel shift.
 
I don't see how this relates to my point about cancelling periodic artifacts at the Nyquist. The D810 does not show such an artifact. It would be necessary to use a camera with a known G1/G2 difference to demonstrate what I was talking about.
When you say "cancelling" do you really mean averaging to reduce the error?
My example was about completely eliminating the influence of any bias, whether offset or scalar, that is fixed and correlated with odd or even pixel rows or columns (the two most common forms being gain or offset differences between odd and even lines of pixels, and G1 and G2 filters that are different). Of course, non-fixed, spatially-random noises don't disappear; they just shrink relative to the efficiently summed signal.
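A tiny synthetic illustration of the row-parity case (the gains and offsets below are invented numbers): one exposure shows odd/even striping, while a 1-row shift-and-sum gives every scene row the same combined gain and offset, leaving only a global bias that a flat-field/offset calibration would remove.

```python
import numpy as np

rng = np.random.default_rng(2)
H, W = 8, 6
scene = 500.0 + rng.normal(0, 3, (H, W))                       # "true" scene values

gain = np.where(np.arange(H)[:, None] % 2 == 0, 1.00, 1.02)    # even vs odd rows (invented)
offset = np.where(np.arange(H)[:, None] % 2 == 0, 10.0, 14.0)  # even vs odd rows (invented)

def expose(shift_rows):
    # sensor displaced by shift_rows: scene row y is read by sensor row y - shift_rows
    g = np.roll(gain, shift_rows, axis=0)
    o = np.roll(offset, shift_rows, axis=0)
    return g * scene + o

single = expose(0)                       # one exposure: row-parity striping
summed = expose(0) + expose(1)           # two positions one row apart, summed

row_err = lambda im, k: im.mean(axis=1) - k * scene.mean(axis=1)
print(np.ptp(row_err(single, 1.0)))      # ~14: alternating rows differ (the stripes)
print(np.ptp(row_err(summed, 2.02)))     # ~0: every row has gain 2.02 and offset 24
```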
What drives me crazy is what manufacturers can get away with in terms of spoonfeeding us features. If a camera and a subject are stabilized enough to use a slow pixel shift, it should be able to repeat the shift over and over for very low effective ISOs, output at perhaps 16 bits of data. You can repeat it yourself, but then you are given a huge load of data that you may not want to handle, and limited options in combining them.
I guess you know I am a big fan of digital integration. But why repeat the shift over and over again? Why not shoot N images at shift position 1, then shift and shoot N at position 2 etc? Unless perhaps you want to average out aim position "noise"
I wasn't throwing out my best shifting ideas there; I was just mentioning how simple it is to repeat exposures, if you're already doing multiple ones and have excluded the kinds of significant subject motion that would break the process.

Now that I seem to be asked to defend repeated sets of 4, I can imagine that if you had a small point of narrow-wavelength light moving in an otherwise static-enough scene, doing the same CFA position multiple times in a row, and then repeating, is more likely to miss the moving point of colored light, especially if there is no AA filter or it can not be properly simulated (perhaps due to short exposure times).

For a truly static scene, stable camera registration, and light that doesn't cycle, it of course makes no sense to make unnecessary stepping motions, and, also of course, more positions have value that just 4 positions can't provide. When such changes become a real possibility, though, the order of exposures can matter beyond mere efficiency of motion.
 
Given this, I've often wondered if shooting a burst and doing subpixel alignment (e.g., classic superresolution processing) directly on the raw data wouldn't provide at least comparable image quality to using pixel shift. Alternatively, I wonder how much improvement pixel shift could give if the processing for it did subpixel alignment directly on the raw data instead of simply assuming that the shifts perfectly align. Unfortunately, I'm not aware of any consumer camera with pixel shift that allows user-written code to control the shifting, which would be the right way to test this. If Sony hadn't removed the PlayMemories support (and hence OpenMemories programming access) from the A7RIII, I probably would have bought one and played with this by now....

Has anyone measured how much position error happens in "real world" use of pixel shift?

PS: A secondary issue is what the AA filters do in pixel shift. I think Pentax might have had the right idea using the sensor movement to fake an AA filter.... However, I would expect that an AA filter will tend to hide some degree of pixel alignment error when doing pixel shift.
I have tried this software:

https://www.photoacute.com

It worked ok on my 7D, but I guess that I was hampered by the OLPF. I believe that this sort of sensor alignment super-resolution depends on spatial aliasing to work. You need the system to pass information about >Nyquist frequencies, or else there is nothing to super-resolve. Somewhat similar to interlacing in old crt tvs.
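A quick 1-D toy example of what I mean (the frequencies are chosen for convenience): a component above the single-exposure Nyquist aliases in each shot, but interleaving two half-pitch-shifted shots recovers it; if an AA filter had removed it before sampling, there would be nothing left to recover.

```python
import numpy as np

N = 64                                     # samples in a single exposure
x = np.arange(2 * N) / (2 * N)             # the doubled ("half-pitch shifted") grid
scene = np.cos(2 * np.pi * 24 * x) + np.cos(2 * np.pi * 40 * x)
# 40 cycles is above the single-exposure Nyquist limit (32 cycles)
# but below the doubled grid's limit (64 cycles)

shot_a = scene[0::2]                       # sensor at its home position
shot_b = scene[1::2]                       # sensor shifted by half the pixel pitch

# in a single exposure the 40-cycle detail aliases down to 64 - 40 = 24 cycles,
# landing on top of the genuine 24-cycle detail
spec_a = np.abs(np.fft.rfft(shot_a)) / (N / 2)
print(round(spec_a[24], 3))                # ~2.0: true 24-cycle + aliased 40-cycle

# interleaving the two shifted exposures separates them again
combined = np.empty(2 * N)
combined[0::2], combined[1::2] = shot_a, shot_b
spec_c = np.abs(np.fft.rfft(combined)) / N
print(round(spec_c[24], 3), round(spec_c[40], 3))   # ~1.0 and ~1.0
# low-pass the scene below 32 cycles first (a strong AA filter) and the 40-cycle
# term is gone from both shots, so the interleave has nothing extra to recover
```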

-h
 
That looks like classic superresolution processing... but then they claim raw-in-raw-out processing...? That can't really work if the raw out has the same dimensions as the input raws, although one can certainly output a bigger DNG that raw converters will understand. When I was referring to working directly on the raws, I meant for superresolution alignment; the result would be an interpolated image (with a superresolution interpolation function that doesn't match normal raw interpolation), which could be delivered as a DNG but wouldn't fit in the original raw wrapper.
 
Given this, I've often wondered if shooting a burst and doing subpixel alignment (e.g., classic superresolution processing) directly on the raw data wouldn't provide at least comparable image quality to using pixel shift. Alternatively, I wonder how much improvement pixel shift could give if the processing for it did subpixel alignment directly on the raw data instead of simply assuming that the shifts perfectly align. Unfortunately, I'm not aware of any consumer camera with pixel shift that allows user-written code to control the shifting, which would be the right way to test this. If Sony hadn't removed the PlayMemories support (and hence OpenMemories programming access) from the A7RIII, I probably would have bought one and played with this by now....

Has anyone measured how much position error happens in "real world" use of pixel shift?

PS: A secondary issue is what the AA filters do in pixel shift. I think Pentax might have had the right idea using the sensor movement to fake an AA filter.... However, I would expect that an AA filter will tend to hide some degree of pixel alignment error when doing pixel shift.
I have tried this software:

https://www.photoacute.com

It worked ok on my 7D, but I guess that I was hampered by the OLPF. I believe that this sort of sensor alignment super-resolution depends on spatial aliasing to work.
Well, an aliasing sensor helps the final product, because the result, if it has sufficient source positions and is properly aligned, is not aliased, but still has all of the high-frequency SNR at the Nyquist that an aliasing system allows. I'm not so sure that it helps with alignment, at least if the alignment intelligence code is well-written. One of the effects of aliasing is that it puts points and edges in the wrong places. An anti-aliased point of light has an analog-correct center of intensity; an aliased one does not, especially when there is some blind or insensitive space between photosites, after all the various optical PSFs do their thing.
You need the system to pass information about >Nyquist frequencies, or else there is nothing to super-resolve.
That's what the change of alignment accomplishes. The only difference between a properly-registered shifted stack from an aliasing sensor and one from a non-aliasing sensor is the contrast of the higher frequencies (while noise at the Nyquist is unaffected), giving higher SNR at the higher image frequencies from the aliasing sensor. AA filters drop SNR at the top of the spectrum; that's their only major downside, IMO, with sharp lenses that need them.
 
I have tried this software:

https://www.photoacute.com

It worked ok on my 7D, but I guess that I was hampered by the OLPF. I believe that this sort of sensor alignment super-resolution depends on spatial aliasing to work. You need the system to pass information about >Nyquist frequencies, or else there is nothing to super-resolve. Somewhat similar to interlacing in old crt tvs.
That looks like classic superresolution processing... but then they claim raw-in-raw-out processing...? That can't really work if the raw out has the same dimensions as the input raws, although one can certainly output a bigger DNG that raw converters will understand. When I was referring to working directly on the raws, I meant for superresolution alignment; the result would be an interpolated image (with a superresolution interpolation function that doesn't match normal raw interpolation), which could be delivered as a DNG but wouldn't fit in the original raw wrapper.
Of course, a program could use super-resolution from registration changes under the hood, and output a DNG with the same x and y dimensions, but full RGB at each pixel, lacking the egregious red and blue channel aliasing of a normal RAW file exposed through a CFA. This means that the converter has less possible mistakes to make, as it doesn't have to abstract anything spatially, concerning color vs luminance.
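Here is a rough sketch of that kind of same-dimension combination, assuming exact one-pixel shifts, an RGGB pattern, and a particular sign convention for the shift direction (this is not any converter's actual code):

```python
import numpy as np

CFA = np.array([[0, 1],          # 0 = R, 1 = G, 2 = B  (RGGB mosaic)
                [1, 2]])

def combine_pixel_shift(frames, offsets):
    """frames: four H x W raw mosaics; offsets: the integer (dy, dx) sensor
    displacement of each frame, e.g. [(0, 0), (0, 1), (1, 0), (1, 1)].
    Returns an H x W x 3 array: each scene pixel was sampled once through R,
    twice through green (G1 and G2) and once through B, so nothing is
    interpolated from neighbours.  Borders wrap here and would be cropped in
    a real implementation; the shift sign convention is an assumption."""
    H, W = frames[0].shape
    acc = np.zeros((H, W, 3))
    cnt = np.zeros((H, W, 3))
    yy, xx = np.mgrid[0:H, 0:W]
    for frame, (dy, dx) in zip(frames, offsets):
        aligned = np.roll(frame, shift=(dy, dx), axis=(0, 1))   # back into scene coords
        colour = CFA[(yy - dy) % 2, (xx - dx) % 2]              # CFA colour each scene
                                                                # pixel saw in this frame
        for ch in range(3):
            mask = colour == ch
            acc[..., ch][mask] += aligned[mask]
            cnt[..., ch][mask] += 1
    return acc / cnt                     # green ends up as the mean of its two samples
```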
 
I tried PhotoAcute back in 2009. I was testing PhotoAcute's auto-alignment & NR-through-stacking of a burst of multiple raw shots, not super-resolution. I used a Pentax K20D and shot a burst of 14 PEF raw shots, hand-held, until the shot buffer filled up. PhotoAcute would not handle PEFs, so it converted them to DNGs automatically before proceeding. (If I had known this, I would have set the K20D to shoot DNGs in the first place.) The sequence was shot at ISO 3200, 1/1500s, f/8. I shot at high ISO to get plenty of noise in a daytime shot.

Here's the first frame with the region-being-compared indicated.

[image: imgp7772_highlighted.jpg]


Now the comparison shot of the region under examination. This is a capture from part of PhotoAcute's screen, so the raw file has not yet been developed and the colours and details are only approximate. I had PhotoAcute display this at 350% zoom since the original region is only about 170x100. I selected Bicubic display resizing in PhotoAcute, so the image is being smoothed, but that doesn't do much to hide the amplified sensor RN. (The 14.6MP K20D, with its early Samsung CMOS APS-C sensor, has a lot of RN.)

[image: photoacute_comparison.jpg]


Here is the final 170x100 region from the first of the 14 PEF raw files and from the output (stacked) DNG raw file, both after developing in SilkyPix Pro 4.

[image: output_comp_final.jpg]


I've only used default colours, contrast & sharpness settings. (I copied the default development parameters from one image to the others to make sure they were the same.) Afterwards, I applied a 3.5x Lanczos upsize to bring it back to 1200x350 to match the original screen display width of the comparison shot, so it doesn't help with sharpness; but, as I said, I was more interested in testing the auto-alignment & NR-through-stacking.

Dan.
 
I have tried this software:

https://www.photoacute.com

It worked ok on my 7D, but I guess that I was hampered by the OLPF. I believe that this sort of sensor alignment super-resolution depends on spatial aliasing to work.
Well, an aliasing sensor helps the final product, because the result, if it has sufficient source positions and is properly aligned, is not aliased, but still has all of the high-frequency SNR at the Nyquist that an aliasing system allows. I'm not so sure that it helps with alignment, at least if the alignment intelligence code is well-written. One of the effects of aliasing is that it puts points and edges in the wrong places. An anti-aliased point of light has an analog-correct center of intensity; an aliased one does not, especially when there is some blind or insensitive space between photosites, after all the various optical PSFs do their thing.
Reading my sentence a second time, I see now that I had an unfortunate choice of words.

What I wanted to express was:

"I believe that this sort of super-resolution _depends_ on spatial aliasing to work."

I agree that aliasing should not help alignment itself, and quite possibly makes it harder. What we are trying to accomplish is to "find new information" by shifting an aliased sampling grid over a scene. The more different each frame is, the more information it can potentially add to the processed output, but the harder it may be to match to other frames. That sounds like a paradox.

For this tech to work, one must probably assume something about the scene content (a mix of small and large features?) and the camera capture (global or large regions having the same (sub-)pixel shift) such that a global/regional non-ambiguous offset can be found that will let the subtle aliasing from each frame add to the (true) information in the assembled frame.
The only difference between a properly-registered shifted stack from an aliasing sensor and one from a non-aliasing sensor is the contrast of the higher frequencies (while noise at the Nyquist is unaffected), giving higher SNR at the higher image frequencies from the aliasing sensor. AA filters drop SNR at the top of the spectrum; that's their only major downside, IMO, with sharp lenses that need them.
A nice feature of multi-shot SR using aliased sensors is that each frame is quite usable on its own. I.e. if there is object movement or some other problem during the exposure, it should be simple to mask out an entire frame or parts of it, and still have quite good results. If you are stitching a scene, losing one exposure is catastrophic.

On the other hand, multi-shot SR only compensates for sensor issues, not limitations in the lens, unlike stitching, where you can get a wide and sharp panorama using a simple non-wide lens.

-h
 
