CCD shift enhanced resolution

It doesn't prevent motion, but obliterates the information that you
are trying to recover above half the sampling frequency of the
sensor.
Correct. The AA filter is designed to cut off frequencies above the
Nyquist frequency for the sensor pitch it sits over. But you're all
thinking "in the box" here. If we're talking about a camera
designed specifically to take advantage of sensor movement, you
wouldn't use an AA filter when the camera is in "enhanced
resolution" mode (the AA filter isn't part of the sensor in most
designs, so why can't it flip out of the way?). The primary reason
the AA is there in the first place is to remove color moire, so the
relevant issue is "can you eliminate moire in a sensor shift?" The
math is complex, but I'm pretty sure the answer is yes.
Of course, beyond a certain point, your lens will act as the AA filter anyway.
-the input signal must be continuous and repetitive, i.e. a static
image (for the duration of the photo sequencing)
Actually, I'd disagree with this contention. For obvious reasons I
can't describe anything other than what's in the public domain
(which is almost nothing other than a basic acknowledgement that it
exists), but side-looking surveillance cameras on moving platforms are
essentially a "shifting sensor" problem. And as long as you have
"fixed" objects in the frame, objects in motion can be detected and
interpolated at a slightly higher resolution than a single frame
capture. Of course it takes a lot of computing...
Isn't that similar to the synthetic aperture radar problem? Still, the general case is that the sensor platform is moved, but I believe the amount doesn't have to be known a priori; non-coherent sampling can be used in conjunction with autocorrelation (if the object is static).

In fact, all this sounds like a great idea for improved landscape detail; of course, the shots need to be taken in quick succession to prevent wind from disturbing the scene (or at least having a static scene will make the maths easier).

Laurens
--
Thom Hogan
author, Nikon Field Guide & Nikon Flash Guide
editor, Nikon DSLR Report
author, Complete Guides: D50, D70, D100, D200, D1 series, D2h, D2x,
S2 Pro
http://www.bythom.com
 
Shifting the sensor will not increase resolution.
Actually, it does, but it needs to be shifted just 1/2 pixel in
each direction (4-shot mode) and requires special software to
combine the images. It is supposedly a lot better (= more
resolution) than interpolation and resizing the original image to
200%.
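The half-pixel arithmetic can be sketched in one dimension. A minimal NumPy sketch (illustrative point-sampling only, ignoring pixel aperture, noise, and the Bayer mosaic): two exposures of a static scene, the second shifted by half the pixel pitch, interleave into the sample grid of a sensor with twice the pixel density:

```python
import numpy as np

# Hypothetical 1-D "scene" with detail above the coarse sensor's
# Nyquist limit of 0.5 cycles/pixel.
scene = lambda x: np.sin(2 * np.pi * 0.7 * x)

pitch = 1.0                            # sensor pixel pitch
centers = np.arange(0, 32, pitch)      # pixel sample positions

shot_a = scene(centers)                # nominal exposure
shot_b = scene(centers + pitch / 2)    # exposure shifted by half a pixel

# Interleave the two shots: the combined capture samples the scene
# at twice the original rate, raising the effective Nyquist limit.
combined = np.empty(2 * len(centers))
combined[0::2] = shot_a
combined[1::2] = shot_b

dense = scene(np.arange(0, 32, pitch / 2))
print(np.allclose(combined, dense))    # True: same as a sensor with half the pitch
```

The real four-shot mode does this in both x and y, and the combining software additionally has to register the frames and manage noise.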
No - Joseph is entirely correct. If the camera has a correctly designed optical anti alias filter then all information above a certain frequency is lost. And moving the detector will not gain any resolution information at all.

That's the main purpose of an anti-alias filter - to remove the dependency on detector alignment, i.e. to avoid aliasing. You need aliasing to be able to get more information by moving the sensor.

--
Roland
http://klotjohan.mine.nu/~roland/
 
from a Ricoh Patent Application:

[0019] In this invention, imaging is performed plural times by moving an incident position of an incident light to the image pickup surface by a specified quantity and in the specific direction between exposures, so that it is possible to facilitate imaging based on pixel shifting such that the plurality of imaged image data is used to increase an apparent number of pixels. Accordingly, this invention can obtain a high-resolution image even if the image pickup unit itself has only a small number of pixels.

[0020] In this invention, it becomes easy to slightly change an incident position of an incident light to the image pickup surface by moving the imaging optical system or the image pickup surface by a specified slight amount during the exposing time. Accordingly, in this invention, even if an image pickup signal has a high frequency component higher than one half of a sampling frequency of the image pickup unit, the high frequency component of the image pickup signal is removed, so that occurrence of pseudo colors or moiré caused by foldover distortion of the high frequency component can be prevented.

[0021] In order to remove the high frequency component of the image pickup signal that becomes a cause of occurrence of the pseudo colors or moiré, a crystal plate is generally disposed in a light path to remove the high frequency component based on a dot-image separation due to birefringence of this crystal plate. However, it has been known that the same effect can be obtained by slightly shifting the object image during the exposing time without using a birefringent plate like the crystal plate.

--
http://www.pbase.com/dgross (work in progress)
http://www.pbase.com/sigmadslr/dominic_gross_sd10

 
from a Ricoh Patent Application:
...
Sounds nice.

Using shift to increase resolution is not something new though
of course, just wanted to answer the question...
-
and it is very obvious. Would be nice to read the claims to see
what new things Ricoh have come up with that is possible to patent.
20020163581

It covers a lot of things, I also think their idea how to enlarge the movement of the piezo actuators looks better than what Sony (Minolta) does. On the other hand the Pentax system probably looks best because of its simplicity.

--
http://www.pbase.com/dgross (work in progress)
http://www.pbase.com/sigmadslr/dominic_gross_sd10

 
Then do a pixel shift of 1.5 well pitches, or 2.5, or 3.5. At some point the anti-aliasing has to give in. If not, it would mean that more pixels per sensor wouldn't work with anti-aliasing filters either. Bigger shifts will result in some detail loss at the borders of the sensor, but that will not be a real issue.

Ernst
 
Then do a pixel shift of 1.5 well pitches, or 2.5, or 3.5. At some
point the anti-aliasing has to give in.
But there is no new information. It has the same effect as a shift of +2 followed by a shift of -0.5. The shift of +2 duplicates results everywhere except at the edges of the sensor.
If not it would mean that more
pixels per sensor wouldn't work either with anti-aliasing filters.
With a reduced sensor pitch the AA filter must be modified to match the new sensor, so you still get the expected increase in resolution.
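The equivalence of a 1.5-pixel and a 0.5-pixel shift is easy to verify numerically: their sample positions occupy exactly the same phase within the pixel pitch, so (edges aside) nothing new gets sampled. A small NumPy check:

```python
import numpy as np

# Sample phases (position within one pixel pitch) for the two shifts.
pitch = 1.0
centers = np.arange(0, 16, pitch)

phase_0p5 = (centers + 0.5 * pitch) % pitch   # half-pixel shift
phase_1p5 = (centers + 1.5 * pitch) % pitch   # 1.5-pixel shift

print(np.allclose(phase_0p5, phase_1p5))  # True: same phase, no extra information
```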

--
Alan Robinson
 
Then do a pixel shift of 1.5 well pitches, or 2.5, or 3.5. At some
point the anti-aliasing has to give in.
You could do a pixelshift of 1 and avoid the Bayer color artefacts.

Or you could make the anti-alias filter weaker and pixel-shift 1/2. But then you would always have to do a pixel shift to avoid aliasing.
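The whole-pixel variant can be sketched with a toy Bayer mosaic (an illustrative 4x4 RGGB grid, not any specific camera's layout): four one-pixel shifts let every scene position be read through each colour filter, removing the need to demosaic.

```python
import numpy as np

# Illustrative 4x4 RGGB Bayer mosaic.
H = W = 4
bayer = np.empty((H, W), dtype='<U1')
bayer[0::2, 0::2] = 'R'; bayer[0::2, 1::2] = 'G'
bayer[1::2, 0::2] = 'G'; bayer[1::2, 1::2] = 'B'

# Four whole-pixel sensor offsets: each scene position gets read
# through the four cells of a 2x2 Bayer block, i.e. through R, G and B.
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
seen = [[set() for _ in range(W)] for _ in range(H)]
for dy, dx in shifts:
    shifted = np.roll(np.roll(bayer, dy, axis=0), dx, axis=1)
    for y in range(H):
        for x in range(W):
            seen[y][x].add(shifted[y, x])

print(all(seen[y][x] == {'R', 'G', 'B'}
          for y in range(H) for x in range(W)))  # True
```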

--
Roland
http://klotjohan.mine.nu/~roland/
 
the parts I quoted before are from that application.
Thx!

I must admit I gave up reading before I reached that point. The first 20 or so claims had nothing to do with pixel shifting, so I got lazy and assumed the rest was just repeating again and again - the way patents usually look. But claim 22 was the first claim to mention pixel shifting.

--
Roland
http://klotjohan.mine.nu/~roland/
 
Tried that with my SD9 and later SD10 (no AA filter) and astronomy
software (RegistaX); even single images that were upsized with
Photoshop's bicubic showed more detail than the stacked ones,
because RegistaX could not get them aligned perfectly.
That may very well be due to the lack of AA filter. Aliased data is almost totally lacking in implied sub-pixel information (it has what I call a "snap-to-grid" effect).

Imagine your SD9 recording a movie:

A single point of light 1/4 the width of a pixel moves across the sensor.

One pixel records the light.

In the next frame the light is between sensor wells. Nothing is recorded. Repeat ad infinitum.

Now, repeat with an AA filter. In one frame, one pixel is very bright, and the neighbors are slightly bright. In the next frame, two pixels are moderately bright, suggesting that the point of light is between the pixels. In fact, there is a theoretically infinite sub-pixel precision possible by weighting.

Now, an AA-less sensor is probably ideal for precise, calculated movement of the sensor, where no analysis is necessary; aliasing is greatly reduced by the in-between samples, and AA filtering is less necessary.
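The weighting idea can be sketched numerically. A 1-D toy model (an assumed Gaussian blur standing in for the AA filter): a point source at x = 4.3 pixels is located by an intensity-weighted centroid, with and without blur.

```python
import numpy as np

def capture(x0, blur_sigma, n_pix=9):
    """Image a sub-pixel point source at x0, return the centroid estimate."""
    centers = np.arange(n_pix, dtype=float)
    if blur_sigma == 0:
        pix = np.zeros(n_pix)
        pix[int(round(x0))] = 1.0   # no blur: all energy snaps into one well
    else:
        # Blurred point: neighbouring pixels pick up position-dependent weights.
        pix = np.exp(-0.5 * ((centers - x0) / blur_sigma) ** 2)
    return (centers * pix).sum() / pix.sum()

print(capture(4.3, blur_sigma=0))    # 4.0  -> "snap-to-grid": the 0.3 px offset is lost
print(capture(4.3, blur_sigma=0.7))  # ~4.3 -> the neighbours encode sub-pixel position
```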

--
John

 
Laurens wrote:
This knowledge is inherent in digital audio (e.g. CDs
are sampled at 44.1kHz, hence the maximum audio freq. recorded is
22.05kHz).
This is considered acceptable, but actually, you get amplitude modulation at frequencies below the Nyquist frequency that are at simple integer ratios of the sampling frequency. The modulation is quite strong at values like 1/3 the sample frequency, 2/5, etc. The worst is right around the Nyquist frequency, but that is usually filtered away before sampling.

The Nyquist theorem is a "good enough" type of theory; there isn't perfect sampling below the Nyquist frequency, even below the filter cutoff.

--
John

 
Tried that with my SD9 and later SD10 (no AA filter) and astronomy
software (RegistaX); even single images that were upsized with
Photoshop's bicubic showed more detail than the stacked ones,
because RegistaX could not get them aligned perfectly.
That may very well be due to the lack of AA filter. Aliased data
is almost totally lacking in implied sub-pixel information (it has
what I call a "snap-to-grid" effect).

Imagine your SD9 recording a movie:

A single point of light 1/4 the width of a pixel moves across the
sensor.

One pixel records the light.

In the next frame the light is between sensor wells. Nothing is
recorded. Repeat ad infinitum.
One of the advantages of the SD10 over the SD9 is that the Foveon sensor used in the SD10 has microlenses. That's an array of square lenses in front of the sensor. The lenses almost touch, so there's very little dead "moat" between the cells.

The moat is so small that a lens probably won't resolve a point of light fine enough to land in it and vanish.
Now, repeat with an AA filter. In one frame, one pixel is very
bright, and the neighbors are slightly bright. In the next frame,
two pixels are moderately bright, suggesting that the point of
light is between the pixels. In fact, there is a theoretically
infinite sub-pixel precision possible by weighting.

Now, an AA-less sensor is probably ideal for precise, calculated
movement of the sensor, where no analysis is necessary; aliasing is
greatly reduced by the in-between samples, and AA filtering is
less necessary.
Personally, I've always felt that you could get all the AA filter you need by vibrating the sensor at a frequency high enough so that there were several cycles of vibration even in a 1/4000 sec exposure.
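The effect of such vibration can be sketched in 1-D: time-averaging over a rapid dither is equivalent to convolving the image with the motion path, which strongly attenuates detail near the Nyquist limit (a Bessel-function rolloff for sinusoidal motion). A toy NumPy check, assuming a sinusoidal +/-1 px dither with many cycles per exposure:

```python
import numpy as np

x = np.arange(256)
static = np.sin(2 * np.pi * 0.45 * x)   # detail just below Nyquist (0.5 cyc/px)

# One exposure = time-average over the vibration, approximated by
# sampling many phases of the dither cycle.
phases = np.linspace(0, 2 * np.pi, 64, endpoint=False)
dithered = np.mean(
    [np.sin(2 * np.pi * 0.45 * (x + np.cos(p))) for p in phases], axis=0
)

# The near-Nyquist detail is strongly attenuated by the motion blur.
print(np.abs(dithered).max() / np.abs(static).max())  # well below 1
```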

--
The Pistons led the NBA, and lost in the playoffs.
The Red Wings led the NHL, and lost in the playoffs.

It's up to the Tigers now...
Leading the league, and going all the way!

Ciao!

Joe

http://www.swissarmyfork.com
 
One of the advantages of the SD10 over the SD9 is that the Foveon
sensor used in the SD10 has microlenses. That's an array of square lenses in
front of the sensor. The lenses almost touch, so there's very
little dead "moat" between the cells.
The moat is so small that a lens probably won't resolve a point of
light fine enough to land in it and vanish.
True, but if you were to make a movie from the frames, the microlensed SD10 would still be jerky; the light would dwell in a pixel and then quickly advance to the next one. Sub-pixel placement has a lot of jitter.

--
John

 
No - Joseph is entirely correct. If the camera has a correctly
designed optical anti alias filter then all information above a
certain frequency is lost. And moving the detector will not gain
any resolution information at all.
That's the main purpose of an anti-alias filter - to remove the
dependency on detector alignment, i.e. to avoid aliasing. You need
aliasing to be able to get more information by moving the sensor.
I don't think so. It may be better not to have an AA filter, but you can still get sub-pixel detail with an AA filter by moving the sensor - it's just going to be of lower contrast. Depending upon how much noise there is, you can sharpen it.

--
John

 
are sampled at 44.1kHz, hence the maximum audio freq. recorded is
22.05kHz).
This is considered acceptable, but actually, you get amplitude
modulation at frequencies below the nyquist that are simple integer
ratios to the sampling frequency. The modulation is quite strong
at values like 1/3 the sample frequency, 2/5, etc. The worst is
right around the nyquist, but that is usually filtered away before
sampling.
I have never heard of this; could you provide a reference? I do know that DAC performance (and presumably ADCs too) is typically worst at fs/3 in terms of spurious products, and this could be interpreted as an amplitude modulation of the wanted signal. However, the sidelobes would still be many dB down...

Laurens
The Nyquist theorem is a "good enough" type of theory; there isn't
perfect sampling below the Nyquist frequency, even below the filter cutoff.

--
John

 
I have never heard of this; could you provide a reference? I do
know that DAC performance (and presumably ADCs too) is typically
worst at fs/3 in terms of spurious products, and this could be
interpreted as an amplitude modulation of the wanted signal.
However, the sidelobes would still be many dB down...
It probably doesn't happen much in sampling acoustic sounds, but if you create a slow sine-wave sweep through the sampling range, the volume drops out at frequencies like the ones I mentioned, because at certain frequencies only low values will be sampled. Think of it this way: if the sweep is slow enough, then at some frequency the samples are going to occur at equally spaced excursions, both positive and negative, with a very low signal resulting - for fs/4.

This is true, at least, for mathematically generated sweeps. Perhaps sampled ones don't have the opportunity to be articulate enough to record full modulation at these frequencies.

--
John

 
I have never heard of this; could you provide a reference? I do
know that DAC performance (and presumably ADCs too) is typically
worst at fs/3 in terms of spurious products, and this could be
interpreted as an amplitude modulation of the wanted signal.
However, the sidelobes would still be many dB down...
It probably doesn't happen much in sampling acoustic sounds, but if
you create a slow sine-wave sweep through the sampling range, the
volume drops out at frequencies like the ones I mentioned, because
at certain frequencies only low values will be sampled. Think of
it this way; if the sweep is slow enough, at some frequency, the
samples are going to occur at equally spaced excursions, both
positive and negative, with a very low signal resulting, for fs/4.

This is true, at least, for mathematically generated sweeps.
Perhaps sampled ones don't have the opportunity to be articulate
enough to record full modulation at these frequencies.
I don't follow. By a swept sinusoid I guess you mean an FMCW signal with a linear (usually triangular) modulation? Perhaps the modulating waveform, such as a triangle, is causing your signal to go above the Nyquist frequency, and that is why you see amplitude modulation.

A unity-amplitude sine wave at fs/4, sampled starting at t=0, will have the following repetitive pattern: 0, 1, 0, -1, so you will have full-scale outputs.
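Both observations can be checked numerically: at exactly fs/4, the peak of the sampled values depends entirely on where the samples land relative to the crests - the 0, 1, 0, -1 case on the one hand, and the drop-out case on the other.

```python
import numpy as np

# Peak of the sampled values of a unit sine at exactly fs/4,
# as a function of the sampling phase.
n = np.arange(16)
at_phase = lambda phi: np.abs(np.sin(np.pi * n / 2 + phi)).max()

print(at_phase(0.0))        # 1.0    -> samples hit the crests: 0, 1, 0, -1, ...
print(at_phase(np.pi / 4))  # ~0.707 -> samples straddle the crests
```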

Laurens
 
