CCD shift enhanced resolution

I wonder whether any of the new cameras which support CCD shift for image stabilization are capable of using it to effectively increase image resolution.

This will only work on relatively stable subjects, but it would make use of techniques developed by astronomical photographers who use image stacking to increase the effective resolution of their images (image stacking uses a series of slightly different images passed through an algorithm to interpolate a high resolution much more accurately than non-data-supported interpolation techniques can infer).
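
To make the idea concrete, here is a toy 1-D sketch (my own construction, not from any camera): detail that aliases in a single exposure is recovered by interleaving exposures taken at sub-pixel offsets.

```python
import numpy as np

# Toy 1-D illustration: a 60-cycle detail aliases when sampled at only
# 100 points (Nyquist limit 50), but interleaving four exposures taken
# at quarter-pixel offsets rebuilds the full 400-point record.
x = np.linspace(0, 1, 400, endpoint=False)
scene = np.sin(2 * np.pi * 60 * x)   # detail above single-exposure Nyquist

step = 4                  # one coarse "pixel" spans 4 fine positions
offsets = [0, 1, 2, 3]    # quarter-pixel shifts between exposures

# Each exposure samples the same scene on a slightly shifted grid.
exposures = [scene[o::step] for o in offsets]

# Interleave the four 100-sample exposures back onto the fine grid.
recovered = np.empty_like(scene)
for o, e in zip(offsets, exposures):
    recovered[o::step] = e

print(np.allclose(recovered, scene))   # True: the detail is recovered
```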

The combination of many new cameras' ability to take a relatively rapid barrage of images and the slight mobility of the CCD might be capable of producing some spectacularly detailed images of, say, landscapes with (relatively) low-cost cameras.

Do any cameras currently come "stacking ready"? That is, can one set them up to take a series of shots moving the CCD slightly between each shot?

What might the best stacking programs be for handling these quite large images?
 
Sadly, no current cameras below medium-format digital backs support that. I don't remember if Ricoh had a camera in the past that had this feature, or if I'm mistaking it for some other feature.

I've waited for this too. Canon could (probably?) support this with the IS lenses also.

Either there is some legal problem with having such a feature in the camera, or it's just plain stupid of Sony/Minolta not to support this.

This would be great for landscape and product shots for instance.

--
Henrik
 
The Ricoh RDC-7 from 2000 had a Pro mode (7MP from 3MP) and the JVC GC-QX5HD from 2001 had a Pro-Still mode (6MP from 3MP). Both used pixel shifting.

"Superior Imaging Via JVC’s Advanced Pixel Shifting Technology

The GC-QX5HD offers advanced pixel shifting technology to maximize the CCD’s potential, delivering a high color resolution equivalent to 6 million pixels. Via precision optics and electronic circuitry, the image is double exposed and simultaneously shifted up one pixel. As a result of this pixel-shifting method, 100% of the green pixel information is acquired – not mathematically interpolated. This unique method eliminates interpolation in the vertical direction and half of the interpolation in the horizontal direction. Because more image data is acquired – not interpolated – the GC-QX5HD provides superior color accuracy over those cameras, which rely solely on interpolation and results in even greater color accuracy."
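
The quoted shift is easy to visualize. A toy numpy sketch (my construction, not JVC's actual pipeline): on a Bayer mosaic the green sites form a checkerboard, so a second exposure shifted one pixel vertically lands green samples on the gaps left by the first.

```python
import numpy as np

# Toy illustration (not JVC's actual pipeline): green sites on a Bayer
# mosaic form a checkerboard; shifting the image one row fills the gaps.
h, w = 4, 4
rows, cols = np.indices((h, w))
green_a = (rows + cols) % 2 == 0        # green sites, exposure A
green_b = (rows + 1 + cols) % 2 == 0    # same sensor, shifted one row

print(np.all(green_a | green_b))        # True: 100% green coverage
```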

There is a lot to discuss on the subject but not in the News section, I think.
 
Do any cameras currently come "stacking ready"? That is, can one
set them up to take a series of shots moving the CCD slightly
between each shot?
See http://www.sinar.ch; some backs have 4-shot and 16-shot options.

There are probably a few more (PhaseOne, Imacon?) but this feature is medium-format only at this point.

Lourens
 
Interestingly, one might photograph a landscape from a tripod and I'll bet, even if you took ten shots, no two of them would be perfectly aligned to the exact pixel due to the vagaries of tripod flexibility. You could do analysis of the results to find a few images with the appropriate movement and proceed to process them without any need to have shifting sensors.
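
One way to do the analysis Tony suggests (a sketch under my own assumptions, not any particular product's method) is phase correlation, which estimates the translation between two frames so you can pick pairs with useful relative offsets:

```python
import numpy as np

def estimate_shift(ref, img):
    """Integer-pixel translation of img relative to ref, via phase
    correlation (sub-pixel refinement omitted for brevity)."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    cross /= np.abs(cross) + 1e-12            # keep only the phase
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Fold wrap-around peaks back into signed shifts.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx
```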

Cheers, Tony.
 
I wonder whether any of the new cameras which support CCD shift for
image stabilization are capable of using it to effectively increase
image resolution.

This will only work on relatively stable subjects,
Yepp ... and only on a very stable tripod.

You would be surprised how much things actually move outdoors. So - it would probably not be of great use outdoors.

Moreover - the anti-alias filter used on most digital cameras removes high-frequency information. You would have to remove that filter as well.

So - a nice idea that really is not all that useful for most of us.

--
Roland
http://klotjohan.mine.nu/~roland/
 
Interestingly, one might photograph a landscape from a tripod and
I'll bet, even if you took ten shots, no two of them would be
perfectly aligned to the exact pixel due to the vagaries of tripod
flexibility. You could do analysis of the results to find a few
images with the appropriate movement and proceed to process them
without any need to have shifting sensors.
Stitching is a better method for landscapes. You will get much higher resolution by simulating a wide angle with several telephoto shots.

Moreover - it has to be dead calm (or everything very far away) in the landscape to avoid moving things.

--
Roland
http://klotjohan.mine.nu/~roland/
 
I wonder whether any of the new cameras which support CCD shift for
image stabilization are capable of using it to effectively increase
image resolution.
No, and there's a good reason.

All cameras with CCD shift stabilization (whether P&S or DSLR) have AA (anti-aliasing) filters. These filters limit the resolution of the sensor to reduce or eliminate color moire.

Shifting the sensor will not increase resolution.
This will only work on relatively stable subjects, but it would
make use of techniques developed by astronomical photographers who
use image stacking to increase the effective resolution of their
images (image stacking uses a series of slightly different images
passed through an algorithm to interpolate a high resolution much
more accurately than non-data-supported interpolation techniques
can infer).
Astronomers use stacking to increase the dynamic range of their photographs, allowing dimmer objects to be seen.

If the telescope drive tracks perfectly, the same objects will always align with the same pixels and there would be no resolution increase. It takes some fairly complex software to compensate for imperfect tracking.
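
A minimal sketch of that dynamic-range effect, with illustrative numbers of my own choosing: averaging N aligned frames cuts random noise by roughly sqrt(N), so a source below the single-frame noise floor becomes detectable.

```python
import numpy as np

# Illustrative numbers only: a "star" of brightness 2 hidden in noise
# of sigma 5 is invisible in one frame (SNR ~0.4) but clear after
# averaging 100 aligned frames (noise drops by ~sqrt(100)).
rng = np.random.default_rng(0)
faint = 2.0
frames = [faint + rng.normal(0.0, 5.0, (64, 64)) for _ in range(100)]

stacked = np.mean(frames, axis=0)
print(faint / 5.0)            # single-frame SNR, ~0.4
print(faint / stacked.std())  # stacked SNR, ~4
```
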
The combination of many new cameras' ability to take a relatively
rapid barrage of images and the slight mobility of the CCD might be
capable of producing some spectacularly detailed images of, say,
landscapes with (relatively) low-cost cameras.
Only if you put the camera on a panorama head and use the rapid image ability to take many tiles that you can stitch together into a larger (and higher resolution) image. But people have been doing this for years.
Do any cameras currently come "stacking ready"? That is, can one
set them up to take a series of shots moving the CCD slightly
between each shot?
As Lourens pointed out, only medium format cameras (which typically have no AA filters, and suffer for it) can do this.
What might the best stacking programs be for handling these quite
large images?
The best would be Panorama Tools, with a front end such as PTgui or PTassembler. But again, that's for doing something practical, stitching multiple images, not for trying to increase resolution.

--
The Pistons led the NBA, and lost in the playoffs.
The Red Wings led the NHL, and lost in the playoffs.

It's up to the Tigers now...
Leading the league, and going all the way!

Ciao!

Joe

http://www.swissarmyfork.com
 
Interestingly, one might photograph a landscape from a tripod and
I'll bet, even if you took ten shots, no two of them would be
perfectly aligned to the exact pixel due to the vagaries of tripod
flexibility.
Tried that with my SD9 and later my SD10 (no AA filter) and astronomy software (RegiStax); even single images that were upsized with Photoshop's bicubic showed more detail than the stacked ones, because RegiStax could not align them perfectly.

--
http://www.pbase.com/dgross (work in progress)
http://www.pbase.com/sigmadslr/dominic_gross_sd10

 
Interestingly, one might photograph a landscape from a tripod and
I'll bet, even if you took ten shots, no two of them would be
perfectly aligned to the exact pixel due to the vagaries of tripod
flexibility. You could do analysis of the results to find a few
images with the appropriate movement and proceed to process them
without any need to have shifting sensors.
It depends on your tripod and focal length. I recently tried dynamic range stacking for the first time and was pleased to see that my two images were pixel-by-pixel aligned without me having to do anything. And that was without a remote release (@#$%& Nikon backorders) and manual exposure adjustment rather than auto bracketing. Of course that was with a wide angle lens, a solid tripod/head combination, and otherwise ideal conditions, but it shows what's possible.
 
Many thanks for your replies.

I was interested to hear about the Ricoh and will look around for more info there. Thanks, rhodes_.

Thanks also to Dominic Groß for your experience with RegiStax, which I still may try out; it's free, after all. Plus I suspect Tony Brown is right about the inevitable shift of the camera, at least with my amateur equipment.

I was super-interested to hear from Joseph Wisniewski that anti-aliasing filters on CCD-shift cameras prevent the motion necessary for stacking to be useful. Fascinating; how does that work, I wonder? But I must disagree about the function of stacking software; various sites demonstrate that it is not merely for brightening objects. The results users can get with it
members.cox.net/t.jensen/registax.html
and the more feature-rich (and not free) Astrostack
http://www.astrostack.com
are not just brighter; the final images contain more detail.
 
Astronomers use stacking to increase the dynamic range of their
photographs, allowing dimmer objects to be seen.
They have also done work where they tilt the telescope by an amount less than its angular resolution and then use post-processing to extract a higher resolution image. This was done with the Hubble Deep Field; see, e.g., http://en.wikipedia.org/wiki/Hubble_Deep_Field . I don't know if it's practical with other telescopes because of issues like atmospheric turbulence and the tracking errors that you mention.
Only if you put the camera on a panorama head and use the rapid
image ability to take many tiles that you can stitch together into
a larger (and higher resolution) image. But people have been doing
this for years.
It might be interesting to build a special purpose "insect eye" camera. Instead of using a single lens/sensor and pivoting it, you'd have multiple lens/sensor combinations that were pre-aligned at the correct angles. You'd be able to take the whole panorama in one go, which would eliminate problems with things moving between shots. And since the angles of the lenses would be fixed, you'd be able to set up your stitching parameters once and use the same ones every time instead of creating new ones for each shoot. You could also set up the system to have the smallest possible overlap to maximize the final image size. You'd just have to be careful about designing it to minimize problems with parallax.
 
Astronomers use stacking to increase the dynamic range of their
photographs, allowing dimmer objects to be seen.
They have also done work where they tilt the telescope by an amount
less than its angular resolution and then use post-processing to
extract a higher resolution image. This was done with the Hubble
Deep Field; see, e.g.,
http://en.wikipedia.org/wiki/Hubble_Deep_Field . I don't know if
it's practical with other telescopes because of issues like
atmospheric turbulence and the tracking errors that you mention.
Only if you put the camera on a panorama head and use the rapid
image ability to take many tiles that you can stitch together into
a larger (and higher resolution) image. But people have been doing
this for years.
It might be interesting to build a special purpose "insect eye"
camera.
It was interesting. We called it ROTSE.

http://forums.dpreview.com/forums/read.asp?forum=1027&message=16298663

My contribution wasn't that great, just a bit of work with the controllers for the Canon lenses, and some help with the stitching algorithm.
Instead of using a single lens/sensor and pivoting it,
you'd have multiple lens/sensor combinations that were pre-aligned
at the correct angles. You'd be able to take the whole panorama in
one go, which would eliminate problems with things moving between
shots. And since the angles of the lenses would be fixed, you'd be
able to set up your stitching parameters once and use the same ones
every time instead of creating new ones for each shoot. You
could also set up the system to have the smallest possible overlap
to maximize the final image size. You'd just have to be careful
about designing it to minimize problems with parallax.
Very careful. ROTSE worked because it operated at astronomical distances.

For near panoramic distances, you need to rotate the camera/lens system about the entrance pupil (often incorrectly referred to as the "nodal point"). With conventional lenses, the only way to make the entrance pupils of multiple lenses coincide is to put beam-splitting mirrors in front of the cameras. Otherwise, you'd have to deal with lens designs that put the entrance pupil behind the sensor (which is possible).

--
The Pistons led the NBA, and lost in the playoffs.
The Red Wings led the NHL, and lost in the playoffs.

It's up to the Tigers now...
Leading the league, and going all the way!

Ciao!

Joe

http://www.swissarmyfork.com
 
I was super-interested to hear from Joseph Wisniewski that
anti-aliasing filters on CCD-shift cameras prevent the motion
necessary for stacking to be useful. Fascinating; how does that
work I wonder? But I must disagree about the function of stacking
software; various sites demonstrate that it is not merely for
brightening objects. The results users can get with it
members.cox.net/t.jensen/registax.html
and the more feature-rich (and not free) Astrostack
http://www.astrostack.com
are not just brighter; the final images contain more detail.
For several reasons. A stacked image typically shows fewer atmospheric effects ("bad" frames are rejected from the stack), and the increased dynamic range (noise averaged out) means sharpening is more effective.

And, if your image is of bright point sources, stacking helps if the camera has a too-weak AA filter (too high a spatial cutoff frequency).

And astro cameras don't have AA filters. No commercial astro camera has one: from the inexpensive "cookbook" CB245, to the units offered by Meade, Celestron, or Orion, to independents like SBIG or FLI. And those folks who tinker with webcams as astro cameras don't have them, either.
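
The "bad frame rejection" described above is often implemented as per-pixel sigma clipping. A minimal sketch, assuming the frames are already aligned (my construction, not any particular package's algorithm):

```python
import numpy as np

def sigma_clip_stack(frames, kappa=2.5):
    """Per-pixel average of aligned frames after rejecting samples
    more than kappa standard deviations from the median; one common
    way frames ruined by bad seeing get dropped from a stack."""
    stack = np.asarray(frames, dtype=float)       # shape (N, H, W)
    med = np.median(stack, axis=0)
    std = stack.std(axis=0) + 1e-12
    good = np.abs(stack - med) <= kappa * std
    return (stack * good).sum(axis=0) / np.maximum(good.sum(axis=0), 1)
```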

--
The Pistons led the NBA, and lost in the playoffs.
The Red Wings led the NHL, and lost in the playoffs.

It's up to the Tigers now...
Leading the league, and going all the way!

Ciao!

Joe

http://www.swissarmyfork.com
 
For near panoramic distances, you need to rotate the camera/lens
system about the entrance pupil (often incorrectly referred to as
the "nodal point").
It depends on how near "near" is. If you start with a wide angle lens and photograph things that are more than a few meters away, you can be surprisingly sloppy.
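
To put a rough number on "surprisingly sloppy" (a back-of-envelope calculation of my own; the focal length and pixel pitch are assumptions, not from the thread):

```python
# Angular parallax from displacing the entrance pupil by e metres is
# roughly e * (1/s_near - 1/s_far); worst case (far background) is e/s_near.
focal_m = 0.024          # assume a 24 mm wide angle
pixel_m = 6e-6           # assume 6 um photosites
e = 0.005                # assume a 5 mm pupil-positioning error

pixel_angle = pixel_m / focal_m   # angle one pixel subtends, ~0.00025 rad
s_min = e / pixel_angle           # nearest subject for sub-pixel parallax
print(s_min)                      # ~20 m: beyond that, 5 mm of slop is invisible
```
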
With conventional lenses, the only way to make
the entrance pupils of multiple lenses coincide is to put
beam-splitting mirrors in front of the cameras. Otherwise, you'd have to
deal with lens designs that put the entrance pupil behind the
sensor (which is possible).
I'm aware of the problem, and that it's possible to design lenses that way. One problem is that you don't just need the entrance pupil behind the sensor. It needs to be behind the sensor by a distance longer than the focal length of the lens, or you won't be able to physically fit all of the sensors into the available space.

An alternative approach would be to have the entrance pupils well in front of the front element, which I think should also be possible. Then you'd have the lens/sensor assemblies pointing inward toward the common point. That approach has the limitation that you can't have a > 180° FOV because the cameras will physically block each other.

Focusing is going to be a bit tricky with either approach. IF (internal focus) lenses are obviously out because they shift the pupil position, change focal length, etc. as they focus. A conventional helicoid might work, but you'd be better off keeping the lens (and hence the entrance pupil) fixed and moving the sensor instead of the other way around. I think you'd probably be best off just using a small-aperture, fixed-focus design.
 
Hilarious that this topic has been brought up. On my phone (a P800 running Symbian), there is software available that takes four pictures and makes a 1.2MP image out of VGA frames. The software is called PhotoAcute.

Now that I look at the website, it looks like they have a similar thing for digital cameras.

http://www.photoacute.com

...supposedly it uses a state-of-the-art algorithm, and I definitely remember seeing more detail in the sample shots made from four images than in the single-image ones. This thing is definitely a reality; it's just not well known.
 
The software uses a "key" that is calculated based on your hardware, so every time you change your hardware, you need to contact the company to get an updated registration code. Hope the company stays in business forever.

I'm really looking forward to all software using this technique, so I can spend a bunch of hours re-registering software every time I make some change to my hardware. NOT! (Wasn't this situation described in Dante's "Inferno"?)

Wayne
 
Shifting the sensor will not increase resolution.
Actually, it does, but the sensor needs to be shifted just 1/2 pixel in each direction (4-shot mode), and it requires special software to combine the images. It is supposedly a lot better (= more resolution) than interpolating by resizing the original image to 200%.
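
A toy sketch of what such a 4-shot mode does (my construction, not Sinar's actual algorithm): four exposures offset by half a pixel are interleaved onto a grid twice as dense in each direction.

```python
import numpy as np

# Toy sketch (not Sinar's actual algorithm): four exposures offset by
# (0,0), (0,.5), (.5,0), (.5,.5) pixels, interleaved onto a 2x grid.
def four_shot_combine(f00, f01, f10, f11):
    h, w = f00.shape
    out = np.empty((2 * h, 2 * w), dtype=f00.dtype)
    out[0::2, 0::2] = f00   # no shift
    out[0::2, 1::2] = f01   # half a pixel right
    out[1::2, 0::2] = f10   # half a pixel down
    out[1::2, 1::2] = f11   # half a pixel down and right
    return out
```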



Obviously, many other factors influence effective resolution, like the lens and aperture used. A soft image captured with lots of pixels is just counterproductive; it slows down the workflow with no benefit.

;-)
Lourens
 
I was super-interested to hear from Joseph Wisniewski that
anti-aliasing filters on CCD-shift cameras prevent the motion
necessary for stacking to be useful. Fascinating; how does that
work, I wonder? But I must disagree about the function of stacking
It doesn't prevent motion, but obliterates the information that you are trying to recover above half the sampling frequency of the sensor.

Consider the sensor in only a single (linear) dimension. The Nyquist sampling theorem (which predates digital photography by many decades) states that, in order to prevent aliasing (moire), the frequency of the sampled signal must not exceed half the sampling frequency. This principle is inherent in digital audio (e.g. CDs are sampled at 44.1kHz, hence the maximum audio frequency recorded is 22.05kHz). For digital imaging the "spatial" frequency is considered.

In both audio and imaging, an AA filter is used to blur the incident light (i.e. remove high-frequency detail) to prevent aliasing (moire).

Image stacking is a form of increasing the sampling rate, thereby allowing a higher spatial frequency (resolution) to be recorded without aliasing. It depends on the following conditions (a sketch follows the list):

-the input signal must be continuous and repetitive, i.e. a static image (for the duration of the photo sequence)

-there must be a slight offset in the sampling point. This can be achieved by a mechanical CCD offset, or just by the act of pressing the shutter button.

-there must be information left above half the sampling frequency in order to obtain any benefit. This is why the gains from this method are marginal with consumer DSLRs: the AA filter is fitted for good reasons and removes almost all information above half the sampling frequency, hence there is little left to recover.
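
Here is the promised sketch: naive shift-and-add onto a finer grid, assuming the sub-pixel offsets are already known (real tools such as RegiStax or Astrostack also do registration, frame selection, and sharpening; this shows only the principle).

```python
import numpy as np

def shift_and_add(frames, offsets, factor=2):
    """Naive shift-and-add onto a grid `factor` times denser, for
    frames whose sub-pixel offsets (dy, dx, in input pixels) are
    already known. A sketch of the principle only."""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    ys, xs = np.indices((h, w))
    for frame, (dy, dx) in zip(frames, offsets):
        # Drop each coarse sample into its nearest fine-grid bin.
        fy = np.clip(np.round((ys + dy) * factor).astype(int), 0, h * factor - 1)
        fx = np.clip(np.round((xs + dx) * factor).astype(int), 0, w * factor - 1)
        np.add.at(acc, (fy, fx), frame)
        np.add.at(cnt, (fy, fx), 1)
    return acc / np.maximum(cnt, 1)
```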

I never saw any benefit when I tried it using my Canon S70, but I didn't pursue it far.

regards,
Laurens
 
It doesn't prevent motion, but obliterates the information that you
are trying to recover above half the sampling frequency of the
sensor.
Correct. The AA filter is designed to cut off frequencies above the Nyquist frequency for the sensor pitch it sits over. But you're all thinking "in the box" here. If we're talking about a camera designed specifically to take advantage of sensor movement, you wouldn't use an AA filter when the camera is in "enhanced resolution" mode (the AA filter isn't part of the sensor in most designs, so why can't it flip out of the way?). The primary reason the AA is there in the first place is to remove color moire, so the relevant issue is "can you eliminate moire in a sensor shift?" The math is complex, but I'm pretty sure the answer is yes.
-the input signal must be continuous and repetitive, i.e. a static
image (for the duration of the photo sequence)
Actually, I'd disagree with this contention. For obvious reasons I can't describe anything other than what's in the public domain (which is almost nothing beyond a basic acknowledgement that it exists), but side-looking surveillance cameras on moving platforms are essentially a "shifting sensor" problem. And as long as you have "fixed" objects in the frame, objects in motion can be detected and interpolated at a slightly higher resolution than a single-frame capture. Of course it takes a lot of computing...

--
Thom Hogan
author, Nikon Field Guide & Nikon Flash Guide
editor, Nikon DSLR Report
author, Complete Guides: D50, D70, D100, D200, D1 series, D2h, D2x, S2 Pro
http://www.bythom.com
 
