The work of a professor at Columbia University has led to the invention of a new approach to dynamic range on CCDs. This new system deliberately exposes neighbouring pixels of a CCD at slightly different levels, the theory being that if one pixel is over- or under-exposed, information which is "nearly right" can be gathered from a neighbour. Interesting... "The computed image appears comparable in dynamic range to that produced by a high-end, professional grade digital camera," says Nayar.

From Professor Nayar's webpage:

"High Dynamic Range Imaging: Spatially Varying Pixel Exposures"
   T. Mitsunaga and S. K. Nayar,
   Proceedings of IEEE Conference on Computer Vision and Pattern Recognition,
   Hilton Head Island, South Carolina, June 2000.
   [ gzipped postscript ] [ dpreview converted pdf ]

Press release:

Breakthrough in Digital Imaging Technology Reported at Columbia University

New Imaging Technology Promises Richer, More Detailed Photography

NEW YORK, Sept. 7 /PRNewswire/ -- New technology developed by scientists at Columbia University's Computer Vision Laboratory will allow cameras -- film, digital or video -- to capture a much broader range of light and color variations, promising richer and more detailed images.

The invention, developed by Shree K. Nayar, professor of computer science, and his research team at Columbia's Fu Foundation School of Engineering and Applied Science, can enhance the range of measurable brightness values of virtually any imaging system -- film or digital still and video cameras, as well as medical and industrial imaging systems based on X-ray, infra-red, synthetic aperture radar and magnetic resonance.

To a very limited extent, traditional image processing methods can enhance images and bring out details in over- or under-exposed images, but, as Nayar notes: "No amount of post-processing can recreate scene details that were never captured to begin with."

The new technology will help to eliminate under- and over-exposures in commonly photographed scenes, such as a person's face with the sun shining behind or a large room with a single lamp in a corner, where some features are too dark or others too bright.

The technology includes both hardware and software components. On the hardware side, a simple electronic or optical modification is made to an existing imaging system. The software involves a set of efficient algorithms that reconstruct high-quality images from the lower-quality data captured using the modified imaging system.

Nayar explains that in a conventional imaging chip on a digital camera, for example, all of the light-collecting "pixels" are equal in the way they collect light, while in the new technology neighboring pixels are exposed differently. This local pattern of different exposures is repeated over the entire array of pixels on the device. The technique, called spatially varying pixel exposures, was developed by Nayar and Tomoo Mitsunaga, a visiting scientist at Columbia from Sony Corporation. When an image is captured using this technology, it is very likely that if one pixel is over- or under-exposed with light, one or more of its neighboring pixels will produce "meaningful" brightness measurements.
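The repeated local pattern described above can be sketched in a few lines of Python. This is a hypothetical simulation, not the authors' implementation: the 2x2 block of exposure ratios and the `capture_sve` helper are invented here for illustration only.

```python
import numpy as np

# Hypothetical 2x2 block of relative exposures, tiled across the sensor.
# The actual exposure ratios used by Mitsunaga and Nayar are not given
# in the press release.
pattern = np.array([[1.0, 0.5],
                    [0.25, 0.125]])

def capture_sve(radiance, pattern, full_well=255.0):
    """Simulate capturing a scene through a tiled exposure pattern.

    radiance: 2-D array of scene brightness (arbitrary linear units).
    Returns the clipped sensor readings and the per-pixel exposure map.
    """
    h, w = radiance.shape
    reps = (h // pattern.shape[0] + 1, w // pattern.shape[1] + 1)
    exposures = np.tile(pattern, reps)[:h, :w]
    return np.clip(radiance * exposures, 0.0, full_well), exposures

# A scene whose brightness far exceeds what one 8-bit exposure can cover.
scene = np.linspace(0.0, 2000.0, 16).reshape(4, 4)
measured, exposures = capture_sve(scene, pattern)
# Pixels exposed at 1.0 saturate for bright scene values, but their
# neighbors at 0.125 stay below full well (2000 * 0.125 = 250 < 255),
# so each 2x2 neighborhood retains a "meaningful" measurement.
```

The point of the tiled pattern is exactly the press release's claim: clipping at one exposure level is survivable because an adjacent pixel saw the same region of the scene at a different level.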

"You get a much larger dynamic range of brightness than a single exposure would give you,'' said Nayar. A prototype camera that incorporates the spatially varying exposure pattern into an imaging system is being pursued by the Computer Vision Laboratory. Nayar said this varying pattern of exposure could be implemented in several ways. One would be to place a mask with cells of different optical transparencies adjacent to the detector. Or the pattern could be etched directly on the detector in solid-state devices, such as charge-coupled devices (CCD's). Alternatively, the sensitivity of the pixels can be preset by using different microlenses for neighboring pixels, or be embedding different apertures within the potential wells of the pixels.

Nayar and Srinivasa Narasimhan, a computer science graduate student, also were able to produce suitable exposure patterns for color imaging. Typically, color imaging chips use a "mosaic" of red, green and blue color filters, Nayar says. "It turns out that, given such a mosaic, an appropriate exposure pattern can be overlaid to ensure that, within every small neighborhood of pixels on the detector, each of the three colors is measured under different exposures."
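The property Nayar describes can be illustrated with a small sketch. Both the Bayer-style mosaic layout and the 4x4 exposure tile below are hypothetical, chosen only so that every color is sampled under every exposure within a single neighborhood; the paper's actual patterns may differ.

```python
import numpy as np

# Hypothetical overlay of a Bayer-style color mosaic with an exposure
# pattern. Neither layout is taken from the paper; they only demonstrate
# the stated property.
color = np.tile(np.array([['R', 'G'],
                          ['G', 'B']]), (2, 2))      # 4x4 tile of filters
exposure = np.array([[1.0,   0.5,   0.25,  0.125],
                     [0.25,  0.125, 1.0,   0.5],
                     [0.5,   1.0,   0.125, 0.25],
                     [0.125, 0.25,  0.5,   1.0]])

# Within this 4x4 neighborhood, every color is sampled under all four
# exposure levels.
exposures_per_color = {c: set(exposure[color == c]) for c in 'RGB'}
```

Checking `exposures_per_color` confirms that no color is stuck at a single exposure, which is what makes the color reconstruction workable.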

The second step of the technology involves software developed by Nayar and Narasimhan. A set of fast and robust algorithms reconstructs high-quality images from the image captured using the spatially varying pixel exposure technology, with minimal loss in spatial resolution.
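One simple way such a reconstruction could work can be sketched as follows. This is a toy illustration, not the published algorithm: unsaturated readings are divided by their exposure to estimate scene radiance, and saturated pixels borrow the average of valid estimates from their neighbors. The `reconstruct` function and the exposure values in the example are assumptions made for illustration.

```python
import numpy as np

def reconstruct(measured, exposures, full_well=255.0):
    """Toy reconstruction of radiance from a spatially-varying-exposure image.

    Unsaturated pixels are normalized by their exposure; saturated pixels
    are filled in from valid estimates in their 3x3 neighborhood. The
    published algorithm is more sophisticated (it also interpolates to
    minimize the loss of spatial resolution).
    """
    valid = measured < full_well
    radiance = np.where(valid, measured / exposures, np.nan)
    out = radiance.copy()
    h, w = measured.shape
    for y in range(h):
        for x in range(w):
            if np.isnan(out[y, x]):
                block = radiance[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
                if np.isfinite(block).any():
                    out[y, x] = np.nanmean(block)
    return out

# Two saturated readings (255 at exposure 1.0) are recovered from
# neighbors that were measured at a lower exposure (0.25) and did not clip.
measured = np.array([[255.0, 50.0],
                     [255.0, 100.0]])
exposures = np.array([[1.0, 0.25],
                      [1.0, 0.25]])
recovered = reconstruct(measured, exposures)
```

In the example, the two clipped pixels end up with the mean of their neighbors' radiance estimates (200 and 400), a value no single 8-bit exposure at 1.0 could have recorded.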

In their work, the scientists produced images comparable to those produced by a 12-bit digital camera, which has a range of 4,096 brightness levels, even though the imaging system they used had a detector that yields only 8 bits, or 256 brightness levels.
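The figures in that comparison are simple powers of two; the exposure ratios mentioned in the final comment are hypothetical, not taken from the paper.

```python
# Brightness levels as powers of two (figures from the text).
detector_levels = 2 ** 8    # 8-bit detector: 256 levels
target_levels = 2 ** 12     # 12-bit professional camera: 4,096 levels

# Reaching 12-bit range from an 8-bit detector means extending the
# measurable range by a factor of 16; in principle an exposure pattern
# spanning that same ratio (e.g. 1, 1/4, 1/8, 1/16 -- hypothetical
# values) would suffice.
extension = target_levels // detector_levels
```

The pattern only needs to span the ratio between the target and detector ranges, not reproduce every intermediate level, since each pixel still quantizes its own exposure to 256 steps.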

"The computed image appears comparable in dynamic range to that produced by a high-end, professional grade digital camera,'' says Nayar. Nayar previously invented the Omnicamera, a 360-degree immersive imaging technology that allows Internet or television viewers to see images in all directions at once. He also holds several other U.S. and international patents for inventions related to computer vision and robotics.