This is a false-color image of the die of MIT's chip, which the institute claims will revolutionize mobile photography.

Mobile photographers may see better images from their devices with a new processor developed at the Massachusetts Institute of Technology (MIT). MIT claims the new technology will convert "smartphone snapshots" into "professional-looking photographs" with just the touch of a button. The technology aims to instantly create more realistic or enhanced lighting in a photo without affecting the image’s ambience.

According to MIT, the chip can perform very fast HDR processing for both still images and video, and will be especially helpful with low-light photography. By embedding the technology in the chip's hardware, rather than running it as software, the processing can be more energy efficient — ideal for mobile devices.

MIT explained the low-light imaging in a statement from its news service:

So in this instance the processor takes two images, one with a flash and one without. It then splits both into a base layer, containing just the large-scale features within the shot, and a detailed layer. Finally, it merges the two images, preserving the natural ambience from the base layer of the nonflash shot, while extracting the details from the picture taken with the flash.
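The base/detail split and merge described in that passage can be sketched in a few lines of Python with NumPy. This is a simplified illustration, not MIT's actual pipeline: the box blur, the additive detail layer, and the function names are all assumptions made for the example.

```python
import numpy as np

def box_blur(img, radius=2):
    """Crude base-layer extraction: average each pixel with its
    neighbors inside a (2*radius+1) square window (edges padded)."""
    h, w = img.shape
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros((h, w), dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += padded[radius + dy : radius + dy + h,
                          radius + dx : radius + dx + w]
    return out / (2 * radius + 1) ** 2

def merge_flash_noflash(noflash, flash, radius=2):
    """Split each grayscale image into a base layer (large-scale
    features) and a detail layer, then keep the no-flash base
    (natural ambience) plus the flash detail (fine texture)."""
    base_noflash = box_blur(noflash, radius)
    base_flash = box_blur(flash, radius)
    detail_flash = flash - base_flash   # detail = image minus its base layer
    return base_noflash + detail_flash  # ambience from one shot, detail from the other
```

A real implementation would use an edge-preserving filter for the base layer (as the next paragraphs explain), but the two-layer decomposition is the core idea.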

To remove unwanted features from the image, such as noise — the unexpected variations in color or brightness created by digital cameras — the system blurs any undesired pixel with its surrounding neighbors, so that it matches those around it. In conventional filtering, however, this means even those pixels at the edges of objects are also blurred, which results in a less detailed image. 
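The edge-smearing problem with conventional filtering is easy to see in one dimension. In this minimal sketch (function name and radius are illustrative), a plain mean filter applied across a sharp 0-to-1 step produces intermediate values, blurring the edge:

```python
import numpy as np

def mean_filter(signal, radius=1):
    """Average each sample with its neighbors. This smooths noise,
    but it is blind to edges, so sharp transitions get smeared."""
    padded = np.pad(signal, radius, mode="edge")
    return np.array([padded[i : i + 2 * radius + 1].mean()
                     for i in range(len(signal))])

edge = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
print(mean_filter(edge))  # the 0 -> 1 step is softened to 1/3 and 2/3 at the boundary
```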

But by using what is called a bilateral filter, the researchers are able to preserve these outlines, Rithe says. That is because bilateral filters will only blur pixels with their neighbors if they have been assigned a similar brightness value. Since any objects within the image are likely to have a very different level of brightness than that of their background, this prevents the system from blurring across any edges, he says. 
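The bilateral idea can be sketched in one dimension as well: each sample is averaged only with neighbors whose brightness is close to its own, with closeness scored by a Gaussian weight on the intensity difference. The parameter `sigma_r` and the function name here are assumptions for illustration, not values from MIT's design.

```python
import numpy as np

def bilateral_filter_1d(signal, radius=1, sigma_r=0.2):
    """Average each sample only with neighbors of similar brightness.
    Neighbors far from the center value get near-zero weight, so
    smoothing stops at edges instead of blurring across them."""
    out = np.empty(len(signal), dtype=float)
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        neighbors = signal[lo:hi]
        # Range weight: Gaussian in the brightness difference.
        weights = np.exp(-((neighbors - signal[i]) ** 2) / (2 * sigma_r ** 2))
        out[i] = np.sum(weights * neighbors) / np.sum(weights)
    return out

edge = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
print(bilateral_filter_1d(edge))  # the 0 -> 1 step survives nearly intact
```

Compare this with a plain mean filter over the same step: the mean filter produces intermediate values at the boundary, while the bilateral filter leaves the edge essentially untouched.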

To perform each of these tasks, the chip’s processing unit uses a method of organizing and storing data called a bilateral grid. The image is first divided into smaller blocks. For each block, a histogram is then created. This results in a 3-D representation of the image, with the x and y axes representing the position of the block, and the brightness histogram representing the third dimension.

This makes it easy for the filter to avoid blurring across edges, since pixels with different brightness levels are separated in this third axis in the grid structure, no matter how close together they are in the image itself.
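A toy version of such a grid can be built directly from that description: tile the image into blocks, and give each block a brightness histogram as the third axis. The block size, bin count, and function name below are assumptions for illustration; a production bilateral grid would also store accumulated pixel values for the filtering step, not just counts.

```python
import numpy as np

def bilateral_grid(img, block=4, bins=8):
    """Build a toy bilateral grid from a grayscale image in [0, 1].
    The x and y axes index block position; the third axis is a
    brightness histogram, so pixels with very different brightness
    land in different cells even when they are adjacent on screen."""
    h, w = img.shape
    gh, gw = h // block, w // block
    grid = np.zeros((gh, gw, bins))
    for by in range(gh):
        for bx in range(gw):
            tile = img[by * block:(by + 1) * block,
                       bx * block:(bx + 1) * block]
            hist, _ = np.histogram(tile, bins=bins, range=(0.0, 1.0))
            grid[by, bx] = hist
    return grid

# A dark left half and a bright right half end up in separate
# brightness bins, so a filter operating on the grid never mixes them.
img = np.zeros((8, 8))
img[:, 4:] = 0.9
grid = bilateral_grid(img)
print(grid.shape)  # (2, 2, 8)
```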

The research, funded by Taiwanese manufacturer Foxconn, has been turned into a prototype by Taiwan Semiconductor Manufacturing Company. This device, built using 40-nanometer CMOS technology, has been integrated with a camera and display for testing. No plans for commercialization have been revealed.