The cone-shaped cells inside our eyes are sensitive to red, green, and blue, the "primary colors". We perceive all other colors as combinations of these three. In conventional photography, the red, green, and blue components of light expose the corresponding chemical layers of color film. The new Foveon sensors are based on the same principle and have three stacked sensor layers, each measuring one primary color, as shown in this diagram. Combining these color layers yields a digital image: a mosaic of square tiles, or "pixels", each of uniform color, so tiny that the image appears continuous and smooth. As a relatively new technology, Foveon sensors are currently available only in the Sigma SD9 and SD10 digital SLRs and have drawbacks such as relatively low sensitivity to light.
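The layer-combination idea can be sketched in a few lines. This is a toy illustration, not Foveon's actual processing: the three 2x2 layers below are made-up sample intensities (0-255), and `combine_layers` simply stacks them into one (R, G, B) value per pixel, which is possible because every layer measures every pixel site.

```python
# Toy sketch of a Foveon-style capture: three stacked layers each record
# one primary color at every pixel site, so no interpolation is needed.
# The 2x2 intensity values below are made-up sample data (0-255).
red_layer   = [[200,  10], [ 60, 255]]
green_layer = [[ 30, 220], [ 60,   0]]
blue_layer  = [[ 10,  40], [ 60, 128]]

def combine_layers(r, g, b):
    """Stack the three layers into one image of (R, G, B) pixels."""
    return [[(r[y][x], g[y][x], b[y][x]) for x in range(len(r[0]))]
            for y in range(len(r))]

image = combine_layers(red_layer, green_layer, blue_layer)
print(image[1][0])  # equal R, G, B readings make a neutral gray pixel
```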
All other digital camera sensors measure only the brightness of each pixel. As shown in this diagram, a "color filter array" positioned on top of the sensor captures the red, green, and blue components of the light falling onto it. As a result, each pixel measures only one primary color, while the other two are estimated in software from the surrounding pixels. These approximations reduce image sharpness, which is not the case with Foveon sensors. However, as the number of pixels in current sensors increases, the sharpness reduction becomes less visible. The technology is also more mature, and many refinements have been made to increase image quality.
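The estimation step can be sketched as follows. This is a minimal, hypothetical example, not any camera's real algorithm: the 4x4 mosaic below is an invented Bayer-style filter layout in which each cell stores the one primary it measured and its brightness, and the two missing primaries are approximated by averaging the nearest neighbors that did measure them.

```python
# Minimal sketch of estimating missing colors behind a color filter
# array. The mosaic layout and values are illustrative assumptions:
# each cell holds (measured primary, brightness 0-255).
mosaic = [
    [('G', 120), ('R', 200), ('G', 110), ('R', 190)],
    [('B',  40), ('G', 130), ('B',  50), ('G', 125)],
    [('G', 115), ('R', 210), ('G', 105), ('R', 180)],
    [('B',  45), ('G', 135), ('B',  55), ('G', 120)],
]

def estimate(color, y, x):
    """Estimate a missing primary at (y, x) by averaging the nearest
    neighbors (including diagonals) that actually measured that color."""
    samples = [mosaic[j][i][1]
               for j in range(max(0, y - 1), min(len(mosaic), y + 2))
               for i in range(max(0, x - 1), min(len(mosaic[0]), x + 2))
               if mosaic[j][i][0] == color]
    return sum(samples) // len(samples)

def full_pixel(y, x):
    """Return (R, G, B): one value measured, two estimated in software."""
    measured_color, value = mosaic[y][x]
    return tuple(value if c == measured_color else estimate(c, y, x)
                 for c in ('R', 'G', 'B'))

print(full_pixel(1, 1))  # green was measured; red and blue are estimated
```

Because the estimated values are averages of neighbors, fine detail is smeared slightly, which is the sharpness reduction the text describes.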
Similar to an array of buckets collecting rainwater, a digital camera sensor consists of an array of "pixels" collecting photons, the minute energy packets of which light consists. The number of photons collected in each pixel is converted into an electrical charge by the photodiode. This charge is then converted into a voltage, amplified, and converted into a digital value by the analog-to-digital converter, so that the camera can process the values into the final digital image.
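The signal chain above can be sketched numerically. All constants here (full-well capacity, unity gain, a 12-bit converter) are illustrative assumptions, not specifications of any real sensor; the point is only the sequence photon count → charge → voltage → amplification → digital value.

```python
# Rough sketch of the signal chain: photons -> charge -> voltage ->
# amplified voltage -> quantized digital number. Constants are made up.
FULL_WELL = 40_000      # photons a pixel "bucket" can hold before clipping
ADC_BITS = 12           # a hypothetical 12-bit analog-to-digital converter

def read_pixel(photons, gain=1.0):
    """Convert a photon count into a quantized digital number."""
    charge = min(photons, FULL_WELL)          # an overfull bucket saturates
    voltage = charge / FULL_WELL              # normalized 0..1 "voltage"
    amplified = min(voltage * gain, 1.0)      # amplifier, clipped at maximum
    return round(amplified * (2 ** ADC_BITS - 1))  # quantize to 0..4095

print(read_pixel(20_000))   # a half-full well
print(read_pixel(90_000))   # a saturated pixel reads full scale
```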
In CCD (Charge-Coupled Device) sensors, the pixel measurements are processed sequentially by circuitry surrounding the sensor, while in APS (Active Pixel Sensor) designs they are processed in parallel by circuitry within each pixel and on the sensor itself. Capturing images with CCD and APS sensors is comparable to image generation on CRT and LCD monitors, respectively.
The most common type of APS is the CMOS (Complementary Metal Oxide Semiconductor) sensor. CMOS sensors were initially used in low-end cameras, but recent improvements have made them increasingly popular in high-end cameras such as the Canon EOS D60 and 10D. CMOS sensors are also faster, smaller, and cheaper because they are more highly integrated (which also makes them more power-efficient) and can be manufactured in existing computer chip plants. The Foveon sensors mentioned earlier are also based on CMOS technology. Nikon's new JFET LBCAST sensor is an APS that uses JFET (Junction Field Effect Transistor) instead of CMOS transistors.
This article is written by Vincent Bockaert, author of The 123 of digital imaging Interactive Learning Suite, available at 123di.com.