Color Filter Array

By Vincent Bockaert
Each "pixel" on a digital camera sensor contains a light-sensitive photodiode that measures the brightness of the light falling on it. Photodiodes are monochrome devices: they cannot distinguish between different wavelengths of light. A "mosaic" pattern of color filters, the color filter array (CFA), is therefore positioned on top of the sensor so that each photodiode records only the red, green, or blue component of the light reaching it. The GRGB Bayer pattern is the most common CFA in use.

Mosaic sensors with a GRGB CFA capture only 25% of the red and blue components and just 50% of the green component of the light.
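The sampling described above can be sketched in a few lines of NumPy. This is an illustrative simulation, not any camera's actual pipeline; the specific 2x2 tile layout (G R on even rows, B G on odd rows) is one common convention, chosen here as an assumption:

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample an H x W x 3 RGB image through a GRGB Bayer CFA.

    Assumed layout per 2x2 tile (illustrative convention):
        G R
        B G
    Returns the single-channel "raw" mosaic plus a map giving the
    channel index (0=R, 1=G, 2=B) sampled at each pixel.
    """
    h, w, _ = rgb.shape
    chan = np.ones((h, w), dtype=int)   # green everywhere...
    chan[0::2, 1::2] = 0                # ...red on even rows, odd cols
    chan[1::2, 0::2] = 2                # ...blue on odd rows, even cols
    raw = np.zeros((h, w), dtype=rgb.dtype)
    for c in range(3):
        mask = chan == c
        raw[mask] = rgb[..., c][mask]
    return raw, chan

# Each 2x2 tile holds 1 red, 2 green, and 1 blue sample:
rgb = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
raw, chan = bayer_mosaic(rgb)
print((chan == 0).mean(), (chan == 1).mean(), (chan == 2).mean())
# -> 0.25 0.5 0.25
```

The printed fractions match the article's figures: a quarter of the pixels sample red, half sample green, and a quarter sample blue.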

[Figure: the mosaic split into its red channel pixels (25% of the pixels), green channel pixels (50% of the pixels), and blue channel pixels (25% of the pixels), alongside the combined image]

As you can see, the combined image isn't quite what we'd expect, but it is sufficient to distinguish the colors of the individual items in the scene. If you squint or step back from your monitor, your eyes will blend the individual red, green, and blue intensities into a (dim) color image.

[Figure: the red, green, and blue channels after interpolation, alongside the combined image]

The missing values in each color layer are estimated from the values of neighboring pixels and from the other color channels by the camera's demosaicing algorithm. Combining these complete (but partially estimated) layers yields a surprisingly accurate image with three color values for every pixel.
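The simplest form of the interpolation described above is plain neighbor averaging (bilinear demosaicing): each missing sample is estimated as the average of the known same-color samples in its 3x3 neighborhood. Real cameras use far more sophisticated algorithms that also exploit the other color channels; the sketch below, with its assumed GRGB tile layout, only illustrates the basic idea:

```python
import numpy as np

def box3(a):
    """Sum of each pixel's 3x3 neighborhood (zero-padded at the borders)."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def bilinear_demosaic(raw, chan):
    """Fill in missing samples by averaging the known same-color
    neighbors in each 3x3 window -- the simplest demosaicing scheme.
    `raw` is the mosaic; `chan` maps each pixel to 0=R, 1=G, 2=B."""
    out = np.empty(raw.shape + (3,))
    for c in range(3):
        mask = (chan == c).astype(float)
        # Neighborhood sum of known samples / number of known samples:
        out[..., c] = box3(raw * mask) / box3(mask)
    return out

# Assumed GRGB layout per 2x2 tile: G R / B G.
h = w = 4
chan = np.ones((h, w), dtype=int)
chan[0::2, 1::2] = 0
chan[1::2, 0::2] = 2

# A flat mid-gray scene is reconstructed exactly, since every
# neighbor being averaged has the same value:
rgb = np.full((h, w, 3), 128.0)
raw = np.choose(chan, np.moveaxis(rgb, -1, 0))
full = bilinear_demosaic(raw, chan)
print(np.allclose(full, 128.0))   # -> True
```

In the Bayer layout every 3x3 window contains at least one sample of each color, so the division is always well defined; on scenes with fine detail, however, this naive averaging produces the color fringing and zipper artifacts that better demosaicing algorithms are designed to suppress.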

Many other types of color filter array exist, such as CYGM, which uses cyan, yellow, green, and magenta filters in equal numbers, and the RGBE array found in Sony's DSC-F828.

This article is written by Vincent Bockaert, author of The 123 of digital imaging Interactive Learning Suite.