COLOR MANAGEMENT – A WALKTHROUGH

Soon after amateur photographers get serious about the subject, they encounter Color Management, and they are almost always confused. There is a good reason for this: it is confusing. Its parts are far-flung and involved, and until those parts are pulled together and their relations made clear, it remains confusing. This is no easy task, because color theory is among the most arcane subjects in all of photography, and its experts are few and far between – and I certainly do not profess to be one of them. But I feel its parts can be pulled together and their relations made sufficiently clear to remove much of that confusion.

In this article, I presume to do just this so that color management, at least for me, becomes intelligible and useful. (*1) Several excellent lengthy works exist for this subject (*2), but my goal here is to keep things short and to the point, yet inclusive. To this end, I simplify some nonessential complications, employ some pedagogical constructs, and treat some topics with less than complete rigor, but I always try to present basic principles that I feel are essentially correct. Related matters and notes, indicated with a numbered (*), are given on page 2.

COLOR SPACES AND PROFILES

A color model is a mathematical means for representing colors through a set of numbers. The familiar RGB color model, for example, uses three numbers (red, green, blue) to specify each color, while the equally familiar CMYK model uses four (cyan, magenta, yellow, black). If we were to attach to a specific color model a mapping of its numbers into a known reference standard (such as CIELAB or CIEXYZ, which effectively contain all visible colors – and a good deal more), there would be an implied set of all such mapped colors that defines the gamut (range, space) of that particular color model. The combination of the specific color model and its mapping constitutes a color space. sRGB, Adobe RGB, ProPhoto RGB, and Beta RGB are examples of color spaces within the RGB color model.

There are five basic color spaces that are of importance to those who use digital cameras and either print images or display them on the web (namely, all of us). These are the camera's native color space as embodied in its raw data (actually not a true color space – but close enough for our present purposes), the camera's assigned jpeg space (typically either sRGB or Adobe RGB), the raw processor and/or image editor's (say, ACR or Photoshop's) working space, the computer monitor's color space, and the output-medium's (web or printer's) color space. These color spaces can be, and typically are, different. They differ in the span of colors they contain (gamut) and the numbers assigned to define a particular color.

I assume here an RGB color model, so each color is assigned a triple of RGB values. In an 8-bit mode, each of these three values is any of the 2^8 numbers between 0 and 255 – for a total of (2^8)^3 = 16,777,216 colors. In a 16-bit mode, each value is any of the 2^16 numbers between 0 and 65535 – for a total of (2^16)^3, more than 2.8*10^14 colors. Each color space has an associated profile that specifies how its RGB values are to be interpreted. These profiles have the names of their spaces, like sRGB, Adobe RGB, ProPhoto RGB, LCDMonitor, HP-LP2475w, Epson S04164X, Canon iP6700D PR1, and the like. Since the same image can pass among different color spaces, one must be able to translate the numbers of one space into those of another so that colors can be perceived the same (or appropriately similar) regardless of the different numbers used to identify them. To do this, a standard space like LAB (CIE 1976) or XYZ (CIE 1931) is used as a common reference – a Profile Connection Space. This is what Color Management is all about.
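As a quick check of this arithmetic, a few lines of Python reproduce the counts:

    # Total representable colors for 8-bit and 16-bit RGB triples.
    levels_8  = 2 ** 8     # 256 values per channel (0-255)
    levels_16 = 2 ** 16    # 65,536 values per channel (0-65535)

    print(levels_8 ** 3)   # 16,777,216
    print(levels_16 ** 3)  # 281,474,976,710,656 - more than 2.8*10^14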

LAB is a color space with an incredibly wide gamut, encompassing virtually every color humans can see and many, many more – some that are not "real", existing only in theory and not in human perception. LAB models each color as a triple (L, a, b), where L is Lightness, a is a green-magenta index (negative values of a are green, positive are magenta), and b is a blue-yellow index (negative values are blue, positive yellow). LAB attempts to model human color perception and to identify colors as people actually perceive them. One can transform the values of a given RGB color space into LAB values, and vice versa. The sRGB triple of (250, 20, 10), for example, has a LAB value of (54, 78, 66) and corresponds to a particular shade of red, like "fire-engine red." Likewise, an Adobe RGB triple of (215, 27, 19) has the same LAB values (54, 78, 66), and so corresponds to the same color. Thus, the sRGB color (250, 20, 10) is essentially the same as the Adobe RGB color (215, 27, 19). LAB numbers, then, become a known standard through which colors from different color spaces can be compared; this standard can be used to define a translation of colors from one color space into the same (or closely similar) colors of another. The profile associated with a given color space tells just how its numbers are to be interpreted and how they should be translated when needed.
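To make this concrete, here is a minimal sketch of an sRGB-to-LAB conversion in Python, using the standard sRGB transfer function and the published sRGB-to-XYZ (D65) matrix. A real color engine also adapts to the D50 white point of the ICC Profile Connection Space, which this simple form omits – hence its results land a unit or two away from the values quoted above:

    import numpy as np

    def srgb_to_lab(rgb):
        # Linearize: undo the sRGB gamma encoding.
        c = np.asarray(rgb) / 255.0
        lin = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
        # Linear sRGB -> CIE XYZ (D65 white).
        M = np.array([[0.4124, 0.3576, 0.1805],
                      [0.2126, 0.7152, 0.0722],
                      [0.0193, 0.1192, 0.9505]])
        xyz = M @ lin
        # XYZ -> LAB, relative to the D65 white point.
        t = xyz / np.array([0.95047, 1.0, 1.08883])
        f = np.where(t > (6/29) ** 3, np.cbrt(t), t / (3 * (6/29) ** 2) + 4/29)
        return 116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])

    print([round(v) for v in srgb_to_lab((250, 20, 10))])
    # -> [53, 77, 64]; the (54, 78, 66) quoted above reflects the
    # D50-adapted form that ICC-based tools such as Photoshop use.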

The translation from one space to another is not always perfect. Some color spaces contain more colors – have a wider gamut – than others. Adobe RGB, for example, contains many colors that are not included in sRGB. When converting from a broader to a narrower space, the "extra" colors must somehow be mapped into "nearby" or "most nearly appropriate" colors in the narrower space. There are several ways – called rendering intents – for doing this, which I discuss in more detail in the Related Matters on page 2. Even colors that exist in both spaces cannot always be translated exactly because of the granularity of the integers that make up the triple. An exact translation of a given color, for example, might require a green value of 34.742, which, however, can only be represented by the nearest integer value of 35. Indeed, rather generally, the mathematical transformations that do the translations (conversions) are subject to rounding and truncation errors. However, the basic idea is simple: profiles allow the colors of one space to be converted into appropriate colors in another space. To remind ourselves that this translation may not always be perfect, I shall on occasion include parenthetical statements such as (nearest possible) or (subject to rendering intent).
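To see how these rounding errors show up in practice, consider a small numerical experiment. The 3x3 matrix below is purely a stand-in for a real profile conversion – any invertible matrix will do – but it shows how quantizing to integers at each step keeps a round trip from returning every value exactly:

    import numpy as np

    # A toy round-trip: "convert" 8-bit triples with a made-up invertible
    # matrix (a stand-in, not actual color math), rounding to integers at
    # each step as an 8-bit file must.
    rng = np.random.default_rng(0)
    rgb = rng.integers(0, 256, size=(10_000, 3)).astype(float)

    M = np.array([[0.90, 0.08, 0.02],
                  [0.05, 0.90, 0.05],
                  [0.02, 0.08, 0.90]])

    out  = np.rint(rgb @ M.T)                  # convert, quantize
    back = np.rint(out @ np.linalg.inv(M).T)   # convert back, quantize again

    # Most triples survive exactly, but some channels land one step off.
    print(np.abs(back - rgb).max())            # typically 1.0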

THE FIVE SPACES

Let's take a picture of a lovely red fire engine and follow through what happens.

1. Camera Raw Data (Camera-Native) "Space"

First, the camera's sensor captures the light from the scene containing the fire engine as electrons in its separate R, G, and B photosites (sensels). These electron charges are not a color (an RGB triple), but separate monochrome R, G, and B luminance values that have been segregated by a Color Filter Array (CFA) on the sensor, which allows only certain color ranges to affect specific photosites (I'll assume they are in a Bayer pattern). Next, these charges are transformed by the camera's electronics and software into a digital raw data file with separate R, G, and B data. These data are linear, do not yet have colors (or a gamut), are not yet an image in the usual sense, do not have a white balance, and do not constitute a true color space. But they nevertheless definitively represent the camera's capture of the scene as shot, and they are the basis for any image that follows. These data fully reflect the camera sensor's intrinsic sensitivities to the differently colored light of the scene – as determined by the CFA and by the quantum (photon-conversion) efficiencies for different colors – and, as such, they establish what is referred to as the camera's native space (or simply camera-native).

Raw data have no associated white balance. In an RGB color space, grays are represented by equal amounts of R, G, and B (R=G=B). But the camera's sensor has quite different sensitivities to R, G, and B colors. If one were to shoot a gray wall in sunlight, most cameras would record raw data dominated by G values, with lesser B and even lesser R. If processed at these values, the resulting image would have a decidedly greenish tint. This is a camera-native "gray" (and, for those familiar with the subject, this is the green dominance one sees when using UniWB). Ultimately a white balance will be applied to bring the R and B data into proper balance with the G data, allowing grays to look gray instead of green. But this happens after the formation of the raw data, and the raw data themselves have no white balance.
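A bare-bones sketch of what that later white-balance step amounts to, assuming demosaiced RGB data and a patch known to be gray (the numbers below are invented; real converters apply such gains to the raw channels, and with more care):

    import numpy as np

    def white_balance(img, gray_patch):
        """Scale channels so a known-gray patch comes out with R = G = B."""
        means = gray_patch.reshape(-1, 3).mean(axis=0)  # patch's avg R, G, B
        gains = means[1] / means                        # normalize to green
        return img * gains                              # boosts R and B

    # A camera-native "gray" dominated by green, as one sees with UniWB:
    patch = np.array([[[0.21, 0.48, 0.33]]])
    print(white_balance(patch, patch))   # -> [[[0.48  0.48  0.48]]]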

If you are shooting raw, these are the data files that you open in your raw processor, say ACR or RawTherapee or Capture One or RPP. If you are shooting jpeg, these are the data that the camera's jpeg engine uses to create its OOC jpeg image. We'll return to raw processing shortly, but for now let us suppose you are shooting jpeg. This takes us to your camera's jpeg space.


2. Camera Jpeg Space

You've just taken a shot of a fire engine, and, behind the scenes, it is captured in the raw data. But you want an OOC jpeg to show the kids. So, also behind the scenes, the camera's jpeg engine takes the raw data, applies a white balance (either auto or some preset), and "develops" them through a process called demosaicing or interpolation: inferring, from the separate R, G, and B raw values of the pixels neighboring each image pixel, an appropriate triple of RGB values (a color) to be associated with that pixel. It does this for each of the camera's, say, 12 million pixels. It then applies your chosen camera settings and a gamma. This transformation is possible because the camera's engineers have programmed into the camera's jpeg engine the information needed for translating the raw-data values of camera-native space into the RGB values for the various jpeg color spaces the camera can be instructed to use (typically sRGB and Adobe RGB). This translation is not a standard profile; it can vary from one camera model to another and is likely known completely only to the camera's engineers. But the various third-party raw-processor developers do a good job of reverse engineering, so we can all process our raw data with a raw processor of our choice.
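To give the flavor of demosaicing, here is a bare-bones bilinear sketch assuming an RGGB Bayer layout. It is nothing like the sophisticated, edge-aware algorithms cameras and raw processors actually use, but it shows the basic idea of inferring full RGB triples from single-color neighbors:

    import numpy as np
    from scipy.ndimage import convolve

    def demosaic_bilinear(raw):
        """Bilinear demosaic of an RGGB Bayer mosaic (borders approximate)."""
        h, w = raw.shape
        # Masks marking which photosites carry which color (RGGB tiling).
        r = np.zeros((h, w)); r[0::2, 0::2] = 1
        g = np.zeros((h, w)); g[0::2, 1::2] = 1; g[1::2, 0::2] = 1
        b = np.zeros((h, w)); b[1::2, 1::2] = 1

        # Averaging kernels: green from its 4 edge neighbors; red and blue
        # from their row, column, and diagonal neighbors.
        k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
        k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

        out = np.empty((h, w, 3))
        out[..., 0] = convolve(raw * r, k_rb)   # R estimated everywhere
        out[..., 1] = convolve(raw * g, k_g)    # G estimated everywhere
        out[..., 2] = convolve(raw * b, k_rb)   # B estimated everywhere
        return out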

So now the pixels corresponding to the red fire engine become an RGB triple (a color) derived from interpolating the separate R, G, and B raw values from the neighboring fire-engine pixels. But what RGB values will be created? You determine that with your selection of the camera's jpeg space in your camera's menu. Let us suppose you have selected sRGB. This space has a particular set of RGB numbers for each color, and so the camera's image processor converts the raw data into an image with sRGB values and then indicates this by tagging the sRGB name to the file to tell any color-aware image processor or device that this is the way the numbers should be interpreted. (*3)

In particular, the camera's native R, G, and B values for the red fire engine are now converted into the sRGB triple for that same fire-engine red (250, 20, 10) in the jpeg file that has been created – assuming, of course, that an appropriate white balance has been chosen. The result is an image with appropriate sRGB color values assigned to each pixel. If you had instead chosen Adobe RGB for your camera's color space, the resulting jpeg file would have the Adobe RGB values for fire-engine red (215, 27, 19), and they would be so tagged. And despite the fact that the sRGB and Adobe RGB files would employ different RGB triples to describe the colors, their profiles would notify any color-aware image processor how to interpret those triples so that the colors would be rendered as perceptually similar.

The camera's jpeg engine, or at least a part of it, is working all the time, even when you are shooting raw alone, creating the image you see on the camera's monitor and the data for the histograms and blinkies (blinking indicators of blown highlights or blocked shadows). This jpeg, possibly of low quality, is embedded in the raw file to allow previewing in "playback" mode.

3. Working Color Space

We now come to a branch in the road. The jpeg just produced can either be sent to the printer, which will be dealt with below in section 5, or sent on to an image editor, like Photoshop, for post-processing, which I consider next in section 3a. I have sidestepped what happens when shooting raw but will pick that up in section 3b. For now, let's send our image to the working space of our chosen image editor.

3a. Image Editor

Let us bring an image, say a TIFF or a jpeg, into Photoshop – you can readily adapt what follows to whatever image editor you prefer. Perhaps the image is the OOC jpeg you just made above, or one from a scanner, or one sent to you by a friend. The image already has its profile, say sRGB. When you open this file, you encounter yet another color space: Photoshop's default working space, which you will have previously chosen through Photoshop's Color Settings preferences. A working space is simply an appropriate color space in which the image processor can carry out its tonal and color transformations. This may be the same sRGB space as the image, or it may be different, say, Adobe RGB or ProPhoto RGB. Let's say it is Adobe RGB.

We open, then, a file with an sRGB profile in Photoshop – which is using an Adobe RGB working space. Clearly something must give, since we do not want to be applying an Adobe RGB interpretation to our sRGB image values. We must choose either to change Photoshop's working space to sRGB or to convert the image to Adobe RGB. A dialog presenting this choice appears when you open the image – if it does not, adjust the Color Management Policies section of your Color Settings so that it will. Let us assume here, for illustrative purposes, that you opt to convert the image to the Adobe RGB default working space. This process (described more fully in Related Matters on page 2) simply changes the image file's sRGB color values to the most suitable Adobe RGB values and tags the file as Adobe RGB so the new numbers will be interpreted appropriately. The image will appear to change very little, if at all (since Adobe RGB has a broader gamut than sRGB), but the numbers describing the colors are completely altered. Fire-engine red now has the Adobe RGB values (215, 27, 19) instead of the sRGB values (250, 20, 10).
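For the curious, here is a minimal sketch of what such a conversion does under the hood, passing through XYZ as the connection space with hard-coded matrices. A real CMM works from the ICC profiles and a chosen rendering intent rather than fixed matrices, but the rounded result reproduces the triples quoted above:

    import numpy as np

    def srgb_to_adobe_rgb(rgb):
        """Convert an 8-bit sRGB triple to Adobe RGB (1998) via CIE XYZ."""
        c = np.asarray(rgb) / 255.0
        # sRGB gamma decode -> linear light.
        lin = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
        # Linear sRGB -> XYZ (D65).
        to_xyz = np.array([[0.4124, 0.3576, 0.1805],
                           [0.2126, 0.7152, 0.0722],
                           [0.0193, 0.1192, 0.9505]])
        # XYZ (D65) -> linear Adobe RGB.
        to_adobe = np.array([[ 2.0414, -0.5649, -0.3447],
                             [-0.9693,  1.8760,  0.0416],
                             [ 0.0134, -0.1184,  1.0154]])
        lin_a = np.clip(to_adobe @ (to_xyz @ lin), 0, 1)
        # Adobe RGB gamma encode (gamma = 563/256, about 2.2).
        return np.rint(255 * lin_a ** (256 / 563)).astype(int)

    print(srgb_to_adobe_rgb((250, 20, 10)))   # -> [215  27  19]

Note how different the two triples are, even though they name the same color – exactly the situation the profiles exist to sort out.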

The choice of working space depends on what you want to do with the image. Some people who process for photo-quality inkjet printers like to work in a wider-gamut space like Adobe RGB or the even wider ProPhoto RGB. They would also likely have avoided using sRGB as the camera's jpeg color space, because once created as sRGB, the image is stuck with its narrow gamut of colors (I deal with this in more detail in Related Matters on page 2). But, those who process for the web are often quite happy to work in sRGB all the way from shutter-down to posted image. sRGB is the de facto standard for web imagery, and so one will almost certainly end up converting web images to sRGB anyway. The narrow gamut may simply not be viewed as a problem in this case. Yet other people prefer using even wider-gamut spaces like ProPhoto RGB because they feel it gives more leeway in processing, although this is an advantage that is best exploited when shooting raw rather than jpeg since the OOC jpeg will necessarily have restricted you to a considerably narrower gamut.

Not all profiles that you see listed on your computer are suitable for working spaces. We will shortly encounter device profiles for monitors, printers, scanners, and the like. These are all tailored to the very specific color needs of their corresponding devices and would not serve at all well to define a working space. The most commonly employed working spaces are sRGB, Adobe RGB, ProPhoto RGB, and Beta RGB. A number of others exist as well, such as Apple RGB and ColorMatch RGB, but they are not widely used.

3b. Raw Processor

Previously I assumed you were post-processing a jpeg or TIFF with an image editor like Photoshop. Suppose now, however, that you are shooting raw and you bring your raw file into a raw processor like ACR. As has been noted, these raw data are camera-native, have no profile attached, and are not even an image in the usual sense. Nor are these data affected by any of the camera's settings, including white balance – although these settings typically accompany the raw file as metadata. Thus, when you open the file in your raw processor, an initial rendering must be made so you can "see what it looks like." This initial rendering is made using the raw processor's default settings in its default working space, along with its knowledge of how to demosaic files from your particular camera. (*4)

The raw processor's working space varies from one processor to another. In ACR, for example, you have a choice among sRGB, Adobe RGB, ProPhoto RGB, or ColorMatch RGB, although a linearized version (gamma = 1.0) of the very broad ProPhoto RGB appears to be its native intermediate space. In others, such as Lightroom or PhotoNinja, there is no choice: they use a linearized version of ProPhoto RGB (although the image as displayed on the monitor and the histograms have a 2.2 gamma applied to make them appear more in line with human perception). Yet other processors, such as RPP, carry out a good deal of the processing directly in camera-native space without any intermediate RGB translation. When you have a choice of working space, the same kinds of considerations pertain as with the image editor. But, of course, here we are dealing with the raw data and the very broad color gamut implicit in most cameras' native spaces. So if you wish to retain the color riches that lie buried in most raw data, you will wish to choose a working space broad enough to contain them, namely, ProPhoto RGB. But if you're headed for the web, you might be just as happy using sRGB to keep your processing nicely confined within the narrow bounds ultimately required of the final image. I love ProPhoto RGB and use it for my print-destined processing. But I have also found that processing web-destined images in sRGB helps to keep them more comfortably within that narrow gamut and can result in better images than those processed in ProPhoto RGB and subjected to the cold shower of conversion to sRGB at the last moment. Soft proofing (discussed in Related Matters) can help in making this transition, but its effectiveness can be limited.
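The parenthetical point about display gamma is easy to illustrate: a linear (gamma = 1.0) encoding of middle gray looks very dark unless a display gamma is applied. For example:

    # An ~18% (middle-gray) linear value encodes to nearly half scale
    # under a 2.2 display gamma, which is why linear working data are
    # shown through a gamma rather than directly.
    linear = 0.18
    print(round(linear ** (1 / 2.2), 3))   # 0.459 - displayed as mid-gray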

Whether or not you have a choice of working space, you always have a choice of the color space used to render and profile your exported image. These typically include sRGB and Adobe RGB, but can also include ProPhoto RGB, ColorMatch RGB, or Beta RGB, and, in some cases, even LAB. Whichever you choose, the image will be defined by the numbers appropriate to that space and will contain the proper profile telling how to interpret them. The raw processor has served much the same function here as the camera's jpeg engine did in the description above of the camera's jpeg color space: it has demosaiced the data and created a new image file with RGB numbers and a profile corresponding to the color space you have chosen. It differs significantly, of course, in that the resulting image embodies a far greater freedom of adjustment, can be rendered in a greater choice of gamuts and file types (jpeg, TIFF, etc.), and typically benefits from the use of a much deeper 16-bit depth.

It should be noted that raw processing is non-destructive: nothing happens to the raw data file during this processing; it remains unchanged throughout. The raw data simply form the basis of the new image being rendered by the settings of the raw processor. In general, the raw processor's initial (or default) settings are essentially arbitrary, and the default image should not typically be viewed as "meaningful" or final in any sense. Raw images are meant to be processed; you use the sliders to alter the settings from the default values to produce the image you want. In ACR these defaults can be set any way you want, so there is nothing sacrosanct about them or the image they produce. They may result in a nice-looking initial image, or they may not. This is important to understand, because some people are disappointed in the way their raw images look when they are first viewed in the raw processor. They don't think they look as good as the default image created by another raw processor, or they don't think they are quite up to their jpeg images. But this is not a fault of the raw processor; it is a failure of the user to make proper use of it. When you're shooting raw, it is you who make the image in your processor; it is not the processor that makes the image for you.

Some manufacturers' raw processors honor all the camera's settings from the metadata and use them as the default settings when dealing with their own cameras' raw files (Canon DPP and CR2s or Olympus Viewer and ORFs, for example). This may allow the resulting conversions to look very much like OOC jpegs based on these same settings. But this is not true of most "third-party" raw processors, such as LR, ACR, RawTherapee, or RPP, which typically ignore the camera's settings except for the "as-shot" white balance. (*5)

One of the more important aspects of color managing a raw file is white balance. We recall that the raw data are in camera-native space and are not white balanced. Most raw processors create their default rendering using the "as shot" white balance given in the raw file's metadata, but this is only for starters. Unlike the case with jpegs, this WB is not baked into the image, and it is up to you to provide an appropriate WB. Just what is "appropriate" depends on your artistic intentions. White-balance reference cards and similar devices are often used to help establish a WB that neutralizes color casts in the light source, such as the yellowish cast of incandescent light. While such a WB may not always be what is wanted – we surely would not want to "neutralize" a red sunset, for example – it would be appropriate if we wished to be sure that our fire engine's red was given the proper fire-engine red RGB triple in whatever working space we were using.

So the raw processor is software that allows you to become a substitute for the camera's jpeg engine – actually much more than a substitute, since you get to convert the raw data as you wish rather than as programmed by the manufacturer's engineers. And just as the camera's jpeg engine or an image editor tags the resultant output file with the chosen profile, so too does the raw processor. The output of the raw processor is a "developed" image file with an appropriate profile embedded. This file can be saved as a jpeg or TIFF or whatever, or it can be sent on for further processing in, say, Photoshop as a psd. But when it's done, if all was done properly, the fire-engine red will continue to be fire-engine red.

So now we have a final file – produced either by the camera or processed by you – with a profile attached. But all through the raw-processing and/or post-processing this image was being viewed on your monitor. How did it know what colors to display? How did it know to show a fire-engine red? What you were seeing on your monitor was being translated, by means of the monitor's profile, from the image's color space (or the raw processor's or image editor's working space) into the monitor's color space.

4. Monitor Color Space

Your monitor's pixels display a given color depending (quite loosely) upon the electronic excitation given by three drivers, R, G, and B. A given RGB combination of electronic excitation to a given pixel produces a given color. (CRT and LCD screens work differently, but there is no need to go into that here; the essential idea is the same.) The monitor, too, has a profile that tells the computer how to convert a given color from the source into the same or a similar color on the screen (subject to rendering intent). Using the appropriate profile, the computer's color-management system determines, for each of the monitors connected to your computer, how to transform a given, say, Adobe RGB value for fire-engine red into the RGB excitation values necessary to produce something close to fire-engine red on the monitor. Most monitors, built-in or external, come with a default profile – which, in practice, tends to leave a great deal to be desired. You are far better off building a custom monitor profile, but not until your monitor is properly calibrated.

Monitor calibration standardizes a number of the monitor's characteristics. Calibration establishes a desired temperature (say, 6500 K), a proper gamma (say, 2.2 or L*), appropriate black and white points (contrast), a suitable brightness (say, 90-120 cd/m^2), and proper color balance (so grays look gray, from light to dark). There are software utilities that attempt this process, but it is best done with a hardware measuring device (a puck) and its associated software. Once the monitor is calibrated, a profile can be created that allows the translation of color-space values into the proper electronic excitations to create those colors appropriately on the screen. If you do not have good monitor calibration and an appropriate profile, these translations will be wrong, and what you see on your screen can look quite different from what was indicated by the source file. It is quite correctly claimed that proper monitor calibration and profiling are the first and most important steps toward proper color management. Without them, you have no way of assessing whether what you're seeing on your monitor is at all appropriate, or whether the adjustments you are making in your raw processor or image editor are producing what you really want or intend. Once a profile has been made for a monitor, it must be properly installed in your computer's system as the default profile for that monitor. Most profiling software does this for you automatically when you save the profile. Color-aware applications use this default profile when displaying to the monitor. (*6)
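For the curious, here is a sketch of what a color-aware application does with that default profile, using Pillow's ImageCms wrapper around littleCMS (the file names are hypothetical; in practice the application obtains the monitor profile from the operating system):

    import io
    from PIL import Image, ImageCms

    # Transform the image's values into the monitor's space via the
    # monitor profile before display - what a color-aware viewer does.
    img = Image.open("fire_engine.jpg")   # tagged, say, Adobe RGB
    src = ImageCms.ImageCmsProfile(io.BytesIO(img.info["icc_profile"]))
    mon = ImageCms.getOpenProfile("/path/to/MyMonitor.icc")  # hypothetical

    xform = ImageCms.buildTransform(src, mon, "RGB", "RGB",
                                    renderingIntent=ImageCms.INTENT_PERCEPTUAL)
    on_screen = ImageCms.applyTransform(img, xform)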

This is a good spot to give a summary of where we have been. Specifically, we have seen how, with proper profiles, the fire engine in the scene originally framed with the lens and re-created on the camera's sensor has become an image on your monitor with colors very similar to those of the original scene. Translations have taken place: Raw data (camera-native space) -> Camera jpeg space -> Working space -> Monitor color-space. Each of these spaces may have had different sets of RGB values for the various colors in the scene and different gamuts, but appropriate profiles have determined at each stage how the colors' RGB values should get translated from one space to the next so that the colors stay right – or as right as they can be subject to different gamuts and rendering intent – and so that the fire engine continues to look essentially as it did at shutter down.

Next we usually want to share our image through some medium like the web or a print. This leads to the last of our five color spaces: the output color space.

5. Output (Printer, Web) Color Space

The output color space may pertain to a printer or the web. Let's begin with an inkjet printer.

5a. The Printer

The printer, like a monitor, makes a color by combining basic colors, in this case little blasts of variously colored inks. Inkjet printers use CMYK inks (and perhaps others as well), but the colors, just as for a monitor, are triggered by an RGB triple. A given set of RGB values causes the various inks to be laid down in the proper proportions to produce a given color. So, just like a monitor, the printer requires its own profile to translate the image's RGB values into the values it needs to produce the desired color (or something close to it).

Suppose we are printing an image from Photoshop tagged with the Adobe RGB color space. If we select Photoshop Manages Colors, the printer's profile – chosen by you in the print-driver dialog box – facilitates the translation of the Adobe RGB fire-engine red into the RGB values that the printer requires to produce a similar fire-engine red. These profiles are determined by profiling the printer with appropriate hardware, or they may come with the print driver supplied by the printer's manufacturer, or they may even be supplied by the paper manufacturer. Unlike the case with monitors, these manufacturer-supplied printer profiles tend to be quite good, perhaps better than can be made with basic hardware profilers unless done by truly skilled persons with quality equipment.
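In the same terms as the monitor sketch above, the Photoshop Manages Colors step amounts to something like the following Pillow sketch; the profile path is hypothetical, standing in for a driver-installed profile such as Canon iP6700D PR1:

    import io
    from PIL import Image, ImageCms

    # Convert working-space values into the printer/paper profile before
    # the data reach the driver - "application manages colors".
    img = Image.open("fire_engine.tif")   # tagged with its working space
    src = ImageCms.ImageCmsProfile(io.BytesIO(img.info["icc_profile"]))
    prn = ImageCms.getOpenProfile("/path/to/printer_paper.icc")  # hypothetical

    xform = ImageCms.buildTransform(src, prn, "RGB", "RGB",
                                    renderingIntent=ImageCms.INTENT_PERCEPTUAL)
    to_print = ImageCms.applyTransform(img, xform)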

These profiles often have names identifying the printer and the particular paper to be used. The Canon iP6700D with Photo Paper Pro printed at highest photo quality, for example, has a profile, installed when you install the print driver, named Canon iP6700D PR1. You can see these profiles listed in the pull-down menu in the PS Color Management dialog during printing. An appropriate profile for your printer/paper combination should be selected in the Print dialog before you hit the print button. If you don't have a profile for your printer, you are typically better off selecting Printer Manages Colors, rather than Photoshop Manages Colors. But, if you have a proper profile, use it. If you've done things properly all the way along the line, this is the way to make your prints look reasonably close to what's on your screen. (*6, *7)

The preceding applies, of course, only when you are printing to your home printer. If you intend to have your images printed commercially, you must find out from the company the profile they expect embedded in the files you give them: usually sRGB, but sometimes they can accept others. Once the commercial firm gets your properly tagged file, they are responsible for applying the profile appropriate to their printer.

5b. The Web

If you intend your image to be viewed on the web, a profile is also highly advisable. Some browsers are color-aware and are able to take an image with just about any major profile and handle it correctly (Safari and Firefox), but many other popular browsers are rather naïve and assume sRGB even when other profiles are embedded. Further, some color-aware browsers (and some image viewers) can handle different image profiles but then do not use your monitor's default profile to display them properly. Various online test pages will show you where your browser stands in this fiasco.

sRGB is an image color space whose gamut is appropriate for most monitors, even basic ones. And, as mentioned, many browsers assume an image is sRGB even if another color space has been indicated. Thus, your safest bet for files destined for the web is to embed the sRGB profile. If you send an image to the web with an Adobe RGB profile embedded, browsers that assume sRGB will make a mess of it. So, regardless of what working space you use while processing an image for the web (and include in your archived version of the file), as a FINAL step before saving your web version you should convert the image to sRGB. Care should be taken when doing this, because conversion from broader spaces into sRGB may result in undesirable color shifts and in highlight clipping (as noted above, some prefer to use sRGB as the working space for web-destined images to avoid this issue). When converting to sRGB, keep an eye on the histogram in the image editor, make adjustments as necessary, and use soft proofing (discussed in Related Matters on page 2) to help with the transformation.
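That final conversion might look like the following Pillow sketch (file names hypothetical); note that the sRGB profile is both used for the conversion and embedded in the saved file:

    import io
    from PIL import Image, ImageCms

    # Final step for a web-bound image: convert to sRGB and embed the
    # sRGB profile in the saved jpeg.
    img = Image.open("fire_engine_master.tif")   # in, say, Adobe RGB
    src = ImageCms.ImageCmsProfile(io.BytesIO(img.info["icc_profile"]))
    dst = ImageCms.createProfile("sRGB")

    web = ImageCms.profileToProfile(
        img, src, dst, renderingIntent=ImageCms.INTENT_RELATIVE_COLORIMETRIC)
    web.save("fire_engine_web.jpg", quality=90,
             icc_profile=ImageCms.ImageCmsProfile(dst).tobytes())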

Unfortunately, you can really have no idea what your web-destined image will look like when viewed by someone else. You have no control over the viewer's monitor, its gamut, its calibration and profiling, or the color-awareness of the browser being used. That lovely photo that you've just spent so much time processing, and that looks so colorful and vibrant on your monitor (especially if it's a wide-gamut monitor), could look – and all too often will look – drab, dull, and washed out on the viewer's monitor. The only help you can give yourself is to be sure the image is sRGB.

Related matters, including assigning vs. converting, gamut, rendering intent, and soft proofing are given on Page 2 of this article, along with the footnotes.