Started Jun 7, 2011 | Discussions thread
 Colorspace tutorial - Part 1 In reply to Great Bustard, Jun 8, 2011

Great Bustard wrote:

Let me begin by stating what I understand, then, if you would, explain to me why I'm wrong, and embellish.

For a Bayer CFA, the sensor records an RGGB pattern. This is processed into a full color photo with a particular colorspace.

In several steps. First demosaic; at this point the image is technically not in a color space, rather it is a set of spectral responses that we call "R", "G", and "B". But this is slightly misleading, as the RGB of color spaces has a specific meaning -- R, G, and B are particular spectral responses, which depend on (1) the spectral reflectivity of the object being imaged and (2) the spectral composition of the light illuminating it, in relation to (3) the three types of spectral response of the human visual system. A fourth factor is that, within this human-adapted system, one can choose different physical responses to be "the" primary colors: "pure" R, "pure" G, and "pure" B. The camera responses, which are not the human responses but rather some approximation of them using the spectral transmission properties of color dyes (assuming a Bayer-type sensor), must be converted to a 'standard' color space using an input profile.
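To make the demosaic step concrete, here is a minimal sketch (not the poster's code, and much cruder than real demosaicing, which interpolates to full resolution): take each 2x2 RGGB tile of the mosaic and build one RGB pixel, averaging the two greens.

```python
import numpy as np

def demosaic_half_res(mosaic):
    """mosaic: (2H, 2W) array tiled RGGB; returns (H, W, 3) camera RGB."""
    r  = mosaic[0::2, 0::2]   # top-left of each tile
    g1 = mosaic[0::2, 1::2]   # top-right
    g2 = mosaic[1::2, 0::2]   # bottom-left
    b  = mosaic[1::2, 1::2]   # bottom-right
    return np.dstack([r, (g1 + g2) / 2.0, b])

mosaic = np.array([[0.8, 0.5],
                   [0.3, 0.2]])      # a single RGGB tile
rgb = demosaic_half_res(mosaic)      # -> [[[0.8, 0.4, 0.2]]]
```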

More precisely, camera spectral responses are converted to a color-space-independent triplet of spectral responses called X, Y, and Z (also defined by the CIE). This is where the input profile comes in -- at its simplest, it assigns each of X, Y, Z as a linear combination of the camera RGB values, call them R_c, G_c, B_c, eg

X = C_xr R_c + C_xg G_c + C_xb B_c

and similarly for Y and Z. The matrix C depends on the camera, of course. X, Y, Z are referred to human vision, so this map is only accurate to the extent that the camera responses map one-to-one onto those of human vision (often not the case in some color ranges).
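The linear map above is just a 3x3 matrix applied to the camera triplet. A quick sketch -- the matrix entries here are made up for illustration; the real C comes from profiling the specific camera:

```python
import numpy as np

# Hypothetical camera matrix: rows are (C_xr, C_xg, C_xb), etc.
C = np.array([[0.6, 0.3, 0.1],   # X = C_xr R_c + C_xg G_c + C_xb B_c
              [0.3, 0.6, 0.1],   # Y row
              [0.0, 0.1, 0.9]])  # Z row

camera_rgb = np.array([0.5, 0.4, 0.2])   # linear R_c, G_c, B_c
xyz = C @ camera_rgb                      # -> [0.44, 0.41, 0.22]
```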

The attempt is made to separate the spectral response of the object from the spectrum of the illuminant by assigning a color temperature to the illuminant; this determines the R_c, G_c, B_c values of 'white', and through the above, the white point values X_w, Y_w, Z_w.
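In the simplest white-balance schemes this amounts to a per-channel rescaling: once the camera's response to the illuminant (the R_c, G_c, B_c of 'white') is known, each channel is divided by it so that white comes out neutral. The values below are illustrative, not from any real camera:

```python
import numpy as np

white_rgb = np.array([0.8, 1.0, 0.6])   # camera response to the illuminant
pixel     = np.array([0.4, 0.5, 0.3])
balanced  = pixel / white_rgb           # white itself would map to (1, 1, 1)
```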

At this point one can go easily to the Lab representation of color: for instance, L is a function f(Y/Y_w) of the spectral response relative to that of the white point; similarly, a and b are built from differences of f(X/X_w), f(Y/Y_w), and f(Z/Z_w).
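The standard CIE L*a*b* formulas, sketched out (note L comes from Y relative to the white point; a and b compare the other two channels against it):

```python
def f(t, d=6/29):
    # CIE nonlinearity: cube root, with a linear toe near zero
    return t ** (1/3) if t > d**3 else t / (3 * d**2) + 4/29

def xyz_to_lab(X, Y, Z, Xw, Yw, Zw):
    fx, fy, fz = f(X / Xw), f(Y / Yw), f(Z / Zw)
    L = 116 * fy - 16        # lightness, from Y/Y_w
    a = 500 * (fx - fy)      # red-green axis
    b = 200 * (fy - fz)      # yellow-blue axis
    return L, a, b

# The white point itself maps to L=100, a=0, b=0:
lab = xyz_to_lab(0.9642, 1.0, 0.8249, 0.9642, 1.0, 0.8249)  # -> (100.0, 0.0, 0.0)
```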

Or one can pass to one of the standard color spaces. A standard color space comes equipped with a color temperature, or rather a specified illuminant (eg D65 in the case of sRGB, D50 for Prophoto; color temperatures ~6500 K and ~5000 K respectively), so one should first adapt the input XYZ to that reference illuminant; then one can transform XYZ to the color space's RGB values. This is another linear transformation

r = M_rx X +M_ry Y + M_rz Z

and similarly for g, b; then a gamma transformation is applied

R = r^{1/gamma} etc

to arrive at the standard color space RGB (gamma is approximately 2.2 for sRGB -- strictly, sRGB uses a piecewise curve that 2.2 approximates -- and 1.8 for Prophoto). Note also that the matrix M depends on the chosen color space.
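Both steps together can be sketched like this. M below is the widely published sRGB (D65) XYZ-to-linear-RGB matrix; a plain 1/2.2 gamma is used as in the text, rather than sRGB's exact piecewise curve:

```python
import numpy as np

M = np.array([[ 3.2406, -1.5372, -0.4986],   # r = M_rx X + M_ry Y + M_rz Z
              [-0.9689,  1.8758,  0.0415],   # g row
              [ 0.0557, -0.2040,  1.0570]])  # b row

def xyz_to_srgb(xyz, gamma=2.2):
    linear = M @ np.asarray(xyz)                  # linear r, g, b
    return np.clip(linear, 0, 1) ** (1 / gamma)   # R = r^{1/gamma}, etc.

white_d65 = np.array([0.9505, 1.0, 1.089])   # D65 white point
rgb = xyz_to_srgb(white_d65)                 # white lands at (1, 1, 1)
```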

There is a very nice website

http://brucelindbloom.com/

that contains a lot of technical information on various color representations and their interrelation. In the Math section you will find the needed formulae, in particular the matrix M under the link "Computing RGB-to-XYZ and XYZ-to-RGB matrices" (though he uses slightly different math conventions than above).

Now, for a 24 bit conversion, it's my understanding that each pixel would be represented by various combinations of 256 levels of red, blue, and green. So, how, then, would a 24 bit sRGB file differ from a 24 bit aRGB file or a 24 bit ppRGB file?

Now we are prepared to discuss the difference between the various color spaces. The columns of the inverse matrix M^{-1} are the {X,Y,Z} values of the 'primaries', ie 'pure' R,G,B for that color space. Different color spaces have different M, and therefore primaries that point in different directions in the universal XYZ space. Again referring to

http://brucelindbloom.com/

go to Calc > CIE Color Calculator, enter eg {R,G,B} = {1,0,0}, click on the RGB box, and the rest of the table will fill with the appropriate values, in particular the XYZ values of the R primary (check 'Scale RGB' if you want the answers in the 0-255, 8-bit normalization, but then the primary is {255,0,0}). Change the color space, click the XYZ box, and the RGB values will update to the appropriate color space. Now you have the RGB values of the red primary of the old color space converted to the new color space (since the XYZ values are independent of any color space, starting with the RGB values of one space and going to XYZ fixes what spectrum we are talking about; transforming to the new color space then gives the RGB values, in that other color space, of the same light spectrum). If any RGB value is less than zero or larger than 255, the color is out of gamut for that color space.

So for instance, we find that the Prophoto RGB values of the R primary of sRGB are {179,70,26}, ie well within the Prophoto gamut; but the R primary of Prophoto has sRGB values {348,-131,-23}, ie well outside what can be represented in the sRGB gamut. This ability to represent a wider range of spectral illuminations is what makes wide-gamut color spaces like Prophoto or Adobe RGB more useful for editing than sRGB, which has a rather narrow gamut -- they won't unnecessarily limit what you can do in terms of representing what was captured by the camera. Cameras tend to have a rather wide spectral response, and if there are out-of-gamut colors, then provided they are at least represented without clipping you have a chance to do something with them at the editing stage rather than lose color detail (this is often an issue with reds).
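The same gamut check can be sketched in a few lines: take the Prophoto R primary (the first column of Prophoto's published RGB-to-XYZ matrix, D50) and push it through sRGB's XYZ-to-RGB matrix (D65). This skips the chromatic adaptation step, so the numbers differ a little from the calculator's, but the sign pattern is the point -- r comes out above 1 and g, b below 0, ie out of the sRGB gamut:

```python
import numpy as np

# Prophoto RGB -> XYZ (D50), as published by Lindbloom
M_prophoto_to_xyz = np.array([[0.7977, 0.1352, 0.0313],
                              [0.2880, 0.7119, 0.0001],
                              [0.0000, 0.0000, 0.8249]])

# sRGB XYZ -> linear RGB (D65)
M_xyz_to_srgb = np.array([[ 3.2406, -1.5372, -0.4986],
                          [-0.9689,  1.8758,  0.0415],
                          [ 0.0557, -0.2040,  1.0570]])

xyz = M_prophoto_to_xyz @ np.array([1.0, 0.0, 0.0])  # Prophoto 'pure red'
lin = M_xyz_to_srgb @ xyz                            # linear sRGB values
out_of_gamut = bool((lin < 0).any() or (lin > 1).any())  # True
```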
