Luther-Ives condition

Started Dec 7, 2008 | Discussions thread
Eric Chan Senior Member • Posts: 2,800
Re: Luther-Ives condition

Glad to have you enter the discourse, Eric.

Glad to be here.

What do you mean by "scene appearance estimates"?

A colorimetric match means that we match tristimulus values (e.g., XYZ) or some equivalent representation (e.g., the same values transformed into an RGB working space, or CIE Lab).
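To make "equivalent representation" concrete, here is a minimal sketch (Python/NumPy, with a D65 white point assumed for illustration) of the standard XYZ-to-CIE-Lab conversion; two stimuli with equal XYZ necessarily have equal Lab, so matching either representation is the same colorimetric match.

```python
import numpy as np

# Reference white (D65, 2-degree observer) -- an assumption for this sketch.
XYZ_N = np.array([95.047, 100.0, 108.883])

def xyz_to_lab(xyz, white=XYZ_N):
    """Standard CIE XYZ -> CIE Lab conversion."""
    t = np.asarray(xyz, dtype=float) / white
    delta = 6.0 / 29.0
    # Piecewise cube-root function from the CIE Lab definition.
    f = np.where(t > delta**3, np.cbrt(t), t / (3 * delta**2) + 4.0 / 29.0)
    L = 116.0 * f[1] - 16.0
    a = 500.0 * (f[0] - f[1])
    b = 200.0 * (f[1] - f[2])
    return np.array([L, a, b])

# Equal XYZ in implies equal Lab out: the match is preserved.
print(xyz_to_lab([41.24, 21.26, 1.93]))  # XYZ of sRGB red under D65
```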

However, there are other important factors that affect our perception of color, such as the absolute brightness of the scene and the viewing conditions (background, surround, etc.). We are already quite familiar with this phenomenon: scene-referred images typically look very flat when reproduced on a display whose luminance is much lower than that of the original scene, and the same is true of a print viewed under fairly dim lighting. This is why non-linear (and ideally spatially varying) tone mapping and color gamut mapping are needed to obtain an image that "appears" more like the original scene did; in other words, some work needs to be done to get from scene-referred to output-referred (or picture-referred) image data. On the other hand, a scene-referred image may already look very good when displayed on an "HDR" display (a special display with a very bright backlight and high contrast ratio), without additional tone and color mapping applied.
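To make the scene-to-output step concrete, here is a minimal sketch of a global (not spatially varying) tone-mapping operator. The Reinhard curve used here is my choice for illustration, not anything specific to the workflow described above.

```python
import numpy as np

def reinhard_tone_map(scene_linear, exposure=1.0):
    """Map scene-referred linear luminance into display range [0, 1].

    A simple global operator, L / (1 + L). Real raw processors use
    more sophisticated (and often spatially varying) curves.
    """
    L = np.asarray(scene_linear, dtype=float) * exposure
    return L / (1.0 + L)

# Scene-referred values can exceed 1.0; the curve compresses highlights
# smoothly instead of clipping them.
print(reinhard_tone_map(np.array([0.05, 0.18, 1.0, 8.0])))
```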

Estimating scene appearance usually entails using a color appearance model (e.g., CIECAM02), which goes beyond simple colorimetric matching. The parameters of such models nearly always include viewing conditions, which aren't normally considered in basic colorimetry.
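As a concrete example of viewing-condition parameters entering the model, here is a sketch of the CAT02 chromatic-adaptation stage of CIECAM02; the adapting luminance L_A and the surround factor F are exactly the kind of inputs that plain tristimulus matching ignores. This is a minimal sketch of one stage only; a full CIECAM02 implementation has several further steps.

```python
import numpy as np

# CAT02 matrix from the CIECAM02 specification (XYZ -> sharpened RGB).
M_CAT02 = np.array([
    [ 0.7328, 0.4286, -0.1624],
    [-0.7036, 1.6975,  0.0061],
    [ 0.0030, 0.0136,  0.9834],
])

def cat02_adapt(xyz, xyz_white, L_A, F=1.0):
    """CAT02 chromatic adaptation, the first stage of CIECAM02.

    L_A is the adapting luminance in cd/m^2 and F is the surround
    factor (1.0 = average surround) -- viewing-condition parameters
    that simple colorimetric matching does not use.
    """
    # Degree of adaptation: how completely the eye discounts the illuminant.
    D = F * (1.0 - (1.0 / 3.6) * np.exp((-L_A - 42.0) / 92.0))
    D = np.clip(D, 0.0, 1.0)

    rgb = M_CAT02 @ np.asarray(xyz, dtype=float)
    rgb_w = M_CAT02 @ np.asarray(xyz_white, dtype=float)
    Y_w = np.asarray(xyz_white, dtype=float)[1]

    # Von Kries-style scaling toward the adapted white.
    rgb_c = (Y_w * D / rgb_w + (1.0 - D)) * rgb
    return np.linalg.inv(M_CAT02) @ rgb_c

# The same patch "appears" different under dim vs. bright adaptation.
xyz = np.array([30.0, 25.0, 20.0])
white = np.array([95.047, 100.0, 108.883])  # D65, an assumption
print(cat02_adapt(xyz, white, L_A=4.0))     # dim viewing
print(cat02_adapt(xyz, white, L_A=400.0))   # bright viewing
```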

And what does one do if, as seems to be the case with current sensor technology, the Luther-Ives condition is not met? Some (perhaps weighted) least-squares fit to a set of reference colors, along the lines of various color calibration scripts? What is the relative utility of such linear-transform-based approaches vs. lookup-table-based approaches?

The sensors have gotten closer over time. (Older ones could get ill-behaved, especially in the near IR.) But in general the idea is to optimize over a set of training data, as you say. Using a ColorChecker or a variant is a popular approach, and doing so will give you very consistent renderings of those targets from camera to camera. However, that does not guarantee consistency across materials not in the training set, nor does it even guarantee good performance on those other materials. The converse is also true: profiles that have been designed to work well on real-world data may not work well (or consistently) on test targets. (This is one reason the Adobe Standard profile doesn't measure up very well on a ColorChecker, e.g., if you use Imatest or a similar tool to check colorimetric accuracy. That is by design.)
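For the weighted least-squares approach mentioned in the question, a minimal sketch might look like the following; the patch data, weights, and the synthetic "true" matrix are placeholders, not measurements from any real camera or the method used by any particular raw converter.

```python
import numpy as np

def fit_color_matrix(camera_rgb, ref_xyz, weights=None):
    """Fit a 3x3 matrix M minimizing the (weighted) residual of
    camera_rgb @ M.T - ref_xyz over the training patches.

    camera_rgb : (N, 3) linear camera responses for N training patches
    ref_xyz    : (N, 3) reference tristimulus values for the same patches
    weights    : (N,) optional per-patch weights (e.g., to emphasize
                 neutrals or skin tones)
    """
    A = np.asarray(camera_rgb, dtype=float)
    B = np.asarray(ref_xyz, dtype=float)
    if weights is not None:
        w = np.sqrt(np.asarray(weights, dtype=float))[:, None]
        A, B = A * w, B * w
    # Ordinary least squares; the solution X satisfies A @ X ~= B, X = M.T.
    M_T, *_ = np.linalg.lstsq(A, B, rcond=None)
    return M_T.T

# Hypothetical training data: 24 patches (e.g., a ColorChecker).
rng = np.random.default_rng(0)
cam = rng.uniform(0.05, 0.95, size=(24, 3))
M_true = np.array([[0.6, 0.3, 0.1],
                   [0.2, 0.7, 0.1],
                   [0.0, 0.1, 0.9]])   # synthetic "truth" for the demo
xyz = cam @ M_true.T
M = fit_color_matrix(cam, xyz, weights=np.ones(24))
print(np.round(M, 3))  # recovers M_true on this noiseless data
```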

A 3x3 or 3x4 matrix (i.e., a linear transform) is usually enough to get you good estimates of scene colorimetry, provided that your training data generalizes well to the types of scenes you photograph. Non-linear transforms such as chroma-dependent or lightness-dependent hue twists (usually implemented via lookup tables for convenience) are useful for correcting residual errors or for trying to nail specific colors. They are also useful for implementing more complex color appearance models such as CIECAM97s or CIECAM02.
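A chroma-dependent hue twist of the kind mentioned above could be sketched like this; the table values and the LCh-style working space are illustrative assumptions, not the internals of any actual camera profile.

```python
import numpy as np

# Hypothetical twist table: hue offset (degrees) as a function of chroma.
# Real profile tables are larger and derived from training data.
CHROMA_GRID = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])
HUE_OFFSET  = np.array([0.0,  1.0,  2.5,  3.0,  2.0,   1.0])

def hue_twist(L, C, h):
    """Apply a chroma-dependent hue rotation in an LCh-like space.

    Linear interpolation into a 1D lookup table; a lightness-dependent
    twist would simply add a second table axis.
    """
    offset = np.interp(C, CHROMA_GRID, HUE_OFFSET)
    return L, C, (h + offset) % 360.0

# Low-chroma colors are barely touched; saturated ones twist more.
print(hue_twist(50.0, 10.0, 30.0))
print(hue_twist(50.0, 70.0, 30.0))
```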
