# In-camera processing of long-exposure RAW data

Started Jan 21, 2010 | Discussions thread

I'm ashamed to admit that the famous author's name escapes me at the moment (I'm confident that someone here will remind me), but I recall very well a short story that we studied in high school English class, about a repressive future society where individuals with outstanding talents and abilities have those gifts artificially suppressed, in the name of "equality and fairness" for all.

Such a "society" does exist, though, thanks to a "repressive" algorithm coded into the firmware of our Nikon cameras. Christian Buil wrote about some effects of that algorithm, which is applied to NEF data whenever the exposure time is 1/4 sec or longer, but he referred to it only as a "median filter" and, to my knowledge, the details of the algorithm have not been investigated.

The operation of the algorithm is to identify "outstanding individuals" (hot pixels) and repress (adjust) them so that they no longer distinguish themselves from their ordinary neighbors. Unfortunately, this is done in a rather heavy-handed manner, using a Min() function rather than a graduated adjustment curve, so there is no possibility of recovering the original pixel value from the data that results.

The goal of my study was to find a recovery algorithm if possible, but since that will not happen, I thought that I would at least share the details of the algorithm with the DPR community.

First, let me define the term "neighbor" as it is used here. Processing is performed for each color channel (R, G1, G2, B) independently, so a given pixel's 8 neighbors are actually two pixel positions away. This two-pixel separation unfortunately makes the algorithm's effects rather coarse. Here is a diagram of a section of the array, with a pixel of interest shown in brackets and its 8 neighbors marked with asterisks:
R* . . G1 . . R* . . G1 . . R* . . G1 . . R . . . G1
G2 . . B . . . G2 . . . B . . . G2 . . . B . . . G2 . . . B
R* . . G1 . . [R] . . G1 . . R* . . G1 . . R . . . G1
G2 . . B . . . G2 . . . B . . . G2 . . . B . . . G2 . . . B
R* . . G1 . . R* . . G1 . . R* . . G1 . . R . . . G1
G2 . . B . . . G2 . . . B . . . G2 . . . B . . . G2 . . . B
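The two-pixel spacing means that each color channel can be treated as its own subsampled plane, in which the 8-neighborhood is the ordinary one. As a quick sketch of that view in NumPy (assuming an RGGB mosaic laid out as in the diagram; the array here is stand-in data, not real NEF values):

```python
import numpy as np

# Stand-in for a 6x6 patch of raw mosaic data (values are just indices)
bayer = np.arange(36).reshape(6, 6)

# Each channel becomes a contiguous plane via stride-2 slicing
r_plane  = bayer[0::2, 0::2]  # R sites
g1_plane = bayer[0::2, 1::2]  # G1 sites
g2_plane = bayer[1::2, 0::2]  # G2 sites
b_plane  = bayer[1::2, 1::2]  # B sites

# Adjacent entries of r_plane are two sensor pixels apart, so the
# 8-neighborhood within r_plane matches the starred pixels above.
```

Within each extracted plane, a pixel's same-channel neighbors sit at the usual +/-1 offsets, which is why the per-channel description below can ignore the interleaving.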

The algorithm first looks at the values of the pixel's neighbors to find the brightest one. Then, if the pixel being evaluated is brighter than its brightest neighbor, its value is adjusted down to match. Since this test/adjustment is applied sequentially through the file, it is not possible to check against all 8 neighbors, as an already-adjusted pixel feeding into later tests could cause ripple effects. Thus the only neighbors included in the test are those which have not yet been tested and adjusted. For example, if one scans the file left-to-right and top-to-bottom, only four neighbors are used for the test: the one to the immediate right, and the three on the line below.

As an example, suppose we have the following pixel values (the pixel being tested in brackets, the neighbors used for the test marked with asterisks, and pixels from other color channels denoted by an x):
x . . . 43 . . . x . . . 39 . . . x . . . 38 . . . x
x . . . . x . . . x . . . . x . . . x . . . . x . . . x
x . . . 45 . . . x . . [128] . . x . . . 41* . . . x
x . . . . x . . . x . . . . x . . . x . . . . x . . . x
x . . . 42* . . x . . . 44* . . . x . . . 40* . . . x
Since the brightest (tested) neighbor has value 44, the pixel under test will be adjusted down to 44. From the data that remains, there is no way to determine that the original pixel value was 128.
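The clamping pass described above can be sketched in a few lines of Python. To be clear, this is a hypothetical reconstruction based on the observed behavior, not Nikon's actual firmware code, and it operates on a single already-extracted channel plane:

```python
import numpy as np

def clamp_hot_pixels(plane):
    # Reconstruction of the in-camera pass: scan left-to-right,
    # top-to-bottom, and clamp each pixel to the brightest of its
    # four not-yet-visited same-channel neighbors (immediate right,
    # below-left, below, below-right).
    out = plane.astype(np.int64).copy()
    rows, cols = out.shape
    for r in range(rows):
        for c in range(cols):
            nbrs = []
            if c + 1 < cols:
                nbrs.append(out[r, c + 1])
            if r + 1 < rows:
                for dc in (-1, 0, 1):
                    if 0 <= c + dc < cols:
                        nbrs.append(out[r + 1, c + dc])
            if nbrs and out[r, c] > max(nbrs):
                out[r, c] = max(nbrs)  # Min(pixel, brightest neighbor)
    return out

# The example above, as one channel plane (x's omitted)
plane = np.array([[43,  39, 38],
                  [45, 128, 41],
                  [42,  44, 40]])
print(clamp_hot_pixels(plane)[1, 1])  # the 128 is clamped to 44
```

Note that in this tiny 3x3 demo the last row and column have few or no forward neighbors, so edge pixels behave differently than they would in a full-size frame; only the center value is meaningful here.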

The algorithm does serve its purpose well, given ordinary, macroscopic subjects. Difficulties arise when bright pinpoint objects or details are present in the image. Here is a before/after image* of a test chart which consists of variously spaced white dots on a black background. Where the dots are close enough, there are sufficient bright neighbors to prevent any pixel values from being severely disturbed. However, where the dots are more isolated, resembling stars in an astrophotograph, they literally end up with their hearts punched out:

* This image shows the raw file data directly, i.e., it has not been through a converter, so each pixel is purely red, green or blue.

If you would like to examine the pixel values in the above sample image, you will find that there are many matching values in the processed pane. The before/after versions were obtained by using a 1/5 sec shutter speed for the "before" example (which avoids the processing) and a 1/4 sec shutter for the "after" example.

You may also notice a second undesirable effect of the algorithm: because it changes the values of the brightest pixels in the more-isolated dots, it produces color shifts which resemble color moiré.
