# In-camera processing of long-exposure RAW data

Started Jan 21, 2010 | Discussions
I'm ashamed to admit that the famous author's name escapes me at the moment (I'm confident that someone here will remind me), but I recall very well a short story that we studied in high school English class, about a repressive future society where individuals of outstanding talents and abilities have them artificially suppressed, in the name of "equality and fairness" for all.

Such a "society" does exist, though, due to a "repressive" algorithm coded into the firmware of our Nikon cameras. Christian Buil wrote about some effects of that algorithm, which is applied to NEF data where the exposure time is 1/4 sec or longer, but he only referred to it as a "median filter" and to my knowledge, the details of the algorithm were not investigated.

The operation of the algorithm is to identify "outstanding individuals" (hot pixels) and repress (adjust) them so that they no longer distinguish themselves from their ordinary neighbors. Unfortunately, this is done in a rather heavy-handed manner, using a Min() function rather than a graduated adjustment curve, so there is no possibility of recovering the original pixel value from the data that results.

The goal of my study was to find a recovery algorithm if possible, but since that will not happen, I thought that I would at least share the details of the algorithm with the DPR community.

First, let me define the term "neighbor" as it is used here. Processing is performed for each color channel (R, G1, G2, B) independently, so a given pixel's 8 neighbors are actually two pixel positions away. This two-pixel separation unfortunately makes the algorithm's effects rather coarse. Here is a diagram of a section of the array, with a pixel of interest in italics and its 8 neighbors shown in bold:
R . . . G1 . . . R . . . G1 . . . R . . . G1 . . . R . . . G1
G2 . . B . . . G2 . . . B . . . G2 . . . B . . . G2 . . . B
R . . . G1 . . . R . . . G1 . . . R . . . G1 . . . R . . . G1
G2 . . B . . . G2 . . . B . . . G2 . . . B . . . G2 . . . B
R . . . G1 . . . R . . . G1 . . . R . . . G1 . . . R . . . G1
G2 . . B . . . G2 . . . B . . . G2 . . . B . . . G2 . . . B

The algorithm first looks at the values of the pixel's neighbors, to find the brightest neighbor. Then, if the pixel being evaluated is brighter than its brightest neighbor, its value is adjusted down to match. Since this test/adjustment is applied sequentially through the file, it's not possible to check against all 8 neighbors, as this could cause ripple effects. Thus the only neighbors included in the test are those which haven't yet been tested and adjusted. For example, if one scans the file left-to-right and top-to-bottom, the neighbors used for the test will only be the four not yet visited: the one to the immediate right, and the three on the line below.

As an example, suppose we have the following pixel values (pixel being tested in italics, neighbors used for the test in bold, and pixels from other color channels denoted by an x):
x . . . 43 . . . x . . . 39 . . . x . . . 38 . . . x
x . . . . x . . . x . . . . x . . . x . . . . x . . . x
x . . . 45 . . . x . . . 128 . . . x . . . 41 . . . x
x . . . . x . . . x . . . . x . . . x . . . . x . . . x
x . . . 42 . . . x . . . 44 . . . x . . . 40 . . . x

Since the brightest (tested) neighbor has value 44, the pixel under test will be adjusted down to 44. From the data that remains, there is no way to determine that the original pixel value was 128.
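To make the description concrete, here is a minimal Python sketch of the clipping pass as I understand it, operating on a single color-channel plane (the two-pixel Bayer spacing is abstracted away, so array neighbors here stand for same-channel neighbors). The function name and the border handling are my own guesses, not Nikon's.

```python
import numpy as np

def clip_hot_pixels(channel):
    """Single-pass hot-pixel clipping on one colour-channel plane,
    as inferred from the camera's behaviour.  Scans left-to-right,
    top-to-bottom; each pixel is compared only against the four
    same-channel neighbours not yet visited (immediate right, and
    the three on the row below) and is clipped down to the
    brightest of them.  Border handling here is a guess."""
    out = channel.astype(np.int32).copy()
    rows, cols = out.shape
    for r in range(rows):
        for c in range(cols):
            unvisited = []
            if c + 1 < cols:
                unvisited.append(out[r, c + 1])
            for cc in (c - 1, c, c + 1):
                if r + 1 < rows and 0 <= cc < cols:
                    unvisited.append(out[r + 1, cc])
            if unvisited:
                out[r, c] = min(out[r, c], max(unvisited))
    return out

# The worked example from above, as one channel plane:
plane = np.array([[43,  39, 38],
                  [45, 128, 41],
                  [42,  44, 40]])
clipped = clip_hot_pixels(plane)   # clipped[1, 1] is now 44, not 128
```

Running this on the example plane leaves the ordinary values untouched and knocks the 128 down to 44, matching the worked example.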

The algorithm does serve its purpose well, given ordinary, macroscopic subjects. Difficulties arise when bright pinpoint objects or details are present in the image. Here is a before/after image* of a test chart which consists of variously spaced white dots on a black background. Where the dots are close enough, there are sufficient bright neighbors to prevent any pixel values from being severely disturbed. However, where the dots are more isolated, resembling stars in an astrophotograph, they literally end up with their hearts punched out:

* This image shows the raw file data directly, i.e., it has not been through a converter, so each pixel is purely red, green or blue.

If you would like to examine the pixel values in the above sample image, you will find many matching values in the processed pane. The before/after versions were obtained by using a 1/5 sec shutter speed for the "before" example (which avoids the processing) and a 1/4 sec shutter for the "after" example.

You may also notice a second undesirable effect of the algorithm: since it changes the values of the brightest pixels in the more-isolated dots, it produces color shifts which resemble color moiré.

Re: In-camera processing of long-exposure RAW data

My initial de-mosaicing work on Canon RAW, back when the G3 came out, was focused on testing per-channel noise reduction (hot and dead pixels are noise with high frequency content) before much was in the public domain about what RAW really was. I found that simply rejecting the sample data and interpolating a replacement value was more effective than clipping. However, if calibration of the sensor changed, then dark-frame subtraction needed to be done first, if the zero level was off by more than a certain range. I just hand-coded that in the past, but there are some wonderful MATLAB libraries for image-processing algorithm testing, for the non-programmers out there.
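To illustrate the reject-and-interpolate approach described above (as opposed to clipping), here is a rough Python sketch. The outlier test (a robust median/MAD threshold) and the replacement by the neighborhood median are my own assumptions; the post doesn't specify how rejection was decided, and dark-frame subtraction, if needed, would happen before this step.

```python
import numpy as np

def reject_and_interpolate(channel, k=4.0):
    """Replace statistical outliers with an estimate interpolated
    from their same-channel neighbours, instead of clipping them.

    A pixel is flagged when it lies more than k robust standard
    deviations above the local median (the threshold form is an
    assumption).  Borders are left untouched for simplicity."""
    out = channel.astype(np.float64).copy()
    rows, cols = out.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            neigh = np.concatenate([out[r - 1, c - 1:c + 2],
                                    [out[r, c - 1], out[r, c + 1]],
                                    out[r + 1, c - 1:c + 2]])
            med = np.median(neigh)
            mad = np.median(np.abs(neigh - med)) + 1e-9
            if out[r, c] > med + k * 1.4826 * mad:
                out[r, c] = med   # rejected sample, use the estimate
    return out
```

On the 3x3 example from the opening post, this replaces the 128 with the neighborhood median rather than clipping it to the brightest neighbor.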

-C

Way back then I was hoping that RAW was not the accumulated values, but rather 'frames' of sample data, with a frame per minimum time period. Info on sensor readout was hard to find in that era. A temporal element to analyze would make it so much better. I now think the best way is multiple separate short-duration frames, since the combination can approximate the sampling of a Foveon: there will always be micro-movements/jitter that allow an integrated frame of near-RGB pixel quality after processing, much as how our vision and hearing work.

Correction to Number of neighbors used

Thinking about this some more, it would probably work better if all 8 neighbors were used in the test. I'll need to do some modeling later, to find out what approach actually matches the camera's processing.

Re: In-camera processing of long-exposure RAW data

Marianne Oelund wrote:

I'm ashamed to admit that the famous author's name escapes me at the moment

"Harrison Burgeron" by Kurt Vonnegut.

Re: Correction to Number of neighbors used

Changing the sample value of Red and Blue channels affects 8 pixels, whereas green only 4

-C

Green is not a special case

cluna wrote:

Changing the sample value of Red and Blue channels affects 8 pixels, whereas green only 4

There are two independent green channels, and each is treated exactly as the red and blue channels are. Re-read the array descriptions in my original post.

Re: In-camera processing of long-exposure RAW data

Why bother using "in-camera processing" (i.e., JPEG/TIFF compression)?
Shoot RAW and process with NX2.

Or, is this "processing" done on the NEF file itself?

Re: Green is not a special case

Marianne Oelund wrote:

cluna wrote:

Changing the sample value of Red and Blue channels affects 8 pixels, whereas green only 4

There are two independent green channels, and each is treated exactly as the red and blue channels are. Re-read the array descriptions in my original post.

Why treat it as two channels in the processing, though? G1 and G2 are direct measurements; why make G2's output a function of G1? Is there a presumption that G1 being hot will affect the sensitivity of G2? Or that the processing is done after a read of a discrete channel? Something like:

G1=> processing=> buffer
G2=> processing=> buffer
B => processing=> buffer
R => processing=> buffer

Then integrate G1,G2,B,R into a serialized raw??

It would make more sense to treat

[G1] [? ][G1]
[? ] [G2][? ]
[G1] [? ][G1]

as just:

[G][? ][G]
[? ][G][? ]
[G][? ][G]

-C

Yes, it affects the NEF file data

mozarkid wrote:

Or, is this "processing" done on the NEF file itself?

Yes, and it cannot be turned off by the user - and that is the reason for studying it. At exposure times of 1/4 sec or longer, the camera performs this "hot-pixel clipping" algorithm to suppress the unwanted bright pixels. Unfortunately, this has other effects which are particularly harmful for astro images, so my interest was in determining exactly what the camera is doing, to see if it could be "undone."

This has become more important, as the so-called "Mode 3" approach (turning the camera off during the long-exp. NR blackframe period to obtain unprocessed RAW data) no longer works with current firmware.

You can see the effect that the clipping algorithm has on RAW data, by taking high-ISO black frames at 1/5 sec and 1/4 sec, then comparing them. I would also advise users to study the algorithm's effect on the work they normally do, if they use exposure times of 1/4 sec or longer (again, by comparing 1/5 sec and 1/4 sec images).

It's for simplicity, I suppose

cluna wrote:

Why treat it as two though in the processing?

It allows the same algorithm to be applied to all channels, with no modifications. Then as the processing proceeds, it's not necessary for it to identify which channel it is working on. I'm not defending Nikon's choice, rather I'm just stating what they've done.

It would make more sense to treat

[G1] [? ][G1]
[? ] [G2][? ]
[G1] [? ][G1]

as just:

[G][? ][G]
[? ][G][? ]
[G][? ][G]

I could actually try this. I will be running a simulation of the clipping algorithm fairly soon, as verification of my interpretation. When that's done, I can experiment with variations such as your suggestion, to see how they behave.

Re: In-camera processing of long-exposure RAW data

Marianne,

What an odd algorithm... Kinda like deciding to just chop off peaks. I mean, do I understand correctly that the algorithm essentially flattens all local maxima? Seems like a pretty heavy hammer... that would have an impact on many aspects of image quality.

Say it ain't so

Cheers,

-Yamo-

opening the can of worms a bit further...

Marianne,

The behaviour you've found and described is likely a big shock to many of us. The use of Nikon bodies for astro and macro photography is potentially seriously compromised.

Now that this can of worms has been opened, it would be interesting to apply a (hopefully) simple and concise test to various Nikon bodies and firmware versions to verify the extent of this 'feature'. Such a test would be very handy. How feasible is such a test?


Bob Elkind
Family,in/outdoor sports, landscape, wildlife
photo galleries at http://eteam.zenfolio.com
my relationship with my camera is strictly photonic

And in LiveView?

Is this 'processing' manifested in LiveView display (in real time), or only in captured still image data? I don't know enough about how LiveView displays are derived, to answer this question. Whatever the answer might be, the notion of 'what you see is what you get' would certainly be stretched. For those of us who depend heavily on LiveView mode for critical manual focus, surprises would not be welcome.

Algorithm successfully simulated

As confirmation of my interpretation of Nikon's hot-pixel clipping algorithm, I coded it and applied it to the "Before" sample file shown in my original post. This simulation used the full 8-neighbor comparison set. The result matches very well with the camera's "After" image, with the only differences being slight exposure change and very small level differences, as expected from noise.
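For anyone wanting to reproduce this, here is a minimal sketch of my reading of the full 8-neighbor comparison. Whether the camera compares against the original data (so no ripple can propagate) or against the running, already-adjusted values is my assumption; the original-data form is what the sketch implements.

```python
import numpy as np

def clip_hot_pixels_8(channel):
    """8-neighbour variant of the clipping pass: each pixel is
    clipped down to the maximum of all eight same-channel
    neighbours.  Comparisons use the original, unmodified data,
    so adjustments cannot cascade from pixel to pixel."""
    src = channel.astype(np.int32)
    out = src.copy()
    rows, cols = src.shape
    for r in range(rows):
        for c in range(cols):
            neigh = [src[rr, cc]
                     for rr in range(r - 1, r + 2)
                     for cc in range(c - 1, c + 2)
                     if (rr, cc) != (r, c)
                     and 0 <= rr < rows and 0 <= cc < cols]
            out[r, c] = min(out[r, c], max(neigh))
    return out
```

Interestingly, on the 3x3 example from the opening post, this variant clips the 128 to 45 rather than 44, because the brighter left-hand neighbor now participates in the comparison.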

Now that I have a working simulation, I can perform some tweaks on it, and see if any small changes to the algorithm can produce significant improvements. Of course, the hard part would be getting Nikon to put those changes into firmware.

Re: Algorithm successfully simulated


Thom Hogan
author, Complete Guides to Nikon bodies (21 and counting)
http://www.bythom.com

Re: In-camera processing of long-exposure RAW data

Yamo wrote:

Marianne,

What an odd algorithm... Kinda like deciding to just chop off peaks. I mean, do I understand correctly that the algorithm essentially flattens all local maxima?

Yes, but that would be extremely local maxima, i.e., single-pixel.

Seems like a pretty heavy hammer... that would have an impact on many aspects of image quality.

It can affect brightness and color of extremely fine details which are lighter than their surroundings.

Say it ain't so

Sorry, I can't.

Not apparent in Live View

bob elkind wrote:

Is this 'processing' manifested in LiveView display (in real time), or only in captured still image data?

The effects of the clipping (color shifts and loss of brightness of the isolated dots) didn't appear in my display when focusing on the test target. Since application of the algorithm is limited to exposures of 1/4 sec or longer, it isn't something which I would expect to run during Live View image handling.

Testing is fairly easy

bob elkind wrote:

Now that this can of worms has been opened, it would be interesting to apply a (hopefully) simple and concise test to various Nikon bodies and firmware versions to verify the extent of this 'feature'. Such a test would be very handy. How feasible is such a test?

All I had to do was create an array of single-pixel white dots on a black background, which was printed at 200 dpi, then photographed with a D3 from a distance of 24" with the 60mm/2.8D macro lens. It should be easy to adjust this for any camera model, although the finer-pitch sensors will of course be more demanding of optics.

Given a suitable subject, the effect of the clipping algorithm can be judged by comparing a 1/5-sec exposure to a 1/4-sec one. The difference will even show up in the camera JPEG file, viewed on the camera's LCD.
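A chart along these lines can also be generated digitally before printing. Here is a hypothetical numpy sketch (the size, band layout, and spacings are arbitrary choices of mine, not the dimensions of the actual test chart):

```python
import numpy as np

def dot_chart(size=200, spacings=(2, 4, 8, 16)):
    """Build a black chart with vertical bands of single-pixel
    white dots, each band using a progressively wider spacing,
    so the clipping algorithm's dependence on dot isolation can
    be observed in a single frame."""
    chart = np.zeros((size, size), dtype=np.uint8)
    band = size // len(spacings)
    for i, s in enumerate(spacings):
        x0 = i * band
        # place a white dot every s pixels within this band
        chart[::s, x0:x0 + band:s] = 255
    return chart
```

The resulting array could be saved as an image and printed, or displayed, then photographed at 1/5 sec and 1/4 sec as described.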

Ayn Rand "Atlas Shrugged" (nt)

No Text

-Wayne

Re: In-camera processing of long-exposure RAW data

Marianne Oelund wrote:

Yamo wrote:

Marianne,

What an odd algorithm... Kinda like deciding to just chop off peaks. I mean, do I understand correctly that the algorithm essentially flattens all local maxima?

Yes, but that would be extremely local maxima, i.e., single-pixel.

I'd guess that most local maxima are a single pixel, and anything that is textured would have many of these single-pixel local maxima. For instance, noise would for the most part be composed of single-point local maxima. One couldn't faithfully photocopy a grainy black-and-white film image with such an algorithm applied.

I mean, it seems like one might say that Nikon has chosen to apply a particular kind of NR to all shots of 1/4 sec or longer, with no way of turning it off.

Seems like a pretty heavy hammer... that would have an impact on many aspects of image quality.

It can affect brightness and color of extremely fine details which are lighter than their surroundings.

Say it ain't so

Sorry, I can't.

indeed

Er, Cheers,

-Yamo-
