My Semi-Educated Guess on the Black Spot Problem

Started Dec 9, 2008 | Discussions thread
Henrik Herranen Senior Member • Posts: 1,722
Let's make a sensor model and create some black pixels! :)

Cliff Chase wrote:

For years I designed very high-speed data acquisition systems. One
of the problems in designing these systems is respecting the settling
time of multiplexers. Multiplexers take a finite amount of time to
settle to some percentage of the previous value on the output. The
greater the delta from the previous value, the longer it takes to settle.

Another phenomenon with them is that they can "ring", especially if
the output is saturated. So if you have a very high value on the
output and you then switch the mux to a very low input the output can
oscillate slightly. If you don't wait long enough before sampling
then you get an error.

I am a software / digital signal processing engineer and work for a company that has designed microchips for almost two decades, including chips with complex analog designs, like RF, high-quality A/D and D/A converters, and even a panchromatic CMOS image sensor.

I had a talk with our senior chip designers and we came to very similar educated guesses as you. What follows may be a bit technical, and I am sorry if it seems difficult to understand. However, all of it is very realistic and based on what I've designed and seen during a decade of working closely with ASICs.

Begins the lesson:

All real systems have limited bandwidth, and such systems tend to ring more or less. The point is to do a design where the ringing is small enough not to disturb real data.

My addition to your guess is that to get ringing, it is probably not enough just to saturate the signal; from what I see, it must be saturated BADLY, by several stops, which easily happens with specular highlights. Although the A/D converter gives the same value for any sample that has been saturated, the ringing can be made worse by a whiter-than-white analog signal.

Below is a filter model of how an actual analog system could treat pixels. The model is created ad hoc to show how the black pixel phenomenon could happen:

X axis is the time used to send one pixel.

At 0 is the pixel that is actually supposed to be read. The filter has a value of 1.023 at that point. At -1 and +1 the Y value is 0, so adjacent pixels don't affect the readout of the current pixel. However, at +2 the Y value is -0.023. What this means is that whatever happened "two pixels ago" has an ever-so-slight effect on the readout of the current pixel.
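The taps above are easy to try out in code. The following Python snippet is my own ad-hoc sketch of that filter (the function name `read_out` and the example pixel line are made up for illustration; this is not real sensor firmware):

```python
# Toy model of the readout filter described above: the current pixel
# is weighted by 1.023, and the pixel read two samples earlier leaks
# in with weight -0.023. Adjacent pixels (offsets -1 and +1) have no
# effect at all.

def read_out(pixels):
    """Apply the ad-hoc readout filter to a list of analog pixel values."""
    out = []
    for n, x in enumerate(pixels):
        two_ago = pixels[n - 2] if n >= 2 else 0.0
        out.append(1.023 * x - 0.023 * two_ago)
    return out

# A clean black -> white -> black line: 0 = black, 1 = white.
line = [0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0]
print([round(v, 3) for v in read_out(line)])
# [0.0, 0.0, 0.0, 1.023, 1.023, 1.0, 1.0, 1.0, -0.023, -0.023, 0.0]
```

Note the pixels just after the white region: they pick up a tiny -0.023 undershoot, exactly the effect discussed below.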

How much would this kind of a system affect the performance of sensor readout? Let's take a somewhat extreme situation where you go from pure darkness to pure white (but not one bit more), then back to black. This is shown in the following figure:

  • The red lines with dots represent the analog data value that came from the sensor in a horizontal line: black is 0 and white is 1. They have been moved 0.9 pixels to the right so that they would be adjacent to the read values and easier to compare.

  • The blue line shows how the analog signal value changes over time because of the real signal. As you can see, there is a lot of variation, but no worries, as...

  • The black dotted lines show the values the A/D converter outputs, as measured from the (blue) analog signal. As you can see, the values that are read are very close to the correct values that came from the CMOS sensor - even in the case of extreme contrast, which is the most difficult situation for the model (and very likely for a real system, too).

Do note, however, pixel number 9. While the real (red) value is 0, the digitized value is -0.023. This might look insignificant, and in this case it is. This error would never show in a real-life image.

However, let's take a MUCH stronger signal, a signal where we have an over-exposure of three stops for the white colour. That means that we have to multiply it by 2^3 = 8. This is equivalent to a bright, sharply defined light in dark surroundings. This case is shown below:

As you can see, the basic shape of the sensor values (red) and filtered signal (blue) are the same as before. However, because the signal is way stronger than what the A/D converter can catch, the digitized (black) signal is saturated to a maximum value of 1.

But now, let's rescale the graph and look more closely at what happens at our favourite sample #9:

We-hey! The value for pixel #9 is approximately -0.2, which is one fifth of our whole scale and way blacker than black. This is exactly the black dots problem, and it would occur immediately to the right of vastly bright, whiter-than-white pixels.
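Continuing the same ad-hoc sketch (again purely illustrative, not real sensor code), scaling the white pulse by 8 and clamping the A/D output at full scale reproduces the undershoot:

```python
# Same toy readout filter as above, now driven three stops over full
# scale (2**3 = 8x) to mimic a specular highlight. The A/D converter
# clips everything above full scale to 1.0.

def read_out(pixels):
    out = []
    for n, x in enumerate(pixels):
        two_ago = pixels[n - 2] if n >= 2 else 0.0
        out.append(1.023 * x - 0.023 * two_ago)
    return out

line = [0, 0, 0, 8, 8, 8, 8, 8, 0, 0, 0]           # whiter-than-white pulse
digitized = [min(v, 1.0) for v in read_out(line)]  # A/D saturates at 1.0
print([round(v, 3) for v in digitized])
# [0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, -0.184, -0.184, 0.0]
```

In this sketch the pixels just after the highlight undershoot to about -0.184, roughly one fifth of full scale below black: the A/D clipping hides the overdrive on the white side, but the -0.023 tap is multiplied by the full 8x analog signal on the black side.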

There we have it. Black pixels recreated with a simple, yet realistic model.

My guess is that they are reading out too fast and the multiplexers
aren't settling before they switch to the next pixel. If this is the
case it can't really be fixed in firmware. Also, it could vary
between sensors due to process variation.

It could also vary with temperature. Many people who haven't done chip design think that every chip works similarly. They are dead wrong. It is incredibly difficult to make a chip with analog parts work consistently across temperatures and process variations.

Then again I could be totally wrong.

So could I. But this is very plausible.

Kind regards,

  • Henrik
