Let's talk Dynamic Range

Started Nov 11, 2017 | Discussions thread
alanr0 Senior Member • Posts: 2,100
Re: Let's talk Dynamic Range

Father Bouvier wrote:

alanr0 wrote:

Father Bouvier wrote:

Where are all the knowledgeable people? Is it because this topic isn't interesting, or are people just waiting to see where it goes?

You already have a response from J A C S.

Possibly too many dubious assumptions and mistaken deductions for others to chip in.

So there is no way to get better dynamic range from a technologically identical sensor by changing the topology of pixels or filters.

Not so. Restricting the exposure of some pixels,

Why would that matter? Putting a filter in front of a pixel doesn't change its DR. You may look at a simplified case for the sake of argument: a sensor consisting of two pixels, one without a filter and the other with a filter. Each pixel would have half the DR of one large pixel. How are you going to increase the DR with the two of them?

If you are willing to combine multiple exposures, we can do much better than that, as discussed below.

You seem to be confusing stacking with High Dynamic Range processing.

The two are very different, unless you are using an alternative definition of dynamic range which is new to me.

as J A C S points out is one way.

2) multiple exposure HDR, as it is typically practiced, is a suboptimal way of increasing dynamic range.

Let's, for simplicity's sake, say you take two images and merge them together. Given the same total exposure time (resulting in the same effective ISO), what is the relation between the two exposure times that results in maximum DR?

We have a very simple optimization task. We take two exposures, using abstract units: 1 and k, and keep the total exposure time constant: 1 + k = const. Find k maximizing dynamic range of the merged images.

Poorly stated. k = (const -1) is also constant,

Yeah, I tried to simplify the actual calculations. It was originally t1 + t2 = const. Then I introduced k = t2/t1 and continued the calculations.

But that misprint doesn't change anything. Show your calculations, how are they different?

so the derivative of F(k) is zero everywhere according to your assumption.

Also assume that during the merge the amplitudes add linearly, and the noise adds as the square root of the sum of squares. I.e. you need to solve: max F(k) = (1+k)/sqrt(1+k^2). Solving dF/dk = 0, you get one predictable solution: k = 1.
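As a quick numerical check of that claim (a Python sketch; the function name `F` and the scan range are just for illustration), the figure of merit above does peak at equal exposures:

```python
import math

def F(k):
    """Figure of merit from the post: signals add linearly, noise adds
    in quadrature, with the total exposure 1 + k held fixed."""
    return (1 + k) / math.sqrt(1 + k ** 2)

# Scan exposure ratios k = t2/t1 and find the maximizer.
ks = [i / 100 for i in range(1, 301)]
best = max(ks, key=F)
print(best)     # 1.0 -- equal exposures
print(F(best))  # sqrt(2), about 1.414
```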

Not how an effective HDR merge is carried out. Amplitudes are not added linearly.

If you want to introduce non-linear processing, you can do so after the merge as well. Whatever you can achieve with a different tone curve you can achieve after a linear merge. But you can't get any better DR for the same total exposure time (effective ISO) than from merging equal exposures to begin with.

Only if you assume all the exposures are identical, that none of the exposures saturate, and that the signals are combined with constant weighting.

If your exposures times differ by a factor 100, and the short exposure is just short of saturating, then you don't want to include the parts of the long exposure which are saturated. Similarly, you need to suppress the parts of the short exposure where the shadows are within a few dB of the read noise, but the long exposure has good SNR.

Apply the weighting before combining them, and you could have 6.5 more stops of dynamic range. Use a simple linear addition, and the highlights will be horribly compressed, and the shadows will be buried in noise.
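A minimal sketch of what such a weighted merge might look like (Python; the full-well value, the 100:1 ratio, and the saturation threshold are assumptions for illustration, not figures from the post):

```python
import math

FULL_WELL = 4000.0  # electrons at saturation (assumed value)
RATIO = 100.0       # t_long / t_short, the exposure ratio in the example

def merge_pixel(short_e, long_e):
    """Combine one pixel from each frame on a common radiometric scale:
    use the long exposure where it is still linear (best shadow SNR),
    and fall back to the short exposure in the highlights."""
    if long_e < 0.9 * FULL_WELL:  # long frame safely unsaturated
        return long_e / RATIO     # rescale to short-exposure units
    return short_e

# Highlight headroom gained over the long exposure alone:
print(math.log2(RATIO))  # about 6.64 stops
```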

The short exposure allows you to handle 100x the light from a single exposure.

The long exposure gives you 0.99 of the signal from a single exposure, with only a single read noise contribution.

If SNR of mid-tones is a concern, you may wish to use a less aggressive exposure ratio, but the original question was about dynamic range.

You need longer exposures to get best SNR in the shadows. For the same total exposure, multiple short exposures introduce additional read noise, without any improvement in shot noise.

You get exactly the same read and shot noise by adding two short exposures as you get from one twice as long.

How do you avoid twice the read noise energy compared with a single exposure?

With two short exposures you can handle twice the light without saturating.

If you add the two signals, the read noise standard deviation increases by sqrt(2), so dynamic range is improved by only 1/2 stop - as predicted by your formula.
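That half-stop figure can be checked directly (a Python sketch; the read-noise and full-well numbers are assumed, and cancel out of the final difference anyway):

```python
import math

read_noise = 3.0    # e- per readout (assumed)
full_well = 4000.0  # e- saturation per exposure (assumed)

# Single exposure: DR in stops = log2(saturation / read-noise floor).
dr_single = math.log2(full_well / read_noise)

# Sum of two equal exposures: saturation doubles, but the read noise
# standard deviation grows by sqrt(2) as the two variances add.
dr_stacked = math.log2(2 * full_well / (math.sqrt(2) * read_noise))

print(dr_stacked - dr_single)  # 0.5 stops, whatever the numbers are
```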

One more corollary: the Aptina design is suboptimal. Instead of switching to a large capacitor, they should merge multiple exposures by discharging the capacitor as many times as necessary and accumulating the resulting signal.

To read out 10 times in a 1/100 s exposure, you need a lot of on-chip per-pixel processing and storage, or the ability to read the entire frame in much less than 1/1000 s.

It might be non-trivial, but a good solution to that is something that is actually worth patenting.

Sure. Meanwhile, the Aptina approach works, is available, and doesn't consume too much electrical power or cook the sensor.

--

Alan Robinson
