Dynamic Range and HDR

Fort Point Arcades. High Dynamic Range (HDR) photography is all about dealing with the dynamic range of the scenes we capture, and the limited ability of our cameras and printers to properly capture and reproduce scenes such as the one shown above.

Let's start with a basic definition of Dynamic Range:

  • Dynamic range is the ratio between the darkest and brightest elements that matter for your photographic vision (measured in brightness levels).

This is not an absolute range, as it very much depends on your personal goals. There are great photos that show very dark shadows without any detail; in doing so they sacrifice the lower part of the scene's dynamic range.

There are actually different types of dynamic ranges to consider:

  • Scene
  • Camera
  • Output (screen, print)
  • Human vision

During the photographic process the dynamic range gets transformed twice:

  • Scene > Capture Device (here we think of cameras)
  • Capture > Output (monitor or print)

It's important to remember that any detail that gets lost during Capture can never be recovered (something we'll cover in more detail later), but in the end it only matters that the final output image pleases your vision.

Different Types of Dynamic Range

Dynamic Range of the Scene

What are the brightest details and darkest details that you want to portray? This is your artistic decision. Probably the best way to explain is to look at some example scenes.

Lost Cabin: In this scene we wanted to show inside and outside details.
Fort Point Arcades. Again we want to show detail in both the bright and dark areas. In general we consider highlight areas to be more critical than shadows: major blown-out highlights usually look bad in prints (they show as plain paper white).

A dynamic range (contrast ratio) of 1:30000 can easily be reached in these situations - even more if you photograph a dark room with windows to a bright outside scene.
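Since each f-stop doubles the amount of light, contrast ratios and f-stops are related by powers of two: a ratio of 1:30000 corresponds to roughly 15 stops. A minimal sketch of that conversion (this code is illustrative and not part of the original article):

```python
import math

def ratio_to_stops(ratio):
    """Convert a contrast ratio (e.g. 30000 for 1:30000) to f-stops."""
    return math.log2(ratio)

def stops_to_ratio(stops):
    """Convert a number of f-stops back to a contrast ratio."""
    return 2 ** stops

print(round(ratio_to_stops(30000), 1))  # ≈ 14.9 stops
print(stops_to_ratio(10))               # 1024
```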

Ultimately, HDR photography is all about creating pleasing images in these circumstances.

Dynamic Range captured by the Camera

If our cameras could capture high dynamic range scenes in a single shot we wouldn't need the techniques described in these articles. Unfortunately the dynamic range of cameras is much lower than many of the scenes they're used to photograph.

How is the DR of a camera defined?

  • The dynamic range of a camera is measured from the brightest details down to shadows that still show good detail well above the noise floor.

The key thing here is that we measure from highlight details (not a pure white) to shadow details that are not degraded by too much noise.
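As a side note, the standard engineering estimate of a sensor's dynamic range is the ratio of its full-well capacity to its read noise, expressed in stops. The sensor numbers below are hypothetical examples, and the usable photographic DR is lower than this figure, for exactly the reason given above: shadows near the noise floor look poor.

```python
import math

def sensor_dr_stops(full_well_electrons, read_noise_electrons):
    """Engineering estimate of sensor DR in stops:
    log2(full well capacity / read noise).

    Usable photographic DR is lower, since shadows close to the
    noise floor are too degraded to be attractive in a print."""
    return math.log2(full_well_electrons / read_noise_electrons)

# Hypothetical sensor: 40,000 e- full well, 5 e- read noise.
print(round(sensor_dr_stops(40000, 5), 1))  # ≈ 13.0 stops
```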

  • Today's normal DSLRs can capture 7-10 f-stops (1:128 to 1:1000). We don't try to be too optimistic here, and you shouldn't get caught up in the numbers. Some photographs can still look great with a lot of noise in them, while others lose their beauty; it is your decision. Of course the print size matters too.
  • Slide film 6-7 f-stops
  • Negative film about 10-12 f-stops
  • Highlight recovery in some RAW converters can gain up to +1 extra f-stop

DSLRs have got much better over the last few years, but don't expect miracles. There are some specialized cameras that can capture a higher dynamic range, but these are mostly cameras designed specifically for very special applications. The Fuji S5 (discontinued), for example, had a unique sensor with dual photo sites that allowed it to capture about 2 f-stops more DR.

Output Dynamic Range

Of all the stages in the digital photography process, the output normally shows the lowest dynamic range.

  • Today's Monitors: 1:300-1:1000
  • HDR monitors: 1:30000 (watch out: viewing these can strain your eyes)
  • Printers on glossy media: about 1:200
  • Printers on matte fine art papers: below 1:100

You may well be asking yourself why it would make sense to capture the higher dynamic range of a scene if the output DR is so limited. The answer is dynamic range compression (which we'll also refer to later as tone mapping).

Important aspects of Human Vision

Because we present our work to other people it is also important to understand some basic aspects about how we perceive images and the world.

Human vision works in quite a different way to our cameras. We all know that our eyes adapt to scenes; when it gets darker our pupils open, and when it gets brighter they close. This process often takes quite a while (it's not instant). It is said that our eyes can see a Dynamic Range of 10 f-stops (1:1024) without adapting the pupils and overall about 24 f-stops.


The detail we see is based not on absolute tonal values but on contrast at image edges. The eye is extremely sensitive to very small contrast changes, which is what makes the concept of contrast so important.

Global Contrast

Global contrast measures the brightness difference between the darkest and brightest element in the entire image. Tools like Curves and Levels only change global contrast as they treat all pixels with the same brightness identically.

The global contrast has three main regions:

  • Mid-tones
  • Highlights
  • Shadows

The sum of the contrast in these three regions defines the global contrast. This means that at any given global contrast (e.g. a glossy paper print), if you spend more contrast on the mid-tones (very common) you have less left to spend on the highlights and shadows.

The mid-tones normally show the main subject. If the mid-tones show low contrast the image lacks "snap". Adding more contrast to the mid-tones ("snap") often results in compressed shadows and highlights. Adding some local contrast (see below) can help to improve the overall image presentation.

Local Contrast

The following chart helps to explain the concept of local contrast. The circles in each row have exactly identical brightness levels.


Yet the top right circle looks a lot brighter than the one on the left. Why is that? Our eyes see the difference between it and its local surroundings. The right circle looks brighter against the dark gray background than the same tone on the brighter background on the left. Just the opposite is true for the two circles on the bottom. For our eyes the absolute brightness matters less than the relation to other nearby areas.

Tools like Lightroom's Fill Light and Sharpening, and Photoshop's Shadow/Highlight act locally and do not treat all pixels with the same brightness value identically.

The classic Dodge & Burn tools manipulate the local contrast of images. Dodge & Burn is still one of the best methods to refine images, because our own eyes are naturally pretty good at judging how the image is going to appear to other people's eyes. In some way today's tone mapping tools reduce the need for manual dodge & burn without replacing it.

HDR: Managing Dynamic Range

Why then even bother to photograph scenes with higher DR than your camera's or printer's DR? The answer is that we can capture the scene's high DR and later map it to a lower DR output. The key point here is that we don't lose any detail information during this process.

There are other solutions to the problem:

  • Some photographers simply wait for overcast conditions, and don't photograph at all when the DR of the scene is too high.
  • Use fill flash (which of course does not help with landscapes)

On a longer trip, however, you have to get the best out of any weather, so we need to find better solutions. Also, the existing light often doesn't depend much on the weather at all. This is best illustrated with some example images.

Page Antelope Canyon. This scene in Antelope Canyon is very dark, yet there is still an amazingly wide dynamic range of light (we used 5 shots here at 2EV apart).
Alcatraz. In Alcatraz the light from the right was still quite bright compared to the dark room (there was no artificial light available).

The first step is to capture the full DR of the scene with our cameras without losing any details.

Mapping DR: Lower DR Scenes

Let's first have a look at photographing a lower DR scene.

Lower DR Scene

In this case we can capture the DR of the scene directly with our cameras in one shot. The minor clipping in the shadows is not usually a problem.

Next we map this captured tonal range to our output (which usually offers even lower DR than the camera itself).

Mapping to Output


The mapping from camera to output is mainly done via tone curves (often compressing the highlights and shadows). Here are the main tools that get involved:

  • Raw converter processing: maps from linear camera tonality via tone curves
  • Curves and levels in Photoshop
  • Dodge & Burn in Lightroom or Photoshop

Note: In the days of the wet darkroom we printed negatives with enlargers and used papers at different grades (or multigrade papers). The grades differed in the contrast they produced. This is the classic method of tone mapping. Tone mapping may sound like something new, but it is far from it. Only in the early days of photography did photographers map directly from scene to output. Since then the sequence followed has always been:

  • Scene --> Capture --> Output

Mapping DR: Higher DR Scenes

Now let's look at the situation when we photograph a higher dynamic range scene.

Clipping in Highlights and Shadows

Here is an example of how the result could look:


As we can see the camera can only capture part of the scene's dynamic range. As mentioned earlier, it's rare that clipping the highlights is a valid option. This means we need to change the exposure to protect all the highlights from getting clipped (ignoring the specular highlights, such as reflections). Then we'd have the following situation:

Exposed for the highlights

Now we have stronger clipping in the shadows. In some cases this may be perfectly fine, though not if we want to show more shadow details.

Below is an example of how the result of exposing for the highlights might look:


Capturing Higher Dynamic Range with Bracketed Exposures

So how can we capture all the DR we want with the same camera? The solution is to capture multiple overlapping exposures at different EV (Exposure Value) levels.

Exposure Bracketing


In HDR photography we capture multiple different, but overlapping, exposures to cover the DR of the scene. In general the exposures differ by 1-2 EV. This means the total number of needed exposures is defined by:

  • The DR of the scene we want to capture
  • DR the camera can capture in a single shot

Each additional exposure can add 1-2 EV (depending on your selected bracketing) of DR to the camera's DR.
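The arithmetic above can be sketched as a rough estimate: the number of shots is driven by how much scene DR exceeds what a single frame covers, divided by the bracketing step. This is a simplification of our own (in practice you want generous overlap between frames for clean blending), not a formula from any particular tool:

```python
import math

def shots_needed(scene_dr_stops, camera_dr_stops, bracket_step_ev):
    """Rough lower bound on the bracketed exposures needed.

    Each extra shot extends coverage by about bracket_step_ev stops
    beyond what one frame captures (overlap for blending ignored)."""
    extra = max(0.0, scene_dr_stops - camera_dr_stops)
    return 1 + math.ceil(extra / bracket_step_ev)

# A ~15-stop scene, a 9-stop camera, bracketing at 2 EV steps:
print(shots_needed(15, 9, 2))  # 4 shots
# A scene the camera covers in one frame:
print(shots_needed(8, 10, 2))  # 1 shot
```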

Now we have to find out what we can do with these multiple exposures. There are quite a few methods:

  • Manual blending (today in Photoshop, was/is done with enlargers)
  • Automatic Exposure Blending (Fusion)
  • Creating HDR images (in HDR enabled Software)

Manual Blending

Manual blending of different exposures (using what are essentially montage techniques) is nearly as old as photography. Even if Photoshop makes it much easier these days it can be a tedious process. We hardly ever use manual blending.

Automatic Exposure Blending (also called Fusion)

Here the software (I most often use Fusion in Photomatix) performs the blending process by blending the different exposures directly into the final output image.

Exposure Blending


Fusion (Exposure Blending) usually produces very nice images that look more "natural".

Bryce Canyon blended with Fusion

Creating HDR images

HDR processing is actually a two-step process:

  • Create a HDR image
  • Tone-map the HDR image to a normal 16-bit image

When creating HDR images we pursue the same goal but take a different route. With HDR Imaging (HDRI) we first merge the exposures into an HDR image, and do not map directly to the final output.

Creating a HDR image

HDR images are something entirely new to photography (they cannot exist without computers): 32-bit floating point images that can store a practically unlimited dynamic range of tonal values. The HDR merge process tries to find all the tonal values in the bracketed exposures and create a new electronic image that represents all the tonal values captured across all the exposures.
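The core idea of the merge can be sketched for a single pixel: each frame gives an estimate of the scene radiance (its value divided by its relative exposure), and well-exposed frames are trusted more than clipped or noisy ones. This is only a toy illustration under the assumption of linear pixel values; real tools also estimate the camera's response curve, align the frames, and handle ghosting:

```python
def merge_pixel(values, exposure_times, eps=1e-8):
    """Toy HDR merge for one pixel.

    values: the pixel's linear value in [0, 1] in each bracketed frame;
    exposure_times: relative exposure time of each frame.
    Returns an estimated scene radiance."""
    num = den = 0.0
    for v, t in zip(values, exposure_times):
        # "Hat" weighting: trust mid-tones, distrust near-clipped
        # highlights and near-black, noisy shadows.
        w = 1.0 - abs(2.0 * v - 1.0)
        num += w * v / t
        den += w
    return num / max(den, eps)

# Two frames of the same pixel, the second exposed twice as long:
print(merge_pixel([0.3, 0.6], [1.0, 2.0]))  # ≈ 0.3 (the radiance)
```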

Note: Whenever something new comes along, some people claim it's old hat and that they were doing it before they were born :-). To be clear, HDR processing as described here is new and can only be done using computers, and over the past few years the results have become more and more usable.

But we photographers are only interested in the final print, and techniques to reach this goal existed long before we had computers. In the movie industry they actually work with the HDR images during the CGI process (e.g. 'let's add a second floor to a building'). For photographers, HDR is just a step on our way to the final print. And, again to be clear, new technology does not automatically produce better images. HDR is simply a new tool that we can exploit to create images that were harder to achieve in the past.

Why create images with high dynamic range at all, if the output DR is so limited?

  • Answer: Tone mapping - Map the tonal values from HDR to Output DR

That is why tone mapping is the most important - and also the most challenging - part of HDR processing for us photographers. The same HDR image can then be tone-mapped in many different ways.
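As a simple illustration of what a global tone mapper does, here is Reinhard's well-known global operator, which compresses an unbounded HDR luminance into the displayable range. This is shown purely for illustration; it is not necessarily the algorithm used by any of the tools discussed in this article (which mostly rely on local operators):

```python
def reinhard_global(luminance):
    """Reinhard's simple global operator: L / (1 + L).

    Maps any non-negative HDR luminance into [0, 1): highlights are
    compressed strongly while shadows are left almost untouched.
    Local operators go further and vary the mapping by neighborhood."""
    return luminance / (1.0 + luminance)

for L in (0.1, 1.0, 10.0, 1000.0):
    print(L, "->", round(reinhard_global(L), 3))
```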

The HDR images also can be stored in different formats:

  • EXR (.exr suffix, high color gamut and precision, DR about 30 f-stops)
  • Radiance (.hdr suffix, less color gamut, huge DR)
  • BEF (private format by Unified Color to get better color quality)
  • 32 bit TIFF (very large files due to low compression and not much used in practice)

To create these HDR images you need special HDR software. We use:

  • Photoshop CS5
  • HDRsoft's Photomatix
  • Unified Color's HDR Expose or Express
  • Nik Software HDR Efex Pro 1.0

Unfortunately all these software packages produce different HDR files. They can differ in (we'll cover these aspects in more detail later):

  • Color (Hue and Saturation)
  • Tonality
  • Alignment
  • Noise handling
  • Chromatic Aberration (CA) handling
  • Ghosting reduction

The Basics of Tone Mapping

As with our Low DR Scene case, we need to compress down to the output DR:

Mapping down to Output DR

How is this different to the Low DR Scene situation? This time the tonal compression is much stronger, and the classic tone-curve approach doesn't work that well anymore. The easiest way to show the basic tone mapping principles is to use an example:

3 Exposures at Fort Point

These are dark arcades at Fort Point in San Francisco. To demonstrate the tone mapping principles we'll use Unified Color's HDR Expose tool, because it allows us to use the various different operations involved in a modular fashion.

Here is the merged HDR file shown without any changes:

HDR Image

It's pretty dark in the shadows, and almost totally blown out in the highlights. Let's take a look at the histogram as shown in HDR Expose:

Histogram of the HDR original

The shadows are not really a problem, but the highlights are clipped by about 2EV. First we'll see how a minus 2EV exposure correction would improve the image:

-2 EV Exposure Compensation
-2.0 EV histogram

Now the highlights seem to be much better but the overall image looks way too dark. What we need is a mix of exposure compensation and lowering the global contrast.

Global Contrast reduction
The global contrast is now fine. No highlights are clipped and the shadows are open. Unfortunately the image looks quite flat.

In pre-HDR days, the solution for such a situation would be to use an S-Curve in Photoshop:

Simple Photoshop S-Curve

But crafting a good S-Curve would take a while, and could easily result in over compressed highlights and shadows.
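One way to sketch an S-curve is as a blend between the identity line and a smoothstep curve; the blend strength plays the role of how aggressively the curve is shaped. This is a hypothetical illustration of the principle, not the curve Photoshop computes:

```python
def s_curve(x, strength=0.5):
    """Simple S-curve on [0, 1]: blend identity with smoothstep.

    strength=0 is a straight line (no contrast change); strength=1
    is a full smoothstep, which boosts mid-tone contrast while
    compressing highlights and shadows -- the trade-off mentioned
    in the text."""
    smooth = x * x * (3.0 - 2.0 * x)  # smoothstep
    return (1.0 - strength) * x + strength * smooth

# Mid-gray stays put, shadows are darkened, highlights lifted:
for x in (0.25, 0.5, 0.75):
    print(x, "->", round(s_curve(x, strength=1.0), 4))
```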

This is why tone mapping tools take another route: They improve the local contrast.

Global and local contrast changed

In this version the highlights show detail, the shadows are not blocked, and the flatness is gone. This would not be our final version, though; we usually optimize the photo further in Photoshop CS5:

  • Tuning saturation
  • Optimize Contrast with DOP Contrast Plus V2
  • Final sharpening with DOP Optimal Sharp

Note: In Unified Color's HDR Expose you actually can control the global and local contrast independently. We like this systematic approach, as we understand the settings in photographic terms.

What makes all the HDR tools different is the algorithms they use to deal with lowering the contrast (e.g. how they deal with what should be "local"). There is no right or wrong, and it is more a question of your own preferences and personal photographic style.

All the main HDR tools in the market also offer control over further parameters:

  • Detail: Very much related to local contrast and sharpening, but not quite the same. Too much detail can make images "grungy", which may or may not be what you want. We normally use our DOP Detail Extractor V2 script for very strong detail.
  • Saturation: Apart from HDR Expose and Nik HDR Efex Pro (via U-Points), most HDR tools handle only global saturation (all colors are treated the same)
  • White Balance (WB): we try to solve this already at the Raw level in Lightroom (or other Raw converters).
  • Noise Removal: again we remove the noise in Lightroom 3.x if needed.
  • Shadow/Highlight: treatments to open shadows and tone down highlights.
  • Curves: The Curves in Photoshop CS5's Toning (The new CS5 Tone-Mapper) are powerful, but require some time to get the right result.

We'll cover most of these aspects later in the HDR Workflow and HDR Tools chapters in more detail.

Summary on Dynamic Range and HDR

The approach of extending the dynamic range that your camera can capture is very old, because these limitations have been known about for a long time. Manual or automatic blending of images offers a very powerful way to map the combined dynamic range of your images down to the lower dynamic range of your printed output. Creating seamless blended images manually can be very challenging and time consuming: Dodge & Burn techniques can be very powerful for creating good prints, but they require practice and perseverance.

Creating HDR images is a new way to master the same old problem. We as photographers are mostly interested in the tone-mapped results. Tone mapping algorithms face the challenge of compressing a high dynamic range down to an image we can view on a monitor or in a print. Different tone mapping methods can give very different results, and it is up to the photographer to select the methods they like best.

Further learning

This is an edited version of the first chapter of an ongoing work by Uwe Steinmueller of Digital Outback Photo, featuring his personal experiences of HDR photography, and will eventually form the basis of a book on the art of HDR photography. If you'd like to find out more about digital imaging workflow from a fine art photographer's perspective then check out the Digital Outback Photo E-book, 'The Digital Photography Workflow Handbook (2010)', by Uwe Steinmueller and Juergen Gulbins, which covers the complete digital photography workflow from input to output. The 540 page prize-winning handbook covers everything from Import to Print (and even backup) and also features Photoshop and Lightroom techniques, HDR, color management and raw editing.

© 2010, www.dpreview.com & Uwe Steinmueller.