Dynamic Range and HDR
|Fort Point Arcades. High Dynamic Range (HDR) photography is all about dealing with the dynamic range of the scenes we capture, and the limited ability of our cameras and printers to properly capture scenes such as the one shown above.|
Let's start with a basic definition of Dynamic Range:
- Dynamic range is the ratio between the brightest and darkest elements that matter for your photographic vision (measured in brightness levels).
This is not an absolute range, as it very much depends on your personal goals. There are great photos that show very dark shadows without any details; in doing so they represent only the lower dynamic range part of the scene.
There are actually several different types of dynamic range to consider:
- The scene
- The capture device (camera)
- Output (screen, print)
- Human vision
During the photographic process the dynamic range gets transformed twice:
- Scene > Capture Device (here we think of cameras)
- Capture > Output (monitor or print)
It's important to remember that any detail that gets lost during Capture can never be recovered (something we'll cover in more detail later), but in the end it only matters that the final output image pleases your vision.
Different Types of Dynamic Range
Dynamic Range of the Scene
What are the brightest details and darkest details that you want to portray? This is your artistic decision. Probably the best way to explain is to look at some example scenes.
|Lost Cabin: In this scene we wanted to show inside and outside details.|
|Fort Point Arcades. Again we want to show detail in the bright and dark areas. In general we consider highlight areas to be more critical than shadows. Major blown-out highlights usually look bad in prints (they show as plain paper white).|
A dynamic range (contrast ratio) of 1:30000 can easily be reached in these situations - even more if you photograph a dark room with windows to a bright outside scene.
Ultimately, HDR photography is all about creating pleasing images in these circumstances.
Dynamic Range captured by the Camera
If our cameras could capture high dynamic range scenes in a single shot we wouldn't need the techniques described in these articles. Unfortunately the dynamic range of cameras is much lower than many of the scenes they're used to photograph.
How is the DR of a camera defined?
- Dynamic Range of the camera is measured from brightest details to shadows that have good detail well above the noise floor.
The key thing here is that we measure from highlight details (not a pure white) to shadow details that are not degraded by too much noise.
- Today's typical DSLRs can capture 7-10 f-stops (1:128 to 1:1000). We're deliberately conservative here; don't get caught up in the numbers. Some photographs can still look great with a lot of noise in them, while others lose their beauty. It is your decision, and of course the print size matters too.
- Slide film: 6-7 f-stops
- Negative film: about 10-12 f-stops
- Highlight recovery in some RAW converters can gain up to +1 extra f-stop
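The f-stop figures above convert directly to contrast ratios, since each stop doubles the light. A quick sketch of the arithmetic (the function names are our own):

```python
import math

def stops_to_ratio(stops):
    """Each f-stop doubles the light, so n stops span a 1:2^n contrast ratio."""
    return 2 ** stops

def ratio_to_stops(ratio):
    """Inverse: how many f-stops a given contrast ratio spans."""
    return math.log2(ratio)

print(stops_to_ratio(10))               # 1024, i.e. roughly 1:1000
print(round(ratio_to_stops(30000), 1))  # a 1:30000 scene spans ~14.9 stops
```

This is why a 1:30000 scene overwhelms a camera that captures 7-10 stops: the scene spans almost 15 stops.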
DSLRs have got much better over the last few years, but don't expect miracles. There are some specialized cameras that can capture a higher dynamic range, but these are mostly cameras designed specifically for very special applications. The Fuji S5 (discontinued), for example, had a unique sensor with dual photo sites that allowed it to capture about 2 f-stops more DR.
Output Dynamic Range
Of all the stages in the digital photography process, the output normally shows the lowest dynamic range.
- Today's monitors: 1:300 to 1:1000
- HDR monitors: about 1:30000 (watch out, these can strain your eyes)
- Printers on glossy media: about 1:200
- Printers on matte fine art papers: below 1:100
You may well be asking yourself why it would make sense to capture the higher dynamic range of a scene if the output DR is so limited. The answer is dynamic range compression (we'll also refer to this later as tone mapping).
Important aspects of Human Vision
Because we present our work to other people it is also important to understand some basic aspects about how we perceive images and the world.
Human vision works quite differently from our cameras. We all know that our eyes adapt to scenes: when it gets darker our pupils open, and when it gets brighter they close. This process is not instant and can take quite a while. It is said that our eyes can see a dynamic range of 10 f-stops (1:1024) without adapting the pupils, and overall about 24 f-stops.
The detail we see is based not on absolute tonal values but on contrast at image edges. The eye is extremely sensitive to very small contrast changes, which is what makes the concept of contrast so important.
Global contrast measures the brightness difference between the darkest and brightest element in the entire image. Tools like Curves and Levels only change global contrast as they treat all pixels with the same brightness identically.
The global tonal range has three main regions:
- Shadows
- Mid-tones
- Highlights
The sum of the contrast in these three regions makes up the global contrast. This means that at any given global contrast (e.g. a glossy paper print), if you spend more contrast on the mid-tones (very common) there is less contrast left for the highlights and shadows.
The mid-tones normally show the main subject. If the mid-tones show low contrast the image lacks "snap". Adding more contrast to the mid-tones ("snap") often results in compressed shadows and highlights. Adding some local contrast (see below) can help to improve the overall image presentation.
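This trade-off can be made concrete. A global tone curve maps input brightness to output brightness, and the contrast "spent" on a region is just the curve's slope there. Because the curve must span the full 0-1 range, the average slope is pinned at 1, so a steeper mid-tone section necessarily flattens the shadows and highlights. A small sketch (the sine-based S-curve is our own illustrative choice, not any tool's actual curve):

```python
import numpy as np

a = 0.1                              # curve strength (illustrative)
x = np.linspace(0.0, 1.0, 1001)      # input brightness, 0-1
y = x - a * np.sin(2 * np.pi * x)    # classic S-curve: darker shadows, brighter highlights

slope = np.gradient(y, x)            # local slope = contrast in that region
print(round(float(slope[500]), 2))   # mid-tones: ~1.63, boosted "snap"
print(round(float(slope[0]), 2))     # deep shadows: ~0.37, compressed in exchange
```

The mid-tone slope above 1 is exactly paid for by the end slopes below 1; no global curve can avoid this bargain.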
The following chart helps illustrate the concept of local contrast. The circles in each row have exactly the same brightness levels.
Yet the top right circle looks a lot brighter than the one on the left. Why is that? Our eyes see the difference between it and its local surroundings: the right circle looks brighter against the dark gray background than the same tone against a brighter background on the left. Just the opposite is true for the two circles on the bottom. To our eyes, absolute brightness matters less than the relation to other nearby areas.
Tools like Lightroom's Fill Light and Sharpening, and Photoshop's Shadow/Highlight act locally and do not treat all pixels with the same brightness value identically.
The classic Dodge & Burn tools manipulate the local contrast of images. Dodge & Burn is still one of the best methods to refine images, because our own eyes are naturally pretty good at judging how the image is going to appear to other people's eyes. In some ways, today's tone mapping tools reduce the need for manual Dodge & Burn without replacing it entirely.
HDR: Managing Dynamic Range
Why then even bother to photograph scenes with higher DR than your camera's or printer's DR? The answer is that we can capture the scene's high DR and later map it to a lower DR output. The key point here is that we don't lose any detail information during this process.
There are other solutions to the problem:
- Some photographers simply wait for overcast conditions, and don't photograph at all when the DR of the scene is too high.
- Use fill flash (which of course does not help with landscapes)
On a longer trip, however, you have to get the best out of any weather, so we need better solutions. Often the existing light doesn't depend much on the weather anyway. This is best illustrated with some example images.
|Page Antelope Canyon. This scene in Antelope Canyon is very dark, yet there is still an amazingly wide dynamic range of light (we used 5 shots here at 2EV apart).|
|Alcatraz. In Alcatraz the light from the right was still quite bright compared to the dark room (there was no artificial light available).|
The first step is to capture the full DR of the scene with our cameras without losing any details.
Mapping DR: Lower DR Scenes
Let's first have a look at photographing a lower DR scene.
|Lower DR Scene|
In this case we can capture the DR of the scene directly with our cameras in one shot. The minor clipping in the shadows is not usually a problem.
Next we map this captured tonal range to our output (which usually offers even lower DR than the camera itself).
|Mapping to Output|
Mapping to Output
The mapping from camera to output is mainly done via tone curves (often compressing the highlights and shadows). Here are the main tools that get involved:
- Raw converter processing: maps from linear camera tonality via tone curves
- Curves and levels in Photoshop
- Dodge & Burn in Lightroom or Photoshop
Note: In the days of the wet darkroom we printed negatives with enlargers and used papers at different grades (or multigrade papers). The grades differed in the contrast they produced. This is the classic method of tone mapping. Tone mapping may sound like something new, but it is far from it. Only in the early days of photography did photographers map directly from scene to output. Since then the sequence followed has always been:
- Scene --> Capture --> Output
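The first mapping in that chain is worth making concrete: a camera sensor records light linearly, while output media expect gamma-encoded tonality, so a raw converter's base rendering is roughly a gamma curve (plus highlight and shadow roll-offs we omit here). A minimal sketch using the common ~1/2.2 display gamma, not any specific converter's actual curve:

```python
def linear_to_display(v, gamma=2.2):
    """Map a linear sensor value (0-1) to display tonality (0-1)."""
    return v ** (1.0 / gamma)

# Mid-grey reflects about 18% of the light, yet should look middling bright;
# the gamma curve lifts it near the middle of the output range:
print(round(linear_to_display(0.18), 2))   # ~0.46
```

This is why an unprocessed linear raw file looks very dark: the tone curve does a lot of lifting before we ever see the image.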
Mapping DR: Higher DR Scenes
Now let's look at the situation when we photograph a higher dynamic range scene.
|Clipping in Highlights and Shadows|
Here is an example of how the result could look:
As we can see the camera can only capture part of the scene's dynamic range. As mentioned earlier, it's rare that clipping the highlights is a valid option. This means we need to change the exposure to protect all the highlights from getting clipped (ignoring the specular highlights, such as reflections). Then we'd have the following situation:
|Exposed for the highlights|
Now we have stronger clipping in the shadows. In some cases this may be perfectly fine, though not if we want to show more shadow details.
Below is an example of how the result of exposing for the highlights might look:
Capturing Higher Dynamic Range with Bracketed Exposures
So how can we capture all the DR we want with the same camera? The solution is to capture multiple overlapping exposures at different EV (Exposure Value) settings.
In HDR photography we capture multiple different, but overlapping, exposures to cover the DR of the scene. In general the exposures differ by 1-2 EV. This means the total number of exposures needed is determined by:
- The DR of the scene we want to capture
- DR the camera can capture in a single shot
Each additional exposure adds 1-2 EV of DR (depending on your selected bracketing step) to the camera's single-shot DR.
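That relationship gives a back-of-the-envelope shot count. A sketch under the simplifying assumption that each extra shot extends coverage by exactly one bracket step (the function name and example numbers are ours):

```python
import math

def exposures_needed(scene_dr_ev, camera_dr_ev, bracket_step_ev=2.0):
    """Rough count of bracketed shots needed to cover a scene's dynamic range."""
    extra_ev = max(0.0, scene_dr_ev - camera_dr_ev)
    return 1 + math.ceil(extra_ev / bracket_step_ev)

# A ~15 EV scene (about 1:30000) with a 9 EV camera at 2 EV spacing:
print(exposures_needed(15, 9, 2))   # 4 shots
# The same scene bracketed more finely at 1 EV spacing:
print(exposures_needed(15, 9, 1))   # 7 shots
```

Finer spacing costs more shots but gives more overlap between frames, which helps with noise and merging.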
Now we have to find out what we can do with these multiple exposures. There are quite a few methods:
- Manual blending (today in Photoshop; traditionally done with enlargers)
- Automatic Exposure Blending (Fusion)
- Creating HDR images (in HDR enabled Software)
Manual blending of different exposures (using what are essentially montage techniques) is nearly as old as photography itself. Even though Photoshop makes it much easier these days, it can be a tedious process. We hardly ever use manual blending.
Automatic Exposure Blending (also called Fusion)
Here the software (I most often use Fusion in Photomatix) performs the blending process by blending the different exposures directly into the final output image.
Fusion (Exposure Blending) usually produces very nice images that look more "natural".
|Bryce Canyon blended with Fusion|
Creating HDR images
HDR processing is actually a two-step process:
- Create an HDR image
- Tone-map the HDR image to a normal 16-bit image
When creating HDR images we pursue the same goal but take a different route. With HDR Imaging (HDRI) we first merge the exposures into an HDR image rather than mapping directly to the final output.
|Creating an HDR image|
Something entirely new to photography (they cannot exist without computers), HDR images are 32-bit floating point images that can store a practically unlimited dynamic range of tonal values. The HDR merge process tries to recover all the tonal values in the bracketed exposures and create a new electronic image that represents all the tonal values captured across the exposures.
Note: Whenever something new comes along, some claim it's old hat and that they were doing it before they were born :-). To be clear, HDR processing as described here is new and can only be done using computers, and over the past few years the results have become more and more usable.
But we photographers are only interested in the final print, and techniques to reach this goal existed long before we had computers. In the movie industry they actually work with the HDR images themselves during the CGI process (e.g. 'let's add a second floor to a building'). For photographers, HDR is just a step on the way to the final print. And, again to be clear, new technology does not automatically produce better images. HDR is simply a new tool that we can exploit to create images that were harder to achieve in the past.
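Conceptually, the merge step estimates each pixel's true linear radiance by combining the exposures, trusting well-exposed pixels more than clipped or noisy ones. A toy sketch of the idea (real tools also handle camera response curves, alignment and ghosting; the hat-shaped weighting is one common choice, not any particular product's algorithm):

```python
import numpy as np

def merge_to_hdr(frames, evs):
    """Toy HDR merge. frames: linear images clipped to 0-1; evs: their EV
    offsets. Each frame is rescaled to a common radiance scale, then
    averaged with weights favoring well-exposed (mid-tone) pixels."""
    acc = 0.0
    wsum = 0.0
    for img, ev in zip(frames, evs):
        radiance = img / (2.0 ** ev)               # undo the exposure difference
        weight = 1.0 - np.abs(img - 0.5) * 2.0     # 1 at mid-grey, 0 near clipping
        weight = np.clip(weight, 1e-4, None)       # never fully zero
        acc = acc + radiance * weight
        wsum = wsum + weight
    return (acc / wsum).astype(np.float32)         # 32-bit float HDR result

# A dark, a mid and an over-bright scene value, shot at -2/0/+2 EV:
scene = np.array([0.02, 0.2, 1.5])                 # true linear radiance
frames = [np.clip(scene * (2.0 ** ev), 0.0, 1.0) for ev in (-2, 0, 2)]
hdr = merge_to_hdr(frames, [-2, 0, 2])             # recovers ~[0.02, 0.2, 1.5]
```

Note that the recovered value 1.5 lies above anything a single 0-1 frame can hold: that is exactly what the 32-bit floating point representation buys us.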
Why create images with high dynamic range at all, if the output DR is so limited?
- Answer: Tone mapping - Map the tonal values from HDR to Output DR
That is why tone mapping is the most important - and also most challenging - part of HDR processing for us photographers. The same HDR image can then be tone-mapped in many different ways.
HDR images can also be stored in different formats:
- EXR (.exr suffix, high color gamut and precision, DR about 30 f-stops)
- Radiance (.hdr suffix, less color gamut, huge DR)
- BEF (proprietary format from Unified Color, designed for better color quality)
- 32-bit TIFF (very large files due to low compression; not much used in practice)
To create these HDR images you need special HDR software. We use:
- Photoshop CS5
- HDRsoft's Photomatix
- Unified Color's HDR Expose or Express
- Nik Software HDR Efex Pro 1.0
Unfortunately, these software packages all produce different HDR files. They can differ in the following ways (we'll cover these aspects in more detail later):
- Color (Hue and Saturation)
- Noise handling
- Chromatic Aberration (CA) handling
- Ghosting reduction
The Basics of Tone Mapping
As with our Low DR Scene case, we need to compress down to the output DR:
|Mapping down to Output DR|
How is this different from the Low DR Scene situation? This time the tonal compression is much stronger, and the classic tone-curve approach no longer works well. The easiest way to show the basic tone mapping principles is with an example:
|3 Exposures at Fort Point|
These are dark arcades at Fort Point in San Francisco. To demonstrate the tone mapping principles we'll use Unified Color's HDR Expose tool, because it allows us to use the various different operations involved in a modular fashion.
Here is the merged HDR file shown without any changes:
It's pretty dark in the shadows, and also almost totally blown out in the highlights. Let's take a look at the histogram as shown in HDR Expose:
|Histogram of the HDR original|
The shadows are not really a problem, but the highlights are clipped by about 2 EV. First we'll see how a -2 EV exposure correction improves the image:
|-2 EV Exposure Compensation|
|-2.0 EV histogram|
Now the highlights look much better, but the overall image is way too dark. What we need is a mix of exposure compensation and a reduction in global contrast.
|Global Contrast reduction|
|The global contrast is now fine. No highlights are clipped and the shadows are open. Unfortunately the image looks quite flat.|
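In linear terms these two moves are simple arithmetic: a -2 EV correction divides every radiance value by four, and a global contrast reduction pulls values toward mid-grey, here approximated with a gamma-style exponent (HDR Expose's actual math is its own; this only sketches the idea):

```python
import numpy as np

def exposure_comp(linear, ev):
    """Exposure compensation in linear light: each EV is a factor of two."""
    return linear * (2.0 ** ev)

def reduce_global_contrast(linear, gamma=0.4):
    """Exponent < 1 compresses the range, lifting shadows toward mid-grey."""
    return np.clip(linear, 0.0, None) ** gamma

# A deep shadow and a highlight blown out by ~2 EV, in linear radiance:
pix = np.array([0.001, 4.0])
pix = exposure_comp(pix, -2)        # highlights back in range: [0.00025, 1.0]
pix = reduce_global_contrast(pix)   # shadows opened, highlights preserved
print(pix)                          # ~[0.036, 1.0]
```

The shadow value rises by a factor of over a hundred while the brightest highlight stays at 1.0, which is exactly why the result looks open but flat.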
In pre-HDR days, the solution for such a situation would be to use an S-Curve in Photoshop:
|Simple Photoshop S-Curve|
But crafting a good S-Curve would take a while, and could easily result in over compressed highlights and shadows.
This is why tone mapping tools take another route: They improve the local contrast.
|Global and local contrast changed|
In this version the highlights show detail, the shadows are not blocked and the flatness is gone. This would not be our final version, though; we usually optimize the photo further in Photoshop CS5:
- Tuning saturation
- Optimizing contrast with DOP Contrast Plus V2
- Final sharpening with DOP Optimal Sharp
Note: In Unified Color's HDR Expose you can actually control the global and local contrast independently. We like this systematic approach, as we can understand the settings in photographic terms.
What makes the various HDR tools different is the algorithms they use to lower the contrast (e.g. how they decide what counts as "local"). There is no right or wrong here; it is more a question of your own preferences and personal photographic style.
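The core idea behind most local-contrast algorithms can be shown in a few lines: compare each pixel against a blurred local average and exaggerate the difference, so that two pixels with identical brightness but different surroundings are treated differently - exactly what a global curve cannot do. A toy 1-D sketch (real tone mappers use far more sophisticated, edge-aware neighborhoods):

```python
import numpy as np

def boost_local_contrast(signal, radius=2, amount=0.5):
    """Push each value away from its local average."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    local_avg = np.convolve(signal, kernel, mode="same")
    return signal + amount * (signal - local_avg)

# Two identical mid-grey pixels (0.5) in different surroundings:
row = np.array([0.9, 0.9, 0.5, 0.9, 0.9,    # mid-grey among bright neighbors
                0.1, 0.1, 0.5, 0.1, 0.1])   # mid-grey among dark neighbors
out = boost_local_contrast(row)
print(round(float(out[2]), 2), round(float(out[7]), 2))   # 0.34 vs 0.66
```

The pixel surrounded by bright neighbors is pushed darker and the one among dark neighbors brighter, mimicking the simultaneous-contrast behavior of our own eyes described earlier.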
All the main HDR tools in the market also offer control over further parameters:
- Detail: Closely related to local contrast and sharpening, but not quite the same. Too much detail can make images "grungy", which may or may not be what you want. We normally use our DOP Detail Extractor V2 script for very strong detail.
- Saturation: Apart from HDR Expose and Nik HDR Efex Pro (via U-Points), most HDR tools handle only global saturation (all colors treated the same).
- White Balance (WB): We try to solve this at the Raw level in Lightroom (or other Raw converters).
- Noise Removal: Again, we remove noise in Lightroom 3.x if needed.
- Shadow/Highlight: Treatments to open up shadows and tone down highlights.
- Curves: The Curves in Photoshop CS5's Toning (the new CS5 tone mapper) are powerful, but require some time to get the right result.
We'll cover most of these aspects later in the HDR Workflow and HDR Tools chapters in more detail.
Summary on Dynamic Range and HDR
The approach of enhancing the dynamic range that your camera can capture is very old, because these limitations have been known about for a long time. Manual or automatic blending of images offers very powerful ways to map the combined dynamic range of your exposures down to the lower dynamic range of your printed output. Creating seamlessly blended images manually can be very challenging and time consuming: Dodge & Burn techniques can be very powerful for creating good prints, but they require practice and perseverance.
Creating HDR images is a new way to master the same old problem. As photographers we are mostly interested in the tone-mapped results. Tone mapping algorithms face the challenge of compressing a high dynamic range down to an image we can view on a monitor or in a print. The various methods of tone mapping can give very different results, and it is up to the photographer to select the methods they like best.
This is an edited version of the first chapter of an ongoing work by Uwe Steinmueller of Digital Outback Photo, featuring his personal experiences of HDR photography, and will eventually form the basis of a book on the art of HDR photography. If you'd like to find out more about digital imaging workflow from a fine art photographer's perspective then check out the Digital Outback Photo E-book, 'The Digital Photography Workflow Handbook (2010)', by Uwe Steinmueller and Juergen Gulbins, which covers the complete digital photography workflow from input to output. The 540 page prize-winning handbook covers everything from Import to Print (and even backup) and also features Photoshop and Lightroom techniques, HDR, color management and raw editing.
© 2010, www.dpreview.com & Uwe Steinmueller.
Jun 18, 2014