Jack Hogan
Veteran Member
Recently there has been an increase in interest in HDR, driven in part by video and gaming. Now Adobe is getting into the game, giving the latest version of its Adobe Camera Raw converter some HDR capabilities. Cutting through the marketing and techno-babble, I would like to share my view of what HDR means in practice for photographers, and I look forward to your thoughts on the matter, setting me straight where I go astray.
In a nutshell, it mainly* comes down to the increased Contrast Ratio (CR) of the latest and greatest TVs and monitors. While for the last decade decent, affordable photographic displays have in practice been limited to less than 10 stops of CR (500-1000:1), with 8-9 stops being typical, we are starting to see combinations of technologies able to achieve close to 12 stops, and in controlled viewing conditions even more than that.
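For those who want to check the arithmetic, stops and contrast ratio are just two ways of expressing the same quantity: stops are the base-2 logarithm of the ratio. A minimal sketch (the function names are mine, purely for illustration):

```python
import math

# Stops of dynamic range are log2 of the contrast ratio:
# a 1000:1 display covers log2(1000) ~ 10 stops,
# while ~12 stops corresponds to a CR of 2**12 = 4096:1.

def cr_to_stops(cr: float) -> float:
    """Convert a contrast ratio (e.g. 1000 for 1000:1) to stops."""
    return math.log2(cr)

def stops_to_cr(stops: float) -> float:
    """Convert stops of dynamic range back to a contrast ratio."""
    return 2.0 ** stops

print(f" 500:1   -> {cr_to_stops(500):.1f} stops")
print(f"1000:1   -> {cr_to_stops(1000):.1f} stops")
print(f"12 stops -> {stops_to_cr(12):.0f}:1")
```

So the jump from a 1000:1 display to a 12-stop one is a factor of four in contrast ratio, not a marginal tweak.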
This is relevant to photographers because 12 stops or so of Dynamic Range (DR) is the best that a single still raw capture has achieved over the last decade - and, barring computational photography, that seems to be where things stand today. So far, squeezing the potential 12 stops of captured DR into less than 10 stops of CR has been a challenge, leading to strong compromise solutions like standard Tone Curves or - somewhat better but more limiting - Tone Reproduction Operators. These compromises bring negative side effects like chromaticity shifts and compressed highlights bunched up at the right end of the histogram.
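To see the highlight bunching concretely, here is one classic global Tone Reproduction Operator - Reinhard's simple L/(1+L) curve, used purely as an illustration (it is not what ACR applies) - run over scene luminances spaced one stop apart around middle grey:

```python
# Reinhard's simple global operator, L_out = L / (1 + L), as one
# illustrative Tone Reproduction Operator (not Adobe's actual curve).

def reinhard(L: float) -> float:
    return L / (1.0 + L)

# Scene luminances one stop apart, middle grey anchored at 0.18.
for stop in range(-6, 7, 2):
    L = 0.18 * 2.0 ** stop
    print(f"{stop:+d} stops: in {L:9.4f} -> out {reinhard(L):.3f}")
```

Each input step doubles the luminance, yet the output increments shrink steadily toward the top: the brightest stops land within a few hundredths of each other near 1.0, which is exactly the "highlights bunched up at the right end of the histogram" effect.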
Now, for the first time, we are able to take the 12 stops or so of a single still capture and display them without squeezing, mitigating the issues above. This is neat - as long as everyone who will view our images has a display with such capability.
The rest of HDR terminology and standards is all about delivering those 12 stops or so of linear image data from the raw converter to the display in an efficient manner (in terms of space and time). 'Efficient' in this context means 'lossy'. The JPEG standard was designed for an 8-stop CR and cannot deliver 12 stops adequately; enter HDR10 and similar standards, which do better than that.
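A taste of how HDR10 manages the delivery: it encodes absolute luminance with the SMPTE ST 2084 "PQ" curve rather than a legacy gamma. A sketch of the inverse EOTF, with the constants taken from the standard:

```python
# SMPTE ST 2084 (PQ) inverse EOTF, as used by HDR10: maps absolute
# luminance (0..10000 nits) to a 10-bit-friendly signal in [0, 1].
# Constants are as specified in the standard.
m1 = 2610 / 16384
m2 = 2523 / 4096 * 128
c1 = 3424 / 4096
c2 = 2413 / 4096 * 32
c3 = 2392 / 4096 * 32

def pq_encode(nits: float) -> float:
    """Luminance in nits -> PQ signal value in [0, 1]."""
    y = (nits / 10000.0) ** m1
    return ((c1 + c2 * y) / (1.0 + c3 * y)) ** m2

for nits in (0.1, 1, 100, 1000, 10000):
    print(f"{nits:>7} nits -> {pq_encode(nits):.3f}")
```

Note how SDR reference white (100 nits) uses only about half the code range, leaving the upper half of the signal for highlights all the way to 10,000 nits - spending the bits where vision notices them instead of clipping like an 8-bit gamma encoding would.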
The end.
Jack
* Yes, HDR also has something to do with increased maximum brightness, but I would argue that for photographers that is less of a feature.