HDR in 2023 Photography

Jack Hogan

Recently there has been an increase in interest in HDR, driven in part by video and gaming. Now Adobe is getting into the game, giving the latest version of its Adobe Camera Raw converter some HDR capabilities. Cutting through the marketing/techno-babble, I would like to share my view on what HDR means in practice to photographers, and I look forward to your thoughts on the matter, setting me straight where I go astray.

In a nutshell, it mainly* comes down to the increased Contrast Ratio (CR) of the latest and greatest TVs and monitors. For the last decade, decent affordable photographic displays have in practice been more or less limited to less than 10 stops of CR (500-1000:1), with 8-9 being typical; we are now starting to see combinations of technologies able to achieve close to 12, and in strict viewing conditions even more than that.
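For the arithmetic-minded, contrast ratio and stops are related by a simple base-2 log; a quick sketch in Python using the figures above:

import math

def stops(contrast_ratio):
    """Express a display contrast ratio as stops (doublings of luminance)."""
    return math.log2(contrast_ratio)

for cr in (500, 1000, 4000):
    print(f"{cr}:1 -> {stops(cr):.1f} stops")
# 500:1  -> 9.0 stops
# 1000:1 -> 10.0 stops
# 4000:1 -> 12.0 stops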

This is relevant to photographers because 12 stops or so of dynamic range is about the best a single still raw capture has been able to achieve over the last decade - and, barring computational photography, that seems to be where things stand today. So far, squeezing the potential 12 stops of captured DR into less than 10 stops of CR has been a challenge, leading to strong compromise solutions like standard Tone Curves or - somewhat better but more limiting - Tone Reproduction Operators. These compromises result in negative side effects like chromaticity shifts and compressed highlights bunched up at the right end of the histogram.
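To illustrate the kind of compromise I mean, here is a minimal sketch of a generic global tone mapping operator - a simple Reinhard-style curve, chosen only for illustration and not Adobe's actual tone curve - applied to 12 stops of linear data. Note how the top few stops get squeezed into the last few percent of the output range:

# 13 sample tones spanning 12 stops, centred on middle gray (1.0 = middle gray)
for s in range(-6, 7):
    linear = 2.0 ** s
    mapped = linear / (1.0 + linear)   # generic Reinhard-style global operator
    print(f"{s:+d} stops from middle gray -> {mapped:.3f} of output range")
# note: +4, +5 and +6 stops all land above ~0.94 of the output range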

Now for the first time we are able to take the 12 stops or so of a single still capture and display it without squeezing, mitigating the issues above. This is neat as long as everyone who will view our images has a display with such capability.

The rest of HDR terminology and standards are all about delivering the 12 bits of linear image data from the raw converter to the display in an efficient manner (in terms of space and time). 'Efficient' in this context means 'lossy'. The JPEG standard was designed for an 8-stop CR and can't deliver 12 stops adequately. Enter HDR10 et similia, which do better than that.
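As a concrete example of the kind of encoding involved, here is a sketch of the SMPTE ST 2084 (PQ) curve that HDR10 uses: it spends code values roughly in proportion to how visible luminance differences are, which is what lets 10-12 bits carry such a wide range.

import math

# SMPTE ST 2084 (PQ) constants
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(nits):
    """Map absolute luminance in cd/m^2 (up to 10,000) to a 0-1 PQ signal."""
    y = min(max(nits / 10000.0, 0.0), 1.0)
    return ((C1 + C2 * y**M1) / (1 + C3 * y**M1)) ** M2

for nits in (0.05, 1, 100, 1000, 10000):
    print(f"{nits:>7} cd/m^2 -> PQ signal {pq_encode(nits):.3f}")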

The end.

Jack

* Yes, it also has something to do with maximum brightness, but I argue that for photographers that is less of a feature.
 
I have no disagreement with what you said, but I think there is a serious problem with ambiguity around the terminology "HDR".

This tutorial from Adobe talks about HDR as being the process of combining several shots taken with different exposures into a single JPEG. In other words, the process of squeezing a very large dynamic range down to about 8 stops or so, which can then be viewed on any ordinary screen or print.

Surprisingly, the tutorial never mentions the fact that a large dynamic range is being mapped onto a much smaller dynamic range to fit the viewing devices.

Many photographers seem to be ignorant of the fact that processing raw images to JPEGs has to compress the dynamic range down to suit the viewing devices.

Photographers often talk about the dynamic range of the camera, but rarely mention the dynamic range of the display screen or print.
 
Yeah, good point about the ambiguity of the label, Tom; perhaps we should distinguish between input and output ranges. Since Standard Dynamic Range has been less than what digital still cameras have been able to produce over the past decade or so, every single image so captured is in fact High Dynamic Range. The fact that one can stack several images at different exposures to increase the effective input DR presented to the display makes no difference to the overall argument in this context: it simply changes the numbers.

I guess we also need a proper definition of SDR now. I haven't found a definitive one, but I think we can probably take sRGB's (or Rec. 709's) as a proxy. They indicate a contrast ratio of 400:1 (80:0.2 cd/m^2), which would have been a good average for both CRTs and decent photographic LCDs up to about a decade ago. JPEG would also have been a compatible vessel.

For instance, my Adobe RGB Dell 2410 circa 2010 had a contrast ratio of about 500:1 once calibrated as I like it (about 9 stops). My current wide-gamut BenQ SW270C achieves a little under 10 similarly calibrated. There are now some OLED screens with quantum dots and other technologies which, despite claims of infinity and beyond, achieve a couple of stops beyond that in practice once veiling glare is taken into consideration. In fact, my 50-inch Pioneer Kuro HD plasma TV has been doing that at 12 bits per channel for the last 15 years.

Jack
 
Why care about HDR displays at all?
Low intensity means the user would have to switch off all sources of light in their room to see the shadows in your image. High intensity means the user's eyes would hurt.
Which problem is that supposed to solve?
Maybe HDR displays have a use as part of a VR set, but I don't see a use for desktop HDR beyond GAS.
 
I have no disagreement with what you said, but I think there is a serious problem with ambiguity around the terminology "HDR".

This tutorial from Adobe talks about HDR as being the process of combining several shots taken with different exposures into a single JPEG. In other words, the process of squeezing a very large dynamic range down to about 8 stops or so, which can then be viewed on any ordinary screen or print.

Surprisingly, the tutorial never mentions the fact that a large dynamic range is being mapped onto a much smaller dynamic range to fit the viewing devices.

Many photographers seem to be ignorant of the fact that processing raw images to JPEGs has to compress the dynamic range down to suit the viewing devices.

Photographers often talk about the dynamic range of the camera, but rarely mention the dynamic range of the display screen or print.
Good point here.

At this point, I generally consider HDR to be anything that can't fit into an 8-bit sRGB container. I might even consider the threshold to be "can't fit and look good without local tonemapping" since global tonemapping is extremely common.

Individual raw captures from most cameras are HDR nowadays. Yes, you can go even further and push the dynamic range even higher, but an individual exposure is HDR at this point.
(Mobile devices without stacking are probably the exception.)

I often use the term LDR instead of SDR. What is "standard" anyway with regards to dynamic range after all?

Merging images with Debevec's or Robertson's algorithms turns LDR inputs into HDR.

Mertens exposure fusion is unique in that it takes multiple LDR inputs and outputs a tonemapped LDR output without an intermediary HDR representation. As a side note, Mertens fusion can be (ab)used as a tonemapper by feeding it multiple synthetic LDR images generated from a single HDR representation. (Google's HDR+ does this for example.)
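For anyone who wants to experiment with those, OpenCV ships reference implementations of both families; a rough sketch, where the bracketed file names and exposure times are just hypothetical placeholders:

import cv2
import numpy as np

# Hypothetical bracketed exposures (about 2 EV apart) and their shutter times
files = ["bracket_minus2ev.jpg", "bracket_0ev.jpg", "bracket_plus2ev.jpg"]
times = np.array([1/250, 1/60, 1/15], dtype=np.float32)
images = [cv2.imread(f) for f in files]

# Debevec: recover a linear float32 HDR radiance map (needs the exposure times),
# which then still has to be tonemapped or shipped in an HDR container
hdr = cv2.createMergeDebevec().process(images, times)
ldr_from_hdr = cv2.createTonemapDrago(gamma=2.2).process(hdr)

# Mertens: fuse straight to a display-ready LDR result, no radiance map involved
fused = cv2.createMergeMertens().process(images)   # float image, roughly 0..1
cv2.imwrite("fused_ldr.jpg", np.clip(fused * 255, 0, 255).astype("uint8"))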

HLG or PQ-encoded 10-bit images are HDR since they encode way more usable dynamic range than you can encode in an 8-bit gamma-encoded image.

I would consider even HLG-encoded 8-bit to be HDR - while it technically violates the HLG standard, HLG + Rec.2020 fed to an HDR-capable display without any tonemapping looks quite nice.

As a side note, a major benefit that comes along with most HDR-capable displays is wider gamut capability. Both require departure from 8-bit sRGB as a standard. (IMO, HLG + Rec. 2020 is the limit of how far you can push an 8 bit container before you've gone too far.)
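For reference, the HLG curve itself (ARIB STD-B67 / BT.2100) is simple enough to write down - roughly a square root over the lower part of the range and logarithmic above it, which is what lets it degrade reasonably gracefully on SDR displays. A sketch:

import math

# BT.2100 HLG OETF constants
A = 0.17883277
B = 1 - 4 * A                    # 0.28466892
C = 0.5 - A * math.log(4 * A)    # 0.55991073

def hlg_oetf(e):
    """Map normalized linear scene light (0..1) to a 0..1 HLG signal."""
    if e <= 1 / 12:
        return math.sqrt(3 * e)              # square-root segment
    return A * math.log(12 * e - B) + C      # logarithmic segment for highlights

for e in (0.0, 1/12, 0.25, 0.5, 1.0):
    print(f"linear {e:.3f} -> HLG signal {hlg_oetf(e):.3f}")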

The fact that basically all cameras on the market do global tonemapping internally definitely muddies the waters a bit. As a side note, our brains are so used to this tonemapping that anything which is not tonemapped with an S-curve looks strange to us if it is within an emissive rectangle - to the point where I think a lot of HDR content out there in the commercial world (Netflix, Prime Video, etc.) has still had some global tonemapping applied to make it look more "cinematic" (i.e. "looks like how film behaves").

It's too bad that for stills, we probably won't see the installed base of displays with appropriate content and delivery pipeline for years. It's vastly different than video, where there have been well established delivery pipelines that leverage the improved capabilities of displays for years now. It's still my experience that the only way I can reliably deliver an HDR still image to an HDR display is to encode it to 10-bit H.265 with appropriate metadata (HDR10 metadata for PQ, alternate transfer curve SEI flag for HLG).
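To make that last point concrete, here is roughly what that delivery path looks like from Python - a sketch only: the input file name and the mastering-display/MaxCLL numbers are placeholders, and it assumes the still's pixel values are already PQ-encoded, since these flags only tag the stream rather than convert it.

import subprocess

# Wrap a single HDR still into a short 10-bit HEVC clip tagged as PQ / BT.2020
cmd = [
    "ffmpeg", "-loop", "1", "-i", "hdr_still.png", "-t", "3",
    "-c:v", "libx265", "-pix_fmt", "yuv420p10le",
    "-color_primaries", "bt2020", "-color_trc", "smpte2084", "-colorspace", "bt2020nc",
    # HDR10 static metadata via x265 (placeholder mastering display / MaxCLL values)
    "-x265-params",
    "master-display=G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,1):"
    "max-cll=1000,400",
    "hdr_still_hdr10.mp4",
]
subprocess.run(cmd, check=True)

# For HLG delivery the transfer tag becomes "-color_trc arib-std-b67" instead,
# and the HDR10 static metadata is dropped.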

--
Context is key. If I have quoted someone else's post when replying, please do not reply to something I say without reading text that I have quoted, and understanding the reason the quote function exists.
 
Why care about HDR displays at all?
Low intensity means the user would have to switch off all sources of light in their room to see the shadows in your image. High intensity means the user's eyes would hurt.
Which problem is that supposed to solve?
My contention, Enginel, is that the primary usefulness of HDR displays for photographers is their ability to display a higher contrast ratio than is possible with SDR, resulting in more 'accurate' and pleasing displayed images. Because of the way that raw captures tend to be rendered today, this mainly means that highlights will be subjected to less compression during raw conversion and will therefore appear less dull and more life-like. With single still captures the effect is mainly evident in scenes with large dynamic range or scenes with mixed and specular reflections - what Ansel Adams would call Zones VIII and IX.

So for instance in landscapes most of the difference can be noticed in appropriately brighter, prettier clouds and glaciers. I can confirm that. I can also confirm that, unless I make it evident by cycling back and forth - in which case they like it - none of my 'clients' would have noticed the missing HDR range.

Unless one is limited in how dark the showroom can be, a higher peak brightness is not strictly necessary. But if one assumes that a typical viewing environment cannot be darker than a certain ambient luminance - which also implies a minimum amount of veiling glare - then a base is set upon which to build the CR; and as the number of CR stops increases, so necessarily will peak brightness, which however would be limited to a small percentage of the image, at least with my typical landscapes.
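As a back-of-the-envelope example (the numbers are just assumptions for illustration): if ambient light and veiling glare set an effective black level of, say, 0.05 cd/m^2 at the faceplate, the peak needed for a given number of CR stops follows directly, and it stays surprisingly modest:

black_nits = 0.05   # assumed effective black level set by ambient light / veiling glare

for cr_stops in (9, 10, 12):
    peak_nits = black_nits * 2 ** cr_stops
    print(f"{cr_stops} stops over a {black_nits} cd/m^2 floor -> ~{peak_nits:.0f} cd/m^2 peak")
# 9 stops  -> ~26 cd/m^2
# 10 stops -> ~51 cd/m^2
# 12 stops -> ~205 cd/m^2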

With my HDR10 monitor that's what I see. I wouldn't want to have it in HDR mode when working on other business - but it does make photographs look even prettier when it is on. Turning it off and on is accomplished by pressing a single button on my monitor and activating HDR mode in Windows 11, as discussed in the DPR article.
Maybe HDR displays have a use as part of a VR set, but I don't see a use for desktop HDR beyond GAS.
You may very well be right for now, and things do indeed move slowly (I personally seem to change monitors and TVs infrequently). However, when the time comes to replace my aging TV and the technology has advanced some more, I will take a serious look at the new model's CR/HDR-related specs.

Jack
 
Entropy512 wrote: ...
I often use the term LDR instead of SDR. What is "standard" anyway with regards to dynamic range after all?
Good point.
Merging images with Debevec's or Robertson's algorithms turns LDR inputs to HDR

Mertens exposure fusion is unique in that it takes multiple LDR inputs and outputs a tonemapped LDR output without an intermediary HDR representation. As a side note, Mertens fusion can be (ab)used as a tonemapper by feeding it multiple synthetic LDR images generated from a single HDR representation. (Google's HDR+ does this for example.)

HLG or PQ-encoded 10-bit images are HDR since they encode way more usable dynamic range than you can encode in an 8-bit gamma-encoded image.

I would consider even HLG-encoded 8-bit to be HDR - while it technically violates the HLG standard, HLG + Rec.2020 fed to an HDR-capable display without any tonemapping looks quite nice.

As a side note, a major benefit that comes along with most HDR-capable displays is wider gamut capability. Both require departure from 8-bit sRGB as a standard. (IMO, HLG + Rec. 2020 is the limit of how far you can push an 8 bit container before you've gone too far.)
Interesting.
The fact that basically all cameras on the market do global tonemapping internally definitely muddies the waters a bit. As a side note, our brains are so used to this tonemapping that anything which is not tonemapped with an S-curve looks strange to us if it is within an emissive rectangle - to the point where I think a lot of HDR content out there in the commercial world (Netflix, Prime Video, etc.) has still had some global tonemapping applied to make it look more "cinematic" (i.e. "looks like how film behaves").
Yes indeed.
It's too bad that for stills, we probably won't see the installed base of displays with appropriate content and delivery pipeline for years. It's vastly different than video, where there have been well established delivery pipelines that leverage the improved capabilities of displays for years now. It's still my experience that the only way I can reliably deliver an HDR still image to an HDR display is to encode it to 10-bit H.265 with appropriate metadata (HDR10 metadata for PQ, alternate transfer curve SEI flag for HLG).
:-) Time will tell.
 
Hi Jack

I’m a recent convert to HDR video and stills. It started when I bought an iPhone 13 a few months ago. This phone can produce pretty decent HDR video, and it looks good on the iPhone screen, which has a max sustained brightness of about 1000 nits (and a peak of about 1200 nits). The format is 10 bit Dolby Vision with an HLG transfer function. I then started to investigate how I could edit and view this on a bigger screen. This ultimately led to me buying a 16-inch MacBook Pro. This has a max sustained screen brightness of 1000 nits and a peak of 1600 nits. I was very pleased with this for video and delighted when Adobe introduced their HDR function in ACR. They sometimes call it HDRO, which refers to extended DR being applied to the output (display). My understanding is that it uses 32-bit floating point for raw processing.

I think that this HDR for stills gives brilliant results. The split histogram gives us an idea of what’s happening. It seems to me that it basically stretches out the highlights which compensates for the highlight compression caused by the tone curve. The improvement depends on the type of image but clouds and sky and waterfalls can look much better as do specular highlights and direct light sources.

The big problem is that most people won’t have a system to display these images properly. I’ve tried the windows version of ACR on my Sony tv but the results are nowhere near as good as with the Mac. The tv only has a max brightness of about 500 nits.

An exciting journey lies ahead, albeit slow!

Dave
 
Hi Jack

The format is 10 bit Dolby Vision with an HLG transfer function.
As an FYI, there's no way that is the case. Dolby Vision uses PQ for its transfer function (the only difference from HDR10, which also uses PQ, being the included metadata).

HLG is the only thing that uses the HLG transfer function. Everything else I'm aware of is PQ.

(There's apparently a provision in CTA 861-G to flag content as having the sRGB transfer function but with HDR peak luminance, but I've never seen this used in practice. https://webstore.ansi.org/standards/ansi/cta8612016ansi )
 
Hi Jack

The format is 10 bit Dolby Vision with an HLG transfer function.
As an FYI, there's no way that is the case. Dolby Vision uses PQ for its transfer function (the only difference from HDR10, which also uses PQ, being the included metadata).
That's what I thought too, but this screenshot from MediaInfo says otherwise. Also, I use HLG as an input and output in DaVinci when editing and it works fine.



[MediaInfo screenshot]

HLG is the only thing that uses the HLG transfer function. Everything else I'm aware of is PQ.

(There's apparently a provision in CTA 861-G to flag content as having the sRGB transfer function but with HDR peak luminance, but I've never seen this used in practice. https://webstore.ansi.org/standards/ansi/cta8612016ansi )
 
Hi Jack

I’m a recent convert to HDR video and stills. It started when I bought an iPhone 13 a few months ago. This phone can produce pretty decent HDR video, and it looks good on the iPhone screen, which has a max sustained brightness of about 1000 nits (and a peak of about 1200 nits). The format is 10 bit Dolby Vision with an HLG transfer function. I then started to investigate how I could edit and view this on a bigger screen. This ultimately led to me buying a 16-inch MacBook Pro. This has a max sustained screen brightness of 1000 nits and a peak of 1600 nits. I was very pleased with this for video and delighted when Adobe introduced their HDR function in ACR. They sometimes call it HDRO, which refers to extended DR being applied to the output (display). My understanding is that it uses 32-bit floating point for raw processing.
Good one, Dave: HDRO vs HDRI, I guess, adding to the burgeoning terminology.
I think that this HDR for stills gives brilliant results. The split histogram gives us an idea of what’s happening. It seems to me that it basically stretches out the highlights which compensates for the highlight compression caused by the tone curve. The improvement depends on the type of image but clouds and sky and waterfalls can look much better as do specular highlights and direct light sources.
Right. In fact the 'Auto' button produces identical positions for the various sliders of the Basic Edit section (Exposure, Contrast, Highlights, etc.) in both SDR and HDR mode, suggesting that, absent a tone curve or equivalent tone mapping to SDR, the HDR image is developed for middle gray as it always would be, with tones allowed to fall where they may linearly (?).
The big problem is that most people won’t have a system to display these images properly.
Right, hence attempts at standards
I’ve tried the Windows version of ACR on my Sony TV but the results are nowhere near as good as with the Mac. The TV only has a max brightness of about 500 nits.
Just out of curiosity, did you turn on HDR in Windows, and what HDR mode does your TV support? I noticed that even if Windows and the monitor are not in HDR mode, ACR still allows HDR mode to be toggled on and off, and it does show a difference in the highlights. I presume that what we get in that case is a linear output, with highlights greater than L*100 clipped instead of rolled off (?).

This is in fact what I do with some of my single stills when they show large dynamic range excursions. See for instance here
https://www.strollswithmydog.com/apply-forward-color-matrix/
An exciting journey lies ahead, albeit slow!
Indeed!

Jack
 
Hi Jack

I’m a recent convert to HDR video and stills. It started when I bought an iPhone 13 a few months ago. This phone can produce pretty decent HDR video, and it looks good on the iPhone screen, which has a max sustained brightness of about 1000 nits (and a peak of about 1200 nits). The format is 10 bit Dolby Vision with an HLG transfer function. I then started to investigate how I could edit and view this on a bigger screen. This ultimately led to me buying a 16-inch MacBook Pro. This has a max sustained screen brightness of 1000 nits and a peak of 1600 nits. I was very pleased with this for video and delighted when Adobe introduced their HDR function in ACR. They sometimes call it HDRO, which refers to extended DR being applied to the output (display). My understanding is that it uses 32-bit floating point for raw processing.
Good one, Dave: HDRO vs HDRI, I guess, adding to the burgeoning terminology.
I think that this HDR for stills gives brilliant results. The split histogram gives us an idea of what’s happening. It seems to me that it basically stretches out the highlights which compensates for the highlight compression caused by the tone curve. The improvement depends on the type of image but clouds and sky and waterfalls can look much better as do specular highlights and direct light sources.
Right. In fact the 'Auto' button produces identical positions for the various sliders of the Basic Edit section (Exposure, Contrast, Highlights, etc.) in both SDR and HDR mode, suggesting that, absent a tone curve or equivalent tone mapping to SDR, the HDR image is developed for middle gray as it always would be, with tones allowed to fall where they may linearly (?).
Jack, I’ve been fiddling around with images of a ColorChecker at a few different exposures and looking at the values of the gray patches to try to establish what is happening when switching from SDR to HDR. The values in the SDR range of the histogram are in 8-bit format, whereas in the HDR range they are shown in stops above graphic white (according to Adobe), which I believe means 90% reflectance. But I’ve got more work to do on this. As far as I can see, the tone curve applied to the SDR version stays in place for the HDR version. The shadows and mid-tones don’t seem to change in appearance when you switch.
The big problem is that most people won’t have a system to display these images properly.
Right, hence attempts at standards
I’ve tried the Windows version of ACR on my Sony TV but the results are nowhere near as good as with the Mac. The TV only has a max brightness of about 500 nits.
Just out of curiosity, did you turn on HDR in Windows, and what HDR mode does your TV support? I noticed that even if Windows and the monitor are not in HDR mode, ACR still allows HDR mode to be toggled on and off, and it does show a difference in the highlights. I presume that what we get in that case is a linear output, with highlights greater than L*100 clipped instead of rolled off (?).

This is in fact what I do with some of my single stills when they show large dynamic range excursions. See for instance here
https://www.strollswithmydog.com/apply-forward-color-matrix/
Yes, I had HDR turned on in Windows. The TV gives a choice of HDR10 or HLG for HDR mode, and HDR10 seems to be the one that works best. I have a graphics card which supports 10-bit HDR, but I don’t have full confidence that my Windows system is operating optimally.
An exciting journey lies ahead, albeit slow!
Indeed!

Jack
Dave
 
I wanted to copy/paste some comments from Eric Chan of Adobe regarding HDRO. These were found on the Chrome/Chromium developer forum, in a thread regarding an HDRO rendering issue.

[Screenshots of Eric Chan's comments]

In HDR mode, Camera Raw does all of its internal composite and rendering math in relative floating-point where 1.0 means SDR white (i.e., the same meaning as with SDR images). When reading and exporting HDR images, ACR needs to know how to map 1.0 to/from the relevant encoding space.

Currently macOS/iOS/Windows share the same basic mechanism for displaying overrange data: an fp16 (half-float) buffer in linear extended sRGB space (Microsoft calls this scRGB). In the Apple case, 1.0 means SDR white. In the Windows case, 1.0 means 80 nits. Either way, I think of the nit level as a holdover from the HDR video standards, and not something particularly meaningful for the case of stills, where all we really need to know is where to place a tone relative to SDR white.
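A tiny sketch of what those two conventions imply when handing the floating-point output to the OS compositor (values above 1.0 being the overrange highlights):

import math

def scrgb_to_nits_windows(value):
    """Windows scRGB convention: 1.0 in the linear fp16 buffer = 80 nits."""
    return value * 80.0

def stops_above_sdr_white(value):
    """Where a tone sits relative to SDR white - the part that matters for stills."""
    return math.log2(value)

for v in (0.5, 1.0, 4.0, 12.5):
    print(f"buffer value {v:>5}: {scrgb_to_nits_windows(v):6.0f} nits on Windows, "
          f"{stops_above_sdr_white(v):+.1f} stops vs SDR white")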
 
When I did a quick check with the Pentax Spotmeter, I found the brightness range from a dark corner of a room to bright sky visible through the window was about 16 stops.

Probably an SEI Photometer would give a more accurate reading, but I don't have one. (That instrument is not usable by spectacle wearers. I could use one now that I've had a cataract operation.)

I don't think any current commercially available camera has a big enough dynamic range to record a 16-stop brightness range. It is of course easy to bracket and merge the results to get something plausible. (Affinity Photo is very handy for this.)

This is most relevant, I think, to real estate photography.

Don Cox
 
I wanted to copy/paste some comments from Eric Chan of Adobe regarding HDRO. These were found on the Chrome/Chromium developer forum, in a thread regarding an HDRO rendering issue.

[Screenshots of Eric Chan's comments]

In HDR mode, Camera Raw does all of its internal composite and rendering math in relative floating-point where 1.0 means SDR white (i.e., the same meaning as with SDR images). When reading and exporting HDR images, ACR needs to know how to map 1.0 to/from the relevant encoding space.

Currently macOS/iOS/Windows share the same basic mechanism for displaying overrange data: an fp16 (half-float) buffer in linear extended sRGB space (Microsoft calls this scRGB). In the Apple case, 1.0 means SDR white. In the Windows case, 1.0 means 80 nits. Either way, I think of the nit level as a holdover from the HDR video standards, and not something particularly meaningful for the case of stills, where all we really need to know is where to place a tone relative to SDR white.
This looks like very interesting reading. Eric Chan is a great source of information on Adobe stuff. Thanks for posting, will go through the detail when I get a chance.

Dave
 
Hi Jack

The format is 10 bit Dolby Vision with an HLG transfer function.
As an FYI, there's no way that is the case. Dolby Vision uses PQ for its transfer function (the only difference from HDR10, which also uses PQ, being the included metadata).
See this at 10:33:
 
