Editing for SDR and HDR output

Thank you for the answers.

Adobe software allows for tuning the gain maps which are included when exporting JPEGs. Early experiments show promising results: HDR images look good on SDR devices. I still feel that one gets better SDR results if one develops for SDR alone.
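(For anyone curious how the gain-map trick works mechanically: the SDR base image is stored as usual, and a second low-resolution map says how much each pixel may be brightened on an HDR display. Below is a rough sketch of the idea in Python; the function name, the plain log2-style encoding and the toy numbers are mine for illustration, not Adobe's or the ISO gain-map spec's exact math.)

```python
import numpy as np

def apply_gain_map(sdr_linear, gain_map, headroom_stops):
    """Reconstruct an HDR rendition from an SDR base image plus a gain map.

    sdr_linear:     SDR image in linear light, values 0..1
    gain_map:       per-pixel values 0..1 (0 = no boost, 1 = full boost)
    headroom_stops: how many stops brighter than SDR white the display allows
    """
    # Each gain-map value scales its pixel by up to 2**headroom_stops.
    boost = np.exp2(gain_map * headroom_stops)
    return sdr_linear * boost

# Toy example: a 1x2 "image" with a dim pixel and a bright highlight.
sdr = np.array([[0.10, 0.90]])
gm  = np.array([[0.00, 1.00]])   # only the highlight gets boosted
hdr = apply_gain_map(sdr, gm, headroom_stops=2)  # 2-stop HDR headroom
print(hdr)  # [[0.1 3.6]] -- the highlight ends up 2 stops above the SDR range
```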

I am using mainly sRGB at export because I intend to share HDRs via browsers. I also allow for only two extra stops in the HDR edit, hoping that it will cause fewer issues with various monitors.

I tried using Instagram for HDR images but it does not work. I guess my account was not included yet in the early rollout stage.
And there you have it: by using sRGB combined with HDR gamma you keep the brightness and the colour correct, but on the other hand it provides limited benefit for viewing.

Lots of compromises, sometimes for a best-case one-stop difference.
You guys lost me a little bit there, but browsers that currently support HDR are Opera, Chrome, and a few others, and Safari with HDR support is in beta and will be out next month. So within five weeks almost 95% of browsers will support HDR. As far as sharing directly via browsers, HDR is looking pretty good. Note, if you have to go through Squarespace or DPReview etc., that support is still to come, but direct coding of the image link will be fine--try the AVIF format (with HDR turned on when you save) rather than JPEG (and don't bother with JPEG XL--it appears to be a dead end).
Browser support hasn't been an issue, provided the websites support the files themselves.

The challenge is that while browsing is standardised on sRGB and standard gamma, HDR is device-dependent and not harmonised.

It is down to the operator to make it more compatible.
I have no trouble opening and viewing AVIF files in P3 on my Mac with either Opera or Chrome. Why would you want to use sRGB?
 
Windows PCs are based on sRGB.

well, that’s just sad. :(
—Darin
 
I have had an HDR screen since the very beginning.
That does not mean that you can view HDR mode images properly.
There is an overwhelming emphasis on brightness; however, what matters is contrast.
What also matters is detail and color, which you lose in SDR mode (highlights).
I much prefer my LG OLED TV with true black to the bright image of my MacBook Pro or my iPhones, or, even worse, some desktop monitors that are just basic LED panels.

Ultimately, 10 stops of dynamic range are plenty, and many images do not even reach that.
I have very rarely seen an image that has not improved by switching to HDR mode.
Sunset, sunrise, and backlit shots are examples where HDR can help, but your examples do not particularly benefit from it.

Instagram started to support HDR because phones create HDR images.

Phones will drive the adoption; we have had HDR TVs and monitors for years and nothing has happened to date.

HDR is interesting because it can avoid editing, which is often done to rebalance dynamic range into a gamma you can display or print; this is the same principle as HLG broadcasting.
If it were not for occasional too-bright spots, HDR mode would require less editing. In particular, the problematic sky masks can often be avoided.

My advice to everyone is, before making an opinion on HDR mode, to try it out first. The follow up discussions will be more productive after the experimentation.
I have done HDR video for six years and photos for two.
You have much more experience with HDR output than me. Would you mind sharing what tools you used for two years to create HDR photos, what output format you used, and how you shared/viewed HDR photos?
Your example images are not ideal to showcase the benefits.
I am still struggling with HDR. What I see in Develop mode differs from what I see in the exported AVIF files. Do you have an example that is a better showcase? I consider images with overly bright highlights to be the negative aspect of HDR, the same as the garish look of early HDR merges.
With regard to colors in the highlights, well, there are not that many; the highlights in HDR are super-whites.
In SDR, the highlights in the overcast sky are muted grey, while in HDR, the original blue comes through. That is visible in my posted images. I consider as highlights any brightness that falls in the HDR part of the tone curve.
Don't confuse the potential benefits of 10-bit color depth with the dynamic range benefits of HDR.
I am looking at the practical results. The problem is that the highlights seem to lose information in SDR (colors and details), which is apparent in HDR.
You can have an SDR 10-bit image that shows more colors than an 8-bit JPEG.
The biggest issue I have had is portability, which is mostly why I don't use HDR for images.

The easiest way to share images is to put them on a drive and view them in Chrome, which reads them.

Instagram now supports HDR as well, but this is mostly optimised for acquisition on phones.

For most practical purposes the best combination is to use HDR gamma with the P3 gamut, because you can find a larger number of devices that support P3, especially Apple's.

As a PC user, though, you run into issues, as your computer is likely using sRGB with HDR gamma for static pages, which means the colors go off-gamut and you get unexpected results when someone shares P3 HDR images.

In terms of editing, the other problem is that you need to decide what display you are producing for, as not all of them support the full dynamic range of the image you may output.

Finally, and more importantly, if you produce an HDR image, the rendering of the related SDR photo for those who don't see HDR is done by the viewing software, again with unexpected results.

All the above considered, for photos I prefer to just output SDR images…
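(To make the off-gamut point above concrete: a fully saturated Display P3 colour has no legal sRGB representation, so an sRGB-assuming pipeline has to clip or distort it. A minimal sketch; the matrices are the commonly published D65 ones, rounded, so treat the numbers as approximate, and the helper name is my own.)

```python
import numpy as np

# Linear RGB -> XYZ (D65) matrices for sRGB and Display P3 (commonly published, rounded values).
SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])
P3_TO_XYZ   = np.array([[0.4866, 0.2657, 0.1982],
                        [0.2290, 0.6917, 0.0793],
                        [0.0000, 0.0451, 1.0439]])

def p3_to_srgb_linear(rgb_p3):
    """Convert a linear Display P3 colour to linear sRGB, without clipping."""
    return np.linalg.inv(SRGB_TO_XYZ) @ (P3_TO_XYZ @ rgb_p3)

pure_p3_red = np.array([1.0, 0.0, 0.0])
print(p3_to_srgb_linear(pure_p3_red))
# roughly [ 1.22 -0.04 -0.02 ]: components outside 0..1, i.e. not representable in sRGB.
```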
Lots of compromises, sometimes for a best-case one-stop difference.
In my edits, I see subtle but essential improvements in IQ.
 
In his books Ansel Adams talked about the dynamic range of the subject, the negative and the print. He gave different names to all three. If I remember correctly, they were subject brightness range, negative exposure range and print reflectance range. He said that the subject brightness range could vary from very low, like 3 stops, to very high, like 1:20,000 (whatever that is in stops). If the SBR was low, any film could capture the full range. If it was high, the negative normally could not. However, the zone system included tricks like overexposing slightly and reducing the development time, which would increase the exposure range of the negative so it could cope with a large SBR. The modern trick for expanding sensor exposure range is to take multiple shots at different exposures and blend them. This kind of HDR, if done properly, can significantly help you capture the full brightness range of a high-SBR subject. It remains a perfectly valid technique today, unlike those execrable Photomatix-style tone mapping effects that badly misuse the wide exposure range captured. The problem, when you have successfully captured a wide SBR onto a wide exposure range negative (or raw file), is how you display the results. In Ansel's case, he was displaying as B&W prints. The trick was to preserve as much of the midtone separation as possible within the limited reflectance range of paper, then take advantage of the non-linear toe and shoulder regions of the H&D curve of the paper to compress the darkest and lightest tones by reducing the separation between them. But if you do it right, the result doesn't look compressed at all, even though the paper reflectance range is much lower than the negative/raw range.
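(Side note: converting a brightness ratio like 1:20,000 into stops is just a base-2 logarithm; a quick sketch of the arithmetic, with the figures rounded, and the helper name my own.)

```python
from math import log2

def ratio_to_stops(ratio):
    """Express a brightness ratio (e.g. 20000 for 1:20,000) in photographic stops."""
    return log2(ratio)

print(round(ratio_to_stops(20000), 1))  # 14.3 -- a 1:20,000 scene spans about 14.3 stops
print(round(ratio_to_stops(8), 1))      # 3.0  -- a low-contrast 1:8 scene is 3 stops
```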

My feeling is that HDR at the capture stage is really important for very contrasty scenes, but at least with wet printing, it isn't important at all.

What I'm not sure about is whether inkjet printing has a similar non-linear toe and shoulder like wet chemistry. I think it must have, because I haven't noticed inkjet prints looking obviously worse than wet prints. I used to know a BAFTA-nominated BBC film editor and he told me he hated working with video because it had no highlight shoulder and burnt out highlights very easily. I wonder if this is an issue with the way monitors display digital images and the reason why the new style HDR monitors can be useful for still images.
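(On the "no highlight shoulder" point: here is a toy comparison of a hard digital clip versus a film/paper-style shoulder. The exponential roll-off below is an arbitrary curve I chose for illustration, not any particular film or paper's measured characteristic.)

```python
import numpy as np

def hard_clip(x):
    """Digital-style highlight handling: everything above 1.0 is simply lost."""
    return np.minimum(x, 1.0)

def soft_shoulder(x, start=0.8):
    """Film/paper-style shoulder: above `start`, compress highlights smoothly
    toward 1.0 instead of clipping, preserving some separation between tones."""
    x = np.asarray(x, dtype=float)
    over = x > start
    out = x.copy()
    out[over] = start + (1.0 - start) * (1.0 - np.exp(-(x[over] - start) / (1.0 - start)))
    return out

highlights = np.array([0.9, 1.0, 1.2, 1.5, 2.0])
print(hard_clip(highlights))      # [0.9 1.  1.  1.  1. ]  -- tones above 1.0 merge together
print(soft_shoulder(highlights))  # distinct values below 1.0: separation is retained
```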
 
In his books Ansel Adams talked about the dynamic range of the subject, the negative and the print. He gave different names to all three. If I remember correctly, they were subject brightness range, negative exposure range and print reflectance range. He said that the subject brightness range could vary from very low, like 3 stops, to very high, like 1:20,000 (whatever that is in stops).

[attached image]
 
What I'm not sure about is whether inkjet printing has a similar non-linear toe and shoulder like wet chemistry.
If that exists, it is calibrated out in the profile-creation process.
 
Ta. Although I hate it when I post something then make extensive revisions to it only to find someone has posted a reply and blocked my edits from being reposted! You'd think after all these years DPR could have built a preview post/edit function in....grumble, grumble.
 
I usually use my fingers to work out stops. 2x, 4x, 8x, 16x..... I can't tell you how grateful I am for my ND filter calculator app.
 
The thing I love about negative film is that the highlights do not clip as they do in digital; film highlights look much better to me, and digital highlights bother me. HDR gives us significant improvements in highlights (which is not only about brightness).
--
2024: Awarded Royal Photographic Society LRPS Distinction
Photo of the day: https://whisperingcat.co.uk/wp/photo-of-the-day/
Website: http://www.whisperingcat.co.uk/
DPReview gallery: https://www.dpreview.com/galleries/0286305481
Flickr: http://www.flickr.com/photos/davidmillier/ (very old!)
 
The thing I love about negative film is that the highlights do not clip as they do in digital; film highlights look much better to me, and digital highlights bother me. HDR gives us significant improvements in highlights (which is not only about brightness).
You are comparing a scene-referred representation to an output-referred one.
 
I was thinking of negative highlights properly scanned. So that would be apples to apples, right? Even when exposing a digital image properly, the highlights do not look as good to me as the highlights that I get from scanned negatives. It could be the increasing noise in the negative's highlights, similar to the increasing noise in the positive's shadows. Do you have a different experience?
 
Not my experience, if there's no raw clipping.
 
OK, my negative scanning experience is much lower than yours.
--
https://blog.kasson.com
 
Chemical printing is different, since then there is the toe of the paper H&D curve interacting with the shoulder of the negative H&D curve.
You mentioned noise. There will be more noise in the highlights of scanned film negs than in digital direct captures, especially if you try to straighten out the shoulder in post.
 
I was probably remembering "clipped" images; with negatives, highlights do not clip but get noisier.
 
And lower contrast.
 
Another thing.... People get confused when Googling HDR. In photography, HDR (as opposed to watching TV or video) is 99% about blending images of various EV, which we have been messing around with for 20 years.

What we are talking about here is way different. Google HDR and you will get 500 videos instructing you how to do a simple HDR blend in LR or PS.

I wish it had different names.
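(For readers who only know the display-side meaning: the "old" photography HDR mentioned above just merges bracketed exposures into one scene-referred file. A bare-bones sketch of the idea; real tools add alignment, de-ghosting and smarter weighting, and the function name and weighting here are my own simplification.)

```python
import numpy as np

def merge_exposures(linear_frames, ev_offsets):
    """Merge bracketed shots into one high-dynamic-range, scene-referred image.

    linear_frames: list of linear-light images (0..1), e.g. demosaiced raw data
    ev_offsets:    exposure offset of each frame in stops relative to the base frame
    """
    acc = np.zeros_like(linear_frames[0], dtype=float)
    weights = np.zeros_like(acc)
    for frame, ev in zip(linear_frames, ev_offsets):
        scaled = frame / (2.0 ** ev)          # bring every frame to a common exposure
        w = 1.0 - np.abs(frame - 0.5) * 2.0   # trust mid-tones, distrust near-clipped/near-black pixels
        w = np.clip(w, 0.01, None)
        acc += w * scaled
        weights += w
    return acc / weights

# Toy example: the same scene shot at 0 EV and -2 EV.
base   = np.array([[0.01, 0.40, 1.00]])    # 0 EV: highlight clipped at 1.0
minus2 = np.array([[0.0025, 0.10, 0.60]])  # -2 EV: highlight retained
hdr = merge_exposures([base, minus2], ev_offsets=[0, -2])
print(hdr)  # ~[[0.01 0.4  2.38]] -- the merged file recovers the highlight above 1.0
```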
We need different names, for sure.

But just as a footnote, there's no good reason you couldn't (and there are probably reasonable scenarios where you might want to) do HDR blending aimed at a "new HDR" image. Why not, I mean, aside from the crazy, incomprehensible nomenclature that will result.

HDR/HDR?

Double HDR?

HDR Squared?

HHDR?

--Darin
 
In my edits, I see subtle but essential improvements in IQ.
It is down to personal taste. If it works and you are able to share, great.
 
SrMi appears to be talking about something different, a hardware thing I'm not familiar with that has to do with something built into certain display monitors. From his description, it appears to rely on using a much brighter backlight. I'm not entirely sure what this achieves, other than perhaps making the shadows lighter so you can see detail in the shadows that would normally be lost in darkness.
In these days of improving monitor contrast ratios, with a varied installed base from 6 to 10+ stops, it's just a way to try to give most people a similar viewing experience from a single high-bit-depth, processed image file. In landscape photography you would likely notice it only in mixed specular highlights like clouds or snowy mountains; that's my experience.

In practice it's a fine mess at the moment, with standard after incompatible standard trying to fix the gross mistakes of the previous one. Unfortunately the push is coming like a freight train from the volume-vs-quality video/gaming/smartphone worlds, so tiny photography is going to be on the receiving end of it, willing or not. Also unfortunately, the company that could have represented photography has been a follower in all this (Adobe), and the one that didn't has demonstrated a lack of sensitivity to and understanding of it (Apple). Hopefully things will be better once the smoke clears in a few years.

In the meantime you, personally, can have the full HDR experience by ignoring the noise and doing what you have always done: to retain detail in the clouds, simply underexpose* to make sure that the histogram is just before clipping where detail is desired. Then convert the raw file neutrally (forget filmic), with no or limited positive Exposure Compensation, displaying it as-is at 10 bits on your recent, 10-stop static-CR BenQ monitor, the brighter the better. The rest, as they say, is for the standard birds.

Jack

*Underexpose because until recently full scale in the raw data was meant to approximate 2.5-3 stops above middle gray (L*50), which clips mixed specular highlights.
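(A quick back-of-the-envelope on that footnote, assuming middle grey sits somewhere around 12-18% of full scale in the raw data; the exact placement varies by manufacturer and metering.)

```python
from math import log2

# Stops of highlight headroom above middle grey if full scale (1.0) clips there.
for middle_grey in (0.18, 0.12):
    print(f"middle grey at {middle_grey:.0%}: {log2(1.0 / middle_grey):.1f} stops to clipping")
# 18% -> ~2.5 stops, 12% -> ~3.1 stops, which is where the "2.5-3 stops" rule of thumb comes from
```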
 
Unfortunately, the company that could have represented photography has been a follower in all this (Adobe), and the one that didn't has demonstrated a lack of sensitivity to and understanding of it (Apple).
Adobe seems to be the leader in gain-map standardization (link), an essential element for viewing HDR images on SDR devices. Tools for raw development in HDR are also at a very early stage.

With all my post-processing attempts, I could never get the richness of highlights in SDR mode. Being careful not to clip is even more critical for HDR.
 
