Question on HDR merging

Philip 101

Hi folks,

I'm a newbie to HDR for astro and HDR generally but planning to do some post-eclipse processing.

It seems like most of the techniques I've seen involve the key step of merging layers by taking the mean value (in a Photoshop smart object). I've seen this in some YouTube videos, and it's also mentioned in a recent thread in this forum.

The results look good, but at the same time, the procedure is confusing because it seems to be throwing good and bad data together. That is, the properly exposed pixels are simply averaged together (given equal weight) along with pixels that are either noisy through underexposure or just wrong because they are blown out by overexposure.

I would have thought that a more careful HDR merge would look at the whole set of exposures, figure out which is the right one to use for each pixel to estimate its true brightness, and then rescale the brightnesses based on the exposure lengths, and combine them.
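(To make it concrete, here is the kind of merge I have in mind, sketched in Python with NumPy. The `merge_hdr` helper and the triangle-style weighting are just my own illustration of the idea, not any existing tool.)

```python
import numpy as np

def merge_hdr(images, exposure_times, lo=0.05, hi=0.95):
    """Per-pixel exposure-weighted HDR merge (a rough sketch, not a real tool).

    images: aligned float arrays scaled to [0, 1].
    exposure_times: shutter time in seconds for each image.
    """
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        img = img.astype(np.float64)
        # Trust mid-tone pixels; give near-black and near-clipped pixels ~zero weight.
        w = np.clip((img - lo) / 0.05, 0, 1) * np.clip((hi - img) / 0.05, 0, 1)
        num += w * (img / t)   # rescale to a common radiance scale before combining
        den += w
    den[den == 0] = 1e-12      # guard pixels that are clipped in every frame
    return num / den           # estimated relative scene radiance
```

On a pixel that is well exposed at 1/4000 sec but blown out at 0.6 sec, only the short exposure would contribute, which is exactly the "pick the right exposure for each pixel" behavior I described.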

Is there a tool or procedure that does this?

Thanks!
 
Lightroom has an HDR function built in, and Photoshop has the ability to select from several options for merging. But here is my advice.

Examine all the images you took, and select the best ones for merging. I personally shot two sets of 9-image brackets (18 images total) to cover the range between 1/4000 sec and 0.6 sec in 2/3-stop increments. However, I repeated these two sets five times during totality. When I stacked, I chose the best of the five images for each of the 18 bracketed shots.



Also, averaging the stack of 18 images to get 21+ stops of light pretty much takes care of itself. But remember the old software development saying, “Crap in equals crap out.” In other words, the images have to be good.



One last suggestion: experiment. I tried stacking fewer of the 18 images, but the results worked out best with all 18 for the corona. For Baily’s beads and the diamond ring, though, I ended up stacking fewer, as I experimented and found the best combination of images for each.
 
The process of merging layers and taking mean values is used to reduce noise. It is not the way to get an HDR image. There are several sources of noise in your image; some are caused by the camera sensor, while others are related to the inherent variability in the incoming signal - something called photon noise. If these noise sources are random - and most are - then taking the average will result in a better picture and will reveal finer details. Even if the noise is not random, averaging can improve an image. Suppose an airplane flies through your scene during one frame of twenty. That bright distracting light is only at one spot in 1/20th of your images and will have little impact on your final product. (It is still better not to use that image, of course.)
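A quick numerical illustration of the averaging effect (a toy simulation I made up, assuming purely random Gaussian noise):

```python
import numpy as np

rng = np.random.default_rng(42)
signal = 0.5                                    # true pixel value
noise_sigma = 0.1
# 20 simulated frames of the same row of pixels, each with random noise.
frames = signal + rng.normal(0, noise_sigma, size=(20, 100_000))

single_noise = frames[0].std()                  # noise level in one frame
stacked_noise = frames.mean(axis=0).std()       # noise level after averaging

# For random noise the improvement should be roughly sqrt(20), about 4.5x.
print(single_noise / stacked_noise)
```

That square-root-of-N improvement is why stacking even identically exposed frames pays off.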
 
You are correct if all the images have the SAME settings. However, if you stack images with multiple settings - for example, a bracketed set of 18 images from 1/4000 sec to 0.6 sec in 0.67-stop increments - you get an image with an extra 9.3 stops of light. So a 12-stop single image can now display 21+ stops. Human eyes see about 20 to 21 stops. Look at my corona images I posted; clearly this works.

6f3e876c8d814780a48f5f879c6786c9.jpg

--
Best Regards,
Jack
YouTube channel: https://www.youtube.com/channel/UCAfQN-Ygh9z7qqUXdZWM-1Q
Flickr Meteor Album: https://www.flickr.com/photos/jackswinden/albums/72157710069567721
Sony RX100M3, a6000, and a7
 
Gorgeous image, Jack! But didn't you use something like the Sean Walker method, where you set a different transparency for each layer? Surely you didn't simply take the mean of all the layers, which is the context I understood from the OP's question.

--
Bob in Baltimore
 
No, I used Mean in the Photoshop stack settings.

In the past I used a variable-opacity-per-layer method, which uses the formula 1 / layer number. For example:
  • Layer 5 = 1 / 5 = 20%
  • Layer 4 = 1 / 4 = 25%
  • Layer 3 = 1 / 3 = 33%
  • Layer 2 = 1 / 2 = 50%
  • Layer 1 = 1 / 1 = 100%, bottom layer
However, I discovered that this time-consuming method of setting each layer to a unique opacity gives the same result as using Mean, and using Mean is way less work and time.
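If you want to check the equivalence yourself, here is a small NumPy simulation (my own toy, not Photoshop's actual code): compositing bottom-to-top with the 1/n opacities really does reduce to the plain mean.

```python
import numpy as np

rng = np.random.default_rng(0)
layers = rng.random((5, 4, 4))    # five stand-in "exposure" layers

# Composite bottom-to-top with opacity 1/n for the n-th layer:
# 100%, 50%, 33%, 25%, 20% - the scheme listed above.
result = layers[0].copy()                        # layer 1 at 100% (bottom)
for n in range(2, 6):
    alpha = 1.0 / n
    result = alpha * layers[n - 1] + (1 - alpha) * result

assert np.allclose(result, layers.mean(axis=0))  # identical to Mean stacking
```

The induction is simple: if the accumulated result is the mean of the first n-1 layers, blending the n-th layer on top at 1/n opacity gives the mean of all n.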
 
So I understand what both of you are saying. Like Bob said, averaging pixels is a good way to reduce noise.

And I also understand you can use the mean to make an HDR. An overexposed pixel in 1 of N images gets reduced in intensity, since that image contributes only 1/N to the final. If you are taking images at exposure lengths differing by some constant factor, then I guess it smooths things out in some logarithmic-ish way.

It seems like a reasonable approximation. But only an approximation, because - for example - that one overexposed image is still overexposed, so you are averaging bad data (the true brightness is beyond pure white) in with good data. If many photos in the set are overexposed, the problem is even worse. Given the great lengths people go to when generating nice eclipse photos, I was surprised that this approximation is so common.

Variable opacity seems like another form of averaging, right? It doesn't fix the problem I mentioned: all the images still contribute to the final result, when only some of them should (with different images contributing to different pixels).

A huge caveat is that I have not actually compared any of these methods, and the results certainly look good (thanks for sharing yours, Jack!). I'm just arguing from principle here.
 
Thanks for describing the conceptual issue so well. I too have been concerned about exactly the same thing. Jack’s wonderful image shows that you can still get a great result, but could it be even better?

I am also wondering whether image-processing software with built-in HDR stacking does the merge in a technically correct manner. Such tools are probably not directly usable in this case because they will likely not align the images correctly. But maybe one could manually align the frames and then HDR-stack the manually aligned images?

Another concern: most tutorials I have seen about processing a corona image say to align all the images on the Moon’s edge. That evidently works okay, assuming all the frames were taken in a short time. But even so, the Moon is constantly shifting across the Sun during such an exposure sequence. How much sharper could the coronal details be if the images were actually aligned on the coronal details themselves? It would be much tougher to do, but it is evidently possible: https://www.space.com/solar-corona-revealed-in-2024-totality-photograph
 
Well, I have to admit it works! Your image looked great, so I tried it myself. In fact, it may be mathematically equivalent. Here is my result.

fb3e2760b61646bcbe05a786eadaa174.jpg

There were 11 exposures ranging from 1/1,500 of a second to 0.7 sec in one-stop intervals. The Earthshine-illuminated Moon is a composite of 11 of the 0.7-second exposures. The prominences were dubbed in somewhat crudely. I need a better method for doing that!

I used a Nikon Z8 at ISO 64 and an 80mm f/6 StellarVue telescope (480 mm f.l.). All images were processed in LR with the new enhanced noise removal before stacking. The shorter exposures were aligned to one another using the prominences, while the longer exposures were aligned using the double star Zeta Piscium, with one exposure aligned using both methods to tie the stack together. I did some post-processing in Viveza as well as in PS.

So, thanks for bringing this approach to my attention.

--
Bob in Baltimore
 
Very nice! Did you try creating a Radial Blur adjustment layer? I put my instructions in a thread here, but they're buried deep in that thread and not easy to find. I'm going to create a brand new thread about creating a solar eclipse HDR image of the corona in the next day or so, so we can have easier access to the information if we ever get another chance at photographing a total solar eclipse, or at least have it available for those who do.

--
Best Regards,
Jack
YouTube channel: https://www.youtube.com/channel/UCAfQN-Ygh9z7qqUXdZWM-1Q
Flickr Meteor Album: https://www.flickr.com/photos/jackswinden/albums/72157710069567721
Sony RX100M3, a6000, and a7
 
Hi guys, I'm going to address a few questions in this post, or try to. I'm just an amateur at this myself, but I'm trying to learn, so I'll pass on what I know.

Aligning Solar Total Eclipse Images
  1. Select all layers in an app like Photoshop.
  2. Change the layer types from Normal to Difference. This makes alignment much easier.
  3. Choose one of the darker images where the Moon's silhouette is very pronounced and use it as the one to align all the others to, then one by one use the Move tool to align them.
Hint: Toggle the top layer you just aligned on and off to see if you can detect any movement. That helps to get the alignment fine tuned. I often set the top layer that I'm aligning to an opacity around 50% so I can better see where it is aligning to the bottom layer which remains at an opacity of 100%.
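For the curious, the Difference trick in step 2 works because a perfect alignment drives the difference image toward pure black. The same idea can be automated in a few lines (a brute-force sketch I wrote for illustration; the `best_offset` helper is hypothetical, not Photoshop's internals):

```python
import numpy as np

def best_offset(reference, moving, search=5):
    """Find the small (dy, dx) shift that best aligns `moving` onto
    `reference` by minimizing the mean absolute difference - the same
    quantity the Difference blend mode makes visible. Brute force,
    small translations only."""
    best, best_score = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            score = np.abs(reference - shifted).mean()
            if score < best_score:
                best, best_score = (dy, dx), score
    return best
```

Manually nudging a layer until the Difference view goes dark is doing exactly this search by eye.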

Stacking Clipped Images

Remember, crap in = crap out, so choose the best images.

As far as clipping goes, try images even if they seem to have clipped highlights. I tried leaving out a few I thought were too blown out, but I found the HDR looked better with them than without. So experiment. Don't forget that a good photo post-processing app like Photoshop can make adjustments of up to about +/- 5 stops. Don't adjust individual layers though, only the compiled HDR image.

Processing the Stacked Smart Object (HDR Image)

Make backups. A smart object has non-destructive capability, so don't worry about edits because you can change them. However, be safe and make lots of backups.

Dedicated HDR Apps

My guess is they likely employ the same science and math in stacking that the manual instructions and/or apps like Photoshop use. But I've never used them, so I don't know. They probably just cater to amateurs by automating the process, and they likely attempt some automatic post-processing of the HDR image, much like hitting the Auto button in Photoshop. Try them though; maybe they do a better job.

I don't recommend using Lightroom to create any HDR images though. LR just pukes all over them in my experience. I like LR for processing multiple similar files quickly and/or exporting RAW to an easier format to edit like TIFF. Otherwise I find it isn't that good for post processing a final composite image. Photoshop does a better job, or at least has a better workflow.
 
