Averaging captures vs. HDR?

Erik Kaffehr

Hi,

We have seen a discussion around averaging captures, in the context of improving SNR.

For me, it seems that using HDR is much more useful in most cases, excluding very low-light situations like astronomy.

With HDR, we have a limited set of exposures, like one protecting highlights, one for the midtones and one for the darks.

Achieving the same DR with averaging would take a lot of exposures.

There may be other advantages of using averaging, like 'faking' an ND filter.

Best regards

Erik
 
Averaging handles movement well; HDR does not. Apart from the standard ghosting issue, imagine the shot that lifts shadows having blurred movement, while the one protecting highlights does not (leaves and branches). Adobe recommends a 3-stop difference for HDR bracketing.

Additionally, in-camera frame averaging works well when implemented (Phase One, Olympus), while no in-camera HDR can generate undemosaiced, scene-referred files.
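To be clear about what a scene-referred merge means here: conceptually it is just exposure-normalized, weighted averaging of the raw data. A rough sketch (assuming dark-subtracted, aligned frames already loaded as arrays; the function name and inputs are made up for illustration):

import numpy as np

def merge_bracket(frames, exposures, clip=0.95):
    # frames:    list of raw planes scaled to 0..1 of full scale (hypothetical input)
    # exposures: relative exposure of each frame, e.g. 1, 8, 64 for 0 / +3 / +6 EV
    acc = np.zeros_like(frames[0], dtype=np.float64)
    weight_sum = np.zeros_like(acc)
    for f, e in zip(frames, exposures):
        w = (f < clip).astype(np.float64)   # drop clipped photosites
        acc += w * (f / e)                  # normalize back to scene-referred values
        weight_sum += w
    return acc / np.maximum(weight_sum, 1e-9)

Note that nothing in this handles subject movement between frames, which is exactly the ghosting problem mentioned above.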

Apart from frame-averaging and HDR, high-resolution shooting (pixel-shift, superresolution) is another option to improve SNR.

FWIW, both Phase One and Olympus advertise frame averaging as an efficient way to simulate ND filters.
 
Averaging handles movement well; HDR does not. Apart from the standard ghosting issue, imagine the shot that lifts shadows having blurred movement, while the one protecting highlights does not (leaves and branches). Adobe recommends a 3-stop difference for HDR bracketing.
So, three images cover a range of 6 EV. To get that gain you would need 2^6 = 64 exposures using frame averaging.
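A rough sanity check of that number (a sketch that only considers photon noise in the shadows; the signal level is an assumed value): averaging N base frames improves shadow SNR by sqrt(N), so matching one exposure that is 6 EV longer takes 2^6 = 64 frames.

import numpy as np

rng = np.random.default_rng(0)
shadow_signal = 10.0     # mean photons per pixel in a deep shadow (assumed value)
n = 64                   # 2**6 frames for the 6 EV gap discussed above
pixels = 100_000

single = rng.poisson(shadow_signal, pixels)                     # one base exposure
average = rng.poisson(shadow_signal, (n, pixels)).mean(axis=0)  # 64 base exposures averaged
long_exp = rng.poisson(shadow_signal * n, pixels) / n           # one exposure 6 EV longer

for name, x in (("single frame", single), ("64-frame average", average), ("+6 EV frame", long_exp)):
    print(name, "SNR ~", round(float(x.mean() / x.std()), 1))

Here the 64-frame average and the single long exposure come out the same, but in a real camera each frame also adds read noise, so the averaged stack would actually fall a bit short of the single +6 EV exposure.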
Additionally, in-camera frame averaging works well when implemented (Phase One, Olympus), while no in-camera HDR can generate undemosaiced, scene-referred files.
I don't think anyone discussed in camera rendition.

I would also suggest that any extended luminance range image needs some tone mapping, with HDR images possibly being an exception.
Apart from frame-averaging and HDR, high-resolution shooting (pixel-shift, superresolution) is another option to improve SNR.
Pixel shift reduces aliasing, but I am not sure it is beneficial to HDR; it can be, but that really depends on processing.
FWIW, both Phase One and Olympus advertise frame averaging as an efficient way to simulate ND filters.
Yes, but is it a good way? ND filters are sort of real. Do we achieve a similar effect merging, say, 64 (6-stop ND) images?

Best regards

Erik
 
Averaging handles movement well; HDR does not. Apart from the standard ghosting issue, imagine the shot that lifts shadows having blurred movement, while the one protecting highlights does not (leaves and branches). Adobe recommends a 3-stop difference for HDR bracketing.
So, three images cover a range of 6 EV. To get that gain you would need 2^6 = 64 exposures using frame averaging.
It also means that with the EC 0 exposure at 1/15 sec, we need an EC +3 exposure at 1/2 sec, which can cause significant blur.
Additionally, in-camera frame averaging works well when implemented (Phase One, Olympus), while no in-camera HDR can generate undemosaiced, scene-referred files.
I don't think anyone discussed in camera rendition.
If we discuss frame averaging vs. HDR merge, then the possibility of in-camera operation should be mentioned.
I would also suggest that any extended luminance range image needs some tone mapping, with HDR images possibly being an exception.
I do not understand.
Apart from frame-averaging and HDR, high-resolution shooting (pixel-shift, superresolution) is another option to improve SNR.
Pixel shift reduces aliasing, but I am not sure it is beneficial to HDR; it can be, but that really depends on processing.
Pixel shift also improves SNR, which is the main benefit of frame averaging and HDR merge.
FWIW, both Phase One and Olympus advertise frame averaging as an efficient way to simulate ND filters.
Yes, but is it a good way? ND filters are sort of real. Do we achieve a similar effect merging, say, 64 (6-stop ND) images?
Yes. IQ4 people seem to be very happy with it and are missing it in other cameras.

Strong ND filters often have a color cast and require long single-shot exposures (with special NR frames). You need to focus with the ND filter off and then mount the filter. You need to juggle ND filters of various strengths to achieve the desired effect. Automatic metering with strong ND filters is often inaccurate, so exposure is best computed manually.

On the other hand, frame averaging can have issues with gaps if the shutter speed is too fast.
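As an illustration of what the emulated ND does (just a sketch, not how Phase One or Olympus actually implement it): averaging a burst of M equally exposed frames gathers roughly the same light as one exposure M times longer, i.e. about log2(M) stops of ND, minus whatever falls into the gaps between frames.

import numpy as np

def emulate_nd(frames):
    # Average a burst of equally exposed frames to mimic one long exposure.
    # 64 frames behave roughly like a 6-stop ND (2**6 = 64).
    return np.stack(frames).astype(np.float64).mean(axis=0)

# Example: a 64-frame burst at 1/15 s integrates about 64/15 ~ 4.3 s of light,
# comparable to a 4 s exposure behind a 6-stop ND filter.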
 
Averaging handles movement well; HDR does not. Apart from the standard ghosting issue, imagine the shot that lifts shadows having blurred movement, while the one protecting highlights does not (leaves and branches). Adobe recommends a 3-stop difference for HDR bracketing.
So, three images cover a range of 6 EV. To get that gain you would need 2^6 = 64 exposures using frame averaging.
It also means that with the EC 0 exposure at 1/15 sec, we need an EC +3 exposure at 1/2 sec, which can cause significant blur.
Additionally, in-camera frame averaging works well when implemented (Phase One, Olympus), while no in-camera HDR can generate undemosaiced, scene-referred files.
I don't think anyone discussed in camera rendition.
If we discuss frame averaging vs. HDR merge, then the possibility of in-camera operation should be mentioned.
I would also suggest that any extended luminance range image needs some tone mapping, with HDR images possibly being an exception.
I do not understand.
Presentation media does not cover a wide luminance range. Because of that, the tone scale is always manipulated. For normal images this is often done using a tone curve that compresses highlights and shadows. But if we need to present images with a wide dynamic range on traditional media, like print, more specialized processing is needed. That processing is normally called tone mapping.
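A minimal sketch of the difference (the Reinhard global operator stands in here for 'more specialized processing'; it is just one simple example of a tone mapping curve):

import numpy as np

def display_curve(x):
    # ordinary rendering for normal-DR images: gamma-like curve, clips above 1.0
    return np.clip(x, 0.0, 1.0) ** (1.0 / 2.2)

def reinhard(x):
    # global tone mapping: compresses an unbounded luminance range into 0..1
    return x / (1.0 + x)

scene = np.array([0.001, 0.01, 0.1, 1.0, 10.0, 100.0])  # five decades of scene luminance
print(display_curve(scene))  # everything above 1.0 becomes plain white
print(reinhard(scene))       # highlights are compressed instead of clipped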
Apart from frame-averaging and HDR, high-resolution shooting (pixel-shift, superresolution) is another option to improve SNR.
Pixel shift reduces aliasing, but I am not sure it is beneficial to HDR; it can be, but that really depends on processing.
Pixel shift also improves SNR, which is the main benefit of frame averaging and HDR merge.
Yes, but it may come with artifacts.
FWIW, both Phase One and Olympus advertise frame averaging as an efficient way to simulate ND filters.
Yes, but is it a good way? ND filters are sort of real. Do we achieve a similar effect merging, say, 64 (6-stop ND) images?
Yes. IQ4 people seem to be very happy with it and are missing it in other cameras.

Strong ND filters often have a color cast and require long single-shot exposures (with special NR frames). You need to focus with the ND filter off and then mount the filter. You need to juggle ND filters of various strengths to achieve the desired effect. Automatic metering with strong ND filters is often inaccurate, so exposure is best computed manually.
The color cast may be an issue, but it can possibly be handled well by taking a white-balance shot of a grey card.

There is no reason EVF cameras would yield inaccurate exposure evaluation with ND filters.
On the other hand, frame averaging can have issues with gaps if the shutter speed is too fast.
Best regards

Erik
 
Averaging handles movement well; HDR does not. Apart from the standard ghosting issue, imagine the shot that lifts shadows having blurred movement, while the one protecting highlights does not (leaves and branches). Adobe recommends a 3-stop difference for HDR bracketing.
So, three images cover a range of 6 EV. To get that gain you would need 2^6 = 64 exposures using frame averaging.
It also means that with the EC 0 exposure at 1/15 sec, we need an EC +3 exposure at 1/2 sec, which can cause significant blur.
Additionally, in-camera frame averaging works well when implemented (Phase One, Olympus), while no in-camera HDR can generate undemosaiced, scene-referred files.
I don't think anyone discussed in camera rendition.
If we discuss frame averaging vs. HDR merge, then the possibility of in-camera operation should be mentioned.
I would also suggest that any extended luminance range image needs some tone mapping, with HDR images possibly being an exception.
I do not understand.
Presentation media does not cover a wide luminance range. Because of that, the tone scale is always manipulated. For normal images this is often done using a tone curve that compresses highlights and shadows. But if we need to present images with a wide dynamic range on traditional media, like print, more specialized processing is needed. That processing is normally called tone mapping.
Apart from frame-averaging and HDR, high-resolution shooting (pixel-shift, superresolution) is another option to improve SNR.
Pixel shift reduces aliasing, but I am not sure it is beneficial to HDR; it can be, but that really depends on processing.
Pixel shift also improves SNR, which is the main benefit of frame averaging and HDR merge.
Yes, but it may come with artifacts.
Yes, and so may other methods mentioned.
FWIW, both Phase One and Olympus advertise frame averaging as an efficient way to simulate ND filters.
Yes, but is it a good way? ND filters are sort of real. Do we achieve a similar effect merging, say, 64 (6-stop ND) images?
Yes. IQ4 people seem to be very happy with it and are missing it in other cameras.

Strong ND filters often have a color cast and require long single-shot exposures (with special NR frames). You need to focus with the ND filter off and then mount the filter. You need to juggle ND filters of various strengths to achieve the desired effect. Automatic metering with strong ND filters is often inaccurate, so exposure is best computed manually.
The color cast may be an issue, but it can possibly be handled well by taking a white-balance shot of a grey card.

There is no reason EVF cameras would yield inaccurate exposure evaluation with ND filters.
All cameras have problems determining proper exposure when light is low, i.e., the histogram is not accurate. Also, they often cannot use longer shutter speeds in aperture-priority mode.
On the other hand, frame averaging can have issues with gaps if the shutter speed is too fast.

 
All cameras have problems determining proper exposure when light is low, i.e., the histogram is not accurate.
Please explain. I think I know what you mean, but I'm not sure. If it's what I think you're talking about, the effect occurs at very low light levels, and differently for different cameras.
 
All cameras have problems determining proper exposure when light is low, i.e., the histogram is not accurate.
Please explain. I think I know what you mean, but I'm not sure. If it's what I think you're talking about, the effect occurs at very low light levels, and differently for different cameras.
The effect at very low light levels is that the histogram stops moving regardless of how much you lengthen the exposure. With strong ND filters it is best to meter without the filter and then to manually extend the shutter time as indicated by the ND filter's table (often included with strong ND filters).
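The manual correction itself is trivial: multiply the metered exposure time by 2 to the power of the filter strength in stops. A small sketch:

def nd_exposure(metered_seconds, nd_stops):
    # shutter time needed once an ND filter of the given strength is mounted
    return metered_seconds * 2 ** nd_stops

print(nd_exposure(1 / 15, 6))    # about 4.3 s behind a 6-stop ND
print(nd_exposure(1 / 15, 10))   # about 68 s behind a 10-stop ND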
 
All cameras have problems determining proper exposure when light is low, i.e., the histogram is not accurate.
Please explain. I think I know what you mean, but I'm not sure. If it's what I think you're talking about, the effect occurs at very low light levels, and differently for different cameras.
The effect at very low light levels is that the histogram stops moving regardless of how much you lengthen the exposure.
Yes, that's what I was thinking. It can be more of a problem with GFX cameras than with Nikons and Sonys. But for me this is such an outlier situation that I've never bothered to investigate the X2D for that effect. I don't usually shoot with big ND filters.
With strong ND filters it is best to meter without the filter and then to manually extend the shutter time as indicated by the ND filter's table (often included with strong ND filters).
Right.
 
Here is an example of using averaging instead of an ND filter.
I wanted to blur the sky and the water a bit.
Taken with the GFX100sII

4691a434b54d4e77b6d6a0c9c13e64b3.jpg

--
http://www.michaelfullana.com
 
There may be other advantages of using averaging, like 'faking' an ND filter.

Best regards

Erik
Listening to the discussion, I realize that 'emulating an ND filter' would be better wording than 'faking an ND filter', sorry for that!

On the other hand, the OP is really about HDR versus averaging in shooting situations where the intent is to increase dynamic range or SNR.

Best regards

Erik

--
Erik Kaffehr
Website: http://echophoto.dnsalias.net
Magic tends to disappear in controlled experiments…
Gallery: http://echophoto.smugmug.com
Articles: http://echophoto.dnsalias.net/ekr/index.php/photoarticles
 