Hasselblad X2D long exposures versus averaging

JimKasson

In earlier posts I looked at the X2D's shadow performance with 17-minute exposures at base ISO. I did this because a couple of people who like the long-exposure look asked me about it. But there's a better way to get that blurring effect if you want to minimize noise in the shadows: make a series of shorter exposures and average them in Photoshop or some other image editor.

This post is intended to demonstrate that.

The camera was mounted on the Foba camera stand with an Arca Swiss C1 cube. The rest of the setup is as follows:
  • X2D with 38 mm f/2.5 XCD lens, ISO 64, about 10 feet from target
  • Single exposure at f/32 with shutter speed 17 minutes
  • 34 exposures at f/5.6 with shutter speed 32 seconds with no delay between the images, using the built-in intervalometer
  • Manual focusing
  • Breakthrough 10-stop neutral density filter
  • 2-second self-timer
  • Aputure 100 LED light, with diffuser
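As a rough sanity check, the total light gathered by the two setups can be compared. This is a back-of-the-envelope sketch: it ignores T-stop differences, ND-filter tolerance, and the slight mismatch between 34 × 32 s and 17 minutes.

```python
# Rough photon-count comparison between the single long exposure
# and the averaged series. Light gathered scales with aperture
# area (proportional to 1/N^2) times exposure time.
single_f, single_t = 32.0, 17 * 60        # f/32, 17 minutes
series_f, series_t, n = 5.6, 32.0, 34     # f/5.6, 32 s, 34 frames

single_light = single_t / single_f**2
series_light = n * series_t / series_f**2

print(f"series/single light ratio: {series_light / single_light:.1f}x")
```

The ratio comes out in the mid-30s, which is consistent with the averaged series collecting roughly 34× the photons of the single f/32 exposure.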
34 32-second exposures, averaged in Photoshop; 4.75 stops underexposed, with a 4.75-stop push before averaging

17-minute exposure (the DPR interpretation of the EXIF is wrong)

More details, images here:

https://blog.kasson.com/x2d/hasselblad-x2d-long-exposures-versus-averaging/

--
https://blog.kasson.com
 

Hardly surprising, given that the averaged image got 34 x more photons.

It seems fair to point out that this technique works only when you have more light than you need.
 
Many, if not most, people do long exposures at base ISO with strong neutral density filters. I’m saying there’s a better way.
 
I do not follow why you need lots of light.

When shooting on a tripod (typical for averaging), you always have more light than you need unless movement needs to be stopped.

When shooting handheld and there is not a lot of light, and your handholding capability limits your exposure (e.g., ISO 800, 1/10 sec, f/8), you can shoot a series of images, align and average them in post.

The only limitation of averaging is motion, as averaging simulates long shutter speeds.
 
That's quite different from my use case, then - I rarely use an ND filter, and most of my long exposures are long because it's, well, dark.

- Chris
 
I meant to say that if you use an ND filter (aka "less light filter"), you have more light than you need.

And you need that if you want to get the same total exposure time. Yes, you can reduce shot noise by averaging exposures, but if your base exposure were 30 minutes (w/o ND filter), you would rarely have the time to take 30 such shots to average them :-)

- Chris
 
If your base exposure was 30 minutes at ISO 100, you could make 30 x 30 s exposures at ISO 6400 (30 s at ISO 6400 is about the same as 1800 s at ISO 100), and AFAIU averaging 30 exposures would give roughly a √30 ≈ 5.5x increase in SNR, while a 30-minute exposure would drop the DR significantly and induce other noise such as hot pixels, etc. I am not sure exactly how many stops would be lost, but judging from JimKasson's post about that, I would guess around 4-5 stops for 30 minutes?

So it seems that you would gain from doing multiple exposures in this scenario too.
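The √30 figure is easy to check with a quick Monte-Carlo simulation: averaging N independent photon-limited (Poisson) frames improves SNR by about √N. The signal level and frame count below are illustrative, not taken from the thread's actual raw files.

```python
import numpy as np

# Simulate 30 independent photon-limited frames of a flat patch.
rng = np.random.default_rng(0)
mean_photons, n_frames, n_pixels = 100.0, 30, 100_000

frames = rng.poisson(mean_photons, size=(n_frames, n_pixels))
single_snr = frames[0].mean() / frames[0].std()   # ~ sqrt(100) = 10

stacked = frames.mean(axis=0)                     # mean stack
stacked_snr = stacked.mean() / stacked.std()

print(f"single-frame SNR: {single_snr:.1f}")
print(f"stacked SNR gain: {stacked_snr / single_snr:.2f}x")  # ~ sqrt(30) = 5.48
```

This only models shot noise; read noise, hot pixels, and dark current in a real 30-minute exposure would widen the gap further in favor of the stack.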
 
With dual conversion gain sensors, raising the ISO above base ISO while mean-stacking and keeping the f-stop and total exposure time the same as a single long exposure helps a little by lowering the read noise, but the big win comes from getting more photons onto the sensor across the mean-stacked series.
 
Sorry if I'm asking a stupid question:

If I understand you right, with averaging you take several pictures with shorter exposure times and average them in Photoshop. I don't see any reason why the second underexposed picture should look any different from the first one. If that's true, why not just duplicate the first exposure 34 times and save a lot of time?
 
Go back to the original post and look at the f-stops.
 
Yes, I think I understood that:

Instead of taking one shot at f/32 for 17 minutes, you average 34 exposures at f/5.6 and 32 seconds in Photoshop.

My question:
Do you really need 34 exposures? Couldn't you just average 34 copies of the one f/5.6, 32-second exposure?
 
That would produce an image that looked like the image you used to get the 34 copies. The average of {x,x,x,x,x,x,...} is x.

Photon noise is stochastic (and, incidentally, Poisson-distributed), which is why this trick works.

Also, the point of this exercise is to simulate a long exposure. 34 identical shorter-exposure images wouldn't do that.
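A minimal numpy demonstration of the point (with an illustrative signal level, not the thread's actual exposures): identical copies carry identical noise, so their mean is just the original frame, while independent frames average their noise down by about √N.

```python
import numpy as np

# 34 independent photon-limited frames of a flat patch.
rng = np.random.default_rng(1)
frames = rng.poisson(100.0, size=(34, 50_000)).astype(float)

copies_avg = np.mean([frames[0]] * 34, axis=0)   # 34 copies of frame 0
independent_avg = frames.mean(axis=0)            # 34 independent frames

print(f"noise, 34 copies:      {copies_avg.std():.2f}")       # same as one frame (~10)
print(f"noise, 34 independent: {independent_avg.std():.2f}")  # ~ 10/sqrt(34) = 1.7
```

The copies average has exactly the noise of the single frame it was built from; only the independent frames see the √34 reduction.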
 
For averaging, I had a good experience with Octave Raw Tools. It is free, fast, and generates DNG raw image files.
I can't get that to work with X2D files. It's looking for an EXIF field in the converted DNGs called 'createdate'.
Error: A file was processed that is missing the CreateDate EXIF tag. Perhaps your file mask included non-DNG files?
There is no such field in DNGs that I make with DNGConverter from X2D raw files. There is a field called 'Date Created'.

Lens : XCD 38V
Date Created : 2022:11:15 10:07:35
Document ID : xmp.did:c602e28c-4198-3f40-acc6-9e36c34f4ad2

To add to my difficulties, the program is written in such a way that the run-time environment seems to ignore most of the breakpoints I set. I'm using Matlab R2022a.

Here's how I'm invoking it:

createStackedDngs('e:\Calibration\X2D\Long Exposure bookcase\DNGs','maxtimedelta', 0);

Any ideas?
 
Hmpf.

I would contact the author Horshack. He is active on DPR.
 
Thanks. I’ll take it up with Adam.
 
