Best Sub-Exposure Time

Started Apr 15, 2016 | Discussions
rnclark Senior Member • Posts: 3,185
Best Sub-Exposure Time

There has been a lot of discussion on the optimum sub-exposure length here lately. There are multiple factors in setting sub exposure length beyond read noise impacts and maximizing S/N. Factors include seeing, wind and tracking errors, along with having enough frames for stacking to reject unwanted things like airplane and satellite tracks. These factors limit the long exposure time side of the sub-exposure efficiency trade. The short side is impacted most by the time delay between exposures, especially in today's very low read noise cameras.

Here I show some data to illustrate the effects and a comprehensive model to account for these factors. My full description, as well as spreadsheets to run the model with your own parameters, is located here:

http://www.clarkvision.com/articles/astrophotography.and.exposure/

In the following figure, which image has the highest image quality? Technically, the highest S/N belongs to the full 38-minute stack. But seeing variations meant that if I wanted to make a sharper image, I needed to throw out the exposures blurred by bad seeing and sacrifice S/N. I ended up throwing out half the exposures to produce the image on the right. Even though it technically has lower S/N, that image is sharper with higher contrast in the fine details, and the noise difference between the images is minimal. The higher contrast of the selected image set meant less aggressive stretching to pull out fine details, and fewer artifacts when trying to reduce bloated stars. Less aggressive stretching means noise is not magnified as much. Be sure to click on the "original image" link to see the full-resolution image, not the dpreview-smeared image embedded in this post.

300 mm f/2.8, Canon 7D2, 2.8 arc-seconds/pixel, full resolution crop.
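For reference, the S/N cost of keeping 19 of 38 subs follows from the frame counts alone, since stack S/N scales as the square root of the number of frames kept. A quick illustrative check:

```python
import math

# Stack S/N scales as sqrt(number of frames), so keeping 19 of 38 subs
# costs about 29% in S/N -- real, but modest next to the sharpness gain.
snr_ratio = math.sqrt(19 / 38)
print(round(snr_ratio, 2))  # 0.71
```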

Here is a video loop of the images included in the left frame stack. Notice periods of poor seeing. Note: these are NOT tracking errors. The video loop is after alignment. Note too, the images were made with no dark frame subtraction, no flats, no bias, output after raw conversion in ACR.

video loop: 1-minute exposures illustrate bad seeing. 2.8 arc-seconds/pixel, full resolution.

Here is the subset of 19 frames selected from the above for better seeing. This is the set used to make the image on the right in the first figure. Note: it looks like I should have rejected one more frame.

video loop: 1-minute exposures, best 19 frames out of 38, 2.8 arc-sec/pixel full resolution crop.

The following model describes the multiple factors above (though with better seeing).

A sub-exposure efficiency model for a Canon 7d2

The above model uses better seeing than in the video loops. In practice, for the M8 image I should have used even shorter exposure times than 1 minute. By selecting only 19 of 38 frames (50%), the efficiency was (using the yellow curve, 90% at 1 minute) 0.9 * 0.5 = 0.45. As an exercise, download the spreadsheet from my web site and change the stability to 1 minute. Then the peak-efficiency exposure time would have been 30 to 40 seconds and I would have achieved 58% efficiency (29% better than with 1-minute subs). Again, low efficiency is not a horrible issue if it leads to a better final image.  Here is the model:

Model for the conditions with the M8 image sequence where 50% of exposures get rejected with 1-minute sub-exposures.  I should have used 30 second exposures at ISO 3200.
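The shape of this model can be sketched in a few lines. This is a minimal stand-in, not the spreadsheet's exact formulas: efficiency here is the effective integration-time fraction, combining the inter-frame dead time, the read-noise penalty for a sky-limited target, and the keep fraction. The default numbers (gap, sky rate) are purely illustrative.

```python
def sub_efficiency(t_sub, gap=5.0, read_noise=1.6, sky_rate=2.0, keep=1.0):
    """Effective integration-time fraction for subs of t_sub seconds.

    gap        -- dead time between subs, seconds (download, buffer, etc.)
    read_noise -- camera read noise, electrons (e.g. roughly 1.6 for a 7D2
                  at high ISO, 9.8 for a 5D Mark III at ISO 400)
    sky_rate   -- sky background, electrons/second/pixel (illustrative value)
    keep       -- fraction of subs kept after quality selection
    """
    duty = t_sub / (t_sub + gap)                  # time actually spent exposing
    sky = sky_rate * t_sub                        # sky electrons per sub
    read_penalty = sky / (sky + read_noise ** 2)  # S/N^2 loss from read noise
    return duty * read_penalty * keep
```

With these numbers, a low-read-noise camera loses little at 30-60 second subs, while a 9.8 e- camera needs several-minute subs to pay off its read noise. The long-exposure side of the trade (seeing, wind, aircraft) enters through `keep`, which in practice falls as subs get longer.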

Now let's compare older-technology cameras. If you use a camera like a Canon 5D Mark III at ISO 400, the read noise is 9.8 electrons. Many older CCDs are also in this range. The model shows peak efficiency in the 2 to 3 minute range. Note that the peak efficiency is lower than what the lower-read-noise camera achieves under the same environmental conditions with shorter exposure times!

Model parameters typical of a Canon 5D Mark III at ISO 400.

If you need short exposures due to environmental conditions, boost the ISO to get low read noise. Figure 8c in my article,

http://www.clarkvision.com/articles/astrophotography.and.exposure/

shows even higher efficiency at sub-1-minute exposures. The main cause of low efficiency at short exposures with modern low-read-noise cameras is not read noise! It is the delay time between exposures! Use fast memory cards to minimize the delay time.

This should clear up a lot of myths.

Roger

D L Fischer Forum Member • Posts: 98
Re: Best Sub-Exposure Time

One additional consideration with respect to the inter-frame gap would be whether dithering was employed.  When it is, the gap will be considerably longer and have an impact on the estimated efficiency.

On my setup, with moderate atmospheric stability, the gap time varies from 20 to 35 seconds.  The duration of the gap depends indirectly on the imaging optics' focal length, as the user may define the settling point as a fraction of a pixel at the image scale.  Additional factors for gap time would certainly be atmospheric stability and (to a lesser extent) the aggressiveness of the dithering.
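The amortized cost of the dither settle folds directly into the duty cycle. A quick sketch using my gap numbers above (the 5-second base download gap is an assumed figure):

```python
def duty_cycle(t_sub, base_gap=5.0, dither_gap=30.0, dither_every=1):
    """Fraction of wall-clock time spent exposing when a dither settle of
    dither_gap seconds is added once every dither_every frames."""
    avg_gap = base_gap + dither_gap / dither_every
    return t_sub / (t_sub + avg_gap)

# 60-second subs: dithering every frame vs. every 3rd frame
print(round(duty_cycle(60, dither_every=1), 2))  # 0.63
print(round(duty_cycle(60, dither_every=3), 2))  # 0.8
```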


David F.

D L Fischer's gear list:
Nikon D7100 Nikon D5300 Nikon AF Nikkor 180mm f/2.8D ED-IF Kenko Teleplus Pro 300 AF 1.4x +4 more
Jon Rista Contributing Member • Posts: 681
Re: Best Sub-Exposure Time

Interesting information. I am not sure how to reconcile what you are advocating here with actual measurements of my data. They don't seem to jibe with your numbers for the 5D III, as you indicate I should be using only 2-minute subs, which definitely deliver significantly noisier results, particularly in my red channel, unless I gather a LOT of subs...and to get 8 hours of integration (my preferred amount at my dark site) with 2-minute subs, I would need 240 subs; at ISO 800 and 1-minute subs, that would be 480 subs; at ISO 1600 and 30-second subs, a whopping 960 subs!! Too much!

However there is one thing I do believe needs to be addressed.

I don't disagree that the spikes in your animated "seeing" video are not due to tracking issues...because they are not tracking issues.

The wobbling of the scene and stars is most definitely seeing, what with the inconsistency of it throughout the field. However the thing that is causing those spikes off the bottom or bottom-right of the stars, which is 100% consistent across the frame (every star experiences the same stretching in the same direction at the same time when it occurs), is something within your imaging system flexing. You have two issues there...seeing, and flexure. Seeing you can't do much about, although you CAN GUIDE, which will help (especially with something like MetaGuide, which uses highly accurate centroiding and low-latency guiding to even fight seeing, almost like an AO unit.)

Flexure, you CAN do something about! You can reconfigure your setup, figure out a better way to hold your lenses, support them both at the front as well as at the back. This is CRITICAL with a large lens like the Canon EF 300mm f/2.8...which is going to flex up a storm unless you get some support around both the front and back.

I stress this point, because you are apparently tossing a WHOPPING 50% of your frames to improve your results? Even when I was a total beginner almost two and a half years ago, I honestly cannot say I ever threw away 50% of my subs. I maybe tossed 20-30% very early on, but I try to avoid tossing subs like that if I can. I can also say that the only time I experienced spiking in my stars like your video exhibits is when I was suspending my 600mm lens into open air over the tripod foot.

So I disagree that the effects in your video of the Lagoon are purely seeing. I also disagree that anyone should be tossing 50% of their subs...that is truly insane. These days I may toss 10%, tops, but I will often only toss one or two.

I'll see if I can create some videos of my aligned subs from data sets I have that use shorter exposures. While I don't doubt that there will be some amount of jitter in eccentricity, I can guarantee you that I do not experience the same spiking that your stars exhibit in your video. That is because I corrected my flexure issues a long time ago, and since doing so, I have stopped encountering such problems. I think it is important for anyone who is experiencing flexure issues to understand that they can be corrected. There is no reason to throw away 50% of your precious time gathering useless data.


As a side note. What Roger is basically engaging in here is effectively a form of "lucky imaging", where one acquires many many short exposures, and then only stacks a small percentage of them that have the highest quality. This is basically what planetary/lunar/solar imagers do, by acquiring high resolution, high speed (often hundreds of fps) videos of the solar system object, and run those videos through a program like AutoStakkert!2 to sort and stack the best frames.

It is not an incorrect approach; however, I have never really seen it done with exposures longer than a couple of seconds. I have also not seen it done with cameras that don't have extremely high quantum efficiency (i.e. over 70%, some almost 80%) and extremely low read noise (these days, a good planetary camera has less than 1e- read noise in high gain mode!). I have also seen it done with EMCCD cameras, which have significantly less than 1e- read noise and essentially zero dark current (with such a camera, it doesn't matter how long your exposures are; all that really matters is the total integration time).

I think there are better ways of dealing with blurring introduced by seeing. It's pretty common to discard a small percentage of subs, say 10-20%. Beyond that, there are more advanced algorithms for integrating your data these days that allow you to apply sophisticated weighting to your sub frames to preserve more of them, and still extract the most you can from even less-than-ideal-quality subs. PixInsight allows you to weight based on a wide range of statistical criteria. For resolution, you can weight on FWHM, which will allow you to stack more of your hard-earned frames while maximizing resolution. If that still is not enough, it also has powerful deconvolution tools that will allow you to model the actual PSF from your stars, then use that PSF to deconvolve the image, reducing the blurring from seeing and restoring detail. If that STILL isn't enough, you can then perform star reduction using morphological transformations that erode down clipped centroids and dilate halos to restore color and improve star quality.
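The weighting idea can be illustrated with a toy version. This is not PixInsight's actual weighting formula, just the flavor of it: an inverse-square-of-FWHM weighted average over aligned frames, with hypothetical function names.

```python
import numpy as np

def fwhm_weighted_stack(frames, fwhms):
    """Average aligned frames, weighting sharper (lower-FWHM) subs more heavily.

    frames -- sequence of aligned 2-D arrays (same shape)
    fwhms  -- per-frame FWHM estimates, e.g. in arc-seconds
    """
    frames = np.asarray(frames, dtype=float)
    weights = 1.0 / np.square(np.asarray(fwhms, dtype=float))
    weights /= weights.sum()
    # Contract the weight vector against the frame axis to get one image.
    return np.tensordot(weights, frames, axes=1)
```

A frame with twice the FWHM of another contributes a quarter of the weight, so soft subs still add signal instead of being discarded outright.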

--
Catching Ancient Photons

Jon Rista's gear list:
Canon EOS 5D Mark III Sony a6000 Canon EF 50mm f/1.4 USM Canon EF 16-35mm F2.8L II USM Canon EF 100-400mm f/4.5-5.6L IS USM +4 more
OP rnclark Senior Member • Posts: 3,185
Re: Best Sub-Exposure Time

D L Fischer wrote:

One additional consideration with respect to the inter-frame gap would be whether dithering was employed. When it is, the gap will be considerably longer and have an impact on the estimated efficiency.

On my setup, with moderate atmospheric stability, the gap time varies from 20 to 35 seconds. The duration of the gap generally depends indirectly on the imaging optics focal length as the user may define settling point as a fraction of pixel at image-scale. Additional factors for gap time would certainly be atmospheric stability and (to a lesser extent) degree of dithering aggressiveness.

Yes, that can certainly be added to the model.  But with modern cameras, dithering is not needed.  I don't dither most of the time, and this is with no darks, no flats, no bias.  In the few cases in hot environments where dithering is still needed with modern cameras, one really only needs to dither every 20 or 30 frames, so the impact on efficiency can be quite low.  There really is no need to dither after every frame.

Roger

Jon Rista Contributing Member • Posts: 681
Re: Best Sub-Exposure Time

rnclark wrote:

D L Fischer wrote:

One additional consideration with respect to the inter-frame gap would be whether dithering was employed. When it is, the gap will be considerably longer and have an impact on the estimated efficiency.

On my setup, with moderate atmospheric stability, the gap time varies from 20 to 35 seconds. The duration of the gap generally depends indirectly on the imaging optics focal length as the user may define settling point as a fraction of pixel at image-scale. Additional factors for gap time would certainly be atmospheric stability and (to a lesser extent) degree of dithering aggressiveness.

Yes, that can certainly be added to the model. But with modern cameras, dithering is not needed. I don't dither most of the time, and this is with no darks, no flats, no bias. In the few cases in hot environments where dithering is still needed with modern cameras, one really only needs to dither every 20 or 30 frames, so the impact on efficiency can be quite low. There really is no need to dither after every frame.

Roger

I agree that dithering every frame is unnecessary, but I disagree that dithering every 20-30 frames would work. I tend to dither every 2-3 frames myself. I've optimized my inter-frame overhead time so that it is about 10-15 seconds for everything...frame download, dithering, settling. At every 3 frames, that works quite well, although not perfectly.

Every 20-30 frames, and you're going to experience some amount of correlated noise. Correlated noise does not require hot pixels...it simply requires alignment in the low-level fixed patterns of your noise and bias signal. If you are not calibrating, then your bias signal is in every frame, and that WILL have patterns (every camera, even the best CCD cameras cooled well below 0C, has bias patterns). Even a pixel's worth of drift every couple of frames will result in correlated noise.
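The correlated-noise point is easy to demonstrate numerically. A toy simulation (a fixed 1-D bias pattern plus random read noise, with frames either perfectly aligned or randomly shifted to mimic dithering; all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
pattern = rng.normal(0.0, 1.0, 1000)   # fixed bias pattern, ~1 e- RMS
n_frames, read_noise = 50, 5.0

# Undithered: the pattern lands on the same pixels every frame, so
# averaging beats down the random noise but leaves the pattern intact.
undithered = np.mean(
    [pattern + rng.normal(0.0, read_noise, 1000) for _ in range(n_frames)],
    axis=0)

# Dithered: a random shift per frame decorrelates the pattern from the scene.
dithered = np.mean(
    [np.roll(pattern, rng.integers(0, 1000)) + rng.normal(0.0, read_noise, 1000)
     for _ in range(n_frames)],
    axis=0)

print(undithered.std() > dithered.std())  # True: the fixed pattern survives averaging
```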


Catching Ancient Photons

Jon Rista Contributing Member • Posts: 681
Re: Best Sub-Exposure Time

rnclark wrote:

Here is a video loop of the images included in the left frame stack. Notice periods of poor seeing. Note: these are NOT tracking errors. The video loop is after alignment. Note too, the images were made with no dark frame subtraction, no flats, no bias, output after raw conversion in ACR.

video loop: 1-minute exposures illustrate bad seeing. 2.8 arc-seconds/pixel, full resolution.

Here is the subset of 19 frames selected from the above for better seeing. This is the set used to make the image on the right in the first figure. Note: it looks like I should have rejected one more frame.

video loop: 1-minute exposures, best 19 frames out of 38, 2.8 arc-sec/pixel full resolution crop.

Since we are clearing up myths here. Here are a couple of videos of my own. I gathered 25 30s frames for my Orion Sword image that I have shared here on a number of occasions, usually to counter an argument made by Roger (here it is again! :P)

I discarded ZERO frames, because discarding them was totally unnecessary:

30-second guided subs, 25 out of 25, 2.14"/px, 1024x1024 full resolution crop

Apologies, DPR doesn't seem to want to embed this GIF, so here is the link to actually see the animation:

http://i.imgur.com/HntjEGg.gif

The first thing I want everyone to notice is the complete absence of the spiking that Roger's videos (both of them, even the one where he discarded 50% of his subs) demonstrate. My system has no flexure, so the spiking is absent. Also notice the overall stability of the frames relative to each other. ROCK. SOLID. Now tell me that guiding isn't useful, even with shorter subs.

Second, notice that seeing IS affecting the stars. My image scale is a little finer than Roger's, though not by a significant amount (2.14"/px vs. 2.8"/px). Note the stability of the stars. They certainly flicker and wobble a bit. That is definitely caused by seeing, not tracking error.

There is absolutely no reason to discard so many subs. If you are effective in your operational procedures, and understand how to track properly (and guiding REALLY DOES help here!), you should be able to get away without tossing ANY subs!

I am working on creating a similar video with 120s subs to demonstrate the difference in faint detail noise as well.


Catching Ancient Photons

OP rnclark Senior Member • Posts: 3,185
Re: Best Sub-Exposure Time

Jon Rista wrote:

Interesting information. I am not sure how to reconcile what you are advocating here with actual measurements of my data. They don't seem to jibe with your numbers for the 5D III, as you indicate I should be using only 2-minute subs, which definitely deliver significantly noisier results, particularly in my red channel, unless I gather a LOT of subs...and to get 8 hours of integration (my preferred amount at my dark site) with 2-minute subs, I would need 240 subs; at ISO 800 and 1-minute subs, that would be 480 subs; at ISO 1600 and 30-second subs, a whopping 960 subs!! Too much!

The future is close: we will just take video and do lucky imaging, like with planetary.  Use my spreadsheet and set the inter-frame delay to zero.

Until you do some short exposures and make video loops to see the seeing effects, you really can't assess the impact on image quality. In your previously posted information you said you typically have 8 or so arc-second FWHM star images. Your previously posted image from PHD shows 8.6 FWHM and a very ugly star image. Star images like that certainly impact fine detail. You image at about 2 arc-seconds per pixel and have 8-arc-second FWHM, so certainly not great seeing.

Rista's ugly star image

However there is one thing I do believe needs to be addressed.

I don't disagree that the spikes in your animated "seeing" video are not due to tracking issues...because they are not tracking issues.

Glad to see you agree because you previously called it tracking errors.

The wobbling of the scene and stars is most definitely seeing, what with the inconsistency of it throughout the field. However the thing that is causing those spikes off the bottom or bottom-right of the stars, which is 100% consistent across the frame (every star experiences the same stretching in the same direction at the same time when it occurs), is something within your imaging system flexing.

No, it is not flexing. Before I go there, here is another sequence of M51 made almost directly overhead: 30-second exposures where all 106 frames were good and not affected by seeing.  I used 100% of the frames.  Final image is here:

http://www.clarkvision.com/galleries/gallery.astrophoto-1/web/m51-420mm-rnclark_ipasa-ds2sdrizl-c03.10.2016.oJ6A8788-923.d-0.67x-c1s.html

A 30-frame clip that is representative:

full resolution video clip, 2.0 arc-sec/pixel, after alignment.

Examining the full M8 images, the problem with the spikes appears to be due to wind shake.  I don't remember wind (imaging session last August).  The above M51 imaging session was made in March in very windy conditions.  I even moved my car close to the setup to help block wind.  Yet, all 106 frames show no wind shake.

Your tracking and flexure idea is without merit.  Flexure is not going to happen on a few-second time span.  The spikes in the image are indicative of few-second image shifts, so extreme seeing or wind, and most likely wind.  It could also happen if the ground were mushy and people were walking around, but this was not the case for this image, and no one was walking around the system, including me.


I stress this point, because you are apparently tossing a WHOPPING 50% of your frames to improve your results? Even when I was a total beginner almost two and a half years ago, I honestly cannot say I ever threw away 50% of my subs. I maybe tossed 20-30% very early on, but I try to avoid tossing subs like that if I can. I can also say that the only time I experienced spiking in my stars like your video exhibits is when I was suspending my 600mm lens into open air over the tripod foot.

You are completely missing the point.   Rejecting 50% of subs due to environmental conditions is an indicator of 1) bad night (e.g. very bad seeing and wind), and 2) sub exposures too long.  Shorter subs could give better selection between wind gusts and bad seeing episodes and result in a higher keeper rate.

So I disagree that the effects you are seeing in your video of Lagoon there is purely seeing. I also disagree that anyone should be tossing 50% of their subs...that is truly insane. These days I may toss 10%, tops, but I will often only toss one or two.

Gee Jon, did you even look at the two images?  Clearly by selecting only the best images from the set, a better image can be made.  The point is there is more than  S/N to consider.  The M51 example above illustrates that 50% rejection is not necessarily a norm, even for the poor seeing of Colorado.  The point is to have the data to do the selection.

I'll see if I can create some videos of my aligned subs from data sets I have that use shorter exposures.

Next time you are out, try 30 second exposures on a low object like M8. You might be surprised how bad it looks!  But you have an old camera that will not produce as good a final image using short exposures.  The model indicates 2 to 3 minute sub-exposures with your setup.

Roger

OP rnclark Senior Member • Posts: 3,185
Re: Best Sub-Exposure Time

Jon Rista wrote:

rnclark wrote:

D L Fischer wrote:

One additional consideration with respect to the inter-frame gap would be whether dithering was employed. When it is, the gap will be considerably longer and have an impact on the estimated efficiency.

On my setup, with moderate atmospheric stability, the gap time varies from 20 to 35 seconds. The duration of the gap generally depends indirectly on the imaging optics focal length as the user may define settling point as a fraction of pixel at image-scale. Additional factors for gap time would certainly be atmospheric stability and (to a lesser extent) degree of dithering aggressiveness.

Yes, that can certainly be added to the model. But with modern cameras, dithering is not needed. I don't dither most of the time, and this is with no darks, no flats, no bias. In the few cases in hot environments where dithering is still needed with modern cameras, one really only needs to dither every 20 or 30 frames, so the impact on efficiency can be quite low. There really is no need to dither after every frame.

Roger

I agree that dithering every frame is unnecessary, but I disagree that dithering every 20-30 frames would work. I tend to dither every 2-3 frames myself. I've optimized my inter-frame overhead time so that it is about 10-15 seconds for everything...frame download, dithering, settling. At every 3 frames, that works quite well, although not perfectly.

Every 20-30 frames, and you're going to experience some amount of correlated noise. Correlated noise does not require hot pixels...it simply requires alignment in the low-level fixed patterns of your noise and bias signal. If you are not calibrating, then your bias signal is in every frame, and that WILL have patterns (every camera, even the best CCD cameras cooled well below 0C, has bias patterns). Even a pixel's worth of drift every couple of frames will result in correlated noise.

Jon, again, your experience is with an old camera that has significant banding problems.  Try a newer camera.  Note I said "with modern cameras."

Jon Rista Contributing Member • Posts: 681
Re: Best Sub-Exposure Time

rnclark wrote:

Jon Rista wrote:

Interesting information. I am not sure how to reconcile what you are advocating here with actual measurements of my data. They don't seem to jibe with your numbers for the 5D III, as you indicate I should be using only 2-minute subs, which definitely deliver significantly noisier results, particularly in my red channel, unless I gather a LOT of subs...and to get 8 hours of integration (my preferred amount at my dark site) with 2-minute subs, I would need 240 subs; at ISO 800 and 1-minute subs, that would be 480 subs; at ISO 1600 and 30-second subs, a whopping 960 subs!! Too much!

The future is close: we will just take video and do lucky imaging, like with planetary. Use my spreadsheet and set the inter-frame delay to zero.

Until you do some short exposures and make video loops to see the seeing effects, you really can't assess the impact on image quality. In your previously posted information you said you typically have 8 or so arc-second FWHM star images. Your previously posted image from PHD shows 8.6 FWHM and a very ugly star image. Star images like that certainly impact fine detail. You image at about 2 arc-seconds per pixel and have 8-arc-second FWHM, so certainly not great seeing.

Please check my most recent reply. I just posted a video of 30-second subs. I have nothing like you indicated. Imaging was about 50 degrees over the eastern horizon.

Rista's ugly star image

However there is one thing I do believe needs to be addressed.

I don't disagree that the spikes in your animated "seeing" video are not due to tracking issues...because they are not tracking issues.

Glad to see you agree because you previously called it tracking errors.

No, I called it flexure.

"The inconsistent wobble is seeing, however the jarring that causes the spikes, which are consistent across the frame... That is flexure right there, my friend. It actually looks like a fairly severe flexure issue."

The wobbling of the scene and stars is most definitely seeing, what with the inconsistency of it throughout the field. However the thing that is causing those spikes off the bottom or bottom-right of the stars, which is 100% consistent across the frame (every star experiences the same stretching in the same direction at the same time when it occurs), is something within your imaging system flexing.

No, it is not flexing. Before I go there, here is another sequence of M51 made almost directly overhead: 30-second exposures where all 106 frames were good and not affected by seeing. I used 100% of the frames. Final image is here:

http://www.clarkvision.com/galleries/gallery.astrophoto-1/web/m51-420mm-rnclark_ipasa-ds2sdrizl-c03.10.2016.oJ6A8788-923.d-0.67x-c1s.html

A 30-frame clip that is representative:

full resolution video clip, 2.0 arc-sec/pixel, after alignment.

Examining the full M8 images, the problem with the spikes appears to be due to wind shake. I don't remember wind (imaging session last August). The above M51 imaging session was made in March in very windy conditions. I even moved my car close to the setup to help block wind. Yet, all 106 frames show no wind shake.

Your tracking and flexure idea is without merit. Flexure is not going to happen on a few-second time span. The spikes in the image are indicative of few-second image shifts, so extreme seeing or wind, and most likely wind. It could also happen if the ground were mushy and people were walking around, but this was not the case for this image, and no one was walking around the system, including me.

First off, in this M51 clip, I see the effects of both seeing and some light jostling of the mount. I'd say wind, but there are a number of things that can cause that, I totally agree there. I actually don't like imaging on concrete pads, because if someone walks on one end, the pad can tilt. Imperceptibly to us, but more than enough to affect subs. I generally prefer to set up on grass or dirt; I weight down the tripod a bit and make sure it's settled into the ground before imaging. At my dark site, I also use my car to block wind. I also bring along a tarp and additional tripods to set up additional wind blocks if I need them. Wind is really a killer, absolutely no argument from me there!

As for flexure. There is not just one kind of flexure. I believe you are thinking of differential flexure, where the flex between a guide scope and imaging scope differs, resulting in an additional drift that can shift stars between frames, or even elongate stars. That does tend to require a greater length of time to exhibit as a problem.

The flexure I am talking about is a flexing of the lens or telescope itself. Especially when a large lens like the Canon EF 300mm f/2.8 L is attached to the mount via the tripod foot, and the objective end of the lens is allowed to hang suspended. That DOES result in flexing, which results in stars stretching and jostling like your videos demonstrate. It does not require a long period of time. It is even possible for a large, long prime lens like Canon's great whites to bounce around enough to produce multiple spikes in a single shorter sub (i.e. 120s). It is also possible that your videos were picking up jolts from wind, I'll offer that. My point is, the spiking is like nothing I've ever seen from seeing. I don't generally image below ~30 degrees over the horizon. Too much atmosphere; it definitely softens things up. I think the only time I did was to try to image Corona Australis, for which I did not bother processing the data as it was too soft. I can try to put together a video of that, as I don't think I used particularly long subs.

I stress this point, because you are apparently tossing a WHOPPING 50% of your frames to improve your results? Even when I was a total beginner almost two and a half years ago, I honestly cannot say I ever threw away 50% of my subs. I maybe tossed 20-30% very early on, but I try to avoid tossing subs like that if I can. I can also say that the only time I experienced spiking in my stars like your video exhibits is when I was suspending my 600mm lens into open air over the tripod foot.

You are completely missing the point. Rejecting 50% of subs due to environmental conditions is an indicator of 1) bad night (e.g. very bad seeing and wind), and 2) sub exposures too long. Shorter subs could give better selection between wind gusts and bad seeing episodes and result in a higher keeper rate.

No, I understand that. That is why I brought up the concept of lucky imaging. However, when it comes to deep sky imaging...every minute of time you spend exposing is precious, IMO. I spend hours and hours at my dark site. I'll head out there just before sunset, and often won't come home until I see astronomical twilight on the other end. I've used subs ranging from 120s to 720s out there, depending on how dark it actually is, and how faint the object I am imaging is.

So I disagree that the effects you are seeing in your video of Lagoon there is purely seeing. I also disagree that anyone should be tossing 50% of their subs...that is truly insane. These days I may toss 10%, tops, but I will often only toss one or two.

Gee Jon, did you even look at the two images? Clearly by selecting only the best images from the set, a better image can be made. The point is there is more than S/N to consider. The M51 example above illustrates that 50% rejection is not necessarily a norm, even for the poor seeing of Colorado. The point is to have the data to do the selection.

Certainly I did. I think there are better ways.

I prefer not to toss any subs at all, if I can avoid it. To that end, I've optimized my cabling to minimize cable tug issues. I always use a wind block (combination of my car and tarps, as I mentioned before). I don't bother imaging low in the horizon at all...if I want to capture one of those objects, I'll just head south and do it properly, rather than fight the environment and waste 50% of my time.

That is how I look at it. As I said, lucky imaging is certainly a viable approach, however I've never seen it done with longer exposures. If that is something that really does interest you, you might want to look into some of the new DSO imaging cameras that ZWO is producing. There are some amazing cameras out there. The ASI 1600m has only 1.2e- read noise, is cooled (so consistently low dark current year round), and with smaller ROI readout it can reach very high frame rates for lucky imaging. Even at full frame size, it delivers 23fps.

http://astronomy-imaging-camera.com/products/usb-3-0/asi1600mm/

ZWO has some other cameras with smaller sensors as well that can reach frame rates well over 300fps (one can even get over 700fps, for very high gain, short exposure work such as lunar and solar).

Anyway. I don't deny the potential benefits of lucky imaging. I just think that for the most part, DSO imaging is benefitted by eliminating as many of the environmental impacts as possible.

I'll see if I can create some videos of my aligned subs from data sets I have that use shorter exposures.

Next time you are out, try 30 second exposures on a low object like M8. You might be surprised how bad it looks! But you have an old camera that will not produce as good a final image using short exposures. The model indicates 2 to 3 minute sub-exposures with your setup.
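Roger's linked article has the full model, but the core of a sub-length recommendation like "2 to 3 minutes" can be sketched with the usual rule of thumb: expose each sub until sky shot noise swamps read noise. This is a simplified sketch, not the article's actual model, and the read noise, sky rate, and swamp factor below are hypothetical.

```python
# Rule-of-thumb sub-exposure estimate: expose until sky shot noise
# swamps read noise. Simplified sketch only; the full model (see the
# clarkvision article) also weighs inter-frame gaps, seeing, tracking.
def min_sub_exposure(read_noise_e, sky_rate_e_per_s, swamp_factor=3.0):
    """Shortest sub (seconds) where sky shot noise >= swamp_factor * read noise.

    Sky shot noise = sqrt(sky_rate * t); solve
    sqrt(sky_rate * t) = swamp_factor * read_noise for t.
    """
    return (swamp_factor * read_noise_e) ** 2 / sky_rate_e_per_s

# Hypothetical numbers: 2.5 e- read noise, 0.5 e-/s/pixel sky glow.
t = min_sub_exposure(2.5, 0.5)
print(f"suggested minimum sub length: {t:.0f} s")  # 112 s, i.e. ~2 minutes
```

Note how the answer scales with the square of read noise: a noisier sensor pushes the recommended sub length up fast, which is why camera choice matters here.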

I don't bother with low objects. Not from where I live, anyway. I feel it's a futile fight to try to image something through such a tall atmospheric column, as obviously seeing is going to be significantly worse.

When I image at ISO 1600, I usually use about 2 minute subs. I have an example of 120s ISO 1600 subs coming in a bit here. I'll share once the video is done. I do believe that even at ISO 1600, imaging longer than 30 seconds is valuable. Dynamic range can suffer, however dynamic range is also something we have ways of expanding. For example, PixInsight has HDRComposition, a tool that allows you to take several sets of subs...most of them longer, then several shorter sets for "the bright stuff". The tool will linear fit all the data together, then combine the shorter subs, which contribute much more of the camera's DR to brighter details, into the longer subs. This eliminates issues with clipping, and can expand your dynamic range by, theoretically, an unlimited number of stops. I used this technique to process my Orion Sword image, which obviously had problems with the core of the nebula blowing out with the longer subs (and would have even if I'd used a lower ISO setting.)
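The linear-fit-then-combine idea can be sketched in a few lines. This is only an illustration of the concept, not PixInsight's actual HDRComposition algorithm, and the scene, exposure times, and full-well value are all synthetic:

```python
import numpy as np

# Sketch of the idea behind an HDR combination of short and long subs.
# Long subs clip the bright core; short subs don't. Fit the short data
# to the long data's scale over the unclipped overlap, then patch the
# clipped pixels. All numbers here are synthetic.
rng = np.random.default_rng(0)
true_flux = rng.uniform(0, 5000, size=1000)            # e-/s, synthetic scene
FULL_WELL = 60000.0

long_sub  = np.minimum(true_flux * 60, FULL_WELL)      # 60 s: bright pixels clip
short_sub = np.minimum(true_flux * 5,  FULL_WELL)      # 5 s: nothing clips here

# Linear fit short -> long using only pixels the long sub didn't clip.
ok = long_sub < FULL_WELL
slope, intercept = np.polyfit(short_sub[ok], long_sub[ok], 1)

# Replace clipped long-sub pixels with rescaled short-sub values.
hdr = np.where(long_sub >= FULL_WELL, slope * short_sub + intercept, long_sub)

print(f"clipped pixels recovered: {np.sum(~ok)}")
print(f"fitted slope: {slope:.2f} (expect ~12 = 60s/5s)")
```

The fitted slope recovers the exposure ratio, so the patched pixels land on the same linear scale as the rest of the long integration.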

It's a little funny. I use an old noisy camera (and I whole-heartedly admit it, the 5D III is a noisy sucker! No argument from me there), while you use a new modern camera with lower noise (particularly dark current). On the flip side, and not to be rude here, just an observation, you seem to be hampered by older processing algorithms (and even techniques, such as when it comes to registration and integration), while I am using some of the most cutting edge processing technology on the planet (I would argue that PixInsight is well ahead of anyone else, particularly ImagesPlus, but also MaxImDL and CCDStack).

I would honestly be interested in seeing what you can do with that low noise camera of yours, if you paired it with PixInsight. I think if you gave PixInsight a try, it would open your mind to some of the possibilities that advanced algorithms and greater computing power can offer these days that we simply did not have a decade ago. While I don't think anything can correct the spiking evident in your Lagoon video, blurring caused by seeing? Totally correctable with effective pre- and post-processing techniques. Weighted integrations. Drizzling. Deconvolution (not sharpening, I mean true deconvolution). Star reduction. I think you would be surprised what software can do to overcome most of the softening that seeing introduces, and how much detail you can extract from your images. (Oh, and if you do try PI, give it an HONEST TRY. It's definitely got a learning curve, about the same as someone who has never used PS before trying it for the first time. It would take more than one or two cursory "once overs" to properly evaluate its capabilities. If you give it six months, at least, I bet you would be able to get better results from your own integrations and early-stage processing, at the very least.)

-- hide signature --

Catching Ancient Photons

 Jon Rista's gear list:
Canon EOS 5D Mark III Sony a6000 Canon EF 50mm f/1.4 USM Canon EF 16-35mm F2.8L II USM Canon EF 100-400mm f/4.5-5.6L IS USM +4 more
Trollmannx Senior Member • Posts: 4,727
Re: Best Sub-Exposure Time

Find this thread very interesting!

Have even changed my own imaging sessions after reading threads here for a while. The old 4-8 min sub exposures are completely gone (atmospheric blurring being the main problem).

Now my typical sub exposures are 1-2 min (realizing that is deep enough if stacking lots of them). Did some tests verifying some claims posted here about subs, limiting magnitude and stacking. Testing and analyzing the results was good fun and a fine learning experience.

Also learned that using too short sub exposures, my Canon 6D and 7DII will show banding looking like bar codes (well, exaggerating a bit). Suspect that banding shows up when the background is too underexposed - seems like cameras like these are tuned to deliver splendid results as soon as there is some information in the deepest shadows. A blank exposure is of little value to ordinary photographers living in a real world anyway.

It is interesting to find the sweet spot for every lens and telescope used. Right now my next project is to find the shortest focal length needed to get maximum resolution from my site - seen from a strictly practical point of view (have all the numbers needed but turbulence will get the last word).

So thank you for sharing the information here. Very much appreciated!

Jon Rista Contributing Member • Posts: 681
Re: Best Sub-Exposure Time

rnclark wrote:

Jon Rista wrote:

rnclark wrote:

D L Fischer wrote:

One additional consideration with respect to the inter-frame gap would be whether dithering was employed. When it is, the gap will be considerably longer and have an impact on the estimated efficiency.

On my setup, with moderate atmospheric stability, the gap time varies from 20 to 35 seconds. The duration of the gap generally depends indirectly on the imaging optics' focal length, as the user may define the settling point as a fraction of a pixel at image scale. Additional factors for gap time would certainly be atmospheric stability and (to a lesser extent) the degree of dithering aggressiveness.

Yes, that can certainly be added to the model. But with modern cameras, dithering is not needed. I don't dither most of the time, and this is with no darks, no flats, no bias. In the few cases in hot environments where dithering is still needed with modern cameras, one really only needs to dither every 20 or 30 frames, so the impact on efficiency can be quite low. There really is no need to dither after every frame.
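The amortized cost of dithering is easy to put numbers on. A minimal sketch, assuming a fixed per-frame download gap plus a dither-and-settle delay that only applies to dithered frames (the 120 s / 5 s / 25 s values are hypothetical):

```python
# Sketch of how dither frequency affects imaging efficiency, assuming a
# fixed per-frame download gap plus a settle delay amortized over the
# frames between dithers. All timing numbers are hypothetical.
def efficiency(sub_s, gap_s, dither_s, frames_per_dither):
    """Fraction of wall-clock time spent actually exposing."""
    overhead = gap_s + dither_s / frames_per_dither   # amortized per frame
    return sub_s / (sub_s + overhead)

# Hypothetical numbers: 120 s subs, 5 s download gap, 25 s dither+settle.
for n in (1, 3, 30):
    print(f"dither every {n:2d} frames: {efficiency(120, 5, 25, n):.1%}")
```

With these numbers, dithering every frame costs noticeably more wall-clock time than dithering every few frames, and the gain from dithering only every 30 frames over every 3 is small, which is the shape of the trade being argued here.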

Roger

I agree that dithering every frame is unnecessary, but I disagree that dithering every 20-30 frames would work. I tend to dither every 2-3 frames myself. I've optimized my inter-frame overhead time so that it is about 10-15 seconds for everything...frame download, dithering, settling. At every 3 frames, that works quite well, although not perfectly.

Every 20-30 frames, and you're going to experience some amount of correlated noise. Correlated noise does not require hot pixels...it simply requires alignment in the low level fixed patterns of your noise and bias signal. If you are not calibrating, then that means your bias signal is in every frame, and that WILL have patterns (every camera, even the best CCD cameras cooled well below 0C, has bias patterns). Even a pixel worth of drift every couple of frames will result in correlated noise.
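The point that an undithered, uncalibrated stack preserves any fixed bias pattern, while dithering smears it out, can be shown with a toy simulation (synthetic 1-D frames with an artificial sine "banding" pattern, not real camera data):

```python
import numpy as np

# Toy simulation: stacking uncalibrated frames that share a fixed bias
# pattern. Without dithering the pattern survives the stack at full
# strength; random dithers (after re-alignment on the sky) smear it out.
rng = np.random.default_rng(1)
n_frames, width = 64, 256
pattern = np.sin(np.arange(width) * 0.5) * 10.0   # fixed "banding", 10 e- amplitude
read_noise = 5.0

def stack(dither=False):
    acc = np.zeros(width)
    for _ in range(n_frames):
        frame = pattern + rng.normal(0, read_noise, width)
        if dither:
            # Frames are re-aligned on the stars before stacking, so the
            # bias pattern lands at a random offset in each aligned frame.
            frame = np.roll(frame, rng.integers(0, width))
        acc += frame
    return acc / n_frames

fixed    = stack(dither=False)
dithered = stack(dither=True)
print(f"residual pattern, no dither: {fixed.std():.2f} e-")
print(f"residual pattern, dithered:  {dithered.std():.2f} e-")
```

The undithered stack keeps the full pattern amplitude no matter how many frames go in; the dithered stack knocks it down roughly by the square root of the frame count, which is exactly the correlated-noise argument being made.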

Jon, again, your experience is with an old camera that has significant banding problems. Try a newer camera. Note I said "with modern cameras."

I've seen correlated noise in your own images from the 7D II. Again, the issue with correlated noise is not just related to hot pixels. It is related to patterns. Even the 7D II has patterns in its bias signal. Certainly far less than its predecessor, but it is not devoid of all pattern. Even the 5Ds has pattern in its bias signal. It's Canon's Achilles heel...I've been waiting for too many years for them to fix it...gave up hope a long time ago.

I have also processed data from the 7D II, from the D810a, from countless astro-modded entry level DSLRs, as well as a wide range of CCD cameras. Even a temp-regulated CCD camera at -20C will experience correlated noise without either bias subtraction or dithering (many of the newer low dark current Sony CCD sensors that do not require dark subtraction can still experience correlated noise from bias patterns.)


As a side note. I've said this in the past, but I clearly need to say it again. Not every astrophotographer, particularly the beginners, is willing or even capable of spending a lot of money on a DSLR like the 7D II. Technically speaking, that camera is totally overkill for most beginners. Most beginners who go for a DSLR end up picking an older one up used. While Nikon's D5000 series have become more popular recently, there are still far more astrophotographers out there with good old Canon Rebels. Some of them going as far back as the 400D.

Since not everyone is capable of spending $1500 on a 7D II, and god forbid $6000 on the highly efficient 300mm f/2.8 L II lens, I think it is only prudent to offer them an example of what a camera more in line with what a majority of astrophotographers are likely to be using is capable of, and how to make the most of it.

Again, not to be rude...just an observation. But you seem to have this very narrow tunnel vision when it comes to astrophotography: that everyone is going to be using the 7D II, and that they will benefit from its lower noise. Your advice, being inexorably bound to that one specific DSLR, can be...well, let's just say confusing and less than ideally helpful for those who don't have it, can't get it, or don't want it.

OP rnclark Senior Member • Posts: 3,185
Re: Best Sub-Exposure Time

Jon Rista wrote:

rnclark wrote:

Next time you are out, try 30 second exposures on a low object like M8. You might be surprised how bad it looks! But you have an old camera that will not produce as good a final image using short exposures. The model indicates 2 to 3 minute sub-exposures with your setup.

I don't bother with low objects. Not from where I live, anyway. I feel it's a futile fight to try to image something through such a tall atmospheric column, as obviously seeing is going to be significantly worse.

Well, why not show us a video at full resolution of a crop of M8 you did last year at your Quincy observing spot not far from your home:

https://www.astrobin.com/186860/E/

You made 55 sub frames, though each was 7 minutes long. Your full resolution image shows good-sized halos around bright stars, the effect I have seen with seeing and/or tracking/stacking errors. It would be interesting to see if seeing variations would show with such long integration times.

You can make an animated gif that plays well on dpreview using photoshop:

http://blog.hubspot.com/marketing/how-to-create-animated-gif-quick-tip-ht

Roger

Jon Rista Contributing Member • Posts: 681
Re: Best Sub-Exposure Time

rnclark wrote:

Jon Rista wrote:

rnclark wrote:

Next time you are out, try 30 second exposures on a low object like M8. You might be surprised how bad it looks! But you have an old camera that will not produce as good a final image using short exposures. The model indicates 2 to 3 minute sub-exposures with your setup.

I don't bother with low objects. Not from where I live, anyway. I feel it's a futile fight to try to image something through such a tall atmospheric column, as obviously seeing is going to be significantly worse.

Well, why not show us a video at full resolution of a crop of M8 you did last year at your Quincy observing spot not far from your home:

https://www.astrobin.com/186860/E/

You made 55 sub frames, though each was 7 minutes long. Your full resolution image shows good-sized halos around bright stars, the effect I have seen with seeing and/or tracking/stacking errors. It would be interesting to see if seeing variations would show with such long integration times.

You can make an animated gif that plays well on dpreview using photoshop:

http://blog.hubspot.com/marketing/how-to-create-animated-gif-quick-tip-ht

Roger

The "halos" are actually the starburst diffraction effect I get from stopping my lens down to f/4.5. I believe I already mentioned that recently. I used Photoshop to create the other GIF. I think it was simply too large. However, I did link the original GIF url so you can still view it.

I guess M8 is about 27 degrees up? Probably one of the lowest targets I've imaged. As I mentioned before, I don't worry much about the bright stars clipping. That said, my bright stars were quite round in that image, as I don't have tracking or stacking errors. Seeing at that altitude would be worse, however I still never experienced any of the spiking you show in your images.

I've been working on a video comparison of Orion Nebula with short and long subs, to demonstrate the fact that 4x longer exposure did NOT bloat my stars. I don't know if I'll have time after that is done to create another video of M8...if I do, I'll try.

sharkmelley Senior Member • Posts: 1,622
Re: Best Sub-Exposure Time

A thought-provoking post, as usual.

The video is very interesting. Putting aside the star spikes, which may have been caused by wind gusts, it is most interesting to see the jelly-like wobble across the whole frame. I'm guessing that stacking algorithms are going to have great difficulty compensating for that.

When imaging under challenging conditions it certainly makes sense to throw away the worst exposures.  Yours is an interesting example of this.

Mark

 sharkmelley's gear list:
Sony a7S +1 more
Jon Rista Contributing Member • Posts: 681
Re: Best Sub-Exposure Time

Alright, sorry for the delay in producing these. Had to finish up work. I just wanted to show the difference between using very short sub frames vs. longer sub frames (in this case, 30s vs. 120s, to keep it in line with the kinds of exposures I think most people here might be using).

I grabbed the first 25 frames for the 120s sequence so it could be compared with the 25 frames for the 30s sequence. I linear fit the 120s subs to the 30s subs to normalize the data. I then stretched the 30s subs, and applied that same stretch to the 120s subs. I finally batch cropped and exported all the frames for import into PS to create the video clips.

First, the 30s sequence:

See video here: http://i.imgur.com/dYf108m.gif

And the 120s sequence:

See video here: http://i.imgur.com/fCFLbBl.gif

No appreciable difference in fine details with the longer subs, despite being 4x as long as the short subs. The faint stars are the same size. Even fainter stars are appearing thanks to the lower relative noise. A very slight increase in flicker among the brighter stars with the longer subs. Nothing I would get worked up over, and certainly nothing resembling the spiking in Roger's subs. I still believe those spikes are from flexure, or possibly wind...either way, it does not appear to be a seeing artifact to me.

It should be noted that the 120s sequence was acquired before the 30s sequence, so Orion was lower in the atmosphere. I honestly cannot say how low, but I usually don't image below ~30 degrees above the horizon due to increased effects from atmospheric turbulence.

The largest difference between the two videos is not some huge increase in seeing effects in the one from the longer subs. The largest difference is the amount of noise in the faint details. Significantly more noise in the 30s subs video. Now, that can be overcome, but that gets to the heart of the point I was making in the other thread. You would need hundreds of subs to normalize the SNR of an image integrated out of 30s subs to the same SNR you could get out of longer subs like the 120s video. I used 25 subs for each of these videos. If they were actually integrated (stacked), I would need four times as many 30s subs...at the very least. However since we are talking same ISO here, I would actually be adding more total read noise with the shorter subs...so I would need more than four times as many to actually get the same SNR as stacking 25 120s subs.

Oh, and here is the real rub with using lots of shorter subs. If SNR is important to you, especially if it is more important than raw resolution (because there are ways of recovering resolution, there is no way to increase the photons gathered other than to actually gather them!), stacking so many short subs is going to normalize the FWHM anyway. What you might have gained by discarding 50% or more of your subs in an attempt to "lucky image" a DSO wouldn't be possible if you need the SNR to pull out faint details cleanly. By stacking 90% of those subs, the integration algorithm is going to sample and distribute all of the wobbling of all of the input stars, and there really won't be any difference in star bloat vs. the integration from the longer subs. It tends to balance out in the end.

The clipping of bright areas with the longer subs? That was why I took several sets of subs, both longer and shorter, down to 5s for the Trap. I linear fit all of the integrations from each set I gathered to the 5s integration, then performed a high dynamic range combination of each into the longer integrations one at a time. That recovered all of the dynamic range, from the very faint background details to the bright Trapezium core. So if I am imaging a region with both very bright as well as very faint objects, such as Orion...or say such as M81/82 and the IFN in the region of space around them, I would much rather gather fewer, longer subs (even if that just means 120s at ISO 1600 or 240s at ISO 800) and then grab another set of shorter subs for the bright objects and HDR combine them...than to gather hundreds if not thousands (yes, to get good SNR on IFN, I do believe it would be all too easy to integrate 1000 30s subs and still not have sufficient SNR!) of subs and deal with sorting, culling, and integrating them all.

Okiedrifter Forum Member • Posts: 88
Re: Best Sub-Exposure Time

Man. I can't add any useful info here. But when Jon and Roger discuss things I sure learn a lot. Both of you guys have helped me immensely on my imaging and processing!

 Okiedrifter's gear list:
Canon EOS Rebel T4i Canon EOS 7D Mark II Canon EF 50mm f/1.8 II Canon EF-S 55-250mm f/4-5.6 IS II Canon EF-S 18-135mm F3.5-5.6 IS STM +1 more
OP rnclark Senior Member • Posts: 3,185
Re: Best Sub-Exposure Time

Jon Rista wrote:

Alright, sorry for the delay in producing these. Had to finish up work. I just wanted to show the difference between using very short subframes vs. longer sub frames (in this case, 30s vs. 120s, to keep it in line with the kinds of exposures I think most people here might be using).

I grabbed the first 25 frames for the 120s sequence so it could be compared with the 25 frames for the 30s sequence. I linear fit the 120s subs to the 30s subs to normalize the data. I then stretched the 30s subs, and applied that same stretch to the 120s subs. I finally batch cropped and exported all the frames for import into PS to create the video clips.

First, the 30s sequence:

See video here: http://i.imgur.com/dYf108m.gif

And the 120s sequence:

See video here: http://i.imgur.com/fCFLbBl.gif

No appreciable difference in fine details with the longer subs, despite being 4x as long as the short subs.

The shorter subs have higher contrast in the fine details and are resolving finer details in moments of better seeing.


The largest difference between the two videos is not some huge increase in seeing effects in the one from the longer subs. The largest difference is the amount of noise in the faint details. Significantly more noise in the 30s subs video.

This is basic math and physics.  Of course a 30 second exposure is noisier, but combine four 30-second images and the noise will be indistinguishable visually from a 120 second exposure.  Then select a high resolution set of 30 second frames and you will have a 120 second image with finer details and higher contrast in the details.
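That "basic math and physics" can be put in a short sketch. The signal, sky, and read-noise rates below are hypothetical, chosen only to contrast a low-read-noise sensor with an older, noisier one: with negligible read noise, four stacked 30 s subs land within a hair of one 120 s sub, while with high read noise the longer sub pulls measurably ahead.

```python
import math

# Back-of-envelope SNR comparison: four stacked 30 s subs vs one 120 s
# sub, shot noise plus read noise only. All rates are hypothetical.
def stack_snr(signal_rate, sky_rate, read_noise, sub_s, n_subs):
    signal = signal_rate * sub_s * n_subs
    noise = math.sqrt((signal_rate + sky_rate) * sub_s * n_subs
                      + n_subs * read_noise ** 2)   # one read-noise hit per sub
    return signal / noise

sky, sig = 2.0, 1.0           # e-/s/pixel, hypothetical
for rn in (1.5, 8.0):         # modern low-read-noise vs older sensor
    short = stack_snr(sig, sky, rn, 30, 4)
    long_ = stack_snr(sig, sky, rn, 120, 1)
    print(f"read noise {rn} e-: 4x30s SNR = {short:.1f}, 1x120s SNR = {long_:.1f}")
```

This is the crux of both sides of the argument: which sub length "costs" SNR depends almost entirely on how large the per-sub read-noise penalty is relative to the sky background.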

Put your image through software like that used for planetary images and you'll get a much higher resolution image (assuming the software can handle the big images).

Now, that can be overcome, but that gets to the heart of the point I was making in the other thread. You would need hundreds of subs to normalize the SNR of an image integrated out of 30s subs to the same SNR you could get out of longer subs like the 120s video. I used 25 subs for each of these videos. If they were actually integrated (stacked), I would need four times as many 30s subs.

To be clear, it is total integration time that matters, not the number of subs.  Complaining about more subs is irrelevant--complain about exposure time.  If you want 8 hours of exposure, just make 8 hours of exposures, regardless of the sub exposure length.

..at the very least. However since we are talking same ISO here, I would actually be adding more total read noise with the shorter subs...so I would need more than four times as many to actually get the same SNR as stacking 25 120s subs.

Oh, and here is the real rub with using lots of shorter subs. If SNR is important to you, especially if it is more important than raw resolution (because there are ways of recovering resolution, there is no way to increase the photons gathered other than to actually gather them!), stacking so many short subs is going to normalize the FWHM anyway.

Not if it is done intelligently.  I did it by hand selecting.  Software could handle many more faster.
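The software-driven selection alluded to here can be sketched as a simple FWHM cut. The per-frame FWHM values below are synthetic stand-ins for real star measurements (tools like PixInsight's SubframeSelector compute the real thing):

```python
import numpy as np

# Sketch of algorithmic frame selection by measured FWHM. The values
# here are synthetic stand-ins for per-frame star measurements.
rng = np.random.default_rng(2)
fwhm = rng.normal(3.5, 0.6, size=200)     # arcsec, one value per sub

# Cull relative to the best seeing of the night rather than by a fixed
# percentage: bad nights cull more frames, good nights cull fewer.
threshold = np.percentile(fwhm, 25) * 1.15
keep = fwhm <= threshold
print(f"kept {keep.sum()} of {fwhm.size} subs "
      f"(threshold {threshold:.2f} arcsec FWHM)")
```

A relative threshold like this is one way to reconcile the two positions: on a steady night almost nothing is rejected, while on a night like the Lagoon video a large fraction would be.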

What you might have gained by discarding 50% or more of your subs in an attempt to "lucky image" a DSO wouldn't be possible if you need the SNR to pull out faint details cleanly. By stacking 90% of those subs, the integration algorithm is going to sample and distribute all of the wobbling of all of the input stars, and there really won't be any difference in star bloat vs. the integration from the longer subs. It tends to balance out in the end.

Again you are missing the whole point.  If you are stacking low resolution subs it may improve detection of a big blob featureless nebula, but it is actually detrimental to fine detail, lowering contrast.  In an effort to bring that detail back, one needs to stretch more, magnifying noise, making things worse.

Deconvolution increases noise in the process of improving resolution.  Both my images of M8 at the beginning of the thread had deconvolution and star reduction methods applied.  It certainly helped (the difference would be much worse if not applied), but could not overcome the destructive effects of poor seeing/shake.

Roger

swimswithtrout Veteran Member • Posts: 3,005
Even "Rocket Scientists" screw up

rnclark wrote:


In the following figure, which image has the highest image quality? Technically the highest S/N is the longer exposure, 38 minutes on the right. But seeing variations meant if I wanted to make a sharper image, I needed to throw out the exposures blurred by bad seeing and sacrifice S/N. I ended up throwing out half the exposures to produce the image on the right.

300 mm f/2.8, Canon 7D2, 2.8 arc-seconds/pixel, full resolution crop.


Roger

Which "right" do you mean ? What's on the left ??

OP rnclark Senior Member • Posts: 3,185
Re: Even "Rocket Scientists" screw up

swimswithtrout wrote:

rnclark wrote:

In the following figure, which image has the highest image quality? Technically the highest S/N is the longer exposure, 38 minutes on the right.

Oops, 38 minutes is on the left.

But seeing variations meant if I wanted to make a sharper image, I needed to throw out the exposures blurred by bad seeing and sacrifice S/N. I ended up throwing out half the exposures to produce the image on the right.

300 mm f/2.8, Canon 7D2, 2.8 arc-seconds/pixel, full resolution crop.

Roger

Which "right" do you mean ?

My other right!

Thanks for catching that.

Jon Rista Contributing Member • Posts: 681
Re: Best Sub-Exposure Time
1

rnclark wrote:

Jon Rista wrote:

Alright, sorry for the delay in producing these. Had to finish up work. I just wanted to show the difference between using very short subframes vs. longer sub frames (in this case, 30s vs. 120s, to keep it in line with the kinds of exposures I think most people here might be using).

I grabbed the first 25 frames for the 120s sequence so it could be compared with the 25 frames for the 30s sequence. I linear fit the 120s subs to the 30s subs to normalize the data. I then stretched the 30s subs, and applied that same stretch to the 120s subs. I finally batch cropped and exported all the frames for import into PS to create the video clips.

First, the 30s sequence:

See video here: http://i.imgur.com/dYf108m.gif

And the 120s sequence:

See video here: http://i.imgur.com/fCFLbBl.gif

No appreciable difference in fine details with the longer subs, despite being 4x as long as the short subs.

The shorter subs have higher contrast in the fine details and are resolving finer details in moments of better seeing.

Based on...what? A subjective observation of a GIF animation? The shorter subs have significantly more noise. That diminishes our ability to properly evaluate such differences visually. I'm sorry, going to call bogus on this one. *shrug*

Besides, I have so many tools at my disposal to enhance detail. Not that I would need to, as I believe the contrast differences boil down to the noise, which is all a matter of uncertainty, and once the two data sets are integrated, any supposed increase in contrast you may think you are seeing would disappear.

Unless, of course, I discarded half of the data I acquired. ;P I mean, we all have lots and lots of time to just waste every clear night, right?

There are better ways to improve detail than by throwing away perfectly good data. Every sub in both of my videos is ideal. I can use every one of them. I have no spiking or jerking in my stars. I don't even have the ripple from seeing that is evident in your videos. I have no reason to throw anything away.

The largest difference between the two videos is not some huge increase in seeing effects in the one from the longer subs. The largest difference is the amount of noise in the faint details. Significantly more noise in the 30s subs video.

This is basic math and physics. Of course a 30 second exposure is noisier, but combine four 30-second images and the noise will be indistinguishable visually from a 120 second exposure. Then select a high resolution set of 30 second frames and you will have a 120 second image with finer details and higher contrast in the details.

Put your image through software like that used for planetary images and you'll get a much higher resolution image (assuming the software can handle the big images).

Now, that can be overcome, but that gets to the heart of the point I was making in the other thread. You would need hundreds of subs to normalize the SNR of an image integrated out of 30s subs to the same SNR you could get out of longer subs like the 120s video. I used 25 subs for each of these videos. If they were actually integrated (stacked), I would need four times as many 30s subs.

To be clear, it is total integration time that matters, not the number of subs. Complaining about more subs is irrelevant--complain about exposure time. If you want 8 hours of exposure, just make 8 hours of exposures, regardless of the sub exposure length.

*sigh* You're talking in circles now. I always want HOURS worth of integration, not a specific sub count. I've said it so many times, but again, for the record: I aim for a minimum of four hours, and my ultimate goal is at least eight hours (and if I can get more for particularly faint objects, I do everything in my power to do so.)

Let's say I'm going for 8 hours of integration. According to your plots from the OP of this thread, I would need to expose for ~30 seconds at ISO 1600 to reach that optimal efficiency peak. So, to get eight HOURS of 30s subs, I would need an absolutely insane 960 subs. Are you saying that is reasonable? Honestly? I am actually asking here. I don't quite know if you actually think that is reasonable or not.

What is it you really advocate when the imager's goal is to get a really deep integration for low noise on very faint details? On the one hand, you advocate heavily for very short subs (30-60s). However right here you do an about-face and say the same thing I've been saying for months: that it is the total integration time that matters.

However, if you combine the two...short subs and hours of total integration time...the inevitable conclusion is that one must STACK MORE SUBS to get the desired total integration time.

If you ARE saying that, AND if you are saying that I should be culling a significant percentage of my subs in order to get high resolution as well...let's just stick with 50% for now. That means I would need to acquire 16 hours of data, 1920x30s subs, and throw away HALF of them.

I honestly find that approach to astrophotography to be....insane, for lack of any better word to describe it.

So, given that...what are my options? Expose longer at a high ISO like ISO 1600? The read noise is better, but the dynamic range is worse. So I'd be clipping my stars more than if I dropped to a lower ISO, and got longer subs. Which was the same point, in totality, that I made in the other thread. Because the lower ISO settings have more DR, I could expose for longer than 2x exposure time per sub...or, stick with 2x and clip my stars less.

(This is the same hangup we always have. You seem to think I am infatuated with sub count. I am not. If anything, I am infatuated with getting eight hours of integration time. ;P If it wasn't for the weather, I would be getting 8 hours on every object, more for very faint things like IFN fields. This is the argument I've been making for months. I want hours of integration, not minutes. I advocate that everyone on this forum get at least a few hours of integration, as many as they can muster. However we advocate different exposure times. You advocate for very short exposures, I advocate for longer exposures. However given a fixed integration time, if the only thing we are varying is sub exposure length...then there is only one logical conclusion: that you need to stack more short subs to get the same integration time as when stacking longer subs. When it comes to HOURS of integration time (vs. minutes), then the next logical conclusion one must come to is: I am going to require a completely unreasonable number of sub frames if I stick with 30 second subs! O_o)

...at the very least. However, since we are talking the same ISO here, I would actually be adding more total read noise with the shorter subs...so I would need more than four times as many to actually get the same SNR as stacking 25 120s subs.
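A minimal sketch of that read-noise point, assuming a simple shot-noise-plus-read-noise model. The signal rate (2 e-/s/px) and read noise (3 e-) are illustrative assumptions, not measured values; the comparison holds the total integration time fixed at 3000 s:

```python
import math

def stack_snr(signal_rate, sub_len, n_subs, read_noise):
    """Per-pixel SNR of a stack: shot noise on the total signal,
    plus one read-noise penalty per sub."""
    signal = signal_rate * sub_len * n_subs
    noise = math.sqrt(signal + n_subs * read_noise**2)
    return signal / noise

long_stack = stack_snr(2.0, 120, 25, 3.0)    # 25 x 120 s
short_stack = stack_snr(2.0, 30, 100, 3.0)   # 100 x 30 s, same 3000 s total
print(long_stack, short_stack)  # the short-sub stack comes out slightly lower
```

Same total photons in both cases; the only difference is that the short-sub stack pays the read-noise toll four times as often.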

Oh, and here is the real rub with using lots of shorter subs. If SNR is important to you, especially if it is more important than raw resolution (because there are ways of recovering resolution, there is no way to increase the photons gathered other than to actually gather them!), stacking so many short subs is going to normalize the FWHM anyway.

Not if it is done intelligently. I did it by hand selecting. Software could handle many more, and faster.

Sure, but, as I note above...if I am going to cull 50% of my subs and still want eight hours of integration for low noise, I would have to acquire 16 hours worth of 30s subs (at your preferred high ISO of 1600, where read noise is lower)...or 1920 subs. At ~28 MB each, that is approximately 54 GIGS of data. Even a software algorithm (which, FWIW, I have...again, PixInsight has everything; you should really try it) would take a very long time to evaluate and cull that many subs.
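A quick back-of-envelope check on that storage figure (the ~28 MB per raw frame is the number quoted above; "GB" here means decimal gigabytes):

```python
# Storage cost of acquiring 2x the subs you intend to keep.
n_subs = 1920       # 16 hours of 30 s subs
mb_per_sub = 28     # approximate raw file size quoted above
total_gb = n_subs * mb_per_sub / 1000
print(total_gb)     # 53.76 GB, i.e. the ~54 gigs quoted
```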

As a matter of fact, it could take hours to load, demosaic, run noise evaluation and star measurements (FWHM, eccentricity, star support, etc.), and then finally apply your chosen filtering algorithm to the whole lot and move or copy your selections to another directory. I know, because I've done it on a data set of over 300 files before. Only 300, and it took a couple of hours to complete the whole culling and weighting process (algorithmically). That was with my previous computer, something a bit more average, which I expect is what most astrophotographers have at their disposal. The same process on my newer, extremely high-powered computer, with 32 GB of RAM and multiple high-speed SSDs, still takes about an hour. Fifty-four gigs of data, and a range of algorithmic analyses run on each and every frame. I believe a person would still require hours to evaluate every frame and properly cull those that did not measure up.

What you might have gained by discarding 50% or more of your subs in an attempt to "lucky image" a DSO wouldn't be possible if you need the SNR to pull out faint details cleanly. By stacking 90% of those subs, the integration algorithm is going to sample and distribute all of the wobbling of all of the input stars, and there really won't be any difference in star bloat vs. the integration from the longer subs. It tends to balance out in the end.

Again you are missing the whole point. If you are stacking low resolution subs it may improve detection of a big blob featureless nebula, but it is actually detrimental to fine detail, lowering contrast. In an effort to bring that detail back, one needs to stretch more, magnifying noise, making things worse.

Deconvolution increases noise in the process of improving resolution. Both my images of M8 at the beginning of the thread had deconvolution and star reduction methods applied. It certainly helped (the difference would be much worse if not applied), but could not overcome the destructive effects of poor seeing/shake.

You are nitpicking microscopic differences in my images, and the analysis is quite subjective at that given the significantly lower SNR of the shorter subs. Noise diminishes our ability to perceive tonal differences. So it's really tough to say definitively that the shorter subs have higher contrast on finer details. What finer details? They are all buried in noise!

Once you stack 4x as many short subs, the effects of seeing across all of them would normalize the differences. So the shorter subs are not going to be resolving any more detail than the longer ones. Not on a normalized basis. That is, unless you gather 8x as many subs and throw away half!! But that is a ludicrously insane waste of potentially hours of time. Why be so wasteful? The differences between my two videos are microscopic compared to the differences between your two videos. Both of my videos, even the one with longer subs, still demonstrate pure seeing effects, which barely cause a wobble in the stars. The effects of whatever it was causing your spikes (flexure, wind, whatever) are MASSIVE in comparison. I strongly believe there is something other than seeing causing that, and if it is something other than seeing, it can be corrected. The source of those spikes could be eliminated.

You need to give PixInsight's deconvolution a try. I suspect it is light-years ahead of ImagesPlus. And it is true deconvolution based on a proper PSF, one that can even be modeled from the actual stars in your images. Anyway, I suspect I'm barking up the wrong tree here trying to recommend PI to you.

The complaint about increasing noise is moot. How many people are actually going to acquire many hours of total integration time with 30 second subs? No one acquires over nine hundred subs, let alone nearly two thousand with the intent of throwing away half! Time is far more precious than that. When you account for a realistic inter-frame overhead (I think 4s is rock bottom; realistically it is going to be closer to 10-15 seconds once you factor in the need to refocus every few frames early on, the need to dither even if only every couple of frames, etc.), you could be spending a couple extra HOURS of time on-site gathering those 1920 subs.
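A rough sketch of how that overhead adds up, using the 4 s and 15 s per-frame bounds mentioned above (both are estimates from this discussion, not measured figures):

```python
def overhead_hours(n_subs, seconds_per_gap):
    """Total time lost to inter-frame overhead, in hours."""
    return n_subs * seconds_per_gap / 3600

print(overhead_hours(1920, 4))    # ~2.1 h even at the optimistic 4 s
print(overhead_hours(1920, 15))   # 8.0 h at the pessimistic 15 s
```

Note that at 15 s of overhead on 30 s subs, you would spend half as much time on overhead as on actual exposure.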

The assumption that you're going to have the same SNR with many, many short subs is...well, at best it is extremely hopeful, but in all practicality it is false. It is tough enough to acquire a few hundred subs. The few people I know who do usually don't go much over ~300. They, like me, do everything in their power to keep as many of them as possible.

So the potential for an integration made from longer subs to have significantly higher SNR than an integration made from shorter subs is very high. With a much higher SNR, the small amount that deconvolution will impact noise is not going to affect it enough to reduce SNR to the same low SNR you would have with only 300 30s subs.

With deconvolution and longer subs with sufficient integration time, you can have both. You can have higher SNR and finer detail. This is my point. Why waste so much time acquiring information you are just going to throw away? Why waste any time at all? There are better ways!


So we come full circle to my original argument from the other thread. Get longer subs, at a lower ISO, and get fewer of them. You get more SNR per sub. You have to deal with less inter-frame overhead (which, FTR, can be much worse than 10 seconds per frame, depending on a variety of factors). You don't need as much storage space for all the files. You don't need to spend as much time pre-processing and post-processing once you have it all on your computer at home. If you manage your environmental factors, then you shouldn't have any NEED to discard 50% of your data. You shouldn't have spikes or jolts; at worst you might have a slightly larger wobble with longer subs, as my videos indicate.

I'll still take deconvolution with longer subs. Every time. There is more to efficiency than your charts demonstrate.

--
Catching Ancient Photons
