Thanks for the samples. I have viewed all 4 images and yes, I can see the stark difference.
I'm a Mac user and can't use Sequator, so I used only StarStaX.
Starry Sky Stacker and Starry Landscape Stacker are modestly priced Mac packages that have very loyal followings.
And I heard stacking in post is a pain in the a**.
It’s not even close to that when you use one of the specialized applications. Drag and drop two sets of files.
And StarStaX doesn't have that stacking function, but it does have a dark frame subtraction function. At the end of the shoot, just shoot maybe 10 frames with the lens cap on and use the StarStaX dark frames function. It seems like a breeze. I have not tried that out yet.
What are your thoughts about the difference between stacking and using dark frames?
Thanks
Apples and oranges. Stacking’s primary effect is to reduce photon shot noise, which refers to the randomness in photon arrival times. That noise equals the square root of the number of recorded photons, and the same math works on the data numbers in the image file: the signal scales with the count while the noise scales with its square root, so with 4X the photons you get 2X the signal-to-noise ratio, and 100X gets you a 10X benefit. So there is a diminishing-return aspect, which is a problem when you want to include moving stars with a stationary foreground.
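If it helps to see that square-root behaviour in numbers, here is a minimal simulation sketch (assuming simple Poisson photon statistics; the pixel count and signal level are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0        # mean photons per pixel per frame
n_pixels = 100_000    # pixels in the simulated patch

for n_frames in (1, 4, 100):
    # each frame is Poisson photon counts; stacking = averaging the frames
    frames = rng.poisson(signal, size=(n_frames, n_pixels))
    stacked = frames.mean(axis=0)
    print(f"{n_frames:3d} frames: SNR ~ {stacked.mean() / stacked.std():.1f}")
```

A single frame comes out around SNR 10 (the square root of 100 photons), 4 frames about 20, and 100 frames about 100, matching the square-root rule above.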
Dark frames are used to (a) reduce fixed-pattern noise such as lines, amplifier glow (largely a thing of older cameras, thankfully), and hot pixels, and (b) reduce thermal noise that arises in the sensor structure. If you have amp glow, that will be the worst of the issues; it looks like a light leak on one side of the frame.
The highly undesirable way to employ dark frames is to use long exposure noise reduction, because it takes a dark frame after every light frame and wastes valuable exposure time. What many people would rather do is collect a set of dark frames in a sequence at the end of a session, while they pack up. Since the lights and darks are collected at almost the same time and at roughly the same sensor temperature, they can be somewhat useful. However, to maximize the benefit the sensor would need to be operated at the same regulated temperature throughout, which is not available in a general-purpose camera.
Between stacking and dark subtraction you will obtain quite a lot more visual benefit from the former. Many night-sky landscape shooters using general-purpose cameras don’t bother to collect dark frames. Here are some thoughts to consider; your mileage may vary:
Hot pixels are a frequently cited reason to do dark frame subtraction. I decided to see for myself whether, on my cameras, they do in fact persist from one frame to the next, so I collected a few dozen long exposures in a dark closet with the lens cap on. About half of the hot pixels persisted to the following frame, half of those persisted to the third frame, half of those to a fourth, and so on. Therefore there wouldn’t be many that were present in both the averaged lights and the averaged darks. I decided to heal them manually in post instead. Also, some raw converters and some stacking programs suppress many of them automatically.
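If you want to run the same persistence check on your own darks, here is a rough sketch of the idea (the threshold, frame sizes, and simulated data are placeholder assumptions, not a reproduction of my test; in practice you would load your own lens-cap frames as 2-D arrays):

```python
import numpy as np

def hot_pixel_mask(frame):
    """Flag pixels far above the frame's typical dark level (crude 10-sigma cut)."""
    med = np.median(frame)
    sigma = 1.4826 * np.median(np.abs(frame - med))  # robust spread estimate
    return frame > med + 10 * sigma

def persistence(frames):
    """Fraction of hot pixels in each frame that are still hot in the next frame."""
    masks = [hot_pixel_mask(f) for f in frames]
    return [float(np.logical_and(a, b).sum()) / max(int(a.sum()), 1)
            for a, b in zip(masks, masks[1:])]

# stand-in data: a low, noisy dark level plus one truly stuck pixel and a few
# transient hot pixels that move around from frame to frame
rng = np.random.default_rng(1)
darks = rng.poisson(5, size=(6, 500, 500)).astype(float)
darks[:, 10, 10] += 500                      # stuck pixel, hot in every frame
for f in darks:
    rows, cols = rng.integers(0, 500, 20), rng.integers(0, 500, 20)
    f[rows, cols] += 500                     # transient hot pixels, new spots each frame
print(persistence(darks))  # only the stuck pixel carries over, so the fractions are small
```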
Here’s a potential downside: if the dark noise you are subtracting is not repetitive from frame to frame (in other words, it is primarily random in nature), subtraction actually increases noise. Let that sink in for a second. To subtract uncorrelated noise is to add an inverted copy of the noise. Don’t do that on the basis of one light frame minus one dark frame; that’s the camera’s long exposure noise reduction feature. Instead, average many lights and average many darks, then do the subtraction, as in the sketch below. The apps manage that for you.
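To make the random-noise point concrete, here is a small numerical sketch (purely random synthetic noise with made-up levels and no fixed pattern at all, which is the worst case for dark subtraction):

```python
import numpy as np

rng = np.random.default_rng(2)
n_pixels = 100_000
sigma = 5.0                                   # purely random dark/read noise

light = 100 + rng.normal(0, sigma, n_pixels)                     # one light frame
single_dark = rng.normal(0, sigma, n_pixels)                     # one dark frame
master_dark = rng.normal(0, sigma, (32, n_pixels)).mean(axis=0)  # average of 32 darks

print("light alone        :", round(light.std(), 2))                  # ~5.0
print("light - single dark:", round((light - single_dark).std(), 2))  # ~7.1, sqrt(2) worse
print("light - master dark:", round((light - master_dark).std(), 2))  # ~5.1, barely worse
```

Subtracting one noisy dark roughly multiplies the random noise by the square root of two, while subtracting a well-averaged master dark adds almost nothing, which is why the averaging step matters.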
With a regulated, cooled sensor the random thermal dark noise is reduced to very nearly nothing. That mitigates the random-minus-random problem. What it leaves in place is any fixed-pattern noise, which is useful to subtract out. So it’s a technique most commonly associated with specialized astro cameras.