Astrophoto post processing challenge

Started Jul 27, 2015 | Discussions
rnclark Senior Member • Posts: 2,205
Astrophoto post processing challenge

I have a challenge for you to test your astrophoto post processing skills.

Astrophotography Image Processing with Images Made in Moderate Light Pollution
http://www.clarkvision.com/articles/astrophotography.image.processing2/

I give links to where the raw files can be downloaded so you can try processing the images with your own methods.

The data set includes nine 1-minute exposures of the Scorpio region and the Rho Ophiuchus nebula complex made with a 100 mm lens at f/2 and a stock Canon 6D. Also included are 4 dark frames and 5 flat fields.

The sky images were obtained from a green zone west of Denver at an elevation of about 11,000 feet with no Moon. There is a light pollution gradient from left to right, as well as airglow. The Milky
Way is on the left edge, so there is also a deep sky brightness gradient from the galaxy. Thus all together, with the relatively short total exposure time, it will be a challenge to bring out the
nebula, including dark nebulae, hydrogen alpha nebulae, blue and the yellow reflection nebulae.

So the challenge is to produce the best image that brings out the many colorful nebulae in the region. Use whatever method you desire and post a link to the processed image where we can see the results and post a description here of what you did to produce the image.

From the processing I've seen posted on other forums, I have some theories. Some processing methods seem to suppress
the reds (H-alpha). This may contribute to the common perception that DSLRs need to be modified. See if you can bring out the H-alpha nebulae while at
the same time bringing out the blue reflection nebulae, reducing chromatic aberration, subtracting light pollution and airglow, and bringing out the faintest stars.

I've also seen some weird processing artifacts, such as splotchiness, loss of faint stars, and clipping of the low end, all caused by processing.

So let's see what you can make of these images, and please tell us how you did it. My goal here is to diagnose the different methods, determine what works best, and come up with a guide to better processing methods.

Roger

Canon EOS 6D Olympus Air
Michael S.
Michael S. Veteran Member • Posts: 6,885
Re: Astrophoto post processing challenge

rnclark wrote:

So let's see what you can make of these images, and please tell us how you did it. My goal here is to diagnose the different methods, determine what works best, and come up with a guide to better processing methods.

Roger

Hi Roger!

Hope I'm downloading the correct Dropbox link - will give it a try.


Michael S.
EUROPE; dpreview since 2001
NIKON NPS Member
(check equipment via profile)

Michael S.'s gear list:
Leica D-Lux (Typ 109) Leica M Monochrom (Typ 246) Nikon D610 Nikon D5300 Leica SL (Typ 601) +5 more
Sir Canon
Sir Canon Senior Member • Posts: 1,547
Re: Astrophoto post processing challenge

Yeah, it's not working; I can't open them in Photoshop.


I tend to overdo things

Sir Canon's gear list:
Panasonic Lumix DMC-TS1 Canon EOS 350D Canon EOS 550D Canon EOS 700D Canon EF 50mm f/1.8 II +6 more
OP rnclark Senior Member • Posts: 2,205
Re: Astrophoto post processing challenge

The files need to be unzipped. They are Canon CR2 raw files from a Canon 6D, so your software needs to be current enough to open 6D files.

swimswithtrout Senior Member • Posts: 2,539
Re: Astrophoto post processing challenge

rnclark wrote:

I have a challenge for you to test your astrophoto post processing skills.

Astrophotography Image Processing with Images Made in Moderate Light Pollution
http://www.clarkvision.com/articles/astrophotography.image.processing2/

The data set includes nine 1-minute exposures of the Scorpio region and the Rho Ophiuchus nebula complex made with a 100 mm lens at f/2 and a stock Canon 6D. Also included are 4 dark frames and 5 flat fields.

Roger

9 minutes of imaging ?? Nine subs ??? Who shoots an exposure that short ?????

It takes me ~1 hr just to set up, so if I can't get a bare minimum of 1.5-2 hrs of 2-4 minute subs using an 8" f/4 on a single target, I don't even bother.

Why not just post a single 10 minute shot and call it good ?

OP rnclark Senior Member • Posts: 2,205
Re: Astrophoto post processing challenge

It's an exercise in the whole process. I designed it around just a few subs so that no one has to download and process hundreds of frames. Even so, it is a challenge in removing airglow and light-pollution gradients and extracting weak signals, just as with more subs and fainter subjects. It is also an exercise in reducing aberrations and noise. With so few frames, it will expose weaknesses in post-processing methodology when trying to bring out signals close to the noise limits.

Show us what you can do.

Michael S.
Michael S. Veteran Member • Posts: 6,885
Re: Here is my version!

Good Morning!

As you used an unmodified DSLR with a short exposure time, it's hard to get H-alpha in this picture - you can't push what is not there. And, not to forget to mention, you can't overcome the disadvantages a "normal" lens has compared to a scope. As can be seen in the picture below... so many red/purple stars, which of course do not exist, caused by the combination of Earth's atmosphere and the chromatic errors of "normal" lenses.

My result with Pixinsight:

kind regards,


Michael S.
EUROPE; dpreview since 2001
NIKON NPS Member
(check equipment via profile)

OP rnclark Senior Member • Posts: 2,205
Re: Here is my version!

Michael S. wrote:

Good Morning!

As you used an unmodified DSLR with a short exposure time, it's hard to get H-alpha in this picture - you can't push what is not there. And, not to forget to mention, you can't overcome the disadvantages a "normal" lens has compared to a scope. As can be seen in the picture below... so many red/purple stars, which of course do not exist, caused by the combination of Earth's atmosphere and the chromatic errors of "normal" lenses.

Excellent. This is one of the better tries I have seen. You did get some of the H-alpha. But it is a fallacy to think that unmodified cameras record no H-alpha. Unmodified Canon DSLRs record about 1/3 the H-alpha line intensity of a modified camera. So to say there is no H-alpha is like saying that with 1/3 the exposure there are no stars.

The problem with modern processing is people along the way in their workflow are applying a histogram equalization step, and your image is showing signs of that too.  You do have a residual green color cast and green banded airglow at the bottom.  But ignoring that, look at the gradient from the Milky Way on the left edge and toward the right.  The color goes from yellow (I'm ignoring the green cast) to blue.  About 96% of stars in our galaxy are like our sun or redder (G, K, and M stars), and the proportion is more around the galactic center.  Thus the overall color of the image is quite red-orange.  The histogram equalization step has created more blue and suppressed red.

For example, take an image of a very red sunset and run it through the same workflow or even a simple histogram equalization step (e.g. auto white balance).  That will really reduce the red channel, suppressing many reds.
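The effect Roger describes can be demonstrated numerically. Below is a minimal sketch using synthetic red-dominant data and a rank-based per-channel equalization standing in for an auto-white-balance step; none of it is anyone's actual workflow, just an illustration of the principle:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "red sunset" image: the red channel dominates, as in a
# red-orange star field (hypothetical data for illustration).
h, w = 64, 64
img = np.stack([
    rng.uniform(0.5, 0.9, (h, w)),   # R: bright
    rng.uniform(0.2, 0.5, (h, w)),   # G: moderate
    rng.uniform(0.1, 0.3, (h, w)),   # B: dim
], axis=-1)

def equalize_channel(c):
    """Histogram-equalize one channel to a uniform [0, 1] distribution."""
    flat = c.ravel()
    ranks = flat.argsort().argsort().astype(float)
    return (ranks / (flat.size - 1)).reshape(c.shape)

eq = np.stack([equalize_channel(img[..., i]) for i in range(3)], axis=-1)

# Before equalization, red clearly dominates; after per-channel
# equalization all three channels have the same mean (~0.5), so the
# red cast is gone.
print("means before:", img.mean(axis=(0, 1)))
print("means after: ", eq.mean(axis=(0, 1)))
```

Whatever one thinks of the aesthetics, the arithmetic is unambiguous: any step that forces each channel toward the same distribution removes a genuine red dominance.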

Same with the scorpio image.  The reds are there and so is a lot of H-alpha.  Processing needs to be such that it does not suppress the reds, and therefore the H-alpha.

The magenta stars are due to chromatic aberration, which I believe PixInsight has tools to correct.

So what histogram equalization steps did you do in your image?

Roger

Michael S.
Michael S. Veteran Member • Posts: 6,885
Re: Here is my version!

rnclark wrote:

Excellent. This is one of the better tries I have seen. You did get some of the H-alpha. But it is a fallacy to think that unmodified cameras record no H-alpha. Unmodified Canon DSLRs record about 1/3 the H-alpha line intensity of a modified camera. So to say there is no H-alpha is like saying that with 1/3 the exposure there are no stars.

Hi!
I did not say you won't get "any" H-alpha, just less than with an astro-modified camera. Actually, I would estimate 20-25% without modification and 98% with it.

The problem with modern processing is people along the way in their workflow are applying a histogram equalization step, and your image is showing signs of that too. You do have a residual green color cast and green banded airglow at the bottom. But ignoring that, look at the gradient from the Milky Way on the left edge and toward the right. The color goes from yellow (I'm ignoring the green cast) to blue. About 96% of stars in our galaxy are like our sun or redder (G, K, and M stars), and the proportion is more around the galactic center. Thus the overall color of the image is quite red-orange. The histogram equalization step has created more blue and suppressed red.

Oops - I forgot to use SCNR or HLVG to get rid of the green cast - that's just one button push, quite easy. And regarding the "red" stars, or rather the rich number of them - yes, there are many of them in the core of the Milky Way, but they are not purple - the purple is caused by the lenses used.

I have CHANGED the picture in the meantime - just look at it using "original size".

So what histogram equalization steps did you do in your image?

I looked for getting a neutral background - that can be done in PI.

kind regards,


Michael S.
EUROPE; dpreview since 2001
NIKON NPS Member
(check equipment via profile)

Sir Canon
Sir Canon Senior Member • Posts: 1,547
Re: Astrophoto post processing challenge

ok once they are unzipped adobe cc should open them


I tend to overdo things

footbag New Member • Posts: 8
Re: Astrophoto post processing challenge

I processed this with my typical method.

http://astrob.in/full/198651/0/

PI for calibration and processing then finish in Photoshop.

PI:
- Calibration and stacking
- DBE
- Background Neutralization & Color Calibration
- Histogram Transformation

Photoshop:
- Selective color balancing

First, processing with such limited integration time does test your ability to process poor-quality data. I'm not sure that the techniques for processing low-SNR data translate over to quality data with sufficient integration time. I was definitely out of my element.

Compared to your original image, I wasn't able to pull out the Ha color that you did. Doing so would force me to either paint out the dark nebula below it or paint on a selective mask. Since the dark nebula below is actual structure, neither sounded like a good idea. I also lean heavily toward the more natural look. I want it to look real. Blown-out stars with halos, excessive blurring, and color with no structure all detract from the image.

I couldn't reveal any structure in the Ha region, although it doesn't look like you were able to either. I don't think the structure/contrast made it above the noise. The color did. Frankly, it just seems like you blurred and boosted the red, perhaps with a selective mask. IMO, your results just look blurry and unnatural. When I began in AP, I'd have been ecstatic with your results. But I think the hobby has progressed considerably in 5 years. Those who take the hobby seriously really have a different standard for how they rate their images.

I have to admit that I take issue with your statements regarding others' processing methods. You suggest we're muting the Ha during color calibration. I would argue there isn't enough Ha data to work with. The only thing that would extract it would be to heavily blur and stretch, and I've spent 5 years learning how not to do that.

Adam Jaffe

sharkmelley
sharkmelley Contributing Member • Posts: 690
A Subtle Attempt

Here's my attempt, done in IRIS:

Larger version here:

http://www.markshelley.co.uk/Astronomy/2015/RogerClarkChallenge.jpg

To achieve the colour balance I simply scaled the red and blue channels up by 2, which is approximately correct for daylight white balance on the 6D.

After that, the background subtraction was performed. This bit is guesswork because it is very difficult to know what "colour" the gaps between the stars should actually be once the light pollution is subtracted - I opted for a kind of muddy dark brown; you could instead opt for jet black or dark blue. A bit of dynamic-range scaling followed by a slight increase in saturation, and I was done. No histogram equalisation at all.
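The steps above can be sketched in a few lines; the 2x multipliers come from the post, while the pixel values and background estimate are made-up stand-ins:

```python
import numpy as np

# Linear RGB data after stacking, shape (H, W, 3); values are
# hypothetical stand-ins for the stacked 6D frames.
stack = np.full((4, 4, 3), 100.0)

# Approximate daylight white balance for the 6D, per the post:
# scale R and B up by 2, leave G alone.
wb = np.array([2.0, 1.0, 2.0])
balanced = stack * wb

# Subtract an estimated sky background per channel (guesswork, as
# noted: the chosen "gap" colour is a judgment call).
sky = np.array([150.0, 80.0, 120.0])   # hypothetical background level
result = np.clip(balanced - sky, 0, None)

print(result[0, 0])   # -> [50. 20. 80.]
```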

My overall comment is that there is virtually no H-alpha signal captured in these exposures. To be honest, I strongly suspect that if one didn't already know to expect it in the image, and already know where to find it, then one wouldn't go looking for it to saturate it and make it visible. So I think this challenge is a great example of why H-alpha modification is so important - it would capture 4x as much H-alpha signal, and it would be immediately visible in the image.

Mark

OP rnclark Senior Member • Posts: 2,205
Re: A Subtle Attempt

sharkmelley wrote:

Here's my attempt, done in IRIS:

Larger version here:

http://www.markshelley.co.uk/Astronomy/2015/RogerClarkChallenge.jpg

To achieve the colour balance I simply scaled the red and blue channels up by 2 which is approximately correct for daylight white balance on the 6D.

After that, the background subtraction was performed. This bit is guesswork because it is very difficult to know what "colour" the gaps between the stars should actually be once the light pollution is subtracted - I opted for a kind of muddy dark brown - you could instead opt for jet black or dark blue. A bit of dynamic range scaling followed by increasing the saturation a bit and I was done. No histogram equalisation at all.

My overall comment is that there is virtually no H-alpha signal captured in these exposures. To be honest, I strongly suspect that if one didn't already know to expect it in the image, and already know where to find it, then one wouldn't go looking for it to saturate it and make it visible. So I think this challenge is a great example of why H-alpha modification is so important - it would capture 4x as much H-alpha signal, and it would be immediately visible in the image.

Hi Mark,
Some general comments:

Your image has a magenta cast and a general bluing of the stars from left to right, going out of the galactic center. That bluing is an artifact of processing, most likely in the background removal. Michael was able to get a lot more H-alpha out of the data. The H-alpha is there and should not be hard to show. There are not that many blue stars in this part of the galaxy. So it seems your color balance and background subtraction are off, and that seems to have influenced the inability to show the H-alpha.

Roger

sharkmelley
sharkmelley Contributing Member • Posts: 690
Re: A Subtle Attempt

rnclark wrote:

Hi Mark,
Some general comments:

Your image has a magenta cast and a general bluing of the stars from left to right, going out of the galactic center. That bluing is an artifact of processing, most likely in the background removal. Michael was able to get a lot more H-alpha out of the data. The H-alpha is there and should not be hard to show. There are not that many blue stars in this part of the galaxy. So it seems your color balance and background subtraction are off, and that seems to have influenced the inability to show the H-alpha.

Roger

Fair comments and useful feedback!

I might have another attempt when I find some time.

Mark

Jon Rista Contributing Member • Posts: 681
Re: A Subtle Attempt

Alright, I had a chance to process this data. Here is my result:

Processed in PixInsight, using the standard approach to color calibration and gradient extraction. The data does have SOME Ha... but it is extremely, extremely faint. In order to reveal the Ha at all, I had to do some extra work that would not be necessary with an astro-modded DSLR. I split the RGB channels, then performed some histogram work on the R channel independently to enhance the contrast of the area with Ha. I then recombined the channels. After that I was able to enhance what minimal Ha there was in the image. I did not push it far; most of it is buried in chroma noise, and enhancing the regions of Ha around Antares just enhanced the color noise as well. I've performed some noise reduction, mostly in luminance; however, with too much chroma noise reduction the Ha goes with it.
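The channel-split step described above can be sketched roughly like this, using the standard midtones transfer function as the stretch; the data and the midtones value are illustrative, not the actual settings used:

```python
import numpy as np

def mtf(x, m):
    """Midtones transfer function as used in PixInsight-style
    stretches: m is the midtones balance; m < 0.5 brightens,
    m > 0.5 darkens, and mtf(m, m) = 0.5."""
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

rng = np.random.default_rng(1)
img = rng.uniform(0.0, 0.2, (8, 8, 3))   # faint linear data (hypothetical)

r, g, b = img[..., 0], img[..., 1], img[..., 2]

# Stretch only the red channel to lift faint H-alpha, then recombine,
# roughly mirroring the split-stretch-recombine step described above.
r_stretched = mtf(r, 0.1)
out = np.stack([r_stretched, g, b], axis=-1)

assert out[..., 0].mean() > img[..., 0].mean()   # red lifted, G and B untouched
```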

Star colors are ok, but they have a good deal of chromatic aberration in them. Similar to my 600mm f/4 lens in the corners (the one thing I hate about that lens), but with more lateral CA. That purples up the halos. The only solution to that was to desaturate the stars, and I don't like doing that...so the purple halos remain to a degree.

I aimed to avoid crushing colors and stars in heavy contrast, and specifically to preserve as much of the faint blue nebulosity and some of the background dust lanes below Antares as much as possible. Again, this data is very weak, and while it is possible to bring some of these details out, more integration time, at least double but preferably more, would result in a night and day difference in the ability to bring out faint details.

Oh, and full size, for reference:

http://www.astrobin.com/full/198900/0/?real=&mod=

Jon Rista's gear list:
Canon EOS 5D Mark III Sony a6000 Canon EF 50mm f/1.4 USM Canon EF 16-35mm F2.8L II USM Canon EF 100-400mm f/4.5-5.6L IS USM +4 more
OP rnclark Senior Member • Posts: 2,205
Re: A Subtle Attempt

Jon,

General comments on your processing for the challenge:

In your processing, you did the all too common thing of creating a bluing of the star field away from the galactic core.  Again, stars do not do that.  It must be some form of gradient removal people are doing.  But the effect is to significantly decrease the H-alpha.  There remains green and red banded airglow in the bottom of the image, and the red airglow merges with the red H-alpha nebula below Antares.   In your full resolution image, there is red-blue splotchiness, similar to that in your horsehead image.  That splotchiness limits extraction of faint signals.

Regarding bias frames, bias is part of dark frames.  Dark frames were at the same exposure as the light frames.  Thus the equation is:

calibrated image = ((light - bias) - (dark - bias)) / (flat - bias) = (light - dark) / (flat - bias)

Bias is a single value.  For the Canon 7D2, it is 2048 in the 14-bits/channel raw data.  Bias frames also contain read and pattern noise, as do dark frames and light frames.  The pattern noise in my 7D2 is about 0.5 electron, so not a factor.  The master flat frame I supplied had the bias removed.   None of these should have affected your results.
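The calibration equation above translates directly into code. The sketch below uses tiny synthetic frames and made-up numbers; the `calibrate` helper and the unity-gain flat normalization are illustrative, not the actual pipeline:

```python
import numpy as np

def calibrate(light, dark, flat_minus_bias):
    """Apply the calibration (light - dark) / (flat - bias).

    The dark was exposed as long as the light, so subtracting it
    removes bias and dark current in one step; the supplied master
    flat already has the bias removed.
    """
    flat_norm = flat_minus_bias / flat_minus_bias.mean()   # unity-gain flat
    return (light - dark) / flat_norm

# Tiny synthetic frames (hypothetical numbers, just to show the algebra).
light = np.array([[3048.0, 2548.0]])
dark  = np.array([[2048.0, 2048.0]])   # bias + negligible dark current
flat  = np.array([[1000.0, 1000.0]])   # flat with bias already removed

print(calibrate(light, dark, flat))    # -> [[1000.  500.]]
```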

Roger

Jon Rista Contributing Member • Posts: 681
Re: A Subtle Attempt

rnclark wrote:

Jon,

General comments on your processing for the challenge:

First off, before you critique anyone's images, you need to critique your own. I did not come here for critique. It blows my mind that you think you can teach me something about image processing given the state of your own version. Mind blowing.

In your processing, you did the all too common thing of creating a bluing of the star field away from the galactic core. Again, stars do not do that. It must be some form of gradient removal people are doing. But the effect is to significantly decrease the H-alpha. There remains green and red banded airglow in the bottom of the image, and the red airglow merges with the red H-alpha nebula below Antares. In your full resolution image, there is red-blue splotchiness, similar to that in your horsehead image. That splotchiness limits extraction of faint signals.

The primary issue with the stars is NOT a gradient issue. First off, starfields are often processed as a matter of taste. When you say: "Stars do not do that." I honestly do not know what that means. I'll take my star field over yours any day, any time, all the time. I MUCH prefer my own starfield to the stark white pixellated, hard-edged field in your version. Stars come in a wide variety of stellar classes. Within the core of our galaxy, stars are cooler and yellower. As you move away from the core of our galaxy, just as when you move away from the core of any galaxy, they become hotter and whiter/bluer. Blue giants have a tendency to cluster along the outer edges of the primary dust lanes of galaxies (please, look up a few Hubble galaxy images and look for that...it's a common trend.) At the very least, the right half of the starfield in this image should be a bit cooler than the left half.

A lot of the bluing is the fact that the stars have chromatic aberration, mostly lateral chromatic aberration from what I can tell, which results in purple halos. Suppressing the purple does not eliminate the halo, it just becomes more blue, albeit smaller. THAT is probably the primary cause of the "bluing" of the stars in most of the attempts here.

I don't know what you are seeing on your screen, however my starfield on my calibrated screen here is mostly white. It might lean slightly cooler towards the right hand side of the field. Matter of taste. Personally, I like seeing the colors of the starfield grade from the core out when it comes to galactic core images. I don't find a heavy orange field to be very realistic, nor aesthetically pleasing.

Changing the star field is a trivial matter. Having a starfield of some particular color wasn't the crux of the challenge. You're imposing your own personal aesthetic tastes on the challenge here, which I find a little odd (particularly given the state of your own image... plank/eye syndrome?) Astrophotography is as much art as science, often more art than science for most amateurs. Bringing out the Ha was the crux of the challenge. That's what I focused on (and despite that, I believe my image is vastly superior to your own, although still far from an ideal result... I'd rather have REAL Ha data than have to scrape and dig for scraps in the depths of the background sky.) You want a different starfield? Here:

I actually think that hurts the contrast of the Ha, so I saturated the above a bit more as well. That exacerbated color noise a bit more. The warmer starfield around the Rho Ophiuchus region makes it harder to discern the Ha there. The slightly bluish starfield improved the contrast with the pink of the Ha IMO.

Regarding the background gradient. I did not focus on background extraction. I did a quick DBE in PixInsight, and focused the rest of my efforts on finding and extracting what minimal Ha data barely exists in this image. With more meticulous DBE, the gradient issues would not exist, but that is again a fairly trivial problem. I could rewind my processing and redo the extraction, but I have other things to do, and I do not want to spend any more time on this image.

The splotchiness, btw...that is YOUR data. I did not add splotchiness, it's in the data I downloaded from you. That is the result of insufficient integration time and the use of data interpolated from Bayer CFA data. This would be another reason why many astrophotographers use mono CCD with LRGB filters...no interpolation. I can suppress the splotching further, however as I stated, doing so suppressed the minuscule amount of Ha further as well. Tradeoffs. I've done nothing broad or large scale to suppress Ha, and everything in my power to reveal it. The problem is not processing. I did not do any kind of histogram equalization. I used a scientifically valid color calibration routine. When I did subtract the background gradient, the gradient was nearly grayscale, with a slight yellowing towards the left of the frame (expected, not much you can do about that given the stars there) which is about as ideal as a background extraction gets.
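Background-gradient removal of the kind DBE automates can be sketched, very crudely, as a plane fit through hand-picked background samples. Everything below (the helper name, the sample points, the synthetic gradient) is hypothetical; DBE itself fits a much more flexible surface:

```python
import numpy as np

def subtract_plane_gradient(img, samples):
    """Fit a plane z = a*x + b*y + c through chosen background sample
    positions and subtract it from the image. samples is a list of
    (row, col) background positions (hypothetical picks)."""
    rows = np.array([r for r, c in samples], dtype=float)
    cols = np.array([c for r, c in samples], dtype=float)
    z = np.array([img[r, c] for r, c in samples])
    A = np.column_stack([cols, rows, np.ones(len(samples))])
    (a, b, c0), *_ = np.linalg.lstsq(A, z, rcond=None)
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return img - (a * xx + b * yy + c0)

# Synthetic frame: flat sky of 50 plus a left-to-right light-pollution
# gradient of 0.5 DN per column.
yy, xx = np.mgrid[0:32, 0:32]
frame = 50.0 + 0.5 * xx

corners = [(0, 0), (0, 31), (31, 0), (31, 31), (16, 16)]
flat_sky = subtract_plane_gradient(frame, corners)
assert np.allclose(flat_sky, 0.0, atol=1e-6)   # gradient fully removed
```

The quality of the extraction hinges entirely on where the samples land; a sample that falls on real nebulosity gets subtracted as if it were sky, which is one way faint signal disappears.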

This is not an Ha suppression problem. The problem is the fact that it barely exists, which is what I've been saying all along. It's the same thing Michael S. said. It's the same thing everyone has been saying. You can try to deny that fact as much as you want, but your data does NOT contain very much Ha. BARELY enough to enhance, and because it is so sparse (not all red pixels, which constitute only 1/4 of the sensor area to start with, got sufficient Ha signal to swamp the noise floor), it comes through looking mostly like color noise itself. This is clearly evident in your own processing. It is also evident in Michael S.' processing. I chose not to enhance it so far that it looked forced or artificial...and to my eyes, my version still tries too hard.

* * *

It is clear that the only thing that will settle this issue is a proper comparison between an unmodded and modded DSLR. I am not interested in debating whether the 7D II captures some Ha data. Of course it does. My much older 5D III, which has higher read noise and significantly higher dark current, even captures some Ha data. Both cameras, as well as any other unmodded ILCs, gather extremely weak Ha signal. The debate, along with anecdotal claims about "Ha suppression", about the processing of this image, are only possible BECAUSE this image has such insufficient Ha data. We wouldn't even be having this discussion if we could compare even unprocessed integrations from both a modded and unmodded DSLR. The differences would be obvious with a simple screen stretch in PixInsight, let alone fully processed results.

And that is what SnappieChappie's original question was asking about: whether an "astro" version of the 60D was better or not. The 60Da would be marginally better; however, if I were to recommend an option to SnappieChappie, I'd recommend a fully astro-modded used 6D. With the lower dark current, larger pixels, and excellent cost/value ratio, there are few DSLRs on the market that can beat it for astro. (Yes, I did say larger pixels... I don't adhere to the smaller-pixels-are-better-for-astro mantra unless you're imaging with a very, very wide field, where the image scale would be well undersampled with 6 micron pixels.)

Until we can compare modded and unmodded side by side, any further debate is pretty pointless. I've demonstrated my processing skill, and revealed the Ha in your own image (using some fairly extreme techniques)... and all that led to was anecdotal claims about how I've somehow suppressed the Ha, or how I've somehow introduced a gradient into the stars. Seriously? You haven't a clue how I processed, what my steps were, what settings I used at each step, and I simply do not believe you, or anyone else for that matter, can derive the processing technique just by looking at a fairly heavily compressed JPEG online.

Regarding bias frames, bias is part of dark frames. Dark frames were at the same exposure as the light frames. Thus the equation is:

calibrated image: ((light - bias) - (dark - bias)) / (flat-bias) = (light - dark)/(flat-bias).

Bias is a single value. For the Canon 7D2, it is 2048 in the 14-bits/channel raw data. Bias frames also contain read and pattern noise, as do dark frames and light frames. The pattern noise in my 7D2 is about 0.5 electron, so not a factor. The master flat frame I supplied had the bias removed. None of these should have affected your results.

Roger

Regarding bias and bias frames. Bias is a single value in an UNSCALED frame. Calibration tools these days, including DSS, PixInsight, Nebulosity, and MaxImDL all scale calibration frames. I believe ImagesPlus can do scaling as well, although I think it is manual (much like the manual option in DSS.) PixInsight actually does per-light-frame noise evaluation, and scales the master dark ideally for each and every frame...it isn't just a single global scaling. The bias is scaled along with everything else unless it is first removed.

PixInsight scales both master flats and master darks. Yes, the bias is in the darks, however after the master dark is scaled, the bias is different than in each light frame. If you already bias subtracted the flat, then that's probably fine.

For scaling to work, it is ESSENTIAL that the bias be removed from everything first. All darks, all flats, and all lights must be bias calibrated before doing anything else. Once that is done, then dark and flat scaling will not result in changing the bias signal, and they can be subtracted or divided out of the lights properly.
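The bias-first ordering described above can be sketched as follows. The frames are synthetic and the scale factor `k` is hypothetical; real tools estimate `k` per frame from noise statistics:

```python
import numpy as np

def scale_dark(light, master_bias, master_dark, k):
    """Bias-first dark scaling: remove the bias from both frames
    BEFORE scaling the dark, so the scale factor k is applied only
    to the thermal signal, never to the fixed bias offset."""
    dark_signal = master_dark - master_bias      # pure thermal signal
    return (light - master_bias) - k * dark_signal

bias  = np.full((2, 2), 2048.0)
dark  = bias + 10.0          # 10 DN of dark current
light = bias + 10.0 + 100.0  # same dark current plus 100 DN of sky/stars

# With k = 1 the thermal signal cancels exactly and only the 100 DN
# of real signal remains.
assert np.allclose(scale_dark(light, bias, dark, k=1.0), 100.0)

# Naively scaling the whole dark (bias included) by k != 1 would shift
# every pixel by (k - 1) * 2048 -- the error being warned about here.
```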

Oh, one more thing for the record. I generally use Winsorized Sigma Clipping with my integrations to reject pixels that fall outside a specified range of StdDev. This eliminates star and meteor trails, but also eliminates hot pixels, cosmic ray strikes, etc. The bare minimum sub count for WSC to work is 10 subs, and it works better with many more. That would be another reason to get deeper integrations: more reliable outlier rejection.
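A simplified stand-in for that rejection step is sketched below. It uses median/MAD-based sigma clipping rather than PixInsight's exact Winsorized algorithm, and the frames are synthetic:

```python
import numpy as np

def sigma_clip_stack(subs, sigma=3.0):
    """Robust stack integration in the spirit of sigma-clip rejection
    (a simplified sketch, not PixInsight's Winsorized algorithm).

    subs has shape (n_subs, H, W); at each pixel, values far from the
    per-pixel median, measured in MAD-based sigmas, are excluded from
    the mean."""
    med = np.median(subs, axis=0)
    mad = np.median(np.abs(subs - med), axis=0)
    robust_std = 1.4826 * mad                    # MAD -> sigma for Gaussian noise
    mask = np.abs(subs - med) <= sigma * robust_std
    return np.where(mask, subs, 0.0).sum(axis=0) / mask.sum(axis=0)

rng = np.random.default_rng(2)
subs = rng.normal(100.0, 2.0, (12, 4, 4))        # 12 synthetic subs
subs[0, 0, 0] = 5000.0                           # simulated satellite trail

stacked = sigma_clip_stack(subs)

# The outlier is rejected, so the stacked pixel stays near the true
# sky level of 100, where a plain mean would be pulled far above it.
assert abs(stacked[0, 0] - 100.0) < 5.0
```

This also shows why the sub count matters: with only a handful of frames, the per-pixel median and deviation estimates are too noisy for the rejection to be trustworthy.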

Jon Rista Contributing Member • Posts: 681
Re: A Subtle Attempt

For an example of how much Hydrogen nebula actually exists in this region, please see this:

http://deepskycolors.com/astro/2014/03/2014-03_Rho+Ha-mb.jpg

OP rnclark Senior Member • Posts: 2,205
Re: A Subtle Attempt

Jon Rista wrote:

rnclark wrote:

Jon,

General comments on your processing for the challenge:

First off, before you critique anyone's images, you need to critique your own. I did not come here for critique. It blows my mind that you think you can teach me something about image processing given the state of your own version. Mind blowing.

Yes, it is mind blowing. See my responses below. If anyone is following, I provide some real facts and data below.

In your processing, you did the all too common thing of creating a bluing of the star field away from the galactic core. Again, stars do not do that. It must be some form of gradient removal people are doing. But the effect is to significantly decrease the H-alpha. There remains green and red banded airglow in the bottom of the image, and the red airglow merges with the red H-alpha nebula below Antares. In your full resolution image, there is red-blue splotchiness, similar to that in your horsehead image. That splotchiness limits extraction of faint signals.

The primary issue with the stars is NOT a gradient issue. First off, starfields are often processed as a matter of taste. When you say: "Stars do not do that." I honestly do not know what that means. I'll take my star field over yours any day, any time, all the time. I MUCH prefer my own starfield to the stark white pixellated, hard-edged field in your version. Stars come in a wide variety of stellar classes. Within the core of our galaxy, stars are cooler and yellower. As you move away from the core of our galaxy, just as when you move away from the core of any galaxy, they become hotter and whiter/bluer. Blue giants have a tendency to cluster along the outer edges of the primary dust lanes of galaxies (please, look up a few Hubble galaxy images and look for that...it's a common trend.) At the very least, the right half of the starfield in this image should be a bit cooler than the left half.

Well, you might look at some real photometric data. See my Color of Stars article:

http://www.clarkvision.com/articles/color-of-stars/

Just above the conclusions is the histogram of star colors from the Tycho 2 catalog with over 2.4 million stars to fainter than magnitude 15. There are very few blue stars in the galaxy, less than 1%!

Then I used the Tycho database to do slices from the galaxy region in my Rho Ophiuchus area and made histograms of star color as one moves away from the galactic plane. The histograms are in Figure 6 here:

http://www.clarkvision.com/articles/astrophotography.image.processing2/

Clearly there is no bluing of star color as one moves away from the galactic plane. In fact, the data show there is a slight reddening. Thus any bluing in the processed challenge image is an artifact of post processing and not real. That bluing is one of the reasons you did not pick up as much H-alpha.
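For anyone who wants to try the slice analysis themselves, here is a rough sketch of the idea. It assumes you have exported Tycho-2 BT/VT magnitudes and galactic latitudes into arrays; the `color_histograms` helper and its argument names are my own, not from the article, and the BT/VT to Johnson B-V transformation is the standard approximate one.

```python
# Sketch: histogram star colors (B-V) in slices of galactic latitude,
# in the spirit of the Figure 6 analysis in the linked article.
# Assumes bt, vt, glat are numpy arrays from the Tycho-2 catalog.
import numpy as np

def color_histograms(bt, vt, glat, lat_edges, color_bins):
    """Return one B-V histogram per galactic-latitude slice."""
    # Approximate Johnson B-V from Tycho BT/VT magnitudes
    bv = 0.850 * (bt - vt)
    hists = []
    for lo, hi in zip(lat_edges[:-1], lat_edges[1:]):
        # Select stars whose absolute galactic latitude falls in [lo, hi)
        sel = (np.abs(glat) >= lo) & (np.abs(glat) < hi)
        h, _ = np.histogram(bv[sel], bins=color_bins)
        hists.append(h)
    return hists
```

If the processed image were faithful, the histogram peak should not shift blueward in the higher-latitude slices.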

A lot of the bluing is the fact that the stars have chromatic aberration, mostly lateral chromatic aberration from what I can tell, which results in purple halos. Suppressing the purple does not eliminate the halo, it just becomes more blue, albeit smaller. THAT is probably the primary cause of the "bluing" of the stars in most of the attempts here.

Chromatic aberration correction should have been done. That is a common problem in wide field imaging and is another part of the challenge. Sorry you couldn't improve that. I've had others claim that Pixinsight did a superior job in that regard, but I haven't seen results that back up the claim.

I don't know what you are seeing on your screen, however my starfield on my calibrated screen here is mostly white. It might lean slightly cooler towards the right hand side of the field. Matter of taste. Personally, I like seeing the colors of the starfield grade from the core out when it comes to galactic core images. I don't find a heavy orange field to be very realistic, nor aesthetically pleasing.

I run a fully color managed workflow with color calibrated monitors. Your reposted image is better, but your original had more blue gradient.

Changing the star field is a trivial matter. Having a starfield of some particular color wasn't the crux of the challenge.

The challenge was to produce a good image with airglow and light pollution subtracted, and to bring out faint signals with minimal noise. It wasn't to just show H-alpha. Color gradients that are not real, but are artifacts of post processing, are a big negative in my book, especially when they suppress signals and produce a splotchy background.

You're imposing your own personal aesthetic tastes on the challenge here, which I find a little odd (particularly given the state of your own image...plank/eye syndrome?) Astrophotography is as much art as science, often more art than science for most amateurs. Bringing out the Ha was the crux of the challenge. That's what I focused on (and despite that, I believe my image is vastly superior to your own, although still far from an ideal result...I'd rather have REAL Ha data, rather than have to scrape and dig for scraps in the depths of the background sky.) You want a different starfield?

I imposed no personal aesthetics. You assumed and attacked because I called your color gradient not real. It is NOT. It is an artifact of your processing. The star catalog data shows that. I am not imposing anything.

I actually think that hurts the contrast of the Ha, so I saturated the above a bit more as well. That exacerbated color noise a bit more. The warmer starfield around the Rho Ophiuchus region makes it harder to discern the Ha there. The slightly bluish starfield improved the contrast with the pink of the Ha IMO.

Regarding the background gradient. I did not focus on background extraction. I did a quick DBE in PixInsight, and focused the rest of my efforts on finding and extracting what minimal Ha data barely exists in this image. With more meticulous DBE, the gradient issues would not exist, but that is again a fairly trivial problem. I could rewind my processing and redo the extraction, but I have other things to do, and I do not want to spend any more time on this image.

Others have reported the DBE is causing color gradients. It appears to be at least partially responsible for suppressing H-alpha.

The splotchiness, btw...that is YOUR data. I did not add splotchiness, it's in the data I downloaded from you.

NO NO NO! It is a product of your raw conversion. Different raw converters and how they are tuned and what algorithm is used will have different artifacts. Apparently the one you used caused more splotchiness.

That is the result of insufficient integration time and the use of data interpolated from Bayer CFA data.

No matter how long one integrates, there will always be signal at the low end. If you want to bring out that last faintest thing, you better have the raw converter tuned well. Some mitigation of the splotchiness is helped by dithering, but the star field can cause that with the raw converter and how it projects between the Bayer pixels.

This would be another reason why many astrophotographers use mono CCD with LRGB filters...no interpolation. I can suppress the splotching further, however as I stated, doing so suppressed the minuscule amount of Ha further as well. Tradeoffs. I've done nothing broad or large scale to suppress Ha, and everything in my power to reveal it. The problem is not processing. I did not do any kind of histogram equalization. I used a scientifically valid color calibration routine. When I did subtract the background gradient, the gradient was nearly grayscale, with a slight yellowing towards the left of the frame (expected, not much you can do about that given the stars there) which is about as ideal as a background extraction gets.

Yes, a dedicated CCD and filters mitigates one problem, adds others.  That is irrelevant here.  The topic here is post processing methods with whatever data you have.

This is not an Ha suppression problem. The problem is the fact that it barely exists, which is what I've been saying all along. It's the same thing Michael S. said. It's the same thing everyone has been saying. You can try to deny that fact as much as you want, but your data does NOT contain very much Ha. BARELY enough to enhance, and because it is so sparse (not all red pixels, which constitute only 1/4 of the sensor area to start with, got sufficient Ha signal to swamp the noise floor), it comes through looking mostly like color noise itself. This is clearly evident in your own processing. It is also evident in Michael S.' processing. I chose not to enhance it so far that it looked forced or artificial...and to my eyes, my version still tries too hard.

You are not getting the point. No matter if you use a dedicated CCD, modified DSLR, or stock DSLR, there will always be faint signals near the noise level, even with a hundred hours of integration. If you want to bring out such faint signals, good processing methodology is critical. It matters not what the integration time in the challenge image is. The challenge is to show what you can bring out. Pretend it is a 5,000 hour hyperstar integration; what can you bring out?

* * *

It is clear that the only thing that will settle this issue is a proper comparison between an unmodded and modded DSLR. I am not interested in debating whether the 7D II captures some Ha data. Of course it does. My much older 5D III, which has higher read noise and significantly higher dark current, even captures some Ha data. Both cameras, as well as any other unmodded ILCs, gather extremely weak Ha signal. The debate, along with anecdotal claims about "Ha suppression", about the processing of this image, are only possible BECAUSE this image has such insufficient Ha data.

NO, again it will be the same with any of the above and trying to bring out that last bit of information near the noise level.

We wouldn't even be having this discussion if we could compare even unprocessed integrations from both a modded and unmodded DSLR. The differences would be obvious with a simple screen stretch in PixInsight, let alone fully processed results.

Again, you are missing the point of the challenge. Don't confuse this with the 60Da thread. Best practices are needed with whatever data you have. Making unreal color gradients and suppressing signal is not a best practice.

And that was what the original question by SnappieChappie was asking about. Whether an "astro" version of the 60D was better or not. The 60Da would be better, marginally,

Fine, but this is not that thread. This is the "show us what you can do with this data" thread.

however if I were to recommend an option to SnappieChappie, I'd be recommending a fully astro modded used 6D. With the lower dark current, larger pixels, and excellent cost/value ratio, there are few DSLRs on the market that can beat it for astro. (Yes, I did say larger pixels...I don't adhere to the smaller-pixels-are-better-for-astro mantra unless you're imaging with a very, very wide field, where image scale would be well undersampled with 6 micron pixels.)

That is a different thread. Please stick to this one.

Until we can compare modded and unmodded side by side, any further debate is pretty pointless.

That is a different thread. Again, shows us what you can do with existing data. That is universal in this challenge. See above.

I've demonstrated my processing skill, and revealed the Ha in your own image (using some fairly extreme techniques)...and all that led to was anecdotal claims about how I've somehow suppressed the Ha, or how I've somehow introduced a gradient into the stars. Seriously?

Yes, seriously.  The photometry data prove that.

You haven't a clue how I processed, what my steps were, what settings I used at each step, and I simply do not believe you, or anyone else for that matter, can derive the processing technique just by looking at a fairly heavily compressed JPEG online.

If your processing is so bad as to cause a color shift, creating color gradients from red to blue when converting to JPEG, maybe you need different software.

Regarding bias frames, bias is part of dark frames. Dark frames were at the same exposure as the light frames. Thus the equation is:

calibrated image = ((light - bias) - (dark - bias)) / (flat - bias) = (light - dark) / (flat - bias)

Bias is a single value. For the Canon 7D2, it is 2048 in the 14-bits/channel raw data. Bias frames also contain read and pattern noise, as do dark frames and light frames. The pattern noise in my 7D2 is about 0.5 electron, so not a factor. The master flat frame I supplied had the bias removed. None of these should have affected your results.
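The arithmetic above is straightforward to express in code. Here is a minimal sketch, assuming the frames have already been loaded as float numpy arrays (e.g. from the raw files via a converter); the `calibrate` function name and the flat normalisation step are my own additions, not something stated in the thread.

```python
# Minimal sketch of the calibration equation above, assuming dark
# frames were taken at the same exposure as the lights (no scaling).
import numpy as np

BIAS = 2048.0  # Canon 14-bit offset, as noted above

def calibrate(light, master_dark, master_flat, flat_has_bias=True):
    """(light - dark) / (flat - bias), with the flat normalised to unity."""
    flat = master_flat - BIAS if flat_has_bias else master_flat
    flat = flat / flat.mean()          # normalise so the flat has mean gain 1
    return (light - master_dark) / flat
```

With a dark taken at the same exposure, the bias cancels in (light - dark), which is why no separate bias frames were supplied.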

Roger

Regarding bias and bias frames. Bias is a single value in an UNSCALED frame. Calibration tools these days, including DSS, PixInsight, Nebulosity, and MaxImDL all scale calibration frames. I believe ImagesPlus can do scaling as well, although I think it is manual (much like the manual option in DSS.) PixInsight actually does per-light-frame noise evaluation, and scales the master dark ideally for each and every frame...it isn't just a single global scaling. The bias is scaled along with everything else unless it is first removed.

PixInsight scales both master flats and master darks. Yes, the bias is in the darks, however after the master dark is scaled, the bias is different than in each light frame. If you already bias subtracted the flat, then that's probably fine.

Wow.

First, scaling dark frames is only done when the dark frame exposure time is different than the light frames. I supplied dark frames done at the same exposure time. BUT do note that most DSLRs these days, including your Canon 5DIII, as well as 7D2, 6D, Nikon D800, 810, and many more have on sensor dark current suppression. That means the dark level does not change with exposure time. Scaling dark frames was for when dark current is not suppressed. Note this is on sensor technology and is not something you can turn on or off. It is not long exposure noise reduction.

Further, dark frame scaling was a flawed concept from the start. Say, for example, your dark frames were 2x shorter than your light frames. Scaling the dark frames by 2x also scales their noise by 2x, but if you had instead doubled the dark frame exposure time, the noise would only increase by root 2. Scaling dark frames does not treat noise properly.

Dark frames are actually no longer needed with on-sensor dark current suppression in recent cameras. Subtracting dark frames just adds another noise source to the light frames, and that inhibits one from extracting faint signals. For example, we sometimes see people doing something like 100 light frames and 20 darks. The noise at the low end is then dominated by the 20 darks, not the lights.

For scaling to work, it is ESSENTIAL that the bias be removed from everything first. All darks, all flats, and all lights must be bias calibrated before doing anything else. Once that is done, then dark and flat scaling will not result in changing the bias signal, and they can be subtracted or divided out of the lights properly.

See above. Note, bias on most canon cameras is 2048 on the 14-bit scale. One should not measure bias frames--that is just another noise source. Software should be able to take a single constant for the bias.

Oh, one more thing for the record. I generally use Winsorized Sigma Clipping with my integrations to reject pixels that fall outside a specified range of StdDev. This eliminates star and meteor trails, but also eliminates hot pixels, cosmic ray strikes, etc. The bare minimum sub count for WSC to work is 10 subs, and it works better with much more. That would be another reason to get deeper integrations, for more reliable outlier rejection.
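For readers unfamiliar with the technique, here is a simplified sigma-clipped mean stack, in the spirit of what is described above (plain rejection rather than full Winsorization, which replaces outliers instead of discarding them; the function and its defaults are my own sketch, not PixInsight's implementation):

```python
# Simplified sigma-clipped mean stack: pixels more than k standard
# deviations from the per-pixel median are rejected, which removes
# satellite trails, hot pixels, and cosmic ray strikes.
import numpy as np

def sigma_clip_stack(frames, k=3.0):
    """frames: (N, H, W) array of registered subs; returns (H, W) stack."""
    cube = np.asarray(frames, dtype=float)
    med = np.median(cube, axis=0)                  # robust per-pixel center
    std = cube.std(axis=0)                         # per-pixel spread
    mask = np.abs(cube - med) <= k * std + 1e-12   # True = keep this sample
    return np.where(mask, cube, 0.0).sum(axis=0) / mask.sum(axis=0)
```

As noted above, with only ~10 subs the per-pixel statistics are poorly determined, which is why rejection stacking works much better on deep integrations.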

Be careful with median combines. Median combine quickly becomes posterised. See:

http://www.clarkvision.com/articles/image-stacking-methods/
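A quick way to see the posterisation point: the median of integer-quantised frames can only land on the original quantisation grid, while the mean resolves finer intensity steps as the frame count grows. This simulation (arbitrary signal level, purely illustrative) counts the distinct output levels each combine produces:

```python
# Illustration of median-combine posterisation: with 9 integer frames,
# the per-pixel median is itself an integer, while the mean takes
# values on a much finer 1/9 grid.
import numpy as np

rng = np.random.default_rng(1)
# 9 simulated integer frames of a faint signal with Poisson noise
frames = rng.poisson(5.3, size=(9, 100_000))

med = np.median(frames, axis=0)   # restricted to integer levels (odd N)
avg = frames.mean(axis=0)         # many more distinct levels

print(len(np.unique(med)), len(np.unique(avg)))
```

The median output has only a handful of distinct levels, which is exactly the posterisation the linked article warns about.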

I think we should call this conversation done.

Roger

sharkmelley
sharkmelley Contributing Member • Posts: 690
Re: A Subtle Attempt

rnclark wrote:

Bias is a single value. For the Canon 7D2, it is 2048 in the 14-bits/channel raw data. Bias frames also contain read and pattern noise, as do dark frames and light frames. The pattern noise in my 7D2 is about 0.5 electron, so not a factor. The master flat frame I supplied had the bias removed. None of these should have affected your results.

Roger

I had guessed the bias level was 2048 (by looking at the darks) so when I noticed the lack of supplied bias files I subtracted 2048 from the flatfield CR2 files, as a proxy.

Are you saying there was a master flat frame that I missed?

Or are you saying you had already subtracted the bias from the 5 CR2 flatfield files? (In which case my processing effort suffers from double subtraction in the flat)

BTW I agree with you that for processing Canon data it is better to use a scalar bias value instead of the bias files.  My experience has always been that the fixed pattern noise in the Canon bias files changes from session to session and so the bias files rarely match the lights.

Mark
