Jon Rista wrote:
rnclark wrote:
Jon,
General comments on your processing for the challenge:
First off, before you critique anyone's images, you need to critique your own. I did not come here for critique. It blows my mind that you think you can teach me something about image processing given the state of your own version. Mind blowing.
Yes, it is mind blowing. See my responses below. If anyone is following, I provide some real facts and data below.
In your processing, you did the all-too-common thing of creating a bluing of the star field away from the galactic core. Again, stars do not do that. It must be some form of gradient removal people are doing. But the effect is to significantly decrease the H-alpha. There remains green and red banded airglow at the bottom of the image, and the red airglow merges with the red H-alpha nebula below Antares. In your full-resolution image, there is red-blue splotchiness, similar to that in your Horsehead image. That splotchiness limits extraction of faint signals.
The primary issue with the stars is NOT a gradient issue. First off, starfields are often processed as a matter of taste. When you say "stars do not do that," I honestly do not know what that means. I'll take my star field over yours any day, any time, all the time. I MUCH prefer my own starfield to the stark white, pixelated, hard-edged field in your version. Stars come in a wide variety of stellar classes. Within the core of our galaxy, stars are cooler and yellower. As you move away from the core of our galaxy, just as when you move away from the core of any galaxy, they become hotter and whiter/bluer. Blue giants have a tendency to cluster along the outer edges of the primary dust lanes of galaxies (please, look up a few Hubble galaxy images and look for that...it's a common trend). At the very least, the right half of the starfield in this image should be a bit cooler than the left half.
Well, you might look at some real photometric data. See my Color of Stars article:
http://www.clarkvision.com/articles/color-of-stars/
Just above the conclusions is the histogram of star colors from the Tycho 2 catalog with over 2.4 million stars to fainter than magnitude 15. There are very few blue stars in the galaxy, less than 1%!
Then I used the Tycho database to take slices from the galaxy region in my Rho Ophiuchus area and make histograms of star color as one moves away from the galactic plane. The histograms are in Figure 6 here:
http://www.clarkvision.com/articles/astrophotography.image.processing2/
Clearly there is no bluing of star color as one moves away from the galactic plane. In fact, the data show there is a slight reddening. Thus any bluing in the processed challenge image is an artifact of post-processing and not real. That bluing is one of the reasons you did not pick up as much H-alpha.
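For anyone following along who wants to check this against the catalog themselves, here is a minimal sketch of the slicing described above. The column handling and the BT/VT-to-B-V conversion are assumptions about the Tycho-2 data layout, not Roger's actual code:

```python
import numpy as np

# Assumed inputs, loaded elsewhere from the Tycho-2 catalog:
#   bt, vt : per-star Tycho BT and VT magnitudes
#   glat   : galactic latitude in degrees
def color_histograms_by_latitude(bt, vt, glat, slices=((0, 5), (5, 10), (10, 20))):
    """Histograms of approximate B-V star color in slices of galactic latitude."""
    b_minus_v = 0.850 * (bt - vt)              # standard linear Tycho-to-Johnson approximation
    bins = np.arange(-0.5, 2.5, 0.1)
    results = {}
    for lo, hi in slices:
        sel = (np.abs(glat) >= lo) & (np.abs(glat) < hi)
        hist, _ = np.histogram(b_minus_v[sel], bins=bins)
        results[(lo, hi)] = hist / max(hist.sum(), 1)   # normalize so slice shapes compare
    return results
```

If the color histograms keep roughly the same shape (or redden slightly) as the latitude slices move off the plane, there is no physical bluing for processing to recover.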
A lot of the bluing comes from the fact that the stars have chromatic aberration, mostly lateral chromatic aberration from what I can tell, which results in purple halos. Suppressing the purple does not eliminate the halo; it just becomes more blue, albeit smaller. THAT is probably the primary cause of the "bluing" of the stars in most of the attempts here.
Chromatic aberration correction should have been done. That is a common problem in wide-field imaging and is another part of the challenge. Sorry you couldn't improve that. I've had others claim that PixInsight did a superior job in that regard, but I haven't seen results that back up the claim.
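As an aside, a crude lateral CA correction can be approximated outside the raw converter by radially rescaling the red and blue channels about the optical center so they register with green. The helper below is only a sketch of that idea (the scale factors would have to be found by measuring star positions or by trial); it is not PixInsight's correction or anyone's actual workflow in this thread:

```python
import numpy as np
from scipy.ndimage import affine_transform

def rescale_channel(chan, scale):
    """Radially rescale one channel about the image center (simple lateral CA model)."""
    h, w = chan.shape
    center = np.array([(h - 1) / 2.0, (w - 1) / 2.0])
    matrix = np.array([[1.0 / scale, 0.0],
                       [0.0, 1.0 / scale]])     # maps output coordinates to input coordinates
    offset = center - matrix @ center           # keep the optical center fixed
    return affine_transform(chan, matrix, offset=offset, order=3, mode="nearest")

# Hypothetical usage: shrink red slightly, grow blue slightly, to match green.
# red_fixed  = rescale_channel(red,  0.9996)
# blue_fixed = rescale_channel(blue, 1.0004)
```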
I don't know what you are seeing on your screen, however my starfield on my calibrated screen here is mostly white. It might lean slightly cooler towards the right hand side of the field. Matter of taste. Personally, I like seeing the colors of the starfield grade from the core out when it comes to galactic core images. I don't find a heavy orange field to be very realistic, nor aesthetically pleasing.
I run a fully color-managed workflow with color-calibrated monitors. Your reposted image is better, but your original had a stronger blue gradient.
Changing the star field is a trivial matter. Having a starfield of some particular color wasn't the crux of the challenge.
The challenge was to produce a good image with airglow and light pollution subtracted, and to bring out faint signals with minimal noise. It wasn't just to show H-alpha. Color gradients that are not real, but artifacts of post-processing, are a big negative in my book, especially when they suppress signal and produce a splotchy background.
You're imposing your own personal aesthetic tastes on the challenge here, which I find a little odd (particularly given the state of your own image...plank/eye syndrome?). Astrophotography is as much art as science, often more art than science for most amateurs. Bringing out the Ha was the crux of the challenge. That's what I focused on (and despite that, I believe my image is vastly superior to your own, although still far from an ideal result...I'd rather have REAL Ha data than have to scrape and dig for scraps in the depths of the background sky). You want a different starfield?
I imposed no personal aesthetics. You assumed and attacked because I called your color gradient not real. It is NOT real. It is an artifact of your processing. The star catalog data show that. I am not imposing anything.
I actually think that hurts the contrast of the Ha, so I saturated the above a bit more as well. That exacerbated color noise a bit more. The warmer starfield around the Rho Ophiuchus region makes it harder to discern the Ha there. The slightly bluish starfield improved the contrast with the pink of the Ha IMO.
Regarding the background gradient. I did not focus on background extraction. I did a quick DBE in PixInsight, and focused the rest of my efforts on finding and extracting what minimal Ha data barely exists in this image. With more meticulous DBE, the gradient issues would not exist, but that is again a fairly trivial problem. I could rewind my processing and redo the extraction, but I have other things to do, and I do not want to spend any more time on this image.
Others have reported that DBE is causing color gradients. It appears to be at least partially responsible for suppressing H-alpha.
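For readers unfamiliar with these tools: background extraction of the DBE sort amounts to fitting a smooth, low-order model through hand-placed sky samples and subtracting it per channel. A generic sketch of that idea is below; it is not PixInsight's DBE algorithm, and the sample positions are whatever the user chooses:

```python
import numpy as np

def subtract_background(channel, sample_yx, order=2):
    """Fit a low-order 2D polynomial through sky samples and subtract it."""
    sample_yx = np.asarray(sample_yx)
    ys, xs = sample_yx[:, 0].astype(int), sample_yx[:, 1].astype(int)
    vals = channel[ys, xs]
    # Polynomial terms x^i * y^j with i + j <= order
    terms = [(i, j) for i in range(order + 1) for j in range(order + 1) if i + j <= order]
    A = np.column_stack([xs**i * ys**j for i, j in terms]).astype(float)
    coeffs, *_ = np.linalg.lstsq(A, vals.astype(float), rcond=None)

    yy, xx = np.mgrid[0:channel.shape[0], 0:channel.shape[1]]
    model = sum(c * xx**i * yy**j for c, (i, j) in zip(coeffs, terms))
    return channel - model + np.median(vals)    # remove the gradient, keep a sky pedestal
```

Doing this too aggressively, or with samples placed on faint nebulosity, is exactly how real signal such as H-alpha gets subtracted along with the gradient.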
The splotchiness, btw...that is YOUR data. I did not add splotchiness; it's in the data I downloaded from you.
NO NO NO! It is a product of your raw conversion. Different raw converters, depending on how they are tuned and what algorithm is used, will produce different artifacts. Apparently the one you used caused more splotchiness.
That is the result of insufficient integration time and the use of data interpolated from Bayer CFA data.
No matter how long one integrates, there will always be signal at the low end. If you want to bring out that last, faintest thing, you had better have the raw converter tuned well. Dithering helps mitigate the splotchiness, but the star field can still cause it, depending on the raw converter and how it interpolates between the Bayer pixels.
This would be another reason why many astrophotographers use mono CCDs with LRGB filters...no interpolation. I can suppress the splotching further; however, as I stated, doing so suppresses the minuscule amount of Ha further as well. Tradeoffs. I've done nothing broad or large-scale to suppress Ha, and everything in my power to reveal it. The problem is not processing. I did not do any kind of histogram equalization. I used a scientifically valid color calibration routine. When I did subtract the background gradient, the gradient was nearly grayscale, with a slight yellowing towards the left of the frame (expected; not much you can do about that given the stars there), which is about as ideal as a background extraction gets.
Yes, a dedicated CCD and filters mitigate one problem and add others. That is irrelevant here. The topic here is post-processing methods with whatever data you have.
This is not an Ha suppression problem. The problem is the fact that it barely exists, which is what I've been saying all along. It's the same thing Michael S. said. It's the same thing everyone has been saying. You can try to deny that fact as much as you want, but your data does NOT contain very much Ha. BARELY enough to enhance, and because it is so sparse (not all red pixels, which constitute only 1/4 of the sensor area to start with, got sufficient Ha signal to swamp the noise floor), it comes through looking mostly like color noise itself. This is clearly evident in your own processing. It is also evident in Michael S.' processing. I chose not to enhance it so far that it looked forced or artificial...and to my eyes, my version still tries too hard.
You are not getting the point. Whether you use a dedicated CCD, modified DSLR, or stock DSLR, there will always be faint signals near the noise level, even with a hundred hours of integration. If you want to bring out such faint signals, good processing methodology is critical. It does not matter what the integration time of the challenge image is. The challenge is to show what you can bring out. Pretend it is a 5,000-hour Hyperstar integration; what can you bring out?
* * *
It is clear that the only thing that will settle this issue is a proper comparison between an unmodded and a modded DSLR. I am not interested in debating whether the 7D II captures some Ha data. Of course it does. My much older 5D III, which has higher read noise and significantly higher dark current, even captures some Ha data. Both cameras, as well as any other unmodded ILC, gather an extremely weak Ha signal. The debate about the processing of this image, along with anecdotal claims about "Ha suppression," is only possible BECAUSE this image has such insufficient Ha data.
NO. Again, it will be the same with any of the above when trying to bring out that last bit of information near the noise level.
We wouldn't even be having this discussion if we could compare even unprocessed integrations from both a modded and unmodded DSLR. The differences would be obvious with a simple screen stretch in PixInsight, let alone fully processed results.
Again, you are missing the point of the challenge. Don't confuse this with the 60Da thread. Best practices are needed with whatever data you have. Making unreal color gradients and suppressing signal is not a best practice.
And that was what the original question by SnappieChappie was asking about: whether an "astro" version of the 60D was better or not. The 60Da would be better, marginally,
Fine, but this is not that thread. This is the "show us what you can do with this data" thread.
however if I were to recommend an option to SnappieChappie, I'd be recommending a fully astro-modded used 6D. With the lower dark current, larger pixels, and excellent cost/value ratio, there are few DSLRs on the market that can beat it for astro. (Yes, I did say larger pixels...I don't adhere to the "smaller pixels are better for astro" mantra unless you're imaging with a very, very wide field, where the image scale would be well undersampled with 6 micron pixels.)
That is a different thread. Please stick to this one.
Until we can compare modded and unmodded side by side, any further debate is pretty pointless.
That is a different thread. Again, show us what you can do with existing data. That is universal in this challenge. See above.
I've demonstrated my processing skill, and revealed the Ha in your own image (using some fairly extreme techniques)...and all that led to was anecdotal claims about how I've somehow suppressed the Ha, or how I've somehow introduced a gradient into the stars. Seriously?
Yes, seriously. The photometry data prove that.
You haven't a clue how I processed, what my steps were, what settings I used at each step, and I simply do not believe you, or anyone else for that matter, can derive the processing technique just by looking at a fairly heavily compressed JPEG online.
If your processing is so bad that it causes a color shift and creates color gradients from red to blue when converting to JPEG, maybe you need different software.
Regarding bias frames, bias is part of dark frames. Dark frames were at the same exposure as the light frames. Thus the equation is:
calibrated image = ((light - bias) - (dark - bias)) / (flat - bias) = (light - dark) / (flat - bias)
Bias is a single value. For the Canon 7D2, it is 2048 in the 14-bits/channel raw data. Bias frames also contain read and pattern noise, as do dark frames and light frames. The pattern noise in my 7D2 is about 0.5 electron, so not a factor. The master flat frame I supplied had the bias removed. None of these should have affected your results.
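In code form, the calibration described above (same-exposure darks, a constant bias, and a master flat that already has the bias removed) reduces to something like this sketch; the flat normalization is added here for illustration and is not part of Roger's equation:

```python
import numpy as np

def calibrate(light, master_dark, master_flat_bias_removed):
    """(light - dark) / (flat - bias); the bias cancels in (light - dark) for same-exposure darks."""
    # If the master flat still contained bias, subtract the constant first,
    # e.g. 2048 ADU on the 14-bit scale for the 7D2 as quoted above.
    flat = master_flat_bias_removed / np.mean(master_flat_bias_removed)   # shading correction only
    return (light - master_dark) / flat
```

Because the darks were exposed the same length as the lights, no dark scaling is involved and the bias never has to be measured separately.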
Roger
Regarding bias and bias frames: bias is a single value in an UNSCALED frame. Calibration tools these days, including DSS, PixInsight, Nebulosity, and MaxIm DL, all scale calibration frames. I believe ImagesPlus can do scaling as well, although I think it is manual (much like the manual option in DSS). PixInsight actually does per-light-frame noise evaluation and scales the master dark ideally for each and every frame...it isn't just a single global scaling. The bias is scaled along with everything else unless it is first removed.
PixInsight scales both master flats and master darks. Yes, the bias is in the darks; however, after the master dark is scaled, the bias no longer matches that in each light frame. If you already bias-subtracted the flat, then that's probably fine.
Wow.
First, scaling dark frames is only done when the dark frame exposure time is different from that of the light frames. I supplied dark frames taken at the same exposure time. BUT do note that most DSLRs these days, including your Canon 5D III, as well as the 7D2, 6D, Nikon D800, D810, and many more, have on-sensor dark current suppression. That means the dark level does not change with exposure time. Scaling dark frames was for when dark current is not suppressed. Note this is on-sensor technology and is not something you can turn on or off. It is not long-exposure noise reduction.
Further, dark frame scaling was a flawed concept from the start. Say, for example, your dark frames were 2x shorter than your light frames. Scaling the dark frames by 2x also scales the noise by 2x, but if you had doubled the dark frame exposure time instead, the noise would only increase by the square root of 2. Scaling dark frames does not treat noise properly.
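That factor-of-two versus square-root-of-two point is easy to check numerically. A quick simulation with a made-up dark current level:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix = 1_000_000
dark_counts = 4.0                                  # hypothetical e-/pixel in one short dark

short_dark = rng.poisson(dark_counts, n_pix).astype(float)
scaled_dark = 2.0 * short_dark                     # scaling the frame scales its noise by 2x
long_dark = rng.poisson(2.0 * dark_counts, n_pix).astype(float)   # truly exposed 2x longer

print(scaled_dark.std())   # ~4.0 e-  (2 * sqrt(4))
print(long_dark.std())     # ~2.8 e-  (sqrt(8), only sqrt(2) higher than the short dark)
```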
Dark frames are actually no longer needed with on-sensor dark current suppression in recent cameras. Subtracting dark frames just adds another noise source to the light frames, and that inhibits one from extracting faint signals. For example, we sometimes see people doing something like 100 light frames and 20 darks. The noise at the low end is dominated by the 20 darks, not the lights.
For scaling to work, it is ESSENTIAL that the bias be removed from everything first. All darks, all flats, and all lights must be bias calibrated before doing anything else. Once that is done, then dark and flat scaling will not result in changing the bias signal, and they can be subtracted or divided out of the lights properly.
See above. Note, bias on most Canon cameras is 2048 on the 14-bit scale. One should not measure bias frames--that is just another noise source. Software should be able to take a single constant for the bias.
Oh, one more thing for the record. I generally use Winsorized Sigma Clipping with my integrations to reject pixels that fall outside a specified range of StdDev. This eliminates star and meteor trails, but also eliminates hot pixels, cosmic ray strikes, etc. The bare minimum sub count for WSC to work is 10 subs, and it works better with many more. That would be another reason to get deeper integrations: more reliable outlier rejection.
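For readers who haven't met the method, a simplified sketch of a Winsorized sigma-clipped combine follows. This is only the general idea (iteratively clamp outliers to get robust per-pixel statistics, then reject and average); it is not PixInsight's exact implementation or its default parameters:

```python
import numpy as np

def winsorized_sigma_clip_stack(stack, k=3.0, iters=5):
    """Combine registered subs of shape (n_subs, H, W) with a simplified Winsorized sigma clip."""
    data = np.asarray(stack, dtype=np.float64)
    w = data.copy()
    for _ in range(iters):
        mean = w.mean(axis=0)
        sigma = w.std(axis=0)
        lo, hi = mean - k * sigma, mean + k * sigma
        w = np.clip(data, lo, hi)              # Winsorize: clamp outliers, don't discard yet
    keep = (data >= lo) & (data <= hi)         # final per-pixel rejection mask
    n_kept = keep.sum(axis=0).clip(min=1)
    return (data * keep).sum(axis=0) / n_kept

# e.g. master = winsorized_sigma_clip_stack(np.stack(registered_subs), k=3.0)
```

With fewer than roughly 10 subs, the per-pixel sigma estimate is too noisy for the rejection to be reliable, which is the point about sub count above.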
Be careful with median combines. Median combine quickly becomes posterised. See:
http://www.clarkvision.com/articles/image-stacking-methods/
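The posterization is easy to demonstrate: with integer ADU data, the median of a stack can only land on a small set of discrete levels, while the mean steps in increments of 1/N. A quick illustration with simulated faint-sky counts:

```python
import numpy as np

rng = np.random.default_rng(1)
subs = rng.poisson(3.0, size=(9, 100_000))     # 9 subs of a faint, integer-valued signal

med = np.median(subs, axis=0)   # median of an odd number of integers is itself an integer
avg = subs.mean(axis=0)         # mean can take values in steps of 1/9 ADU

print(len(np.unique(med)))      # only a handful of output levels -> posterization
print(len(np.unique(avg)))      # many more distinct levels
```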
I think we should call this conversation done.
Roger