This is odd... dss problem please help

Started Jan 4, 2016 | Discussions
Jon Rista Contributing Member • Posts: 681
Re: Quantization

Jack Hogan wrote:

Note that e- or pixel pitch did not enter the discussion. I believe there is no 'safe' ISO as far as quantization error is concerned: it's 'safe' at whatever ISO the >1LSB-rn-before-ADC condition is met. I think in any case that these low level effects, if present, are virtually impossible to recognize in a normal capture, as shown by Jim's related visual tests. I wonder if they would instead be observable when stacking 100 images.

Wouldn't a bit of dithering or even a small amount of drift between subs eliminate the chance that quantization noise (and the posterization effect it could cause) actually appear in a large stack?

I have found that I can randomly dither several pixels between frames with only about 10 seconds of inter-frame overhead (it used to take a couple of minutes, the trick is to avoid forcing your guide setup to settle below the limit dictated by seeing). With low dithering overhead like that, dithering should always be done regardless.

Jon Rista Contributing Member • Posts: 681
Re: This is odd... dss problem please help

The problem is Roger kept addressing the wrong point. He seemed to think that I was advocating against getting a 1/3rd histogram exposure because I was advocating getting longer exposures. He seemed to think I was advocating for longer exposures solely because of read noise. He seemed to think I was advocating for getting longer exposures in a light polluted zone, which would have resulted in 1/2, 2/3, 3/4 histogram, or even more. He was wrong. He missed my point entirely. Even when he wrote his own article on the subject of light pollution, he STILL missed the point.

I was advocating finding darker skies than the average imager. Most imagers I know and have met in the last two years image from their back yards, which most often sit in a red zone (or a white zone, potentially much worse, or an orange zone, slightly better). Such skies are very high in light pollution, which forces the imager to use shorter exposures, often as little as 15-30 seconds before reaching 1/3rd histogram. By imaging under darker skies, one must use longer exposures to even reach 1/4 histogram, let alone 1/3rd.

Does that make sense? I am not advocating against the 1/4-1/3rd histogram guideline. I am advocating finding skies that are much darker than the average suburban back yard, because when you have very little light pollution, it's a very small problem, and you can gather more information with less noise, even with the same exposure length (which would result in significantly less than 1/3rd histogram). This is an example of the difference between two subs of the same exposure, one red zone, one green zone:

Red Zone left; Green Zone right (SINGLE sub exposure!)

The difference in noise should be beyond obvious. The increase in structural detail in the dark site image should also be obvious. All that extra noise comes from light pollution. Why would you want to deal with that if you didn't have to? You can usually find a decent green zone imaging site within about 30-40 minutes of a suburban home in a city. It's literally orders of magnitude better.

Jack Hogan Veteran Member • Posts: 7,491
Re: Quantization

Jon Rista wrote:

Jack Hogan wrote:

Note that e- or pixel pitch did not enter the discussion. I believe there is no 'safe' ISO as far as quantization error is concerned: it's 'safe' at whatever ISO the >1LSB-rn-before-ADC condition is met. I think in any case that these low level effects, if present, are virtually impossible to recognize in a normal capture, as shown by Jim's related visual tests. I wonder if they would instead be observable when stacking 100 images.

Wouldn't a bit of dithering or even a small amount of drift between subs eliminate the chance that quantization noise (and the posterization effect it could cause) actually appear in a large stack?

That would make sense, Jon.

I have found that I can randomly dither several pixels between frames with only about 10 seconds of inter-frame overhead (it used to take a couple of minutes, the trick is to avoid forcing your guide setup to settle below the limit dictated by seeing). With low dithering overhead like that, dithering should always be done regardless.

Intuitively, if the visual information is 'properly' encoded into the raw data of each individual capture (>1LSB-rn-before-ADC), one should be able to recover it to one's satisfaction by stacking enough images. When that condition is not met, can quantization error be improved by stacking? It may be worth a test.
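A minimal sketch of such a test, assuming Gaussian read noise, Poisson photon arrivals, and an ideal rounding ADC (all parameters are illustrative, not from any particular camera):

    import numpy as np

    rng = np.random.default_rng(0)

    def stacked_mean(signal_e, read_noise_e, e_per_dn, n_frames, n_pixels=100_000):
        # One capture = Poisson photons + Gaussian read noise; the 'ADC'
        # rounds to whole DN. Return the mean of the stack in electrons.
        frames = (rng.poisson(signal_e, (n_frames, n_pixels))
                  + rng.normal(0.0, read_noise_e, (n_frames, n_pixels)))
        dn = np.round(frames / e_per_dn)
        return (dn * e_per_dn).mean()

    signal = 0.4                       # e-/capture, well below one LSB
    for rn in (3.0, 0.3):              # read noise above vs below 1 LSB (2 e-)
        est = stacked_mean(signal, rn, e_per_dn=2.0, n_frames=100)
        print(f"read noise {rn} e-: stacked mean {est:.3f} e- (true {signal})")

With read noise above one LSB the stacked mean should converge on the true signal; with read noise well below one LSB the quantizer's bias survives no matter how many frames are averaged.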

Jack

Jack Hogan Veteran Member • Posts: 7,491
Re: Quantization

rnclark wrote:

It gets a little more complex when you start stacking to detect sub one photon per exposure.

Hi Roger,

I understand that it should make no difference how many e- we are capturing per exposure as long as the ADC is 'properly' dithered (properly meaning that read noise before the ADC is at least 1DN or LSB)*. If a weak signal is encoded unsatisfactorily into the raw data of a single capture (i.e. SNR is too low for a pleasing picture), stacking several captures will help to bring it out. Clearly for a given situation, equipment and set up the weaker the signal the more the captures to stack in order to achieve a target SNR. But in theory there should be no lower limit to the signal that can be 'properly' encoded in such a situation.

We had a thread about this a while back, about unity gain. I made this model showing the effects of sampling gain in a 100-image stack. Clearly, there is a difference in going from unity gain to 0.5 e-/DN, and a slight further gain going to 0.3 e-/DN. Of course, as sky noise, read noise, or dark current noise per exposure increases, the effect becomes less. The model includes Poisson noise from photons.

I never quite wrapped my head around this interesting chart. I am going to have to think about it for a while, probably try to duplicate it, which requires time. But my guess would be that if one let the number of images stacked vary, one could achieve a similar target SNR in all rows, albeit at different stack heights, because random noise before the 'ADC' (3e-) is constantly above one LSB (max 2e-).
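As a rough sanity check of that guess - my own numbers, assuming per-frame SNR of signal/sqrt(signal + read_noise^2) and SNR growing with the square root of the stack height:

    import math

    def frames_for_target(signal_e, read_noise_e, target_snr):
        # Photon shot noise plus read noise per frame; stacking N frames
        # multiplies SNR by sqrt(N).
        snr1 = signal_e / math.sqrt(signal_e + read_noise_e ** 2)
        return math.ceil((target_snr / snr1) ** 2)

    for sig in (1.0, 0.5, 0.3):        # mean signal per capture, e-
        print(sig, frames_for_target(sig, read_noise_e=3.0, target_snr=5.0))

Every signal level reaches the same target SNR, just at very different stack heights, which is consistent with the guess above.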

Jack

*The amount of read noise at the input of the ADC is admittedly not precisely known, as discussed in the previous post. So when measuring read noise at the output of the ADC one should probably leave a bit of a safety margin.

Trollmannx Veteran Member • Posts: 6,690
Re: This is odd... dss problem please help

Allien wrote:

"Jon, this really is getting tiring and I'm going to bow out as I really don't have the time to keep making corrections after this one, but for the sake of other astrophotographers, I'll try one more time."

As my first post, I wanted to thank rnclark for sticking it out; I totally get it.

I'm perplexed by the straw-man arguments that are presented to him.

Even being a new guy to astrophotography, I can usually understand what Roger is saying, yet find the arguments against him confusing and full of words.

So far I am still doing fixed tripod stacking (my PixInsight and PS trials expired)...but when I get my barndoor up and running I will have a leg up from reading Roger's posts.

It is not a hard concept to grasp: aim for 1/3rd histogram and open that shutter as often as possible! Integration time!

This is very relevant for us new guys, as Roger's technique will produce fewer tracking errors. This is a pretty big deal, as it will allow us to delay buying that very expensive mount.

To me the main point here is that disagreement can spur very exciting discussions. This thread is a gold mine to me and my own thinking (being well versed in astrophotography but also being lazy and looking for shortcuts that get me 98% toward the ideal, leaving the last two percent to the true astrophotographers).

A fine thread (subjective statement)!

rnclark Veteran Member • Posts: 3,957
Re: Quantization

Jack Hogan wrote:

rnclark wrote:

It gets a little more complex when you start stacking to detect sub one photon per exposure.

Hi Roger,

I understand that it should make no difference how many e- we are capturing per exposure as long as the ADC is 'properly' dithered (properly meaning that read noise before the ADC is at least 1DN or LSB)*. If a weak signal is encoded unsatisfactorily into the raw data of a single capture (i.e. SNR is too low for a pleasing picture), stacking several captures will help to bring it out. Clearly for a given situation, equipment and set up the weaker the signal the more the captures to stack in order to achieve a target SNR. But in theory there should be no lower limit to the signal that can be 'properly' encoded in such a situation.

We had a thread about this a while back, about unity gain. I made this model showing the effects of sampling gain in a 100-image stack. Clearly, there is a difference in going from unity gain to 0.5 e-/DN, and a slight further gain going to 0.3 e-/DN. Of course, as sky noise, read noise, or dark current noise per exposure increases, the effect becomes less. The model includes Poisson noise from photons.

I never quite wrapped my head around this interesting chart. I am going to have to think about it for a while, probably try to duplicate it, which requires time. But my guess would be that if one let the number of images stacked vary, one could achieve a similar target SNR in all rows, albeit at different stack heights, because random noise before the 'ADC' (3e-) is constantly above one LSB (max 2e-).

Jack

*The amount of read noise at the input of the ADC is admittedly not precisely known, as discussed in the previous post. So when measuring read noise at the output of the ADC one should probably leave a bit of a safety margin.

Hi Jack,

Theories are great, but often leave something out, or don't fully apply to a limited set of observations.

For example, the Nyquist theorem says to sample at at least twice the highest frequency. But that assumes an infinite time base and constant frequency. With a limited time base, one gets aliasing unless sampling is at a rate much higher than Nyquist.

In the real world, there are systematics. For example, in image sensors: fixed pattern noise, 1/f noise, finite digitization. Typically, one can go to great lengths to reduce noise sources like fixed patterns, but in reality it gets very difficult to correct them to more than about 10x below the random noise.

In the above model, which is a pure mathematical model with a random number generator (which of course is not perfectly random, but is a purer model than a real sensor with low-level FPN), it gets difficult to extract signals more than about 10 times lower than the random noise. Thus we see that with random noise of 3 electrons, it is tough to get below 0.3 photon per pixel per frame. Of course one can stack more, but to get to 0.2 photons/exposure one would need to increase the number of frames by 1.5^2 = 2.25x, i.e. 225 frames instead of 100, and in the real world it would take more due to the systematics. This leads to a key in astrophotography: collect the light as fast as you can, meaning large-aperture, fast optics.

Regarding quantization noise, there is always +/- 1 bit.  So at unity gain with 3 electron read noise you get:

sqrt(3^2 +1^2) = 3.2 electrons noise.

It is that quantization error that contributes to the degradation of extracting the faint signal.
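For concreteness, the arithmetic from the last two paragraphs in one place, treating quantization as an extra noise term added in quadrature at two different gains (a sketch of the stated numbers, not a claim about any particular ADC):

    import math

    read_noise = 3.0                              # e-
    for quant_e in (1.0, 0.2):                    # 1 DN expressed in electrons
        total = math.sqrt(read_noise ** 2 + quant_e ** 2)
        print(f"quantization term {quant_e} e-: total {total:.2f} e-")

    # Frame scaling from 0.3 to 0.2 photons/exposure at fixed noise:
    print(100 * (0.3 / 0.2) ** 2)                 # 225 frames instead of 100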

Roger

rnclark Veteran Member • Posts: 3,957
Re: This is odd... dss problem please help

Jon Rista wrote:

The problem is Roger kept addressing the wrong point. He seemed to think that I was advocating against getting a 1/3rd histogram exposure because I was advocating getting longer exposures. He seemed to think I was advocating for longer exposures solely because of read noise. He seemed to think I was advocating for getting longer exposures in a light polluted zone, which would have resulted in 1/2, 2/3, 3/4 histogram, or even more. He was wrong. He missed my point entirely. Even when he wrote his own article on the subject of light pollution, he STILL missed the point.

I was advocating finding darker skies....

Jon, this is disingenuous at best and blatantly false.

Note I WAS imaging at a dark site and DID have 1/4 to 1/3 histogram with 1-minute subs. Here is what you were complaining about, and these are direct quotes from you:

> I very strongly disagree with the way you advocate very short exposures and very short integration times.

> Every other serious and skilled DSLR astrophotographer I know, including Scott Rosen and Jerry Lodriguss, use longer exposures

> Even if you have low read noise, get the longest exposures you can without overexposing. At my dark site on a night with good transparency, I can expose at ISO 800 for 10-12 minutes...

> If you are limited to 60s exposures, and you are reaching 1/3rd histogram in that amount of time, then your limited by light pollution.

> Instead of 30 second subs or 60 second subs, use four, five, maybe even 10 minute subs if your site is dark enough.

> As for read noise itself. If you stuck with the 1 minute exposures at the dark site, you would need to stack more than 24 subs to get the same SNR as with a single 24 minute sub.

Sorry to the group if this is a little testy--I've been up all night imaging, but this crap has to stop.

Roger

ChrisLX200 Regular Member • Posts: 449
Re: This is odd... dss problem please help

rnclark wrote:

Jon Rista wrote:

The problem is Roger kept addressing the wrong point. He seemed to think that I was advocating against getting a 1/3rd histogram exposure because I was advocating getting longer exposures. He seemed to think I was advocating for longer exposures solely because of read noise. He seemed to think I was advocating for getting longer exposures in a light polluted zone, which would have resulted in 1/2, 2/3, 3/4 histogram, or even more. He was wrong. He missed my point entirely. Even when he wrote his own article on the subject of light pollution, he STILL missed the point.

I was advocating finding darker skies....

Jon, this is disingenuous at best and blatantly false.

Note I WAS imaging at a dark site and DID have 1/4 to 1/3 histogram with 1-minute subs. Here is what you were complaining about, and these are direct quotes from you:

> I very strongly disagree with the way you advocate very short exposures and very short integration times.

> Every other serious and skilled DSLR astrophotographer I know, including Scott Rosen and Jerry Lodriguss, use longer exposures

> Even if you have low read noise, get the longest exposures you can without overexposing. At my dark site on a night with good transparency, I can expose at ISO 800 for 10-12 minutes...

> If you are limited to 60s exposures, and you are reaching 1/3rd histogram in that amount of time, then your limited by light pollution.

> Instead of 30 second subs or 60 second subs, use four, five, maybe even 10 minute subs if your site is dark enough.

> As for read noise itself. If you stuck with the 1 minute exposures at the dark site, you would need to stack more than 24 subs to get the same SNR as with a single 24 minute sub.

Sorry to the group if this is a little testy--I've been up all night imaging, but this crap has to stop.

Roger

Jon, you have shown far more patience than I have in dealing with this BS, and you are clearly very knowledgeable in the subject, but if I may, let me direct readers to the website of StarTools (a fantastic post-processing program written by Ivo Jager - someone who knows what he's talking about and whose opinion I respect...):

http://forum.startools.org/viewtopic.php?f=4&t=912&sid=b9946e1573f99b8bcd3a273a068aaead

Thanks for your attention.

ChrisH

Jon Rista Contributing Member • Posts: 681
Re: Quantization

rnclark wrote:

Jack Hogan wrote:

rnclark wrote:

It gets a little more complex when you start stacking to detect sub one photon per exposure.

Hi Roger,

I understand that it should make no difference how many e- we are capturing per exposure as long as the ADC is 'properly' dithered (properly meaning that read noise before the ADC is at least 1DN or LSB)*. If a weak signal is encoded unsatisfactorily into the raw data of a single capture (i.e. SNR is too low for a pleasing picture), stacking several captures will help to bring it out. Clearly for a given situation, equipment and set up the weaker the signal the more the captures to stack in order to achieve a target SNR. But in theory there should be no lower limit to the signal that can be 'properly' encoded in such a situation.

We had a thread about this a while back, about unity gain. I made this model showing the effects of sampling gain in a 100-image stack. Clearly, there is a difference in going from unity gain to 0.5 e-/DN, and a slight further gain going to 0.3 e-/DN. Of course, as sky noise, read noise, or dark current noise per exposure increases, the effect becomes less. The model includes Poisson noise from photons.

I never quite wrapped my head around this interesting chart. I am going to have to think about it for a while, probably try to duplicate it, which requires time. But my guess would be that if one let the number of images stacked vary, one could achieve a similar target SNR in all rows, albeit at different stack heights, because random noise before the 'ADC' (3e-) is constantly above one LSB (max 2e-).

Jack

*The amount of read noise at the input of the ADC is admittedly not precisely known, as discussed in the previous post. So when measuring read noise at the output of the ADC one should probably leave a bit of a safety margin.

Hi Jack,

Theories are great, but often leave something out, or don't fully apply to a limited set of observations.

For example, the Nyquist theorem says to sample at at least twice the highest frequency. But that assumes an infinite time base and constant frequency. With a limited time base, one gets aliasing unless sampling is at a rate much higher than Nyquist.

In the real world, there are systematics. For example, in image sensors: fixed pattern noise, 1/f noise, finite digitization. Typically, one can go to great lengths to reduce noise sources like fixed patterns, but in reality it gets very difficult to correct them to more than about 10x below the random noise.

In the above model, which is a pure mathematical model with a random number generator (which of course is not perfectly random, but is a purer model than a real sensor with low-level FPN), it gets difficult to extract signals more than about 10 times lower than the random noise. Thus we see that with random noise of 3 electrons, it is tough to get below 0.3 photon per pixel per frame. Of course one can stack more, but to get to 0.2 photons/exposure one would need to increase the number of frames by 1.5^2 = 2.25x, i.e. 225 frames instead of 100, and in the real world it would take more due to the systematics. This leads to a key in astrophotography: collect the light as fast as you can, meaning large-aperture, fast optics.

Regarding quantization noise, there is always +/- 1 bit. So at unity gain with 3 electron read noise you get:

sqrt(3^2 +1^2) = 3.2 electrons noise.

It is that quantization error that contributes to the degradation of extracting the faint signal.

Roger

So, all of this is to achieve "as close to perfectly replicating the signal" as possible? (Just clarifying that's the level this conversation has reached...perfect replication of the original signal.)

How much in the real world does 0.2e- worth of quantization noise actually matter, though? Just off the top of my head, the only time it would really matter is if you were imaging narrow band at a dark site where you could effectively expose for (well) over an hour per sub and still not achieve a skyfog-limited background. At that point, there would always be some signal in the read noise of the image. Even with LRGB imaging at a 22mag/sq" or better exceptional dark site, you're still going to have airglow to contend with, and that will ultimately limit your exposures and allow you to become skyfog limited (at which point read noise hardly matters, let alone a fraction of an electron worth of quantization noise).

In a more practical situation, say people imaging in even moderately light polluted regions like a green or yellow zone (let alone the more common orange/red/white zones), I cannot see 0.2e- quantization noise actually mattering unless the imager was stacking hundreds of subs. Even then, it wouldn't take more than a small amount of dithering, or even just natural inter-frame drift (which tends to occur without ideal polar alignment), to offset PRNU in the final integration, meaning one should never see any posterization in the final result.

It might impact SNR a small amount at lower ISOs; however, few people image at anything below ISO 400, and those who do tend to use very long exposures (I've used 20-30 minute sub exposures with filtration on my 5D III at ISO 400 on a few occasions), and I never saw a hint of posterization or other quantization issues after stacking.

Jack Hogan Veteran Member • Posts: 7,491
The power of dithering

rnclark wrote:

Hi Jack,

Theories are great, but often leave something out, or don't fully apply to a limited set of observations.

For example, the Nyquist theorem says to sample at at least twice the highest frequency. But that assumes an infinite time base and constant frequency. With a limited time base, one gets aliasing unless sampling is at a rate much higher than Nyquist.

In the real world, there are systematics. For example, in image sensors: fixed pattern noise, 1/f noise, finite digitization. Typically, one can go to great lengths to reduce noise sources like fixed patterns, but in reality it gets very difficult to correct them to more than about 10x below the random noise.

In the above model, which is a pure mathematical model with a random number generator (which of course is not perfectly random, but is a purer model than a real sensor with low-level FPN), it gets difficult to extract signals more than about 10 times lower than the random noise. Thus we see that with random noise of 3 electrons, it is tough to get below 0.3 photon per pixel per frame. Of course one can stack more, but to get to 0.2 photons/exposure one would need to increase the number of frames by 1.5^2 = 2.25x, i.e. 225 frames instead of 100, and in the real world it would take more due to the systematics. This leads to a key in astrophotography: collect the light as fast as you can, meaning large-aperture, fast optics.

Regarding quantization noise, there is always +/- 1 bit. So at unity gain with 3 electron read noise you get:

sqrt(3^2 +1^2) = 3.2 electrons noise.

It is that quantization error that contributes to the degradation of extracting the faint signal.

I hear you on real-world vs ideal, and FPN, Roger. On the other hand about quantization error, I understand that the random read noises that we are typically able to measure and discuss in these fora already include such a quantization component. For instance when bclaff or sensorgen say that the D7200 has about 1.9e- read noise at base ISO, that figure already includes quantization error: it's part of the measurement.

Your earlier chart and comment spurred me to investigate further, though, so I simulated a version of the strips: the numbers are Poisson objects with intensities increasing linearly from zero at the left to a mean of 1 e-/capture at the right - 0.1e- times the number, so 6 represents a signal of 0.6e-/capture. To make them display this way I equated the lowest value in the file to zero and the highest value to 255, the equivalent of a levels adjustment:

A) Linear intensities in e-/capture, so 3 corresponds to 0.3e- and 10 to 1e-/capture

If we add 3 e- of Gaussian read noise to the signal ramp and digitize the resulting image with a 'proper' gain of 1 DN = 2e- (top row in your chart), this is what we get after the levels adjustment:

B) 1 Frame: same as above with 3e- of random read noise and quantized at 1 LSB = 2e-.

Ugh, we can't see anything. Fair enough: theory says that the signal to noise ratio at number 10 should be about 0.316, and the SNR at number one should be 0.033, much (much!) below the typically acceptable working range. Bear with me though, as I am trying to think this through.

Since the random noise standard deviation (3e-) was higher than 1 DN (2e-) in figure B) above, information theory suggests that the light information from the signal (the numbers) was nevertheless 'properly' captured in the raw data: assuming no DSNU and FPN, all we would need to do to pull out the information buried deep in the noise is to stack many such captures to reduce the proportion of random read noise in the final image. For instance, here is a stack of 10 lights of the subject above

C) 10 frames with the same properties as the one just above

and here is one of 100

D) 100 frames of B) above

Coming out nicely. If we keep at this ideal exercise we should be able to get at the visual information stored in the individual captures with any arbitrary target SNR. And in fact the result of stacking enough (for me) such captures is top image A), showing the original signal numbers in all their glory. You had not realized that the top image was the final stack, had you? If you want a cleaner final image, keep going.

Of course I am not suggesting that anybody sane should attempt to capture the number of frames required to recover a signal recorded at an SNR of 1/30 in a single frame, just that one can do so if one so desires - as long as the information is in the data in the first place. And the condition for that to be true is that random noise at the input of the ADC needs to be higher than one LSB. In this case it was and so we could.

Let's take a look now at a case when it isn't. The following four strips correspond to B), C), D) and A) above respectively, with the same signal but lower random read noise of 1e-, digitized by an ADC where it takes 10e- to reach 1 LSB (the a7S has a similar gain at base ISO):

B1) 1 Frame: same signal as B) above but with 1e- of random read noise and quantized at 1 DN = 10e-.

Yeah, the tiny signal (remember, maximum 1 e-) just can't make it to the 0.5DN threshold without the help of the earlier, relatively stronger dithering power, so most of the single frames are full of zeros (0, zip). The few specks one can see are statistical oddities.

C1) 10 frames as in B1), compare to C)

D1) 100 frames as in B1), compare to D). This is what information loss looks like.

The oddities are piling up, but because they are oddities they are not in the right proportions... so by the time one gets to n frames (as in figure A above) the result is nonlinear, with distorted means and standard deviations:

A1) Hey, in quantum mechanics if you play tennis against a wall long enough the ball goes through it

Here instead is what A) looks like for comparison again, same number of lights stacked. Here means and standard deviations are proportional to the original, as expected from captures that recorded the 'full' visual information:

A) Same signal but more noise than A1) ... and 'proper' ADC threshold (gain)

Therefore, ignoring pixel response non-uniformities for a moment (mainly FPN and DSNU), the key to being able to capture visual information in the raw data is a properly dithered ADC: random noise rms at its input of at least 1 DN (= 1 LSB). Less will result in visual information loss; more will not help, but will require stacking more captures in order to dig out the signal at the same final target SNR.
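For anyone who wants to reproduce the strips, here is a condensed sketch of the experiment as described - my reconstruction, assuming a Poisson signal ramp, Gaussian read noise and an ideal rounding ADC; the actual code and display scaling may differ:

    import numpy as np

    rng = np.random.default_rng(1)
    H, W = 32, 1000
    ramp = np.linspace(0.0, 1.0, W)          # mean signal, 0 to 1 e-/capture
    truth = np.tile(ramp, (H, 1))

    def stack(n_frames, read_noise_e, e_per_dn):
        acc = np.zeros_like(truth)
        for _ in range(n_frames):
            frame = rng.poisson(truth) + rng.normal(0.0, read_noise_e, truth.shape)
            acc += np.round(frame / e_per_dn) * e_per_dn   # quantize, back to e-
        return acc / n_frames

    dithered = stack(100, read_noise_e=3.0, e_per_dn=2.0)    # cases B) -> D)
    starved  = stack(100, read_noise_e=1.0, e_per_dn=10.0)   # cases B1) -> D1)

    print(dithered.mean(axis=0)[::200])      # should track the ramp
    print(starved.mean(axis=0)[::200])       # mostly zeros plus rare spikes

The column means in the dithered case should follow the ramp linearly, while the noise-starved case collapses to zero with occasional spikes - the 'statistical oddities' described above.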

Bill Claff conveniently provides such estimates of read noise per DN for most cameras by measuring noise directly off raw-data dark frames at every ISO. Keep a bit of a margin, however, because the resulting figures refer to the output of the ADC and include ADC noise, PRNU, FPN, quantization noise and more*. The actual random noise at the input of the ADC will therefore be somewhat lower once the above components have been subtracted out. How much lower depends on how well the camera imaging system was tuned during design and manufacturing. Every generation is better than the last.

OK, thanks to you folks, this concludes my initiation to astro - I think I understand how the basics work. Now all I have to do is go out, buy the kit and practice. Don't hold your breath.

Jack

* Also be careful about the actual bit depth of some cameras when determining the random noise in one LSB: some 14-bit cameras run at 13 or even 12 bits depending on operating mode. For instance the a7mkII, like many other advanced Sonys, appears to run the ADC at 13 bits at base ISO. I understand that Bill Claff's read-noise-in-DN figures refer to a 14-bit scale instead, so its measured read noise of 1.1DN at ISO100 could actually be half that on a 13-bit scale. And the a7mkII does in fact show quantization artifacts a stop or so up from base ISO.

Jon Rista Contributing Member • Posts: 681
Re: This is odd... dss problem please help

rnclark wrote:

Jon Rista wrote:

The problem is Roger kept addressing the wrong point. He seemed to think that I was advocating against getting a 1/3rd histogram exposure because I was advocating getting longer exposures. He seemed to think I was advocating for longer exposures solely because of read noise. He seemed to think I was advocating for getting longer exposures in a light polluted zone, which would have resulted in 1/2, 2/3, 3/4 histogram, or even more. He was wrong. He missed my point entirely. Even when he wrote his own article on the subject of light pollution, he STILL missed the point.

I was advocating finding darker skies....

Jon, this is disingenuous at best and blatantly false.

Note I WAS imaging at a dark site and DID have 1/4 to 1/3 histogram with 1-minute subs. Here is what you were complaining about, and these are direct quotes from you:

> I very strongly disagree with the way you advocate very short exposures and very short integration times.

> Every other serious and skilled DSLR astrophotographer I know, including Scott Rosen and Jerry Lodriguss, use longer exposures

> Even if you have low read noise, get the longest exposures you can without overexposing. At my dark site on a night with good transparency, I can expose at ISO 800 for 10-12 minutes...

> If you are limited to 60s exposures, and you are reaching 1/3rd histogram in that amount of time, then your limited by light pollution.

> Instead of 30 second subs or 60 second subs, use four, five, maybe even 10 minute subs if your site is dark enough.

> As for read noise itself. If you stuck with the 1 minute exposures at the dark site, you would need to stack more than 24 subs to get the same SNR as with a single 24 minute sub.

Sorry to the group if this is a little testy--I've been up all night imaging, but this crap has to stop.

Roger

All of those quotes are taken completely out of context here. If anything, that needs to stop.

I don't think you have actually read my entire posts. You almost seem to have purposely cherry-picked the couple of cases where I even mentioned read noise...completely bypassing everything else I said about darker skies, red zones, and light pollution. I've stated many times that I agree with the 1/3rd exposure rule (I advocate it myself), and in my own experience, at dark sites, it is exceptionally rare to be able to get 1/3rd histogram exposures in a mere 60 seconds (if I could, I would!). Bigger pixels don't change things here...they gather more light per unit time than smaller pixels, yet they have a larger capacity, so for a given flux and F-number, you should reach 1/3rd histogram in the same exposure time regardless (and I DO have a lens with a bigger aperture...despite that, it still takes me longer than 1 minute). The difference in Q.E. between the 5D III and 7D II is about 8%, not enough to account for a 2-3 stop difference in exposure times.
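A toy version of the pixel-size argument, assuming full-well capacity and collected light both scale with pixel area at a fixed f-number (the pitch and full-well figures are rough illustrative values, not measured specs):

    flux = 1000.0                  # e- per square micron per minute (hypothetical)
    for pitch_um, full_well_e in ((4.1, 27000.0), (6.25, 68000.0)):
        area = pitch_um ** 2
        t_third = (full_well_e / 3) / (flux * area)   # minutes to 1/3 of full well
        print(f"{pitch_um} um pixel: {t_third:.2f} min to 1/3 full well")

The time to reach 1/3 of full well comes out nearly the same for both pixel sizes, which is the point being made.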

Something doesn't add up there. Either your skies were quite bright, or your histogram wasn't actually at 1/3rd. Either way...I am not being disingenuous. I don't think you have read all of my posts, and you are misinterpreting or ignoring what I have said. I've been quite clear, and other people seem to get that.

Jon Rista Contributing Member • Posts: 681
Re: This is odd... dss problem please help

Jon Rista wrote:

rnclark wrote:

Jon Rista wrote:

The problem is Roger kept addressing the wrong point. He seemed to think that I was advocating against getting a 1/3rd histogram exposure because I was advocating getting longer exposures. He seemed to think I was advocating for longer exposures solely because of read noise. He seemed to think I was advocating for getting longer exposures in a light polluted zone, which would have resulted in 1/2, 2/3, 3/4 histogram, or even more. He was wrong. He missed my point entirely. Even when he wrote his own article on the subject of light pollution, he STILL missed the point.

I was advocating finding darker skies....

Jon, this is disingenuous at best and blatantly false.

Note I WAS imaging at a dark site and DID have 1/4 to 1/3 histogram with 1-minute subs. Here is what you were complaining about, and these are direct quotes from you:

> I very strongly disagree with the way you advocate very short exposures and very short integration times.

> Every other serious and skilled DSLR astrophotographer I know, including Scott Rosen and Jerry Lodriguss, use longer exposures

> Even if you have low read noise, get the longest exposures you can without overexposing. At my dark site on a night with good transparency, I can expose at ISO 800 for 10-12 minutes...

> If you are limited to 60s exposures, and you are reaching 1/3rd histogram in that amount of time, then your limited by light pollution.

> Instead of 30 second subs or 60 second subs, use four, five, maybe even 10 minute subs if your site is dark enough.

> As for read noise itself. If you stuck with the 1 minute exposures at the dark site, you would need to stack more than 24 subs to get the same SNR as with a single 24 minute sub.

Sorry to the group if this is a little testy--I've been up all night imaging, but this crap has to stop.

Roger

All of those quotes are taken completely out of context here. If anything, that needs to stop.

I don't think you have actually read my entire posts. You almost seem to have purposely cherry-picked the couple of cases where I even mentioned read noise...completely bypassing everything else I said about darker skies, red zones, and light pollution. I've stated many times that I agree with the 1/3rd exposure rule (I advocate it myself), and in my own experience, at dark sites, it is exceptionally rare to be able to get 1/3rd histogram exposures in a mere 60 seconds (if I could, I would!). Bigger pixels don't change things here...they gather more light per unit time than smaller pixels, yet they have a larger capacity, so for a given flux and F-number, you should reach 1/3rd histogram in the same exposure time regardless (and I DO have a lens with a bigger aperture...despite that, it still takes me longer than 1 minute). The difference in Q.E. between the 5D III and 7D II is about 8%, not enough to account for a 2-3 stop difference in exposure times.

Something doesn't add up there. Either your skies were quite bright, or your histogram wasn't actually at 1/3rd. Either way...I am not being disingenuous. I don't think you have read all of my posts, and you are misinterpreting or ignoring what I have said. I've been quite clear, and other people seem to get that.

For the record, because context matters:

Jon Rista wrote:

Let's compare full size images so that we aren't hiding noise in any way, shall we? I would bet good money that your 9 minute (<-- that is the INTEGRATION time) image is extremely noisy in the background details. Here's mine:

http://www.astrobin.com/full/142576/F/?real=&mod=

Nine minutes (<-- that is the INTEGRATION time!) will get you something, but that something isn't necessarily going to be ideal. It's going to be noisy. It's going to be quite noisy. I don't think I've ever seen a full size or even 50% size image from you. You're conveniently hiding how noisy your images are by scaling them down...a LOT in most cases. So let's be realistic and honest here. Share your unscaled image, and let's see what 9 minutes really gets you.

I very strongly disagree with the way you advocate very short exposures and very short integration times. I do not think it's good advice. You're the only astrophotographer I know who advocates extremely limited and minimal integration times as a matter of course, and actively argues against deeper integration time. Every other astrophotographer I know advocates the same thing I do...using the longest exposures you can in the darkest skies you can (or by using narrow band filters on a mono CCD), and getting as many sub exposures as possible (<-- That's another way of saying, get more integration time!). This is a fundamental, the most basic thing you teach anyone who is interested in learning astrophotography. I know guys who use 45, 60, 90 minute narrow band subs with CCD cameras using newer Sony ICX sensors with as little as 2.8e- read noise, and their results are phenomenal (this brings narrow band imaging into the discussion, where exposure time is rarely limited, as achieving a skyfog limited background that swamps read noise can require hours of exposure for a single sub, even if you have extremely low read noise...it's a different case than DSLR imaging, just an exemplar to demonstrate that this isn't necessarily about read noise, not until your background sky is so dark that you CAN'T swamp the read noise). Every other serious and skilled DSLR astrophotographer I know, including Scott Rosen and Jerry Lodriguss, uses longer exposures and often very extensive integration (as much as 30 hours or more at times) to get the best results. As a matter of course these days, I encourage DSLR imagers to find dark skies, even in a green zone, because imaging in a green zone versus a white zone is literally orders of magnitude better.

Yes, you are correct, one shouldn't expose past 1/3rd histogram. I agree that going beyond 1/3rd histogram with a DSLR throws away dynamic range. I offered my advice as far as exposure times based on what I know about sky brightness and aperture. I made an assumption about the OP's sky brightness, but I was also clear about the brightness of my skies and that I was recommending 360s relative to that kind of sky darkness at f/5. I was also clear that if he had brighter skies than I was assuming, he would likely be FORCED to use shorter exposures.

However if you do have darker skies, then don't limit yourself to short exposures. Even if you have low read noise, get the longest exposures you can without overexposing (<-- implies not going beyond 1/3rd, given the above paragraph). At my dark site on a night with good transparency, I can expose at ISO 800 for 10-12 minutes before I hit 1/3rd histogram. And that's exactly what I do. I also get as much integration time as I can so I can pull out faint details with as little noise as possible.

With the full context here...my primary concern is the recommendation of limited INTEGRATION time. I mentioned exposure times as well; however, that is really a secondary concern. Given what I wrote above, my main issue is the limited integration time of 9 minutes - that is nine one-minute exposures. I went on to explain that everyone else I know who is well versed in astrophotography, and in most cases has been doing it a good decade or so longer than I have, also advocates long integration times. Scott Rosen has been pumping out integrations of 25 hours or more lately, some closer to 40 hours, with nothing more than a DSLR. He usually uses 300-600 second subs at ISO 1600 with an astro-modded 6D.

Roger commonly showcases his admiration of the 7D II with VERY short integration times...9 minutes, 10 minutes, 37 minutes, 45 minutes. His images are very noisy, but that fact is usually well hidden by the very small reproduction sizes he always uses. I don't think I've seen a 100% scale image of Roger's more than once or twice. At best, it's 50%, and frequently much smaller than that. Downsampling averages pixels together, which suppresses noise. Hence the reason for my challenge, that we both share our images at 100% and compare apples to apples...something that never happened. I think some of the truth is being hidden here, and I don't think that is to the service of anyone who comes along to read these posts, or Roger's site.

There is a body of evidence that indicates a 1-minute sub at a dark site isn't going to get you 1/3 histogram most of the time. There are caveats to that...all of which involve some kind of increase in light pollution...high airglow (even aurora), inversion layers that reflect more LP, increased snowfall on the ground resulting in further increased LP, etc. Combine all of those together, and you can shift a whole zone or more (which is what has happened to my dark site lately...in December 2014 I was measuring as high as 21.6mag/sq", and lately in 2016, with high snowcover and thicker inversion layers, I've been measuring closer to 21mag/sq" with increased frequency, and as low as 20.8mag/sq"...a difference of 1.74x on average, and over 2x (one stop) at worst. That moved me out of a borderline gray zone solidly into a yellow zone...a not insignificant shift.)
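For reference, the conversion from those sky readings to linear brightness ratios (the magnitude scale is logarithmic, with 2.5 magnitudes per factor of 10 in brightness):

    for dark, bright in ((21.6, 21.0), (21.6, 20.8)):
        ratio = 10 ** ((dark - bright) / 2.5)
        print(f'{dark} -> {bright} mag/sq": sky {ratio:.2f}x brighter')

which gives 1.74x for the average reading and 2.09x, just over one photographic stop, for the worst one.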

The bottom half of my post above covers how I fully agree with the 1/3rd histogram rule, and I tried to offer context for why I believe a 1 minute exposure is extremely short for a dark site, given I usually have to expose for about five minutes before I even hit 1/4 histogram, and on a good clear night of high transparency, can hit as much as 10 minutes or so (pretty rare occasions, only happened a couple of times at my dark site).

Anyway. I'm done with this tangent of the debate. It's no longer useful, I cannot see how any further bickering over semantics or meaning when things are taken out of context can help anyone. It's just becoming petty. I'll resort to writing my own articles on my site instead, and those who are interested can feel free to read.

sharkmelley Senior Member • Posts: 2,531
Re: Quantization

rnclark wrote:

Regarding quantization noise, there is always +/- 1 bit. So at unity gain with 3 electron read noise you get:

sqrt(3^2 +1^2) = 3.2 electrons noise.

It is that quantization error that contributes to the degradation of extracting the faint signal.

If I understand you correctly then with a gain of 0.2e/DN then with 3 electron read noise plus the +/- 1 bit noise we get:

sqrt(3^2 + 0.2^2) = 3.007 electrons noise

The difference between 3.2e noise and 3.007e noise cannot explain the big difference between the rows for unity gain and 0.2e/DN gain in your table.

We've discussed this table previously and visible differences between those rows still makes no sense to me.

Mark

rnclark Veteran Member • Posts: 3,957
Re: Quantization

sharkmelley wrote:

rnclark wrote:

Regarding quantization noise, there is always +/- 1 bit. So at unity gain with 3 electron read noise you get:

sqrt(3^2 +1^2) = 3.2 electrons noise.

It is that quantization error that contributes to the degradation of extracting the faint signal.

If I understand you correctly then with a gain of 0.2e/DN then with 3 electron read noise plus the +/- 1 bit noise we get:

sqrt(3^2 + 0.2^2) = 3.007 electrons noise

The difference between 3.2e noise and 3.007e noise cannot explain the big difference between the rows for unity gain and 0.2e/DN gain in your table.

We've discussed this table previously and visible differences between those rows still makes no sense to me.

Mark

So you are saying you won't see a difference in images made at unity gain versus higher ISOs if stretched in post processing? Unity gain these days is typically around ISO 300 to 600 for pixel sizes in the 4 to 6 micron range. Is there any camera out there where you can stretch the unity-gain ISO image and see the SAME faint things as a higher-ISO image? Why not make all cameras max out at unity gain if that is all you need?

rnclark Veteran Member • Posts: 3,957
Re: This is odd... dss problem please help

ChrisLX200 wrote:

Jon, you have shown far more patience than I have in dealing with this BS, and you are clearly very knowledgeable in the subject, but if I may, let me direct readers to the website of StarTools (a fantastic post-processing program written by Ivo Jager - someone who knows what he's talking about and whose opinion I respect...):

http://forum.startools.org/viewtopic.php?f=4&t=912&sid=b9946e1573f99b8bcd3a273a068aaead

Thanks for your attention.

ChrisH

1) You two better be careful about libel.

2) I take pride in my website and when someone points out an error I correct it. Show me a real error and I will correct it.

3) Ivo has attacked me too but in each case I have shown where his errors are, which mostly have to do with what is the transfer function from linear camera data to tone curve data and the effects on processing. See Figure 7a-c here: http://www.clarkvision.com/articles/astrophotography.image.processing/

4) The startools website makes general claims and zero specifics. If anyone can point out a real error, I will correct it.

5) Interesting how you cite others and their expertise. You might want to check out mine, as I didn't just pick up a camera a few years ago: http://www.clarkvision.com/rnc/ In particular, you can check out my publications page, which lists hundreds of scientific publications on imaging.

6) It is pretty amusing how I have been attacked here for something as simple as EXPOSURE TIME, saying my exposures are too short. Yet I showed a comparison of my 9 minutes of exposure on M42 versus hours of exposure time with a larger aperture lens, showing similar faint dust! Sorry if this upsets the traditional way of doing things. Reasonable people would take a look at a new methodology, see if there is something to it, and if it works for them to produce a better product, so be it. In fact I have received many emails from people saying that they have tried my new methods and are getting better results. Also, you guys (including Rista) attacked saying it's only nine minutes of exposure time on the M42 image I pointed to, yet on the same page, http://www.clarkvision.com/articles/astrophotography.and.exposure/ I show much longer exposure times that bring out even fainter dust. And yes, the colors are natural--not the overblown reds of a modified camera.

7) In response to Ivo's attacks, I put together an image processing challenge last year:

http://www.clarkvision.com/articles/astrophotography.image.processing2/

Here is the dpreview thread: http://www.dpreview.com/forums/thread/3879668

Notice the same characters attacking!

The challenge is simple. I put out the raw files flats, dark, lights. Show how your method is superior. Ivo never did the challenge. And frankly, I have yet to see a traditional method that even comes close to new methods.

The point of all this is, neither I nor anyone else has denied that lots of exposure time helps make a better final astro image. Nor that dark skies are better than light polluted skies. In fact that is what I show on the page being attacked, http://www.clarkvision.com/articles/astrophotography.and.exposure/

Nor have I attacked traditional methods like you guys attack new methods.

The simple fact is that technology is changing. Digital cameras are improving in quantum efficiency while at the same time pushing noise lower and lower. And technology is changing such that on-sensor dark current suppression means no need for dark frames. That is a huge technical advantage that can improve faint object detection over traditional methods that use dark frames. I can lay out some of the math for you if you would like. This technology in the camera means one can use simpler methods to reach fainter objects with less work and less exposure time.
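One plausible sketch of the math being alluded to (my reading of the argument, with illustrative numbers): subtracting a master dark removes the fixed dark pattern, but adds the master's own random noise in quadrature - a penalty that on-sensor suppression avoids.

    import math

    dark_current_e = 4.0      # dark signal per light frame, e- (illustrative)
    read_noise_e = 3.0
    n_darks = 16              # frames averaged into the master dark

    shot_dark = math.sqrt(dark_current_e)            # dark-current shot noise
    light_noise = math.sqrt(read_noise_e ** 2 + shot_dark ** 2)
    master_noise = light_noise / math.sqrt(n_darks)  # residual noise in the master
    after_sub = math.sqrt(light_noise ** 2 + master_noise ** 2)
    print(f"light alone: {light_noise:.2f} e-; after dark subtraction: {after_sub:.2f} e-")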

I'm really sorry that you feel so threatened by these new methods, but making baseless accusations while ignoring new technology says more about you and Ivo than about me or new methods.

Roger

Jon Rista Contributing Member • Posts: 681
Re: This is odd... dss problem please help

Not threatened by 'new' methods at all, Roger. I am often accused of being too new-fangled myself, usually simply because of my extensive use of PixInsight and tools that were unheard of a few years ago. I've developed my own techniques for processing my images. It's not about that at all. It's a genuine concern I have that you are teaching and pushing inferior, flawed techniques that limit or damage image quality as if they were fundamentally superior to everything else. The fact that you cannot handle any dissent toward your methods is another problem. You don't generally discuss the possible merits of any alternative; you dismiss them out of hand, and won't hear any critiques of your own methods. You ALWAYS have something to counter any argument with, which often pushes discussions into the arcane limits of theory.

I've said this in the past, but I'll reiterate here. You've been doing astrophotography for a LONG time. For that longevity, your results are not very good. I don't really mean to be rude...it's an honest observation. You focus heavily on some things, such as a strict adherence to some notion of scientifically accurate color (if there is such a thing, and even if there were, you offer no allowance for artistic license), or pulling out barely extant Ha signal that is utterly buried in the noise of a 30-minute integration...and you critique everyone who takes on your challenge (every one of whom had cleaner results with better tonal fidelity and better color than your own) on minute flaws that no one else would see, yet you fail to recognize the flaws in your own images. For one, if you want to dig out a faint signal, integrate more. Even a couple of hours. Don't waste time smoothing out bits of noise here and there in an attempt to call it hydrogen alpha. You utterly DESTROY your stars. They have some of the most garish, heavily posterized halos, lacking any degree of tonal fidelity or color purity, that I've seen in astro images from a non-beginner, and unparalleled for someone who has been at this as long as you have.

I don't deny or refute everything you say (e.g. I fully agree with the 1/4-1/3 histogram guideline); however, there are specific things I feel I have to respond to, because what you advocate - either explicitly, or, as is more often the case, implicitly through the images you regularly share, which have short integration times, lots of noise, posterization, etc., yet which you hold up as excellent results with good IQ - is inferior. Your results are generally inferior, and there are specific reasons why. If you had been doing astrophotography for a couple of years, I'd say "Nice job! Keep it up!"...but you have been doing this far longer than I have (I've only been at it two years...not even, actually!)...and your results are rarely better than those of the beginners I encounter on a daily basis. They have an excuse...you don't really have an excuse. With your knowledge of this field, you should be able to produce record-shattering, world-class results...but you don't. I expect this kind of stuff from someone with your longevity in the field:

http://www.starpointing.com/ccd.html

THAT is world class astrophotography, long in the field (~2004?), using TRADITIONAL methods, from someone who knows what they're doing...from someone who knows how to integrate in order to REALLY pull out the faint stuff. His quality is second to none, his stars are exquisite, and he has revealed more faint detail around commonly imaged objects than anyone I've encountered. You don't think that is superior to your "new methods"? Seriously?

Show me some world class results, Roger, and maybe I'll change my mind about you and some of your ideas.

And libel? A threat? Seriously. Desperate measures.

sharkmelley Senior Member • Posts: 2,531
Re: Quantization

rnclark wrote:

sharkmelley wrote:

rnclark wrote:

Regarding quantization noise, there is always +/- 1 bit. So at unity gain with 3 electron read noise you get:

sqrt(3^2 +1^2) = 3.2 electrons noise.

It is that quantization error that contributes to the degradation of extracting the faint signal.

If I understand you correctly then with a gain of 0.2e/DN then with 3 electron read noise plus the +/- 1 bit noise we get:

sqrt(3^2 + 0.2^2) = 3.007 electrons noise

The difference between 3.2e noise and 3.007e noise cannot explain the big difference between the rows for unity gain and 0.2e/DN gain in your table.

We've discussed this table previously and visible differences between those rows still makes no sense to me.

Mark

So you are saying you won't see a difference in images made at unity gain versus higher ISOs if stretched in post processing? Unity gain these days is typically around ISO 300 to 600 for pixel sizes in the 4 to 6 micron range. Is there any camera out there where you can stretch the unity-gain ISO image and see the SAME faint things as a higher-ISO image? Why not make all cameras max out at unity gain if that is all you need?

Yes, that's exactly what I'm saying - at least, that is what my own statistical modelling suggests. Indeed, that same modelling suggests that with a so-called ISOless sensor, imaging at 2e-/DN is perfectly fine, but do first check a table of actual measured read noise against ISO for your own particular camera, to confirm that read noise does not jump up at a gain of 2e-/DN (like it does with most Canon cameras). But even if the read noise does jump at 2e-/DN, it is usually swamped by the background sky fog noise in any case, so the sky fog becomes the constraint, not the read noise.

As Roger frequently points out, FPN (fixed pattern noise) is the one fly in the ointment that may force you to use higher ISOs than the unity gain ISO, in order to reduce the effects of FPN. So you need to investigate the FPN of your own camera.

So why don't all cameras max out at unity gain? Probably because cameras are aimed at the consumer market, where consumers rightly want to obtain a high-ISO JPG or a high-ISO video without a ton of post-processing. Also because the apparent read noise still continues to drop very slightly at high ISOs.

Mark

rnclark Veteran Member • Posts: 3,957
Re: This is odd... dss problem please help

Jon Rista wrote:

Not threatened by 'new' methods at all, Roger. I am often accused of being too new-fangled myself, usually simply because of my extensive use of PixInsight and tools that were unheard of a few years ago. I've developed my own techniques for processing my images. It's not about that at all. It's a genuine concern I have that you are teaching and pushing inferior, flawed techniques that limit or damage image quality as if they were fundamentally superior to everything else. The fact that you cannot handle any dissent toward your methods is another problem. You don't generally discuss the possible merits of any alternative; you dismiss them out of hand, and won't hear any critiques of your own methods. You ALWAYS have something to counter any argument with, which often pushes discussions into the arcane limits of theory.

I've said this in the past, but I'll reiterate here. You've been doing astrophotography for a LONG time. For that longevity, your results are not very good. I don't really mean to be rude...it's an honest observation. You focus heavily on some things, such as a strict adherence to some notion of scientifically accurate color (if there is such a thing, and even if there were, you offer no allowance for artistic license), or pulling out barely extant Ha signal that is utterly buried in the noise of a 30 minute integration...and you critique everyone who takes on your challenge (every one of whom had cleaner results with better tonal fidelity and better color than your own) on minute flaws that no one else would see, yet you fail to recognize the flaws in your own images. For one, if you want to dig out a faint signal, integrate more. Even a couple of hours. Don't waste time smoothing out bits of noise here and there in an attempt to call it hydrogen alpha. You utterly DESTROY your stars. Yours are some of the most garish, heavily posterized star halos, lacking any degree of tonal fidelity or color purity, that I've seen in astro images from a non-beginner, and unparalleled for someone who has been at this as long as you have.

Wow Jon, let's look at reality. First, you seem to have a misconception about integration. Integration doesn't magically make fainter stuff brighter. One must subtract the background sky glow, then stretch the image to enhance contrast and compress the dynamic range so that we can see the contrast in the faint objects. That does not change regardless of how long you integrate.  Integration and stacking only reduce noise--they do not change the intensity profile or contrast. Whatever the processing method, showing the same contrast in the final image requires the same stretch, whatever the integration time.  More integration just reduces noise, allowing one to stretch more and bring out more detail (a quick simulation below shows this).
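A minimal simulation in Python/NumPy makes the point; the signal level and read noise below are assumed values, not data from any real camera. Averaging N subs leaves the mean signal untouched and shrinks the noise by roughly sqrt(N):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 5.0       # assumed faint-object level, arbitrary units
read_noise = 3.0   # assumed per-sub noise, same units

for n_subs in (1, 16, 100):
    # simulate n_subs exposures of 100,000 identical pixels, then average
    subs = signal + rng.normal(0.0, read_noise, size=(n_subs, 100_000))
    stack = subs.mean(axis=0)
    # the mean stays ~5.0; the std falls as ~ read_noise / sqrt(n_subs)
    print(n_subs, round(stack.mean(), 2), round(stack.std(), 3))
```

The faint object is no brighter after stacking; it is only cleaner, which is what lets you stretch harder.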

I don't pixel peep.  I produce images that I can make large prints of, and no one has complained about noise in my images, from delivered images to top international contests.  For example, check out the fall/winter issue of Nature's Best with the annual international contest.  I have 3 images out of the 101 chosen; two are astro.

Yes, I try for natural color. The reality is that most stars in the night sky are white to yellow, not the all-too-common scene where half or more of the stars are blue because of the post-processing we see today in the digital era. For those still reading, see http://www.clarkvision.com/articles/color-of-stars/ and Table 2.  So, no, I haven't destroyed color purity; in fact, I've maintained reasonably natural color and not radically changed it in post processing.

So let's do a comparison.  Here are your stars compared to mine, below.  It looks to me like the stars have similar diameters and the bright stars have similar flare.  In fact, if you look at the higher resolution image in the link, you will see the flare on my bright stars is smaller than on yours.

Here are results from the challenge comparison, traditional versus new processing.  I don't see bloated, posterized stars with the new processing methods.  I do see squashing of the reds in your traditional method, which included histogram equalization.

Example processing differences: traditional (left), new (right)

I don't deny or refute everything you say (e.g. I fully agree with the 1/4-1/3 histogram guideline). However, there are specific things I feel I have to respond to, because what you advocate is inferior, whether you advocate it explicitly or, as is more often the case, implicitly through the images you regularly share, which have short integration times, lots of noise, posterization, etc., and which you present as excellent results with good IQ. Your results are generally inferior, and there are specific reasons why.

First, I don't claim my results are perfect.  I posted the challenge to learn and improve.  The main thing I learned for my own images is to use star reduction algorithms, and I have been employing them since (which I did post about).  It also became clear that most people doing traditional methods are employing histogram equalization in their workflow, and that is the main thing destroying reds and making white and yellow stars blue (a toy demonstration is sketched below).  Your image on the left above is a typical example: we see a bluing of stars moving out of the Milky Way from left to right.  That is not natural, and the effect depends on star intensity, thus varying the color balance with star intensity.  You call that good post processing?
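To see why per-channel histogram equalization shifts star color, here is a minimal toy sketch in Python/NumPy. The star color distribution is invented purely for illustration; the point is only that equalizing R and B independently gives each channel its own transfer curve, which wipes out the warm R:B ratio:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical yellowish stars: red brightest, blue dimmest.
r = rng.gamma(2.0, 40.0, 100_000).clip(0, 255)
b = 0.80 * r   # warm color: R/B ratio of 1.25 at every intensity

def equalize(channel):
    """Classic histogram equalization via the empirical CDF."""
    hist, edges = np.histogram(channel, bins=256, range=(0, 255))
    cdf = hist.cumsum() / hist.sum()
    return np.interp(channel, edges[:-1], 255.0 * cdf)

r_eq, b_eq = equalize(r), equalize(b)

# Each channel is stretched toward a uniform histogram, so the
# equalized R and B values nearly coincide: warm stars turn neutral.
sel = (r > 50) & (r < 200)
print(round(float((r[sel] / b[sel]).mean()), 2))        # ~1.25 before
print(round(float((r_eq[sel] / b_eq[sel]).mean()), 2))  # ~1.0 after
```

A real workflow is more complicated than this toy, but the mechanism is the same: independent per-channel curves make the color balance depend on intensity.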

Another thing I learned is that people are employing post-processing noise reduction that makes an unnatural splotchy background.  Here is an example, again using our M42 images.  Notice the red-blue splotchiness in the right image.  Again, check the similarity of the stars, noting that my image was made with half the focal length (so a little less detail) and many times less exposure.

If you had been doing astrophotography for a couple of years, I'd say "Nice job! Keep it up!"...but you have been doing this far longer than I have (I've only been at it two years...not even that, actually!)...and your results are rarely better than those of the beginners I encounter on a daily basis. They have an excuse...you don't really have an excuse. With your knowledge of this field, you should be able to produce record-shattering, world-class results...but you don't. I expect this kind of stuff from someone with your longevity in the field:

http://www.starpointing.com/ccd.html

THAT is world class astrophotography, long in the field (~2004?), using TRADITIONAL methods, from someone who knows what they're doing...from someone who knows how to integrate in order to REALLY pull out the faint stuff. His quality is second to none, his stars are exquisite, and he has revealed more faint detail around commonly imaged objects than anyone I've encountered. You don't think that is superior to your "new methods"? Seriously?

Show me some world class results, Roger, and maybe I'll change my mind about you and some of your ideas.

You are certainly welcome to your opinion.  I never claimed my images were the best thing out there.  It is pretty amusing that you point to web sites with images made with much larger apertures and longer focal lengths.  Of course bigger apertures and more exposure time will produce a better result.

What I am trying to do is show that people can make beautiful images with very small apertures and with simple processing methods and have fun.  If I wanted to make deeper images, I could get out my 8-inch and 12.5-inch telescopes, the Losmandy mount, the autoguiders and computers, and spend hours packing the car, driving to remote sites, setting up and exposing for hours.  Instead I choose to have a very small, light system that I can carry in a backpack (includes cameras, lenses, tripod, and tracking system), set up and be imaging in a few minutes with no computers, no autoguiders, and then do simple post processing to produce some nice images.  And this includes taking the system on airplanes with normal luggage (backpack and a checked suitcase with all my normal clothes).

Ranting and raving does not help people move forward.  If you can identify specifics on how to improve processing for a particular set of data, I'm listening.  For example, I've identified the problem with histogram equalization in processing, and people taught me about star reduction.  That is the way to move forward, NOT ranting and raving.

I am having fun imaging.

Roger

OP Sir Canon Senior Member • Posts: 1,572
Re: This is odd... dss problem please help
1

Jon's images definitely have much more detail, while Roger's have more color. Jon: in PixInsight, extract a luminance image, then use the LRGBCombination tool, put the luminance in the luminance slot, and move the saturation slider to the left. That should boost your colors while keeping them natural (the idea is sketched below).
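For readers without PixInsight, here is a minimal sketch in Python/NumPy of the underlying idea: hold the luminance fixed and scale the chromatic part of each pixel. This is a generic approximation using Rec.709 luma weights, not PixInsight's actual LRGBCombination algorithm, and the sample pixel is invented for illustration:

```python
import numpy as np

def boost_saturation(rgb, factor=1.5):
    """rgb: float array shaped (..., 3) in [0, 1]; factor > 1 boosts color.
    Luminance is held fixed while the color offsets are scaled."""
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722])  # Rec.709 luma weights
    chroma = rgb - luma[..., None]                   # offset from neutral gray
    return np.clip(luma[..., None] + factor * chroma, 0.0, 1.0)

pixel = np.array([[0.40, 0.35, 0.30]])  # a slightly warm pixel (invented)
print(boost_saturation(pixel, 1.5))     # warmer color, same luminance
```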

I tend to overdo things

 Sir Canon's gear list:
Panasonic Lumix DMC-TS1 Canon EOS 350D Canon EOS 550D Canon EOS 700D Canon EF 50mm F1.8 II +6 more
Jon Rista Contributing Member • Posts: 681
Re: This is odd... dss problem please help
1

Roger, before I even get into this with you, I don't know what you are trying to pull, but you seem to have purposely butchered my image in the comparison with yours. Whatever you did to make that comparison look that bad is completely unethical. THIS is how my image really looks:

My real version, with MY processing!

Roger's image in comparison to a butchered version of my own!! WTF?

I am not even going to begin having a conversation with you about this if you are going to screw around with my image like that, and try to purposely present it in an unreal and unfair light. That is beyond screwed up!

And again, you are SIGNIFICANTLY downsampling the data here, which hides the true nature of the noise in your image (see the sketch below). I've shared my image at a 100% crop, so there is no additional smoothing of noise due to downsampling. I am not even going to touch your image to provide my own comparison after this, though...good god.
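For what it's worth, downsampling hides noise because block-averaging k x k pixels cuts uncorrelated per-pixel noise by roughly a factor of k. A minimal sketch in Python/NumPy, with a pure-noise frame standing in for real data:

```python
import numpy as np

rng = np.random.default_rng(2)
frame = rng.normal(0.0, 10.0, size=(1024, 1024))  # pure noise, sigma = 10

def downsample(img, k):
    """Average k x k blocks (a simple box downsample)."""
    h, w = img.shape
    return img[:h // k * k, :w // k * k].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

for k in (1, 2, 4):
    d = frame if k == 1 else downsample(frame, k)
    print(k, round(float(d.std()), 2))  # ~10, ~5, ~2.5: noise falls as 1/k
```

That is why a 100% crop is the honest way to compare noise between images.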

 Jon Rista's gear list:
Canon EOS 5D Mark III Sony a6000 Canon EF 50mm F1.4 USM Canon EF 16-35mm F2.8L II USM Canon EF 100-400mm f/4.5-5.6L IS USM +4 more