Achieving FF IQ from Nikon 1 system

I can understand that combining different-EV photos of the same scene increases the DR, which is pretty much HDR, but I don't quite understand how combining same-EV photos can have any effect at all. Each photo at the same EV effectively registers the same (or fairly similar) DR, so having a bunch of them shouldn't increase DR.
DR measures the number of photographic stops a sensor can capture between highlights and shadows, for which detail can be distinguished from noise. The shadow end of DR is limited by noise and stacking reduces noise, thus stacking increases DR.
Yes, I understand that, but it seems that the "increase" in DR due to noise is very, very limited, no?
 
I can understand that combining different-EV photos of the same scene increases the DR, which is pretty much HDR, but I don't quite understand how combining same-EV photos can have any effect at all. Each photo at the same EV effectively registers the same (or fairly similar) DR, so having a bunch of them shouldn't increase DR.
DR measures the number of photographic stops a sensor can capture between highlights and shadows, for which detail can be distinguished from noise. The shadow end of DR is limited by noise and stacking reduces noise, thus stacking increases DR.
Yes, I understand that, but it seems that the "increase" in DR due to noise is very, very limited, no?
Most of the DR improvements in sensors over the last 5 years have come from reductions in read noise rather than increases in saturation capacity. If you look at the samples I posted in this thread, the J4's stacked DR matches the D750's DR.
 
DR measures the number of photographic stops a sensor can capture between highlights and shadows, for which detail can be distinguished from noise. The shadow end of DR is limited by noise and stacking reduces noise, thus stacking increases DR.
Yes, I understand that, but it seems that the "increase" in DR due to noise is very, very limited, no?
No. Every time you double the number of frames stacked, DR goes up by 0.5 stop.

For example, if your camera has 7 stops of DR at ISO 6400, then the DR achieved by stacking increases in this manner:

2-frame stack: DR 7.5 stops

4-frame stack: DR 8 stops

8-frame stack: DR 8.5 stops

16-frame stack: DR 9 stops

32-frame stack: DR 9.5 stops

64-frame stack: DR 10 stops

128-frame stack: DR 10.5 stops

Also see my post below, "The truth about stacking and DR" with actual noise data for Horshack's stacking demonstration.

Not only does DR improve, but shot noise also improves. This is important since shot noise dominates at mid tones and often is the primary contributor to the visible noise in an image.
 
DR measures the number of photographic stops a sensor can capture between highlights and shadows, for which detail can be distinguished from noise. The shadow end of DR is limited by noise and stacking reduces noise, thus stacking increases DR.
Yes, I understand that, but it seems that the "increase" in DR due to noise is very, very limited, no?
No. Every time you double the number of frames stacked, DR goes up by 0.5 stop.

For example, if your camera has 7 stops of DR at ISO 6400, then the DR achieved by stacking increases in this manner:

2-frame stack: DR 7.5 stops

4-frame stack: DR 8 stops

8-frame stack: DR 8.5 stops

16-frame stack: DR 9 stops

32-frame stack: DR 9.5 stops

64-frame stack: DR 10 stops

128-frame stack: DR 10.5 stops

Also see my post below, "The truth about stacking and DR" with actual noise data for Horshack's stacking demonstration.

Not only does DR improve, but shot noise also improves. This is important since shot noise dominates at mid tones and often is the primary contributor to the visible noise in an image.

--
Source credit: Prov 2:6
- Marianne
OK! Thanks for the info; I'm not very familiar with stacking. I was always under the impression that stacking would improve noise, sure, but DR-wise it would just get the maximum of what the sensor can capture.

According to the 0.5-stop improvement for every doubling of the number of frames, following this logic the DR could actually reach an unbelievably huge figure; surely that can't be right? There must be a limit somewhere?

Don't get me wrong, not trying to dispute this theory. Like I said, I'm no expert on stacking. Just want to get a clearer picture, that's all.

Thanks!
 
According to the 0.5-stop improvement for every doubling of the number of frames, following this logic the DR could actually reach an unbelievably huge figure; surely that can't be right? There must be a limit somewhere?
The limit is determined by the number of pixels you have to work with in each frame (data set size) and may also be impacted by computing precision. Also at some point, you need to start compensating for fixed-pattern noise, which manifests as both offset and gain, and this adds quite a bit of effort.

In practical terms, the limit is most likely to be the amount of time you are willing to spend to chase after the diminishing returns. If you're imaging galaxies 10 billion light years away, spending 10 days with extremely expensive equipment may be worth it. Few of us can spend even a fraction of that time on one image. Life is short.

Don't get me wrong, not trying to dispute this theory. Like I said, I'm no expert on stacking. Just want to get a clearer picture, that's all.
Nice pun.
 
Sorry, isn't that a bit contradictory? If there is camera movement there would be blur, especially if you're using 1/30s, even if you're doing 60fps. If you're stacking a bunch of photos with camera movement, wouldn't that produce an unsharp image?
Moving the camera between exposures doesn't imply there is problematic blur in the images. Holding a camera as still as you can will produce the same amount of blur in a single 1/30 image or two consecutive 1/60 images. It's not hard to make that blur effectively invisible. We do it all the time. No camera is ever truly stable - especially handheld.
 
Sorry, isn't that a bit contradictory? If there is camera movement there would be blur, especially if you're using 1/30s, even if you're doing 60fps. If you're stacking a bunch of photos with camera movement, wouldn't that produce an unsharp image?
Moving the camera between exposures doesn't imply there is problematic blur in the images. Holding a camera as still as you can will produce the same amount of blur in a single 1/30 image or two consecutive 1/60 images. It's not hard to make that blur effectively invisible. We do it all the time. No camera is ever truly stable - especially handheld.
Of course! That's why, to my mind, stacking must be done with a sturdy tripod and a static subject, possibly a remote trigger as well. Although when I shoot with a tripod, I often use interval shooting to avoid the hassle of a remote; that way I could eliminate all physical contact to avoid camera shake, and with a DSLR I could also use the mirror-up function at the same time.

By making the blur invisible, do you mean by holding the camera steady, or by software?

So my next question would be: if I were to shoot 3 or 5 pictures at different EVs with bracketing and stack them together for a high-DR photo, then to achieve the same result by stacking pictures at the same EV, I would require a huge number of shots to even get close? If that's true, what's the benefit of shooting at the same EV? Or is that just a way to work around the 1 system having no exposure bracketing and to take advantage of its insane fps? So if I were to shoot with another system, like a DSLR with an exposure bracketing function, I would be better off shooting at different EVs?

Sorry for the amount of questions! :-P
 
Sorry, isn't that a bit contradictory? If there is camera movement there would be blur, especially if you're using 1/30s, even if you're doing 60fps. If you're stacking a bunch of photos with camera movement, wouldn't that produce an unsharp image?
Moving the camera between exposures doesn't imply there is problematic blur in the images. Holding a camera as still as you can will produce the same amount of blur in a single 1/30 image or two consecutive 1/60 images. It's not hard to make that blur effectively invisible. We do it all the time. No camera is ever truly stable - especially handheld.
Of course! That's why, to my mind, stacking must be done with a sturdy tripod and a static subject, possibly a remote trigger as well. Although when I shoot with a tripod, I often use interval shooting to avoid the hassle of a remote; that way I could eliminate all physical contact to avoid camera shake, and with a DSLR I could also use the mirror-up function at the same time.

By making the blur invisible, do you mean by holding the camera steady, or by software?

So my next question would be: if I were to shoot 3 or 5 pictures at different EVs with bracketing and stack them together for a high-DR photo, then to achieve the same result by stacking pictures at the same EV, I would require a huge number of shots to even get close? If that's true, what's the benefit of shooting at the same EV? Or is that just a way to work around the 1 system having no exposure bracketing and to take advantage of its insane fps? So if I were to shoot with another system, like a DSLR with an exposure bracketing function, I would be better off shooting at different EVs?

Sorry for the amount of questions! :-P
For general-purpose photography, HDR is the better solution - it yields the same noise reduction and DR improvements as stacking with only a fraction of the images required. I touched on this in the OP - the reason I'm using stacking on the J4 is simply because it doesn't have exposure bracketing. My ideal solution would be a 20-frame burst @ 60fps with the camera bracketing in 1EV increments, yielding a full 20EV capture. That would produce a very low noise high DR scene. You wouldn't even need to meter in many cases.
 
Sorry, isn't that a bit contradictory? If there is camera movement there would be blur, especially if you're using 1/30s, even if you're doing 60fps. If you're stacking a bunch of photos with camera movement, wouldn't that produce an unsharp image?
Moving the camera between exposures doesn't imply there is problematic blur in the images. Holding a camera as still as you can will produce the same amount of blur in a single 1/30 image or two consecutive 1/60 images. It's not hard to make that blur effectively invisible. We do it all the time. No camera is ever truly stable - especially handheld.
Of course! That's why, to my mind, stacking must be done with a sturdy tripod and a static subject, possibly a remote trigger as well. Although when I shoot with a tripod, I often use interval shooting to avoid the hassle of a remote; that way I could eliminate all physical contact to avoid camera shake, and with a DSLR I could also use the mirror-up function at the same time.

By making the blur invisible, do you mean by holding the camera steady, or by software?

So my next question would be: if I were to shoot 3 or 5 pictures at different EVs with bracketing and stack them together for a high-DR photo, then to achieve the same result by stacking pictures at the same EV, I would require a huge number of shots to even get close? If that's true, what's the benefit of shooting at the same EV? Or is that just a way to work around the 1 system having no exposure bracketing and to take advantage of its insane fps? So if I were to shoot with another system, like a DSLR with an exposure bracketing function, I would be better off shooting at different EVs?

Sorry for the amount of questions! :-P
For general-purpose photography, HDR is the better solution - it yields the same noise reduction and DR improvements as stacking with only a fraction of the images required. I touched on this in the OP - the reason I'm using stacking on the J4 is simply because it doesn't have exposure bracketing. My ideal solution would be a 20-frame burst @ 60fps with the camera bracketing in 1EV increments, yielding a full 20EV capture. That would produce a very low noise high DR scene. You wouldn't even need to meter in many cases.
Understood! Thanks!
 
According to the 0.5-stop improvement for every doubling of the number of frames, following this logic the DR could actually reach an unbelievably huge figure; surely that can't be right? There must be a limit somewhere?
The limit is determined by the number of pixels you have to work with in each frame (data set size) and may also be impacted by computing precision. Also at some point, you need to start compensating for fixed-pattern noise, which manifests as both offset and gain, and this adds quite a bit of effort.

In practical terms, the limit is most likely to be the amount of time you are willing to spend to chase after the diminishing returns. If you're imaging galaxies 10 billion light years away, spending 10 days with extremely expensive equipment may be worth it. Few of us can spend even a fraction of that time on one image. Life is short.
Don't get me wrong, not trying to dispute this theory. Like I said, I'm no expert on stacking. Just want to get a clearer picture, that's all.
Nice pun.
Indeed, a nice pun!

I tried some stacking last night, and the results came out worse than the original images. Totally disgusted, I threw everything away. My normal patience was sadly lacking, but I blame my pneumonia, which affects me far more than I'd like.

But I'm stubborn: I'll try again tonight!

Any suggestions? (I plan to shoot a view from our balcony, which mainly consists of a lot of low houses, some grass, a playground for kids, and street lamps, but mostly roofs, plus some distant hills and antennas for the radio and TV stations.)

I have access to one FX, two CX, one NEX, one m4/3, two APS-C DSLRs, and one compact APS-C, 12-25MP.

Planned to use the Nikon V2, but the Ricoh GR might be easier?!
 
For general-purpose photography, HDR is the better solution - it yields the same noise reduction and DR improvements as stacking with only a fraction of the images required. I touched on this in the OP - the reason I'm using stacking on the J4 is simply because it doesn't have exposure bracketing. My ideal solution would be a 20-frame burst @ 60fps with the camera bracketing in 1EV increments, yielding a full 20EV capture. That would produce a very low noise high DR scene. You wouldn't even need to meter in many cases.
I should clarify first that my earlier comments about HDR methods, in my post below, assumed that shutter speed would be kept the same or similar; in that case the limit to DR improvement depends on the shape of the camera's DR curve across its ISO range. In practice, it is often necessary to restrict shutter-speed range, but when a tripod and more time are available, HDR technique can be extended through use of long exposure times for the shadow areas.

How would one achieve a 20-EV bracket? That is a 1,000,000:1 exposure range. It is not likely that one would want to vary aperture much, as that would cause areas in the image to have different DOF. Neither would one want to use very high ISO settings, which would be contrary to the goal of high SNR. That puts most of the burden of exposure bracketing onto shutter speed, meaning that the highest exposures would require very long exposure times.

Even a much more modest 10-EV bracket would be difficult or impossible to do at a high frame rate. One could easily see the practical difficulties by postulating a typical example and working out the numbers. I'll leave that as an exercise for the reader.

Ultimately, without resorting to stacking, you are still limited by the FWC of the sensels, which establishes the upper limit on signal-to-shot-noise ratio. In the case of Nikon 1 sensors, that is not terribly high.

--
Source credit: Prov 2:6
- Marianne
 
For hand-held use I'd agree, a 10EV range is more reasonable. What challenges would you see in achieving that at a high frame rate? The J4 tops out at 1/16,000 so a 9EV bracket can be achieved down to 1/60, which understandably is the minimum shutter speed the camera lets you use for a 60fps burst.
The problem is: How often do you have a scene that is bright enough to start at 1/16000 shutter speed? What is a more typical shutter speed that one is likely to see, given the goal of ETTR?

Think especially about how bright typical shadow areas are, which one would want to expose long enough that they would use a significant portion of the sensor's capacity.

In the final analysis, whether one uses HDR methods, or stacking, or a combination, image quality is always down to how much total exposure time you allow. Clever approaches allow one to make best use of that time, but there is no way to avoid spending it.
 
Sorry, isn't that a bit contradictory? If there is camera movement there would be blur, especially if you're using 1/30s, even if you're doing 60fps. If you're stacking a bunch of photos with camera movement, wouldn't that produce an unsharp image?
Moving the camera between exposures doesn't imply there is problematic blur in the images. Holding a camera as still as you can will produce the same amount of blur in a single 1/30 image or two consecutive 1/60 images. It's not hard to make that blur effectively invisible. We do it all the time. No camera is ever truly stable - especially handheld.
Of course! That's why, to my mind, stacking must be done with a sturdy tripod and a static subject, possibly a remote trigger as well. Although when I shoot with a tripod, I often use interval shooting to avoid the hassle of a remote; that way I could eliminate all physical contact to avoid camera shake, and with a DSLR I could also use the mirror-up function at the same time.

By making the blur invisible, do you mean by holding the camera steady, or by software?

So my next question would be: if I were to shoot 3 or 5 pictures at different EVs with bracketing and stack them together for a high-DR photo, then to achieve the same result by stacking pictures at the same EV, I would require a huge number of shots to even get close? If that's true, what's the benefit of shooting at the same EV? Or is that just a way to work around the 1 system having no exposure bracketing and to take advantage of its insane fps? So if I were to shoot with another system, like a DSLR with an exposure bracketing function, I would be better off shooting at different EVs?

Sorry for the amount of questions! :-P
What I mean is the opposite.
  1. The amount of blur, due to subject motion or camera movement, is determined by the total exposure time of the frame or image stack.
  2. A little camera movement means an image stack contains more data than it would with no movement. This doesn't improve DR but does improve resolution.
  3. A stack with multiple images can serve both masters and increase DR while truly increasing resolution.
You can demonstrate the resolution increase in PS yourself. Take multiple images of the same subject handheld. Increase their size to 200% with the nearest-neighbor algorithm. Make a panorama with your upsized images.

Your resultant image will have both more resolution and lower noise than your originals. This will be preserved if you downsize to the original pixel dimensions afterwards.

It's what software like PhotoAcute is doing.
 
This image stacking would be fun to try. Does anyone know of free software and a tutorial for doing this? I don't have Photoshop.
 
