DuncanDovovan

Lives in Northern Germany, Germany
Works in the Software Industry
Joined on Jan 17, 2012
About me:

Would love to do more:
- Macro
- 360x180 Panoramas
- Astro-photography
- Portraits
- Participate in Challenges on DPReview or Worth1000

Comments

Total: 49, showing: 1 – 20
On Sources of noise part two: Electronic Noise article (234 comments in total)
In reply to:

nikanth: In some cameras, if it makes sense to shoot at ISO 100 and push exposure up during post-processing, shouldn't the camera do the same to implement higher ISOs?

I think that you are being selective when applying a tone curve: you use your head to decide that certain parts should be amplified less than others. That is something a camera cannot do.

Direct link | Posted on May 18, 2015 at 11:21 UTC
On Sources of noise part two: Electronic Noise article (234 comments in total)
In reply to:

Discovery Of Light: How would an ISO-invariant vs. a variant sensor react in a body-cap-on test? It would be interesting to see the pushed results.

Hi Rich. Isn't that the technique they use in astrophotography to subtract read noise from long exposures?

Direct link | Posted on May 14, 2015 at 22:20 UTC
On Sources of noise part two: Electronic Noise article (234 comments in total)
In reply to:

DuncanDovovan: Richard / Rishi: What I would really love to see is a shot noise simulation. Rishi looks to be the guy who can calculate the number of photons that a pixel gets.

From the number of photons caught, couldn't you calculate the S/N ratio?

And from the S/N ratio, couldn't you visualise/simulate what the shot noise would look like in a 100% crop (pixel level) and in a crop that shows the scene at the same scale (more pixels scaled down to the same magnification)?

Wouldn't it be possible to visualise the shot noise, other upstream noise and downstream noise separately that way?

I would love to see (as picture crops, not tables) and compare the shot noise, upstream and downstream noise components under several conditions: higher vs lower ISO, shorter vs longer exposures, overexposure and compensating, underexposure and compensating vs normal exposure.

And all of that for 5 different tonal values from dark to bright and in different situations from cloudy to bright daylight.

@dosdan: Nice, but not really accurate. The Wikipedia example shows the same noise in bright and dark areas; I would expect the bright areas to have less noise.

Also, it does not compare a 1-stop overexposure toned down against a normal exposure.

I'm also not sure which shutter speeds would theoretically belong to each picture.

But like I said: what I take home from the articles is that there is a very complex equation at work. Sometimes underexposing via ISO is better (to get more dynamic range); sometimes overexposing via slower shutter speeds is better (to get less noise and more detail in the shadows); sometimes normal exposure is better (to capture the entire dynamic range and prevent blown highlights); and sometimes even underexposing via faster shutter speeds is better, if you are in a very high dynamic range situation, cannot use multiple-exposure HDR and need to prevent highlight clipping.

I need to take more pictures and compare. =;-)

Direct link | Posted on May 14, 2015 at 20:55 UTC
On Sources of noise part two: Electronic Noise article (234 comments in total)
In reply to:

DuncanDovovan: Richard / Rishi: What I would really love to see is a shot noise simulation. Rishi looks to be the guy who can calculate the number of photons that a pixel gets.

From the number of photons caught, couldn't you calculate the S/N ratio?

And from the S/N ratio, couldn't you visualise/simulate what the shot noise would look like in a 100% crop (pixel level) and in a crop that shows the scene at the same scale (more pixels scaled down to the same magnification)?

Wouldn't it be possible to visualise the shot noise, other upstream noise and downstream noise separately that way?

I would love to see (as picture crops, not tables) and compare the shot noise, upstream and downstream noise components under several conditions: higher vs lower ISO, shorter vs longer exposures, overexposure and compensating, underexposure and compensating vs normal exposure.

And all of that for 5 different tonal values from dark to bright and in different situations from cloudy to bright daylight.

Side note: That would only be for theoretical purposes. I think the best thing people can do is to learn their camera and experiment based on the information you have provided in the articles so far.

Direct link | Posted on May 14, 2015 at 20:41 UTC
On Sources of noise part two: Electronic Noise article (234 comments in total)
In reply to:

P Johnson: I've got no problem with noise whatsoever at the lowest ISO, while most amateur cameras have to resort to noise reduction and smearing. I liked that effect until I realised there was something wrong and upgraded. Especially fine detailed vegetation etc. must be unsmeared to look natural, while architecture is not so critical. It should, however, be possible to cool down the sensor to eliminate some of the electronic noise, as in astronomy, and this may be a selling feature as batteries get better, which they do all the time.

I think the problem in astrophotography is the extremely long exposure times, because there are really not a lot of photons available. In that situation, thermal noise has a good chance of influencing the image, because the number of photons you want to capture is so small that the thermal noise starts to compete with the signal.

Not so much an issue in normal photography I'd say.

Direct link | Posted on May 14, 2015 at 12:29 UTC
On Sources of noise part two: Electronic Noise article (234 comments in total)

Richard / Rishi: What I would really love to see is a shot noise simulation. Rishi looks to be the guy who can calculate the number of photons that a pixel gets.

From the number of photons caught, couldn't you calculate the S/N ratio?

And from the S/N ratio, couldn't you visualise/simulate what the shot noise would look like in a 100% crop (pixel level) and in a crop that shows the scene at the same scale (more pixels scaled down to the same magnification)?

Wouldn't it be possible to visualise the shot noise, other upstream noise and downstream noise separately that way?

I would love to see (as picture crops, not tables) and compare the shot noise, upstream and downstream noise components under several conditions: higher vs lower ISO, shorter vs longer exposures, overexposure and compensating, underexposure and compensating vs normal exposure.

And all of that for 5 different tonal values from dark to bright and in different situations from cloudy to bright daylight.
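For what it's worth, the pixel-level part of this simulation is easy to sketch in a few lines of Python. This is a toy model under my own assumptions: a uniform patch, a hypothetical mean of 1000 photons per pixel, and pure Poisson shot noise with no read noise added.

```python
import numpy as np

rng = np.random.default_rng(42)

def snr(patch):
    """Signal-to-noise ratio: mean signal over standard deviation."""
    return patch.mean() / patch.std()

# Uniform mid-gray patch: each pixel's photon count is Poisson-
# distributed around an assumed mean of 1000 photons.
patch = rng.poisson(1000, size=(64, 64)).astype(float)
print(f"100% crop SNR:  {snr(patch):.1f}")   # close to sqrt(1000) = 31.6

# View "at the same scale": average 2x2 pixel blocks. Combining four
# pixels roughly doubles the SNR (sqrt(4) = 2).
down = patch.reshape(32, 2, 32, 2).mean(axis=(1, 3))
print(f"downscaled SNR: {snr(down):.1f}")
```

Rendering `patch` and `down` as grayscale images would give exactly the kind of side-by-side crop comparison described above.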

Direct link | Posted on May 14, 2015 at 12:24 UTC as 33rd comment | 4 replies
On Sources of noise part two: Electronic Noise article (234 comments in total)

So in terms of real life photography, could we say this?

1) When the subject must remain low-key, use longer exposures and lower ISO, and position the histogram to the right, to make sure you have as little noise in the dark parts as possible.

2) If the dynamic range fills the histogram, and in high dynamic range situations, expose normally at low ISO to prevent blown-out highlights, or use HDR through multiple exposures.

3) In dark situations, when you cannot slow your shutter speed any further because motion blur would become an issue and the available light prompts you to use high ISO values, you can underexpose the (RAW) photo by dialling the ISO down multiple stops and adjust in post-processing, in order to gain dynamic range / keep highlight detail that would be lost at high ISO values.

4) When you are a real noise ninja, larger sensors with bigger, more light-sensitive lenses give you the opportunity to scale down your photos and thus reduce the noise levels that way.

OK?

Direct link | Posted on May 14, 2015 at 12:11 UTC as 34th comment | 9 replies
In reply to:

User4487106776: Thanks for the article, but I'm not so sure the theory - PHOTONS CAUSING VISIBLE SHOT NOISE? - is in fact relevant... or maybe I'm missing something here:

I'm sure that if you did some calculations, you'd be able to work out approximately how many photons actually arrive at a pixel at 1/125 and f/8. It would be interesting to see somebody do this calculation.
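A rough back-of-the-envelope version of that calculation can be sketched in Python. All the constants here are textbook approximations I'm assuming, not measured values: ~100,000 lux of direct sun, an 18% gray Lambertian subject, ~4,100 photons/µm² per lux-second at 555 nm, and a 5.9 µm pixel typical of a 24 MP full-frame sensor.

```python
import math

illuminance = 100_000          # lux, direct sunlight on the subject
reflectance = 0.18             # mid-gray subject
luminance = illuminance * reflectance / math.pi   # cd/m^2, Lambertian

f_number = 8
shutter = 1 / 125              # seconds

# Sensor-plane illuminance (lux), ignoring lens transmission losses
sensor_lux = math.pi * luminance / (4 * f_number ** 2)
exposure = sensor_lux * shutter            # lux-seconds

# ~4.1e3 photons per um^2 per lux-second at 555 nm (green light)
photons_per_um2 = exposure * 4.1e3

pixel_pitch = 5.9              # um, typical 24 MP full-frame pixel
photons = photons_per_um2 * pixel_pitch ** 2

print(f"~{photons:,.0f} photons reach the pixel")
print(f"shot-noise SNR ~ {math.sqrt(photons):.0f}:1")
```

With these assumptions the answer lands in the tens of thousands of photons per pixel, before QE and color-filter losses are taken into account.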

Direct link | Posted on May 13, 2015 at 15:27 UTC
In reply to:

User4487106776: Thanks for the article, but I'm not so sure the theory - PHOTONS CAUSING VISIBLE SHOT NOISE? - is in fact relevant... or maybe I'm missing something here:

You'd be surprised. Pixels are very, very tiny. Shutter speeds of 1/1000 of a second are really very short exposures. Dark areas really do not emit lots of photons. And pixels do need lots of photons to generate an electric signal.

Regarding your question about the speed of light: you cannot have x photons per second coming off your subject towards your camera and then, if you could change their speed, suddenly have more or fewer than that coming through your lens. Where would the rest come from, or go?

If their speed were high, there would be a lot of distance between them; if their speed were low, they would move closer together. But the same number would cross a given line per second.

Suppose you shine light through a tube of water. Light moves more slowly through water, but at the end of the tube the photons speed up again in air. Does this mean the light flickers? No.

So the speed of light has no impact here; only how many photons arrive matters.

Direct link | Posted on May 13, 2015 at 15:21 UTC
In reply to:

Lhermine: Very interesting article! Thanks very much, DPR.

Like many people here, I was wondering whether the number of photons can be so low that shot noise becomes significant.

Based on the so-called "sunny 16 rule", I've found that for a proper exposure on a 24 MPix FF sensor, around 50,000 photons should hit each pixel. This leads to shot noise with an amplitude of 1.4%.

This noise amplitude should be too small to be noticed. However, in darker conditions you may have to increase the ISO speed, say to 6400, which means you have fewer photons. The corresponding shot noise amplitude will then be around 3.5%. We reach up to 14% for ISO 100,000!

And that's the bad news: ISO 1,000,000 will never be as good as ISO 100, whatever the quality of the sensor, because of the shot noise.

So bad...

(for those who are interested in the computation, please mail me - you may point out some mistakes ;-) )

I did the same calculation and it was for direct sunlight!

I think the number of photons involved in catching a normal white area lit by the sun is MUCH lower than that.

And then calculate 8 stops less light for the shadows. Then shot noise really becomes an issue - certainly if you raise the shutter speed, for example to 1/8000 at sporting events, and with the current megapixel race, which has fortunately stalled for the moment.
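The scaling in the quoted comment follows directly from the 1/√N rule for shot noise. Here is a sketch, taking the quoted ~50,000 photons per pixel at ISO 100 as the assumed starting point (the exact percentages depend entirely on that figure):

```python
import math

def shot_noise_pct(base_photons, iso, base_iso=100):
    """Relative shot-noise amplitude (in %), assuming the photon
    count scales inversely with ISO: noise = 100 / sqrt(N)."""
    n = base_photons * base_iso / iso
    return 100 / math.sqrt(n)

for iso in (100, 6400, 100_000):
    pct = shot_noise_pct(50_000, iso)
    print(f"ISO {iso:>7}: shot noise ~{pct:4.1f}%")
```

Under these assumptions ISO 6400 comes out near 3.6% and ISO 100,000 near 14%, matching the quoted figures, although the same rule gives ~0.4% rather than 1.4% at ISO 100.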

Direct link | Posted on May 13, 2015 at 13:33 UTC
On Let me try to address that... article (65 comments in total)
In reply to:

DuncanDovovan: Richard, I still believe your conclusions in this article are not entirely right.

Because you changed 2 things for the D810: The sensor AND the lens.

Thought experiment: let's take the APS-C lens and sensor as the standard. If there existed a light-neutral optical component that enlarged the projected image so that the APS-C crop covered the FF sensor, the light intensity per square mm would DROP, because you'd have to spread the same amount of light over a larger area, correct?

Assuming the number of pixels per square mm is the same for the APS-C and FF sensors, the FF sensor pixels would now get LESS light per pixel - which would mean a worse S/N ratio.

BUT: You would now have more pixels covering the same area of the photo, because the image is enlarged. So you can now combine the pixels (downscaling the image) to improve the S/N ratio again.

I conclude that by only enlarging the sensor, the result would be the same S/N ratio - not better, as concluded in your article?

I think our misunderstanding is also due to the definition we use for shot noise. I think you are using a definition based on the entire sensor area and the reduction of the final result.

I am thinking of shot noise more in terms of how many photons a pixel can capture in a given amount of time.

In your definition, the larger sensor captures more photons across the whole sensor and therefore has less shot noise.

In my definition, the pixels still get the same amount of light, so the shot noise is the same. BUT I can downscale the FF image in post-processing to reduce the visibility of the shot noise.

Direct link | Posted on May 13, 2015 at 00:41 UTC
On Let me try to address that... article (65 comments in total)
In reply to:

DuncanDovovan: Richard, I still believe your conclusions in this article are not entirely right.

Because you changed 2 things for the D810: The sensor AND the lens.

Thought experiment: let's take the APS-C lens and sensor as the standard. If there existed a light-neutral optical component that enlarged the projected image so that the APS-C crop covered the FF sensor, the light intensity per square mm would DROP, because you'd have to spread the same amount of light over a larger area, correct?

Assuming the number of pixels per square mm is the same for the APS-C and FF sensors, the FF sensor pixels would now get LESS light per pixel - which would mean a worse S/N ratio.

BUT: You would now have more pixels covering the same area of the photo, because the image is enlarged. So you can now combine the pixels (downscaling the image) to improve the S/N ratio again.

I conclude that by only enlarging the sensor, the result would be the same S/N ratio - not better, as concluded in your article?

The fact that in your situation the FF performs better is, I think, because you used a lens that is in fact capable of enlarging the image while maintaining the light intensity?

If there existed a light-neutral optical component to reduce the projected image for the FF lens you used, the light intensity would INCREASE, technically giving you the opportunity to get the same S/N ratio on the smaller sensor, right?

When using the FF sensor with the same lens you used for the APS-C image, you could also move closer to the subject so that the area that was projected onto the APS-C sensor now covers the FF sensor. But then, by moving closer, you increase the amount of light your lens captures, which allows the same projection with the same light intensity over a larger area. In this case too it is not the sensor but the change of distance that actually causes the better S/N ratio.

Direct link | Posted on May 8, 2015 at 18:58 UTC
On Let me try to address that... article (65 comments in total)

Richard, I still believe your conclusions in this article are not entirely right.

Because you changed 2 things for the D810: The sensor AND the lens.

Thought experiment: let's take the APS-C lens and sensor as the standard. If there existed a light-neutral optical component that enlarged the projected image so that the APS-C crop covered the FF sensor, the light intensity per square mm would DROP, because you'd have to spread the same amount of light over a larger area, correct?

Assuming the number of pixels per square mm is the same for the APS-C and FF sensors, the FF sensor pixels would now get LESS light per pixel - which would mean a worse S/N ratio.

BUT: You would now have more pixels covering the same area of the photo, because the image is enlarged. So you can now combine the pixels (downscaling the image) to improve the S/N ratio again.

I conclude that by only enlarging the sensor, the result would be the same S/N ratio - not better, as concluded in your article?
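The thought experiment can be put into numbers with a quick Monte-Carlo sketch. This is my own toy model: pure Poisson shot noise, an assumed 4,000 photons per pixel in the APS-C case, and an enlargement that spreads the same light over 4x the pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

snr = lambda a: a.mean() / a.std()

# APS-C case: each pixel catches ~4000 photons on average.
small = rng.poisson(4000, size=(64, 64)).astype(float)

# Enlarged projection onto a 4x-area FF sensor with the same pixel
# pitch: 4x the pixels, each catching 1/4 of the photons.
large = rng.poisson(1000, size=(128, 128)).astype(float)

# Combine 2x2 blocks (sum, so photon counts stay comparable) to view
# the FF image at the original output scale.
binned = large.reshape(64, 2, 64, 2).sum(axis=(1, 3))

print(f"APS-C SNR:          {snr(small):.1f}")
print(f"FF binned-down SNR: {snr(binned):.1f}")   # essentially the same
```

Both come out near √4000 ≈ 63, which is the conclusion above: enlarging the projection alone neither helps nor hurts once you compare at the same output scale.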

Direct link | Posted on May 8, 2015 at 18:51 UTC as 11th comment | 5 replies
On Let me try to address that... article (65 comments in total)
In reply to:

DuncanDovovan: You mention pixel size is the same for an FZ sensor and the APS-C sensor.
But I assume the FZ sensor has more pixels than the APS-C sensor, right?

Otherwise a better S/N ratio would not be possible?

I know, but theoretically: suppose the same-size pixels, with more non-light-sensitive room in between.

In that case, the S/N ratio would be the same as that of a smaller sensor with the same number of pixels of the same size, just packed closer together, correct?

Direct link | Posted on May 5, 2015 at 22:04 UTC

Hey Richard, I know this is going to be a tough one: But in your next article, could you try to do the following:

Shot noise is really impossible to make visible in isolation, because it needs to be captured by the sensor first, which adds all kinds of other noise factors.

Would it be possible for you to calculate the number of photons caught for a certain exposure and some tonal values, then calculate the S/N ratio for those tonal values, and subsequently SIMULATE that S/N ratio in a 1:1 simulated crop of each tonal value?

So basically, take the tonal value in Photoshop (without noise) and add noise based on the calculated shot noise?

That would be so great to see in your next article.
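A minimal sketch of that idea in Python, under assumed numbers: a hypothetical full-well of 50,000 photons mapping to pure white, and a simple 2.2 gamma between the 8-bit tonal value and linear light.

```python
import numpy as np

rng = np.random.default_rng(7)

def shot_noise_patch(tone, full_well=50_000, size=128):
    """Simulate photon counts for a uniform 8-bit tonal value.

    Undo an assumed 2.2 gamma to get linear light, scale by the
    assumed full-well capacity, then draw Poisson photon counts.
    """
    linear = (tone / 255) ** 2.2
    return rng.poisson(linear * full_well, size=(size, size))

rel_noise = lambda p: 100 * p.std() / p.mean()   # percent

dark, bright = shot_noise_patch(40), shot_noise_patch(200)
print(f"tone  40: relative shot noise ~{rel_noise(dark):.1f}%")
print(f"tone 200: relative shot noise ~{rel_noise(bright):.1f}%")
```

Mapping the photon counts back through the gamma curve and rendering them as grayscale crops would give the simulated patches asked for above; in relative terms the dark tone comes out several times noisier.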

Direct link | Posted on May 5, 2015 at 09:46 UTC as 29th comment | 1 reply
On Let me try to address that... article (65 comments in total)
In reply to:

Dan142: Am I missing something? The sun and other stars, as well as artificial light sources, emit a finite number of photons per unit time. Therefore, a finite number of photons can impact a detector at any point in time. The number of photons available for detection will therefore be directly proportional to the detector's surface area. Other properties of the detector will determine what percentage of those photons are actually detected, and then with how much fidelity they can be represented visually.

Why do you think you are missing something? This is what the article says, right?

Basically, the larger a sensor becomes - assuming it does not contain wasted space on its surface - it either has more pixels of the same size than a smaller sensor, or it contains the same number of pixels, but larger ones.

In both cases the larger sensor captures more photons for a scene, either because its "buckets" are larger or because it can combine the information from multiple pixels and gather more photons that way, as shown in this article.

Direct link | Posted on May 5, 2015 at 09:36 UTC
On Let me try to address that... article (65 comments in total)
In reply to:

DuncanDovovan: You mention pixel size is the same for an FZ sensor and the APS-C sensor.
But I assume the FZ sensor has more pixels than the APS-C sensor, right?

Otherwise a better S/N ratio would not be possible?

I understand. Just a stupid question for me to verify I understood you completely:

Only in the theoretical situation of the larger sensor having the same number of pixels as a smaller sensor, with those pixels the same size as on the smaller sensor - only in that case would the S/N ratio be the same as that of the smaller sensor, correct?

Direct link | Posted on May 5, 2015 at 09:30 UTC
On Let me try to address that... article (65 comments in total)

You mention pixel size is the same for an FZ sensor and the APS-C sensor.
But I assume the FZ sensor has more pixels than the APS-C sensor, right?

Otherwise a better S/N ratio would not be possible?

Direct link | Posted on May 4, 2015 at 14:16 UTC as 18th comment | 6 replies
In reply to:

babart: Thanks for an interesting article that certainly states what I think is (or is becoming?) the kernel of the questions about noise. I.e., sensors being relatively equal, the larger will give fewer noise problems, and exposing to the right (anathema to those of us who shot transparency film for decades) is the better way to expose digital sensors.

However, I often encounter areas of endeavor where a sizable segment of the image I want to capture is at the low end of the spectrum, and yet I don't want to clip what highlights there are. Think architectural photos of large spaces where lights would be fairly useless. How does the use of HDR affect the overall noise in those areas with less light? Yes, one is merging several images, one or two of which expose the darker areas of the scene at a brighter, less noisy, exposure value, but how does merging it with the darker frames affect the noise ratio in the darker portions of the image?

Following the article, I think it might make sense to overexpose the entire range to make sure the important highlights of your darkest frame are only just not clipping.

After that it becomes hocus-pocus, because I have no clue what algorithms HDR merge programs use to compile the end result.

Assuming the pictures have a dynamic range of 8 stops, the middle exposure covers 3 stops brighter and 3 stops darker than the mid tonal value of 5. If the HDR software were aware of the theory in this article, it should use information from the exposures "to the right" - for example, tonal values 4, 5, 6 and 7 from the middle exposure (1 to the left, the middle, and 2 to the right).

Assuming HDR software does not "use to the right" to secure the best parts of the pictures, perhaps it would make sense to darken all exposures before merging, to force the HDR algorithm to "use to the right". The end result would then be brightened again.

Would be interesting to hear Richard's input here...

Direct link | Posted on May 3, 2015 at 17:38 UTC

Hello Richard. Thanks for being so active in answering questions. I learnt a lot by reading the discussions.

In your 2nd part, could you also dive into the aspect of energy spilling from a pixel to surrounding pixels?

I noticed in a few tests with overexposure that the edge contrast between very light and very dark parts of overexposed photos becomes a bit worse. It looks like light spills over to surrounding pixels?

Direct link | Posted on May 3, 2015 at 12:11 UTC as 31st comment