Thom Hogan prefers Oly jpeg over Canon and Nikon!

Started May 19, 2010 | Discussions thread
This thread is locked.
FrankyM
Senior Member • Posts: 2,164
Re: Case in point
In reply to Kermes, May 26, 2010

Kermes wrote:

FrankyM wrote:

Kermes wrote:

FrankyM wrote:

Are you saying then, that a 4/3 sensor exposed to a given light intensity for a given time would exhibit more noise than a 135FF sensor of the same technology and pixel density exposed to the same light intensity for the same time?

In colloquial terms, yes, that's exactly what I'm saying.

More precisely, the magnitude of the visible photon shot noise in an area is given by the square root of the number of photons captured in that area++. If you take the same size final image made from a 4/3 camera and a FF one, and compare equal size portions of the images, the patch of the FF image represents four times the sensor area compared with the one from the 4/3 image. If the sensors were illuminated at the same intensity, that patch represents four times the number of photons, and therefore will have half the noise.
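
To put rough numbers on that, here is a quick sketch in Python/NumPy; the illumination level and patch areas are made up purely for illustration, not measurements from any real sensor:

import numpy as np

rng = np.random.default_rng(0)
photons_per_unit_area = 1000            # assumed illumination, arbitrary units
trials = 100_000

# Equal-sized patches of the final image: the FF patch covers ~4x the sensor area.
n_43 = rng.poisson(photons_per_unit_area * 1, size=trials)   # 4/3 patch
n_ff = rng.poisson(photons_per_unit_area * 4, size=trials)   # FF patch, 4x the area

print(n_43.mean() / n_43.std())   # ~31.6, i.e. sqrt(1000)
print(n_ff.mean() / n_ff.std())   # ~63.2, i.e. sqrt(4000): twice the SNR, half the visible noise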

By "Intensity" we mean lumens/sq m. (or lux). Therefore all parts of the sensor are illuminated equally.

Indeed we do. If you look at what I said above, that is the assumption made.

It may be what you wanted to say but it's not what you wrote.

Let's do a thought experiment. Imagine 2 sensors of the same technology - A with 1 photo-site and B, with four. Let's say they are illuminated by a light source which outputs 10 photons/second. Now (ignoring noise due to ancillary circuits) if we open the shutter for 10 seconds, the single photo-site of A will, on average, have collected 100 photons, while those of B will have collected, on average, 25 photons each. Thus when we now read these values we can see that the SNR of A (100/SQRT(100) = 10) is twice that of the photo-sites of B (SNR=25/SQRT(25)=5).
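
A quick numerical check of that thought experiment (a Python/NumPy sketch; the photon counts follow the numbers above, and the Poisson draws stand in for shot noise):

import numpy as np

rng = np.random.default_rng(1)
trials = 100_000

a = rng.poisson(100, size=trials)        # sensor A: one photo-site, mean 100 photons
b = rng.poisson(25, size=(trials, 4))    # sensor B: four photo-sites, mean 25 photons each

print(a.mean() / a.std())                # ~10
print(b[:, 0].mean() / b[:, 0].std())    # ~5 per photo-site, as stated above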

Nothing wrong with all that

Why? Because the light intensity is different in each case

The light intensity is the same, but the area of the pixels in B is 1/4 of those in A. As you reminded me, by intensity we mean lumens m^-2.

So, to get an SNR for B the same as that of A, we must expose 4 times longer or increase the output of the light source.

But you're comparing an SNR in A over four times the area as in B, so in B you're looking at a quarter of the image while in A you're looking at the whole image. Let's continue with your thought experiment and see what happens when we compare all four B pixels together with A. To find the combined SNR we add the signals, which gives us 4*25 = 100. Noise adds in quadrature, so to find the noise we take sqrt(4*5^2) = sqrt(100) = 10, so the SNR = 10, just as it was in A.
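
Continuing the sketch from the thought experiment above (same assumed photon counts), adding the four B photo-sites together before measuring the noise gives:

import numpy as np

rng = np.random.default_rng(2)
b = rng.poisson(25, size=(100_000, 4))   # four B photo-sites, mean 25 photons each
b_sum = b.sum(axis=1)                    # combined signal: mean 4*25 = 100

print(b_sum.mean() / b_sum.std())        # ~10: noise adds in quadrature, SNR matches A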

Sorry, but this is your mistake. We read pixels, each one individually.

In reality we do this in cameras via the aperture. So for a given f number, the real aperture area required for B will be 4 times that for A.

I think you'll find this doesn't work. What you're saying is that cameras with more pixels need higher exposure. That simply is not the case.

Yes it is. Work out the physics. In effect, what it's saying is if you increase the number of pixels for a given sensor area, the number of photons collected at each photosite will be reduced in comparison and thus the SNR is reduced. In other words, increasing pixel density increases the effect of noise.

The term 'sensor' is a convenience to refer to a set of photo-sites and associated circuitry - you don't read a sensor - you read the individual photo-sites, and thus it is the SNR of the individual photo-sites that is important. This is why you can see more noise in dark areas of an image than in lighter ones.

There are two reasons you see more noise in shadow areas, neither of which is the one you give. The first is simply that in dark areas fewer photons are collected, so the shot noise signal-to-noise ratio, which as I said goes as the square root of the number of photons collected, is lower than it is in bright areas. The second is that with the lower shot noise, the read noise becomes more significant, and therefore visible. One of the things that sensor manufacturers do to increase low-light performance is to minimise read noise.
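
A rough illustration of those two effects (Python/NumPy sketch; the 3-electron read noise and the photon counts are assumptions, not figures for any particular camera):

import numpy as np

rng = np.random.default_rng(3)
trials = 100_000

for mean_photons in (400, 16):                   # a bright patch vs a deep shadow
    shot = rng.poisson(mean_photons, size=trials)
    read = rng.normal(0.0, 3.0, size=trials)     # assumed read noise, ~3 e- RMS
    signal = shot + read
    print(mean_photons, signal.mean() / signal.std())

# Bright patch: SNR ~19.8 vs ~20 from shot noise alone (read noise barely visible).
# Shadow: SNR ~3.2 vs ~4 from shot noise alone (read noise now significant).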

You haven't read what I wrote (or perhaps you simply haven't understood it).

++ Strictly speaking the noise is given by the square root of the number of photons, and therefore so is the signal to noise ratio, which is what photographers understand as visible noise.

Well, yes, the more photons you collect the higher the SNR, and it's a non-linear relationship.

As I said, it's a square root relationship.

This bit of your comment warranted a clarification, but I didn't want to break up the flow of discussion above.

(and as you can see the image produced by B is 2 stops underexposed if A is correct).

There is no such thing as underexposure for a sensor, which to a close approximation just counts the number of photons. The photon density (in lux, as you point out) is the same for both sensors in your example, so they are both receiving the same exposure.

Of course a sensor cannot be underexposed. You don't read very carefully what I write. I said "...and as you can see the image produced by B is 2 stops underexposed if A is correct."

In terms of that exposure relative to the pixel saturation capacity, since quartering the pixel area will generally quarter that capacity, and also quarter the number of photons collected, there is no change in the exposure relative to saturation capacity.
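
As a back-of-envelope check (the full-well and photon figures here are assumed for illustration only):

full_well_a = 40000                  # assumed saturation capacity of the large photo-site
full_well_b = full_well_a / 4        # quartering the area roughly quarters the capacity

photons_a = 10000                    # assumed photons collected by A at some exposure
photons_b = photons_a / 4            # each B photo-site collects a quarter as many

print(photons_a / full_well_a, photons_b / full_well_b)   # 0.25 and 0.25: same fraction of saturation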

I think you need to sit down and work out the physics. I tried to describe it in simple terms but perhaps I have oversimplified (I'm just a dumb engineer, I'm afraid).

FrankyM's gear list: Olympus OM-D E-M5