# f-number equivalent between m43 and FF

Started Mar 25, 2014 | Discussions
Re: Still incorrect.

Great Bustard wrote:

dwalby wrote:

OK, let me reword my statement a bit. A pixel contains an analog signal value prior to A/D conversion. Every analog signal has some sort of noise associated with it, and the A/D conversion itself introduces read noise. So each pixel, when sampled, has a signal level, and some uncertainty due to the noise.

The amount of light falling on the pixel during the exposure is subject to uncertainty, due to the nature of light itself. This is known as photon noise. Then the sensor adds additional noise, which is called read noise.

Yes, I know that, it just wasn't important in the context of the original discussion regarding SNR being attributable to a single pixel.  I was only discussing the basic concept of noise being present, not the nature of the noise itself.

Bob is saying that a single pixel does not have noise, because he is computing the noise by taking the average value of a patch of pixels that should have uniform values and computing the standard deviation.

Well, Bob could have said that, but he seemed happier with one-sentence answers with no details.  Computing the stdev of multiple pixels does not invalidate the assumption that an individual pixel can have an SNR value associated with it, it just changes the SNR computation.

However, we could compute the noise of a single pixel by instead taking several identical exposures, recording the values of a pixel, computing the mean, and then the standard deviation.

exactly, which is what I said earlier about having a 1 pixel sensor, then exposing it multiple times at the same exposure setting and seeing how the pixel value varied with each exposure.
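The repeated-exposure idea can be sketched numerically. This is a toy simulation (numpy assumed; the photon count and read-noise figure are invented for illustration): each exposure draws a Poisson photon count (shot noise) and adds Gaussian read noise, and the SNR is the mean over the standard deviation of the repeated readings.

```python
import numpy as np

def single_pixel_snr(mean_photons, read_noise=2.0, n_exposures=100_000, seed=0):
    """Estimate one pixel's SNR from many identical exposures: Poisson
    shot noise plus Gaussian read noise; SNR = mean / std of the reads."""
    rng = np.random.default_rng(seed)
    reads = rng.poisson(mean_photons, n_exposures) + rng.normal(0.0, read_noise, n_exposures)
    return reads.mean() / reads.std()

# With 10,000 photons per exposure, shot noise is ~100, so SNR comes out near 100.
print(single_pixel_snr(10_000))
```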

Regardless, it is *crucial* to understand that noise comes from the light itself (photon noise), and that this noise is an inherent property of the light, having nothing, whatsoever, to do with technology. The sensor and supporting hardware then adds in additional noise (read noise).

again, already understood.  I believe the noise distribution is Poisson when a very small number of photons are counted, and becomes Gaussian when a larger number of photons are counted.  And as such, if the signal voltage created by the photons is S, the noise voltage (standard deviation) is the square root of S.

And, in the example above regarding averaging a patch of pixels, the sum of N signal voltage values has a mean of NS, while its standard deviation grows only as the square root of N.
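Both square-root claims are easy to check with a quick Poisson simulation (a toy sketch with numpy; the photon count and patch size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
S, N = 10_000, 64                            # mean photons per pixel; pixels summed
trials = rng.poisson(S, size=(50_000, N))    # many trials of an N-pixel patch

# One pixel: shot noise (standard deviation) is close to sqrt(S) = 100.
print(trials[:, 0].std())

# Sum of N pixels: mean is N*S, standard deviation is sqrt(N*S),
# i.e. the noise grows only as sqrt(N) while the summed signal grows as N.
sums = trials.sum(axis=1)
print(sums.mean() / (N * S), sums.std() / np.sqrt(N * S))   # both ratios near 1
```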

Except for the portions of the photo where the light is very low, the photon noise is the dominant source of noise. Thus, for deep shadows and/or very high ISO photography, the read noise becomes an important factor, but until that point, it is the photon noise that is dominant.

agreed, never challenged this.

Either way, it is not per-pixel noise that matters, but image noise, and this is a function of the total amount of light falling on the sensor (and sensor efficiency), not the amount of light falling on a single pixel.

Bob stated that a single pixel doesn't have an SNR; I disagreed with that claim.  Everything you said above is true, and I agree with it; it just wasn't part of the original claim I disputed.

I'm not disputing whether or not per-pixel SNR is the best metric for evaluation of a sensor's performance, I was simply stating that it is possible to measure and compare the SNR of a single pixel on any given sensor.  Perhaps if Bob had taken the time to explain his own comments a little more thoroughly you wouldn't have had to do it for him.

Re: Still incorrect.

dwalby wrote:

Bob stated that a single pixel doesn't have an SNR; I disagreed with that claim. Everything you said above is true, and I agree with it; it just wasn't part of the original claim I disputed.

I'm not disputing whether or not per-pixel SNR is the best metric for evaluation of a sensor's performance, I was simply stating that it is possible to measure and compare the SNR of a single pixel on any given sensor. Perhaps if Bob had taken the time to explain his own comments a little more thoroughly you wouldn't have had to do it for him.

Ah.  I just wanted to say that comparing noise pixel for pixel is a meaningless comparison unless the photos are made from the same number of pixels, and that noise comes from both the light itself as well as the sensor and supporting hardware.

However, as you agree with all that, we're good.

Cheers!

Re: That "explanation" is so wrong...

What was it Wolfgang Pauli said? "It is not only not right, it is not even wrong."

J.

Re: f-number equivalent between m43 and FF

dwalby wrote:

bobn2 wrote:

dwalby wrote:

bobn2 wrote:

dwalby wrote:

Allan Olesen wrote:

D Cox wrote:

The total area of the sensor is irrelevant. If you put the same lens on various sizes of sensor, the noise level in the part of the image that they all record will be identical. (Assuming they are the same generation of technology.)

Let us compare the Nikon D800 (FF, 36MP) and the Nikon D7000 (APS-C, 16MP).

Same pixel size. Same Sony sensor technology. Same noise per pixel if you expose equally (same f-stop, shutter speed and scene light).

But not same magnification of the final output.

If you take the same photo with these two cameras and enlarge to the same output size, each pixel from the D7000 will be magnified 1.5x more in each direction. Consequently, the final output will appear more noisy.

And you can't really say I am wrong about that because I am actually just repeating your own claims from another post in this thread: The same pixel will look more noisy the more you magnify it.

So yes, it is all about sensor area.

I agree its all about sensor area, but I'm not sure I agree with your example.

In the case of two sensors of different size, but with the same size pixel, you have the same SNR for each pixel, we all agree on that.

Where I don't necessarily agree with you is that the simple act of printing the pixel larger or smaller will have any effect on its SNR.

A pixel has no SNR (in the context of a single photo)

a pixel has a signal level, as measured by the A/D converter, and an uncertainty due to noise.

Wrong. A pixel has a signal level as measured by the A/D converter. That is all, it gives a single value.


Bob

OK, let me reword my statement a bit. A pixel contains an analog signal value prior to A/D conversion. Every analog signal has some sort of noise associated with it, and the A/D conversion itself introduces read noise. So each pixel, when sampled, has a signal level, and some uncertainty due to the noise.

Even when your statement is reworded it remains wrong. There is no noise in a single observation. You just read the pixel and get a value. There only becomes noise over a number of observations, and in the context of a single photo, each pixel is read only once. So the noise becomes apparent when you view a group of pixels which you believe should have the same value, and they don't. There is no 'uncertainty' or separable noise value in a single pixel, no noise component in a single reading of a pixel. The point about this, in the end, is that talking about 'per pixel' noise is a nonsense - what matters is the variation over an area, and in the context of photography, the variation over the same area of the same size output image. More particularly, noise is bandwidth dependent, and if you're comparing noise you need to normalise bandwidth (as is typically done in electronics, where noise is given 'per Hz').
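Bob's patch-based view of noise can be illustrated with a toy numpy sketch (the "photo" is synthetic; the photon count is invented): in a single frame each pixel yields exactly one value, and noise only shows up as the spatial variation across a patch that should be uniform.

```python
import numpy as np

def patch_snr(image, y, x, size=32):
    """Estimate noise the way Bob describes: over a patch of pixels that
    should all have the same value, the patch mean is the signal and the
    spatial standard deviation across the patch is the noise."""
    patch = image[y:y + size, x:x + size].astype(float)
    return patch.mean() / patch.std()

# A synthetic single 'photo' of a uniform grey wall: each pixel is read
# once and gives just one number; only the patch reveals the noise.
rng = np.random.default_rng(2)
frame = rng.poisson(2_500, size=(256, 256))
print(patch_snr(frame, 0, 0))   # close to sqrt(2500) = 50
```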

Bob
Re: Still incorrect.

dwalby wrote:

Bob is saying that a single pixel does not have noise, because he is computing the noise by taking the average value of a patch of pixels that should have uniform values and computing the standard deviation.

Well, Bob could have said that, but he seemed happier with one-sentence answers with no details.

Bob is rude enough to make the assumption in the first place that people who make claims will want to back them up.

Computing the stdev of multiple pixels does not invalidate the assumption that an individual pixel can have an SNR value associated with it, it just changes the SNR computation.

How do you compute the SNR of a single observation of a single pixel?

However, we could compute the noise of a single pixel by instead taking several identical exposures, recording the values of a pixel, computing the mean, and then the standard deviation.

exactly, which is what I said earlier about having a 1 pixel sensor, then exposing it multiple times at the same exposure setting and seeing how the pixel value varied with each exposure.

Bob's statement explicitly excluded that, because it said that a single pixel in a single photo has no SNR. Saying there is variability over multiple observations in time is irrelevant with respect to a photo.

Either way, it is not per-pixel noise that matters, but image noise, and this is a function of the total amount of light falling on the sensor (and sensor efficiency), not the amount of light falling on a single pixel.

Bob stated that a single pixel doesn't have an SNR; I disagreed with that claim. Everything you said above is true, and I agree with it; it just wasn't part of the original claim I disputed.

Wrongly.

I'm not disputing whether or not per-pixel SNR is the best metric for evaluation of a sensor's performance, I was simply stating that it is possible to measure and compare the SNR of a single pixel on any given sensor. Perhaps if Bob had taken the time to explain his own comments a little more thoroughly you wouldn't have had to do it for him.

You didn't do that 'for' Bob, you did that 'for' yourself, and still ended up being wrong. As Bob said, there is no SNR for a single pixel in a single photo (and when talking about what one photo looks like, considering what the next one might look like is a bit of a nonsense).

Bob
Re: That "explanation" is so wrong...

Great Bustard wrote:

xpatUSA wrote:

Great Bustard wrote:

xpatUSA wrote:

Great Bustard wrote:

Take a photo of a scene from a particular position, with a particular focal length, f-ratio, and shutter speed. Then take another photo of the same scene from the same position with twice the focal length and the same f-ratio and shutter speed. Crop the first photo to the same framing as the second and display the photos at the same size. Which is more noisy?

For example, take a photo of a scene at 50mm f/5.6 1/200 ISO 1600 and another photo of the same scene from the same position at 100mm f/5.6 1/200 ISO 1600. Crop the 50mm photo to the same framing as the 100mm photo. Display both photos at the same size.

SD9, LO res, ISO 400, 70mm vs 35mm (zoom marks):

Noise display by ImageJ

Any conclusions to be drawn from this? GB? DC?

Can you go into more detail as to what the posted photos are? Thanks!

They are crops from pictures of my wellhouse, the which pictures were taken according to your suggestion quoted above.

Do the crops show equal portions of the scene, or are they 100% crops?

The shots were taken per your suggestion. By virtue of being in FastStone's comparison view, they were made equal portions by zooming the left side pane in, to be approximately the same size as the right. FastStone's comparison view allows zooming in separate panes independently when pushing the <ctrl> key.

I took a photo of the scene at 35mm f/8 1/350 ISO 400 and another photo of the same scene from the same position at 70mm f/8 1/350 ISO 400. Displayed both photos at the same subject magnification in FastStone Viewer's comparison window (the smaller image zoomed in but not smoothed).

When you say "same subject magnification", does that mean 100% view, or, for example, each crop showing 1% of the scene?

The 'crops' are virtual because of FastStone comparison settings used. I should have used a better term, perhaps. It's like when you view a Mighty Merrill file on your screen at 100% view, you see a 'cropped' view. If you look at the box above each histogram you can see the amount of zoom applied. From that, you can determine what you need, hopefully.

Took a screen capture ('Untitled-2') and opened it in ImageJ and took a histogram for each dark area, then screen-captured again for posting. Best viewed full size, of course. The bell curves should tell the story, do we think. The standard deviation at left being 40% of the mean, at right being 32% of the mean, what does that tell us?

I have a nasty feeling that the zooming has negated my test and that ImageJ should not have been presented with a screen capture. I will do it with actual real crops instead and report in later.

[edit] it will be new images; I cleaned my CF card, duh [/edit]

I can answer that when I have answers to the two questions above.

Cheers,
Ted
Re: f-number equivalent between m43 and FF

HumanTarget wrote:

D Cox wrote:

The final SNR depends (assuming technology of the same generation) on the amount of light falling on each pixel. This determines the uncertainty of the measurement of light by that individual photodetector. The more photons, the more certain the measurement.

Actually, shot noise increases with the number of photons, so the more photons the more noise, even though the SNR is increased. And a lot of cameras have reduced read noise at higher ISO settings, in which case they get better measurements at lower light levels (though at a reduced SNR).

The uncertainty is what you call the SNR. I find it better to think of a single measurement (one exposure, one pixel) as having uncertainty rather than noise. If you call it noise, you get confused with the image noise across an array of pixels, which results from the random uncertainty of each pixel measurement.

The signal in a photographic image is the differences between pixels. The uncertainty (error bars) for each pixel gives a random, or noise, component when comparing the pixels.

Your definition of signal would suggest that a noisy image has a stronger signal (more difference between pixels).

A noisy image might have a higher amplitude, but the whole problem of noise is that it is inextricably combined with the image signal. There is no a priori way to know that the sky doesn't really have a lot of random variations in it. We call this "noise" because we know from other experience that the sky doesn't "really" look dotty.

The total area of the sensor is irrelevant. If you put the same lens on various sizes of sensor, the noise level in the part of the image that they all record will be identical. (Assuming they are the same generation of technology.)

But why would you crop the image of the larger sensor? I know of nobody who always crops their images to a lowest common denominator. That totally defeats the purpose of a larger sensor. More information gives you a better image.

You would crop it only for the purpose of comparing noise levels between sensors. It is essential to keep the variables to a minimum, and adding varying amounts of enlargement to the mix confuses the comparison.

The question of why you might see more noise in a bigger print from the same area of a sensor is interesting, but I don't think it is a property of the sensor. Pixel size is.

Re: f-number equivalent between m43 and FF

bobn2 wrote:

D Cox wrote:

goshigoo wrote:

Just read the thread about the new 15 f/1.7; I see people saying f/1.4 is f/1.4 regardless of format

I do agree with this statement on the meaning of f-number, but it is confusing since it is the format + f-number which determines the amount of light captured by a picture, which accounts for the final Signal-to-noise ratio of a picture

The final SNR depends (assuming technology of the same generation) on the amount of light falling on each pixel.

This is not remotely right.

The shot noise varies with the square root of the number of photons. So the more photons a pixel (photodetector) captures, the more certain you can be that the output number is correct. The error bar in the measurement becomes absolutely larger but proportionately smaller.

If you take an array of numbers making up a digital image, the uncertainty of each number gives a random variation in the values, which combines with the varying levels that you want to record. This makes the recorded image noisy. (Spatial noise, not temporal as in a sound recording.)

Re: f-number equivalent between m43 and FF

Allan Olesen wrote:

D Cox wrote:

The total area of the sensor is irrelevant. If you put the same lens on various sizes of sensor, the noise level in the part of the image that they all record will be identical. (Assuming they are the same generation of technology.)

Let us compare the Nikon D800 (FF, 36MP) and the Nikon D7000 (APS-C, 16MP).

Same pixel size. Same Sony sensor technology. Same noise per pixel if you expose equally (same f-stop, shutter speed and scene light).

But not same magnification of the final output.

The degree of enlargement is a completely separate effect. It may be a way to make noise (and lens faults) easier to see, but it doesn't affect the noise level itself.

If you take the same photo with these two cameras and enlarge to the same output size, each pixel from the D7000 will be magnified 1.5x more in each direction. Consequently, the final output will appear more noisy.

Will it measure to have greater noise amplitude (for instance in a sky area) ? I think you are talking about appearance and I am talking about measurement.

And you can't really say I am wrong about that because I am actually just repeating your own claims from another post in this thread: The same pixel will look more noisy the more you magnify it.

So yes, it is all about sensor area.

Re: f-number equivalent between m43 and FF

Great Bustard wrote:

How does a pixel know how big the sensor is ?

It doesn't. It's also not relevant, except inasmuch as the pixel count affects sensor efficiency. For example, if we had two sensors, one with 12 MP and the other with 24 MP with the same size and same QE, then if the read noise per pixel were the same for each sensor, the 24 MP sensor would be more noisy (although this would only be apparent in very low light).

Exactly. And it would be more noisy because the pixels are smaller.

Re: f-number equivalent between m43 and FF

D Cox wrote:

Great Bustard wrote:

How does a pixel know how big the sensor is ?

It doesn't. It's also not relevant, except inasmuch as the pixel count affects sensor efficiency. For example, if we had two sensors, one with 12 MP and the other with 24 MP with the same size and same QE, then if the read noise per pixel were the same for each sensor, the 24 MP sensor would be more noisy (although this would only be apparent in very low light).

Exactly. And it would be more noisy because the pixels are smaller.

Wrong. It would be more noisy because it had more pixels. So far as read noise is concerned, smaller pixels tend to be quieter, but in general not enough quieter to fully compensate for the increased number. Compare, for instance, the D800 and D610, which perform almost identically pixel for pixel: the D610 is marginally quieter in the shadows, though in truth the difference is small enough to be swamped by exposure management and processing differences. For instance, looking at DPR's new tool:
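The "more pixels, more read noise" point can be sketched with a toy model (the per-pixel read-noise figure is invented, and equal, independent read noise per pixel is assumed): the k pixels that map onto a fixed-size region of the output add their read noises in quadrature, so image-level read noise scales with the square root of the pixel count.

```python
import math

def relative_image_read_noise(megapixels, read_noise_e):
    """Toy model: with equal, independent read noise per pixel, image-level
    read noise at a fixed output size scales as sqrt(pixel count)."""
    return read_noise_e * math.sqrt(megapixels)

# Same sensor size and per-pixel read noise, 12 MP vs 24 MP:
r12 = relative_image_read_noise(12, 3.0)
r24 = relative_image_read_noise(24, 3.0)
print(r24 / r12)   # sqrt(2): the 24 MP image carries ~41% more read noise
```

This is only visible in very low light, where read noise dominates photon noise, which matches the caveat above.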

The D800 looks the quietest in the shadows (and overall) - we know it ain't so, but there is enough tolerance in DPR's method to swamp the real differences over a 3:1 difference in pixel area.

Bob
Re: That "explanation" is so wrong...

xpatUSA wrote:

Great Bustard wrote:

xpatUSA wrote:

Great Bustard wrote:

xpatUSA wrote:

Great Bustard wrote:

Take a photo of a scene from a particular position, with a particular focal length, f-ratio, and shutter speed. Then take another photo of the same scene from the same position with twice the focal length and the same f-ratio and shutter speed. Crop the first photo to the same framing as the second and display the photos at the same size. Which is more noisy?

For example, take a photo of a scene at 50mm f/5.6 1/200 ISO 1600 and another photo of the same scene from the same position at 100mm f/5.6 1/200 ISO 1600. Crop the 50mm photo to the same framing as the 100mm photo. Display both photos at the same size.

Can you go into more detail as to what the posted photos are? Thanks!

They are crops from pictures of my wellhouse, the which pictures were taken according to your suggestion quoted above.

Do the crops show equal portions of the scene, or are they 100% crops?

The shots were taken per your suggestion. By virtue of being in FastStone's comparison view, they were made equal portions by zooming the left side pane in, to be approximately the same size as the right. FastStone's comparison view allows zooming in separate panes independently when pushing the <ctrl> key.

I took a photo of the scene at 35mm f/8 1/350 ISO 400 and another photo of the same scene from the same position at 70mm f/8 1/350 ISO 400. Displayed both photos at the same subject magnification in FastStone Viewer's comparison window (the smaller image zoomed in but not smoothed).

When you say "same subject magnification", does that mean 100% view, or, for example, each crop showing 1% of the scene?

The 'crops' are virtual because of FastStone comparison settings used. I should have used a better term, perhaps. It's like when you view a Mighty Merrill file on your screen at 100% view, you see a 'cropped' view. If you look at the box above each histogram you can see the amount of zoom applied. From that, you can determine what you need, hopefully.

Took a screen capture ('Untitled-2') and opened it in ImageJ and took a histogram for each dark area, then screen-captured again for posting. Best viewed full size, of course. The bell curves should tell the story, do we think. The standard deviation at left being 40% of the mean, at right being 32% of the mean, what does that tell us?

I have a nasty feeling that the zooming has negated my test and that ImageJ should not have been presented with a screen capture. I will do it with actual real crops instead and report in later.

[edit] it will be new images; I cleaned my CF card, duh [/edit]

I can answer that when I have answers to the two questions above.

OK, I used your method same as before but there was a problem or two. A normal scene shows a change of metering when zooming. So I shot a letter-size piece of white paper instead. Then, because it's a Foveon, each pixel is 3 photo-sites so I exported only the green channel from RawDigger to ImageJ.

Image size: 1134x756px (SD9 LO res RAW)

Histogram area selected: 670x552px

35mm zoom: mean 67.185, s.dev 8.551, min 32, max 104

70mm zoom: mean 77.853, s.dev 16.98, min 32, max 140

Numbers above are 16-bit grayscale levels.

Curves were classically bell-shaped. Looks to me like the 70mm focal length image is about twice as noisy, just as Great Bustard predicted.

Cheers,
Ted
Re: That "explanation" is so wrong...

xpatUSA wrote:

OK, I used your method same as before but there was a problem or two. A normal scene shows a change of metering when zooming. So I shot a letter-size piece of white paper instead. Then, because it's a Foveon, each pixel is 3 photo-sites so I exported only the green channel from RawDigger to ImageJ.

Image size: 1134x756px (SD9 LO res RAW)

Histogram area selected: 670x552px

35mm zoom: mean 67.185, s.dev 8.551, min 32, max 104

70mm zoom: mean 77.853, s.dev 16.98, min 32, max 140

Numbers above are 16-bit grayscale levels.

Curves were classically bell-shaped.

Oops, the focal lengths were typed back-asswards, should be:

70mm zoom: mean 67.185, s.dev 8.551, min 32, max 104

35mm zoom: mean 77.853, s.dev 16.98, min 32, max 140

. . . making the wider-angle shot the noisier.

Confirmed by a test I just did at HI res (avoids binning obfuscation):

70mm: mean 49.7, s.dev 11.9

36mm: mean 48.4, s.dev 24.4

These last figures straight from RawDigger, whole image, no exporting involved. Numbers are un-scaled raw data values. Much mo' simple!

Cheers,
Ted
Re: That "explanation" is so wrong...

xpatUSA wrote:

xpatUSA wrote:

OK, I used your method same as before but there was a problem or two. A normal scene shows a change of metering when zooming. So I shot a letter-size piece of white paper instead. Then, because it's a Foveon, each pixel is 3 photo-sites so I exported only the green channel from RawDigger to ImageJ.

Image size: 1134x756px (SD9 LO res RAW)

Histogram area selected: 670x552px

35mm zoom: mean 67.185, s.dev 8.551, min 32, max 104

70mm zoom: mean 77.853, s.dev 16.98, min 32, max 140

Numbers above are 16-bit grayscale levels.

Curves were classically bell-shaped.

Oops, the focal lengths were typed back-asswards, should be:

70mm zoom: mean 67.185, s.dev 8.551, min 32, max 104

35mm zoom: mean 77.853, s.dev 16.98, min 32, max 140

. . . making the wider-angle shot the noisier.

Confirmed by a test I just did at HI res (avoids binning obfuscation):

70mm: mean 49.7, s.dev 11.9

36mm: mean 48.4, s.dev 24.4

These last figures straight from RawDigger, whole image, no exporting involved. Numbers are un-scaled raw data values. Much mo' simple!


Cheers,
Ted

I'm still not clear what it is you're doing. What you need to be doing is:

Take an image with a 70mm lens.

Using the same exposure (leave the shutter speed and f-number the same - will be close enough - and have the same subject with the same lighting) take an image with a 35mm lens. Now crop the 35mm image to the same framing as the 70mm image - the two images, in terms of subject content, should now be next to identical.

Take both images, the 70mm and the cropped 35mm, and resample them to the same pixel size, as if for presentation - for web presentation something like 1024x1536, for print obviously larger. Resample using something that you would really use for high quality reproduction - say bicubic or Lanczos. (Using nearest neighbour aliases the noise - and everything else.)

Now read the resultant images into ImageJ and compare the noise in like patches.

That process actually simulates the process of viewing the images the same size (in this case, web size)
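A minimal sketch of that procedure, assuming Pillow and numpy are available (the filenames and crop box in the commented usage are hypothetical):

```python
import numpy as np
from PIL import Image

def to_common_size(img, crop_box=None, out_size=(1536, 1024)):
    """Optionally crop to matching framing, then resample to a common
    output pixel size with Lanczos (nearest neighbour would alias the
    noise, as Bob notes)."""
    if crop_box is not None:
        img = img.crop(crop_box)
    return img.resize(out_size, Image.LANCZOS)

def patch_std(img, box):
    """Noise estimate: standard deviation over a patch that should be uniform."""
    return np.asarray(img.crop(box), dtype=float).std()

# Hypothetical filenames -- the 35mm frame is cropped to the 70mm framing
# before both are brought to the same output size, then like patches compared:
# tele = to_common_size(Image.open("shot_70mm.tif").convert("L"))
# wide = to_common_size(Image.open("shot_35mm.tif").convert("L"),
#                       crop_box=(left, top, right, bottom))
# print(patch_std(tele, (0, 0, 64, 64)), patch_std(wide, (0, 0, 64, 64)))
```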

Bob
Re: That "explanation" is so wrong...

xpatUSA wrote:

xpatUSA wrote:

OK, I used your method same as before but there was a problem or two. A normal scene shows a change of metering when zooming. So I shot a letter-size piece of white paper instead. Then, because it's a Foveon, each pixel is 3 photo-sites so I exported only the green channel from RawDigger to ImageJ.

Image size: 1134x756px (SD9 LO res RAW)

Histogram area selected: 670x552px

35mm zoom: mean 67.185, s.dev 8.551, min 32, max 104

70mm zoom: mean 77.853, s.dev 16.98, min 32, max 140

Numbers above are 16-bit grayscale levels.

Curves were classically bell-shaped.

Oops, the focal lengths were typed back-asswards, should be:

70mm zoom: mean 67.185, s.dev 8.551, min 32, max 104

35mm zoom: mean 77.853, s.dev 16.98, min 32, max 140

. . . making the wider-angle shot the noisier.

Confirmed by a test I just did at HI res (avoids binning obfuscation):

70mm: mean 49.7, s.dev 11.9

36mm: mean 48.4, s.dev 24.4

These last figures straight from RawDigger, whole image, no exporting involved. Numbers are un-scaled raw data values. Much mo' simple!


Cheers,
Ted

I think you need to downsample using a good algorithm (say Lanczos) and do it in linear space instead of gamma 2.2. Gamma 2.2 is responsible for the lower mean value at the "70mm" sample.

http://www.4p8.com/eric.brasseur/gamma.html

http://www.imagemagick.org/Usage/resize/#resize_colorspace

Now that I think of it, this gamma error is in part responsible for the much lower noise we perceive after downsampling: in the dark parts, like in your sample, it damps the pixel differences much more than a correct workflow would.
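Ilias's point can be demonstrated with a toy 2x box downsample (numpy sketch; the gamma value of 2.2 and the dark-patch statistics are illustrative, not measured): averaging gamma-encoded values instead of linear-light values darkens the result, which is consistent with the lower mean at the dark patch.

```python
import numpy as np

def half_box(a):
    """2x box downsample by averaging 2x2 blocks."""
    h, w = a.shape
    return a[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def downsample_linear(img, gamma=2.2):
    """Correct workflow: decode the gamma, average in linear light, re-encode."""
    return 255.0 * half_box((img / 255.0) ** gamma) ** (1.0 / gamma)

def downsample_gamma(img):
    """Incorrect workflow: average the gamma-encoded values directly."""
    return half_box(img)

# A noisy dark patch like the one in Ted's sample: the gamma-space
# average comes out darker than the linear-light one.
rng = np.random.default_rng(3)
dark = rng.normal(40.0, 15.0, (256, 256)).clip(0.0, 255.0)
print(downsample_linear(dark).mean(), downsample_gamma(dark).mean())
```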

Ilias
Re: That "explanation" is so wrong...

bobn2 wrote:

xpatUSA wrote:

xpatUSA wrote:

OK, I used your method same as before but there was a problem or two. A normal scene shows a change of metering when zooming. So I shot a letter-size piece of white paper instead. Then, because it's a Foveon, each pixel is 3 photo-sites so I exported only the green channel from RawDigger to ImageJ.

Image size: 1134x756px (SD9 LO res RAW)

Histogram area selected: 670x552px

35mm zoom: mean 67.185, s.dev 8.551, min 32, max 104

70mm zoom: mean 77.853, s.dev 16.98, min 32, max 140

Numbers above are 16-bit grayscale levels.

Curves were classically bell-shaped.

Oops, the focal lengths were typed back-asswards, should be:

70mm zoom: mean 67.185, s.dev 8.551, min 32, max 104

35mm zoom: mean 77.853, s.dev 16.98, min 32, max 140

. . . making the wider-angle shot the noisier.

Confirmed by a test I just did at HI res (avoids binning obfuscation):

70mm: mean 49.7, s.dev 11.9

36mm: mean 48.4, s.dev 24.4

These last figures straight from RawDigger, whole image, no exporting involved. Numbers are un-scaled raw data values. Much mo' simple!


Cheers,
Ted

I'm still not clear what it is you're doing.

May I make quite clear that this is not my idea. I'm following a suggestion made by Great Bustard, but with an absolute minimum of processing.

What you need to be doing is:

I'm not going to be following alternative suggestions by anyone else, sorry.

Cheers,
Ted
Re: That "explanation" is so wrong...

dethis2 wrote:

I think you need to downsample using a good algorithm (say Lanczos) and do it in linear space instead of gamma 2.2. Gamma 2.2 is responsible for the lower mean value at the "70mm" sample.

Now that I think of it, this gamma error is in part responsible for the much lower noise we perceive after downsampling: in the dark parts, like in your sample, it damps the pixel differences much more than a correct workflow would.

I did think about using linear export. But the last numbers I gave are from the raw data (no processing at all) and the result was similar enough, I thought.

With all your and others' recent advice, which appears to include post-processing of various kinds, I'm becoming confused as to what is being compared (noise-wise): just focal length and sensor response, or full conversion of all three channels with cropping and resizing, etc.?

Cheers,
Ted
Re: That "explanation" is so wrong...

xpatUSA wrote:

dethis2 wrote:

I think you need to downsample using a good algorithm (say Lanczos) and do it in linear space instead of gamma 2.2. Gamma 2.2 is responsible for the lower mean value at the "70mm" sample.

Now that I think of it, this gamma error is in part responsible for the much lower noise we perceive after downsampling: in the dark parts, like in your sample, it damps the pixel differences much more than a correct workflow would.

I did think about using linear export. But the last numbers I gave are from the raw data (no processing at all) and the result was similar enough, I thought.


Cheers,
Ted

Indeed the raw result looks more "in target".

You mean that you took the raw results by selecting an n1 x n2 area in the 35mm shot and a 2n1 x 2n2 area in the 70mm one?

I was talking about the results after resampling, where the lower mean value is an indication that something went wrong.


Ilias

Re: f-number equivalent between m43 and FF

D Cox wrote:

HumanTarget wrote:

D Cox wrote:

The final SNR depends (assuming technology of the same generation) on the amount of light falling on each pixel. This determines the uncertainty of the measurement of light by that individual photodetector. The more photons, the more certain the measurement.

Actually, shot noise increases with the number of photons, so the more photons the more noise, even though the SNR is increased. And a lot of cameras have reduced read noise at higher ISO settings, in which case they get better measurements at lower light levels (though at a reduced SNR).

The uncertainty is what you call the SNR. I find it better to think of a single measurement (one exposure, one pixel) as having uncertainty rather than noise. If you call it noise, you get confused with the image noise across an array of pixels, which results from the random uncertainty of each pixel measurement.

The uncertainty is the noise, which is why it's called noise.  A better SNR does not give you better certainty; it just makes the uncertainty less important.  Noise is noise, whether from one pixel sampled multiple times, or from an array of pixels.
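The point that absolute noise grows with photon count while SNR still improves follows from Poisson statistics: for a mean of N photons the shot noise is sqrt(N), so SNR = N/sqrt(N) = sqrt(N). A small simulation (a sketch using numpy's Poisson generator; the photon counts are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# One pixel sampled over many identical exposures at two light levels.
for mean_photons in (100, 10000):
    samples = rng.poisson(mean_photons, size=100_000)
    noise = samples.std()          # absolute noise grows with the signal...
    snr = samples.mean() / noise   # ...but SNR = N / sqrt(N) = sqrt(N) grows too
    print(f"{mean_photons:6d} photons: noise ~ {noise:6.1f}, SNR ~ {snr:6.1f}")
```

At 100x the light, the noise is about 10x larger in absolute terms, yet the SNR is also about 10x better.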

The signal in a photographic image is the differences between pixels. The uncertainty (error bars) for each pixel gives a random, or noise, component when comparing the pixels.

Your definition of signal would suggest that a noisy image has a stronger signal (more difference between pixels).

A noisy image might have a higher amplitude, but the whole problem of noise is that it is inextricably combined with the image signal. There is no a priori way to know that the sky doesn't really have a lot of random variations in it. We call this "noise" because we know from other experience that the sky doesn't "really" look dotty.

We know that the sky has a lot of random variations in it, because that's the way light works.  Light is noisy.

The total area of the sensor is irrelevant. If you put the same lens on various sizes of sensor, the noise level in the part of the image that they all record will be identical. (Assuming they are the same generation of technology.)

But why would you crop the image of the larger sensor? I know of nobody who always crops their images to a lowest common denominator. That totally defeats the purpose of a larger sensor. More information gives you a better image.

You would crop it only for the purpose of comparing noise levels between sensors. It is essential to keep the variables to a minimum, and adding varying amounts of enlargement to the mix confuses the comparison.

But by comparing different sized pixels, you're doing the same thing.  You're comparing one surface that can hold more light to one that holds less.  If you had two ideal/perfect sensors of the same size, but one had twice the pixels, it would appear more noisy per pixel because the sensor with fewer pixels would not have the resolution to show the noise inherent to the light.  By your logic, though, it would be a worse performing sensor, even though it'd have no noise of its own.

The question of why you might see more noise in a bigger print from the same area of a sensor is interesting, but I don't think it is a property of the sensor. Pixel size is.

But the sensor is made up of a number of pixels, isn't it?  If you compare a V6 engine to a V8 engine, you wouldn't compare the cylinder of one to another; you'd compare the overall effect of all of them.
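HumanTarget's point, that per-pixel comparisons penalize the higher-resolution sensor, can be sketched by binning. Assuming two ideal sensors (shot noise only) of equal area, one with 4x the pixels, summing each 2x2 block of the small-pixel sensor recovers the same per-site SNR as the big-pixel sensor (photon counts are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
mean_per_big_pixel = 4000  # photons collected by one big pixel (illustrative)

# Big-pixel sensor: 100x100 sites, each collecting the full count.
big = rng.poisson(mean_per_big_pixel, size=(100, 100))

# Small-pixel sensor of the same area: 200x200 sites, a quarter of the light each.
small = rng.poisson(mean_per_big_pixel // 4, size=(200, 200))

# Per-pixel SNR: the small-pixel sensor looks noisier...
snr_big = big.mean() / big.std()
snr_small = small.mean() / small.std()

# ...but summing each 2x2 block (an equal-area comparison) restores the same SNR.
binned = small.reshape(100, 2, 100, 2).sum(axis=(1, 3))
snr_binned = binned.mean() / binned.std()

print(snr_big, snr_small, snr_binned)
```

Neither sensor adds any noise of its own here; the per-pixel difference is entirely the shot noise of the light divided among more sites.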

Re: That "explanation" is so wrong...

dethis2 wrote:

xpatUSA wrote:

dethis2 wrote:

I think you need to downsample using a good algorithm (say Lanczos) and use linear space for this instead of gamma 2.2. Gamma 2.2 is responsible for the lower mean value in the "70mm" sample.

Now that I think of it, this gamma error is partly responsible for the much lower noise we perceive after downsampling, as (in the dark parts, like in your sample) it dampens the pixel differences much more than a correct workflow would.

I did think about using linear export. But the last numbers I gave are from the raw data (no processing at all) and the result was similar enough, I thought.


Cheers,
Ted

Indeed the raw result looks more "in target".

You mean that you took the raw results by selecting an n1 x n2 area in the 35mm shot and a 2n1 x 2n2 area in the 70mm one?

I meant what I said earlier - my last results were for the whole image. Bear in mind the image was of a white paper sheet at low exposure and the frame was full for each shot.

70mm: mean 49.7, s.dev 11.9

36mm: mean 48.4, s.dev 24.4

These last figures straight from RawDigger, whole image, no exporting involved. Numbers are un-scaled raw data values.

I was talking about the results after resampling, where the lower mean value is an indication that something went wrong.

I understand. As you know, RawDigger analyses the raw data before any post-processing. No re-sampling involved.
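The kind of whole-frame statistic RawDigger reports is just the mean and standard deviation over the raw values. A sketch with synthetic data standing in for a raw channel (a uniform white-paper target at low exposure, shot noise only; the mean of 50 loosely mimics the figures above):

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for one raw channel of a flat white-paper shot at low exposure:
# even a uniform target scatters around its mean because of shot noise.
raw = rng.poisson(50, size=(1000, 1500)).astype(np.uint16)

mean = raw.mean()
sdev = raw.std()
print(f"mean {mean:.1f}, s.dev {sdev:.1f}")
```

With pure shot noise, a frame with a mean near 50 would have a standard deviation near sqrt(50), about 7; the larger measured values (11.9 and 24.4) suggest other contributions as well, such as read noise or uneven illumination across the frame.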


Cheers,
Ted
