Perception, reality and a signal below the noise...

Well, of course, if one meters relative to the sensor’s saturation, then it naturally follows that no amount of increasing saturation capacity would ever give more highlight headroom. But I would like to question whether it is more relevant to look at things like that rather than in terms of number of photons.

Per “Image Sensors and Signal Processing for Digital Still Cameras”, page 308:
Because the capacitance of a pixel depends on its cross-sectional structure and the area of the pixel, pixel capacitance, and thus full-well capacity, generally scales linearly with area for a given technology […]. The maximum number of photoelectrons that a 1 × 1 μm pixel can store is approximately 1/25th the number that a 5 × 5 μm pixel can store.
Therefore, it generally is the case that larger sensors can hold more light before clipping, for the image as a whole or for any fixed fraction of it.
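For a rough numerical sense of that scaling, here is a minimal sketch, assuming full-well capacity is simply proportional to pixel area as in the quoted text (the electrons-per-µm² figure is illustrative, not from any datasheet):

```python
# Hypothetical sketch: full-well capacity assumed proportional to pixel area.
# The electron density figure is illustrative only, not from a real sensor.
def full_well_electrons(pixel_pitch_um, electrons_per_um2=2000):
    """Approximate full-well capacity for a square pixel of the given pitch."""
    return electrons_per_um2 * pixel_pitch_um ** 2

small = full_well_electrons(1.0)   # 1 x 1 um pixel
large = full_well_electrons(5.0)   # 5 x 5 um pixel
print(small, large, large / small) # 2000 50000 25.0 -- the 1/25th ratio above
```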
No. Exposure is signal per unit area. The same exposure will saturate all sensors with the same QE at the same point.
I have never denied that. But exposure is not what I meant by “light”, photons are.
But that's not what you see in the image, that's only what's recorded. The image is just a number code that's reproduced by a printer or display. Pure white is the brightest you are going to get, whatever the camera records.
It seems logical that looking at light per unit area would appear to nullify a linear advantage of a larger area.
No, because with a larger area, you record more light overall, so the signal to noise ratio is higher over the total area of the image.
However, this applies only to signals and noise with spatial frequencies near the width or height of the sensor. For smaller details, what matters is the exposure in each local patch of the image. You can have a small shadow area with a very low S/N ratio next to a highlight area with a very good S/N ratio. These ratios are not affected by the exposures in other parts of the image.
Hi,

Any area would be built from pixels. Each pixel would have an SNR equal to the square root of its photon count. Making the pixels smaller means more pixels define that area, but the SNR over the area would be nearly the same.
A single pixel doesn't have a S/N ratio in a still image. It makes a measurement, like a hand-held light meter or a thermometer. That measurement has an uncertainty, an error bar if you like, the width of which depends on the measurement. There's no way to split that single number into "signal" and "noise".

It's only when you have an array or a sequence of measurements that you can talk about a signal.
Assume that you expose a pixel to constant light and take a large number of samples.

The sample values will have a Poisson distribution with a standard deviation equal to the square root of the photon count. Signal is the photon count and noise is conventionally the standard deviation, so the SNR is quite conveniently sqrt(signal).
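A quick simulation of that, for anyone who wants to check it (the mean photon count is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
mean_photons = 10_000                         # assumed constant illumination
samples = rng.poisson(mean_photons, 100_000)  # repeated exposures of one pixel

signal = samples.mean()   # ~ mean photon count
noise = samples.std()     # ~ sqrt(mean photon count)
print(signal, noise, signal / noise)  # SNR comes out close to sqrt(10000) = 100
```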

Now take a large number of pixels and take one single sample from each at the same time. As we presume the light to be constant, the two kinds of measurement will be statistically identical.
If we have a larger sensor, it will have a higher SNR if the silicon design is identical, the exposure relative to saturation is the same, and the image is viewed at the same size.
But not if it is viewed at the same degree of enlargement.
That is correct, but I would guess that comparisons would be of images viewed at the same size.

Best regards

Erik
 
However, this applies only to signals and noise with spatial frequencies near the width or height of the sensor. For smaller details, what matters is the exposure in each local patch of the image. You can have a small shadow area with a very low S/N ratio next to a highlight area with a very good S/N ratio. These ratios are not affected by the exposures in other parts of the image.
Not if you are comparing images at the same size. In that case, any unit square patch of the image has a higher SNR.

And if you are not, the entire discussion is moot anyway.
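One way to see the "same output size" point numerically: model the larger sensor as having the same pixel count but twice the pixel pitch, so that at equal exposure each of its pixels (and hence each patch of the final image) collects four times the photons. A sketch under pure shot noise:

```python
import numpy as np

rng = np.random.default_rng(1)
photons_per_um2 = 50   # same exposure (photons per unit area) on both sensors

# Same pixel count, different pixel area: 1 um pitch vs 2 um pitch.
small = rng.poisson(photons_per_um2 * 1.0**2, size=(1000, 1000))
large = rng.poisson(photons_per_um2 * 2.0**2, size=(1000, 1000))

def snr(img):
    """Mean over standard deviation of a uniformly lit patch."""
    return img.mean() / img.std()

# Viewed at the same size, the larger sensor's patches hold 4x the photons,
# so their SNR is about 2x higher (the square root of the area ratio).
print(snr(small), snr(large))
```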
 
A single pixel doesn't have a S/N ratio in a still image. It makes a measurement, like a hand-held light meter or a thermometer. That measurement has an uncertainty, an error bar if you like, the width of which depends on the measurement. There's no way to split that single number into "signal" and "noise".

It's only when you have an array or a sequence of measurements that you can talk about a signal.
Flag on play.

We are talking about photon shot noise, so in this case your analogy is not correct. There is no error, because the sensor is measuring the exact number of photons that arrive*. It is not the sensor's fault that the number of photons arriving per unit time fluctuates; that is just the nature of light emission and propagation through scattering and absorbing media.

The only error is the difference between what is measured and the average value of the photon flux. And, this error or noise exists even in a single measurement by a single pixel, although unknowable to the observer.

When we talk about SNR in a loose context, signal means the average value and noise means the standard deviation, both over a series of measurements as you say.

*Assuming no read noise or statistical fluctuation in QE, etc., which is a reasonable approximation for larger photon flux and good QE.
 
This might be tangential to the discussion but... a single pixel does have noise characteristics. The sequence of measurements is in time.
That's true for video, but not for still photography where each pixel gives you one number.

Perhaps you're thinking of your advanced photon-counting technology, but nobody is using that outside of labs.

Don Cox
 
We are talking about photon shot noise, so in this case your analogy is not correct. There is no error, because the sensor is measuring the exact number of photons that arrive*. It is not the sensor's fault that the number of photons arriving per unit time fluctuates; that is just the nature of light emission and propagation through scattering and absorbing media.
The pixel measures the exact number of photons, but there is an a priori assumption by the photographer (and his critics) that the surfaces of the subject (or the sky) are not covered in little random dots. We are trying to produce an accurate picture of the surfaces in the scene, using the photons reflected from them as evidence.
The only error is the difference between what is measured and the average value of the photon flux. And, this error or noise exists even in a single measurement by a single pixel, although unknowable to the observer.
The error is the difference between the evidence from the photons (with shot noise) and the evidence from prior vision (perhaps in better light and from a closer distance) and touch, which tells us what the subject "really" looks like.
When we talk about SNR in a loose context, signal means the average value and noise means the standard deviation, both over a series of measurements as you say.
Yes. Consider a photo of a smooth plastic ball. (It's as well to have an example.)

I know from previous experience that the ball is the same colour all over and has almost no surface texture. But what do we see in the photograph? Lots of speckly texture, which you and I know is shot noise plus read noise.

The signal is not the photons: it's the ball. The variations in colour and apparent texture are noise. Averaging the measurements over a patch of a few dozen pixels (with a "noise reduction" algorithm) gives an image that's nearer to the "real" signal.

The purpose of vision is to help us find food, avoid enemies, and not bump into trees. The signal is the objects around us.
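A sketch of that averaging effect on a uniformly lit "ball" patch, with shot noise only and arbitrary numbers:

```python
import numpy as np

rng = np.random.default_rng(2)
true_level = 400                                # the "ball": uniform brightness
patch = rng.poisson(true_level, size=(64, 64))  # what the sensor records

# Crude noise reduction: average over 8x8 blocks (a few dozen pixels each).
blocks = patch.reshape(8, 8, 8, 8).mean(axis=(1, 3))

print(patch.std(), blocks.std())  # the block averages scatter about 8x less
```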


 
This might be tangential to the discussion but... a single pixel does have noise characteristics. The sequence of measurements is in time.
That's true for video, but not for still photography where each pixel gives you one number.
That number changes when you repeat the experiment over and over again. Each pixel has statistical properties like a probability distribution. For read noise, it is something like Gaussian noise, discretized.
Perhaps you're thinking of your advanced photon-counting technology, but nobody is using that outside of labs.
No, I am not thinking about that. Noise can be modeled as follows. Consider all pixels as i.i.d. (or not) random variables (white noise). It is a random process generating a single frame, and roughly speaking, the statistical characteristics across a single frame (in uniform regions) are the same as the temporal ones of a single pixel.

This applies to additive noise (read noise). Shot noise (which is not really noise, as mentioned above) and other types of noise can be modeled along those lines as well with some modifications.
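A small sketch of that spatial-versus-temporal equivalence for additive read noise, assuming an arbitrary Gaussian sigma:

```python
import numpy as np

rng = np.random.default_rng(3)
read_sigma = 3.0   # assumed read-noise sigma, in arbitrary units

# Temporal statistics: one pixel sampled over many dark frames.
one_pixel_over_time = rng.normal(0.0, read_sigma, size=100_000)

# Spatial statistics: many pixels in one dark frame of a uniform scene.
one_dark_frame = rng.normal(0.0, read_sigma, size=(316, 316))

print(one_pixel_over_time.std(), one_dark_frame.std())  # both come out ~3.0
```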
 
The pixel measures the exact number of photons,
The pixel doesn't measure an exact number of photons. The exact number of photons that impinged on a pixel, even for an exact number of photoelectrons measured, is an unknowable quantity, described only by a distribution, which is strictly not even a Poisson distribution. See bullet #4 in the link below:

https://www.dpreview.com/forums/post/50345602

And, furthermore

https://www.dpreview.com/forums/post/50345857
 
The pixel measures the exact number of photons,
The pixel doesn't measure an exact number of photons.
Of course it measures an exact number of photons, or can measure that if it is sensitive enough. What do you think it is measuring?
The exact number of photons impinged on a pixel for even an exact number of photoelectrons measured is an unknowable quantity.
Of course it is knowable. What do you mean by saying it is unknowable?
 
Of course it is knowable. What do you mean by saying it is unknowable?
Because a number of different (closely spaced) photon counts could potentially generate the same number of photoelectrons, which is what is actually measured. Hence, you can only argue probabilistically about the exact number of photons that generated a particular number of photoelectrons, for typical, ordinary CMOS/CCD sensors.

That was in the 2 links in the original post of mine.
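A sketch of that point: with QE below 100%, photon-to-electron conversion behaves like binomial thinning, so many different photon counts can end up producing the same photoelectron count (the QE and flux values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
qe = 0.6             # assumed quantum efficiency
mean_photons = 1000  # assumed mean photon count per exposure

photons = rng.poisson(mean_photons, size=200_000)
electrons = rng.binomial(photons, qe)  # each photon converts with probability QE

# Pick one measured electron count and look at the photon counts behind it.
target = int(np.median(electrons))
candidates = photons[electrons == target]
print(target, candidates.min(), candidates.max())
# Many distinct photon counts map to the same measured electron count,
# so the photon count can only be inferred probabilistically.
```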
 
Because a number of different (closely spaced) photon counts could potentially generate the same number of photoelectrons, which is what is actually measured. Hence, you can only argue probabilistically about the exact number of photons that generated a particular number of photoelectrons, for typical, ordinary CMOS/CCD sensors.

That was in the 2 links in the original post of mine.
And, suppose the QE was 100%? As I mentioned, with a high value of QE it is a good approximation to say one is measuring the exact number of photons.
 
And, suppose the QE was 100%? As I mentioned, with a high value of QE it is a good approximation to say one is measuring the exact number of photons.
Of course, approximation. The earlier discussion insisted on exact.
 
And, suppose the QE was 100%? As I mentioned, with a high value of QE it is a good approximation to say one is measuring the exact number of photons.
Of course, approximation. The earlier discussion insisted on exact.
No, I did not. Please pay attention to the * comment in my post. Thx.
 
Are you seeing the signal yet? ;)
Yes, I guess that my question was more about psychology, but threads often discuss issues other than the one the OP intended.

Some arguments also make me realize that I may need to be humble. One's own point of view may seem pretty obvious, but the same issue can be seen from a different viewpoint, offering a different perspective.

Best regards

Erik
 
I guess that the discussion relates to how human perception works. Dunning-Kruger obviously comes to mind.
This reminds me of a quote I saw recently, something like: smart people are not right more often; they are just wrong for more sophisticated reasons.
Dunning and Kruger referred to "...the notion that incompetent individuals lack the metacognitive skills necessary for accurate self-assessment", and also (I would suggest) lack the skills necessary to assess others and assess evidence.

It reminds me of what different people mean by "common sense".

When smart people say something is common sense, then if asked they will usually justify it based on rational argument and evidence. Even if they are wrong, they've provided a sophisticated reason!

For less smart people, saying something is common sense is the end of the argument. It just is. "Common sense" is something that smart people (like themselves) just instinctively know, and requires no justification.

--
Simon
 
When smart people say something is common sense, then if asked they will usually justify it based on rational argument and evidence. Even if they are wrong, they've provided a sophisticated reason!

For less smart people, saying something is common sense is the end of the argument. It just is. "Common sense" is something that smart people (like themselves) just instinctively know, and requires no justification.
Interesting observation, and possibly a useful way to classify people. I like it. (Dodging bricks)

The only problem is that I often see people give carefully thought-out reasoning for the nuttiest ideas and conspiracy theories. I would say their skills for evaluating ideas are not very good, but they nevertheless give what some would regard as a sophisticated reason. Your idea sounds good, but I'm not sure it's right, unfortunately.
 
For less smart people, saying something is common sense is the end of the argument. It just is. "Common sense" is something that smart people (like themselves) just instinctively know, and requires no justification.
"Common sense" isn't.

 
"Common sense" isn't.

https://www.psychologytoday.com/us/...1107/common-sense-is-neither-common-nor-sense
That was a good link...

Best regards

Erik
 
Someone was discussing whether to buy a Hasselblad X1D or a Fujifilm GFX. One of the comments was:

'The X1D cameras just have a different look. It might be the massive color information that the camera is said to have in its firmware. It might be something in the way a "raw" file is generated. It might be the treatment of colors in the camera profile that Phocus has or that Adobe obtained from Hasselblad for Lightroom. Maybe one can wrestle an X1D raw file to look like a Fuji GFX file, but if you just start on the X1D raw in Phocus or even Lightroom, you typically have something of the camera "look" in most shots when you are done.'

That is a quite definitive statement. But do cameras have different color?

So, I downloaded DPReview's studio test images for both the X1D and the GFX 100, generated DCP profiles for both with LumaRiver Profile Designer, developed the studio test images with consistent exposure in Lightroom, and analysed the ColorChecker colors using Babelcolor 'PatchTool'.

The color differences were like this:



The colors are split squares; I don't recall which side is which, but they look pretty similar to me...

So, it seems both cameras are capable of producing exactly the same color.
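For anyone wanting to repeat the comparison, the per-patch differences can be summarized with a simple CIE76 ΔE (Euclidean distance in Lab); PatchTool reports something along these lines. The Lab arrays below are placeholders, not my measured values:

```python
import numpy as np

def delta_e_76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in Lab space."""
    return np.linalg.norm(np.asarray(lab1) - np.asarray(lab2), axis=-1)

# Placeholder Lab values for 24 ColorChecker patches from each camera;
# in practice these come from the profiled, consistently exposed files.
rng = np.random.default_rng(5)
x1d_lab = rng.uniform([20, -40, -40], [90, 40, 40], size=(24, 3))
gfx_lab = x1d_lab + rng.normal(0.0, 0.8, size=(24, 3))

de = delta_e_76(x1d_lab, gfx_lab)
print(de.mean(), de.max())  # mean and worst-case patch difference in dE units
```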

Obviously, different raw developers may produce different colors, and they apply different hue twists and tone curves.



This is one of the test samples of the X1D II from DPReview, processed with Hasselblad's own Phocus, with WB and 'exposure' (the 'L' value in Lab) approximately the same. There are differences, but I don't think they would be a reason to choose one over the other.

Best regards

Erik



--
Erik Kaffehr
Website: http://echophoto.dnsalias.net
Magic tends to disappear in controlled experiments…
Gallery: http://echophoto.smugmug.com
Articles: http://echophoto.dnsalias.net/ekr/index.php/photoarticles
 
So, it seems both cameras are capable of producing exactly the same color.
on this chart...
 
