Erik Kaffehr
Veteran Member
Hi,

Assume that you expose a pixel to constant light and take a large number of samples.

A single pixel doesn't have an S/N ratio in a still image. It makes a measurement, like a hand-held light meter or a thermometer. That measurement has an uncertainty, an error bar if you like, the width of which depends on the measurement. There's no way to split that single number into "signal" and "noise".

However, this applies only to signals and noise with spatial frequencies near the width or height of the sensor. For smaller details, what matters is the exposure in each local patch of the image. You can have a small shadow area with a very low S/N ratio next to a highlight area with a very good S/N ratio. These ratios are not affected by the exposures in other parts of the image.

But that's not what you see in the image, that's only what's recorded. The image is just a number code that's reproduced by a printer or display. Pure white is the brightest you are going to get, whatever the camera records.

I have never denied that. But exposure is not what I meant by "light", photons are.

No. Exposure is signal per unit area. The same exposure will saturate all sensors with the same QE at the same point.

Well, of course, if one meters relative to the sensor's saturation, then it naturally follows that no amount of increasing saturation capacity would ever give more highlight headroom. But I would like to question whether it is more relevant to look at things like that rather than in terms of the number of photons.
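A bit of arithmetic to illustrate the two viewpoints, with made-up full-well numbers; the metering convention (middle grey placed a fixed number of stops below clipping) is an assumption for the sake of the example:

```python
# Two hypothetical sensors, one with double the full-well capacity.
# Metering is assumed to place middle grey 3 stops below saturation.
HEADROOM_STOPS = 3

for full_well in (30_000, 60_000):  # electrons at clipping (assumed values)
    grey_level = full_well / 2 ** HEADROOM_STOPS
    print(f"full well {full_well:>6} e-: grey at {grey_level:>6.0f} e-, "
          f"headroom {HEADROOM_STOPS} stops, "
          f"photons recorded at clipping: {full_well}")

# Relative to saturation, the headroom is 3 stops either way. In photon
# terms, the bigger well records twice the light at every tone level,
# which is where an SNR benefit would come from.
```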
Per “Image Sensors and Signal Processing for Digital Stills Camera”, page 308:
"Because the capacitance of a pixel depends on its cross-sectional structure and the area of the pixel, pixel capacitance, and thus full-well capacity, generally scales linearly with area for a given technology […]. The maximum number of photoelectrons that a 1 × 1 μm pixel can store is approximately 1/25th the number that a 5 × 5 μm pixel can store."

Therefore, it generally is the case that larger sensors can hold more light before clipping, for the image as a whole or for any fixed fraction of it.
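A quick sanity check of that scaling, assuming a hypothetical full-well density of 2000 e-/μm² (the real figure varies by process; this number is only illustrative):

```python
# Full-well capacity scales roughly linearly with pixel area for a
# given technology. The density figure below is an assumed value,
# not taken from the book.
FULL_WELL_PER_UM2 = 2000  # electrons per square micron (assumed)

for side_um in (1.0, 5.0):
    area_um2 = side_um ** 2
    full_well = FULL_WELL_PER_UM2 * area_um2
    print(f"{side_um:.0f} x {side_um:.0f} um pixel: ~{full_well:,.0f} e- full well")

# The 5 x 5 um pixel holds 25x the electrons of the 1 x 1 um pixel,
# matching the 1/25th ratio quoted above.
```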
It seems logical that looking at light per unit area would appear to nullify a linear advantage of a larger area.

No, because with a larger area you record more light overall, so the signal-to-noise ratio is higher over the total area of the image.
Any area would be built from pixels. Each of the pixels would have an SNR which is the square root of its photon count. Making the pixels smaller, there would be more pixels defining that area, but the SNR over the area would be nearly the same, because the total photon count would be unchanged.
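A small simulation of that point, assuming pure Poisson (shot) noise and a patch that receives the same total light however it is subdivided:

```python
import numpy as np

rng = np.random.default_rng(0)
patch_mean_photons = 1_000_000  # total expected photons in the patch (assumed)

for n_pixels in (1, 100, 10_000):
    # Split the same light over more, smaller pixels, then sum the
    # patch back together for each of 2000 simulated exposures.
    per_pixel_mean = patch_mean_photons / n_pixels
    patch_totals = rng.poisson(per_pixel_mean, size=(2000, n_pixels)).sum(axis=1)
    snr = patch_totals.mean() / patch_totals.std()
    print(f"{n_pixels:>6} pixels: patch SNR ~ {snr:.0f}")

# All three cases land near sqrt(1_000_000) = 1000: the SNR of the
# patch depends on the total photon count, not on how finely the
# patch is divided into pixels.
```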
It's only when you have an array or a sequence of measurements that you can talk about a signal.
The sample value will have a Poisson distribution with a standard deviation that is the square root of the photon count. The signal is the photon count and the noise is conventionally the standard deviation of the signal. So, SNR is quite conveniently sqrt(signal).
Now, take a large number of pixels and one single sample from each at the same time. As we presume the light to be constant, the two sets of measurements will be statistically identical.
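A sketch of that equivalence, assuming ideal Poisson statistics and ignoring read noise and pixel-to-pixel non-uniformity:

```python
import numpy as np

rng = np.random.default_rng(1)
mean_photons = 10_000  # expected photons per pixel per exposure (assumed)

# One pixel sampled many times under constant light (temporal ensemble).
temporal = rng.poisson(mean_photons, size=100_000)

# Many pixels, one simultaneous sample each (spatial ensemble).
spatial = rng.poisson(mean_photons, size=100_000)

for name, samples in (("temporal", temporal), ("spatial", spatial)):
    print(f"{name}: mean={samples.mean():.0f}, std={samples.std():.1f}, "
          f"sqrt(mean)={np.sqrt(samples.mean()):.1f}")

# Both ensembles show std ~ sqrt(mean) ~ 100, i.e. SNR = sqrt(signal).
```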
That is correct, but I would guess that comparisons would be between images viewed at the same size.

But not if it is viewed at the same degree of enlargement.

If we have a larger sensor, it will have a higher SNR if the silicon design is identical, the exposure relative to saturation is the same, and the image is viewed at the same size.
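As a rough model of that last claim: assume two sensors with identical per-area silicon, exposed identically relative to saturation, with the images viewed at the same size, so each region of the output image integrates light in proportion to the sensor area behind it. The photon density and region size below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Photons per unit sensor area at a given exposure relative to
# saturation; assumed identical silicon, illustrative number only.
photons_per_mm2 = 50_000

for name, area_mm2 in (("Four Thirds", 225), ("full frame", 864)):
    # Use 1/10_000 of the frame as an arbitrary output-image region.
    region_photons = photons_per_mm2 * area_mm2 / 10_000
    samples = rng.poisson(region_photons, size=20_000)
    print(f"{name}: region SNR ~ {samples.mean() / samples.std():.1f}")

# The full-frame sensor's ~3.8x area gives ~sqrt(3.8) ~ 2x the SNR
# per output region at the same viewing size.
```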
Best regards
Erik


