(unknown member) • Forum Pro • Posts: 16,732

Re: The role of demosaicing ?
tokumeino wrote:
57even wrote:
[...]
If you downsize the files to the same size (either in pixels or print size) the noise level will be more or less identical. It doesn't matter how big the pixels are, only how much light you collect over the whole sensor, or per sq m of the final image.
More small pixels collect the same amount of light as fewer large ones.
Bit-depth is only relevant to DR.
Just look at the SNR data on DXO.
Not being an expert, this makes sense to me, especially now that the circuitry between photosites doesn't eat as much sensor space as it used to, and now that there are microlenses to collect light over the whole surface.
I can imagine that at high densities more photosites will return random values, but in that case more accurate photosites will be close by in the neighbourhood to provide an adequate value, provided the demosaicing/photosite-level-NR algorithm is not stupid and can more or less sort the random photosites from the accurate ones.
The real problem when photosites get very small (ie Phone cameras) is that the photodiodes have to be shallower. Light absorption is highest at the surface, but longer wavelengths penetrate further. When you get down to depths like 2microns, you lose some of the red signal so colour accuracy is affected.
As far as I understand, this raises the (still open, as far as I'm concerned) question: how do demosaicing, colour patterns (Bayer vs X-Trans) and density interact? I suspect that the answer is not trivial and would require more knowledge than I have at my disposal. My intuition is that since the X-Trans pattern has less densely placed red and blue photosites, at a lower photosite density a failure might be catastrophic, because interpolation would have to fetch a value from far away. But that's only an uneducated guess, so I could easily be convinced of the opposite by well-stated arguments. Any ideas about that?
Let's just compare mosaic sensors for a moment, and assume we use the same raw converter. If we ignore pixel variation/noise, the uncertainty level is based entirely on spatial separation. Smaller pixels are closer together, so we have a higher sampling rate for the same image area, i.e. more resolution.
If we add the noise factor, then we get more variation in those pixel values, but our perception of that noise depends on our visual acuity. We seldom 'see' pixel noise unless we blow the image up to 100%. We see the averaged variation at resolutions at which we have the highest visual sensitivity, or about 10-15 lines/degree. We can see down to about 80 l/degree, but at very low contrast.
Clearly, if there are more pixels describing those lines, our brain averages the noise over more pixels, and the net result is more or less the same.
Similarly, if we downsize the image, the new output pixels have noise averaged over more input pixels, so it's pretty much the same again. We are simply trading a higher sampling rate for less sample variation. This does of course assume we use a good downsampling algorithm.
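That trade between sampling rate and sample variation is easy to demonstrate numerically. A minimal sketch (the patch size, noise level and 4x4 block averaging are all hypothetical choices, and block averaging stands in for a proper downsampling algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical flat grey patch: a uniform signal of 0.5 with Gaussian
# per-pixel noise of standard deviation 0.05 added, on a 1024x1024 grid.
img = 0.5 + rng.normal(0.0, 0.05, size=(1024, 1024))

# Downsample 4x4 -> 1 by block averaging: each output pixel averages
# 16 input pixels, so its noise drops by roughly sqrt(16) = 4.
small = img.reshape(256, 4, 256, 4).mean(axis=(1, 3))

print(img.std())    # per-pixel noise, as seen at 100% view
print(small.std())  # roughly a quarter of the input noise
```

The same total light produced the same image-level noise; we only moved it between "many noisy samples" and "fewer cleaner samples".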
It all comes down to noise per unit area of the image, which correlates very well to noise per unit area of the sensor - assuming sensors are the same size.
With X-Trans, there is slightly more uncertainty due to the greater spatial separation between the red and blue pixels that sit next to green. However, this is not as bad as it seems.
Firstly, the colour filters cover a range of wavelengths. There is quite a lot of overlap between red and green, and between green and blue. So it is possible to correlate a neighbouring green-red pair with another green-red pair to refine the probable red/green ratio. It's just more computationally intensive, and needs to look at a larger sample of pixels. It also requires a two-pass technique (at least) rather than one pass.
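A toy illustration of that ratio idea (this is a simplified colour-ratio interpolation, not Fuji's actual algorithm, and all the sample values are invented): because R/G tends to vary smoothly across a surface, interpolating the ratio from nearby green-red pairs is more robust than interpolating raw red values.

```python
import numpy as np

# Hypothetical neighbouring (green, red) pairs measured near the site
# where the red value is missing.
pairs = np.array([
    [0.60, 0.30],
    [0.58, 0.28],
    [0.62, 0.32],
])

g_here = 0.40                               # green measured at the red-less site
ratio = np.mean(pairs[:, 1] / pairs[:, 0])  # average local red/green ratio
r_est = g_here * ratio                      # inferred red value

print(r_est)
```

A real converter would weight those pairs by distance and edge direction, which is where the extra computation and the second pass come in.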
There is a very slight loss of colour resolution, but not as much as you might think.
Good Bayer raw converters also use wider interpolation to detect edges, because Bayer is very prone to colour aliasing: all the red and blue pixels are aligned in rows and columns sampled at only half the sensor frequency. Even edge detection will not eliminate colour moire, so many cameras also use an AA filter.
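The "half the sensor frequency" point is just back-of-envelope arithmetic: in a Bayer mosaic the red (and blue) photosites repeat every second pixel in each direction, so their Nyquist limit is half that of the full pixel grid. A quick sketch, using a hypothetical 6 micron pixel pitch:

```python
# Bayer chroma sampling, back of the envelope (6 um pitch is an assumption).
pitch_um = 6.0
luma_nyquist = 1000.0 / (2 * pitch_um)       # line pairs per mm, full pixel grid
red_pitch_um = 2 * pitch_um                  # red samples sit every other pixel
chroma_nyquist = 1000.0 / (2 * red_pitch_um) # half the luminance limit

print(luma_nyquist, chroma_nyquist)
```

Any red/blue detail between those two frequencies can alias into colour moire, which is what the AA filter is there to suppress.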
X-Trans does not require an AA filter, which adds about half a pixel of blur. So, like everything, it's a trade-off - but it does require a demosaicing process that is tuned to its specific quirks.
It should be noted that nearly all FF cameras with 24MP sensors still use an AA filter, yet still produce plenty of colour moire. Moire removal blurs images.
So I guess it depends how much you dislike colour moire.
Reporter: "Mr Gandhi, what do you think of Western Civilisation?"
Mahatma Gandhi: "I think it would be a very good idea!"