Bigger pixels = Less noise... explain?

Started Oct 21, 2011 | Discussions thread
nedelcho
Regular Member • Posts: 351
Erik Fossum's thoughts on the question
In reply to ejmartin, Oct 21, 2011

If Mr. Erik Fossum doesn't mind, I will quote some of his thoughts on the subject:

KLO82 wrote:
I have two questions:

1. Is it easier (from a technical/engineering point of view) to make a sensor with larger pixels to have better quantum efficiency? Let's assume that right at this moment Canon has decided to develop two FF sensors with different pixel counts. One is 12MP and the other is 32MP. (I wrote "right at this moment" to imply a similar level of technology.) Is it more likely that the 12MP sensor will have better QE?

Erik Fossum:

Glad you mentioned the comparison being done with the same generation of technology. Most people comparing camera performance forget that the sensor technology generation is generally different.

Generally it is easier to have higher QE in larger pixels at the same technology generation. But we are talking about the difference between, say, 70% QE and 60% QE, so really not so significant.
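(A back-of-the-envelope check of that "not so significant" remark - my own sketch, not Fossum's, with an assumed per-pixel photon count:)

```python
# My own back-of-the-envelope check, not Fossum's numbers: how much a
# 70% vs 60% QE gap moves shot-noise-limited SNR, assuming the same
# exposure. The 10,000-photon figure is an arbitrary assumption.
import math

photons = 10_000                          # photons reaching the pixel (assumed)
for qe in (0.70, 0.60):
    signal = qe * photons                 # collected photoelectrons
    snr = math.sqrt(signal)               # shot-noise-limited SNR = sqrt(signal)
    print(f"QE {qe:.0%}: {signal:,.0f} e-, SNR {snr:.1f} ({20 * math.log10(snr):.1f} dB)")
# QE 70%: 7,000 e-, SNR 83.7 (38.5 dB)
# QE 60%: 6,000 e-, SNR 77.5 (37.8 dB)  -> well under 1 dB apart
```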

2. What about read noise per unit area (or in this case, read noise per sensor, as both the sensors have same area)? Which one of the sensors is more likely to have less read noise?

OK, silly to talk about read noise per unit area. It is not really a useful metric.

The read noise in sensors these days is a few electrons (e.g., 2-3 e- rms). That means that as soon as the signal is above, say, 10 e-, fundamental photon shot noise is dominant. Most people don't find the low light levels corresponding to a 10 e- signal level very useful... except maybe astronomers or the military.

So, first of all, don't worry about read noise these days. Second, it would be almost identical in the two sensors you postulate.
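(Again a sketch of my own: adding an assumed 2.5 e- rms read noise in quadrature to shot noise shows why it stops mattering once the signal climbs past a few tens of electrons:)

```python
# A sketch of my own (only the 2-3 e- rms figure comes from the thread):
# read noise adds to photon shot noise in quadrature, so its contribution
# to total noise fades quickly as the signal grows.
import math

read_noise = 2.5  # e- rms, within the 2-3 e- range quoted above
for signal in (5, 10, 100, 1000):               # photoelectron counts (assumed)
    shot = math.sqrt(signal)                    # photon shot noise = sqrt(signal)
    total = math.sqrt(shot**2 + read_noise**2)  # independent noises add in quadrature
    print(f"{signal:4d} e-: shot {shot:5.2f}, total {total:5.2f} "
          f"(read noise adds {100 * (total / shot - 1):.0f}%)")
# At 10 e- the shot noise already exceeds the read noise; by 100 e- the
# read-noise penalty is about 3%, and by 1000 e- it is negligible.
```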

These are hypothetical questions "assuming all other things equal". I am sure that in real life there can be many other variables involved.

Yes, for example color crosstalk between different color signals (Bayer or Foveon) has a big impact on low-light SNR in the processed image. Larger pixels have less color crosstalk within the same technology generation.

Bigger pixels can also more easily have larger DR and higher max SNR due to larger full-well capacity - at the same technology generation.

Nevertheless, to the human eye, at the same technology generation, there is a "sweet spot" where the sense of IQ peaks, which is usually at the highest pixel density (ha ha ha) that a sensor maker will choose to make, not surprisingly.

And thank you guys for making my ears burn off.

Thus, the question becomes, why would the read noise not scale with area?

It just does not. It is a transistor property, not a pixel-area property.

Are you saying that 1 pixel has a larger full-well capacity than 4 pixels of the same area? Is there a reference to this anywhere from the manufacturers?

No, I said it is easier to design a higher full well into a bigger pixel than a smaller pixel. Full well in the photodiode is mostly area x per-area capacitance x voltage. The latter two are the same, so it is easier to do with a larger pixel area. But normally one designs the output node for a particular conversion gain (uV/e-), and the maximum voltage swing therefore sets the full well, nominally independent of pixel area. It is a complicated design tradeoff.
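(My own sketch of the two full-well estimates he describes; every number here - capacitance, swing, conversion gain - is an illustrative assumption, not a real device parameter:)

```python
# My own sketch of the two full-well estimates described above. All
# device numbers (capacitance per area, voltages, conversion gain) are
# illustrative assumptions, not real sensor parameters.
q = 1.602e-19             # electron charge, coulombs

# (a) Photodiode-limited: full well ~ area x per-area capacitance x voltage
area_um2 = 4.0            # photodiode area, assumed 2 um x 2 um
cap_f_per_um2 = 1.0e-15   # capacitance per area, F/um^2 (assumed)
volt_swing = 1.0          # usable photodiode voltage swing, V (assumed)
fw_pd = area_um2 * cap_f_per_um2 * volt_swing / q
print(f"photodiode-limited full well ~ {fw_pd:,.0f} e-")    # ~25,000 e-

# (b) Output-node-limited: max voltage swing / conversion gain,
# which does not depend on pixel area at all.
swing_uV = 1_000_000      # 1 V output swing, in microvolts (assumed)
gain_uV_per_e = 50.0      # conversion gain, uV/e- (assumed)
fw_out = swing_uV / gain_uV_per_e
print(f"output-node-limited full well ~ {fw_out:,.0f} e-")  # 20,000 e-
```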

There are additional design tradeoffs made at the camera electronics and firmware level so it is hard to say that camera performance is entirely related to the sensor performance, as you say.

Read noise is the noise in the transistor circuit used to read out the photoelectrons collected by the pixel. Examples are thermal noise, 1/f noise, etc. Most of the noise comes from the very first transistor (which is why the uV/e- conversion gain is desired to be high - to get above this noise floor).

Noise in a photo is probably something different from sensor read noise, and is more likely dominated by photon shot noise.

I probably wasn't clear. We were talking about full well, or the maximum number of electrons. DR is the ratio of the full well to the dark read noise (both measured in electrons). The maximum SNR is typically equal to the square root of the full well. Both are expressed in dB as 20 log X.

Since read noise would be about equal in the two cases in this discussion, DR and SNR are limited by the full well.
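(Plugging assumed numbers into those two formulas - a full well of 40,000 e- and 3 e- read noise, both made up for illustration:)

```python
# Plugging assumed numbers into the two formulas above. Full well and
# read noise are made-up but typical-looking values, not measured data.
import math

full_well = 40_000    # e- (assumed)
read_noise = 3.0      # e- rms (assumed)

dr_db = 20 * math.log10(full_well / read_noise)      # DR = full well / read noise
snr_db = 20 * math.log10(math.sqrt(full_well))       # max SNR = sqrt(full well)
print(f"DR ~ {dr_db:.1f} dB, max SNR ~ {snr_db:.1f} dB")
# DR ~ 82.5 dB, max SNR ~ 46.0 dB
```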

Why is this so? For example, why would one 2x2 pixel gather more light than four 1x1 pixels?

What I meant was why a higher proportion of photons falling on four 1x1 pixels would be recorded than a single 2x2 pixel.

This seems like the opposite of your initial question. Anyway, the single "2x2" pixel has less dead area relative to "live" area than four "1x1" pixels, if I am understanding you right, even after microlenses. This goes under the category of fill factor. The higher the fill factor, the more photons make it into the silicon.
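(A crude geometric sketch of why fill factor favors larger pixels - assuming, purely for illustration, that wiring/isolation eats a fixed strip of each pixel's pitch:)

```python
# A crude geometric sketch of fill factor, assuming (purely for
# illustration) that wiring/isolation eats a fixed 0.5 um strip out of
# each pixel's pitch regardless of pixel size.
dead = 0.5                            # dead width per pixel pitch, um (assumed)
for pitch in (1.0, 2.0):              # "1x1" vs "2x2" pixels
    fill_factor = (pitch - dead) ** 2 / pitch ** 2
    print(f"{pitch:.0f} um pixel: fill factor {fill_factor:.0%}")
# 1 um pixel: fill factor 25%
# 2 um pixel: fill factor 56%   -> the bigger pixel wastes less area
```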

Here is the link to the discussion:
http://forums.dpreview.com/forums/read.asp?forum=1000&message=38717540

So, in the end, as I understand it, bigger pixels are better, but if the technology is better (newer), small pixels may equal the large ones of the previous generation. Within the same generation, though, bigger is better, as we have seen with the D7000/K-5 and the Sony A77: same-generation sensors, but the A77 is more pixel-dense (smaller pixels) and noticeably noisier than the sensor in the D7000/K-5 or NEX-5N, even when its 24MP output is resized to D7000/K-5 dimensions, i.e., 16MP.
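(One caveat of my own on the resize comparison: averaging pixels down from 24MP to 16MP only improves pixel-level SNR by about sqrt(24/16), so a gap that survives resizing does point at the sensor itself:)

```python
# My own sanity check on the resize argument: averaging 24MP down to
# 16MP improves pixel-level SNR by only sqrt(24/16), so a noise gap that
# survives resizing reflects the sensors (QE, crosstalk, full well),
# not just the pixel count.
import math

snr_gain = math.sqrt(24 / 16)   # noise averages down by sqrt(pixel ratio)
print(f"24MP -> 16MP resize: {snr_gain:.2f}x SNR "
      f"({20 * math.log10(snr_gain):.1f} dB)")
# 24MP -> 16MP resize: 1.22x SNR (1.8 dB)
```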
--
My gallery:
http://4coolpics.com/author_pics.php?aut=21355
