Why and how small pixels influence the image quality

Started Mar 31, 2012 | Discussions

Safesphere wrote:

Your honest effort to bring some clarity to this issue is admirable and the information surfaced in this thread is absolutely great. However, you keep confusing the forum with ungrounded conclusions. Those who cannot follow your math get a wrong impression that you have something here. In fact, the read noise and consequently DR do not depend on the pixel size and are determined by the sensor technology with D800 as one example.

The point you are relying on is wrong:

schwoofi wrote:

KLO82 has summarized this nicely:

"To put it simply, if we divide a larger pixel to create x number of smaller pixels, we are increasing the read noise per unit area by sqrt(x) [assuming same technology]."

You cannot divide a larger pixel into a number of smaller pixels after the fact. The only way to do it is by making a new sensor. And D800 proves this point wrong, as it has the same or better DR.

This is just a simplification; we are not really splitting pixels to make smaller ones.

If the quote above is read in reverse, then combining a number of smaller pixels into a larger one with the same overall area, if done in software after the fact, decreases the read noise in relation to the signal by the square root. This is the same with the photon noise and is nothing new or strange or unexpected.

You are missing the point here. With the same technology, read noise per pixel is the same, whatever the size of the pixel. Let's say we have a large pixel and four smaller pixels which together cover the same area as the larger pixel, and all are made with the same technology. That means all the pixels have the same read noise, say z.
For the large pixel, we have read noise of z.

For the four smaller pixels (each of which has read noise z) covering the same area, the read noise = sqrt(4) * z = 2z.

So you will get more read noise per unit area from the smaller pixels.

For shot noise, what you wrote is true, as shot noise is not the same for different-sized pixels.
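The aggregation step above (read noises of independent pixels add in quadrature) can be sketched in a few lines of Python; the value of z is purely illustrative:

```python
import math

def aggregate_read_noise(per_pixel_noise, n_pixels):
    """Combined read noise of n pixels with uncorrelated noise (quadrature sum)."""
    return math.sqrt(n_pixels) * per_pixel_noise

z = 3.0  # hypothetical per-pixel read noise in electrons
one_large = aggregate_read_noise(z, 1)   # one large pixel: z
four_small = aggregate_read_noise(z, 4)  # four small pixels over the same area: 2z
```

This is just the sqrt(4) * z = 2z step written out; it assumes the read noises are uncorrelated between pixels.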

To summarize: this thread is great, but the math presented is completely irrelevant, if confusing to many. The read noise and DR do not depend on the pixel size of the camera, but can be improved in the shadows by combining pixels in software.

 Re: In terms of IQ, smaller pixels deliver. In reply to Great Bustard, Apr 4, 2012

Great Bustard wrote:

KLO82 wrote:

All these specific examples do not argue against the fact that for the same level of technology (which means the same read noise per pixel, whatever the size of the pixel is)...

We should cut this right there. Why does same level of tech mean same read noise / pixel regardless of pixel size? Does that also mean same saturation limit regardless of pixel size?

Because read noise per pixel has nothing to do with pixel size, it is dependent on the technology used.

But we can argue that if Canon made 20D sized pixels using the 50D technology, it would have less read noise per unit area than 50D.

OK, let's argue that. First of all, it would have a disastrous effect on the IQ at base ISO, not only because you lose a huge amount of resolution (15 MP vs 8 MP) but also because you lose 1.4 stops of light-gathering ability (the 50D pixel saturates at 37% of the amount of light of the 20D pixel).

Not really; you are comparing pixel-level saturation of the 50D vs the 20D instead of saturation per unit area.

If same read noise, then same saturation, no? If not, why not?

I guess the same-technology assumption will also imply the same saturation capacity per unit area. But of this I am not very sure.

But that is beside my point; I am not talking about saturation capacity here. What I am saying is: what if we use the same technology as the 50D, but make the pixels as large as the 20D's instead of as small as the 50D's?

In other words, trade base ISO IQ for high ISO IQ?

And this sacrifice is all for a read noise of 3.3 electrons at ISO 1600 vs 4.4 electrons for the 50D pixels covering the same area.

Yeah -- like that.

Now, you might argue that the larger pixel would be able to hold more light. Well, guess what? Using the same sensor tech, this will also raise the read noise, and thus, once again, the 50D pixels win out.

How is it raising the read noise? We are not increasing the number of pixels but instead making the 50D pixels larger.

Well, since you're not interested in saturation capacity, I guess it isn't.

Here is the calculation:
20D's read noise (per pixel) @ ISO 1600 = 3.8e ; pixel pitch = 6.26 micron
50D's read noise (per pixel) @ ISO 1600 = 3.3e ; pixel pitch = 4.68 micron

So the 50D's read noise over the same area as one 20D pixel = 6.26/4.68 * 3.3e = 4.4e.
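As a sanity check on the arithmetic above: the linear pitch ratio is exactly the square root of the area ratio, so scaling by 6.26/4.68 is the quadrature sum in disguise. A short sketch, using the numbers quoted in the thread:

```python
import math

# Per-pixel read noise at ISO 1600 and pixel pitch (microns), as quoted in the thread
noise_20d, pitch_20d = 3.8, 6.26
noise_50d, pitch_50d = 3.3, 4.68

# Number of 50D pixels covering the area of one 20D pixel
n = (pitch_20d / pitch_50d) ** 2

# Quadrature sum of n identical read noises is sqrt(n) * noise, and
# sqrt of the area ratio is the linear pitch ratio, hence the scaling used above
noise_50d_same_area = math.sqrt(n) * noise_50d  # ~4.4 e-
```

So the pitch-ratio scaling and an area-based quadrature sum give the same answer; they are the same calculation.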

What's that calculation all about? Why are you scaling read noise? And why are you scaling by the pixel width rather than the pixel area?

First tell me why you wrote this (orange-colored, quoted below). What was the basis of the calculation in which you got the same result as mine?

And this sacrifice is all for a read noise of 3.3 electrons at ISO 1600 vs 4.4 electrons for the 50D pixels covering the same area .

But if we made the pixels of the 50D as large as the 20D's, its read noise would remain the same: 3.3e (as read noise per pixel does not vary with pixel size). So in this case the 50D will have the advantage (3.3e read noise vs 3.8e read noise for the 20D).

Now, compare this to two 50D pixels: sqrt (3.3² + 3.3²) = 4.7 electrons -- half a stop more read noise, but two pixels rather than one. I wonder how NR would change the score? That is, apply NR to a 15 MP photo that has half a stop more read noise until it matches the detail in the 8 MP photo, and then see how the noise compares. Once again, I imagine it will go something like this:

So, let's recap. There was the 50D, with 15 MP. You are advocating a 50D2 with 8 MP, but the pixel stats stay the same. The result would effectively be raising the base ISO a full stop (half the pixels means half the saturation capacity),

Obviously no. Why would halving the pixel count cause half the saturation capacity?

in return for a 1/2 stop decrease in read noise / area.

Do I have that right?

 Re: In terms of IQ, smaller pixels deliver. In reply to KLO82, Apr 4, 2012

Just want to quote from Emil Martinec:
http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/noise-p3.html#pixelsize
(last updated on January 11, 2010)

The above DSLR/digicam comparison outlines the extremes of what may be possible with current or near-term technology, if digicam pixel densities were used to populate full-frame sensors. The fact that a digicam's performance is in the same ballpark as the best DSLR's when referred to fixed spatial scale, suggests that the problems with noise in digicams is not due to their ever smaller pixels, but rather it is due to their continued use of small sensors.

Bottom line: Among the important measures of image quality are signal-to-noise ratio of the capture process, and resolution. It was shown that for fixed sensor format, the light collection efficiency per unit area is essentially independent of pixel size, over a huge range of pixel sizes from 2 microns to over 8 microns, and is therefore independent of the number of megapixels. Noise performance per unit area was seen to be only weakly dependent on pixel size. The S/N ratio per unit area is much the same over a wide range of pixel sizes. There is an advantage to big pixels in low light (high ISO) applications, where read noise is an important detractor from image quality, and big pixels currently have lower read noise than aggregations of small pixels of equal area. For low ISO applications, the situation is reversed in current implementations -- if anything, smaller pixels perform somewhat better in terms of S/N ratio (while offering more resolution). A further exploration of these issues can be found on the supplemental page. Rather than having strong dependence on the pixel size, the noise performance instead depends quite strongly on sensor size -- bigger sensors yield higher quality images, by capturing more signal (photons).

The other main measure of image quality is the resolution in line pairs/picture height; it is by definition independent of the sensor size, and depends only on the megapixel count. The more megapixels, the more resolution, up to the limits imposed by the system's optics.

 Re: Why and how small pixels influence the image quality In reply to KLO82, Apr 4, 2012

KLO82 wrote:

But we can argue that if Canon made 20D-sized pixels using the 50D technology, it would have less read noise per unit area than the 50D. Here, we are making large pixels even though we have the technology to make pixels smaller (e.g. the 16MP 1D4, even though using that technology we could make smaller pixels, as in the 18MP 60D). Or, in other words, with the same technology, larger pixels will have less read noise per unit area than smaller pixels.

Great Bustard wrote:

KLO82 wrote:

To put it simply, if we divide a larger pixel to create x number of smaller pixels, we are increasing the read noise per unit area by sqrt(x) [assuming same technology]. Right?

Only if the read noise / pixel stays the same. Many pixels in Canon sensors (e.g. 20D through the 50D) were scaled -- that is, the read noise of the pixel was proportional to the area of the pixel.

A good example is the 450D and 1100D: same size pixel but different pixel designs. The 450D uses the 40D-generation pixel design to give around 4 e- read noise; the 1100D uses the 50D generation to give 3 e-.
--
Bob

 Re: In terms of IQ, smaller pixels deliver. In reply to KLO82, Apr 4, 2012

KLO82 wrote:

Great Bustard wrote:

KLO82 wrote:

All these specific examples do not argue against the fact that for the same level of technology (which means the same read noise per pixel, whatever the size of the pixel is)...

We should cut this right there. Why does same level of tech mean same read noise / pixel regardless of pixel size? Does that also mean same saturation limit regardless of pixel size?

Because read noise per pixel has nothing to do with pixel size, it is dependent on the technology used.

It is directly dependent on the pixel size, if different-size pixels are achieved by simple scaling. That is because the pixel read noise depends on the charge-to-voltage conversion factor (gain) of the pixel, which in turn depends on the capacitance of the read transistor and the attached floating diffusion, which in turn depends on their size. There is a direct physical dependency which makes it hard to argue that it doesn't exist. Mostly these read noise improvements are achieved using the same semiconductor processes on the same lines. Really the only scope for improvement is reduction in the size of the active part of the pixel.

--

Bob

 Re: In terms of IQ, smaller pixels deliver. In reply to KLO82, Apr 4, 2012

KLO82 wrote:

All these specific examples do not argue against the fact that for the same level of technology (which means the same read noise per pixel, whatever the size of the pixel is)

That is a strange definition of 'same level of technology'. For the same semiconductor process, same doping profiles, and same A-to-D system (which to me makes a better definition of 'level of technology'), and comparing pixels which are scaled replicas of each other, read noise is proportional to pixel area.
See here:
http://www.aptina.com/products/technology/DR-Pix_WhitePaper.pdf

The CG is actually an inverse way of expressing the capacitance of the FD node.
...

(and since capacitance falls by area small FD has lower capacitance and higher CG than large FD)

Higher CG also means that the sensor will realize a reduction in read noise when the system noise is referred back to the FD.
...

For example, if the analog signal chain of an image sensor has 100 μV of noise at the input, then this amount of noise can be referred back to the FD by using the SF gain. For example, 100 μV of noise at the pixel’s output, divided by SF gain of 0.8, becomes 125 μV of noise when referred to the FD. Now, when divided by CG to convert to noise in units of electrons, this shows that higher CG will result in fewer electrons of noise at the FD. In the example above, the low-CG case saw 125 μV / 30 μV/e = 4.2 electrons of noise whereas 125 μV / 150 μV/e = 0.8 electrons of noise was evident in the high-CG case. The higher CG produces not only a larger voltage signal for a given amount of signal electrons, it also makes the noise of the system appear smaller when compared to the measured signal.

Where CG is 'conversion gain' and 'FD' is floating diffusion node, where the photo-electrons are collected.
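The Aptina example quoted above can be reproduced directly. A minimal sketch (the 100 μV system noise, 0.8 SF gain, and 30/150 μV/e conversion gains are the white paper's example values, not measurements of any particular camera):

```python
def input_referred_noise_e(system_noise_uV, sf_gain, cg_uV_per_e):
    """Refer analog-chain noise back to the floating diffusion (FD) via the
    source-follower gain, then convert to electrons using the conversion gain."""
    noise_at_fd_uV = system_noise_uV / sf_gain  # 100 uV / 0.8 = 125 uV at the FD
    return noise_at_fd_uV / cg_uV_per_e

low_cg_noise = input_referred_noise_e(100, 0.8, 30)    # low CG:  ~4.2 e-
high_cg_noise = input_referred_noise_e(100, 0.8, 150)  # high CG: ~0.8 e-
```

The higher conversion gain makes the same downstream voltage noise look smaller when expressed in electrons, which is the whole point of the quoted passage.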
--
Bob

 Re: In terms of IQ, smaller pixels deliver. In reply to bobn2, Apr 4, 2012

Consider it like this:

Let's say we have (relatively large) pixels. After a certain time, we have achieved scaling technology by which we can make pixels smaller (and reduce per-pixel read noise). If we now use the same technology (less read noise per pixel) to make the same-sized large pixel as before, instead of making the smaller pixels, we will have less read noise per unit area than the smaller pixels in the same area.

bobn2 wrote:

KLO82 wrote:

Great Bustard wrote:

KLO82 wrote:

All these specific examples do not argue against the fact that for the same level of technology (which means the same read noise per pixel, whatever the size of the pixel is)...

We should cut this right there. Why does same level of tech mean same read noise / pixel regardless of pixel size? Does that also mean same saturation limit regardless of pixel size?

Because read noise per pixel has nothing to do with pixel size, it is dependent on the technology used.

It is directly dependent on the pixel size, if different-size pixels are achieved by simple scaling. That is because the pixel read noise depends on the charge-to-voltage conversion factor (gain) of the pixel, which in turn depends on the capacitance of the read transistor and the attached floating diffusion, which in turn depends on their size. There is a direct physical dependency which makes it hard to argue that it doesn't exist. Mostly these read noise improvements are achieved using the same semiconductor processes on the same lines. Really the only scope for improvement is reduction in the size of the active part of the pixel.

--

Bob

 Re: In terms of IQ, smaller pixels deliver. In reply to KLO82, Apr 4, 2012

KLO82 wrote:

Consider it like this:

Let's say we have (relatively large) pixels. After a certain time, we have achieved scaling technology by which we can make pixels smaller (and reduce per-pixel read noise). If we now use the same technology (less read noise per pixel) to make the same-sized large pixel as before, instead of making the smaller pixels, we will have less read noise per unit area than the smaller pixels in the same area.

Not so simple, because the FW is determined by the output swing of the read transistor. If you use the very high conversion gain you get with a tiny read transistor in a big pixel, you reduce the read noise per unit area, but you also reduce the photoelectron capacity per unit area, in exact proportion. Essentially, per-pixel DR is invariant under scaling. What this does is effectively raise the 'base ISO' of the sensor, and is exactly what Nikon did with the D3 and D3s. It trades bright-light DR for low-light DR.
This is also discussed in the Aptina paper I linked:

Generally, the amount of voltage swing allowed in the pixel is fixed by the overall sensor design. This fixed range of voltage means that CG can have a distinct impact on the sensor’s FW. For example, in a typical design, the power supply is fixed at 2.8 V and the analog signal chain has a fixed window of 1 V allowed voltage swing at the input. For example, if 1 V of swing is allowed at the pixel’s output and the source follower (SF) gain is 0.8, then the allowed voltage swing at the FD is equal to 1 V / 0.8 = 1.25 V.

As shown above, the capacitance of the FD node will determine the amount of charge that can be detected within the fixed voltage operating range. Assuming a 1.25 V FD voltage swing: if a sensor has CG = 30 μV/e, then the largest FW capacity achievable will be 1.25·10⁶ μV / 30 μV/e ≈ 42,000 electrons. Alternatively, if a pixel has a much higher CG equal to 150 μV/e, then the FW capacity would be limited by the fixed voltage swing:

1.25·10⁶ μV / 150 μV/e ≈ 8,000 electrons.
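The full-well arithmetic from the quoted Aptina example can be written out as a short sketch (values are the white paper's example numbers: 1 V allowed output swing, SF gain 0.8, conversion gains of 30 and 150 μV/e):

```python
def full_well_e(fd_swing_V, cg_uV_per_e):
    """Full-well capacity in electrons allowed by a fixed FD voltage swing."""
    return fd_swing_V * 1e6 / cg_uV_per_e  # convert volts to microvolts

fd_swing = 1.0 / 0.8  # 1 V at the pixel output / SF gain 0.8 = 1.25 V at the FD

fw_low_cg = full_well_e(fd_swing, 30)    # ~42,000 e-
fw_high_cg = full_well_e(fd_swing, 150)  # ~8,000 e-
```

This is the trade-off Bob describes: the same high CG that lowers input-referred read noise also caps the full well, so per-pixel DR stays roughly constant under scaling.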
--
Bob

 Re: Why and how small pixels influence the image quality In reply to schwoofi, Apr 4, 2012

But today's pixel is not the same as yesterday's pixel; you can't really compare that way.

schwoofi wrote:

http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/

Example Nikon D3s vs D800 at ISO 25600

Nikon D3s

12MP
Saturation = 743
Photon noise (18% grey) = sqrt(743*0.18) = 11.6

SNR (18% grey) = (743*0.18) / sqrt( 11.6² + 3.2² ) = 11.1 = 20.9dB (DxO=20.8)
DR = log2( 743 / 3.2 ) = 7.9 (DxO=7.63)

Nikon D800

36MP
Saturation = 252
Photon noise (18% grey) = sqrt(252*0.18) = 6.7
SNR (18% grey) = (252*0.18) / sqrt( 6.7² + 3.7² ) = 5.9 = 15.4dB (DxO=15.2)
DR = log2( 252 / 3.7 ) = 6.1 (DxO=5.91)

Scaled to 12MP
Saturation = 3*252 = 756
Photon noise (18% grey) = sqrt(756*0.18) = 11.7
Read noise = sqrt( 3* 3.7² ) = 6.4
SNR (18% grey) = (756*0.18) / sqrt( 11.7² + 6.4² ) = 10.2 = 20.2dB
DR = log2( 756 / 6.4 ) = 6.8 (more than one stop lower than D3s)

(The DxO measurements, when scaled to 8MP, are SNR=21.8, DR=7)
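The quoted calculation can be checked with a short script. This just reproduces the post's own arithmetic (saturation and read noise values as quoted; photon noise variance equals the signal); note the binned D800 DR comes out at ≈6.9 stops, which the post rounds down to 6.8:

```python
import math

def snr_db(sat, read_noise, grey=0.18):
    """SNR at an 18% grey signal, in dB; photon noise variance equals the signal."""
    signal = sat * grey
    return 20 * math.log10(signal / math.sqrt(signal + read_noise ** 2))

def dr_stops(sat, read_noise):
    """Engineering dynamic range in stops."""
    return math.log2(sat / read_noise)

d3s_snr, d3s_dr = snr_db(743, 3.2), dr_stops(743, 3.2)  # ~20.9 dB, ~7.9 stops

# D800 binned 3:1 down to 12 MP: saturation triples, read noise adds in quadrature
sat, rn = 3 * 252, math.sqrt(3) * 3.7
d800_snr, d800_dr = snr_db(sat, rn), dr_stops(sat, rn)  # ~20.2 dB, ~6.9 stops
```

The binned D800 closes most of the mid-tone SNR gap but stays about a stop behind the D3s in engineering DR, which is exactly the post's conclusion.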

Conclusion

There definitely is an effect, even if scaled down.

Smaller pixels negatively affect DR and the SNR, the latter mostly in darker areas.

Best regards

--
Khun_K's gear list: Phase One Capture One Pro
 To make it even more confusing In reply to KLO82, Apr 4, 2012

To make it even more confusing, one could divide the read noise of a certain sensor into one part that depends on the signal strength and one part that doesn't. With this we could accurately model the behavior of the different sensors from Canon and Nikon (and also Sony) from low ISO to high ISO. At high ISO the signal strength is low and the part of the read noise that does not depend on the signal strength dominates.

The Sony sensor is in a way unique, in that the part of its read noise that is linearly dependent on the signal strength is very close to zero.

So, what I have concluded fits Sony-style sensors better than it applies to the Canon and Nikon style sensors/readout circuits.

Best regards

schwoofi's gear list: Sigma DP1 Merrill Sigma DP3 Merrill Canon EOS 5D Mark III Canon EF 50mm f/1.4 USM Canon EF-S 18-55mm f/3.5-5.6 +4 more
 I am concentrating on the low light - high ISO case In reply to bobn2, Apr 4, 2012

In low light, high ISO situations the part of the read noise that is independent from the signal strength is dominant:

e.g. for the EOS 7D, the read noise is constant from ISO400 upwards:

http://www.sensorgen.info/CanonEOS_7D.html

Best regards

 Re: In terms of IQ, smaller pixels deliver. In reply to bobn2, Apr 4, 2012

bobn2 wrote:

KLO82 wrote:

Consider it like this:

Let's say we have (relatively large) pixels. After a certain time, we have achieved scaling technology by which we can make pixels smaller (and reduce per-pixel read noise). If we now use the same technology (less read noise per pixel) to make the same-sized large pixel as before, instead of making the smaller pixels, we will have less read noise per unit area than the smaller pixels in the same area.

Not so simple, because the FW is determined by the output swing of the read transistor. If you use the very high conversion gain you get with a tiny read transistor in a big pixel, you reduce the read noise per unit area, but you also reduce the photoelectron capacity per unit area, in exact proportion. Essentially, per-pixel DR is invariant under scaling. What this does is effectively raise the 'base ISO' of the sensor, and is exactly what Nikon did with the D3 and D3s. It trades bright-light DR for low-light DR.

I can see that things are not very simple here, but staying with larger pixels in this case resulted in a camera which is even more suitable for low light shooting.

This is also discussed in the Aptina paper I linked

Generally, the amount of voltage swing allowed in the pixel is fixed by the overall sensor design. This fixed range of voltage means that CG can have a distinct impact on the sensor’s FW. For example, in a typical design, the power supply is fixed at 2.8 V and the analog signal chain has a fixed window of 1 V allowed voltage swing at the input. For example, if 1 V of swing is allowed at the pixel’s output and the source follower (SF) gain is 0.8, then the allowed voltage swing at the FD is equal to 1 V / 0.8 = 1.25 V.

As shown above, the capacitance of the FD node will determine the amount of charge that can be detected within the fixed voltage operating range. Assuming a 1.25 V FD voltage swing: if a sensor has CG = 30 μV/e, then the largest FW capacity achievable will be 1.25·10⁶ μV / 30 μV/e ≈ 42,000 electrons. Alternatively, if a pixel has a much higher CG equal to 150 μV/e, then the FW capacity would be limited by the fixed voltage swing:

1.25·10⁶ μV / 150 μV/e ≈ 8,000 electrons.
--
Bob

 Re: In terms of IQ, smaller pixels deliver. In reply to KLO82, Apr 4, 2012

KLO82 wrote:

bobn2 wrote:

KLO82 wrote:

Consider it like this:

Let's say we have (relatively large) pixels. After a certain time, we have achieved scaling technology by which we can make pixels smaller (and reduce per-pixel read noise). If we now use the same technology (less read noise per pixel) to make the same-sized large pixel as before, instead of making the smaller pixels, we will have less read noise per unit area than the smaller pixels in the same area.

Not so simple, because the FW is determined by the output swing of the read transistor. If you use the very high conversion gain you get with a tiny read transistor in a big pixel, you reduce the read noise per unit area, but you also reduce the photoelectron capacity per unit area, in exact proportion. Essentially, per-pixel DR is invariant under scaling. What this does is effectively raise the 'base ISO' of the sensor, and is exactly what Nikon did with the D3 and D3s. It trades bright-light DR for low-light DR.

I can see that things are not very simple here, but staying with larger pixels in this case resulted in a camera which is even more suitable for low light shooting.

Yes, but less suitable for bright-light shooting, in that it has a reduction in achievable bright/mid-tone SNR compared with a camera with smaller pixels. The range of optimisation is fairly limited. A gain in one part of performance is generally balanced by a fall in another. So, for instance, the limitation here is that of the FW by the output swing of the read transistor. If a big pixel has high conversion gain, then its per-area photoelectron capacity is limited. There is a thought that if D3s technology had been used to make a 6MP sensor, it would have had fantastic low light performance. Well, yes it would, 1/2 stop better than the D3s, but the cost would be that it would have a maximum photoelectron density half that of the D3s, and thus been a full stop behind in low light SNR - plus of course the resolution would be 30% less.

One way round the voltage swing issue is to raise the supply voltage of the sensor, and I suspect that this is what Nikon has done with the D3s. The problem with that is increased power dissipation (as the square of the supply voltage) which is not good for thermal noise, especially in cameras with LV and video capability - possibly why the D3s's video capability was limited.

In the end the bulk of the technological progress made in the semiconductor part of the sensor is geometry shrinkage, and that brings with it a reduction in the optimum pixel size, and thus an increase in pixel counts. Obviously the designers have the freedom to move a certain amount either side of optimal to achieve specific goals such as low light performance, but this is always at the expense of another part of performance.

As a trend, overall performance increases and pixel size reduces. Whether you want to attribute them both to the cause of geometry shrinks is just a matter of point of view.
--
Bob

 Re: In terms of IQ, smaller pixels deliver. In reply to bobn2, Apr 4, 2012

Surely I am missing something obvious? You are saying that it would be a better low light performer, but low light SNR would be lower??

bobn2 wrote:

There is a thought that if D3s technology had been used to make a 6MP sensor, it would have had fantastic low light performance. Well, yes it would, 1/2 stop better than the D3s, but the cost would be that it would have a maximum photoelectron density half that of the D3s, and thus been a full stop behind in low light SNR - plus of course the resolution would be 30% less.

 Re: He throws away the shadow areas for his DR calculation. In reply to schwoofi, Apr 4, 2012

schwoofi wrote:

Kaj E wrote:

Real difference in DR between D4 and D800 with highlight headroom included:

http://home.comcast.net/~NikonD70/Charts/PDR.htm

His definition of Photographic Dynamic Range is a low endpoint with an SNR of 20.

He throws away the shadow areas for his DR calculation.

Yes he "throws away" shadow DR if you compare it to Engineering DR. His Photographic DR definition is more restrictive than the Engineering DR where shadow detail stops at signal/noise =1. This is much worse than what a photographer typically finds acceptable.

Bill Claff's Photographic DR is closer to what photographers would call acceptable in an image.

The point is that "photographic DR" includes highlight headroom, which DxO Mark "throws away". Highlight headroom has the highest S/N and is definitely usable.

--

Kind regards
Kaj
http://www.pbase.com/kaj_e
WSSA member #13

It's about time we started to take photography seriously and treat it as a hobby.- Elliott Erwitt

 Re: Yes but, if you shoot high ISO ... In reply to schwoofi, Apr 4, 2012

schwoofi wrote:

Yes but, if you shoot high ISO you normally need the dynamic range to the low end and not in the highlights.

As I said in my reply above, how much noise you find acceptable in the shadows is a personal preference. I doubt that many photographers are happy with S/N=1.

For the highest DR you definitely want to utilize all the highlight DR with the best S/N.


 Re: In terms of IQ, smaller pixels deliver. In reply to KLO82, Apr 4, 2012

If I am understanding correctly (I must admit I also find Bob's statement confusing), if the same electronics is coupled to a larger collection area photodiode, then one achieves the same read noise and saturation capacity in electrons, but twice the collection area; so for a given exposure (light per unit area) the sensor will saturate twice as fast -- in effect what has been done is to double the base ISO. If on the other hand, one adjusts the electronics so that the saturation capacity remains the same per unit area, then the read noise goes up in proportion (doubles). But read noise adds in quadrature (assuming it is uncorrelated), so two smaller pixels have sqrt(2) not double the read noise, so the smaller pixels win.
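The second branch of ejmartin's argument (saturation per unit area held constant, read noise scaling in proportion) can be put in numbers. A minimal sketch; the value of r is purely illustrative:

```python
import math

r = 3.0  # hypothetical read noise of one small pixel, in electrons

# One big pixel with electronics adjusted so that saturation per unit area
# stays the same: per the argument above, its read noise doubles
big_pixel_noise = 2 * r

# Two small pixels covering the same area: uncorrelated noise adds in quadrature
two_small_noise = math.sqrt(r ** 2 + r ** 2)  # sqrt(2) * r

# sqrt(2) * r < 2 * r, so the smaller pixels win on read noise per area
```

This is the crux of the disagreement in the thread: whether the big pixel's read noise really doubles under that constraint (see the reply about transistor geometry below the quote).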

KLO82 wrote:

Surely I am missing something obvious? You are saying that it would be a better low light performer, but low light SNR would be lower??

bobn2 wrote:

There is a thought that if D3s technology had been used to make a 6MP sensor, it would have had fantastic low light performance. Well, yes it would, 1/2 stop better than the D3s, but the cost would be that it would have a maximum photoelectron density half that of the D3s, and thus been a full stop behind in low light SNR - plus of course the resolution would be 30% less.

-- hide signature --
 Re: In terms of IQ, smaller pixels deliver. In reply to KLO82, Apr 4, 2012

KLO82 wrote:

Surely I am missing something obvious? You are saying that it would be a better low light performer, but low light SNR would be lower??

bobn2 wrote:

There is a thought that if D3s technology had been used to make a 6MP sensor, it would have had fantastic low light performance. Well, yes it would, 1/2 stop better than the D3s, but the cost would be that it would have a maximum photoelectron density half that of the D3s, and thus been a full stop behind in low light SNR - plus of course the resolution would be 30% less.

I am pretty sure Bob meant bright light SNR in that last sentence, ie, you can decrease read noise (which matters for really high ISO) at the expense of saturation capacity. And by reducing your saturation capacity, you reduce the maximum achievable SNR at base ISO.

 Re: Read noise doesn't double for double capacitance In reply to ejmartin, Apr 4, 2012

ejmartin wrote:

If I am understanding correctly (I must admit I also find Bob's statement confusing), if the same electronics is coupled to a larger collection area photodiode, then one achieves the same read noise and saturation capacity in electrons, but twice the collection area; so for a given exposure (light per unit area) the sensor will saturate twice as fast -- in effect what has been done is to double the base ISO. If on the other hand, one adjusts the electronics so that the saturation capacity remains the same per unit area, then the read noise goes up in proportion (doubles). But read noise adds in quadrature (assuming it is uncorrelated), so two smaller pixels have sqrt(2) not double the read noise, so the smaller pixels win.

But, Bob is wrong about that because he is keeping the transistor shape (width to length ratio) constant when increasing its capacitance. The read noise in e- does not double when you double the capacitance of the source follower with the technology held constant. The reason is that the technology node defines the gate length, so to increase capacitance you don't scale all the transistor dimensions. Instead, to increase the capacitance you increase gate width while holding the length constant. Picture the two transistors from the small pixels joined in parallel side by side to make a wider transistor. The new transistor has lower noise (lower resistance (R) = lower 4kTR thermal noise) than the two small transistors. The net effect is that with the saturation held constant, the noise per area is unchanged by pixel size. OTOH, with saturation capacity per area decreased by just using the small transistors in the large pixel, the read noise per area is also decreased.

 Re: In terms of IQ, smaller pixels deliver. In reply to KLO82, Apr 4, 2012

KLO82 wrote:

Surely I am missing something obvious? You are saying that it would be a better low light performer, but low light SNR would be lower??

No, it's a mistype; it would be 1/2 stop better in low light (read-noise-wise; of course the shot noise would depend on the QE, as ever), so the half-stop gain would be in the shadows. Absolute bright-light performance would be 1 stop worse, due to the halving of saturation photoelectron density.

bobn2 wrote:

There is a thought that if D3s technology had been used to make a 6MP sensor, it would have had fantastic low light performance. Well, yes it would, 1/2 stop better than the D3s, but the cost would be that it would have a maximum photoelectron density half that of the D3s, and thus been a full stop behind in low light SNR - plus of course the resolution would be 30% less.

--

Bob
