12mp: beginning of the end of mp race?

  • Thread starter: mschf
In terms of equivalent photoelectrons, the minimum read noise is found to be the same over more than an order of magnitude difference in pixel area between DSLRs and compact digicams in a given year. If bobn2 were correct, the LX3 would have 1/4 e- read noise. And yes, a photographer is expected to turn up the DSLR ISO setting to achieve minimum read noise when there is a limited amount of light. It is true, though, that increasing the parallelism in the read circuitry can reduce read noise without requiring an improvement in transistor technology. This is explained in the HP sensor noise paper as well as in my first post on telegraph noise. So if we increased the number of columns, read channels, and A-D converters in a sensor by some factor but kept the number of rows the same, the read noise per area would not change.
 
That's going back to the start of this discussion. If the read noise
per pixel is the same, the read noise goes down, because you're
averaging over a larger number of pixels (there's a sqrt(N)/N or
1/sqrt(N) factor in there). Having accepted that, DSP then argues
that per-pixel read noise goes up as the pixel scales, for various
reasons.
No, I showed that when the per pixel read noise stays the same the
noise per area goes up.
You didn't show it, you stated it.
You changed the formula I posted to have the
read noise per pixel going down:
Yes, because it does. That bit is not to do with the longer discussion. If you integrate N pixels with SNR s/n over an area, the integrated signal is N*s/N = s, the integrated noise is sqrt(N*n^2)/N = n/sqrt(N), so the new SNR = sqrt(N)*s/n. Obviously, the larger number of pixels a sensor squeezes into a given area, the larger N will be and the greater will be the SNR.
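The sqrt(N) averaging arithmetic is easy to check numerically; here is a quick Monte Carlo sketch (the signal and noise values are illustrative assumptions, not figures from any camera):

```python
# Quick Monte Carlo sketch of the averaging argument: N pixels, each
# with signal s and independent Gaussian read noise of sigma n, are
# averaged together. The SNR of the average should improve as sqrt(N).
# Signal/noise values here are illustrative, not from any camera.
import math
import random

def averaged_snr(N, s=10.0, n=5.0, trials=20000):
    """Estimate the SNR of the average of N noisy pixels."""
    random.seed(0)  # deterministic for repeatability
    samples = [sum(s + random.gauss(0.0, n) for _ in range(N)) / N
               for _ in range(trials)]
    mean = sum(samples) / trials
    var = sum((x - mean) ** 2 for x in samples) / (trials - 1)
    return mean / math.sqrt(var)

snr_1 = averaged_snr(1)    # single pixel: SNR ~ s/n = 2
snr_16 = averaged_snr(16)  # average of 16: SNR ~ sqrt(16) * s/n = 8
```

The measured ratio snr_16/snr_1 comes out very close to sqrt(16) = 4.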

--
Bob

 
In terms of equivalent photoelectrons the minimum read noise is found
to be the same over more than an order of magnitude difference in
pixel area between DSLR and compact digicams in a given year.
Err, no. Current DSLRs are about 5 microns = 25 sq microns; P&S around 2 microns = 4 sq microns. That's a factor of less than an order of magnitude, not more.
If
bobn2 was correct the LX3 would have 1/4 e- read noise.
Assuming it had the same technology, which you seem to feel you have established, but have not.
And yes a
photographer is expected to turn up the DSLR ISO setting to achieve
minimum read noise when there is a limited amount of light. It is
true though that increasing the parallelism in read circuitry can
result in a reduction in read noise without requiring an improvement
in transistor technology. This is explained in the HP sensor noise
paper as well as my first post on telegraph noise.
Again, assuming that sensor design is settling-time limited, which you have yet to show; it is simply an assertion.
So if we increased
the number of columns and read channels and A-D converters in a
sensor by some factor but kept the number of rows the same the read
noise per area would not change.
No, assuming both of the above, and an equivalent readout rate (and I haven't even begun to calculate whether this factor is just what is needed to cancel the read-noise averaging effect, but I guess that it isn't). In fact, readout speeds tend to be governed by processor speeds, so across a generation they are about constant, making high pixel count cameras slower.

In practice, look at the figures for real sensors. It is clear that the trend in read noise is down even as the trend in pixel density is up.

--
Bob

 
You changed the formula I posted to have the
read noise per pixel going down:
Yes, because it does. That bit is not to do with the longer
discussion. If you integrate N pixels with SNR s/n over an area, the
integrated signal is N*s/N = s, the integrated noise is sqrt(N*n^2)/N
= n/sqrt(N), so the new SNR = sqrt(N)*s/n. Obviously, the larger
number of pixels a sensor squeezes into a given area, the larger N
will be and the greater will be the SNR.
The noise per pixel in photoelectrons is the same, as shown by measurements (2.5 e- to 3.5 e-). The signal per pixel in photoelectrons is reduced by N. Averaging over N pixels, the net SNR is reduced by sqrt(N).
 
Note the title of the post. The minimum read noise per pixel for the 5D2 (6.4 micron pixels) in equivalent photoelectrons is about 2.5 e-. The read noise per pixel in photoelectrons for one of the best compact digicams, the LX3 (2 micron pixels), is about the same, not 1/4 e- as you predict. Your theory doesn't match reality.
 
Note the title of the post. The minimum read noise per pixel for the
5D2 (6.4 micron pixels) in equivalent photoelectrons is about 2.5 e-.
The read noise per pixel in photoelectrons for one of the best
compact digicams the LX3 (2 micron pixels) is about the same, not 1/4
e- as you predict. Your theory doesn't match reality.
More circles. You assert the readout technology is the same. I suspect that it isn't. But we've been there before.
--
Bob

 
No, my formula is correct.
It's wrong, because if you scale the cell, the capacitance scales
(there, I've said it three times now), wherever it is. This much was
confirmed by Eric Fossum in a thread here somewhere.
Eric merely stated that the floating diffusion capacitance scales
with source follower gate area: but we both know that already. The
point is that you cannot just scale a cell from a 5D2 down to 2
micron pitch by shrinking all dimensions uniformly because the
resulting transistors would not work.
Can you justify that assertion? There was a considerable discussion
with Eric on the usenet thread trying to find the limits of the
scalability of that SF transistor (and it is that transistor which
ultimately dictates the limits of scalability). As I remember, it was
below 2 microns pixel pitch, and the problem was not that it 'did not work'
but that noise sources such as random telegraph signal noise
(essentially quantum noises associated with very small numbers of
charge carriers) began to become significant. However, the discussion
of the limit of scalability, although interesting, is somewhat
different from the 'small pixels = more noise' fallacy, and also
merely puts a limit on where read noise stops scaling down with pixel
pitch. It does not invalidate the idea. Certainly also, these limits
have not made Eric think it's not worthwhile investigating the
potential of tiny, tiny pixels.
Is this the Eric Fossum thread you're thinking of?

http://forums.dpreview.com/forums/read.asp?forum=1000&message=28818064

--
Daniel
 
I stated above that the read noise per pixel in photoelectrons was independent of pixel area (based mainly on reported measurements) and that therefore the read noise per area increased by the square root of the pixel density. Yet bobn2 stated that by scaling laws the read noise per area decreases by the square root of the area. Thanks to Bob's posts I now understand his position (even if he doesn't understand mine). Let's see if I can derive both results given different assumptions. First, here is a paper that gives a formula for the noise in an amplifier stage:
http://www.stw.tu-ilmenau.de/~ff/beruf_cc/cmos/cmos_noise.pdf

That formula shows the noise is proportional to sqrt(2*k*T/C), where k is Boltzmann's constant, T is the absolute temperature, and C is the amplifier's capacitance. Let's derive that result:

As Bob pointed out, the amplifier stages can have unity voltage gain, so we can simply examine the voltage noise of the amplifiers. The standard form of such an amplifier is the source follower. I will use a simplified model of the transistor as a thermal resistor R in series with a switch. The switch turns on until the output voltage equals the gate voltage, then the switch turns off. The output of the transistor is connected to the next stage, which appears as a capacitive load C. The current therefore flows for a time proportional to the circuit time constant R*C until the output voltage matches the gate voltage. At any instant the transistor is on, the signal current is proportional to the voltage across the transistor divided by its resistance, and the noise current is given by sqrt(4*k*T/R). Notice that the signal-to-noise ratio scales by sqrt(1/R) if the transistor resistance is changed. The noise electrons are uncorrelated with time, so over time they average in a root-mean-square sense. The noise charge is therefore proportional to sqrt(R*C*4*k*T/R) and the noise voltage to sqrt(R*C*4*k*T/R)/C, which is proportional to the value HP has. Note I did not derive the absolute noise, just its proportionality, and I didn't add the noise of the two inputs of the differential amplifiers, so the factor of 4 versus 2 is not a concern.
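To put rough numbers on that kT/C proportionality, here is a small sketch expressing the noise in equivalent electrons (the capacitance values are illustrative assumptions, not figures for any particular sensor):

```python
# kT/C noise expressed in equivalent electrons. The voltage noise on a
# capacitor goes as sqrt(k*T/C); the corresponding charge noise goes as
# sqrt(k*T*C), and dividing by the electron charge gives electrons.
# Capacitance values below are assumed for illustration only.
import math

K_B = 1.380649e-23     # Boltzmann's constant, J/K
Q_E = 1.602176634e-19  # electron charge, C

def ktc_noise_electrons(capacitance, temperature=300.0):
    """RMS kT/C noise on a capacitor, in equivalent electrons."""
    return math.sqrt(K_B * temperature * capacitance) / Q_E

for c_fF in (1.0, 4.0, 16.0):
    n_e = ktc_noise_electrons(c_fF * 1e-15)
    print(f"C = {c_fF:4.1f} fF -> {n_e:5.1f} e- rms")
```

Note the two views of the same noise: the voltage noise falls as sqrt(k*T/C) while the equivalent charge noise, and hence the electron count, grows as sqrt(k*T*C), so quadrupling C doubles the noise in electrons while halving it in volts.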

So where did the expected SNR of the amplifier, which is proportional to sqrt(1/R), go to? The value of R was canceled out by the amount of time averaging that occurred in the capacitor. The transistor conductivity will still exert its influence indirectly, though, through the amount of capacitance that can be driven. Since the readout time is proportional to the time constant R*C, for a given read time the capacitance may be increased in proportion to 1/R. Since the noise scales by sqrt(1/C), this means for a given frame rate we can set the capacitance so the noise scales by sqrt(R).
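That tradeoff can be written out directly: for a fixed settling time t, the drivable capacitance is C = t/R, so the sampled noise sqrt(k*T/C) = sqrt(k*T*R/t) scales as sqrt(R). A minimal sketch in arbitrary units (kT set to 1, following the proportionality argument above):

```python
# For a fixed settling time t = R*C, the capacitance a follower can
# drive is C = t/R, so the sampled noise sqrt(k*T/C) = sqrt(k*T*R/t)
# scales as sqrt(R). Arbitrary units (kT = 1); proportionality only,
# not absolute values.
import math

def follower_noise(R, t, kT=1.0):
    """Relative sampled noise of a follower that must settle in time t."""
    C = t / R          # largest capacitance that settles in time t
    return math.sqrt(kT / C)

n_full = follower_noise(R=1.0, t=1.0)
n_half = follower_noise(R=0.5, t=1.0)  # halve R at the same read time
# Halving R doubles the drivable C and cuts the noise by sqrt(2).
```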

Now we are ready to see what happens when we change the pixel size. First take my example, where the transistor technology is fixed, so the gate length and transistor conductivity per unit of gate width are fixed. We divide up a large pixel into N small pixels. We do this by reducing each pixel's source follower transistor gate width by the factor N, so its photoelectric charge to gate capacitance, and thus its signal voltage, stays the same. I will also assume that each of these N pixels is connected to the same output column wire and that the frame rate stays the same, so the time available to read out a pixel is reduced by the factor N. The first source follower transistor's resistance has increased by the factor N and the readout time available has decreased by N, so the capacitance the first transistor drives must be decreased by N^2. The first stage noise is thus increased by a factor of N. Now the second stage capacitance is partly due to its transistor gate width, so its output capacitance will also need to be reduced, and so on throughout all the gain stages. From stage to stage the capacitance can be increased by each stage's current gain, and the noises of the various stages add incoherently (in root-mean-square fashion), so given sufficient gain at each stage the noise is dominated by the first amplifier. Therefore the total read noise per pixel scales approximately by N. If we then average the N pixels back together to determine the effective read noise per sensor area, since the pixel noises add incoherently we get a net increase in noise per area of sqrt(N).

Next consider the case where the N small pixels all have their own parallel output wires but we still hold the frame rate and gate length constant. Each pixel gate width is again reduced by the factor N. Now the first transistor can drive a capacitance that scales by 1/N so its noise scales by sqrt(N). Each successive transistor stage has its gate width also scaled by 1/N and so does the capacitance it drives so the noise of each stage and thus the total noise scales by sqrt(N). Now when we add the N pixels back together we find the sensor noise per unit area compared to the large pixel case is constant.

Last consider Bob's case. The transistor technology improves, so we can reduce both the transistor length and width by sqrt(N) to hold the voltage constant when shrinking the pixel by an area factor of N. Now reducing the transistor gate length increases its conductance, and let's say that increase compensates for the reduced gate width so the transistor resistance stays constant. (This is more or less true, at least for modest scale changes.) Next let's scale the frame readout time by the number of pixels so the readout time per pixel stays constant. Now the first amplifier can drive the same output capacitance as the original large pixel, and so on for all gain stages, so the voltage read noise per pixel is unchanged. Then averaging the N small pixels together yields a net reduction in read noise per area of sqrt(N).
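The three cases can be collected into one small sketch; it simply encodes the scaling factors argued above and is not a model of any real sensor:

```python
# The three scaling cases above, encoded directly. N is the factor by
# which the pixel count in a fixed area increases. These are just the
# scaling factors from the argument, not a model of a real sensor.
import math

def read_noise_scaling(case, N):
    """Return (per-pixel, per-area) read-noise factors for each case."""
    per_pixel = {
        "shared_column": float(N),   # case 1: fixed process, shared column
        "parallel": math.sqrt(N),    # case 2: fixed process, N parallel channels
        "scaled_process": 1.0,       # case 3 (Bob's): process scales with pixel
    }[case]
    # Averaging the N pixels back together divides the noise by sqrt(N).
    per_area = per_pixel / math.sqrt(N)
    return per_pixel, per_area

for case in ("shared_column", "parallel", "scaled_process"):
    pp, pa = read_noise_scaling(case, 4)
    print(f"{case:14s} N=4: per-pixel x{pp:.2f}, per-area x{pa:.2f}")
```

For N = 4 the per-area factors come out as 2 (up by sqrt(N)), 1 (constant), and 0.5 (down by sqrt(N)), matching the three conclusions above.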

The ultimate limit on pixel density therefore depends on which of these cases apply along with the ability to maintain quantum efficiency (which doesn't seem to be a problem especially for sensors with back side illumination).
 
I know that Bob is going to want to attack my reasoning but I only included the derivation of HP's noise formula to show where it comes from; so if I wasn't rigorous that doesn't mean that their formula is wrong. I also really do understand Bob's reasoning which he amply posted above so he doesn't need to repeat it again.

Lastly I would like to thank Bob for helping me clarify my understanding of how read noise changes with pixel scaling.
 
The Sigma/Foveon argument all along is that pixels by themselves don't really reflect resolution, and the resolving power of the Foveon sensor is definitely greater than a Bayer CFA sensor with the same MP.

I personally think that an 8 MP APS-C Foveon sensor would be all you'd need or want, and you'd outresolve Bayer sensors up to around 20 MP without all of the extra pixel baggage in terms of file size and processing time. I can up-rez a good Foveon image to double its resolution, e.g., 4.3 MP to 18+ MP, and get outstanding large prints.

Sigma, are you listening? When are you going to come out with an 8 MP camera? Better yet, when are you going to come out with an 8 to 10 MP full-frame Foveon-based dSLR?
--
'Do you think a man can change his destiny?'
'I think a man does what he can until his destiny is revealed.'
 
My preceding post hit the word limit so I needed to leave off some clarifications. Notice that although I mentioned read noise in equivalent photoelectrons at the beginning of the post the rest of the post deals strictly with voltage noise.

In a given amplifier stage I reference the transistor's noise to its output voltage because it is the output capacitor that integrates its noise. It is standard convention though to always reference an amplifier's noise to its input and that looks like what HP has done in their paper. I thought that would add confusion though because then the voltage noise appears referenced to the input capacitance while that stage's time constant is set by its output capacitance.

I glossed over the noise in the second and succeeding stages for brevity. First let's derive the second stage noise of my first case, assuming that it is also a simple source follower, so anything more complicated like the correlated double sampler is somewhere downstream of it. The capacitance of the second stage can therefore be determined strictly by its gate capacitance, so its width must scale by 1/N^2 and its resistance increases by N^2. Its available sample time has gone down by a factor of N, so its output capacitance scales by 1/N^3 and its noise scales by sqrt(N^3). The result is that as the density scales by N, the effective second stage noise degrades even more rapidly than the first stage noise. Because of gain, though, the total noise still depends primarily on the noise in the first stage, so the total effective read noise per area degrades by a factor slightly more than sqrt(N).

Next consider the second stage noise in the second case. For this case, to keep each stage's sampling time unchanged, we assume that all stages have increased parallelism by a factor of N. The second stage capacitance in this case was scaled by 1/N, so the gate width also scales by 1/N and its resistance increases by N. Since in this case the sample time is unchanged by scaling, the second stage output capacitance scales by 1/N. The second stage noise therefore scales by sqrt(N), which is the same as the first stage. We can continue in the same fashion, scaling the capacitances and gate widths of all stages by 1/N, so in this case the total noise per pixel scales by exactly sqrt(N) and the effective read noise per area is exactly constant.

Finally consider the second stage noise for Bob's case. For this case the second stage capacitance is unchanged but the gate length has decreased by a factor of sqrt(N). The gate width therefore increases by a factor of sqrt(N) and the amplifier's resistance decreases by a factor of N. Since for this case the sampling time is unchanged we can increase the second stage output capacitance by a factor of N so the second stage voltage noise per pixel decreases by a factor of sqrt(N). The total effective read noise per area for this case therefore improves by a factor of slightly more than sqrt(N).
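Encoding this second-stage argument the same way (noise ~ sqrt(1/C_out), with the output-capacitance scalings argued above; a sketch of the reasoning, not a model of any specific readout chain):

```python
# Second-stage noise factors for the three cases, using
# noise ~ sqrt(1/C_out) and the output-capacitance scalings from the
# argument above. A sketch of the reasoning, not a device model.
import math

def second_stage_noise_factor(case, N):
    """Second-stage voltage-noise factor vs. pixel-count factor N."""
    c_out_factor = {
        1: N ** -3,   # case 1: C_out scales by 1/N^3 -> noise x N^1.5
        2: 1.0 / N,   # case 2: C_out scales by 1/N   -> noise x sqrt(N)
        3: float(N),  # case 3: C_out scales by N     -> noise x 1/sqrt(N)
    }[case]
    return 1.0 / math.sqrt(c_out_factor)
```

For N = 4 this gives factors of 8, 2, and 0.5 respectively, i.e. sqrt(N^3), sqrt(N), and 1/sqrt(N).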
 
No, my formula is correct.
It's wrong, because if you scale the cell, the capacitance scales
(there, I've said it three times now), wherever it is. This much was
confirmed by Eric Fossum in a thread here somewhere.
Eric merely stated that the floating diffusion capacitance scales
with source follower gate area: but we both know that already. The
point is that you cannot just scale a cell from a 5D2 down to 2
micron pitch by shrinking all dimensions uniformly because the
resulting transistors would not work.
Can you justify that assertion? There was a considerable discussion
with Eric on the usenet thread trying to find the limits of the
scalability of that SF transistor (and it is that transistor which
ultimately dictates the limits of scalability). As I remember, it was
below 2 microns pixel pitch, and the problem was not that it 'did not work'
but that noise sources such as random telegraph signal noise
(essentially quantum noises associated with very small numbers of
charge carriers) began to become significant. However, the discussion
of the limit of scalability, although interesting, is somewhat
different from the 'small pixels = more noise' fallacy, and also
merely puts a limit on where read noise stops scaling down with pixel
pitch. It does not invalidate the idea. Certainly also, these limits
have not made Eric think it's not worthwhile investigating the
potential of tiny, tiny pixels.
Is this the Eric Fossum thread you're thinking of?

http://forums.dpreview.com/forums/read.asp?forum=1000&message=28818064

--
Daniel
That was the one, thanks. The operative sentence being: "So, true, gain in the SF is inversely proportional to the FD capacitance which in turn scales like the area of the SF gate and FD node, and parasitics, and sometimes intentional capacitance."
--
Bob

 
I know that Bob is going to want to attack my reasoning but I only
included the derivation of HP's noise formula to show where it comes
from;
I'm not going to attack your reasoning, I think your reasoning is correct - it is the base assumptions that we differed on, as I said above somewhere.
so if I wasn't rigorous that doesn't mean that their formula is
wrong. I also really do understand Bob's reasoning which he amply
posted above so he doesn't need to repeat it again.
Lastly I would like to thank Bob for helping me clarify my
understanding of how read noise changes with pixel scaling.
Thanks to you for the discussion. It's made me think about where the various design tradeoffs are. In particular, the importance of process limits, even if it's conventional wisdom to say that sensors don't exercise process geometry much. I wonder if the limits of Canon's old fab line, with its 0.5 micron feature size, are beginning to show, and partially explain why the others, with newer fabs (and Nikon with its ability to shop around) are catching up (or maybe have caught up).
--
Bob

 
Thanks to you for the discussion. It's made me think about where
the various design tradeoffs are. In particular, the importance of
process limits, even if it's conventional wisdom to say that sensors
don't exercise process geometry much. I wonder if the limits of
Canon's old fab line, with its 0.5 micron feature size, are beginning
to show, and partially explains why the others, with newer fabs (and
Nikon with its ability to shop around) are catching up (or maybe have
caught up).
I am sure that Canon's choice of fab line includes such cost considerations as what size chip it can make without stitching and what equipment they have which would otherwise have no productive use. They presented a paper some time ago about an APS-H sensor they designed with 3.2 micron pixels but they have said that they haven't yet used the main design features it contains in a released product. That is probably because it used fairly up to date fabrication equipment that they don't want the expense of using. After all, at some point the read noise is low enough that further reduction is not worth the expense and for something like the 5D2 with its 6.4 micron pixels state of the art equipment would be overkill. The timing of Canon's introduction of their next really improved sensors might depend on when their lithography machine customers update their lines and send their current wafer steppers back to Canon.
 
The Sigma/Foveon argument all along is that pixels by themselves
don't really reflect resolution, and the resolving power of the
Foveon sensor is definitely greater than a Bayer CFA sensor with the
same MP.
For luminance, they have the same resolving power per pixel, in terms of real resolution. In B&W resolution tests, both go awry at the same pixel frequency, just with different issues. The AA-Bayer goes soft, while the Foveon goes wrong, putting edges where they don't belong and miscounting details.
I personally think that an 8 MP APS-C Foveon sensor would be all
you'd need or want, and you'd outresolve Bayer sensors up to around
20 MP without all of the extra pixel baggage in terms of file size
and processing time. I can up-rez a good Foveon image to double its
resolution, e.g., 4.3 MP to 18+ MP, and get outstanding large prints.
Another person might look at it and see a mosaic, though.

It is a given that some people think that low-MP aliased images are worth zooming into or enlarging; it is not a given that all people accept the quality of the imaging.
Sigma, are you listening? When are you going to come out with an 8 MP
camera? Better yet, when are you going to come out with an 8 to 10 MP
full-frame Foveon-based dSLR?
If the Foveon is going to drag way behind the mainstream sensors in luminance resolution, it will serve no purpose. A 40MP APS-C Bayer with a very weak AA filter will outdo an 8MP Foveon by a good margin. The higher the pixel density gets, the less a strong AA filter is needed, and the less it interferes with MTF anyway. A very high-res Bayer camera can effectively simulate an 8MP Foveon; the reverse is not possible.

--
John

 
The Sigma/Foveon argument all along is that pixels by themselves
don't really reflect resolution, and the resolving power of the
Foveon sensor is definitely greater than a Bayer CFA sensor with the
same MP.
For luminance, they have the same resolving power per pixel, in terms
of real resolution. In B&W resolution tests, both go awry at the same
pixel frequency, just with different issues. The AA-Bayer goes soft,
while the Foveon goes wrong, putting edges where they don't belong
and miscounting details.
All the demosaic algorithms that attempt to interpret more luminance detail than the number of green pixels sometimes generate what look to me like ugly digital artifacts. To me the most natural/best looking raw conversions use a fairly simple demosaic followed by superior sharpening (like R-L deconvolution) and sophisticated noise suppression as necessary. I therefore consider a Bayer sensor to have a proper luminance resolution equal to the number of green pixels it contains.

I have also seen a few unusual test cases that show superior resolution for the Foveon sensor. One I saw had a flamingo where all the luminance detail was in the red channel, so the Bayer sensor's resolution looked like that of a Foveon sensor with 1/4 the pixels.
 
Thanks to you for the discussion. It's made me think about where
the various design tradeoffs are. In particular, the importance of
process limits, even if it's conventional wisdom to say that sensors
don't exercise process geometry much. I wonder if the limits of
Canon's old fab line, with its 0.5 micron feature size, are beginning
to show, and partially explains why the others, with newer fabs (and
Nikon with its ability to shop around) are catching up (or maybe have
caught up).
I am sure that Canon's choice of fab line includes such cost
considerations as what size chip it can make without stitching and
what equipment they have which would otherwise have no productive
use.
Now they've got two fab lines, the old half-micron one and the new one, which was said when they announced it to be aimed at making their own sensors for P&S (currently they use Sony). Presumably the new one must have capabilities to go to micron-scale pixels, the old one clearly not. It's quite interesting to speculate how Canon assembled its fab lines. Since they are not a semiconductor manufacturer, it's unlikely they had an old line unused (although they could have bought one from someone else). As a stepper manufacturer, it's possible they had unused equipment around, though my guess would be that steppers are manufactured to order, not for inventory.
They presented a paper some time ago about an APS-H sensor they
designed with 3.2 micron pixels but they have said that they haven't
yet used the main design features it contains in a released product.
That is probably because it used fairly up to date fabrication
equipment that they don't want the expense of using.
Once they've got the equipment, they'll want to use it for all it's worth - the last thing you want is that high capital equipment sitting unused - an advantage of the foundry strategy Nikon uses over Canon's approach. My guess would be it was fabbed on the new line (maybe as a process prover), but I would also guess that line's priority will be P&S. Keeping both lines full will be an interesting juggling act.
After all, at
some point the read noise is low enough that further reduction is not
worth the expense and for something like the 5D2 with its 6.4 micron
pixels state of the art equipment would be overkill. The timing of
Canon's introduction of their next really improved sensors might
depend on when their lithography machine customers update their lines
and send their current wafer steppers back to Canon.
Looking at the secondhand fab line market, I suspect they don't send them back. Generally, when a line is no longer useful to one company, they sell it on as a whole. Developing and characterising the process for a complete line is complex enough that companies are unlikely to do it with obsolescent equipment.

--
Bob

 
People usually cite noise as the problem they see in high-megapixel cameras. Since the Foveon sensor is much noisier than other sensors, I don't see how Foveon can be seen as a mainstream alternative.

Also, while the end JPEG files are smaller than the ones from other cameras with comparable resolution, the raw files produced by Foveon actually aren't that much smaller, so I'm not sure there'll be that much saving in processing time. I suppose there will be savings in memory card space if you go JPEG.
The Sigma/Foveon argument all along is that pixels by themselves
don't really reflect resolution, and the resolving power of the
Foveon sensor is definitely greater than a Bayer CFA sensor with the
same MP.

I personally think that an 8 MP APS-C Foveon sensor would be all
you'd need or want, and you'd outresolve Bayer sensors up to around
20 MP without all of the extra pixel baggage in terms of file size
and processing time. I can up-rez a good Foveon image to double its
resolution, e.g., 4.3 MP to 18+ MP, and get outstanding large prints.

Sigma, are you listening? When are you going to come out with an 8 MP
camera? Better yet, when are you going to come out with an 8 to 10 MP
full-frame Foveon-based dSLR?
 
