12mp: beginning of the end of mp race?

  • Thread starter: mschf
[snip]
I suspect that the source follower noise is a smallish component of
the total noises [snip]
I don't think so. At high ISO the first transistor should dominate
the read noise.
That would be the case if there were a single feedback amplifier. In practice there seem generally to be two or three separate stages, and in each one the first transistor contributes the bulk of the noise - that being the way FB amplifiers work. The noise contribution of the source follower, I am informed, is quite small, since it has zero voltage gain and is usually designed for low noise. Could be wrong. I haven't direct experience of designing CMOS image sensors (although I do of other CMOS) - I just go by what those that do say, and the papers they write.
In practice, it's enough that read noise stays constant. Since
increasing pixel density reduces read noise/area, it's still a win,
just not as much as if the read noise per pixel were reducing too.
[snip]
No, if the read noise per pixel is constant then the effective read
noise increases by the square root of the pixel density.
No, that's not right. I had a brainfart a bit ago when I suddenly started thinking that was the case, but it's not. The reason is that when you input refer the read noise, it scales with the pixel size (it's a voltage noise, and you pass it backward through the cell capacitance to get it input referred - I pointed this out to the wider community, then for some reason my brain decided to deny it).
This is why
read noise limits density. The same calculation for shot noise shows
that if quantum efficiency is constant then shot noise is independent
of pixel density:
Dividing up one large pixel into N small ones:
small pixel signal = large pixel signal / N
small pixel shot noise = large pixel shot noise/sqrt(N)
small pixel read noise = large pixel read noise
The last is wrong. Should be
small pixel read noise = large pixel read noise/N (input referred)
After adding the small pixels back together:
summed pixels signal = large pixel signal / N * N
summed pixels shot noise = large pixel shot noise / sqrt(N) * sqrt(N)
summed pixels read noise = large pixel read noise * sqrt(N)
Last one should be

summed pixels read noise = large pixel read noise * sqrt(N) / N = large pixel read noise / sqrt(N)
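To make the disagreement concrete, here is a minimal numeric sketch of the two read-noise models being argued: read noise per small pixel held constant in electrons, versus read noise scaling down with the cell as in the correction above. The numbers and function names are purely illustrative, and the only physics assumed is that independent noise sources add in quadrature when the small pixels are summed back together.

```python
# Minimal sketch (hypothetical numbers) of the two read-noise models argued above.
# Splitting one large pixel into N small ones and summing them back:
#   "constant electrons" model: small-pixel read noise = large-pixel read noise
#   "input-referred scaling" model (the correction above):
#       small-pixel read noise = large-pixel read noise / N
import math

def summed_read_noise(large_pixel_read_noise_e, n, model):
    if model == "constant_electrons":
        small = large_pixel_read_noise_e
    elif model == "input_referred_scaling":
        small = large_pixel_read_noise_e / n
    else:
        raise ValueError(model)
    return small * math.sqrt(n)   # quadrature sum of n independent reads

large_rn = 3.0   # hypothetical large-pixel read noise in electrons
for n in (4, 16):
    a = summed_read_noise(large_rn, n, "constant_electrons")      # grows as sqrt(N)
    b = summed_read_noise(large_rn, n, "input_referred_scaling")  # shrinks as 1/sqrt(N)
    print(f"N={n:2d}: constant-electrons model {a:.2f} e-, scaling model {b:.2f} e-")
```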

--
Bob

 
No, if the read noise per pixel is constant
Constant at what? You can refer to read noise relative to
saturation, relative to absolute signal, or in electrons. I assume
you mean the latter, but there is no force holding this constant.
Right, electrons.
then the effective read
noise increases by the square root of the pixel density. This is why
read noise limits density. The same calculation for shot noise shows
that if quantum efficiency is constant then shot noise is independent
of pixel density:
Dividing up one large pixel into N small ones:
small pixel signal = large pixel signal / N
small pixel shot noise = large pixel shot noise/sqrt(N)
small pixel read noise = large pixel read noise
After adding the small pixels back together:
summed pixels signal = large pixel signal / N * N
summed pixels shot noise = large pixel shot noise / sqrt(N) * sqrt(N)
summed pixels read noise = large pixel read noise * sqrt(N)
There is no reason to assume that future read noise trends will be
static, as measured in electrons.
I am not referring to future trends; I am discussing designing a hypothetical sensor with a given pixel transistor and different active areas.
The shot noise comparison is rhetorical; it is not a phenomenon, but
an identity. Total image shot noise depends on total image photon
collection. The division is extraneous to noise. You cannot expect
the same to be true of read noise. A sensor does not have an
intrinsic read noise, to be divided up into pixels.

Current P&S sensors outperform DSLRs at base ISO in read noise
per unit of area; perhaps the D3x has them beat now, but that would
be a first.
I am not discussing the base ISO case which is DR limited. I am discussing the high ISO case.
 
[snip]
I suspect that the source follower noise is a smallish component of
the total noises [snip]
I don't think so. At high ISO the first transistor should dominate
the read noise.
That would be the case if there were a single feedback amplifier. In
practice there seem generally to be two or three separate stages, and in
each one the first transistor contributes the bulk of the noise - that
being the way FB amplifiers work. The noise contribution of the
source follower, I am informed, is quite small, since it has zero
voltage gain and is usually designed for low noise. Could be wrong. I
haven't direct experience of designing CMOS image sensors (although I
do of other CMOS) - I just go by what those that do say, and the
papers they write.
Please give references. The source follower feeds a column amplifier with a wider gate and therefore lower voltage noise than the pixel transistor. These two first stages dominate the high ISO noise.
In practice, it's enough that read noise stays constant. Since
increasing pixel density reduces read noise/area, it's still a win,
just not as much as if the read noise per pixel were reducing too.
[snip]
No, if the read noise per pixel is constant then the effective read
noise increases by the square root of the pixel density.
No, that's not right. I had a brainfart a bit ago when I suddenly
started thinking that was the case, but it's not. The reason is that
when you input refer the read noise, it scales with the pixel size
(it's a voltage noise, and you pass it backward through the cell
capacitance to get it input referred - I pointed this out to the
wider community, then for some reason my brain decided to deny it).
You are confused by referring to the "cell capacitance". The photoelectrons are completely transferred from the photodiode to the floating diffusion of the source follower so the photodiode capacitance is irrelevant; only the source follower floating diffusion capacitance counts.
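For concreteness, this is the standard input-referring arithmetic both positions rely on, sketched with purely hypothetical numbers: the conversion gain at the floating diffusion is q/C_FD, so a given voltage noise referred to that node corresponds to v * C_FD / q electrons. The capacitance and noise values below are illustrative, not figures for any real sensor.

```python
# Input-referring arithmetic with hypothetical numbers: conversion gain at the
# floating diffusion is q / C_FD (volts per electron), so voltage noise referred
# to that node corresponds to v_noise * C_FD / q electrons.
Q_E = 1.602e-19  # electron charge in coulombs

def conversion_gain_uV_per_e(c_fd_farads):
    return Q_E / c_fd_farads * 1e6

def read_noise_electrons(v_noise_rms_uV, c_fd_farads):
    return v_noise_rms_uV * 1e-6 * c_fd_farads / Q_E

for c_fd in (2e-15, 8e-15):                 # hypothetical 2 fF vs 8 fF floating diffusion
    cg = conversion_gain_uV_per_e(c_fd)
    rn = read_noise_electrons(200.0, c_fd)  # hypothetical 200 uV rms voltage noise
    print(f"C_FD = {c_fd * 1e15:.0f} fF: {cg:.0f} uV/e-, read noise = {rn:.1f} e-")
```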
This is why
read noise limits density. The same calculation for shot noise shows
that if quantum efficiency is constant then shot noise is independent
of pixel density:
Dividing up one large pixel into N small ones:
small pixel signal = large pixel signal / N
small pixel shot noise = large pixel shot noise/sqrt(N)
small pixel read noise = large pixel read noise
The last is wrong. Should be
small pixel read noise = large pixel read noise/N (input referred)
No, read noise in electrons, being determined strictly by the source follower, is independent of the photodiode size. In fact the read noise in e- of the relatively large-pixel 5D2 is similar to that of the state of the art small sensors in P&S cameras.
After adding the small pixels back together:
summed pixels signal = large pixel signal / N * N
summed pixels shot noise = large pixel shot noise / sqrt(N) * sqrt(N)
summed pixels read noise = large pixel read noise * sqrt(N)
Last one should be
summed pixels read noise = large pixel read noise * sqrt(N) / N =
large pixel read noise / sqrt(N)
No, my formula is correct.
 
[snip]
I suspect that the source follower noise is a smallish component of
the total noises [snip]
I don't think so. At high ISO the first transistor should dominate
the read noise.
That would be the case if there were a single feedback amplifier. In
practice there seem generally to be two or three separate stages, and in
each one the first transistor contributes the bulk of the noise - that
being the way FB amplifiers work. The noise contribution of the
source follower, I am informed, is quite small, since it has zero
voltage gain and is usually designed for low noise. Could be wrong. I
haven't direct experience of designing CMOS image sensors (although I
do of other CMOS) - I just go by what those that do say, and the
papers they write.
Please give references.
Search for the usenet thread started by John Sheehy on the joy of pixel density, also Eric Fossum's workshops (Emil Martinec posted a link somewhere)
The source follower feeds a column amplifier
with a wider gate and therefore lower voltage noise than the pixel
transistor. These two first stages dominate the high ISO noise.
Plus the first stage of the VGA. But what we don't know is the relative contribution of each. If my speculation that the 5DII sensel is the same as the 1DsIII sensel is correct (and if it wasn't, I'd have thought Canon would not have made it exactly the same size) then they seem to have managed to reduce high ISO noise from 4+ e- to 2.5 e- with the same source follower transistor (and very probably the same column amplifier transistor). We're both speculating; I don't see any reason to think your speculation is better than mine.
In practice, it's enough that read noise stays constant. Since
increasing pixel density reduces read noise/area, it's still a win,
just not as much as if the read noise per pixel were reducing too.
[snip]
No, if the read noise per pixel is constant then the effective read
noise increases by the square root of the pixel density.
No, that's not right. I had a brainfart a bit ago when I suddenly
started thinking that was the case, but it's not. The reason is that
when you input refer the read noise, it scales with the pixel size
(it's a voltage noise, and you pass it backward through the cell
capacitance to get it input referred - I pointed this out to the
wider community, then for some reason my brain decided to deny it).
You are confused by referring to the "cell capacitance".
No I'm not
The
photoelectrons are completely transferred from the photodiode to the
floating diffusion of the source follower
I know that
so the photodiode
capacitance is irrelevant; only the source follower floating
diffusion capacitance counts.
Which is the 'cell capacitance'. If you scale the cell, that scales too.
This is why
read noise limits density. The same calculation for shot noise shows
that if quantum efficiency is constant then shot noise is independent
of pixel density:
Dividing up one large pixel into N small ones:
small pixel signal = large pixel signal / N
small pixel shot noise = large pixel shot noise/sqrt(N)
small pixel read noise = large pixel read noise
The last is wrong. Should be
small pixel read noise = large pixel read noise/N (input referred)
No, read noise in electrons being determined strictly by the source
follower is independent of the photodiode size. In fact the read
noise in e- of the relatively large pixel 5D2 is similar to that of
the state of the art small sensors in P&S cameras.
As pointed out above, I didn't say photodiode size; I said cell capacitance, which scales as the cell scales.
After adding the small pixels back together:
summed pixels signal = large pixel signal / N * N
summed pixels shot noise = large pixel shot noise / sqrt(N) * sqrt(N)
summed pixels read noise = large pixel read noise * sqrt(N)
Last one should be
summed pixels read noise = large pixel read noise * sqrt(N) / N =
large pixel read noise / sqrt(N)
No, my formula is correct.
It's wrong, because if you scale the cell, the capacitance scales (there, I've said it three times now), wherever it is. This much was confirmed by Eric Fossum in a thread here somewhere.
--
Bob

 
It's wrong, because if you scale the cell, the capacitance scales
(there, I've said it three times now), wherever it is. This much was
confirmed by Eric Fossum in a thread here somewhere.
Your claim that the read noise in electrons scales by pixel area is wrong: just look at the read noise in electrons at maximum ISO for a wide range of sensors. You will find the best sensors have about 3e- of read noise regardless of cell size. This is because you cannot just scale a cell without designing a whole new transistor. You can change the size of the photodiode though. Once a company designs a nice low noise read circuit nothing is stopping them from putting that design in a sensor with large photodiodes, and this is what Canon did with the 5D2.
 
Search for the usenet thread started by John Sheehy on the joy of
pixel density, also Eric Fossum's workshops (Emil Martinec posted a
link somewhere)
I have already read Eric's papers and they don't support your position.
The source follower feeds a column amplifier
with a wider gate and therefore lower voltage noise than the pixel
transistor. These two first stages dominate the high ISO noise.
Plus the first stage of the VGA. But what we don't know is the
relative contribution of each. If my speculation that the 5DII sensel
is the same as the 1DsIII sensel is correct (and if it wasn't, I'd have
thought Canon would not have made it exactly the same size) then they
seem to have managed to reduce high ISO noise from 4+ e- to 2.5 e-
with the same source follower transistor (and very probably the same
column amplifier transistor). We're both speculating, I don't see any
reason to think your speculation is better than mine.
Your speculation that the 5D2 and 1DsIII share the same sensel is completely groundless; of course Canon can be expected to design two sensors with different cell designs that happen to have the same pixel pitch. The reason your speculation is less likely than mine is that you doubt that the transistor that is working with the least signal power dominates the noise.
No, my formula is correct.
It's wrong, because if you scale the cell, the capacitance scales
(there, I've said it three times now), wherever it is. This much was
confirmed by Eric Fossum in a thread here somewhere.
Eric merely stated that the floating diffusion capacitance scales with source follower gate area, but we both know that already. The point is that you cannot just scale a cell from a 5D2 down to 2 micron pitch by shrinking all dimensions uniformly because the resulting transistors would not work.
 
Search for the usenet thread started by John Sheehy on the joy of
pixel density, also Eric Fossum's workshops (Emil Martinec posted a
link somewhere)
I have already read Eric's papers and they don't support your position.
Strange, Eric supported the 'position' directly. Can you find a quote that denies my 'position'? It is quite simple. Wherever the capacitance is in the cell, if you scale the cell, the capacitance scales.
The source follower feeds a column amplifier
with a wider gate and therefore lower voltage noise than the pixel
transistor. These two first stages dominate the high ISO noise.
Plus the first stage of the VGA. But what we don't know is the
relative contribution of each. If my speculation that the 5DII sensel
is the same as the 1DsIII sensel is correct (and if it wasn't, I'd have
thought Canon would not have made it exactly the same size) then they
seem to have managed to reduce high ISO noise from 4+ e- to 2.5 e-
with the same source follower transistor (and very probably the same
column amplifier transistor). We're both speculating, I don't see any
reason to think your speculation is better than mine.
Your speculation that the 5D2 and 1DsIII share the same sensel is
completely groundless; of course Canon can be expected to design two
sensors with different cell designs that happen to have the same
pixel pitch.
Speculation is groundless, that's why it's speculation. Your position that they don't share the same sensel is similarly groundless. Let's think about it this way - Canon provided itself with a considerable marketing problem by making the 5DII sensor the same pixel count as the 1DsIII sensor. If it had been a completely new pixel design (and they said it was derived from the 1DsIII sensor) they might have taken the opportunity to avoid that problem. Moreover, had it been a new sensel, they might have taken the opportunity to give it 100% microlens coverage, like the 50D. Further, there is a quote in an interview somewhere saying that it is identical, apart from the CFA.
The reason your speculation is less likely than mine is
that you doubt that the transistor that is working with the least
signal power dominates the noise.
That's not a reason; that's simply incorrect electronics, and an incorrect interpretation of what I said. There is no doubt that the SF transistor is an important component of the noise, but in most configurations there appear to be other noise sources that could be comparable. We can agree that the SF transistor is the irreducible minimum of the noise; what we don't know is how close current designs are to that.
No, my formula is correct.
It's wrong, because if you scale the cell, the capacitance scales
(there, I've said it three times now), wherever it is. This much was
confirmed by Eric Fossum in a thread here somewhere.
Eric merely stated that the floating diffusion capacitance scales
with source follower gate area: but we both know that already. The
point is that you cannot just scale a cell from a 5D2 down to 2
micron pitch by shrinking all dimensions uniformly because the
resulting transistors would not work.
Can you justify that assertion? There was a considerable discussion with Eric on the usenet thread trying to find the limits of the scalability of that SF transistor (and it is that transistor which ultimately dictates the limits of scalability). As I remember, it was below 2 microns pixel pitch, and the problem was not it 'not working' but that noise sources such as random telegraph signal noise (essentially quantum noises associated with very small numbers of charge carriers) began to become significant. However, the discussion of the limit of scalability, although interesting, is somewhat different from the 'small pixels = more noise' fallacy, and also merely puts a limit on where read noise stops scaling down with pixel pitch. It does not invalidate the idea. Certainly also, these limits have not made Eric think it's not worthwhile investigating the potential of tiny, tiny pixels.

--
Bob

 
Search for the usenet thread started by John Sheehy on the joy of
pixel density, also Eric Fossum's workshops (Emil Martinec posted a
link somewhere)
I have already read Eric's papers and they don't support your position.
Strange, Eric supported the 'position' directly. Can you find a quote
that denies my 'position'? It is quite simple. Wherever the
capacitance is in the cell, if you scale the cell, the capacitance
scales.
We were discussing the relative significance of the source follower noise here. The point about the cell capacitance was discussed below.
Speculation is groundless, that's why it's speculation.
We should probably leave it at that. We both agree that the source follower places a limit on how low the noise can go and the exact contribution at high ISO for a given sensor is not published.
Eric merely stated that the floating diffusion capacitance scales
with source follower gate area: but we both know that already. The
point is that you cannot just scale a cell from a 5D2 down to 2
micron pitch by shrinking all dimensions uniformly because the
resulting transistors would not work.
Can you justify that assertion? There was a considerable discussion
with Eric on the usenet thread trying to find the limits of the
scalability of that SF transistor (and it is that transistor which
ultimately dictates the limits of scalability). As I remember, it was
below 2 microns pixel pitch, and the problem was not it 'not working'
but that noise sources such as random telegraph signal noise
(essentially quantum noises associated with very small numbers of
charge carriers) began to become significant. However, the discussion
of the limit of scalability, although interesting, is somewhat
different from the 'small pixels = more noise' fallacy, and also
merely puts a limit on where read noise stops scaling down with pixel
pitch. It does not invalidate the idea. Certainly also, these limits
have not made Eric think it's not worthwhile investigating the
potential of tiny, tiny pixels.
Scaling a transistor is not done by simply reducing all 3D dimensions by the same factor; it requires different parts to be adjusted by different factors. In short, it requires a redesign. Anyway, if you succeed in designing a new low-noise transistor, nothing would be stopping you from placing it in a sensor with large photodiodes. I know Eric is trying to design an entirely new sensor architecture with very tiny pixels but he also hasn't succeeded yet.

Finally, if, as you assert, the read noise scales by pixel area, why do state of the art sensors all have around 2.5 to 3.5 e- of high ISO noise per pixel regardless of its size, over more than an order of magnitude in area?
 
Search for the usenet thread started by John Sheehy on the joy of
pixel density, also Eric Fossum's workshops (Emil Martinec posted a
link somewhere)
I have already read Eric's papers and they don't support your position.
Strange, Eric supported the 'position' directly. Can you find a quote
that denies my 'position'? It is quite simple. Wherever the
capacitance is in the cell, if you scale the cell, the capacitance
scales.
We were discussing the relative significance of the source follower
noise here. The point about the cell capacitance was discussed below.
Neither of us has definitive information on the relative significance of that SF noise with respect to the other front end noises, so it's hardly worth arguing about.
Speculation is groundless, that's why it's speculation.
We should probably leave it at that. We both agree that the source
follower places a limit on how low the noise can go and the exact
contribution at high ISO for a given sensor is not published.
Agreed.
Eric merely stated that the floating diffusion capacitance scales
with source follower gate area: but we both know that already. The
point is that you cannot just scale a cell from a 5D2 down to 2
micron pitch by shrinking all dimensions uniformly because the
resulting transistors would not work.
Can you justify that assertion? There was a considerable discussion
with Eric on the usenet thread trying to find the limits of the
scalability of that SF transistor (and it is that transistor which
ultimately dictates the limits of scalability). As I remember, it was
below 2 microns pixel pitch, and the problem was not it 'not working'
but that noise sources such as random telegraph signal noise
(essentially quantum noises associated with very small numbers of
charge carriers) began to become significant. However, the discussion
of the limit of scalability, although interesting, is somewhat
different from the 'small pixels = more noise' fallacy, and also
merely puts a limit on where read noise stops scaling down with pixel
pitch. It does not invalidate the idea. Certainly also, these limits
have not made Eric think it's not worthwhile investigating the
potential of tiny, tiny pixels.
Scaling a transistor is not done by simply reducing all 3D dimensions
by the same factor, it requires different parts to be adjusted by
different factors.
If you adjusted all 3D dimensions, you would not reduce the capacitance, because the gate oxide would get thinner. What I'm talking about is a 2D die shrink. This is common practice. Take a design that works, shrink it. And it does work, within limits, particularly because many of the characteristics of a MOS transistor are shape (L/W) related, not size related.
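A toy illustration of the 2D shrink arithmetic described here, under long-channel idealizations and with hypothetical geometry and Cox value (none of these numbers come from the thread): on a fixed process the oxide thickness, and hence Cox per unit area, stays put, so a 2D shrink preserves the shape ratio W/L while the gate area, and its capacitance, falls as the square of the shrink factor.

```python
# Toy 2D die-shrink arithmetic (hypothetical geometry and Cox value):
# W and L shrink by the same linear factor s, so W/L is preserved while the
# gate area and gate capacitance fall as s**2 for a fixed oxide thickness.
def shrink_2d(width_um, length_um, s, cox_fF_per_um2=5.0):  # Cox value is hypothetical
    w, l = width_um * s, length_um * s
    return {
        "W/L (shape, unchanged)": w / l,
        "gate area (um^2)": w * l,
        "gate capacitance (fF)": cox_fF_per_um2 * w * l,
    }

for s in (1.0, 0.5):
    print(f"linear shrink x{s}: {shrink_2d(1.0, 0.5, s)}")
```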
In short it requires a redesign.
If that were always true, die shrinks would not be so prevalent in the semiconductor industry.
Anyway, if you
succeed in redesigning a new low noise transistor nothing would be
stopping you from placing it in a sensor with large photodiodes.
That's not the point. There are very many variables in the design of a sensel, and designers will optimise as they wish. What we are talking about here are characteristics intrinsic to scaling, and to talk about those, you need to talk about scaling.
I
know Eric is trying to design an entirely new sensor architecture
with very tiny pixels but he also hasn't succeeded yet.
He wouldn't even be pursuing it if he thought it were theoretically impossible. No-one said it was easy.
Finally, if as you assert the read noise scales by pixel area why do
state of the art sensors all have around 2.5 to 3.5 e- of high ISO
noise per pixel regardless of its size over more than an order of
magnitude in area?
But the order of magnitude of area also covers a whole range of different read circuitries. Your own figures for the LX3 on another thread reveal that its read circuitry is quite different from a typical DSLR arrangement. I would suggest that P&S are using much simpler/cheaper read arrangements, and can get away with it because of the lower intrinsic read noise. The DSLR 'state of the art' in that range is Canon (no-one else has sub-4 e- read noise). The discussion is about the intrinsics of scaling; we have no evidence that Canon is simply scaling its pixels. In fact, since they like to work with a small set of pixel sizes (unlike Sony, who use a different pixel size in almost every sensor, and may indeed be scaling from a small set of designs) it suggests that each sensel size is individually designed; it may be that they do use the same SF transistor design across a range of sensel designs. Plus, coming back to that other issue, I am by no means convinced that all of that front end read noise is the source follower.
--
Bob

 
No, read noise in electrons being determined strictly by the source
follower is independent of the photodiode size. In fact the read
noise in e- of the relatively large pixel 5D2 is similar to that of
the state of the art small sensors in P&S cameras.
While this is true, it is only for high ISOs, and it must be taken into consideration that the 2.5 e- of the 5D2 at ISO 1600 is achieved in a camera with more of a design budget than the $400 P&S cameras with read noises of 2.8 e-. Who knows what could be done with higher pixel densities, with more of a budget? I would tend to think that the VGAs in P&S cameras are merely cosmetic, saving multiplication time. They may pick up late-stage noise before amplification, and a different design might allow less noise to get amplified by getting the signal stronger earlier in the signal chain.

--
John

 
If you adjusted all 3D dimensions, you would not reduce the
capacitance, because the gate oxide would get thinner. What I'm
talking about is a 2D die shrink. This is common practice. Take a
design that works, shrink it. And it does work, within limits,
particularly because many of the characteristics of a MOS transistor
are shape (L/W) related, not size related.
The MOS scaling rules I have seen described do scale all dimensions, just not by the same factors, and voltages do wind up different. (Remember we are talking about more than an order of magnitude in area between a D-SLR and a P&S.) Just look at the problems with logic transistor scaling, with transistors that won't turn off etc.
[snip]
Finally, if as you assert the read noise scales by pixel area why do
state of the art sensors all have around 2.5 to 3.5 e- of high ISO
noise per pixel regardless of its size over more than an order of
magnitude in area?
But the order of magnitude of area also covers a whole range of
different read circuitries. Your own figures for the LX3 on another
thread reveal that its read circuitry is quite different from a
typical DSLR arrangement. I would suggest that P&S are using much
simpler/cheaper read arrangements, and can get away with it because
of the lower intrinsic read noise. The DSLR 'state of the art' in
that range is Canon (no-one else has sub 4 e- read noise). The
discussion is about the intrinsics of scaling, we have no evidence
that Canon is simply scaling its pixels. In fact, since they like to
work with a small set of pixel sizes (unlike Sony, who use a
different pixel size in almost every sensor, and may indeed be
scaling from a small set of designs) it suggests that each sensel
size is individually designed, it may be that they do use the same SF
transistor design across a range of sensel designs. Plus, coming back
to that other issue, I am by no means convinced that all of that
front end read noise is the source follower.
--
If you read my reply in that thread you will see that the read circuitry characteristics are in fact very similar in the high ISO range where their gains (in e- to volts at the A-D) are the same. The significant difference is that the gain of the D-SLR can be turned down when there is plenty of light to allow much higher saturation counts for low ISO operation. I find it a bit unlikely that read noise should scale by sensel area but just doesn't because the small sensors spoil the performance with simpler/cheaper read circuitry.
 
No, read noise in electrons being determined strictly by the source
follower is independent of the photodiode size. In fact the read
noise in e- of the relatively large pixel 5D2 is similar to that of
the state of the art small sensors in P&S cameras.
While this is true, it is only for high ISOs, and it must be taken
into consideration that the 2.5 e- of the 5D2 at ISO 1600 is achieved
in a camera with more of a design budget than the $400 P&S cameras
with read noises of 2.8 e-. Who knows what could be done with higher
pixel densities, with more of a budget? I would tend to think that
the VGAs in P&S cameras are merely cosmetic, saving multiplication
time. They may pick up late-stage noise before amplification, and a
different design might allow less noise to get amplified by getting
the signal stronger earlier in the signal chain.
Actually, since the volumes of point and shoot cameras are so high, I expect their non-recurring design budgets are at least as high as those of D-SLRs. If you look at the literature it seems the greatest design efforts are currently directed to cell phone cameras. I really don't think it is a coincidence that the read noise in e- is independent of pixel size and has been that way for years despite some natural scaling by pixel area.
 
The scaling of a MOS transistor in gate length is not trivial, but scaling the width is simple: halve the width and you halve the capacitance and the conductance. Now the shot noise in a signal is proportional to the square root of the number of electrons, so the signal-to-shot-noise ratio improves by the square root of the signal. This is true at each gain step, so the improvement in shot noise in the current follower output versus its input depends directly on the gain in electron count.

Now consider a sensor where you scale the pixels only in one dimension: halving the source follower width, doubling the number of columns, and halving the gate width of the column transistor. Since the number of rows is unchanged we have the same amount of time to read out a pixel, and all the voltages are unchanged because the ratio of electrons to area is the same. The voltage on each column transistor is the same but each carries half the charge, so its shot noise has increased. If we now bin the two columns together on readout we would have the same output conductance and total signal-to-shot-noise ratio as for the full-size pixel case, so doubling the number of columns would have had no effect on the read noise per area of the sensor. If instead we read the two columns separately we have half as much time to read each one, but the column transistors have half the conductance, so the net charge they can transfer on their output is quartered, resulting in an effective doubling of the column amplifier's output noise.
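Here is a small numeric restatement of that thought experiment, taking its stated assumptions at face value (they are the model described in the paragraph, not measured figures): the charge a column transistor can transfer scales with conductance times read time, conductance scales with gate width, and the associated noise scales as one over the square root of the transferable charge.

```python
# Numeric restatement of the thought experiment above, under its own assumptions:
# transferable charge ~ conductance * read time, conductance ~ gate width,
# column noise ~ 1 / sqrt(transferable charge).
import math

def relative_column_noise(conductance, read_time):
    charge = conductance * read_time       # charge the column transistor can transfer
    return 1.0 / math.sqrt(charge)         # shot-noise-like scaling assumed above

g, t = 1.0, 1.0                            # baseline: full-width column, full line time
baseline = relative_column_noise(g, t)

# Two half-width columns binned and read once: conductances add, time unchanged.
binned = relative_column_noise(g / 2 + g / 2, t)

# Two half-width columns read separately within the same line time: half the
# conductance and half the time each, so a quarter of the charge per read.
separate = relative_column_noise(g / 2, t / 2)

print(baseline, binned, separate)          # -> 1.0 1.0 2.0 (the claimed doubling)
```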

If you successfully design a new transistor with a reduced gate length that same design could be used for any pixel size so I still contend that it doesn't make sense to compare a large pixel to a small one with different source follower gate length.
 
If you adjusted all 3D dimensions, you would not reduce the
capacitance, because the gate oxide would get thinner. What I'm
talking about is a 2D die shrink. This is common practice. Take a
design that works, shrink it. And it does work, within limits,
particularly because many of the characteristics of a MOS transistor
are shape (L/W) related, not size related.
The MOS scaling rules I have seen described do scale all dimensions,
just not by the same factors and voltages do wind up different.
(Remember we are talking about more than an order of magnitude in
area between a D-SLR and a P&S.) Just look at the problems with logic
transistor scaling with transistors that won't turn off etc.
This is actually an irrelevant discussion (for reasons which I'll come to in a reply to a later post), but for the sake of accuracy, when we talk about LSI scaling, we tend to talk about a single process, which tends to dictate diffusion depths and oxide depth (all the z dimension, in fact). When you change the process, all bets are off - that's a complete redesign, not just a scale.
[snip]
Finally, if as you assert the read noise scales by pixel area why do
state of the art sensors all have around 2.5 to 3.5 e- of high ISO
noise per pixel regardless of its size over more than an order of
magnitude in area?
But the order of magnitude of area also covers a whole range of
different read circuitries. Your own figures for the LX3 on another
thread reveal that its read circuitry is quite different from a
typical DSLR arrangement. I would suggest that P&S are using much
simpler/cheaper read arrangements, and can get away with it because
of the lower intrinsic read noise. The DSLR 'state of the art' in
that range is Canon (no-one else has sub 4 e- read noise). The
discussion is about the intrinsics of scaling, we have no evidence
that Canon is simply scaling its pixels. In fact, since they like to
work with a small set of pixel sizes (unlike Sony, who use a
different pixel size in almost every sensor, and may indeed be
scaling from a small set of designs) it suggests that each sensel
size is individually designed, it may be that they do use the same SF
transistor design across a range of sensel designs. Plus, coming back
to that other issue, I am by no means convinced that all of that
front end read noise is the source follower.
--
If you read my reply in that thread you will see that the read
circuitry characteristics are in fact very similar in the high ISO
range where their gains (in e- to volts at the A-D) are the same.
The significant issue is how those 'gains' are obtained. There is a charge/voltage conversion with a certain conversion constant, and then a voltage gain. Different mixes of the two will give the same overall charge/voltage 'gain' but very different end results. If we could design a cell that was so heroically small it gave 1000V output for a single electron, the rest of the system would have to be about attenuation, rather than gain.
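A minimal sketch of that point, with entirely hypothetical numbers: two read chains with the same overall electrons-to-volts gain but a different split between the charge-to-voltage conversion constant (q/C_FD) and the downstream voltage gain end up with quite different input-referred noise, because noise injected after the conversion is divided by all the gain ahead of it when referred back to electrons.

```python
# Two hypothetical read chains with the SAME overall electrons-to-volts gain but
# a different split between conversion gain (q / C_FD) and downstream voltage gain.
# All numbers are made up for illustration.
import math

Q_E = 1.602e-19  # electron charge in coulombs

def input_referred_e(c_fd, v_fd_noise, a_v, v_late_noise):
    conv_gain = Q_E / c_fd                   # volts per electron at the floating diffusion
    front = v_fd_noise / conv_gain           # electrons from noise at the FD node
    late = v_late_noise / (conv_gain * a_v)  # electrons from noise after the voltage gain
    return math.hypot(front, late)           # independent terms add in quadrature

# Same product conv_gain * a_v in both cases; only the split differs.
high_cg = input_referred_e(c_fd=2e-15, v_fd_noise=150e-6, a_v=4.0,  v_late_noise=400e-6)
low_cg  = input_referred_e(c_fd=8e-15, v_fd_noise=150e-6, a_v=16.0, v_late_noise=400e-6)
print(f"high conversion gain: {high_cg:.1f} e-; low conversion gain: {low_cg:.1f} e-")
```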
The
significant difference is that the gain of the D-SLR can be turned
down when there is plenty of light to allow much higher saturation
counts for low ISO operation.
Wrong model, gain can be turned up. The charge/voltage conversion ratio is the indivisible minimum 'gain'.
I find it a bit unlikely that read
noise should scale by sensel area but just doesn't because the
small sensors spoil the performance with simpler/cheaper read
circuitry.
Why? At the moment cameras with small sensors tend to be at the lower cost part of the market. We know that both Canon and Nikon put a lot more into the ADC systems of their pro DSLRs than they do the consumer models - is it not reasonable to think that the P&S would be even less well endowed? (Although my suspicion is that it is actually as well endowed as a MF camera - straight to ADC with digital gain for ISO - just 10-bit rather than 16-bit samples.)
--
Bob

 
This is actually an irrelevant discussion (for reasons which I'll come
to in a reply to a later post), but for the sake of accuracy, when we
talk about LSI scaling, we tend to talk about a single process, which
tends to dictate diffusion depths and oxide depth (all the z dimension,
in fact). When you change the process, all bets are off - that's a
complete redesign, not just a scale.
But there is no reason for a given process to increase the gate length of the source follower; it will always be at the minimum possible until a new process is perfected. Changing the gate width does make sense to allow for the larger currents needed to transfer the same ratio of electrons out of the larger full wells of a large photosite sensor.
 
I am like most of you here and have read articles and chapters in books on sensor technology. And, like you, I have read that more megapixels are bad and keeping them the same is good. At one point it was acknowledged that all anyone needed was 6 megapixels; whoever said this never owned a 300D camera, which is okay as long as you don't crop - then the image degrades very quickly.

In theory, the 450D shouldn't have image quality as good as the 300D, if you follow the "less is more" argument about pixels on sensors. But of course this isn't reality; more is - well, more. The image quality of the 450D is superior to the 300D when enlarging and cropping.

There is more to megapixels than just megapixels: there is the processor, there are microlenses, there is how the pixels are stacked, and there is how much room there is between each pixel.

I am reminded of Bill Gates stating that all we needed was a 640K computer, which of course was utter nonsense. So if Bill Gates back then didn't foresee the future, how can we speculate that "more is less" and that we are better off with less, which is more?

Anyway, I thought I would add here some speculation from Pop Photo, out of their August 2008 mag, p. 74:

"And there's more to come, Electrical engineers at Stanford have developed a digital camera with 12,616 tiny lenses that sees in super 3D. They've shrunk the imaging sensor's pixels down to 0.7 microns, less than a tenth the size of the pixels in many DSLRs, and grouped them into arrays topped by miniature lenses much smaller than the ones used in today's sensors.

This, in turn, is helping pave the way for the gigapixel camera, with 100 times more pixels than today's 10MP clunkers."

It's safe to say, given the history of electronics and computer technology, that we'll look back and "remember" when we thought a full-frame camera with 24 megapixels was hot.

So let us realize that unless we are engineers in the industry, a discussion like this is akin to a discussion about how many angels can dance on the head of a pin.
--
An excellent lens lasts a lifetime, an excellent DSLR, not so long.
 
This is actually an irrelevant discussion (for reasons which I'll come
to in a reply to a later post), but for the sake of accuracy, when we
talk about LSI scaling, we tend to talk about a single process, which
tends to dictate diffusion depths and oxide depth (all the z dimension,
in fact). When you change the process, all bets are off - that's a
complete redesign, not just a scale.
But there is no reason for a given process to increase the gate
length of the source follower, it will always be at the minimum
possible until a new process is perfected. Changing the gate width
does make sense to allow for the larger currents needed to transfer
the same ratio of electrons out of the larger full wells of a large
photosite sensor.
When a new process comes on stream, chips are usually designed very conservatively, particularly gate length. As yields become characterised and more data is gathered on them, designs can be scaled. A die shrink of 40% on the same process is not unheard of - it all depends how the yield actually pans out.
--
Bob

 
When a new process comes on stream, chips are usually designed very
conservatively, particularly gate length. As yields become
characterised and more data is gathered on them, designs can be
scaled. A die shrink of 40% on the same process is not unheard of -
it all depends how the yield actually pans out.
--
But there is no reason not to apply the reduction in gate length to the large pixel sensor as well as the small pixel sensor. There is a reason for different gate widths to handle the different currents but as I explained in my telegraph noise post that does not result in lower noise for the smaller pixel sensor. The bottom line is that increasing the information being read out by splitting up a pixel into smaller units costs noise to the extent that that information is read serially. If the information is read in parallel as in the 2x column count example then the noise is unchanged by splitting up a pixel.
 
When a new process comes on stream, chips are usually designed very
conservatively, particularly gate length. As yields become
characterised and more data is gathered on them, designs can be
scaled. A die shrink of 40% on the same process is not unheard of -
it all depends how the yield actually pans out.
--
But there is no reason not to apply the reduction in gate length to
the large pixel sensor as well as the small pixel sensor.
If you do that, you're not scaling, and if one wants to talk about effects intrinsic to pixel size, scaling is the only operation that really makes sense as a general case. Once you start allowing them to be designed differently, all bets are off, so far as effects attributable to a particular design parameter are concerned. We could agree around a qualification that per pixel read noise is invariant under 2D scaling, and therefore read noise density decreases under 2D scaling. As far as process changes and redesigns are concerned, we can make no general statements.
There is a
reason for different gate widths to handle the different currents but
as I explained in my telegraph noise post that does not result in
lower noise for the smaller pixel sensor. The bottom line is that
increasing the information being read out by splitting up a pixel
into smaller units costs noise to the extent that that information is
read serially.
I can't see how whether it is read serially or in parallel has anything to do with image-level integrated cell read noise.
If the information is read in parallel as in the 2x
column count example then the noise is unchanged by splitting up a
pixel.
See above.
--
Bob

 
