# Pixel Size vs Sensor Efficiency

Started May 18, 2012 | Discussions thread
 Re: You have ignored read-out speed In reply to bobn2, Jul 13, 2012

bobn2 wrote:

DSPographer wrote:

Your problem is that you start by assuming I don't understand what I am talking about.

That's not my problem at all. I know that you are one of the most knowledgeable people here. I only begin to suspect that you don't know what you are talking about when it becomes apparent in the conversation itself. I've found in several conversations that your very good theoretical knowledge leads you to conclusions about how things must be, and that is what you assert, even when things are different in actuality. This is one of those cases.

This started with how to increase the amount of time available to read each pixel, since that determines the bandwidth, and thus the noise, of the first amplifier stage: the pixel source follower (SF). Looking at one column of the bottom half of the array, each row is read in turn onto the same column line. To increase the number of rows without slowing the frame rate or decreasing the output capacitance that the source follower drives, it would be necessary to read two rows of the same column simultaneously.

No it is not. That is what you have decided, and now for you that is how it must be, but it is not like that. Imagine that there are three stages to digitising an exposure. There is the integration time I, the time the photodiodes receive light. Then there is the read time R, the time it takes to read the analog values from the pixels into the analog output 'buffers'. Then there is the digitisation time D, the time it takes for the ADC to convert the stored analog values to digital values.

Now, if you want to shoot at some frame rate F fps, then I + R + D ≤ 1/F. Since I is essentially fixed (it is the exposure time), R + D ≤ (1/F) - I. What that means is that if you make D smaller, you can make R larger and maintain the same frame rate. One way to do this is to increase the number of channels, so that ADCs of the same speed each deal with fewer pixels, reducing the time they need to digitise all the pixels they are responsible for. With two channels we halve D; with 16 channels we reduce it by a factor of 16; and if the frame rate doesn't increase, we can increase R accordingly. If the pixel count stayed the same, we could increase the time to read each pixel, and thus the time to read each row. Or we can increase the number of pixels proportionately and keep the same pixel and row rate. Never do we need to read more than one row at a time to get this advantage.
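The timing-budget argument above can be sketched numerically. This is a hypothetical illustration: the function name, sensor dimensions, and conversion time are made up for the example, not taken from any real sensor.

```python
# Sketch of the frame-timing budget I + R + D <= 1/F.
# All numbers below are hypothetical, chosen only to illustrate the argument.

def max_row_read_time(F, I, rows, pixels_per_row, t_conv, n_adc):
    """Longest per-row analog read time that still meets the frame rate.

    F:      frame rate (fps)
    I:      integration (exposure) time, seconds
    t_conv: ADC conversion time per pixel, seconds
    n_adc:  number of parallel ADC channels
    """
    D = rows * pixels_per_row * t_conv / n_adc  # total digitisation time
    budget_R = 1.0 / F - I - D                  # time left for the analog read
    return budget_R / rows                      # per-row share of R

# Example: 10 fps, 1/100 s exposure, 2000 x 3000 pixels, 10 ns per conversion.
one_ch = max_row_read_time(10, 0.01, 2000, 3000, 10e-9, 1)
sixteen_ch = max_row_read_time(10, 0.01, 2000, 3000, 10e-9, 16)
# With 16 ADC channels D shrinks 16x, so the per-row read time can nearly
# triple (roughly 15 us to roughly 43 us here) at the same frame rate,
# without ever reading two rows at once.
```

The point of the sketch is only that shrinking D frees budget for R; the specific microsecond figures follow from the invented parameters.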

It is not difficult to pipeline and parallelize the output of the row buffer and the digitization such that R dominates the pixel read time. I have already assumed this has been done, so that D has negligible impact on the read time. Then the only ways to speed up the read-out to allow more rows in the same frame time are to increase the SF bandwidth or to use parallel column lines. That is the point I made at the beginning.

P.S. Under the strict scaling assumption, the capacitance that the SF drives falls in any case. Your denial of the invariance of pixel DR under strict scaling doesn't stack up either way.

If you scale the SF output capacitance, then the SF noise power doesn't fall with pixel area the way it does for a constant output capacitance. While the output noise power per hertz is the same for the perfectly scaled transistor, the bandwidth in hertz increases as the output capacitance is reduced. The SF output DR, as the ratio of saturation to read noise, is therefore not constant. You are picturing that scaling everything preserves the DR, but that is just not the case.
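The capacitance argument above amounts to the familiar kT/C result: for an idealised single-pole stage, constant noise density times a bandwidth proportional to 1/C integrates to a total noise power of kT/C. A worked sketch, with hypothetical capacitance values and an idealised 1 V saturation swing:

```python
import math

# Idealised kT/C sketch (single-pole stage, unity noise factor assumed).
# Noise density per Hz is unchanged by perfect scaling, but the noise
# bandwidth rises as the load capacitance C falls, so total noise = kT/C.

k_B = 1.380649e-23  # Boltzmann constant, J/K

def sf_read_noise_vrms(C, T=300.0):
    """Integrated thermal noise voltage: sqrt(kT/C)."""
    return math.sqrt(k_B * T / C)

def dynamic_range_db(v_sat, C):
    """DR as the ratio of saturation swing to integrated read noise."""
    return 20 * math.log10(v_sat / sf_read_noise_vrms(C))

# Halving C under "strict scaling", same saturation swing (values invented):
dr_full = dynamic_range_db(1.0, 2e-15)  # 2 fF load
dr_half = dynamic_range_db(1.0, 1e-15)  # scaled to 1 fF
# dr_half comes out about 3 dB lower: the read noise voltage grows by
# sqrt(2) while the saturation swing stays put, so DR is not invariant.
```

This is only a toy model (it ignores the SF's noise factor and any correlated sampling), but it captures why scaling the output capacitance down does not leave the SF output DR unchanged.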

DSPographer's gear list:
Canon PowerShot G7 X Canon EOS 5D Mark II Canon EF 24mm f/2.8 Canon EF 50mm f/1.8 II Canon EF 200mm f/2.8L II USM +4 more