How do i explain to someone its not about megapixels ?

Started Oct 4, 2013 | Questions thread
bobn2 Forum Pro • Posts: 61,432
Re: A little understanding just might help.

Great Bustard wrote:

The question, of course, is why read noise would not be inversely proportional to the area of a pixel, which I am not qualified to answer (but Bob is). I suspect that what is going on is that the read noise could scale, but the designers come up with a formula that works for a given tech and apply it across the board to all sensors of the same generation without scaling it as a function of the pixel size, for the reason that they don't want to take chances with a new tech. Then, for the next generation of sensor, if they stay with the same tech, they do scale the design as they are now confident in the design from the previous generation, resulting in an improved efficiency.

However, this explanation strikes me as odd. For example, a larger pixel should also have a higher saturation limit in proportion to its area, and, indeed, it does appear that's how things work out. So why does this higher saturation limit of the larger pixel not result in greater read noise? And given that it doesn't, why can't the smaller pixel have the same higher saturation limit as the larger pixel?

Well, like I said, I'm not qualified to answer these questions, but, hopefully, Bob can chime in.

One of the problems with the way people seem to want to treat these discussions is that 'pixels' and 'sensors' are treated like natural phenomena, absolutely driven by laws of nature. They aren't; they are designed things, the performance of each one determined by the skill, constraints and practice of the designer. All one can really talk about is underlying trends, and one side wants to say that the underlying trend is for sensors to get noisier as pixels shrink, the other the opposite. I have shown why there is an underlying tendency for sensor read noise to fall, and therefore for DR to increase, as pixels are scaled (that is, take the same design and shrink it uniformly), and the trends taken from DxO data also show that as pixel sizes have shrunk, read noise has decreased and sensor DR has increased. On that basis I'm happy to say that in the above dispute, the party that says small pixels cause higher noise and lower DR is wrong, and as a trend the reverse is true: small pixels have produced lower noise and higher DR.
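To make the trend argument concrete, here is a minimal sketch of the usual engineering DR definition (full-well capacity over read noise, in stops). The pixel numbers are entirely hypothetical, chosen only to illustrate that if read noise scales down roughly with the pixel, per-pixel DR need not suffer, and area-for-area the smaller pixels can come out ahead:

```python
import math

def dynamic_range_stops(full_well_e, read_noise_e):
    """Per-pixel engineering DR in stops: log2 of max signal over the noise floor."""
    return math.log2(full_well_e / read_noise_e)

# Hypothetical pixels: shrinking a design by linear factor 2 scales area
# (and roughly full-well capacity) by 4. If read noise also scales down,
# per-pixel DR is unchanged.
big   = dynamic_range_stops(full_well_e=60000, read_noise_e=6.0)
small = dynamic_range_stops(full_well_e=15000, read_noise_e=1.5)
print(f"large pixel: {big:.2f} stops, small pixel: {small:.2f} stops")

# Per unit area: four small pixels collect the same charge as one large one,
# but their read noises add in quadrature (1.5 * sqrt(4) = 3 e-), so the
# aggregated small-pixel patch gains a stop of DR over the single large pixel.
binned = dynamic_range_stops(full_well_e=4 * 15000, read_noise_e=1.5 * math.sqrt(4))
print(f"4 small pixels binned: {binned:.2f} stops")
```

The quadrature sum is the standard assumption for independent noise sources; real sensors of course deviate from this idealised scaling, which is bobn2's wider point.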

Of course, that is not to say that the read noise and DR of sensors are neatly ordered by pixel size. There are plenty of confounding cases for both sides of the argument. Coming back to the point about sensors not being natural phenomena: the truth is that designers do not simply take a design and relentlessly scale it down. Nor do they appear to produce the different pixel sizes of a generation by doing one generational design and scaling it. They appear to do different things. Sometimes they produce completely different designs - for instance, the Canon 1D X sensor is quite different from the 5DIII in detail design (the 1D X has a read transistor for each pixel, while the 5DIII shares one between two). More generally, they will tend to design subcomponents for the pixel and then assemble them to produce the pixels for a generation - thus, for instance, it's quite likely that the D800 and D600 share the same basic pixel electronics. From Chipworks you can see the layout of the D800 chip.

Given that a sensor is a regular array of pixels, it's pretty easy to see how those same components can be assembled at different spacings to yield different pixel sizes. Towards the edges of the range the design will be a little less optimal, but this reuse is cheaper than designing afresh each time, and a straight scaling isn't possible unless the design rules are also scaled (this does happen: if yield analysis shows a good yield at the current design rules, a geometric shrink can be undertaken - but that typically happens with new processes, and image sensor processes tend to be old and mature).

With regard to saturation capacity, you never get to see the absolute saturation capacity of the sensor - all you get to see is the maximum readout that the manufacturer takes from it, and that can be determined by all sorts of things other than the pixel circuitry: the design of the readout chain, for instance. Again, there are economic reasons why it might be sensible to design one basic readout configuration and apply it to several sensors, even though it might not be quite optimum for them all.
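The readout-chain point can be sketched in a few lines. All figures here are hypothetical, but they show how the saturation you measure can be the ceiling of the shared readout chain (ADC range times conversion gain) rather than the pixel's physical full well:

```python
def effective_saturation_e(pixel_full_well_e, adc_max_dn, gain_e_per_dn):
    """Effective saturation in electrons: the pixel's physical full well,
    capped by the largest count the readout chain can represent."""
    readout_ceiling_e = adc_max_dn * gain_e_per_dn
    return min(pixel_full_well_e, readout_ceiling_e)

# Hypothetical: a 14-bit ADC (max 16383 DN) at a base-ISO gain of 3 e-/DN
# clips at 16383 * 3 = 49149 e-, below this pixel's 60000 e- full well.
print(effective_saturation_e(60000, 16383, 3.0))  # → 49149.0
```

On this (assumed) model, two sensors with different pixel full wells but the same readout chain would show the same measured saturation, which is one way a shared readout design can mask what the pixel circuitry could actually deliver.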
