The cost of a device is driven by its area and by yield. Yield is a
function of manufacturing quality and area: as the area goes up, the
probability of a good die goes down. To a first approximation, the
yield of a "perfect" die goes down with the cube of the linear
dimension.
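To put some (made-up) numbers on this, here is a quick Python sketch
using the common Poisson approximation for die yield
(yield ~ exp(-defect_density * area)). The defect density and the die
areas below are assumed figures for illustration only, not numbers
from this discussion:

import math

def die_yield(die_area_cm2, defect_density_per_cm2):
    # Poisson approximation: probability that a die has zero fatal defects.
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

# Assumed defect density (fatal defects per cm^2) -- purely illustrative.
D0 = 0.5

# Compare an APS-C-sized die (~3.3 cm^2) with a 35mm full-frame die (~8.6 cm^2).
for name, area_cm2 in [("APS-C, ~3.3 cm^2", 3.3), ("full frame, ~8.6 cm^2", 8.6)]:
    print(f"{name}: yield ~ {die_yield(area_cm2, D0):.1%}")

With these made-up numbers the full-frame die yields roughly an order
of magnitude worse than the smaller one, which is the effect described
above (whatever the exact exponent turns out to be).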
As Andrew points out, the yield would be a lot higher IF we
accepted some dead pixels that get mapped/processed out (there are
several well-known techniques for doing this). If a defect causes an
"open circuit" then there is usually not much harm. But if the
defect is a short, then it will probably be catastrophic. Then
there are "high current" (near-short) defects that will probably
also be fatal. What I don't know these days is the percentage of
"short" versus "open" defects.
A big problem is that these are ANALOG (linear) devices. A digital
die, like most IC's today, can tolerate a good bit of current
leakage in one transistor and still work. In fact you can even
have some shorts and the device will work. But on an ANALOG die, a
current leak is much more likely to be fatal (the current leak
affects its neighbors more dramatically). I imagine that there are
design techniques that might be used to better isolate neighboring
cells in a CMOS design, but that would be difficult in a CCD design
(with its serial charge movement).
Assuming the "dead" pixels did not affect their neighbors, we then
would get into criteria for a "bad die." LCD flat panel makers
have had these criteria for year (but are going away as quality
goes up). For example, we would probably not like pixels that are
clumped together to be bad.
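As a toy example of such a criterion, here is a small Python check
that flags a die if any two dead pixels are adjacent. The adjacency
rule and the sample coordinates are made up for illustration:

def has_clustered_dead_pixels(dead_pixels):
    # dead_pixels is an iterable of (row, col) coordinates from the defect map.
    # Flag the die if any two dead pixels touch (including diagonally).
    dead = set(dead_pixels)
    for (r, c) in dead:
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if (dr, dc) != (0, 0) and (r + dr, c + dc) in dead:
                    return True
    return False

# A few isolated dead pixels might be acceptable...
print(has_clustered_dead_pixels([(10, 20), (500, 731), (1200, 40)]))  # False
# ...but two dead pixels side by side probably would not be.
print(has_clustered_dead_pixels([(10, 20), (10, 21)]))                # True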
It is correct that if the resolution of the sensors were high enough,
we could certainly tolerate a few bad, spaced-apart pixels, but I
don't know (either way) whether accepting that type of defect would
improve the yield of an "acceptable" die that much.
Karl
It's true that a silicon wafer costs a certain, fixed amount to
produce - a cost which depends on the process and not on the size
of the individual devices which will eventually be cut from it. So
the cost of a sensor will always scale with its area.
However, whereas on an LCD display, individual duff pixels are
highly annoying and unacceptable, that's not true of an image
sensor. As long as you know where they are (which can be easily
determined), you can map them out. The processing electronics on
the sensor (or possibly, but less likely, the camera) can ignore
the value read from that pixel and interpolate it from the
surrounding pixels instead. On an image with millions of pixels,
you'd never notice that a few were 'made up'.
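As a rough sketch of that idea (not any particular camera's actual
algorithm), the following Python replaces each mapped-out pixel with
the average of its live neighbours; real sensors work on raw Bayer
data and use more careful filtering:

import numpy as np

def interpolate_dead_pixels(image, dead_pixels):
    # image: 2-D grayscale array; dead_pixels: (row, col) coordinates from
    # the factory defect map. Each dead pixel gets the mean of its live
    # neighbours.
    fixed = image.astype(float)
    dead = set(dead_pixels)
    rows, cols = image.shape
    for (r, c) in dead:
        neighbours = [image[rr, cc]
                      for rr in range(max(r - 1, 0), min(r + 2, rows))
                      for cc in range(max(c - 1, 0), min(c + 2, cols))
                      if (rr, cc) != (r, c) and (rr, cc) not in dead]
        if neighbours:
            fixed[r, c] = np.mean(neighbours)
    return fixed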
This interpolation of dead pixels is standard practice on the small
sensors used as Web cams - there's no reason why it shouldn't be
just as useful on high quality equipment.
Andy.
I think one important factor is the cost of fabricating the silicon
in the 35mm size. I really want that size sensor, but semiconductor
companies cut their costs by shrinking down the size of a chip. In
addition, there is some chance that any particular square
millimeter of the chip will have a flaw that prevents it from
working, so the larger the chip the lower the production yield.
Since you're trying to build fairly large features on the chip
compared to microprocessor transistors, this is somewhat mitigated,
but the effect is still there.
Most importantly, the larger the sensor, the fewer sensors will fit
on, say, an 8 inch wafer. And if the cost of producing such a wafer
is essentially fixed, the larger sensor will always cost
proportionally more than the smaller sensor.
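To put rough numbers on that, here is a back-of-the-envelope Python
sketch; the wafer cost is an arbitrary assumed figure, and the
dies-per-wafer count just divides areas, ignoring edge loss, scribe
lanes and yield:

import math

WAFER_DIAMETER_MM = 200    # an 8 inch wafer
WAFER_COST = 1000.0        # assumed fixed cost per wafer, arbitrary units

def dies_per_wafer(die_w_mm, die_h_mm):
    # Crude estimate: wafer area divided by die area.
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    return int(wafer_area // (die_w_mm * die_h_mm))

for name, w, h in [("APS-C (~22.2 x 14.8 mm)", 22.2, 14.8),
                   ("full frame (36 x 24 mm)", 36.0, 24.0)]:
    n = dies_per_wafer(w, h)
    print(f"{name}: ~{n} dies, ~{WAFER_COST / n:.2f} per die")

Even with this crude estimate, the full-frame die comes out between
two and three times the cost of the smaller one before yield is even
considered.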
All of this is to say we'll have a very long wait before a 35mm
full-frame sensor camera is priced like a Canon Rebel.