FF sensors expensive to make? Think twice

In other words, Canon have the technology to make FF sensors in a
single lithography pass. And that's on 300mm (12") wafers.
As others have noted, this isn't the primary determinant of sensor
expense. Basically, single step versus multiple step doesn't change
yield by enough to change costs significantly;
That is true of CMOS sensors only. Multiple exposures of CCD sensors must be painstakingly lined up or the sensor is ruined. If a single pass is capable of making a FF CCD sensor, then we may see FF CCD sensors in cameras. Those manufacturers who do not have access to Canon's proprietary on-chip CMOS noise reduction patents can resort to CCD sensors to keep noise low, as Sony has done with its 14MP CCD sensor.
the only thing it
impacts is how fast you can get a wafer ready, which in turn impacts
how many wafers you can run a day on the fab. So it might allow you
to recover the cost of your fab slightly faster, and it might produce
slightly lower overhead expense, but the bottom line is still
dependent mostly on wafer cost and wafer defects (yield).
As all businessmen will tell you, time is money. So, the faster one can make FF sensors, the cheaper they can be made.
Many of us who were trying to estimate FF versus APS sensor costs
actually were already factoring out stepping (i.e. assuming that
you'd get the FF in a single shot).

--
Thom Hogan
author, Complete Guides to Nikon bodies (18 and counting)
http://www.bythom.com
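The wafer-cost-and-yield argument above can be made concrete with a back-of-the-envelope sketch. Every number below is an illustrative assumption, not a real fab figure:

```python
import math

# Rough cost-per-good-die sketch for a full-frame sensor.
# All figures below are illustrative assumptions, not real fab data.
wafer_cost = 3000.0        # USD per processed 300 mm wafer (assumed)
wafer_diameter = 300.0     # mm
die_w, die_h = 36.0, 24.0  # full-frame sensor die, mm
defect_density = 0.05      # defects per cm^2 (assumed)

wafer_area = math.pi * (wafer_diameter / 2) ** 2
# Crude gross-die count with a standard edge-loss correction term.
gross_dies = int(wafer_area / (die_w * die_h)
                 - math.pi * wafer_diameter / math.sqrt(2 * die_w * die_h))

# Simple Poisson yield model: probability of zero defects on a die.
die_area_cm2 = die_w * die_h / 100.0
yield_frac = math.exp(-defect_density * die_area_cm2)

good_dies = gross_dies * yield_frac
print(f"gross dies/wafer: {gross_dies}")
print(f"yield: {yield_frac:.0%}")
print(f"cost per good die: ${wafer_cost / good_dies:.0f}")
```

Note that stepping speed appears nowhere in this: throughput affects how fast the fab amortises, but the per-die cost is dominated by wafer cost and yield, which is exactly the point being made above.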
Canon, through years of practice, may have perfected the art of multiple exposures on CMOS sensors. Sony is at present getting all the practice time it can with its first full frame. If single-shot steppers are available, then the cost of FF sensors can come down quite quickly. That would give manufacturers more incentive to compete aggressively on price.
 
The linked article states that recent devices on 1Ds cameras show no
visible stitching artefacts (which are usually pretty easy to spot
visually) whilst devices from the D3 still clearly show 3 stitch
interface artefacts (and shows them in photos).
The problem is, the 'stitching artifacts' are in the microlens/bayer
layer. You couldn't see any silicon stitching artifacts through
those.
Where do you get that from? Every stitched reticle die I have ever
seen has had a fairly easy to see discontinuity when viewed by the
naked eye - very difficult to see when viewed under an optical
microscope, let alone an electron microscope. None of the die I have
worked with have had microlenses on them, so it isn't a microlens
issue. This is directly on the raw silicon and metal. It is
thousands of microscopic discontinuities aligning to create a visible
artefact even though each one viewed on its own is very minor.
Sorry, poor wording on my part. DSLR sensors have a microlens/Bayer layer on top. If you see stitching artifacts, you are seeing artifacts in that top layer (the microlens/Bayer). It's highly unlikely that you can see through that layer whether or not there are stitching artifacts in the silicon.
As an example, have a look at the image on the front of this
monochrome 11Mp full frame chip from Dalsa:
http://www.dalsa.com/sensors/documents/FTF4027M_datasheet_20061030.pdf
That is clearly a 3-reticle stitch device, yet there are no filters
or microlenses.
It's more obvious in real life than in the photo, especially if you
tilt the device under the light.
Yes. I didn't say that you can't see stitching artifacts in a monochrome sensor, merely that if you see stitching artifacts in a sensor with microlenses and Bayer filters, what you see is the artifacts in those layers, which may not be the same as the ones in the silicon.
There is a press release somewhere from Nikon claiming a breakthrough
in reticule matching (and therefore stitching) technology, which
would mitigate the losses from stitch faults.
I think you mean "might" mitigate the losses. Yet to be established.
Well, they claim it does, who knows whether they're using it, or whether it's been established.
--
Bob
 
Canon state clearly that they use stitching in their full frame sensors; their
two fab lines are both 8".
What Canon have stated clearly is that "the circuit pattern of a
fullframe sensor is too large to be projected on the silicon wafer
all at once".
Clearly, this is marketing propaganda - considering that they
actually sell steppers for single-pass projection on 12" wafers.
In fact, their steppers can obviously do MF sensors in one step, not
just FF sensors.
Strange use of marketing propaganda - to undersell your technical prowess. I suggest that they say that's what they do because it's what they do.
--
Bob
 
I thought (based upon nothing, really) that the microlenses were molded in a single step. Is there some reason they need to be stitched like the silicon layer?
--
http://www.pbase.com/victorengel/

 
Don't know if you overlooked this, but the machine you linked is for
making CCDs/LCDs, while Canon's full frame sensors are CMOS.
If we're talking LCD steppers, Nikon make some truly humongous ones; they more or less own the LCD stepper market. Every Canon camera contains LCDs produced with Nikon steppers.

--
Bob
 
In other words, Canon have the technology to make FF sensors in a
single lithography pass. And that's on 300mm (12") wafers.
As others have noted, this isn't the primary determinant of sensor
expense. Basically, single step versus multiple step doesn't change
yield by enough to change costs significantly;
That is true of CMOS sensors only. Multiple exposures of CCD sensors
must be painstakingly lined up or the sensor is ruined. If a single
pass is capable of making a FF CCD sensor, then we may see FF CCD
sensors in cameras. Those manufacturers who do not have access to
Canon's proprietary on-chip CMOS noise reduction patents can resort
to CCD sensors to keep noise low, as Sony has done with its 14MP CCD
sensor.
You keep on trotting out this b'llocks. The 'proprietary on-chip CMOS noise reduction' is a 4T cell allowing per-pixel correlated double sampling. Canon doubtless have patents on their implementation, but the basic technology was not invented by them and is used by several CIS manufacturers. There are other effective strategies too. The IMX021 sensor does not appear to be significantly worse with respect to noise than equivalent Canon sensors.

--
Bob
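For anyone unfamiliar with correlated double sampling: the idea is to read each pixel twice, once at reset and once after exposure, and subtract, so the per-pixel offset (kTC/fixed-pattern) noise cancels. A toy simulation of the effect follows; the noise magnitudes are made-up numbers, not measurements of any real sensor:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                # pixels
signal = 1000.0            # true photo-signal, electrons

offset = rng.normal(0, 50, n)   # per-pixel reset/offset noise (assumed 50 e-)

def read_noise():               # temporal read noise per sample (assumed 5 e-)
    return rng.normal(0, 5, n)

reset_sample  = offset + read_noise()           # sample 1: reset level
signal_sample = offset + signal + read_noise()  # sample 2: reset + signal

no_cds = signal_sample                 # single read: offset noise dominates
cds    = signal_sample - reset_sample  # CDS: the offset cancels exactly

print(f"noise without CDS: {no_cds.std():.1f} e-")  # ~50 e-
print(f"noise with CDS:    {cds.std():.1f} e-")     # ~7 e-
```

The price of CDS is that the temporal read noise of the two samples adds in quadrature (hence ~7 e- rather than 5 e- here), but the much larger per-pixel offset is removed completely.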
 
Apparently, Canon does not use that rather old 50x50mm stepper with 500nm resolution to make its 35mm FF (24x36mm) sensors.

For one thing, that stepper has been around for some years, since before the 1DsMkIII, 5D and even 1DsMkII were introduced, so it already existed when Canon said that it needs to use stitching to make 24x36mm sensors. So unless Canon is lying repeatedly in two white papers, that stepper is not suitable for Canon's DSLR sensor fab.

One reason might be its large 500nm feature size, which could make the amplifier circuitry within every CMOS pixel too big, reducing electron well capacity and dynamic range more than Canon wants.

By the way, Nikon also used to make a stepper with a maximum field size larger than 24x36mm, but has discontinued that model. The largest field size on any stepper introduced in recent years is 26x33mm, and it seems that most recent models have exactly 26x33mm as their maximum.
 
Is that all that is in there? Do you know which ARM core they are
using? I wonder where they put their AFE?
It's known to be an ARM; I suspect the people programming CHDK ( http://chdk.wikia.com/wiki/Main_Page ) have gathered a lot of information on which core each DIGIC uses. I suspect they buy AFEs from AD just like Nikon does. It takes a lot of prior expertise to get as good at these analog circuits as AD is.
--
Bob
 
Anastigmat wrote:
SNIP
As all businessmen will tell you, time is money. So, the faster one
can make FF sensors, the cheaper they can be made.
This may not be the case for a FF sensor. My understanding is that camera companies contract for, say, 10K sensors; when the sensors are delivered they make 10K bodies and then order 10K more. For more popular bodies the number is higher than 10K, but the point is that sensor production is not a 24/7 operation. It is not even an 8/5 operation.

Sure, there is a marginal saving from faster chip production, but not sufficient to provide the savings you are talking about. I doubt faster chip production would lower the true fabrication cost by even 5%.

For low production cameras (which FF cameras are) it would not be surprising if a single sensor run of a couple of weeks would last the camera maker the whole year.

SNIP

--
Those who forget history are condemned to go to summer school.
 
I have been out of micro-electronics since 1993.
I am sure a lot of things have changed.

But I still believe making FF sensor chip is not as easy and cheap as making "potato chips".

--
ecube
 
I don't know. All of our cable modem chips used to have the little "ARM" logo marked on them. Just looking at a fairly late version of a similar part, it doesn't say ARM on it, but it has three ARM9 cores inside. Maybe they have changed their policy or our contract has changed. Interesting info anyway.
 
The company I used to work for would always deliberately omit or blur any vendor logos on promo shots of hardware, so the finished products in this case may have the actual labels, but the photos could be doctored not to show them.
Nick
--
My meager galleries: http://nickambrose.smugmug.com/
 
That is true of CMOS sensors only. Multiple exposures of CCD sensors
must be painstakingly lined up or the sensor is ruined.
Where do you get this idea that CMOS doesn't have to be 'painstakingly' aligned?

Think about it. Every pixel still needs to be electrically connected irrespective of the underlying technology.
 
I thought (based upon nothing, really) that the microlenses were
molded in a single step. Is there some reason they need to be
stitched like the silicon layer?
--
http://www.pbase.com/victorengel/

There are different techniques for making microlenses. One uses the normal photolithography process to form little 'pillars', which are then heated until they collapse into lenses. This obviously requires stitching, just like any other photolithography. Such a lens layer can then be used to form a mould to replicate lens layers, which are bonded to the chip; those replicas reproduce the stitching artifacts of the original. There are also techniques using laser machining, and probably others too. It could well be that Nikon is using a lithography-based lens layer, which therefore shows artifacts, and Canon is using some other technique (or even used its 50mm-square stepper to make the original), so its microlenses don't display stitching even though the underlying chip is stitched.
--
Bob
 
It could well be that Nikon
is using a lithography based lens layer, which therefore shows
artifacts and Canon is using some other technique (or even used its
50mm square stepper to make the original) and so its microlenses
don't display stitching, even though the underlying chip is stitched.
Is there an example image around where the stitching artifacts are obvious? Before this thread I hadn't heard of anyone complaining about this problem with any digital camera.
 
You may have something here w.r.t. pricing. Today's lowest-priced FF sensor camera sells for around $1500 or so (5D w/ lens). I suspect that, as time goes on, the 5D or its equivalent will sell for around $1300-$1500 w/o lens.
Well, see my own guesstimates before I quoted the listed document.
Regardless, it does not change my point one bit.
You are right that yield has a significant effect on those huge FF chip prices. I remember hearing figures like c. 80% for APS-C size sensors and something like 20% for FF chips. With that 1:4 yield ratio and the roughly 1:2.5 size difference, a single wafer yields about one tenth as many FF sensors as APS-C sensors, hence roughly ten times the price. I have understood the APS-C sensor price to be around $50 (plus company profit), so a FF sensor would cost around $500 (plus company profit). Put that into a product with all the sales network costs/profits and taxes, and you get around a $1000 add-on to the product price because of the FF sensor, and something more because of the FF-size mirror and viewfinder.

As for the OP's post, a single-pass lithography machine is very significant when you are trying to improve yield: no more junk because of micrometre-scale offsets between lithography passes. Eventually we can expect the cropped-to-FF sensor cost ratio to come down - perhaps to a minimum of 1:5 - meaning the minimum camera selling price difference would be "only" around $500. Today's lowest-cost new APS-C cams sell around $500, which could hint at tomorrow's lowest-cost FF cams at around $1000...$1500 (perhaps $1290, just to throw out a wild number).
--
http://www.flickr.com/photos/adatta

http://picasaweb.google.com/owaustin/
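The arithmetic in that estimate can be checked in a few lines. The yields and the $50 APS-C sensor price are the rough figures quoted in this thread, not measured data:

```python
aps_c_area = 23.6 * 15.7   # mm^2, a typical APS-C die
ff_area    = 36.0 * 24.0   # mm^2, full frame

yield_aps, yield_ff = 0.80, 0.20   # rough yields quoted above

# Good dies per unit wafer area scale as yield / die area, so the
# relative cost per good die is the inverse of that ratio.
cost_ratio = (yield_aps / aps_c_area) / (yield_ff / ff_area)

aps_c_price = 50.0   # quoted APS-C sensor cost, USD
print(f"FF/APS-C area ratio: {ff_area / aps_c_area:.1f}x")
print(f"FF/APS-C cost ratio: {cost_ratio:.1f}x")
print(f"implied FF sensor cost: ~${aps_c_price * cost_ratio:.0f}")
```

This comes out at roughly 9x, which lands close to the ~$500 FF sensor figure quoted above.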
 
There is? All I've found in this thread is the PDF file for the Dalsa sensor. I don't see any shots from Nikon cameras that show stitching artifacts.
 
