
CMOS sensor inventor Eric Fossum discusses digital image sensors

By dpreview staff on Oct 28, 2011 at 22:52 GMT

Image sensor engineer and primary inventor of the CMOS sensor, Eric Fossum, has given the second annual Victor M. Tyler Distinguished Lectureship in Engineering at Yale University. Fossum's talk, 'Photons to Bits and Beyond: The Science & Technology of Digital Image Sensors,' covers a wide range of subjects, from the basics of how sensors work to the potential risks to society of the ways the technology can be used. He touches on noise, demosaicing and how 'the force of marketing is greater than the force of engineering.' Yale has put a video of the presentation on YouTube and it's well worth watching if you have any interest at all in the physics and engineering that make your camera work. (via Image Sensors World)

Comments

Total comments: 114
Eric Fossum
By Eric Fossum (Nov 5, 2011)

Thanks all for your comments. I may not return to this discussion now for some time so please don't look for a response from me.

0 upvotes
Cross_
By Cross_ (Nov 3, 2011)

Here's a calculator for diffraction limit which contains a list of airy disk sizes for different apertures :
http://www.cambridgeincolour.com/tutorials/diffraction-photography.htm
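For reference, the numbers such calculators produce follow from the Airy disk formula: first-minimum diameter d = 2.44 * wavelength * f-number. A minimal sketch, assuming 550 nm green light (values are illustrative only):

```python
# Airy disk first-minimum diameter: d = 2.44 * wavelength * f-number.
# Assumes 550 nm (green) light; purely illustrative values.

def airy_disk_diameter_um(f_number, wavelength_nm=550):
    """Diameter of the Airy disk's first minimum, in microns."""
    return 2.44 * (wavelength_nm / 1000.0) * f_number

for n in (1.4, 2.8, 5.6, 11, 16):
    print(f"f/{n}: {airy_disk_diameter_um(n):.1f} um")
# f/2.8 -> ~3.8 um, consistent with the ~4 um figure discussed
# elsewhere in this thread; f/11 -> ~14.8 um.
```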

0 upvotes
Neoasphalt
By Neoasphalt (Nov 2, 2011)

Mr. Fossum,
I would like to get an answer from an expert such as you on the following question: if a given-size sensor's pixel density were reduced, let's say four times, from the common 16 MP to 4 MP (enough for people who don't print their images), and it were made with the same latest CMOS technology, would this result in significantly:
1. Lower noise
2. Higher dynamic range
3. Better tonal range
4. Higher color depth
5. Cheaper production costs
6. Or other possible improvements?

Thank You

0 upvotes
Eric Fossum
By Eric Fossum (Nov 3, 2011)

Is the pixel size the same or larger? Same optics? Are we optimizing the pixel for any particular parameter? Sorry to answer your question with questions but these are also important. I will try to check in a day or two and see what you say.

0 upvotes
Neoasphalt
By Neoasphalt (Nov 3, 2011)

* I am not an expert in sensor technology, but I assume that if the density becomes 4 times lower and the technology is similar, then it is possible to make the pixel size roughly 4 times bigger.
* Same optics.
* We are optimizing the pixel to whatever extent possible so that the production cost stays the same or less and those 5 (or more) parameters become as good as possible, with priority on lower noise.

0 upvotes
Eric Fossum
By Eric Fossum (Nov 5, 2011)

Let's assume 4x larger pixel and 4x larger full well but same operating voltages etc. So, conversion gain is reduced 4x. This analysis merits more reflection, but shooting from the hip on a Friday night...

1. Noise. Read noise would be 4x worse. Usually you don't have to worry about read noise. In shot-noise-limited performance, SNR would be better at the same digital output level. With lower conversion gain, the same digital output level corresponds to a larger number of electrons and hence better SNR.

2. Dynamic Range. Probably the same. The max signal (in electrons) has increased by 4x. The read noise (dark noise) has increased by 4x. The ratio is the dynamic range, more or less.

3. Tonal Range. I am thinking this is limited by the ADC quantization error so it would be the same. But maybe something else limits tonal range.

4. Color Depth. Also ADC limited, I think, so the same.

0 upvotes
Eric Fossum
By Eric Fossum (Nov 5, 2011)

5. Cheaper production costs. Not likely. Same die size. Maybe slight yield improvement due to larger pixel.

6. I am sure there must be some, but I can't think of any at this moment.

Note that these answers depend on the assumptions you asked me to make, and assumptions I made. You should not generalize too much from this, but it seems that more pixels is better under this set of assumptions, since SNR is probably not a big issue for well-exposed, low ISO (gain) photos. Maybe the SNR improvement of the larger pixels would be reflected in better performance at high ISO numbers.
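To make the bookkeeping above concrete, here is a toy numeric sketch. The 4x full-well and 4x read-noise factors are the assumptions from this exchange; the absolute electron counts are invented, not measurements of any real sensor:

```python
# Toy comparison of the two hypothetical pixels discussed above.
# Assumed, per the thread: the 4x-area pixel has 4x the full well
# and 4x the input-referred read noise. Numbers are invented.
import math

def snr_db(signal_e, read_noise_e):
    """SNR with shot noise (sqrt of signal) plus read noise, in dB."""
    noise = math.sqrt(signal_e + read_noise_e ** 2)
    return 20 * math.log10(signal_e / noise)

def dynamic_range_db(full_well_e, read_noise_e):
    return 20 * math.log10(full_well_e / read_noise_e)

pixels = {"small": (10_000, 2.0), "large 4x": (40_000, 8.0)}
for name, (full_well, read_noise) in pixels.items():
    print(name,
          f"DR = {dynamic_range_db(full_well, read_noise):.1f} dB,",
          f"SNR at half scale = {snr_db(full_well / 2, read_noise):.1f} dB")
# DR comes out identical (~74 dB) for both pixels; SNR at the same
# relative output level is ~6 dB better for the large pixel
# (4x the signal -> 2x the SNR), matching the argument above.
```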

0 upvotes
forpetessake
By forpetessake (Nov 5, 2011)

If I understood the question correctly, the picture quality would be pretty much identical if one simply resized the output of the 16 MP sensor to 4 MP.

0 upvotes
Eric Fossum
By Eric Fossum (Nov 5, 2011)

Actually the smaller pixels will give slightly better results if the image quality is read-noise limited, since the noise will grow 2x if you add together 4 pixels. Also the color resolution will be better. Due to color processing, having at least a full Bayer kernel within the Airy disk is better than each kernel element being roughly the size of the Airy disk, since color processing reduces resolution anyway.
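The 2x factor comes from uncorrelated read noise adding in quadrature when pixels are summed; a quick check with an illustrative noise figure:

```python
# Uncorrelated read noise adds in quadrature: summing 4 pixels
# grows sigma by sqrt(4) = 2x, per the comment above.
import math

sigma_per_pixel = 2.0                          # e- rms, illustrative
sigma_of_sum = math.sqrt(4) * sigma_per_pixel  # noise of a 4-pixel sum
print(sigma_of_sum)                            # 4.0 e- rms, i.e. 2x
```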

0 upvotes
IRStuff
By IRStuff (Nov 7, 2011)

One of the initial assumptions seems dubious to me. If you already have a 16M process, then a 4M imager would use the same sized pixels, with 1/4 of the overall active area, and get the large boost in wafer yield that would make it profitable to sell the 4M imager. Upping the pixel area is unlikely, particularly for a consumer imager, since the yield would be comparable to the 16M's, and you'd have to sell it for about the same amount, which is a non-starter.

To a first approximation, the noise performance of a same-sized-pixel 4M would be comparable to slightly better. Assuming the same frame rate means that the readout rate could potentially be reduced, which generally lowers overall read noise. Since there are fewer reads in the array itself, clocking and other related noises should be reduced.

0 upvotes
Neoasphalt
By Neoasphalt (Nov 8, 2011)

As I understand it, if the pixel size were 4 times bigger on the same sensor size, you think SNR would be better while the other parameters stay the same.
I compared 3.1 and 3.9 MP old-generation (2005) sensors with the same optics and body, and the noise level of the 3.1 MP one is really much lower, although its pixel density is only 1.2 times less. So I thought that with new technologies and 4 times lower density (e.g. 16->4), noise levels should be hugely lower, which might attract many customers, since most of the latest high-megapixel P&S (and some higher-class) cameras have visible/excessive noise even at base ISO in good light.

0 upvotes
Eleson
By Eleson (Nov 2, 2011)

4 microns is mentioned as a 'limit' for f/2.8, but what is the equivalent size for an f/1.1 or f/1.0 lens?

0 upvotes
J R R S
By J R R S (Nov 2, 2011)

I wonder... would it be 1) beneficial, 2) possible & 3) wise, to try and make sensors like photomultiplier tubes? (at each pixel location)...

Ok, to answer question 3, I know it would not be wise :)
But if you could overcome question 2 (given unlimited resources), would question 1 be a yes?

0 upvotes
Eric Fossum
By Eric Fossum (Nov 2, 2011)

There exist sensor arrays where each pixel is a "single-photon avalanche detector" or SPAD. The arrays are small (e.g. 32x32) and used for scientific applications. Unfortunately use of these in larger arrays and with smaller pixels is a large technical challenge.
see: http://www.imagesensors.org/Past%20Workshops/2007%20Workshop/2007%20Papers/063%20Charbon.pdf

0 upvotes
forpetessake
By forpetessake (Nov 1, 2011)

It looks like sensor pixels and gaps between them are pretty large compared to logic gates, so one can probably pack a lot of logic on the same chip in modern 28 nm CMOS technology. Will we soon see a camera on a chip, with sensor, processor, memory, etc. on a single die?

0 upvotes
Eric Fossum
By Eric Fossum (Nov 1, 2011)

This "system on a chip" image sensor or SOC is in widespread use for camera-phone applications already.

0 upvotes
Bulgroz
By Bulgroz (Nov 1, 2011)

Pr. Fossum,
Thank you for this very interesting presentation. In particular I find the concept of the QIS most interesting. For me the challenge is probably more in the photodetector than in the associated electronics. You consider the data transfer a big challenge, but don't you think it possible to resort to on-chip data reduction (image formation) and on-chip networking, rather than moving Tbits/s of data off-chip? What is, in your mind, the timeframe for the realization of the QIS?

0 upvotes
Eric Fossum
By Eric Fossum (Nov 1, 2011)

This is a longer term R&D project and may never result in a commercial success. But 5-10 years is a reasonable time scale.

0 upvotes
JB Digital
By JB Digital (Nov 1, 2011)

Thanks, very interesting lecture, I liked it very much.

0 upvotes
Dan Ortego
By Dan Ortego (Nov 1, 2011)

'Gee, I should’ve had a V8'. If I knew I was going to be so interested in science I would have paid more attention in scholastics. Thank you for a wonderful presentation.

Regards,
Dan Ortego

0 upvotes
Eric Fossum
By Eric Fossum (Nov 1, 2011)

Thanks Dan. I couldn't ask for more.

0 upvotes
harumpel
By harumpel (Nov 1, 2011)

Thank you for the interesting presentation, Dr. Fossum!

Since most photographic devices today are those integrated into mobile phones, with tiny sensors but high frame rates, do you think for them it makes sense to use super-resolution algorithms to lower the noise and increase resolution?

Isn't super-resolution upscaling already used by some DVD players and TVs?

Kind Regards
Theo

0 upvotes
Eric Fossum
By Eric Fossum (Nov 1, 2011)

Theo, sure, one could interpolate more pixels, but the real resolution is fixed by the number of real samples taken. Some low-end cameras (e.g. web cams) have, in the past, done what you suggested. Sometimes it is still done.

0 upvotes
forpetessake
By forpetessake (Nov 1, 2011)

Can somebody knowledgeable explain, why do we still have an ISO setting in cameras with CMOS sensors? As far as I understand from this presentation, CMOS sensors have separate amplifiers for each pixel. That means that the gain of every amplifier can be set for optimal signal/noise ratio for the amount of light the pixel received. In other words we won't have an ISO setting for a sensor, rather a camera can set an optimal ISO (gain) for each pixel. So we can get rid of ISO setting on one hand and get the best possible dynamic range from the sensor under each given light condition on the other. So why don't we see this implemented in modern CMOS sensors, or is it?

0 upvotes
thinkfat
By thinkfat (Nov 1, 2011)

The existence of a signal amplifier per pixel does not imply they're programmable. Don't think of "amplifier" in the way your home stereo amp works. In the sensor the amplifier is just one transistor and two resistors.

0 upvotes
Eric Fossum
By Eric Fossum (Nov 1, 2011)

hmm, no resistors.

0 upvotes
J R R S
By J R R S (Nov 2, 2011)

obviously the outcome of this setup would be every pixel exposed to its maximum potential... hence the final image would probably be very, very light gray... you need to set the amplification globally or you will get no contrast!

0 upvotes
forpetessake
By forpetessake (Nov 5, 2011)

"you need to set the amplifacation globaly or you will get no contrast" -- I don't think so. The wonderful thing (if that were possible), that the output of each pixel would be weighed (scaled) with individual amplifier gain, so the resulting raw file would have the best possible dynamic range the sensor is capable of. It would be like combining multiple exposures taken with ISO from the lowest to the highest -- no pixel overexposed, no pixel is underexposed.

0 upvotes
Eric Fossum
By Eric Fossum (Nov 6, 2011)

So in this scheme I think you need two exposures. One to figure the right gain settings, and one to take the larger dynamic range image, and hope nothing bright or dark moves in between the two shots.
If you are going to take two exposures there are many ways to achieve high dynamic range - like one short and one long exposure fused together.
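A minimal sketch of the short/long fusion idea mentioned above -- not any camera's actual pipeline; the function, names, and 12-bit clip level are assumptions for illustration:

```python
# Fuse one short and one long exposure: use the long exposure where
# it isn't clipped, otherwise the short exposure scaled by the
# exposure-time ratio. A toy sketch, not a production pipeline.
import numpy as np

def fuse_exposures(short_img, long_img, ratio, full_scale=4095):
    """ratio = long exposure time / short exposure time (12-bit data)."""
    fused = long_img.astype(np.float64)
    clipped = long_img >= full_scale
    fused[clipped] = short_img[clipped].astype(np.float64) * ratio
    return fused  # linear values, range now up to ~ratio * full_scale

short = np.array([[100, 4000], [10, 300]])     # toy 12-bit frames
long_ = np.array([[1600, 4095], [160, 4095]])  # clips where scene is bright
print(fuse_exposures(short, long_, ratio=16))
```

As Dr. Fossum notes, anything that moves between (or during) the two exposures will produce artifacts at the stitch boundary.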

0 upvotes
Boris F
By Boris F (Oct 31, 2011)

Dear Eric, thanks for the very interesting lecture. I liked it, especially the part about the lens diffraction limit. May I ask you a question: is it a good idea to be an inventor? Please advise. I dream of working with you on one team.

0 upvotes
Eric Fossum
By Eric Fossum (Oct 31, 2011)

Thanks Boris. It is a good idea to be a good inventor. Luck and timing are helpful as well. I did not plan to be an inventor. It is just a natural part of being an engineer that one sometimes comes up with solutions to problems, and sometimes these solutions are new enough to be patentable.

2 upvotes
Boris F
By Boris F (Nov 1, 2011)

Thanks for your kind answer. God bless you!

0 upvotes
Cy Cheze
By Cy Cheze (Oct 31, 2011)

Does Fossum suggest an optimum number of pixels on a 1/2.3" or APS-C CMOS sensor? If one views the image on a 1920x1080 screen, isn't the optimum really only about 2 MP, except when cropping, in which case you'd need perhaps 4 MP or 6 MP? Can one strip away the "force of marketing" and assert anything on the matter? Just curious.

0 upvotes
Eric Fossum
By Eric Fossum (Oct 31, 2011)

Completely depends on the technology used to implement the sensor so no unique answer.

0 upvotes
Cy Cheze
By Cy Cheze (Oct 31, 2011)

Thanks. But I suspect there has to be a point where the incremental resolution is offset by the incremental "noise." At any rate, that's what some people proclaim. The physics should set some boundaries which technology might try to dodge, but not overcome. Is there no "Fossum Uncertainty Principle"? There should also be some way to quantify or measure the upper limits of what viewers can distinguish under realistic display situations.

0 upvotes
Eric Fossum
By Eric Fossum (Oct 31, 2011)

Cy, do you want noise or resolution? People cannot even agree on the weighting of components in an image quality metric. Let's say you did have a metric formula so you could compute IQ from all these factors. Still, the technology used to implement the sensor will impact noise, maximum signal, QE, etc. Perceived image quality does seem to get generally better with smaller pixels down to some size which is smaller than predicted just by the diffraction limit, and counter also to SNR analysis. So there is also a human perception factor that is not yet quantified.
Nevertheless, at some point even with perfect pixels and IQ well defined, there is marginal return on IQ. I just have only a fuzzy idea where that might be, and I think most manufacturers try to find that point along with other technical limitations. Sorry to be so, uh, uncertain.

2 upvotes
DrummerCT
By DrummerCT (Nov 7, 2011)

This issue, in part, reminds me of "vernier acuity" (also known as visual hyperacuity) in which the ability to detect visual "differences" is smaller than one would expect just from measuring visual angle of the target from the eye. In other words, the visual difference detectable by a person, at first glance (pun intended), appears to be within the visual angle SMALLER than a single retinal cell. I've not read that literature in quite a number of years, and last I recall the puzzling aspect of what looks like being able to detect visual differences within the diameter of a single cell may be explainable by inter-cell communications and light distribution differences across multiple cells (this is all a rather technical subject and I fear that the summary does the topic injustice). But in any event, it does not surprise me that perceived image quality exceeds the diffraction limit mentioned above.

0 upvotes
princewolf
By princewolf (Oct 31, 2011)

Thank you very much for this Dr. Fossum. I was very excited to see you around in dpreview, and it's our chance and privilege. Several points;

-"Force of marketing..." is one of the best quotes I have ever heard, it explains a lot of things going on in technology driven industries.

-The UDTV with 33 MP, if it were available today, could make photography pretty much a thing of the past, considering that still digital photos today have so much resolution advantage over HDTV, but again there would always be a cheaper 60MP still image taker for any 33MP video camera!

-I suspect viewing a 33MP video at 60fps would be quite challenging for the brain. There is no doubt about the advantage in image quality, but some neural problems may arise but I guess it's too early-and certainly not for me to pass judgement on.

Finally, I would like to ask if you could recommend a "for dummies" type resource for understanding the deeper electronics of current photography technology?

0 upvotes
Eric Fossum
By Eric Fossum (Oct 31, 2011)

-32 Mpixel @ 60 fps looks more like real life, so when I saw it at the Aichi Expo some years ago I did not see any viewing problems.
-I am sorry, but I just don't know of any intro-level books like that. It seems like there should be some general photography-audience books.

0 upvotes
Uaru
By Uaru (Oct 31, 2011)

- 33 MP challenging for the brain? I do not think so. Not more than real-life seeing, anyway.

- still photography is not about the megapixels and higher quality in comparison to TV, but about seeing something interesting, and then composing it. You can appreciate and contemplate a good photograph for hours - good quality is necessary for that, but it is only a tool. There is a big difference between a still image and a moving image. Considering that many photos taken with full frame DSLRs are then transformed into something Facebook can handle... I would not worry about the still image, even if TV had more megapixels...

2 upvotes
princewolf
By princewolf (Oct 31, 2011)

-Most people live below their eyes' resolution capability all their lives, and some people who just start wearing glasses have difficulty adjusting to the clarity. In fact, optometrists prescribe less-than-perfect numbers on purpose because the brain can have difficulty absorbing all the details. I'm talking about long-term effects, and unless you are a neurologist, Uaru, your guess is as good as mine.

-With a videocam with 32 MP output resolution shooting at 60 fps -- oh yeah, I would be worried! Even if some frames were interpolated, 30 fps shooting at 32 MP each would allow me to pick the best frame. Of course other adjustments such as ISO speed, aperture control, shutter speed etc. would still be necessary, but hey -- they can incorporate all these into video if they make a 32MP@60fps videocam. In any case, for any 32 MP videocam there would still be a photo camera with at least twice the resolution, so the point is moot.

0 upvotes
J R R S
By J R R S (Nov 2, 2011)

just because you look at it does not mean you see it!

0 upvotes
bushi
By bushi (Nov 16, 2011)

princewolf, how do you think UHD videocam shooting would compare to photo shooting, I mean in a practical sense? What about composing, etc.?

It is a totally different process to shoot video versus shooting stills; practically, a different mindset and photographer (filmmaker) focus.

However, it might be true that the camera itself might help with getting the most out of your shots when "shooting" in a movie mode and picking the best frame later on. But to a very small degree, IMHO.

0 upvotes
peatantics
By peatantics (Oct 31, 2011)

Thank you Dr Fossum for rolling the rock of CMOS to the top of the hill.
Concerns for the use put too is reminiscent of big brother on two levels.
Identification: The act of knowing that which is contained in boundaries!
So the means of recording is both borrowed by the state and individuals.
The public office watches the individuals while the individuals watch them
Watch them watch the individual who chooses to break the laws so made.
This production of yours has identified one thing which is paranoia is fear.
Now we know what can be done about proper use and wrong use of CMOS?
Like Sisyphus felt it was fun to watch the rock roll down that hill again, again.

0 upvotes
Techblast
By Techblast (Oct 30, 2011)

OK. Wonderful lecture. That means 4 microns is the right size to match what physics already knows. So there is your standard, based upon the part of the electromagnetic spectrum called visible light, and when you multiply this pixel measurement by the FF dimensions, you have the practical limits of FF resolution before "shot" artifacts become an increasingly worsening phenomenon. OK. So the Canon 1D X should provide the best possible rendition of light and resolution for FF, and if you need a larger printout you will need to resize up and accept the inherent losses, or you will need to go up to an MF sensor solution. This applies to all FF manufacturers and not just Canon. Excellent lecture! Thanks!!

0 upvotes
Eric Fossum
By Eric Fossum (Oct 30, 2011)

I think drawing the line at 4 microns may be premature, but certainly there is a diminished return on investment in resolution as pixels fall below the diffraction limit. Thanks for your nice comments.

3 upvotes
SteveGJ
By SteveGJ (Oct 31, 2011)

It's worth noting that the colour sampling (at least for the red and blue channels) is at twice the photosite pitch, so smaller photosites might assist that.

I wonder why the Bayer pattern dominates. Given that low-energy red photons have the lowest QE, and my subjective impression is that red chroma noise is most prevalent, would it make more sense to have two red sensors per cell?

One thing I'd be interested to know is whether consideration has ever been given to sensors which do not use regular grid patterns. The eye's rods and cones are certainly not laid out with the regimented regularity of a CMOS or CCD sensor, and neither is film. A pseudo-random pattern might disrupt things like moire patterns. Of course the changes required to image processing all the way through the stack would be fairly horrible to consider, but I'm pretty sure the brain's optical perception systems do not work through regular grids.

0 upvotes
Rick Knepper
By Rick Knepper (Oct 31, 2011)

Good Morning Dr. Fossum. I should disclose that I am a proponent of high resolution but not unbridled. I suppose the corollary is that I also support small pixels by definition although I could care less about the size of pixels. I am financially confined to FF for the near future. Also, I am not technically educated on digital imaging.

By diminishing returns do you also mean degradations as is suggested by Techblast? The two ideas do not seem necessarily tied together. As for diffraction, I think of that as an evil that in small amounts is preferable to not getting the DoF one needs. How small can the pixel get (how many pixels on a FF chip) before unrecoverable harm is done? You and others have mentioned 4 microns. Does that equate to 18 MPs on a FF surface also as suggested by Techblast?

0 upvotes
Techblast
By Techblast (Oct 31, 2011)

Rick, if you read Canon's release statements concerning their new 1D X, they claim a 6.95 micron pixel size and a 6.4 micron pixel size for the 5D MkII. If, for example, 4 microns does turn out to be the point where further shrinkage does not necessarily advance resolution detail when enlarged to 100% size, then you can surmise that a FF sensor will max out around 31-32 megapixels, which is good. For me, it would be somewhat fruitless to pay $3K, $4K, $11K for individual lenses only to distort IQ with undersized pixels. Canon has made it clear, at least to me, that pixel size provides a higher signal-to-noise ratio, a cleaner replication of light and a sharper image.

So, it would seem that once Canon gets to ~30MP on their FF sensor, Canon, Nikon and others may have to ask the critical question as to whether to embark upon medium format sensors in order to provide higher resolutions while maintaining image quality.

0 upvotes
Techblast
By Techblast (Oct 31, 2011)

PS. The assumption is that if 6.95 microns = 18.1 megapixels, then 4 microns = 31,754,385 pixels.

It looks like Canon, Nikon and the other FF equipment providers are either going to have to invest a lot of money in R&D to find new photosensor fabrication methods that can accurately capture light on smaller photosensor sites, and hence avoid MF sensors, or they will find themselves quickly approaching the limits of what a FF sensor can deliver and face having to develop MF camera solutions for the high-end professional market within the next 4-5 years.

It will be interesting in a few years to see how this plays out.

0 upvotes
SteveGJ
By SteveGJ (Oct 31, 2011)

@Techblast

Your calculations are faulty. You have performed a linear calculation when it's the area that matters. If 6.95 microns pitch gives 18.1 MPiX, 4 microns will give 3.02 x as many photosites, or 54.6 MPiX.

More directly, 24mm x 36mm at 4 microns provides for 6,000 x 9,000 = 54MPiX. (And consistent with the Sony A77 24MPiX sensor, which has, near enough, a 4 micron pitch on APS-C at 4,000 x 6,000 photosites). A FF sensor has 2.25 x the area of a 1.5-crop-factor APS-C; 2.25 x 24 = 54MPiX.
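SteveGJ's area-versus-linear point as arithmetic (a quick sketch; sensor dimensions idealized to exactly 36 x 24 mm):

```python
# Pixel count scales with the *square* of the pitch ratio, not the
# ratio itself. Dimensions idealized to exactly 36 x 24 mm.

def megapixels(width_mm, height_mm, pitch_um):
    return (width_mm * 1000 / pitch_um) * (height_mm * 1000 / pitch_um) / 1e6

print(f"{megapixels(36, 24, 6.95):.1f}")  # ~17.9 MPix (1D X class)
print(f"{megapixels(36, 24, 4.0):.1f}")   # ~54.0 MPix, as computed above
# Equivalently, 18.1 MPix * (6.95 / 4.0)**2 ~= 54.6 MPix.
```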

So a 54MPiX FF sensor is certainly technically feasible using a scaled up version of current technology, although it might be sub-optimal in other ways (read rate, power dissipation etc.).

Personally I have doubts we'll see CMOS MF sensors as the R&D costs might not justify the relatively limited market.

0 upvotes
Rick Knepper
By Rick Knepper (Oct 31, 2011)

Thanks for that clarification. I thought you were saying that the 1D X was already at 4 microns. Now I am back on track.

0 upvotes
Dario D
By Dario D (Oct 30, 2011)

Oh boy, lots of great info on camera sensors (at least the parts my non-engineer self can follow, lol).
To add to the social issues list, perhaps, here's a big one I've noticed:
It seems camera companies have a deliberate unwillingness (still) to allow Point-&-Shoot cameras to perform in darker environments (like indoors), which likely contributes to this worldwide problem of people having dirt-poor perception of their self-image.

In other words, I believe the horrible results we get from frontal-flash or underexposed, poor-looking photos are a worldwide scourge on human self-perception. People already think they're ugly in GOOD shots, so imagine what must happen when they see themselves portrayed even worse than reality.

Isn't it affordable for a company to just use a lower-megapixel version of even a 3 year-old D-SLR sensor (better low-light), or, for that matter, add a single screw to the onboard flash, allowing it to pivot upward, and become a beautifying bounce flash?

0 upvotes
name1
By name1 (Nov 13, 2011)

What about back-illuminated sensors?

0 upvotes
Photato
By Photato (Oct 30, 2011)

'The force of marketing is greater than the force of engineering'
What I don't understand is why Mr Fossum is now admitting that current sensor densities are more about marketing than actual performance.
In previous discussions here in these forums I personally discussed this with him and he seemed to have a different position, advocating for more/smaller pixels.
Back then I only agreed with him about the pixel-size sweet spot.
I by no means have his background, but my observations come after years of real-world product comparisons, and I've always maintained the position that current densities are above the sweet spot for performance, due to market demand, as Sony always puts it with each release of a new consumer sensor with more pixels: 'Due to market demand...'

0 upvotes
Eric Fossum
By Eric Fossum (Oct 30, 2011)

I believe most of these discussions have been about shot noise, read noise, full well and image quality. I don't recall a lot of discussion of the diffraction limit. In any case, I still believe in smaller pixels.

Until you reach sub-diffraction-limited pixel sizes, there is a sweet spot for every technology generation. Once you are below this size, the return on resolution for smaller pixels diminishes.

3 upvotes
Photato
By Photato (Oct 31, 2011)

Yes, and also diffraction. One problem is that perfect conditions are assumed: camera on tripod, shooting a static subject, a lens that is perfect from corner to corner, and perfect focus. In practice all that blows away, and it turns out that we can be served better by larger pixels in most common situations, since the maximum resolution the sensor is capable of is very rarely achieved.
BTW nice self portrait, and thanks for taking the time to reply.

0 upvotes
Jim222
By Jim222 (Oct 30, 2011)

Thanks to Dr Fossum for sharing his knowledge with us. It is a privilege to hear from one of the lead engineers in the field. My favorite takeaway from this talk is the clear indication that more pixels is not always better and that, all else being equal, I would rather have larger light-gathering units on my sensor.

1 upvote
pauly6734
By pauly6734 (Oct 30, 2011)

Mr. Fossum,

You said photon(s) can break a Si=Si double bond? You may have misspoken, because it may be impossible. What do I know? I am just a pill-counting pharmacist. Please advise!

Paul

0 upvotes
Eric Fossum
By Eric Fossum (Oct 30, 2011)

Paul, I did not say double bond, but a single electron can be detached from the bond by a photon with an energy of a bit more than 1 eV (like red, green or blue). Usually we talk about such photon absorption using an energy band model, but the chemical model is sometimes easier to use. I am not sure what you think is impossible, so maybe you are thinking of something different than what I tried to describe.
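The "bit more than 1 eV" figure follows from the photon energy E = hc/wavelength, conveniently E[eV] ~ 1240/wavelength[nm]; silicon's bandgap is about 1.12 eV, so every visible photon carries enough energy. A quick check:

```python
# Photon energy E = hc / wavelength, i.e. E[eV] ~= 1240 / wavelength[nm].
# Silicon's bandgap is ~1.12 eV, so all visible photons clear it.
for name, wavelength_nm in (("red", 650), ("green", 550), ("blue", 450)):
    print(f"{name}: {1240 / wavelength_nm:.2f} eV")
# red: 1.91 eV, green: 2.25 eV, blue: 2.76 eV
```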

0 upvotes
pauly6734
By pauly6734 (Oct 30, 2011)

Dear Professor Fossum,

Please go to your video at exactly 27:50. Here is what you said: " When light comes in, if it has enough energy, it can actually break one of these ( Si=Si ) bond. " Please also refer to your slide.

Paul

0 upvotes
Eric Fossum
By Eric Fossum (Oct 30, 2011)

If you take one electron out of a 2-electron bond that constitutes breaking a covalent bond. A covalent bond by definition involves a pair of electrons.

3 upvotes
pauly6734
By pauly6734 (Oct 31, 2011)

Thanks Professor! I understand the photoelectric effect, E = hν − φ, but Si is non-metallic, and it is practically inert; how can a photon-electron reaction occur? Perhaps you are talking about the metallic oxide for which Si may serve as a substrate? Please advise!

0 upvotes
infosky
By infosky (Oct 31, 2011)

The professor should just admit he misspoke about the covenant bond. The energy of a visible photon can only bring an electron from the valence band to the conduction band. The conduction band is formed due to the periodic potential structure of the Si crystal. The electron can move freely inside the crystal, but cannot escape from the crystal. Therefore, no covalent bond is broken.

1 upvote
Eric Fossum
By Eric Fossum (Oct 31, 2011)

Paul, I am talking about the optical generation of electron-hole pairs within the silicon crystal. It is a little different from the photoelectric effect.

Infosky's response is right and wrong. In the energy band model, an electron is optically excited to the conduction band, leaving behind a hole. The hole, in essence, is a broken bond (the absence of an electron). As it moves, the "broken bond" shifts from atom to atom in the lattice with remarkable mobility.
Thus, the localized point of absorption and the corresponding broken bond exist only momentarily, and then the broken bond moves, with the original break being "healed" by electron motion in the valence band. Besides calling it a "covenant bond", saying that no covalent bond is broken is incorrect when considering the dual nature of electrons in a crystal - both as "waves" and as classical particles.

0 upvotes
infosky
By infosky (Oct 30, 2011)

As an optical scientist, I learned very little from this lecture. It was intended for college students. If you are familiar with sensors, I would advise you to skip the video entirely. You won't learn much.

The speaker mentioned a few "new" ideas, but I would not really think these ideas worth your time. If you are worried that you might miss something, just fast-forward the video to the 50th minute. You don't lose anything by skipping the first 50 minutes. The speaker talked too much about what was on his mind rather than what was really useful to the audience.

1 upvote
Eric Fossum
By Eric Fossum (Oct 30, 2011)

Dear Infosky,
Sorry you were disappointed. If you are a technical expert you should know this material already, and of course it was indeed intended for a general university audience. However, perhaps you, as an optical scientist, would be more interested in an invited talk I gave at the Optical Society of America (OSA) meeting in Toronto this past summer:
http://ericfossum.com/Publications/Papers/2011%20OSA%20QIS%20Concepts%20and%20Challenges.pdf

3 upvotes
Martin Datzinger
By Martin Datzinger (Oct 30, 2011)

Very interesting talk, I enjoyed the outlook on future technology the most! Some questions came to my mind:

1. Regarding the QIS: What will be this paradigm's impact on sensor DR? Shouldn't it basically correlate with jot size and sampling speed? What kind of DR can one expect?

2. Is there any chance of, and research done on, increasing the water-bucket depth (to stick with the football field analogy) of CMOS sensors in order to increase DR? Or will we just have to use larger area for that task? What about logarithmic readout?

3. Regarding RGBZ sensors: To my understanding, the eyes can't see depth; only the brain calculates it out of the parallax information provided by the eye-pair distance (and subject motion / perspective change). Current 3D display technology utilises parallax. This sensor provides true depth data but no parallax information. Can the latter be calculated so that the human visual system can actually make use of the available depth information?

Thank you and kind regards,
Martin

0 upvotes
Eric Fossum
By Eric Fossum (Oct 30, 2011)

Hi Martin,
1. The DR can be quite large - far larger than with conventional CMOS image sensor pixels (see the sketch after this list). But we could do this with slight changes to CMOS image sensors right now if the camera manufacturers wanted to, so technology is not the limiting factor for DR.
2. Full well depth is always a major design goal in any new generation pixel, but as pixels get smaller, it gets more and more difficult to increase the per-area-capacitance to compensate for loss in area.
3. Parallax can be computed from range data, and some 3D TVs do this calculation inside the TV. There are some emerging standards for 3D TV signals but I am not intimately familiar with them.
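For readers curious how a QIS extends DR: in the published QIS concept, each single-bit "jot" fires with probability 1 - e^(-H), where H is the mean number of absorbed photons, so the response rolls off softly near saturation instead of clipping. A sketch of that curve (a summary of the QIS literature, not code from the talk):

```python
# Single-bit QIS jot response per the published concept: the jot
# fires with probability P = 1 - exp(-H), H = mean absorbed photons.
# The soft roll-off near saturation is one source of extended DR.
import math

for H in (0.01, 0.1, 1.0, 3.0, 10.0):
    print(f"H = {H:>5}: P(jot fires) = {1 - math.exp(-H):.4f}")
```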

0 upvotes
pgb
By pgb (Oct 30, 2011)

With QIS, would the period of a group of samples increase the DR? Then the slower the shutter speed, the greater the DR, or does the speed of light negate this in comparison? Computation speed would also influence this, but not so much when compared to shutter speed.

Interesting that increasing DR is already possible for a set pixel size, if they want to.

0 upvotes
aarif
By aarif (Oct 30, 2011)

I guess the only people that didn't enjoy listening to this were the more, more, more megapixel lovers.

2 upvotes
TrojMacReady
By TrojMacReady (Oct 31, 2011)

You realize Dr. Fossum is a proponent of more pixels himself?
DSLRs, for example, are far from reaching diffraction-"limited" pixels.

1 upvote
Lee Jay
By Lee Jay (Oct 29, 2011)

Nice talk. Enjoyed it very much.

The diffraction discussion is a bit misleading, however. It assumes perfect sampling and that the Rayleigh limit is a hard limit. Neither is true.

If you include the effects of the real diffraction MTF=0 cutoff, large pixels versus infinitesimal pixels, a Bayer mask versus monochrome, and the use of AA filters, you find that you can go at least a factor of four or so smaller than the numbers mentioned in the talk.

I've done real-world testing on this, at f/11. The Rayleigh criterion would indicate that 14.8 micron pixels would be all you'd need at f/11. But real-world testing indicates improvement in apparent sharpness right down to about 3.2 micron pixels. The improvement at the end is very, very small, but still there. It goes to zero (visibly) below that.

Thus, for the f/2-f/2.8 optics of cell phones, I wouldn't worry too much about smaller pixels being truly useless until you start to get below 1 micron pixel sizes.

3 upvotes
Eric Fossum
By Eric Fossum (Oct 30, 2011)

I would be very interested in seeing the exact details of this investigation and the results. Also, did you do an analytical comparison? A link or private email would be welcome. Thanks.

2 upvotes
Road Lice
By Road Lice (Oct 29, 2011)

I am not so disturbed that Eric has bestowed new powers of surveillance upon Big Brother as I am by the fact that "The force of marketing is greater than the force of engineering". When shopping for cameras I dread having to sift through hectares of semi-dysfunctional cameras that are really misleading marketing trinkets designed to appeal to the unwashed masses. I would prefer the best engineering available in a camera.

I don't trust camera manufacturers and I suspect they conspire against everyday consumers and photography enthusiasts alike. I would like to know what the actual difference in manufacturing costs is between an 18 megapixel APS-C sensor and an 18 megapixel full frame sensor. Am I being denied an inexpensive full frame camera by the forces of marketing?

6 upvotes
David Fell
By David Fell (Oct 29, 2011)

"I would like to know what the actual difference in manufacturing costs is between an 18 megapixel APS-C sensor and an 18 megapixel Full Frame sensor. Am I being denied an inexpensive Full Frame camera by the forces of marketing?"

No

Go to http://en.wikipedia.org/wiki/Image_sensor_format where it states: "Production costs for a full frame sensor can exceed twenty times the costs of an APS-C sensor. Only about thirty full-frame sensors can be produced on an 8 inches (20 cm) silicon wafer that would fit 112 APS-C sensors, and there is a significant reduction in yield due to the large area for contaminants per component. Additionally, the full frame sensor requires three separate exposures during the photolithography stage, which requires separate masks and quality control steps. The APS-H size was selected since it is the largest that can be imaged with a single mask to help control production costs and manage yields."
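For a feel of where such counts come from, here is a common first-order dies-per-wafer approximation. Real counts depend on scribe lanes, edge exclusion, and defect yield, so it will not reproduce the quoted figures exactly; the die sizes below are nominal sensor dimensions, not true die sizes:

```python
# First-order dies-per-wafer estimate: gross dies minus an edge-loss
# term. A common approximation only; real layouts differ.
import math

def dies_per_wafer(wafer_diameter_mm, die_w_mm, die_h_mm):
    area = die_w_mm * die_h_mm
    gross = math.pi * (wafer_diameter_mm / 2) ** 2 / area
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * area)
    return int(gross - edge_loss)

print(dies_per_wafer(200, 36.0, 24.0))   # full frame: a few dozen
print(dies_per_wafer(200, 22.3, 14.9))   # APS-C: several times as many
# Larger dies also each collect more defects, compounding the cost gap.
```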

1 upvote
Road Lice
By Road Lice (Oct 29, 2011)

With all due respect to Wikipedia, I would like to know what the actual difference in manufacturing cost is for the sensor. Are we talking about $25 or $100 or $250?

2 upvotes
Joe Ogiba
By Joe Ogiba (Oct 29, 2011)

"With all due respect to Wikipedia, I would like to know what the actual difference in manufacturing cost is for the sensor. Are we talking about $25 or $100 or $250?"

The 18 MP 7D costs $1,800 and the new FF 18 MP 1D X is $6,800, so I would think the sensor is more than $250, or 5% of the $5,000 difference.

0 upvotes
Road Lice
By Road Lice (Oct 29, 2011)

And the cost of a 600D with the same sensor as the 7D is $750. However, the price of complete cameras does not tell me anything about the manufacturing costs of their constituent components. Rather than relying on Wikipedia or the suggested retail prices of cameras to speculatively interpolate the costs of sensors I was hoping that someone familiar with sensor manufacturing would spill the beans on what it actually costs to make sensors.

3 upvotes
GS_Mathomhouse
By GS_Mathomhouse (Oct 31, 2011)

And what would one get with the knowledge of sensor manufacturing costs? The right to decide on a reasonable price for a camera? When making such a calculation, be sure to consider the cost of R&D, QC&T, licensing, return on investment (both in physical and intellectual assets), transport, packaging, storage, etc...

0 upvotes
GMaximus
By GMaximus (Oct 31, 2011)

For some reference, 1/4" and 1/3.2" 5 MPix sensors are on the market for about a $7-8 unit price.
http://search.digikey.com/us/en/products/OV05633-C48A/884-1017-ND/2123269?wt.z_cat_cid=Dxn_US_US2011_Catlink

0 upvotes
Road Lice
By Road Lice (Oct 31, 2011)

Question: "And what would one get with the knowledge of sensor manufacturing costs?"

Answer: The deepest, darkest secret of the camera business!

The costs you mention (like R&D, licensing and transportation) are the same for an APS-C sensor and a Full Frame sensor. I want to know what the difference is. I want to know if a camera company is reducing the cost of a camera by $50 by giving me an APS-C sensor instead of a Full Frame sensor.

1 upvote
Josh152
By Josh152 (Oct 31, 2011)

I would love to know how much it really costs in total to make these cameras and how much of the pricing is purely market positioning and greed. I have a hard time believing it is necessary to charge $2500+ for a camera like the 5D Mark II in order to make a profit. It is more understandable with a camera like the 1D X, which has a completely new sensor and autofocus system with lots of R&D behind it. But the 5D Mark II used mostly existing technology and components. Heck, some of the components were already previous-generation technology at the time the camera was released.

1 upvote
Higuel
By Higuel (Nov 1, 2011)

Joe Ogiba: it is enough to compare the prices of the cheapest Sony FF versus the Nikon D3X, which uses the SAME SENSOR, to see how they cheat on the prices!!

1 upvote
Josh152
By Josh152 (Nov 1, 2011)

I suspect that the actual difference in manufacturing cost between a FF and a crop sensor is significantly less than some would have you believe. It is probable that the only reason there aren't FF cameras for $1000 USD or less is that the camera manufacturers want to be able to charge a premium for full frame cameras. I bet if you really knew how much it costs to produce, say, the D3X, you would find that there is a very high markup. The only explanation for Sony and other manufacturers being able to sell cameras with the same sensor for significantly less money is that the sensor is not the major cost of producing the camera. If it were, how could Canon sell Rebels with the same sensor for over $1,000 less than the 7D, or the 5D Mark II for $2,500 when the 1Ds Mark III has an SRP of $6,999.99? It is obvious the FF cameras are being kept at an artificially high price. It is also obvious that the sensor is only a small fraction of the cost of producing the camera.

1 upvote
Neoasphalt
By Neoasphalt (Oct 29, 2011)

When will there be the first HD-resolution (2 MP) sensor (1/2.3" or bigger) made with modern technologies for social-network users who don't want to print their pictures, but instead want better quality in low-light conditions, smaller-capacity memory cards, faster processing, post-processing and upload times, to simply watch their pictures on a high-definition screen, and to pay less for such a sensor (camera)?
Or will that never be made because it is dangerous to the worldwide megapixel race and the megapixel myth?

5 upvotes
zodiacfml
By zodiacfml (Oct 30, 2011)

I'm all for this idea. The MP race is only making it difficult for these consumers, including me. I want fast shutter speeds and the least possible noise in low light. Huge-resolution sensors have their place in good light.

1 upvote
Josh152
By Josh152 (Nov 2, 2011)

The thing with megapixels is that how many you need really depends on what you want the camera to do. For example, if all you want to do is post pics on the web, then any halfway decent camera will do. But if you want to make large fine-art prints, then you really do need as many megapixels as you can get. There is no "megapixel myth", just a lack of understanding of what more megapixels actually give you and what the tradeoffs are.

0 upvotes
lylejk
By lylejk (Oct 29, 2011)

Well, I definitely enjoyed the presentation. Glad you shared it with us. I look forward to the plenoptic and 3D imaging aspects of digital photography. As for the social issues that will arise due to this technology, well, anything can be used for both good and evil, so I really don't have an opinion to share here w.r.t. that. :)

1 upvote
Jon Stern
By Jon Stern (Oct 29, 2011)

While credit should be given where it's due, Eric Fossum did NOT invent CMOS image sensors.

The invention of passive pixel sensors dates back to the late 1960s (it's generally credited to Peter Noble in 1968). As for commercialization, VVL had the first commercial CMOS sensor on the market around 1993 (if I recall correctly). This was a passive pixel device.

Eric's work at JPL was on active pixel sensors. This was a major improvement that allows the high-quality CMOS sensors we enjoy today. However, there was work in this area that predates Eric's, namely by Tsutomu Nakamura at Olympus, and there's an even earlier claim by Hitachi.

Where Eric does have a good claim is to have invented the sensor that uses intra-pixel charge transfer and achieves real correlated double sampling, which is a big deal when it comes to noise reduction.

5 upvotes
Eric Fossum
By Eric Fossum (Oct 29, 2011)

Jon, you are almost exactly right. However, nearly 100% of today's CMOS image sensors fall into your last paragraph, and all are referred to just as CMOS image sensors. In the mid-90s I published a review paper that refers to all the historical works you mention. Also note that my friend Tsutomu Nakamura coined the phrase "active pixel". I just made it well known.

In the land of passive pixels, much credit is due to Gene Weckler. The technology in those days was MOS, not CMOS.

11 upvotes
Jon Stern
By Jon Stern (Oct 29, 2011)

Eric, you've always given credit to those earlier works. I wasn't trying to imply otherwise.

Yes, without true CDS, CMOS sensors wouldn't be where they are today. This was vitally important work.

I wasn't attempting to take away from your contribution. Rather I was trying to show that there have been many important contributions made.

I believe that science and engineering badly represent themselves to young people by perpetuating the myth of the lone inventor. We are social animals, and young people are likely to be turned off the idea of going into science if they think it means working as a recluse.

Science and engineering (as you well know) are almost always highly collaborative affairs, rich in social interaction. If we are going to overturn the negative view that young people in The West have towards science and engineering as a career, this needs to be emphasized.

BTW, you always had a reference to an early charge transfer paper in your early papers. Was that by Meindl?

1 upvote
lbjack
By lbjack (Oct 31, 2011)

And let's not forget Carver Mead, who also had a little something to do with developing CMOS sensors.

0 upvotes
Eric Fossum
By Eric Fossum (Oct 31, 2011)

Carver Mead is a brilliant person and someone I like and respect a lot. You are probably referring to the 3-layer detector from Foveon. I think the primary inventor on that was Dick Merrill, formerly of National Semiconductor, and sadly now deceased. There was also some significant prior art as well (see, for example, the references in:
http://ericfossum.com/Publications/Papers/2011%20IISW%20Two%20Layer%20Photodetector.pdf )
Frankly, I am not sure of Carver's role in the development of the Foveon X3 sensor. But I am sure he had a little something to do with developing the technology and product, as you say.

1 upvote
vFunct
By vFunct (Oct 29, 2011)

This is great. My first job out of college was to design CMOS image sensor arrays at Intel around 1996, largely based on his designs. Nice to see him talk about upcoming technologies as well.

The societal implications of technology are a concern, but as engineers, we just build first, because that's what we do, and ask questions about it later. Really, in 100 years these digital image sensors are going to be everywhere - on walls, flexible fabric, packaging, everything - and they are going to be networked. We're not even close to their potential applications at this point.

Meanwhile, back at Intel, we added a camera-shutter noise to make people more comfortable with using digital cameras, as back then consumers didn't really know about the coming digital camera revolution (digital cameras were largely only used by a few professionals).

0 upvotes
DRG
By DRG (Oct 28, 2011)

Dr. Fossum also participates in the forums here, so it's possible to find him and ask any questions you may have.

0 upvotes
thielges
By thielges (Oct 28, 2011)

I think that the title should read "CMOS *Image Sensor* Inventor..." CMOS itself (as a platform for digital logic) was invented back in the 1960s by Frank Wanlass.

2 upvotes
Daniel Browning
By Daniel Browning (Oct 28, 2011)

I think the title is just fine how it is.

Most readers who have heard of CCD/CMOS will assume the title refers to image sensors, not generic CMOS. And anyone who reads the first sentence will have the matter clarified for them.

Sometimes brevity trumps accuracy. The title also says "discusses digital image sensors", but technically, that isn't accurate either. He discusses much more than just image sensors (just as the "CMOS" in the title refers to much more than just "CMOS"). But it doesn't make sense to list *all* the things he talks about in the title, so some sort of short summary is made.

4 upvotes
tmpennyjr
By tmpennyjr (Oct 29, 2011)

Six of one, half a dozen of the other...

0 upvotes
Ron Parr
By Ron Parr (Oct 29, 2011)

He also didn't invent the CMOS sensor. He invented the active-pixel CMOS sensor. CMOS sensors are attributed to Weckler, 1967. Fossum himself cites Weckler for this in his papers on the topic.

0 upvotes
wil13jak
By wil13jak (Oct 29, 2011)

Wow, he said potatoes, she said... and so on. Geez.

2 upvotes
Ron Parr
By Ron Parr (Oct 29, 2011)

Words have meaning and the truth matters - perhaps not to everybody, but to people who understand what the words mean, the truth often does matter. I suspect it matters to Gene Weckler. Perhaps some day you will invent something - or perhaps you already have. In any case, I hope that you will get proper credit for it.

5 upvotes
SpankyAsami rillo
By SpankyAsami rillo (Oct 29, 2011)

Who are you people? Get over it.

0 upvotes
eye-spy
By eye-spy (Oct 29, 2011)

Correct attribution is an important issue. If you are writing articles that are read by millions of people, then you have a responsibility to include factual data; otherwise the article is no better than the kind of "beer talk" you can hear in every bar around the country.

If you repeat a lie often enough, it becomes the truth.

0 upvotes
dosdan
By dosdan (Oct 29, 2011)

The RGBZ technology is interesting, as is QIS. Besides gesture control, I wonder if the Z component could be used in some way in webcams, perhaps to give a 3-D effect. Samsung's vision seems to be 3-D everywhere, not just entertainment.

Dan

0 upvotes
Eric Fossum
By Eric Fossum (Oct 29, 2011)

Ron, just to be clear, Gene figured out how to integrate optical signal on a PN junction and to read that out with a switch, now called a passive pixel. The technology was MOS, not CMOS, for those that care about wording. Gene is a friend and still very active in the image sensor community. He was very supportive of our work at JPL.

There are many people that have contributed to the state of image sensor technology today. See for example the first few pages of this presentation:
http://ericfossum.com/Presentations/2011%2025th%20Anniversary%20of%20IISW%20%20Reflections%20on%20Directions.pdf

And since the invention of the CMOS active pixel image sensor with intrapixel charge transfer many many engineers around the world have pushed this technology to its high level of performance that we see today.

0 upvotes
Ron Parr
By Ron Parr (Oct 29, 2011)

Eric, I pointed out above that you cited Gene Weckler. You have been clear about Weckler's contribution in your writing. You give Weckler credit for the basic passive pixel which begat the early passive pixel CMOS sensors. You have also been clear about the history of active pixel designs before this idea was applied to CMOS. I have no problem with YOUR scholarship and credit attribution on this.

I don't think it's correct for dpreview to say that you invented the CMOS sensor, or the CMOS image sensor, and I doubt you think that's correct either.

0 upvotes
Eric Fossum
By Eric Fossum (Oct 29, 2011)

Ron, as I said above to Jon Stern, when people say CMOS image sensor today they are almost always referring to a CMOS active pixel image sensor with intra-pixel charge transfer, yada yada yada, since the earlier incarnations did not work so well. I am pretty comfortable with the shorthand title, and it is OK with me if you are not and prefer the longer title for clarity. To me they are one and the same these days.

0 upvotes
Ron Parr
By Ron Parr (Oct 29, 2011)

Eric - I have elsewhere referred to you as the inventor of the "modern CMOS sensor" (with image implicit). I think this is not overly long or technical, yet still fair both to you and to those upon whose shoulders you have stood.

1 upvote
Eric Fossum
By Eric Fossum (Oct 29, 2011)

Thanks Ron. I will have to look you up when I finally get around to visiting Dave Brady.

0 upvotes
Jink
By Jink (Oct 30, 2011)

I wonder who created Eric Fossum (and the rest of the lineage). Perhaps they should get credit too along with all his school teachers and first girl friend etc. And perhaps while we are at it we should thank the Earth and the Sun for which none of this CMOS stuff would be possible either. And don't forget the countless animals that gave their lives to sustain Eric. It's also worth noting that in a few thousand years all of this will very likely be completely lost and forgotten in the never-ending never-beginning interdependent crucible of arising and subsiding phenomenon.

Just saying. ;)

2 upvotes
Canon20Duser
By Canon20Duser (Nov 1, 2011)

The lecture was terrific and I have memorized several points you made to impress my friends. I work in aviation journalism, taking videos of propeller aircraft, and the older CCD sensors have fewer rolling-shutter problems than CMOS, correct? I am talking about shutters, and that is not your expertise, I realize.

0 upvotes
Eric Fossum
By Eric Fossum (Nov 2, 2011)

I do know something about electronic shutters. Generally we talk about either rolling shutters or global shutters. Full frame CCDs and frame transfer CCDs have some shutter issues like smear. Interline CCDs have less smear problem. CMOS sensors have no smear. But rolling shutter devices can have artifacts under certain conditions, and global shutter CMOS sensors usually have larger pixels and higher read noise. Recently some lower noise global shutter CMOS image sensors with small pixels have been made for R&D purposes and probably there will be commercial devices soon.
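To illustrate the rolling-shutter artifact Canon20Duser is asking about, here is a toy skew calculation. All numbers are invented for illustration; no specific sensor is implied:

```python
# Rolling shutter: each row starts exposing one line-time after the
# previous one, so the top and bottom of the frame sample the scene
# at different instants -- hence bent propeller blades.
rows = 1080
line_time_us = 15                     # per-row readout time, invented
skew_ms = rows * line_time_us / 1000
print(f"top-to-bottom skew: {skew_ms:.1f} ms")
# A propeller at 2400 rpm turns ~14.4 degrees per ms, so ~16 ms of
# skew spans roughly two-thirds of a revolution across the frame.
```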

1 upvote
Canon20Duser
By Canon20Duser (Nov 6, 2011)

Thanks, Eric. I didn't realize CCDs can have shutter issues as well. I'll look forward to those new CMOS global shutters.

0 upvotes