Cheap naked chips snap a perfect picture

The journalist who wrote this isn't that clued up...
  • This is not a new idea; people have been taking the tops off memory chips and using them as crude image sensors for about 25 years (or at least, that's when I first saw a report in the electronics magazine 'Wireless World').
  • There's no intrinsic reason why a sensor chip of any size based on a memory design would be any cheaper than one based on conventional CMOS (in fact, given that memory derived sensors have been around for 25 years, if the idea was really any good, that's how all sensors would be made).
  • The chip described here isn't going to produce great-quality output, because it's not sufficiently oversampled to allow a one-bit 'digitiser' to produce the kind of DR we are used to. They report oversampling by a factor of 100. This will give about 4.32 bits per effective sample, across roughly 10M effective samples - not quite the quality we're used to.
  • You never really know what was actually said to the journalist, but the statement 'the memory cell will always be 100 times smaller than CMOS sensor cells; it is bound to be that way because of the sheer number of signal-conditioning transistors the CMOS sensor needs around each pixel' is nonsense. CMOS active pixel sensors usually have four transistors per pixel; by sharing circuitry between pixels, it's not hard to get that down to an effective count between one and two. A memory cell has one - that's not two orders of magnitude different.
  • The idea of an oversampled sensor is a good one; it is being pursued by many researchers and has been discussed extensively on these forums.
Just to be clear, I'm not saying that Prof Charbon is talking nonsense, simply that the New Scientist article is.
 
The journalist who wrote this isn't that clued up...
  • This is not a new idea; people have been taking the tops off memory chips and using them as crude image sensors for about 25 years (or at least, that's when I first saw a report in the electronics magazine 'Wireless World').
If you read the background material for this week's Nobel Prize for Physics, you'll see that the CCD concept from 1969 was originally targeted as an alternative to magnetic bubble memory: http://nobelprize.org/nobel_prizes/physics/laureates/2009/phyadv09.pdf
  • The idea of an oversampled sensor is a good one; it is being pursued by many researchers and has been discussed extensively on these forums.
Here is a discussion around a paper Eric Fossum published in 2005:
http://forums.dpreview.com/forums/read.asp?forum=1000&message=20837619
Just to be clear, I'm not saying that Prof Charbon is talking nonsense, simply that the New Scientist article is.
A fair summary, in my opinion.
--
Alan Robinson
 
I think perhaps the original statement meant that the new sensor would save cost by not needing the ADC portion of a traditional sensor.

OTOH, the DR issue seems to indicate you'd need a lot of data to process - around 75 gigapixels?

--
EOS 50D, 20D, 10D, 630, A-1, SD1000
-- Please remove the Quote option!
-- Why can't you edit more than once???
-- How about switching to real forum software?
 
I can't even begin to count the flaws in that idea...

Color and resolution

Sure, you can make memory cells so small that you can put a billion of them on a point-and-shoot camera, but then the cells end up smaller than a wavelength of visible light. That's nice if you want a sensor that only sees X-rays, but it kind of sucks for visible light. There's a fundamental resolution limit: as soon as you go below 700nm on the cell size, there's pretty much no way to make color filtering work. That's:
  • 1.2 gigapixels on full frame
  • 500mp on APS
  • 130mp on APS
Even if you are content with B&W images, or lay huge color filters down that each cover hundreds of pixels, by the time a cell hits a half wavelength, sensitivity is going to pot.

Dynamic range

Now, there's no real way to provide D/A converters (yes, not A/D) on a per-pixel basis, so we're talking pure decimation, not a delta-sigma approach. That means that to get an effective dynamic range that approaches a modern CCD or CMOS sensor, you really are looking at super pixels that encompass thousands (not hundreds) of pixels. The overall resolution is going to be insanely low: a gigapixel sensor delivers a megapixel image.
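To put rough numbers on that (a toy Python/numpy sketch of my own, with made-up dimensions and light levels): each output pixel is just the count of flipped one-bit cells in its block, so a gigapixel-class array decimated 4096:1 ends up around a quarter of a megapixel.

```python
import numpy as np

rng = np.random.default_rng(0)

# Scaled-down stand-in for a binary cell array (a real one would be ~1e9 cells).
jot_rows, jot_cols = 4096, 4096      # 16.8M one-bit cells in this toy example
block = 64                           # 64 x 64 = 4096 cells per output "super pixel"

# Simulate an exposure: each cell flips with a probability set by the light level.
jots = rng.random((jot_rows, jot_cols)) < 0.3

# Pure decimation: an output pixel is just the count of flipped cells in its block.
counts = jots.reshape(jot_rows // block, block,
                      jot_cols // block, block).sum(axis=(1, 3))

print(counts.shape)                  # (64, 64) output pixels
print(np.log2(block * block + 1))    # ~12.0 bits of count range per output pixel

# Same ratio at full scale: a 1-gigapixel cell array decimated 4096:1
# delivers only ~0.24 megapixels of output.
print(1e9 / (block * block) / 1e6, "MP")
```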

--
Rahon Klavanian 1912-2008.

Armenian genocide survivor, amazing cook, scrabble master, and loving grandmother. You will be missed.

Ciao! Joseph

http://www.swissarmyfork.com
 
Two decades ago I invented an imaging radar using 1-bit processors. If you use a lot of them then you can do unusual things. I referred to it as a 'memory-less processor' and others thought of it as a 'processing memory'. lol!

Record the information, and then reorganize the memory, based on a 'program', to get the image.

For example, take a hologram and cut it in fourths. The hologram image is on all four parts, just at a lower resolution. Cut each piece in fourths again and the image is still on each piece. You can keep cutting until wavelength effects dominate.

I can envision this, using coherent light, as an industrial ultra high resolution imaging system.

--
  • Rick
 
Sure, you can make memory cells so small that you can put a billion of them on a point and shoot camera, but then the cells end up smaller than a wavelength of visible light.
This doesn't matter. The goal is to have a very large number of sensels that register either a photon detection or none. By definition, a photon is absorbed either completely or not at all by a given absorption event. The fact that the sensels are much smaller than the photon wavelength simply means that the probability of absorption of any given photon is spread out over a large number of sensels. Yet, only one will ultimately absorb it. As a result, the detector doesn't improve on resolution beyond that imposed by wavelength, but nonetheless creates a photon distribution map from which an image can be reconstructed.
Now, there's no real way to provide D/A converters (yes, not A/D) on a per pixel basis
This is the whole point. The image is spatially oversampled, with each output pixel ultimately determined from many on/off sensels on the chip.

Film works the same way (as does the retina). Each grain is binary, but the number of grains that flip state over an area much larger than that of a single grain determines the overall density of the resulting slide or negative.

The 100 sensels per output pixel may not be enough to produce a high quality image comparable to existing CCD or CMOS sensors inside even the least expensive cameras, but the concept can be scaled.

David
 
I should further add that this concept falls right in line with John Sheehy's discussions (with which I agree) about the value of increasing pixel count in the right way. The ultimate conceptual limit is an array of sensels so dense that each one rarely absorbs more than a single photon. Then, the resulting signal consists of just the distribution map of all the photons that reached the sensor ... a snapshot of statistical reality (statistical since, at the sensel level, the same shot taken over and over again would yield different results). The output of any lower-pixel-count, more conventional sensor could be derived from this information.

In this same ultimate limit, there would be more than enough sensels to allow for accurate color via a color filter array.
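Here's a small numerical sketch (my own, illustrative only) of that limiting case: jots so dense that each sees, on average, a small fraction of a photon, with the exposure recovered purely from the fraction of flipped cells.

```python
import numpy as np

rng = np.random.default_rng(1)

jots_per_pixel = 4096          # binary cells contributing to one output pixel
mean_photons = 800             # true exposure for this output pixel

# Each jot sees a Poisson-distributed photon count but records only hit / no hit.
# Repeating the shot gives a different flip map each time - the "statistical reality".
lam = mean_photons / jots_per_pixel                 # ~0.2 photons per jot
hits = rng.poisson(lam, size=jots_per_pixel) > 0

# Invert P(hit) = 1 - exp(-lam) to estimate the exposure from the distribution map.
estimate = -jots_per_pixel * np.log(1 - hits.mean())
print(hits.sum(), "jots flipped ->", round(estimate), "photons estimated vs", mean_photons)
```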

David
 
Sure, you can make memory cells so small that you can put a billion of them on a point and shoot camera, but then the cells end up smaller than a wavelength of visible light.
This doesn't matter. The goal is to have a very large number of sensels that register either a photon detection or none. By definition, a photon is absorbed either completely or not at all by a given absorption event. The fact that the sensels are much smaller than the photon wavelength simply means that the probability of absorption of any given photon is spread out over a large number of sensels.
The fact that you have some sort of an aperture smaller than a wavelength of light means that the photon (a wave phenomenon) bounces right off it.
Yet, only one will ultimately absorb it. As a result, the detector doesn't improve on resolution beyond that imposed by wavelength, but nonetheless creates a photon distribution map from which an image can be reconstructed.
Now, there's no real way to provide D/A converters (yes, not A/D) on a per pixel basis
This is the whole point. The image is spatially oversampled, with each output pixel ultimately determined from many on/off sensels on the chip.
The point you're neglecting is that it's insufficiently spatially oversampled. The reason I mentioned having D/A converters is so that some noise shaping could be used. Without that, your oversampling literally has to be on the order of your desired dynamic range. Want something resembling 12 bits (about 4000:1)? You need to oversample by at least 4000 cells (and actually, 8000 works better).
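The arithmetic behind that (a quick sketch of mine, counting levels only and ignoring photon shot noise): with pure decimation, N one-bit cells can only distinguish N+1 count levels per output pixel, so the oversampling ratio has to be roughly the dynamic range you want.

```python
import math

# Levels and bits available from N one-bit cells per output pixel (pure decimation).
for n in (100, 1000, 4096, 8192):
    print(f"{n:5d} cells -> {n + 1:5d} levels = {math.log2(n + 1):4.1f} bits")
```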
Film works the same way (as does the retina). Each grain is binary,
Actually, it isn't, at least not in the sense you mean. Grains are huge, 1-2um. If they were laid out in a two-dimensional structure, you wouldn't have enough resolution for any useful photography in smaller formats. A 1 micron grain would yield 864 binary megapixels, and that means you could decimate to around 10 bits and have less than 1MP effective resolution.

That's why, instead of a sparse billion grains in a two-dimensional structure, you have a lot more, "floating" in a three-dimensional structure, and a photon that misses one grain at a spatial location can react with another grain at the same location and a different depth. So, send 10 photons down the same "course", and you end up with a larger "blob" at that location.


but the number of grains that flip state over an area much larger than that of a single grain determines the overall density of the resulting slide or negative.

The 100 sensels per output pixel may not be enough to produce a high quality image comparable to existing CCD or CMOS sensors inside even the least expensive cameras, but the concept can be scaled.
That's what I keep trying to point out. It can't be scaled. Make the sensitive elements too small, and they won't respond to visible light. Make them large enough to work, and you can't have enough resolution. Without three-dimensional fabrication to create a significant number of layers of detectors, this doesn't work, regardless of whether your detectors are photochemical or electro-optical.

--
Rahon Klavanian 1912-2008.

Armenian genocide survivor, amazing cook, scrabble master, and loving grandmother. You will be missed.

Ciao! Joseph

http://www.swissarmyfork.com
 
The fact that you have some sort of an aperture smaller than a wavelength of light means that the photon (a wave phenomenon) bounces right off it.
Actually, the photon has to be absorbed somewhere. The point of a deep submicron detector is to collect the photoelectron as close as possible to where it was absorbed.
Actually, it isn't, at least not in the sense you mean. Grains are huge, 1-2um. If they were laid out in a two-dimensional structure, you wouldn't have enough resolution for any useful photography in smaller formats. A 1 micron grain would yield 864 binary megapixels, and that means you could decimate to around 10 bits and have less than 1MP effective resolution.
Joe, pls. take a look at my paper cited already in this thread, and let's start from there. I await your comment.
 
The fact that you have some sort of an aperture smaller than a wavelength of light means that the photon (a wave phenomenon) bounces right off it.
Actually, the photon has to be absorbed somewhere. The point of a deep submicron detector is to collect the photoelectron as close as possible to where it was absorbed.
Seems reasonable. The wave function of a photon is a probability distribution. If the wavelength is smaller than a pixel, then the chance of a photon 'bullet' hitting a pixel squarely in the middle and being absorbed by that pixel is high. In the case of a pixel smaller than a wavelength, the chance of that photon being absorbed by surrounding pixels is relatively higher. Simply, you cease to be able to locate the photon to a single pixel with any degree of certainty. If Joe's argument were correct, then your medium-wave radio would need an antenna hundreds of metres long - a little impractical.
Actually, it isn't, at least not in the sense you mean. Grains are huge, 1-2um. If they were laid out in a two-dimensional structure, you wouldn't have enough resolution for any useful photography in smaller formats. A 1 micron grain would yield 864 binary megapixels, and that means you could decimate to around 10 bits and have less than 1MP effective resolution.
Joe, pls. take a look at my paper cited already in this thread, and let's start from there. I await your comment.
We'll await Joe's comment; my comment would be that I think Joe is right about the number of equal-size jots needed to give a decent DR. Film has several things going for it. One is, as Joe suggests, the 3-D structure, which means that there are many more grains in the emulsion than their size would suggest. The other is that the grains are of varying sizes, which gives varying chemical 'gain' within the sensor, which means that the film can provide more apparent DR than a fixed grain size could (similar to the Fuji extended-DR idea). The most extreme fine-grain emulsions, with very consistent grain size, were (are) copying films, Technical Pan and so on. When used for pictorial purposes these had very limited tonal range. Other films had multiple-layer emulsions, with different grain sizes in the different layers, to give extended tonal range. I suspect that if the jots had a range of gains, then the apparent tonal range of the sensor would be increased.
The problem, of course, would be that the effective QE would be reduced.
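Here's a toy simulation (my own construction, loosely following the varying-grain-size idea): mixing jots that need roughly 1, 4 or 16 photon-equivalents to flip keeps the array responding over a much wider exposure range than a single threshold would, while the 'slow' jots waste most of their photons - the effective-QE cost mentioned above.

```python
import numpy as np

rng = np.random.default_rng(2)

# ~4096 jots split into three 'grain sizes' (flip thresholds of 1, 4 and 16).
thresholds = np.repeat([1, 4, 16], 1365)

def fraction_flipped(photons_per_jot, trials=200):
    # Fraction of jots that flip at a given exposure, averaged over repeated shots.
    hits = rng.poisson(photons_per_jot, size=(trials, thresholds.size)) >= thresholds
    return hits.mean()

for exposure in (0.05, 0.2, 1.0, 5.0, 20.0):
    print(f"{exposure:5.2f} photons/jot -> {fraction_flipped(exposure):.3f} flipped")

# A single threshold of 1 would saturate by ~3 photons/jot (1 - e^-3 ~ 0.95);
# the mixed thresholds still give tonal separation well beyond that.
```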
 
Hi Roland....I would not say implemented commercially at all. I think it was a graduate student. The image he studied was synthetic. Aside from the computer science side of things, I am not sure there was any real imaging experiment. I have seen the written paper, btw. The mathematical algorithm/analysis part is the meat of the paper. The only potato, or spud, is the SPAD which is mentioned but is not practical yet for this kind of imaging. There was no mention of prior art.
 
We'll await Joe's comment; my comment would be that I think Joe is right about the number of equal-size jots needed to give a decent DR. Film has several things going for it. One is, as Joe suggests, the 3-D structure, which means that there are many more grains in the emulsion than their size would suggest. The other is that the grains are of varying sizes, which gives varying chemical 'gain' within the sensor, which means that the film can provide more apparent DR than a fixed grain size could (similar to the Fuji extended-DR idea). The most extreme fine-grain emulsions, with very consistent grain size, were (are) copying films, Technical Pan and so on. When used for pictorial purposes these had very limited tonal range. Other films had multiple-layer emulsions, with different grain sizes in the different layers, to give extended tonal range. I suspect that if the jots had a range of gains, then the apparent tonal range of the sensor would be increased.
The problem, of course, would be that the effective QE would be reduced.
I can't say I know much about film other than the basics. One interesting thing about my proposed digital film sensor (which someday I hope to build) is that the grain size need not be fixed. It can be adjusted on the fly, or adjusted from frame-plane to frame-plane.

Once you catch the individual photon strikes as accurately as possible, almost everything else is computational. I think it has sort of mind-blowing possibilities, but maybe that is just me. We still need to be able to build single-electron jots that are 0.5 um or smaller on a side. No easy task. Don't throw out your CMOS image sensor DSLR yet.
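A small sketch (mine, reusing the block-sum decimation shown earlier in the thread) of the 'everything else is computational' point: the effective grain size is just a bin size chosen after the binary capture, so it can differ from frame to frame.

```python
import numpy as np

rng = np.random.default_rng(3)

# One captured binary flip map (toy size); in practice, the full jot array.
jots = rng.random((1024, 1024)) < 0.25

def develop(flips, grain):
    # 'Develop' the same capture at a chosen grain (bin) size.
    h, w = flips.shape
    trimmed = flips[:h - h % grain, :w - w % grain]
    return trimmed.reshape(h // grain, grain, w // grain, grain).sum(axis=(1, 3))

for grain in (8, 16, 64):
    img = develop(jots, grain)
    print(f"grain {grain:3d}: output {img.shape}, up to {grain * grain} levels per pixel")
```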
 
I can't say I know much about film other than the basics. One interesting thing about my proposed digital film sensor (which someday I hope to build) is that the grain size need not be fixed. It can be adjusted on the fly, or adjusted from frame-plane to frame-plane.
There is a similar effect with film. Since the developer will be locally oxidised when it reduces a grain, it becomes less able to reduce neighbouring grains. By changing the activity of the developer, the degree of grain clumping, and therefore the visible grain, can be changed. Hence fine-grain and high-speed developers. I'd view your idea as being more similar to that, unless you adopted a randomised region size to simulate the different grain sizes.
Once you catch the individual photon strikes as accurately as possible, almost everything else is computational.
Of course, decreasing the pixel size below the wavelength of the light doesn't significantly increase the accuracy of pixel location. One could perhaps do the equivalent with stacked pixels, Foveon style.
I think it has sort of mind-blowing possibilities, but maybe that is just me. We still need to be able to build single electron jots that are 0.5 um or smaller on side. No easy task. Don't throw out your CMOS image sensor DSLR yet.
I wonder if you could do something with a flash ROM style floating gate structure? It lays out well and the readout is pretty minimal.
Thinks: is this the idea reported in the original article?
 
pixelpepper wrote:

Of course, decreasing the pixel size below the wavelength of the light doesn't significantly increase the accuracy of pixel location.
I wonder if you meant photoelectron generation location. And, it does if they are packed together. I think you are mixing up a few ideas here. My 2005 work was targeted at sub-diffraction-limit (SDL) pixel sizes, so if that is what you are worried about, worry no longer.

Anyway, the article is here, as mentioned elsewhere in this same thread by someone else. It is conceptual so I think it would be an easy read for this community and has also been discussed previously. But, I am still happy to discuss it some more.

http://www.ericfossum.com/Publications/Papers/Gigapixel%20Digital%20Film%20Sensor%20Proposal.pdf
 
Actually you would not need a deep 3-D structure. Note that one can detect the Single Event Upset flip of the cell (jot? sensel?) and let it reset. The exposure time for a camera is forever compared with the speed of a memory cell.
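A quick, idealised sketch of that scheme (my own numbers): if the array is scanned and reset many times within one exposure, each jot effectively becomes a flip counter, so a single layer can record far more than one photon per site.

```python
import numpy as np

rng = np.random.default_rng(4)

jots = 4096                 # jots contributing to one output pixel
readouts = 64               # scans (with reset) within a single exposure
photons_per_jot = 3.0       # total over the exposure - would saturate a one-shot 1-bit jot

# Split the exposure across the readouts; each scan records flip / no flip, then resets.
per_scan = photons_per_jot / readouts
flips = (rng.poisson(per_scan, size=(readouts, jots)) > 0).sum(axis=0)

print("mean flips per jot:", round(flips.mean(), 2))   # ~3, close to the true exposure
print("maximum countable per jot:", readouts)          # ceiling set by the number of scans
```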

--
  • Rick
 
pixelpepper wrote:

Of course, decreasing the pixel size below the wavelength of the light doesn't significantly increase the accuracy of pixel location.
I wonder if you meant photoelectron generation location.
I meant photon location. What I was getting at is that if you take a quantum-mechanical view, then the classical EM wave approximates to the QM wave function, which is a probability distribution of observing the photon over space. If you try to constrain the precision of observation to a scale significantly smaller than the wavelength, then the probability of observing the photon (i.e. it releasing a photoelectron) becomes relatively smaller. What this means is that as the sensels reduce far below the wavelength, their ability to locate the photon reduces.
And, it does if they are packed together.
Not that I can see, unless my understanding of QM is wrong (which it could be, it's quite rusty).
I think you are mixing up a few ideas here. My 2005 work was targeted at sub-diffraction limit (SDL) pixel sizes so if that is what you are worried about, worry no longer.
I'm not worried about diffraction. The effect I'm talking about is a QM effect, not a wave effect. Actually, I'm not 'worried' about anything; I'm just saying that there is no point in having deep sub-wavelength pixels if the intention is to locate the photons more accurately. If the intention is to allow counting of more than one photon at a location, then there may be alternative solutions, such as multilayer sensels or the 'counting' sensel suggested by Mojo151 (although I think there are problems with his approach, in that the array would have to be read out multiple times within an exposure).
Anyway, the article is here, as mentioned elsewhere in this same thread by someone else. It is conceptual so I think it would be an easy read for this community and has also been discussed previously. But, I am still happy to discuss it some more.

http://www.ericfossum.com/Publications/Papers/Gigapixel%20Digital%20Film%20Sensor%20Proposal.pdf
Thanks Eric, I did read it (in fact I had read it before) and agree that it is an interesting concept. It is, as you say, conceptual, so it doesn't have a detailed discussion of the physics and mathematics. At present, there are a few aspects that I haven't convinced myself about:
  • You say "like film, we expect this jot-based DFS to exhibit D-log H exposure characteristics since the physics and mathematics is nominally identical to film. That is, the dynamic range could be large and the exposure characteristics more appealing for photographic purposes." I'm realising that I don't know enough about the physics and chemistry of films to judge whether the digital film is indeed identical, but what I can't see is why a patch of any given number of jots would provide any more DR than simple oversampling theory would suggest, just because of the mode of development.
  • You say "Due to the single-bit nature of the “analog-to-digital” conversion resolution, high row-readout rates can be achieved,
allowing scanning of a gigapixel sensor with perhaps 50,000 rows in milliseconds and enabling multiple readouts per exposure." What I can't see here is how the data density of the sensor is actually reduced with respect to a sensor with larger multi-bit pixels - so far as I can see, as an oversampled one bit sensor, we'll have to shift more data bits off the sensor than we would with multi-bit pixels. I must admit, when I originally saw the slides of you presentation of this concept, I assumed that 'development' happened on the sensor chip , with some simple communication between adjacent sensels. If this is not the case, the simple number of bits that need to be shifted off the sensor seems to me likely to be a problem (without having thought about it too hard, or doing the sums).
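For what it's worth, a rough attempt at those sums (my own assumed numbers where the paper doesn't give them - readout count, exposure time and the comparison sensor are all guesses):

```python
# Back-of-envelope off-chip data volume for a one-bit gigajot sensor.
jots       = 1e9      # 1 gigajot, 1 bit per jot per readout
readouts   = 16       # assumed number of scans per exposure
exposure_s = 0.01     # assumed 1/100 s exposure

bits_per_exposure = jots * readouts
rate_bps          = bits_per_exposure / exposure_s

conventional_bits = 24e6 * 14    # a 24 MP, 14-bit conventional readout for comparison
print(bits_per_exposure / 8 / 1e9, "GB per exposure")        # 2.0 GB
print(rate_bps / 1e9, "Gbit/s off-chip")                     # 1600 Gbit/s
print(bits_per_exposure / conventional_bits, "x the bits of a 24 MP 14-bit frame")
```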
 
