If you read the background material for this week's Nobel Prize for Physics, the CCD concept from 1969 was originally targeted as an alternative to magnetic bubble memory: http://nobelprize.org/nobel_prizes/physics/laureates/2009/phyadv09.pdf

The journalist who wrote this isn't that clued up...
- This is not a new idea; people have been taking the tops off memory chips and using them as crude image sensors for about 25 years (or at least, that's when I first saw a report in the electronics magazine 'Wireless World').
Here is a discussion around a paper Eric Fossum published in 2005.
- There's no intrinsic reason why a sensor chip of any size based on a memory design would be any cheaper than one based on conventional CMOS (in fact, given that memory-derived sensors have been around for 25 years, if the idea were really any good, that's how all sensors would be made).
- The chip described here isn't going to produce great-quality output, because it's not sufficiently oversampled to allow a one-bit 'digitiser' to produce the kind of DR we are used to. They report oversampling by a factor of 100. This will give 4.32 bits per effective sample at 10MP (shot-noise limited, 100 binary sensels give only about 2√100 = 20 distinguishable levels, i.e. log2 20 ≈ 4.32 bits) - not quite the quality we're used to; see the quick check after this list.
- You never really know what was actually said to the journalist, but the statement 'the memory cell will always be 100 times smaller than CMOS sensor cells; it is bound to be that way because of the sheer number of signal-conditioning transistors the CMOS sensor needs around each pixel' is nonsense. CMOS active pixel sensors usually have four transistors per pixel; by sharing circuitry between pixels, it's not hard to get that down to an effective count between one and two. A memory cell has one - that's not two orders of magnitude different.
- The idea of an oversampled sensor is a good one; it is being pursued by many researchers and has been discussed extensively on these forums.
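As a quick check of the bit-depth figure in the list above, here is a sketch in Python. It assumes (my reading of where the 4.32 comes from; the post doesn't spell it out) that a shot-noise-limited count over N binary sensels yields roughly 2√N statistically distinguishable levels:

```python
import math

def bits_from_oversampling(n_sensels: int) -> float:
    """Bits per output pixel for a shot-noise-limited binary-sensel count.

    Assumes only about 2*sqrt(n) levels are statistically
    distinguishable when the count itself is Poisson-noisy.
    """
    return math.log2(2 * math.sqrt(n_sensels))

print(round(bits_from_oversampling(100), 2))  # 4.32, matching the post
```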
> Just to be clear, I'm not saying that Prof Charbon is talking nonsense, simply that the New Scientist article is.

A fair summary, in my opinion.
I can't even begin to count the flaws in that idea...
> Sure, you can make memory cells so small that you can put a billion of them on a point and shoot camera, but then the cells end up smaller than a wavelength of visible light.

This doesn't matter. The goal is to have a very large number of sensels that register either a photon detection or none. By definition, a photon is absorbed either completely or not at all by a given absorption event. The fact that the sensels are much smaller than the photon wavelength simply means that the probability of absorption of any given photon is spread out over a large number of sensels. Yet, only one will ultimately absorb it. As a result, the detector doesn't improve resolution beyond that imposed by the wavelength, but nonetheless creates a photon distribution map from which an image can be reconstructed.
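To make the 'photon distribution map' concrete, here is a minimal numpy sketch of one such binary-sensel exposure (the geometry and exposure level are illustrative assumptions of mine, not anything from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical geometry: 1000x1000 binary sensels, decimated in 10x10
# blocks to a 100x100-pixel image, i.e. the 100x oversampling factor
# discussed in this thread.
SENSELS, BLOCK = 1000, 10
mean_photons_per_sensel = 0.2  # exposure level (assumed)

# A sensel fires if it absorbs at least one photon: a Poisson arrival
# thresholded to 0/1, so P(fire) = 1 - exp(-mean).
photons = rng.poisson(mean_photons_per_sensel, (SENSELS, SENSELS))
fired = (photons > 0).astype(np.uint16)

# Each output pixel is just the count of fired sensels in its block:
# the "photon distribution map".
image = fired.reshape(SENSELS // BLOCK, BLOCK,
                      SENSELS // BLOCK, BLOCK).sum(axis=(1, 3))

print(image.shape, image.min(), image.max())  # counts between 0 and 100
```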
> Now, there's no real way to provide D/A converters (yes, not A/D) on a per pixel basis

This is the whole point. The image is spatially oversampled, with output pixels each ultimately determined from many on/off sensels on the chip.
> This doesn't matter. The goal is to have a very large number of sensels that register either a photon detection or none. By definition, a photon is absorbed either completely or not at all by a given absorption event. The fact that the sensels are much smaller than the photon wavelength simply means that the probability of absorption of any given photon is spread out over a large number of sensels.

The fact that you have some sort of an aperture smaller than a wavelength of light means that the photon (a wave phenomenon) bounces right off it.
> Yet, only one will ultimately absorb it. As a result, the detector doesn't improve resolution beyond that imposed by the wavelength, but nonetheless creates a photon distribution map from which an image can be reconstructed.
>
> This is the whole point. The image is spatially oversampled, with output pixels each ultimately determined from many on/off sensels on the chip.

The point you're neglecting is that it's insufficiently spatially oversampled. The reason I mentioned having D/A converters is so that some noise shaping could be used. Without that, your oversampling literally has to be on the order of your desired dynamic range. Want something resembling 12 bits (about 4000:1)? You need to oversample at least 4000 (and actually, 8000 works better) cells.
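Joe's numbers here are easy to sanity-check. A small sketch, under his stated assumption of no noise shaping, so the raw count alone must span the desired range:

```python
import math

def sensels_needed(bits: float) -> int:
    """Binary sensels per output pixel so that the raw count alone
    spans the requested number of levels (no noise shaping)."""
    return math.ceil(2 ** bits)

print(sensels_needed(12))      # 4096 -- "at least 4000" in the post
print(2 * sensels_needed(12))  # 8192 -- the "8000 works better" headroom
```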
> Film works the same way (as does the retina). Each grain is binary,

Actually, it isn't, at least not in the sense you mean. Grains are huge, 1-2um. If they were laid out in a two-dimensional structure, you wouldn't have enough resolution for any useful photography in smaller formats. A 1 micron grain would yield 864 binary megapixels, and that means you could decimate to around 10 bits and have less than a 1MP effective resolution.

> but the number of grains that flip state over an area much larger than that of a single grain determines the overall density of the resulting slide or negative.
>
> The 100 sensels per output pixel may not be enough to produce a high quality image comparable to existing CCD or CMOS sensors inside even the least expensive cameras, but the concept can be scaled.

That's what I keep trying to point out. It can't be scaled. Make the sensitive elements too small, and they won't respond to visible light. Make them large enough to work, and you can't have enough resolution. Without three-dimensional fabrication, to create a significant number of layers of detectors, this doesn't work, regardless of whether your detectors are photochemical or electro-optical.
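The grain arithmetic above checks out. Worked through (assuming a 36x24mm frame tiled with square 1um grains, which is my reading of where the 864 figure comes from):

```python
# 36x24mm frame tiled with 1um x 1um binary grains
frame_area_um2 = 36_000 * 24_000
grains = frame_area_um2                # one grain per square micron
print(grains / 1e6)                    # 864.0 "binary megapixels"

# Decimating to ~10 bits takes about 2**10 grains per output pixel:
print(round(grains / 2**10 / 1e6, 2)) # ~0.84 MP -- "less than a 1MP"
```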
> The fact that you have some sort of an aperture smaller than a wavelength of light means that the photon (a wave phenomenon) bounces right off it.

Actually, the photon has to be absorbed somewhere. The point of a deep submicron detector is to collect the photoelectron as close as possible to where it was absorbed.

> Actually, it isn't, at least not in the sense you mean. Grains are huge, 1-2um. [...]

Joe, pls. take a look at my paper cited already in this thread, and let's start from there. I await your comment.
> Actually, the photon has to be absorbed somewhere. The point of a deep submicron detector is to collect the photoelectron as close as possible to where it was absorbed.

Seems reasonable. The wave function of a photon is a probability distribution. If the wavelength is smaller than a pixel, then the chances of a photon 'bullet' hitting a pixel squarely in the middle being absorbed by that pixel are high. In the case of a pixel smaller than a wavelength, the chances of that photon being absorbed by surrounding pixels are relatively higher. Simply, you cease to be able to locate the photon to a single pixel with any degree of certainty. If Joe's argument were correct, then your medium wave radio would need an antenna hundreds of metres long - a little impractical.
> Joe, pls. take a look at my paper cited already in this thread, and let's start from there. I await your comment.

We'll await Joe's comment; my comment would be that I think Joe is right about the number of equal-size jots needed to give a decent DR. Film has several things going for it. One, as Joe suggests, is the 3-D structure, which means that there are many more grains in the emulsion than their size would suggest. The other is that the grains are of varying sizes, which gives varying chemical 'gain' within the sensor, so the film can provide more apparent DR than a fixed grain size could (similar to the Fuji extended-DR idea). The most extreme fine-grain emulsions, with very consistent grain size, were (are) copying films, Technical Pan and so on. When used for pictorial purposes these had very limited tonal range. Other films had multiple-layer emulsions, with different grain sizes in the different layers, to give extended tonal range. I suspect that if the jots had a range of gains, then the apparent tonal range of the sensor would be increased (see the toy model below).
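Here is a toy model of that mixed-gain idea (the sensitivity values are arbitrary illustrations of mine): jots with lower 'gain' keep responding long after the fast jots have all fired, stretching the response curve much as varying grain sizes do in film.

```python
import numpy as np

N = 4096  # jots per output pixel (arbitrary)

def response(exposure: float, sensitivities: np.ndarray) -> float:
    """Expected fraction of jots fired: each jot fires with
    probability 1 - exp(-sensitivity * exposure)."""
    return float(np.mean(1 - np.exp(-sensitivities * exposure)))

uniform = np.full(N, 1.0)                        # equal-gain jots
mixed = np.concatenate([np.full(N // 2, 1.0),    # half "fast" jots
                        np.full(N // 2, 0.01)])  # half low-gain jots

for e in (0.1, 1.0, 10.0, 100.0, 1000.0):
    print(e, round(response(e, uniform), 4), round(response(e, mixed), 4))
```

The uniform array saturates by an exposure of about 10, while the mixed array still separates 10, 100 and 1000 - an extended, film-like shoulder, bought at the cost of a slower toe.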
> We'll await Joe's comment; my comment would be that I think Joe is right about the number of equal-size jots needed to give a decent DR. [...] I suspect that if the jots had a range of gains, then the apparent tonal range of the sensor would be increased.

I can't say I know much about film other than the basics. One interesting thing about my proposed digital film sensor (which someday I hope to build) is that the grain size need not be fixed. It can be adjusted on the fly, or adjusted from frame-plane to frame-plane.

The problem, of course, would be that the effective QE would be reduced.
> One interesting thing about my proposed digital film sensor (which someday I hope to build) is that the grain size need not be fixed. It can be adjusted on the fly, or adjusted from frame-plane to frame-plane.

There is a similar effect with film. Since the developer will be locally oxidised when it reduces a grain, it becomes less able to reduce neighbouring grains. By changing the activity of the developer, the degree of grain clumping, and therefore the visible grain, can be changed - hence fine-grain and high-speed developers. I'd view your idea as being more similar to that, unless you adopted a randomised region size to simulate the different grain sizes.
> Once you catch the individual photon strikes as accurately as possible, almost everything else is computational.

Of course, decreasing the pixel size below the wavelength of the light doesn't significantly increase the accuracy of pixel location. One could perhaps do the equivalent with stacked pixels, Foveon style.
> I think it has sort of mind-blowing possibilities, but maybe that is just me. We still need to be able to build single-electron jots that are 0.5um or smaller on a side. No easy task. Don't throw out your CMOS image sensor DSLR yet.

I wonder if you could do something with a flash-ROM-style floating-gate structure? It lays out well and the readout is pretty minimal.
pixelpepper wrote:
> Of course, decreasing the pixel size below the wavelength of the light doesn't significantly increase the accuracy of pixel location.

I wonder if you meant photoelectron generation location. And, it does if they are packed together. I think you are mixing up a few ideas here. My 2005 work was targeted at sub-diffraction-limit (SDL) pixel sizes, so if that is what you are worried about, worry no longer.
> I wonder if you meant photoelectron generation location.

I meant photon location. What I was getting at is that if you take a quantum mechanical view, then the classical EM wave approximates to the QM wave function, which is a probability distribution of observing the photon over space. If you try to constrain the precision of observation to a scale significantly smaller than the wavelength, then the probability of observing the photon (i.e. it releasing a photoelectron) becomes relatively smaller. What this means is that as the sensels shrink far below the wavelength, their ability to locate the photon reduces.
> And, it does if they are packed together.

Not that I can see, unless my understanding of QM is wrong (which it could be, it's quite rusty).
> I think you are mixing up a few ideas here. My 2005 work was targeted at sub-diffraction-limit (SDL) pixel sizes, so if that is what you are worried about, worry no longer.

I'm not worried about diffraction. The effect I'm talking about is a QM effect, not a wave effect. Actually, I'm not 'worried' about anything; I'm just saying that there is no point having deep sub-wavelength pixels if the intention is to locate the photons more accurately. If the intention is to allow counting of more than one photon at a location, then there may be alternative solutions, such as multilayer sensels or the 'counting' sensel suggested by Mojo151 (although I think there are problems with his approach, in that the array would have to be read out multiple times within an exposure).
> Anyway, the article is here, as mentioned elsewhere in this same thread by someone else. It is conceptual so I think it would be an easy read for this community and has also been discussed previously. But, I am still happy to discuss it some more.
>
> http://www.ericfossum.com/Publications/Papers/Gigapixel%20Digital%20Film%20Sensor%20Proposal.pdf

Thanks Eric, I did read it (in fact I had read it before) and agree that it is an interesting concept. It is, as you say, conceptual, so it doesn't have a detailed discussion of the physics and mathematics. At present, there are a few aspects that I haven't convinced myself about: