I am well aware of the uses of oversampling. (This is within my
area of expertise.) Diffraction means there is exactly zero signal
above the diffraction cutoff frequency, so oversampling beyond that
point for antialiasing is unnecessary. It is also more efficient to
achieve dynamic range directly than through oversampling for this
application. (We won't be using delta-sigma converters.)
Well, we might be. You don't know what 'this application' is yet.
Are you referring to delta-sigma converters? Spatially? With noise
shaping via a feedback loop filter in the spatial domain? I would
like to see how that works.
Spatially and two-dimensionally. I've been thinking about how that might work for a while. The nice bit is that you can put the front-end comparator of the delta-sigma right in the pixel, which must be about as low-noise a configuration as you can get - direct to digital. The operating frequencies for the delta-sigmas get a bit extreme, though - per column isn't enough to hack it. Splitting the columns in two, with converters top and bottom, might just do it.
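Roughly, per scan line, something like the sketch below - a first-order modulator where the threshold, normalisation and raster order are all my own illustrative choices (a proper 2D version needs a 2D loop filter, which ends up looking a lot like error diffusion):

    import numpy as np

    def delta_sigma_row(x):
        # x: one row of pixel intensities, normalised to [0, 1]
        y = np.zeros(len(x), dtype=np.uint8)   # the 1-bit output stream
        integ = 0.0                            # loop-filter integrator state
        fb = 0.0                               # fed-back quantised output
        for i in range(len(x)):
            integ += x[i] - fb                 # integrate input minus feedback
            fb = 1.0 if integ >= 0.5 else 0.0  # the in-pixel comparator
            y[i] = int(fb)
        return y

    # A spatial low-pass over the bitstream recovers the intensity, e.g.
    # np.convolve(delta_sigma_row(np.full(256, 0.3)), np.ones(16) / 16, mode='same')

The density of 1s in the bitstream tracks the local intensity, and the quantisation noise gets pushed to high spatial frequencies, which is the whole point of the noise shaping.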
Edit: forgot to add. Another interesting possibility with this scheme is a CFA operating at less than per-pixel frequency (but still above the diffraction limit). The whole array is digitised in one go, and the colour information can be decoded from the resultant data stream, just like old-style analogue colour TV, with colour carriers modulated onto the main luminance signal. Not my idea - papers have already been published on this.
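To show what I mean by 'decoded like analogue TV', a toy one-dimensional sketch - the CFA period, filters and signal shapes here are purely my own illustrative assumptions, not anything from the published papers:

    import numpy as np

    n = np.arange(256)
    luma   = 0.5 + 0.2  * np.sin(2 * np.pi * n / 64)   # slowly varying luminance (made up)
    chroma = 0.1 + 0.05 * np.sin(2 * np.pi * n / 80)   # slowly varying colour signal (made up)

    carrier = np.cos(np.pi * n / 2)    # a CFA repeating every 4 pixels puts colour on a spatial carrier
    s = luma + chroma * carrier        # what the digitised stream actually carries

    lp = np.ones(17) / 17                                       # crude low-pass filter
    luma_hat   = np.convolve(s, lp, mode='same')                # the carrier averages out
    chroma_hat = 2 * np.convolve(s * carrier, lp, mode='same')  # demodulate, then low-pass

Because the CFA repeats over several pixels, the colour sits on a carrier below the pixel sampling rate but above the image detail, and a simple filter pair pulls luminance and colour apart again.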
More particularly, oversampling at sufficient bit depth for the tiny
pixels allows options that merely sufficient sampling doesn't. The
small bit depth also opens up interesting hardware options, like
digitisation in the pixel. If it were such a poor idea, Eric Fossum
wouldn't be pursuing it.
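For what it's worth, here is a toy version of the idea as I understand it - a single field of one-bit in-pixel samples pooled into conventional pixels; the array size, exposure level and pooling block are numbers I've picked purely for illustration, not Fossum's:

    import numpy as np

    rng = np.random.default_rng(0)

    # Pretend sensor: 4096 x 4096 tiny pixels, each reporting one bit per field,
    # driven by Poisson photon arrivals.
    mean_photons = 0.25
    bits = rng.poisson(mean_photons, size=(4096, 4096)) > 0   # in-pixel 1-bit digitisation

    # Build a conventional image by summing 16 x 16 neighbourhoods:
    # 256 one-bit samples per output pixel, i.e. roughly 8 bits of depth.
    pixels = bits.reshape(256, 16, 256, 16).sum(axis=(1, 3))

Sum more fields and you get more depth; that is the sort of option that merely sufficient sampling doesn't give you.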
Oh, there are some possibilities for tiny pixels, sure. But you don't
need significant oversampling beyond the diffraction-cutoff Nyquist
spacing, because any finer spacing contains zero information.
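To put a number on that cutoff, a quick back-of-envelope, with the
wavelength and aperture picked purely for illustration:

    wavelength = 550e-9                        # green light, metres (illustrative)
    f_number   = 2.8                           # assumed aperture
    f_cutoff   = 1 / (wavelength * f_number)   # incoherent diffraction cutoff, cycles/metre
    pitch      = 1 / (2 * f_cutoff)            # Nyquist pixel pitch for that cutoff
    print(f_cutoff / 1000, pitch * 1e6)        # ~649 cycles/mm, ~0.77 um

Sample finer than about 0.77 um at f/2.8 and the extra samples carry
nothing the coarser grid didn't already have.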
Who said 'need'? We're talking interesting design possibilities here. You don't 'need' 24 bits per sample or 64 kHz sampling in audio, either, but there are plenty who'll tell you it sounds better than 16/44.
I wonder whether Eric is going to perform data compression within the
array to lower the communications bandwidth.
No, he's cleverer than that. He's going to 'develop' the image right there in the sensor.
Replacing a simple count with
information that includes photon location will be a real data hog.
Yup, I think you need data reduction in or close to the sensor.
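Back of the envelope on why - every number here is an illustrative assumption of mine, not a published spec:

    pixels_n   = 1e9      # a gigapixel-class array of tiny pixels (illustrative)
    fields_s   = 1000     # readout fields per second (illustrative)
    bits_field = 1        # one bit per pixel per field
    raw_rate   = pixels_n * fields_s * bits_field   # bits per second off the chip
    print(raw_rate / 8e9)                           # -> 125 GB/s of raw bitstream

You aren't shipping that over any sensible interface; it has to be folded down on or next to the die.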
If you aren't thinking of outperforming current sensors, then you only
need the same photoelectron density, about 800 e- in a 0.9 μm
pixel, and just 10-bit digitisation. The beneficial effects of high
pixel density make it 'worth the trouble' by themselves. In truth,
it's not necessarily that much trouble, either, but it does require
more processor power and memory.
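Just to show the arithmetic behind those numbers:

    import math

    pitch     = 0.9                       # um
    full_well = 800                       # e-, per the density argument above
    density   = full_well / pitch**2      # ~990 e- per square micron
    shot      = math.sqrt(full_well)      # ~28 e- shot noise at full exposure
    print(density, shot, 2**10 >= full_well)   # 10 bits (1024 levels) counts every electron

So a 10-bit converter already resolves individual electrons, with
headroom to spare.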
I meant that I was thinking of how to outperform a camera that is
already far better than current ones. To really make a gigapixel
camera worthwhile, I would like to have high-quality pixels, which
means getting the shot noise down.
If you have lots of pixels, they don't need to be high quality (not in the sense you mean; they do need low read noise). Shot noise is a given for any particular QE and sensitivity. As I keep pointing out, it's in the image, not the sensor. QE apart, there is no way of engineering around it, save longer exposures. And ISO 32 sensors would be ripped apart in today's market.
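If it helps, the point in toy form - split the same light across more pixels and add them back, and the shot-noise SNR doesn't move (photon counts invented for illustration):

    import numpy as np

    rng = np.random.default_rng(1)

    photons_per_big = 8000                 # photons landing on one 'big pixel' area (illustrative)
    split           = 16                   # same area divided into 16 tiny pixels

    big   = rng.poisson(photons_per_big, size=100_000)
    small = rng.poisson(photons_per_big / split, size=(100_000, split)).sum(axis=1)

    print(big.mean() / big.std(), small.mean() / small.std())   # both ~ sqrt(8000) ~ 89

The noise is set by how many photons you caught, not by how finely you divided them up.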
Then, with new-design lenses, you could significantly outperform the
mere 4-foot-by-6-foot image at 250 DPI that a full-frame camera with
2-micron pixels would produce.
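For the record, the arithmetic behind that print size, assuming the
standard 36 x 24 mm frame:

    pitch_mm   = 0.002                            # 2 micron pixels
    px_w, px_h = 36 / pitch_mm, 24 / pitch_mm     # 18,000 x 12,000 pixels, i.e. 216 MP
    print(px_w / 250 / 12, px_h / 250 / 12)       # at 250 DPI: 6 ft x 4 ft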
Yup. As I've observed before, what Nikon needs to do with the D3x is develop some 'super primes' - that makes more sense than the lame MX rumour.
--
Bob