DSLR Sensor in P&S

I think the evolution of digital cameras will have more to do with
what people want than what is possible. Consumer digicams won't go
that much beyond 10mp simply because you don't need much more than
that (you don't even need that). Very little of it will have much
to do with the physical limits of possibility.
Could be. OTOH, 99% of camera users (a) never print larger than 4 x
6, and (b) even if they did, don't have the camera technique to
(consistently) get quality good enough for enlargements much bigger
than that. IOW, 1.5 to 2 megapixels would be plenty. Yet 3MP has
become the entry-level baseline, 5 MP the standard in low-end to
midrange ones, and 8 MP the high end.
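FWIW, the back-of-the-envelope arithmetic behind that "1.5 to 2 megapixels would be plenty" claim, assuming a 300 dpi print target (the dpi figure is my own assumption, pick your own):

def megapixels_for_print(width_in, height_in, dpi=300):
    # Pixels needed to print at the given size and dpi, in megapixels.
    return (width_in * dpi) * (height_in * dpi) / 1e6

print(megapixels_for_print(6, 4))    # ~2.16 MP for a 4x6
print(megapixels_for_print(10, 8))   # ~7.2 MP for an 8x10

So even an 8x10 at photo-quality dpi is covered by today's high-end compacts.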

I see no (market-related) reason the megapixel race should stop. I
predict we'll continue to see higher resolutions, until we hit some
limit where megapixels will become irrelevant even in controlled
testing -- possibly somewhere around 40-60 MP.
This kind of thing has precedent. Look no further than computer CPUs. The rivalry between Intel and AMD precipitated a "Megahertz war" (or "megahurts" as some called it).

The original Athlon scaled up in MHz far better than its Intel equivalent (the P3). Intel's response was the P4, which lowered IPC (instructions per clock) in favour of an architecture that could be clocked much higher. AMD responded with a model-number rating, although this wasn't the first time such a rating had been used; Cyrix had used one for its 586 chips. Intel's response was predictable: model numbers were some kind of con.

What drove this was entirely consumer demand and marketing. It's easy for a consumer to see that a 3 GHz P4 is faster than a 2 GHz P4. At one point MHz was a fairly decent basis for comparing different chips, too; the P4 changed that.

Intel has since flip-flopped. Future P4s beyond about 4 GHz have been canned, and the mobile chip (the Pentium-M, which is basically a P3 with some funky power management and other stuff thrown in) will replace them. Unfortunately for Intel, a Pentium-M at 1.7 GHz does about as much as a P4 at 2.8 GHz, so Intel has to eat its earlier words and marketing. What's more, they've decided to drop MHz ratings in favour of model numbers (more eating of words).

Anyway, the moral of the story is this: at about 4GHz Intel decided that frequency one-upmanship is (with current technology) either not feasible or not cost-effective.

I predict digicams will reach a similar point, and sooner rather than later. Firstly, people will start to realise that on anything short of a poster-sized print you simply can't see the difference between 12 and 20 MP, but you can see the difference another two f-stops of dynamic range makes. It won't change completely: "20 megapixels" on a bullet list of marketing material will always have some advantage over "12 megapixels", but megapixels aren't the only determining factor when someone buys a camera.

If anything, a crunch in camera technology is more likely than it was in computers, because cameras rely on something far more tangible and easy to understand: what you can see. Computers vary in performance for many reasons, and the CPU is only one factor of many (e.g. memory speed, bus speed, graphics card, etc.), but if a camera shop puts up a display of the same composition taken with 5, 10, 15 and 20 MP cameras, the differences (or lack thereof) are instantly recognisable.
--
'A colour-sense is more important, in the development of
the individual, than a sense of right or wrong.'
-- Oscar Wilde
 
Take digicam sensors for example. There is a physical limit to how
small they can get, but the story doesn't, as you imply, end there.
What if there were a transparent material such that you could create
a film that was transparent but read in (and thus absorbed) light
of a particular frequency (eg red)? Stack three of those together
and you have a parallel RGB pixel triplet (or RGBG quadruplet) in
the area of one pixel, which could effectively triple (quadruple)
your pixel density (instead of using a repeating Bayer pattern of
four pixel "sites").
Actually, you've just described the Foveon sensor.

The only problem is quantum physics. There's uncertainty about the depth at which any given photon is absorbed, and that actually makes it noisier than a comparable Bayer-pattern sensor.
There is an optic fibre technology that uses erbium-doping of
fibres and (iirc) a magnetic field in place of repeaters.
And in fiber optics, you have meters or kilometers of cable, so such doping can have a noticeable effect. In a sensor, you have microns.
This brings up an interesting issue of light amplification as a
means of "increasing" (effective) sensitivity
If you're already counting every last photon, what are you going to "amplify"?

--
A cyberstalker told me not to post anymore...
So I'm posting even more!

Ciao!

Joe

http://www.swissarmyfork.com
 
Take digicam sensors for example. There is a physical limit to how
small they can get, but the story doesn't, as you imply, end there.
What if there were a transparent material such that you could create
a film that was transparent but read in (and thus absorbed) light
of a particular frequency (eg red)? Stack three of those together
and you have a parallel RGB pixel triplet (or RGBG quadruplet) in
the area of one pixel, which could effectively triple (quadruple)
your pixel density (instead of using a repeating Bayer pattern of
four pixel "sites").
Actually, you've just described the Foveon sensor.
Yes, although Foveon X3 sensors have yet to result in substantially better image quality or pixel density.
There is an optic fibre technology that uses erbium-doping of
fibres and (iirc) a magnetic field in place of repeaters.
And in fiber optics, you have meters or kilometers of cable so that
such doping can have a noticeable effect. In a sensor, you have
microns.
This brings up an interesting issue of light amplification as a
means of "increasing" (effective) sensitivity
If you're already counting every last photon, what are you going to
"amplify"?
Well, my point wasn't to use erbium-doped fibres in digital cameras exactly. Erbium-doped fibres work (iirc) by exciting atoms so that when they absorb a photon, they release another one. The question is: if you can create a material that, under certain conditions, will do that, can you create a material that'll release two or more photons when it absorbs one? More to the point, can it do so in a way that effectively amplifies a signal?
--
'A colour-sense is more important, in the development of
the individual, than a sense of right or wrong.'
-- Oscar Wilde
 
FWIW, here's my prediction of what we'll see in three years:

(1) The problems with AF and shutter lag are history. "Prosumer"
and compact cameras focus and respond just as fast as SLR's.
Or at least much faster than they do now. They're still playing "pin the tail on the donkey": move the lens, check the focus, see if it's better or worse, and move it again.

SLRs still have the advantage of focus sensors that can say how far off focus is and in which direction, so you get a "much too close" signal and know which way to drive the lens, instead of just a "way off" signal, which leaves you guessing.

There are ways around this, from building phase-detection AF into the main sensor by giving a small percentage of the pixels "tilted" microlenses, to vibrating the sensor quickly forward and back so you can get those "better" or "worse" signals thousands of times a second instead of dozens.
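To make the "pin the tail on the donkey" loop concrete, here's a minimal sketch of contrast-detection AF as a hill climb, in Python. The step sizes, the contrast metric and the stopping rule are illustrative assumptions, not anyone's actual firmware, and capture_at() just stands in for "grab a frame with the lens at this position":

import numpy as np

def contrast_score(image):
    # Simple sharpness metric: mean squared image gradient.
    gy, gx = np.gradient(image.astype(float))
    return float(np.mean(gx**2 + gy**2))

def contrast_detect_af(capture_at, lens_pos=0.0, step=0.05, min_step=0.005):
    # Hill-climb the lens position: move, re-measure, keep going while
    # contrast improves, reverse and halve the step when it gets worse.
    best = contrast_score(capture_at(lens_pos))
    while abs(step) > min_step:
        candidate = lens_pos + step
        score = contrast_score(capture_at(candidate))
        if score > best:             # better: accept the move
            lens_pos, best = candidate, score
        else:                        # worse: back up, try smaller steps
            step = -step / 2
    return lens_pos

Every pass through that loop costs a full sensor readout, which is why contrast AF is limited to however many frames per second the sensor can deliver, while a phase-detection sensor gets distance and direction from a single reading.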
(2) Top-end prosumer cameras will have EVF's that rival SLR OVF's
in all-around usability: they'll still be limited in dynamic range,
but resolution and refresh rate will be high enough that there
won't be much practical difference with even a pretty good OVF.
Many people will prefer good EVF's to poor SLR OVF's. Rumors of a
Minolta or Canon camera with an APS+ -sized high-resolution sensor,
a high-end EVF, and interchangeable lenses (using the "old" lens
mount) will be floating around.
Not a prediction. I've handled one. In case you're interested, you got the brand wrong.
(3) Megapixel counts will be about 12-20 for "prosumer" cameras,
16-40 for DSLR's. The sensors will also incorporate electronic ways
to combine pixels for lower resolution but improved sensitivity and
lower noise.
That's been on the market for a while. The Foveon sensor used in the Sigma SD9 and SD10 DSLRs comes immediately to mind. And a smaller Foveon sensor is going into a Polaroid-branded point-and-shoot camera this fall.
(4) Dynamic range of sensors will be improved by several stops:
they will have left slide film behind and will be starting to get
close to color neg.
That's what the Fuji S3 is supposed to have.
(5) Following the cult following gained by the Epson RD-1, a niche
manufacturer will have introduced a compact camera with an
APS-sized sensor. Despite its price tag (higher than an entry-level
DSLR), and modest lens specifications (35/2 efl prime or
28-70/2.8-3.5 efl zoom), it will sell moderately well.
Actually, the RD-1 is bigger and heavier than the Pentax *ist. So all you really need is a couple of "modest" lenses for the existing SLRs, and you're all set.

*ist is the first DSLR to incorporate a major component (the prism) actually sized for the APS format. That was an immediate trimming of about 100 grams of glass weight from the camera (and a substantial streamlining of the SLR shape).
(6) Sensitivity and noise will be roughly the same as now.
Small-sensor cameras will have sensitivity ranges between ISO 50
(some go as low as ISO 25) and ISO 800 (with a few pushing it to
ISO1600). At lower-resolution modes, ISO400 is halfway decent and
ISO800 usable for some purposes. DSLR's will have squeezed another
usable stop of sensitivity from their sensors: ISO3200 will be good
enough for press use, and ISO6400 usable with moderately aggressive
noise reduction.
Sounds about right.
(7) Cell phone cameras will largely occupy the niche of low-end
ultra-compacts. Pixel counts will be comparatively low (3-8 MP)
because of the constraints imposed by bandwidth and the relatively
low-performance lenses that need to be used.
I predict the first camera-phone-related death by the end of 2004, much like the "phone rage" deaths currently in the news, where people are killed because their mobile antics anger those around them.

--
A cyberstalker told me not to post anymore...
So I'm posting even more!

Ciao!

Joe

http://www.swissarmyfork.com
 
Well, my point wasn't to use erbium-doped fibres in digital cameras
exactly. Erbium-doped fibres work (iirc) by exciting atoms so
that when they absorb a photon, they release another one. The
question is: if you can create a material that, under certain
conditions, will do that, can you create a material that'll release
two or more photons when it absorbs one? More to the point, can it
do so in a way that effectively amplifies a signal?
That's an effective way of amplifying a signal when you want to feed it to a detecting device that doesn't detect single photons, such as a human eye, which is why we use light-amplification scopes.

It's also effective when your detection process results in the substantial loss of photons. Although the photodetectors underneath the Bayer filters are approaching 100% quantum efficiency, we are still throwing away a substantial number of photons in the color filters. A multiplier before the filters would be useful. But that's also a problem that could be solved with a more efficient color separation method. I've always been rather fond of the concept of resonant sensors, or of using diffraction for color separation.

--
A cyberstalker told me not to post anymore...
So I'm posting even more!

Ciao!

Joe

http://www.swissarmyfork.com
 
FWIW, here's my prediction of what we'll see in three years:

(1) The problems with AF and shutter lag are history. "Prosumer"
and compact cameras focus and respond just as fast as SLR's.
Or at least much faster than they do now. They're still playing
"pin the tail on the donkey": move the lens, check the focus,
see if it's better or worse, and move it again.

SLRs still have the advantage of focus sensors that can say how far
off focus is and in which direction, so you get a "much too close"
signal and know which way to drive the lens, instead of just a
"way off" signal, which leaves you guessing.
True, of course. Still, Ricoh has demonstrated that very fast AF is possible on even inexpensive PnS's, using rather old technology (active IR AF combined with contrast-detection AF). I'd assume that IR AF would be particularly well suited for small-sensor digital, because of the shorter focal lengths and greater depths of field: you don't need as precise focus as with larger formats, and IR may actually be better at measuring subject distance than "pinning the tail on the donkey."
There are ways around this, from building phase-detection AF into
the main sensor by giving a small percentage of the pixels "tilted"
microlenses, to vibrating the sensor quickly forward and back so
you can get those "better" or "worse" signals thousands of times a
second instead of dozens.
Interesting ideas. I like the idea about the tilted microlenses -- it sounds like it could be pretty easy to make.
(2) Top-end prosumer cameras will have EVF's that rival SLR OVF's
in all-around usability: they'll still be limited in dynamic range,
but resolution and refresh rate will be high enough that there
won't be much practical difference with even a pretty good OVF.
Many people will prefer good EVF's to poor SLR OVF's. Rumors of a
Minolta or Canon camera with an APS+ -sized high-resolution sensor,
a high-end EVF, and interchangeable lenses (using the "old" lens
mount) will be floating around.
Not a prediction. I've handled one. In case you're interested, you
got the brand wrong.
Really? Cool! I guess we'll see one sooner than three years, then.
(3) Megapixel counts will be about 12-20 for "prosumer" cameras,
16-40 for DSLR's. The sensors will also incorporate electronic ways
to combine pixels for lower resolution but improved sensitivity and
lower noise.
That's been on the market for a while. The Foveon sensor used in
the Sigma SD9 and SD10 DSLRs comes immediately to mind. And a
smaller Foveon sensor is going into a Polaroid-branded
point-and-shoot camera this fall.
I was aware of that. However, I understand the Sigma DSLR's don't use the feature, and even if they did, the pixel count is still a bit low for it to be truly useful for still photography.
(4) Dynamic range of sensors will be improved by several stops:
they will have left slide film behind and will be starting to get
close to color neg.
That's what the Fuji S3 is supposed to have.
True. I'm sure the other sensor manufacturers are pursuing ways to raise the blow-out point of their sensors too. Personally, I'm much more interested in these developments than in more megapixels. It's too bad I'm in the Canon system, so the S3 isn't really an option...
(5) Following the cult following gained by the Epson RD-1, a niche
manufacturer will have introduced a compact camera with an
APS-sized sensor. Despite its price tag (higher than an entry-level
DSLR), and modest lens specifications (35/2 efl prime or
28-70/2.8-3.5 efl zoom), it will sell moderately well.
Actually, the RD-1 is bigger and heavier than the Pentax *ist. So
all you really need is a couple of "modest" lenses for the existing
SLRs, and you're all set.
Yeah, I don't really consider the RD-1 a compact, even though the rangefinder lenses certainly are smaller than their SLR counterparts. However, I think it may be important in that it's the first non-SLR digicam to have a big sensor (AFAIK). If it's successful, it might encourage others. I would very much like a camera like my Rollei AF-M 35, only digital and without some of the really stupid design details.

[snip]

Petteri
--




[ http://www.prime-junta.tk ]
 
No offense man, but you're being dense. I don't know if it's on
purpose or not, but you haven't listened to a damned thing I said.
It's just your vision of what reality is without an open mind or
any desire to understand another's point of view. If you live that
way through life, you'll miss out on all the things that other
people can teach you.

Now listen.
You should read some physics, the physics of light. Think of light as being made up of a countable number of single photons that carry the information from the "source" to your film, eye or sensor.

When there is less light (darkness), there are fewer photons to carry the information. When you have a smaller area to collect the information, you get fewer of them in a given time frame.

So if a "pixel" in a sensor is smaller, it needs more time to collect the photons that "carry" the information. This is not limited by the electronics or the sensor, but by the laws of physics, so there is really no workaround for it.

If you underexpose a frame of film and get a grainy image, it is exactly the same thing: there are too few photons to make a whole, sharp picture. If you expose it longer, you will get a sharper image, with all the photons you need. But if the number of photons "traveling" through the air per unit time and per unit of the area you are recording is too limited, you will not get a good picture, but a noisy one. No amount of better electronics can make the sensor "faster" or more sensitive past the point where there just are not enough photons to create a detailed photo.
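That "too few photons" argument can be put in rough numbers: photon arrival is Poisson-distributed, so the signal-to-noise ratio from shot noise alone is the square root of the number of photons detected. A minimal sketch; the flux and quantum-efficiency figures are made-up illustrations, not measurements:

import math

def shot_noise_snr(photons_per_um2, pixel_pitch_um, quantum_efficiency=0.7):
    # Photon shot noise is Poisson, so SNR = sqrt(detected photons).
    detected = photons_per_um2 * pixel_pitch_um**2 * quantum_efficiency
    return math.sqrt(detected)

flux = 50  # photons per square micron for this exposure (arbitrary figure)
for pitch in (2.5, 5.0, 8.0):   # compact-digicam vs. DSLR-ish pixel pitches
    print(f"{pitch} um pixel: SNR ~ {shot_noise_snr(flux, pitch):.1f}")

Quadrupling the pixel area doubles the shot-noise SNR, which is the whole argument for bigger pixels (or bigger sensors) in dim light.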

You can create a sensor that is much denser than the current ones. You can (actually rather easily) add active electronic cooling to image sensors, to reduce the noise created by heat in large sensors. But you can never record more image than the photons your sensor receives (per unit time and per unit area).

So there are limitations, and it is nothing like the chip industry (which I do follow quite "full time" myself). Digital photo gear has been advancing in giant steps for the last 10 years. Computer technology has recently been developing more slowly than digital imaging technology (if you ignore the GPU development of the last 5-10 years, with its 6-month cycles and so on). Development will stay as fast or faster for the next 2-5 years for dSLR cameras, but not any more for typical P&S consumer cameras, which are already rather advanced considering the task they are used for.

We will get bigger sensors (in mm² and megapixels), but the pace of the last few years will slow down in a few years, since there is less to gain. There may be algorithms that can make the noise less evident when photons are limited, as in a night shot with a dense sensor, but the laws of physics will eventually make it impossible to get more information unless you use larger sensor areas and also bigger lenses (the area of the front element, or of the "limiting element"). This is actually not such a big problem once sensor prices fall, since you need to buy the sensor only once, unlike film, which costs more the more you use it.

--
Osku
 
One thing that CAN be developed continuously until some 90-95% of the area is reached is the percentage of the total sensor area that the photodiodes actually cover. At the moment it is probably (an unqualified estimate) around 10-20%, and up to 70-80% the development will be rather easy, going ahead like a steam train. The Fuji sensors already use a larger area than the other sensors, but it is nothing compared to future sensors.

So if you can live with the image quality of current sensors, and the area actually used at the moment is, let's say, around 20% of the sensor size, you could make the sensor 80% smaller (in area) than the current one and still collect as many photons per unit time per pixel. But there the road ends. And I want more pixels, more detail and less noise, and you get less noise with a larger area per recorded pixel.
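Here is that fill-factor arithmetic as a small sketch (the 20% starting figure is the unqualified estimate above; nothing else is assumed):

def area_scale_for_equal_photons(current_fill, future_fill):
    # If the photodiodes go from covering current_fill of the sensor to
    # future_fill, the whole sensor can shrink by this area factor while
    # each pixel still collects the same number of photons per exposure.
    return current_fill / future_fill

print(area_scale_for_equal_photons(0.20, 1.00))  # 0.2  -> the sensor could be 80% smaller
print(area_scale_for_equal_photons(0.20, 0.70))  # ~0.29 -> about 70% smaller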

If you had, let's say, 100x100 pixels per one current pixel in your sensor, it would also be possible to use the recorded data so that you pick only the pixels that actually received information (photons), and software could then compute the image from them (not at that full resolution, but at a usable one, e.g. a 2x2, 3x3 or 5x5 pixel output for each 100x100 recorded pixel area).
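A minimal sketch of that kind of software binning, summing a dense tile of raw counts down to a small output block; the 100x100 tile and 5x5 output are just the figures from the paragraph above, and the simulated photon rate is arbitrary:

import numpy as np

def bin_tile(tile, out_size):
    # Sum a square tile of raw photon counts down to an out_size x out_size
    # block: resolution is traded for collected signal per output pixel.
    n = tile.shape[0] // out_size
    trimmed = tile[:n * out_size, :n * out_size]
    return trimmed.reshape(out_size, n, out_size, n).sum(axis=(1, 3))

rng = np.random.default_rng(0)
tile = rng.poisson(0.05, size=(100, 100))  # very few photons per sub-pixel
print(bin_tile(tile, 5))                   # each output pixel sums a 20x20 patch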

--
Osku
 
Well I thought what you wrote was quite interesting, and with none of the arrogance of the previous person.

I understand what you are stating. But I think that you're finding it difficult to understand me.

To go back to my examples, if we tried to put a .13 micron chip into a 1990 computer, it would overheat immediately in the copper. We have changed the metals and we have changed the process.

Are the photo-receptors we employ today the very best method to capture and record light? Obviously, they work well; but a Radio Shack Tandy computer in 1985 sold for $6000, and they thought it worked well too. Compare that computer to today's computer and I think it might be worth $6.

What I'm suggesting is that by using a completely new technology, or a completely new way of looking at our present technology, we can reduce the size of the sensor. A computer chip uses less electrical current today and processes thousands more calculations. If a fly can see well enough that it is difficult to swat (it basically lives 10x faster than we do), then why can't we make a photon-receptor that does the job better? I'm just not buying the physics lessons. You're saying technology has already reached the barrier of the speed of light and the density of photons in common situations. I'm saying that we are nowhere at all near that barrier in any way, shape, or form, except with the technology we currently use. Today, right now, we are at a barrier. I'm not talking about today.

Why is it difficult to consider the present as an example (a Pentium 4 3600 MHz chip), look back twenty years to the past (a 2.8 MHz chip), reason out the things that changed (different metals and exponentially better methods of stacking transistors), align that to a different technology (camera CCDs), and look forward to the future (entirely different photo-receptors and exponentially better methods of stacking pixel units)?
No offense man, but you're being dense. I don't know if it's on
purpose or not, but you haven't listened to a damned thing I said.
It's just your vision of what reality is without an open mind or
any desire to understand another's point of view. If you live that
way through life, you'll miss out on all the things that other
people can teach you.

Now listen.
You should read some physics, the physics of light. Think of light as being made up of a countable number of single photons that carry the information from the "source" to your film, eye or sensor.

When there is less light (darkness), there are fewer photons to carry the information. When you have a smaller area to collect the information, you get fewer of them in a given time frame.

So if a "pixel" in a sensor is smaller, it needs more time to collect the photons that "carry" the information. This is not limited by the electronics or the sensor, but by the laws of physics, so there is really no workaround for it.

If you underexpose a frame of film and get a grainy image, it is exactly the same thing: there are too few photons to make a whole, sharp picture. If you expose it longer, you will get a sharper image, with all the photons you need. But if the number of photons "traveling" through the air per unit time and per unit of the area you are recording is too limited, you will not get a good picture, but a noisy one. No amount of better electronics can make the sensor "faster" or more sensitive past the point where there just are not enough photons to create a detailed photo.

You can create a sensor that is much denser than the current ones. You can (actually rather easily) add active electronic cooling to image sensors, to reduce the noise created by heat in large sensors. But you can never record more image than the photons your sensor receives (per unit time and per unit area).

So there are limitations, and it is nothing like the chip industry (which I do follow quite "full time" myself). Digital photo gear has been advancing in giant steps for the last 10 years. Computer technology has recently been developing more slowly than digital imaging technology (if you ignore the GPU development of the last 5-10 years, with its 6-month cycles and so on). Development will stay as fast or faster for the next 2-5 years for dSLR cameras, but not any more for typical P&S consumer cameras, which are already rather advanced considering the task they are used for.

We will get bigger sensors (in mm² and megapixels), but the pace of the last few years will slow down in a few years, since there is less to gain. There may be algorithms that can make the noise less evident when photons are limited, as in a night shot with a dense sensor, but the laws of physics will eventually make it impossible to get more information unless you use larger sensor areas and also bigger lenses (the area of the front element, or of the "limiting element"). This is actually not such a big problem once sensor prices fall, since you need to buy the sensor only once, unlike film, which costs more the more you use it.

--
Osku
 
As has been pointed out before, quantum efficiencies of current sensors are approaching 70%. Almost all of the photons that can be counted are already being counted. Let's say you have a sensor that counts sheep going by. If it counts 70 out of every 100 sheep that go by, then there is only the potential for a 43% improvement in your sheep counter. No matter how high-tech your sheep counter becomes, it's never going to count 200 sheep when only 100 sheep go by. The fundamental problem with small imagers isn't the ability of the sensor to record photons; it's that there aren't enough photons to record. Sensors will likely continue to improve until just about every photon that hits the sensor is recorded. Once that happens, it's game over, man. Sensors will have become as good as they can get. No amount of "thinking outside the box" or whatever is going to change that. At that point, either the lens or the sensor has to get larger to get any improvement, or both.
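To put that headroom in plain arithmetic, a tiny sketch (the 70% figure is the one quoted above; the rest follows from it):

import math

def qe_headroom(quantum_efficiency):
    # Maximum possible gain from a perfect (100% QE) sensor, as a factor
    # and in photographic stops.
    factor = 1.0 / quantum_efficiency
    return factor, math.log2(factor)

factor, stops = qe_headroom(0.70)
print(f"at best {factor:.2f}x more photons counted, i.e. about {stops:.2f} stop")
# -> roughly 1.43x, or about half a stop of sensitivity left on the table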
 
And before the response comes back with the "then why don't we just run those single sheep through a magic 10X sheep-multiplier device before we count them" argument, stop and think about what you are suggesting...
 
Actually, the one way that conventional sheep detectors...er, image sensors throw away a lot of photons is with the Bayer-pattern RGB filters. A Foveon-type sensor that was as efficient at detecting photons as a modern CCD would have a three-fold increase in sheep...er, photons recorded. So there is some hope for a somewhat more sensitive 2/3 sensor, though it's unlikely to approach even current APS sensors. One could also trade off some lens range for a faster maximum aperture. Active sensor cooling could come in the form of the new, more efficient thermoelectrics that are coming along. And it wouldn't hurt to stand down the megapixel arms race (more pixels seem only to increase file size with these tiny sensors).

Oh, and to the original poster, I didn't mean to demean you with the sheep analogy. You didn't seem to understand that image sensors are fundamentally different to processors and that the built-in limits are a whole lot closer. It isn't DSLR snobbery. We'd all love it if you could make a tiny sensor that could produce images as good as a 1Ds in low light. Honest.
And before the response comes back with the "then why don't we just
run those single sheep through a magic 10X sheep-multiplier device
before we count them" argument, stop and think about what you are
suggesting...
 
alistairsyme wrote:
snipped...
Sensors will
have become as good as they can get. No amount of "thinking
outside the box" or whatever is going to change that. At that
point, either the lens or the sensor has to get larger to get any
improvement, or both.
So a potential answer is a very clean (noise free) small sensor and a large front element? The front element needs to be large only in relation to the sensor size?

--
bob
Latest offering - 'Dusk on the Buriganga'
http://www.pbase.com/bobtrips
Shots from a bunch of places (esp. SEA and Nepal).
Pictures for friends, not necessarily my best.

http://www.trekearth.com/members/BobTrips/photos/
My better 'attempts'.
 
By your argument, it is more useful for light to be captured from a larger sensor. So why not have a sensor the size of your 77mm lens?

I'm sorry, but these arguments are simply not making any sense to me.

And no one is seeing the point. I feel like a broken record.
You are saying that the problem is with the physics of light itself.

I'm saying that is not necessarily true. You are measuring the basis of your facts by today's technology, not by tomorrow's technology.

Imagine if I had said, back in 1990: look, it's just impossible to fit 17 million transistors onto this 5 V chip. For one thing, 5 volts will fry the transistors, and for another, the process is at something like .25 micron rather than .13, so it doesn't fit electrically and it doesn't fit physically. It can't be done.

With current CCD technology, only so much light can be read at one single moment in time. It is currently at 70%, so even at 100% it could not be the same-sized CCD as the 2/3" one in the 8 MP prosumers. BUT, with future technology, it should be possible. What if they improve the way the photo receptors work so that, like the computer chip, they are four times as useful per unit of real estate? What if they improve the way light is captured in the first place, with a whole different paradigm of thought regarding how that light is processed?

Computers used to be quite expensive. I purchased a Pentium 60 MHz for $3,000 and a stick of 8 MB of RAM for $100. Now, things are different. The computer industry is stable. We don't have to upgrade much, and second from the top of the line is affordable. You can buy 3400 MHz systems with 2000 MB of RAM and 250,000 MB of storage for about half of what that 1993 Pentium 60 system cost me.

Cameras used to be quite expensive. In the 1990s you could spend $15,000-$30,000 for a top-of-the-line camera. Today's second from the top of the line is affordable at around $3000.

Part of the reason for this is that when you are able to mass produce, you can save more money because more people are purchasing and providing the company a profit.

Keeping on top in a competitive market like high-end photographic equipment only means that we will benefit from better and better cameras as each company attempts to win our attention. If Foveon proves to be twice as good when that technology matures, we'll all go over there. If Sony creates a 6MP chip that is as small as the 8mp and it has lower noise and can handle high ISO, many people will buy the cameras that are built around it. So you don't think that technology will continue to improve because you believe in this 70% of maximum light in our current technology. I'm saying that this is not a way to compare it. You are using today's technology as a basis for your statement. We will not be using today's technology tomorrow. We will have better ways to capture light. Nothing technological is static or will ever be static. If you can run electricity through something, we will never ever stop making a better mouse trap out of it.

Did you know that we're about to go to hexadecimal numbering on the internet? Did you know that they are working on organic gel pipelines that are far superior to fiber?

Speaking of Fiber, the same cell of light can contain multiple data pieces. I'm talking lots and lots and lots of different data pieces. One ATM cell of light. It's stored at different frequencies. Apply that to CCDs and they could be as small as the letter "o" you see here & still capture full-frame light by using alogrythyms desciphering the different frequencies of light to fill in what other light would have shown if it were there. Light shining on light could be read in reflections on the frequencies so that unseen light could still be read. Or however they do it. The point is that they'll do it. You can't stop the machine. Once something is built and it has electricity attached to it, we will never ever stop making it better and more efficient.
As has been pointed out before, quantum efficiencies of current
sensors are approaching 70%. Almost all of the photons that can be
counted are already being counted. Let's say you have a sensor
that counts sheep going by. If it counts 70 out of every 100 sheep
that go by, then there is only the potential for a 43% improvement
in your sheep counter. No matter how high-tech your sheep counter
becomes, it's never going to count 200 sheep when only 100 sheep go
by. The fundamental problem with small imagers isn't the ability
of the sensor to record photons; it's that there aren't enough
photons to record. Sensors will likely continue to improve
until just about every photon that hits the sensor is
recorded. Once that happens, it's game over, man. Sensors will
have become as good as they can get. No amount of "thinking
outside the box" or whatever is going to change that. At that
point, either the lens or the sensor has to get larger to get any
improvement, or both.
 
Sensors will
have become as good as they can get. No amount of "thinking
outside the box" or whatever is going to change that. At that
point, either the lens or the sensor has to get larger to get any
improvement, or both.
So a potential answer is a very clean (noise free) small sensor and
a large front element? The front element needs to be large only in
relation to the sensor size?
Very large front element == very bright lens.

Yep, lenses on digicams can be made brighter than their SLR counterparts, gram for gram, because of the smaller format; a 28-200/2.8-3.5 like the one on the Minolta A2 would be freakin' huge if it were an SLR lens. An SLR 28-200 that size would clock perhaps f/4-5.6 -- a stop and a half or so slower. This does partly offset the lower sensitivity of the sensors for that specific application.

Unfortunately, there are "hard" limits to lens brightness, too. Theoretically, you could go as bright as f/0.5 (and, in fact, Zeiss has made f/0.7 lenses for NASA; you can guess the price). However, in practice, very few lenses brighter than f/1.4 have been very successful, and the ones that have succeeded have been fundamentally simple, symmetrical designs (normal-range prime lenses, like the Leica Noctilux 50/1.0 or, more recently, the Cosina-Voigtländer Nokton 35/1.2). Trouble is, you can't use lenses like these on digital cameras until sensor (microlens?) designs improve to deal better with off-axis light. It'll be very interesting to see how well the Epson RD-1 works with these lenses: presumably, they've found some workaround.
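For reference, the "stop and a half or so" is just f-number arithmetic; a quick sketch using the apertures mentioned above:

import math

def stops_between(f_slow, f_fast):
    # Light gathering goes as 1/N^2, and each stop is a factor of two,
    # so the difference in stops is log2((f_slow / f_fast)^2).
    return math.log2((f_slow / f_fast) ** 2)

print(f"{stops_between(4.0, 2.8):.2f} stops at the wide end")  # ~1.0
print(f"{stops_between(5.6, 3.5):.2f} stops at the long end")  # ~1.4

That works out to roughly one to one-and-a-half stops, in line with the estimate above.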

Petteri
--




[ http://www.prime-junta.tk ]
 
By your argument, it is more useful for light to be captured from a
larger sensor. So why not have a sensor the size of your 77mm lens?
That would be absolutely fabulous. It would also be fabulously expensive (with current technology). Medium-format digital is still going strong in its niche precisely because sometimes it is worth $30,000 to have a 645-sized sensor.

Of course, larger formats hit their own limitations too; for example, at any given field of view and aperture, you will have less depth of field. So a 77 mm diameter imager would be significantly limited in some respects. Not to mention that the camera carrying it and the lenses you would need for it would be too bulky for any number of applications. It would be the digital equivalent of a large-format camera: fantastic for landscapes but useless for "general photography," whatever that may mean.
I'm sorry, but these arguments are simply not making any sense to me.
Could that be because you're not listening?
And no one is seeing the point. I feel like a broken record.
You are saying that the problem is with the physics of light itself.
I'm saying that is not necessarily true. You are measuring the
basis of your facts by today's technology, not by tomorrow's
technology.
You appear to have a blind belief that technological advances will inevitably surmount any physical obstacles, and that in a very short amount of time. What you're suggesting re sensors involves obstacles similar to those surrounding time travel, teleportation of macroscopic objects, or exceeding the speed of light: things that are, if not actually impossible, at least wildly improbable.

[snip]
With current CCD technology, only so much light can be read at one
single moment in time. It is currently at 70%, so even at 100% it
could not be the same-sized CCD as the 2/3" one in the 8 MP prosumers.
BUT, with future technology, it should be possible.
What if they improve the way the photo receptors work so that,
like the computer chip, they are four times as useful per unit of
real estate? What if they improve the way light is captured in the
first place, with a whole different paradigm of thought regarding
how that light is processed?
You're still missing the point.

Picture a football field. Cover it with buckets. Picture a rainstorm.

The football field is the sensor array. The buckets are the photoreceptors. The raindrops are the photons. Now, how are you going to increase the efficiency with which the buckets capture the raindrops?

[snip computer stuff]

Once more, Liquid -- your analogy is bad. Sensors aren't computers, even if they involve circuitry etched on silicon. There's no more reason the computer analogy should hold for sensors than that it should hold for cars.

Oh, and by the way -- the microchip today is essentially the same beast it was when it was invented a few decades ago. The improvement has been evolutionary and incremental. There have been no paradigm-shifting changes in the way they're designed or manufactured.
Speaking of Fiber, the same cell of light can contain multiple data
pieces. I'm talking lots and lots and lots of different data
pieces. One ATM cell of light. It's stored at different
frequencies. Apply that to CCDs and they could be as small as the
letter "o" you see here & still capture full-frame light by using
alogrythyms desciphering the different frequencies of light to fill
"Algorithms," Liquid.

A single photon has a single "frequency," no matter what you do with it. The quantum superposition of states has nothing to do with it.
in what other light would have shown if it were there. Light
shining on light could be read in reflections on the frequencies so
that unseen light could still be read. Or however they do it. The
point is that they'll do it.
That sounds like a statement of faith, Liquid. By all means keep it, but don't be surprised if people with more understanding of the problems involved point out to you that it's not based in reality.
You can't stop the machine. Once
something is built and it has electricity attached to it, we will
never ever stop making it better and more efficient.
"Never" is a very long time, Liquid.

Petteri
--




[ http://www.prime-junta.tk ]
 
The fundamental problem with small imagers isn't the ability
of the sensor to record photons. It's that there aren't enough
photons to record. Sensors will likely continue to improve
until just about every photon that hits the sensor is
recorded. Once that happens, it's game over, man. Sensors will
have become as good as they can get. No amount of "thinking
outside the box" or whatever is going to change that. At that
point, either the lens or the sensor has to get larger to get any
improvement, or both.
Which is exactly what I was trying to explain. If there are no more photons, then you have to have a larger sensor area to get more photons recorded. If you double the sensor size, the actual number of photons collected from the subject doubles too (if the distance from the sensor to the subject stays absolutely the same).

But sensors of the future can at best record almost 100% of the photons reaching them; how many that is depends again on the lens and the image-circle size it produces and so on, and on the size (area) of the sensor.

But if sensors can at the moment capture 70% of the photons, that is a rather high figure already. In pictures I have seen, the photodiodes themselves seem to cover only around 20% of the total area of the sensor, since the borders and wiring are not able to record anything. But the microlenses do help, and if present sensors really are up to recording 70%, the road is even shorter: all that is left is to improve the light fall-off around the corners and to use active cooling (which enthusiasts already use in astrophotography to reduce the noise that sensor and ambient heat produce).

--
Osku
 
Osku wrote:
[snip]
improve the light fall-off around the corners and use active
cooling (which enthusiasts already use in astrophotography to
reduce the noise that sensor and ambient heat produce).
I'm no expert with cooling systems (all I know is what a PC-modding friend has explained to me, and the widgets I built into my machine on his suggestion). However, I'm under the impression that these are rather large -- at the very least, you need some way to carry the heat to the radiating element, e.g. a liquid circulating in a pipe inside a copper element attached to the chip. An active system (if I understand you correctly) would add a heat pump system (like in a fridge or an AC system). Wouldn't these be rather big and power-hungry to put inside a device as small as your average digital camera?

Petteri
--




[ http://www.prime-junta.tk ]
 
Liquid_Thought,

When people said (or say) that it is impossible to improve computing power further (reduce chip size, increase MHz, etc.), the statement is always relative to current technology (there is also usually a "we will have to find another way to deal with it"). The barrier is only technology.

But in your suggestion, the limit is set by the nature of light itself: Physics. And you can't do anything about that. The barrier is very different.

This being said, we don't know what the future will be made of... (what if the theory is wrong? what if we are able to predict exactly the behaviour of photon noise? or if we use light and something else to build the image more effectively, or...etc! But this is closer to science fiction).

Olivier
 
Actually, the one way that conventional sheep detectors...er, image
sensors throw away a lot of photons is with the Bayer-pattern RGB
filters. A Foveon-type sensor that was as efficient at detecting
photons as a modern CCD would have a three-fold increase in
sheep...er, photons recorded.
Unfortunately, the Foveon sensor has trouble with the quantum physics of sheep. It needs lots of photons, or it can't see colors. If you look at a chart of wavelength vs. absorption depth, you'll see it's really a chart of the probability of a photon being absorbed at a particular depth vs. its wavelength.

So, if a red sheep wanders into a Foveon sensor, it has near-equal chances of being "counted" as green or blue rather than red. The certainty improves with decreasing wavelength: purple sheep are almost always counted as blue.
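That wavelength-versus-depth chart is essentially Beer-Lambert absorption. A small sketch of the idea; the 1/e absorption depths are rough order-of-magnitude values for silicon, and the layer boundaries are purely illustrative, not Foveon's actual geometry:

import math

# Rough 1/e absorption depths in silicon, in microns (order-of-magnitude only).
ABS_DEPTH_UM = {"blue 450nm": 0.4, "green 550nm": 1.5, "red 650nm": 3.0}

# Illustrative layer boundaries (microns) for a stacked sensor.
LAYERS = [("top/blue", 0.0, 0.4), ("mid/green", 0.4, 1.5), ("deep/red", 1.5, float("inf"))]

def absorbed_fraction(abs_depth, d1, d2):
    # Beer-Lambert: P(absorbed by depth d) = 1 - exp(-d / abs_depth),
    # so the probability of absorption between d1 and d2 is the difference.
    return math.exp(-d1 / abs_depth) - math.exp(-d2 / abs_depth)

for colour, depth in ABS_DEPTH_UM.items():
    probs = [f"{absorbed_fraction(depth, d1, d2):.2f}" for _, d1, d2 in LAYERS]
    print(colour, probs)

The way each wavelength smears across all three layers is exactly the counting uncertainty described above; shorter wavelengths pile up more reliably in the top layer.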
So there is some hope for a somewhat
more sensitive 2/3 sensor, though it's unlikely to approach even
current APS sensors. One could also trade off some lens range for a
faster maximum aperture. Active sensor cooling could come in the
form of the new, more efficient thermoelectrics that are coming
along.
Even if you make the Peltier coolers themselves more efficient, you're still faced with the problem of heat influx from the thermal mass of the camera and its interaction with the ambient, as well as heat generated in the camera itself.

I don't see active cooling on anything small, just on medium format backs.
And it wouldn't hurt to stand down the megapixel arms race
(more pixels seem only to increase file size with these tiny
sensors).

Oh, and to the original poster, I didn't mean to demean you with
the sheep analogy.
Ha. You know it was just a sheep shot.

--
A cyberstalker told me not to post anymore...
So I'm posting even more!

Ciao!

Joe

http://www.swissarmyfork.com
 
