More MP from a 350D replacement? I don't think so. Read this:

The rod cells of some animal eyes can register the entry of a single photon, which is far beyond the noise/sensitivity capacity of modern camera sensors as I understand it. Also, you have to remember that higher pixel density can still help when there is a decent amount of light. That's why 8MP P & S cameras can take sharp pictures; they just suffer in lower light.
 
Someone with more knowledge can correct me, but IIRC a digital sensor can only resolve about 60 lp/mm as opposed to the potential 80 to over 100 on slow speed film.

That's another thing to consider, assuming I've got my numbers right.
 
amazing resolution but what is "funny" is that for portrait they
used between F18 and F12 aperture! With such a large sensor I guess
it's hard to use large aperture.
Exactly right.

When we are talking about brightness and exposure, the effect of f-stop is the same across all format sizes. But when we are talking about sharpness, the effects of f-stop are directly related to the format size.

As you probably already know, an 80mm lens on a 35mm (or full-frame) camera has essentially the same field-of-view as a 50mm lens on a 1.6x camera like the D-Rebel series (300D/350D).

But to get the same sharpness effect, you also need to divide the f-stop by 1.6. To get the same amount of diffraction on a 1.6x camera as a 35mm camera has at f/16, you need to open up to f/10. To get the same depth-of-field with a 1.6x camera and 50mm lens that the 35mm camera with 80mm lens has at f/8, you have to open up to f/5. (It's usually easier to remember 1-1/3 stops than dividing by 1.6 in your head.)

The Mamiya's sensor is 60mm diagonal, the XT/350D's sensor is 26.7mm diagonal, so the Mamiya's is 2.25 times as large (diagonally). The sharpness effect of f/18 and f/12 is therefore equivalent to f/8 and f/5.3 on a 1.6x camera for the same field-of-view.
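A quick sketch of that arithmetic (the diagonals are the figures from the post; dividing the f-number by the diagonal ratio gives the equivalent aperture on the smaller format):

```python
mamiya_diag = 60.0   # Mamiya ZD sensor diagonal, mm (figure from the post)
rebel_diag = 26.7    # XT/350D sensor diagonal, mm
crop_ratio = mamiya_diag / rebel_diag  # ~2.25x

for f_number in (18, 12):
    equiv = f_number / crop_ratio
    print(f"Mamiya f/{f_number} -> roughly f/{equiv:.1f} on the 1.6x body")
```

This reproduces the f/8 and f/5.3 equivalents quoted above.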
 
The rod cells of some animal eyes can register the entry of a
single photon, which is far beyond the noise/sensitivity capacity
of modern camera sensors as I understand it.
This is misleading. While a rod cell can indeed sense one photon, the brain will ignore such a low signal. It is, if you like, a "noise reduction" algorithm built into the brain. If you spend a long time (30-40 minutes) in complete darkness, and single photons then start arriving at the eye with a reasonable frequency, you start "recording" the photons. You cannot form an image, though, since the integration time (exposure time) is too short.

Also, remember that in low light you do not see colors! Why do you think there is such a limitation? It seems to me that it is indeed very difficult to get images in low light.

The newest CCDs, called EMCCDs (electron-multiplying CCDs), can indeed detect single photons! You do have to cool them to relatively low temperatures (-40C or so) to suppress thermal electron (dark current) production in each pixel. The problem is that cooling the CCD requires a lot of energy, far beyond what a battery can provide for a reasonable amount of time.
Also, you have to
remember that higher pixel density can still help when there is a
decent amount of light. That's why 8MP P & S cameras can take
sharp pictures; they just suffer in lower light.
And they suffer in high intensity light since they do not have enough dynamic range!
--
Ed Richer
 
The rod cells of some animal eyes can register the entry of a
single photon, which is far beyond the noise/sensitivity capacity
of modern camera sensors as I understand it.
This is a very misleading info. While the rod cells can indeed sens
one photon the brain will ignore such a low signal.
The human brain, yes. But cave, deep-sea and nocturnal animals are far more sensitive. Suffice it to say that it is not "a very misleading info", although I don't have time to do research for you. Here's the top hit on a quick Google search (I scanned the page quickly) showing that the HUMAN eye and brain will respond to a stimulus of 9 photons. This is a little bit better than the sensor in your Digital Rebel.

http://math.ucr.edu/home/baez/physics/Quantum/see_a_photon.html
It is if you
want a "noise reduction" algorithm built in the brain. If you spend
a long time (30-40 min) in complete darkness and then single
photons start ariving at the eye with a resonable frequency you
start "recording" the photons. You can not form an image though
since the integration time (exposure time) is too short.
Also, you have to remember also that in low light you do not see
the colors! Why do you think that there is such limitation? It
seems to me that there is indeed very difficult to get images in
low light.
The reason you cannot see colors in low light is because you have two kinds of photoreceptor cells in the retina, and only one type (not the one tuned for low-light sensitivity) can sense colors.
The newest CCD called EMCCD (from electron multiplied CCD) can
indeed detect single photons! You do have to cool them to
relatively low temperatures (-40C or so) to supress the termal
electron (dark current) production in each pixel. The problem is
that cooling the CCD requires a lot of energy, way beyond what a
battery can provide for reasonable amounts of time.
That's interesting. This means that someday we will have such sensitivity available in handheld cameras, if some hard barrier is not found.
Also, you have to
remember that higher pixel density can still help when there is a
decent amount of light. That's why 8MP P & S cameras can take
sharp pictures; they just suffer in lower light.
And they suffer in high intensity light since they do not have
enough dynamic range!
They have enough to take good-quality pictures. Look at the Fuji F11 samples on this very site, and then come back and say that the camera cannot supply enough dynamic range to take pictures in good light. You also have to remember that technology is constantly improving.
 
The rod cells of some animal eyes can register the entry of a
single photon, which is far beyond the noise/sensitivity capacity
of modern camera sensors as I understand it.
This is a very misleading info. While the rod cells can indeed sens
one photon the brain will ignore such a low signal.
The human brain, yes. But cave, deep-sea and nocturnal animals are
far more sensitive. Suffice it to say that it is not "a very
misleading info", although I don't have time to do research for
you. Here's the top hit on a quick Google search (I scanned the
page quickly) showing that the HUMAN eye and brain will respond to
a stimulus of 9 photons. This is a little bit better than the
sensor in your Digital Rebel.
If you consider that a 7-micron pixel can store 45,000 photons (this number is not from Canon, but it is representative of most CCDs I have worked with) and the A/D converter is 12-bit (4096 levels), it turns out that the detection threshold is about 11 photons. Not bad compared with the 9 photons you mentioned, considering the color filters in front of the sensor. The typical readout noise of good electronics is 3-5 electrons (photons), so the 11 is chosen such that you will not see it in the final image. At least not very apparently.
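A sketch of that arithmetic (the full-well capacity and bit depth are the figures assumed in the post, not manufacturer data):

```python
full_well = 45_000    # photons a ~7 micron pixel can hold (poster's figure, not Canon's)
adc_levels = 2 ** 12  # 12-bit A/D converter -> 4096 output levels

photons_per_level = full_well / adc_levels
print(f"~{photons_per_level:.0f} photons per output level")
```

45,000 / 4096 works out to roughly 11 photons per level, matching the threshold quoted above.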
http://math.ucr.edu/home/baez/physics/Quantum/see_a_photon.html
It is if you
want a "noise reduction" algorithm built in the brain. If you spend
a long time (30-40 min) in complete darkness and then single
photons start ariving at the eye with a resonable frequency you
start "recording" the photons. You can not form an image though
since the integration time (exposure time) is too short.
Also, you have to remember also that in low light you do not see
the colors! Why do you think that there is such limitation? It
seems to me that there is indeed very difficult to get images in
low light.
The reason you cannot see colors in low light is because you have
two kinds of photoreceptor cells in the retina, and only one type
(not the one tuned for low-light sensitivity) can sense colors.
That is exactly my point. If it were easy, the eye wouldn't need two types of cells! If you eliminate the color filters from the sensor, you get better light sensitivity. The filter probably has a 50-60% transmission rate, and that applies to only 1/3 of the photons (assuming a uniform wavelength distribution). For this reason, the cameras designed for the highest sensitivity are b/w.
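A rough sketch of that light loss (both numbers are the post's assumptions, not measured values):

```python
filter_transmission = 0.55  # assumed 50-60% transmission of a color filter
band_fraction = 1 / 3       # each filtered photosite sees roughly a third of the spectrum

effective = filter_transmission * band_fraction
print(f"~{effective:.0%} of incident photons reach a filtered photosite")
```

So under these assumptions only about a fifth of the light survives the color filter, which is why dropping it buys real sensitivity.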
The newest CCD called EMCCD (from electron multiplied CCD) can
indeed detect single photons! You do have to cool them to
relatively low temperatures (-40C or so) to supress the termal
electron (dark current) production in each pixel. The problem is
that cooling the CCD requires a lot of energy, way beyond what a
battery can provide for reasonable amounts of time.
That's interesting. This means that someday we will have such
sensitivity available in handheld cameras, if some hard barrier is
not found.
These cameras exist right now. You can buy one for your telescope, but it needs a large battery to run for long periods of time. Apogee Inc sells one for ~$5,000 if I remember correctly.
Also, you have to
remember that higher pixel density can still help when there is a
decent amount of light. That's why 8MP P & S cameras can take
sharp pictures; they just suffer in lower light.
And they suffer in high intensity light since they do not have
enough dynamic range!
They have enough to take good-quality pictures. Look at the Fuji
F11 samples on this very site, and then come back and say that the
camera cannot supply enough dynamic range to take pictures in good
light. You also have to remember that technology is constantly
improving.
"Good light" is something very vague. I think you have encountered plenty of situations where the lighting did not allow you to get a good image. As an example, when taking pictures of a subject in shadow with a lot of sky in the frame, you typically blow out the sky. A large sensor with a lot of dynamic range will be able to record more of that range than the smaller sensor in a compact P&S.

--
Ed Richer
 
Whoa, I didn't know that CCDs were able to work with such a small number of photons!

Don't quantum effects start to limit the precision of the result when you're talking about only a few photons?

mmiikkee
 
whoaouh i didn't know that CCD were able to use such a little
number of photon!
Aren't quantic rules starting to limit precision of the result when
you're talking of a few photons only?

mmiikkee
I think you are referring to "shot noise". Indeed, the minimum noise due to the stochastic nature of light is Poisson noise, which is the square root of the signal itself. So if your signal is only 9 photons, your noise will be 3, or about 33%. If the signal is 100 photons, the noise is 10 (10%); for 400 you get 20 (5%); and for a full pixel of 45,000 photons the noise is 212, or about 0.5%. This is the main reason why underexposed images are noisier! To these noise values you have to add a relatively constant readout noise, and the dark noise (which would be better called thermal noise). The dark noise depends mostly on the temperature of the sensor and the exposure time. Images taken with long exposures will have higher noise because of this. The only way to reduce dark noise is to cool the sensor. You get approximately half the noise for every 8C reduction in temperature.
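The shot-noise figures above follow directly from sigma = sqrt(signal); a quick check:

```python
import math

# Poisson (shot) noise: the standard deviation is the square root of the signal.
for signal in (9, 100, 400, 45_000):
    noise = math.sqrt(signal)
    print(f"{signal:>6} photons -> noise {noise:.0f} ({100 * noise / signal:.1f}%)")
```

The relative noise shrinks as the signal grows, which is exactly why underexposed (low-signal) images look noisier.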

--
Ed Richer
 
Do you mean that current CCD sensors are already close to the physical limit? That's amazing. CMOS sensor pixels at least look so big that I thought there was still a margin for improvement.

Does this mean that nearly every photon interacting with a photosite already has an effect on the final 12-bit value on a CMOS sensor?

If yes, this should mean that the megapixel race is close to its end...?
(Unless people are OK with lower sensitivity...)

mmiikkee
 
do you mean that current CCD sensors are already close to the
physical limit? that's amazing. At least Cmos sensors look so big
that I thought there still was a margin for improvement.

Does this mean that nearly each photon interacting with a photosite
already have an effect on the final 12bit value on a cmos sensor?
Not quite! Because of the readout noise, you need about 10 photons to get a change of one level in the digital value of the final image. The EMCCD will reduce the readout noise considerably, or even eliminate it. But there is no free lunch. The EMCCD eliminates the noise by amplifying the number of electrons (converted photons) at each "clock" (transfer of charge in the serial register). It is very effective on low numbers of electrons, so it is most useful when you have ONLY low-light images. I think TI is ready to put out a video camera based on their EMCCD, called the Impactron. It is very good for night imaging or astrophotography. The good news is that you can change the gain (amplification) relatively easily. So you could have a camera that lets you turn a "night shot" mode on and apply EM gain to obtain relatively good images in low-light situations, then revert to normal mode for "good" light.

As I said, that only reduces the readout noise, which is a significant portion of the noise in low-light situations.

As I said, the best solution for "dark noise" is cooling, and it is not (yet) practical due to energy consumption (remember your electric bill during summer?). There are some innovations that reduce the dark noise to some degree (buried channel comes to mind), but the improvement is not dramatic. So at least as long as we stay with silicon, the improvements on dark noise are limited.
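The cooling rule of thumb mentioned earlier (dark noise halves for every ~8C of cooling) can be sketched like this; the helper name and step size are just illustrative:

```python
def dark_noise_factor(cooling_c, halving_step_c=8.0):
    """Relative dark noise after cooling by `cooling_c` degrees,
    assuming the noise halves every `halving_step_c` degrees."""
    return 0.5 ** (cooling_c / halving_step_c)

print(dark_noise_factor(24))  # cooling by 24C -> 1/8 of the original dark noise
```

It shows why serious cooling (tens of degrees) pays off so quickly, and why the power budget is the real obstacle.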

Some improvement can come from increasing the quantum efficiency of the sensor. A good sensor right now has 45-55% quantum efficiency. That means only about half of the photons that reach the surface of the sensor are actually transformed into electrons. And you have to remember that the color filters in front of each pixel and the antialiasing filter cut that light down significantly!

Back-illuminated CCDs can have a QE of 85-90%, so almost double. They are very expensive, though, and I do not think they will be introduced into consumer cameras any time soon. Professional astronomical and scientific cameras use such sensors. The cost of a 512x512-pixel sensor by itself is ~$3,500!

In conclusion, while there can be some incremental improvements in sensor technology in the near future, unless very "exotic" technologies become significantly cheaper, the megapixel race is dead!
If yes this means should mean that the megapixel race is close to
its end...?
(Except if people are ok to have lower sensitivity...)

mmiikkee
--
Ed Richer
 
thanks for all info. Very interesting!

I never took time to look for info about ccd and cmos images (apart from the basics about color pattern and so on).

In fact the quantum efficiency of 50% is much higher than what I would have expected. This looks like very mature technology!

But if I understand correctly, this should mean that there can't be any significant improvement (I mean a noise reduction by a factor of 10, for example), not only because of technology but also because of physical limitations...

So I can't dream of a noise-free ISO 6400 on a 40MP camera, for example (except with a huge sensor, and then an even bigger lens and a very small DOF...).
Is that correct?

Some people claim that film had even better sharpness. Is that true, then? Is the chemical process even more efficient than a CCD? Or was it true only for low-ISO films?

mmiikkee
 
thanks for all info. Very interesting!
I never took time to look for info about ccd and cmos images (apart
from the basics about color pattern and so on).
In fact the quantum efficiency of 50% is much higher than what I
would have expected. This looks like very mature technology!

But if I understand correctly this should mean that there can't be
any significant improvement ( I mean a noise reduction of a factor
10 for example) not only because of technology but also because of
physical limitations...

So I can't dream of a noise free ISO6400 on a 40MP camera for
example (except with a huge sensor and then a even bigger lense and
a very small DOF...)
is that correct?
It is not that bad. EMCCD technology appeared only 3 years ago and it is already available in some specialty cameras. EM would be particularly useful for high-ISO images, so you can dream of an ISO 6400 camera at 8MP in the (not so) near future.

The race for more MP is not limited only by the pixel size/noise problem. The sad fact is that the lenses are not very good either. If you look at the lens review sites, you will see that the 8MP sensor out-resolves most lenses. Only the good primes and very expensive L glass out-resolve the 8MP sensor, and only at the optimal aperture. And these are lab tests with the camera on a tripod, mirror lockup, remote shutter, no atmospheric haze, etc.

So the sensitivity WILL be the next race!
Some people claim that film had even better sharpness is it true
then? Is chemical process even more efficient than CCD? Or was it
true only for low ISO films?

mmiikkee
--
Ed Richer
 
Are there any? I mostly scanned through the rest of the thread because I'm not a physics major and don't really understand most of it (but darn it, I sure do try!).

So what I've come up with is that the APS-C-sized 8MP sensor we got in our 350D is pretty much out-resolving even the mighty L lenses for the sensor size, and so my new (used) 70-200L is finally reaching the full potential of the sensor.

So what next? "M" lenses? Or did they reach the practical limit of lens quality back in the FD age? Buwahaha, and here I was thinking my new L lens was pretty much going to make my camera the limiting factor in image quality for the next decade.
 
The rod cells of some animal eyes can register the entry of a
single photon, which is far beyond the noise/sensitivity capacity
of modern camera sensors as I understand it.
That's an urban myth...

ANY optical sensor registers the entry of a single photon; that's just the nature of light. In your CCD camera, a single photon creates a single electron. Of course, there is some noise, so you wouldn't be able to detect just a single electron at the end.

But neither can we. We cannot see single photons. Our eye/brain has the same noise problems as our CCD sensor.

If we truly were able to 'see' (i.e. consciously detect) single photons, then the army wouldn't have had to build night-vision goggles! :-)
 
How many MP do we need? 20MP? 50MP? Are we going to make wall sized murals from each and every single shot we take? 6MP, heck, 4-5MP is more than enough for most people.
 
thanks for all info. Very interesting!
I never took time to look for info about ccd and cmos images (apart
from the basics about color pattern and so on).
In fact the quantum efficiency of 50% is much higher than what I
would have expected. This looks like very mature technology!
Actually, there are already CCD systems with >85% quantum efficiency... I don't know the efficiency of the current CMOS sensors in our cameras. It could be much lower, and then there would be room for improvement.
But if I understand correctly this should mean that there can't be
any significant improvement ( I mean a noise reduction of a factor
10 for example) not only because of technology but also because of
physical limitations...

So I can't dream of a noise free ISO6400 on a 40MP camera for
example (except with a huge sensor and then a even bigger lense and
a very small DOF...)
is that correct?
You can always dream. :-)))

One thing Ed left out is temperature. For photo cameras, the biggest improvement would probably come from cooling... (Most of the noise is thermal noise...) Typical cooled CCD cameras use Peltier coolers to go down to -80 degrees Celsius. But that would be a little difficult on a camera... You would need a big battery to power the Peltier, and big cooling fins on the camera body...

Techniques like an EMCCD are totally useless on a non-cooled camera: you would simply amplify thermal noise.

So yes, we are already pretty close to physical limitations. Don't expect any miracles in the future...
Some people claim that film had even better sharpness is it true
then? Is chemical process even more efficient than CCD? Or was it
true only for low ISO films?
Only true for extremely slow film. ISO 100 is roughly equal to 6-8MP; ISO 50 would be 12MP, and ISO 25 about 16MP (IIRC).

And that's for film on full frame!! So the resolution of the crop sensor in the 350D is as good as ISO 25 film. Just because it's cropped, you get fewer MP...
 
... for large landscapes it is very pleasant to have a large and very, very detailed photo that you can view even through a magnifier.

But maybe the best method for this is to create a large panorama from several photos. Of course it's not as simple as one shot, and the scene must be static, but I've seen that some people have made 1-gigapixel images this way.
How do you print that?... Well, I don't know.

But for sure it would be interesting only for a few shots, not all.

The resolution offered by the Mamiya ZD is already awesome:
http://www.mamiya-op.co.jp/home/camera/digital/zd/sample/sample.html

For my shots I do not need more than 8MP and still would be happy with 4MP.

mmiikkee
 
Not a great thing, but for example... cropping.

I asked previously in this thread whether a D2X with a good lens, e.g. the Tamron 90mm macro, would capture (sorry for my bad English, I hope you understand) more detail than an 8MP camera. The answer was yes, and so another question. Looking at the test reports at http://www.photozone.de, there are many lenses which seem to out-resolve the 350D sensor: certainly many primes, but also some relatively inexpensive zooms like the 70-200 f/4, Tamron 28-75, Sigma 100-300 f/4 and 70-200 f/2.8, etc. My question is: with so many lenses that can out-resolve 8MP, where is the problem with more MP?
--
cuginoStefano
http://www.pbase.com/cuginostefano
 
