More MP from a 350D replacement? I don't think so. Read this

Not a great thing, but take cropping, for example. You might simply use a different focal length on a zoom, or add a TC.
I asked previously in this thread if a D2X with a good lens, such
as e.g. the Tamron 90mm macro, would have captured (sorry for my bad
English, I hope you understand) more detail than an 8MP camera. The
answer was yes, and so another question. Looking at test reports at
http://www.photozone.de there are many lenses which seem to outresolve the
350D sensor: surely many primes, but also some relatively
inexpensive zooms like the 70-200 f4, Tamron 28-75, Sigma 100-300 f4 and
70-200 f2.8, etc.
Not that many... Most of those examples only outresolve the sensor at the center, not at the edge, and not at all apertures. For example, the Tamron 28-75 is quite a bit softer at the edge at ALL apertures. That won't get better...

It's mostly the telephoto lenses that can outresolve the sensor across a broad range... those you could equip with a TC.

Another thing to consider is that these are all best-case scenarios. The lens sits on a tripod, multiple images are taken, and only the sharpest ones are used.

In real life, you might not shoot many images of the same thing, just to be able to select the ultimate sharpness. You also might shoot without a tripod. That means the lens sharpness is also limited by motion blur. Don't forget that the 1/f rule only indicates acceptable motion blur.

Then there's also focus accuracy and DoF which can limit the required resolution.
My question is: with so many lenses which can
outresolve 8MP, where is the problem with more megapixels?
More pixels on the same size sensor (and same technology) means:
  • more noise
  • less dynamic range
  • a slower camera (lower burst rate and continuous fps)
So, then the question is whether you need those extra MPs.

Assuming you frame the image reasonably when taking the shot, that's mostly a matter of printer output size. You need to print pretty big before the printer's effective pixel resolution requires more than 8MP. Typical printer ppi (not dpi!) is about 300 ppi. That means you need to print bigger than 12x8" before the printer can actually use those extra MP.

And don't forget that big images are generally viewed at a larger distance, thus reducing the required resolution. 300 ppi is typically required for viewing at something like 30cm. A picture on the wall that's viewed from 1 meter thus requires a lower ppi.
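To put numbers on that, here's a minimal sketch (Python; the 300 ppi figure and the 300-ppi-at-30cm rule of thumb are taken from the text above, and 3456x2304 is just the 350D's nominal 8MP pixel count):

# Rough print-size arithmetic for the paragraphs above; a sketch,
# not anybody's official camera math.

def max_print_size(width_px, height_px, ppi=300):
    """Largest print (inches) the pixels can feed at the given printer ppi."""
    return width_px / ppi, height_px / ppi

def required_ppi(viewing_distance_cm, reference_ppi=300, reference_cm=30):
    """Scale the ~300 ppi at 30 cm rule of thumb to another viewing distance."""
    return reference_ppi * reference_cm / viewing_distance_cm

# 8MP (350D-style 3456 x 2304) at 300 ppi:
print(max_print_size(3456, 2304))   # -> (11.52, 7.68), i.e. roughly 12x8"

# A print on the wall viewed from 1 m needs far less:
print(required_ppi(100))            # -> 90.0 ppi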

In the end, it's all a compromise. I think that 8MP on a crop sensor is a very good compromise between typical high-end lens sharpness, dynamic range and noise, and the amount of MP required for printing your photo.
 
All arguments non-experts advance in these forums to show that MP cannot, should not, will not advance are based on the most faulty assumptions, even as announcements of radical new changes in camera design come out on this site regularly.

Also, consider that a Fuji F11 sensor, if increased to the size of the XT sensor but retaining the same pixel size, would have around 55 MP. I brought up the F11 in this thread, and the instant (predictable) response had to do with dynamic range, but the F11 is a camera that sells well and takes very good pictures in a range of conditions.
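That scaling claim is easy to sanity-check. A quick sketch, using approximate sensor dimensions that are my own assumptions rather than exact specs:

# Back-of-the-envelope check of the 'F11 pixel density on an XT-sized
# sensor' claim. Dimensions are approximate, so treat this as a ballpark.

F11_MP = 6.3                 # Fuji F11: ~6.3 MP
F11_W, F11_H = 7.6, 5.7      # 1/1.7" sensor, mm (approximate)
XT_W, XT_H = 22.2, 14.8      # Canon 350D/XT APS-C sensor, mm (approximate)

scale = (XT_W * XT_H) / (F11_W * F11_H)
print(round(F11_MP * scale, 1))    # -> ~47.8 MP

Slightly different assumed dimensions push the result anywhere between roughly 45 and 55 MP; the order of magnitude is the point, not the exact figure.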

With respect to the dynamic range problem, it is the least important of all issues, as it can be easily solved with a change in design. People shouldn't make the same mistakes famously made by so many before, in assuming that science and engineering cannot make advances.

I agree with a previous poster-- this is all prompted by nothing more than a current slowdown in Canon megapixel counts. It means nothing. One major reason the 30D could be restricted to 8MP is just that they didn't want it to draw off sales of the 5D, which cannot be increased much further without endangering the status of the 1Ds Mk II (and even more obviously the 1D).
 
All arguments non-experts advance in these forums to show that MP
cannot, should not, will not advance are based on the most faulty
assumptions, even as announcements of radical new changes in camera
design come out on this site regularly.
A radical new design? I've yet to see one. And no design can change the laws of physics...
Also, consider that a Fuji F11 sensor, if increased to the size of
the XT sensor but retaining the same pixel size, would have around
55 MP.
You haven't seen me say that higher MP isn't possible. The question is whether it is at all useful, and whether the disadvantages aren't bigger than the advantages.
With respect to the dynamic range problem, it is the least
important of all issues,
Excuse me? Dynamic range is barely enough with the current sensors! It is one of the most important issues with digital cameras.
as it can be easily solved with a change
in design.
Which would be?
People shouldn't make the same mistakes famously made
by so many before, in assuming that science and engineering cannot
make advances.
Nor should people make the even more famous mistake of thinking that science and engineering can break the laws of physics...
 
All arguments non-experts advance in these forums to show that MP
cannot, should not, will not advance are based on the most faulty
assumptions, even as announcements of radical new changes in camera
design come out on this site regularly.
A radical new design? I've yet to see one. And no design can
change the laws of physics...
A redundantly obvious statement, but it doesn't mean anything. Nobody has proved (or can prove) that the laws of physics prevent a useful MP increase. As a matter of fact, there are high-performance cameras out right now with higher pixel density than the XT.
Also, consider that a Fuji F11 sensor, if increased to the size of
the XT sensor but retaining the same pixel size, would have around
55 MP.
You haven't seen me say that higher MP isn't possible. The
question is whether it is at all useful, and whether the
disadvantages aren't bigger than the advantages.
Okay, that I can buy. But I wasn't really trying to pick at your statements, but rather at the herd mentality which (like other fashions here) currently has the masses parroting back and forth extremely silly statements.

I would say yes, that higher MP counts are still useful. I would like to have a higher-density sensor myself; more detail is better. The ideal would be a camera that could record all detail in an image, but of course such a camera is impossible.

People wouldn't stitch images together if they didn't want more detail. There is plenty of scope for useful improvement in this area, if it can be made.
With respect to the dynamic range problem, it is the least
important of all issues,
Excuse me? Dynamic range is barely enough with the current
sensors! It is one of the most important issues with digital
cameras.
I said it is not important in this context because it can be solved easily, which it can.
as it can be easily solved with a change
in design.
Which would be?
I am going to present a very simple scheme which would obviously work. I'm not saying that this would be the best-- it's not even the best I can come up with off the top of my head. It's just a very simple partitioning scheme.

Take any hypothetical overexposure; for this example let's take a blown shot at 1/100s. Take multiple full exposures at a fraction of the shutter time: for this example, let's just double the shutter speed. So you take two 1/200s exposures, all other settings being equal. You don't even necessarily need to close the shutter to do this-- just ensure that all the pixels are read and reset quickly.

Take each double-reading for each pixel, add the numbers, and divide them by 2 to get the final reading. You have effectively just compressed the dynamic range in-camera. You have discarded some information-- the gradations in intensity between steps-- but you have stayed within the required information space of the original format.
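To make that arithmetic concrete, here is a toy simulation of the scheme (my own sketch of the idea above, not any real camera pipeline; the 12-bit ceiling and the photon counts are made-up illustrative numbers):

import numpy as np

# A region too bright for one 1/100s exposure clips; two 1/200s exposures
# each stay in range, and (reading1 + reading2) / 2 keeps the gradation.

FULL_WELL = 4095                      # 12-bit sensor ceiling (assumed)
scene = np.array([1000, 3000, 6000])  # photons per pixel arriving in 1/100s

single = np.minimum(scene, FULL_WELL)      # one 1/100s shot clips at 4095
half = np.minimum(scene / 2, FULL_WELL)    # each 1/200s shot collects half
combined = (half + half) / 2               # add the two readings, divide by 2

print(single)    # [1000 3000 4095]     <- highlight blown
print(combined)  # [ 500. 1500. 3000.]  <- everything still in range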

Obviously, any scheme which works by preventing clipped highlights works for all images; you could intentionally expose any image further to the right.

There are a number of key reasons why this is not the best even I can come up with, including increased processing and buffer requirements. Even this simple scheme would work, though. It is more likely that improvements could be made at the pixel level, which might not require some exotic new technology either. I won't bore you.
People shouldn't make the same mistakes famously made
by so many before, in assuming that science and engineering cannot
make advances.
Nor should people make the even more famous mistake, in thinking
that science and engineering can break the laws of physics...
You haven't successfully shown that the laws of physics prevent a useful increase in megapixel density. Hence this statement means nothing; I haven't made any such mistake. Lastly, I don't know of any famous mistakes like that. Research budgets exist to determine what's possible, but speculation is often its own authority.
 
Well, there is no real limitation except the laws of physics...

If you are able to record each photon you receive with a sensor, then you are not able to improve the sensor any more. I suppose there must also be some rule for the size of one photosite versus the wavelength of the photon (should it be larger than the wavelength of a photon in the visible spectrum?).

Based on this, there must be a physical limit to the information a given size sensor can record with a given light during a given time...

But of course you can make a larger sensor. But then the optical laws (and the ease of use) will also be a limit. The larger the sensor is, the smaller the DOF. Just look at the portraits in the Mamiya ZD gallery: the aperture is reduced to f/12 or even f/18 for a portrait shot:

http://www.mamiya-op.co.jp/home/camera/digital/zd/sample/sample.html

So nothing is perfect in this world... just my understanding...

mmiikkee
 
It seems you know the subject well...
So I am wondering about something else:

Current sensors record the amplitude of the light, but would it be possible to record polarisation information on some "new sensors"?

This would be awesome: being able to apply polarization filtering on a raw file... I guess I'm dreaming... but is it theoretically possible?

mmiikkee
 
and long before you hit any sort of physical limit, you hit a cost/value limit that makes your product unviable in the market. once a technique is discovered, though, it usually goes through a series of refinements to make it cheaper and better.

you're right about there being limits. i just don't agree that an (apparent only) slump in r&d by canon implies that we've hit a hard physical limit already. sorry for no-caps-- typing one handed.
 
It seems you know the subject well...
So I am wondering about something else:
Current sensors record the amplitude of the light, but would it be
possible to record polarisation information on some "new sensors"?

This would be awesome: being able to apply polarization filtering on
a raw file... I guess I'm dreaming... but is it theoretically possible?

mmiikkee
No way! When a photon is captured it is transformed into an electron. You cannot even "remember" the wavelength, and that's why you need Bayer filters to reconstruct the color.
--
Ed Richer
 
I talked at length about cooling in the previous posts! I do agree that you have to cool the sensor for the EM to work. TI chips (Impactron) require "only" -10C because of their smallish pixels.
--
Ed Richer
 
Well, there is no real limitation except the laws of physics...
If you are able to record each photon you receive with a sensor,
then you are not able to improve the sensor any more. I suppose there
must also be some rule for the size of one photosite versus the
wavelength of the photon (should it be larger than the wavelength of
a photon in the visible spectrum?).

Based on this, there must be a physical limit to the information a
given size sensor can record with a given light during a given time...
As far as the sensing part is concerned, there is no appreciable size limit. Sensing a photon means absorbing it: for example, by putting an electron into an excited state. That simply requires the atom to sit within the wave function of the photon.

There is a definite limit to the sharpness with which you can image light onto a surface, of course. That's the famous Abbe's law:
d = 0.61 * wavelength / NA

(NA = numerical aperture = n * sin(phi); n = refractive index, phi = half-angle of the lens).

With the best confocal microscopes, you can reach NA=1.4, and thus get to a spot size of about half the wavelength. Of course, that requires oil-immersion objectives with a working distance of about 100 microns and a field of view of nothing. Not quite relevant for photography... For cameras, we'll be stuck with low NAs, and thus end up at a spot size of a few microns.
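Plugging illustrative numbers into that formula:

import math

# Abbe limit from the formula above: d = 0.61 * wavelength / NA.
# Values are illustrative only.

def abbe_spot_um(wavelength_nm, na):
    """Smallest resolvable spot size, in microns."""
    return 0.61 * wavelength_nm / na / 1000.0

print(abbe_spot_um(550, 1.4))   # oil-immersion microscope: ~0.24 um
print(abbe_spot_um(550, 0.1))   # camera-like low NA: ~3.4 um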

But for CCD cameras, that's all beside the point. In the end, a CCD sensor is simply a 'bucket' for photons. The bigger the bucket (in 3D), the more photons you can collect during the exposure, and thus the better the dynamic range. The 'depth' of the bucket is more or less fixed, so pixel size is directly related to dynamic range.
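As a sketch of that bucket arithmetic (the well depth per square micron and the read noise below are assumed, plausible-looking numbers, not measured specs):

import math

# If bucket depth per unit area is roughly fixed, full-well capacity scales
# with pixel area, and dynamic range in stops is log2(full_well / noise floor).

def dynamic_range_stops(pixel_pitch_um, well_per_um2=1500, read_noise_e=10):
    full_well = well_per_um2 * pixel_pitch_um ** 2
    return math.log2(full_well / read_noise_e)

print(round(dynamic_range_stops(6.4), 1))  # ~8MP APS-C pitch: ~12.6 stops
print(round(dynamic_range_stops(2.0), 1))  # P&S-sized pixel:  ~9.2 stops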

While CCDs are getting more advanced, this simple concept hasn't changed. A continuous or extremely quick readout CCD, as suggested before, to remove this dynamic range barrier at the high speeds of photography, isn't that simple. Sure, you could double the maximum number of collected photons. But you also double the readout noise, increase thermal noise, etc... The circuitry to read out so extremely fast will be noisy as well. In the end, it's doubtful whether you would gain anything at all!

Canon's victorious CMOS sensors have gone exactly the other way: perfecting the production of a conceptually very simple sensor, not making a much more difficult one.

BTW: You could easily make a 50MP sensor for the 350D at this very moment. Simply take a P&S CCD and extend it to 22mm. Nothing is stopping anyone from doing that. It would be rather expensive, and pretty noisy. But absolutely possible with current technology.
 
It seems you know the subject well...
So I am wondering about something else:
Current sensors record the amplitude of the light, but would it be
possible to record polarisation information on some "new sensors"?
Certainly seems possible.

The most obvious way would be to put a polarization filter in front of the sensor, just like we have red, green and blue filters. We could make an RGB-P pattern. (Who needs double green information anyway? :-)

Might be a bit tricky, though, to get the polarizers that small. I guess you would first need to apply it to the whole chip and etch the pattern. I wonder how they do that with the colour filters. Any MEMS experts here?
 
I had this comment made by a technical salesman at my local store, and I think I already wrote something about it (indirectly).

He said that a low-quality lens would show on a camera with a higher-resolution sensor (such as the 16MP Mark II). In the end, I am inclined to believe the assumption that sensors are slowly approaching the limit of high-quality glass. Were there so many improvements made in lenses in the last decades (aside from the mechanical aspect)? We are still using 15-year-old lenses on our current cameras.

I would very much like an expert to compare glass from 40 years ago with today's lenses. Professional film cameras probably had high optical resolution (so I learned), but how high compared to pixels/mm?

Of course, what we have in our hands is probably nothing compared to what the military has in its satellites...

--
Regards,

Benjilafouine
 
So, I understand the concept of pixel size and the number of pixels in relation to sensor size. If I understand your statements, the lens limitation is in its ability to capture more detail, "given a specific pixel size" then?
 
So, I understand the concept of pixel size and the number of pixels
in relation to sensor size. If I understand your statements, the
lens limitation is in its ability to capture more detail, "given a
specific pixel size" then?
Yes.

When the smallest detail that a lens projects onto the CCD is much larger than the pixel size, there's little to be gained by making the pixels even smaller.

It's simply a matter of the weakest link in a chain. The image quality is determined by the weakest component. That can be the sensor, it can be the lens, or both can be equally strong.

With 8MP, you've reached a situation where mostly the lens is limiting for normal consumer lenses, and the sensor and lens are equally important for high-end lenses. For some high-end lenses/settings, the sensor is mostly limiting.

Considering that neither MP nor lens sharpness is free, that's a good compromise.
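One common back-of-the-envelope way to express this weakest-link behavior (a standard approximation I'm adding here, not something from the post above) is to combine component resolutions in quadrature:

import math

# 1/R_sys^2 = 1/R_lens^2 + 1/R_sensor^2, with R in lp/mm.
# The weaker component dominates; matched components split the loss.

def system_resolution(r_lens, r_sensor):
    return 1.0 / math.sqrt(1.0 / r_lens**2 + 1.0 / r_sensor**2)

print(round(system_resolution(40, 80), 1))   # weak lens dominates:  ~35.8
print(round(system_resolution(80, 80), 1))   # equally matched:      ~56.6
print(round(system_resolution(160, 80), 1))  # sensor now the limit: ~71.6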

NB: Something some people don't seem to understand: there is NO hard-defined border as far as image quality/sharpness is concerned. You cannot say that 8MP is too little and 8.5MP is enough. It doesn't work that way.

That's quite fundamental, and not only for photography. Shannon's sampling theorem says that you need to sample at double the signal's highest frequency to capture it at all. But that doesn't mean things stop improving when you use an even higher sampling frequency. They do improve. It's just that you quickly get diminishing returns. (Especially when factoring in signal recovery methods, i.e. post-processing.)
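A quick way to see those diminishing returns (just a sketch, with plain linear interpolation standing in for reconstruction):

import numpy as np

# Sample a 5-cycle sine at increasing rates and linearly interpolate back.
# The error keeps dropping past the Nyquist rate (2 samples per cycle),
# but each doubling buys a smaller absolute improvement.

fine = np.linspace(0.0, 1.0, 10001)
signal = np.sin(2 * np.pi * 5 * fine)

for samples_per_cycle in (3, 4, 8, 16):
    n = 5 * samples_per_cycle
    grid = np.linspace(0.0, 1.0, n + 1)
    recon = np.interp(fine, grid, np.sin(2 * np.pi * 5 * grid))
    print(samples_per_cycle, round(float(np.max(np.abs(recon - signal))), 3))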

When the lens is close to the resolving limit of an 8MP sensor, it certainly will look better on a 10MP sensor. But the difference will be small, certainly not equivalent to the difference in sensor resolution.
 
First, before I start, please know I am totally on board with this thread, including the comments about pixel density, up to.... here.
With 8MP, you've reached a situation where mostly the lens is
limiting for normal consumer lenses, and the sensor and lens are
equally important for high-end lenses. For some high-end
lenses/settings, the sensor is mostly limiting.
Hmm, I think the sensor is limiting for what is now the new generation of consumer lenses. The center of my 17-85mm IS is so sharp it appears to resolve well beyond the (admittedly 6-megapixel) sensor on my 300D. And I mean at least double. I have 100% crops where, in the center of the image, there will be pixels of one solid color right up against pixels of another color with no in-between. This indicates to me that even this 'average' consumer lens is much sharper -- yes, at least in the center -- than the current sensors.

Furthermore, on this subject of which direction Canon is going, lenses vs. sensors... I have read on this forum links to interviews with Canon reps where they have stated (as you probably read) that the megapixel race is pretty much over, but that they intend to focus more (oops, no pun intended) on lenses. Add to that a rumor I read that Canon wants to bring out a series of lenses above the L-series. Then you see new entries in the "consumer" lenses from Canon: first the 17-85mm with its (whatever) element, then the 10-22mm which has something of a cult following, then what appears to be a promising offering, at least on paper, this new 17-55 f2.8 IS lens which is packed with exotic glass elements made from (whatever). So it appears the trend from Canon will be lenses, lenses, lenses, probably to exploit the already excellent bodies they have and to play catch-up with Nikon glass.

Has anyone else culled this conclusion out of the various goings-on here?
 
Photons do not even have a wavelength; that is a characteristic of the wave behavior of light. Light cannot be described totally as photons or as waves - it has behaviors of both. You don't have to remember anything - just as with colour, you determine from which sites (through which filters) the light was captured. Polarization would be no different - as someone else noted, just putting a polarizer in front of the sensor site would do the job (but with the inherent light loss, and indirectly making the 'colour' sites smaller).
No way! When a photon is captured it is transformed into an electron.
You cannot even "remember" the wavelength, and that's why you need
Bayer filters to reconstruct the color.
--
Ed Richer
 
