Will a compact 15 megapixel sensor come out anytime soon?

What I said is true...you cannot get more detail.

That's not to say that having more pixels doesn't do other things for you. More pixels should help with aliasing and false color. But more detail? No.

Consider the G10. With a pixel pitch of 1.7um, it's just barely diffraction limited at f/2.8 with an Airy Disk of 3.8um. So it is impossible to get more detail out of a larger pixel count.
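For anyone who wants to check the arithmetic, here is a rough sketch in Python (the ~550 nm wavelength and the 1.7um pitch are assumptions on my part, not official Canon figures):

```python
# Rough check of the G10 numbers: Airy disk diameter vs. pixel pitch.
# Assumes green light (~550 nm) and the commonly quoted 1.7 um pixel pitch.
wavelength_um = 0.55        # ~green light, in microns
f_number = 2.8
pixel_pitch_um = 1.7

airy_diameter_um = 2.44 * wavelength_um * f_number   # diameter out to the first minimum
print(f"Airy disk diameter at f/{f_number}: {airy_diameter_um:.2f} um")              # ~3.76 um
print(f"Pixels per Airy disk diameter: {airy_diameter_um / pixel_pitch_um:.2f}")     # ~2.2
```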

.
Although you are certainly getting into the "law" of diminishing returns, it is incorrect to think that you do not get more detail just because a pixel is the same size as an Airy disc.
You're right. But if you were to calculate it yourself, or maybe just read what I wrote, you would see that the pixel is half the size of the Airy Disk. At that point you cannot get any more detail.

You can "think about it..." all you like...science say you can't get any more detail.
That's not science. That's a simplistic assumption.

Whoever came up with the idea of using the literal size of the airy disk as a threshold metric for pixel size should be put in stockades in the public square. Diffraction causes a loss of contrast for high frequencies. It does not eradicate them, although, well beyond the so-called diffraction limit, they become relatively useless as the noise is much stronger than them.
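To make the "loss of contrast, not eradication" point concrete, here is a small sketch of the diffraction MTF of an ideal circular aperture (my own illustration, assuming ~550 nm light and f/2.8); contrast falls off smoothly and only reaches zero at the cutoff frequency 1/(lambda * f-number):

```python
import numpy as np

# Diffraction MTF of an ideal circular aperture (incoherent light).
# Contrast falls off smoothly and only hits zero at the cutoff frequency.
wavelength_mm = 0.00055      # ~550 nm, in mm (assumed)
f_number = 2.8
cutoff = 1.0 / (wavelength_mm * f_number)   # cycles/mm, ~649 lp/mm at f/2.8

def diffraction_mtf(freq):
    """MTF of a diffraction-limited circular aperture at spatial frequency freq (lp/mm)."""
    s = np.clip(freq / cutoff, 0.0, 1.0)    # normalized frequency
    return (2.0 / np.pi) * (np.arccos(s) - s * np.sqrt(1.0 - s * s))

for lp_mm in (50, 100, 200, 300, 400, 500, 600):
    print(f"{lp_mm:4d} lp/mm -> MTF {diffraction_mtf(lp_mm):.2f}")
```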

Most of the light is in the center of the cones; very little in the rings. The rings have only a mild effect; only the center of the cone has a large effect.

Resolving airy disks is not a part of normal photography. You'd have to photograph point light sources with a perfect lens in a vacuum to end up with an airy disk at real world lens apertures. In normal photography, there are no airy disks in the image; the airy disk is simply a probability map of photon displacement (in addition to all the other sources of displacement).

--
John

 
Every single thing you said is 100% wrong.
That's not science. That's a simplistic assumption.
It’s been a part of optical science for a very long time now.
Whoever came up with the idea of using the literal size of the airy disk as a threshold metric for pixel size should be put in stockades in the public square.
I don't think anyone suggested that pixels don't need to be smaller than an airy disk. What's been suggested is that pixels don't need to be smaller than half the airy disk, and at that point all contrast is lost.
Most of the light is in the center of the cones; very little in the rings. The rings have only a mild effect; only the center of the cone has a large effect.
The size of an airy disk is defined by the location of its first minimum. Therefore, the cone IS the only thing that's being considered. A pixel that's half the size of an airy disk gets half the cone. That's why all contrast is lost and you can't get any more detail.
Resolving airy disks is not a part of normal photography. You'd have to photograph point light sources with a perfect lens in a vacuum to end up with an airy disk at real world lens apertures. In normal photography, there are no airy disks in the image; the airy disk is simply a probability map of photon displacement (in addition to all the other sources of displacement).
There are images in the Canon Talk forum showing diffraction patterns in (from one example) the bright lights of a bridge in a night scene. And how in the world can you say that resolving airy disks isn't a part of normal photography, when that's the very reason for images getting softer and losing detail as the aperture is closed? I don't understand how you could possibly come to such a conclusion.

.
 
What I said is true...you cannot get more detail.

That's not to say that having more pixels doesn't do other things for you. More pixels should help with aliasing and false color. But more detail? No.

Consider the G10. With a pixel pitch of 1.7um, it's just barely diffraction limited at f/2.8 with an Airy Disk of 3.8um. So it is impossible to get more detail out of a larger pixel count.

.
Although you are certainly getting into the "law" of diminishing returns, it is incorrect to think that you do not get more detail just because a pixel is the same size as an Airy disc.
You're right. But if you were to calculate it yourself, or maybe just read what I wrote, you would see that the pixel is half the size of the Airy Disk. At that point you cannot get any more detail.

You can "think about it..." all you like...science say you can't get any more detail.
That's not science. That's a simplistic assumption.

Whoever came up with the idea of using the literal size of the airy disk as a threshold metric for pixel size should be put in stockades in the public square.
Well, Lord Rayleigh died a long time ago, so fortunately for him your wish will go unfulfilled.
Diffraction causes a loss of contrast for high frequencies. It does not eradicate them, although, well beyond the so-called diffraction limit, they become relatively useless as the noise is much stronger than them.
Sorry, there is a "brick wall" for resolution. When the Airy disks of two point sources overlap to the extent that there is no longer a dip in intensity between them, they cannot be resolved no matter how much you sharpen. That's slightly below the Rayleigh criterion of the spot separation being equal to the radius of the first minimum of the Airy pattern, but close enough. What is often not mentioned however is that the resolution of CFA sensors is somewhat less than the pixel spacing, so that resolution gains are achieved even when the disk is larger than the pixel spacing.
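For anyone who wants to see the numbers behind that, here is a rough sketch (assumed values: ~550 nm light, f/2.8; scipy's Bessel function stands in for the Airy pattern) comparing the midpoint dip at the Rayleigh separation with the smaller Sparrow separation, where the dip has just vanished:

```python
import numpy as np
from scipy.special import j1

# Two incoherent point sources through an ideal circular aperture: compare the
# intensity midway between them with the intensity at the source positions, at
# the Rayleigh separation and at the smaller Sparrow separation.
wavelength_um, f_number = 0.55, 2.8
airy_radius = 1.22 * wavelength_um * f_number      # first minimum, ~1.88 um

def airy(r_um):
    """Normalized Airy pattern intensity at radius r_um (microns)."""
    v = np.pi * np.abs(r_um) / (wavelength_um * f_number)
    v = np.where(v < 1e-9, 1e-9, v)                # avoid 0/0 at the center
    return (2.0 * j1(v) / v) ** 2

def two_points(separation_um, x_um):
    return airy(x_um - separation_um / 2) + airy(x_um + separation_um / 2)

for name, sep in (("Rayleigh (1.00 x Airy radius)", airy_radius),
                  ("Sparrow  (0.77 x Airy radius)", 0.77 * airy_radius)):
    at_peak = two_points(sep, sep / 2)             # on one of the source positions
    at_mid = two_points(sep, 0.0)                  # midway between them
    # a ratio >= 1 means there is no dip left: the pair reads as a single blob
    print(f"{name}: midpoint/peak = {at_mid / at_peak:.2f}")
```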
Most of the light is in the center of the cones; very little in the rings. The rings have only a mild effect; only the center of the cone has a large effect.
The rings are not at issue. The issue is when do the central spots overlap to the point that they merge into one blob.
Resolving airy disks is not a part of normal photography. You'd have to photograph point light sources with a perfect lens in a vacuum to end up with an airy disk at real world lens apertures. In normal photography, there are no airy disks in the image; the airy disk is simply a probability map of photon displacement (in addition to all the other sources of displacement).
Resolving the Airy pattern would only make sense if one were going to try some sort of deconvolution to undo it. Not high on my wish list.

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
More megapixels are most often seen as a selling point, but the typical consumer doesn't want to mess with such large files (especially with a P&S) and ends up using smaller image sizes.

I wonder how difficult it would be for a company to use that point to put its rivals in a bad light while making cameras optimized for "real" common use.
 
You're right. But if you were to calculate it yourself, or maybe just read what I wrote, you would see that the pixel is half the size of the Airy Disk. At that point you cannot get any more detail.

You can "think about it..." all you like...science say you can't get any more detail.

.
Thinking about things, experimenting, and drawing conclusions is what science is all about. Spouting things others say on "authority" is merely dogma.

Let us conduct a thought experiment (indulge me!). Imagine I want to take an image of a star and although I know I cannot get more than a certain amount of detail, I want to see what an Airy disc looks like. How many pixels will I need to accurately image an Airy disc? Certainly more than two! Indeed I can improve the accuracy of the image the more pixels I use.

Although this is only a hypothetical example, it does serve to show that pixels half the size of a theoretically perfect Airy disc is not the limit of detail possible.
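Here is a rough numerical sketch of that thought experiment (entirely assumed values: ~550 nm light, f/2.8, a 1-D slice, and pixels that simply box-average the light falling on them); the recorded profile gets closer to the true Airy profile as the pixel pitch shrinks:

```python
import numpy as np
from scipy.special import j1

# Thought-experiment sketch: how faithfully do pixels of various sizes record a
# single Airy pattern?  Assumed: ~550 nm light, f/2.8, a 1-D slice.
wavelength_um, f_number = 0.55, 2.8
airy_radius = 1.22 * wavelength_um * f_number          # ~1.88 um

def airy(x_um):
    v = np.pi * np.abs(x_um) / (wavelength_um * f_number)
    v = np.where(v < 1e-9, 1e-9, v)
    return (2.0 * j1(v) / v) ** 2

x = np.linspace(-3 * airy_radius, 3 * airy_radius, 6001)   # fine reference grid
truth = airy(x)

for pixels_per_disc in (2, 4, 8, 16):
    pitch = 2 * airy_radius / pixels_per_disc              # pixel pitch, um
    centers = np.arange(-3 * airy_radius + pitch / 2, 3 * airy_radius, pitch)
    sub = np.linspace(-pitch / 2, pitch / 2, 51)           # sub-samples inside one pixel
    recorded = np.array([airy(c + sub).mean() for c in centers])
    # rebuild a staircase profile on the fine grid and compare it with the truth
    idx = np.clip(((x + 3 * airy_radius) // pitch).astype(int), 0, len(centers) - 1)
    rms_err = np.sqrt(np.mean((recorded[idx] - truth) ** 2))
    print(f"{pixels_per_disc:2d} pixels per disc diameter -> RMS error {rms_err:.3f}")
```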
 
You're right. But if you were to calculate it yourself, or maybe just read what I wrote, you would see that the pixel is half the size of the Airy Disk. At that point you cannot get any more detail.

You can "think about it..." all you like...science say you can't get any more detail.

.
Thinking about things, experimenting, and drawing conclusions is what science is all about. Spouting things others say on "authority" is merely dogma.

Let us conduct a thought experiment (indulge me!). Imagine I want to take an image of a star and although I know I cannot get more than a certain amount of detail, I want to see what an Airy disc looks like. How many pixels will I need to accurately image an Airy disc? Certainly more than two! Indeed I can improve the accuracy of the image the more pixels I use.
Wouldn't it be easier to just reduce the aperture, if all you wanted was to photograph the diffraction artifacts?
Although this is only a hypothetical example, it does serve to show that pixels half the size of a theoretically perfect Airy disc is not the limit of detail possible.
Not really, since the "detail" in question doesn't come from the scene, but from the lens.
 
Thinking about things, experimenting, and drawing conclusions is what science is all about. Spouting things others say on "authority" is merely dogma.

Let us conduct a thought experiment (indulge me!). Imagine I want to take an image of a star and although I know I cannot get more than a certain amount of detail, I want to see what an Airy disc looks like. How many pixels will I need to accurately image an Airy disc? Certainly more than two! Indeed I can improve the accuracy of the image the more pixels I use.
Wouldn't it be easier to just reduce the aperture, if all you wanted was to photograph the diffraction artifacts?
Well, of course it would, but this is a thought experiment whose purpose is to prove that you can obtain more detail from an image with more pixels than two per Airy disc. It also proves that if you can improve the resolution of the Airy disc you can better resolve an image which is a whole lot of overlapping Airy discs or circles of confusion.
Although this is only a hypothetical example, it does serve to show that pixels half the size of a theoretically perfect Airy disc is not the limit of detail possible.
Not really, since the "detail" in question doesn't come from the scene, but from the lens.
All images are produced by lenses even if they originate outside the lens. Even if a lens was theoretically perfect, it would still be diffraction limited and render images as circles of confusion. Add to that optical aberrations which are impossible to eliminate in real world lenses and you can see that all images "come" from the lens and are no more than images created by a whole lot of overlapping "artefacts".

This is nothing new - all lenses render images in this way. When you increase the number of pixels all you are doing is better recording the image formed by the lens. All you are doing by limiting the number of pixels is reducing the sensor's ability to record all the detail a lens is capable of.

Increases in the pixel count always increase resolution even though that increase is not a linear one. With good lenses an increase in pixels means a worthwhile increase in detail, with mediocre lenses less so, but even with poor lenses there is always an improvement in resolution - it is never zero even though it can be negligible.
 
Thinking about things, experimenting, and drawing conclusions is what science is all about. Spouting things others say on "authority" is merely dogma.

Let us conduct a thought experiment (indulge me!). Imagine I want to take an image of a star and although I know I cannot get more than a certain amount of detail, I want to see what an Airy disc looks like. How many pixels will I need to accurately image an Airy disc? Certainly more than two! Indeed I can improve the accuracy of the image the more pixels I use.
Wouldn't it be easier to just reduce the aperture, if all you wanted was to photograph the diffraction artifacts?
Well, of course it would, but this is a thought experiment whose purpose is to prove that you can obtain more detail from an image with more pixels than two per Airy disc. It also proves that if you can improve the resolution of the Airy disc you can better resolve an image which is a whole lot of overlapping Airy discs or circles of confusion.
No, it certainly doesn't prove that, only that you can increase the resolution of the artifacts. You can just as well do that in software, since they don't contain more information about the subject.
Although this is only a hypothetical example, it does serve to show that pixels half the size of a theoretically perfect Airy disc is not the limit of detail possible.
Not really, since the "detail" in question doesn't come from the scene, but from the lens.
All images are produced by lenses even if they originate outside the lens. Even if a lens was theoretically perfect, it would still be diffraction limited and render images as circles of confusion. Add to that optical aberrations which are impossible to eliminate in real world lenses and you can see that all images "come" from the lens and are no more than images created by a whole lot of overlapping "artefacts".

This is nothing new - all lenses render images in this way. When you increase the number of pixels all you are doing is better recording the image formed by the lens. All you are doing by limiting the number of pixels is reducing the sensor's ability to record all the detail a lens is capable of.

Increases in the pixel count always increase resolution even though that increase is not a linear one. With good lenses an increase in pixels means a worthwhile increase in detail, with mediocre lenses less so, but even with poor lenses there is always an improvement in resolution - it is never zero even though it can be negligible.
Except that beyond some point this improvement in resolution isn't of the subject you photograph, in which case it's meaningless to most of us.
 
Well, of course it would, but this is a thought experiment whose purpose is to prove that you can obtain more detail from an image with more pixels than two per Airy disc. It also proves that if you can improve the resolution of the Airy disc you can better resolve an image which is a whole lot of overlapping Airy discs or circles of confusion.
No, it certainly doesn't prove that, only that you can increase the resolution of the artifacts. You can just as well do that in software, since they don't contain more information about the subject.
Information about the subject is formed by the lens; it doesn't exist in a vacuum. Information about the scene is gathered by the lens and projected onto the sensor. The sensor then records this information and is limited by the number of pixels available to it - the more pixels available, the more accurately it can record the image formed by the lens.

Airy discs, theoretically perfect circles of confusion, are not artefacts; they are the elements from which all images are formed - the more perfectly they are captured by the pixels on a sensor, the more detailed the image will be.

Your assertion that Airy discs don't contain more information about the subject and that improving the resolution of an Airy disc is akin to sharpening in software is laughable.
Although this is only a hypothetical example, it does serve to show that pixels half the size of a theoretically perfect Airy disc is not the limit of detail possible.
Not really, since the "detail" in question doesn't come from the scene, but from the lens.
All images are produced by lenses even if they originate outside the lens. Even if a lens was theoretically perfect, it would still be diffraction limited and render images as circles of confusion. Add to that optical aberrations which are impossible to eliminate in real world lenses and you can see that all images "come" from the lens and are no more than images created by a whole lot of overlapping "artefacts".

This is nothing new - all lenses render images in this way. When you increase the number of pixels all you are doing is better recording the image formed by the lens. All you are doing by limiting the number of pixels is reducing the sensor's ability to record all the detail a lens is capable of.

Increases in the pixel count always increase resolution even though that increase is not a linear one. With good lenses an increase in pixels means a worthwhile increase in detail, with mediocre lenses less so, but even with poor lenses there is always an improvement in resolution - it is never zero even though it can be negligible.
Except that beyond some point this improvement in resolution isn't of the subject you photograph, in which case it's meaningless to most of us.
Of course it is! All increases in resolution are of the subject by definition, otherwise it wouldn't be an increase in resolution. There is no "point" where the sensor is resolving something other than the image formed by the lens.

I suggest you read a little about optics and lenses and understand how light works as both a wave and a particle and how it forms an image with and without a lens.
 
It's not only Canon, the best (and more expensive) compacts from Panasonic, Sony, Casio, Ricoh also stop at around 10 megapixels, focusing on quality over quantity.

Yes, some DSLRs offer more pixels, but they also tend to have matching optics so you don't waste all the pixels on blur. The newest high-end DSLR from Nikon, D3S, has 12 megapixels, or 1.4 MP/cm². The newest 14 megapixel compacts have as much as 50 MP/cm², so a comparable DSLR would need to have 430 megapixels before the pixels were as small as on a 14 MP compact camera.
Check out the images from the Canon G10 and G11 at base ISO. The G10 has a far superior image in terms of resolution, and that is because of the 14.7 megapixel sensor, as the lenses are the same.

Have a look at the images and then come back and tell me the G11 has superior IQ - it doesn't!
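As an aside, the pixel-density arithmetic quoted above roughly checks out; here is a sketch in Python (the sensor areas are my own assumptions, not official figures):

```python
# The pixel-density figures quoted above, roughly.  Assumed sensor areas:
# full frame ~8.6 cm^2, a typical 1/2.3" compact ~0.28 cm^2.
d3s_mp, ff_area_cm2 = 12.1, 8.64
compact_mp, compact_area_cm2 = 14.0, 0.28

d3s_density = d3s_mp / ff_area_cm2                       # ~1.4 MP/cm^2
compact_density = compact_mp / compact_area_cm2          # ~50 MP/cm^2
print(f"D3S: {d3s_density:.1f} MP/cm^2, 14 MP compact: {compact_density:.0f} MP/cm^2")
print(f"Full frame at compact pixel density: {compact_density * ff_area_cm2:.0f} MP")   # ~430 MP
```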
 
It's not only Canon, the best (and more expensive) compacts from Panasonic, Sony, Casio, Ricoh also stop at around 10 megapixels, focusing on quality over quantity.

Yes, some DSLRs offer more pixels, but they also tend to have matching optics so you don't waste all the pixels on blur. The newest high-end DSLR from Nikon, D3S, has 12 megapixels, or 1.4 MP/cm². The newest 14 megapixel compacts have as much as 50 MP/cm², so a comparable DSLR would need to have 430 megapixels before the pixels were as small as on a 14 MP compact camera.
Check out the images from the Canon G10 and G11 at base ISO. The G10 has a far superior image in terms of resolution, and that is because of the 14.7 megapixel sensor, as the lenses are the same.

Have a look at the images and then come back and tell me the G11 has superior IQ - it doesn't!
Huh? Of course it doesn't: the G11 doesn't have better low-ISO performance than the G10, and the G10 isn't diffraction limited at F2.8 and has twice the pixel size of most of the 14 MP compacts we're seeing.

It also has a much better lens than all the other 14 MP compacts, which may see resolution issues long before diffraction kicks in, by the way.

A better comparison would be to compare the G10 and G11 at F11 or F16; that would give an indication of how much you can gain from having more than 14 MP in a typical compact.
 
Although this is only a hypothetical example, it does serve to show that pixels half the size of a theoretically perfect Airy disc is not the limit of detail possible.
Well, I can already see there's no point in arguing with you over it. I suggest you read ejmartin's post above...and he IS a scientist.

.
 
Let us conduct a thought experiment (indulge me!). Imagine I want to take an image of a star and although I know I cannot get more than a certain amount of detail, I want to see what an Airy disc looks like. How many pixels will I need to accurately image an Airy disc? Certainly more than two! Indeed I can improve the accuracy of the image the more pixels I use.
You may be interested in this post by Marianne Oelund:

http://forums.dpreview.com/forums/read.asp?forum=1030&message=21952208

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
That's not science. That's a simplistic assumption.
It’s been a part of optical science for a very long time now.
I know it's been part of web folklore for a while. "Science" seems to be correct, until a better model comes along. "Science" is just what people say is science.
Whoever came up with the idea of using the literal size of the airy disk as a threshold metric for pixel size should be put in stockades in the public square.
I don't think anyone suggested that pixels don't need to be smaller than an airy disk. What's been suggested is that pixels don't need to be smaller than half the airy disk, and at that point all contrast is lost.
I didn't say 1:1; it's your 2:1 that I am challenging. I am simply suspicious of brick walls, simple ratios, etc. They sell because they are easy to digest, even if they aren't true.
Most of the light is in the center of the cones; very little in the rings. The rings have only a mild effect; only the center of the cone has a large effect.
The size of an airy disk is defined by the location of its first minimum. Therefore, the cone IS the only thing that's being considered.
Well, I've seen people considering the rings, too.
A pixel that's half the size of an airy disk gets half the cone. That's why all contrast is lost and you can't get any more detail.
Where do you get the idea that your metric here means anything? WTH is "gets half the cone"? Even IF we are photographing cones, it takes many, many pixels to accurately render one. If we're looking at two overlapping cones from two close point light sources, and want to know how far apart they are, pixelation is not going to help.

That's not how things work. The cones are pointy, not hollow, so they cannot possibly decimate any reasonable frequency, relative to the size of the cone point. We're talking several pixels per disk before contrast becomes so low that it is undetectable.
Resolving airy disks is not a part of normal photography. You'd have to photograph point light sources with a perfect lens in a vacuum to end up with an airy disk at real world lens apertures. In normal photography, there are no airy disks in the image; the airy disk is simply a probability map of photon displacement (in addition to all the other sources of displacement).
There are images in the Canon Talk forum showing diffraction patterns in (from one example) the bright lights of a bridge in a night scene.
I'm not aware of the example, but are you sure it isn't a combination of things like flare, bokeh, and diffraction? If you see it in the image, it is at a scale far from the micro-contrast losses we are discussing.
And how in the world can you say that resolving airy disks isn't a part of normal photography, when that's the very reason for images getting softer and losing detail as the aperture is closed? I don't understand how you could possibly come to such a conclusion.
I already explained that. The airy disk is nothing more than a displacement probability map, in a normal photograph. Light is coming from many locations; by the time you have your exposure, each virtual point source in the subject has given few or no photons; there is no build-up of photons from a point source to form an airy disk. It takes MANY, MANY photons from a single point source with no other distortions to get an airy disk. The airy disk is not the shape of a single photon, but rather, the cumulative effect of many photons from the same point source.
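If it helps, here is a rough Monte Carlo sketch of that "probability map" view (assumed values: ~550 nm light, f/2.8, pattern truncated at 3 Airy radii): a handful of photons from a point source is just scatter, and only a very large number of them builds up the familiar disk and rings:

```python
import numpy as np
from scipy.special import j1

# Draw individual photon positions from the Airy intensity pattern by
# rejection sampling.  Assumed: ~550 nm light, f/2.8, truncated at 3 radii.
rng = np.random.default_rng(0)
wavelength_um, f_number = 0.55, 2.8
airy_radius = 1.22 * wavelength_um * f_number

def airy(r_um):
    v = np.pi * r_um / (wavelength_um * f_number)
    v = np.where(v < 1e-9, 1e-9, v)
    return (2.0 * j1(v) / v) ** 2          # peak intensity is 1 at the center

def draw_photons(n):
    """Rejection-sample n photon positions from the (truncated) Airy pattern."""
    pts = []
    while len(pts) < n:
        xy = rng.uniform(-3 * airy_radius, 3 * airy_radius, size=(4 * n, 2))
        r = np.hypot(xy[:, 0], xy[:, 1])
        keep = rng.uniform(0, 1, 4 * n) < airy(r)
        pts.extend(xy[keep].tolist())
    return np.array(pts[:n])

for n in (10, 100, 100_000):
    pts = draw_photons(n)
    outside_disk = np.mean(np.hypot(pts[:, 0], pts[:, 1]) > airy_radius)
    print(f"{n:7d} photons: fraction landing outside the central disk = {outside_disk:.2f}")
```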

--
John

 
I guess if you want to increase the number of pixels by an order of magnitude just so you can get a better view of airy disks...well, to each his own.

But you're not going to get more detail out of the scene. Detail from the scene with an angular separation equal to or less than the airy disk radius simply cannot be resolved, no matter how many pixels you have.

.
 
I guess if you want to increase the number of pixels by an order of magnitude just so you can get a better view of airy disks...well, to each his own.

But you're not going to get more detail out of the scene. Detail from the scene with an angular separation equal to or less than the airy disk radius simply cannot be resolved, no matter how many pixels you have.

.
In order to resolve two lines at the Rayleigh criterion spacing you also need a pixel between the lines. The minimum monochrome sensor spacing that recovers detail is therefore half the Airy disc radius. This is most easily understood by considering the MTF due to diffraction and applying the Nyquist sampling criterion to the frequency at the 9% MTF point, which results in a monochrome pixel spacing of (1.22 * lambda * f-stop)/2; for green light in the center of the optical spectrum that is a pixel spacing of about f-stop/3. For a Bayer CFA, though, only half the pixels yield unambiguous luminance information, so the spacing for a color sensor needs to be decreased by a factor of 0.707, which results in a minimum color pixel pitch for recorded information of about 0.35 of an Airy disc radius.

Here is a reference for the diffraction MTF of circular and rectangular apertures:

http://www.cvimellesgriot.com/products/Documents/TechnicalGuide/Modulation_Transfer_Function.pdf

Note that the plots show spatial frequencies normalized to the diffraction cutoff frequency Uic, which in air is equal to 1/(lambda * f-stop), or 1.22/rd where rd is the Airy disc radius. The Nyquist limit pixel spacing is 1/2 of the spatial wavelength (the wavelength is the reciprocal of the spatial frequency).
 
DSPographer wrote:
The formulas I gave in the previous posts
This is most easily understood by considering the MTF due to diffraction and applying the Nyquist sampling criteria to the frequency at the 9% MTF point which results in a monochrome pixel spacing of (1.22 * lambda * f-stop/2) which for green light in the center of the optical spectrum is a pixel spacing of f-stop/3. For a Bayer CFA only half the pixels yield unambiguous luminance information though so the spacing for a color sensor needs to be decreased by a factor of .707 which results in a minimum color pixel pitch for recorded information of about .35 of an Airy disc radius.
are for pixel spacings in microns. So fully resolving an f/2.8 diffraction-limited lens requires a monochrome pixel spacing of 2.8/3, or 0.93 microns, and a Bayer array pixel spacing of 0.35 * (1.22 * 0.55 * 2.8), which is 0.66 microns. To sample to an MTF of zero, which is the diffraction cutoff, these spacings need to be divided by 1.22, which yields a color pixel spacing of 0.54 microns.
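For anyone following along, here is a small sketch that reproduces those numbers (assuming 550 nm green light; the exact formula gives 0.94 um where the f-stop/3 shortcut gives 0.93):

```python
# Quick check of the pixel-pitch numbers above (assuming 550 nm green light).
wavelength_um = 0.55
f_number = 2.8

mono_pitch = 1.22 * wavelength_um * f_number / 2   # Nyquist at the ~9% MTF (Rayleigh) point
bayer_pitch = mono_pitch * 0.707                   # Bayer CFA penalty of ~1/sqrt(2)
cutoff_pitch = bayer_pitch / 1.22                  # sampling all the way to MTF = 0

print(f"Monochrome pitch at f/{f_number}: {mono_pitch:.2f} um")   # ~0.94 um
print(f"Bayer CFA pitch:                 {bayer_pitch:.2f} um")   # ~0.66 um
print(f"Bayer pitch to the MTF-0 cutoff: {cutoff_pitch:.2f} um")  # ~0.54 um
```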
 
That's very interesting, but I think your primary premise is incorrect. What you're doing is simply increasing pixel resolution to see the diffraction pattern.

For the confused, just look at the image on this page...specifically, the center image.
http://hyperphysics.phy-astr.gsu.edu/hbase/phyopt/raylei.html

What's being referred to is the slight dip between the two Airy disks. If you imagine pixels at the bottom of the image, and the points and dips representing light levels, you'd see that you need at least a pixel at the first point, one at the center dip, and one at the second point, to capture all the "detail".

The problem is that the center dip isn't image detail...only the points are. The center dip appears as the distance between the real detail points diminishes. It is an effect of the merging of two Airy disks, and capturing it does not represent a tangible increase in the amount of detail captured from the scene. So reducing pixel size to 1/2 the radius of the Airy disk doesn't get you anything.
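Here is a minimal numeric sketch of that geometry (assumed values: ~550 nm light, f/2.8, a 1-D slice through both points), showing the peak-dip-peak profile the three pixel positions would see:

```python
import numpy as np
from scipy.special import j1

# Two points at the Rayleigh separation and the three sample positions
# described above: left peak, midpoint dip, right peak.  Assumed: 550 nm, f/2.8.
wavelength_um, f_number = 0.55, 2.8
sep = 1.22 * wavelength_um * f_number              # Rayleigh separation, ~1.88 um

def airy(x_um):
    v = np.pi * np.abs(x_um) / (wavelength_um * f_number)
    v = np.where(v < 1e-9, 1e-9, v)
    return (2.0 * j1(v) / v) ** 2

def profile(x_um):                                  # two incoherent point sources
    return airy(x_um - sep / 2) + airy(x_um + sep / 2)

left, mid, right = profile(np.array([-sep / 2, 0.0, sep / 2]))
print(f"peak {left:.2f}  dip {mid:.2f}  peak {right:.2f}")   # dip ~27% below the peaks
# A pixel pitch of sep/2 puts one pixel on each peak and one on the dip;
# a pitch of sep or more averages the dip together with the peaks.
```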

But not to worry...I'm sure ejmartin will jump in and straighten us all out :P

.
 
But not to worry...I'm sure ejmartin will jump in and straighten us all out :P
And until he does, here's some interesting reading on the subject:
  • Diffraction Limited Photography: Pixel Size, Aperture and Airy Disks
http://www.cambridgeincolour.com/tutorials/diffraction-photography.htm
  • Do Sensors “Outresolve” Lenses?
http://luminous-landscape.com/tutorials/resolution.shtml
  • A real-world example:
http://diglloyd.com/articles/Diffraction/Diffraction-example-1DsM3.html
 
The Canon G10 was 15.1 MP; then Canon decided to use a different Sony sensor in the G11 that has better high ISO by 1.5 stops but is just 10 MP.
 
