Will a compact 15-megapixel sensor come out anytime soon?

Image quality from the Canon G11 is decidedly inferior to the G10
That is a matter of opinion. Very few who use the cameras being discussed will ever notice the difference in pixel count because so few people make prints - which is the main reason you need so many pixels! And you'd have to get your eye pretty close to the print to notice the difference between a 10MP and a 15MP print, even at 11x14 or 16x20 size!
 
Check out the images from the Canon G10 and G11 at base ISO. The G10 has a far superior image in terms of resolution, and it is because of the 14.7-megapixel sensor, as the lenses are the same.
You are confusing terms. IQ and resolution are separate characteristics. High resolution is not equal to great IQ! I looked at the G11 review and to me the IQ of these two cameras is equal up to ISO 400; the G11's images are just smaller than the G10's.

The G11 has noticeably better IQ at ISO 800 and vastly better IQ at ISO 3200 than the G10.

Remember, you only need 10+ MP if you are going to print 16x20 or larger, and even then most people will have a hard time telling which print is from 15MP and which is from 10MP.
 
That's very interesting, but I think your primary premise is incorrect. What you're doing is simply increasing pixel resolution to see the diffraction pattern.

For the confused, just look at the image on this page...specifically, the center image.
http://hyperphysics.phy-astr.gsu.edu/hbase/phyopt/raylei.html

What's being referred to is the slight dip between the two Airy disks. If you imagine pixels at the bottom of the image, and the points and dips representing light levels, you'd see that you need at least a pixel at the first point, one at the center dip, and one at the second point, to capture all the "detail".

The problem is that the center dip isn't image detail...only the points are. The center dip appears as the distance between the real detail points diminishes. It is an effect of the merging of two Airy disks, and capturing it does not represent a tangible increase in the amount of detail captured from the scene. So reducing pixel size to 1/2 the radius of the Airy disk doesn't get you anything.

But not to worry...I'm sure ejmartin will jump in and straighten us all out :P
Actually the data between the two lines represents one cycle from high to low and back to high in intensity. The Nyquist criterion states that we need two samples per cycle, so my statement that we need a sample at the low point between the bright points is correct. Without the extra point it would not be possible to distinguish the presence of two separate bright lines from one large continuous bright area. This ambiguity is the aliasing of the high-frequency sine wave signal near the Nyquist limit to a low-frequency, near-constant-level signal. If you have a background in digital signal processing (as I have) then it is simple to consider the frequency-domain representation of the diffraction MTF and combine this with Shannon/Nyquist sampling theory to make the sampling requirement obvious.
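A minimal numpy sketch of that sampling argument (my illustration, not the poster's; the cosine is a stand-in for the intensity profile across two Rayleigh-spaced lines): with two samples per cycle the bright/dark structure survives, while with one sample per cycle the same signal reads back as one continuous bright level.

```python
import numpy as np

n = np.arange(8)                       # pixel indices

two_per_cycle = np.cos(np.pi * n)      # 2 samples per cycle: +1, -1, +1, ...
one_per_cycle = np.cos(2 * np.pi * n)  # 1 sample per cycle: constant +1

print(two_per_cycle[:4])  # [ 1. -1.  1. -1.] -> two bright lines still distinguishable
print(one_per_cycle[:4])  # [ 1.  1.  1.  1.] -> aliased to one continuous bright area
```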
 
This is exactly what I was talking about. At apertures of f/8 and smaller, there are more than enough pixels to capture the detail...the detail simply isn't there to be captured. Those pixels are capturing nothing but diffraction patterns.

With a 6.4um pixel size, I think a perfect lens should have stopped down to f/9.5 before the Airy disk became twice as large as a pixel. Detail should not have been lost before then, but it was, suggesting that the lens, as good as it is, isn't perfect.
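A quick back-of-envelope check of that f/9.5 figure (my arithmetic, assuming ~550 nm green light; the post does not state a wavelength):

```python
# Solve for the f-number at which the Airy disk diameter (about
# 2.44 * wavelength * N) spans two 6.4 um pixels.
wavelength_um = 0.55   # mid-visible wavelength in micrometres (assumed)
pixel_um = 6.4         # pixel pitch quoted above

f_number = 2 * pixel_um / (2.44 * wavelength_um)
print(f"Airy diameter = 2 pixels at about f/{f_number:.1f}")  # ~ f/9.5
```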

So, since we're limited by our lenses, I guess none of this matters :P

.
 
The luminous-landscape diffraction article is not too bad but it does contain some significant errors. For instance this statement:

"The Rayleigh criterion – based on human visual acuity – isn’t adequate for estimating the resolving power of a lens that projects images on a sensor. The sensor needs more contrast and separation between Airy disks than the human eye. Foveal cones aren’t like pixels."

This statement is presented without justification and is in general wrong. The problem is that it ignores the possibility of sharpening being used on the digital image to boost the contrast of high frequencies. The minimum acceptable MTF is then seen to be determined not by some made-up sensor MTF requirement but by the point at which the signal-to-noise ratio after sharpening would be unacceptable. For low-ISO exposures, a 6x contrast boost of high frequencies may be used to recover details at the Rayleigh limit to a contrast value > 50%, where they can be easily seen. In that case the monochrome pixel spacing needed is 1/2 the Airy disc radius. For high-ISO images, though, no sharpening may be acceptable, and then the table values in the article, with a monochrome pixel spacing of 1/2 the diameter, would be correct.
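A sketch of that arithmetic (my illustration), using the standard MTF of an ideal diffraction-limited circular aperture in incoherent light: the raw contrast at the Rayleigh-limit frequency is under 10%, so a 6x boost lifts it past 50%.

```python
import math

def diffraction_mtf(v):
    """MTF of an ideal circular aperture; v = frequency / cutoff frequency."""
    if v >= 1.0:
        return 0.0
    return (2 / math.pi) * (math.acos(v) - v * math.sqrt(1 - v * v))

v_rayleigh = 1 / 1.22   # Rayleigh spacing corresponds to 1/1.22 of the cutoff
mtf = diffraction_mtf(v_rayleigh)
print(f"MTF at the Rayleigh limit: {mtf:.1%}")      # about 9%
print(f"After a 6x boost:          {6 * mtf:.1%}")  # about 53%, i.e. > 50%
```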
 
It is important to distinguish between the point at which diffraction effects first become significant and the point at which all further details are extinguished (the diffraction cutoff). The first of these is traditionally listed as the diffraction limit, but in fact details extend beyond that point, at reduced contrast. The second point, where we are beyond two samples per Rayleigh-criterion spacing, represents the true limit beyond which no further information is extracted.
 
It is important to distinguish between the point at which diffraction effects first become significant and the point at which all further details are extinguished (the diffraction cutoff). The first of these is traditionally listed as the diffraction limit, but in fact details extend beyond that point, at reduced contrast. The second point, where we are beyond two samples per Rayleigh-criterion spacing, represents the true limit beyond which no further information is extracted.
Precisely. And most discussion on the web conflates these two, treating the point at which MTF50 (w/o sharpening) is reached as if it were the extinction point according to the Rayleigh criterion (which IIRC is close to but not quite the point where MTF drops to zero, due to the little dip between the centroids of the diffraction spots). I suspect that this is the source of much of the confusion.
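To put rough numbers on how far apart those two points sit (my sketch, using the same ideal circular-aperture MTF as above): the 50% contrast point falls at only about 40% of the diffraction cutoff frequency.

```python
import math

def diffraction_mtf(v):
    # Ideal circular-aperture MTF; v = frequency / cutoff, monotone decreasing.
    return (2 / math.pi) * (math.acos(v) - v * math.sqrt(1 - v * v))

# Bisection for the frequency where the MTF crosses 50%.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if diffraction_mtf(mid) > 0.5 else (lo, mid)

print(f"MTF50 at {lo:.2f} of the cutoff; extinction at the cutoff itself")  # ~0.40
```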

--
emil
--
http://theory.uchicago.edu/~ejm/pix/20d/
 
Actually the data between the two lines represents one cycle from high to low and back to high in intensity.
No it doesn't. That’s what you’re missing.

The transition from the maximum to the minimum of an Airy disk does NOT represent any such transition at the source. It is not data. It is there as a consequence of light passing through an aperture.

.
 
Check out the images from the Canon G10 and G11 at base ISO. The G10 has a far superior image in terms of resolution, and it is because of the 14.7-megapixel sensor, as the lenses are the same.
You are confusing terms. IQ and resolution are separate characteristics. High resolution is not equal to great IQ! I looked at the G11 review and to me the IQ of these two cameras is equal up to ISO 400; the G11's images are just smaller than the G10's.

The G11 has noticeably better IQ at ISO 800 and vastly better IQ at ISO 3200 than the G10.

Remember, you only need 10+ MP if you are going to print 16x20 or larger, and even then most people will have a hard time telling which print is from 15MP and which is from 10MP.
Although resolution and IQ are different terms, resolution is the major component of IQ, the others being noise and dynamic range - at least as far as sensors are concerned. IQ is not just someone's opinion! It can be quantified as far as detail, resolution, noise and DR are concerned.

The G10 image is not just larger, it is also higher resolution and provides more detail at lower ISOs. The actual reason for the image being larger is that both images are 100% crops. If you can't see the detail in the G11 image, increasing its size to match that of the G10 will NOT bring more detail up; it will only make the blurrier image bigger.

The G10 has noticeably more detail, and therefore better IQ, at lower ISOs than the G11, precisely because it has more megapixels. The G11 is of course better at high ISO, and for people who expect to do a lot of photography at higher ISOs it is a better solution than the G10. However to say, as you do, that people will have a hard time telling images from the two cameras apart not only belies what you said about the G11 being superior, it is beside the point! The G11 is a backward step in terms of resolution, and therefore a backward step as far as IQ is concerned (high ISOs excepted).

It is time we stopped "pixel bashing" because of some sort of inverted snobbery and a belief that we know better than the "advertising people". More megapixels equals better IQ.
 
Actually the data between the two lines represents one cycle from high to low and back to high in intensity.
No it doesn't. That’s what you’re missing.
Yes, it does as I explain below.
The transition from the maximum to the minimum of an Airy disk does NOT represent any such transition at the source. It is not data. It is there as a consequence of light passing through an aperture.
The signal from the source represents an impulse at each point. If we have a series of points spaced at the Rayleigh-criterion spacing, then the Fourier transform of the signal is a series of harmonics with the fundamental wavelength equal to the spacing between two adjacent points. If the source points are true impulses then the spatial frequency spectrum, before we introduce the aperture, would extend as a harmonic set to infinity. The diffraction of the aperture causes this spectrum to be truncated at the diffraction cutoff. (The spatial frequency content after the aperture is the product of the source spectrum and the lens' MTF.)

Note that, as a linear system, the lens MTF cannot introduce frequencies that are not present in the signal before the lens. Referenced to the energy available across the aperture for a given frequency, the lens MTF can only reduce the magnitude that is already present from the source.
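A rough numerical illustration of that frequency-domain picture (my construction; the spacing and cutoff values are arbitrary): an impulse train carries harmonics out to arbitrarily high frequency, and multiplying its spectrum by a diffraction-style MTF truncates everything beyond the cutoff without creating anything new.

```python
import numpy as np

N = 1024
scene = np.zeros(N)
scene[::64] = 1.0                 # point sources at a fixed spacing

spectrum = np.fft.rfft(scene)
freqs = np.fft.rfftfreq(N)        # spatial frequency in cycles per sample

cutoff = 0.1                      # stand-in for the diffraction cutoff
v = np.clip(freqs / cutoff, 0.0, 1.0)
mtf = (2 / np.pi) * (np.arccos(v) - v * np.sqrt(1 - v**2))  # ideal circular MTF

after_lens = spectrum * mtf       # product of source spectrum and lens MTF
print("harmonics in the scene  :", np.count_nonzero(np.abs(spectrum) > 1e-9))
print("harmonics after the lens:", np.count_nonzero(np.abs(after_lens) > 1e-9))
```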
 
Image quality from the Canon G11 is decidedly inferior to the G10
That is a matter of opinion. Very few who use the cameras being discussed will ever notice the difference in pixel count because so few people make prints - which is the main reason you need so many pixels! And you'd have to get your eye pretty close to the print to notice the difference between a 10MP and a 15MP print, even at 11x14 or 16x20 size!
Whether one is satisfied with low-resolution images and doesn't care about image quality, or cannot discern the difference, is NOT the point! The G10 has far better resolution at low ISO, and this is not a matter of "opinion" but a matter of fact.

Indeed, I expect photographers who buy the G series of cameras are concerned about image quality, saving RAW files, and quality lenses and will be very concerned if their images are inferior.

If you want to pay more for inferior images, go ahead.
 
Actually the data between the two lines represents one cycle from high to low and back to high in intensity.
The transition from the maximum to the minimum of an Airy disk does NOT represent any such transition at the source. It is not data. It is there as a consequence of light passing through an aperture.
The transition from central maximum to first minimum of the Airy diffraction pattern represents the intensity of a point source after it has passed through a circular aperture. It is the data arriving at the sensor from the source. When two such sources approach one another in angular separation as seen by the camera, the two diffraction patterns start to overlap, and eventually cease to be distinct when there is no "dip in the middle" between the two primary maxima. The separation of the two maxima is the source detail one wants to resolve; the decreasing difference between the depth of the dip and the height of the central maxima is a decreasing MTF due to diffraction.

MTF is measured on line pairs, and so MTF50 arises when the dip is at half the peak height, and extinction is when the dip is at the same height as the two surrounding maxima, i.e. the patterns merge. But the MTF is certainly measured with respect to the contrast between pixels measuring the dip between the maxima and pixels measuring the maxima. If the pixels are so large that one is only measuring the two maxima, smearing over the dip in between, then the resolution is limited by the pixel size and not by diffraction effects.
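A numeric version of this description (my sketch; positions are in the usual dimensionless Airy units, so the separations are illustrative): two overlapping Airy profiles, with the midpoint dip tracked as the separation shrinks below the Rayleigh spacing.

```python
import numpy as np
from scipy.special import j1

def airy(x):
    """Airy pattern intensity; the first zero of J1 is at x = 3.832."""
    x = np.where(x == 0, 1e-12, x)     # avoid 0/0 at the centre
    return (2 * j1(x) / x) ** 2

rayleigh = 3.832                       # Rayleigh separation in these units
x = np.linspace(-2 * rayleigh, 2 * rayleigh, 4001)

for sep in (1.0 * rayleigh, 0.8 * rayleigh, 0.6 * rayleigh):
    profile = airy(x - sep / 2) + airy(x + sep / 2)
    dip = profile[len(x) // 2]         # intensity midway between the two sources
    print(f"sep = {sep / rayleigh:.1f} Rayleigh: dip/peak = {dip / profile.max():.2f}")
# At the Rayleigh separation the dip sits near 0.73 of the peaks; as the
# separation shrinks further, the dip fills in and the two maxima merge.
```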

--
emil
--
http://theory.uchicago.edu/~ejm/pix/20d/
 
Actually the data between the two lines represents one cycle from high to low and back to high in intensity.
No it doesn't. That’s what you’re missing.

The transition from the maximum to the minimum of an Airy disk does NOT represent any such transition at the source. It is not data. It is there as a consequence of light passing through an aperture.
Sorry...misspoke. Meant to say "It is not detail."

.
 
I own both the G10 & G11. I took a family group photo at Thanksgiving with both cameras. (It's hard getting so many people to sit still for all that.)

Settings on both cameras were exactly the same, taken within 10 minutes of each other.

Printed both off as 13x19 prints. Both were beautiful, but the edge had to go to the G11. It was noticeable. But I love both cameras, and will keep both.

G
 
Image quality from the Canon G11 is decidedly inferior to the G10
That is a matter of opinion. Very few who use the cameras being discussed will ever notice the difference in pixel count because so few people make prints - which is the main reason you need so many pixels! And you'd have to get your eye pretty close to the print to notice the difference between a 10MP and a 15MP print, even at 11x14 or 16x20 size!
Whether one is satisfied with low-resolution images and doesn't care about image quality, or cannot discern the difference, is NOT the point! The G10 has far better resolution at low ISO, and this is not a matter of "opinion" but a matter of fact.
Sure, but as RedFox88 points out, the resolution advantage of the G10 is borderline invisible if you print at typical sizes, while the noise advantage of the G11 can be clearly visible for the same prints. I understand why many people would pay for this actual real-life advantage in image quality.
 
The transition from central maximum to first minimum of the Airy diffraction pattern represents the intensity of a point source after it has passed through a circular aperture. It is the data arriving at the sensor from the source.
Yes, but it's not detail. It does not represent contrast that exists in the scene.
When two such sources approach one another in angular separation as seen by the camera, the two diffraction patterns start to overlap, and eventually cease to be distinct when there is no "dip in the middle" between the two primary maxima.
Ya...
The separation of the two maxima is the source detail one wants to resolve;
Right...
... the decreasing difference between the depth of the dip and the height of the central maxima is a decreasing MTF due to diffraction.
Okay.
MTF is measured on line pairs, and so MTF50 arises when the dip is at half the peak height, and extinction is when the dip is at the same height as the two surrounding maxima, i.e. the patterns merge. But the MTF is certainly measured with respect to the contrast between pixels measuring the dip between the maxima and pixels measuring the maxima. If the pixels are so large that one is only measuring the two maxima, smearing over the dip in between, then the resolution is limited by the pixel size and not by diffraction effects.
Okay. That doesn't seem to answer anything.

What if we forget about the line pairs for a moment and make everything white...that is, we're taking a capture of a perfectly smooth, uniform white surface.

1. You're going to Airy disks.
2. Those disks will have maximums and minimums.
3. Two disks separated by the Rayleigh Criterion will have a dip.

So is that dip real detail? If you have pixels small enough to capture it, are you really capturing more detail from the scene?

.
 
ejmartin wrote:
[snip]
What if we forget about the line pairs for a moment and make everything white...that is, we're taking a capture of a perfectly smooth, uniform white surface.

1. You're going to Airy disks.
No, the Airy pattern is the image of a point source taken through an imaging device with a finite circular aperture. A uniform surface will have no Airy pattern; the Airy pattern is a blurring of a point source, and a blurring of a uniform-tonality source will be of uniform tonality.
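A one-line sanity check of this point (my sketch; the kernel is an arbitrary normalised blur, not a real Airy PSF): convolving a uniform field with any normalised point-spread function returns the same uniform field.

```python
import numpy as np

field = np.ones(100)                    # perfectly smooth, uniform white surface
psf = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
psf /= psf.sum()                        # normalised (energy-preserving) blur

blurred = np.convolve(field, psf, mode="same")
print(blurred[2:-2].min(), blurred[2:-2].max())  # 1.0 1.0 away from the edges
```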

Since the rest of your post is predicated on point 1, it doesn't really apply.

--
emil
--
http://theory.uchicago.edu/~ejm/pix/20d/
 
What if we forget about the line pairs for a moment and make everything white...that is, we're taking a capture of a perfectly smooth, uniform white surface.

1. You're going to Airy disks.
I assume you meant "you're going to have airy disks". That would not be true at all. You only get airy disks when there are points of light brighter than the background. In a smooth white surface, airy disks are only probability maps for photon displacement. They're only histograms of spatial displacement.

--
John

 
