Why is 4/3 "stuck" to 16mp when Sony 1" sensor is 20mp?

The 4/3s cameras have much better video than anything Sony has released to date.
If the rumours are true and Sony puts out a full frame A7s 12mp camera with 4K video...



I prefer my Sony A7 for video over my M4/3 GX7. Both are recent cameras with nice video...both have issues but the A7 is easier to use and (to me) gives better results in low light for live music video.
 
The theoretical best-case difference in linear resolution between 20MP and 16MP is just 12%. Just 1/8th more resolution.
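That figure is just the square root of the pixel-count ratio, since linear resolution scales with the square root of the total pixel count. A quick check, sketched here in Python:

```python
import math

mp_small, mp_large = 16e6, 20e6  # total pixel counts

# Linear resolution (pixels across the frame) scales with the square root
# of the total pixel count, so the gain is sqrt(20/16), not 20/16.
linear_gain = math.sqrt(mp_large / mp_small)

print(f"Linear resolution gain: {linear_gain:.3f}x "
      f"(about {(linear_gain - 1) * 100:.0f}% more, i.e. roughly 1/8th)")
# -> Linear resolution gain: 1.118x (about 12% more, i.e. roughly 1/8th)
```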

That's not much at all.
Yes agreed.
In terms of image resolution I doubt if many people could tell the difference between 16 & 20 MP.
However, the noise performance between 4/3 & Type 1 will almost certainly be different.
 
Exactly. Totally agree with you on that. This will happen more and more as we increase the resolution of bigger sensors. There will be some (maybe flat) peak with two downhill slopes - one from diffraction, the other from lens construction (size, DoF etc.) - on every camera in later generations.
 

--
Why does he do it?
You can see the issue by looking at the performance of a lens like the Zeiss Otus on the D800; that is a lens that offers great performance even at large apertures. According to DxO the maximum sharpness isn't reached at f/8, or even f/5.6, but way down at f/2.8.

That rather agrees with this...

http://www.cambridgeincolour.com/tutorials/diffraction-photography.htm
It rather agrees with general diffraction theory, which says the wider the aperture, the less the diffraction blur, if a lens is good enough to be diffraction limited (that is, its resolution is essentially defined by diffraction). It has nothing to do with McHugh's ludicrous pixel/diffraction theory.
The D800's pixel size is smaller than the Airy disks formed by around f/3.5; the RX100, on the other hand, starts to be affected by around f/1.8.
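Those f-numbers roughly follow from the standard Airy disk diameter, d ≈ 2.44·λ·N, set equal to the pixel pitch. A rough sketch (assuming green light at about 550 nm and the approximate published pixel pitches):

```python
# Rough estimate of the f-number at which the Airy disk diameter
# (d ~ 2.44 * wavelength * N) matches the pixel pitch.
WAVELENGTH_UM = 0.55  # green light, in micrometres

def f_number_matching_pitch(pitch_um: float) -> float:
    """f-number whose Airy disk diameter equals the given pixel pitch."""
    return pitch_um / (2.44 * WAVELENGTH_UM)

for camera, pitch_um in [("D800 (~4.9 um pitch)", 4.9),
                         ("RX100 (~2.4 um pitch)", 2.4)]:
    print(f"{camera}: Airy disk matches pixel pitch near "
          f"f/{f_number_matching_pitch(pitch_um):.1f}")
# -> D800: ~f/3.7, RX100: ~f/1.8 - close to the figures quoted above
```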
And yet, the pixel size makes no difference to the f-number where the maximum sharpness is reached at all.
Nothing "ludicrous" there for me, smaller pixels will be effected more by airy disks of a similar size.

Just to be clear I'm not saying that more pixels will mean more negative effects of diffraction at the same aperture on a sensor of the same size, only that the same number of pixels on a smaller sensor will mean more negative effects at the same aperture.
That's not to say the effects are very large at that stage, but it shows you the difference: 2 stops earlier for the RX100. So the kind of obvious drop-off you get at f/11 on the D800 is going to happen at f/5.6 on the RX100, only just above the max aperture at the long end, where the lens still isn't performing well at all.
Here's a thing to consider. The crop factor between the D800 and RX100 is 2.7. That is, to look at them the same size, you need to enlarge the RX100 2.7 times more. So, to get the same diffraction blur in the image that you view, you need 2.7 times less diffraction at the sensor, which means an f-number 2.7 times smaller, so f/11 on the D800 looks like f/4 on the RX100 whatever the relative pixel size. Diffraction just has nothing to do with pixel size.
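To put a number on that (a quick sketch, assuming the ~2.7x crop factor between the two formats):

```python
# For equal diffraction blur in the final, same-size image, the f-number
# scales with the crop factor between the two sensor formats.
CROP_FACTOR = 2.7  # full-frame D800 vs 1"-type RX100 (approximate)

def equivalent_f_number(f_number_larger: float, crop: float = CROP_FACTOR) -> float:
    """f-number on the smaller sensor giving the same blur in the viewed image."""
    return f_number_larger / crop

print(f"f/11 on the D800 looks like f/{equivalent_f_number(11):.1f} on the RX100")
# -> roughly f/4, whatever the pixel counts of the two sensors
```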
In order to get decent border performance at the long end of the RX100 you're having to stop down to f/8; that's like having to stop the D800 down to f/13.
8 x 2.7 = 22, not 13.

--
Bob
My comparison was based on maximising the sensor's performance; the D800 having more pixels will obviously take away some of the crop factor.
 
Exactly. Totally agree with you on that. This will happen more and more as we increase the resolution of bigger sensors. There will be some (maybe flat) peak with two downhill slopes - one from diffraction, the other from lens construction (size, DoF etc.) - on every camera in later generations.

--
Why does he do it?
You can see the issue by looking at the performance of a lens like the Zeiss Otus on the D800; that is a lens that offers great performance even at large apertures. According to DxO the maximum sharpness isn't reached at f/8, or even f/5.6, but way down at f/2.8.

That rather agrees with this...

http://www.cambridgeincolour.com/tutorials/diffraction-photography.htm
It rather agrees with general diffraction theory, which says the wider the aperture, the less the diffraction blur, if a lens is good enough to be diffraction limited (that is, its resolution is essentially defined by diffraction). It has nothing to do with McHugh's ludicrous pixel/diffraction theory.
The D800's pixel size is smaller than the Airy disks formed by around f/3.5; the RX100, on the other hand, starts to be affected by around f/1.8.
And yet, the pixel size makes no difference to the f-number where the maximum sharpness is reached at all.
Nothing "ludicrous" there for me,
Which just exposes a lack of knowledge of diffraction.
smaller pixels will be affected more by Airy disks of a similar size.
Actually, no. That isn't how it works, because the 'Airy disks' don't conveniently line up with pixels.
I don't see any claim that the pixels and Airy disks have to "line up", just that a larger Airy disk will obviously be more likely to have more overlap and thus more negative effect.

Of course you could claim that diffraction really affects every optical system no matter the aperture, but common usage in photography seems more to refer to a certain level of effect.
Just to be clear I'm not saying that more pixels will mean more negative effects of diffraction at the same aperture on a sensor of the same size, only that the same number of pixels on a smaller sensor will mean more negative effects at the same aperture.
It has nothing to do with the number of pixels. The effect you're talking about there is caused because you enlarge the smaller sensor more to view it.
..and a smaller sensor with the same or more pixels than a larger sensor will need to be enlarged more.

As your opening post was specifically talking about megapixels, I don't see the problem with me referring to per pixel sharpness.

Again, that doesn't mean that more pixels will ever make the effect worse, but it may well be that they impact other areas of a camera's/sensor's performance to the extent that manufacturers don't view adding more as a good trade-off.
That's not to say the effects are very large at that stage, but it shows you the difference: 2 stops earlier for the RX100. So the kind of obvious drop-off you get at f/11 on the D800 is going to happen at f/5.6 on the RX100, only just above the max aperture at the long end, where the lens still isn't performing well at all.
Here's a thing to consider. The crop factor between the D800 and RX100 is 2.7. That is, to look at them the same size, you need to enlarge the RX100 2.7 times more. So, to get the same diffraction blur in the image that you view, you need 2.7 times less diffraction at the sensor, which means an f-number 2.7 times smaller, so f/11 on the D800 looks like f/4 on the RX100 whatever the relative pixel size. Diffraction just has nothing to do with pixel size.
In order to get decent border performance at the long end of the RX100 you're having to stop down to f/8; that's like having to stop the D800 down to f/13.
8 x 2.7 = 22, not 13.

--
Bob
My comparison was based on maximising the sensor's performance; the D800 having more pixels will obviously take away some of the crop factor.
No, the number of pixels doesn't affect the crop factor at all.

--
Bob
It does affect a calculation (well, an estimate) that involved the crop factor though. If you're measuring the point at which sharpness on a per pixel level will be affected by diffraction to the same degree, then the D800 having more pixels will mean that a similar level of IQ loss will be happening earlier than merely looking at crop factor would suggest.
 
I've just tried an "A. stupid" approach to this - the same way "lens speed" is used, which is more or less nonsense as a technical way of speaking about aperture. The general audience is assumed to be half-witted, so they'd probably not understand the terms aperture or diffraction, because the general audience does not give a duck about diffraction. They want resolution and speed :-). You buy some lens because of its price, size and THE SPEED, and now you are (sensor-)RESOLUTION LIMITED to that lens. You can buy a 10MP sensor cam for it and enjoy low diffraction effects, you can buy 18MP to get the most out of it, and you have a useless piece of gear (sensor) at 40MP, because there is no more detail to be had. For a compact cam you don't decide any of this, but for good glass on your ILC, where the glass is more expensive than the body, you will be pushed to make that decision later on, until they come up with some new system of gathering light other than the normal "lens".

It really depends on the angle of view and the state of the technology. I believe we are heading that way, not towards the diffraction-limit view.
 
smaller pixels will be affected more by Airy disks of a similar size.
Actually, no. That isn't how it works, because the 'Airy disks' don't conveniently line up with pixels.
I don't see any claim that the pixels and Airy disks have to "line up", just that a larger Airy disk will obviously be more likely to have more overlap and thus more negative effect.
The claim is implicitly there I think
Of course you could claim that diffraction really affects every optical system no matter the aperture, but common usage in photography seems more to refer to a certain level of effect.
He [and others] can claim that indeed. Common usage does not alter physical realities; it can be an inadequate description.
 
I believe that in a future generation of 4/3 cameras, we will see that 16MP figure increased as technology permits. I also believe that the resolution of the Fuji APS-C sensors will be increased again as technology advances. It wasn't that many years ago that a top-of-the-line FF sensor had only 6-8MP.

But the laws of physics, or at least sensor construction, mean that a larger sensor will always have more pixels than a smaller one.
 
But the laws of physics, or at least sensor construction, mean that a larger sensor will always have more pixels than a smaller one.
There's no such requirement in either engineering or physics.

And as someone has pointed out already, the top-of-the-line D4 has fewer pixels than many consumer-grade models. And that choice was well received by the target market (pros) because it was what they wanted (better pixels, not more pixels).
 
MoreorLess wrote:

I don't see any claim that the pixels and Airy disks have to "line up", just that a larger Airy disk will obviously be more likely to have more overlap and thus more negative effect.
Thinking things are 'obvious' when they aren't is a trap of thinking that very often causes these kinds of problems. Calling diffraction a 'negative' effect also displays a predisposition to think of it a certain way - it is just an effect, and what is needed is to understand that effect. So, the implicit flaw in the McHugh view that you seem to subscribe to is that we have a 'you see it or you don't' phenomenon. That is, if an Airy disc is smaller than a pixel, you can't see it - so somehow, if the pixel is smaller, you're going to see more of these Airy discs. It ain't like that at all.

As well as the discs not being lined up, the pixels aren't just on-off, so if we wanted to consider it in terms of Airy discs, we'd have to consider how an Airy disc at every alignment was captured, not just whether we can see it or not. Then we'd have to do a whole load of maths to combine the effect of all those Airy discs in all those different positions and see what happens. The way this ends up is that you take the point spread function for the diffraction (a section through the disc) and the point spread function for the sensor (a box function representing a pixel, and also the AA filter function) and convolve them - and you end up with a combined PSF. Anyway, what you find out is that diffraction is not an on-off phenomenon - there is no definable 'limit' in the sense the word gets used.
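A minimal sketch of that convolution idea, purely to illustrate (the wavelength, f-number and pixel pitch below are illustrative assumptions, and the AA filter is ignored): a 1D section through the Airy pattern convolved with a box function standing in for the pixel.

```python
import numpy as np
from scipy.special import j1  # Bessel function of the first kind, order 1

# Illustrative values only: green light, f/8, ~4.9 um pixel pitch.
WAVELENGTH_UM, F_NUMBER, PITCH_UM = 0.55, 8.0, 4.9

x = np.linspace(-10, 10, 2001)   # position on the sensor, micrometres
dx = x[1] - x[0]

# 1D section through the Airy pattern: I(x) = (2*J1(v)/v)^2,
# with v = pi * x / (wavelength * N).
v = np.pi * x / (WAVELENGTH_UM * F_NUMBER)
v[v == 0] = 1e-12                # avoid 0/0 at the centre
airy = (2 * j1(v) / v) ** 2
airy /= airy.sum() * dx          # normalise to unit area

# Box function standing in for the pixel aperture (AA filter ignored).
pixel = np.where(np.abs(x) <= PITCH_UM / 2, 1.0, 0.0)
pixel /= pixel.sum() * dx

# Combined point spread function: the convolution of the two.
combined = np.convolve(airy, pixel, mode="same") * dx

def fwhm(psf: np.ndarray) -> float:
    above = x[psf >= psf.max() / 2]   # full width at half maximum
    return above[-1] - above[0]

print(f"FWHM (um): diffraction {fwhm(airy):.2f}, pixel {fwhm(pixel):.2f}, "
      f"combined {fwhm(combined):.2f}")
# Re-running with larger f-numbers shows the combined blur growing smoothly:
# there is no hard 'limit' at which diffraction suddenly switches on.
```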
Of course you could claim that diffraction will really be effected every optical system no matter the aperture, but common useage in photography seems more to refer to a certain level of effect.
I disagree. People talk very loosely about diffraction 'kicking in', but no one seems to define any 'certain level'.
I specifically quoted a lens like the Otus, which seems to offer enough performance for the very early effects of diffraction to be noticeable, exactly so we wouldn't end up with this kind of technical and semantic debate.

Would you agree with the idea that a smaller sensor with the same number of pixels as a larger sensor will experience greater negative effects on IQ via diffraction at the same f-stop? If that's the case then I've got no interest in being drawn into a semantic/technical debate.
Just to be clear I'm not saying that more pixels will mean more negative effects of diffraction at the same aperture on a sensor of the same size, only that the same number of pixels on a smaller sensor will mean more negative effects at the same aperture.
It has nothing to do with the number of pixels. The effect you're talking about there is caused because you enlarge the smaller sensor more to view it.
..and a smaller sensor with the same or more pixels than a larger sensor will need to be enlarged more.
The degree of enlargement depends only on the size of the sensor and the size of the final view. The number of pixels does not come into it at all.
The degree of enlargement obviously depends on the size of the print though, and I don't think it's pushing things to say that printing size may be influenced by megapixels. Posts like the one at the start of this thread specifically seem to point towards that line of thinking.
As your opening post was specifically talking about megapixels I don't see the problem with me reffering to per pixel sharpness.
The problem is that 'per pixel sharpness' is another term that leads to faulty thinking and myths. All it means is that you compare things at different magnifications, which in the real world makes very little sense. Why would you ever want to compare a postcard with a poster?
Again this seems to be descending into a semantic/technical debate when ultimately what you're saying leads to the same result in real terms.
Again that doesn't mean that more pixels will ever make the effect worse but it may well be that they impact other areas of a cameras/sensors performance to the extent that manufacturers don't view adding more as a good trade off.
'Manufacturers' and their engineers are pretty much aware of the trade-offs - they probably don't have to deal with the 'may be'.
The 'may be' was more about taking into account things like marketing; it's possible for a camera not to be designed for ideal performance and instead to push specs the manufacturer thinks will sell. I suspect this is much more likely to be the case with the RX100 than with the m4/3 bodies at 16MP.
No, it doesn't.
If you're measuring the point at which sharpness on a per pixel level will be affected by diffraction to the same degree, then the D800 having more pixels will mean that a similar level of IQ loss will be happening earlier than merely looking at crop factor would suggest.
What does a 'similar level of IQ loss' mean? So far as I can see, there is no clear or useful meaning to that phrase in the context you are using it. You're welcome to try to suggest one.

--
Bob
A similar level of IQ loss in this case would be a decline in sharpness when viewed at pixel level; obviously sensor performance would also be a big factor.

I'd refer back to the original post, which pushes the line of thinking that megapixels represent an expectation of image quality. Responding to such statements, it seems only natural to me to consider the effect at a pixel level rather than at an enlargement level.

Again I'd say that the kind of endless technical/semantic debates a small number of people here seem to love to engage in serve little practical purpose beyond showing off their knowledge. Indeed I'd say they often have a negative effect, as people insist on attacking more simplified explanations that serve a much more practical purpose for many more users and muddy the water, so those seeking info are only confused.
 
quote madness...
I specifically quoted a lens like the Otus, which seems to offer enough performance for the very early effects of diffraction to be noticeable, exactly so we wouldn't end up with this kind of technical and semantic debate.

Would you agree with the idea that a smaller sensor with the same number of pixels as a larger sensor will experience greater negative effects on IQ via diffraction at the same f-stop? If that's the case then I've got no interest in being drawn into a semantic/technical debate.
I'd agree. While I am not able to get the same level of detail in the grass across a field (total green mush) with a compact cam, I can get a decent result with a DSLR at the same resolution. I don't have a DSLR and a compact cam with the same megapixel count right now, but I've tried this.
Just to be clear I'm not saying that more pixels will mean more negative effects of diffraction at the same aperture on a sensor of the same size, only that the same number of pixels on a smaller sensor will mean more negative effects at the same aperture.
It has nothing to do with the number of pixels. The effect you're talking about there is caused because you enlarge the smaller sensor more to view it.
Angle of view. You have more pixels in the same place, so more pixels are crippled. So it has something to do with the number of pixels. That's what matters in the end result. More pixels, more mess in the original image at pixel level.
..and a smaller sensor with the same or more pixels than a larger sensor will need to be enlarged more.
The degree of enlargement depends only on the size of the sensor and the size of the final view. The number of pixels does not come into it at all.
If you crop...
The problem is that 'per pixel sharpness' is another term that leads to faulty thinking and myths. All it means is that you compare things at different magnifications, which in the real world makes very little sense. Why would you ever want to compare a postcard with a poster?
What is this? You want the same image from two different cameras, and as a general user you really don't care about the magnification factor. We are not comparing magnification, we are comparing the end result.
 
I've heard the rumors. I'll wait for release and user reports -- not the announcement date. I've bought Sony cameras on specs before. About half of the time, I was disappointed. I like their products, mind you. I just don't trust their sales+marketing department.
 
It trades off against low-light performance. As you increase resolution, low-light performance falls off ever so slightly. As you continue to increase it, it falls off more rapidly. Once you go beyond some point, it falls like a rock. Where that point lies is a question of the size of the support electronics needed for each pixel. Part of that is technology -- and Sony is probably technologically ahead here -- and part of that is what functionality you want.
Do you have a link to prove this? From what I've read the opposite is true, and unless I am mistaking correlation with causation, progress in cameras bears this out.
 
It really depends on what you are trying to take a picture of. Yes, diffraction-wise, a fast lens needs to be fully open for maximum resolution (smallest disk). But we all know that it works great only in the center of the sensor (except for a very few and very expensive Zeiss lenses). Any deviation from the center eliminates all the advantage of diffraction-limited shooting because of the other distortions of a real lens. If you are shooting a portrait with a blurred background, you are fine. But if you are trying to take a nice landscape (and this is where the extra resolution really matters) with the foreground and background both sharp, you have to close the lens down to f/5.6 - f/8 or even more, and then all the diffraction advantage of a fast lens is gone - the extra resolution of the sensor is left mostly unused.

Having said that, I am sure that m4/3 makers will produce higher MPx cameras as soon as technology allows doing that while keeping noise and DR in check - simply because MPx still sell cameras.
 
Again I'd say that the kind of endless technical/semantic debates a small number of people here seem to love to engage in serve little practical purpose beyond showing off their knowledge. Indeed I'd say they often have a negative effect, as people insist on attacking more simplified explanations that serve a much more practical purpose for many more users and muddy the water, so those seeking info are only confused.
I couldn't disagree more. An explanation rooted in the truth becomes easy to understand once the principles it's based on are understood. A simplified explanation only becomes more difficult to understand as you dig deeper (and at some point, you'll realize that it's a lie).

I, for one, am happy that someone out there took the time to fully explain exposure (what it is and what it isn't) and equivalent f-stops on this forum. I might still be confused about how a larger sensor gains any advantage over a small sensor if I had remained satisfied with explanations like the "exposure triangle" and "f/2 = f/2 = f/2".
 
MoreorLess wrote:

I don't see any claim that the pixels and Airy disks have to "line up", just that a larger Airy disk will obviously be more likely to have more overlap and thus more negative effect.
Thinking things are 'obvious' when they aren't is a trap of thinking that very often causes these kinds of problems. Calling diffraction a 'negative' effect also displays a predisposition to think of it a certain way - it is just an effect, and what is needed is to understand that effect. So, the implicit flaw in the McHugh view that you seem to subscribe to is that we have a 'you see it or you don't' phenomenon. That is, if an Airy disc is smaller than a pixel, you can't see it - so somehow, if the pixel is smaller, you're going to see more of these Airy discs. It ain't like that at all.

As well as the discs not being lined up, the pixels aren't just on-off, so if we wanted to consider it in terms of Airy discs, we'd have to consider how an Airy disc at every alignment was captured, not just whether we can see it or not. Then we'd have to do a whole load of maths to combine the effect of all those Airy discs in all those different positions and see what happens. The way this ends up is that you take the point spread function for the diffraction (a section through the disc) and the point spread function for the sensor (a box function representing a pixel, and also the AA filter function) and convolve them - and you end up with a combined PSF. Anyway, what you find out is that diffraction is not an on-off phenomenon - there is no definable 'limit' in the sense the word gets used.
Of course you could claim that diffraction really affects every optical system no matter the aperture, but common usage in photography seems more to refer to a certain level of effect.
I disagree. People talk very loosely about diffraction 'kicking in', but no one seems to define any 'certain level'.
I specifically quoted a lens like the Otus, which seems to offer enough performance for the very early effects of diffraction to be noticeable, exactly so we wouldn't end up with this kind of technical and semantic debate.
No answer then. What is this 'certain level'?
Would you agree with the idea that a smaller sensor with the same number of pixels as a larger sensor will experience greater negative effects on IQ via diffraction at the same f-stop? If that's the case then I've got no interest in being drawn into a semantic/technical debate.
I took out the irrelevant bit. The number of pixels makes no difference to diffraction. And of course, why would you be using the same f-number with a smaller sensor? If you were trying to look for the resolution sweet spot, you'd choose a smaller f-number on the smaller sensor.
Just to be clear I'm not saying that more pixels will mean more negative effects of diffraction at the same aperture on a sensor of the same size, only that the same number of pixels on a smaller sensor will mean more negative effects at the same aperture.
It has nothing to do with the number of pixels. The effect you're talking about there is caused because you enlarge the smaller sensor more to view it.
..and a smaller sensor with the same or more pixels than a larger sensor will need to be enlarged more.
The degree of enlargement depends only on the size of the sensor and the size of the final view. The number of pixels does not come into it at all.
The degree of enlargement obviously depends on the size of the print though, and I don't think it's pushing things to say that printing size may be influenced by megapixels. Posts like the one at the start of this thread specifically seem to point towards that line of thinking.
Do you really think that printing size is influenced by megapixels - that people with 24MP cameras spend all their time looking at bigger images than people with 12MP cameras? Do you think that people with 16MP compacts look at their images at the same size as people with D4s? I think that neither is the case. I think that people decide on the size they want to view (maybe by default - the size of their TV or computer screen) and one way or another the image gets scaled to that viewing size. And of course, the degree of enlargement to the same-size output image depends on the size of the sensor.
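A trivial sketch of that last point (the 30 cm output width is an arbitrary assumption; any common viewing size will do):

```python
# Enlargement to a common output size depends only on the sensor width and
# the output width; the pixel count never enters the calculation.
OUTPUT_WIDTH_MM = 300  # assumed common viewing size, e.g. a 30 cm wide print

for sensor, width_mm in [('Full frame, 36 MP (D800)', 36.0),
                         ('1"-type, 20 MP (RX100)', 13.2)]:
    print(f"{sensor}: enlarged {OUTPUT_WIDTH_MM / width_mm:.1f}x")
# -> ~8.3x vs ~22.7x, a ratio of ~2.7 (the crop factor),
#    regardless of the megapixel counts shown in the labels
```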
As your opening post was specifically talking about megapixels, I don't see the problem with me referring to per pixel sharpness.
The problem is that 'per pixel sharpness' is another term that leads to faulty thinking and myths. All it means is that you compare things at different magnifications, which in the real world makes very little sense. Why would you ever want to compare a postcard with a poster?
Again this seems to be descending into a semantic/technical debate when ultimately what you're saying leads to the same result in real terms.
It doesn't at all lead to the same 'result in real terms'. You keep on introducing pixel count, which is largely irrelevant to the diffraction question, and because you've lumped it in unnecessarily with the things that do affect diffraction, you claim it's the 'same thing'. No.
Again, that doesn't mean that more pixels will ever make the effect worse, but it may well be that they impact other areas of a camera's/sensor's performance to the extent that manufacturers don't view adding more as a good trade-off.
'Manufacturers' and their engineers are pretty much aware of the trade-offs - they probably don't have to deal with the 'may be'.
The 'may be' was more about taking into account things like marketing; it's possible for a camera not to be designed for ideal performance and instead to push specs the manufacturer thinks will sell. I suspect this is much more likely to be the case with the RX100 than with the m4/3 bodies at 16MP.
Very few next-gen cameras are worse than the last gen. There have been one or two cases, but they are very rare.
No, it doesn't.
If you're measuring the point at which sharpness on a per pixel level will be affected by diffraction to the same degree, then the D800 having more pixels will mean that a similar level of IQ loss will be happening earlier than merely looking at crop factor would suggest.
What does a 'similar level of IQ loss' mean? So far as I can see, there is no clear or useful meaning to that phrase in the context you are using it. You're welcome to try to suggest one.
 
Why is 4/3 "stuck" to 16mp? The argument that it is not possible to get more real resolution falls flat: Sonys´1" sensor is 20mp and shows a great deal of resolution. Only the lens which cannot make the sharpness through out in the corners (RX100)
It isn't stuck. The MP will gradually increase for all sensor sizes up to the limit that can be tolerated for each sensor size using the existing sensor and in-camera processing technology.

The general pattern is that when your camera manufacturer of choice only makes a 10 MP camera owners of the camera say "I can't understand why any normal person needs more than 10 MP - I can make razor sharp 4 ft. x 3 ft. prints".

When the manufacturer goes to 16 MP these people then say "I can't understand why any normal person needs more than 16 MP".
 
