Re: K-1 pixel shift - can you see the difference?
Peace_VN wrote:
I thought the K-1 has no AA?
Well, this camera doesn't have a built-in AA filter, but it can simulate one by "shaking" the sensor. You can even select the intensity of this shaking/filtering, or switch it off entirely.
And I am sorry if I (we) led you astray with our somewhat heated argumentation.
If you already understand sampling theory, which applies to all digital imaging, please ignore the following text. Otherwise, if you want to understand it (at least a little bit), just continue reading.
The diagram below is what you find in any text dealing with digital signal processing, which starts with the sampling of an analog signal. The diagram shows the envelope (profile) of the signal's frequency content. In photography this corresponds to the lens's ability to project fine details of the subject, usually measured in line pairs per millimeter (lp/mm). The best lenses can deliver around 100 lp/mm. 100 line pairs correspond to 200 pixels, so pixels per mm is another way to express a lens's "resolution".
Now, there is the so-called sampling theorem (or Nyquist-Shannon sampling theorem), which governs this process. It states that the sampling frequency (fs) must be at least twice the highest frequency component of the sampled analog signal.
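To make the condition concrete, here is a minimal sketch in plain Python (the function name and example numbers are mine, just for illustration, not from any library):

```python
def satisfies_nyquist(fs: float, f_max: float) -> bool:
    """True if sampling at rate fs can capture all signal content up to
    f_max without aliasing (Nyquist-Shannon: fs must be >= 2 * f_max)."""
    return fs >= 2 * f_max

# Audio example: CD-quality sampling at 44.1 kHz covers hearing up to 20 kHz.
print(satisfies_nyquist(44100, 20000))  # True:  44100 >= 2 * 20000
print(satisfies_nyquist(44100, 25000))  # False: content above fs/2 will alias
```

The same check applies to a sensor: just read fs as pixels/mm and f_max as the finest detail frequency the lens projects.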
In the diagram below, the signal with the red envelope satisfies this condition, but the bluish signal does not. The red signal will be properly sampled and, after further digital processing (specific to digital imaging), can be fully "reconstructed", for example, printed on paper.

The bluish signal will be fully sampled as well, so the sensor will capture even more detail (the bluish area) than from the red signal. However, each "extra" detail (i.e., one above the Nyquist frequency) generates one spurious detail that lies mirrored across the Nyquist frequency line (the yellow area). Those extra details are false; they do not exist in the original. This phenomenon is known as "aliasing".
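This mirroring can be demonstrated numerically. A sketch in plain Python with illustrative values of my own choosing: a 7 Hz sine sampled at only 10 samples/s (Nyquist frequency 5 Hz) produces exactly the same sample values as a phase-inverted 3 Hz sine, because 7 Hz folds across the 5 Hz Nyquist line to 10 - 7 = 3 Hz. After sampling, the two are indistinguishable:

```python
import math

fs = 10.0              # sampling rate (samples per second); Nyquist freq = 5 Hz
f_real = 7.0           # actual signal frequency, above Nyquist -> will alias
f_alias = fs - f_real  # 3.0 Hz: mirrored across the Nyquist frequency line

# Sample both sines at the same instants t = n / fs.
samples_real  = [math.sin(2 * math.pi * f_real * n / fs) for n in range(50)]
samples_alias = [-math.sin(2 * math.pi * f_alias * n / fs) for n in range(50)]

# The sample sequences are identical: the sampler cannot tell 7 Hz from 3 Hz.
print(all(abs(a - b) < 1e-9 for a, b in zip(samples_real, samples_alias)))  # True
```

In a photo the same folding turns fine real detail into false coarse detail, e.g. moiré patterns in fabric.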
Now back to your digital camera. Let's assume your lens is capable of delivering up to 100 lp/mm (200 pixels/mm). To satisfy the sampling theorem, the camera sensor should then have a density of 400 pixels/mm in both dimensions. A full-frame (24mm x 36mm) camera would therefore need a sensor with 14,400 x 9,600 pixels, or about 138 MP!!! Well, in practice those sensors have roughly 4 times fewer pixels. Your 36 MP camera will capture (almost *) everything your lens can deliver, but there will be a lot of aliasing. If you engage the AA (anti-aliasing) filter, you reduce the lens resolution, so you reduce the aliasing, but the amount of detail as well. If you take a picture in the shift mode, you practically quadruple your sensor resolution, so you satisfy the theoretical requirement and get a picture completely free of aliasing.
Regarding the (almost *) comment, I will just add that the practical resolution will be around 18 mega-details. As I am a friendly soul, I leave this to James, for example, for further argumentation. But if he still insists on something like 9 mega-details, don't believe him. However, to actually see the improvement in resolution when shooting in the shift mode, you really would need to shoot a subject with a lot of fine detail, exactly as Roland commented.