James O'Neill wrote:
flektogon wrote:
MaKeR wrote:
I can see the difference, and it's clear that *89.PEF is the Pixel Shift image. The other one is less sharp and has false detail.
Yes, this explanation is fully supported by the sampling theorem! Any good lens, like the mentioned Pentax 100/2.8 Macro, can project at most around 35 mega-details (points) onto a full-frame sensor. So a camera with 36 Mp is (theoretically) capable of registering all those details.
Unfortunately it doesn't work like that.
Take a theoretical lens. A one-dimensional line or 2D shape it projects doesn't have a perfect edge; you get a halo. Let's say for this lens the smudge / blur / halo is 1/500th of a mm. If we put two lines 1/250th mm apart, their halos merge and we can't separate (resolve) the two lines. So realistically the lines need to be 1/200th mm apart, and the lens resolves 100 light/dark line pairs per mm. You might rate a lens as resolving X number of points in 2D rather than Y number of line pairs.
Now imagine that you take a pattern of perfectly sharp lines and lay them down on a MONOCHROME sensor either horizontally or vertically. The lines are 1 pixel wide and 1 pixel apart. In the very best case the lines align with the pixels and all the light goes in one row / column of pixels, no light goes in the next one. In the very worst case half the light goes in one and half in the next, and the pattern becomes a 50% grey. And in the average case the pattern is 75% in one and 25% in the next.
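The best/worst/average cases above can be checked with a toy one-dimensional model. The function name and the sampling approach are mine, purely for illustration; the pattern is the post's 1-pixel-light, 1-pixel-dark lines shifted by a fraction of a pixel:

```python
# Toy numerical check of the phase-alignment effect described above:
# a pattern of 1-pixel-wide light/dark lines sampled by a row of
# monochrome pixels. Names and numbers are illustrative only.

def light_fraction(i, offset, n=1000):
    """Fraction of pixel [i, i+1) covered by the light lines of a
    square-wave pattern (1 pixel light, 1 pixel dark) shifted by
    `offset` pixel widths. Approximated by sampling n points."""
    return sum(
        1 for j in range(n)
        if ((i + (j + 0.5) / n - offset) % 2.0) < 1.0
    ) / n

# Best case: pattern aligned with the pixel grid -> full contrast.
print(light_fraction(0, 0.0), light_fraction(1, 0.0))    # 1.0 0.0
# Worst case: half-pixel offset -> every pixel reads 50% grey.
print(light_fraction(0, 0.5), light_fraction(1, 0.5))    # 0.5 0.5
# Average case: quarter-pixel offset -> 75% / 25%.
print(light_fraction(0, 0.25), light_fraction(1, 0.25))  # 0.75 0.25
```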
These two edge effects add together. Suppose we ask the theoretical lens to resolve 100 line pairs per mm (so a light line 1/200 mm wide, a dark line 1/200 mm wide). The K1 sensor has ~200 pixels per mm. So if the centre of the light line lines up with the centre of a pixel, we would just be able to resolve the lines; but if it is even slightly offset, we won't. In other words, we need to add the two halos to know the effective resolution.
So if the K1 sensor could resolve 3680 monochrome light/dark line pairs over its width (i.e. 7360 pixels), and the lens can also resolve 3680 light/dark pairs over that width (i.e. both resolve a line 1/200th mm wide), the finest line we can see in the output is 1/100th mm wide. (This is sometimes quoted as 1/image_res = 1/lens_res + 1/recording_res.) So your theoretical "36M-detail" lens and a 36MP sensor can only resolve ~9 million details.
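The quoted rule can be worked through in a few lines. A sketch, assuming resolutions are measured in resolvable lines per mm (line width = 1/res mm); the 7360 px / ~36 mm K1 figures are from the post, the function name is mine:

```python
# Sketch of 1/image_res = 1/lens_res + 1/recording_res: the blur
# widths of lens and sensor add, so the combined resolution drops.

def combined_res(lens_res, sensor_res):
    """Resolution of the recorded image when lens and sensor blur add."""
    return 1.0 / (1.0 / lens_res + 1.0 / sensor_res)

lens_res = 200.0     # lens resolves a line 1/200 mm wide
sensor_res = 200.0   # K1: ~7360 px over 36 mm, about 200 px/mm
image_res = combined_res(lens_res, sensor_res)
print(image_res)     # 100.0 -> finest output line is 1/100 mm

# Over a 36 x 24 mm frame that is roughly the "9 million details":
details = (36 * image_res) * (24 * image_res)
print(round(details / 1e6, 2))  # 8.64
```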
As we add more pixels we get closer and closer to recording the full resolution of the lens.
But the Bayer filter (getting to colour pixels by filling in two of the three colours from neighbouring pixels) means the thinnest line the sensor can resolve in colour is more like 1/66th mm, so the finest line we can see in the output is roughly 1/40th mm wide. So instead of 9M details, it's more like 1.4M.
However, every "extra" detail above 18Mp will be accompanied by another, false detail. With pixel shift your camera behaves as if it had a 144Mp sensor, so it will be (again, theoretically) capable of registering 72 mega-details without any aliasing. Well, there are no lenses capable of delivering that many details, but still.
Obviously a 36MP sensor can't record 72 mega-details when R, G and B are present at each location; it's still 36 mega-details. But that gets it closer to recording the full resolution of the lens.
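A toy illustration of why a four-shot pixel-shift sequence gives true RGB at every photosite: the RGGB layout below is the standard Bayer pattern, and the one-pixel square shift sequence is assumed here for illustration, not taken from Pentax documentation.

```python
# RGGB Bayer pattern: which colour filter sits over photosite (r, c).
def bayer_colour(r, c):
    return [["R", "G"], ["G", "B"]][r % 2][c % 2]

# Four one-pixel sensor offsets of a pixel-shift sequence (assumed).
SHIFTS = [(0, 0), (0, 1), (1, 1), (1, 0)]

def colours_seen(r, c):
    """Filter colours that sample scene position (r, c) over 4 shots."""
    return {bayer_colour(r + dr, c + dc) for dr, dc in SHIFTS}

# Every position is sampled through R, G and B (G twice), so each of
# the 36M locations gets real colour with no interpolation:
print(all(colours_seen(r, c) == {"R", "G", "B"}
          for r in range(4) for c in range(4)))  # True
```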
So, when looking at the difference between a single and a shifted image, don't look at the amount of detail; rather look for the lack of false detail (moiré).
What is resolved properly doesn't produce false detail, so this IS correct.
Ah, maybe I need a better lens for that to work. Which lens would you recommend for landscape work? I have old MF lenses, but maybe it's worth getting a modern lens. I also have the SMC Pentax 28mm F3.5 Shift, but I don't really know how to use it yet...