
K1 pixel shift - can you see the difference?

Started 4 months ago | Discussions
Peace_VN
Peace_VN Regular Member • Posts: 389
K1 pixel shift - can you see the difference?

Dear all,

I did this test myself... it is tough to see the difference. If I didn't know which one was which, I would not be able to tell. Just wondering if you could see it, without knowing which file is pixel-shifted.

I used the Pentax 100mm Macro 2.8 lens, viewing on an Asus ProArt 4K screen.

Pentax K-1
robgendreau Forum Pro • Posts: 10,926
Re: K1 pixel shift - can you see the difference?
5

Peace_VN wrote:

Dear all,

I did this test myself... it is tough to see the difference. If I didn't know which one was which, I would not be able to tell. Just wondering if you could see it, without knowing which file is pixel-shifted.

I used the Pentax 100mm Macro 2.8 lens, viewing on an Asus ProArt 4K screen.

The difference is often pretty subtle, but I've seen it.

It's tough to see in these images because a lot is out of focus. Not in a bad way, content-wise, but not so hot for comparison. I have found shooting something like text makes it easier to see the difference. And I usually do need to add some sharpening to see it as well.

robgendreau's gear list:
Pentax 645Z
Peace_VN
OP Peace_VN Regular Member • Posts: 389
Re: K1 pixel shift - can you see the difference?

robgendreau wrote:

Peace_VN wrote:

Dear all,

I did this test myself... it is tough to see the difference. If I didn't know which one was which, I would not be able to tell. Just wondering if you could see it, without knowing which file is pixel-shifted.

I used the Pentax 100mm Macro 2.8 lens, viewing on an Asus ProArt 4K screen.

The difference is often pretty subtle, but I've seen it.

It's tough to see in these images because a lot is out of focus. Not in a bad way, content-wise, but not so hot for comparison. I have found shooting something like text makes it easier to see the difference. And I usually do need to add some sharpening to see it as well.

Oh, I see. I will try a landscape shoot and see.

MaKeR Senior Member • Posts: 1,006
Re: K1 pixel shift - can you see the difference?

I can see the difference, and it's clear that *89.PEF is the Pixel Shift image. The other one is less sharp and has false detail.

To do a better test, use the same Pixel Shift PEF file and develop it (I recommend 16-bit TIFF output at native resolution) with and without Pixel Shift, keeping all other settings constant (as you can in the Pentax DCU software). A-B compare the two resulting images and the difference should be apparent.

I have made hundreds of Pixel Shift images so my eye is attuned to the difference. For sure, not every viewer will notice it, and viewing conditions play a big part in how visible the difference is. Reduced size, screen shots, JPEG compression, etc. are all going to impede visibility of the difference.

On the other hand, significantly enlarging a well-developed PS image will accentuate the improvement over a non-PS image. A PS image can be printed larger.

(There is a dust spot near the top of your image, left of centre, by the way.)

ellover009 Senior Member • Posts: 1,003
Re: K1 pixel shift - can you see the difference?
5

No perceivable difference.

If I remember the old reviews of the K1 correctly, the biggest improvement from pixel shift was found in noise and color-error performance.

https://www.imaging-resource.com/PRODS/pentax-k1/pentax-k1PSR_MODE.HTM

Scroll down to the super high ISO comparison shots. There's an improvement.

ellover009's gear list:
Canon EOS 30D Pentax K-1 Canon EF-S 17-55mm f/2.8 IS USM Pentax FA 28-105mm F3.5-5.6
mxx
mxx Senior Member • Posts: 1,250
Re: K1 pixel shift - can you see the difference?
2

The difference has always been very clear to me in the DPR test shot.

mxx's gear list:
Sony Cyber-shot DSC-V1 Canon PowerShot SX50 HS Sigma DP3 Merrill Pentax K100D Pentax K-m +11 more
flektogon
flektogon Veteran Member • Posts: 6,226
Re: K1 pixel shift - can you see the difference?
1

MaKeR wrote:

I can see the difference, and it's clear that *89.PEF is the Pixel Shift image. The other one is less sharp and has false detail.

Yes, this explanation is fully supported by the sampling theorem! Any good lens, like the mentioned Pentax 100/2.8 Macro, can project at most around 35 mega-details/points onto a full-frame sensor. So a camera with 36 MP is (theoretically) capable of registering all those details. However, every "extra" detail above 18 MP will be accompanied by another, false detail. With pixel shift your camera behaves like it has a 144 MP sensor, so it will be (again, theoretically) capable of registering 72 mega-details without any aliasing. Well, there are no lenses capable of delivering that much detail, but still.

So, when looking for the difference between a single and a shifted image, don't look at the amount of detail; rather look for the lack of false detail (moiré).

Regards,
Peter

James O'Neill Veteran Member • Posts: 6,117
Re: K1 pixel shift - can you see the difference?
4

flektogon wrote:

MaKeR wrote:

I can see the difference, and it's clear that *89.PEF is the Pixel Shift image. The other one is less sharp and has false detail.

Yes, this explanation is fully supported by the sampling theorem! Any good lens, like the mentioned Pentax 100/2.8 Macro, can project at most around 35 mega-details/points onto a full-frame sensor. So a camera with 36 MP is (theoretically) capable of registering all those details.

Unfortunately it doesn't work like that.

Take a theoretical lens. A one-dimensional line or 2D shape doesn't have a perfect edge. You get a halo. Let's say for this lens the smudge / blur / halo is 1/500th of a mm. If we put two lines 1/250th mm apart their halos merge, and we can't separate (resolve) the two lines. So realistically the lines need to be 1/200th mm apart, and the lens resolves 100 light/dark line pairs per mm. You might take a lens and say it can resolve X number of points in 2D rather than Y number of line pairs.

Now imagine that you take a pattern of perfectly sharp lines and lay them down on a MONOCHROME sensor either horizontally or vertically. The lines are 1 pixel wide and 1 pixel apart. In the very best case the lines align with the pixels and all the light goes in one row / column of pixels, no light goes in the next one. In the very worst case half the light goes in one and half in the next, and the pattern becomes a 50% grey. And in the average case the pattern is 75% in one and 25% in the next.
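That alignment argument can be sketched numerically (my own toy model, not from the post: it simply splits a one-pixel-wide line's light between two adjacent pixels according to the phase offset):

```python
def line_split(offset):
    """For a 1-pixel-wide bright line shifted by `offset` pixels
    (0 = perfectly aligned, 0.5 = worst case), return the fraction of
    its light landing in the aligned pixel and in its neighbour."""
    return (1.0 - offset, offset)

for offset in (0.0, 0.25, 0.5):
    own, spill = line_split(offset)
    print(f"offset {offset}: {own:.0%} / {spill:.0%}")
# offset 0.0:  100% / 0%   (best case: pattern fully resolved)
# offset 0.25: 75% / 25%   (average case)
# offset 0.5:  50% / 50%   (worst case: uniform grey)
```

The three printed cases are exactly the best / average / worst cases described above.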

These two edge effects add together. Ask the theoretical lens to resolve 100 line pairs per mm (so a light line 1/200 mm wide, a dark line 1/200 mm wide). The K1 sensor has ~200 pixels per mm. So if the centre of the light line lines up with the centre of a pixel we would just be able to resolve the lines, but if it is even slightly offset we won't. In other words we need to add the two halos to know the effective resolution.

So if the K1 sensor could resolve 3680 monochrome light and dark line pairs over its width (i.e. 7360 pixels), and the lens can also resolve 3680 light/dark pairs over that width (i.e. both resolve a line 1/200th mm wide), the finest line we can see in the output is 1/100th mm wide. (This is sometimes quoted as 1/image_res = 1/lens_res + 1/recording_res.) So your theoretical "36M detail" lens and a 36MP sensor can only resolve ~9 million details.

As we add more pixels we get closer and closer to recording the full resolution of the lens.

But the Bayer filter, and getting to colour pixels by filling in two colours from neighbouring pixels, means the thinnest line the sensor can resolve in colour is more like 1/66th mm, so the finest line we can see in the output is 1/40th mm wide; so instead of 9M details, it's more like 1.4M.
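The reciprocal-addition rule quoted above can be checked in a few lines (a rough model; real MTF combination is more complicated, so treat the numbers as illustrative):

```python
def combined_lines_per_mm(lens_lpm, sensor_lpm):
    # 1/image_res = 1/lens_res + 1/recording_res, resolutions in lines/mm
    return 1.0 / (1.0 / lens_lpm + 1.0 / sensor_lpm)

# Lens and sensor each resolving a 1/200 mm line (200 lines/mm):
print(combined_lines_per_mm(200, 200))  # -> 100.0, i.e. a 1/100 mm line

# Halving the linear resolution quarters the areal "detail" count:
print(int(36e6 / 4))                    # -> 9000000
```

This reproduces the post's conclusion: a lens and a sensor each at the 1/200 mm limit combine to a 1/100 mm line, so 36 MP of pixels yield about 9 million resolved details.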

However, every "extra" detail above 18 MP will be accompanied by another, false detail. With pixel shift your camera behaves like it has a 144 MP sensor, so it will be (again, theoretically) capable of registering 72 mega-details without any aliasing. Well, there are no lenses capable of delivering that much detail, but still.

Obviously a 36MP sensor can't record 72 mega-details with RGB present at each location. It's still 36 mega-details. But that gets it closer to recording the full res of the lens.

So, when looking for the difference between a single and a shifted image, don't look at the amount of detail; rather look for the lack of false detail (moiré).

What is resolved properly doesn't result in false details, so this IS correct.

James O'Neill's gear list:
Pentax K-5 IIs Pentax K-1 Pentax smc FA 50mm F1.4 Pentax smc DA 18-250mm F3.5-6.3 Pentax smc FA 43mm F1.9 Limited +3 more
Peace_VN
OP Peace_VN Regular Member • Posts: 389
Re: K1 pixel shift - can you see the difference?
1

mxx wrote:

The difference has always been very clear to me in the DPR test shot.

Yes, I looked at those as well. I think I need to use it in landscape shoots.

Peace_VN
OP Peace_VN Regular Member • Posts: 389
Re: K1 pixel shift - can you see the difference?

James O'Neill wrote:

flektogon wrote:

MaKeR wrote:

I can see the difference, and it's clear that *89.PEF is the Pixel Shift image. The other one is less sharp and has false detail.

Yes, this explanation is fully supported by the sampling theorem! Any good lens, like the mentioned Pentax 100/2.8 Macro, can project at most around 35 mega-details/points onto a full-frame sensor. So a camera with 36 MP is (theoretically) capable of registering all those details.

Unfortunately it doesn't work like that.

Take a theoretical lens. A one-dimensional line or 2D shape doesn't have a perfect edge. You get a halo. Let's say for this lens the smudge / blur / halo is 1/500th of a mm. If we put two lines 1/250th mm apart their halos merge, and we can't separate (resolve) the two lines. So realistically the lines need to be 1/200th mm apart, and the lens resolves 100 light/dark line pairs per mm. You might take a lens and say it can resolve X number of points in 2D rather than Y number of line pairs.

Now imagine that you take a pattern of perfectly sharp lines and lay them down on a MONOCHROME sensor either horizontally or vertically. The lines are 1 pixel wide and 1 pixel apart. In the very best case the lines align with the pixels and all the light goes in one row / column of pixels, no light goes in the next one. In the very worst case half the light goes in one and half in the next, and the pattern becomes a 50% grey. And in the average case the pattern is 75% in one and 25% in the next.

These two edge effects add together. Ask the theoretical lens to resolve 100 line pairs per mm (so a light line 1/200 mm wide, a dark line 1/200 mm wide). The K1 sensor has ~200 pixels per mm. So if the centre of the light line lines up with the centre of a pixel we would just be able to resolve the lines, but if it is even slightly offset we won't. In other words we need to add the two halos to know the effective resolution.

So if the K1 sensor could resolve 3680 monochrome light and dark line pairs over its width (i.e. 7360 pixels), and the lens can also resolve 3680 light/dark pairs over that width (i.e. both resolve a line 1/200th mm wide), the finest line we can see in the output is 1/100th mm wide. (This is sometimes quoted as 1/image_res = 1/lens_res + 1/recording_res.) So your theoretical "36M detail" lens and a 36MP sensor can only resolve ~9 million details.

As we add more pixels we get closer and closer to recording the full resolution of the lens.

But the Bayer filter, and getting to colour pixels by filling in two colours from neighbouring pixels, means the thinnest line the sensor can resolve in colour is more like 1/66th mm, so the finest line we can see in the output is 1/40th mm wide; so instead of 9M details, it's more like 1.4M.

However, every "extra" detail above 18 MP will be accompanied by another, false detail. With pixel shift your camera behaves like it has a 144 MP sensor, so it will be (again, theoretically) capable of registering 72 mega-details without any aliasing. Well, there are no lenses capable of delivering that much detail, but still.

Obviously a 36MP sensor can't record 72 mega-details with RGB present at each location. It's still 36 mega-details. But that gets it closer to recording the full res of the lens.

So, when looking for the difference between a single and a shifted image, don't look at the amount of detail; rather look for the lack of false detail (moiré).

What is resolved properly doesn't result in false details, so this IS correct.

Ah, maybe I need a better lens for that to work. Which lens would you recommend for landscape work? I have old MF lenses - but maybe it's worth getting a modern lens. I have an SMC Pentax 28mm F3.5 Shift as well - but I don't really know how to use it yet...

JeremieB Senior Member • Posts: 2,041
Re: K1 pixel shift - can you see the difference?
4

flektogon wrote:

MaKeR wrote:

I can see the difference, and it's clear that *89.PEF is the Pixel Shift image. The other one is less sharp and has false detail.

Yes, this explanation is fully supported by the sampling theorem! Any good lens, like the mentioned Pentax 100/2.8 Macro, can project at most around 35 mega-details/points onto a full-frame sensor. So a camera with 36 MP is (theoretically) capable of registering all those details. However, every "extra" detail above 18 MP will be accompanied by another, false detail.

I'm sorry, but that's a slightly misleading way to describe the Nyquist theorem.

First, it should be 9 MP - the sampling theorem applies to both rows AND columns.

Then it's wrong to talk about MP and details; the theorem applies to frequencies.

When we shoot with Pentax's simulated AA filter, the sensor still scans 36 MP but there's no moiré. Why? Because details are not MP.

With pixel shift your camera behaves like it has a 144 MP sensor, so it will be (again, theoretically) capable of registering 72 mega-details without any aliasing. Well, there are no lenses capable of delivering that much detail, but still.

So, when looking for the difference between a single and a shifted image, don't look at the amount of detail; rather look for the lack of false detail (moiré).

JeremieB's gear list:
Pentax K-70 Pentax K-3 Mark III Pentax smc FA 50mm F1.4 Pentax smc DA 18-55mm F3.5-5.6 AL Pentax smc D-FA 100mm F2.8 Macro WR +9 more
flektogon
flektogon Veteran Member • Posts: 6,226
Re: K1 pixel shift - can you see the difference?
1

JeremieB wrote:

flektogon wrote:

MaKeR wrote:

I can see the difference, and it's clear that *89.PEF is the Pixel Shift image. The other one is less sharp and has false detail.

Yes, this explanation is fully supported by the sampling theorem! Any good lens, like the mentioned Pentax 100/2.8 Macro, can project at most around 35 mega-details/points onto a full-frame sensor. So a camera with 36 MP is (theoretically) capable of registering all those details. However, every "extra" detail above 18 MP will be accompanied by another, false detail.

I'm sorry, but that's a slightly misleading way to describe the Nyquist theorem.

First, it should be 9 MP - the sampling theorem applies to both rows AND columns.

Then it's wrong to talk about MP and details; the theorem applies to frequencies.

When we shoot with Pentax's simulated AA filter, the sensor still scans 36 MP but there's no moiré. Why? Because details are not MP.

With pixel shift your camera behaves like it has a 144 MP sensor, so it will be (again, theoretically) capable of registering 72 mega-details without any aliasing. Well, there are no lenses capable of delivering that much detail, but still.

So, when looking for the difference between a single and a shifted image, don't look at the amount of detail; rather look for the lack of false detail (moiré).

Don't be sorry, I am fully aware that what I wrote was (a little bit) misleading. Of course, for a 100% reconstruction of the sampled original, and to avoid any aliasing, the sampling frequencies in both directions have to be 2 times greater than the highest frequency component of the original (i.e. the maximum lp/mm resolution of a lens). But in practice this is not how digital photography works. If it were, then a camera like the Pentax K-1, with a 36 MP sensor and a (fully) engaged AA filter, would not deliver more than 9 mega-details. What we see is that all digital cameras, even those with built-in AA filters, deliver something like 40%-60% of their sensor resolution. Look, for example, at the DXOMark lens reviews. Well, I just simplified it to 50%.

Regards,
Peter

flektogon
flektogon Veteran Member • Posts: 6,226
Re: K1 pixel shift - can you see the difference?
1

James O'Neill wrote:

So if the K1 sensor could resolve 3680 monochrome light and dark line pairs over its width (i.e. 7360 pixels), and the lens can also resolve 3680 light/dark pairs over that width (i.e. both resolve a line 1/200th mm wide), the finest line we can see in the output is 1/100th mm wide. (This is sometimes quoted as 1/image_res = 1/lens_res + 1/recording_res.) So your theoretical "36M detail" lens and a 36MP sensor can only resolve ~9 million details.

James, that formula is perfectly valid for a combination of lens and film! But not for the digital film (i.e. sensor). If a given lens can deliver the same amount of detail as the sensor's pixel count, it is possible that all those details will be recorded. But...

Imagine that the lens is projecting onto the sensor a board of black-and-white fields (like a chess board) with 36 million fields in total. If such a subject was perfectly projected onto (aligned with) the sensor pixels, all 36 million fields would be recorded. One field, one pixel. Of course, just a half-pixel shift in both dimensions and you would get just one grey field. So, statistically you can record 18 mega-details. Well, this "statistics" may not be valid as it is my invention, but please read my response to JeremieB below.

Regards,
Peter

JasonTheBirder
JasonTheBirder Senior Member • Posts: 3,967
Re: K1 pixel shift - can you see the difference?
1

Seems to me that both images are a little soft.

James O'Neill Veteran Member • Posts: 6,117
Re: K1 pixel shift - can you see the difference?
3

flektogon wrote:

James O'Neill wrote:

So if the K1 sensor could resolve 3680 monochrome light and dark line pairs over its width (i.e. 7360 pixels), and the lens can also resolve 3680 light/dark pairs over that width (i.e. both resolve a line 1/200th mm wide), the finest line we can see in the output is 1/100th mm wide. (This is sometimes quoted as 1/image_res = 1/lens_res + 1/recording_res.) So your theoretical "36M detail" lens and a 36MP sensor can only resolve ~9 million details.

James, that formula is perfectly valid for a combination of lens and film! But not for the digital film (i.e. sensor). If a given lens can deliver the same amount of detail as the sensor's pixel count, it is possible that all those details will be recorded. But...

Imagine that the lens is projecting onto the sensor a board of black-and-white fields (like a chess board) with 36 million fields in total.

It doesn't work like that. The resolution of a lens is not perfect up to so many line pairs per mm and then grey mush.

Imagine six chessboards stuck together to form a 24 x 16 square grid: 192 black squares and 192 white ones. When the lens forms an image of that, the transition from white to black isn't perfect / instantaneous. A little of the light that should be in a white square spills into the 4 neighbouring black squares. But if this is the whole of a 36x24 mm image, each square will be 1.5 mm wide and 1.5 mm tall, and I was talking about a lens with a blur of 0.002 mm. So we have a smudge which is 0.133% of the square width/height. Really easy to resolve black and white squares.

Now make it 600 chessboards, 240 x 160 squares, 19,200 each of black and white. Now each square is 0.15 mm wide with a 0.002 mm smudge: 1.3%. Still easy to tell black from white.

Same again to give 2400 x 1600 squares, 1,920,000 each of black and white, 0.015 mm wide with a 0.002 mm smudge: 13% of the width. Still OK.

Let's go to 4800 x 3200: 7,680,000 each of black/white, 0.0075 mm wide with a 0.002 mm smudge. Now it's getting hard to see the blacks and whites, since they're smaller (0.0035 mm) than the transitions (0.004 mm), but there are still blacks and whites to see.

Let's go to 9600 x 6400: 30,720,000 each of black/white, 0.00375 mm wide with a 0.002 mm smudge. Now the smudges meet and we can no longer resolve the pattern.
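The chessboard progression above can be checked with a short loop (same assumed 0.002 mm smudge and 36 mm frame width as in the post; "resolvable" here just means the square is wider than the smudge intruding from both edges):

```python
SMUDGE_MM = 0.002       # assumed lens smudge, as in the post
FRAME_WIDTH_MM = 36.0   # full-frame sensor width

for cols in (24, 240, 2400, 4800, 9600):
    square_mm = FRAME_WIDTH_MM / cols
    # the pattern survives while the square is wider than both smudges combined
    resolvable = square_mm > 2 * SMUDGE_MM
    print(f"{cols:>4} columns: square {square_mm:.5f} mm, resolvable: {resolvable}")
```

Only the last step (9600 columns, 0.00375 mm squares) fails the test, matching the point where the post says the smudges meet.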

If such a subject was perfectly projected onto (aligned with) the sensor pixels, all 36 million fields would be recorded. One field, one pixel.

As the Spartans famously said: "IF".

If we are right at the resolution limit for the lens, ANY imperfection, even 1/1000th of a pixel width horizontally or vertically, means we can no longer resolve the 36 million fields.

And whereas in the image, even at 0.0075 mm squares with a 0.002 mm smudge, we still get spots of maximum black and maximum white, on a digital sensor, even when perfectly aligned, we get 80% in the white squares and 20% in the black ones. We normally say a 50% difference between "black" and "white" is "resolved", so it doesn't take much offset to bring it down to 75-25.

The average offset is 1/4 pixel horizontally and 1/4 vertically. So even if the grid projected had perfect dots with no smudge, on average the white dots would get 3/4 x 3/4 = 9/16 of the light, so the blacks would get 7/16. I.e. a 2n MP sensor can't resolve n million black and n million white dots.
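That average-offset arithmetic in one place (just a check of the worked numbers above):

```python
offset = 0.25                    # average misalignment per axis, in pixels
white_share = (1 - offset) ** 2  # light kept by a "white" pixel: 3/4 x 3/4 = 9/16
black_share = 1 - white_share    # light leaking into "black" pixels: 7/16
print(white_share, black_share)  # -> 0.5625 0.4375
```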

Of course, just a half-pixel shift in both dimensions and you would get just one grey field. So, statistically you can record 18 mega-details. Well, this "statistics" may not be valid as it is my invention,

Back to basics: some of the light from white squares falls in black squares due to lens imperfections, diffraction, etc. The amount determines how small the squares or lines can be and still be resolved.

Additional light from white squares falls in black squares due to misalignment with pixel boundaries. This also determines the smallest square or thinnest line.

"Smudge" from the lens is 1/lines_per_mm.

"Smudge" from the recording medium is also 1/lines_per_mm.

So add them and you get the total smudge, and 1/total_smudge is the effective lines per mm.

James O'Neill's gear list:
Pentax K-5 IIs Pentax K-1 Pentax smc FA 50mm F1.4 Pentax smc DA 18-250mm F3.5-6.3 Pentax smc FA 43mm F1.9 Limited +3 more
Peace_VN
OP Peace_VN Regular Member • Posts: 389
Re: K1 pixel shift - can you see the difference?

flektogon wrote:

JeremieB wrote:

flektogon wrote:

MaKeR wrote:

I can see the difference, and it's clear that *89.PEF is the Pixel Shift image. The other one is less sharp and has false detail.

Yes, this explanation is fully supported by the sampling theorem! Any good lens, like the mentioned Pentax 100/2.8 Macro, can project at most around 35 mega-details/points onto a full-frame sensor. So a camera with 36 MP is (theoretically) capable of registering all those details. However, every "extra" detail above 18 MP will be accompanied by another, false detail.

I'm sorry, but that's a slightly misleading way to describe the Nyquist theorem.

First, it should be 9 MP - the sampling theorem applies to both rows AND columns.

Then it's wrong to talk about MP and details; the theorem applies to frequencies.

When we shoot with Pentax's simulated AA filter, the sensor still scans 36 MP but there's no moiré. Why? Because details are not MP.

With pixel shift your camera behaves like it has a 144 MP sensor, so it will be (again, theoretically) capable of registering 72 mega-details without any aliasing. Well, there are no lenses capable of delivering that much detail, but still.

So, when looking for the difference between a single and a shifted image, don't look at the amount of detail; rather look for the lack of false detail (moiré).

Don't be sorry, I am fully aware that what I wrote was (a little bit) misleading. Of course, for a 100% reconstruction of the sampled original, and to avoid any aliasing, the sampling frequencies in both directions have to be 2 times greater than the highest frequency component of the original (i.e. the maximum lp/mm resolution of a lens). But in practice this is not how digital photography works. If it were, then a camera like the Pentax K-1, with a 36 MP sensor and a (fully) engaged AA filter, would not deliver more than 9 mega-details. What we see is that all digital cameras, even those with built-in AA filters, deliver something like 40%-60% of their sensor resolution. Look, for example, at the DXOMark lens reviews. Well, I just simplified it to 50%.

I thought the K1 has no AA?

James O'Neill Veteran Member • Posts: 6,117
Re: K1 pixel shift - can you see the difference?
1

Peace_VN wrote:

I thought the K1 has no AA?

Correct.

As resolution gets higher, the risk of aliasing patterns (moiré) reduces, and the K1 uses the stabilization system if you want to emulate the effect of an AA filter. It's also fairly easy to deal with in post.

The K5-IIs (but not the original K5 or the non-S K5-II) introduced this, and I think all the 24MP crop-sensor cameras do the same. I'm sure someone will correct me if I'm wrong.

In 6 or 7 years of having it available I've never turned it on, and the number of pictures I've needed to fix is fewer than 1 per 1000 - probably closer to 1 in 5000.

James O'Neill's gear list:
Pentax K-5 IIs Pentax K-1 Pentax smc FA 50mm F1.4 Pentax smc DA 18-250mm F3.5-6.3 Pentax smc FA 43mm F1.9 Limited +3 more
Roland Karlsson Forum Pro • Posts: 30,035
Yes!
1

Yes,

It is easy to see the improvement with pixel shift. But... you need sharp images. There is no point in using pixel shift if the images are soft; you will not get any improvement then. And your images are soft.

/Roland
Kalpanika X3F tools:
https://github.com/kalpanika/x3f

Roland Karlsson's gear list:
Sigma DP3 Merrill Sigma dp2 Quattro Sony RX100 III Pentax K-3 Pentax K-1 +14 more
flektogon
flektogon Veteran Member • Posts: 6,226
Re: K1 pixel shift - can you see the difference?
1

Peace_VN wrote:

I thought the K1 has no AA?

Well, this camera doesn't have a built-in AA filter, but it can simulate one via sensor "shaking". And you can even select a different intensity of such shaking/filtering, or no shaking/filtering at all.

And I am sorry if I (we) led you astray with our somewhat heated argumentation.

If you understand sampling theory, which applies to any digital imaging, please ignore my following text. Otherwise, if you want to understand it (a little bit), just continue reading.

The diagram below is what you will find in any text dealing with digital signal processing, which starts with analog signal sampling. The diagram shows the envelope (profile) of an analog signal's "frequencies". In photography this corresponds to the lens's ability to project details of the subject. Usually it is measured in line pairs per millimeter; the best lenses can deliver around 100 such lp/mm. 100 line pairs equal 200 pixels, so pixels per mm is another way to express the lens's "resolution".

Now, there is a so-called sampling (or Nyquist-Shannon sampling) theorem which governs this process. It states that the sampling frequency (fs) has to be at least 2 times higher than the highest frequency component of the sampled analog signal.

In the diagram below, a signal with the red envelope satisfies such a condition, but the bluish signal does not. The red signal will be properly sampled and, after further digital processing (specific to digital imaging), it can be fully "reconstructed", for example printed on paper.

The bluish signal will be fully sampled as well, so the sensor will get even more details (bluish area) than from the red signal. However, each "extra" detail (i.e. one above the Nyquist frequency) will "generate" one extra detail which lies symmetrically across the Nyquist frequency line (yellow area). Those extra details are false; they do not exist in the original. This phenomenon is known as "aliasing".

Now back to your digital camera. Let's assume that your lens is capable of delivering up to 100 lp/mm (200 pixels/mm). To satisfy the sampling theorem, the camera sensor should have a density of 400 pixels/mm in both dimensions. An FF (24mm x 36mm) camera should then have a sensor with 14,400 x 9,600 pixels, or 138 MP!!! Well, in practice those sensors have up to 5 times fewer pixels. Your 36 MP camera will get (almost*) everything your lens can deliver, but there will be a lot of aliasing. If you engage the AA (anti-aliasing) filter, you will reduce the lens resolution; thus you will reduce the aliasing, but the amount of detail as well. If you take a picture in shift mode, you practically quadruple your sensor resolution, so you will satisfy the theoretical requirement to get a picture completely free of any aliasing.
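The arithmetic in that paragraph, spelled out (same assumptions as stated above: a 100 lp/mm lens, 2x Nyquist sampling, a 24 x 36 mm sensor):

```python
lp_per_mm = 100
detail_px_per_mm = 2 * lp_per_mm           # 200: one pixel per light/dark line
sampling_px_per_mm = 2 * detail_px_per_mm  # 400: Nyquist criterion

width_px = 36 * sampling_px_per_mm         # 14,400
height_px = 24 * sampling_px_per_mm        # 9,600
print(width_px, height_px, width_px * height_px / 1e6)  # -> 14400 9600 138.24
```

So a full-frame sensor would indeed need roughly 138 MP to sample a 100 lp/mm lens at the Nyquist rate in both dimensions.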

To the (almost*) comment I am just adding that the practical resolution will be around 18 mega-details. As I am a friendly soul, I leave this to James, for example, for further argumentation. But if he still insists on something like 9 mega-details, don't believe him. However, to see the improvement in resolution as well (if shooting in shift mode) you really would need to shoot a subject with a lot of fine detail, exactly as Roland commented.

Regards,
Peter
