How many pixels do we need?

Stop shouting at me in bold all caps. Haha.

Did you do all that math with a slide rule?
Apologies for the all caps. It was only intended to draw attention and hopefully elicit a reply; no disrespect intended at all.

As to the substance of my question: should I interpret your evasive joke about the "slide rule" as proof that I hit the nail on the head, and that your claim "I'm just looking at my image at full res (not pixel peeping)" is impossibly contradictory? ;-)

Again: when you say that, which of the two interpretations that I proposed is the correct one?

A) you are looking at a small section of the full-res image at 100% (hence indeed "pixel peeping" ;-))

B) you are looking at a downsampled (i.e., not "full res") version of the full image?

Best,

Marco
 
It's actually a good point.
Ultra-high-res displays look fantastic, but they cram so many pixels into the screen that we oftentimes don't really appreciate the actual resolution of the photos.

Kind of explains this recent renewed pixel-lust and people wanting more. I see the same thing happening when editing images on the 16" 2k+ display vs the desktop 27" 1440p display.

Even 150MP will look slightly underwhelming on a 6k display.
 
Thanks for the tips David. We probably all need to cull more.

You think I posted too many shots yesterday just walking around my neighborhood? Just wait till I get to Rome next week. Haha.
Nope. You posted to illustrate a point and did it well. I was thinking more about your comment about not demonstrating an 'eye' and the steps anyone might take to enhance their reputation. Shoot a lot and cull ruthlessly when presenting your showcase work, and inevitably you build a better reputation. Show it all, warts and all, and it dilutes your impact.

But that doesn't mean you should be ruthless when demonstrating stuff, just when building a portfolio. It doesn't seem to be your thing, but if you wanted to, I think that's the way to do it.
 
Hi,

Actually, still useful. At least the specialized ones. I still use a slide rule with scales to determine component values for RF filters. Something Pickett made. Back when I got it there were only high-priced four-function calculators and no personal computers. The Steves were still in their garage at that time.

It will still arrive at the necessary component values for filters quicker than the most modern personal computer with circuit modeling software.

The Pickett is a standard linear rule, but there are other circular rules still used in many technical endeavors, although most folks never think of the circular ones as slide rules. They always picture the linear ones in their mind's eye.

There are some specialized slide rules used in photography. I haven't looked, but can we still buy some of those?

Stan
 
Even 150MP will look slightly underwhelming on a 6k display.
Crispy, I think you are probably a world-class pro photog in general and certainly a top pro macro shooter. But I'm not sure you are getting this whole 150-200 MP thing straight. Or, at least I don't understand it.

It's going to look incredible on future monitors and there is going to be some further image fidelity benefit.

I've been reading all your posts and tech arguments (discussions) with Jim. I'm not following this whole argument of yours about the wall we hit past 100 MP, and all this about having to shoot 150-200 MP at f/5.6 or the IQ will be bad.
 
It's not about "the IQ will be bad" - it's about the effective resolution, meaning how much image data is actually available at the end of the day.

You can have a 200MP sensor in your GFX camera, but unless you shoot with a wide enough aperture, you won't get a clear enough distinction between the image details.

The same principle is observable in microscopes, when the numerical aperture does not allow for a clear enough distinction between the individual Airy disks:

https://www.microscopyu.com/tutorials/imageformation-airyna
So while the resulting file has 200MP, the actual image has basically the same amount of usable information as a much lower-resolution image. You could reduce its pixel dimensions without losing any data. It's basically a lot of "empty" pixel data, not unlike an interpolated, enlarged image.

You actually provided a good example in your comparison of the 100II and the Q3, when you shot the 100II at f/9 and the Q3 at f/8.
The 100MP sensor will be limited by the aperture, so its effective resolution is quite a bit lower, which is why the end result is extremely similar - and also pretty close to a 50MP image (though I think it's just slightly more than 50MP, perhaps 65MP?).

You might want to compare the images shot at (for example) f/6.7; I'm sure you're going to see a bigger difference between the GFX 100II and the Q3 then.

The point is: we're at a level where small changes in aperture will knock down your effective resolution considerably and very quickly.

Sure, if you're "lucky" enough to have some straight lines or very fine structures in your image (like the staircases), you may still see some aliasing, and while you may be able to reduce it with a higher-resolution sensor, it won't yield more usable detail.

That's all there is to it. Of course we can look at the details at 300%, but as we know from Jim, pixel peeping is unproductive, so 3x pixel peeping is at least 3x as unproductive hehe
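To put rough numbers on the diffraction ceiling (a back-of-the-envelope sketch of mine in Python, assuming 550 nm green light and taking the textbook incoherent cutoff 1/(λN) as the absolute limit; stricter practical criteria such as MTF50 give far lower figures):

```python
# Sketch: sensor resolution at which the pixel-level Nyquist frequency reaches
# the diffraction cutoff of the aperture, for a 44x33 mm sensor at 550 nm.
# Beyond this point, finer pixels record no additional real detail.

SENSOR_W_MM, SENSOR_H_MM = 44.0, 33.0
WAVELENGTH_MM = 550e-6  # 550 nm, expressed in mm

def diffraction_limited_mp(f_number: float) -> float:
    cutoff = 1.0 / (WAVELENGTH_MM * f_number)  # cycles/mm the lens can pass
    pitch_mm = 1.0 / (2.0 * cutoff)            # pitch whose Nyquist equals it
    return (SENSOR_W_MM / pitch_mm) * (SENSOR_H_MM / pitch_mm) / 1e6

for n in (4, 5.6, 8, 11, 16):
    print(f"f/{n}: ~{diffraction_limited_mp(n):.0f} MP ceiling")
# f/4: ~1200 MP, f/5.6: ~610 MP, f/8: ~300 MP, f/11: ~160 MP, f/16: ~75 MP
```

Even by this most generous criterion, stopping down to f/16 caps the format near 75MP; any perceptual threshold for "clear enough distinction" bites much sooner, which is the point being made above.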
 
The red vertical line represents the pixel-level Nyquist frequency; the unresolved energy is folded around that line. So at the red line, true signal and false signal are equal. When looking at lower frequencies, true signal will dominate over false signal.
Why is there any false detail at the red line? My understanding of the Nyquist-Shannon theorem is that if the signal is band-limited to the Nyquist frequency or below, the system can perfectly reproduce the waveform (in the frequency domain; this doesn't address noise and quantization error). Is this incorrect?
 
You have misstated the theorem. Perfect reconstruction is possible only for frequencies strictly below the Nyquist frequency.

You can prove this to yourself by imagining a sine wave at exactly the Nyquist frequency: sample it at its zero crossings and every sample is zero, so a flat DC signal is the most obvious interpretation.
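A quick numeric illustration of that (plain Python, my own toy example): sampling sin(2πft + φ) at exactly fs = 2f yields samples (-1)^n * sin(φ), so amplitude and phase are conflated, and φ = 0 gives all zeros.

```python
import math

f = 1.0           # signal frequency in Hz
fs = 2.0 * f      # sampling at exactly the Nyquist rate

for phase in (0.0, math.pi / 6, math.pi / 2):
    samples = [math.sin(2 * math.pi * f * (n / fs) + phase) for n in range(8)]
    print(f"phase={phase:.2f}:", [round(s, 3) for s in samples])

# phase=0.00 -> all zeros, indistinguishable from a flat DC signal
# phase=0.52 -> alternating +/-0.5
# phase=1.57 -> alternating +/-1.0
# The recorded amplitude is |sin(phase)|: a sine at exactly Nyquist cannot be
# reconstructed unambiguously, so perfect recovery needs f strictly below fs/2.
```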
 
Thanks for that response; that clears it up for me a little. Is that generally held as true in the scientific community? Would Jim and Erik and the other more technical guys agree with you on that?

I can see a difference between 50 and 100 MP and have for years. It is clear to me. But are you saying that we are going to be wasting our time past 100 MP? What do you think is the limit of enjoying more resolution with our eyes? Is it 100MP?

When I "pixel peep" (I hate that term), I'm viewing at 1:1. At 2 and 3 times res the GFX 100 files hold up much better than taking the Q2/3 files that far. That is obvious, and I agree there is no real need to go to 2 and 3 times res except just to mess around and really dig.

So, at full res (100%, 1:1) on a future 8K monitor, we won't see a difference between a 100 MP image and a 200 MP image at around f/8 or f/9? (I know that would be viewing only part of the image; I'm talking about at 1:1.)
 
Generally, what Chris is talking about is diffraction. As pixels get smaller to increase megapixels within the fixed 44x33 sensor size, you will see softening of the image at smaller apertures (higher f-numbers). But this is wavelength dependent, so in blue-dominant visible light you will have less diffraction than in red-dominant visible light.

You can test this between your GFX 100II and IR-converted 100S. With the same lens, take images at f/5.6, f/8, f/11, and f/16. You will see that the GFX 100II will be sharper than the IR 100S at f/8 using the same lens, subject, and focus distance. Maybe even sharper at f/5.6.

It would be great if Fujifilm did in-camera deconvolution like Panasonic does on their m4/3 cameras with Panasonic lenses. A Panasonic lens will look sharper than an Olympus lens, as Panasonic knows the point spread function and does some in-camera corrections on the JPEGs. This has been one of my motivations to get a G9 camera.

The great thing about UV photography is that f/11 is no problem with m4/3 cameras.
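For concreteness, the standard Airy first-minimum diameter d = 2.44·λ·N makes the wavelength dependence easy to tabulate (a small sketch of mine; where the softening becomes visible also depends on pixel pitch and demosaicing):

```python
# Airy disk first-minimum diameter: d = 2.44 * wavelength * f_number.
# Shorter (blue) wavelengths diffract less than longer (red) ones.

for f_number in (5.6, 8, 11, 16):
    row = ", ".join(
        f"{name} {2.44 * (wl_nm / 1000.0) * f_number:.1f} um"
        for name, wl_nm in (("blue", 450), ("green", 550), ("red", 650))
    )
    print(f"f/{f_number}: {row}")
# At f/8: blue ~8.8 um vs red ~12.7 um, a noticeably larger blur spot relative
# to pixel pitches in the 3-4 um range.
```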
 
I used to be good with a slide rule. My first year in college in 1975 we had to use a slide rule in all my physics and calculus classes. Then the next year we used that first TI calculator, and the world was changed forever.

I was good with a slide rule but have forgotten how to use it.
 
Shoot a lot and cull ruthlessly when presenting your showcase work, and inevitably you build a better reputation. Show it all, warts and all, and it dilutes your impact.
You are right about this.
 
Some small comments:

Ratio of real and false detail:

The red vertical line represents the pixel-level Nyquist frequency; the unresolved energy is folded around that line. So at the red line, true signal and false signal are equal. When looking at lower frequencies, true signal will dominate over false signal.

AFAIK it takes about 35% modulation for detail to be perceived as sharp. A rule of thumb used to say that modulation at Nyquist (the red vertical line) needs to be below 10% to avoid visible aliasing.

Going to the left, real detail increases and false detail decreases. If you look at 2000 cy/PH, real detail may have 40% modulation and false detail 10% modulation.

16X pixel shift on the Sony A7 (red lines) compared to a single-shot image on the same camera and lens (blue lines). Note that the curves essentially overlap. The difference is that the Nyquist limit is shifted by doubling the sampling frequency, so we can use all of the MTF curve.

This is what the images look like, with the single-shot image upscaled to match the pixel-shifted image. Note that sharpness is similar, but detail is much cleaner on the 16X-sampled image. The smallest cleanly resolved group is marked on each image in red. The image needs to be viewed at actual pixels, linked below.

Halving the pixel pitch and keeping the fill factor would have the same effect as 16X pixel shift, but yield a slightly sharper image as the blur from the pixel aperture would be reduced. That is an argument for 1.9 micron pixels when used with this lens.
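To make the folding concrete, here is a minimal sketch (my own toy numbers on the same cy/PH scale, not Erik's measured data) of where a scene frequency lands after sampling, single shot versus half-pitch shift:

```python
# Energy above Nyquist is mirrored ("folded") back below it as false detail.

def folded_frequency(f: float, f_nyquist: float) -> float:
    """Apparent frequency after sampling (first fold only)."""
    if f <= f_nyquist:
        return f                   # resolved as real detail
    return 2.0 * f_nyquist - f     # aliased: mirrored around Nyquist

nyq_single = 2000.0   # cy/PH, single shot (illustrative)
nyq_shifted = 4000.0  # cy/PH, half-pitch shift doubles the sampling rate

for f in (1500.0, 2500.0, 3000.0):
    print(f"{f:.0f} cy/PH -> single shot: {folded_frequency(f, nyq_single):.0f}"
          f", pixel shift: {folded_frequency(f, nyq_shifted):.0f}")
# 2500 cy/PH folds down to 1500 in the single shot but is captured as real
# detail after the shift; exactly at Nyquist the true and false components
# coincide, which is why they are equal at the red line.
```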

Best regards

Erik
You're not reducing the pixel size.

You're taking more measurements using the same sensor with the same pixels and combining them.
This demonstrates the effect of oversampling the image to reduce aliasing. Note my writing:

'Halving the pixel pitch and keeping the fill factor would have the same effect as 16X pixel shift, but yield a slightly sharper image as the blur from the pixel aperture would be reduced.'
What you didn't mention was the aperture you were using, the distance of the object, whether you were only looking at the center portion of the image, etc. That's relevant, valuable data.
Neither matters for the MTF analysis, but the aperture was f/4 and the distance about 2.5 m. The images show actual pixel data from 240 images. That is the central part of the frame, but good lenses will deliver aliased images over a large part of the image.
And sure, you'll see "more" even if it's "only" a difference between 60MP and 75MP or 80MP or 100MP. But how much exactly you've gained is a good question.

But hey, I've no problem saying that you could theoretically capture 240MP using a wide enough aperture (in a controlled environment with a great lens on a tripod in the centre of the image). I've no problem saying that smaller pixels are a nice idea.

I'm just saying that for regular photography it's not going to matter and people who expect to see a huge difference will most likely be disappointed to see it's not going to look that different from their current setup and just complain that it takes their computers even longer to process the files, create previews etc.
The statement here is that:
  • 3.8 micron pixels show aliasing with good lenses at good apertures.
  • Oversampling with half-pitch pixel shift reduces or eliminates that aliasing.
I don't say that we need something like 240 MP for any images.

Best regards

Erik

--
Erik Kaffehr
Website: http://echophoto.dnsalias.net
Magic tends to disappear in controlled experiments…
Gallery: http://echophoto.smugmug.com
Articles: http://echophoto.dnsalias.net/ekr/index.php/photoarticles
 
I fully agree, but as f/4 would still be enough to theoretically cause aliasing on a 400MP sensor (approximately, for the GFX format), we'd still need to oversample to remove aliasing.

And since there are f/1.2 lenses available for the GF mount, and you could also adapt lenses with apertures beyond f/1.0, we're talking about oversampling into multiple gigapixels until all aliasing is gone.

The question is: where does it become a fool's errand?
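Sanity-checking that gigapixel figure with the same loose cutoff criterion as earlier in the thread (my arithmetic, assuming 550 nm light on the 44x33 format):

```python
# Pixel pitch at which Nyquist reaches the diffraction cutoff 1/(lambda*N),
# i.e. pitch = lambda * N / 2, and the resulting sensor resolution.

WAVELENGTH_UM = 0.55  # 550 nm

for f_number in (4.0, 1.2):
    pitch_um = WAVELENGTH_UM * f_number / 2.0
    mp = (44000.0 / pitch_um) * (33000.0 / pitch_um) / 1e6
    print(f"f/{f_number}: pitch {pitch_um:.2f} um -> ~{mp:,.0f} MP")
# f/4 -> ~1.10 um and ~1200 MP; f/1.2 -> ~0.33 um and ~13,000 MP, i.e. the
# "multiple gigapixels" mentioned above before all aliasing is theoretically gone.
```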
 
You have misstated the theorem. Perfect reconstruction is possible only for frequencies strictly below the Nyquist frequency.

You can prove this to yourself by imagining a sine wave at exactly the Nyquist frequency: sample it at its zero crossings and every sample is zero, so a flat DC signal is the most obvious interpretation.
🥸🥸🥸🥸

--
Greg Johnson, San Antonio, Texas
 
Greg, I know you're not stupid, but I did the work for you anyway:

(image attachment)


--
https://blog.kasson.com
 
That's all there is to it. Of course we can look at the details at 300%, but as we know from Jim, pixel peeping is unproductive, so 3x pixel peeping is at least 3x as unproductive hehe
Context here: when I said PP was unproductive, I was talking about PP for esthetics, not PP in order to make better edits.
 
Most photographers with a reputation for a photographic "eye" are scrupulous about culling the photos they are prepared to share publicly. Let's say, for the sake of debate, that a typical 'competent' photographer has a hit rate of something like 1-3%: images they'd be happy to see hanging on the wall. That means they must be throwing away or hiding 97-99% of their images. This tactic is effective at boosting the appearance of artistic genius and possession of an 'eye'.
AA famously said he'd be happy with 12 really good images a year.
Furthermore, if you look at the work of well-respected photographers, you tend to see that they cultivate a public style. Their known work tends to look like what you would expect from that particular photographer. Chances are they have lots of other work you don't get to see because, even though it is good, it doesn't fit the brand. Bruce Percy has a new blog post about choosing images that fit with other images, even when you don't think they are the best images: https://brucepercy.co.uk/blog/2024/3/9/free-to-compose-more-anonymous-less-obvious-shots It's about putting together collections of similar photos that work well together, where the whole is greater than the sum of the parts.

I get the impression that you are a prolific shooter and focus on culling the images that don't meet your technical standards rather than culling the images that aren't artistically great. You are happy to show almost everything you shoot without attempting to curate them for the biggest impact. In a way this is a more honest way of shooting, but if you really want to convince yourself you have an eye, ruthless culling is needed. Aim for the best 1% and pretend the other 99% don't exist - then you'll look really good :-)
I think culling is an essential part of photography, and that, done right, it will improve your photographic skills.
 
I used to be good with a slide rule. My first year in college in 1975 we had to use a slide rule in all my physics and calculus classes. Then the next year we used that first TI calculator, and the world was changed forever.
The HP Model 35 was what changed the world.
I was good with a slide rule but have forgotten how to use it.
 
This demonstrates the effect of oversampling the image to reduce aliasing.
If it's necessary for reducing the aliasing, it's not oversampling. Oversampling is sampling at higher than the Nyquist rate. We don't know if 16x is oversampling or not, and the answer will depend on your criteria for aliased information.


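A toy illustration of that definition (my own framing, not a claim about Erik's data): whether a given sampling rate counts as oversampling depends entirely on the band limit you assign to the lens-plus-aperture signal.

```python
# Oversampling means fs > 2*B for a band limit B; the verdict flips with B.

def is_oversampling(fs: float, band_limit: float) -> bool:
    return fs > 2.0 * band_limit

fs = 526.0  # cycles/mm, e.g. a half-pitch-shifted capture (illustrative)
for B in (150.0, 263.0, 455.0):  # candidate band limits in cycles/mm (assumed)
    print(f"B = {B:.0f} cy/mm: oversampling? {is_oversampling(fs, B)}")
# B=150 -> True; B=263 -> False (fs is exactly 2*B); B=455 -> False.
# The same capture is or is not "oversampled" depending on the criterion.
```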
 
