D3s and D3x at base ISO

...
The graph shows the lens MTF (the one we're interested in here is the f/2 line) ...
I would imagine that you're using the F#=2 graph in order to demonstrate the effect, showing the spread in the curves vs. F#=22 pretty well, but on the other hand I don't think you'll have an F#=2 lens that is diffraction limited! So the real life curves might be somewhere in between, thereby compressing the curves.

Also, for system MTF, I would think you'd want to have the sensor MTF in there somewhere? Or is that already contained in the OLPF curves? BTW those AA filter curves are pretty cool (I've never seen these before). Are those pretty typical of what AA manufacturers can do?
... with a properly designed digital filter with a sharp cutoff (as opposed to the gradual one of the AA filter)
Do those exist and if so can they give a nice squared-off response?

Thanks,

Chris
 
...
The graph shows the lens MTF (the one we're interested in here is the f/2 line) ...
I would imagine that you're using the F#=2 graph in order to demonstrate the effect, showing the spread in the curves vs. F#=22 pretty well, but on the other hand I don't think you'll have an F#=2 lens that is diffraction limited! So the real life curves might be somewhere in between, thereby compressing the curves.
As you suggest, the effect depends on all of the various MTFs contributing significantly. Wide open, where the lens is aberration limited, and stopped well down, where it's diffraction limited, the lens MTF is more likely to be the limiting factor. In the lens's 'sweet spot', it's most likely that the OLPF will be the limiting factor.
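To make the composition concrete, here is a minimal Python sketch of a system MTF built as the product of component MTFs, using the standard diffraction-limited formula for an incoherent circular aperture. The OLPF curve is a hypothetical cosine-shaped response with an assumed null at 90 cycles/mm; none of the numbers are taken from the graphs being discussed.

import numpy as np

def diffraction_mtf(freq, f_number, wavelength_mm=550e-6):
    """Diffraction-limited MTF of an incoherent, circular-aperture lens.
    freq in cycles/mm; cutoff frequency is 1/(wavelength * f-number)."""
    fc = 1.0 / (wavelength_mm * f_number)
    r = np.clip(freq / fc, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(r) - r * np.sqrt(1.0 - r * r))

freqs = np.linspace(0.0, 200.0, 401)          # cycles/mm
lens_f2  = diffraction_mtf(freqs, 2.0)        # stays high out to 200 cy/mm
lens_f22 = diffraction_mtf(freqs, 22.0)       # cutoff ~82 cy/mm
olpf = np.abs(np.cos(np.pi * freqs / 180.0))  # hypothetical OLPF, null at 90 cy/mm
system_f2 = lens_f2 * olpf                    # system MTF = product of components

At f/2 the diffraction term barely drops over this range, so the OLPF dominates; at f/22 the lens term is the limit, which is the compression of the curves described above.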
Also, for system MTF, I would think you'd want to have the sensor MTF in there somewhere? Or is that already contained in the OLPF curves? BTW those AA filter curves are pretty cool (I've never seen these before). Are those pretty typical of what AA manufacturers can do?
I don't know. They all work on the same principle. I took this curve from an optics textbook; I don't have the reference handy at the moment. The sensor MTF depends on the sampling window, which is the size of the light-sensitive part of the pixel (not the pitch). I haven't factored that in at all. Maybe I will when I get the chance.
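For reference, the sampling-window contribution described here is the Fourier transform of the pixel's active area; for a square window it's a sinc. A minimal sketch, where the pitch and fill factor are assumptions for illustration, not measured values for any camera:

import numpy as np

def pixel_aperture_mtf(freq, active_width_mm):
    """MTF of a square sampling window: |sinc(w * f)|, using the
    normalized sinc (np.sinc(x) = sin(pi x)/(pi x)). freq in cycles/mm."""
    return np.abs(np.sinc(active_width_mm * np.asarray(freq)))

pitch_mm = 5.9e-3      # assumed D3x-class pixel pitch
fill = 0.8             # assumed linear fill factor of the sensitive area
freqs = np.linspace(0.0, 200.0, 401)
sensor = pixel_aperture_mtf(freqs, fill * pitch_mm)

This factor would simply multiply into the lens and OLPF curves, like the other components.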
... with a properly designed digital filter with a sharp cutoff (as opposed to the gradual one of the AA filter)
Do those exist and if so can they give a nice squared-off response?
You can't get a perfect filter, because you'd need an infinitely large convolution kernel to do it. The Lanczos filter gives the best approximation: the larger the convolution kernel you use, the better the approximation becomes. This would be one advantage of out-of-camera processing: more memory and processing power to produce a better approximation to the brick-wall filter.
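For the curious, the Lanczos kernel mentioned above is just a sinc windowed by a wider sinc; increasing the window parameter grows the kernel and sharpens the cutoff, which is the trade-off described. A minimal sketch:

import numpy as np

def lanczos_kernel(x, a=3):
    """Lanczos-a kernel: sinc(x) windowed by sinc(x/a), zero outside |x| < a.
    Larger 'a' means a bigger convolution kernel and a sharper cutoff."""
    x = np.asarray(x, dtype=float)
    k = np.sinc(x) * np.sinc(x / a)
    return np.where(np.abs(x) < a, k, 0.0)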
 
I think no-one has thought hard about the benefits to be had from optimised downsampling, simply because it is only recently that we have had cameras with enough pixels to require downsampling for even mid-size (A3 or so) prints. Up until then, the problem was upsampling, and that's where all the development has gone.
Apparently people only cared how their images looked in print and not on screen, since images have had to be downsampled even for full-screen display for quite a while now.
Interesting that you raise that. I suspect that a lot of comments about new high-MP cameras not being as good as older ones are caused by people letting the OS do the downsampling, and the screen drivers often do it very crudely, sometimes with nearest neighbour. As a test, properly downsample an image to a small window size and display it next to one scaled to that size by the OS; very often the difference in quality is amazing.
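This comparison is easy to reproduce yourself. A quick sketch with Pillow, where the filenames and window size are hypothetical:

from PIL import Image   # Pillow

img = Image.open("test_shot.jpg")              # hypothetical source image
fit = (800, 533)                               # hypothetical window size

crude = img.resize(fit, Image.NEAREST)         # what a crude driver might do
good  = img.resize(fit, Image.LANCZOS)         # proper windowed-sinc downsample

crude.save("crude.png")
good.save("good.png")                          # compare the two side by side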

I suspect that with an update of screen drivers to include some well-designed downsampling algorithms, a lot of the apparent problems of high-MP cameras would disappear instantly.
Why would the OS do the downsampling (except when using applications that came bundled with the OS, like Quicklook or Preview on Macs)? Shouldn't it be the application itself, be it a browser, an editor (e.g., PS), or DAM software like Aperture or Lightroom?
I suppose it depends on what you call the 'OS'. Many software writers are going to use things like ActiveX or OpenGL to do these functions, rather than code them from scratch. These in turn depend on the display hardware vendors' routines. Perhaps not strictly the OS, but not part of the app, either.
 
Which is mostly a load of BS. That stairstepping is what you get when you try to sharpen an upsampled picture to get somewhere close to the edge acuity of a higher-resolution camera.
There are ways to upsize without getting stairstepping; my comment was mostly that this was a pretty poor upsize, and it was easy to recognize it as such.
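For instance, upsizing with a smooth windowed-sinc kernel rather than nearest neighbour avoids the stair-stepping (though it won't invent missing detail). A hypothetical Pillow sketch, with the source file and scale factor assumed:

from PIL import Image

img = Image.open("d3_frame.jpg")               # hypothetical 12MP source
big = img.resize((img.width * 2, img.height * 2),
                 Image.LANCZOS)                # smooth kernel, no stair-steps
big.save("upsized.jpg")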
The "detailing" though will never get close.
It's funny about the word "never": most people who use it someday discover that they were wrong. What you really mean is that you can't see any way that it will get better.
The detail simply doesn't exist in the 12MP shot - so how will another sampling algorithm improve that?
I can think of a way. Maybe I should patent it ;~).
Comparing the D3 and the D3x, I'd say that the D3x has an edge when it comes to scalability - I can scale a D3x pixel more than I can scale a D3 pixel before it starts to look "artificial".
Using current common software algorithms, I'd agree (though I haven't checked yet to see what the latest Genuine Fractals would do).
This resolution advantage is only there up to ISO400; it all goes away above ~ISO800, as you say. What people experience as "detail" is then mostly "false detail" induced by noise, just like most AA-less cameras fake sharpness and detail by inducing false detail through aliasing faults and false-colour artefacts.
Agreed.
False detail will always poke you in the eye when you magnify.
Actually, it has a tendency to poke you when you do any scaling.
Which may be why the M8 and M9 totally fail to impress me - I just see a lot of aliasing and false colour.
I'm not sure we've seen an optimized M9 conversion yet. Besides, I want a B&W only M9. That would make me rethink a lot of my landscape work.

--
Thom Hogan
author, Complete Guides to Nikon bodies (21 and counting)
http://www.bythom.com
 
Hardly. Thom is like my favorite professors from many years ago: sometimes prickly, always demanding, but oh-so smart and essential for a full understanding of the subject matter.
 
In general, depending on the algorithm, down-res'ing requires some extra sharpening afterwards, since pixel merging may result in loss of detail.
Downsizing is just another form of sampling and comes with all the problems inherent in resampling. One difference, though, is that you're in control of many aspects of the resampling, so if you know what you're doing, you can mitigate those problems.
At some stage in your workflow, you're inevitably going to resample, either up or down. The best results are gained by only doing it once and keeping control of it. This means resampling to the target output pixel count (and not letting printer or screen drivers do it for you) and knowing how to resample up or down optimally.
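As a concrete illustration of that workflow: compute the target pixel count from the output size, resample once with a good kernel, and apply the light output sharpening yourself. The print size, ppi, filenames, and sharpening amounts below are assumptions for the sketch, not recommendations from the post:

from PIL import Image, ImageFilter

target_px = (12 * 300, 8 * 300)                # 12x8" print at 300 ppi (assumed)

img = Image.open("master.tif")                 # hypothetical master file
out = img.resize(target_px, Image.LANCZOS)     # one controlled resample
# light output sharpening to restore edge contrast lost in pixel merging:
out = out.filter(ImageFilter.UnsharpMask(radius=1, percent=80, threshold=2))
out.save("print_ready.tif")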
 
Hardly. Thom is like my favorite professors from many years ago: sometimes prickly, always demanding, but oh-so smart and essential for a full understanding of the subject matter.
+1!
 
