D3s and D3x at base ISO

Hi,
I did this here in January. It's been referred to several times already in this thread.

http://forums.dpreview.com/forums/read.asp?forum=1021&message=30759631
I have seen the samples you pointed to here...

But Thom also pointed me to an area of those samples posted by Lloyd Chambers (the scarf around the straw doll's neck), where the scarf indeed looks a bit smudged on the D3x sample compared to the D3s...

Now... I have no idea what is going on there...But I will stick to what I said initially in terms of "look" of the images... I am talking about that first impression here...They look very similar...I could not tell them apart at all...

That is the reason why I asked my initial question about whether or not people were seeing "similar" images from the two different cameras.

Follow Kristian's advice, and take a good look at the samples posted, if you have not done so already...

Regards

--
Renato

http://www.renato-lopes.com
http://www.renatolopesblog.com
 
I see a difference, but don't know which photo goes with which camera. Thanks, by the way, for the additional info on how the earlier six-panel lineup was processed...

Happy holidays
 
Thom, thank you for your contribution.

I didn't read the whole thread, but I think I am thinking along the same lines as you.

When I look at downsampled images, I am really speaking of the "quality" of the pixels; others may have a better description for it, like tonality or edge sharpness.

Downsampled images are mostly "cleaner" than out-of-the-box images from other cameras at that same resolution, as the latter may have less hard definition at pixel level, with detail "smudged" into the surrounding pixels.

But apart from that, I agree that people can hardly see the differences when the images are printed and viewed. I have an exhibition right now, showing prints up to about 28 x 40 inches from several cameras (D1x, D80 and D700, mostly D80).

Using low and high ISO, sometimes with extra noise added to give the image a softer, more emotional purity (Photoshop's "Add Noise" filter – uniform / monochromatic, 14%! The base picture was from a D700, ISO 100, studio flash, Voigtländer APO-Lanthar 90mm/3.5 used at f/5.6). Yes, there are differences in the images. But people looking at the images don't bother with the technical differences; they look at the expression.
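For anyone without Photoshop, the "Add Noise" treatment Leon describes can be approximated in a few lines of numpy. This is only a sketch of the idea (uniform, monochromatic noise at a given percentage of full scale), not a claim about how Photoshop implements its filter:

```python
import numpy as np

def add_uniform_mono_noise(img, amount=0.14, seed=0):
    """Rough analogue of Photoshop's Add Noise (uniform, monochromatic).

    img    -- float image in [0, 1], shape (H, W) or (H, W, 3)
    amount -- noise amplitude as a fraction of full scale (14% here)
    """
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]
    noise = rng.uniform(-amount, amount, size=(h, w))
    if img.ndim == 3:
        # Monochromatic: the same noise plane on every channel,
        # so the grain is luminance-only, like film grain.
        noise = noise[:, :, None]
    return np.clip(img + noise, 0.0, 1.0)

flat = np.full((8, 8), 0.5)
noisy = add_uniform_mono_noise(flat)
```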

I think that even if people try to separate the images, to guess which camera each was made with, they can't make the right guess.

People who don't know my equipment at all think I used an old-fashioned medium format roll film camera!! And they are even more surprised when I say it is digital, and that the B&W images on the exhibition wall are not "B&W prints" but RGB color prints on photographic paper, made by a normal photo lab.

I am amazed by the D3s examples on Lloyd Chambers' paid site. On one side, I hope to buy a 24 MP camera in the future, as years ago I was also used to the quality I got from 4x5 inch and even 8x10 inch sheet film (but for a different kind of pictures than today's imagery).

But on the other side, from what I see of the D3s, it has such nice rendering at high ISO that I think it is a far better choice for general imaging, especially when I see the results already in my own exhibition from cameras of definitely lower general quality.

Quality depends greatly on the ISO used and on the processing of the data, and far less on the "de facto" base resolution of these cameras. The D80 used at ISO 100 gives tremendously nice picture quality.

It is not necessary to use far more pixels at this base ISO. I think the 12 megapixel count of the D700 / D3 / D3s is more than adequate for most highly demanding pictures, especially when that quality extends across a wide range of ISOs, as on the D3s.

--
Leon Obers
 
In general, depending on the algorithm, downres'ing requires some extra sharpening afterwards, since pixel merging may result in loss of detail.
Downsizing is just another form of sampling, and comes with all the problems inherent with re-sampling. One difference, though, is that you're in control of many aspects of the resampling, so if you know what you're doing, you can mitigate those problems.
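As a concrete illustration of being "in control" of the resampling: with Pillow (assumed here), the choice of filter alone decides whether fine detail survives a downsize or gets destroyed. A one-pixel checkerboard is deliberately worst-case content:

```python
import numpy as np
from PIL import Image

# Worst-case content for resampling: a 1-pixel checkerboard, i.e. pure
# Nyquist-frequency detail at the input scale.
h, w = 300, 400
board = (np.indices((h, w)).sum(axis=0) % 2 * 255).astype(np.uint8)
src = Image.fromarray(board, mode="L")

# Being "in control" means choosing the filter. LANCZOS approximates the
# ideal low-pass and averages the pattern to grey; NEAREST does no
# filtering at all and simply throws pixels away.
good = src.resize((w // 2, h // 2), Image.LANCZOS)
crude = src.resize((w // 2, h // 2), Image.NEAREST)
```

The Lanczos result is a near-uniform grey, which is the correct average of the pattern; the unfiltered one keeps full-contrast pixels in the wrong places.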

--
Thom Hogan
author, Complete Guides to Nikon bodies (21 and counting)
http://www.bythom.com
 
Well, downsizing with bicubic interpolation in PS (which I gather is how most people resize) results in distinct sharpening-like artifacts along edges, and so far I haven't really seen a downres'd 21/24mp image that contained any more real detail than a native 12mp image.

Another point perhaps worth making is that there are several sharpening tools that can be used to produce very sharp images at the RAW level, which can't be used after downsizing. I use a little R-L deconvolution sharpening on my print 12mp RAW files, and I don't see how a downsized 24mp file could really match the level of detail I get that way. In PS, you lose detail when you downsize, and you exacerbate that problem when you sharpen.
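For readers unfamiliar with it, R-L (Richardson-Lucy) deconvolution iteratively re-estimates the image so that, re-blurred by an assumed point spread function, it matches the observed data. A minimal sketch (numpy/scipy assumed; real raw converters are far more sophisticated):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30):
    """Minimal Richardson-Lucy deconvolution sketch."""
    est = np.full_like(observed, 0.5)
    psf_flip = psf[::-1, ::-1]
    for _ in range(iterations):
        reblurred = fftconvolve(est, psf, mode="same")
        ratio = observed / np.maximum(reblurred, 1e-12)
        est = est * fftconvolve(ratio, psf_flip, mode="same")
    return est

# Demo: blur a hard edge with a small Gaussian PSF, then recover it.
img = np.zeros((32, 32))
img[:, 16:] = 1.0
x = np.arange(-3, 4)
g = np.exp(-x**2 / 2.0)
g /= g.sum()
psf = np.outer(g, g)
blurred = fftconvolve(img, psf, mode="same")
restored = richardson_lucy(blurred, psf)
```

The recovered edge comes out visibly steeper than the blurred one, which is the kind of detail recovery SB is talking about; after a downsize, the blur you would need to model is no longer the lens/AA-filter PSF but whatever the resampler left behind.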

SB
Thanks for info, similar to my experience with CNX2.

--
Renato.
http://www.flickr.com/photos/rhlpedrosa/
OnExposure member
http://www.onexposure.net/

Good shooting and good luck
(after Ed Murrow)
 
Seriously though, both of Simon's samples appear equally sharpened
"Equally sharpened" when? In all likelihood, the original image was sharpened before downsizing, then sharpened again. There are all kinds of tricks we can pull to deal with sampling issues. For instance, we can over-sharpen in a specific way before we downsize, and this produces better results than sharpening after the downsize.

--
Thom Hogan
author, Complete Guides to Nikon bodies (21 and counting)
http://www.bythom.com
 
In fact, one could measure the resulting noise and color shifts in both files, locally, after recovering detail in the darker areas. But I doubt anyone would take the trouble to do that...

One note, though: when I suggested that maybe there wasn't much of a difference, many here claimed the differences were obvious. And some used that as an argument for Nikon staying with Sony on sensor design and production, while I was saying maybe the D3s' sensor was good evidence for Nikon going it alone.
There was a discussion here some time ago in which many people claimed the D3x's low ISO files had more latitude (re PP'ing) than the D3's files. Would that be true when comparing to the D3s as well?
--
Renato.
http://www.flickr.com/photos/rhlpedrosa/
OnExposure member
http://www.onexposure.net/

Good shooting and good luck
(after Ed Murrow)
And exactly how would you measure that "more latitude" variable?
Lift the shadows without them becoming ugly (but I guess ugly is hard to measure).
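One crude way to put a number on "lift the shadows without them becoming ugly" would be to measure the signal-to-noise ratio of a uniform dark patch. The simulation below (invented numbers, not real camera data) also shows that the push itself cancels out, so what you are really measuring is the noise floor at capture:

```python
import numpy as np

def shadow_snr_after_lift(linear_patch, stops=3.0):
    """SNR of a uniform dark patch after pushing it N stops in linear
    space. The gain multiplies signal and noise alike, so it cancels:
    the result is simply the patch's SNR at capture, which is what
    limits how far the shadows can be lifted before turning ugly."""
    lifted = linear_patch * 2.0**stops
    return float(lifted.mean() / lifted.std())

# Two simulated sensors with different read noise, same mean signal.
rng = np.random.default_rng(1)
signal = 20.0
clean_patch = signal + rng.normal(0.0, 2.0, 10_000)  # low read noise
noisy_patch = signal + rng.normal(0.0, 8.0, 10_000)  # high read noise
```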
--
Renato.
http://www.flickr.com/photos/rhlpedrosa/
OnExposure member
http://www.onexposure.net/

Good shooting and good luck
(after Ed Murrow)
 
It's well worth it, and in the next 30 mins you'll have all you need instead of reading countless useless posts arguing the fact. It's common sense that the 24MP sensor from the D3x will resolve more detail; it's simple math.
I have never disputed that. However, Lloyd has downsized the D3x images to compare against the D3s images. Once we introduce a re-sampling into the picture, other things start to happen. My objection was to the OP referring to those other things as "resolution." They may not be. Indeed, most of the time they aren't. In Lloyd's examples I can point to several places where the D3s is "out-resolving" the D3x. Thus, we have to consider what happened in the downsampling.

And, no, this is not an esoteric discussion that has no bearing on reality. You can print a D3s image on an inkjet to about 22" at 188 dpi, which is about the minimum everyone agrees you can give it before you start to visibly see differences. Thus, if you don't print above 22", the question is why you would need the "resolve more detail" aspect of the D3x. This is the reason why elsewhere I ask how many > 24" prints you've made in the last year, and how many you'll make in the next year. If the answer is zero, the discussion is moot for you. If it is a low number, it still may be moot. If it's your lifeblood work, like Roman's, the discussion is very relevant but needs to be presented the other way: the D3s needs to be upsized to the D3x level.
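Thom's print-size figure is easy to verify: the D3s delivers 4256 x 2832 pixels, and dividing the long edge by the target print resolution gives the width at which each image pixel maps to roughly one printed dot:

```python
# D3s output is 4256 x 2832 pixels.
d3s_long_edge_px = 4256
target_dpi = 188

print_width_in = d3s_long_edge_px / target_dpi
# about 22.6 inches, matching the "about 22 inches at 188 dpi" in the post
```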

--
Thom Hogan
author, Complete Guides to Nikon bodies (21 and counting)
http://www.bythom.com
 
Here's a more reasonable example, D3x vs D700. D700 upsampled to 24MP, and the D3x at 100% pixel-to-pixel mapping. On an A3 fine art print or A2 magazine quality print I'd be hard pressed to tell the difference.
The difference is easily seen, and mostly due to the resizing you used, which produced very visible stairsteps.

--
Thom Hogan
author, Complete Guides to Nikon bodies (21 and counting)
http://www.bythom.com
 
You're wasting your time trying to make yourself believe that the D3s at optimal ISO is anywhere near the resolution and overall IQ of the D3x; it simply isn't possible.
You seem to imply that more pixels means more IQ? I don't. IQ is DR, tonality, colour, plus resolution and the absence of noise that compromises image quality.

If you say that the D3x shows more detail in the files, everyone will agree. If you imply that general IQ (as I defined it above) is also higher in the D3x, I would say that is still to be found out. But having seen the examples from the D3s, I doubt it, and think it is possible that it is even the other way round.

The pixels of the D3x are much smaller than in the D3s, which normally means lower IQ, not higher. Those are the basics.

--
regards,
Bernie
 
I did this here in January. It's been referred to several times already in this thread.

http://forums.dpreview.com/forums/read.asp?forum=1021&message=30759631
I have seen the samples you pointed to here...

Follow Kristian's advice, and take a good look at the samples posted, if you have not done so already...
Bearing in mind that Kristian has much the same opinion as me regarding the performance of the D3x, why would I pay $29 to see more comparative images from both cameras, when I have thousands of images of my own to compare?

Perhaps photographers on this forum should become confident enough in their own knowledge and experience to rely on their own judgement, instead of slavishly following the 'advice' handed down from on high by the self-appointed guardians of the secrets, who, although apparently experts in everything to do with photography, digital technology, software engineering, computer programming, and for all I know the meaning of life, never seem to actually use their cameras for anything as mundane as taking photographs.

Here's my 'advice', given in a spirit of goodwill: go and shoot some pictures, and stop worrying about what the white-coated scientists are getting up to in their secret laboratories.
--
Lightbox Photography : http://www.the-lightbox.com
Aerial Photography : http://www.aerial-photographer.co.uk
Stock: http://photo.the-lightbox.com/
 
There are some more or less weird discussions regarding "resolution" in this thread, but if we keep things reasonably simple and think of resolution as the ability to resolve real (as in the pictured subject) detail (as in distinguishable from artifacts), then the result is obvious to me:

I downloaded Imaging Resource's raw files from the D3x at ISO 100 and the D3s at ISO 200.

After developing in dcraw, and adding a little sharpening to both, I re-sampled the D3x file to D3s size using Lanczos3.

Viewing them side by side it is very clear that the D3x reduced file resolves more actual detail than the D3s untouched file. This is easily seen e.g. in the "scale disc" to the right (I take it you guys know exactly what I mean) or in some bottle labels.

And... this even though the D3x file is approximately half the size of the D3s file, due to the different programs' ways of saving to JPEG at max quality (resizing was done in FastStone).
 
Hi,
Bearing in mind that Kristian has much the same opinion as me regarding the performance of the D3x, why would I pay $29 to see more comparative images from both cameras, when I have thousands of images of my own to compare?
Point taken... But I referenced to them again, because those samples are the center of what we were discussing here... Just thought you might want to look at what I was looking at directly and why the comments both from myself and Thom came about...
Perhaps photographers on this forum should become confident enough in their own knowledge and experience to rely on their own judgement, instead of slavishly following the 'advice' handed down from on high by the self-appointed guardians of the secrets, who, although apparently experts in everything to do with photography, digital technology, software engineering, computer programming, and for all I know the meaning of life, never seem to actually use their cameras for anything as mundane as taking photographs.
Sure...But a lot of discussion has been raised around this issue... Whether or not you disagree or agree with the points being made is another issue...
Here's my 'advice', given in a spirit of goodwill: go and shoot some pictures, and stop worrying about what the white-coated scientists are getting up to in their secret laboratories.
Thanks very much for your advice... I have no worries on this one...

I shall not die (I hope...) for thinking that the D3x and D3s images kinda like have a similar look... It would be tragic if I did...

Regards
--
Renato

http://www.renato-lopes.com
http://www.renatolopesblog.com
 
Well, he could well be. The downsampled 24MP image should have a flatter spatial frequency response (which might be interpreted as having more resolution - it would make the MTF50 of the combined system occur at a higher frequency).
Ah, now we're getting somewhere. This is indeed how we try to figure out "resolution" in the lab. The question still remains as to whether that's what we're seeing in the examples. And my quick analysis is no, it is not. They do not seem to be prepared in a way that would allow us to say that the downsampled D3x is out-resolving the D3s. There are too many areas where the downsampling should be better and isn't.
Absolutely. To do this properly, the downsample needs a 'brick wall' low pass filter at the output Nyquist frequency. A Lanczos filter is the closest to that ideal. Most downsamplers don't pay attention to that, and the resultant low pass action might be more gradual than a physical AA filter.
Of course, as you suggest, to do this it is critical to downsample properly, with that sharp filter effect, but it remains true that a 24MP camera can achieve higher resolution at 12MP than a 12MP one can. It is akin to the effect of removing the AA filter, without the aliasing.
Yes, it should be. But I see aliasing in the downsampled images.
Which is what happens if the downsampling filter is so gradual that there is still significant frequency content above the output Nyquist. This is one reason why it's recommended to pre-blur (apply a low pass filter) before downsampling, if you don't know the characteristics of the downsampler.
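A sketch of that pre-blur step (numpy/scipy assumed; the sigma heuristic is a rough guess tied to the reduction factor, not a derived optimum). A one-pixel checkerboard makes the aliasing failure of naive decimation obvious:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preblur_downsample(img, factor=2, sigma=None):
    """Low-pass first, then decimate, for when you can't trust the
    downsampler's own filtering."""
    if sigma is None:
        sigma = factor / 2.0  # heuristic: cutoff near the output Nyquist
    return gaussian_filter(img, sigma=sigma)[::factor, ::factor]

# Pure Nyquist-frequency content: a 1-pixel checkerboard.
cb = (np.indices((64, 64)).sum(axis=0) % 2).astype(float)

aliased = cb[::2, ::2]                   # naive decimation samples only one
                                         # parity: the pattern vanishes
safe = preblur_downsample(cb, factor=2)  # lands near the true 0.5 mean
```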

I get the impression that most resampling algorithms have been targeted at upsampling, since back when camera pixel counts were lower than output pixel counts, that was what was generally needed. Now that this situation is reversed, the emphasis needs to be on resampling methods which produce the required sharp cutoff.
 
Well, he could well be. The downsampled 24MP image should have a flatter spatial frequency response (which might be interpreted as having more resolution - it would make the MTF50 of the combined system occur at a higher frequency).
Ah, now we're getting somewhere. This is indeed how we try to figure out "resolution" in the lab. The question still remains as to whether that's what we're seeing in the examples. And my quick analysis is no, it is not. They do not seem to be prepared in a way that would allow us to say that the downsampled D3x is out-resolving the D3s. There are too many areas where the downsampling should be better and isn't.
Of course, as you suggest, to do this it is critical to downsample properly, with that sharp filter effect, but it remains true that a 24MP camera can achieve higher resolution at 12MP than a 12MP one can. It is akin to the effect of removing the AA filter, without the aliasing.
Yes, it should be. But I see aliasing in the downsampled images.
In general, depending on the algorithm, downres'ing requires some extra sharpening afterwards, since pixel merging may result in loss of detail. This is very common when we reduce resolution for internet posting.
As I suggested in my answer to Thom, if using a properly designed downsampling algorithm, with a flat passband and sharp cutoff, no sharpening should be necessary.
Thus, in principle, it could be that a native 12MP image could be better than one downres'ed from 24MP. I've never seen careful testing of that, with different resolution-reduction procedures and controlled sharpening applied to the images.
I think no-one has thought hard about the benefits to be had from optimised downsampling, simply because it is only recently that we have had cameras with enough pixels to require downsampling for even mid size (A3 or so) prints. Up until then, the problem was upsampling, and that's where all the development has gone.
 
I think no-one has thought hard about the benefits to be had from optimised downsampling, simply because it is only recently that we have had cameras with enough pixels to require downsampling for even mid size (A3 or so) prints. Up until then, the problem was upsampling, and that's where all the development has gone.
Apparently people only cared how their images look in print and not on screen, since even for full-screen viewing, camera images have had to be downsampled for quite a while.
 
I think no-one has thought hard about the benefits to be had from optimised downsampling, simply because it is only recently that we have had cameras with enough pixels to require downsampling for even mid size (A3 or so) prints. Up until then, the problem was upsampling, and that's where all the development has gone.
Apparently people only cared how their images look in print and not on screen, since even for full-screen viewing, camera images have had to be downsampled for quite a while.
Interesting that you raise that. I suspect that a lot of the comments about new high MP cameras not being as good as older ones are caused by people letting the OS do the downsampling; the screen drivers often do it very crudely, sometimes with nearest neighbour. As a test, properly downsample an image to a small window size and display it next to one displayed at that size by the OS; very often the difference in quality is amazing.

I suspect with an update of screen drivers to include some well designed downsampling algorithms, a lot of the apparent problems of high MP cameras would disappear instantly.
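The effect is easy to reproduce. The sketch below (Pillow assumed) downsizes one-pixel hairlines, the sort of detail a crude screen-side scaler mangles: nearest neighbour can only copy or drop whole pixels, so lines vanish or jitter, while Lanczos averages them into faithful grey lines:

```python
import numpy as np
from PIL import Image

# One-pixel-wide white hairlines on black, every 7 rows.
arr = np.zeros((300, 300), dtype=np.uint8)
arr[::7, :] = 255
src = Image.fromarray(arr, mode="L")

# A crude scaler (nearest neighbour) versus a proper filter (Lanczos).
crude = np.asarray(src.resize((100, 100), Image.NEAREST))
proper = np.asarray(src.resize((100, 100), Image.LANCZOS))

# NEAREST can only reproduce values already present (0 or 255);
# LANCZOS produces intermediate greys where lines were averaged in.
```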
 
I think no-one has thought hard about the benefits to be had from optimised downsampling, simply because it is only recently that we have had cameras with enough pixels to require downsampling for even mid size (A3 or so) prints. Up until then, the problem was upsampling, and that's where all the development has gone.
Apparently people only cared how their images look in print and not on screen, since even for full-screen viewing, camera images have had to be downsampled for quite a while.
Interesting that you raise that. I suspect that a lot of the comments about new high MP cameras not being as good as older ones are caused by people letting the OS do the downsampling; the screen drivers often do it very crudely, sometimes with nearest neighbour. As a test, properly downsample an image to a small window size and display it next to one displayed at that size by the OS; very often the difference in quality is amazing.

I suspect with an update of screen drivers to include some well designed downsampling algorithms, a lot of the apparent problems of high MP cameras would disappear instantly.
Why would the OS do the downsampling (except when using applications that came bundled with the OS, like Quick Look or Preview on Macs)? Shouldn't it be the application itself, be it a browser, an editor (e.g., PS), or DAM software like Aperture or Lightroom?
 
Hi Simon...
Perhaps photographers on this forum should become confident enough in their own knowledge and experience to rely on their own judgement, instead of slavishly following the 'advice' handed down from on high by the self-appointed guardians of the secrets,
I seem to have missed this "slavishly" comment of yours...

I was the first one to say in this thread, after my original post, that I could see more resolution on the D3x image on the samples Lloyd posted...

Now I based my observation on the cropped samples posted by Lloyd Chambers.

Thom then comes in and points me to a different area of the D3X image where the scarf on the straw doll really looks smudged up compared to the D3s image.

There is really no slave-driving needed from Thom or anyone else here to tell my pair of eyes that the scarf is indeed smudged... Now... I can't explain that... Maybe you can explain what I am seeing?... That is the reason why I asked you to go and see the samples...

On this one it really is not a matter of knowledge or confidence... It's as simple as what the eyes see... So tell me... What do yours see from those samples?...

Regards

--
Renato

http://www.renato-lopes.com
http://www.renatolopesblog.com
 
-Which is mostly a load of BS. That stairstepping is what you get when you try to sharpen an upsampled picture to get somewhere close to the edge acuity of a higher resolution camera. The "detailing", though, will never get close. It doesn't matter if you have black-ops or NSA-spec software; you can never recreate the things lost close to, or over, Nyquist from just one shot, even though it is indeed possible to do so with supersampling multi-shot models. I've written such applications myself.

In this particular shot, that's the fiddle's strings and the detailing in the pants and hair/beard. The detail simply doesn't exist in the 12MP shot, so how would another sampling algorithm improve that?

Comparing the D3 and the D3x, I'd say that the D3x has an edge when it comes to scalability: I can scale a D3x pixel more than I can scale a D3 pixel before it starts to look "artificial". From this, I'd say that the difference is actually MORE than what the 12/24 difference would imply. The D3s is reportedly an improvement in this regard; I wouldn't know, since I haven't tested the D3s yet or started to receive any real amount of D3s files.

This resolution advantage is only there up to ISO 400; it all goes away above ~ISO 800, as you say. What people experience as "detail" is then mostly "false detail" induced by noise, just as most AA-less cameras fake sharpness and detail by inducing false detail through aliasing faults and false colour artefacts. This is quite easily measured by doing dE measurements of scenes.

I've done the experiment you did too, several years ago. Add just a little bit of the right kind of high frequency noise to a very clean shot, and most people not in the printing business will go "oooohh! aaaahhhh! This is so much better, so much more detailed...!" :-) Which is all well and dandy for the one-off example, but it in no way makes the shot more scalable. False detail will always poke you in the eye when you magnify.

Which may be why the M8 and M9 totally fail to impress me: I just see a lot of aliasing and false colour, which just begs for a 2x downsample to make the picture realistic.
 
