Patco: Dear Nikon,
Please exert less effort on engineering 3rd party incompatibilities, and more effort on creating a high-end DX D300 replacement.
If, as Patco and others are implying, Nikon engineered this incompatibility intentionally, then people have every right to be upset (even if Nikon does arguably have every right to do what they did). That would mean Nikon was intentionally devaluating people's existing lens collection, which sounds like a great reason to think about switching to another system.
Even if this wasn't intentional, Nikon has to be aware that breaking compatibility with existing lenses, even third party lenses, isn't exactly an encouraging move for people who've bought into the system or are thinking about doing so.
It would be nice if you could post just one article for new product launches, so all the comments ended up in the same place. Three is a little ridiculous (though I realize each one is technically something different).
mpgxsvcd: Making it only shoot photographs doesn't mean it shoots better photographs. It just means you took out something (video) that every other manufacturer wouldn't dream of leaving out, and called it retro.
People will buy this camera but they will be wasting their money.
I don't see any good reason to leave out video functionality at this point. On the D800, it only adds a couple extra controls which really don't get in the way when you're not using them, and the functionality is nice to have on occasion.
Jogger: Looks like a D700 successor. Also, love the retro NIKON lettering.
Why do you say that (implying the D800 isn't really a D700 successor)? I would say it looks like a redesigned D800, which is fine. If this looks like what I imagine and at least has feature parity with the D800, I would certainly buy it over a D800.
ZAnton: I assume there are too many variables to calculate a good result. For example, LoCAs are distance-dependent, so unless we know the distance to ALL objects in the photo, we can't calculate back the initial image. It's similar with a non-flat focus plane (field curvature). If the object is blurred by that, one must know the distance to the object for the reverse calculation of the "ideal" image.
You don't even have accurate distance information within the focus "point", since it's actually a region of the image, some of which is likely to be out of focus.
harold1968: This is effectively "inventing" detail where you have some idea of the reasons why that detail was not recorded properly in the first place.
IMHO this is good for snaps but useless for serious photography, as the original detail, as much of it as possible, is what you need and indeed what you wanted to take the picture for
Having said that, this could lead to some more advanced techniques for improving a photo during PP
That's not necessarily an accurate description. Yes, it's possible to irrecoverably destroy information in a photo. A perfect Gaussian blur or aliasing are examples of this. However, a lot of the perceived loss of detail here doesn't necessarily destroy it, but just obscures it. I don't really see a problem with a mathematical reconstruction of details, as long as it isn't recreating information that never actually existed in the original photo.
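To illustrate the distinction, here's a minimal numpy sketch (assuming the blur kernel is known exactly and its spectrum never hits zero): Wiener deconvolution recovers detail that the blur merely obscured. The signal, kernel, and regularization constant are all illustrative choices, not anything from a real camera pipeline.

```python
import numpy as np

# Sharp "detail": a spike train of 16 points.
n = 256
signal = np.zeros(n)
signal[::16] = 1.0

# A known small box blur, applied by circular convolution in the
# frequency domain. The blur obscures the spikes but, because the
# kernel's spectrum has no exact zeros here, destroys nothing.
kernel = np.zeros(n)
kernel[:5] = 1.0 / 5.0
H = np.fft.fft(kernel)
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * H))

# Wiener filter: conj(H) / (|H|^2 + eps) avoids dividing by near-zero.
restored = np.real(np.fft.ifft(np.fft.fft(blurred) * np.conj(H)
                               / (np.abs(H) ** 2 + 1e-6)))

print(float(np.max(np.abs(restored - signal))))   # tiny reconstruction error
```

A perfect Gaussian blur behaves differently only in degree: its spectrum decays to numerical zero at high frequencies, so those frequencies really are gone, which is exactly the "irrecoverably destroyed" case mentioned above.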
I hope Sony has bulletproof moiré correction for the A7R. I look forward to some thorough testing. (Personally, I'll take a slight loss in sharpness over sampling artifacts any day.)
misha marinsky4: I read diglloyd's review: "Observe the fine details within the iris of the cat’s eyes as well as the small hairs"
The image's URL is: http://diglloyd.com/articles/ZeissZ/images-ZeissZ-Otus-55f1_4/_D8E5727-ap1.jpg
Here's my photograph of my cat:
I used a Fuji E550.
It doesn't exactly take much to produce a completely sharp image at that resolution. I'm fairly certain my cell phone can do that (albeit with much larger depth of field).
miejoe: Wonderful. Time to buckle in and prepare for yet another format war. When either XQD or CFast loses the battle, we can look forward to wasted investment dollars, obsolete equipment, and the consumer paying the price, as usual.
My experience is that by the time I buy a new camera, my old memory cards are already obsolete even if they're supported by the new one. (i.e. I can get something so much better for a fraction of the camera's price that it would just be silly not to.)
CameraLabTester: I would get one, just to place it beside the 256 MB card.
There's space for a third... 256 TB, then it goes on a picture frame, and will hang on a wall.
Please go look up the sizes (physical and memory) of currently available flash chips and let us know which ones you would put in a 1TB CF card. (Or are you going to claim any of the flash memory manufacturers wouldn't jump at the chance to sell something with four times the memory density of any of their competitors?)
I think the description of D800 vs. D800E AA filters in the article isn't quite correct. Both have two layers, but the layers in the D800 are oriented at 90° so that they split light in one direction and then again in the other. (After all, if you only "blur" along one axis, you'll still get aliasing along the other.) The D800E has the layers at 180°, so they split and then recombine the image.
In the same way, the diagrams are incomplete (presumably for the sake of simplicity). They only show AA along one axis.
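The two arrangements can be sketched at the intensity level (ignoring the polarization optics, and using a one-pixel displacement for illustration) by tracking the ordinary and extraordinary rays separately:

```python
import numpy as np

img = np.zeros((8, 8))
img[4, 4] = 1.0   # a single point of light

# Layer 1 splits light into an ordinary ray (o, stays put) and an
# extraordinary ray (e, displaced one pixel horizontally).
o = 0.5 * img
e = 0.5 * np.roll(img, 1, axis=1)

# D800-style: second layer at 90 deg displaces vertically -> 2x2 four-spot blur.
d800 = 0.5 * o + 0.5 * np.roll(o, 1, axis=0) \
     + 0.5 * e + 0.5 * np.roll(e, 1, axis=0)

# D800E-style: second layer displaces the extraordinary ray back,
# recombining the image (net effect is close to no filter at all).
d800e = o + np.roll(e, -1, axis=1)

print(np.count_nonzero(d800))    # 4 spots
print(np.allclose(d800e, img))   # True: the point is recombined
```

The four-spot pattern is why the D800 version suppresses aliasing along both axes, while the D800E's recombination restores (most of) the original point.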
AstroStan: It seems to me that variable AA can be done in software (firmware) via pre-de-Bayer color-specific Gaussian blur. Physical blurring can be very closely emulated in software. Software AA would slow down image processing and might noticeably decrease the burst rate, though turning it off would not interfere with RAW throughput.
To clarify, once you've thrown away half or three quarters of the pixels that would make up a complete image for any given color, there is no way to restore the information. The point of an AA filter is to blur the image before it passes through the Bayer filter, making each pixel work like one covering between two and four times the area (but without actually capturing two to four times the light).
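A toy 1-D demonstration of why the blur has to happen before sampling (the pattern and two-pixel blur are illustrative; a real sensor samples in 2-D):

```python
import numpy as np

n = 64
x = np.arange(n)
# Detail at the sensor's full resolution: a pattern flipping every pixel,
# finer than an every-other-pixel color channel can represent.
scene = 0.5 + 0.5 * np.cos(np.pi * x)   # values alternate 1, 0, 1, 0...

# One color channel of a Bayer sensor sees only every other pixel.
no_aa = scene[::2]            # aliases to a false solid tone

# An AA filter spreads each point over ~2 pixel widths first.
blurred = 0.5 * (scene + np.roll(scene, 1))
with_aa = blurred[::2]        # the unresolvable detail averages away

print(no_aa[:4])     # [1. 1. 1. 1.]  -- aliased to flat white
print(with_aa[:4])   # [0.5 0.5 0.5 0.5]  -- correct average gray
```

No software blur applied after the `[::2]` sampling step can tell the aliased flat-white channel apart from a genuinely white subject, which is the sense in which the information is gone.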
peevee1: Unfortunately, an LCD panel is not perfectly transparent even when "open", robbing some light.
This is not an LCD panel, just LC. The polarization filter that blocks half of the light even when an LCD is on isn't a part of this design.
MJSfoto1956: interesting that the DxOmarks for the D800 + Bigma are so different than the D600. This is probably due to the fact that the Bigma is 1/2 stop slower at each focal length than the Nikkor. If we are to trust this measurement, then I would probably pass on the Bigma for use with my D800. But the D600 looks like it could be a good match.
The lens doesn't actually get any worse on the D800, though. There are just (much more expensive) alternatives that can do better.
yabokkie: don't know if it's a good idea for Canon but I'd want to see Canon cooperate with ML and eventually provide us a sound API for Canon cameras, like programmable (instead of programmed) P mode.
it can get very complex to let the camera choose shutter and aperture for best resolution based on analysis of subject movement (needs live view, but we have dual-pixel AF now), lens sweeping, etc.
Actually, what you're describing sounds extremely complex, especially if the camera wasn't designed with that kind of real-time image analysis in mind. It may be possible, using the same hardware / processing power that powers autofocus in live view, but that's the only vaguely similar feature I know of included in Canon's firmware (and it's much simpler than what you're suggesting).
One minor quibble: syncing white balance settings in RAW images after the fact not only causes no perceptible loss in quality, it causes no loss in quality at all. RAW images store exactly what comes from the sensor (possibly with lossy compression these days), which is unaffected by white balance. The white balance stored in the RAW file is nothing more than metadata used (or just as easily ignored) later by the RAW converter.
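A sketch of why this is lossless (the 2x2 RGGB layout and gain values are hypothetical illustrations, not any real converter's numbers): white balance only enters as per-channel multipliers at develop time, so the stored sensor counts are identical no matter which setting is chosen.

```python
import numpy as np

rng = np.random.default_rng(1)
raw = rng.integers(0, 4096, size=(4, 4)).astype(float)   # sensor counts

def develop(raw_data, wb_gains):
    # WB is applied only here, as per-channel gains during conversion
    # (hypothetical 2x2 RGGB mosaic; gains are illustrative numbers).
    r, g, b = wb_gains
    return raw_data * np.tile([[r, g], [g, b]], (2, 2))

daylight = develop(raw, (2.0, 1.0, 1.5))
tungsten = develop(raw, (1.2, 1.0, 2.4))

# raw is never modified, so re-developing with any WB loses nothing.
print(np.array_equal(develop(raw, (2.0, 1.0, 1.5)), daylight))   # True
```

Contrast this with a JPEG, where the gains are baked into quantized 8-bit output and changing them later really does degrade the image.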
yabokkie: the problem is I don't wear wrist watches most of the time because I have a mobile.
Your problem is…
This is interesting advice, but it all seems to be based on the assumption that you're going into a purchase without the ability to be near-100% confident.
I would certainly classify myself as a "maximizer" at heart, but I find that most of the supposedly negative points either don't apply or are actually positives. I certainly spend a lot of time in product comparisons (assuming it isn't a product category I keep track of on an ongoing basis), but I find that deciding what to buy is at least half the fun of buying something. I almost never regret purchases because I rarely make one without knowing that it's the best choice given the situation. For the same reason, there's no point in comparing purchase decisions except to find out what factors lead someone else to make a different choice, and returns are never a concern unless a product is defective. I simply don't buy things I'm not completely sure I want.
I will admit to savoring positive events and brooding on occasion.
Fixx: Does this work with RAW-photos or does the processing produce JPEG-files? That is, if I have distortion control enabled in my camera, do I get corrected NEF-files to my Lightroom library?
If the camera corrected the NEF, it wouldn't be RAW anymore. The whole point is that it stores exactly what comes from the sensor (or almost exactly if you use lossy compression).
That said, it would be great if they encoded the distortion correction parameters in the NEF so the RAW processing program could apply them later (without needing its own database).
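The idea amounts to something like the following sketch. The two-coefficient radial model and the coefficient values are purely illustrative, not Nikon's actual metadata format; the point is just that a few numbers per lens would let any converter reproduce (or skip) the correction.

```python
import numpy as np

def undistort(points, k1, k2):
    # Simple radial model: p' = p * (1 + k1*r^2 + k2*r^4), with (k1, k2)
    # imagined as metadata the camera wrote into the NEF at capture time.
    r2 = np.sum(points ** 2, axis=-1, keepdims=True)
    return points * (1.0 + k1 * r2 + k2 * r2 ** 2)

pts = np.array([[0.0, 0.0],    # image center: unaffected
                [0.5, 0.0],    # mid-frame point
                [0.0, 0.8]])   # near-corner point, normalized coordinates
corrected = undistort(pts, k1=-0.05, k2=0.01)
print(corrected)
```

Because the correction is a pure function of the parameters, shipping the parameters instead of corrected pixel data keeps the file genuinely RAW.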
Peiasdf: The color on the car is off vs. the Bayer picture. I guess phone users don't really care about color accuracy.
What I wonder is, how do they manage exposure in colored vs. clear pixels? If the clear pixels receive roughly three times the light that the colored ones do, wouldn't they tend to overexpose and destroy any chance of accurately recovering the green channel in highlights?
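A toy numeric version of that concern, taking the roughly-3x figure as an assumption (the well capacity and brightness levels are arbitrary illustrations):

```python
import numpy as np

full_well = 1.0
luminance = np.linspace(0.0, 1.0, 5)   # scene brightness levels

# Hypothetical sensitivities: clear (W) pixels gather ~3x the light
# of colored ones, so they hit the well capacity much earlier.
colored = np.clip(luminance * 1.0, 0.0, full_well)
clear = np.clip(luminance * 3.0, 0.0, full_well)

print(clear)   # saturates from ~1/3 of full scene brightness upward
```

Once the clear pixels clip, they carry no luminance detail in highlights, so the sensor would either have to underexpose them or accept exactly the recovery problem described above.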