It's hella ugly and a usability nightmare.
justmeMN: It's nice to see that Canon is working on sensor research, but their next sensor upgrade is likely to be something more conventional.
Nikon designs some of their sensors, which are then manufactured by Sony (and Toshiba, if the last info I heard is true), and the ones that aren't completely designed in-house still get tweaked.
For instance, the whole processing chain, from ADCs to ASSPs, for the D5100/D7000 is Nikon-designed, and that's why, despite using the same sensor, Sony has worse results.
Sensor tech is just a part of the results we see (and I daresay that Canon lags more in the essential non-sensor part than on the sensor itself - the Bayer filters aren't terribly good, their ADCs introduced a lot of patterned noise, et cetera).
Easycass: A good article. There will always be various ways to achieve better results; explaining the technique was the point.
And ‘cheating’? Even purist photographers are never able to fully represent reality. Would that mean everyone ‘cheats’?
In camera we: crop, rotate, use focal length, DOF, compression, distortion, shutter speed, expression, props, shadows, viewpoint, film, exposure, blockers, lighting, reflectors, masks, filters, etc.
In the darkroom we: dodge, burn, mask, filter, diffuse, solarize, graduate, spot, colour balance, vignette, bleach, and even do composites (you know, a house and a different sky) by multiple exposures and sandwiching negatives.
In Photoshop we: do all of the above and more.
All the above distort what is reality. Am I to believe that those ‘photographers’ here who accuse people of ‘cheating’ do none of the above?
Photography - Painting with light - You do not have to like what is created, but love that we have the ability and freedom to create...
Not that hard, actually. I mean, that's pretty much what Knoll did with PS 1.0. It's certainly doable.
Making your own digital camera from scratch, though, that's much harder ;)
On the other hand, I've already built my own film camera. It's a cakewalk.
"You rely on a plethora of software engineers and programmers for your images. I don't."
Unless you built your camera from the ground up, you do too.
CameraLabTester: A third layer with a UFO would really smash it!
Come to think of it... a fourth layer would also be good:
Painted on the roof: "Aliens Go Away!"
JWest - and by stick you mean a stick he actually cut himself from a tree, by ink you mean pigments he mixed himself from egg whites and rocks. Because that's how real artists do it, and if you're doing it differently, you're WRONG!
l_d_allan: Very helpful article. Thanks!
I'm not at all a "purist" as far as the "composites are not art" mindset/camp goes. YMMV and my 2¢
I've been struggling with getting decent composites with winter trees in the foreground. As the author points out, that's a difficult compositing task. "Refine Edge" and "Color Range" haven't been coming out all that well. Neither has the Fluid Mask plug-in.
While this could be a case of "a poor craftsman blames his tools", I'm hoping that the Gradient mask approach you describe will not only be simpler, but work better.
Complex composites are a matter of mixing techniques. Gradient masks are awesome as a rough starter - you can get most stuff out of the way with a few clicks, with very good results.
You can then focus on the foreground elements - I usually go about it by painting on the same mask with a digitizer, which gives you a lot of control and precision. You can reduce the Flow and achieve semi-transparent borders, which helps blend the layers. Takes time and a lot of effort, sure, but the results are top notch.
You can do something similar with Poly Lasso, painting with the mouse and a lot of elbow grease, as well.
One of the things most people tend to miss, though, is the value of applying global effects to all the layers. Whenever you apply the same effect to all the layers of a composite, you make them aesthetically closer. For instance, something I usually do is apply small hue shifts and grain.
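The shared-grain trick can be sketched in a few lines. This is an illustrative stand-in for a Photoshop workflow, not an actual API - layers are just equal-length grayscale pixel lists, and `apply_global_grain` is a name I made up:

```python
import random

def clamp(v):
    """Keep a channel value inside the 0-255 range."""
    return max(0, min(255, v))

def apply_global_grain(layers, amount=6, seed=42):
    """Add the *same* grain field to every layer of a composite.

    Because all layers receive identical offsets, they end up sharing
    one texture, which pulls them aesthetically closer together.
    """
    rng = random.Random(seed)
    grain = [rng.randint(-amount, amount) for _ in range(len(layers[0]))]
    return [[clamp(p + g) for p, g in zip(layer, grain)] for layer in layers]
```

The same idea applies to hue shifts: one global adjustment over the flattened stack, rather than per-layer tweaks.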
"Have you tried, say, removing that small annoying seagull that just happened to fly into your shot, ruining your sky? On a colour RA4 print it's impossible (apart from actually 'airbrushing' over it). In a B&W silver gelatine print, well, give it a go! All you digital freaks get off your ar5e and give it a go with a real B&W print."
That's actually patently false, as part of my film photography classes was exactly replicating digital darkroom techniques in an actual darkroom - and yeah, we did actually remove stuff and composite photographs, both B&W and RA4 through frame splicing and clever use of masks. Is it easy? Nope. But it's doable - and, furthermore, people have been doing these very things since forever.
AluKd: It just struck me that Nikon doesn't sell Nikon Yellow cameras. I mean, they even got red entry-level DSLRs, they surely could reinforce their branding with yellow ones =D
Because I'm pretty sure Nikon decides on the colors thinking about the stands they will be using at Photokina.
Let's see how they perform: slow as molasses.
Karl Günter Wünsch: There should be an easy way around this detection. Simply take a noise reference shot with your camera (grey card) and overlay it over the final image (maybe after a pass of noise reduction on the shopped image). Why use the noise reference shot? Well, it's a better representative than any algorithmic noise and would be harder to detect...
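The grey-card idea amounts to transplanting the camera's own noise signature onto the edited image. A minimal sketch, assuming 8-bit grayscale pixel lists and a hypothetical `overlay_camera_noise` helper:

```python
def overlay_camera_noise(image, noise_ref, strength=1.0):
    """Add the deviation of a grey-card noise shot onto an edited image.

    `image` and `noise_ref` are equal-length lists of 0-255 values.
    Subtracting the reference's mean keeps overall brightness unchanged
    while carrying over the sensor's real noise pattern. Toy sketch,
    not a forensic-grade tool.
    """
    mean = sum(noise_ref) / len(noise_ref)
    return [max(0, min(255, round(p + strength * (n - mean))))
            for p, n in zip(image, noise_ref)]
```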
Watermarking is a completely different beast. It works by encoding information into the carrier data through modulation, usually with strong reliance on redundancy.
No kind of watermarking survives "all kinds of transformations", either. Some fare better, some fare worse - some are even made weak on purpose to catch tampering.
The thing to understand here is that watermarking is (fairly) strong because it's made on purpose. It's not reliant on visible aspects, but on the underlying data. It's also very redundant.
AluKd: Anyone who works professionally with image compositing already knew this for quite a long time.
Most of the time compositing actually goes towards making sources match in photographic aspects, and that, yes, usually includes dosing BOTH photographs with heavy loads of NR.
After the composite is done, a few passes of random noise (with different intensities in different channels, usually Blue gets 4x, Red 2x and Green 1x), global color balancing, global adjustments and - a favorite of mine - upsizing by 33% and downsizing by the same amount - usually makes it so any trace of compositing is obliterated. I have a REALLY hard time believing this algorithm of theirs would be able to identify a composite properly done.
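The per-channel noise pass described above (blue strongest, then red, then green) can be sketched like so - a rough illustration with made-up helper names, not anyone's production pipeline:

```python
import random

GAIN = {"R": 2.0, "G": 1.0, "B": 4.0}  # the 2x / 1x / 4x ratios quoted above

def clamp(v):
    """Round and keep a channel value inside 0-255."""
    return max(0, min(255, int(round(v))))

def per_channel_grain(pixels, base=1.5, seed=0):
    """Add random grain with per-channel intensity, blue hit hardest.

    `pixels` is a list of (r, g, b) tuples. Weighting blue most mimics
    where real sensor noise tends to be worst, making the grain less
    conspicuous than uniform noise would be.
    """
    rng = random.Random(seed)
    gains = (GAIN["R"], GAIN["G"], GAIN["B"])
    return [tuple(clamp(c + rng.uniform(-base, base) * g)
                  for c, g in zip(px, gains))
            for px in pixels]
```

The 33% up/down resize would follow as a separate resampling step, which smears the per-pixel statistics the detector relies on.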
All NR incurs a loss of real detail, so yeah, of course. It's pretty hard to differentiate this loss of real detail from the one you get from general, day-to-day editing, though.
The noise added will cover it, though, as any algorithm that looks for high frequency patterns will mistake noise for relevant data at any significant detection threshold. I mean, WE humans are fooled by random noise into thinking we're seeing more detail than there actually is.
One thing that COULD give away a composite would be large-scale patterned noise, but if they were searching for this kind of pattern mismatch, I'd guess they'd have quite a lot of false positives and wouldn't really catch all the real stuff.
Burbclaver: Wow, after extensive testing the D800 and Canon 5D Mk III both score exactly 82%. Those boffins in their separate labs seem to get exactly the same result all the time, according to you guys. I wonder why you score out of 100. If you scored out of a gazillion, would you still award the same points?
Oh, I see.
*files under fudging*
Do the bars used on the graphs and the ones used to explain the weights actually mean something, or are they complete bull? Because if they're intended to have any meaning, by the calculations I've made here using the relative proportions of the weights and sizes of the bars, nope, they don't score within 1% of each other. The Canon camera actually scores almost one and a half percentage points under the Nikon one.
If you guys just fudge the graphics to show whatever, you should disclaim it somewhere.
tlinn: DPR — When you compare resolution I hope you'll see what happens to the D800's resolving power as the aperture is stopped down. I'd really like to see the effect of diffraction as it relates to real world resolution.
Diffraction kicks in at exactly the same point for every camera sensor, regardless of pixel density, given the same glass and aperture.
What happens, though, is that with more pixel density, a camera like the D800 is simply able to resolve diffraction earlier than a camera with fewer pixels. Downsample a D800 image to 16.2MP and, given the same glass and aperture, it will look pretty much the same as a D4 picture.
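Back-of-the-envelope numbers support this. The pixel pitches below are approximate, and the "Airy disk spans about two pixels" criterion is a common rule of thumb rather than an exact law:

```python
def airy_disk_um(f_number, wavelength_nm=550):
    """Diameter of the Airy disk in micrometres: 2.44 * lambda * N."""
    return 2.44 * (wavelength_nm / 1000.0) * f_number

def diffraction_onset_f(pitch_um, wavelength_nm=550):
    """f-number at which the Airy disk spans about two pixels."""
    return 2 * pitch_um / (2.44 * (wavelength_nm / 1000.0))

PITCH_UM = {"D800": 4.88, "D4": 7.3}  # approximate pixel pitches

# The denser D800 sensor starts *showing* diffraction around f/7.3,
# the D4 only around f/10.9 - yet the lens diffracts identically;
# the D800 just resolves it sooner.
```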
Solarcoaster: Fish and water drop look photoshopped. So much for real photography.
If you take into consideration that Common Kingfishers average about 15cm in length, this fish can't be longer than, say, 4cm. The droplet is quite feasible at this scale.