Francis Sawyer: These aren't HDR. They're LDR. Step one is educating the public about the misused terminology that pervades this fad.
If you see something in a JPEG online, you know it's not HDR. HDR requires a file format that can store it, like EXR. It also can't be viewed on any normal computer monitor.
People often confuse High Dynamic Range Capture and tonemapping.
It is tonemapping that creates the oversaturated and oversharpened images, not the dynamic range of the original scene.
Tonemapping can be applied to single LDR images as well. If the dynamic range of the original scene was within the range of the camera sensor, then the working image file can have as much shadow and highlight information as a high dynamic range scene captured with multiple exposures and rendered as an "HDR" image.
A high dynamic range scene captured via the multi-shot HDR method can be rendered as a rather conventional-looking image. It will simply reveal more highlight and shadow detail than a single-shot image would.
SDPharm is correct. HDR refers to the method of capture, not the final outcome.
Tonemapping is just the method employed to render the HDR image to the desired result.
The "HDR look" isn't the result of HDR and should be referred to as the "tonemapped look."
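The capture-vs-rendering distinction above can be sketched in a few lines of code. This is a minimal illustration, assuming a simple global Reinhard operator and made-up luminance values; real tonemappers are far more elaborate, but the point stands: the compression to a displayable range happens at rendering, not at capture.

```python
# Minimal sketch of global Reinhard tone mapping. The scene values
# below are hypothetical HDR radiances, invented for illustration.

def reinhard(luminance):
    """Map an HDR luminance (0..inf) into the displayable range [0, 1)."""
    return luminance / (1.0 + luminance)

# Hypothetical radiances: deep shadow, midtone, bright window.
scene = [0.05, 1.0, 50.0]
display = [reinhard(v) for v in scene]
# Every value now fits a low-dynamic-range output, but how aggressively
# the highlights get compressed is a rendering choice, not a property
# of the multi-exposure capture itself.
```

Swapping in a more aggressive curve, or a local operator, is what produces the "HDR look"; the captured radiances never change.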
obeythebeagle: Don't take my Kodachrome, or HDR, or B&W, away. It's all fun. HDR is Ansel Adams' brain on LSD.
They took my Tech Pan film away years ago. Now I need a 46MPX camera to be happy!
CaseyComo: Call me old-fashioned, but I prefer the look of a single exposure. If the sky is too bright, expose for the shadows and use a grad ND filter.
You can still have the look of a single exposure with HDRI. With HDR capture, we can record more information than we really need; how much of it you use is up to you. It's better to have more information to work with than to be struggling to tame a highlight or open a shadow that has no detail.
The average person now spends hours a day staring at brightly lit TV screens, computer monitors and smartphones. That's what their eyes are attuned to. Neon colors are even big in fashion.
Many artists think in order to be seen they have to be as bright and garish as everyone else. They may be correct, but that doesn't make them right.
The question is: Who is YOUR target audience? Is it the viewer who appreciates subtleties, or is it the viewer who wants to be dazzled? It's like pop music vs. classical. The masses are drawn to overprocessed HDR images like moths to a candle.
There really is no right or wrong, it is merely a matter of taste.
dark goob: Just think, if you had an external HDMI monitor you wouldn't have to bend to the ground. Or a camera with a rotating screen.
Why doesn't Canon put a damn rotating screen on the Mark III? I would buy one if it had it.
Why don't Canon and all the other DSLR makers make their own proprietary external monitors?
I can use an app to connect my smartphone to my DSLR and control the camera from the phone. Put the camera on a pole with a tripod head and one does not even have to bend over to get close to the ground.
But those apps all have their limitations.
Canon makes cameras with touch screens that allow the user to change settings on the screen. All they need to do is make a larger touch screen that can be connected to the camera with a cable...or maybe even WiFi.
If the app writers can create an app for this, I'm sure that it is within the technological capability of Canon or Nikon to create an off-camera controller. They could do better than the cell phone apps by replicating all the buttons and dials of the camera on the remote monitor!
Wouldn't that be cool?
Wye Photography: After reading this I am still not a 'fan' of HDR. The images on the first page look 'overcooked', the first two images on the second page, I admit, do look nice. The last image on page 2 looks like it has been done with one of those water colour apps on an iPad. A waste of an EOS 5D MkIII.
I don't think this article convinces me that HDR, and I quote, "emulate(s) the world we see with our own eyes". My world certainly doesn't look anything like these images. In dynamic range perhaps, in colour NO!
Then again, this article is for those ALREADY into HDR.
It is nothing new for photographers to overcook special techniques. I remember when I was a teenager I learned how to use color filters with black and white film...or how to solarize a print, or how to sandwich my negatives with Kodalith masks. I went crazy creating wacky, over-the-top special effects. I made a lot of crazy surrealistic prints, but in the end I learned how to control the techniques and use them as a means of controlling the quality of more conventional-looking images.
Whenever a new technique is introduced, people will have a tendency to overdo it to create unusual-looking images, but in time they outgrow it. Eventually the overcooked images are regarded as cheap amateur manipulations. Skilled practitioners will learn to restrain themselves and apply just the right amount of the effect.
Unfortunately, many amateurs are using simple one-size-fits-all HDR processors on point-and-shoot cameras and cell phones. They don't know any better; they think it's cool.
TakePictures: Read the light, be patient, and you don't need HDR.
Take, this approach served photographers well for decades, and many great photographs were made during that time. But throughout the history of photography, photographers have struggled with the inherent limitations of film. That gave rise to the Zone System and artificial lighting.
The aesthetics of what defines a great image were determined by the limitations of film. Photographers had to work with what they had. Ansel Adams' stock in trade was his ability to create prints that had beautifully rich and detailed shadows and highlights that made his prints "pop" in a way that others did not.
Many scenes are within the dynamic range of film and require no special technique to render them.
And when using artificial light the photographer is simply using the manipulation of light to bring the dynamic range of the scene within the range of the film or sensor.
HDRI is not a requirement, it is merely an option. HDRI simply captures the full DR, but the image does not have to use it all.
elefteriadis alexandros: HDR? just use film..
I used the Zone System in large format black and white for many years and used every technique I knew of to try to squeeze more range out of the film without turning it to mush. I mastered the craft of B&W printing so I could render all the details in the negative...without turning the image to mush.
To a more limited degree I applied Zone methods to color.
Multi-exposure HDR imaging blows Zone System B&W out of the water in its ability to capture the full dynamic range of a scene (...without turning it to mush). Even more so when compared to color film.
Long before digital, I used the 3-exposure technique with 4x5 B&W. With 3 enlargers, careful alignment, a lot of burning & dodging and darkroom trickery, I could, after a few hours, have a handful of really gorgeous prints. Thus I was able to bring the full DR of a high-contrast scene to the print.
That still doesn't compare to what can be done with digital HDRI.
Shooting architecture on 4x5 film, I was often faced with the challenge of balancing dark shadows with bright lights and windows, utilizing split exposures, artificial lighting, reflectors, scrims, gobos, diffusers and filters...all in an effort to capture in camera a relatively 'normal' looking image.
Digital photography was a godsend because I could bracket my shots and composite the various components in Photoshop to create a "normal" image.
So right from the get-go, I saw HDR imaging as a means of bringing out shadow details and taming blown-out highlights to create a "normal" image. It's really just one more technique for quality control, as are artificial lighting and the Zone System.
One could just as easily create wild and wacky surrealistic images in Photoshop, and with film. While it is fun to experiment, it's HDRI's ability to render highlight and shadow detail that I find most useful. I use just enough to make my images pop without looking obviously tonemapped.
Laminated: DPReview - thank you for posting the details of some of the cameras on the Curiosity rover. I'm surprised you've got so much detail here - typically this info is tightly controlled.
To those underwhelmed by the cameras, speaking as an engineer who has designed parts of a camera in Mars orbit, I can tell you:
- engineers at Malin, JPL and NASA know what they are doing, and being cost-effective is a daily concern.
- the optical engineers do know how to design a camera, and they understand light.
- this is a science mission balanced with generating public support.
- the system requirements are very demanding for such a mission: the launch is incredibly violent, weight is tracked to the tenth of a gram, space is cold (duh), temperature swings on Mars are very large, etc.
- power is provided by solar panels, not rechargeable Li-Ion batteries.
- communication to Mars is not like plugging in a USB cable; if something breaks, you can't take it in for service.
The development of these craft takes many, many years, and just like Voyager, they end up being equipped with old, outdated equipment.
I actually made some of the components that were installed on the Viking Mars landers. They were upgraded versions of devices that were in use in oil refineries, chemical plants and nuclear power plants. They were designed to endure conditions that make a trip to Mars look like a walk in the park. Between the time the parts were made and the actual launch, production of the device was discontinued and it was replaced by new (and very well tested) technology that was not only more sensitive and accurate, but also more reliable and durable than what was installed on the lander. The new devices were being installed in oil refineries a few years before Viking's launch. One of the refineries exploded and burned, but all the new devices survived the blast and fire.
With 17 cameras on board, there's no reason 1 of them could not be more advanced.
Alizarine: Wow so much ranting on the image quality...
Me, I'm happy the United States' NASA still goes on missions like this, in the name of discovery and learning. While its residents and the rest of the world concentrate on luxury... or getting there.
These planetary probes take more than a decade to design and build. By the time they get around to launching, the technology on board is old and outdated. NASA needs to modify its design and testing policies to allow the newest and best technology to be installed on the probes shortly before launch. As an insurance measure they could still include the old but thoroughly tested and proven technology, while allowing a place for newer technology to be installed.
Since Curiosity has 17 cameras on board, they could have made room for at least one state-of-the-art camera. Given the quality of images that we can get from tiny point-and-shoot cameras, there's no reason why they couldn't have installed a more advanced version on board.
Even the engineers who designed the craft so many years ago had no idea how quickly and how far digital cameras would advance before it was launched.
SDPharm: So I downloaded the image, rotated it and cropped out the computer-rendered gray image. The resulting image is about 3700 x 2800, or about 10MP. What's going on here? How does the 2MP camera produce a 10MP image?
It's a composite image.
The compositing software that allows us to make panoramas, high-res stitches and HDR images, and the hardware and software that led to the shifting-sensor backs of early-generation professional digital cameras (like the early-2000s Imacon), are the result of technology developed by NASA for the Viking landers.
The signal uplink to the orbiting satellites cannot handle large image files. What I find puzzling is that they did not equip the satellites with more powerful transmitters. That was not a matter of technological capability, but simply size and weight. If NASA had bigger boosters, then they could send bigger probes, satellites and landers. The manned Apollo command and landing modules were huge in comparison to the typical planetary probe. If large boosters like the Saturn V were used for planetary probes, then NASA engineers would not be inhibited by size and weight and they could send bigger and better technology.
gwales: Curiosity isn't on Mars to take pictures...
Then why does it have 17 cameras?
Without pictures, Curiosity would have no way of knowing where to go and where to use its myriad of devices to test the soil. The rover cannot make those decisions on its own. They are made by scientists on the ground who examine the images to determine which sites and rocks to go to and test. One of the most interesting finds on Mars was discovered in an image and only confirmed by the test instruments.
To the scientists, the images serve a practical function, but to the rest of us, the visual images are very important. We want to see what it looks like on the surface and view these pictures with wonder.
Visual imagery has been, and still is, the most important tool of discovery and exploration. The mysterious features of Jupiter, Saturn, Europa, Titan and the other moons that have piqued our curiosity were discovered through visual imagery.
A visit to Mars is a photo op we could not fail to take advantage of.
How did DPReview come to the conclusion that the D3200 competes with the Canon T3?
The T3 is nothing more than a bare-bones entry-level DSLR for consumers who've never owned a DSLR and don't know how to buy one.
To anyone who wants a T3 I say "You really should buy a $1300 camera....Because by the time you figure out how to use the camera you will realize that it's less camera than you want and you'll end up spending another $800 for a T3i sooner rather than later."
Regardless of any shortcomings some may find in the D3200, it far outclasses the T3.
I think the D3200 would appeal more to the kind of photographer who might buy a Canon T3i or T4i. With its higher resolution and lower price point, it'll certainly give those cameras a run for the money.
If I didn't already own a bagful of Canon glass, I'd give this camera serious consideration. And if the recent trend of Canon and Nikon continues, I may dump that glass and switch to Nikon.
facedodge: Consumers will soon learn that it's not all about Megapixels with camera phones catching up. Until then... 24MP will be a key selling feature. It's the first item they list on the stat sheet.
"hey Joe, what about this red one over here? it's $150 more expensive than that Canon, but it's got twice the pixels"
Only someone who doesn't understand the relationships between sensor size, focal length, perspective, angle of view, depth of field, optical resolution, digital noise and dynamic range would think that a 24MPX camera phone is equivalent to a DSLR.
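A rough back-of-envelope sketch of why those relationships matter. The sensor dimensions below are typical published figures used here as assumptions, not the specs of any particular phone:

```python
import math

# Crop factor scales both the effective focal length and the
# depth-of-field-equivalent aperture, which is why a phone sensor
# and a full-frame sensor with the same pixel count behave so
# differently.

def crop_factor(width_mm, height_mm, ref_diag=math.hypot(36.0, 24.0)):
    """Crop factor relative to a full-frame (36x24 mm) sensor."""
    return ref_diag / math.hypot(width_mm, height_mm)

# A common 1/2.3" phone sensor is roughly 6.17 x 4.55 mm.
phone = crop_factor(6.17, 4.55)        # roughly 5.6x

# A hypothetical 4.3 mm f/1.8 phone lens frames, and renders depth
# of field, roughly like this lens would on full frame:
equiv_focal = 4.3 * phone              # ~24 mm equivalent
equiv_aperture = 1.8 * phone           # ~f/10 equivalent for DOF
```

Same pixel count, wildly different imaging behavior, which is the commenter's point.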
Charrick: There are so many people who hate more megapixels. I know that more megapixels decreases the size of each pixel (that is, image sensor element), thus allowing it to gather less light. I'm not disputing that.
But with the D800, I thought people would have learned that, at least in low to moderate ISO settings, more megapixels DOES translate into a sharper picture with more details. Some people are pretending that technological innovation with sensor sensitivity to light stopped in 2006. And if that were the case, then perhaps 6 megapixel sensors would be good enough.
I, for one, am glad that some companies are pushing the envelope. I don't like pixels just for their own sake, but it's clear that at the 24MP range, pictures taken in daytime will probably look better than with, say, a 12-16MP sensor of the same size. Then again, I take far more pictures in the daytime than in the middle of the night or in candle-lit rooms.
Oh, one more thing:
Since the beginning of digital photography, one of the most important commandments was: "Thou shalt not waste pixels." In other words, frame the image as close to the final cropping as possible, so as to avoid having to crop any pixels out.
With high MPX images, we can go ahead and crop the image and still make 8x10 prints or larger, without having to uprez.
Who needs a $12,000, 600mm lens when for less money you can get a 36MPX camera and a couple of great lenses and just crop the image on the rare occasions that 300mm just isn't enough.
Many photographers have gotten so used to cropping in camera and saving pixels that they have forgotten that once, long ago in a galaxy far far away, photographers shot film and cropped their images in the darkroom.
With more pixels we can throw away the "Thou shalt not waste pixels" commandment.
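As a sketch of that cropping arithmetic (print sizes and PPI here are just illustrative assumptions):

```python
# How many megapixels a print needs at a given PPI, which shows how
# much crop headroom a high-MP sensor leaves.

def megapixels_for_print(width_in, height_in, ppi=300):
    """Pixels required to print width_in x height_in inches at ppi."""
    return (width_in * ppi) * (height_in * ppi) / 1e6

mp_8x10 = megapixels_for_print(8, 10)   # 7.2 MP for an 8x10 at 300 PPI
# A 36 MP frame can lose most of its pixels to cropping and still
# cover that print with room to spare.
headroom = 36 / mp_8x10                 # ~5x crop headroom
```

The same arithmetic is why a heavy crop from a 36MPX frame can stand in for a longer lens in a pinch.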
In a head-to-head comparison of my older 10MPX Canon to my new 18MPX Canon, with in-camera noise suppression shut off, the 18MPX camera exhibited less noise at ISO 1600 than the 10MPX does at 800.
In post-production noise reduction using Photoshop and Topaz DeNoise, on images of the same scene and same crop, the 18MPX images required a lesser degree of filtration to suppress noise than the 10MPX images.
Finally, in post-production sharpening, the 18MPX image required less sharpening to achieve optimal sharpness, resulting in less enhancement of noise and fewer sharpening artifacts. In higher-resolution images, sharpening has a greater effect on details in proportion to its effect on noise in smooth, low-contrast areas. Consequently, in higher-resolution images more sharpening can be applied before noise becomes objectionable.
When scaled down to match the 10MPX print output, 8x12 @ 300PPI, the 18MPX prints are distinctly crisper and cleaner than those from the 10MPX.
Higher MPX does matter.
AnHund: First of all D3200 is not an evolution it is a revolution. Image quality is stunning. If you won't or can't buy a D800 (which is extraordinary) then get the D3200 which is really capable of producing very, very good images seen from a technical perspective.
Regarding the comment "you can always use the 18-55mm as a paperweight" - this is a downright ridiculous comment. The 18-55mm is really good for the price and can easily be used with the D3200 for very good results. Of course better and more expensive lenses will give better results, but the 18-55mm is really a lot of value for next to no money.
Josh...About a decade ago, when I was shooting with a 6MPX Canon 10D, the $7K Canon 1Ds was an astounding 11MPX!! At the same time I was shooting in the studio with a 22MPX Imacon back on an RZ67, which cost twice what the 1Ds was selling for at the time. Now 30MPX is the low end for medium format, and the top end is at 80MPX and could soon go higher.
Every few years, what was top of the line in MPX works its way down into the entry-level mainstream, while the top-of-the-line high-MPX cameras reach ever higher resolutions. Nothing remarkable about that. What's different is that the gap in MPX between the entry and top levels has shrunk: in 2002 the top Canon had nearly twice the MPX of the entry level (6MPX vs 11MPX); now the top Canon has only about 20% more (19MPX vs 22MPX), and the D800 has only 50% more than the D3200 (24MPX vs 36MPX). What's revolutionary is the price at the top end: $3K for 36MPX!!! More than 50% more MPX than Canon's flagship, for fewer dollars! That's revolutionary!
I've done a lot of aerial photography from a helicopter, and I've used a remote-controlled model helicopter. Actually going up in the helicopter is way more fun than using the RC platform.
Making great pictures is only part of the reason I got into photography...the other is that the act of taking photographs is so much fun.
The willingness and ability to be in the right place at the right time is one of the reasons we are hired as photographers. But now they are going to hire a technician to mount the camera, and the photographer does not even have to be on the scene to take the shot...he could be locked away in a mobile vehicle far from the location, like the TV crews.
That takes all the fun out of it.
Ashley Pomeroy: I can't wait for them to attach cameras to the athletes - it might slow 'em down a bit, but think about the possibilities. e.g. beach volleyball. What happens if two rival news agencies put robot cameras next to each other? Will they fight?
While photographing the start of a road race, I noticed quite a few runners with those small action video cams strapped to their chests.
jon404: Next -- robot cameras with 'decisive moment' AI software!
Why not? Your point 'n shoot will already take a picture when it 'sees' a cat or dog... and can process a pile of snaps to come up with the 'best portrait.'
Brave new world! Sort of.
With 4K video cams, it's not necessary to take a picture at the decisive moment; a high-resolution image of still-photograph quality can be extracted from the video recording. The decisive moment will take place in the editing.
I don't even like to use fast frame rate on my DSLR...too much work downloading and editing. Imagine going through three seconds of action at 30 FPS???
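For what it's worth, the arithmetic behind frame-grabbing is simple (the resolution below is the standard UHD figure):

```python
# Megapixels available in a single video frame, and how many candidate
# frames a short burst of 4K video produces.

def frame_megapixels(width, height):
    return width * height / 1e6

uhd = frame_megapixels(3840, 2160)   # ~8.3 MP per UHD frame
# Three seconds of 4K at 30 fps yields 90 candidate "decisive moments",
# each of them roughly an 8 MP still.
frames = 3 * 30
```

That 90-frame pile is exactly the editing chore being complained about above.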