Behind the shot: 'Louisville in Motion'
The time-lapse video above started out by accident. I was learning how to use a slider to create a motion-controlled time-lapse with photos, rather than sped-up video. My first attempt turned out alright, but hardly anything ever turns out exactly the way you envision it, especially when you're learning something new.
I tried a few other motion-controlled time-lapses, and when they were turned from photos into a video file, I was fairly pleased with the results. During these early efforts, while learning how to use this new gear, I would browse Vimeo and check out what time-lapse videos others had made. I was amazed at some of the city montages on the site, and figured since I had already created a few of my own, I would make a video showing off Louisville, Kentucky during the summer.
Little did I realize how much time and effort went into creating something that really does justice to a good-sized city. It ended up taking a little over a year and a half to make this, and I thought a 'Behind-the-Shot' type article could help others who may be thinking about getting into time-lapse photography. I shot these sequences using a Panasonic Lumix DMC-GH2 with a range of lenses, including the Panasonic Lumix G Vario 7-14mm F4 ASPH, 20mm F1.7 ASPH, Leica DG Macro-Elmarit 45mm F2.8 ASPH OIS, 14-140mm F3.5-5.6 ASPH and the 100-300mm F4-5.6 OIS.
First I'm going to talk about exposure, since many have asked if this video is made up of 'HDR' shots. The answer is no, each frame of the video was taken from an image converted from a single Raw file. The reason very little of the video looks blown out while still retaining detail in the shadows comes down to exposure settings, how the image was saved, and techniques I used to process the images.
Here's a video that shows how a typical clip looks before and after exposure and color correction.
Before capturing each scene I take a few test shots, with the Panasonic GH2 set to highlight overexposed areas. Since I want to retain detail in the brightest parts of the image without introducing excess noise in the shadows, I let a small portion of the clouds 'blow out' and record as pure white. Because clouds are usually white anyway, overexposing a small percentage of them loses very little highlight detail, while it maximizes what the camera can capture in the darker areas.
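To put a number on 'a small portion', here is a minimal sketch (assuming 8-bit pixel values, where 255 is pure white) of the kind of check a camera's highlight warning performs:

```python
def clipped_fraction(pixels, white_point=255):
    """Fraction of pixel values at or above the clipping point."""
    clipped = sum(1 for p in pixels if p >= white_point)
    return clipped / len(pixels)

# A mostly mid-tone frame with a few blown-out cloud pixels:
frame = [120] * 95 + [255] * 5
print(clipped_fraction(frame))  # 0.05 -> 5% of the frame is pure white
```

If that fraction stays in the low single digits, and the clipped pixels are all cloud, the shadows can be exposed as generously as the scene allows.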
|The blue circle marks the section of sky that is overexposed.|
When exposing to protect the bright parts of an image, pictures usually look pretty dark overall, since the sky, and especially clouds, are generally much brighter than things on the ground. To brighten the darker parts of an image later, we want as much information from the camera's imaging sensor as possible. When saving a JPEG, the camera throws away a lot of what the sensor is capable of recording; when shooting Raw, many more tonal values are saved that we can boost later.
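The difference is easy to quantify. Assuming an 8-bit JPEG against a 12-bit Raw file (a common Raw bit depth for Micro Four Thirds cameras of that era), the Raw file holds far more tonal steps per channel:

```python
jpeg_levels = 2 ** 8   # 256 tonal levels per channel in an 8-bit JPEG
raw_levels = 2 ** 12   # 4096 levels in a 12-bit Raw file
print(raw_levels // jpeg_levels)  # 16x more gradations to push around
```

Those extra gradations are what let you lift the shadows later without the banding or blockiness a JPEG would show.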
Finally, Raw files have to be processed to create an image. I'm not going to go into detail about how, since each scene will require different settings and there are many Raw converters available. But most Raw conversion software can darken highlights and brighten the shadows, as well as change color temperature, and even adjust specific shades and brightness of color. For each scene I spend about an hour trying to get the cityscape to my liking. I admit it's not how things look in real life, but then again, what is? I make my adjustments to match how I remember those moments.
Motion blur and depth of field
When I first started working on this video I didn't have any ND filters, so during daytime shoots I would generally stop down to about F8 to get as much of the scene in focus as possible, then use a high shutter speed to protect the highlights in my photos. I didn't mind the stop-motion appearance this gave cars and people, because the clouds and the buildings are the really important parts, and they look smooth. As time went on I began experimenting more and picked up a Formatt 77mm 2.4 Neutral Density filter, which reduces the light hitting the sensor by eight stops. That allowed me to play around with blurring motion in the daytime.
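The eight-stop figure translates directly into shutter speed: each stop of neutral density doubles the exposure time needed to get the same exposure. A quick sketch of the arithmetic:

```python
def nd_shutter(base_shutter_s, stops):
    """Shutter time needed to match the same exposure behind an ND filter."""
    return base_shutter_s * (2 ** stops)

# A 1/500 sec daylight exposure behind an eight-stop ND:
print(nd_shutter(1 / 500, 8))  # 0.512 -> roughly half a second
```

That is how a shutter speed fast enough to freeze traffic becomes long enough to smear it, in the same midday light.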
After processing one time-lapse scene and seeing cars and people turn into blurry, undefined streaks, I decided I much preferred a higher shutter speed during the day. Strangely enough, as much as I appreciated a sharp, blur-free image in daylight, I loved the way headlights and tail lights streak at night, and I would sometimes leave the shutter open for a couple of seconds.
A lot of the video is about seeing something ordinary in a way that's not possible in real life, and the streaking of lights emphasizes this. The cool thing about shooting at night is that, since the light level isn't really changing much, you don't have to be nearly as consistent about taking the next photo every 15 seconds or so. You can wait until something interesting happens, even if the time between shots fluctuates radically. A good example is the shot of the fountain at night with lights darting through the frame.
This was taken in a quiet residential neighborhood where cars didn't go by all that often. To make the scene more exciting, sometimes I would wait 30 seconds between shots, and sometimes a minute and a half, to catch a really neat moment when a bus went by or two cars were in the frame at once.
One other reason I shot these sequences for greater depth of field is I wanted the viewer to decide what they want to look at. Usually there is a main area of interest for a shot where I use camera movement and framing to help the viewer see what I think is most important, but it’s really up to the person watching to pick out what they want to focus on. Even now I’ll re-watch the video and see something I didn’t the first hundred or so times around.
To add interest to the video I thought there should be some form of movement besides the clouds or sun changing position. Since most buildings move very little, I decided the camera should physically travel in each shot. The items listed below aren't the only, or necessarily the best, tools, but I believe they worked well to help create the movement in this video.
Manfrotto 535 legs
These are three-stage carbon-fiber tripod legs. They are light, which is nice when climbing to the top of a parking garage, and they can get really low to the ground while also extending to a little over my head with a tripod head attached. They are also rated to hold 44lbs (20kg), which is more than enough for any of the time-lapse gear used in the video.
Sachtler FSB-8 head
Truthfully, this head is overkill for anything in this video; a much cheaper head with a half-ball base would have worked fine. But this fluid head did make leveling for hyper-lapses more enjoyable, and it's great for everyday video use.
Kessler Pocket Dolly Ver1
The slider was used for the shorter, slower moves, or most of the day-to-night time-lapses seen in the video. It’s basically a rail with a carriage for the camera to move along.
Kessler Shuttle Pod Mini
This is similar to the Pocket Dolly, but it’s a modular device that can range from 4 feet to 16 feet depending on how many sections of track you decide to add. This was also used for shorter moves, but mostly when I wanted a longer vertical move than I could get with the Pocket Dolly.
Oracle controller and motors
The two previous items aren't much good for a time-lapse without a motor to move the camera between each photo and a controller that waits a set amount of time before turning the motor on and off. The controller and motor are what make the short moves look smooth.
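The shoot-move-shoot cycle the controller runs can be sketched like this. The event log is a stand-in for real hardware calls; an actual controller such as the Oracle drives a shutter-release output and a stepper motor instead:

```python
import time

def run_timelapse(num_shots, interval_s, step_mm):
    """Fire the shutter, move the carriage one fixed step, wait, repeat.
    Hardware actions are logged to a list here instead of driving a
    real motor and camera."""
    log = []
    for shot in range(num_shots):
        log.append("shutter")              # trigger the camera
        if shot < num_shots - 1:
            log.append(("move", step_mm))  # run the motor one fixed step
            time.sleep(interval_s)         # wait out the interval
    return log

events = run_timelapse(num_shots=3, interval_s=0.0, step_mm=5)
print(events)
```

Because the move happens between exposures, the camera is always stationary when the shutter is open, which is why short slider moves come out looking perfectly smooth.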
I most often get asked how the long dolly-type moves are made. These shots are called hyper-lapses and don't require much in the way of specialized video gear. Most decent video tripods have a recessed bowl built into the tripod legs, and then the tripod head attaches to a half ball that in turn sits in the tripod's recessed bowl. This half ball allows the tripod's head to be leveled so when panning and tilting, the horizon will always be pretty close to horizontal (I'll explain why a pistol grip or ball head aren't my preferred style of head in the 'helpful hyper-lapse tips' section below).
|Above is a video head attached to a half-ball mount, which fits into the recessed bowl of the tripod legs.|
When contemplating a hyper-lapse sequence, I first walk the distance I want to record while looking at a structure off in the distance; this will be my object of interest. If the parallax effect looks interesting, I'll focus in on a very specific point on my object of interest. For example, if the object is a building, I'll pick out its top left corner and retrace my path, making sure nothing ever obstructs my view: the top left corner of the building is now my point of interest. If a light post, tree, or sign ever blocks my line of sight to this point, I'll either scrap the location, move further back, or move in front of whatever is blocking my view and see if that fixes the problem. If my point of interest is never blocked along the walk, I'll start the hyper-lapse.
|The object of interest is colored light red, with the point of interest circled in yellow.|
The first thing I do is level the tripod's half-ball, then pan and tilt the camera until I get my framing correct. Since the camera I used for this project is a Panasonic G series, I was able to set a vertical line and a horizontal line to act as an anchor point that I would always snap to the point of interest. This is important so the framing between shots is almost identical to the shot before.
I used a Panasonic GH2, but if your camera doesn't have this option, you could use one of the autofocus points in the optical viewfinder, or, if you want to use live view, tape fishing wire across the screen vertically and horizontally so that the point where the lines cross becomes your cross hair.
Once a photo is taken, I move the tripod in my preferred direction of travel, about the length of a shoe, level the half-ball, pan and tilt so the camera's anchor point matches up with the point of interest, and take another photo. I repeat until I run out of space or have enough photos to make my desired sequence. Once I've finished shooting, the video will look very shaky, so I use a video stabilization program to smooth out the scene (I'll briefly go over this in the post-production area).
|The point of interest is lined up with an anchor point onscreen.|
A few helpful hyper-lapse tips
Finding an edge. Moving the camera in a straight line makes for a smoother hyper-lapse, and the edges of sidewalks or curbs are great guides. Pick a straight sidewalk and place two of the tripod's legs against the edge of the curb. Level the tripod head and take a photo, then keep lining up those same two legs against the curb as you move along to your next shots.
Setting duration. To figure out how far to move between shots, first think of how long you want the scene to last; I recommend at least five seconds. In countries that use NTSC as the video format, your video will most likely run at 24 or 30 frames per second (25 frames per second in PAL countries). For this project I decided on 24 frames per second. To get five seconds of footage I multiplied five seconds by 24 frames and got 120, so the camera would have to be moved and fired 120 times between my starting and ending points.
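That multiplication is all the planning math there is: clip length in seconds times frame rate equals photos required.

```python
def shots_needed(clip_seconds, fps=24):
    """Photos required for a clip of the given length and frame rate."""
    return clip_seconds * fps

print(shots_needed(5))      # 120 photos for five seconds at 24 fps
print(shots_needed(5, 30))  # 150 photos at 30 fps
```

It's worth running this before you start walking: at roughly a minute per shoot-move-shoot cycle, 120 photos is about two hours of work.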
Calculating the move distance. If you're not sure how far to move the camera each time, walk the path you want to cover and count your footsteps. Next, figure out how many of your shoe lengths each step covers. I usually cover two shoe lengths per step, so if it took me 30 steps to walk the path, that comes out to 60 shoe lengths. Knowing I want to take 120 photos, I would move a particular leg of my tripod half a shoe length between each shot.
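Putting the footstep count together with the shot count gives the per-shot move, measured in shoe lengths; the numbers below are the example from the text.

```python
def move_per_shot(steps_walked, shoe_lengths_per_step, total_shots):
    """Tripod move between frames, in shoe lengths."""
    path_length = steps_walked * shoe_lengths_per_step
    return path_length / total_shots

# 30 walking steps at two shoe lengths each, spread over 120 photos:
print(move_per_shot(30, 2, 120))  # 0.5 -> half a shoe length per shot
```

Keeping this increment constant is what makes the final footage feel like one continuous dolly move rather than a lurching walk.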
Intervalometer. For picking an interval between shots on a hyper-lapse it’s not usually necessary to have an intervalometer. The only time I do use an intervalometer with a hyper-lapse is with day-to-night, or night-to-day shots because they take two to three hours.
Horizons with ultra-wide lenses. A half-ball base becomes even more important when using ultra-wide-angle lenses. Pistol grips and ball heads make it harder to keep the horizon level each time you move the tripod, and the extreme distortion of ultra-wide lenses makes a tilted horizon harder to correct when you're only lining up a single anchor point.
Pay attention. Hyper-lapsing is very repetitive. Don't let your mind wander too much or you might snap your anchor point to the wrong edge of the building. This is more applicable if your point of interest is a specific window on a building where all the windows look the same.
Secure that zoom. If using a zoom lens, tape the zoom ring unless you want to incorporate a zoom into the hyper-lapse. Having your focal length slip between shots can ruin the sequence.
Click the link below to read page two of Stemen's behind the scenes look at creating his time-lapse video.