Behind the shot: 'Louisville in Motion'
The time-lapse video above started out by accident. I was learning how to use a slider to create a motion-controlled time-lapse with photos, rather than sped-up video. My first attempt turned out alright, but hardly anything ever turns out exactly the way you envision it, especially when you're learning something new.
I tried a few other motion-controlled time-lapses, and when they were turned from photos into a video file, I was fairly pleased with the results. During these early efforts, while learning how to use this new gear, I would browse Vimeo and check out what time-lapse videos others had made. I was amazed at some of the city montages on the site, and figured since I had already created a few of my own, I would make a video showing off Louisville, Kentucky during the summer.
Little did I realize how much time and effort went into creating something that really does justice to a good-sized city. It ended up taking a little over a year and a half to make this, and I thought a 'Behind-the-Shot' type article could help others who may be thinking about getting into time-lapse photography. I shot these sequences using a Panasonic Lumix DMC-GH2 with a range of lenses, including the Panasonic Lumix G Vario 7-14mm F4 ASPH, 20mm F1.7 ASPH, Leica DG Macro-Elmarit 45mm F2.8 ASPH OIS, 14-140mm F3.5-5.6 ASPH and the 100-300mm F4-5.6 OIS.
First I'm going to talk about exposure, since many have asked if this video is made up of 'HDR' shots. The answer is no: each frame of the video comes from a single Raw file converted to an image. The reason very little of the video looks blown out, while still retaining detail in the shadows, comes down to exposure settings, how the image was saved, and the techniques I used to process the images.
Here's a video that shows how a typical clip looks before and after exposure and color correction.
Before capturing each scene I take a few test shots, with the Panasonic GH2 set to highlight overexposed areas. Since I want to keep detail in the brightest part of the image without inducing excess noise in the darker parts, I let a small portion of the clouds 'blow out,' or become pure white. Because clouds are usually white anyway, losing a small percentage of them to overexposure costs very little highlight detail, while maximizing what the camera can record in the darker areas.
|The blue circle marks the section of sky that is overexposed.|
When exposing to protect the bright parts of an image, pictures usually look pretty dark overall, since the sky and especially clouds are generally brighter than things on the ground. To brighten darker parts of an image we want to have as much information from the camera's imaging sensor as possible. When saving something as a JPEG, the camera throws away a lot of what the sensor is capable of reading; however when shooting Raw, many more color values are saved that we can boost later.
Finally, Raw files have to be processed to create an image. I'm not going to go into detail about how, since each scene requires different settings and there are many Raw converters available. But most Raw conversion software can darken highlights and brighten shadows, change color temperature, and even adjust the hue and brightness of specific colors. For each scene I spend about an hour getting the cityscape to my liking. I admit it's not how things look in real life, but then again, what is? I make my adjustments to match how I remember those moments.
Motion blur and depth of field
When I first started working on this video I didn't have any ND filters, so during day shoots I would generally stop down to about F8 to get as much of the scene in focus as possible, then use a high shutter speed to protect the highlights in my photos. I didn't mind the stop-motion appearance of the cars or people with this technique, because the clouds and the buildings are the really important parts, and they look smooth. As time went on I began experimenting more and ended up getting a Formatt 77mm Neutral Density 2.4 filter, which lowers the light hitting the sensor by eight stops. That allowed me to play around with blurring motion in the daytime.
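The eight-stop figure follows from the filter's density rating: each 0.3 of optical density is one stop, and each stop halves the light, so shutter speed has to be lengthened by a factor of 2 per stop. Here's a rough sketch of that arithmetic (the function names are my own, not from any camera software):

```python
def nd_stops(optical_density: float) -> int:
    """Convert an ND filter's optical density to stops (0.3 density = 1 stop)."""
    return round(optical_density / 0.3)

def nd_adjusted_shutter(base_shutter_s: float, stops: int) -> float:
    """Shutter speed (seconds) needed to keep the same exposure after adding
    an ND filter: each stop doubles the required exposure time."""
    return base_shutter_s * (2 ** stops)

# A density-2.4 filter, like the Formatt mentioned above, is eight stops:
stops = nd_stops(2.4)  # 8
# so a 1/500 s daytime exposure stretches to roughly half a second,
# long enough to blur moving cars and people.
long_exposure = nd_adjusted_shutter(1 / 500, stops)
print(stops, round(long_exposure, 3))
```

That half-second daytime exposure is exactly the regime where pedestrians and traffic start smearing into streaks, which is what prompted the preference described below.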
After processing one time-lapse scene and seeing cars and people turn into blurry, undefined streaks, I decided I much preferred a higher shutter speed during the day. Strangely enough, as much as I appreciated a sharp, blur-free image in daylight, I loved the way headlights and tail lights blur on vehicles at night, and I would sometimes leave the shutter open for a couple of seconds.
A lot of the video is about seeing something ordinary in a way that's not possible in real life, and the streaking of lights emphasizes this. The nice thing about shooting at night is that, since the light level isn't really changing much, you don't have to be nearly as consistent about taking the next photo every, say, 15 seconds. You can wait until something interesting happens, even if the time between shots fluctuates radically. A good example of this is the shot of the fountain at night with lights darting through the frame.
This was taken in a quiet residential neighborhood, and cars didn't go by all that often. To make the scene more exciting, sometimes I would wait 30 seconds between shots, and sometimes I might wait a minute and a half to get a really neat frame when a bus went by or two cars were in the shot at once.
One other reason I shot these sequences with greater depth of field is that I wanted the viewer to decide what to look at. Usually there is a main area of interest in a shot, where I use camera movement and framing to guide the viewer toward what I think is most important, but it's really up to the person watching to pick out what they want to focus on. Even now I'll re-watch the video and see something I didn't the first hundred or so times around.
To add interest to the video I thought there should be some form of movement besides just the clouds or sun changing position. Since most buildings move very little, I decided the camera should physically travel in each shot. The items listed below aren't the only, or necessarily the best, tools, but I believe they worked well to help create the movement in this video.
Manfrotto 535 legs
These are three-stage carbon-fiber tripod legs. They are light, which is nice for when climbing to the top of a parking garage, and can get really low to the ground while also extending a little over my head when a tripod head is attached. They are also rated to hold 44lbs (20kg) which is more than enough for any of the time-lapse gear used in the video.
Sachtler FSB-8 head
Truthfully, this head is overkill for anything in this video; a much cheaper head with a half-ball base would have worked fine. But this fluid head did make leveling for hyper-lapses more enjoyable, and it's great for everyday video use.
Kessler Pocket Dolly Ver1
The slider was used for the shorter, slower moves, and for most of the day-to-night time-lapses seen in the video. It's basically a rail with a carriage the camera moves along.
Kessler Shuttle Pod Mini
This is similar to the Pocket Dolly, but it’s a modular device that can range from 4 feet to 16 feet depending on how many sections of track you decide to add. This was also used for shorter moves, but mostly when I wanted a longer vertical move than I could get with the Pocket Dolly.
Oracle controller and motors
The two previous items aren't much good for a time-lapse without a motor to move the camera between each photo and a controller that waits a set amount of time before turning the motor on and off. The controller and motor are what make the short moves look smooth.
I most often get asked how the long dolly-type moves are made. These shots are called hyper-lapses and don't require much in the way of specialized video gear. Most decent video tripods have a recessed bowl built into the tripod legs, and then the tripod head attaches to a half ball that in turn sits in the tripod's recessed bowl. This half ball allows the tripod's head to be leveled so when panning and tilting, the horizon will always be pretty close to horizontal (I'll explain why a pistol grip or ball head aren't my preferred style of head in the 'helpful hyper-lapse tips' section below).
|Above is a video head attached to half-ball mount, which fits into the tripod with a bowl base.|
When contemplating a hyper-lapse sequence, I first walk the distance I want to record while looking at a structure off in the distance; this will be my object of interest. If the parallax effect looks interesting, I'll focus in on a very specific point on my object of interest. For example, if the object is a building, I'll pick out its top left corner and retrace my path, making sure nothing ever obstructs my view: the top left corner of my building is now my point of interest. If a light post, tree, or sign ever blocks my line of sight to this point, I'll either scrap the location, move further back, or move in front of whatever is blocking my view and see if that fixes the problem. If my point of interest is no longer blocked while walking, I'll start the hyper-lapse.
|The object of interest is colored light red, with the point of interest circled in yellow.|
The first thing I do is level the tripod's half-ball, then pan and tilt the camera until I get my framing correct. Since the camera I used for this project is a Panasonic G series, I was able to set a vertical line and a horizontal line to act as an anchor point that I would always snap to the point of interest. This is important so the framing between shots is almost identical to the shot before.
I used a Panasonic GH2, but if your brand of camera doesn't have this option, I would either use one of the autofocus points in the viewfinder or, if you want to use live view, tape some fishing wire across the screen in both the vertical and horizontal directions, so the point where the lines cross becomes your crosshair.
Once a photo is taken, I move the tripod in my preferred direction of travel, about the length of a shoe, level the half-ball, pan and tilt so the camera's anchor point matches up with the point of interest, and take another photo. I repeat until I run out of space or have enough photos to make my desired sequence. Once I've finished shooting, the video will look very shaky, so I use a video stabilization program to smooth out the scene (I'll briefly go over this in the post-production area).
|The point of interest is lined up with an anchor point onscreen.|
A few helpful hyper-lapse tips
Finding an edge. Moving the camera in a straight line will make for a smoother hyper-lapse. The edges in sidewalks or curbs are great for this. Pick a sidewalk that is straight, place two of the tripod's legs against the edge of the curb. Level the tripod head and take a photo, then continue lining up those two tripod legs against the edge of the curb as you move along to your next shots.
Setting duration. To figure out how far you want to move between each shot, think of how long you want the scene to last. I recommend at least five seconds. In countries that use NTSC as the video format you will most likely have a video running at 24 or 30 frames per second (25 frames per second in PAL countries). For this project I decided I would make everything 24 frames per second. To get five seconds of footage I multiplied five seconds by 24 frames and came up with 120. So the camera would have to take a photo and be moved 120 times between my starting and ending point.
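The duration arithmetic above can be sketched in a couple of lines (a minimal example; the function name is mine, not part of any time-lapse tool):

```python
def shots_needed(clip_seconds: float, fps: int) -> int:
    """Number of photos required to yield a clip of the given length,
    since each photo becomes one frame of the finished video."""
    return int(clip_seconds * fps)

# Five seconds of footage at the project's 24 frames per second:
print(shots_needed(5, 24))  # 120 photos between start and end point
```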
Calculating distance per shot. If you're not sure how far to move the camera each time, just walk along the ground you want to cover and count your footsteps. Next, figure out how many lengths of your shoe each step covers. I usually cover two shoe lengths per step, so if it took me 30 steps to walk the path I wanted to cover, that comes out to 60 lengths of my shoe. Knowing I want to take 120 photos, I would move a particular leg of my tripod half a shoe length between each shot.
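That pacing calculation reduces to one division. A rough sketch, with names of my own choosing:

```python
def move_per_shot(steps: int, shoe_lengths_per_step: float, shots: int) -> float:
    """Tripod movement between shots, in shoe lengths: total path length
    (steps walked * shoe lengths per step) divided by the number of photos."""
    total_shoe_lengths = steps * shoe_lengths_per_step
    return total_shoe_lengths / shots

# 30 steps at two shoe lengths each, spread across 120 photos:
print(move_per_shot(30, 2, 120))  # half a shoe length per shot
```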
Intervalometer. For picking an interval between shots on a hyper-lapse it’s not usually necessary to have an intervalometer. The only time I do use an intervalometer with a hyper-lapse is with day-to-night, or night-to-day shots because they take two to three hours.
Horizons with ultra-wide lenses. Using a half-ball tripod head is even more important with ultra-wide-angle lenses. Pistol grips or ball heads make it harder to keep the horizon line straight when you move the tripod, and the extreme distortion of ultra-wide-angle lenses makes it harder to keep the horizon level when you're relying on only the one anchor point.
Pay attention. Hyper-lapsing is very repetitive. Don't let your mind wander too much or you might snap your anchor point to the wrong edge of the building. This is more applicable if your point of interest is a specific window on a building where all the windows look the same.
Secure that zoom. If using a zoom lens, tape the zoom ring unless you want to incorporate a zoom into the hyper-lapse. Having your focal length slip between shots can ruin the sequence.
Click the link below to read page two of Stemen's behind the scenes look at creating his time-lapse video.