Behind the shot: 'Louisville in Motion'
The time-lapse video above started out by accident. I was learning how to use a slider to create a motion-controlled time-lapse with photos, rather than sped-up video. My first attempt turned out alright, but hardly anything ever turns out exactly the way you envision it, especially when you're learning something new.
I tried a few other motion-controlled time-lapses, and when they were turned from photos into a video file, I was fairly pleased with the results. During these early efforts, while learning how to use this new gear, I would browse Vimeo and check out what time-lapse videos others had made. I was amazed at some of the city montages on the site, and figured since I had already created a few of my own, I would make a video showing off Louisville, Kentucky during the summer.
Little did I realize how much time and effort went into creating something that really does justice to a good-sized city. It ended up taking a little over a year and a half to make this, and I thought a 'Behind-the-Shot' type article could help others who may be thinking about getting into time-lapse photography. I shot these sequences using a Panasonic Lumix DMC-GH2 with a range of lenses, including the Panasonic Lumix G Vario 7-14mm F4 ASPH, 20mm F1.7 ASPH, Leica DG Macro-Elmarit 45mm F2.8 ASPH OIS, 14-140mm F3.5-5.6 ASPH and the 100-300mm F4-5.6 OIS.
First I'm going to talk about exposure, since many have asked if this video is made up of 'HDR' shots. The answer is no, each frame of the video was taken from an image converted from a single Raw file. The reason very little of the video looks blown out while still retaining detail in the shadows comes down to exposure settings, how the image was saved, and techniques I used to process the images.
Here's a video that shows how a typical clip looks before and after exposure and color correction.
Before capturing each scene I take a few test shots with the Panasonic GH2 set to highlight overexposed areas. Since I want to retain detail in the brightest parts of the image without inducing excess noise in the darker parts, I let a small portion of the clouds 'blow out,' or become completely white. Because clouds are usually white anyway, overexposing a small percentage of them loses very little highlight detail, while maximizing what the camera can record in the darker areas.
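That rule of thumb can be sketched as a simple check. This is an illustrative helper written for this article, not anything the camera runs, and it models a frame as a flat list of 8-bit luminance values:

```python
# Hypothetical sketch: estimate what fraction of a frame is blown out,
# mirroring the in-camera highlight warning described above.

def blown_highlight_fraction(pixels, clip_value=255):
    """Return the fraction of pixels at or above the clipping value."""
    blown = sum(1 for p in pixels if p >= clip_value)
    return blown / len(pixels)

# A test frame: mostly midtones, with a few clipped "cloud" pixels.
frame = [120] * 95 + [255] * 5
print(blown_highlight_fraction(frame))  # a small fraction of the frame is clipped
```

The camera's highlight warning gives the same answer visually; the sketch just makes the "small percentage" judgment explicit.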
|The blue circle marks the section of sky that is overexposed.|
When exposing to protect the bright parts of an image, pictures usually look pretty dark overall, since the sky and especially clouds are generally brighter than things on the ground. To brighten darker parts of an image we want to have as much information from the camera's imaging sensor as possible. When saving something as a JPEG, the camera throws away a lot of what the sensor is capable of reading; however when shooting Raw, many more color values are saved that we can boost later.
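As a back-of-the-envelope illustration of that difference (using linear values and ignoring gamma encoding, so real-world numbers differ), compare how many distinct tonal levels survive in the deep shadows of an 8-bit JPEG versus a 12-bit Raw file:

```python
# Illustrative sketch of why Raw shadows survive brightening better than
# JPEG: a 12-bit file simply has far more distinct levels to redistribute
# when you pull the dark tones up.

def shadow_levels(bit_depth, stops_below_clipping):
    """Distinct integer levels in the darkest part of the tonal range,
    starting `stops_below_clipping` stops under the clipping point."""
    max_value = 2 ** bit_depth
    return max_value // (2 ** stops_below_clipping)

print(shadow_levels(8, 5))   # 8-bit JPEG: only a handful of levels
print(shadow_levels(12, 5))  # 12-bit Raw: many more levels to boost
```

Brightening the JPEG's few shadow levels produces coarse banding, while the Raw file's levels can be stretched and still render smooth tones.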
Finally, Raw files have to be processed to create an image. I'm not going to go into detail about how, since each scene will require different settings and there are many Raw converters available. But most Raw conversion software can darken highlights and brighten the shadows, as well as change color temperature, and even adjust specific shades and brightness of color. For each scene I spend about an hour trying to get the cityscape to my liking. I admit it's not how things look in real life, but then again, what is? I make my adjustments to match how I remember those moments.
Motion blur and depth of field
When I first started working on this video I didn’t have any ND filters, so during day shoots I would generally stop down to about F8 to get as much of the scene in focus as possible, then use a high shutter speed to protect the highlights in my photos. I didn’t mind the stop-motion appearance of the cars and people with this technique, because the clouds and the buildings are the really important parts, and they look smooth. As time went on I began experimenting more and ended up getting a Formatt 77mm Neutral Density 2.4 filter, which cuts the light hitting the sensor by eight stops. That allowed me to play around with blurring motion in the daytime.
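For anyone following the ND math: an optical density of 0.3 corresponds to one stop, so a 2.4 density filter cuts eight stops, a 256x reduction in light. A quick sketch of the arithmetic (the 1/250 sec base exposure is an assumed example, not from the shoot):

```python
# Back-of-the-envelope ND filter math: 0.3 of optical density = 1 stop,
# and each stop doubles the required shutter time.

def nd_stops(optical_density):
    """Convert an ND filter's optical density to stops of light loss."""
    return optical_density / 0.3

def shutter_with_nd(base_shutter_sec, optical_density):
    """Equivalent shutter speed once the ND filter is in place."""
    return base_shutter_sec * 2 ** nd_stops(optical_density)

print(nd_stops(2.4))                  # eight stops
print(shutter_with_nd(1 / 250, 2.4))  # about one second: enough to blur cars
```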
After processing one time-lapse scene and seeing cars and people turn into blurry undefined streaks I decided I much preferred a higher shutter speed during the day. Strangely enough, as much as I appreciated a sharp, blur-free image during the day, I loved the way headlights and tail lights blur on vehicles at night and I would sometimes leave the shutter open for a couple seconds.
A lot of the video is about seeing something ordinary in a way that’s not possible in real life, and the streaking of lights emphasizes this. The cool thing about shooting at night is that since the light level isn’t really changing much, you don’t have to be nearly as consistent about taking the next photo every, say, 15 seconds. You can wait until something interesting happens, even if the time between shots fluctuates radically. A good example of this is the shot of the fountain at night with lights darting through the frame.
This was taken in a quiet residential neighborhood, and cars didn’t go by all that often. To make the scene more exciting, sometimes I would wait 30 seconds between shots, and sometimes a minute and a half, to catch a really neat frame when a bus went by or two cars were in the frame at once.
One other reason I shot these sequences with greater depth of field is that I wanted the viewer to decide what to look at. Usually there is a main area of interest in a shot, where I use camera movement and framing to guide the viewer toward what I think is most important, but it’s really up to the person watching to pick out what to focus on. Even now I’ll re-watch the video and see something I didn’t notice the first hundred or so times around.
To add interest to the video I thought there should be some form of movement besides just the clouds or the sun changing position. Since most buildings move very little, I decided the camera should physically travel in each shot. The items listed below aren’t the only, or necessarily the best, tools, but I believe they worked well to help create the movement in this video.
Manfrotto 535 legs
These are three-stage carbon-fiber tripod legs. They are light, which is nice when climbing to the top of a parking garage, and can get really low to the ground while also extending to a little over my head with a tripod head attached. They are also rated to hold 44 lbs (20 kg), which is more than enough for any of the time-lapse gear used in the video.
Sachtler FSB-8 head
Truthfully, this head is overkill for anything in this video; a much cheaper head with a half-ball base would have worked fine. But this fluid head did make leveling for hyper-lapses more enjoyable, and it’s great for everyday video use.
Kessler Pocket Dolly Ver1
The slider was used for the shorter, slower moves, or most of the day-to-night time-lapses seen in the video. It’s basically a rail with a carriage for the camera to move along.
Kessler Shuttle Pod Mini
This is similar to the Pocket Dolly, but it’s a modular device that can range from 4 feet to 16 feet depending on how many sections of track you decide to add. This was also used for shorter moves, but mostly when I wanted a longer vertical move than I could get with the Pocket Dolly.
Oracle controller and motors
The two previous items aren’t much good for a time-lapse without a motor to move the camera between each photo, and a controller that waits a set amount of time before turning the motor on and off. The controller and motor are what make the short moves look smooth.
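The shoot-move-shoot cycle those parts perform looks roughly like this. This is a hypothetical sketch of the logic, not the Oracle's actual firmware, with stand-in callbacks for the shutter and motor:

```python
import time

# Hypothetical sketch of a shoot-move-shoot motion controller: fire the
# shutter, advance the carriage a fixed step, then wait out the interval.

def run_timelapse(take_photo, move_carriage, shots, interval_sec, step_mm):
    """Run one time-lapse pass: photo, move, wait, repeat."""
    for _ in range(shots):
        take_photo()
        move_carriage(step_mm)    # motor on, move a fixed step, motor off
        time.sleep(interval_sec)  # pause so motion settles between frames

# Usage sketch with stand-in callbacks that just record carriage positions:
frames = []
position = [0.0]
run_timelapse(take_photo=lambda: frames.append(position[0]),
              move_carriage=lambda mm: position.__setitem__(0, position[0] + mm),
              shots=5, interval_sec=0.01, step_mm=2.0)
print(frames)  # carriage position at which each photo was taken
```

The key point the real hardware handles is the same one the loop shows: the camera is stationary when each photo fires, and all movement happens between exposures.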
I most often get asked how the long dolly-type moves are made. These shots are called hyper-lapses and don't require much in the way of specialized video gear. Most decent video tripods have a recessed bowl built into the tripod legs, and then the tripod head attaches to a half ball that in turn sits in the tripod's recessed bowl. This half ball allows the tripod's head to be leveled so when panning and tilting, the horizon will always be pretty close to horizontal (I'll explain why a pistol grip or ball head aren't my preferred style of head in the 'helpful hyper-lapse tips' section below).
|Above is a video head attached to a half-ball mount, which fits into a tripod with a bowl base.|
When contemplating a hyper-lapse sequence, I first walk the distance I want to record while looking at a structure off in the distance; this will be my object of interest. If the parallax effect looks interesting, I'll focus in on a very specific point on my object of interest. For example, if the object is a building, I'll pick out the top left corner of it and retrace my path making sure nothing ever obstructs my view: the top left corner of my building is now my point of interest. If a light post, tree, or sign ever blocks my line of sight to this point I'll either scrap the location, move further back, or move in front of whatever is blocking my view and see if that fixes the problem. If my point of interest is no longer blocked while walking, I'll start the hyper-lapse.
|The object of interest is colored light red, with the point of interest circled in yellow.|
The first thing I do is level the tripod's half-ball, then pan and tilt the camera until I get my framing correct. Since the camera I used for this project is a Panasonic G series, I was able to set a vertical line and a horizontal line to act as an anchor point that I would always snap to the point of interest. This is important so the framing between shots is almost identical to the shot before.
I used a Panasonic GH2, but if your brand of camera doesn’t have this option, I would either use one of the autofocus points in the optical viewfinder or, if you want to use live view, tape some fishing wire across the screen in both the vertical and horizontal directions so the point where the lines cross becomes your cross hairs.
Once a photo is taken, I move the tripod in my preferred direction of travel, about the length of a shoe, level the half-ball, pan and tilt so the camera's anchor point matches up with the point of interest, and take another photo. I repeat until I run out of space or have enough photos to make my desired sequence. Once I've finished shooting, the video will look very shaky, so I use a video stabilization program to smooth out the scene (I'll briefly go over this in the post-production area).
|The point of interest is lined up with an anchor point onscreen.|
A few helpful hyper-lapse tips
Finding an edge. Moving the camera in a straight line will make for a smoother hyper-lapse, and the edges of sidewalks or curbs are great for this. Pick a sidewalk that is straight and place two of the tripod's legs against the edge of the curb. Level the tripod head and take a photo, then keep lining up those same two tripod legs against the edge of the curb as you move along to your next shots.
Setting duration. To figure out how far you want to move between each shot, think of how long you want the scene to last. I recommend at least five seconds. In countries that use NTSC as the video format you will most likely have a video running at 24 or 30 frames per second (25 frames per second in PAL countries). For this project I decided I would make everything 24 frames per second. To get five seconds of footage I multiplied five seconds by 24 frames and came up with 120. So the camera would have to take a photo and be moved 120 times between my starting and ending point.
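That arithmetic generalizes to any clip length and frame rate:

```python
# Clip length times playback frame rate gives the number of photos
# (and camera moves) a hyper-lapse needs.

def shots_needed(clip_seconds, fps):
    """Photos required for a clip of the given length at the given rate."""
    return clip_seconds * fps

print(shots_needed(5, 24))  # the 120 shots used in this project
print(shots_needed(5, 30))  # the same clip at 30 fps needs more photos
```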
Calculating the distance per move. If you're not sure how much distance to move the camera each time, just walk along the ground you want to cover and count your footsteps. Next, figure out how many lengths of your shoe each step covers. I usually cover two shoe lengths per step, so if it took me 30 steps to walk the path I wanted to cover, that would come out to 60 lengths of my shoe. Knowing I want to take 120 photos, I would move a particular leg of my tripod half a shoe length between each shot.
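The same pacing calculation, with the example's numbers plugged in:

```python
# Convert a walked path into a per-shot camera move: path length in shoe
# lengths, divided by the number of photos the clip needs.

def move_per_shot(steps_walked, shoe_lengths_per_step, shots):
    """Camera move between photos, in shoe lengths."""
    path_in_shoe_lengths = steps_walked * shoe_lengths_per_step
    return path_in_shoe_lengths / shots

# 30 steps at two shoe lengths per step, spread over 120 photos:
print(move_per_shot(30, 2, 120))  # half a shoe length per shot
```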
Intervalometer. For picking an interval between shots on a hyper-lapse it’s not usually necessary to have an intervalometer. The only time I do use an intervalometer with a hyper-lapse is with day-to-night, or night-to-day shots because they take two to three hours.
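To see why those long transitions call for an intervalometer, run the numbers on a day-to-night shoot. Both the three-hour duration and the 15-second interval here are assumed examples, not settings from the video:

```python
# Rough day-to-night arithmetic: a steady interval over a multi-hour shoot,
# and the clip length that many photos yields at 24 fps playback.

def dusk_shoot(hours, interval_sec, fps=24):
    """Return (photos taken, resulting clip length in seconds)."""
    shots = int(hours * 3600 / interval_sec)
    clip_seconds = shots / fps
    return shots, clip_seconds

shots, clip = dusk_shoot(hours=3, interval_sec=15)
print(shots, clip)  # hundreds of shots compressed into a short clip
```

Triggering that many exposures by hand over three hours isn't practical, which is why the intervalometer earns its place on these shots.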
Horizons with ultra-wide lenses. A half-ball mount becomes even more important when using ultra-wide-angle lenses. Pistol grips and ball heads make it harder to keep the horizon level as you move the tripod, and the extreme distortion of ultra-wide lenses makes it harder to keep the horizon consistent when you're relying on a single anchor point.
Pay attention. Hyper-lapsing is very repetitive. Don't let your mind wander too much or you might snap your anchor point to the wrong edge of the building. This is more applicable if your point of interest is a specific window on a building where all the windows look the same.
Secure that zoom. If using a zoom lens, tape the zoom ring unless you want to incorporate a zoom into the hyper-lapse. Having your focal length slip between shots can ruin the sequence.
Click the link below to read page two of Stemen's behind-the-scenes look at creating his time-lapse video.