Confused about G3/GH2 video stats

Travis

All of the technical details about the way the Panasonic G series cameras capture video is confusing to me, so I was hoping you guys could either help me out or point me in the right direction.

I currently have a GH2 and am planning on getting a G3 as a second body. I understand the GH2 captures video in one of three ways: 720p/60, 1080i/60, or 1080p/24. And I understand the upcoming G3 will capture at 720p/60 or 1080i/60. However, I've read that the GH2 captures "from the sensor at 60fps" while the G3 captures "from the sensor at 30fps". So my first question is: what does this mean?

And second of all, I don't understand the correlation between fps and interlaced vs progressive video capture. I do understand that interlaced means it captures a frame in two passes (even and odd), while progressive captures the entire frame in one pass. I'm confused, however, about how this directly affects the frame rate.

Specifically, if I have an MTS file from the GH2 that was captured at 720p/60, that means the frame rate is 60 and each frame has the entire sensor readout, right? And if I have a video file captured at 1080i/60, it means I have a frame rate of 60 but each frame only has half of the image from the sensor? And if I de-interlace the video file, then the effective frame rate is now only 30 fps? Or does it stay at 60 but just has less data (interpolated) than before?

And how can the G3 capture at 1080i/60 when it is only getting a sensor readout at 30 fps? Does the frame rate double to 60 simply because it's taking half of the frame in each pass and putting the halves into separate frames during the file writing?

Obviously, I'm pretty confused. In real-world terms, I understand that 60 fps gives you smoother footage, right? But what's the difference between 30 fps progressive versus 60 fps interlaced? Aren't they pretty much equal since the 60i footage only has half of the image for each frame, while the 30p footage has the entire frame? So a 720p/60 setting has a smooth framerate but less resolution, while a 1080i/60 has more resolution but effectively only 30 fps smoothness?

Thanks for any help you can give,
Travis
--
http://travisimo.smugmug.com/
 
It is damned confusing, but I'll give it a shot.
All of the technical details about the way the Panasonic G series cameras capture video is confusing to me, so I was hoping you guys could either help me out or point me in the right direction.

I currently have a GH2 and am planning on getting a G3 as a second body. I understand the GH2 captures video in one of three ways: 720p/60, 1080i/60, or 1080p/24. And I understand the upcoming G3 will capture at 720p/60 or 1080i/60. However, I've read that the GH2 captures "from the sensor at 60fps" while the G3 captures "from the sensor at 30fps". So my first question is: what does this mean?
That is a very good question. Clearly, if the sensor can only be sampled at 30fps, it can't be recording true 720p/60 or 1080i/60 video, both of which require the sensor to be sampled at 60fps. Either the specs that have been reported are incorrect, or the G3 is doing some heavy resampling.
And second of all, I don't understand the correlation between fps and interlaced vs progressive video capture. I do understand that interlaced means it captures a frame in two passes (even and odd), while progressive captures the entire frame in one pass. I'm confused, however, about how this directly affects the frame rate.

Specifically, if I have an MTS file from the GH2 that was captured at 720p/60, that means the frame rate is 60 and each frame has the entire sensor readout, right?
Correct.
And if I have a video file captured at 1080i/60, it means I have a frame rate of 60 but each frame only has half of the image from the sensor? And if I de-interlace the video file, then the effective frame rate is now only 30 fps? Or does it stay at 60 but just has less data (interpolated) than before?
Correct. If you deinterlace 1080i/60 then you end up with 30fps.
And how can the G3 capture at 1080i/60 when it is only getting a sensor readout at 30 fps? Does the frame rate double to 60 simply because it's taking half of the frame in each pass and putting the halves into separate frames during the file writing?
It can't, AFAIK. I think they would have to use each recorded frame twice to achieve 1080i/60.
Obviously, I'm pretty confused. In real-world terms, I understand that 60 fps gives you smoother footage, right? But what's the difference between 30 fps progressive versus 60 fps interlaced? Aren't they pretty much equal since the 60i footage only has half of the image for each frame, while the 30p footage has the entire frame? So a 720p/60 setting has a smooth framerate but less resolution, while a 1080i/60 has more resolution but effectively only 30 fps smoothness?
I believe that is essentially correct, and you also have to consider the transfer rate that the camera is using, with higher transfer rates equating to higher quality frames. The GH2's highest transfer rate is in 1080p/24 mode, which is the mode that the film industry uses. The G3 can't match this transfer rate in any mode.
Thanks for any help you can give,
Hopefully I haven't misled you with any of the above. ;)
 
Specifically, if I have an MTS file from the GH2 that was captured at 720p/60, that means the frame rate is 60 and each frame has the entire sensor readout, right?
Correct.
And if I have a video file captured at 1080i/60, it means I have a frame rate of 60 but each frame only has half of the image from the sensor?
It's more correct to say that each field has half the image from the sensor. So, for instance, the GH2 has a frame rate of 60 and the G3 has a frame rate of 30, but they both have the same field rate.
And if I de-interlace the video file, then the effective frame rate is now only 30 fps? Or does it stay at 60 but just has less data (interpolated) than before?
It depends on what your deinterlacing software does. The correct thing to do is to deinterlace to 60 frames per second by interpolating the missing data. Very simple software will do this interpolation by looking at just a single field and reconstructing the missing lines by looking at the lines above and below. Smarter software will also look at the corresponding lines in the preceding and following fields. Really smart software will use motion prediction to extrapolate movement seen in the preceding and following fields into the missing lines. Good deinterlacing software can recover quite a lot of the missing information.

If you just slap succeeding odd and even fields together, this will work for the GH1 and G3 because every two fields are taken from a single frame. For the GH2, this will combine odd and even fields from different frames, and the motion between the frames will produce visible artifacts.
And how can the G3 capture at 1080i/60 when it is only getting a sensor readout at 30 fps? Does the frame rate double to 60 simply because it's taking half each pass and putting them into separately frames during the file writing?
The difference is:
  • The G3 grabs a frame every 1/30th of a second and splits it into odd and even fields for interlaced formats.
  • The GH2 grabs a frame every 1/60th of a second and discards the odd or even lines for interlaced formats.
Note that the G3 also captures only 30 frames per second for 720p mode, and duplicates frames in the output stream, while the GH2 records a full 60 progressive frames per second.
In real-world terms, I understand that 60 fps gives you smoother footage, right? But what's the difference between 30 fps progressive versus 60 fps interlaced? Aren't they pretty much equal since the 60i footage only has half of the image for each frame, while the 30p footage has the entire frame? So a 720p/60 setting has a smooth framerate but less resolution, while a 1080i/60 has more resolution but effectively only 30 fps smoothness?
No, because a television can display each field, so you will see the equivalent of a 60 frame per second deinterlaced stream, and the GH2 video will therefore be smoother. 30fps is close to the threshold at which we perceive motion as continuous, so the improvement from 30fps to 60fps is much smaller than the improvement from 24fps to 30fps, but it is still visible to many people with fast-moving objects.

--
Joe
 
Thanks for all of the information, it definitely improves my picture (pun intended). I think I'm still a little vague on some of the terminology. For example, what's the difference between a "field" and a "frame"?

And a followup question, if you don't mind:

I obviously want to capture video at the best quality for archiving. I am currently juggling between both 720p/60 and 1080i/60 on my GH2. When viewing the original sized MTS files using VLC on my Mac, the 720p video is smaller and the 1080i is larger. However, the latter also shows objectionable interlacing. I can turn on the on-the-fly deinterlacing setting in VLC which helps somewhat, but the video still appears less smooth at 1080i than 720p.

When viewing these same files on my HDTV via my Boxee Box (it plays the files natively), they both look great and I really can't tell much (if any) difference. I presume the TV is performing the deinterlacing of the 1080i video?

So in terms of strictly viewing on the computer and uploading to YouTube/Vimeo, I presume 720p is the way to go. And if the files both look good on my HDTV, then why would I ever want to capture at 1080i? Is it simply because 1080i will perhaps have higher resolution for videos with very little movement?

Thanks again!
--
http://travisimo.smugmug.com/
 
The difference is:
  • The G3 grabs a frame every 1/30th of a second and splits it into odd and even fields for interlaced formats.
  • The GH2 grabs a frame every 1/60th of a second and discards the odd or even lines for interlaced formats.
When de-interlaced the G3 video ends up as perfect 30p (30 full frames with no interpolation).

When the GH2 is de-interlaced the video ends up a bit ugly when there is movement because every frame is made up of 2 fields taken at slightly different times.

Everyone is hoping that the GH2 is hacked and 60i is replaced with 60p, or at least 30p.
 
I only use the 1080p24 24 Mbit/s mode on my GH2 and 1080p60 28 Mbit/s on my TM700. My first test with my GH2 was in the 1080p24 17 Mbit/s mode and it also looked great to me.
http://vimeo.com/17899410

The G3 does not have any 1080p 24 Mbit/s mode. It has a 1080i60 17 Mbit/s mode and a 720p60 17 Mbit/s mode.
 
The AVCHD codec does not currently support 1080p60 (or 50 in PAL regions). The following quote from Wikipedia may be helpful in this regard:

"In the professional and prosumer markets, AVCHD camcorders such as the Panasonic AG-HMC150, the Panasonic AG-HMC40, the Sony HDR-AX2000 and the Sony HXR-NX5U, are capable of recording in all three high definition formats: 1080i, 1080p and 720p. Sony camcorders do not support film-like frame rates — 24p, 25p, 30p — in 720p mode.

In the consumer market, 60 Hz variants of some Canon, Panasonic and Sony models are capable of recording native 1080p24 video.

In 2010, Panasonic introduced a new lineup of consumer AVCHD camcorders with 1080-line 50p/60p progressive-scan mode (frame rate depending on region).[19] While this mode is not compliant with current AVCHD specification, it uses the same compression schemes for video and audio, the same container files and the same folder structure as AVCHD-compliant recordings.[20] Panasonic advised that not all players that support AVCHD playback could play 1080-line 50p/60p video.[21]

In 2011 Sony introduced consumer and professional AVCHD models also capable of 1080-line 50p/60p video recording. Like Panasonic, Sony uses AVCHD folder structure and container files for storing video, with the same maximum bitrate of 28 Mbit/s."
 
When de-interlaced the G3 video ends up as perfect 30p (30 full frames with no interpolation).

When the GH2 is de-interlaced the video ends up a bit ugly when there is movement because every frame is made up of 2 fields taken at slightly different times.
As I noted above, that is what happens when a simplistic deinterlacing algorithm is used. A good one can produce nearly the equivalent of the source 60p stream. This does require significant processing power, though, and normally can't be done in real-time without special hardware. For editing software this would mean added time in the import clip stage.

--
Joe
 
Thanks for all of the information, it definitely improves my picture (pun intended). I think I'm still a little vague on some of the terminology. For example, what's the difference between a "field" and a "frame"?
Each bundle of odd or even lines in an interlaced video stream is called a "field". A complete picture is called a "frame".
I am currently juggling between both 720p/60 and 1080i/60 on my GH2. When viewing the original sized MTS files using VLC on my Mac, the 720p video is smaller and the 1080i is larger. However, the latter also shows objectionable interlacing. I can turn on the on-the-fly deinterlacing setting in VLC which helps somewhat, but the video still appears less smooth at 1080i than 720p.

When viewing these same files on my HDTV via my Boxee Box (it plays the files natively), they both look great and I really can't tell much (if any) difference. I presume the TV is performing the deinterlacing of the 1080i video?
Most likely they're both doing deinterlacing, but the television is doing it better. The best deinterlacing algorithms are hard to do in pure software at a fast enough rate to display in realtime. The television has specialized hardware to help.

You could try deinterlacing the video first; perhaps VLC could then play the resulting progressive video fast enough.
So in terms of strictly viewing on the computer and uploading to YouTube/Vimeo, I presume 720p is the way to go.
In general a progressive format is good for this. Interlaced footage will be deinterlaced by either YouTube or your playback software, but as you've seen, you may not want to rely on how good a job that will do. Using a progressive format (with deinterlacing done ahead of time, if necessary) takes the guesswork out.
And if the files both look good on my HDTV, then why would I ever want to capture at 1080i? Is it simply because 1080i will perhaps have higher resolution for videos with very little movement?
For the most part, yes. You may also be able to see a bigger difference on a larger screen. I don't see much difference between 1080i and 720p on a 32" TV, but it may be more noticeable on something like a 60" TV screen.

--
Joe
 
And second of all, I don't understand the correlation between fps and interlaced vs progressive video capture. I do understand that interlaced means it captures a frame in two passes (even and odd), while progressive captures the entire frame in one pass. I'm confused, however, about how this directly affects the frame rate.
Interlaced and progressive in video capture define how frames are stored, not how they are created.

With progressive, each full frame is stored one after another. You can think of it like a bunch of 1280x720 images 60x per second, or 1920x1080 images 30x per second.

Interlaced divides each frame into 2 fields and stores them one after another. You can think of it like a bunch of 1920x540 fields, 60x per second.

One field stores the odd lines, the other the even lines of a frame.

But this doesn't say anything about what the content of these fields is.

If these fields are created from a 30fps output, every 2 fields have been captured at the same time. If we say the video starts at 0ms, the fields have these offsets from the beginning (at 60i):

0ms - 0ms - 33ms - 33ms - 66ms - 66ms - 100ms - 100ms
and so on.

Two fields always capture the same moment. The output is exactly the same as storing it progressively at 30fps.

If the 60i video is created from a 60fps output, every field will represent a different point in time.

So the time-line would look like this:
0ms - 17ms - 33ms - 50ms - 66ms
and so on.
Obviously, I'm pretty confused. In real-world terms, I understand that 60 fps gives you smoother footage, right? But what's the difference between 30 fps progressive versus 60 fps interlaced? Aren't they pretty much equal since the 60i footage only has half of the image for each frame, while the 30p footage has the entire frame?
They are exactly equal when they are created from a 30fps source. When they are created from a 60fps source they are not equal.
So a 720p/60 setting has a smooth framerate but less resolution, while a 1080i/60 has more resolution but effectively only 30 fps smoothness?
Obviously this is the case when the 60i-video is created from only 30fps.

If the 60i video is created from a 60fps source it gets a bit more complicated.

In a static scene it would look the same as a regular progressive video, so 1920x1080 will show increased resolution over 1280x720; fps doesn't matter in static scenes anyway.

It gets complicated when movement is involved. Nowadays display devices can't show interlaced video directly (as old televisions did), so a deinterlacer has to convert the 60i input into a 60p output to be shown on the display. This deinterlacer creates full 1920x1080 frames at 60fps.

So the 60i video (from a 60fps source) will indeed be smoother than a 30fps video (regardless of whether it is stored as 30p or 60i). But it will show less vertical spatial resolution when movement is involved.

How much smoother the video will appear and how much perceived spatial resolution you will lose depends on many factors. Very important factors are how much movement is involved and how well the deinterlacer works.

The deinterlacer tries to interpolate spatial resolution from consecutive fields to make the result look as close as possible to video shot at a full 60fps. Obviously the information for full-resolution 60fps isn't all there, so it won't always succeed, and if it is too "aggressive" it can produce artifacts.

Interlaced video is mainly a compromise, trading off spatial resolution for temporal resolution; in a way, each is at its maximum when it is needed the most (spatial resolution is highest in static scenes, where you are most likely to notice it, and temporal resolution is highest in fast-moving scenes, where you won't notice the higher spatial resolution anyway).

BTW: a very important factor in how smooth and how sharp a video is perceived is the shutter speed. You can create apparently smooth videos at almost any framerate if you lower the shutter speed enough. Obviously this will make your video softer when movement is involved. To get sharp and smooth videos, you need high framerates.
 
EXR:

First of all, thanks for the great explanation - the one that made the most sense to me! I appreciate you taking the time to make it easier to understand. I think I'm finally beginning to piece it all together. So basically, the 1080i video from the G3 will be easier to deinterlace because you can simply combine the two fields. But the 1080i video from the GH2 will still be smoother because it actually has a unique field from each grab of the 60fps sensor output, but it's harder to deinterlace?

So if I shot the same scene using my GH2 at 1080i and the G3 at 1080i and played them both back on my HDTV, what practical differences should I expect to see?

Also, you mentioned shutter speed. I've read that when you shoot at 1080/24p on the GH2, it's important to use a specific shutter speed to avoid the "jitter" when panning. I'm not sure I ever understood that either! lol. Can you explain the relationship between the shutter speed and frame rate? Should the shutter speed be the same as the frame rate, or some kind of multiple of it? Don't feel obligated to answer as you've already spent a considerable amount of time answering my last questions! ;-)

Travis
--
http://travisimo.smugmug.com/
 
Travis - I can at least answer your last question. Normally, as a guideline, it is best to shoot with a shutter speed of around twice the frame rate. You will find this called a 180° shutter, a term that comes from film technology. A high shutter speed sharpens each frame, so you may see a series of distinct images instead of smooth motion (which relies on a degree of blurring); this strobing is especially noticeable at a low frame rate like 24 fps. A very low shutter speed, on the other hand, blurs each frame so much that motion looks smeared. The 180° shutter was invented as a compromise between the two.

So on the GH2, a shutter speed of 1/50 will suit the 24p cinema mode well. For a given scene, the exposure balance then becomes one between aperture and ISO setting.

That said, I have found there is some latitude in the shutter speed settings - it is worth experimenting to see what you personally like.
 
EXR:

First of all, thanks for the great explanation - the one that made the most sense to me! I appreciate you taking the time to make it easier to understand. I think I'm finally beginning to piece it all together. So basically, the 1080i video from the G3 will be easier to deinterlace because you can simply combine the two fields. But the 1080i video from the GH2 will still be smoother because it actually has a unique field from each grab of the 60fps sensor output, but it's harder to deinterlace?
Yes.
So if I shot the same scene using my GH2 at 1080i and the G3 at 1080i and played them both back on my HDTV, what practical differences should I expect to see?
That's pretty hard to predict. In scenes with low motion you probably won't see any difference at all.

In scenes with high motion it depends on a lot of things, including shutter speed and the quality of the deinterlacer.

With fast shutter speeds, the output of the G3 would most likely be sharper but more jerky, while the GH2 would be softer but smoother.

Depending on the deinterlacer you might see comb-artifacts on the output of the GH2.

If the shutter speed is slow, both would show quite blurry movement and there wouldn't be much difference. Also, any deinterlacing artifacts would most likely be blurred away by the motion blur.
Also, you mentioned shutter speed. I've read that when you shoot at 1080/24p on the GH2, it's important to use a specific shutter speed to avoid the "jitter" when panning. I'm not sure I ever understood that either! lol. Can you explain the relationship between the shutter speed and frame rate? Should the shutter speed be the same as the frame rate, or some kind of multiple of it? Don't feel obligated to answer as you've already spent a considerable amount of time answering my last questions! ;-)
Generally, fast shutter speeds mean sharp individual frames (good if you want to grab screenshots from a video, for example), but the video appears jerkier. Slow shutter speeds do the opposite, of course.

If you use a shutter speed of 1/framerate, almost every video will be smooth even at a low framerate (as long as it isn't too low). But of course the video will be softer when you use lower framerates (and therefore slower shutter speeds).

A general rule of thumb is to use a shutter-speed of 1/2*framerate.

This is quite a good compromise to get acceptable smoothness with sufficiently sharp frames.

This compromise comes from cinema with its 24fps, and is probably quite good at similar framerates like 25 or 30 fps (and maybe at slower ones as well).

As the framerate gets higher, the shutter speed becomes far less important. 24 or 30fps isn't really enough to make motion fluid; 60fps is much better, and at 120fps you would most likely not see any difference between shutter speeds of 1/120s and 1/4000s when playing the video in real time (you would notice the sharper frames from faster shutter speeds when you pause the video). Already at 60fps you probably won't notice much difference between shutter speeds of 1/120s and faster in anything except the fastest movements.
 
As the framerate gets higher, the shutter speed becomes far less important. 24 or 30fps isn't really enough to make motion fluid; 60fps is much better, and at 120fps you would most likely not see any difference between shutter speeds of 1/120s and 1/4000s when playing the video in real time (you would notice the sharper frames from faster shutter speeds when you pause the video). Already at 60fps you probably won't notice much difference between shutter speeds of 1/120s and faster in anything except the fastest movements.
This has been a very useful thread, and has definitely made things clearer for me also.

I've tended to shoot 1080i at twice the frame rate, and in decent light levels I've always needed to use an ND filter. It would be great to be able to shoot without the filter, so I shall try the above and experiment with higher shutter speeds.

If the higher shutter speeds are not too apparent for the subjects I shoot, then it would make the GH2's option of shooting occasional stills (whilst videoing) far more practical/viable, thus allowing me to shoot video with a more appropriate aperture/shutter speed for possible grab-shot stills without the light loss from an ND filter.

Hope Travis doesn't mind me mentioning this within his thread.

Alan
 
A general rule of thumb is to use a shutter-speed of 1/2*framerate.
I thought the general rule of thumb was to use a shutter speed of 2x framerate?
I believe what was meant is 1/(2 * framerate). For instance, if the framerate is 25 fps, the shutter speed is 1/(2 * 25) or 1/50 sec (not 50 seconds).

By the way, this is a wonderful thread. I learnt more here than I had been able to understand in years.
 
If the higher shutter speeds are not too apparent for the subjects I shoot, then it would make the GH2's option of shooting occasional stills (whilst videoing) far more practical/viable, thus allowing me to shoot video with a more appropriate aperture/shutter speed for possible grab-shot stills without the light loss from an ND filter.
If the movement is low, you won't notice much difference anyway.
 
Apologizing to Travis for diverting the thread a bit, but this is a good source of discussion for GH2 owners and those considering buying other cameras.

I have an A33, which is a competitor to the m4/3 system as regards portability and video. Last week I picked up a GH2 to add to my GF1 with the intention of building upon my video capabilities.

What has been discussed above about interlacing, framerates, shutter speeds, etc., I've managed to absorb over the previous few days thanks to these wonderful forums. My question here is what lenses, besides the 14-140 (which I don't have for a couple of reasons, such as weight and bulk) do folks find preferable for video on the GH2. I have the 45-200 but replaced it with the Olympus 40-150, also for weight and bulk reasons. That was an error, and I'm back to the 45-200 for long-reach purposes; it seems the OIS actually does make a big difference with video, as does smoother zoom operation, etc.

I have the kit lenses 14-42 (came with GH2) and 14-45, as well as the 20mm and 14mm (ordered) primes. I really didn't use the kit lens much on the GF1, preferring the primes. I really wish there were some equivalent of a 16-80 (35mm) available; I think my primes cover the 7-14 ground.

Since, like many others, I'm new to m4/3 video and even newer to the GH2's capabilities, any comments on which lenses you find most useful?
 
My question here is what lenses, besides the 14-140 (which I don't have for a couple of reasons, such as weight and bulk) do folks find preferable for video on the GH2.
As with still shooting, it depends on what you want to shoot.

The main advantages of the 14-140 for video shooting are that it has all the range you're likely to need for most situations, that it can focus, zoom and change aperture smoothly and silently, and that it does these things while also being pretty decent optically. These features are mostly useful for "run and gun" video shooting, as when journalists are covering breaking events. This is also the most common type of video you see on YouTube: all one shot, just the raw footage out of the camera.

If you want to make more effective videos, though, you need to think about planning your shots to tell a story, and editing clips together in an effective way. This is analogous to thinking about composition in still shooting. In this type of shooting, zooming and focusing during a shot are considered distracting; they are used very sparingly, if at all, and only to achieve specific effects. In this environment, "manual everything" lenses are the norm, and high-quality prime lenses are common. This is why pro filmmakers are acting like kids in a candy store with all the manual lenses that can be adapted for the GH1 and GH2; they don't lose any functionality they would use anyway, and gain access to a huge range of glass.

Think about things like:
  • get a wide shot to introduce a location (an establishing shot)
  • get some slow pans that you can insert in the video for transitions or voiceovers
  • take short whole-body clips of people in a crowd that you can insert to add extra interest where needed
  • if you film people talking about something, see if you can take some shots of what they're talking about, which you can insert with their voiceover instead of just showing a talking head
  • get shots from different perspectives, including high and low shots
Once you have an idea of how you're going to tell the story, you'll know what lenses you need. You don't have to be doing a big production to use these techniques; they work just as well for filming a car show, a party, etc.

And once you've mastered all that, it's time to start working on sound. :)

--
Joe
 
