I was a C and assembly programmer who wrote computer games for nearly a decade, was a systems admin in a life prior to that, and I build my own computers. My current main system has an overclocked i7-965 EE and a pair of NVIDIA GTX 285s in SLI.

To use the GPU video playback acceleration of current GPUs, the software has to be written specifically to use it, because it is not part of the OS (yet). When you buy those cards, they come with a "special" version of a DVD playback program written specifically to use the hardware video decoder on that particular card. Playback is smooth as long as you use that program. Put the same exact video into a video editor and it will play back choppy unless the CPU has enough threads and speed to decode it. (The first sketch below shows what that explicit opt-in looks like in code.)

Current video editing software does not use the GPU video decoder (and those GPUs don't even have a video encoder). The current generation of video editing software only uses the GPU to accelerate filter rendering and to draw the 2D image buffer to the screen after decoding, which takes hardly any GPU horsepower (second sketch below). That means you can resize the window with faster redraws, but the video will still play back choppy on a weak CPU. The decoding and encoding of the video stream is done on the CPU, and that is what makes playback choppy on slower computers, no matter what GPU you stick in it.
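For anyone curious what "written specifically to use it" means in practice, here is a rough sketch in C using FFmpeg's libavcodec. The hwaccel API shown here is newer than the cards in question, so treat it as an illustration of the opt-in, not as what that bundled DVD player actually does. The point is that hardware decoding only happens when the application explicitly asks the driver for it; skip this setup and the CPU does everything:

```c
#include <stdio.h>
#include <libavcodec/avcodec.h>
#include <libavutil/hwcontext.h>

/* Called by the decoder with the output formats it supports; prefer
 * the GPU surface format when the driver offers it. */
static enum AVPixelFormat pick_vdpau(AVCodecContext *ctx,
                                     const enum AVPixelFormat *fmts)
{
    for (const enum AVPixelFormat *p = fmts; *p != AV_PIX_FMT_NONE; p++)
        if (*p == AV_PIX_FMT_VDPAU)
            return *p;      /* hardware decode path */
    return fmts[0];         /* fall back to a software format */
}

int open_decoder(AVCodecContext **out, const AVCodec *codec)
{
    AVBufferRef *hw_dev = NULL;
    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    if (!ctx)
        return -1;

    /* VDPAU is NVIDIA's decode interface on Linux; DXVA2 is the
     * Windows equivalent. If this fails, the CPU does all the work. */
    if (av_hwdevice_ctx_create(&hw_dev, AV_HWDEVICE_TYPE_VDPAU,
                               NULL, NULL, 0) == 0) {
        ctx->hw_device_ctx = av_buffer_ref(hw_dev);
        ctx->get_format    = pick_vdpau;  /* the explicit opt-in */
        av_buffer_unref(&hw_dev);
    } else {
        fprintf(stderr, "no hardware decoder; using the CPU\n");
    }

    if (avcodec_open2(ctx, codec, NULL) < 0) {
        avcodec_free_context(&ctx);
        return -1;
    }
    *out = ctx;
    return 0;
}
```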
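And this is roughly all the "GPU acceleration" a typical editor does once the CPU has decoded a frame: upload the pixels and draw one textured quad. Legacy fixed-function OpenGL for brevity, and the function name is mine; it assumes a valid GL context (SDL, GLUT, whatever) already exists. Notice how little GPU work is involved:

```c
#include <GL/gl.h>

void draw_decoded_frame(const unsigned char *rgb, int w, int h, GLuint tex)
{
    /* Upload the frame the CPU already decoded -- the expensive part
     * (decoding) is long done by the time the GPU sees any pixels. */
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, rgb);

    /* One textured quad covering the viewport: trivial for any GPU,
     * which is why a faster card can't fix choppy playback here. */
    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f, -1.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f, -1.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f,  1.0f);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f,  1.0f);
    glEnd();
    glDisable(GL_TEXTURE_2D);
}
```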