it does make a difference
(and requires advanced de-interlacing algorithms for solid display on
today's typical progressive displays)
I mentioned it as a placebo effect because the frame rates chosen for good old TV were based on our eyes' threshold, and after decades of TV ogling I've yet to see an action-packed programme, whether from video or a film transfer, look bad with the artifacts those manufacturers claim when viewed interlaced. I can see that kind of artifact when the frame rate drops or through poor encoding, no doubt, but so far none on TV.
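To make the interlacing/de-interlacing point concrete, here's a minimal Python sketch of the two simplest approaches, "weave" and "bob". It's purely illustrative (the helper names `weave` and `bob` are mine, and real panels use motion-adaptive algorithms far beyond this), but it shows where the much-discussed combing artifacts come from.

```python
# Illustrative sketch only, not any broadcaster's or panel's actual algorithm.
# Two naive de-interlacing strategies for a pair of fields (each H/2 x W):
# "weave" interleaves them (combing appears if there was motion between the
# two field capture times), "bob" line-doubles one field (no combing, but
# half the vertical detail).
import numpy as np

def weave(field_top: np.ndarray, field_bottom: np.ndarray) -> np.ndarray:
    """Interleave two fields (H/2 x W each) into one full frame (H x W)."""
    h, w = field_top.shape[0] * 2, field_top.shape[1]
    frame = np.empty((h, w), dtype=field_top.dtype)
    frame[0::2] = field_top      # even lines come from the top field
    frame[1::2] = field_bottom   # odd lines come from the bottom field
    return frame

def bob(field: np.ndarray) -> np.ndarray:
    """Line-double a single field: no combing, but reduced vertical resolution."""
    return np.repeat(field, 2, axis=0)
```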
Don't forget that all over the world broadcasters are transmitting in 'i' mode, so even if the display panels can reconstruct the images into frames again, any distortions or artifacts would already have been embedded. So would the picture be better? Another point to consider: TV programmes are shot at either 25 or 30 fps, so how many artifacts does converting them to 24p introduce? Similarly, when a movie is filmed at 24 fps, it gets converted to 25 or 30 fps for broadcast, then reconstructed back to 24p for viewing at home; how many errors, distortions and artifacts are generated through all those processes? My take on this: manufacturers are just hyping up these progressive panels to sell them; there's no advantage when watching TV.
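For the 24 fps to ~30 fps conversion mentioned above, the classic mechanism is 2:3 pulldown. A small sketch of the cadence (the function name `pulldown_2_3` and the frame labels are just for illustration; 25 fps systems typically speed the film up ~4% instead):

```python
# Sketch of the 2:3 ("3:2") pulldown cadence used to fit 24 fps film into
# ~60 interlaced fields per second for NTSC-style broadcast.
def pulldown_2_3(film_frames):
    """Map 24p frames A, B, C, D, ... onto 60i fields using a 2-3-2-3 cadence."""
    fields = []
    for i, frame in enumerate(film_frames):
        repeats = 2 if i % 2 == 0 else 3   # alternate 2 fields, then 3 fields
        fields.extend([frame] * repeats)
    return fields

print(pulldown_2_3(list("ABCD")))
# ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
# 10 fields per 4 film frames, i.e. 24 * 10/4 = 60 fields/s.
# Inverse telecine back to 24p has to detect this cadence exactly; if editing
# or encoding breaks it, judder and combing artifacts show up.
```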
On the other hand, such panels are good when you watch discs because there's no limitation on the processing power & bandwidth of the players; the only limitation is the capacity of the medium. Again, that's why even with Blu-ray's whopping 25GB the movies are still compressed, but they can be canned in 24p, 25p or even 30p formats without much restriction.
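A quick back-of-the-envelope check of why even 25GB still means heavy compression (the runtime and 8-bit 4:2:0 assumption are mine, just to make the arithmetic concrete):

```python
# Rough arithmetic: uncompressed 1080p24 vs a single-layer 25 GB disc.
width, height, fps = 1920, 1080, 24
bits_per_pixel = 12                      # assumed 8-bit 4:2:0 = 8 + 2 + 2 bits/pixel
runtime_s = 2 * 60 * 60                  # assumed 2-hour movie

raw_bits = width * height * bits_per_pixel * fps * runtime_s
raw_gb = raw_bits / 8 / 1e9
disc_gb = 25

print(f"uncompressed: ~{raw_gb:.0f} GB, disc: {disc_gb} GB, "
      f"compression needed: ~{raw_gb / disc_gb:.0f}:1")
# uncompressed: ~537 GB, disc: 25 GB, compression needed: ~21:1
```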
The full HD that we know, 1920 x 1080i, is only 2MP (1080p is not a
broadcast standard, maybe not yet).
umm, 1920x1080p IS about 2MP, just multiply the two numbers,
and the 4K cameras are more like 8MP than 10MP.
Well, if it's just 4000x2000, ya, you do get 8MP. But the Red One is touting a 4520 x 2540 active array, and that gives you about 11.5MP. I said about 10MP to be more conservative.
http://www.red.com/cameras/tech_specs/
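The megapixel arithmetic for the figures above, spelled out (the `megapixels` helper is just for illustration; the 4520 x 2540 figure is from the linked Red spec page):

```python
# Quick megapixel arithmetic for the resolutions mentioned above.
def megapixels(w: int, h: int) -> float:
    return w * h / 1e6

print(f"1920 x 1080 = {megapixels(1920, 1080):.1f} MP")   # ~2.1 MP (full HD)
print(f"4000 x 2000 = {megapixels(4000, 2000):.1f} MP")   # 8.0 MP ('4K' ballpark)
print(f"4520 x 2540 = {megapixels(4520, 2540):.1f} MP")   # ~11.5 MP (Red One active array)
```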
So professional video cameras
with big sensors like 1/2" or 2/3" 3CCDs actually have very good noise
characteristics because the pixel density is not high. E.g. a P&S
with a 1/8" sensor boasting 10MP vs a 2/3" sensor with only 2MP.
but what about a 35mm FF DSLR, which still has larger photosites?
If the photosites are larger, then the noise performance is better. But the 5D II doesn't really have large pixels, since there are 21MP crammed inside; if I'm not mistaken, the pixel pitch is about 6.4 microns or thereabouts. When you compare that to the first-gen 5D, the pixel pitch is more like 8+ microns. Big difference there. If Canon were to put Mk II technology into the old 5D or the first-gen 1Ds, the noise performance would be unbeatable by any other maker. Relatively speaking, a 2/3" sensor with 2MP would have large photosites as well. In the past with SD, a 2/3" sensor housed about 470k to 800k pixels; Sony Betacam ruled big time back then. Of course things progress & here we are with HD. LOL!!
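A rough pixel-pitch comparison to back that up (the `pitch_um` helper is illustrative; sensor widths and horizontal pixel counts are approximate published figures, and the calculation ignores gaps between photosites):

```python
# Approximate pixel pitch = sensor width / horizontal pixel count.
def pitch_um(sensor_width_mm: float, horizontal_pixels: int) -> float:
    return sensor_width_mm / horizontal_pixels * 1000  # mm -> micrometres

print(f"5D Mark II (36 mm, 5616 px wide): ~{pitch_um(36.0, 5616):.1f} um")   # ~6.4 um
print(f"5D classic (36 mm, 4368 px wide): ~{pitch_um(36.0, 4368):.1f} um")   # ~8.2 um
print(f'2/3" 2MP video (8.8 mm, 1920 px): ~{pitch_um(8.8, 1920):.1f} um')    # ~4.6 um
```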
yeah, probably so. Going dual-DIGIC could already get to 3MP of bandwidth
or so on a 5D MkII, so it seems very possible; in even just 4-5
more years, I bet a dual-DIGIC VI will be able to (just barely) do it.
As long as a future DIGIC VI can deliver the minimum, I don't think anyone would make an "overkill" product by putting in more advanced stuff. The exception, perhaps, being an 8-core Intel i7 computer with tri-SLI just for email, posting in forums & classic Pac-Man? LOL!!
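For a sense of scale on the bandwidth question, a rough pixel-throughput comparison (illustrative only; real processor load also depends on bit depth, codec and readout design, none of which is specified here, and `mpix_per_s` is just a made-up helper):

```python
# Pixels per second that a processor would have to move at each format.
def mpix_per_s(w: int, h: int, fps: float) -> float:
    return w * h * fps / 1e6

print(f"1080p30:              ~{mpix_per_s(1920, 1080, 30):.0f} Mpix/s")  # ~62
print(f"4K 24p (4520 x 2540): ~{mpix_per_s(4520, 2540, 24):.0f} Mpix/s")  # ~276
# Roughly 4-5x the pixel throughput of 1080p30, before codec overhead.
```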