BlueCosmo5050
Senior Member
I have been doing video with DSLRs for a while, and I broadly understand what is happening when a 20-megapixel sensor loses information to output a 1080p signal, for example, although I don't understand it perfectly.
When it comes to 4K, it seems every manufacturer goes to a Super 35mm crop mode to get rid of aliasing and moiré. Now, why is this better? Is it because the camera doesn't have to downsample as many megapixels?
Also, how does pixel binning work, and is it better than what we see on the Sony A6300, for example?
From what I've read, the 1DX Super 35mm mode works differently from the Sony Super 35mm 4K mode; both get rid of most aliasing and moiré, but through different processes. Now, the Sony camera is 24 megapixels, I believe, and the claim being put forward is that there will be no aliasing and moiré (at least nothing really bad) because that number of megapixels is being taken down to 8 megapixels. So no pixel binning, but oversampling.
I've seen people say that the A6300's 4K will look better than the A7R II's 4K, for example.
So is it because 8 x 3 is 24, meaning 24 megapixels can evenly oversample the 8-megapixel video? Now, with something like the next Canon 5D: let's say it has 4K and 28 megapixels and goes into Super 35mm for 4K. How does that process work?
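For what it's worth, here's how I'd sketch the megapixel math. The sensor dimensions below are my assumptions of typical values (a 24 MP APS-C sensor is roughly 6000 x 4000 photosites, and UHD 4K is 3840 x 2160), not any camera's spec sheet:

```python
# Rough oversampling arithmetic, assuming a ~24 MP sensor of 6000 x 4000
# photosites and a UHD 4K (3840 x 2160) output. Illustration only.
sensor_w, sensor_h = 6000, 4000
uhd_w, uhd_h = 3840, 2160

# The 16:9 region actually used for video (full width, cropped height).
video_h = sensor_w * 9 // 16            # 3375 rows
video_mp = sensor_w * video_h / 1e6     # ~20.25 MP feeding the 4K image
uhd_mp = uhd_w * uhd_h / 1e6            # ~8.3 MP output

linear_oversample = sensor_w / uhd_w    # ~1.56x per axis, not a clean 3x
print(round(video_mp, 2), round(uhd_mp, 2), round(linear_oversample, 2))
```

One thing this made clear to me: "24 is 3 x 8" is a ratio of *areas* (megapixels), not of pixel dimensions. Per axis the factor is only about 1.56x, so each output pixel draws on roughly 2.4 sensor pixels in each direction squared, rather than a tidy 3:1 mapping.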
Or the 1D, with 20 megapixels. I can't seem to fully grasp what is happening with pixel binning and why one approach is better than the other.
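My loose understanding of binning versus line skipping, sketched on a toy grayscale pattern (this is just an illustration of the general idea, not any specific camera's pipeline):

```python
# Toy illustration: 2x2 binning vs. line skipping on a grayscale "sensor".
# Not any real camera's readout scheme; just the general concepts.

def bin_2x2(img):
    """Average each 2x2 block of photosites into one output pixel."""
    h, w = len(img), len(img[0])
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

def skip_2x2(img):
    """Keep every other row and column, discarding the rest."""
    return [row[::2] for row in img[::2]]

# Fine detail: alternating bright/dark one-pixel-wide vertical stripes.
stripes = [[255 if x % 2 == 0 else 0 for x in range(8)] for y in range(8)]

print(bin_2x2(stripes)[0])   # -> [127.5, 127.5, 127.5, 127.5]
print(skip_2x2(stripes)[0])  # -> [255, 255, 255, 255]
```

The stripe pattern is finer than the output resolution can represent. Binning averages it into a uniform gray, which looks soft but honest; skipping lands only on the bright columns, so detail the output can't hold reappears as a false pattern, which is exactly the aliasing/moiré problem. Oversampling (reading everything, then filtering and scaling down) is a higher-quality cousin of binning that costs much more readout and processing.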
I also don't understand why a camera can have full-frame video in 1080p but needs to go into Super 35mm mode for 4K. If the crop is there to use fewer megapixels, why wouldn't that be needed in 1080p as well?
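My guess at the answer, in back-of-envelope form: 1080p can be produced by heavy binning or skipping, while 4K needs something close to a 1:1 pixel readout, and reading the whole sensor at full resolution every frame may simply exceed the readout/processing budget. Rough pixels-per-second arithmetic, again assuming a ~24 MP (6000 x 4000) sensor at 30 fps:

```python
# Back-of-envelope readout rates, assuming a ~24 MP (6000 x 4000) sensor
# at 30 fps. Pure arithmetic, not any camera's actual numbers.
fps = 30

full_readout = 6000 * 3375 * fps   # every photosite in the 16:9 area
skipped_1080 = 1920 * 1080 * fps   # 1080p via heavy skipping/binning
uhd_crop = 3840 * 2160 * fps       # a near-1:1 Super 35-ish crop for 4K

print(full_readout, skipped_1080, uhd_crop)
```

That's roughly 608 megapixels/second to oversample the full frame, versus about 62 MP/s for a skipped 1080p readout and about 249 MP/s for a 1:1 crop, which would explain why a crop mode is the pragmatic middle ground for 4K on high-megapixel sensors.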
For example, my Canon 6D with the VAF-6D filter has hardly any aliasing and moiré; at least I cannot make it show any. Without the filter it has aliasing and moiré like crazy. It actually seems better in this respect than the 5D Mark III, which I owned before I went to Sony and then came back to Canon.
Why can't we just do that and get full-frame 4K video? Why is the A7S the only camera with full-frame 4K video? Is it because it has only 12 megapixels?
I know that 4K is extremely sharp, and there is a lot of aliasing and moiré if it's not shot in a flat profile when the sensor is downsampled to 4K. I had a Sony AX100 4K video camera, which had all the manual controls of a DSLR, but it would not allow the sharpness to be turned down.
It had worse aliasing than any DSLR I've seen, especially when the 4K was output as 1080p.
I've also noticed a lot of YouTubers shooting 4K and outputting 1080p with the GH4, the Sony A7R II, or the A7S, and there is always aliasing and moiré, even though they shoot in a flat profile and add sharpening in post.
It also appears to me that although Motion JPEG is an older codec and doesn't compress as efficiently as newer ones, wouldn't it contain more information and be better for color grading than a 100 Mbps 4K codec like Sony's? I understand it takes more space, but you're basically getting an 8-megapixel JPEG for every frame, correct?
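Rough per-frame budget arithmetic, for what it's worth. The ~3 MB Motion JPEG frame size below is my assumption of a typical high-quality 8 MP JPEG, not a spec:

```python
# Per-frame data budget: a 100 Mbps interframe codec vs. Motion JPEG.
fps = 30

bits_per_frame = 100_000_000 / fps           # ~3.33 Mbit per frame on average
mb_per_frame = bits_per_frame / 8 / 1e6      # ~0.42 MB per frame

# Assumed size of a typical high-quality 8 MP JPEG (my guess, not a spec):
mjpeg_mb_per_frame = 3.0
mjpeg_mbps = mjpeg_mb_per_frame * 8 * fps    # ~720 Mbps equivalent

print(round(mb_per_frame, 2), round(mjpeg_mbps))
```

So per frame, Motion JPEG at that assumed quality carries several times the data. The caveat, as I understand it, is that an interframe codec spends most of its bits on the *differences* between frames, so raw bits-per-frame isn't a direct quality comparison; still, the all-intra frames would seem friendlier to heavy grading and frame grabs.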