BlueCosmo5050

Senior Member
Messages
1,772
Solutions
4
Reaction score
1,178
I have been doing video with DSLRs for a while, and I understand what is happening when a 20 megapixel sensor loses information to output a 1080p signal, for example. Although I don't understand it perfectly.

When it comes to 4k, it seems every manufacturer goes to Super 35mm mode to get rid of aliasing and moiré. Now, why is this better? Is it because the camera doesn't have to downsample as many megapixels?

Also, how does pixel binning work, and is it better than what we see on the Sony A6300, for example?

From what I've read, the 1DX Super 35mm mode works differently than the Sony Super 35mm 4k. Both get rid of most aliasing and moiré, but by different processes. Now, the Sony camera is 24 megapixels I believe, and the claim is that there will be no aliasing and moiré (or at least no really bad aliasing and moiré) because that number of megapixels is downsampled to 8 megapixels. So no pixel binning, but oversampling.

I've seen people say that the A6300's 4k will look better than the A7R II's 4k, for example.

So is it because 8 x 3 is 24, so 24 megapixels can evenly oversample the 8 megapixel video? Now, take something like the next Canon 5D. Let's say it has 4k and 28 megapixels and goes into Super 35mm mode for 4k. How does that process work?

Or the 1D, having 20 megapixels. I can't fully grasp what is happening with pixel binning and why one approach is better than the other.

I also don't understand the concept behind being able to have full-frame video in 1080p but needing to go into Super 35mm mode for 4k. If it's to use fewer megapixels, why wouldn't one need to do that in 1080p as well?

For example, my Canon 6D with the VAF-6D filter has hardly any aliasing and moiré; at least I cannot make it show any. Without the filter it aliases and moirés like crazy. It actually seems better at this than the 5D Mark III, which I owned before I went to Sony and then came back to Canon.

Why can't we just do that and get full-frame 4k video? Why is it that the A7S has full-frame 4k video but is the only one? Is it because it has only 12 megapixels?

I know that 4k is extremely sharp, and there is a lot of aliasing and moiré if it's not shot in a flat profile when downsampled to 1080p. I had a Sony AX100 4k video camera, which had all the manual controls of a DSLR, but it would not allow the sharpness to be turned down.

It had worse aliasing than any DSLR I've seen, especially when the 4k was output as 1080p.

I've also noticed a lot of YouTubers who are shooting 4k and outputting 1080p with the GH4, the Sony A7R II, or the A7S, and there is always aliasing and moiré, even though they shoot in a flat profile and add sharpening in post.

It also appears to me that although Motion JPEG is an older codec and doesn't compress as well as the newer stuff, wouldn't it contain more information and be better for color grading than a 100 Mbps 4k codec like Sony's? I understand it takes more space, but one is basically getting an 8 megapixel JPEG for every frame, correct?
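
Just running rough numbers on that (back-of-envelope only, assuming 24 fps; an inter-frame codec spends its bits very differently from an all-intra one, so this isn't a strict apples-to-apples comparison):

```python
# Average data budget per frame: a 100 Mbps inter-frame codec vs. an
# all-intra Motion JPEG stream around 520 Mbps (1DC / 1DX II class).
FPS = 24

def mb_per_frame(bitrate_mbps, fps=FPS):
    """Average megabytes available per frame at a given bitrate."""
    return bitrate_mbps / 8 / fps  # megabits -> megabytes, split over frames

print(f"100 Mbps codec:  {mb_per_frame(100):.2f} MB per frame on average")
print(f"520 Mbps MJPEG:  {mb_per_frame(520):.2f} MB per frame, each one standalone")
```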
 
First, I am not sure what Super 35 has to do with it.

My understanding is that typically, video in consumer cameras (aside from the A7S and some of the new Sonys) is done by line skipping. Then they try to compensate for that by interpolating between frames (?). This site has some tests. Line skipping creates bad aliasing.

Some modern cameras, and I think the A6300 is one of them, interpolate between all pixels. Whether they do binning or not, I am not sure; but resizing would be better than binning, and binning would be better than line skipping.

Format is not important for what you are asking.

The A7S does not need to do resizing, etc., because it has just the right number of pixels, I believe, to do 4k 1:1. A higher-megapixel camera that resized to 4k would do even better.
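
Here's a toy numpy sketch of why skipping is the worst option; it isn't any camera's real pipeline, just line skipping vs. binning on a synthetic test pattern (proper resizing would put a better low-pass filter on top of this):

```python
import numpy as np

# Zone plate: spatial frequency grows with radius -- a classic moiré torture test.
N = 512
y, x = np.mgrid[0:N, 0:N]
r2 = (x - N / 2) ** 2 + (y - N / 2) ** 2
img = 0.5 + 0.5 * np.cos(np.pi * r2 / 750.0)

k = 4  # downsample 4x per axis, e.g. a big stills grid down to a video grid

# Line skipping: read every k-th photosite, throw the rest away.
skipped = img[::k, ::k]

# k x k binning: average each block before decimating. The average acts as a
# crude box low-pass, so far less energy above the new Nyquist survives.
binned = img.reshape(N // k, k, N // k, k).mean(axis=(1, 3))

# Past a certain radius the rings are finer than the output can represent.
# An ideal downscaler renders that zone as flat gray; any texture left
# there is aliasing (visually: false rings / moiré).
m = N // k
yy, xx = np.mgrid[0:m, 0:m]
outer = ((xx - m / 2) ** 2 + (yy - m / 2) ** 2) > (0.45 * m) ** 2
print("false detail after skipping:", round(float(skipped[outer].std()), 3))
print("false detail after binning: ", round(float(binned[outer].std()), 3))
```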
 
Yeah, I get that, but I'm not understanding why it's better for them to crop to Super 35mm on a full-frame camera if they aren't cropping to 14 megapixels or something that is a multiple of 8. Or maybe they are cropping to that many, I'm not sure. I think it's 15 megapixels on the A7R II. I don't know how many megapixels it crops to on the 1DX.
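
If the geometry is what I think it is (pure guesswork on the exact window Sony reads; this just uses the A7R II's published 7952x5304 stills resolution):

```python
# Super 35 crop math on the A7R II's 42 MP full-frame sensor.
full_w = 7952                 # published stills width
crop = 1.5                    # approximate Super 35 / APS-C crop factor

s35_w = full_w / crop
s35_h = s35_w * 9 / 16        # 16:9 video window inside the crop
mp = s35_w * s35_h / 1e6
print(f"S35 16:9 window: ~{s35_w:.0f} x {s35_h:.0f} = {mp:.1f} MP")
print(f"oversampling vs. 8.3 MP UHD: {mp / 8.3:.2f}x the pixels")
```

So it looks like roughly 15-16 MP, which is not a clean multiple of 8 either way.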

The line skipping definitely creates bad aliasing. The 6D was unusable for video because of how bad it was, but once I put that VAF-6D filter over the sensor, it's immaculate. It won't alias at all, yet the picture is still extremely sharp. That's what I noticed most compared to the 5D Mark III footage, as I had a 5D before I sold it and went to Sony for a while.

I noticed that the anti-alias filter in the 5D Mark III gets rid of most aliasing, but the image is very soft. With the 6D, the VAF-6D filter gets rid of all aliasing and yet the image is extremely sharp.

Once I added sharpness in post it blew me away, because I'm used to the 5D Mark III video and how that looks. The detail from the 6D is quite amazing, especially for 1080p.

Here is a test I did while trying out the anti-alias filter. The lens is not stabilized, but I wasn't trying to shoot smooth footage; I was just testing color grading and the filter. This is not Magic Lantern or raw shooting. This is CineStyle straight out of the camera, and I could never get anything this sharp with the 5D.

The guitar cab you see moirés like crazy if I take the VAF-6D filter out, with severe color moiré patterns. Yet there is absolutely none here, as you can see.

You'll want to click the link and watch it on YouTube. If you watch it in the small embedded box it might appear to have aliasing, but it doesn't; when sharp video is sized down to a small box it can look that way. In the regular YouTube player at 1080p you can see how sharp it is without any aliasing.

 
Compressing 25 high-quality 4k frames per second is processor intensive. Few cameras have done clean readouts, though we'll see more and more now that the tech has improved.

The Canon 1DC and 1DX Mark II both do clean 4096x2160 readouts into the 520 Mbps Motion JPEG codec. The active sensor area is a bit larger than S35, with about a 1.28x crop relative to full frame.

If you don't have the CPU power to do clean readouts, using a smaller area of the sensor minimises artifacts when you do skipping and such.

In Canon's Cinema EOS line, they designed the sensor to be S35 from the start in order to be similar in size to a traditional film negative and to be compatible with a vast amount of existing cine glass. From the beginning they have used 4k sensors there, even when they did internal debayering to 1920x1080 output.
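
Roughly like this, I believe (my simplification of the idea, not Canon's actual processing): each 2x2 RGGB cell on a 4k-class Bayer sensor becomes one RGB video pixel, so nothing has to be interpolated across neighbouring cells.

```python
import numpy as np

def bayer_cells_to_rgb(mosaic):
    """Collapse an RGGB Bayer mosaic to half-resolution RGB: each 2x2 cell
    contributes its R, the mean of its two greens, and its B. A 3840x2160
    mosaic comes out as 1920x1080 RGB with no demosaicing guesswork."""
    r  = mosaic[0::2, 0::2]
    g1 = mosaic[0::2, 1::2]
    g2 = mosaic[1::2, 0::2]
    b  = mosaic[1::2, 1::2]
    return np.stack([r, (g1 + g2) / 2.0, b], axis=-1)

# Toy stand-in for a 4k-class sensor readout
mosaic = np.random.rand(2160, 3840)
print(bayer_cells_to_rgb(mosaic).shape)  # (1080, 1920, 3)
```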
 
 
So which ends up being better, on paper at least: the A6300 from Sony, which has 24 megapixels and uses the whole sensor, or a full-frame camera that crops down and uses another process?
There are so many variables that I'm not sure I can give you a reasonable reply.

For some, a full-frame video sensor would be GREAT. Others like the S35 or Academy size better. If the sensor gets too big it can become less flexible in some ways. That said, RED's new 8k VistaVision sensor is wider than full frame.

A full pixel readout is always preferable. If you have to crop to do that, then crop and do a full readout in the smaller area. If you can read every pixel of a 24MP sensor fast enough for video, that sounds like a recipe for great quality. That would mean some serious oversampling (recording at a higher resolution than the final output), which would improve the image in several ways.

But then there is the question of sensor quality (DR/noise), internal scaling, codec and bit depth and so on.

Just saying that a camera is 4k actually means very little. I'm not a Sony shooter myself, but it looks like the A6300 offers quite a bit of image for the money and seems capable of producing images above its weight class.
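
One of those several ways, sketched (a toy demo assuming uncorrelated noise; real sensor noise isn't fully uncorrelated): averaging during the downscale cuts noise by roughly the square root of the pixel ratio.

```python
import numpy as np

rng = np.random.default_rng(0)

# Flat gray frame with noise, standing in for a high-resolution capture
# (kept small here so it runs instantly -- only the ratio matters).
hi = 0.5 + rng.normal(0.0, 0.05, size=(600, 900))

# Downscale 3x per axis by averaging 3x3 blocks (a stand-in for a real scaler).
lo = hi.reshape(200, 3, 300, 3).mean(axis=(1, 3))

print("noise std before:", round(float(hi.std()), 4))  # ~0.050
print("noise std after: ", round(float(lo.std()), 4))  # ~0.017, i.e. sqrt(9) lower
```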
 
Andree Markefors wrote:
If you can read every pixel of a 24MP sensor fast enough for video, that sounds like a recipe for great quality. That would mean some serious oversampling (recording at a higher resolution than the final output), which would improve the image in several ways.
That means proper sampling, actually, or even undersampling, since there is no AA filter.

Using all pixels on APS-C would be better than FF with line skipping, in principle; and that includes noise, too.
 
Hi, J A C S

You are quoting me, but I'm not sure if you agree, disagree, or wanted to make a general statement independent of what I wrote.

I would reply to you if something I wrote wasn't clear.

Cheers
A
 
Take the second sentence as a general statement. The first one is more like a remark on the semantics.
 
OK. In my book, reading 6000x3374 and outputting 1920x1080 or even 3840x2160 means oversampling. You sample at a higher resolution than the output.

I'm not talking about Bayer vs. no Bayer.
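
The ratios in question, for the record:

```python
# Readout vs. output pixel counts for the numbers above (6000 x 3374).
read_w, read_h = 6000, 3374
for out_w, out_h in [(1920, 1080), (3840, 2160)]:
    ratio = (read_w * read_h) / (out_w * out_h)
    print(f"{out_w}x{out_h}: {ratio:.1f}x the output pixel count "
          f"({read_w / out_w:.2f}x per axis)")
```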
 
J A C S wrote:

In most other books, oversampling has a different meaning. There is no output in sampling.
Sampling means "taking samples". It's a word, not just one thing. Oversampling, or extra sampling, is a simple word construction that means taking more samples, for the benefit of increased resolution/precision, than might otherwise be strictly required.

If you are shooting a 24MP still, or even a 50MP still, only to publish on a site that restricts photos to 2048px wide, then you oversampled the pixels of the submitted picture.

I would have hoped that what I wrote, when read in the context of my post, was clear enough. If not, I hope it's clear now.

I feel this makes my point as clear as I can make it and marks the end of my participation in this little sub-thread.
 
My Panasonic GH3 and GH4 cameras have a fair bit of aliasing/moiré in 1080p mode (the GH3 is noticeably worse; I spent ages and ages looking for settings to minimise it), as they only use about 69% of the sensor pixels, leaving the camera to guess at the gaps. The GH4's 4k uses one sensor pixel per video pixel, and I see no issues. The Sony A7S and A7S II both scale the sensor data to make the video, and there are examples on YouTube showing that the 4k has moiré on both, although the 1080p looks very clean (I guess they simply have so many pixels to scale down that there are sensor samples for every colour per video pixel).

BTW, the 1080p from my 5DS R is stunningly clean if shot in the Neutral Picture Style; it leaves the GH cameras in the dust (except the GH4 in 4k). I have no idea what Canon did, but I think they are using a lot of sensor pixels per video pixel.
 
J A C S wrote:
In most other books, oversampling has a different meaning. There is no output in sampling.
Sampling means "taking samples". It's a word, not just one thing. Oversampling, or extra sampling, is a simple word construction that means taking more samples, for the benefit of increased resolution/precision, than might otherwise be strictly required.
For 2mp (HD) output without aliasing, many more pixels are required. You are under the wrong impression that if you need a 2mp output, 2mp out of the 24mp are enough; and therefore using all of them is a luxury.
If you are shooting a 24MP still, or even a 50MP still, only to publish on a site that restricts photos to 2048px wide, then you oversampled the pixels of the submitted picture.
No, you properly sampled (kinda) the image that the lens produced.
 
J A C S wrote:

You are under the wrong impression that if you need a 2mp output, 2mp out of the 24mp are enough; and therefore using all of them is a luxury.
Please feel free to expand your own thoughts and ideas, but DO NOT say what my impression is.

I don't do internet arguments, so as long as you don't involve me, you can explain how things work in as many posts as you like.

This topic can go very deep, and I feel we are communicating on different levels. I told you I am done, but you are free to continue to assist the OP. Please don't reference me going forward; I don't like having words put in my mouth that then go unanswered.
 
JACS is very OCD about his jargon. If your choice of words happens to include words that have been co-opted as photography jargon, it throws him off. This is not a criticism, more of an observation.

 
I wish it were my jargon, but I did not really contribute to sampling theory.

As far as AM is concerned, I replied directly to the remark of his that I highlighted in bold in the other post. To reiterate: the sampling rate is determined by (approximately) the highest frequency of the image, not by the highest frequency of the output. It is not about the jargon; it is what it is: a theorem.
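
The folding is easy to see in one dimension (a toy demo, nothing camera-specific): sample at 8 samples per unit, so Nyquist is 4, and a tone at 5 lands exactly on top of 3.

```python
import numpy as np

fs = 8.0                 # sampling rate; Nyquist is fs / 2 = 4
t = np.arange(64) / fs

for f in (3.0, 5.0):     # one below Nyquist, one above
    samples = np.cos(2 * np.pi * f * t)
    spectrum = np.abs(np.fft.rfft(samples))
    apparent = spectrum.argmax() * fs / len(t)
    # Anything above fs/2 is reflected about it: 5 shows up as 8 - 5 = 3.
    print(f"true frequency {f} -> appears at {apparent}")
```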
 
Fundamental aliasing occurs at the sensor, which folds spatial frequencies above the sensor's Nyquist point back into the sampled image.

When a camera subsamples, it may introduce additional aliasing depending on the design of the subsampling algorithm. I've seen very few actual measurements of how good this secondary operation is from the different manufacturers. It's not hard to determine. Does anyone here know of any tests that have been done? Likely, post-processing the full-resolution image would produce the least secondary aliasing, since compute-intensive algorithms can be applied without real-time constraints.
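
Not aware of formal tests either, but the secondary effect itself is easy to reproduce (a sketch, using a length-k box filter as the crudest possible pre-filter; a real scaler would use something much better):

```python
import numpy as np

fs, f = 96.0, 40.0                # a 40-cycle tone, comfortably sampled at 96
t = np.arange(960) / fs
signal = np.cos(2 * np.pi * f * t)

k = 3                             # subsample 3x: the new Nyquist is only 16
naive = signal[::k]               # plain decimation, no pre-filter at all
boxed = np.convolve(signal, np.ones(k) / k, mode="same")[::k]

def alias_peak(x, rate):
    """Location and height of the strongest spectral peak."""
    spec = np.abs(np.fft.rfft(x))
    return spec.argmax() * rate / len(x), float(spec.max())

# The 40-cycle tone can't exist below Nyquist 16; it folds down to 40 - 32 = 8.
print("naive decimation:   alias at %.1f, amplitude %.0f" % alias_peak(naive, fs / k))
print("box-filtered first: alias at %.1f, amplitude %.0f" % alias_peak(boxed, fs / k))
```

With a proper (longer, windowed) low-pass before the decimation, the folded component essentially disappears, which supports the point about doing this in post without real-time limits.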
 
This guy has done some tests, but they are not easy to find on his site.
The Pentax K-x video aliasing is pretty bad. It appears there is no low-pass processing before the downsample in the camera when capturing video. It wouldn't surprise me if this is quite common.

Interesting. The guy uses concentric circles to bring out moiré, similar to my post here on the 5DS R vs. 1Ds III.

 