Pixel Binning vs. Skipping vs Oversampling

Started Nov 6, 2015 | Discussions
smorgasbord Regular Member • Posts: 261
Pixel Binning vs. Skipping vs Oversampling
21

Apparently, there is a bunch of confusion over the terms Pixel Binning, pixel skipping, and oversampling. Tony Northrup got it wrong, calling skipping and binning the same, which they are definitely not. So, I ended up doing a bunch of research and thought I'd share the results. I'm not an expert, so I could be wrong and welcome corrections and amplifications.

First, what makes this complicated is that sensor pixels are not the pixels we're used to dealing with in post-production. It starts with sensor layout. Each sensor pixel (or photosite) has a Red, Green, or Blue filter over it, so it captures only the amount of light in the color band that the filter lets through. These colored filters are laid out in an alternating pattern, the most common being the Bayer Pattern (https://en.wikipedia.org/wiki/Bayer_filter), in which every other photosite is green (50% of the total), while the remaining photosites alternate between red and blue. This roughly matches the human eye's greater sensitivity to green. To get from these single-color pixels to the combined RGB pixels we all know and love requires what is called "demosaicing," which is the combination of nearby R, G, and B photosites into a single output pixel (https://en.wikipedia.org/wiki/Demosaicing). There are various algorithms for doing this.

One of these algorithms is called "Pixel Binning" (see http://www.thedailynathan.com/demosaic/algorithms.php?algorithm=pixelbinning&image=i/yhmgr_bayer.png). This is a simple algorithm and does not yield high quality results. Note that a Raw Image contains the values from the photosites, leaving it to the subsequent processor (like Adobe Camera Raw) to implement a demosaicing algorithm and produce RGB pixels. Thus, different raw converters can (and do) produce different results. It's possible that in the future someone will invent a new algorithm and then all of our old raw images will look better.
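
To make that concrete, here's a rough Python/numpy sketch of that simple "pixel binning" style demosaic (purely illustrative: I'm assuming an RGGB layout and made-up values, and real raw converters use far more sophisticated interpolation):

import numpy as np

def binning_demosaic(raw):
    # Toy "pixel binning" demosaic: collapse each non-overlapping 2x2 RGGB
    # cell into one RGB output pixel (R, average of the two Gs, B).
    # `raw` is a 2-D array of sensel values in an assumed RGGB layout.
    h, w = raw.shape
    r  = raw[0:h:2, 0:w:2]   # top-left sensel of each 2x2 cell
    g1 = raw[0:h:2, 1:w:2]   # top-right
    g2 = raw[1:h:2, 0:w:2]   # bottom-left
    b  = raw[1:h:2, 1:w:2]   # bottom-right
    return np.dstack([r, (g1 + g2) / 2.0, b])

rng = np.random.default_rng(0)
raw = rng.integers(0, 4096, size=(6, 6)).astype(float)  # fake 6x6 mosaic
print(binning_demosaic(raw).shape)  # (3, 3, 3): half the resolution each way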

Andrew Reid in his post (http://www.eoshd.com/2014/04/sony-a7s-super-35mm-4k-mode/) says: "Sony say no binning is used by the sensor for video on the A7S in either 4K or 1080p modes. That implies the sensor scans at maximum resolution or close to it and that the binning for 1080p is done on the image processor, like on the RX10." I think he's right, although he's misusing the term "binning," which is a sensor readout technique that started with CCDs and has been carried over into the CMOS world to mean adding photosite values together before processing. On CCDs, just reading the sensor changes the stored values (a Schrödinger's cat kind of thing), so there are consequences to binning.

At any rate, Sony is reading photosite values off the sensor and applying a sophisticated demosaicing algorithm (as all the kinds of cameras we're talking about do) to create an image of RGB pixels, but that image is higher in resolution than the desired output image. Mr. Reid is postulating that the output image is then scaled down in real time by the camera's image processor to produce the desired resolution. And this is done for every video frame, whether 30, 60 or 120 times a second.

Since I've gone this far, I might as well mention the Nyquist Theorem. When going from an analog signal (light heading towards a sensor) to a digital signal (an array of pixels), you ideally want to sample the analog signal at at least twice the highest frequency of detail you want in the digital output image - that's the Nyquist criterion. Imagine looking through a picket fence at another picket fence right next to it. If both are at the same pitch, then you might have slat alignment and not see the other fence at all, or you might be exactly off, with the front's gaps filled by the back's slats, so it looks like a solid fence. But with twice as many openings in the front fence in the same space as the back fence has slats, you're guaranteed to see all the slats of the back fence. If you don't have a full 2X sampling frequency, then you will see artifacts as the fences' relative positions change and slats are hidden or revealed - that's what we see as moire. And if you're skipping pixels for video, as most DSLRs do, then you're increasing moire, since light hitting the skipped pixels is completely lost, and as edges go from read pixels to skipped pixels to read pixels again you get crawling jaggies.
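
Here's a little one-dimensional numpy sketch of the picket-fence idea (a toy model, not a simulation of any real sensor): the "fence" is a sine pattern whose pitch is close to the output sample spacing, so plain skipping aliases it into a coarse false pattern, while averaging groups of samples first (oversample, then downsample) largely suppresses it.

import numpy as np

n_fine = 4000                         # dense samples: the light reaching the sensor
x = np.arange(n_fine)
fence = np.sin(2 * np.pi * x / 4.3)   # fine pattern, pitch ~4.3 samples

factor = 4                            # reduce to 1000 output samples

skipped = fence[::factor]                            # skipping: keep every 4th value only
averaged = fence.reshape(-1, factor).mean(axis=1)    # average each group of 4, then keep

# The skipped output still swings strongly (an aliased low-frequency beat,
# i.e. moire); the averaged output stays close to the true local mean of ~0.
print("std of skipped output :", round(skipped.std(), 3))
print("std of averaged output:", round(averaged.std(), 3))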

To reduce moire without oversampling you essentially have to blur the image, which reduces detail resolution. But starting with more pixels and scaling down reduces noise in the final image without sacrificing detail at the final output size.

And, while many people use "pixel binning" to refer to some generic image scale reduction, I think it's important to understand the topology of image sensor color filters and then how the raw array of single-color "pixels" is demosaiced to produce an image of RGB pixels, and that "binning" is technically part of that demosaicing process, not a post-RGB image scale process. I hate being a nomenclature hawk, but it does help us understand each other better.

The big reason why 1080p on the A7S/A7Sii looks so clean is that the image is sampled at the Nyquist frequency. This means Sony does not have to blur the image at all to reduce moire, since there is none. For 4K video on the A7Rii Sony is also oversampling, but only at 1.3X, not 2.0X - therefore there is still the potential for moire, and unless there's an "anti-aliasing filter" in front of the sensor Sony is doing some amount of blurring.

DSLRs use pixel skipping (and the A7Rii does this for 4k video off the full frame sensor). This retains the shallow DOF abilities of the larger sensor and most importantly yields the same field of view as full-res stills, but does nothing for noise. Worse, the higher the resolution of the sensor the more pixels that are skipped, and so moire increases. And even if there were an anti-aliasing filter in front of the sensor, it would be tuned for high-res stills, not video, so it would be inadequate. Hence DSLR manufacturers have to do some amount of blurring to reduce moire, but not so much that it hurts the image - it's a fine line there. It's also why S35 crop mode for 4K looks better than FF mode for 4K on the A7Rii - no pixels are skipped, yet there's some oversampling.

What's tantalizing about the A7Rii's sensor is that it's actually big enough (7952x5304) to do proper 2X Nyquist level oversampling of a QFHD 4K image (3840x2160). If Sony can get a processor fast enough, they could produce astounding 4K video - but given the overheating that's already happening there would definitely need to be some processor upgrades.
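
As a quick sanity check on that, using the pixel counts quoted above (simple arithmetic, nothing more):

sensor_w, sensor_h = 7952, 5304   # A7RII effective pixels, as quoted above
uhd_w, uhd_h = 3840, 2160         # QFHD / UHD 4K frame

print(sensor_w / uhd_w, sensor_h / uhd_h)            # ~2.07 and ~2.46
print(sensor_w >= 2 * uhd_w, sensor_h >= 2 * uhd_h)  # True True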

To summarize:

  • Binning is the combining of photosite sensor values as they're read from the sensor and then sent to the demosaicing process.
  • Skipping is literally skipping over some sensor values.
  • Oversampling is reading more photosites than the final output format needs, then downsampling (scaling) after demosaicing to combine values down to the output resolution. (A toy sketch contrasting the three follows below.)
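
And the promised toy sketch, contrasting the three on a single row of made-up monochrome values (this only shows what each operation does to the data, not how any particular camera implements it):

import numpy as np

row = np.arange(16, dtype=float)   # stand-in for one row of sensel values

# Binning: neighbouring values are combined at (or right after) readout,
# before demosaicing, so no light from the group is thrown away.
binned = row.reshape(-1, 2).sum(axis=1)

# Skipping: some sensels are simply never read; their light is lost.
skipped = row[::2]

# Oversampling: read everything, demosaic at full resolution, then
# downsample (here a plain 2:1 average) to the output size.
downsampled = row.reshape(-1, 2).mean(axis=1)

print(binned)       # 1, 5, 9, ... 29
print(skipped)      # 0, 2, 4, ... 14
print(downsampled)  # 0.5, 2.5, 4.5, ... 14.5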

I hope this was helpful.

Sony a7R II Sony a7S Sony a7S II
Ian Matyssik Junior Member • Posts: 32
Re: Pixel Binning vs. Skipping vs Oversampling

Nice! Very well explained with just the right amount of technicalities.
--
https://phpb.com/ - My Photography Blog

 Ian Matyssik's gear list:
Sony a7R II Zeiss Batis 85mm F1.8 Sony FE 55mm F1.8 Sony FE 70-200 F4 Sony FE 35mm F2.8 +2 more
Arizona Sunset Senior Member • Posts: 3,795
Re: Pixel Binning vs. Skipping vs Oversampling

Really helpful. It explains a lot to me about why we're waiting for processing power, more than sensors, to improve video quality. And why some folks see the S35 4K coming off the RII as superior to the SII in detail, but not in handling of artifacts and moire, the most important parts in motion. Thanks for writing this. The confusion across terminology is less interesting to me than the overall understanding that you clarified. I'd be interested to see how the other cameras fit into your post, e.g. the 10 II, and even the RX1RII.

 Arizona Sunset's gear list:
Canon G7 X II Sony RX100 VI Sony RX1R II Apple iPhone 7 Plus
pimuk Regular Member • Posts: 332
Re: Pixel Binning vs. Skipping vs Oversampling

Thank you very much for sharing. It is well explained and I learned from it.

 pimuk's gear list:
Sony a9 Sony FE 24-70mm F4 OSS Sony FE 70-200 F4 Sony FE 35mm F1.4 Sony FE 55mm F1.8 +3 more
tellure99 Forum Member • Posts: 78
Re: Pixel Binning vs. Skipping vs Oversampling

Excellent post, thanks very much!

Just one clarification question: if the term "pixel binning" is an algorithm for de-mosaicing sensors, when Sony claims it is not used on the A7S ("World’s first full-frame sensor capable of full pixel readout without pixel binning" link), does that just mean that they are using a more sophisticated de-mosaicing algorithm instead of the simple pixel binning technique? Or are they also using the term incorrectly and mean to say the A7S doesn't do pixel-*skipping*?

Astrophotographer 10 Forum Pro • Posts: 12,190
Re: Pixel Binning vs. Skipping vs Oversampling
1

In a mono CCD binning is simply averaging the values of 2x2 pixels ie: 4 pixels. This has the effect of lowering resolution slightly (depending on the scene) but increasing signal to noise ratio.

But not all CCDs do this effectively, as the circuit architecture can limit the total amount that is recorded. Some sensors, then, do not bin well and others do, depending on how well they are designed.

In digital cameras I am not sure how this is implemented, if at all. I am not aware of any digital camera that does binning. Can you let me know if there are some?

In astrophotography, binning is commonly used to boost the signal when focusing on a dim scene, where detail is not as important, or when capturing colour, for speed of acquisition, while no binning is used when capturing the luminance data, which holds all the detail.

Greg.

 Astrophotographer 10's gear list:
Sony a7R II Sony a7R III Sony FE 55mm F1.8 Zeiss Batis 85mm F1.8 Zeiss Loxia 21mm F2.8 +1 more
mgrum Contributing Member • Posts: 741
Re: Pixel Binning vs. Skipping vs Oversampling

tellure99 wrote:

Excellent post, thanks very much!

Just one clarification question: if the term "pixel binning" is an algorithm for de-mosaicing sensors, when Sony claims it is not used on the A7S ("World’s first full-frame sensor capable of full pixel readout without pixel binning" link), does that just mean that they are using a more sophisticated de-mosaicing algorithm instead of the simple pixel binning technique? Or are they also using the term incorrectly and mean to say the A7S doesn't do pixel-*skipping*?

Neither. Sony are using the term correctly. The original post fails to mention that binning more commonly refers to the process of combining the signals from sets of pixels of the same colour.

mgrum Contributing Member • Posts: 741
clarifications on "binning"...
1

smorgasbord wrote:

And, while many people use "pixel binning" to refer to some generic image scale reduction, I think it's important to understand the topology of image sensor color filters and then how the raw array of single-color "pixels" is demosaiced to produce an image of RGB pixels, and that "binning" is technically part of that demosaicing process, not a post-RGB image scale process.

Actually "binning" most commonly refers to the sensor combining the signal from several different pixels in order to produce one larger "pixel", to increase readout speed or reduce read noise.

If it is used on a colour sensor then it is done per colour, i.e. 4 red pixels are binned together, 4 blue pixels binned together.

This is done on certain PhaseOne digital backs (and called "sensor plus") as a workaround to the terrible read noise of their CCDs above base ISO. The demosaicing algorithm has to know that this type of binning has been performed in order to produce accurate results.
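
To illustrate what that means in practice, here's a rough numpy sketch of same-colour 2x2 binning on an RGGB mosaic (my own toy illustration, assuming an RGGB layout; real backs do this in the readout hardware): each colour plane is combined with its own neighbours, and the result is a quarter-size array that is still a Bayer mosaic and still needs demosaicing.

import numpy as np

def bin_same_colour_2x2(raw):
    # Combine 2x2 groups of *same-colour* sensels of an RGGB mosaic.
    # The output is a half-width, half-height array that is still RGGB.
    h, w = raw.shape
    out = np.empty((h // 2, w // 2), dtype=float)
    for dy in (0, 1):                   # the four Bayer phases: R, G, G, B
        for dx in (0, 1):
            plane = raw[dy::2, dx::2]   # all sensels of one colour
            binned = plane.reshape(plane.shape[0] // 2, 2,
                                   plane.shape[1] // 2, 2).sum(axis=(1, 3))
            out[dy::2, dx::2] = binned
    return out

raw = np.arange(64, dtype=float).reshape(8, 8)  # fake 8x8 RGGB mosaic
print(bin_same_colour_2x2(raw).shape)           # (4, 4), still an RGGB mosaic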

RicksAstro Veteran Member • Posts: 3,842
Re: Pixel Binning vs. Skipping vs Oversampling
2

Astrophotographer 10 wrote:

In a mono CCD binning is simply averaging the values of 2x2 pixels ie: 4 pixels. This has the effect of lowering resolution slightly (depending on the scene) but increasing signal to noise ratio.

But not all CCDs do this effectively as the circuit architecture can limit the total amount that is recorded. Some sensors then, do not bin well and others do depending on how well they are designed.

In digital cameras I am not sure how this is implemented, if at all. I am not aware of any digital camera that does binning. Can you let me know if there are some?

In astrophotography binning is commonly used for rapidity of focusing a dim scene to boost the signal where detail is not as important or in capturing colour for quickness of acquisition and using no binning for capturing luminance data which has all the detail.

Greg.

Exactly right.  For CCDs, the advantage of binning 2x2 adjacent pixels is to only incur a single read noise hit for 4 pixels, thus the gain in S/N at the expense of lower resolution.   It physically acts like a single larger pixel.  The S/N of true HW binning is greater than just averaging 4 pixels together after being read, since the read noise would be in each of the 4 for averaging.

Really, you can't HW bin a bayer array chip and come out with color data, since when the 2x2 array is binned, the RGBG data is lost. Any "binning" done after readout, even in the demosaicing process, is averaging or subsampling, not binning in the strict sense.

Unfortunately, the terms binning and subsampling are used by Sony and others interchangeably.   Perhaps binning can imply a special case of subsampling by an even multiple of 2, but it's still done after readout.

These days, read noise is so low on modern CMOS sensors that HW binning is kind of anachronistic. Even for Astro work, read noise really is only an issue if you are at a super dark site, where sky background noise takes a while to overcome read noise, or if you want to take very short exposures. At normal sites, read noise is well overcome in 30 seconds or so.
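
For anyone who wants to see the read-noise argument in numbers, here's a quick Monte Carlo toy in Python (the signal and noise figures are made up for illustration; the point is only that hardware binning pays one read-noise penalty per group, while summing after readout pays four):

import numpy as np

rng = np.random.default_rng(1)
signal = 10.0        # mean photoelectrons per sensel (toy value)
read_noise = 5.0     # read noise per readout, e- RMS (toy value)
n = 200_000          # number of simulated 2x2 groups

# Hardware binning: charge from 4 sensels is summed, then read ONCE.
hw = rng.poisson(4 * signal, n) + rng.normal(0, read_noise, n)

# Post-readout summing: each of the 4 sensels is read (its own read
# noise), and the digitized values are summed afterwards.
sw = (rng.poisson(signal, (n, 4)) + rng.normal(0, read_noise, (n, 4))).sum(axis=1)

print("SNR, hardware binning :", round(hw.mean() / hw.std(), 2))  # ~5.0
print("SNR, sum after readout:", round(sw.mean() / sw.std(), 2))  # ~3.4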

RicksAstro Veteran Member • Posts: 3,842
Re: Pixel Binning vs. Skipping vs Oversampling

mgrum wrote:

tellure99 wrote:

Excellent post, thanks very much!

Just one clarification question: if the term "pixel binning" is an algorithm for de-mosaicing sensors, when Sony claims it is not used on the A7S ("World’s first full-frame sensor capable of full pixel readout without pixel binning" link), does that just mean that they are using a more sophisticated de-mosaicing algorithm instead of the simple pixel binning technique? Or are they also using the term incorrectly and mean to say the A7S doesn't do pixel-*skipping*?

Neither. Sony are using the term correctly. The original post fails to mention that binning more commonly refers to the process of combining the signals from sets of pixels of the same colour.

While it may be commonly used that way, it's still not what the term meant originally. Binning circuitry was present in CCDs where 2x2 adjacent pixels were read in a single read action, making them act as a single pixel. Obviously then all color data (if a bayer array) is lost.

Any binning done after readout is really subsampling, whether before, during or after demosaicing.

mgrum Contributing Member • Posts: 741
Re: Pixel Binning vs. Skipping vs Oversampling
1

RicksAstro wrote:

While it may be commonly used that way, it's still not what the term meant originally. Binning circuitry was present in CCDs where 2x2 adjacent pixels were read in a single read action, making them act as a single pixel. Obviously then all color data (if a bayer array) is lost.

There are colour CCDs which HW bin 4 pixels of each colour, for example the chip in the PhaseOne IQ180.

OP smorgasbord Regular Member • Posts: 261
Re: Pixel Binning vs. Skipping vs Oversampling

mgrum wrote:

tellure99 wrote:

Excellent post, thanks very much!

Just one clarification question: if the term "pixel binning" is an algorithm for de-mosaicing sensors, when Sony claims it is not used on the A7S ("World’s first full-frame sensor capable of full pixel readout without pixel binning" link), does that just mean that they are using a more sophisticated de-mosaicing algorithm instead of the simple pixel binning technique? Or are they also using the term incorrectly and mean to say the A7S doesn't do pixel-*skipping*?

Neither. Sony are using the term correctly. The original post fails to mention that binning more commonly refers to the process of combining the signals from sets of pixels of the same colour.

I thought it obvious that one would never combine a red value with a blue value when creating a color image, sorry. The link on binning that I provided in that post goes into far more detail on binning with bayer color patterns, including more sophisticated algorithms that try to compensate for the offsets of the red, green, and blue photosites. Note that the matter is further complicated by there being twice as many green pixels as red or blue. The link goes into a lot of details on various demosaicing algorithms, with explanations and examples.

OP smorgasbord Regular Member • Posts: 261
Re: Pixel Binning vs. Skipping vs Oversampling
2

RicksAstro wrote:

For CCDs, the advantage of binning 2x2 adjacent pixels is to only incur a single read noise hit for 4 pixels, thus the gain in S/N at the expense of lower resolution. It physically acts like a single larger pixel. The S/N of true HW binning is greater than just averaging 4 pixels together after being read, since the read noise would be in each of the 4 for averaging.

Really, you can't HW bin a bayer array chip and come out with color data since when the 2x2 array is binned, the RGBG data is lost. Anything "binning" done after readout, even in the demosaicing process, is averaging or subsampling, not in the strict sense binning.

Unfortunately, the terms binning and subsampling are used by Sony and others interchangeably. Perhaps binning can imply a special case of subsampling by an even multiple of 2, but it's still done after readout.

Any more, read noise is so low on modern CMOS sensors that HW binning is kind of anachronistic. Even for Astro work, read noise really is only an issue if you are at a super dark site where sky background noise takes a while to overcome read noise or if you want to take very short exposures. At normal sites, read noise is well overcome in 30 seconds or so.

Thanks for the clarification and explanation. What RicksAstro wrote better conveys what I was trying to say about the confusion between binning and downsampling, and that the term binning should just be forgotten with today's CMOS sensors. Most importantly, I wanted to clarify that binning and skipping were definitely not the same, especially since Tony N had confused them.

One related thing I didn't get into much detail during the first post was resolution and offsets. The pixels we deal with in post-production are single locations with combined red, green, and blue values. However, those values are at different points (photosites) on the sensor. Here's what the pattern looks like:

Sensor Bayer pattern

This would be a 36-pixel sensor. If you were to perform a simple demosaic, grabbing each 2x2 sub-array (which has 1 red, 1 blue, and 2 green photosites) and iterating over the sensor one photosite at a time to produce a combined RGB value (sum the greens and divide by 2), you'd have an image that was 5x5 pixels, or 25 pixels total. That means 25 unique red values, 25 unique green values, and 25 unique blue values.

But, there aren't 25 unique values for any color - not even green! Each interior photosite contributes to 4 final pixels. If we start at the top left corner, then move 1 photosite to the right, we see that the green and red values in the second column of the sensor contribute to both of those final pixels. The red photosite at (2,2) is going to be the same for 4 final pixels. So, you're not really getting detail there. It's better for green since there are so many of them, but there still aren't 25 unique green values in the array. (BTW, the missing row and column in going from 6x6 to 5x5 don't matter for high-resolution sensors.)

My point here is that when we think of a sensor that's 4240 x 2384 pixels and we get an image that is that size as well, we need to remember that each color channel is not really at that resolution - that adjacent pixels are re-using photosite values.

Now, there's been a lot of work on better algorithms than the 2x2 box filter I described (some of them are included in the link from my first post on binning/demosaicing), so the situation is somewhat better than it seems, but in no case are we really getting 12 megapixels of unique RGB data from a 12-megapixel bayer sensor.
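
If anyone wants to check the counting, here's a quick Python sketch of that sliding 2x2 walk over a 6x6 RGGB mosaic (my own toy example, assuming an RGGB layout and giving every photosite a distinct value): 25 output pixels, but far fewer distinct source values behind each channel.

import numpy as np

h = w = 6
raw = np.arange(h * w, dtype=float).reshape(h, w)  # every sensel gets a unique value

colour = np.empty((h, w), dtype='<U1')             # Bayer colour of each sensel (RGGB)
colour[0::2, 0::2] = 'R'; colour[0::2, 1::2] = 'G'
colour[1::2, 0::2] = 'G'; colour[1::2, 1::2] = 'B'

reds, greens, blues = set(), set(), set()
outputs = 0
for y in range(h - 1):          # slide a 2x2 window one photosite at a time
    for x in range(w - 1):
        outputs += 1
        for dy in (0, 1):
            for dx in (0, 1):
                v, c = raw[y + dy, x + dx], colour[y + dy, x + dx]
                {'R': reds, 'G': greens, 'B': blues}[c].add(v)

print(outputs)                             # 25 output pixels
print(len(reds), len(greens), len(blues))  # 9 18 9 distinct source values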

And it's not just the uniqueness of the data at each pixel, it's the offset of different color photosites on the sensor that are combined into a single post-processing pixel in which the values all exist at the exact same spot. That can create color fringing or smearing, depending on the algorithm employed. Think about the moire examples I gave in my first post and now think about that happening in different colors.

The old Technicolor film process (#3) used a prism to split the light into 3 paths, where each path went through a red, green, or blue filter and then exposed a frame on 3 separate reels of black and white film. Since the light was split in analog, there is no pixel offset. This could be done today, but of course each color gets only 1/3 the amount of light (actually less due to losses), so this would only be suitable for very bright scenes. Although, as sensor technology continues to improve we might eventually be willing to live with a 2-stop loss of brightness in order to gain resolution. Then again, Technicolor cameras were very bulky, and so would a stills camera doing the same.

BTW, one advantage to Technicolor is that since the negatives were black and white, they don't fade like color negative and so reconstructing a color print can be done at any time since we know the color temp of each color filter.

mgrum Contributing Member • Posts: 741
Re: Pixel Binning vs. Skipping vs Oversampling

smorgasbord wrote:

I thought it obvious that one would never combine a red value with a blue value when creating a color image, sorry.

The original post seems to suggest that binning is a primitive demosaicing algorithm where you just combine 4 pixels into a colour pixel. There are examples of this usage, but it's more commonly used to refer to combining pixels at the sensor level.

mgrum Contributing Member • Posts: 741
Re: Pixel Binning vs. Skipping vs Oversampling

smorgasbord wrote:

The old Technicolor film process (#3) used a prism to split the light into 3 paths, where each path went through a red, green, or blue filter and then exposed a frame on 3 separate reels of black and white film. Since the light was split in analog, there is no pixel offset. This could be done today, but of course each color gets only 1/3 the amount of light (actually less due to losses), so this would only be suitable for very bright scenes. Although, as sensor technology continues to improve we might eventually be willing to live with a 2-stop loss of brightness in order to gain resolution. Then again, Technicolor cameras were very bulky, and so would a stills camera doing the same.

3 CCD video cameras were very popular for a period of time, when resolutions were much lower. As you increase the sensor resolution the penalty for Bayer sampling is reduced so it's unlikely we'll see any such techniques employed going forward. Likewise with Foveon style layered sensors. Oversampled Bayer is the way to go.

RicksAstro Veteran Member • Posts: 3,842
Re: Pixel Binning vs. Skipping vs Oversampling

mgrum wrote:

RicksAstro wrote:

While it may be commonly used that way, it's still not what the term meant originally. Binning circuitry was present in CCDs where 2x2 adjacent pixels were read in a single read action, making them act as a single pixel. Obviously then all color data (if a bayer array) is lost.

There are colour CCDs which HW bin 4 pixels of each colour, for example the chip in the PhaseOne IQ180.

I actually did just read about that once it was mentioned. Looked like super-specific circuitry designed to HW bin non-adjacent sensels. Very interesting. But I'll stand by my statement that this is very, very uncommon and not "more commonly refers to the process of combining the signals from sets of pixels of the same colour". 99% of pixel binning implementations have historically been adjacent pixels.

But like I said in my other post, this concept of pixel binning to minimize read noise is less relevant now with the super low read noise sensors.

ProfHankD Veteran Member • Posts: 6,287
From analog sensor sensels to digital image pixels...
1

I'll try to walk through the poorly used, and much abused, terminology. Sensels (sensor pixels) go through a lot on the way to becoming image pixels....

In the analog domain, it generally only makes sense to add (bin) values from sensels if they see the same spectrum. For monochrome sensors, analog binning of neighboring sensels is easy. However, most color cameras use a CFA (color filter array) to alter the spectral sensitivity of sensels in some repeating pattern, such as a Bayer RG over GB, so binning would add sensel values that are not adjacent, which is awkward. Fuji has sometimes used non-Bayer CFAs that place same-color sensels next to each other to allow binning, but that did not work out so well for them -- the X10 "white orbs" were probably due in part to leakage on those circuit paths. Analog binning is not common in color cameras.

The sensel values get read out and digitized, but it generally isn't necessary to read out all sensels. In CCDs, you can skip lines, but each line is an analog shift register, so you generally read every sensel in a line to read any one sensel in that line. Depending on the control logic, CMOS sensors can be much more flexible, allowing you to read out just the sensels within a rectangular ROI (region of interest) or even an arbitrary pattern of individual sensels. The skipping you hear about usually refers to the idea of literally skipping digitizing entire lines of sensels, but could refer to only looking at digitized values of pixels in any gapped pattern. For example, Sony's 960FPS mode for the RX100IV is probably skipping lines within a ROI. Digitizing all sensel values within a ROI (without skipping) also is a quite common trick for getting a higher framerate - this is usually what's happening when you see the view angle narrow for video as the framerate is increased (and for FF bodies in APS-C crop modes).
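
A tiny numpy sketch of those two readout patterns, purely as an illustration (real sensors do this in the readout logic, not in software, and the sizes here are arbitrary):

import numpy as np

frame = np.arange(24 * 32).reshape(24, 32)  # stand-in for the full sensel array

# Rectangular ROI readout: only a window of the sensor is digitized,
# which is why the view angle narrows in crop / high-framerate modes.
roi = frame[6:18, 8:24]

# Line skipping: whole rows are never digitized. Here I keep one pair of
# rows and skip the next pair, so the two-row Bayer phase is preserved.
keep = (np.arange(frame.shape[0]) % 4) < 2
line_skipped = frame[keep, :]

print(frame.shape, roi.shape, line_skipped.shape)  # (24, 32) (12, 16) (12, 32)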

Given the digitized data, which might or might not be partial in the above listed ways, there is an interpolation process that constructs RGB channel values for each pixel. The interpolation is computationally expensive to do well, so some cameras do much worse jobs than others, especially in high framerate modes. To save time producing fewer image pixels than there were sensels, digitized sensel data can be skipped or simply added (binned) rather than interpolated. For example, a 2x2 sensel RG over GB clump might be converted into a single RGB pixel by simply taking the R and B sensel values and averaging the two G values... which is sort-of binning in the digital domain. High-quality interpolation schemes don't simply copy or average sensel values, but try to recognize gradients and edges to synthesize the best possible missing color data for each pixel position; the R value for a pixel that wasn't perfectly aligned with a R sensel might be computed as a non-linear function of the values of the 4 (or more) nearest R sensel values. One normally thinks of interpolation as resulting in at least as many pixels as there were sensels, but the concept still applies when the number of pixels is less than the number of sensels. In general, an interpolation function that uses more sensels than it produces pixels can be called oversampling.

Many variations are possible. For example, it is possible to do nonlinear interpolation in the analog domain. However, I believe the above are the most common cases. As for Nyquist, well, none of this really deals with Nyquist limits directly. A 6000x4000 Bayer CFA sensor is only a Nyquist sampling of at most a 1500x1000 scene resolution, and you would need a very strong AA (anti-alias) filter to ensure no spatial frequencies beyond that get to the sensor. Fortunately, credible reconstructions of a scene can be made with much higher apparent resolution; being below a Nyquist sampling doesn't say you can't reconstruct, but simply that you can't be certain that the reconstruction is correct.

 ProfHankD's gear list:
Olympus TG-860 Canon PowerShot SX530 Sony a7R II Sony a6500 Canon EOS 5D Mark IV +30 more
JimKasson Forum Pro • Posts: 22,424
Re: Pixel Binning vs. Skipping vs Oversampling

smorgasbord wrote:

One related thing I didn't get into much detail during the first post was resolution and offsets. The pixels we deal with in post-production are single locations with combined red, green, and blue values. However, those values are at different points (photosites) on the sensor. Here's what the pattern looks like:

Sensor Bayer pattern

This would be a 36 pixel sensor. If you were to perform a simple demosaic, grabbing each 2x2 sub-array (which has 1 red, 1 blue, and 2 green pixels), and iterate over the sensor to produce a combined RGB value (sum the greens and divide by 2), you'd have an image that was 5x5 pixels, or 25 pixels total. That means 25 unique red values, 25 unique green values, and 25 unique blue values.

But, there aren't 25 unique values for any color - not even green! Each single photosite contributes to 4 final pixels. If we start at the top left corner, then move 1 photosite to the right, we see that the green and red values in the second column of the sensor contribute to both of those final pixels. The red photosite at (2,2) is going to be the same for 4 final pixels. So, you're not really getting detail there. It's better for green since there's so many of them, but there's still not 25 unique green values in the array. (BTW, the missing row and column in going from 6x6 to 5x5 don't matter for high resolution sensors).

My point here is that when we think of a sensor that's 4240 x 2384 pixels and we get an image that is that size as well, we need to remember that each color channel is not really at that resolution - that adjacent pixels are re-using photosite values.

Now, there's been a lot of work coming up with better algorithms than the 2x2 box filter I described (and some of them are included in link from my first post on binning/demosaicing), so the situation is somewhat better than it seems, but in no case are we really getting 12 megapixels of unique RGB data from a 12 megapixel bayer sensor.

And it's not just the uniqueness of the data at each pixel, it's the offset of different color photosites on the sensor that are combined into a single post-processing pixel in which the values all exist at the exact same spot. That can create color fringing or smearing, depending on the algorithm employed. Think about the moire examples I gave in my first post and now think about that happening in different colors.

Well and clearly put. However, I'd like to point out that the Bayer CFA pattern is not the only game in town:

http://blog.kasson.com/?p=5521

The old Technicolor film process (#3) used a prism to split the light into 3 paths, where each path went through a red, green, or blue filter and then exposed a frame on 3 separate reels of black and white film. Since the light was split in analog, there is no pixel offset. This could be done today, but of course each color gets only 1/3 the amount of light (actually less due to losses), so this would only be suitable for very bright scenes. Although, as sensor technology continues to improve we might eventually be willing to live with a 2-stop loss of brightness in order to gain resolution. Then again, Technicolor cameras were very bulky, and so would a stills camera doing the same.

BTW, one advantage to Technicolor is that since the negatives were black and white, they don't fade like color negative and so reconstructing a color print can be done at any time since we know the color temp of each color filter.

For proper reconstruction, you're a lot better off if you know the wavelength-by-wavelength transmission curve of each filter with 5 or 10 nm granularity, not just the color temperature.

Jim

 JimKasson's gear list:
Fujifilm GFX 50S Sony a9 Nikon D5 Sony a7R III Sony a7 III +5 more
joelR42 Regular Member • Posts: 183
Re: Pixel Binning vs. Skipping vs Oversampling

smorgasbord wrote:

DSLRs use pixel skipping (and the A7Rii does this for 4k video off the full frame sensor). This retains the shallow DOF abilities of the larger sensor and most importantly yields the same field of view as full-res stills, but does nothing for noise. Worse, the higher the resolution of the sensor the more pixels that are skipped, and so moire increases. And even if there were an anti-aliasing filter in front of the sensor, it would be tuned for high-res stills, not video, so it would be inadequate. Hence DSLR manufacturers have to do some amount of blurring to reduce moire, but not so much that it hurts the image - it's a fine line there. It's also why S35 crop mode for 4K looks better than FF mode for 4K on the A7Rii - no pixels are skipped, yet there's some oversampling.

What's tantalizing about the A7Rii's sensor is that it's actually big enough (7952x5304) to do proper 2X Nyquist level oversampling of a QFHD 4K image (3840x2160). If Sony can get a processor fast enough, they could produce astounding 4K video - but given the overheating that's already happening there would definitely need to be some processor upgrades.

Mostly right. The A7RII actually does not use line-skipping (at least when recording 4K) in full-frame mode. It uses pixel binning. So there are still some artifacts but fewer than there would be if they used skipping. This is why they chose 42MP (AKA 8K) instead of 50+ because it will bin cleanly. The Bayer pattern uses a 4X4 grid so for the best results when you must bin pixels you want a multiple of 4.

 joelR42's gear list:
Sony a9 Zeiss Batis 85mm F1.8 Sony FE 55mm F1.8 Sony FE 28mm F2 Sony FE 28-70mm F3.5-5.6 OSS +1 more
JimKasson Forum Pro • Posts: 22,424
Re: Pixel Binning vs. Skipping vs Oversampling

joelR42 wrote:

Mostly right. The A7RII actually does not use line-skipping (at least when recording 4K) in full-frame mode. It uses pixel binning. So there are still some artifacts but fewer than there would be if they used skipping. This is why they chose 42MP (AKA 8K) instead of 50+ because it will bin cleanly. The Bayer pattern uses a 4X4 grid so for the best results when you must bin pixels you want a multiple of 4.

Are you sure you don't mean a 2x2 grid? RGGB, for example.

Jim

 JimKasson's gear list:
Fujifilm GFX 50S Sony a9 Nikon D5 Sony a7R III Sony a7 III +5 more