# Purple flare: Causes and remedies

Started Mar 1, 2013 | Discussions thread
Re: Color-channel imbalance

Now I still do not get it. I do not get the asymmetric impact of the color-channel imbalance.

Let's assume 12 pixels: 6 are green, 3 are blue, 3 are red. Let's assume color-channel pollution: they all fail to capture incoming light that goes directly to an adjacent pixel (one of the 4 pixels with which they share a side, not a corner). So the 6 green --> 3 blue + 3 red (statistically). The 3 red --> 3 green pixels. The 3 blue --> 3 green pixels. So we end up exactly with what we started with. Why should the imbalance matter then?
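The bookkeeping behind this argument can be sketched in a few lines. This is just the poster's assumption made explicit: every pixel captures the same number of photons N and leaks the same fraction f to its edge-adjacent neighbours, and in a Bayer pattern every edge neighbour of a green pixel is red or blue and vice versa. The numbers N and f are made up for illustration.

```python
# The symmetric-exchange argument: 6 green, 3 red, 3 blue pixels,
# each capturing N photons and leaking fraction f to edge neighbours.
N = 1000   # photons per pixel (equal for all channels -- the key assumption)
f = 0.05   # leaked fraction

green = 6 * N
red   = 3 * N
blue  = 3 * N

# Green leaks only into red/blue (split evenly); red/blue leak only into green.
leak_g = f * green
leak_r = f * red
leak_b = f * blue

green2 = green - leak_g + leak_r + leak_b
red2   = red   - leak_r + leak_g / 2
blue2  = blue  - leak_b + leak_g / 2

print(green2, red2, blue2)  # -> 6000.0 3000.0 3000.0 (totals unchanged)
```

With equal per-pixel photon counts, what green gives away is exactly replaced by what red and blue give back, which is why the argument concludes the pollution should cancel out.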

Of course, if the light can strike a pixel adjacent by the corner, it is completely different.

But then it's a question of calculating the chances that the photon ends up in this or that adjacent pixel, given the incidence angle, to determine how the imbalance plays out (I think).

The imbalance occurs because the green channel is so much stronger than red and blue. For a white target in normal daylight, roughly twice as many photons manage to pass the green filter compared to the red and the blue filter. When some of these photons "escape", the pollutive stream from green to red/blue becomes roughly twice as strong as the pollutive stream in the opposite direction. On top of that you have the impact of the "amplification" applied to the red and blue channels in the course of image processing, to help them reach parity with green in spite of the significantly smaller number of photons they can be expected to capture. In practice, this means that even a fairly limited amount of pollution can generate a rather strong color shift towards purple.
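Redoing the earlier 12-pixel bookkeeping with this correction makes the asymmetry visible. The numbers here are illustrative assumptions, not measured values: each green pixel captures about twice the photons of a red/blue one, a small fraction f leaks to edge neighbours, and the white-balance gains are the ~2x factors that would make an unpolluted white come out neutral.

```python
# Same 12-pixel patch (6 green, 3 red, 3 blue), but green pixels now
# catch ~2x the photons each, and red/blue are amplified ~2x in
# processing (white balance) to reach parity with green.
N = 1000          # photons per red/blue pixel (illustrative)
f = 0.05          # fraction leaked to edge neighbours (illustrative)

green = 6 * 2 * N     # green pixels each capture about twice as much
red   = 3 * N
blue  = 3 * N

# Green leaks only to red/blue (split evenly); red/blue leak only to green.
green2 = green - f * green + f * red + f * blue
red2   = red   - f * red   + f * green / 2
blue2  = blue  - f * blue  + f * green / 2

# White-balance gains chosen so an unpolluted white would be neutral.
wb = 2.0
green_pp = green2 / 6          # per-pixel green after leakage
red_pp   = wb * red2  / 3      # per-pixel red after leakage + WB gain
blue_pp  = wb * blue2 / 3

print(green_pp, red_pp, blue_pp)  # -> 1950.0 2100.0 2100.0
```

Red and blue end up above green even though only 5% of the light leaked: green loses more photons than it gets back, and the white-balance gain then doubles the excess that landed in red and blue. Equal red and blue above green is exactly a shift towards purple/magenta.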

Hum...

You said in your explanation post: "Second, the number of photons recorded by the green pixels when the sensor is exposed to ordinary white/gray daylight is approximately twice as large as that recorded by a red or blue one."

I had thought you meant it as a consequence of having twice as many green pixels. Actually you are saying that not only do we have as many green pixels as red & blue pixels together, but each of them individually receives more light, I assume due to (i) the spectrum of typical incoming light (sunlight or bulb/tungsten) and (ii) the wavelength bandwidth of each channel (could not find any info on how it is implemented in the Bayer array).

Thanks, it makes sense to me.
