The end of Bayer?

With Canon's new announcements, and the 1DM2N being more of a running update, I can't help getting a bit excited that we may have seen the last high-end Canon announcements that will house Bayer technology.

To think that single-photosite RGB is coming, and AA filtering is nearing an end, is truly enticing.

Sorry to break up the 5D fervor.
What announcement are we talking about?

Some references to this non-CFA technology announcement from Canon?

Roland
 
It seems to be a very small minority of people who think that only
Bayer-CFA captures need AA filters. Concentric-RGB pixels benefit
from them too. Without them, spatial detail is misplaced, because
multiple points of light are snapped to the same grid point. A
narrow point of light drifting across a greyscale sensor that is
properly AA-filtered will always have changes to the image, even
for tiny sub-pixel movements. Unfiltered, the point will
illuminate a single pixel uniformly, and then quickly jump to the
next pixel (without microlenses, it will even black out between
pixels).
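
A minimal 1-D sketch of that drifting-point argument, assuming numpy; the 0.7-pixel Gaussian is just a stand-in for an optical AA filter, and the positions and widths are made up:

```python
import numpy as np

PIXELS = 8      # pixels in the row
SUB = 100       # fine sub-samples per pixel (the "analog" scene grid)
grid = np.arange(PIXELS * SUB) / SUB        # position in pixel units

def pixel_values(scene):
    """Integrate the fine-grid scene over each pixel aperture (box)."""
    return scene.reshape(PIXELS, SUB).sum(axis=1)

def point_scene(pos, blur_sigma=None):
    """A narrow point of light at 'pos' (pixel units), optionally AA-blurred."""
    scene = np.zeros_like(grid)
    scene[int(pos * SUB)] = 1.0
    if blur_sigma:                          # crude Gaussian stand-in for an AA filter
        x = np.arange(-3 * SUB, 3 * SUB + 1) / SUB
        kernel = np.exp(-0.5 * (x / blur_sigma) ** 2)
        scene = np.convolve(scene, kernel / kernel.sum(), mode="same")
    return scene

for pos in (3.80, 3.95, 4.05, 4.20):        # sub-pixel drift across a pixel boundary
    raw = pixel_values(point_scene(pos))
    aa = pixel_values(point_scene(pos, blur_sigma=0.7))
    print(f"pos={pos:.2f}  no-AA={np.round(raw[2:6], 2)}  AA={np.round(aa[2:6], 2)}")
# Without the filter the point lights exactly one pixel and then jumps to the
# next; with the filter the neighbouring pixel values change smoothly with
# every sub-pixel movement.
```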

In a static image, everything is displaced without an AA filter
(unless the focused image isn't even sharp to begin with).
There is theory and there is the real stuff.

To get a super duper picture with high resolution you should really use, let's say, 10 times oversampling (i.e. 100 times the number of pixels you need) and then use a digital filter to get a sharp and very accurate picture. You could start with 1 Gigapixel and then end up with the best 10 Mpixels you have ever seen.

That's how you do it when recording music. And it would be extremely nice for digital pictures also - if it were possible. But now we all know that we are not there yet :)
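
A rough 1-D sketch of that oversample-then-filter-then-decimate idea, assuming numpy; the signal, the 10x factor, and the windowed-sinc filter are just illustrative choices:

```python
import numpy as np

OVER = 10                                    # 10x linear oversampling
n = np.arange(4000)                          # 1-D stand-in for the oversampled capture
scene = np.sin(2 * np.pi * 0.004 * n) + 0.5 * np.sin(2 * np.pi * 0.09 * n)
# The 0.09 cycles/sample component lies above the output Nyquist (0.05) and
# would alias if we simply threw samples away.

def windowed_sinc(cutoff, taps=201):
    """Low-pass FIR with the given cutoff (cycles/sample), Hamming-windowed."""
    m = np.arange(taps) - (taps - 1) / 2
    return 2 * cutoff * np.sinc(2 * cutoff * m) * np.hamming(taps)

h = windowed_sinc(cutoff=0.5 / OVER)         # pass only what the output grid can hold
filtered = np.convolve(scene, h, mode="same")

good = filtered[::OVER]                      # filter first, then keep every 10th sample
naive = scene[::OVER]                        # no filter: the 0.09 component folds down
                                             # to a spurious low frequency
```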

So - what have we got?

Ahhhh ... for APS-sized sensors we have 8 Mpixel Bayer. Rather nice - but we really don't want to spend pixels on oversampling and lose resolution. So - let us try an optical anti-alias filter instead. Now, the CFA pattern does not have a fixed sampling frequency - it actually has two. So - which frequency shall the anti-alias filter use as its Nyquist frequency? If we use the higher one, the filter will not be steep enough, and if we use the lower one, the picture will be unnecessarily soft. Hard choice - I assume that the actual filters are a compromise.
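
A small back-of-the-envelope illustration of the two sampling frequencies, assuming a hypothetical 6.4 micron pixel pitch (not any particular camera):

```python
pitch_um = 6.4                        # hypothetical pixel pitch in microns
pitch_mm = pitch_um / 1000.0

full_nyquist = 1 / (2 * pitch_mm)     # the full (luminance) grid: every pixel
redblue_nyquist = 1 / (4 * pitch_mm)  # R and B are sampled only every 2nd pixel each way

print(f"full grid Nyquist: {full_nyquist:.0f} cycles/mm")
print(f"red/blue Nyquist : {redblue_nyquist:.0f} cycles/mm")
# Green sits in between: it is sampled on a quincunx, so its limit matches the
# full grid horizontally/vertically but is lower on the diagonals. A filter cut
# at the red/blue limit throws away luminance resolution; one cut at the full
# grid limit lets colour moire through.
```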

But that is not all - really steep anti-alias filters cannot be physically made with simple optical means. What is used is a birefringent lithium niobate (LiNbO3) plate that produces doubled images on the sensor. That is a rather weak filter - nothing even near a perfect anti-alias filter.
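
A sketch of why the two-spot birefringent filter is so weak, assuming numpy; it just evaluates the |cos(pi f p)| response of a one-pixel split against an ideal brick-wall filter:

```python
import numpy as np

p = 1.0                                  # pixel pitch (arbitrary units)
nyquist = 1 / (2 * p)
f = np.linspace(0, 2 * nyquist, 9)       # spatial frequencies from 0 to 2x Nyquist

two_spot = np.abs(np.cos(np.pi * f * p)) # response of a one-pixel birefringent split
ideal = (f <= nyquist).astype(float)     # a perfect anti-alias filter

for fi, b, i in zip(f, two_spot, ideal):
    print(f"f = {fi / nyquist:4.2f} x Nyquist   two-spot: {b:4.2f}   ideal: {i:1.0f}")
# The two-spot filter already softens real detail well below Nyquist, yet still
# passes plenty of energy above it - so you get both softness and residual aliasing.
```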

So - the anti-alias filters used for Bayer sensors are rather poor.

For Foveon sensors, Sigma (or is it Foveon?) has chosen not to use any anti-alias filter at all on the SD10. The filtering you get from the microlenses is supposed to be enough.

I have looked at lots of pictures on the net. And .. the verdict is that Foveon/Sigma seems to be right. You find just as many aliasing problems in Bayer pictures as in SD10 pictures. And the aliasing for Bayer pictures looks worse as it is colored.

Now - if Foveon/Sigma had more pixels - then they might make an even better camera by introducing an anti alias filter. But at only 3 Mpixels - I think that the choice to omit the filter is a wise one.

Roland
 
What announcement are we talking about?

Some references to this non-CFA technology announcement from Canon?
OK - I have found some references to a method for changing filtering on the fly - some kind of patent that Kodak possesses. Now - I assume that Kodak has thousands of patents concerning lots of things they are never going to implement.

So - is there any announcement that they really are going to use this technology?

Roland
 
Single photosite RGB will continue to offer no improvement in the
quality of images, because humans too have separate RGB sensors.
This is not really so. A technically better way of recording images is better even if human sight has its faults.
Not in this case. Try looking at the red/blue test charts the Foveon types like so much: they're almost impossible to "parse". Foveon gives you something that we can't see. At a cost in noise, color accuracy, and resolution.
Some years ago (100 I think) some photographers concluded that fully sharp images were not necessary because humans see only a small part sharp. So - they used lenses that were very badly corrected - and only the central part was sharp. Of course - it was the emperor's new clothes - everyone could see that the pictures looked more artistic than photographic. So could of course those photographers - but they insisted nevertheless.
Again, people are very happy with the Bayer images, making insanely large posters from 8MP images that would be a grainy mushy mess from 35mm film.

A theory that claims that Bayer is problematic just doesn't jibe with observed reality.

--
David J. Littleboy
Tokyo, Japan
 
Peek at the BetterLight LF backs without AA filtering. To head off the onslaught: I am aware that they are scanning backs, with all their many significant shortcomings, but all the same it is interesting to see RGB and no AA in action.
Some of the MF digital back types are finding themselves to be seriously unhappy with the Moire (I can't see how a pro could use a camera that randomly backstabs you). Others like the contrast. The arguments are heated.

--
David J. Littleboy
Tokyo, Japan
 
Bayer seems like a near-optimal solution to the problem: because it is designed around the human visual system, you can capture more human-readable resolution with much less on-chip circuitry.

To be a good sampling device you also need band-limiting AA filtering. AA serves two purposes in a well-designed Bayer system. With properly chosen cut-offs you get a situation where just about all colour and luminance moire is eliminated, yet the retained resolution is near 80% of the value you would get if a full-colour array were used without AA.

Pretty amazing really. The net result is that a Bayer sensor with 2/3 the active elements of a full-colour array generates the same resolution, has higher light-gathering efficiency, and is also properly cut-off sampled. Readout speed is also faster because less sensor data needs to be relayed.

Bayer will be with us until the speed and circuitry issues are irrelevant and the sensitivity issue is solved. Quite some time yet, I would imagine.
 
So - is there any announcement that they really are going to use
this technology?
No, and I did not mean to imply that there was.

That said, Canon would be unlikely to announce such an intention. It would certainly not help fence-sitters to purchase/upgrade in the next year. The patents are there, however, and have been for some time. I would counter the comments above suggesting that these are unlikely to be used on a still-camera sensor. These patents are very relevant to still cameras.
 
The Foveon technology is in its infancy.

I thought this thread was about the potential of these technologies which by definition must be a theoretical discussion.

--
Kent Dooley
 
You can't correct aliasing in software - it's too late. Sorry.
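
A tiny numeric illustration of why, assuming numpy: two different scene frequencies produce identical samples, so no amount of software can tell them apart afterwards.

```python
import numpy as np

fs = 100.0                      # samples per unit length (the sensor's pitch)
x = np.arange(32) / fs

low = np.cos(2 * np.pi * 10.0 * x)     # 10 cycles/unit: below Nyquist (50)
high = np.cos(2 * np.pi * 110.0 * x)   # 110 cycles/unit: well above Nyquist

print(np.allclose(low, high))          # True - the two scenes give identical samples
```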

Cesare
 
I have looked at lots of pictures on the net. And .. the verdict is that Foveon/Sigma seems to be right.
I have looked, too, and the Sigma images are disturbingly gridded. They don't look like images to me; they look like mosaics.
You find just as many aliasing problems in Bayer pictures as in SD10 pictures.
Every SD10 image taken with sharp optics is visibly aliased (unless there is no high-frequency content in the analog scene). Bayers only alias color, and do so mildly, under special circumstances.
And the aliasing
for Bayer pictures looks worse as it is colored.
I've seen it in only about three images, one of which I generated myself on purpose (a color LCD screen shot at a calculated distance to induce color aliasing).
Now - if Foveon/Sigma had more pixels - then they might make an
even better camera by introducing an anti alias filter. But at only
3 Mpixels - I think that the choice to omit the filter is a wise
one.
The other side to this coin is that at only 3.43MP, the pixels are more easily seen.

--
John
 
A Foveon sensor (and also all other sensors that sample all colors
at all sites) can get away without an anti alias filter - a Bayer
(and all other CFA sensors) cannot.
Only problem is, both Kodak and Leica have proved otherwise. There are many who praise both those cameras for their lack of an optical low-pass filter. The Kodak definitely blew apart in spectacular fashion because of this in certain circumstances. There is an excellent thread over on Fred Miranda's site where Guy Mancuso is sharing his experience with the Leica DMR. So far, some images show signs of aliasing in the form of noticeable jaggies, but we have yet to see a bad color moire situation. I think there was one image from a preproduction DMR that demonstrated serious color moire.

Still, the DMR looks to be a huge hit among the Leica crowd. So I'd say you can definitely "get away" with leaving optical low pass filtering off a Bayer filtered sensor. Probably more and more so as resolution increases.
A Bayer sensor without an anti-alias filter has problems with colored aliasing if the lens is too sharp. It looks awful.
Most Bayer filtered cameras with optical low pass filters can be made to do the same thing to a lesser extent if you try hard.
The SD10 on the other hand has only microlenses. The microlenses create a box filter that removes the higher frequencies. It does not filter away the frequencies at the Nyquist frequency. But - it does not really matter, as the pictures look good nevertheless. And that is what counts.
Comes down to what artifacts people are willing to accept. My experience indicates that there is a vast range of tastes in this: some will accept color moire in exchange for increased sharpness, and many will happily ignore jaggies and monochrome moire. And if you listen to the anti-antialiasing-filter crowd, apparently most Canon shooters are happy to live with soft images in exchange for a lack of moire and jaggies. (And a different group denigrates Canon for its plastic look in exchange for low noise. None of this being backed up by anything approaching real measurements.)

-Z-
 
at some point, just doing it right makes more sense.
Made me laugh...

So, yeah, the low-pass is required for using a regular sampling grid. Got it. What about an imaging system that was based on a jittered grid (or perhaps an actual Poisson-disc random sampling)? Even with no AA, the jittered sampling turns too-high-frequency data into high-frequency noise rather than aliasing it down to noticeable low frequencies. Aliasing isn't so bad most of the time, and I'd rather have the camera that backstabs me with a little high-frequency chroma noise (which your eye won't pick up anyway) than with horrible, large, purple-green blobs.
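
A rough 1-D sketch of the idea, assuming numpy; the frequencies and the half-pixel jitter range are made-up values:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
pitch = 1.0
scene_freq = 0.9 / pitch                 # well above the Nyquist limit of 0.5/pitch

regular = np.arange(N) * pitch
jittered = regular + rng.uniform(-0.5, 0.5, N) * pitch   # +/- half-pixel jitter

reg = np.sin(2 * np.pi * scene_freq * regular)
jit = np.sin(2 * np.pi * scene_freq * jittered)

spec_reg = np.abs(np.fft.rfft(reg))
spec_jit = np.abs(np.fft.rfft(jit))
# On the regular grid the 0.9 cycles/pixel detail folds to a clean 0.1 cycles/pixel
# pattern (one dominant spectral peak); with jitter the energy is smeared into
# broadband, noise-like content instead.
print("regular grid  peak/mean:", round(spec_reg.max() / spec_reg.mean(), 1))
print("jittered grid peak/mean:", round(spec_jit.max() / spec_jit.mean(), 1))
```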

When I run the world, I'll give it a try. What do you think?

Thanks for the good comments,
Dave
 
But what does low-pass filtering have to do with Bayer interpolation?

To me these things are completely independent. One is to prevent
aliasing, the other is to overcome the effect that current sensors
are in fact color blind.

--
Didier
If you ignore the engineering issues like I do and look at it practically, they are independent at first. What links them together, to me, are issues like moire. Most people don't realise how much moire they would get without the filter. Leaf, for example, lets you use its own moire-removal tool that works on their HDR files. Hundreds of images with moire to remove locally would be a pain in the neck. I speak from experience here.
 
Bayer seems like a near-optimal solution to the problem: because it is designed around the human visual system, you can capture more human-readable resolution with much less on-chip circuitry.

To be a good sampling device you also need band-limiting AA filtering. AA serves two purposes in a well-designed Bayer system. With properly chosen cut-offs you get a situation where just about all colour and luminance moire is eliminated, yet the retained resolution is near 80% of the value you would get if a full-colour array were used without AA.

Pretty amazing really. The net result is that a Bayer sensor with 2/3 the active elements of a full-colour array generates the same resolution, has higher light-gathering efficiency, and is also properly cut-off sampled. Readout speed is also faster because less sensor data needs to be relayed.

Bayer will be with us until the speed and circuitry issues are irrelevant and the sensitivity issue is solved. Quite some time yet, I would imagine.
The 3.4 Mpixel Foveon sensor is often said to be as sharp as a 6 Mpixel Bayer sensor. A 3.4 Mpixel Foveon has about 10 million sensels.

Now - this comparison is not 100% fair.

The Foveon lacks an anti-alias filter, which is (at least theoretically) a problem, and the Bayer sensor has an imperfect anti-alias filter, leading to color aliasing in practice.

But - the sensor economy is better in Bayer - no doubt about it.

Roland
 
Interesting idea!

As a sampling dude, I can tell you that it won't be random sampling if the sampling intervals are fixed and never change from image to image. Eventually you hit areas where the pseudo-random pattern of sensors would cause aliasing with some particular images. So every now and then, you'd get a big 'gotcha' because the tiles of a roof just happen to line up with a patch of 'randomly' spaced sensors.

Also, my hunch is we'd need much higher densities of pixel sites to pull off your idea.

Tom
 
Your standpoint is exactly my standpoint from a few months ago. Then someone here (I forget who) told me I was in error. Stubborn as I am, I went into discussions defending my belief.

OK - I still think there are some merits to my former standpoint - but I now have a much harder time defending it.

Let me explain why.

I did (just as I assume you do) look at pictures at 200 or 400% to really see the pixels and the mosaic-like Foveon pictures. Ouch - that hurts!

But - this is not a fair comparison. Foveon pictures have high contrast between nearby pixels and Bayer pictures do not. So - just as you said - Foveon pictures look like mosaics and not like photographs. The Bayer, on the other hand (being unsharp at pixel level), looks more like a photograph.

So - here comes the twist.

You are not supposed to upscale pictures without doing some low-pass filtering. That is not correct according to the sampling theorem. So - surprise - if you do low-pass filtering before upscaling - then the difference in photographic quality disappears. Both get blurry and both look good.

I have done this experiment lots of times now. And I assure you - I cannot any longer "prove" that Foveon pictures are inferior.

There is of course another twist here. How many Foveon owners know that they have to low-pass their pictures when upscaling for printing? Not many, I assume. So - they miss out on the full potential of their cameras - if they don't like mosaics, that is :)
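
A sketch of the kind of experiment described above, assuming Pillow is available; the file name, blur radius, and 3x factor are all hypothetical:

```python
from PIL import Image, ImageFilter

img = Image.open("sd10_shot.tif")          # hypothetical SD10 file
w, h = img.size

# Naive 3x upscale: nearest neighbour just magnifies the hard per-pixel "mosaic".
mosaic = img.resize((3 * w, 3 * h), resample=Image.Resampling.NEAREST)

# Sampling-theorem-friendly version: a mild low-pass first, then a smooth
# interpolator. The 0.7-pixel radius is a guess, not a derived value.
lowpassed = img.filter(ImageFilter.GaussianBlur(radius=0.7))
smooth = lowpassed.resize((3 * w, 3 * h), resample=Image.Resampling.BICUBIC)

mosaic.save("upscaled_nearest.tif")
smooth.save("upscaled_lowpass_bicubic.tif")
```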

Roland
 
Again, people are very happy with the Bayer images, making insanely
large posters from 8MP images that would be a grainy mushy mess
from 35mm film.

A theory that claims that Bayer is problematic just doesn't jibe with observed reality.
Oh yes - I am very happy with my Bayer cameras. Brilliant solution really.

But there are problems. Where I live we have a large house with lots of windows. This house I can only take photos of with my less sharp lenses. If I try a sharp lens I get a very colorful result, as all the blinds in the windows create aliasing. Lots of it. BTW - I have a Pentax *ist DS.

Unfortunately I have no Sigma camera. It would be nice to try with a sharp Sigma lens on that "test pattern".

Roland
 
Not me. If you have to measure all three colors at every point, you
need three times the data for essentially no improvement in image
quality.
You avoid color aliasing. That's a big improvement for those pictures where it is evident. You also avoid the plastic filters, whose long-term stability we know nothing about.
Even worse, you need to store three times as much charge at each
sensel site, increasing noise radically compared to using that
whole charge storage capacity (well depth) for one color.
Entirely correct. And this might be the main reason why the "normal" sensor makers are not interested in Foveon. It is not (technically) economical.
Foveon is a horrendously bad idea.
No - not really.

It might be an idea that cannot be economically defended. But - the pictures I see prove that it really is a more than reasonable approach.

Roland
 
The Foveon technology is in its infancy.
I thought this thread was about the potential of these technologies
which by definition must be a theoretical discussion.
This "throwing away 2/3" argument is not as easy as it first seems.

First - you don't lose any sensitivity by using fewer sensing elements - just as you don't lose any sensitivity by halving the size of the sensor. What you do by only detecting one color at each site is lose information - not decrease sensitivity. Think about it - and you will see why.

Second - the Bayer interpolation algorithms are rather clever at computing the non-measured data. So the data (faulty as it may be) results in a rather good picture - without any decrease in sensitivity.

So - the net result is that it is almost impossible to say what you lose using Bayer. You cannot give a figure. And it absolutely does not lose 2/3 of the sensitivity.

And actually - the Foveon situation is similar.

The Foveon layers don't detect Red, Green and Blue as we want them for constructing color images. They detect something entirely different. So - we have to compute the end result. In this computation the noise from the three layers adds up nicely - but the signal does not add up as nicely. So - the signal-to-noise ratio decreases - and in practice that decreases the sensitivity.

Not only that - the analogue output signals from the chip are already computed as differences between charges. So - even there the noise adds up faster than the signal.
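
A toy illustration of that noise argument, assuming numpy; the 3x3 matrix below is invented purely for illustration - it is not the real Foveon calibration - but it has the typical shape of large, mixed-sign coefficients:

```python
import numpy as np

# Invented, illustrative layer-to-RGB matrix: rows with large, mixed-sign
# entries that each sum to 1.0.
layer_to_rgb = np.array([
    [ 1.8, -0.9,  0.1],   # R is built from big differences between layer signals
    [-0.5,  2.0, -0.5],   # G
    [ 0.1, -0.8,  1.7],   # B
])

# If each layer carries independent noise of standard deviation sigma, the noise
# in an output channel scales with the root-sum-square of its row, while the
# wanted signal scales roughly with the (much smaller) plain sum of the row.
noise_gain = np.sqrt((layer_to_rgb ** 2).sum(axis=1))
signal_gain = layer_to_rgb.sum(axis=1)

for ch, n, s in zip("RGB", noise_gain, signal_gain):
    print(f"{ch}: noise x{n:.2f}, signal x{s:.2f}, SNR x{s / n:.2f}")
# A sensor that measured R, G and B directly would have a noise gain close to 1.0,
# so the mixed-sign matrix costs a good chunk of signal-to-noise ratio.
```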

Roland
 
The 3.4 Mpixel Foveon sensor is often said to be as sharp as a 6 Mpixel Bayer sensor. A 3.4 Mpixel Foveon has about 10 million sensels.
Now - this comparison is not 100% fair.
It's not even 100% true. The output image may sometimes appear as sharp, but if you take a 6MP Bayer image, upsample it 200% or 300%, and then downsize it to 2268*1512 with the nearest-neighbor algorithm, then it looks similar to a Sigma/Foveon image. The "sharpness" of the Sigma images is an artifact of the sampling method, not sharpness composed of "image detail".
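
A sketch of that recipe, assuming a reasonably recent Pillow; the file name and the 3x intermediate upsample are hypothetical:

```python
from PIL import Image

bayer = Image.open("bayer_6mp.tif")        # hypothetical 3008x2000 Bayer-camera image
w, h = bayer.size

upsampled = bayer.resize((3 * w, 3 * h), resample=Image.Resampling.BICUBIC)
foveon_look = upsampled.resize((2268, 1512), resample=Image.Resampling.NEAREST)
foveon_look.save("bayer_with_foveon_look.tif")
# Every output pixel is now a single hard sample rather than an interpolated
# value, which mimics the pixel-level "crispness" of the SD10 files.
```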

--
John
 
