Bayer sensor vs. Foveon...

if Foveon was SOOOOO good, then why did Canon and Nikon not beat each other's brains out to license it?
----------- Exactly. And now they're so proprietarily entrenched, is there even a chance they'd consider it? Actually, we're all aware of the advantage of Canon making its own sensors, so they're probably far less likely to jump on a new technology... and/or far more able to if they really saw the need. Of course there are threads all over DPR speculating whether Sony will pull the plug on Nikon's sensor supply if they get seriously competitive in the dSLR market... so there's a fallback for Nikon, if the need arises and the hype supports it. It'll be interesting to see where it all goes... of course, it could turn out to be the best thing since sliced bread, but still suffer Betamax/Mac/dbx syndrome.
 
Bayer interpolation can probably do a reasonable job when the light
of all the colors lines up exactly, but when they are shifted by one
pixel width or even less it's obviously not possible anymore
Huh? No discrete sampler can do a reasonable job when things "line up exactly". That's why everyone always starts talking about this Nyquist fellow and how you have to sample at 2x the rate to be sure that you've really seen something.
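
To make the Nyquist point concrete, here's a minimal sketch (Python/NumPy, with made-up frequencies): a 9 Hz sine sampled at only 10 samples per second lands on exactly the same sample values as a 1 Hz sine (up to a sign flip), so the sampler literally cannot tell the two apart.

import numpy as np

fs = 10.0                          # sampling rate: 10 samples per second
t = np.arange(0, 1, 1 / fs)        # one second of sample instants
high = np.sin(2 * np.pi * 9 * t)   # 9 Hz signal, above Nyquist (fs/2 = 5 Hz)
low = np.sin(2 * np.pi * 1 * t)    # 1 Hz signal, safely below Nyquist
print(np.allclose(high, -low))     # True: the sampled values coincide
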
you'll just get 1/4th resolution for red and blue and 1/2 for green.
You are assuming that the green, red, and blue are spectrally pure and narrow so that the adjacent greens are not stimulated at all. Secondly, this is a lot like how human vision works. Our own perception is not as good at blue/red detail as we are with green.
but an ideal sensor that
takes all colors per pixel across the whole pixel area would have
2-4 times the resolution of bayer and without the artifacts.
I'm not sure where you are getting the 4x, except for the pathological case of adjacent pure red/non-red or blue/non-blue. If that describes things that are important for you to shoot, then current mosaics may not be a good fit OR you need a mosaic sensor that samples at a high enough rate to make up the difference. Well, surprise, that's what you see with the current crop of cameras.
A bayer sensor can also never capture the full pixel area which
means there will always be aliasing unless they blur it with a
filter.
You can leave out the word "bayer" in that sentence. No sensor can capture 100% of the area (although microlenses help). The primary difference is that mosaics will have the moire show up as color artifacts.
Another big improvement that can come with full color pixels is
better sensitivity, bayer is currently throwing away two thirds of
the light.
Not all full color pixels are created equal. X3 pixels "throw away" the light trying to compensate for the large color overlaps.

--
Erik
 
Look at the actual resolution from the dpreview test
http://www.dpreview.com/reviews/sigmasd10/page18.asp
--the 3.4 MP SD10 has about the same resolution as the 6 MP 10D
Well... I've seen and studied that review 100 times. It appears to
me that the charts show obviously better resolution from the Sigma
SD9 & 10 than from either of the other two cameras. Not sure what
you're pointing out or pointing to.
You are looking at artifacts and thinking you are seeing resolution.

As before:

(1) Up to about 2/3 of the Nyquist frequency, the Foveon resolves the same as Bayer. In this area, both cameras are coming up with reasonable representations of the subject matter.

(2) From 2/3 of the Nyquist frequency to the Nyquist frequency, Bayer cameras gradually go gray and Foveon cameras show strong contrast and strong Moire patterns.

As you might expect, IMHO, Bayer is doing the right thing here. In engineering terms, the pattern of concern is a square wave pattern, and such patterns require frequency components at odd multiples of the basic frequency to represent. The mathematics here only guarantees sine waves, not square waves, up to the Nyquist frequency.
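
For reference, the Fourier series of an ideal square wave of frequency f is (4/pi) * [sin(2*pi*f*t) + sin(3*2*pi*f*t)/3 + sin(5*2*pi*f*t)/5 + ...], so once f passes one third of the Nyquist frequency, even the first overtone needed for the sharp edges is already unrepresentable.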

(3) At the Nyquist frequency, the Foveon shows a pattern with the same number of lines, but at the wrong position in the image.

As you might expect, IMHO, this is silly, since it leads to stairstepping.

(4) Above the Nyquist frequency, the Foveon generates a random pattern. But since it's a high-contrast random pattern, people see it as resolving patterns above the Nyquist frequency.

As you might expect, IMHO, this is completely ridiculous and insane.

Note that the charts are also bogus because they are largely lined up with the pixel matrix, which is why they seem able to resolve patterns at the Nyquist frequency; those would turn into Moire and jaggies at a slight angle to the matrix.

--
David J. Littleboy
Tokyo, Japan
 
Bayer interpolation can probably do a reasonable job when the light
of all the colors lines up exactly, but when they are shifted by one
pixel width or even less it's obviously not possible anymore
Huh? No discrete sampler can do a reasonable job when things "line
up exactly". That's why every one always starts talking about this
Nyquist fellow and how you have to sample at 2x the rate to be sure
that you've really seen something.
I was talking about the fact that light of different colors is refracted differently, so while at the center of the image you can get a higher resolution with bayer interpolation, further out to the sides, where the different colors don't align perfectly, it doesn't work anymore.
you'll just get 1/4th resolution for red and blue and 1/2 for green.
You are assuming that the green, red, and blue are spectrally pure
and narrow so that the adjacent greens are not stimulated at all.
Secondly, this is a lot like how human vision works. Our own
perception is not as good at blue/red detail as we are with green.
That doesn't mean you shouldn't capture the image more accurately; even for luminance resolution, full color pixels can be better theoretically.
but an ideal sensor that
takes all colors per pixel across the whole pixel area would have
2-4 times the resolution of bayer and without the artifacts.
I'm not sure where you are getting the 4x, except for the
pathological case of adjacent pure red/non-red or blue/non-blue. If
that describes things that are important for you to shoot, then
current mosaics may not be a good fit OR you need a mosaic sensor
that samples at a high enough rate to make up the difference.
Well, surprise, that's what you see with the current crop of
cameras.
A bayer sensor can also never capture the full pixel area which
means there will always be aliasing unless they blur it with a
filter.
You can leave out the word "bayer" in that sentence. No sensor can
capture 100% of the area (although microlenses help). The primary
difference is that mosaics will have the moire show up as color
artifacts.
It's not theoretically impossible to have a sensor that captures 100% (or 99%). There is no reason the circuitry has to be at the surface of the sensor; that is just a limitation of current technology.
Another big improvement that can come with full color pixels is
better sensitivity, bayer is currently throwing away two thirds of
the light.
Not all full color pixels are created equal. X3 pixels "throw away"
the light trying to compensate for the large color overlaps.
This is just a problem with the current technology; I know Foveon is flawed.
 
Bayer interpolation can probably do a reasonable job when the light
of all the colors lines up exactly, but when they are shifted by one
pixel width or even less it's obviously not possible anymore and
you'll just get 1/4th resolution for red and blue and 1/2 for green.
This is not true. You will still get reasonably accurate luminance (black & white) information, because there is an Anti Alias filter in front of the sensor. This filter slightly softens the image so that there cannot be points or lines that are exactly one pixel wide. This is the reason why a Foveon sensor of 3x3 megapixels and without an AA filter can produce a somewhat sharper image than a Bayer 3 megapixel sensor with an AA filter.
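
As a toy illustration of what the AA filter accomplishes (a 1-D Python/NumPy stand-in for the real two-dimensional birefringent filter, with an invented blur kernel):

import numpy as np

line = np.zeros(9)
line[4] = 1.0                          # a line exactly one pixel wide
kernel = np.array([0.25, 0.5, 0.25])   # toy AA blur: spread light to neighbours
blurred = np.convolve(line, kernel, mode="same")
print(blurred)   # the line now spans ~3 pixels, so sensor sites of every
                 # colour in its path receive a share of its light
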
And if color details change at pixel size it's even more obvious
interpolation can't work.
I have a surprise for you: with very few exceptions, all JPEG images are encoded with 4:2:0 colour coding, so you have only 1/4 resolution for colour in nearly all your final images. This encoding has been chosen because it saves space and because colour vision is less accurate than luminance (black & white) vision. As it happens, this matches the colour capabilities of Bayer sensors quite nicely. A good piece of software can, if necessary, do colour sharpening for JPEGs using the luminance information (like all high-quality NTSC and PAL TV receivers do for the analog RF TV signal, which has very low colour resolution).
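
A rough sketch of what 4:2:0 encoding amounts to (Python/NumPy, using the BT.601 luma/chroma weights; real JPEG encoders do DCT compression on top of this):

import numpy as np

def to_420(rgb):
    # rgb: (H, W, 3) float array with even H and W.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b       # full-resolution luma
    cb = 0.564 * (b - y)                        # blue-difference chroma
    cr = 0.713 * (r - y)                        # red-difference chroma
    h, w = y.shape
    # Average each 2x2 block: chroma is kept at 1/4 the sample count.
    cb = cb.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    cr = cr.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return y, cb, cr
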
I don't know how good the foveon sensor is, but an ideal sensor that
takes all colors per pixel across the whole pixel area would have
2-4 times the resolution of bayer and without the artifacts.
Nope. If a Foveon sensor is done properly, and if aliasing is to be avoided (the current Foveon sensor cameras do nothing to prevent aliasing), they too would need an Anti Alias filter. With an AA filter, the difference from a similar Bayer sensor would be negligible. As it is now, there is a difference, but it is clearly less than the 2X, let alone 3X, that has been stated.
A bayer sensor can also never capture the full pixel area which
means there will always be aliasing unless they blur it with a
filter.
This is not a separate issue from what you wrote: it is exactly what AA filters are for, and they also pretty much solve the other issues I commented on earlier in this message.
Another big improvement that can come with full color pixels is
better sensitivity, bayer is currently throwing away two thirds of
the light.
Not exactly. Green is responsible for 60% of the luminosity of the image, and because of this standard GRGB sensors have 50% green sensors, and only 25% red and blue sensors. The theoretical colour filter efficiency is 0.6*0.5 (green) + 0.3*0.25 (red) + 0.1*0.25 (blue) = 0.4 = 40%.
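
Spelled out as a quick check (the 0.6/0.3/0.1 luminosity weights are the rounded figures from above):

luma_weight = {"green": 0.6, "red": 0.3, "blue": 0.1}   # rounded luminosity
site_share = {"green": 0.5, "red": 0.25, "blue": 0.25}  # GRGB Bayer layout
efficiency = sum(luma_weight[c] * site_share[c] for c in luma_weight)
print(efficiency)   # 0.4: 40% of the luminance-weighted light is used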

Kind regards,
  • Henrik
--
And if a million more agree there ain't no great society
My obligatory gallery at http://www.iki.fi/leopold/Photo/Galleria/
 
The only way bayer interpolation can work right is when adjacent pixels change only in luminance and keep the same color spectrum; otherwise you're just making a wrong guess for the interpolation, and you'll get the wrong brightness value, not just the wrong color. I wanted to add this as an answer to your point about human vision being much more limited in color perception than in luminance.

I don't know how much of a problem chromatic aberrations really are for bayer interpolation, so I hope you can give me some comments about that?
I'm sure you know a lot about this; I didn't want to disagree with everything :-)

Most of my comments were about bayer compared to the ideal sensor; I don't have much knowledge about foveon (almost nothing).
 
I was talking about the fact that light of different colors is
refracted differently, so while at the center of the image you
can get a higher resolution with bayer interpolation, further out
to the sides, where the different colors don't align perfectly, it
doesn't work anymore.
Your point is not valid. If you have glass with noticeable chromatic aberrations, you will have a problem regardless of sensor or film technology.

When a lens with significant chromatic aberration is used, all primary colours will become unsharp and there won't be such a thing as a "1 pixel wide line" (the Anti Alias filter is another reason for this, but let's forget it for the time being).

You have to understand that, for example, "green" is not one wavelength, it is a combination of different wavelengths. If you have bad glass with chromatic aberrations, the greens of different wavelengths will spread so that you won't get a sharp green. The only exception to this would be if you shot in monochromatic light, but how often do you shoot lasers or similar objects?

If you want corner-to-corner sharpness, then you buy glass with as little chromatic aberrations as possible. Just the same as with film.

Kind regards,
  • Henrik
--
And if a million more agree there ain't no great society
My obligatory gallery at http://www.iki.fi/leopold/Photo/Galleria/
 
Most of my comments were about bayer compared to the ideal sensor;
I don't have much knowledge about foveon (almost nothing).
Ah, the ideal sensor, the photon counter. Both Bayer and Foveon sensors have a long way to go before they can even come close to the ideal sensor. I will quote myself from another message I just wrote. Quote begins:

Let's get philosophical for a moment and ask: what is the absolute lower limit of noise?

If you had a theoretical sensor which had pixels that would receive and record every single photon without making any errors at all, there would still be noise in high-ISO photos. This is simply because when it is dark enough, there are so few photons that they won't land evenly on the sensor pixels. So, when going far enough, there is always going to be noise.
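
A minimal simulation of that noise floor (Python/NumPy; the photon count is an arbitrary illustrative value):

import numpy as np

rng = np.random.default_rng(0)
mean_photons = 10                    # assumed photons per pixel in dim light
counts = rng.poisson(mean_photons, size=100_000)   # a perfect photon counter
# Shot noise: the standard deviation is sqrt(mean), so the SNR is
# sqrt(10) ~ 3.2 here no matter how good the sensor electronics are.
print(counts.mean(), counts.std())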

How far are we then from this ideal "photon counter" sensor? Astronomy photographers have told me that the efficiency of the current best DSLRs is around 10%. Much of it is due to the Bayer sensor matrix: you are allowing only a part of the full colour spectrum to enter each photo-site, or pixel. The best monochrome CCD sensors, when cooled to way below freezing point, can go right up to 80-90%. So they are very close to the ideal.

What can be done to increase the efficiency of DSLRs? I believe there is still some room for improvement using the Bayer matrix. We can perhaps go from 10% to 30% (and perhaps this is just a pipe dream of mine). This would give us a working ISO 10000 on an EOS 5D-like camera.

For better efficiency we must devise a way to record all colours at all pixel locations or redirect different wavelengths to their respective receivers. The obvious answer would be a Foveon-type sensor as used in Sigma's cameras, where every photosite has three sensors, one for each primary colour. However, at least so far Foveon sensors have had their own technological difficulties: their RGB separation (in hardware) leaves a lot to be desired, and as a result the blue colour in particular suffers. You wouldn't see this in everyday shots, as the embedded camera firmware / RAW converter would compensate for it automatically. However, with high ISO this would be, and is, noticeable.

At the moment it seems impossible to tell how sensitive DSLRs can get.

Quote ends.

I will just add that even the ideal sensor would need some kind of an Anti Alias filter to reduce aliasing (wrong high-detail information). Doing such a filter without sacrificing some resolution is, unfortunately, quite impossible. So even the ideal sensor would have to create non-ideal pictures: either slightly softer images, or images with aliasing on details. Damned if you do, damned if you don't.

Kind regards,
  • Henrik
--
And if a million more agree there ain't no great society
My obligatory gallery at http://www.iki.fi/leopold/Photo/Galleria/
 
I was talking about the fact that light of different colors is
refracted differently, so while at the center of the image you
can get a higher resolution with bayer interpolation, further out
to the sides, where the different colors don't align perfectly, it
doesn't work anymore.
Again, huh? If your lens does not focus colors of light equally at the edges, it's suffering from chromatic aberrations, and your resolution/contrast will be impacted regardless of whether you are using film, a mosaic, or a 3 color sensor. In fact, you could argue that a mosaic sensor's reduced sensitivity to fine color detail would make the issue less noticeable.
That doesn't mean you shouldn't capture the image more accurately;
even for luminance resolution, full color pixels can be better
theoretically.
Sure, but we don't take photos with theories - we use cameras. As such we need to account for the various engineering tradeoffs needed to make images with imperfect systems.
It's not theoretically impossible to have a sensor that captures
100% (or 99%). There is no reason the circuitry has to be at the
surface of the sensor; that is just a limitation of current technology.
But a 100% theoretical fill factor would apply to mosaics as well. So your point is as obscure as ever.

--
Erik
 
I couldn't give a flying fudge brownie about the sensor technology,
to be honest. As long as it delivers a few things I want - great
images, full frame, low noise across the entire range of ISO
sensitivities - I'm happy. Whether it's "x3" or a little gnome
with a paint brush is irrelevant if I get the results I'm after.
Hmm... who sells the cameras with the little gnome? And are they priced reasonably? I'm interested if the price isn't too high :-)

--

http://www.geocities.com/wild_tiger_1

http://flickr.com/photos/selrahcharles/

What's more important to you: taking photographs that have great image quality, or taking photographs that are quality images?
 
Bayer interpolation can probably do a reasonable job when the light
of all the colors lines up exactly, but when they are shifted by one
pixel width or even less it's obviously not possible anymore and
you'll just get 1/4th resolution for red and blue and 1/2 for green.
This is not true. You will still get reasonably accurate luminance
(black & white) information, because there is an Anti Alias filter
in front of the sensor. This filter slightly softens the image so
that there cannot be points or lines that are exactly one pixel
wide. This is the reason why a Foveon sensor of 3x3 megapixels and
without an AA filter can produce a somewhat sharper image than a
Bayer 3 megapixel sensor with an AA filter.
I think you misunderstood my comment, but I'll reply to this one anyway.

If the anti-alias filter does a good job and sends light from the whole area a red pixel is responsible for (2x2 times its sensor coverage) to that pixel, then the resolution is cut in half. It seems obvious that a bayer sensor has a resolution much closer to its per-color resolution than to the total combined megapixel number.
And if color details change at pixel size it's even more obvious
interpolation can't work.
I have a surprise for you: with very few exceptions, all JPEG
images are encoded with 4:2:0 colour coding, so you have only 1/4
resolution for colour in nearly all your final images. This
encoding has been chosen because it saves space and because colour
vision is less accurate than luminance (black & white) vision. As
it happens, this matches the colour capabilities of Bayer sensors
quite nicely. A good piece of software can, if necessary, do colour
sharpening for JPEGs using the luminance information (like all
high-quality NTSC and PAL TV receivers do for the analog RF TV
signal, which has very low colour resolution).
Thanks for the information.
I don't know how good the foveon sensor is, but an ideal sensor that
takes all colors per pixel across the whole pixel area would have
2-4 times the resolution of bayer and without the artifacts.
Nope. If a Foveon sensor is done properly, and if aliasing is to be
avoided (the current Foveon sensor cameras do nothing to prevent
aliasing), they too would need an Anti Alias filter. With an AA
filter, the difference from a similar Bayer sensor would be
negligible. As it is now, there is a difference, but it is clearly
less than the 2X, let alone 3X, that has been stated.
Total coverage full color pixels wouldn't need an anti-alias filter.

In 3D graphics, 'anti-aliasing' is done by calculating the total light hitting the area of the pixel as accurately as possible. The calculation without anti-aliasing is point sampling; one way to anti-alias is just to calculate the image at a much higher resolution, which is called supersampling.

A sensor pixel covering 100% of the area would basically be doing infinitely fine supersampling.
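
A quick numeric sketch of that comparison (Python/NumPy, with arbitrary sizes): point sampling a grating finer than the pixel grid keeps strong, false contrast, while averaging over the full pixel area attenuates it heavily, though, as discussed below, not all the way to zero.

import numpy as np

# A grating at 37 cycles across a 40-pixel sensor (Nyquist = 20 cycles),
# rendered at 200x oversampling to stand in for the continuous image.
hi = np.sin(2 * np.pi * 37 * np.linspace(0, 1, 8000, endpoint=False))
point = hi[::200]                         # 0% fill factor: point sampling
area = hi.reshape(40, 200).mean(axis=1)   # 100% fill: full-area average
print(np.abs(point).max())   # ~1.0: strong aliased contrast
print(np.abs(area).max())    # ~0.08: heavily attenuated, but not zero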

I know there are moire patterns that can still arise, but that's something other than aliasing and should be dealt with in processing.
Another big improvement that can come with full color pixels is
better sensitivity, bayer is currently throwing away two thirds of
the light.
Not exactly. Green is responsible for 60% of the luminosity of the
image, and because of this standard GRGB sensors have 50% green
sensors, and only 25% red and blue sensors. The theoretical colour
filter efficiency is 0.6*0.5 (green) + 0.3*0.25 (red) + 0.1*0.25
(blue) = 0.4 = 40%.
Well, I knew I didn't know the details of that; I was making a rough guess. That is different from what I expected, though, especially blue. Thanks for explaining.
 
So..."liking" to have several cameras is one thing. For me, having
these particular cameras means having the right tool for the job at
hand and being able to achieve the look I want to achieve...much in
If I regularly used that many cameras, I'm not sure I'd be able to
get the look I'm after, or be able to use the right tool
appropriately for any job. Instead I focus all of my attention
on the 5D, know it like the back of my hand, and have learned to do
great things with it.

This isn't to suggest that what might be a personal limitation on
my part necessarily applies to other people, but I just don't
understand.
Uh, I understand. He's got a 1Dsmk2, which is too heavy to use, so what he actually uses is the 20D.

And he started out in dSLRs with the Stigmas and moved up, but never discarded or ebayed the old ones. Surprisingly, these things just don't die. One of my sisters in-law is still using my old Sony S85.

It's like guitars. I bought an Ibanez John Scofield model to play jazz, found I was playing standards/swing so bought the Pat Metheny model, and then when I had a bit of spare change, bought an L5 (the most expensive object (including cars!) I've ever purchased). If I hadn't sold each earlier guitar to help finance the next, I'd have a half dozen instead of one.

And I'd pick them up occasionally. (Actually, I just bought another Ibanez; an ultra-cheap made-in-China floating pickup archtop (AF105FNT).)

But my main modus operandi is one guitar/one camera.

--
David J. Littleboy
Tokyo, Japan
 
A sensor pixel covering 100% of the area would basically be doing
infinitely fine supersampling.
Absolutely not.
I know there are moire patterns that can still arise, but that's
something other than aliasing and should be dealt with in
processing.
"In statistics, signal processing, and related disciplines, aliasing is an effect that causes different continuous signals to become indistinguishable (or aliases of one another) when sampled."

Here is a thought experiment: you are imaging a single bright red square, surrounded by white, with your theoretical 100% fill factor 3 color sensor. However, the pesky square happens to align right on the center of the intersection of 4 pixels. What's your sensor going to record? Most likely 4 pale red or pink pixels. How can you tell the difference between the signal in this situation and 4 actual pink squares? And if you cannot tell, why is that not aliasing, given the definition above?
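
That thought experiment is easy to run as a toy simulation (Python/NumPy; 8x8 subsamples per pixel stand in for the continuous scene, and the fill factor is an ideal 100%):

import numpy as np

oversample, n = 8, 4                                  # subsamples, pixels
scene = np.ones((n * oversample, n * oversample, 3))  # white background

def expose(shift):
    img = scene.copy()
    s = 2 * oversample + shift    # a red square exactly one pixel in size
    img[s:s + oversample, s:s + oversample] = [1, 0, 0]
    # 100% fill factor: every sensor pixel averages its entire area.
    return img.reshape(n, oversample, n, oversample, 3).mean(axis=(1, 3))

print(expose(0)[2, 2])                     # aligned: one fully red pixel
print(expose(-oversample // 2)[1:3, 1:3])  # straddling a corner: four
                                           # identical pink pixels [1, .75, .75]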

--
Erik
 
Your point is not valid. If you have glass with noticeable chromatic
aberrations, you will have a problem regardless of sensor or film
technology.

When a lens with significant chromatic aberration is used, all
primary colours will become unsharp and there won't be such a thing
as a "1 pixel wide line" (the Anti Alias filter is another reason
for this, but let's forget it for the time being).

You have to understand that, for example, "green" is not one
wavelength, it is a combination of different wavelengths. If you
have bad glass with chromatic aberrations, the greens of different
wavelengths will spread so that you won't get a sharp green. The
only exception to this would be if you shot in monochromatic light,
but how often do you shoot lasers or similar objects?
I know one color is already a range of frequencies, but it is a smaller range than the three colors combined, so they will still be sharper. I think one improvement for the future with full color pixels would be to sample, for example, 6 or 12 colors separately to get more sharpness.

I think it is not necessary to avoid chromatic aberrations with digital cameras, since you can align the images digitally; the software just needs to have the information about the lens design.
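
A sketch of that idea for the lateral (sideways) component of the aberration (Python with SciPy; the scale factors here are invented for illustration, where a real converter would derive them from the lens data):

import numpy as np
from scipy.ndimage import affine_transform

def correct_lateral_ca(img, r_scale=1.002, b_scale=0.998):
    # Re-register red and blue with green by rescaling them about the image
    # centre; the factors are hypothetical per-lens calibration values.
    h, w, _ = img.shape
    centre = np.array([(h - 1) / 2, (w - 1) / 2])
    def rescale(plane, s):
        # affine_transform maps output coords o to input coords
        # o/s + centre*(1 - 1/s): a magnification by s about the centre.
        return affine_transform(plane, np.eye(2) / s,
                                offset=centre * (1 - 1 / s), order=1)
    return np.dstack([rescale(img[..., 0], r_scale),
                      img[..., 1],
                      rescale(img[..., 2], b_scale)])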

Similarly, I think it will become unnecessary to make rectilinear lenses. Lens design will be focused on other factors, and the amount of chromatic aberration and barrel distortion of the lens will just be random side effects of the optimal design. I think this will allow cheaper, smaller and better lenses? I don't have much knowledge about lens design, so maybe you can tell me.

I think the different focus depths of the different colors will still be a problem even if you sample the frequencies that accurately; that's why I think cameras will eventually need to take multiple pictures at different depths. This could be taken to the extreme by focusing not just at one distance for the different colors but also at all the different objects at different distances in the scene. You could take every picture with the aperture wide open then. It will require a lot of processing power and intelligent software, though...
 
To me, aliasing means, for example, the stair effect you can get on edges, or other blockiness in images.
http://www.cambridgeincolour.com/tutorials/image-interpolation.htm#anti-aliasing

A bayer sensor without an alias filter would be close to point sampling; it uses the alias filter to get something closer to infinitely fine supersampling.

A full coverage pixel obviously wouldn't need that and would automatically get the result of the anti-aliased image in his demonstration.

I know that a sensor (or display) of a given resolution can only really capture (or show) about half that resolution, depending on whether the details line up with the pixel grid or not. I happened to spend some time reading and thinking about this yesterday, and this is really a software issue, since those moire patterns actually go on infinitely at continuously reduced contrast; obviously you can't blur 100 pixels with an anti-alias filter on the sensor.
 
It's not theoretically impossible to have a sensor that captures
100% (or 99%). There is no reason the circuitry has to be at the
surface of the sensor; that is just a limitation of current technology.
But a 100% theoretical fill factor would apply to mosaics as well.
So your point is as obscure as ever.
Well no, a red pixel can only have 25% coverage; I know this is what the anti-alias filter is for :-)
 
I know my language isn't good, so it's often difficult to understand what I mean, and it's also difficult for me to write.
 
Well no, a red pixel can only have 25% coverage; I know this is
what the anti-alias filter is for :-)
Or you can consider it to have 100% red coverage of that pixel ;-)

No one will argue that, given a finite number of sampling locations, sampling all colors at each location will be better than using a mosaic. But that's not a very interesting comparison, because that's not the choice we have for implementations.

--
Erik
 
Uh, I understand. He's got a 1Dsmk2, which is too heavy to use, so
what he actually uses is the 20D.
I was under the impression Lin regularly shoots with all of his cameras, each as the situation dictates. What you described is a lot easier to wrap my mind around, but I get the idea that's not the case?
And I'd pick them up occasionally. (Actually, I just bought another
Ibanez; an ultra-cheap made-in-China floating pickup archtop
(AF105FNT).)
Mine is the Ibanez "violin sunburst"; a beautiful instrument. I bought it when my previous ax was stolen. I notice different guitars have slightly different amounts of distance between strings and frets, so while I'd like a few of the same model for alternate tunings, I like the consistency of one body because my fingers know where they need to be without me having to think about it.
But my main modus operandi is one guitar/one camera.
Mine also, but it's just a personal preference, the way I work best. I sort of admire people who can juggle so easily, but I'm not one of them.
 
What I was arguing is that a 12 MP bayer sensor should be considered closer to a full color 3-6 MP sensor, and you haven't changed my mind about that. It's not even that important; as you said in your other post, a bigger reason for full color is sensitivity, so they will change to full color eventually even if they would have less detail per sampled pixel.
 
