Fun with Bayer interpolation

  • Thread starter: Ron Parr (Guest)
There are many misconceptions about Bayer interpolation. I dispel many of these in my FAQ (listed below). However, it's worth mentioning a few here:

1. Bayer interpolation gets the luminance (the B&W part) of the image right, and just interpolates the color. This is wrong. Every pixel interpolates 2 out of 3 of the RGB values. Errors in any one of these will cause errors in luminance (though errors in green are worse). If you figure out how much of the luminance signal a Bayer sensor is actually getting (by projecting into the luminance dimension) you find that the sensor is capturing about 40% of the luminance information.
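
The 40% figure can be sanity-checked with a quick back-of-envelope calculation. This is a sketch of my own, assuming the standard Rec. 601 luma weights and the ideal 2:1:1 Bayer site ratio; the exact projection Ron used may differ:

```python
# Back-of-envelope check of the "about 40%" claim: weight each channel's
# contribution to luminance (Rec. 601) by its share of Bayer sensor sites.
LUMA_WEIGHT = {"R": 0.299, "G": 0.587, "B": 0.114}  # Rec. 601 luma weights
SITE_SHARE = {"R": 0.25, "G": 0.50, "B": 0.25}      # Bayer mosaic fractions

captured = sum(LUMA_WEIGHT[c] * SITE_SHARE[c] for c in "RGB")
print(f"captured luminance fraction: {captured:.1%}")  # -> 39.7%
```

That lands at roughly 39.7%, consistent with the "about 40%" claim above.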

2. The types of patterns that cause trouble for Bayer sensors don't occur in real life. Would that this were so! Here's an amazing example I came across recently on pbase, with my thanks and apologies to the original photographer (D30, I think):

http://www.pbase.com/image/1121096

Admittedly, this is a worst-case kind of example, but it is real, and there's no reason to think that our images aren't riddled with smaller patches of errors like this in areas of fine detail, as well as subtler errors that would only be noticeable in comparison to the correct image.

If you want to see some examples of what's going on with Bayer interpolation (and an education on interpolation methods), I suggest you download this paper:

http://www4.ncsu.edu:8030/~rramana/Research/demosaicking4.pdf

Even if you don't follow the math, skip forward to the figures, where you can see some wonderful examples of the types of errors Bayer interpolation can make. Figure 18 is particularly striking.

--Ron Parr
FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
Looks good from here. He just got out one of his old jackets from the sixties.
--John
 
There are many misconceptions about Bayer interpolation. I dispel
many of these in my FAQ (listed below). However, it's worth
mentioning a few here:

1. Bayer interpolation gets the luminance (the B&W part) of the
image right, and just interpolates the color. This is wrong.
Every pixel interpolates 2 out of 3 of the RGB values. Errors in
any one of these will cause errors in luminance (though errors in
green are worse). If you figure out how much of the luminance
signal a Bayer sensor is actually getting (by projecting into the
luminance dimension) you find that the sensor is capturing about
40% of the luminance information.
It is wonderful how you have taken it on yourself to misrepresent what other people are saying. There is luma information in all 3 colors. It is not perfect, and thus there is some resolution lost. If one wants to do amateur color interpolation, then it will not be very good.
2. The types of patterns that cause trouble for Bayer sensors
don't occur in real life. Would that this were so! Here's an
amazing example I came across recently on pbase, with my thanks and
apologies to the original photographer (D30, I think):

http://www.pbase.com/image/1121096
Nice anecdotal evidence. Once again you misrepresent what is being said. Yes, it can happen, and nobody I have seen has denied that. It has also been commented that it normally occurs when shooting a MAN MADE object like fabric or screen doors or computer generated test patterns. These do occur with the right object that is the right size. I have taken 15,000 pictures with the D30 in the last year and a half, and I have a few with this problem (easily remedied if it does occur -- duplicate layer, set the layer to color, select the offending area, and do a Gaussian blur).
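
The Photoshop remedy described above (duplicate layer, set the blend mode to Color, Gaussian blur) amounts to smoothing only the chroma while keeping the original luminance. A rough programmatic sketch, assuming a float RGB image in [0, 1] and substituting a simple box blur for the Gaussian:

```python
import numpy as np

REC601 = np.array([0.299, 0.587, 0.114])  # luma weights (sum to 1.0)

def box_blur(channel, radius=2):
    """Simple edge-padded box blur of a 2-D array."""
    padded = np.pad(channel, radius, mode="edge")
    out = np.zeros_like(channel)
    h, w = channel.shape
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (2 * radius + 1) ** 2

def suppress_color_moire(img, radius=2):
    """Blur the color while preserving per-pixel luminance."""
    luma = img @ REC601
    blurred = np.dstack([box_blur(img[..., c], radius) for c in range(3)])
    blurred_luma = blurred @ REC601
    # add back the luminance difference so brightness detail survives
    return np.clip(blurred + (luma - blurred_luma)[..., None], 0.0, 1.0)
```

Because the luma weights sum to 1, adding the per-pixel luminance difference back to all three channels restores the original brightness exactly (up to clipping), so only the color moire gets smeared away.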

BTW there is B&W aliasing in the jacket of the subject picture as well. Not sure what would happen with a sensor with half the number of pixels as the comparable Bayer camera.

All things being equal, having coincident sampling would be better. But all things are not equal. The X3 sensors are coming out at about 1/2 the resolution of the Bayer sensors. The net result is that there is not that big a difference. It is not "revolutionary" for a demonstration that has not even made it into a final product to beat a camera that was demonstrated two years earlier and has been shipping in volume for a year and a half.
Admittedly, this is a worst-case kind of example, but it is real
and there's no reason to think that our images aren't riddled with
lots of smaller patches of errors like this in areas of fine detail
as well as lots of subtler errors that would only be noticeable in
comparison to the correct image.
Seems like a lot of speculation. Are you talking only from theory, or from real experience? And how many pictures have YOU shot and looked at with a 1.5X to 1.6X sensor camera? I am at about 15,000. I guess I better go back over all those in Photoshop at 300% and go on a witch hunt looking for Bayer artifacts. I'm from the camp of: if a tree falls in the forest and nobody hears it, then it makes no noise.
If you want to see some examples of what's going on with Bayer
interpolation (and an education on interpolation methods), I
suggest you download this paper:

http://www4.ncsu.edu:8030/~rramana/Research/demosaicking4.pdf

Even if you don't follow the math, skip forward to the figures,
where you can see some wonderful examples of the types of errors
Bayer interpolation can make. Figure 18 is particularly striking.
I will give you that you have a lot of interesting references. But I think you should take some pictures with a good camera, and you will find that there are a lot of things in, say, a D30 that need fixing before you even get to worrying about the Bayer filter.

--Karl
 
There are many misconceptions about Bayer interpolation. I dispel
many of these in my FAQ (listed below). However, it's worth
mentioning a few here:

1. Bayer interpolation gets the luminance (the B&W part) of the
image right, and just interpolates the color. This is wrong.
Every pixel interpolates 2 out of 3 of the RGB values. Errors in
any one of these will cause errors in luminance (though errors in
green are worse). If you figure out how much of the luminance
signal a Bayer sensor is actually getting (by projecting into the
luminance dimension) you find that the sensor is capturing about
40% of the luminance information.
It is wonderful how you have taken it on yourself to misrepresent what
other people are saying. There is luma information in all 3
colors. It is not perfect, and thus there is some resolution lost.
If one wants to do amateur color interpolation, then it will not be
very good.
The only part that makes any sense in there is, "There is luma information in all 3 colors." I agree with this and indeed took it into account in my calculations. The rest of what you're saying just doesn't make any sense, including the reckless accusation.
2. The types of patterns that cause trouble for Bayer sensors
don't occur in real life. Would that this were so! Here's an
amazing example I came across recently on pbase, with my thanks and
apologies to the original photographer (D30, I think):

http://www.pbase.com/image/1121096
Nice anecdotal evidence. Once again you misrepresent what is
being said. Yes it can happen and nobody I have seen has denied
Um. The picture doesn't say anything, so there's nothing to misrepresent. Well, I guess the picture does say something, but I doubt you'd agree with what I think it says. :-)
that. It has also been commented that it normally occurs when
shooting a MAN MADE object like fabric or screen doors or computer
generated test patterns. These do occur with the right object that
is the right size. I have taken 15,000 pictures with the D30 in
the last year and a half and I have a few with this problem
(easily remedied if it does occur -- duplicate layer, set layer to
color, select the offending area and do a Gaussian blur).
It's a good thing that most of the people I photograph are naked and live in caves!

Oh please! It's an example. AS I SAID, it's a worst case, but it does happen. There are obviously tiny little errors in all of our pictures. It's just that most of them aren't that big. I can't believe that you would honestly think otherwise.
BTW there is B&W aliasing in the jacket of the subject picture as
well. Not sure what would happen with a sensor with half the
number of pixels as the comparable Bayer camera.
You're right that moire will occur in B&W as well. The color just adds insult to injury.
All things being equal, having coincident sampling would be better.
But all things are not equal. The X3 sensors are coming out at
about 1/2 the resolution of the Bayer sensors. The net result is
that there is not that big a difference. It is not
"revolutionary" for a demonstration, that has not even made it into
a final product to beat a camera that was demonstrated 2 year
earlier and has been shipping in volume for a year and a half.
Assuming you're comparing a 3.5MP Foveon sensor to a 6MP sensor, then no, it's not half the resolution. The 6MP sensor only gets between a third and a half of what it claims, depending upon how you count.
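
One way to see where the "between a third and a half" bracket could come from is to tally the claimed megapixels under a few different accountings. This is an illustration of my own, not necessarily Ron's exact math:

```python
# Three hypothetical ways of counting what a "6MP" Bayer sensor actually
# measures, bracketing "between a third and a half":
claimed_mp = 6.0
per_channel = claimed_mp / 3   # each site measures 1 of 3 channels
luma_share = claimed_mp * 0.4  # ~40% luminance-capture estimate
green_only = claimed_mp * 0.5  # counting only the green sites
print(per_channel, luma_share, green_only)
```

Depending on which accounting you prefer, the "6MP" claim works out to somewhere between 2MP and 3MP of actual measurement.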
Seems like a lot of speculation. Are you talking only from theory
or real experience? And how many pictures have YOU shot and
looked at with a 1.5X to 1.6X sensor Camera? I am at about 15,000.
I guess I better go back over all those in Photoshop at 300% and go
on a witch hunt looking for Bayer artifacts. I'm from the camp if
a tree falls in the forest and nobody hears it then it makes no
noise.
Ignorance is bliss.
I will give you that you have a lot of interesting references.
But I think you should take some pictures with a good camera and
you will find that there are a lot of things in, say, a D30 that
need fixing before you even get to worrying about the Bayer
filter.
I'm not arguing about what the most important thing to fix in the D30 is. I am arguing with people who treat Bayer interpolation as if it's some kind of pure method. It's a hack that throws away about 2/3 of the information it might hope to capture. That's just a fact, and the worst-case examples are here to convince anybody who lacks the insight to see how the math relates to the real world. (I'm not implying that you're in this category - just explaining why I posted the message.)

--Ron Parr
FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
There is luma information in all the 3
colors. It is not perfect and thus there is some resolution lost.
If one wants to do amateur color interpolation, then it will not be
very good.
I am not completely sure what you mean by this. Let's
say sensor s(n,m) measures predominantly green light at lattice
point (n,m).

There is absolutely nothing you can say about the luminance at
(n,m) except that it is at least as large as 0.59 times the green
sensor's voltage.

The only way you can estimate the luminance is to also estimate
the red and blue values at location (n,m).

To do that, you need to use a filter kernel that is no higher in
spatial frequency response than the red and blue lattice points.

On top of that all finite kernels have a resolution loss.

By the time you are done, to get a proper luminance sample, you
have a luminance resolution that is below the resolution of the red or
the blue channel.

Now, if you are willing to admit some luminance error, say 5%, then
you can improve the "resolution" somewhat. But when you do that,
you will also be admitting more interpolation error. (Here I am using
Pratt's notation. For EEs out there, resolution error is related to the
energy in the "out of band" part of the response of a filter, and
interpolation error is related to the energy in the "in band" part of
the filter response.)

When you allow interpolation error, the hue at each pixel is now wrong.
Poof goes color purity and leads to the psychedelic image Ron linked to,
and other artifacts.
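
The hue error described above can be seen in a toy 1-D example. This is a sketch assuming simple bilinear demosaicking on a GRGR row of sensor sites; real cameras use fancier kernels, but the failure mode is the same:

```python
# Bilinear demosaicking on a GRGRGR row imaging a neutral gray edge:
# the interpolated channel disagrees with the measured one at the edge,
# so a gray transition picks up a hue.
pattern = ["G", "R", "G", "R", "G", "R"]
scene = [0.0, 0.0, 0.0, 255.0, 255.0, 255.0]  # hard black-to-white edge
measured = scene[:]  # neutral scene: each site reads its channel's value

def interp(i, want):
    """Average the nearest sites of the wanted channel on each side."""
    sites = [j for j in range(len(pattern)) if pattern[j] == want]
    left = max((j for j in sites if j <= i), default=None)
    right = min((j for j in sites if j >= i), default=None)
    vals = [measured[j] for j in (left, right) if j is not None]
    return sum(vals) / len(vals)

# Site 3 is a red site right at the edge: R is measured as 255, but G
# must be interpolated from sites 2 and 4 (values 0 and 255).
r3 = measured[3]
g3 = interp(3, "G")
print(r3, g3)  # 255.0 127.5 -> the neutral edge comes out reddish
```

R and G disagree by half the edge amplitude at that pixel, which is exactly the kind of hue error that blooms into the psychedelic moire in Ron's linked image.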
  • kc
 
First, I notice that the only posts you have made on this forum are in support of Foveon X3 and nothing in any of the forums on taking a picture using ANY camera. It looks like you are just a shill for Foveon.

Your comments, with all their mathematical garbage, remind me of the (probably apocryphal) story of Edison, who gave a mathematician the problem of finding the volume of a light bulb shape he handed him. The mathematician worked on the problem all night, and the next morning he gave the result to Edison. Edison said that the mathematician was wrong and proceeded to fill the bulb with water and then empty the bulb into a graduated cylinder.

The following is a picture I took with my D30 of a test pattern I calibrated to the D30 so that a “1” on the chart by the lines represents 1-pixel-wide lines. “1.5” on the test chart represents lines that are 1.5 D30 pixels wide. At 100% magnification you can just about make out the line separation of the “1” lines (theoretically impossible, according to you). Viewed at higher magnification (zoomed, say, in Photoshop), the “1” lines are pretty bad, as I would expect with the D30’s AA filter, and yes, there is some loss from the Bayer filter (I did notice it is a little lower in resolution on the vertical lines, probably as a result of the Bayer pattern). By “1.5” the lines are very well resolved. If I follow your math from other threads, it would be a downright miracle to be able to resolve even the 2-pixel-wide lines. So I guess Canon is doing something miraculous with their Bayer conversion.

http://www.fototime.com/ {9388D641-BB7B-4594-9537-81A2C7C1D16A} picture.JPG

Karl
 
Ron Parr wrote:
((( Cutting to the chase )))
I am arguing with people who treat Bayer interpolation as
if it's some kind of pure method. It's a hack that throws away
about 2/3 of the information it might hope to capture. That's just
a fact and the worst case examples are here to convince anybody who
lacks the insight to see how the math relates to the real world.
(I'm not implying that you're in this category - just explaining why
I posted the message.)
Where are all these people arguing that Bayer interpolation is "pure"? I would agree that there are some people who may not realize that an "N-megapixel" Bayer camera is a "marketing concept." Yes, it is a hack, but all things considered it is a pretty good hack.

The X3, just in terms of eliminating the Bayer pattern, is pretty much solving a non-problem, except for the occasional picture with just the right size pattern in it. This is particularly true if you compare it to cameras AVAILABLE at the same time. From what I see, the 6MP Bayers will be in the market before the 3.4MP X3.

Most of the things I shoot are not 100% full-range, 1-pixel-wide black and white patterns that will show up the color problems of a Bayer. A test pattern is pretty much the worst case for showing up Bayer problems.

Maybe you can explain the picture below. It was taken by a D30 of a test chart "calibrated to D30 pixels." The number 1 is by lines that are 1 pixel wide with 1-pixel spacing (or right at the Nyquist limit for a sine wave). The 1.5 lines are 1.5 D30 pixels wide with 1.5-pixel spaces. You can just about make out the lines at 1, and they are well resolved by 1.5. BTW, the vertical lines are a bit less resolved than the horizontal lines (probably a result of the Bayer pattern).
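
For what it's worth, "just about making out" lines at the 1-pixel mark is consistent with sampling theory: at exactly the Nyquist frequency, the recorded contrast depends entirely on the phase of the pattern relative to the pixel grid. A tiny sketch of my own:

```python
import math

# A sine whose period is exactly 2 pixels (the Nyquist limit),
# sampled once per pixel at two different phase alignments.
def sample_nyquist(phase, n=8):
    return [math.sin(math.pi * k + phase) for k in range(n)]

aligned = sample_nyquist(math.pi / 2)  # peaks on pixel centers: +1, -1, ...
unlucky = sample_nyquist(0.0)          # zero crossings on centers: all ~0
```

So resolving (or failing to resolve) the "1" lines on a given shot doesn't contradict the math; it just depends on how the chart happened to line up with the sensor grid.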

Still not bad for a hack, and not a lot of room to improve on. Do you think Canon has floating point DSPs inside the D30? My guess is that they had some guys in a lab trying out a bunch of different algorithms. They would have fired any of them that suggested using floating point.

http://www.fototime.com/ {9388D641-BB7B-4594-9537-81A2C7C1D16A} picture.JPG

--Karl
 
First, I notice that the only posts you have made on this forum are
in support of Foveon X3 and nothing in any of the forums on taking
a picture using ANY camera. It looks like you are just a shill
for Foveon.
Have you ever engaged in a rational argument without resorting to
personal attacks?

Shill? Even though I work in Silicon Valley, I wish I even knew a
single Foveon employee, so I could get an image taken with a real
camera. No such luck.

I didn't even know about Foveon until I saw it mentioned by others
here. My only connection is I bought some minuscule amount of NSM
stock in early March. Heck, when Lynn Conway and Mead first published
their VLSI methodology, it took me quite a while to be convinced (I am an
old EE who used to design with tubes; today, I still build RF electronics
and antennas for my hobby). For me, it came as a pleasant surprise that
Mead was involved in the X3 image sensor (although my guess is that most
of the credit really belongs to Merrill at Foveon).

My background is in signal processing associated with Radar Astronomy and
SETI. When I left school, I was involved in the early development of laser
printing. In my current company, my work involves algorithm development
in the areas of graphics, imaging and color science. In the past, I have worked
on halftoning and resolution enhancement implementations.

You accuse me of not having used a digital camera; because of my work,
my first encounter with digital camera image quality was with the early
Apple QuickTake cameras (Kodak OEM'ed, later Fujis). I still own a
QuickTake 150, for nostalgic reasons.

From the first QuickTakes, as each generation of cameras came out, I still
longed for a sensor which does not use mosaics. At the same time, I
knew that until some other solution comes along, the Bayer technology
is the best and only viable solution. Without Bayer, we would still be shooting
digital on monochrome cameras or color wheels.

I have seen Phil's resolution images from both Bayer and Foveon cameras,
and they validate my suspicions on the potential quality improvements
of non-mosaic cameras.

Instead of arguing with you, why don't we just wait for Phil to publish images
taken from a production (or close to production) Foveon-based camera. I
would, in the meantime, appreciate some civility from you.

I also have no axe to grind with Nikon or Canon. My favourite film camera is
still an ancient M3 with a dual-range Summicron, but sprinkled in the collection
are some Rolleiflexes, Nikons and Olys. Shows how long it has been since
I have seriously used film. I'll keep using my now two-year-old digital camera
until a Foveon camera that suits me (mainly landscape and portrait, and lots
of junk snapshots) appears.

I had intended to buy a D30, and then the rebates came out in December.
Special rebates indicated to me that something better was imminent. So, I waited
to see what was next. While waiting, the Foveon announcement hit the streets.
Now there is reason for me to wait some more.
  • kc
 
Karl wrote:

Where are all these people arguing that Bayer interpolation is
"pure"? I would agree that there are some people who may not
realize that an "N-megapixel" Bayer camera is a "marketing concept."
Yes it is a hack, but all things considered it is a pretty good
hack.
Have you been reading this forum?
The X3, just in terms of eliminating the Bayer pattern is pretty
much solving a non-problem except for the occasional picture with
just the right size pattern in it. This is particularly true if
you compare it to cameras AVAILABLE at the same time. From what I
see, the 6MP Bayers will be in the market before the 3.4MP X3.
Can't agree with that.
Most of the things I shoot are not 100% full range 1 pixel wide
black and white patterns that will show up the color problems of a
Bayer. A test pattern is pretty much the worst case for showing up
Bayer problems.
That's right, and people show these examples because some have trouble understanding the subtler problems that come up. It can get pretty frustrating: if you show the small problems, people write them off as small. If you show a larger example, some people say that it's a contrived example. Of course, all of our images don't have wacky colors. The point is that fine details such as patterns and transitions will be better captured with a Foveon-style sensor than a Bayer-style sensor. If you don't care about fine details, then why get an expensive camera?
Maybe you can explain the picture below. It was taken by a D30 of
a Test chart "calibrated to D30 pixels." The number 1 is by lines
that are 1 pixel wide with 1 pixel spacing (or right at the Nyquist
limit for a sine wave). The 1.5 lines are 1.5 D30 pixels wide by
1.5 pixel spaces. You can just about make out the lines at 1 and
No. You can't make out the lines at 1. This is a good example of how unreasonable you're being.
they are well resolved by 1.5. BTW, the vertical lines are a bit
less resolved than the horizontal lines (probably a result of the
Bayer pattern).
There are red and green stripes between the horizontal lines.

I can't comment on the test conditions since I can't verify them.
Still not bad for a hack and not a lot of room to improve on. Do
you think Canon has floating point DSP's inside the D30? My guess
is that they had some guys in a lab trying out a bunch of different
algorithms. They would have fired any of them that suggested using
floating point.
You're just all over the place now.

I don't know what hack Canon is using. Whatever it is: 1) it has to do a decent amount of computation, and 2) it has to make mistakes.

--Ron Parr
FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
It amazes me the number of people who do not want a better image
sensor to be available. While the Bayer pattern is good, it is
easily fooled and causes a serious performance hit on sharpness and
contrast. It forces photos to be sharpened, because the output of
even the best interpolations is soft. Look at the unsharpened
outputs of cameras like the D30 and 1D. They are SOFT. This is a direct
result of the Bayer pattern. This is really true with black on red or
black on blue.

The X3 (while not 3 times better) will make any image better. Images will
be sharper without the need to sharpen. I hate the artifacts left behind
by sharpening.

Don't get me wrong, my G1 always surprises me, and my 1D blows it out
of the water. But they would both be better with an X3-based sensor, even at
a lower resolution.

The perfect camera:
EOS 1D, 6-8 MPixel X3 sensor, 1.3-> 1.0 multiplier, ISO to 3200.

In time...
 
There is a huge amount to improve on with this very soft image.

In the vertical and horizontal directions, the sharpening and interpolation artifacts take 3-4 pels to transition. The X3 takes 2-3 pels. Take a look at the sample output of the X3 sensor and do some upsampling on the data. It upsamples great. Better than great, it is fantastic.

I just hope that Canon recognizes this and buys into the technology.

Steven
 
1) It has to do a decent amount of computation
You keep asserting this without showing any examples that the amount of computation is either an economic problem or a performance problem. A Canon 1D can output 8 FPS full frame. A D60 can output 3 FPS. In both cases the limiting factor is NOT Bayer interpolation or even JPEG compression time, but I/O write speeds and memory buffer limitations. (If processing time were the problem, you would expect the RAW image rate to be faster than the JPEG image rate.) Even the sub-$100 cameras manage to do this "expensive" processing (if more slowly).
2) It has to make mistakes.
Yes, and so will an X3 sensor for detail beyond the Nyquist limit. Noise and color sensitivity errors are still unknown. The real question will be, for a given price point, which errors are more significant?

That's why we look at cameras. The D100, D60, SD9, and S2 all list for about the same price. Each has made different tradeoffs: Canon has their 6MP Bayer CMOS imager, Nikon a more conventional Bayer CCD (Sony's?), Sigma the Foveon, and Fuji the SuperCCD. (BTW, would using the SuperCCD pixel orientation improve the X3 resolution? Or is it just a Bayer thing?) We'll find out what's what in a few months.

The most likely answer is that there will be some scenes/situations where each camera is superior to the others. For example, with a current rated max ISO of 400 (and the F7 spec sheet implies that you have to use pixel averaging to get higher sensitivity than ISO 100), the SD9 is NOT going to be the camera of choice for indoor sports. BTW, this pixel averaging sounds like a special case of interpolation (or just giving up on preserving resolution).
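The pixel averaging mentioned above amounts to binning: averaging blocks of pixels trades resolution for noise. A minimal sketch of 2x2 binning (the noise level and patch values are made-up illustrations, not data from any camera):

```python
import numpy as np

rng = np.random.default_rng(0)

def bin2x2(img):
    """Average non-overlapping 2x2 blocks: one quarter the pixel count,
    half the linear resolution, half the random noise per output pixel."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

signal = np.full((200, 200), 100.0)                 # flat gray patch
noisy = signal + rng.normal(0, 10, signal.shape)    # additive noise, sigma = 10
binned = bin2x2(noisy)

print(f"noise before binning: {noisy.std():.1f}")   # ~10
print(f"noise after binning:  {binned.std():.1f}")  # ~5, i.e. sigma / sqrt(4)
```

Averaging 4 pixels cuts random noise by a factor of sqrt(4) = 2, roughly one stop of usable sensitivity, at the cost of half the linear resolution in each direction.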

Now, are any current limitations of the Foveon just version 1.0 bugs that will be lifted with more development? Will it make the Bayer sensor obsolete or even push it into only niche applications? Maybe, but I say there is not enough evidence to support this for now.

--Erik
Free Windows JPEG comment editor: http://home.cfl.rr.com/maderik/edjpgcom
 
1) It has to do a decent amount of computation
You keep asserting this without showing any examples that the
amount of computation is either an economic problem or a
performance problem. A Canon 1D can output 8FPS full frame. A D60
I've never asserted that it's a performance problem. I've asserted that for a cheap camera eliminating this step would give a cost savings. This should be obvious to you.
can output 3FPS. In both cases the limiting factor is NOT Bayer
interpolation or even JPEG compression time but I/O write speeds
and memory buffer limitations. (If processing time were the problem
you would expect the RAW image rate to be faster than JPEG image
rate.) Even the sub $100 cameras manage to do this "expensive"
processing (if more slowly.)
So, is the problem that you don't understand why a manufacturer would want to put one less chip or a cheaper chip in a sub $100 product? Again, this should really be very obvious, not something that requires defending.
2) It has to make mistakes.
Yes, and so will an X3 sensor for detail beyond the Nyquist limit.
Noise and color sensitivity errors are still unknown. The real
question will be, for a given price point, which errors are more
significant?
I think that there's an implicit assumption that after production stabilizes the cost per chip should be no more than a Bayer pattern chip for equivalent resolution.

I've avoided speculation on how a 3.5MP Foveon sensor will perform in comparison to a 6MP Bayer interpolated Canon sensor because it would be just that - speculation. Moreover, it would be comparing a first generation product (Foveon) with a second generation product (Canon).
That's why we look at cameras. The D100, D60, SD9, and S2 all list
for about the same price. Each has made different tradeoffs: Canon
has their 6MP Bayer CMOS imager, Nikon a more conventional
Bayer CCD (Sony's?), Sigma the Foveon, and Fuji the
SuperCCD. (BTW, would using the SuperCCD pixel orientation improve
the X3 resolution? Or is it just a Bayer thing?) We'll find out
what's what in a few months.

The most likely answer is that there will be some scenes/situations
where each camera is superior to the others. For example, with a
current rated max ISO of 400 (and the F7 spec sheet implies that
you have to use pixel averaging to get higher sensitivity than ISO
100), the SD9 is NOT going to be the camera of choice for indoor
sports. BTW, this pixel averaging sounds like a special case of
interpolation (or just giving up on preserving resolution.)

Now, are any current limitations of the Foveon just version 1.0 bugs
that will be lifted with more development? Will it make the Bayer
sensor obsolete or even push it into only niche applications?
Maybe, but I say there is not enough evidence to support this for
now.
I don't disagree with any of this.

I've never predicted that the actual implementation of the Foveon approach would be superior to all competitors in their first generation products.

I have been trying to get people to understand that:

1. There are some real issues with Bayer interpolation that many people currently accept without really thinking about it.
2. Attempting to address these issues is a good idea.

3. Foveon's claims to be capturing 3X as much data as a Bayer interpolated sensor are not ridiculous and have serious potential to translate into visible improvements in pictures.

--Ron Parr
FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
BTW, would using the SuperCCD pixel orientation improve the X3 resolution?
If I understand it correctly, if all other factors (such as number of effective pixels, pixel size, sensor size, etc.) are the same, it should generally improve diagonal resolution and overall resolution, but in some cases slightly decrease vertical and horizontal resolution. That would be especially evident when comparing it to a test chart captured by a perfectly aligned row-and-column pattern sensor. The honeycomb pattern would not capture the same horizontal and vertical resolution, but the more varied nature of the honeycomb structure does not require perfect alignment of the subject pattern to produce good results. This means that a worst-case scenario of a misaligned honeycomb structure should outperform a worst-case scenario of a misaligned row-and-column sensor.

The honeycomb structure also requires interpolation to turn the data into a row-and-column picture, losing the slight speed and purity advantages of non-interpolated pictures. However, for large printing etc. it's usually best to upsample the picture anyway in order not to see the individual pixels as small rectangles, so in that case the honeycomb pattern and the interpolation needed to convert it into a row-and-column pattern would not degrade the quality compared to a row-and-column picture upsampled to the same size.
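To illustrate the point that upsampling is itself interpolation: a minimal bilinear upsampler computes every output pixel as a weighted blend of its four grid neighbours, which is the same kind of operation a honeycomb-to-grid conversion performs. (The 4x4 ramp input is just a made-up example.)

```python
import numpy as np

def bilinear_upsample(img, factor):
    """Bilinear upsampling: every output pixel is a weighted blend of its
    four nearest input pixels -- interpolation, just like a
    honeycomb-to-grid conversion."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.floor(ys).astype(int).clip(0, h - 2)
    x0 = np.floor(xs).astype(int).clip(0, w - 2)
    fy = (ys - y0)[:, None]          # fractional row position
    fx = (xs - x0)[None, :]          # fractional column position
    a = img[np.ix_(y0, x0)]          # top-left neighbours
    b = img[np.ix_(y0, x0 + 1)]      # top-right
    c = img[np.ix_(y0 + 1, x0)]      # bottom-left
    d = img[np.ix_(y0 + 1, x0 + 1)]  # bottom-right
    return (a * (1 - fy) * (1 - fx) + b * (1 - fy) * fx
            + c * fy * (1 - fx) + d * fy * fx)

img = np.arange(16, dtype=float).reshape(4, 4)   # made-up 4x4 ramp
big = bilinear_upsample(img, 2)
print(big.shape)   # (8, 8)
```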

Most important, though, is that I don't know how making the X3 sensor in a honeycomb pattern rather than a row-and-column pattern would affect production difficulty and expense, so I can't say whether it would be economically viable to produce such chips, but it will probably be worth experimenting with.
-- ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ http://hem.bredband.net/maxstr
 
I've never asserted that it's a performance problem. I've
asserted that for a cheap camera eliminating this step would give a
cost savings. This should be obvious to you.
No, it's not obvious to me:

1. The same "cost savings" is available for Bayer CCD cameras, and it's not being done even on $100 retail cameras (well, the Dj-1000 and some of the Agfas did it). So I don't see that this is as big a savings as you make it out to be.

2. The RAW file output for the F10 Foveon sensor would be about 3.7 MP x 12 bits / 8 / 2 = ~2.8 MB. A normal-mode JPEG from a current 2 MP camera is 400 KB. That's a 7:1 difference (or pick your own numbers). To achieve comparable shot-to-shot time, this camera is going to have to have a D30-class I/O subsystem. You seem to think that's a trivial cost OR that it's not needed and a 4-7x write time will be acceptable. I disagree here as well.
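The arithmetic behind those numbers, spelled out (every figure here is the poster's assumption: 3.7 MP, 12-bit samples, roughly 2:1 RAW compression, and a 400 KB normal-mode JPEG):

```python
# Every figure below is the poster's assumption, not a measured value.
MP = 1_000_000

foveon_pixels = 3.7 * MP    # F10-class X3 sensor
bits_per_sample = 12        # raw ADC depth
compression = 2             # assumed ~2:1 lossless RAW compression

raw_bytes = foveon_pixels * bits_per_sample / 8 / compression
jpeg_bytes = 400 * 1000     # typical normal-mode JPEG from a 2 MP camera

print(f"RAW:   {raw_bytes / MP:.1f} MB")           # ~2.8 MB
print(f"ratio: {raw_bytes / jpeg_bytes:.0f}:1")    # ~7:1
```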
So, is the problem that you don't understand why a manufacturer
would want to put one less chip or a cheaper chip in a sub $100
product?
No, I think that the chips they will have to add to handle the additional buffering and I/O will more than balance out the cost of the interpolation processing chips.
I think that there's an implicit assumption that after production
stabilizes the cost per chip should be no more than a Bayer pattern
chip for equivalent resolution.
There is both an IF and a WHEN component to this assumption.
1. There are some real issues with Bayer interpolation that many
people currently accept without really thinking about it.
Most people in the sub-$100 class couldn't care less about these issues. They just don't show up in 4x6 prints or at screen resolutions enough to matter. (I predict Foveon sensors will be preferred by landscape and studio photographers.)
2. Attempting to address these issues is a good idea.
Yes, but what if you are just trading one set of issues for another?
3. Foveon's claims to be capturing 3X as much data as a Bayer
interpolated sensor are not ridiculous and have serious potential
to translate into visible improvements in pictures.
Of course the X3 has potential. I personally hope it's all it's cracked up to be. But:

1. I don't believe the Bayer CCD issues are as significant as you make them out.

2. I think there are some possibly significant issues with X3 that may limit its competitiveness. (Low ISO and larger file sizes.)

The only real information we have is from Foveon, who, like anyone selling something, is only presenting the positive side of their product. What are the downsides? Can we live with them? Can they be fixed? Will the potential be realized in a useful form? If yes, when?
--Erik
Free Windows JPEG comment editor: http://home.cfl.rr.com/maderik/edjpgcom
 
Maybe you can explain the picture below.
If you want to understand more about the differences between the technologies, I have a suggestion for you. Take 3 additional test shots with your D30 on three additional charts. I don't know if you can get hold of the appropriate charts, but the black markings on the white chart should be exchanged for a pure primary color on each chart, namely one chart with red markings, one with blue markings, and one with green markings. Now check the color resolution using the results from these 3 charts.

If you want even more advanced resolution experiments, try the example above again, but this time use charts with a black background instead of white. It should probably give some very interesting readings.

I have not done these experiments myself, so I only have a theory on the outcome, but I assume that the Foveon sensor should outperform any RGBG Bayer sensor (of the same size) by several times under these extreme conditions.
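The intuition behind the single-color charts is easy to quantify: in an RGBG Bayer mosaic only one photosite in four carries red (or blue), so a pure-red pattern is sampled at half the linear rate of the full pixel grid, while a full-color sensor samples it at every site. A sketch (the layout used here, red on even rows and even columns, is one common Bayer arrangement):

```python
import numpy as np

def red_sites(h, w):
    """Boolean mask of the red photosites in an RGBG Bayer mosaic
    (this layout puts red on even rows and even columns)."""
    mask = np.zeros((h, w), dtype=bool)
    mask[0::2, 0::2] = True
    return mask

mask = red_sites(8, 8)
print(f"fraction of sites sampling red: {mask.mean():.2f}")   # 0.25
# Red is sampled only every 2 pixels in each direction, so a pure-red
# chart is resolved at half the linear rate a full-color sensor would get.
```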

The sensor-based autofocus should also be improved by this, as current RGBG sensor-based AF usually has problems focusing on blue or red objects.
-- ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ http://hem.bredband.net/maxstr
 
We've established that we disagree on the costs and benefits for cheap cameras in another thread. I don't think you've added anything new. I stand by my arguments before and I don't feel any need to repeat them.

I am a bit curious about why you keep harping on this. Apart from responding to people like you who seem obsessed with it, it's something that I've never pushed as a reason to be excited about this technology.

It's all a bit strange to me. Whenever the discussion gets focused on image quality, one or two guys suddenly want to discuss whether it will really be cheaper to manufacture $100 cameras - as if any of us really cared about $100 cameras. Start a $100 camera thread if it's really that important to you.
I think that there's an implicit assumption that after production
stabilizes the cost per chip should be no more than a Bayer pattern
chip for equivalent resolution.
There is both an IF and a WHEN component to this assumption.
And this is relevant to the discussion because...
1. There are some real issues with Bayer interpolation that many
people currently accept without really thinking about it.
Most people in the sub-$100 class couldn't care less about these
issues. They just don't show up in 4x6 prints or at screen
resolutions enough to matter. (I predict Foveon sensors will be
preferred by landscape and studio photographers.)
And you're obsessing about sub $100 cameras because...
2. Attempting to address these issues is a good idea.
Yes, but what if you are just trading one set of issues for another?
I'm still waiting for you to tell us a serious issue.
1. I don't believe the Bayer CCD issues are as significant as you
make them out.
I don't know what else to tell you. I've explained the theory. I've given examples. You can continue to believe that it's possible to always guess right, along with all of the inconsistencies that this implies. I really don't care what you believe any more.
2. I think there are some possibly significant issues with X3 that
may limit its competitiveness. (Low ISO and larger file sizes.)
There are reasons to be concerned that the fill factor for the first sensors may be lower than for approaches with fewer transistors per pixel. Fortunately, transistors get smaller, and the percentage of the chip surface occupied by transistors will shrink rapidly with the normal, regular improvements in CMOS lithography.

If you want to have a sane discussion about the implementation issues in bringing Foveon technology to a real product, I'm sure you will find many interested parties. However, I suggest you keep these things separate in your mind. This thread was about Bayer interpolation and what can happen when you do it. These are facts about Bayer interpolation that will be true whether Foveon's chips cost $1 and work brilliantly or cost $1000 and work poorly.

--Ron ParrFAQ: http://www.cs.duke.edu/~parr/photography/faq.htmlGallery: http://www.pbase.com/parr/
 
I'm another person who admires your persistence, Ron. But in every aspect of life there are always some who will resist change until both sides are blue in the face. It could be that they don't feel comfortable admitting past unawareness of problems associated with the old... just a guess.

Michael
--Michael
 
