lateral color filter vs vertical color filter comparison

There is no one who wouldn't rather
have 3 color samples
we are not talking about the sd9 here. we are talking about the
resolution capability of a vertical color filter pixel relative to
a lateral
color filter pixel.
Who doesn’t believe that 3 samples at one spatial location is better? The real/current market issue is how an X3 at 3.4MP compares to Bayer at something like 6MP (and going to about 9MP). We all know Bayer sampling has problems.
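For concreteness, here is the arithmetic behind that comparison, using the sensor dimensions as I understand them from the spec sheets (a rough sketch; the per-location sample counts are the point, not the exact pixel dimensions):

```python
# Spatial locations vs. color samples for the sensors discussed here.
# Dimensions are approximate spec-sheet values, not measured figures.
sensors = {
    "SD9/SD10 (X3)":    (2268, 1512, 3),  # 3 color samples per location
    "300D/10D (Bayer)": (3072, 2048, 1),  # 1 color sample per location
}
for name, (w, h, samples_per_site) in sensors.items():
    sites = w * h
    print(f"{name}: {sites / 1e6:.1f}M locations, "
          f"{sites * samples_per_site / 1e6:.2f}M color samples")
```

So the X3 has fewer spatial locations but more total color samples, which is exactly why the two camps keep talking past each other.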
the vertical color filter pixel represents the best solution to the
physics problems: quantum efficiency and diffraction limited
resolution.
I think you have a very good point about diffraction-limited resolution. Optics is not my field, but I do know to worry about diffraction limits. I think the market is going to move from “megapixels” to the quality of each pixel in terms of color accuracy, dynamic range, and noise/ISO. The issue will then become, as it was for high quality film, what technology can best capture light PER unit area.
we have some work to do engineering some of the more practical
problems such as noise floor. but I expect that progress will be
more rapid than the CCD guys
Was it deliberate that you left out CMOS Bayer cameras? I agree that CMOS will eventually rule and squeeze out CCD, but it could still be Bayer/spatially color filtered. The only current product with the X3 is competing mostly with Canon’s CMOS DSLRs (and to a lesser extent Nikon’s CCD). I guess it could be that you are thinking more about competing in the P&S world, where Sony dominates with CCD.
I think it is a good idea to do a reality check with REAL cameras
the 300d image has no color moire because there are no resolved
features near nyquist - that is, there is no single pixel contrast at
all.
no place in the image shown is there a black pixel / white pixel
/ black pixel where pattern pitch = pixel pitch. since they cannot
resolve minimum pitch, they should not be claiming so many pixels!
What? This is a test chart that sweeps through all different pitches of the distance between black and white transitions. For comparison take a look at the older D30 chart. As the lines converge it goes into rather distinct color moiré as would be expected (and shown in your demonstration figure). I was quite surprised that the 10D/300D did not have this as well. I don’t think that it is a function of the AA filter as they would have to blur the image too much to get it this clean, but rather some form of moiré detection algorithm (the 10D/300D are “too clean”).


the SD10 image shows classic chromatic aberration (CA), a radially
symmetric pattern. if single pixel color features are going to be
resolved,
then chromatic aberration is going to be resolved too. a better
sensor will show more lens problems. the CA is just being blurred
out of the image from the 300D.
Why is it showing some violet color fringing on the black-to-white transitions in the SD10 photo even in the center of the photo, where the aberrations should be minimal? Per my previous comments, if the 300D AA filter were blurring out the color this much, it could not resolve as much as it does. I’m not sure what Canon is doing, but they seem to have some other algorithm at work.
the vertical color filter has fewer radial color artifacts than a
color
mosaic sensor because there is no lateral color crosstalk possible
(all pixels are the same)
That would seem to be true in theory, but I would like to see it in practice.
Historically for a new technology to win out over established high
volume technology, the new technology has to show clear and
indisputable advantages
this raises a little discussed issue with the vertical color filter
technology,
which is manufacturing. the plastic color filters are a messy and
difficult
to control wafer process technology. rework rates are high and color
depends on incoming QA, layer thickness, etc etc. by contrast, the
color filter crossovers of the vertical color filter depend only on a
fundamental property of the most perfect crystal made by humans
in mass production. this is very important for "high volume
technology".
That sounds nice in theory, but they are stamping out those messy color filters by the millions. They also have the advantage of being able to “tune” their characteristics and even use extra colors (e.g. Sony’s “E” filter) to expand their gamut.
technologies that are fundamentally easier to make have a way of
succeeding in the long term.
It depends on how significant the differences may be. It would seem that the X3 process is more complex than, say, Canon’s CMOS sensor process, at least at the silicon level. The X3 has multiple wells and is squeezing in 3 times the transistors per pixel (from 3 for Canon to 9 for the X3). Thus Canon’s CMOS would seem to be cheaper per unit area of silicon.
happily America is still populated by large numbers of people who
foolishly believe that if they invent a better mousetrap,
Hey, I’m all for the better mousetrap and American inventiveness. I’m an independent inventor that is selling a much better way to build high resolution displays for near-eye and projector applications.
I agree that better dynamic range is important to consumers.
Once again, others are working on CMOS sensors with Bayer filters at both the high and low end too. I agree that CMOS is going to win out in the long run.
 
What? This is a test chart that sweeps through all different
pitches of the distance between black and white transitions. For
comparison take a look at the older D30 chart. As the lines
converge it goes into rather distinct color moiré as would be
expected (and shown in your demonstration figure). I was quite
surprised that the 10D/300D did not have this as well. I don’t
think that it is a function of the AA filter as they would have to
blur the image too much to get it this clean, but rather some form
of moiré detection algorithm (the 10D/300D are “too clean”).
Hey Dimage,

I think you are onto something. I own an S2 Pro, and color moiré rears its ugly head once in a while. But in Photoshop it is VERY easy to remove color moiré, and do so perfectly without losing resolution. I have a feeling Canon and others are doing something similar to my Photoshop action to remove it. I routinely batch a bunch of S2 images whether I see the problem or not, since it is so easy and does not affect the image other than removing the color moiré. In fact I tried it on that D30 image of the resolution chart. It removed it perfectly.
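The action itself isn't shown here, but the standard recipe is to blur the chroma while leaving the luma untouched, since the luma carries nearly all the perceived resolution. A minimal sketch of that idea (my reconstruction of the general technique, not Sean's actual Photoshop action):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def remove_color_moire(rgb, radius=2):
    """Blur chroma, keep luma: suppresses color moire without visibly
    softening detail. rgb is a float array, shape (H, W, 3), in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Rec.601 luma/chroma split; a Lab-mode blur in Photoshop does roughly this.
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb, cr = b - y, r - y
    # Box-blur only the two chroma planes.
    size = 2 * radius + 1
    cb = uniform_filter(cb, size=size, mode="reflect")
    cr = uniform_filter(cr, size=size, mode="reflect")
    # Rebuild RGB from the untouched luma plus the smoothed chroma.
    b2, r2 = cb + y, cr + y
    g2 = (y - 0.299 * r2 - 0.114 * b2) / 0.587
    return np.clip(np.stack([r2, g2, b2], axis=-1), 0.0, 1.0)
```

Any false color that alternates at the pixel pitch averages away in the blurred chroma planes, while edges survive in the luma.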

Regards,
Sean
 
At least there were some tests concerning noise in the Sigma SLR
Talk forum that IMHO showed that it is used in the SD9.
Do you still have the link? IIRC when this was discussed (at least in the context of long exposures), no significant difference was found (vs. just the effects of software resizing). I've also not seen this as part of any published review.

--
Erik
 
the vertical color filter pixel represents the best solution to the
physics problems: quantum efficiency and diffraction limited
resolution.
I can see a possible advantage in quantum efficiency, but not diffraction. I would think that the size of the smallest resolvable feature would be limited by the diffraction spot size regardless of sensor type, and in a sense, a diffraction spot a bit larger than the photosite size in a Bayer CFA system would be doing the same thing one wants the low pass (AA) filter to do anyway.
 
Of all of Foveon's arguments, the diffraction one makes the most sense in the long term to me. At some point the sensor array starts acting more like a diffraction grating than an array of sensors.

So let's say you can make the transistors (and color filters for CFA) as small as you want (with technology scaling this is a given). Let's also say Foveon perfects their technology so that they sense as well as the Bayer/CFA pixels do, so the samples are the same. But Bayer needs 2X the samples per unit area to give about the same resolution, so they make their "pixels" half the area, or 0.707 times the linear size, of the Foveon pixels. Thus the Bayer pixels hit the diffraction limit before the Foveon sensor would.

But then there is the practical question of when this would happen and at how many megapixels per unit area and what market it would be going after.
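A quick back-of-envelope of that crossover, using the common Airy-disk approximation (spot diameter ≈ 2.44 · wavelength · f-number). The wavelength and aperture here are illustrative assumptions; the 9.12 um pitch is the published SD9/SD10 figure:

```python
# When does pixel pitch dip below the diffraction spot size?
WAVELENGTH_UM = 0.55   # mid-spectrum green light (assumed)
F_NUMBER = 5.6         # a moderate working aperture (assumed)

airy_um = 2.44 * WAVELENGTH_UM * F_NUMBER   # ~7.5 um spot at f/5.6

x3_pitch = 9.12                  # um, SD9/SD10 pixel pitch
bayer_pitch = x3_pitch / 2**0.5  # ~6.45 um: 2x the sites per unit area

for name, pitch in (("X3", x3_pitch), ("Bayer", bayer_pitch)):
    limited = "diffraction" if pitch < airy_um else "pixel pitch"
    print(f"{name}: {pitch:.2f} um pitch vs {airy_um:.1f} um spot "
          f"-> limited by {limited}")
```

At this aperture the half-area Bayer site is already inside the diffraction spot while the X3 site is not, which is the scaling argument in miniature; stop down further and both are diffraction limited.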
 
but I expect that progress will be
more rapid than the CCD guys since their process is more mature,
they have had 30 years to optimize and we have had 2 years.
Foveon's US patent 5,965,875 issued in October 1999, so that's over 4 years. That puts you off by a factor of two.

That doesn't really tell the whole story, however. The starting points are about equal: Bryce Bayer's patent on color filter array CCDs filed in 1975, vs. Layne's patent on multilayer sensors using silicon depth penetration filed in 1977. Both camps have had virtually equal time to optimize.
this raises a little discussed issue with the vertical color filter
technology,
which is manufacturing. the plastic color filters are a messy and
difficult
to control wafer process technology. rework rates are high and color
depends on incoming QA, layer thickness, etc etc.
Color density depends on layer thickness. Color crossover (which you mention a few lines down) doesn't.
by contrast, the
color filter crossovers of the vertical color filter depend only on a
fundamental property of the most perfect crystal made by humans
in mass production.
Foveon color crossovers depend on the diffusion depths and the depths of the grown silicon layers you add to make the triple-layer junctions. The tolerances of those thicknesses are also highly dependent on process control. No rework possible.
this is very important for "high volume
technology".
technologies that are fundamentally easier to make have a way of
succeeding in the long term.
Now that Foveon is running one of those "messy and difficult to control" plastic processes to add microlenses to the X3 sensor, doesn't that put them about even with the CFA folks in the "easier to make" department?
I agree that better dynamic range is important to consumers. CMOS
sensors have the ability to have both lower noise floor and higher top
end dynamic range than CCDs, because of local analog signal
processing.
so far there has been surprisingly little serious effort in this
domain but
that will change...
This is true, but it doesn't help Foveon's competitive position, since their biggest competitor is also a CMOS sensor.

And don't underestimate what you can do with a CCD. Everything from the extra "tiny bucket" cells in Fuji's new high-range sensor, to the possibility of having additional cells in a CFA filtered with ND filters.

--
Ciao!

Joe

http://www.swissarmyfork.com
 
At least there were some tests concerning noise in the Sigma SLR
Talk forum that IMHO showed that it is used in the SD9.
Do you still have the link? IIRC when this was discussed (at
least in the context of long exposures), no significant difference was
found (vs. just the effects of software resizing). I've also not
seen this as part of any published review.
I don't recall the thread, but I do remember doing lo-res vs hi-res comparisons myself, at ISO 400, and there was a definite noise difference.

There's at least one low-res high-iso shot in the SD10 sample gallery
http://www.pbase.com/sigmasd9

j
 
Shooting "3/2"
further back would completely invalidate the test.

Or are you suggesting that the two patterns are not comparable? Can
you please clarify what you mean by

"the example I shot has minimum pattern pitch aligned with minimum
pixel pitch, that is alternate pixels black / white / black etc (ie
nyquist)"?
My understanding from Rich's post, Sam, is that his samples were set up at such a distance that the b/w/b/w patterns are captured by one and only one pixel per alternation, X3 or Bayer.

Rich seems to imply that your sample was shot too close, and the sensor was using three pixels to capture each color alternation. In this case you have enough Bayer pixels to capture enough data to correctly guess the missing values, and hence it will not produce the wrong color.
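A toy numeric check of that reading (idealized: a perfectly gray striped subject, no AA filter, and a naive bilinear demosaic — not any real camera's pipeline):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def false_color(stripe_px, size=60):
    """Shoot vertical b/w stripes (a neutral subject) through an RGGB
    mosaic, fill in each color plane bilinearly, and return the mean
    R-B disagreement. A perfect capture of gray stripes would return 0."""
    scene = np.tile(((np.arange(size) // stripe_px) % 2).astype(float),
                    (size, 1))
    rows, cols = np.indices((size, size))
    planes = {}
    for name, mask in (("R", (rows % 2 == 0) & (cols % 2 == 0)),
                       ("B", (rows % 2 == 1) & (cols % 2 == 1))):
        # Normalized 3x3 box interpolation over this color's sample sites.
        num = uniform_filter(scene * mask, size=3, mode="wrap")
        den = uniform_filter(mask.astype(float), size=3, mode="wrap")
        planes[name] = num / den
    return np.mean(np.abs(planes["R"] - planes["B"]))

print(false_color(1))  # 1.0: full-scale false color everywhere at Nyquist
print(false_color(3))  # ~0.33: residual error confined to stripe edges
```

At one pixel per alternation the red and blue sites land on opposite stripes, so the false color is total and no algorithm can undo it; at three pixels per alternation the remaining error sits only at stripe edges, where a smarter interpolator (one that leans on the green plane) can do far better than this naive one.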

--
jc
 
This is true, but it doesn't help Foveon's competitive position,
since their biggest competitor is also a CMOS sensor.
Or the other side of the coin is that whatever advantage a CMOS sensor gains, Foveon is able to apply the same advantage in addition to its pixel-level advantages.

So as we see more non-Bayer-related improvements in CMOS chips, Foveon can use those to their advantage as well.

--
jc
 
Yes, if you have 41,760 pieces of data (116 x 120 x 3) you get
noticeably better results than if you only have 13,806 pieces of
data (117 x 118). Of course, the price you pay for that is having
to transfer, store, and process three times as much data.

I wonder what the results would look like if you pitted 41,760 data
points of Foveon against 41,760 data points of a CFA (Bayer) sensor.
Joe, you two-faced funny guy.

First you complain that each location can't be counted as three, and then you come and propose to do the comparison just that way.

Of course, both comparisons are valid and useful. Merrill's are interesting because they let you count discrete locations in the image and see over how many pixel locations the luma or chroma can change. And it lets you see how the frequencies relate to the pixel pitch and the Nyquist rate.

The other way you're suggesting is more about whole pictures. People have done lots of SD9 pix against 10D and 1Ds, to see how 10.2M "vertical" looks against 6M or 11M "lateral". I'm sure you're quite familiar with those, so why do you say you wonder?

j
 
My understanding from Rich's post, Sam, is that his samples were
set up at such a distance that the b/w/b/w patterns are captured by
one and only one pixel per alternation, X3 or Bayer.
We get that. But let's say we're talking about 1/1.8" sensors like the F19, aka 5M. Shrinking the pixel size to 4.4um would increase the number of pixels to about 2M x3.

2M spatial locations on a 1/1.8" sensor is where CFA CCD technology was over 3 years ago. It's almost impossible to find a current camera with such a sensor (hence the Casio). Most 1/1.8" CCDs these days have 3, 4, or 5 million sensor sites. So a typical CCD of the same physical size will be able to oversample the pattern.
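The arithmetic, taking the usual quoted active area of a 1/1.8" sensor as roughly 7.2 x 5.3 mm (an approximation; exact active areas vary by chip):

```python
# Rough photosite-count check for a 1/1.8" sensor.
SENSOR_W_MM, SENSOR_H_MM = 7.2, 5.3   # nominal 1/1.8" active area (assumed)

def megasites(pitch_um):
    """Millions of photosite locations at a given square pixel pitch."""
    w = SENSOR_W_MM * 1000 / pitch_um
    h = SENSOR_H_MM * 1000 / pitch_um
    return int(w) * int(h) / 1e6

print(megasites(4.4))   # ~2.0M locations at 4.4 um (x3 planes for an X3)
print(megasites(3.1))   # ~4.0M locations at ~3.1 um, a typical 4MP CCD pitch
```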

Even if we stick to the same size, neither of the examples he shows is typical of what we saw in cameras. For example, here is a 120x120 crop from the resolution chart of an Olympus C2040, with a slightly smaller 1/2" sensor (resized up 400% bicubic).



It shows some color moiré (but a lot less than the Roper sample) and does not smash the resolution like the lens-limited Casio. And this camera was announced almost exactly 3 years ago.

--
Erik
 
2M spatial locations on a 1/1.8" sensor is where CFA CCD technology
was over 3 years ago. It's almost impossible to find a current
camera with such a sensor (hence the Casio). Most 1/1.8" CCDs
these days have 3, 4, or 5 million sensor sites. So a typical CCD
of the same physical size will be able to oversample the pattern.
Granted, but have the per-pixel properties of these 3/4/5 MP sensors changed? In other words, has there been any improvement on the Bayer side? If yes, then your point is valid. If there hasn't, then that 3-year-old chip is just as valid as today's 5MP chip when you compare at the pixel level.

MP is a null factor in these tests, as you are testing pixel-level behavior.

--
jc
 
2M spatial locations on a 1/1.8" sensor is where CFA CCD technology
was over 3 years ago. It's almost impossible to find a current
camera with such a sensor (hence the Casio). Most 1/1.8" CCDs
these days have 3, 4, or 5 million sensor sites. So a typical CCD
of the same physical size will be able to oversample the pattern.
Granted, but have the per-pixel properties of these 3/4/5 MP sensors
changed? In other words, has there been any improvement on the Bayer
side? If yes, then your point is valid. If there hasn't, then that
3-year-old chip is just as valid as today's 5MP chip when you compare
at the pixel level.

MP is a null factor in these tests, as you are testing pixel-level
behavior.
Yes, because that is the way to make the competition appear weaker than it really is. Pixel count does matter in the real world.

And today you cannot find one new design using a 1/1.8" chip that has less than 4MP. So in reality this fabled X3 camera would get only half as many pixels on any scene or detail as the equivalent Bayer camera. To show them with an equal count is meaningless because the competition is not so limited.

Peter
 
In other words, has there been any improvement on the Bayer
side?
Yes, the interpolation algorithms seem to have improved noticeably. Look at the number of times Phil notes interpolation issues then vs. now. Even approaching the (higher) resolution limits, look at the degree of color moiré then vs. now. (Also see Phil's comment in this thread about "the worst bayer interpolation I've ever seen".)

Also it's rather silly to just ignore the increase in sensor density. Cost is more related to sensor area than anything else. For the same area, the CCD cameras have improved resolution significantly. When a 1/1.8" X3 camera actually ships, we can better estimate the price/performance tradeoffs between the technologies.
MP is a null factor in these tests, as you are testing pixel-level
behavior.
But my other point is that even choosing a rather average CFA CCD camera of 3 years ago, you can see much better results than either of the samples presented. So even if nothing has changed, it's still a presentation of X3 best case vs. CFA worst case, even for "pixel-level behavior".

--
Erik
 
