X3: Why Greenx2=Resolutionx2? Wowx3 Images?

Question 1. Why does resolution depend on green?
Green contributes roughly 60% of what our eyes perceive as luminance. Bayer patterns, with every other sensor being green, force a number of clever curve-fitting, missing-contour computations to be used in order to estimate the underlying luminance of the image more accurately.

With every sensor able to evaluate the full luminance and color, computation can be put to work on other, more fruitful aspects of the image structure. There is still a lot of computation going on, even with the Foveon, but the physics of the sensor go a long way toward improving things.
According to FOVEON: http://www.foveon.net/X3_comparison.html
"As you can see, the camera equipped with Foveon X3 technology
takes sharper pictures. Twice as sharp, to be precise. That's
because it captures twice as much green as the camera with the
mosaic capture system, and the green wavelengths of light are
critical in defining image detail."
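The weighting behind that claim can be illustrated with the standard Rec.601 luma formula (my choice of standard here; actual cameras may weight differently). A minimal Python sketch:

```python
# Rec.601 luma weights: green dominates perceived brightness.
REC601 = {"r": 0.299, "g": 0.587, "b": 0.114}

def luma(r, g, b):
    """Approximate perceived luminance of an RGB triple (Rec.601 weights)."""
    return REC601["r"] * r + REC601["g"] * g + REC601["b"] * b

# Green alone carries nearly 60% of the luminance signal.
green_share = REC601["g"] / sum(REC601.values())  # ~0.587
```

Which is why getting green right matters most for apparent detail.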

Question 2. Why do these 3 1.3 MB JPEG images have the BEST COLOR AND
RESOLUTION THAT I HAVE SEEN TO DATE? Technical reasons?
You would get the same thing from an HDTV 3-chip camera. When each color is captured at full resolution, every pixel sees not only color but technically correct luminance as well, something that comes only after heavy processing in other cameras.

If you had a 1.3 MP B&W camera with every pixel working for detail, you would be seeing a superior B&W image, too. Cameras like this DO exist, mostly for industrial uses, and they look as good in monochrome as these look in color.

-iNova
 
Regarding your comment that "no individual sensor data is
available from the Bayer array": that may not be correct. I would
assume that the original color measurements in a CCD with the Bayer
pattern come through unscathed, and that only the missing values are
derived by the fancy interpolation schemes. I also doubt that
processing is done in groups of 2x2 pixels. That would throw away a
lot of useful information that could be used to fill in the color
holes. I have seen some versions of the math formulas used, and
they were not "grouped."
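A hypothetical 1-D Python sketch of that point (the numbers are made up): simple neighbor-averaging interpolation leaves the measured values untouched and derives only the missing ones.

```python
# Green is measured at even positions along this row; odd positions
# are "holes" that must be estimated (a 1-D stand-in for the Bayer grid).
measured = {0: 0.20, 2: 0.40, 4: 0.60, 6: 0.80}

row = {}
for x in range(7):
    if x in measured:
        row[x] = measured[x]  # original sensor data passes through unscathed
    else:
        row[x] = (measured[x - 1] + measured[x + 1]) / 2  # average the neighbors
```

Note that each hole is filled from its own neighbors; nothing forces the computation into fixed 2x2 groups.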
That's a point I've been trying to make for quite a while and nobody ever agrees with it or disagrees with it - except this time.

I've shot lots of images of resolution targets, and the results are very similar to what shows up in reviews. At the point where the number of lines equals the nominal pixel count (I'm always talking about linear measure, in the narrower dimension) there is no image other than flat gray. Try it, and you'll see.

I've only tested it on a limited number of cameras, but it's quite consistent when the camera is used at its highest resolution. I was surprised - but probably shouldn't have been - to discover that at LOWER resolutions, this effect doesn't appear.

In other words, my Toshiba PDR-M70 can't resolve anything from 750 line pairs, in spite of its nominal 1536x2048 resolution. Nothing. But in "HALF" mode - nominally 768x1024 - it produces a "countable" image of 380 line pairs. (I don't know the official method, but I look at the point where if you try to count line pairs, the artifacts are so bad that you come up with the wrong number of lines - that's beyond the resolution ability of the system.)

The arithmetic makes sense to me only if the luminance data from individual cells has been lost at "FULL" resolution. I can (barely) count the lines when I shoot a target with lines equal to the nominal resolution/sqrt(2). Each sensor's data appears to be used four times - once with each 2x2 set that can be constructed with its neighbors.

So a camera which claims to be "1536x2048" actually has that number of sensors, calculates that number of values, and reports that number of output pixels. But because of the intermediate 2x2 averaging, adjacent values have an artificial correlation of 0.50 with one another. And that means the information content of the outputs is less than was contained in the inputs.

This point has irritated me for years because I don't do ANY color work - everything I do is critically dependent on absolute resolution. And the resolution delivered by Bayer-based cameras seems to be only 70% of what's claimed. Obviously, if there were no color filtering and Bayer interpolation, the same CCD could produce a monochrome image where lines would be countable up to the linear pixel count. But I've never seen such a camera or even such a claim.

Maybe the Foveon thing will be different?
 
When you do actual testing, there is another effect coming in. It is said that with 2x pixels per inch, you should be able to see x line pairs per inch. That assumes, however, that the pixels in the CCD are precisely aligned with the line pattern. If the pixels are off by 1/2 pixel dimension, you will see gray.

If I assume that you always want to preserve the original black-white contrast between lines, then 2x pixels per inch can only resolve x/2 line pairs per inch. A simple pencil-and-paper exercise will show this.

Now I concede that these results assume that the real CCD cell is full size, etc., but it does show that in the real world, CCD arrays don't perform quite as described. This also applies to the Foveon device.
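The alignment effect is easy to demonstrate. In this sketch a unit-amplitude sine pattern is sampled at exactly two samples per cycle; the recorded contrast depends entirely on where the samples fall relative to the pattern:

```python
import math

def sampled_contrast(phase):
    """Sample a unit-amplitude sine at exactly two samples per cycle.

    Each sample is sin(pi*n + phase) = +/-sin(phase), so the captured
    contrast is decided purely by the alignment, not by the pattern.
    """
    samples = [math.sin(math.pi * n + phase) for n in range(100)]
    return max(samples) - min(samples)

aligned = sampled_contrast(math.pi / 2)  # samples hit the peaks: full contrast
offset = sampled_contrast(0.0)           # samples hit the zero crossings: gray
```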
 
That's very right! And there may be space in between the photo-active areas of the photosites. A line at a very narrow angle to the pattern may become thicker and thinner and show some form of stepping. Anti-aliasing or blurring will still be necessary.
Jake.
 
When you do actual testing, there is another effect coming in. It
is said that with 2x pixels per inch, you should be able to see x
line pairs per inch. That assumes, however, that the pixels in
the ccd are precisely aligned with the line pattern. If the pixels
are off by 1/2 pixel dimension, you will see gray.
That's correct. At the limit, where there are exactly as many sensors as lines, the image will fall in and out of sync, and will vary from sharp (when they're in sync) to flat gray (when the image of each line falls precisely between sensors). Any slight shift in position or angle dramatically changes the image. That's characteristic of a limit condition.

As the number of sensors is reduced toward the limit, the image will be increasingly murky, but the number of apparent lines will match the number of lines in the target. As the number of sensors declines still further, beyond the limit, there may be an image, formed from the interaction between the number of sensors and the number of lines, but it will be ALL artifact, and the line count won't match the line count of the target.
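The "ALL artifact" case can be sketched numerically too: a line-pair frequency of 0.8 cycles per sample (beyond the 0.5 Nyquist limit) folds back to an apparent 0.2 cycles per sample, so the counted lines bear no relation to the target.

```python
import math

f_target = 0.8  # cycles per sample; the Nyquist limit is 0.5
phase = 0.3     # arbitrary offset so no sample lands exactly on zero
samples = [math.sin(2 * math.pi * f_target * n + phase) for n in range(64)]

# Estimate the apparent frequency from sign changes in the captured row.
crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
apparent = crossings / (2 * (len(samples) - 1))  # ~|1 - 0.8| = 0.2, not 0.8
```

Counting those folded lines gives the wrong answer, which matches the "count line pairs until the count goes wrong" test described above.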

But the limit that I've found ISN'T at the nominal pixel count, but at approximately 70% of that. My guess is that it's (asymptotically) 1/sqrt(2).
 
Whew!! I have read this entire thread and have come to this sad conclusion: too much for my limited brain. Luckily I have right here with me the most important Bayer product. I am now going to take three of them with a glass of water. I guess I will just wait until the camera comes out and see if I like the photos. :)

John
 
Mark,

In theory, if you are just a little below the resolution limit, you can upsample your image using a good digital filter and get the whole of your resolution without any artifacts.

In practice, a good filter can be complex and hard to implement, and it requires a lot of CPU.
So, for consumer-grade products your 70% number is roughly correct.
Scientific, space, and military-grade equipment squeezes it closer to the limits.
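As a toy illustration of what an ideal filter buys you (a sketch, not any particular product's pipeline): Whittaker-Shannon sinc interpolation reconstructs a sine at 90% of Nyquist even halfway between samples, where the raw pixel values alone say little. Practical filters are windowed approximations of this kernel, which is where the complexity and CPU cost come from.

```python
import math

def sinc(x):
    # Normalized sinc: the ideal (brick-wall) reconstruction kernel.
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

f = 0.45  # cycles per sample: 90% of the Nyquist limit of 0.5
ns = range(-1000, 1001)
samples = [math.sin(2 * math.pi * f * n) for n in ns]

def reconstruct(t):
    """Whittaker-Shannon interpolation from the stored samples."""
    return sum(s * sinc(t - n) for n, s in zip(ns, samples))

# Halfway between two samples, the (truncated) ideal filter still recovers
# the underlying sine to within the truncation error of the sum.
err = abs(reconstruct(0.5) - math.sin(2 * math.pi * f * 0.5))
```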

Vladimir.
 
