Technical inaccuracy in Foveon press release?

Thanks for the correction, Peter.

However, I'm still not sure about the horizontal difference. We both agree that the X3 has 2x vertical difference. Here's why:

Here's the mosaic again versus the Foveon; let's just look at the green pixels. On top are column numbers (apologies if a proportional font breaks the alignment).

Mosaic:      X3:

1234567890   1234567890
G_G_G_G_G_   GGGGGGGGGG
G_G_G_G_G_   GGGGGGGGGG
G_G_G_G_G_   GGGGGGGGGG
G_G_G_G_G_   GGGGGGGGGG
G_G_G_G_G_   GGGGGGGGGG
G_G_G_G_G_   GGGGGGGGGG
G_G_G_G_G_   GGGGGGGGGG
G_G_G_G_G_   GGGGGGGGGG
G_G_G_G_G_   GGGGGGGGGG
G_G_G_G_G_   GGGGGGGGGG

As you can see in the horizontal direction, there are twice the number of sampling green pixels in the X3: Mosaic: 5, X3: 10. Accordingly, the horizontal resolution is double.

Now, for the vertical direction, you must look at only ONE column at a time, not two. As you can see for column 1 of the Mosaic, there are only 5 pixels, Column 2 of the Mosaic, there are only 5 pixels; etc. So, for each column, there are twice the number of sampling green pixels in the X3: Mosaic: 5, X3: 10. Accordingly, the vertical resolution is also double.

2x horizontal x 2x vertical = 4 times resolution.

My only "on the other hand" is that perhaps, with the staggered pixel array in the mosaic, a little bit of detail can be recovered by interpolation. Is this so? Thoughts?

Steve
Your mosaic wasn't quite right. Fully half the mosaic is green, not
in a linear direction, but of the whole pixel count. Simply
doubling the number of pixels will yield one green for every
Foveon pixel. If you fill the green holes in a vertical direction,
there will be none left when you look at the horizontal.

The correct pattern has every non-green surrounded on all sides by
GREEN.

Mosaic:

GRGRGRGRGR
BGBGBGBGBG
GRGRGRGRGR
BGBGBGBGBG
GRGRGRGRGR
BGBGBGBGBG
GRGRGRGRGR
BGBGBGBGBG
GRGRGRGRGR
BGBGBGBGBG

100 pixels, 50 = green.

X3

XXXXXXXXXX
XXXXXXXXXX
XXXXXXXXXX
XXXXXXXXXX
XXXXXXXXXX
XXXXXXXXXX
XXXXXXXXXX
XXXXXXXXXX
XXXXXXXXXX
XXXXXXXXXX

100 pixels, 100 = green ( 100 red and blue too )

Mosaic x 2 only horizontally.

BGBGBGBGBGBGBGBGBGBG
GRGRGRGRGRGRGRGRGRGR
BGBGBGBGBGBGBGBGBGBG
GRGRGRGRGRGRGRGRGRGR
BGBGBGBGBGBGBGBGBGBG
GRGRGRGRGRGRGRGRGRGR
BGBGBGBGBGBGBGBGBGBG
GRGRGRGRGRGRGRGRGRGR
BGBGBGBGBGBGBGBGBGBG
GRGRGRGRGRGRGRGRGRGR

200 pixels, 100 = green.
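The three tallies above can be reproduced mechanically. A small Python sketch (illustrative only; it just encodes the GRGR/BGBG tiling drawn above, with green on the checkerboard, red on even rows, blue on odd rows):

```python
from collections import Counter

def bayer(r, c):
    """Color of site (r, c) in a GRGR/BGBG Bayer tiling."""
    if (r + c) % 2 == 0:
        return 'G'                      # green on the checkerboard
    return 'R' if r % 2 == 0 else 'B'   # red on even rows, blue on odd

def tally(rows, cols):
    return Counter(bayer(r, c) for r in range(rows) for c in range(cols))

print(tally(10, 10))   # 50 green, 25 red, 25 blue out of 100 sites
print(tally(10, 20))   # the "x2 only horizontally" case: 100 green of 200
```

The counts match the diagrams: half of every Bayer grid is green, however you stretch it, while every X3 site senses green.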

So if green is the criterion, detail equivalence happens around
double for the mosaic. That's not to say there still won't be more
colour errors in the mosaic, but since you have many more pixels
your colour errors will not be that large.

I don't know how it will all work out, but double the pixels for
equivalence seems to be a good ballpark rule of thumb for
comparison.

All else being equal, I would purchase a 3MP X3 before a 6MP
mosaic, but it will be quite a while till all else is equal, or
even close enough: things like higher-ISO shooting, reasonable-speed
continuous shooting modes, flexible in-camera storage (i.e. JPEG),
lens system equivalence, user interface, features...

It will also be quite a while before DSLRs are affordable IMO.

Peter
 
That last post really messed up the spacing, try this example:
Mosaic:
G_G_G_G_G_: 5 Green pixels across for each row
_G_G_G_G_G
G_G_G_G_G_
_G_G_G_G_G
G_G_G_G_G_
_G_G_G_G_G
G_G_G_G_G_
_G_G_G_G_G
G_G_G_G_G_
_G_G_G_G_G
(5 Green pixels down for EACH column)


X3:
GGGGGGGGGG: 10 Green pixels across for each row
GGGGGGGGGG
GGGGGGGGGG
GGGGGGGGGG
GGGGGGGGGG
GGGGGGGGGG
GGGGGGGGGG
GGGGGGGGGG
GGGGGGGGGG
GGGGGGGGGG
(10 Green pixels down for EACH column)
Steve
 
You are talking about the green image-forming pixels that outnumber the red and blue pixels by two to one. How does all this work for those colors?

If the Bayer-pattern CCD can get pretty good imaging from less than half the pixels needed to form a complete image, can the Foveon be used to construct an image that is not only double (x4) the size but maybe even larger?

Twice the pixels in both directions should double the resolution. Is it possible to interpolate to double the size again to come even in sharpness with a Bayer CCD?
Any thoughts?
Rinus
 
This discussion thread is all over the place, with all kinds of mumbo-jumbo speculation about how to compare Foveon vs mosaic sensors. I'm posting in this part of the thread because it has had the most recent activity.

I think that technically any 6mp camera will have the edge over even a 3mp Foveon in the absolute ability to distinguish that fine details exist. I think the Foveon will have a big advantage in the quality of any detail it shows. To try to assign a useful ratio is fruitless, because you're comparing carrots to pineapples.

All of the issues of Foveon vs Bayer Mosaic sensors relate to human perception of luminance and color, plus issues of artifact, noise, aliasing, interpolation, etc.

First, related to the discussion of RGB patterns, there are all kinds of good sample pictures of mosaic sensors at this web-site. See
http://www.dpreview.com/news/0202/02021101foveonx3.asp
or the DPReview Glossary
http://www.dpreview.com/learn/Glossary/Camera_System/Colour_Filter_Array_01.htm

If you compare a 3mp Foveon to a 3mp mosaic, the mosaic typically has 1/2 the green, and 1/4 the red and blue pixels. There tends to be some confusion in this whole thread about linear resolution vs mega-pixels. If you double the horizontal & vertical resolution, you will end up with 4 times as many pixels. That is: a 12mp camera has double the linear resolution of a 3mp camera.
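The linear-vs-areal point is just arithmetic; a short Python check (the dimensions are illustrative, roughly a 3 MP frame):

```python
# Doubling linear resolution quadruples the pixel count.
w, h = 2048, 1536            # ~3.1 MP (illustrative dimensions)
print(w * h)                 # 3145728  (~3 MP)
print((2 * w) * (2 * h))     # 12582912 (~12 MP): 2x linear = 4x pixels
```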

Technically, if you photograph a B&W high-rez image (with high frequency detail) in good, well-balanced light (probably daylight) using the B&W mode of the color mosaic camera, you should be able to approach the theoretical resolution of the camera. Why? Because white will be equally sensed by any red, green, or blue sensor. Similarly, black will be sensed equally by any sensor. In B&W mode, the camera's software "assumes" that rapid changes between pixels are due to luminance, and just outright eliminates any color. However, in color mode, the camera has absolutely no way to distinguish whether rapid changes from the red to the green pixel are due to color, or due to high frequency luminance detail. So the high resolution is present, but tends to fall apart in color mode. This gets even more murky when the high resolution detail is very colorful (flowers, birds, ...). So what? A Foveon sensor should theoretically give the same resolution in B&W or color mode, and will give you its true and full resolution under ALL circumstances.

I happen to own a 3-CCD video camera, in addition to a Canon G1. 3-CCD cameras use very different technology to achieve a similar result to the Foveon chip. The 3-CCD cameras use dichroic prisms to split the red, green, and blue light to three B&W CCDs (non-mosaic). Although my video camera gives only 640x480 resolution still images, each pixel has red, green & blue, and the quality of the images is excellent, other than the low-rez limitation. The 3-CCD cameras are very good at what they do, but cannot easily reproduce certain colors; for example, a deep violet iris shows up as a deep blue. The red cones in the human eye have a secondary sensitivity peak in the violet end of the spectrum. The 3-CCD cameras do such a good job of separating the red from the blue ends of the spectrum that the very deep violet cannot be properly represented. However, most purples actually reflect both red and blue ends of the spectrum, and look fine with a 3-CCD camera. I will be very interested to see how the Foveon sensor deals with certain extreme colors.

I personally believe that MUCH of the "noise", and certainly all of the color moire patterns in mosaic CCDs, are related to the conversion from adjacent RG&B pixels into full resolution images without proper anti-aliasing filters. Some of the high end cameras do actually have mechanical anti-aliasing (blurring) filters in front of the CCD. Classical engineering training teaches that you MUST ELIMINATE any frequencies above 1/2 the sampling rate (Nyquist criterion) BEFORE SAMPLING the signal. Otherwise, you WILL GET aliasing (moire patterns in the photo domain). Many moire patterns are impossible to correct for. The unfortunate side effect of a properly designed anti-aliasing filter is that it WILL blur your picture, but only in a way that is similar to the human eye or film. For example, using your eyes, if you walk away from a subject wearing a striped shirt, at a certain point the lines start to lose contrast, then start to blur together. As you continue to move away, the shirt will simply look like a solid color. Not so with most digital cameras. Even a Foveon sensor has the possibility of exhibiting aliasing, but not the severe chromatic aliasing very common with digital cameras.
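The Nyquist argument can be demonstrated in one dimension with a few lines of Python (a toy sketch, not camera code): a 9 Hz sine sampled at only 10 samples per second, i.e. below twice its frequency, produces exactly the same samples as a 1 Hz sine. Once sampled, no software can tell the two apart, which is why the filtering must happen before sampling.

```python
import math

# Toy aliasing demo: sample a 9 Hz sine at 10 samples/second.
# Nyquist says this rate can only capture content below 5 Hz, so the
# 9 Hz tone "folds" down and masquerades as a (phase-flipped) 1 Hz tone.
fs = 10
nine_hz = [math.sin(2 * math.pi * 9 * k / fs) for k in range(2 * fs)]
one_hz  = [-math.sin(2 * math.pi * 1 * k / fs) for k in range(2 * fs)]

# Identical to within floating-point rounding: classic aliasing.
print(max(abs(a - b) for a, b in zip(nine_hz, one_hz)))  # effectively 0
```

This is the 1-D analogue of moire: fine stripes beyond the sensor's pitch reappear as coarse, false stripes that no later processing can remove.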

People can go on and on about 2x or 3x or resolving power or 56% or 75% or sharpness or file size or compression. The fact of the matter is that this group has grown accustomed to the type of artifacts inherent in the mosaic CCDs. I predict that many readers here will be pleasantly surprised by the clarity and cleanness of Foveon type images, even though they are lower in resolution. I don't know how the industry will play out. It wouldn't surprise me if the industry giants (camera or CCD manufacturers) try to quash or circumvent Foveon, but I applaud Foveon's ingenuity and efforts.

One more little thing about RAW in a Foveon vs mosaic camera. A 6mp mosaic has to store 6 million x only 8-12 bits per pixel (one color), so size-wise it has a 2-3x size advantage over TIFF at 6mp x 24bits (three colors). Software to open RAW images must reconstruct the color interpolations similarly to what is done in the camera. Since every pixel in a Foveon image has three colors, a RAW format doesn't give that same automatic 2-3x size reduction advantage.
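The size arithmetic behind that 2-3x figure, sketched in Python (bit depths as assumed in the post; uncompressed, ignoring headers):

```python
# Back-of-envelope uncompressed frame sizes for a 6-megapixel sensor.
PIXELS = 6_000_000

def megabytes(bits_per_pixel):
    """Uncompressed size in MB for one frame."""
    return PIXELS * bits_per_pixel / 8 / 1e6

print(megabytes(12))   # mosaic RAW, one 12-bit sample/pixel: 9.0 MB
print(megabytes(24))   # 24-bit TIFF: 18.0 MB, i.e. 2x the mosaic RAW
print(megabytes(36))   # Foveon RAW, 3 x 12 bits/pixel: 27.0 MB
```

At 8 bits per sample the mosaic RAW drops to 6 MB, giving the 3x end of the post's 2-3x range; the Foveon RAW, carrying three samples per pixel, gets no such discount.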
 
Well, you have the picture right, but your math is still wrong. So I suppose you think that with 5 pixels in each row and 5 pixels in each column you have:

5 pixels x 5 pixels = 25 green pixels versus the 100 green in the X3. Thus the need for the 4 times multiplication. I suggest you count them. :-)

Multiply x2 Case:

G_G_G_G_G_G_G_G_G_G_: 10 Green pixels across each row
_G_G_G_G_G_G_G_G_G_G
G_G_G_G_G_G_G_G_G_G_
_G_G_G_G_G_G_G_G_G_G
G_G_G_G_G_G_G_G_G_G_
_G_G_G_G_G_G_G_G_G_G
G_G_G_G_G_G_G_G_G_G_
_G_G_G_G_G_G_G_G_G_G
G_G_G_G_G_G_G_G_G_G_
_G_G_G_G_G_G_G_G_G_G

(5 Green pixels down for EACH column)

Hmmmmm, so does 5/column and 10/row give you 5x10 = 50 green? Still not as many as the X3? Count them. :-)

What you are failing to consider is the number of rows. The proper calculation is:

pixels per row x number of rows (or pixels per column x number of columns).

This is the general formula for a regular occurrence within rows. If it is irregular you will have to count them; that is the most general method and it always works. :-)

When you multiply the number of pixels in a row by the number of pixels in a column, you only get the right answer in the special case where the number of pixels in a column happens to equal the number of rows.

You applied the special case formula back to a general case problem.

Applying the proper formula to the original-case mosaic:

5 pixels/row X 10 rows = 50 pixels.
or
5 pixels/col X 10 cols = 50 pixels.

Half the amount in the X3 array, not a quarter.

For the multiply-by-two case mosaic:

10 pixels/row X 10 rows = 100 pixels.
or
5 pixels/col X 20 cols = 100 pixels.

Still only 5 pixels/col but now there are also twice as many columns.
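The counting rule above is easy to verify mechanically. A small Python sketch (illustrative only) of the staggered pattern, where every non-green site is surrounded by green:

```python
# Greens-per-row x number-of-rows on the staggered checkerboard.
def staggered(rows, cols):
    """Grid with 'G' on the checkerboard, '_' elsewhere."""
    return [['G' if (r + c) % 2 == 0 else '_' for c in range(cols)]
            for r in range(rows)]

grid = staggered(10, 10)
per_row = grid[0].count('G')                 # 5 greens in each row
total = sum(row.count('G') for row in grid)
print(per_row, total)                        # 5 50: 5/row x 10 rows = 50

wide = staggered(10, 20)                     # the "multiply x2" case
print(sum(row.count('G') for row in wide))   # 100: still 5/col, 20 cols
```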

Cheers,

Peter
 
I don't want an argument, I just don't see much mumbo jumbo. There has been reasonable theory and, more importantly, experimental testing that indicates 2x the mosaic pixels is ROUGH parity with the X3.

Beyond x2, mosaics will actually have MORE green sensors than the X3; then you are in the territory of absolute detail vs. colour detail. That's very murky.

It seems very handy to me to have a ballpark figure when comparing cameras with different sensors. People compare digital and film all the time. I think this comparison is much more straightforward.

Peter
 
Mark W. Johnson wrote:
....
To try to assign a useful
ratio is fruitless, because you're comparing carrots to pineapples.
It might be nice to have a ball-park (as Peter puts it) idea of equivalence. Not that you'll come up with the most precise ratio for all cameras under all conditions, but some of us just want to know, in general, how good the new sensor is. So, personally, I think this thread has been helpful (if long) -- in general. ;-)

...
Technically, if you photograph a B&W high-rez image (with high
frequency detail) in good well balanced light (probably day-light)
using the B&W mode of the color mosaic camera, you should be able
to approach the theoretical resolution of the camera. Why? Because
the white will be equally sensed by any red, green, or blue sensor.
Similarly, black will be sensed equally by any sensor. In B&W
mode, the camera's software "assumes" that rapid changes between
pixels are due to luminance, and just outright eliminates any
color.
Is this what most digital cameras do? Avoid the Bayer interpolation (or however you want to refer to it)? The fact that it registers luminance for different colors doesn't matter?

Sounds to me like a good excuse to take some B&W photos... :-)

If you take a color photo and convert it to B&W, does it have less effective resolution, then, because it has passed through an interpolation step that has (effectively) "blurred" the raw numbers?

....
without proper anti-aliasing filters. Some of the high end cameras
do actually have mechanical anti-aliasing (blurring) filters in
front of the CCD. Classical engineering training teaches that you
MUST ELIMINATE any frequencies above 1/2 the sampling rate (Nyquist
criterion) BEFORE SAMPLING the signal. Otherwise, you WILL GET
aliasing (moire patterns the photo domain). Many moire patterns
are impossible to correct for. The unfortunate side effect of a
properly designed anti-aliasing filter is that it WILL blur your
picture, but only in a way that is similar to the human eye or
I think you are probably right about the filter, but if you did this, you'd have to optimize it for color, and it would mess up the advantage in using B&W mode for higher-detail work that you mentioned earlier. :-(

Personally, I haven't noticed much of a problem with moire patterns. Maybe I'm just lucky, and/or don't take many photos of screened porches. ;-) Seriously, maybe it's just hard to see such "artifacts" in a typical picture. So, is the filter still necessary if it's not noticeable?

--
Gary W.
Nikon 880
 
Thanks Peter!

Ok, ok, I get it! The 3MP X3 appears to be equal in green resolution to a 6MP Mosaic. Based upon your fine 2x horizontal and 2x vertical example, here's the 1.41x horizontal and 1.41x vertical example. (Of course, 1.41 x 1.41 = 2.)
Suppose an X3 has the following Green pattern:
Array of 5 pixels across, 5 pixels down = 25 total pixels, 25 green pixels.


GGGGG
GGGGG
GGGGG
GGGGG
GGGGG

Then, I figure that an equivalent Mosaic will have about the following Green pattern (5 x 1.41 is about 7):

Array of 7 pixels across, 7 pixels down = 49 total pixels, but only 25 green pixels.

G_G_G_G
_G_G_G_
G_G_G_G
_G_G_G_
G_G_G_G
_G_G_G_
G_G_G_G

Ta da! For the Green, 3MP X3 = 6MP Mosaic. For the Red and Blue, it still appears that 3MP X3 = 12MP Mosaic.
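The sqrt(2) scaling above checks out numerically; a quick Python sketch (illustrative only, using the staggered green pattern):

```python
import math

# To match an X3's green count, a mosaic needs twice the total pixels,
# i.e. sqrt(2) = 1.41x in each linear direction (5 -> 7 in the example).
def bayer_greens(n):
    """Green sites on the checkerboard of an n x n mosaic."""
    return sum(1 for r in range(n) for c in range(n) if (r + c) % 2 == 0)

x3_greens = 5 * 5                       # every site of the 5x5 X3 is green
side = round(5 * math.sqrt(2))          # 7 pixels on a side
print(side, bayer_greens(side))         # 7 25: the staggered 7x7 has 25
print(bayer_greens(side) == x3_greens)  # True
```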

I agree that the X3 appears to be a great development. Cheaper (?) and simpler (?), with superior color! I can't wait for a good $500 - $1000 X3 with at least an f2.0 lens for Christmas. Are you listening, FOVEON?

Steve :)
 
The problem with all of these things being done in camera is this:
they get done the same way every time. The sharpening
gets done the same. Interpolation gets done the same. The
compression gets done the same.
.....

I don't think my question really is affected by raw mode or no raw mode.

If there is no additional "information" in my 3.3 megapixel camera photos, due to the Bayer interpolation, why didn't they just save it as 2.5 megapixels, with less interpolation (but arguably a "sharper" image)? I would suggest that they provide it as an option, except that maybe they would rather not advertise that they don't have as much resolution as they print on the box (and in the manual...).

...
strictly influenced by image quality. From the number of posters
that have complained about this, from a marketing standpoint,
it is most likely a mistake. For those that edit in high bit spaces,
and then convert the images back to 8/24 bit for printing and
web display, having the RAW only option has no impact, as that
is all they would use anyway. For everyone else, the RAW only
option seems to be a limitation.
As a practical matter, as someone else posted (Peter?), you have to zoom in and give the picture a pretty thorough going-over to prefer TIFF to JPEG. I virtually never use TIFF mode, because even with a high capacity card, I'd only get a few shots. Meanwhile, the higher-quality JPEG settings look really good. It's very difficult to determine which is better, TIFF or JPEG, without zooming at least 4x and for most uses, it just doesn't matter. Generally, if I can get the shot non-blurred, that's more important. :-)

So, yeah, RAW-only is a serious problem for most of us. Having an optional RAW mode is definitely a plus. I would not get rid of TIFF either, even if I seldom use it. Even I could see where occasionally I might want to try to get the best pic possible.
So, if I'm getting an inferior, interpolated picture from my
camera, can I resize it to 75% (height & width), with no loss of
real information? Why don't they do that in-camera? So they can
quote more impressive pixel counts?
...
--
Gary W.
Nikon 880
 
Gary, there is a big difference between a Bayer RAW, where
the only benefit is that the interpolation is done on the computer
where the user has control of the parameters of the interpolation,
and the Foveon RAW, where no interpolation is involved. Yes, the
Bayer RAW is very little better other than the control offered to
the editor. The Foveon RAW is pure samples, and there would be
a vast difference between a RAW and a high quality jpeg. As of
now, the difference would be 24 bit to 36/48 bit on the color. The
lost resolution is not a factor that can be measured using
pure speculation, but it will be there also. With jpeg2000, possibly
the larger bit space may make such an option work for those that
want compression, and quality. As of now, I estimate that a shot
processed in camera, and saved as a jpg, would lose over 40%
of the available quality. You are comparing this tech to Bayer
transfers, and they are not the same. Using film as an example,
what jpeg offers is only saving 1/4th of the negative. That works
with Bayer, as that is not much less than the actual information
present. With the Foveon, it would be directly like cropping
half of a 35mm negative and wanting the same image quality
from half frame. The Foveon would still contain more information
than the Bayer, even with the compression, but the damage to
the possible image quality is much greater. I am not getting
ready to spend that amount of money, and then throw away
the advantage I paid for, just to save storage costs.
The problem with all of these things being done in camera is this:
they get done the same way every time. The sharpening
gets done the same. Interpolation gets done the same. The
compression gets done the same.
.....

I don't think my question really is affected by raw mode or no raw
mode.

If there is no additional "information" in my 3.3 megapixel camera
photos, due to the bayer interpolation, why didn't they just save
it as a 2.5 megapixel, with less interpolation (but arguably a
"sharper" image)? I would suggest that they provide it as an
option, except that maybe they would rather not advertise that they
don't have as much resolution as they print on the box (and in the
manual...).

...
strictly influenced by image quality. Judging from the number of
posters that have complained about this, from a marketing standpoint
it is most likely a mistake. For those that edit in high bit spaces,
and then convert the images back to 8/24 bit for printing and
web display, having the RAW only option has no impact, as that
is all they would use anyway. For everyone else, the RAW only
option seems to be a limitation.
 
The Foveon RAW is pure samples, and there would be
a vast difference between a RAW and a high quality jpeg. As of
now, the difference would be 24 bit to 36/48 bit on the color. The
lost resolution is not a factor that can be measured using
pure speculation, but it will be there also.
Not quite sure what your point is here? You think there will be some huge but unquantifiable difference in quality between jpg and RAW?

48 bit? AFAIK the Sigma SD9 captures 12 bits/colour, or 36 total.

Show me the pixels. Do you have anything to back up this massive destruction of quality by JPEG? Much like when the X3 discussion broke, I did testing. Then I wrote a program to mosaic and demosaic an image, and the degradation was plainly obvious. Colour edge transitions were destroyed.
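The kind of mosaic/demosaic test described above is easy to reproduce in miniature. Here is a pure-Python sketch (my own construction, not Peter's actual program): a tiny image with an abrupt red/green edge is sampled through a GRGR/BGBG mosaic and rebuilt by neighbourhood averaging, and the edge pixels come back with colours that were never in the original.

```python
# Minimal mosaic/demosaic sketch (illustration only).

def channel_at(x, y):
    # GRGR... / BGBG... Bayer layout: green sits on the diagonal.
    if (x + y) % 2 == 0:
        return 1                      # green
    return 0 if y % 2 == 0 else 2     # red on even rows, blue on odd rows

def mosaic(img, w, h):
    # Keep only the filtered channel at each photosite, zero the rest.
    out = [[0, 0, 0] for _ in range(w * h)]
    for y in range(h):
        for x in range(w):
            c = channel_at(x, y)
            out[y * w + x][c] = img[y * w + x][c]
    return out

def demosaic(mos, w, h):
    # Naive reconstruction: each channel is the average of the 3x3
    # neighbourhood sites that actually sampled that channel.
    out = []
    for y in range(h):
        for x in range(w):
            px = []
            for c in range(3):
                vals = [mos[j * w + i][c]
                        for j in range(max(0, y - 1), min(h, y + 2))
                        for i in range(max(0, x - 1), min(w, x + 2))
                        if channel_at(i, j) == c]
                px.append(round(sum(vals) / len(vals)) if vals else 0)
            out.append(px)
    return out

# An abrupt red/green vertical edge, 8x4 pixels.
img = [[255, 0, 0] if x < 4 else [0, 255, 0]
       for y in range(4) for x in range(8)]
rec = demosaic(mosaic(img, 8, 4), 8, 4)
# Flat regions reconstruct exactly; pixels at the colour edge come back
# with green bleeding into the red side.
```

Flat single-colour areas survive the round trip, which is why plain test charts can look fine; it is the abrupt colour transitions that get smeared.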

This time I made an artificial .bmp image with abrupt red-green-blue transitions and single-pixel random lines of one colour scrawled on another, then I gave it to PS and saved it as a quality-10 JPG (PS goes to 12).

There was NO DIFFERENCE: all the abrupt colour transitions were perfect, and the compression was still about 12 to 1. At 200% I couldn't find a pixel out of place.

Demonstrate any kind of photo, real or otherwise, that suffers this kind of massive degradation via JPG compression.
As of now, I estimate that a shot
processed in camera, and saved as a jpg, would lose over 40%
of the available quality.
So now it is quantifiable? What is the measure of quality? A 40% difference should be visible from across the room.

Bob, this seems to be moving farther away from practical reality with each message. I just flashed back to Carl Sagan's story about the invisible fire-breathing dragon in his garage. Of course, everything is on the net these days, for the curious: http://www.users.qwest.net/~jcosta3/article_dragon.htm
  • I have this awesome RAW image with 40% better quality than JPEG!
Great, can you show it to me?
  • Well, we have to convert it to 8 bit to actually see it, so you can't really see the difference.
Can we print it?
  • No, same problem.
OK, what can I do with it?
  • You can edit it and get the most out of your images.
So then I can see the 40% difference?
  • Well, you probably don't have the skill to do the proper editing to get the full benefit.
Could you do it for me?
  • Sure, here you go. See the difference?
It doesn't look any better than the JPG.
  • Well, your eyes are not properly trained to see the quality benefits.
Uhhh huh, yes, I think I will be going now.

Peter
 
Funny thread.

Seems everybody is fixated on what can be seen on screen. But photographs are not seen on screen. They are printed in small format, as snapshots, or in big format, to be hung on the wall, or in colour separation in a newspaper or book.

On screen you can see no difference between 8-bit or higher-bit images, because the screen is 8-bit. Also, everything looks right because it is RGB. But the print is always CMY, and sometimes CMYK, with the K being no colour but just a deepening of shadows. In print, different problems from what you see on screen come out. A "mad" pixel in the sky, caused by JPEG or a bad sensor, may never show. Chromatic aberration you clearly see at 300% on screen may never show. But the subtle gradients you saw on screen can become bad-looking steps, and that beautiful grey can posterize. And when you print at 13"x19" (33x48cm for most of the world) or more, you always wish you had a bigger negative to scan, or more bits to work on.

I run a printing shop in b&w, and 16-bit scanning and processing is becoming our norm. There is simply too much difference in the final print. And there is no way to see the difference on screen. So "show me the difference" does not work on the net.

Fabio
 
Thank you! After you have spent a few hundred hours
editing files, trying to squeeze that last little bit of quality
out of them for print, the differences become obvious.
Actually, the Foveon data will be a lot closer to scanner
files than to the Bayer capture files. That is one of the
basic misconceptions in this discussion. People keep comparing
the Foveon files to the digital camera files they are used to,
instead of the scanner files they resemble. Now tell me: do
you want your scanner to scan in 36 bit, process the files
to 24 bit, and then save them as JPGs? This is what
they keep asking for.
The 36/48 bit notation is simple. When you process a 12 bit
RAW file, you get 36 bit color, but you have to save the
file in a 16 bit, not 12 bit, format. This is self-explanatory to
anyone that has worked with a few RAW files.
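The 36/48-bit point can be made concrete. A sketch (my own illustration; actual RAW containers vary by vendor): a 12-bit sample has to travel in a 16-bit word, either zero-padded in the low bits or shifted into the high bits, so the file is "16 bit" per channel while carrying only 12 bits of real data.

```python
import struct

# 12-bit samples span 0..4095.
samples = [0, 2048, 4095]

# Two common ways to put a 12-bit sample in a 16-bit word:
low_justified = [s & 0x0FFF for s in samples]   # value in the low 12 bits
high_justified = [s << 4 for s in samples]      # value scaled into 0..65520

# Either way, on disk each sample occupies a full 16-bit word.
packed = struct.pack("<3H", *high_justified)    # three little-endian uint16s
```

Three channels of 12-bit data give 36 bits of real colour information, stored in 48 bits (3 x 16) of file space, which is the 36/48 distinction described above.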
 
But the subtle gradients you saw on screen can become
bad looking steps, and that beautiful grey can posterize.
Interesting. Do you have a theory why this happens? I have seen this effect numerous times when you take a subtle image and try to represent it with fewer colours, but never when doing the reverse.

If you have subtle gradients in 8-bit space, then moving to a greater bit depth should not posterise them, unless there is some inaccuracy in your conversion process or between your viewing devices.
And when
you print in 13"x19" (33x48cm for most of the world) or more, you
always long you had a bigger negative to scan, or more bits to work
on.
No disagreement here, but you will get more benefit from the bigger negative than you will from going from a 14-bit/colour to a 16-bit/colour scanner.
I run a printing shop in b&w, and 16-bit scanning and processing is
becoming our norm. There is simply too much difference in the final
print. And there is no way to see the difference on screen.So the
"show me the difference" does not work on the net.
When doing pure greyscale, 8-bit colour only gives 256 shades, which can be somewhat limiting. For processing you want all the precision you can get, as the round-off errors would probably quickly destroy a greyscale image processed in 8 bit. I believe some 3D graphics card makers are moving their calculation pipelines to high-precision floating point.
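The round-off point can be sketched numerically (a toy example of my own, not a real editing pipeline): darken a 256-shade greyscale ramp by 30% and brighten it back, quantising the intermediate result to the working precision. In an 8-bit working space many shades collapse together; in a 16-bit space the full ramp survives.

```python
def surviving_shades(levels):
    # Darken a 256-shade ramp by 30%, quantise the intermediate image to
    # `levels` steps (the working precision), then brighten back and
    # count how many distinct 8-bit output shades remain.
    out = set()
    for v in range(256):
        x = v / 255                                       # 0..1
        mid = round(x * 0.7 * (levels - 1)) / (levels - 1)
        out.add(round(min(1.0, mid / 0.7) * 255))
    return len(out)

shades_8bit = surviving_shades(256)       # 8-bit working space
shades_16bit = surviving_shades(65536)    # 16-bit working space
# The 8-bit pipeline merges many shades (posterisation);
# the 16-bit pipeline keeps all 256.
```

Chaining several such adjustments in 8 bit only makes the merging worse, which is the cumulative round-off damage described above.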

If you read the whole thread, you would see that I am not in any way against RAW modes, just not to the exclusion of highly useful compression formats. My recent sarcasm was in response to Bob's attempt to vilify the JPG format.

Ironically, several months ago I was in a similar argument in the Nikon forum with those who claimed RAW was useless for them and shouldn't be on the camera.

The Nikonites: RAW is useless and isn't needed on the camera.
Bob: Jpeg is useless and isn't needed on the camera.

I can only find a few reasons for views like these.

1. Egotistical: I don't need it, so no one else does.

2. Patronizing: you don't have the brains to decide when to use this feature, so you should be kept from it.

3. Sour grapes: my camera of choice doesn't have it, so I must cut down its usefulness.

I think all cameras should have both a raw lossless data capture and high quality Jpeg compression.

Peter
 
Thank you! After you have spent a few hundred hours
editing files, trying to squeeze that last little bit of quality
out of them for print, the differences become obvious.
Actually, the Foveon data will be a lot closer to scanner
files than to the Bayer capture files. That is one of the
basic misconceptions in this discussion. People keep comparing
the Foveon files to the digital camera files they are used to,
instead of the scanner files they resemble. Now tell me: do
you want your scanner to scan in 36 bit, process the files
to 24 bit, and then save them as JPGs? This is what
they keep asking for.
If my scanner could only store its data on tiny, very expensive data cards, I certainly would want that OPTION.
The 36/48 bit notation is simple. When you process a 12 bit
RAW file, you get 36 bit color, but you have to save the
file in a 16 bit, not 12 bit, format. This is self-explanatory to
anyone that has worked with a few RAW files.
Yes, I know that TIFF is either 8 bit or 16 bit, but it is equally obvious that saving your 12-bit data in a 16-bit file does not give you 16 bits of data.

I can also convert and save my jpg as 16bit TIFF as well.

Peter
 
I haven't read every message in this thread, but from what I have read, people seem to be making the comparison more complex and "mysterious" than it actually is.

If you are lucky enough to have a 6MP Bayer pattern sensor, what you really have is a 1.5MP red sensor, a 1.5MP blue sensor, and a 3MP green sensor. By comparison, a 3.4MP Foveon sensor has a 3.4MP red sensor, a 3.4MP blue sensor, and a 3.4MP green sensor.

If you are photographing anything that is predominantly red or blue, your 6MP Bayer sensor has less than half (1.5/3.4) the "spatial" resolving power of a Foveon 3.4MP sensor. In the green (middle) area of the spectrum, the advantage of the Foveon is not so great.
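The arithmetic behind this comparison is simple enough to write down. A sketch (my own, using the figures from the post above):

```python
def bayer_channels(total_pixels):
    # A Bayer mosaic devotes 25% of its photosites to red, 50% to green,
    # and 25% to blue.
    return {"red": total_pixels // 4,
            "green": total_pixels // 2,
            "blue": total_pixels // 4}

def foveon_channels(total_pixels):
    # A Foveon X3 sensor samples all three colours at every photosite.
    return {c: total_pixels for c in ("red", "green", "blue")}

bayer = bayer_channels(6_000_000)     # 6MP Bayer sensor
foveon = foveon_channels(3_400_000)   # 3.4MP Foveon sensor
# bayer: 1.5MP red, 3MP green, 1.5MP blue; foveon: 3.4MP of each.
```

In red and blue, the 6MP Bayer sensor really does have fewer samples (1.5MP) than the 3.4MP Foveon; only in green does it come close.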

Another process that further limits the resolution of a Bayer pattern sensor is the need to apply an anti-aliasing filter, to avoid false colours at edges due to the interpolation. The Foveon sensor has no obvious need for an anti-aliasing filter.

However you look at it, the Foveon technology is "potentially" superior to Bayer pattern sensors. If the Foveon sensor is cheaper and similarly compact, eventually all digital cameras will be using Foveon sensors.

Mark H.
 
Gary, there is a big difference between a Bayer RAW, where
....

Bob, I'm still not sure if you're replying to me specifically or just to the thread. It's not really the discussion I was curious about, but hey, I am flexible. ;-) I am not an advocate of removing a RAW mode, but I'm not exactly worked up over not having it. In this respect, I think my opinion is very similar to Peter's. Personally, I think my time is better spent trying to get a better source picture. I'm sure RAW would be more of an issue if I were a professional photographer; maybe in a few years, I'll be more concerned, but hopefully the point will be moot by then. :-)

I still am wondering about the impact of interpolation -- effective resolution (as opposed to what we really get), and B&W modes, but I guess I've gotten about all the response to my previous messages that I'm going to get! ;-)

...
pure speculation, but it will be there also. With jpeg2000, possibly
...

JPEG 2000 would be great to have as an in-camera format. Are they still arguing about the format, or patents, or what? Why don't they just release it?

As for scanners saving JPEGs, this is not so bad as long as you have the scanner software work in 48bits (or whatever) to do the biggest modifications to the raw data (adjusting contrast, brightness, gamma, etc.). Once you get it close to optimal, saving to JPEG isn't quite as bad, but I understand what you mean about losing detail at that point.

Peter has the right point, though -- we're not scanning, we're taking photos, and there is a limit to the cards in our pockets. I took over 100 photos (at the highest quality JPEG) today, and filled up 2 cards, and was working on the 3rd. Most of these were "snapshots", so it's not as if I really needed RAW or even TIFF. A couple of the photos really turned out wonderful, and I probably could have switched to TIFF for a couple of particular shots and still had enough room, but it would have been close. But switching a lot of stuff in the menu just gets in the way of trying to get the moment recorded before your subjects move. So, I guess I'm saying that just because TIFF is a higher quality, it just isn't necessarily as practical. Even blown up, the high quality JPEG pics look great; JPEG just isn't that bad, as long as you don't overdo it.

Or, here's another one: I had to resort to ISO 200 and sometimes even 400. What does THAT do to your picture?! I don't think JPEG could damage the picture nearly as much as the noise that creeps in then! But if I don't do that, I often have too much trouble stabilizing the picture (assuming that I need to hand-hold or brace the camera). 1/4 sec shutter speeds are bad enough, but 1", forget it! Better to get a reasonable shot than no shot at all...

So, I guess I have a lot more to worry about than RAW mode. :-) But again, I agree with Peter -- why be limited? Have both modes and let the user decide as the situation dictates?
--
Gary W.
Nikon 880
 
I still am wondering about the impact of interpolation -- effective
resolution (as opposed to what we really get), and B&W modes, but I
guess I've gotten about all the response to my previous messages
that I'm going to get! ;-)
I haven't done much testing of greyscale modes, but IMO they simply desaturate a colour image. So there will be no benefit to resolution from shooting in B&W mode.

Now, you could have a special greyscale mode that treated all the sensors as pure light-level readings (i.e., greyscale), and you would get resolution equivalent to an X3 camera in greyscale.

BUT with the very BIG CATCH that it only works if your original source was greyscale itself!

With a colour source, it breaks down. Shoot pure saturated BLUE with your camera reading some kind of freaky greyscale mode, and your blue sensors tell you it is absolutely white (255) while your green and red sensors say it is absolutely black (0). Pure colours give you a black and white checkerboard. It is clear you still need interpolation to shoot greyscale of colour images.
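The pure-blue thought experiment can be simulated directly. A sketch (my own illustration) of reading a GRGR/BGBG mosaic's raw values as plain light levels while it photographs a uniform saturated blue field:

```python
def site_colour(x, y):
    # GRGR... / BGBG... Bayer layout: green sits on the diagonal.
    if (x + y) % 2 == 0:
        return "G"
    return "R" if y % 2 == 0 else "B"

def raw_as_greyscale(w, h, rgb):
    # Read each photosite's raw value as if it were a plain light level.
    level = {"R": rgb[0], "G": rgb[1], "B": rgb[2]}
    return [[level[site_colour(x, y)] for x in range(w)] for y in range(h)]

rows = raw_as_greyscale(4, 4, (0, 0, 255))   # pure saturated blue scene
# Even rows read all black (the G and R sites see nothing); odd rows
# alternate white/black (B sites saturate, G sites see nothing) -- a harsh
# dot pattern instead of the uniform grey a true greyscale capture gives.
```

A uniform scene comes out as structured black-and-white texture, which is exactly why interpolation is still needed even for greyscale output of a colour subject.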

I remember cameras with TEXT mode that might have worked this way: since you knew the source was B&W, you could treat the sensor data as light levels and ignore the filtering. But I think they captured two-colour black-and-white GIFs, and they still may have simply been using a colour interpolated image as a source.

It would be neat to have a camera with tech like Canon's freaky switchable colour filter. Presumably you could switch it off. A camera that could eliminate the filters for B&W mode would capture full resolution and have very good sensitivity. Of course, X3 could probably have a killer monochrome mode as well.

Anyway, IMO you get no resolution benefit from shooting a mosaic camera in B&W mode, but I am sure there are other opinions.

Peter
 
I haven't done much testing on Grayscale modes, but IMO they simply
desaturate a colour image. So there will be no benefit to
resolution, shooting in B&W mode.
...
Anyway IMO you get no resolution benefit for shooting a mosaic
camera in B&W mode, but I am sure there are other opinions.
Peter, that makes a lot of sense, that colors can throw off the end result, but then why do so many people (including in this thread) say that you'd get the full uninterpolated resolution with B&W mode? I suppose the real proof would be to try to construct a resolution test. Is there any place I can download and print a resolution test pattern? (Hmm, actually, shouldn't I be able to programmatically create one?!)

My camera has a "copy" mode that probably increases the contrast to have the "black or white" effect. Hmm, I wonder what it looks like if I try photographing in that mode? Probably something weird... but maybe something interesting. Something else to play with. :-)
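Gary's hunch about generating a test pattern programmatically is right. A minimal sketch (my own; the bar widths and the PGM output format are just convenient choices) that builds groups of vertical line pairs of decreasing width:

```python
def resolution_target(width=256, height=64, bar_widths=(8, 4, 2, 1)):
    # Each group fills an equal share of the width with alternating
    # black/white bars of that group's bar width.
    group_w = width // len(bar_widths)
    rows = []
    for _ in range(height):
        row = []
        for bw in bar_widths:
            row.extend(0 if (x // bw) % 2 == 0 else 255
                       for x in range(group_w))
        rows.append(row)
    return rows

def write_pgm(path, rows):
    # Plain-text (P2) PGM: trivially printable, no libraries needed.
    h, w = len(rows), len(rows[0])
    with open(path, "w") as f:
        f.write(f"P2\n{w} {h}\n255\n")
        for row in rows:
            f.write(" ".join(map(str, row)) + "\n")

target = resolution_target()
# write_pgm("target.pgm", target)  # print it, photograph it, zoom in
```

Print the file, photograph it in colour and in B&W mode, and compare where each group of bars stops resolving.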
--
Gary W.
Nikon 880
 
Peter, that makes a lot of sense, that colors can throw off the end
result, but then why do so many people (including in this thread)
say that you'd get the full uninterpolated resolution with B&W
mode? I suppose the real proof would be to try to construct a
resolution test. Is there any place I can download and print a
resolution test pattern? (Hmm, actually, shouldn't I be able to
programmatically create one?!)
I don't really remember seeing a serious claim that B&W mode on mosaic cameras would capture more detail. I would suspect that in some cases, when you get colour fringing artifacts from the mosaic, the image may "appear" better because it lacks the obvious colour fringing. It's still there, just less visible with the obvious colour effects removed.

For example; Some cameras produce coloured moire in their res chart photos, now if you desaturate that shot, it looks better since you know it should be black and white. The absence of any colour makes it look better and more in line with expectations, but it is not sharper.
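The desaturation effect described above can be shown with a single pixel. A sketch (my own numbers; Rec. 601 luma weights are one common greyscale conversion): converting a colour-fringed pixel to grey removes the colour cast, but the value still differs from what the chart actually contains, so the error survives even though it no longer looks like a colour artifact.

```python
def luminance(r, g, b):
    # Rec. 601 luma weights, a common colour-to-greyscale conversion.
    return round(0.299 * r + 0.587 * g + 0.114 * b)

true_pixel = (128, 128, 128)     # what the test chart really contains
fringed_pixel = (180, 90, 150)   # hypothetical colour-moire capture

grey = luminance(*fringed_pixel)
# grey loses the colour cast but is still wrong relative to 128:
# desaturation hides the fringe without restoring the detail.
```

So the desaturated chart "looks" cleaner, but the measured error at that pixel is unchanged in the luminance channel.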



Download this clip and convert it to greyscale and see how much better it "looks". It is certainly no sharper from converting to greyscale.

I also think that if there were an algorithm that could produce better B&W resolution, it would be mixed with the chroma info from the current algorithm to produce the best of both worlds.

Peter
 
