Technical inaccuracy in Foveon press release?

The problem with all of these things being done in camera is this:
they get done the same way every time. The sharpening gets done the
same, the interpolation gets done the same, and the compression gets
done the same.
Take the case of JPEG compression. Years before anyone
with less money than NASA could afford a CCD of any kind, all
of the better editors had already included variable JPEG
compression settings, and the best had included a preview
window that let you see the effects of the compression
on the image. They also offered lossless compression methods
for the times when JPEG was unacceptable. Warnings about the
change to the image are still displayed in my primary editor. There
was no single compression setting that covered all of the images
you might work with. There still isn't, just as no single set of
sharpening settings is useful for all of the images you process.
This is one of the primary strengths of the RAW format. In use,
it allows the white balance to be set after capture, in case the
logical choice for that shot turned out wrong. It allows the
sharpening settings to be set post capture, for the best quality, just
like editing in a good editing program. It allows you to choose the
storage option that is best for each image. I regularly process
whole directories of RAW images to TIFF files using the in-camera
settings. I then look at the TIFFs. If the image is good, I erase
the TIFF and keep the RAW for a more controlled conversion using
my settings. If the image is junk, I delete both the RAW and the
TIFF. When I am done going through the directory, the RAW files
that are left get converted one at a time using the best settings
for each one. The newer D60 option of including a JPEG the size
of a G1 file makes the first conversion a moot point.
This is why I think the Sigma did not include a JPEG option
in the original specs. The designers understood the advantages
of having RAW. After all of the discussions about this one
point, I still think they are correct, but my point of view is
strictly influenced by image quality. Judging from the number of
posters who have complained about this, from a marketing standpoint
it is most likely a mistake. For those who edit in high-bit spaces,
and then convert the images back to 8/24 bit for printing and
web display, having the RAW-only option has no impact, as that
is all they would use anyway. For everyone else, the RAW-only
option seems to be a limitation.
Consider a 6MP regular camera. It measures 6 million total values
but generates 18 million: 6 million reds, 6 million greens, and 6
million blues. This comes from the Bayer interpolation process. When
this image is JPEG compressed, the JPEG works on the 18 million
values.
....
Now consider a 6MP ("2MP" equivalent) X3 camera. It also measures
6 million total values but only generates 6 million. The JPEG works
on these 6 million values and treats them all equally. The
compression ratio will be slightly worse than images coming out of
a "2MP" camera, but not enough worse to compensate for the factor of
3 in generated data.
...
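Under the accounting described above (one measured value per photosite, three generated channels per pixel after Bayer interpolation), the numbers can be sketched in a few lines. The function name and structure here are mine, purely for illustration:

```python
# Sketch of the measured-vs-generated accounting from the post above.
# "bayer": one color sample per photosite; demosaicing fills in the
#          other two channels, so 3 values per pixel are generated.
# "x3":    three stacked samples per photosite; nothing interpolated.
def data_counts(megapixels: float, sensor: str):
    """Return (measured, generated) sample counts, in millions."""
    if sensor == "bayer":
        return megapixels, megapixels * 3
    if sensor == "x3":
        return megapixels, megapixels
    raise ValueError(sensor)

print(data_counts(6, "bayer"))  # (6, 18): the JPEG works on 18M values
print(data_counts(6, "x3"))     # (6, 6): the JPEG works on only 6M values
```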

That is an interesting way to look at it. Although, based on the
".56" figure that others posted above, I would stick to the theory
of there being more of a 2 times difference, not 3 times. So, your
2MP X3 camera has the file size of a 2MP normal camera, but the
quality/detail of a 4MP, not 6. (See other messages for the
derivation of the .56 figure.)

So, if I'm getting an inferior, interpolated picture from my
camera, can I resize it to 75% (height & width), with no loss of
real information? Why don't they do that in-camera? So they can
quote more impressive pixel counts?

(You might be wondering why I would want to bother reducing pixels.
Even if they are "extra", as long as you have them, you might as
well keep them, right? Maybe, but then if you're going to
manufacture information, you could use a different technique, such
as Genuine Fractals. Just a thought....)
--
Gary W.
Nikon 880
 
That is an interesting way to look at it. Although, based on the
".56" figure that others posted above, I would stick to the theory
of there being more of a 2 times difference, not 3 times. So, your
2MP X3 camera has the file size of a 2MP normal camera, but the
quality/detail of a 4MP, not 6. (See other messages for the
derivation of the .56 figure.)
This is all super complicated and based around perceptual quality. I'm not ready to quote numbers, but I'll tentatively agree with others' numbers.
So, if I'm getting an inferior, interpolated picture from my
camera, can I resize it to 75% (height & width), with no loss of
real information?
Yep (almost no loss of information)
Why don't they do that in-camera? So they can
quote more impressive pixel counts?
Yep. Actually, the D1x does do that in the one dimension where it has very good resolution.
 
The answer lies in the fact that JPG compression isn't smart enough
to "undo" the Bayer interpolation.
Well, maybe. When you compare a JPEG to an uncompressed original, what are the differences? The total number of colors is reduced (smoothed) and edge detail is lost. JPEG compression attempts to keep the luminance value for each point but only samples the color information. Sound familiar? It's what a Bayer sensor does as well. (Not surprising, as both are based on exploiting the same limitations of human vision.)

My hypothesis is that for the same compression parameters, a JPEG from a mosaic original will either compress more (as a percentage of original size) or degrade less (again compared to the original) than the Foveon image. Will this be enough to compensate for the ~ 2x larger file size? I think it will depend on the compression parameters.
The JPG works
on these 6 million values and treats them all equally.
If this were strictly true, JPEG would not be a "lossy" compression. If the reason that Foveon images look better for their pixel dimensions is better color edge discrimination and/or finer color gradations, how much of this will be lost to the JPEG compression?
Hope this helps--this is a complicated subject.
It's a very complicated subject, which is why I think your description was too simple to be helpful.
In short: the JPG
process, from an entropy standpoint, is not totally efficient in
its compression when applied after an interpolation process.
How in the heck do you measure this?

Here is what I would propose: assume that you are starting from two photographs of exactly the same scene and that they are of "equivalent" quality -- perhaps Phil's comparison still-life shots of bottles and things. Now, pick a target file size or set of sizes (e.g. 2.5MB, 1.0MB, 250KB). Adjust the JPEG parameters as needed to compress the originals to this size. Now compare the images again.

If your theory is true, then the Foveon images will now be noticeably superior. If my theory is true, they will again be comparable. Note that to compare, we would have to normalize the viewing size, e.g. either use 8x10 prints or full-screen image views. Also, if one of the originals started out noticeably superior in some aspect, would that superiority be maintained or lost? In addition, would the answers come out differently for JPEG2000 vs. JPEG?
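The proposed experiment can be sketched in code. This is only a rough illustration: it assumes Pillow (my choice, not anything from the thread) and uses a synthetic gradient as a stand-in for the real still-life originals. Adjust the quality parameter until the file hits the target size, then compare:

```python
# Sketch of the experiment: find the JPEG quality setting that brings
# an image closest to (but not over) a target file size.
import io
from PIL import Image

def jpeg_bytes(img: Image.Image, quality: int) -> int:
    """Size in bytes of img saved as JPEG at the given quality."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return buf.tell()

def quality_for_size(img: Image.Image, target_bytes: int) -> int:
    """Binary-search the quality setting (1-95) for the target size,
    assuming size grows (roughly) monotonically with quality."""
    lo, hi, best = 1, 95, 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if jpeg_bytes(img, mid) <= target_bytes:
            best, lo = mid, mid + 1
        else:
            hi = mid - 1
    return best

# Stand-in "original": a 256x256 gradient, not a real camera file.
img = Image.new("RGB", (256, 256))
img.putdata([(x, (x + y) % 256, y) for y in range(256) for x in range(256)])
q = quality_for_size(img, 10_000)
```

With the two real originals compressed to the same byte count this way, the remaining differences are the sensor's, not the compressor's.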

--
Erik
Free Windows JPEG comment editor
http://home.cfl.rr.com/maderik/edjpgcom
 
Let me phrase the same concept another way to see if I can express my point better.

Let's assume that an uncompressed image from the SD-9 and the D60 (or D100) are "informationally equivalent." That is, they are comparable in quality because they contain roughly the same amount of "true" information (detail, color information, etc.)

Because of the Bayer interpolation, a TIFF from the D60/D100 has stored the "information" rather inefficiently. Some data is redundantly "smeared" among several pixels when it actually belongs to just one. By the same analogy, a TIFF from the SD-9 data is more efficiently encoded. In fact, we could simulate the inefficiency of the Bayer data simply by resizing the SD-9 image to the same pixel dimensions as the D60/D100 image. (We are all agreed that a resizing interpolation does not add "information" to an image.) For a reasonable definition of quality, the resized SD-9 image would still be the same as the D60/D100 image, right? (In fact, isn't this the heart of Foveon's claims about quality?)

So, if we have two files of the same size that contain the same amount of raw "information", wouldn't you expect them to compress almost identically as well?

If the Foveon file contains more useful information, it will compress LESS and vice versa. To simultaneously claim that the images have the same or more information but will compress better flies in the face of information theory.

Now of course, JPEG practice ≠ information theory. JPEG is not a perfect compressor. However, for the Foveon images to compress MORE than Bayer images, you have to hypothesize some aspect of mosaic sensor images that JPEG does not handle very well, such that their compression is less than for Foveon images. Current experience shows that mosaic camera images do compress very well (e.g. it's hard to distinguish between 2.5x compression and an original by eye, and even 8x is pretty darn good).

--
Erik
Free Windows JPEG comment editor
http://home.cfl.rr.com/maderik/edjpgcom
 
Exactly Thomas,

Many fail to realize that the X3 technology uses well-known quantum effects to gather the light energies from various wavelengths impinging on the sensor's surface. If anything at all can be likened to a filter, it would be the substrate materials chosen to increase the probability of absorbing certain photon energies. There is a probability of absorption for light of various wavelengths at particular depths, based on the energy of that light. See, wavelength is inversely proportional to energy; frequency is directly proportional to photon energy.

E = hν (where h is Planck's constant and ν is the frequency, which is c/λ)

This said, higher-energy wavelengths (blues) tend to be absorbed nearer the silicon surface, while the lowest-energy photons (reds) penetrate deepest, with intermediate energies (greens to yellows) absorbing in between. This can be enhanced, or selected per photosite region, by adding "dopants" (elements like arsenic, gallium, etc.) so that particular energies are more likely to be absorbed at that level than others. This is how Foveon could have partially addressed the so-called quantum efficiency problem. Very careful doping and separation of the response of each region could effectively ensure that a high enough percentage of red, green, and blue photons are captured from the impinging light that humans will perceive accurate color. This is in essence what Foveon apparently has been successful at doing.
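The relation E = hν = hc/λ can be checked numerically. The wavelengths below (450/550/650 nm for blue/green/red) are just representative values I picked, not Foveon's figures:

```python
# Photon energy E = h*nu = h*c/lambda for representative wavelengths.
h = 6.626e-34  # Planck's constant, J*s
c = 2.998e8    # speed of light, m/s

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy in electron-volts for a wavelength in nm."""
    joules = h * c / (wavelength_nm * 1e-9)
    return joules / 1.602e-19  # convert J -> eV

blue, green, red = (photon_energy_ev(w) for w in (450, 550, 650))
# Blue photons carry the most energy (~2.76 eV) and are absorbed
# nearest the silicon surface; red photons (~1.91 eV) penetrate deepest.
```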

Rinus,

I wrote a post about the possible advantages and pitfalls of the technology in February after it was announced. I predicted the low-ISO potential but attributed it to other effects. I am still mixed over whether or not those are the correct reasons; however, many people contributed alternate ideas to the thread that I think would be worth reading.

http://www.dpreview.com/forums/read.asp?forum=1007&message=2225323

Regards,
Low ISO is natural for a multi-layered CCD. Just like in film, the
CCD must employ filters between the layers to get only the correct
color to the three appropriate layers. If the three layers were to
be sensitive to different frequencies of the spectrum and so
circumvent filters, the light would still have to pass through a
layer or two to get to its own color-sensitive pixel. The
sensitivity would have to be adequate for the lowest layer so as
not to produce (too much) noise. The other two layers would have to
be less sensitive. The top layer would receive the most light, and I
am sure it would have to be attenuated so as not to be too strong
in comparison to the others. I am assuming here that the Foveon
works in this fashion. I have seen the simple cross-section of the
chip design, but no further explanation has been given as to how
they want to accomplish the combined capture of light.
Rinus
--

 
The answer lies in the fact that JPG compression isn't smart enough
to "undo" the Bayer interpolation.
Well, maybe. When you compare a JPEG to an uncompressed original,
what are the differences? Total number of colors are reduced
(smoothed) and edge detail is lost. JPEG compression attempts to
keep the luminance values for each point but only sample the color
information. Sound familiar? It's what a Bayer sensor does as
well. (Not surprising as both are based on exploiting the same
limitations of human vision.)
The number of colors isn't reduced, but I'll agree that some detail is lost. Actually, the edges are mostly preserved at the expense of some "ringing" that should be familiar to anyone who has played with the lower end of the JPG compression range. You are right that JPG uses approximately the same ratio (2:1:1) of compressed data that a Bayer CCD does. However, and I'm sure you know this, it's worth noting that they are not identical, as JPG uses Y:U:V where CCDs use G:R:B (usually).
My hypothesis is that for the same compression parameters, a JPEG
from a mosaic original will either compress more (as a percentage
of original size) or degrade less (again compared to the original)
than the Foveon image.
Absolutely.
Will this be enough to compensate for the ~
2x larger file size? I think it will depend on the compression
parameters.
The extra information from the interpolation step is far from totally ignored by the JPG compression, making that (traditional) approach inefficient. Overall perceived visual quality is a hard thing to quantify, and it is difficult to compare results from numbers like 2x directly.
The JPG works
on these 6 million values and treats them all equally.
If this were strictly true, JPEG would not be a "lossy"
compression. If the reason that Foveon images look better for their
pixel dimensions is better color edge discrimination and/or finer
color gradations, how much of this will be lost to the JPEG
compression?
Poorly stated on my part. What I meant to imply was that all 6 million values were equally valid measurements by the sensor.
Hope this helps--this is a complicated subject.
It's a very complicated subject, which is why I think your
description was too simple to be helpful.
:)
In short: the JPG
process, from an entropy standpoint, is not totally efficient in
its compression when applied after an interpolation process.
How in the heck do you measure this?
You don't need to. You just need to know that it's not totally efficient, because the interpolated values aren't thrown out by the compressor.
... If your theory is true, then the Foveon images will now be
noticeably superior. If my theory is true, they will again be
comparable.
I agree, and believe that in fact the Foveon images will be superior unless the compression was set to such a low bitrate that you would, in effect, be down-sampling the images.
Note that to compare, we would have to normalize the
viewing size, e.g. either use 8x10 prints or full-screen image
views. Also, if one of the originals started out noticeably superior
in some aspect, would that superiority be maintained or lost? In
addition, would the answers come out differently for JPEG2000 vs.
JPEG?
JPEG2000 would yield slightly more comparable results because of its superior wavelet compression.

Thanks for the thoughtful reply,
Dave
--
Erik
Free Windows JPEG comment editor
http://home.cfl.rr.com/maderik/edjpgcom
 
Let's assume that an uncompressed image from the SD-9 and the D60
(or D100) are "informationally equivalent." That is, they are
comparable in quality because they contain roughly the same amount
of "true" information (detail, color information, etc.)

Because of the Bayer interpolation, a TIFF from the D60/D100 has
stored the "information" rather inefficiently. Some data is
redundantly "smeared" among several pixels when it actually belongs
to just one. By the same analogy, a TIFF from the SD-9 data is
more efficiently encoded. In fact, we could simulate the
inefficiency of the Bayer data simply by resizing the SD-9 image to
the same pixel dimensions as the D60/D100 image. (We are all agreed
that a resizing interpolation does not add "information" to an
image.) For a reasonable definition of quality, the resized SD-9
image would still be the same as the D60/D100 image, right? (In
fact, isn't this the heart of Foveon's claims about quality?)
Absolutely.
So, if we have two files of the same size that contain the same
amount of raw "information", wouldn't you expect them to compress
almost identically as well?
Yes, almost. :)
Now of course, JPEG practice ≠ information theory. JPEG is not a
perfect compressor. However, for the Foveon images to compress
MORE than Bayer images, you have to hypothesize some aspect of
mosaic sensor images that JPEG does not handle very well, such that
their compression is less than for Foveon images. Current experience
shows that mosaic camera images do compress very well (e.g. it's
hard to distinguish between 2.5x compression and an original by eye,
and even 8x is pretty darn good).
Well put. However, I don't agree that mosaic camera images compress very well compared to the factor of 3 that they give up off the bat. I am familiar with the Bayer interpolation techniques and argue that they create artificial detail, which will in turn be interpreted by the JPG compressor as measured detail. I think that this is a real and measurable effect. I also think that the effect is pretty far removed from the hobby of photography. It's fun to talk about, though.

Thanks for the reply,
Dave
 
Well put. However, I don't agree that mosaic camera images compress
very well compared to the factor of 3 that they give up off the bat.
The consensus opinion is only 2x, not 3x (which would imply either that the Foveon is less than 100% efficient and/or that the Bayer pattern actually does derive some useful information about luma from the other pixels). In JPEG terms, 2x is NOT a large factor (e.g. the quality difference between 2:1 and 4:1 is not usually that significant, modulo a few large steps when you change the subsampling).
I
am familiar with the Bayer interpolation techniques and argue that
they create artificial detail which will be in turn interpreted by
the JPG compressor as measured detail. I think that this is a real
and measurable effect.
I'm hypothesizing that JPEG and Bayer are complementary, e.g. that the "interpolated detail" is thrown back out. This matches my experience with scanned images vs. digital camera images.
I also think that the effect is pretty far
removed from the hobby of photography. It's fun to talk about
though.
Gives us something to do on a rainy day when we can't go out and take photos.

--
Erik
Free Windows JPEG comment editor
http://home.cfl.rr.com/maderik/edjpgcom
 
The number of colors aren't reduced but I'll agree that some detail
is lost.
Try counting sometime. Here is an example from a scanned image:
4069x2543x24 TIFF = 1335424 unique colors
4069x2543x24 JPEG = 506721 unique colors (Photoshop lvl 8, 20:1 compression)

Much of the "detail" that is lost is fine color variations at the edges of color transitions. If you do a "diff" of the two versions (same example), you will see the differences concentrated at those transitions.

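For anyone who wants to reproduce this kind of count, here is one way to count unique colors. The integer-packing scheme and the synthetic gradient are my own illustration; the same function works on pixel data loaded from a real TIFF/JPEG pair:

```python
# Count unique colors by packing each RGB triple into one integer
# and counting the distinct values.
import numpy as np

def unique_colors(rgb: np.ndarray) -> int:
    """rgb: (H, W, 3) uint8 array; returns the number of distinct colors."""
    packed = (rgb[..., 0].astype(np.uint32) << 16
              | rgb[..., 1].astype(np.uint32) << 8
              | rgb[..., 2].astype(np.uint32))
    return int(np.unique(packed).size)

# A smooth gradient has many colors; posterizing it (roughly what
# heavy JPEG quantization does to fine color variations) collapses them.
grad = np.zeros((64, 64, 3), dtype=np.uint8)
grad[..., 0] = np.arange(64, dtype=np.uint8)[None, :] * 4
grad[..., 1] = np.arange(64, dtype=np.uint8)[:, None] * 4
full = unique_colors(grad)            # 4096 distinct colors
coarse = unique_colors(grad & 0xF0)   # top 4 bits only: 256 colors
```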
Overall perceived visual quality is a hard thing to quantify, and
it is difficult to compare results from numbers like 2x directly.
I agree 100%, which is why the Foveon claim could be both true and effectively meaningless at the same time. Even in my proposed experiment how do you weight resolution vs. color fidelity vs. noise to get "quality"?

--
Erik
Free Windows JPEG comment editor
http://home.cfl.rr.com/maderik/edjpgcom
 
I think the 75% figure given for the ratio of linear resolution in a Bayer pattern to the physical pixel resolution may be somewhat optimistic - or at least a rarely achieved best case scenario. With anything short of a high end DSLR with a really expensive hunk of glass hanging off it, this number is more like 60% or less.

I've done a number of resolution tests using images of stars to measure the point spread function (PSF) in my Nikon 995. In all cases the full width at half maximum of the image was 1.8 - 2 pixels (1.8 at mid-range zoom, 2 at full wide and telephoto, FWIW). Including the fact that the PSF is round, not square, this gives a linear resolution scale factor of 56% - 62% rather than the 75% quoted earlier. This means that the actual resolving power of the 3.1 MP array in this camera is only 31-38% of that figure -- or 1 - 1.3 MP of independent color information.

Admittedly the 995 is a cut or two (or three) below the quality of a good SLR, but if the bulk of this point spread is due to the interpolation process, then we can expect a properly implemented Foveon array to truly be about as good as a standard mosaic sensor with three times the MP (i.e. 1/(.31-.38)).
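The arithmetic above, spelled out. The 0.56-0.62 linear factors are the measured values from the post; the function is just my restatement of the area-scaling step:

```python
# Turn a measured linear resolution factor into an effective pixel count.
def effective_mp(sensor_mp: float, linear_factor: float) -> float:
    """Resolvable area scales as the square of the linear factor."""
    return sensor_mp * linear_factor ** 2

lo = effective_mp(3.1, 0.56)  # ~0.97 MP of independent information
hi = effective_mp(3.1, 0.62)  # ~1.19 MP
# The implied X3 advantage is the reciprocal of the area factor:
advantage = (1 / 0.56**2, 1 / 0.62**2)  # roughly 3.2x down to 2.6x
```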

X3 for real.

Fred Vachss
 
I read your post and decided to answer in here.

Your suggestion regarding the Foveon cross-over system is really no different than color separation in modern color films. Overlaps are very important for secondary colors, and the peaks of the response curves can shift up and down the frequency range. In various films, this leads to typical response curves that make Fuji film look like Fuji and Kodak like Kodak. The separating of the layers with filters, as I suggested, can just as easily be done with electronic tuning. I believe that this method may not be the end of innovation. I am sure another method can be born without infringing the patents. Thanks for your enlightenment.
Rinus
 
I think the 75% figure given for the ratio of linear resolution in
a Bayer pattern to the physical pixel resolution may be somewhat
optimistic - or at least a rarely achieved best case scenario.
With anything short of a high end DSLR with a really expensive hunk
of glass hanging off it
Well, when we're talking about D60/D100 vs. SD-9, that's a given. (More given with Canon or Nikon glass than with Sigma, but we'll ignore that as a minor difference.)
if the bulk of this point spread is due to the
interpolation process then we can expect a properly implemented
Foveon array to truly be about as good as a standard mosaic sensor
with three times the MP (i.e. 1/(.31-.38)).
That's a mighty big if to swallow without evidence to back it up. The 995 is not known for its high-quality lens. Apples to apples, now. You can't compare the best theoretical X3 figure for an APS-sized sensor with high-quality glass against the numbers for a 1/2" sensor with a mediocre lens. Let's see how the F10 performs with P&S lenses compared to contemporary cameras before leaping to such a conclusion. (Heck, the F7 performance vs. theory is still speculation.)

--
Erik
Free Windows JPEG comment editor
http://home.cfl.rr.com/maderik/edjpgcom
 
I think the 75% figure given for the ratio of linear resolution in
a Bayer pattern to the physical pixel resolution may be somewhat
optimistic - or at least a rarely achieved best case scenario.
With anything short of a high end DSLR with a really expensive hunk
of glass hanging off it, this number is more like 60% or less.
And you think X3 sensors are going to capture 100% with less than perfect glass? A lot of different theoretical work, and practical tests, had a lot of people independently coming up with a number around 75%.
I've done any number of resolution tests using images of stars to
measure the point spread function (PSF) in my Nikon 995. In all
cases the full width at half max image was 1.8 - 2 pixels wide.
(1.8 at mid range zoom, 2 at full wide and telephoto FWIW).
Including the fact that the PSF is round not square this gives a
linear resolution scale factor of 56% - 62% rather than the 75%
quoted earlier.
This is one of the softer lenses among 3MP P&S cameras, let alone good DSLR glass. An X3 sensor would be lucky to get 75% out of this one; likely it would still only get 60% as well.

When trying to determine the resolution limitation of the sensor, you want the BEST glass you can get, so that you come closer to eliminating the lens from your numbers.

Most of us are probably using the Foveon sample from this site, shot using the Canon 50mm F1.4 lens, for the X3 comparison that led (for me) to a practical measurement of 100%. There probably isn't sharper glass on the planet. To this you compare a Nikon 995? Really now?

Likewise, I compared several good DSLRs and very sharp consumer cameras (Sony 707) to arrive at a figure of 75% for mosaic cameras.

This is a ballpark number, but at least it is based on comparable glass. Mosaics will no doubt still come up short of an X3 camera even with this factor applied. This was just absolute resolution, without colour detail.

My simple rule of thumb for comparing X3 cameras with mosaics is that you need twice as many mosaic pixels to APPROXIMATE X3 pixels.

YMMV

Peter
 
Erik (or anyone),

I was wondering if you know whether the manufacturers have control over the frequency response of the AA filter as a function of wavelength. I have no idea--but I'd guess no. If not, this is a big win for X3: a regular CCD AA filter has to sacrifice bandwidth in the green channel to avoid aliasing artifacts in the red and blue. With X3, the filter could be designed to roll off at the correct frequency for all colors together, since the sampling rate is identical--not a factor of sqrt(2) different.
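That sqrt(2) point, in numbers. This is a simplified sketch of my own: it uses the usual idealization that the red/blue sample pitch on a Bayer grid is 2 pixels and the green (quincunx) pitch is sqrt(2) pixels:

```python
# Nyquist limits for the different sample pitches on a Bayer grid
# versus an X3 grid, in cycles per pixel.
import math

def nyquist(pitch_px: float) -> float:
    """Nyquist frequency for a given sample pitch."""
    return 1 / (2 * pitch_px)

bayer_rb = nyquist(2.0)           # 0.25 cycles/px for red and blue
bayer_g = nyquist(math.sqrt(2))   # ~0.354 cycles/px for green
x3_all = nyquist(1.0)             # 0.5 cycles/px, same for all channels
ratio = bayer_g / bayer_rb        # sqrt(2): the gap one AA filter must straddle
```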

Cheers,
Dave
 
Try counting sometime. Here is an example from a scanned image:
4069x2543x24 TIFF = 1335424 unique colors
4069x2543x24 JPEG = 506721 unique colors (Photoship lvl 8, 20:1
compression)
Wow. I can't imagine why that would be the case in a good JPG compressor. Aggressive quantization of the DC frequency component? Bad rounding in the IDCT? Ideas anyone?
 
Try counting sometime. Here is an example from a scanned image:
4069x2543x24 TIFF = 1335424 unique colors
4069x2543x24 JPEG = 506721 unique colors (Photoshop lvl 8, 20:1
compression)
Wow. I can't imagine why that would be the case in a good JPG
compressor. Aggressive quantization of the DC frequency
component? Bad rounding in the IDCT? Ideas anyone?
The IJG compressor (as implemented in ThumbsPlus) gives just about the same numbers. Try the experiment yourself with whatever you think is a "good" JPEG compressor.

BTW, a LEAD Tools JPEG2000 compressor does better: 1240464 colors.

--
Erik
Free Windows JPEG comment editor
http://home.cfl.rr.com/maderik/edjpgcom
 
Foveon array to truly be about as good as a standard mosaic sensor
with three times the MP (i.e. 1/(.31-.38)).
"THANKS TO ITS new technology, Foveon estimates its pixels are worth about two of anyone else's"

http://www.zdnet.com/anchordesk/stories/story/0%2C10738%2C2847371%2C00.html

This is the first time I saw this, but it is the same as what I worked out with my testing and what I will use for my approximations.

Double the pixel count is turning out to be the consensus approximation. With Foveon using the same number, that pretty much clinches it for me.

Peter
 
Maybe I'm missing something here; it seems a 3MP X3 should be equivalent to a 12MP mosaic sensor, if not more. Here's my math:

Mosaic pattern:
GRGB GRGB GRGB GRGB
BGRB BGRB BGRB BGRB
GRGB GRGB GRGB GRGB
BGRB BGRB BGRB BGRB
GRGB GRGB GRGB GRGB
BGRB BGRB BGRB BGRB
GRGB GRGB GRGB GRGB
BGRB BGRB BGRB BGRB

Count the number of R,G,B pixels across a sample row: 4,8,4
Count the number of R,G,B pixels down a sample column: 4,4,4

Foveon Pattern: X=R,G,B
XXXX XXXX XXXX XXXX
XXXX XXXX XXXX XXXX
XXXX XXXX XXXX XXXX
XXXX XXXX XXXX XXXX
XXXX XXXX XXXX XXXX
XXXX XXXX XXXX XXXX
XXXX XXXX XXXX XXXX
XXXX XXXX XXXX XXXX

Count the number of R,G,B pixels across a sample row: 16,16,16
Count the number of R,G,B pixels down a sample column: 8,8,8

As can be seen, in order for the Green across resolution of the Mosaic pattern to match the Green across resolution of the X3, the number of pixels across in the Mosaic must at least double: 4,8,4 x 2 = 8,16,8.

As can be seen, in order for the vertical resolution of the Mosaic pattern to match the vertical resolution of the X3, the number of pixels down in the Mosaic must at least double: 4,4,4 x 2 = 8,8,8.

Accordingly, using regular math: (2 x across Mosaic resolution) x (2 x vertical Mosaic resolution) = 4 x Mosaic pixels !!

Therefore a 3 MP X3 would have resolution comparable to a 12 MP Mosaic! The color would be better, because the Red and Blue resolution across is still double.

Remember, if you don't sample, there is no way to get information with just processing alone.

Did I miss something???

Steve
In terms of absolute detail, mosaic cameras tend to top out at 75% of
the detail warranted by their number of pixels in a linear direction.
I measured it on several res charts during the initial arguments.

OTOH, X3 cameras are capable of resolving all 100% of the detail for
their pixel dimensions.

To map that to pixel equivalences you need both dimensions:
.75 x .75 = .56. So 0.56 is the empirical factor.

A 6.0 MP mosaic camera approximates (6 x .56) = 3.36 MP of X3-type
detail.
....
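The .75 → .56 derivation quoted above, as a quick numerical check (the function name is mine):

```python
# Apply the 75% linear resolution factor in both dimensions, then use
# the resulting area factor to convert mosaic MP to X3-equivalent MP.
linear = 0.75
area = linear * linear          # 0.5625, quoted in the thread as .56

def x3_equivalent_mp(mosaic_mp: float) -> float:
    """X3-equivalent detail for a given mosaic pixel count."""
    return mosaic_mp * area

six_mp = x3_equivalent_mp(6.0)  # 3.375, i.e. the 3.36 MP in the post
```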

With all of this talk about .75 and .56, which determine the
effective resolution of most cameras, can someone explain how black
& white mode might change those numbers? Would you obtain the full
resolution, and effectively use 100% of the pixels, not 75%? It
might give me enough excuse to try B&W some more....

And if B&W is 1.0 and color is .75, what if you just lower the
contrast? Does that make it .85? :-) But you'd probably mess it
all up again if you tried adjusting the contrast in the paint
program, as you just don't have the data to recover.

--
Gary W.
Nikon 880
 
Your mosaic wasn't quite right. Fully half the mosaic is green -- not in a linear direction, but of the whole pixel count. Simply doubling the number of pixels will yield one green for every Foveon pixel. If you fill the green holes in a vertical direction, there will be none left when you look at the horizontal.

The correct pattern has every non-green pixel surrounded on all sides by GREEN.

Mosaic:

GRGRGRGRGR
BGBGBGBGBG
GRGRGRGRGR
BGBGBGBGBG
GRGRGRGRGR
BGBGBGBGBG
GRGRGRGRGR
BGBGBGBGBG
GRGRGRGRGR
BGBGBGBGBG

100 pixels, 50 = green.

X3

XXXXXXXXXX
XXXXXXXXXX
XXXXXXXXXX
XXXXXXXXXX
XXXXXXXXXX
XXXXXXXXXX
XXXXXXXXXX
XXXXXXXXXX
XXXXXXXXXX
XXXXXXXXXX

100 pixels, 100 = green (100 red and blue, too)

Mosaic x 2 only horizontally.

BGBGBGBGBGBGBGBGBGBG
GRGRGRGRGRGRGRGRGRGR
BGBGBGBGBGBGBGBGBGBG
GRGRGRGRGRGRGRGRGRGR
BGBGBGBGBGBGBGBGBGBG
GRGRGRGRGRGRGRGRGRGR
BGBGBGBGBGBGBGBGBGBG
GRGRGRGRGRGRGRGRGRGR
BGBGBGBGBGBGBGBGBGBG
GRGRGRGRGRGRGRGRGRGR

200 pixels, 100 = green.

So if green is the criterion, detail equivalence happens at around double for mosaic. That's not to say there still won't be more colour errors in mosaic, but since you have many more pixels, your colour errors will not be that large.
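The pixel accounting above can be verified directly from the patterns drawn. The helper below assumes a standard Bayer tile with two greens per 2x2 block, which is what the corrected GRBG pattern shows:

```python
# Green-site counts for the three grids drawn above.
def green_count(rows: int, cols: int) -> int:
    """Green photosites in a standard Bayer grid (2 greens per 2x2 block)."""
    return rows * cols // 2

mosaic = green_count(10, 10)   # 50 greens in 100 pixels
doubled = green_count(10, 20)  # 100 greens in 200 pixels
x3 = 10 * 10                   # every X3 pixel measures green: 100
```

Doubling the mosaic's total pixel count is exactly what brings its green sample count up to one per X3 pixel, which is the basis of the 2x rule of thumb.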

I don't know how it will all work out, but double the pixels for equivalence seems to be a good ballpark rule of thumb for comparison.

All else being equal, I would purchase a 3MP X3 before a 6MP mosaic, but it will be quite a while till all else is equal, or even close enough -- things like higher ISO shooting, reasonable-speed continuous shooting modes, flexible in-camera storage (i.e. JPEG), lens system equivalence, user interface, features...

It will also be quite a while before DSLRs are affordable IMO.

Peter
 
