The sensor in the SD14 has:
2652 x 1768 = 4,688,736 "red" photosites
2652 x 1768 = 4,688,736 "green" photosites
2652 x 1768 = 4,688,736 "blue" photosites
Each photosite's sample runs through a 12-bit A/D converter, giving a total of 14,066,208 samples.
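The photosite arithmetic can be sanity-checked in a couple of lines (a trivial sketch, using the dimensions quoted above):

```python
# Foveon X3 layout as described above: 3 stacked layers of 2652 x 1768 photosites.
width, height = 2652, 1768
per_layer = width * height       # photosites in each of the 3 stacked layers
total_samples = 3 * per_layer    # one 12-bit sample per photosite

print(per_layer)      # 4688736
print(total_samples)  # 14066208
```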
Because of the physics that lets the X3 sensor stack 3 photosites on top of each other, the raw data doesn't correspond exactly to a normal RGB colour space - blue light is mostly captured by the "blue" photosite, but some of it ends up in the other layers, and so on. Fixing this is a pretty simple calculation: a 3x3 colour conversion matrix unmixes each vertical group's three samples into RGB. You can see the matrix on page 4 here:
http://www.alt-vision.com/documentation/5074-35.pdf
(the SD14's sensor might have slightly different values)
Creating a 2652 x 1768 colour computer image from the sensor data is actually quite simple - AFAIK, you just apply the colour conversion and that's it.
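As a rough sketch of that conversion - the 3x3 matrix here is made up purely for illustration; the real coefficients are in the datasheet linked above, and the SD14's may differ:

```python
# Hypothetical colour conversion matrix (illustrative values only, NOT the
# real Foveon coefficients). Each row sums to 1 so grey stays grey.
CONVERSION_MATRIX = [
    [ 1.8, -0.7, -0.1],
    [-0.3,  1.6, -0.3],
    [-0.1, -0.5,  1.6],
]

def raw_to_rgb(raw):
    """Map one vertical sensor group's (top, middle, bottom) samples to RGB."""
    return tuple(
        sum(m * s for m, s in zip(row, raw))
        for row in CONVERSION_MATRIX
    )
```

This is just a matrix-vector multiply per pixel, which is why the conversion is cheap compared with Bayer demosaicing.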
The best thing about such an image is that it would be very close to perfect. No light is "filtered" out by the sensor, which means no values have to be "guessed", and the sensor does not need an anti-alias (blur) filter. The only real problem is when few photons hit a particular vertical sensor group (i.e. dark areas) or when one or two of the photosites clip (bright, saturated colour areas) - there the colour conversion matrix doesn't work quite as well. This results in more colour noise in dark parts of the image, or slightly odd colours in over-exposed colour areas. The luminance noise, however, is not affected by the colour conversion matrix.
If you want to turn the 2652 x 1768 colour computer image into a 4608 x 3072 (14,155,776 pixel) image - which is what the SD14 gives you if you save a max-resolution JPEG - then any decent upscaling algorithm will do. But that's purely a software thing.
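As a toy illustration of that software step, here is a nearest-neighbour upscale - purely a demonstration; a real converter would use something better like bicubic or Lanczos:

```python
# Naive nearest-neighbour upscaling: each destination pixel copies the nearest
# source pixel. Going from 2652x1768 to 4608x3072 is just this kind of
# resampling, done with a smarter filter.
def upscale_nearest(pixels, src_w, src_h, dst_w, dst_h):
    """pixels is a row-major list of length src_w * src_h."""
    out = []
    for y in range(dst_h):
        sy = y * src_h // dst_h
        for x in range(dst_w):
            sx = x * src_w // dst_w
            out.append(pixels[sy * src_w + sx])
    return out
```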
Personal opinion time: if the world only had Foveon-style vertical sensor groups and no Bayer sensors, and then someone tried to introduce a Bayer sensor, I think the Bayer people would get a LOT more flak than Foveon currently gets. Basically, the Foveon style is much more intuitive and simple to explain to normal people (though the physics behind the actual sensor is rather more complex), while Bayer is not particularly intuitive to explain, even though the physics behind the actual sensor is a lot simpler.
A typical 4608 x 3072 Bayer sensor would have:
3,538,944 red photosites
7,077,888 green photosites
3,538,944 blue photosites
Note that the Foveon sensor would have 32.5% more red and blue sensors, and 0.66x the number of green sensors. While the Foveon has balanced colour accuracy, the Bayer is unbalanced.
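Those counts and ratios can be checked like this (assuming the usual Bayer layout of 1/4 red, 1/2 green, 1/4 blue photosites):

```python
# Comparing per-colour photosite counts: Foveon 2652x1768 (all 3 colours at
# every location) vs a 4608x3072 Bayer mosaic.
bayer_pixels = 4608 * 3072          # 14,155,776
bayer_red = bayer_pixels // 4       # 3,538,944 (blue is the same)
bayer_green = bayer_pixels // 2     # 7,077,888
foveon_per_layer = 2652 * 1768      # 4,688,736

print(foveon_per_layer / bayer_red)    # ~1.32 -> ~32.5% more red (and blue)
print(foveon_per_layer / bayer_green)  # ~0.66x the green
```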
Creating a 4608 x 3072 colour computer image from the sensor requires a demosaicing algorithm - these can get very complicated and there's a lot of research going into them. Putting it another way, that 4608 x 3072 image needs 10,616,832 red values, 7,077,888 green values and 10,616,832 blue values to be guessed.
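For illustration, here is a deliberately naive demosaicing sketch: each missing colour value is "guessed" by averaging the nearest photosites that actually measured that colour. Real algorithms are far more sophisticated; the RGGB layout is an assumption for the example:

```python
# Toy bilinear-style demosaic of an RGGB Bayer mosaic. Every pixel only
# measured ONE colour; the other two are interpolated from neighbours.
def bayer_colour_at(x, y):
    """RGGB pattern: which channel (0=R, 1=G, 2=B) the photosite at (x, y) measures."""
    if y % 2 == 0:
        return 0 if x % 2 == 0 else 1
    return 1 if x % 2 == 0 else 2

def demosaic(mosaic, w, h):
    """mosaic is a row-major list of w*h raw samples; returns an (r, g, b) per pixel."""
    def avg(x, y, channel):
        # Average all samples of `channel` in the 3x3 neighbourhood of (x, y).
        vals = [mosaic[ny * w + nx]
                for ny in range(max(0, y - 1), min(h, y + 2))
                for nx in range(max(0, x - 1), min(w, x + 2))
                if bayer_colour_at(nx, ny) == channel]
        return sum(vals) / len(vals)
    return [tuple(avg(x, y, c) for c in range(3))
            for y in range(h) for x in range(w)]
```

On smooth areas this works fine; on sharp edges the averaged "guesses" are exactly where the artefacts described below come from.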
Many of those guesses will be wrong, mostly in areas of sharp transition. In other words, the accuracy is variable. It is particularly bad in areas with little green light. In addition, to counter the more noticeable image defects, an anti-alias filter is used, which further hurts the low-level accuracy.
This is why it is inherently impossible to reduce the resolution of a Foveon sensor vs a Bayer sensor to a single number such as "megapixels" - the relative difference in image quality is entirely scene dependent.
PS To save repetition I've decided to save this write-up here:
http://www.aceshardware.com/chris/vcgVbayer.html
--
New-ish SLR user and Sigma owner in London.
See my profile for my equipment list