The end of Bayer?

The Foveon X3 sensor in the SD-9/10 provides a 14 megapixel TIF file from its software based on 10.5 megapixel sampling. The X3 sampling occurs in the Z dimension (as opposed to XY only for Kodak/Bayer sensors). Only 33% of the TIF file data is interpolated vs. 66% or more for Bayer sensors at native pixel sampling.

Z (is real)

The Betterlight scan backs range from 52 lp/mm to 125 lp/mm sampling, fully measured color, from lenses designed for 4x5 format. (Large format lenses tend to have lower peak MTF compared to the best 35mm lenses.) There is also some indication that they may overlap the sample positions slightly, but I can't find details on their site. (They mention "fractional pixels" but I don't see a definition.) There is also the possibility of jiggling the linear sampling array very slightly at each scan position, but I doubt the Betterlight camera does that.

(Joseph Wisniewski posted about the possibility of moving the sampling apparatus in a very small circle to accomplish low pass filtering, in a discussion of Leica claiming not to need optical low pass filtering. It strikes me as a workable idea, and that is the first place I'd heard of it. If one can get the mechanicals to work, it could even provide variable amounts of low pass filtering for different circumstances.)

Sinar had been doing just that with their backs for quite some time, using the "chip wiggler" aspect of piezo-based pixel-shifting on the sensor for greater resolution of color and luminance.

It is possible to out-resolve the lenses in this manner, thus using the MTF limitations of the glass as low pass filters. The only penalty is the process time and huge files (well, the price of admission too).

--
Claude
 
Each photon is the irreducible unit of colour information. If you
filter photons out you are throwing away information.
Correct. The ultimate sensor detects all incoming photons.

But - you also lose information if your filtering doesn't match the filters needed to give RGB information. This is the case for Foveon. This is also the case if you do any color balancing before saving the result - but normally color balancing is a much weaker operation than the Foveon conversion.

Roland
 
No one ever asked Pink Floyd what kind of tape recorder was used to
create their masterpiece: Dark Side of the Moon.
Unfortunate, really, as the question leads to a number of very interesting stories and explanations. More so than for most albums, the sound of DSotM is colored by the technology with which it was made.

-Z-
 
Sinar had been doing just that with their backs for quite some
time, using the "chip wiggler" aspect of piezo-based pixel-shifting
on the sensor for greater resolution of color and luminance.

It is possible to out-resolve the lenses in this manner, thus using
the MTF limitations of the glass as low pass filters. The only
penalty is the process time and huge files (well, the price of
admission too).
Do you know the details of how this works? The particular detail I'd like to know is whether they jiggle the sensor while the shutter is open (so to speak), or whether they use a piezo mechanism to shift between capturing separate images off the sensor. Jiggling the sensor in a small (relative to the pixel pitch) circle would accomplish low pass filtering while sampling at (about) the same frequency. That is different from taking an exposure, reading the sensor, shifting one half pixel pitch (or even a whole pixel pitch for Bayer filtered sensors), and then taking another exposure, which increases the sampling frequency but does not add low pass filtering.

(Interestingly, to do fully measured color with a Bayer sensor via pixel shifts would require four captures, not three.)
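Here is a minimal one-dimensional sketch of the distinction (an assumed model, not a description of Sinar's actual mechanism): jiggling during the exposure acts like widening the pixel aperture, i.e. a low-pass filter applied before sampling at the original pitch, while shifting between exposures and interleaving the captures doubles the sampling frequency without adding any filtering.

```python
# Illustrative 1-D model only (not Sinar's actual firmware): compare
# (a) jiggling during the exposure, which widens the effective pixel
#     aperture (a pre-sampling low-pass filter), with
# (b) shifting by half a pixel between two exposures and interleaving,
#     which doubles the sampling frequency with no added filtering.
import numpy as np

pitch = 1.0
x = np.arange(0, 64, 0.01)                       # finely sampled scene axis
scene = np.sin(2 * np.pi * 0.7 * x / pitch)      # detail beyond Nyquist (0.5 cyc/pitch)

def sample(positions, aperture):
    """Average the scene over a box aperture centred on each pixel position."""
    return np.array([scene[np.abs(x - p) <= aperture / 2].mean() for p in positions])

centres = np.arange(1.0, 63.0, pitch)

normal = sample(centres, aperture=pitch)                     # plain capture
jiggled = sample(centres, aperture=1.5 * pitch)              # (a) aperture widened by the jiggle
shifted = sample(centres + 0.5 * pitch, aperture=pitch)      # (b) second, half-pitch-shifted capture
interleaved = np.ravel(np.column_stack([normal, shifted]))   # 2x sampling frequency

print("plain capture contrast:  ", normal.std())
print("jiggled capture contrast:", jiggled.std())            # lower: fine detail filtered out
print("interleaved sample count:", interleaved.size)         # doubled, detail preserved
```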

-Z-
 
I can't follow your reasoning about how Foveon sensors filter out photons.

If a red photon hits the Foveon sensor (photosite), it is counted in the red layer; if a green one hits, it is counted in the green layer, etc. So how are photons being lost?

Sure there is the chance of cross talk between layers, which I think you may be getting at, but even in the current incarnation this is not of any significance in terms of either information loss or colour infidelity. This is definitely not a theoretical limitation.

--
Kent Dooley
 
If a red photon hits the Foveon sensor (photosite), it is counted
in the red layer; if a green one hits, it is counted in the green
layer, etc. So how are photons being lost?
From a paragraph like this, it is fairly easy to tell that the poster doesn't have enough knowledge about the process involved to say something useful about the subject.

There are many ways that photons are either not converted or not recorded. (E.g. the charge carrier resulting from a photon conversion can recombine with an opposite charge carrier in the semiconductor and thus not be recorded.)

The end result is that realistic quantum efficiencies are down in the 30% range. While some improvement might be expected, it is unlikely that either technology, X3 or Bayer, will get anything like a 2X improvement in quantum efficiency. If we see much above 40%, that would be significant.

Last time I looked, I couldn't find any good data on Foveon X3 quantum efficiency. The numbers for various Bayer sensors are fairly easy to come by on datasheets from companies such as Kodak.
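For scale, a back-of-the-envelope sketch (assuming the shot-noise-limited case, with a made-up photon count) of what a quantum efficiency improvement would actually buy:

```python
# Shot-noise-limited sketch with assumed numbers: recorded electrons = QE x photons
# and SNR = sqrt(recorded electrons), so doubling QE from 30% to 60% is worth
# one stop of light, i.e. only a sqrt(2) gain in SNR.
import math

photons = 10_000   # photons reaching one photosite during the exposure (assumed)
for qe in (0.30, 0.40, 0.60):
    electrons = qe * photons
    print(f"QE {qe:.0%}: {electrons:.0f} e-, shot-noise SNR = {math.sqrt(electrons):.1f}")
```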

-Z-
 
I can't follow your reasoning about how Foveon sensors filter out
photons.
If a red photon hits the Foveon sensor (photosite), it is counted
in the red layer; if a green one hits, it is counted in the green
layer, etc. So how are photons being lost?
The photons are not getting lost.

But the Foveon sensor (and all other sensors with large spectral overlap) natively creates a strongly undersaturated color picture. This undersaturated picture consists of three low-noise layers. It is actually a good idea to start with those layers if you want to make a B&W picture or an IR picture. The result is stunning.

But - if you want to make a normal saturated color picture, you must increase the color saturation. It's here where you get increasing noise - increasing color saturation always does.

The theory behind why it does so can be found in how noise and signal add up.
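A hedged numerical illustration of that point (the matrix below is invented for demonstration, not Foveon's actual conversion): each output channel is a weighted sum of the raw layers, independent noise adds in quadrature, and the large coefficients needed to pull apart heavily overlapping layers amplify the noise accordingly.

```python
# Illustration only: the matrix is made up, not Foveon's actual conversion.
# Each output channel is a weighted sum of the raw layers; independent noise
# adds in quadrature, so the noise gain per channel is
# sqrt(sum of squared coefficients).
import numpy as np

rng = np.random.default_rng(0)

# Flat mid-grey patch as three heavily overlapping raw layers, plus noise.
raw = np.full((100_000, 3), 100.0)
raw += rng.normal(scale=2.0, size=raw.shape)      # per-layer noise, sigma = 2

# Invented "saturation boosting" matrix with large off-diagonal terms.
M = np.array([[ 2.5, -1.0, -0.5],
              [-1.0,  2.8, -0.8],
              [-0.6, -1.2,  2.8]])

rgb = raw @ M.T

predicted = 2.0 * np.sqrt((M ** 2).sum(axis=1))   # noise gain per output channel
measured = rgb.std(axis=0)
print("predicted noise per channel:", predicted)
print("measured noise per channel: ", measured)
```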

Roland
 
The Foveon X3 sensor in the SD-9/10 provides a 14 megapixel TIF
file from its software based on 10.5 megapixel sampling.
No, not in the least. The data from the sensor can only create a 3.43MP image - one that resolves red vs. green/blue nicely at the Nyquist frequency (which the Bayers can't do), but the maximum meaningful data is only 3.43MP. A 14MP output from an X3 is nothing but 75% worthless fluff/fodder. Perhaps a little better than 3.43MP for printing, as it softens up the aliasing sharpness a bit.
The X3 sampling occurs in the Z dimension (as opposed to XY only
for Kodak/Bayer sensors). Only 33% of the TIF file data is
interpolated vs. 66% or more for Bayer sensors at native pixel
sampling.
The Foveon, as used in the Sigmas, is very good at recording luminance, with the three channels summed, but much of the color is actually interpolated quite a bit, as the sensitivities of the green and blue channels to various green/blue frequencies are not different enough to properly distinguish greens and blues.

Sigma RAW data can be converted to a TIFF or JPEG simply by applying a gamma curve to the data, in which case the pixels are not interpolated spatially in any way, but the colors are extremely bland; almost like a full color image with the saturation of the reds reduced to 40%, and the greens and blues reduced to 15%.
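A minimal sketch of that "gamma only" conversion (hypothetical white level and gamma, not Sigma's actual pipeline): every X3 photosite already carries three values, so an output image can be built channel-wise with no spatial interpolation at all.

```python
# Minimal sketch with assumed values (white level, gamma) - not Sigma's
# actual processing pipeline. Every X3 photosite already has three values,
# so the image is built channel-wise with no demosaic interpolation.
import numpy as np

def linear_to_gamma(raw, white_level=4095, gamma=2.2):
    """Map linear sensor counts to 8-bit gamma-encoded values, per channel."""
    norm = np.clip(raw.astype(np.float64) / white_level, 0.0, 1.0)
    return np.round(255.0 * norm ** (1.0 / gamma)).astype(np.uint8)

# Fake 4x4 three-layer capture standing in for decoded X3F data.
raw = np.random.default_rng(1).integers(0, 4096, size=(4, 4, 3))
tiff_like = linear_to_gamma(raw)
print(tiff_like.shape)   # (4, 4, 3): one RGB triplet per photosite, no demosaicing
```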

--
John
 
You should read the previous posts in this thread to see that I have already addressed the issue of capture efficiency before making an ad hominem argument.

You will also then learn that your comments are completely off topic.
--
Kent Dooley
 
A red photon which hits a green sensor (photosite) is blocked by the filter that is over that sensor site. It is then turned into heat and is no longer a red photon. To quote Monty Python's Flying Circus, "It is deceased... no more... gone to meet its maker... a former photon."

--
Kent Dooley
 
I suspect you'd find that the Foveon sensors do essentially the
same thing. I don't see how you'd explain the comparatively poor
noise performance if they were really that much more light
sensitive.
The SD9 loses 70% of the photons hitting the sensor surface, as only 30% of its area is photo-receptive. Then, the light is divided amongst 3 layers.

--
John
 
You can't correct aliasing in software - it's too late. Sorry.

Cesare
Yes you can...

As long as it's not a JPEG-compressed format nor the compressed raw format, so your RGB color channels are still separate.

Canon 20D, Pro1

Prime: 15 f/2.8 fisheye, 24 f/1.4L, 35 f/1.4L, 50 f/1.4, 85 f/1.8, 100 f/2.8 Macro, 135 f/2L ...

Zoom: 10-22 EFS, 17-85 EFS, 24-70 f/2.8L, 70-200 f/4L & f/2.8L IS, 100-400 f/4.5-5.6L IS...
Saving for: 1Ds MKII then later 16-35 f/2.8L
 
Bayer is a smart compromise between the amount of photosites
available and the luminance resolution you can get. A Foveon sensor
gives you much less luminance resolution all other things being
equal, but the colour resolution is as good as the luminance
resolution.

At some point, an X megapixel SLR will be as good as an X + 5
megapixel sensor, because the lenses cannot keep up. What's X?
Let's say 24 to 30 megapixels, but that is just a guess.

You have gone as far as you can with Luminance resolution. So what
do you do with the extra photosites that fabrication improvements
give you? Improve the Colour resolution. A 100% then will have
vibrant colour texture.
Another conclusion could be that when sensors outperform lenses in resolution, you could use the extra resolution in any sensor (e.g. Bayer) to create better color resolution. As a matter of fact, the color resolution for Bayer is approximately 2 times lower than the luminance resolution. So you would outperform the lens by having 4 times as many sites as needed for luminance resolution - which is comparable to using a Foveon sensor. But in the case of the Bayer camera you get RGB directly instead of computed, improving signal to noise. Both Bayer and Foveon are compromises.
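A small sketch of that counting argument (a generic RGGB layout is assumed here): in a Bayer mosaic only a quarter of the sites measure red, a quarter blue and half green, so each colour plane is sampled at roughly half the linear frequency of the full grid, and quadrupling the number of sites restores per-colour sampling to the original grid density.

```python
# Counting sketch for an assumed RGGB Bayer mosaic: 1/4 of the sites are red,
# 1/4 blue, 1/2 green, so each colour plane is sampled at about half the
# linear frequency of the full grid; 4x the sites (2x per axis) restores it.
import numpy as np

def bayer_counts(n):
    """Return (red, green, blue) sample counts for an n x n RGGB mosaic."""
    rows, cols = np.indices((n, n))
    red = int(((rows % 2 == 0) & (cols % 2 == 0)).sum())
    blue = int(((rows % 2 == 1) & (cols % 2 == 1)).sum())
    green = n * n - red - blue
    return red, green, blue

n = 1000
r, g, b = bayer_counts(n)
print(f"{n}x{n} mosaic: R={r}, G={g}, B={b}")                      # R and B at 1/4, G at 1/2
print("linear red sampling vs full grid:", (r / (n * n)) ** 0.5)   # ~0.5
r4, g4, b4 = bayer_counts(2 * n)
print(f"with 4x the sites: R={r4}, i.e. one red sample per original pixel")
```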

A true RGB sensor on the other hand would outperform them both. But we don't have such a thing other than in scanning backs.

Note that the three-sensor thingies in film cameras don't count. The filters used are VERY steep (which is not beneficial for picture quality), and the alignment for megapixel pictures is almost impossible. And the result is a VERY large camera if the sensors are big - definitely stationary.

Roland
 
A red photon which hits a green sensor (photosite) is blocked by the
filter that is over that sensor site. It is then turned into heat
and is no longer a red photon. To quote Monty Python's Flying
Circus, "It is deceased... no more... gone to meet its maker... a
former photon."
Correct. And this is a drawback of the Bayer design.

I just pointed out that the Foveon design has a similar drawback - not losing photons, but increasing noise when doing the color conversion.

Both solutions are doing quite well in spite of the drawbacks.

Roland
 
Aliasing is a standard problem in digital sampling, be it audio or video sampling. The problem is caused by frequencies above half the sampling rate reaching the digitising stage, where they produce alias frequencies.

I'll repeat, if you don't have an anti-alias filter, you can't tell an aliased frequency from that real frequency appearing in the image. No software can remove it without making assumptions about what the world looks like.

An example is probably the easiest way of looking at it. Imagine a wall with even black/white vertical stripes on it. If my camera had 1000 pixels across its image sensor, imagine standing back and taking a photo of the wall where there were 1000 pairs of black/white stripes in the image. What would the image look like?

You could say that the pixels were all aligned to see the black stripes, so the overall image would be black. You could also say that the pixels were all aligned to see the white stripes, so the overall image would be white. Do you see the confusion? There are in effect multiple images which could be produced by the camera when looking at a single scene, depending on exactly how the camera were aligned, and these images could look completely different. This is an effect of aliasing. So when the spatial frequency of the scene exceeds half the sampling frequency, we can get unpredictable/unwanted images.

With anti-aliasing we tend to find the opposite problem. That is, two different scenes would produce the same output image, and people aren't used to thinking this way. In the above example, a properly implemented anti-aliasing filter would produce a grey image no matter what the camera alignment. This is to say the anti-aliased camera would produce the same picture when the scene was of tight black/white stripes on the wall, or if pointing at a grey wall.
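That thought experiment is easy to reproduce numerically. A sketch (with purely illustrative numbers) of 1000 stripe pairs imaged onto 1000 pixels: point sampling flips between an all-black and an all-white frame with a sub-pixel change in alignment, while averaging over each pixel's footprint (a crude stand-in for an anti-alias filter) returns the same grey image either way.

```python
# Numeric version of the striped-wall thought experiment (illustrative values
# only): 1000 black/white stripe pairs imaged onto 1000 pixels.
import numpy as np

pixels = 1000
oversample = 100
x = np.linspace(0, 1, pixels * oversample, endpoint=False)
wall = (np.floor(x * 2 * pixels) % 2).astype(float)       # 1000 black/white pairs

def point_sample(scene, phase):
    """One point sample per pixel, offset by 'phase' (fraction of image width)."""
    idx = ((np.arange(pixels) / pixels + phase) * len(scene)).astype(int) % len(scene)
    return scene[idx]

def box_sample(scene, phase):
    """Average the scene over each pixel's footprint (crude low-pass filter)."""
    shifted = np.roll(scene, -int(phase * len(scene)))
    return shifted.reshape(pixels, oversample).mean(axis=1)

for phase in (0.0, 0.0005):            # 0.0005 of the frame = half a stripe width
    print("point sampled mean:", point_sample(wall, phase).mean())   # 0.0 then 1.0
    print("box filtered mean :", box_sample(wall, phase).mean())     # 0.5 both times
```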

The anti-aliasing filters in some digital cameras tend to be quite aggressive, and this gives the grey effect above, but it tends to lose some information which should ideally reach the sensor unaffected.

Cesare
 
Good example.

On the other hand, a sensor without an anti-aliasing filter but with a random sampling pattern would resolve the stripes rather well - due to the fact that the stripes are long and the brain will integrate over the whole length of each stripe.

This is also how film does it - and long thin lines are better resolved with film than with digital sensors. NOTE - this has nothing to do with film resolving more than digital sensors - it has only to do with the fact that the sampling is random, and that the lines are long.

Roland
 
