Confusion on Foveon basics

Weth

Senior Member
My confusion comes from the fact that it is my understanding that a photoreceptor is colorblind - it only collects photons regardless of wavelength - and that there are no color filters on a Foveon sensor.

It is also repeated that each layer of the Foveon sensor collects 100% Blue on the top layer, 100% Green on the second layer, and 100% Red on the bottom layer, and that there is no light loss going through the layers. I can't figure out how that makes sense (both the pure color at each level and no light loss as you get deeper into the silicon).

To me, in a simplified approach it would make sense that the top layer (layer A) is sensing B+G+R light. The second layer (layer B) is sensing G+R light, and the bottom layer (layer C) is sensing R light. In my understanding you would then do a mathematical calculation to get the true RGB color of each layer. Again simplified:
Blue = A-B-C
Green = B-C
Red = C

These calculations would give a true RGB value without interpolation. I know true calculations are certainly much more involved, but I am trying to figure out the basics.

My concern with this, and I don't know if it pans out in real world photos, is that it would be possible to blow out the layer A photoreceptor since it is getting all the light passing through, and to get lots of noise in the layer C (red) photoreceptor since it is the only layer measuring a single color band.

Thanks.
 
My confusion comes from the fact it is my understanding that a
photoreceptor is colorblind - it only collects photons regardless
of wavelength and that there are no color filters on a Foveon
sensor.
Hi Weth, The best place to begin is with the tech papers on Foveon's website ( http://www.foveon.com ): "Eyeing the Camera: Into the Next Century" by **** Lyon and Paul Hubel, and "Spatial Frequency Response of Color Image Sensors: Bayer Color Filters and Foveon X3" by Paul Hubel, John Lieu, and Rudolph Guttosch.

**** Lyon spoke last night at the Computer History Museum in Mountain View, CA, "Pixels and Me"
http://www.computerhistory.org/events/index.php?id=1109183291
With some trepidation I include a link to a recent thread....
http://forums.dpreview.com/forums/read.asp?forum=1027&message=12639269
Best regards, Sandy
[email protected]
http://www.pbase.com/sandyfleischman
 
I won't get too techno, because there is too much mudslinging going on in this forum, with way too much technical "I know more than you do" and "you are wrong and I am right" and not enough general "how do I take a better photo."

I don't think this will answer your question in a direct manner, but maybe it will help you understand how the Sigma cameras act in the real world. JL might be able to help you with the intricate technical details of how the sensor works, if that really interests you.

Also, there is plenty of information on the Foveon site on the image chip and how it works. One thing I will say is that it works very well, and IMO it's the right design for an image chip. There are too many problems with Bayer sensors.

What I will say about the Foveon chip is that the blue channel seems to be the most sensitive for some reason. I don't really care why, but it is, and sometimes you have to work around it. The blue channel is also usually the noisiest, followed by the green channel. The red channel is usually very clean.

Incandescent light wreaks havoc with the blue channel, especially if it is underexposed.

The blue channel is usually the first to clip, and usually there will be some info left on the red and green channels, but that is removed by SPP to keep things in balance.

The blue channel is also usually the first one to tank. If you underexpose about a stop, the blue channel really drops off in some places, way ahead of the other two. That seems to be one of the main causes of yellow skin tones and the early drab SD9 colors. Thanks to Roger Beckett for pointing that out. He's one of the smart, sensible photographers here who does not feel he has to spout techno babble to impress people. That was enough to impress me.

I am really making the SD9/10 sound worse than it really is, though. It's got a huge DR, which is great. Take the S2 and 10D: they blow out every other shot if you are lucky. Otherwise it might be every shot. Really terrible, especially the 10D.

With the SD_'s you have to learn when to shoot at the very top, filling the channels for the best color (like for portraits), and when to shoot -EV, and learn which colors are the worst for clipping. Bright yellow is bad, as are some bright reds and oranges. A few others too.
--
http://www.troyammons.com
http://www.pbase.com/tammons
http://www.troyammons.deviantart.com
 
I don't think this will answer your question in a direct manner,
but maybe it will help you understand how the Sigma cameras act in
the real world. JL might be able to help you with the intricate
technical details of how the sensor works, if that really interests
you.
I don't know about intricate details, but I can describe my understanding of how it works. The sensor is basically just silicon. Light goes in, and at some point kicks an electron out of the crystal lattice. These mobile "photoelectrons" are then collected in regions at three different depths. The only absorption is the event that converts photon energy to mobile electron energy; i.e., absorption is detection.

But how does this do color? Because for any wavelength, there's an interaction strength that gives a probability distribution for where the absorption happens. Put it all together with some integrals and you get the three spectral sensitivity curves for the three depth regions. It's described better in their patents or papers or some place; I don't recall exactly where.
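To make that picture a bit more concrete, here is a toy sketch (my own illustration, not Foveon's actual model or numbers): assume simple exponential (Beer-Lambert) attenuation with a made-up absorption coefficient per wavelength, and integrate the absorption probability over three depth bands to get rough per-layer sensitivities.

```python
# Toy model: fraction of photons of a given wavelength absorbed in each depth band.
# The absorption coefficients and band depths are illustrative, not real silicon data.
import math

alpha = {450: 2.5, 550: 0.7, 650: 0.25}     # hypothetical coefficients, in 1/micron
bands = {"top": (0.0, 0.4), "middle": (0.4, 1.5), "bottom": (1.5, 5.0)}  # microns

def absorbed_fraction(a, z0, z1):
    """Fraction of photons absorbed between depths z0 and z1 under Beer-Lambert attenuation."""
    return math.exp(-a * z0) - math.exp(-a * z1)

for wavelength, a in alpha.items():
    per_band = {name: absorbed_fraction(a, z0, z1) for name, (z0, z1) in bands.items()}
    print(wavelength, {k: round(v, 2) for k, v in per_band.items()})
```

Short wavelengths mostly get absorbed in the top band and long ones deeper down, so each band ends up with a different, overlapping spectral sensitivity, with no color filters involved.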

j
 
Here is a link to one of my posts in the thread SandyF sent you to.

http://forums.dpreview.com/forums/read.asp?forum=1027&message=12642002

One thing that most folks seemed to agree on is that some of the questions you ask are not answered anywhere.
My confusion comes from the fact it is my understanding that a
photoreceptor is colorblind - it only collects photons regardless
of wavelength and that there are no color filters on a Foveon
sensor.
Joe W. has claimed (and I have great faith in him) there is minimal degradation of light as it passes through the silicon layers. There have been many posts, and the Sigma blurbs also say the three different silicon layers are sensitive to different wavelengths. On the other hand, it is not clear to me how you can easily figure out which photon comes from which wavelength.
It is also repeated that each layer of the Foveon sensor collects
100% Blue on the top layer, 100% Green on the second layer, and
100% Red on the bottom layer, and that there is no light loss going
through the layers. I can't figure out how that makes sense (both
the pure color at each level and no light loss as you get deeper in
the silicon).
As I posted earlier in the other thread, this is kind of a misnomer. The blue layer does not just collect blue light; the same goes for the red layer not just collecting red light and the green layer not just collecting green light. And no one seemed to be able to point to Foveon or Sigma data that said what wavelengths were collected in what layer.

Since Joe W. chewed me out for saying light loss was a problem, I am now of the opinion the light loss is minimal.
To me, in a simplified approach it would make sense that the top
layer (layer A) is sensing B+G+R light. The second layer (layer B)
is sensing G+R light, and the bottom layer (layer C) is sensing R
light. In my understanding you would then do a mathematical
calculation to get the true RGB color of each layer. Again
simplified:
Blue = A-B-C
Green = B-C
Red = C
These calculations would give a true RGB value without
interpolation. I know true calculations are certainly much more
involved, but I am trying to figure out the basics.
This makes as much sense as anything else I have seen about how the collected color information is turned into a raw file.
My concern with this, and I don't know if it pans out in real world
photos, is that it would be possible to blow out the layer A
photoreceptor since it is getting all the light passing through, and
to get lots of noise in the Layer C (red) photoreceptor since it is
the only layer measuring a single color band.
My understanding is that no layer is only measuring a single color band. As you noted, the true calculations would be more involved. But I suspect you are correct in saying there is some subtracting (and possibly adding) of information from layers based on what happens in other layers.
Of course, I may be wrong about all of this.
 
Actually, Weth is correct about the signal intensities which would result from the simple Foveon design of equally sized blue layers on top of green layers on top of red layers.

However, if you look at detailed diagrams of the Foveon photosites you will see that the blue layer does not completely cover the green layer. Thus the blue layer is smaller than the green layer. Similarly, the green layer is smaller than the red layer. This compensates for the effect of photon loss as the photons progress through the layers.
Kent
--
Kent Dooley
 
Hi Weth,
You are close with your understanding.
Check Pavel's thread for the math involved:
http://forums.dpreview.com/forums/read.asp?forum=1027&message=12781244

A photon at any wavelength has some probability of being absorbed at a given depth in silicon; it is just that shorter wavelengths are, on average, absorbed earlier. The Foveon design makes use of this fact, measuring the number of photons absorbed at three different depths - this is enough information to calculate the colours. Again, see Pavel's thread for the relations between these numbers.
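As a toy illustration of the kind of math in Pavel's thread (the sensitivity numbers below are invented for illustration, not Foveon's), the three layer signals can be modelled as a 3x3 linear mix of R, G and B, which is then undone per pixel by solving the linear system:

```python
# Toy example: recover RGB from three overlapping layer responses.
# The 3x3 sensitivity matrix is made up; real coefficients would come
# from the measured spectral response of each layer.
import numpy as np

M = np.array([
    [0.15, 0.35, 0.80],   # top layer: responds mostly to blue
    [0.30, 0.70, 0.25],   # middle layer: mostly green
    [0.75, 0.30, 0.10],   # bottom layer: mostly red
])

true_rgb = np.array([0.6, 0.5, 0.2])   # some scene colour
layer_signals = M @ true_rgb           # what the three depths would record

recovered = np.linalg.solve(M, layer_signals)
print(recovered)                       # ~ [0.6, 0.5, 0.2]
```

Note that some subtraction between layers falls out of this automatically, much like the simplified A/B/C scheme in the original post, just with less tidy coefficients.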

Also, the mechanism by which the absorbed photon energy is converted to the electric charge which can be measured is more involved than the simplified picture described by JL above. You need the concept of the gap energy in doped silicon, but the net effect is the same: the charge which can be measured is proportional to the number of photons absorbed: twice as much light leads to twice as strong a signal. Photon energy above this gap energy is converted to heat, not electricity :-).
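For the gap-energy point, a quick back-of-the-envelope check (the standard photon-energy formula, and the commonly quoted ~1.12 eV band gap of silicon) shows that every visible photon has more than enough energy to free one electron, with the excess going to heat:

```python
# Photon energy E = h*c / wavelength, compared to the silicon band gap (~1.12 eV).
# One absorbed photon frees at most one electron; the leftover energy becomes heat,
# which is why the signal counts photons rather than total light energy.
H_C_EV_NM = 1239.84            # h*c expressed in eV*nm
SI_BAND_GAP_EV = 1.12          # approximate room-temperature band gap of silicon

for wavelength_nm in (450, 550, 650):       # blue, green, red
    photon_ev = H_C_EV_NM / wavelength_nm
    print(f"{wavelength_nm} nm: {photon_ev:.2f} eV per photon, "
          f"{photon_ev - SI_BAND_GAP_EV:.2f} eV left over as heat")
```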

This linearity between the light intensity and the electric charge is what limits the dynamic range of all digital cameras, as opposed to film, where, for most of the range of incident light intensity, the darkening of the emulsion changes as the logarithm of the intensity: twice as much light is one stop on the photographic scale.

Hope it helps and it will not lead to discussing solid state physics next.
--
Cheers,
Wojtek
 
However if you look at detailed diagrams of the Foveon photosites
you will see that the Blue Layer layer does not completely cover
the green layer. Thus the blue layer is smaller than the Green
layer. Similarly the Green Layer is smaller than the Red Layer.
This compensates for the effect of photon loss as they progress
through the layers.
What photon loss?

The wells in the pictures aren't necessarily indicative of what things actually look like, but the most plausible explanation for any actual difference in size would be that this is the easiest way to do the lithography.

--
Ron Parr
FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
My confusion comes from the fact it is my understanding that a
photoreceptor is colorblind - it only collects photons regardless
of wavelength and that there are no color filters on a Foveon
sensor.
That's an odd premise. Most photodetectors will have varying sensitivities, depending upon wavelength. I'm not aware of any that absorb uniformly, although I have little doubt that it's possible to approximate this if the right materials are used. There's also no reason to assume that some detectors won't miss some ranges of wavelengths almost entirely.
It is also repeated that each layer of the Foveon sensor collects
100% Blue on the top layer, 100% Green on the second layer, and
100% Red on the bottom layer, and that there is no light loss going
through the layers. I can't figure out how that makes sense (both
the pure color at each level and no light loss as you get deeper in
the silicon).
Hmmm... I'm not sure anybody has made the 100% claim. Even if they did, it's not clear what it would mean. Let's consider "100% red," for example. This could mean 100% of the photons at a particular red wavelength and nothing else, but then each photodiode would detect only a few wavelengths and miss most of visible light. It could mean that it absorbs 100% of the energy in some range labeled red, but then this would make it impossible for the sensor to detect yellow, since we'd want yellow to be detected at about 50% intensity by a red detector and at about 50% intensity by a green detector.
To me, in a simplified approach it would make sense that the top
layer (layer A) is sensing B+G+R light. The second layer (layer B)
is sensing G+R light, and the bottom layer (layer C) is sensing R
light. In my understanding you would then do a mathematical
calculation to get the true RGB color of each layer. Again
simplified:
Blue = A-B-C
Green = B-C
Red = C
These calculations would give a true RGB value without
interpolation. I know true calculations are certainly much more
involved, but I am trying to figure out the basics.
Well, this is an odd case where what you think is easy is actually much harder than the real thing. The probability of a red-wavelength photon getting absorbed at a shallow level in the substrate is pretty low, and it gradually increases as you move towards blue, so the three layers naturally separate the wavelengths, subject to some slop arising from the fact that the whole thing is probabilistic and that each photon only has a probability distribution over absorption depth and not a deterministic absorption depth.

In other words, your B+G+R layer would be quite hard to make.
My concern with this, and I don't know if it pans out in real world
photos, is that it would be possible to blow out the layer A
photoreceptor since it is getting all the light passing through, and
to get lots of noise in the Layer C (red) photoreceptor since it is
the only layer measuring a single color band.
The math you have described isn't what happens, but there are two related issues:
  • The RGB components in most colorspaces don't match up exactly with the spectral properties of the layers, so there is a mapping from the recorded values at the layers to RGB spaces that may involve the values of one layer influencing values for more than one primary in the target space. (Of course, this can be true of traditional color filter array sensors as well, but the effect may be smaller.)
  • In any sensor, there is a possibility of charge leaking to adjacent pixels. It's possible that a new set of concerns arises for vertically stacked photodiodes that hasn't been studied as carefully as the concerns that arise for horizontally spaced photodiodes.
--
Ron Parr
FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
To me, in a simplified approach it would make sense that the top
layer (layer A) is sensing B+G+R light. The second layer (layer B)
is sensing G+R light, and the bottom layer (layer C) is sensing R
light. In my understanding you would then do a mathematical
calculation to get the true RGB color of each layer. Again
simplified:
Blue = A-B-C
Green = B-C
Red = C
These calculations would give a true RGB value without
interpolation. I know true calculations are certainly much more
involved, but I am trying to figure out the basics.
Tom. I've thought about this for a bit, and I don't know if it's correct. I'm sure Just will correct me if I'm wrong :-), but I don't think this is physical. In order for a photon to be "measured" it must be destroyed - i.e., "turned into" an electron. If the top layer "senses" B+G+R, then it's got to measure B+G+R, which means the B+G+R photons are absorbed and turned into electrons. So this would leave little or no G+R for the second layer, and even less R for the final layer.

I think it's correct, as suggested by Just, that we are considering quantum probabilities of absorption, i.e., we can say that "it's most likely" that a photon of XXX nm will be absorbed at a depth of YYY um in doped silicon. That doesn't mean that all photons of XXX nm are absorbed at YYY um; some are absorbed and turned into electrons before, and some after, that depth. I don't know quantum electronics or optics at all, but it would not surprise me in the least if this relation is a Gaussian distribution centered about the penetration depth.

Just my thoughts.
MsM
 
My confusion comes from the fact it is my understanding that a
photoreceptor is colorblind - it only collects photons regardless
of wavelength and that there are no color filters on a Foveon
sensor.
Joe W. has claimed (and I have great faith in him) there is miminal
degration of light as it passes thru the silicon layers.
It simply does not matter even if what he says is true; as long as it is inside a pixel it is just one sample, so it could degrade in sharpness (due to diffusion) or whatever as much as it wants....
the other hand it is not clear to me how you can easily figure out
which photon comes from which wave length.
Different wavelengths are absorbed at different depths and the layers are buried at different depths, so what's the point here?
And no one seemed to be able to point to Foveon or Sigma data that
said what wave length was collected in what layer.
This data has been around for quite some time; there is a PDF that has been around since mid-2002 (it seems to have been removed by now) that includes the relevant curves.

--
http://www.pbase.com/dgross (work in progress)
http://www.pbase.com/sigmasd9/dominic_gross_sd10

 
And no one seemed to be able to point to Foveon or Sigma data that
said what wave length was collected in what layer.
this data has been around for quite some time, there is a pdf that
has been around since mid 2002 (seems to be removed by now) that
includes the relevant curves.
It's in the patent:

http://l2.espacenet.com/dips/bnsviewer?CY=ep&LG=en&DB=EPD&PN=US5965875&ID=US+++5965875A1+I+

This was granted in 1999 and filed in 1998, so it has been in the public record since then. It has been discussed all over the place, so I don't know what he's complaining about.

--
Ron Parr
FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
Hi Dom,

Sorry, I clicked on the wrong PDF. If I am reading fig. 7 correctly, all three layers are sensitive to most of the visible spectrum, with green most sensitive in the mid range, peaking at about 550 nm; red shifted to the right, peaking at around 600 nm; and blue shifted left, peaking at around 450 nm. However, there seems to be a great deal of overlap. To my untrained eye it seems like red and blue overlap in ~1/3 of the total area, while the other combinations overlap in ~2/3. Am I understanding this correctly?

And if this is so, it would seem to require good firmware to create the raw file, and a very good raw converter (like SPP) to create a TIFF or JPEG from the raw file with the right color levels. Putting aside the advantages SPP has (like fill light or negative fill light), do you know how SPP compares with the raw converters Nikon or Canon have (or the extra-cost ones folks buy for those cameras) in getting the color levels right?

I have browsed other forums like the 1D and Nikon ones. One huge difference I note is that in the Sigma forum SPP is almost always praised, while users of other camera brands seldom speak of their raw converters, and when they do it is usually to gripe about them.

I am wondering if this is because the in-camera JPEG is acceptable because the firmware in the camera does this well, or because the raw converters are so bad that there is not so great an advantage in shooting raw. Even in the 1D forum there are plenty of guys who don't shoot raw. dRebel users are almost disdainful of raw shooting, yet almost everyone agrees there are advantages in shooting raw, but only Sigma users always do it and seem to be happy doing so.
The "Eyeing the Camera" paper that SandyF mentioned has the curves:
http://www.foveon.com/X3_tech_papers.html

j
On which page?
Figures 7 and 9 (and 5 & 6 without the IR cut)
--
http://www.pbase.com/dgross (work in progress)
http://www.pbase.com/sigmasd9/dominic_gross_sd10

 
Pardon me for jumping in...
Sorry, I clicked on the wrong PDF. If I am reading fig 7 correctly
all three layers are sensitive to most of the visible spectrum with
green most sensitive in the mid range peaking at about 550 nm, red
shifted to the right peaking at around 600 nm, and blue shifted
left and peaking at around 450. However there seems to be a great
deal of overlap. To my untrained eye it seems like red and blue
overlap in ~ 1/3 of the total area, while the other combinations
overlap ~ 2/3. Am I understanding this correctly?
There's quite a bit of overlap - more than you would see in typical color filters. However, you hopefully realize that some overlap is necessary, or else it would be impossible to detect, for example, yellow.
And if this is so it would seem to require good firmware to create
the raw file, and a very good raw converter (like SPP) to create a
TIFF or JPG from the raw file which has the right color levels.
Putting aside the advantages SPP has (like light fill or negative
light fill) do you know how SPP compares with the raw converters
Nikon or Canon has (or the extra cost ones folks buy for those
cameras) in getting the color levels right?

I have browsed other forums like 1d and Nikon. One huge difference
I note is in the Sigma forum SPP is almost always praised, while
other camera brands seldom speak of their raw converters, and when
they do it is usually to gripe about them.

I am wondering if this is because the in camera jpg is acceptable
because the firmware in the camera does this well, or because the
raw converters are so bad that there is not so great advantage in
shooting raw. Even in the 1D forum there are plenty of guys who
dont shoot raw. dRebel users are almost distainful of raw
shooting, yet almost everyone agrees there are advantages in
shooting raw, but only Sigma users always do it and seem to be
happy doing so.
Well, they have no alternative.

--
Ron Parr
FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
I wont get too techno, because there is too much mudslinging going
on here at this forum with way too much technical "I know more than
you do" and "you are wrong and I am right" and not enough general
how to take a better photo.
Well, no matter who is right and who is wrong, the last idea is off topic.
 
Off topic in a photography forum?? Geez. Okay, uncle, I will give in this time, but who really cares how a sensor works!!

I always thought it was about what ends up in print, that is, until about 4-5 years ago, when a lot of overzealous people showed up along with the digital camera revolution, all with a lot of semi-literate verbal techno-babble expertise but little true film photographic experience. (Not specifically pointing at this forum.) Some of these people actually ask/answer/respond to their own threads via dual or multiple personalities. Very bizarre, to say the least. I wonder what Sigmund Freud would have said about this behavior.

Some people are just not happy unless they are complaining, er, um, arguing, hopefully not with themselves though. You get my drift, as you seem to get entrapped in a lot of these name-calling, time-wasting arguments. I get sucked into them too from time to time.

All the techno arguers should go hang some artwork in a gallery; then I might feel better about listening.

To the original poster: sorry about the rant. The SD_'s really are good digital cameras, and for sharpness and depth of color you really cannot beat them unless you want to drop megabucks.
--
http://www.troyammons.com
http://www.pbase.com/tammons
http://www.troyammons.deviantart.com
 
My confusion comes from the fact it is my understanding that a
photoreceptor is colorblind - it only collects photons regardless
of wavelength and that there are no color filters on a Foveon
sensor.
That's an odd premise. Most photodetectors will have varying
sensitivities, depending upon wavelength. I'm not aware of any
that absorb uniformly, although I have little doubt that it's
possible to approximate this if the right materials are used.
There's also no reason to assume that some detectors won't miss
some ranges of wavelengths almost entirely.
But he's still right: a Bayer sensor is a monochrome sensor in that each photosite, whether R, G, or B, is identical, and thus absorbs identically. The mosaic filter alters which photons are actually allowed to hit the sensor at a given location, but the sensor itself is still uniformly monochrome.

Foveon's RGB sensors are also similar to one another in that they all count photons the same way, but nevertheless the CMOS itself always returns inherently full-color data, due to the depth at which each sensor is buried.

IOWs:

If you order a Foveon CMOS chip, take it out of the package and place it on the table, you are looking at a full-color sensor. If you order a CMOS chip used in a Bayer design, take it out of the package and place it on the table, you are looking at a monochrome sensor. By placing a filter on top of that chip, you can interpolate color using customized software that "knows" how the filters were arranged and combines a few of those monochrome photosites to build a single, full-color, recorded pixel; but without the filter and custom software, it is an inherently monochrome CMOS.

To illustrate one step further, you could use three of the same Bayer CMOS part numbers as sensors in a beam-splitting full-color camera, where prisms are employed to split the light into RGB color components, which are then directed to monochrome sensors without any filters glued to them to register the light. You could not, however, intelligently use a Foveon CMOS part number in such a way, because it is inherently a full-color design.
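To make the interpolation step concrete, here is a minimal sketch of the kind of thing a Bayer camera has to do (simple neighborhood averaging over an assumed RGGB pattern; real demosaicking algorithms are far more sophisticated, and a Foveon sensor skips this step entirely because all three values exist at every site):

```python
# Minimal bilinear-style demosaic for an RGGB Bayer mosaic (illustrative only).
import numpy as np

def demosaic_rggb(mosaic):
    """mosaic: 2-D array of raw values from an RGGB color filter array.
    Returns an (H, W, 3) RGB array using 3x3 neighborhood averaging."""
    h, w = mosaic.shape
    rgb = np.zeros((h, w, 3))

    # Which channel each photosite actually measured.
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True          # R on even rows, even columns
    masks[0::2, 1::2, 1] = True          # G on even rows, odd columns
    masks[1::2, 0::2, 1] = True          # G on odd rows, even columns
    masks[1::2, 1::2, 2] = True          # B on odd rows, odd columns

    for c in range(3):
        known = np.where(masks[..., c], mosaic, 0.0)
        count = masks[..., c].astype(float)
        vs, cs = np.pad(known, 1), np.pad(count, 1)
        # Sum measured values and sample counts over each 3x3 neighborhood.
        num = sum(vs[i:i + h, j:j + w] for i in range(3) for j in range(3))
        den = sum(cs[i:i + h, j:j + w] for i in range(3) for j in range(3))
        rgb[..., c] = num / np.maximum(den, 1e-9)
    return rgb

raw = np.random.rand(4, 6)               # a tiny fake Bayer frame
print(demosaic_rggb(raw).shape)          # (4, 6, 3)
```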
 
