Why wouldn't a b/w resolution chart benefit SD9?

Personally, I think your argument would have far greater weight
if you showed realistic images where this happens as opposed to
playing with resolution charts. If, as you say, it happens all of
the time, then it should be easy to build a page full of real world
samples.
Well that's an easy one! Here's one.

http://www.ddisoftware.com/testpics/F717-SD9.jpg

Just look at the F717 image and the difference in sharpness between the white/green stitching (which looks OK) and the red stitching, which has lost all detail and is severely blurred. The difference in the ability to resolve green/white versus a near-pure red is so dramatic that the red stitching almost looks like it was pasted there from a different/blurry shot. In the SD9 comparison on the right, the sharpness and detail in the green/white stitching is comparable to the F717 image (if just a little sharper due to the sharper X3 image), but you can see what the red stitching really looks like, and it is just as sharp as the green/white.

Here's another one:

http://www.ddisoftware.com/testpics/redcar.jpg

In the red car example, you can see how the red/black transitions are very jaggy while the black/gray edge (which is at the same angle) shows much better resolution. These are just a couple of examples of why we need to look at more than just B/W resolution when comparing Bayer vs. X3 samples.
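To make the sampling argument concrete: in an RGGB Bayer mosaic only one pixel in four carries a red sample, so a saturated red/black edge is measured at half the linear resolution and the demosaic has to guess the rest. A minimal 1-D sketch of the effect (Python with NumPy; the linear interpolation is a stand-in for a real demosaic, not any camera's actual algorithm):

    import numpy as np

    # 1-D scene: a hard red/black edge, 16 pixels wide
    red = np.array([255] * 8 + [0] * 8, dtype=float)

    # In a Bayer row (RGRG...), red is known only at every other pixel;
    # in 2-D it is only 1 pixel in 4.
    x_known = np.arange(0, 16, 2)
    x_all = np.arange(16)
    red_demosaic = np.interp(x_all, x_known, red[x_known])

    print(red.astype(int))           # true edge:  ... 255 255 | 0 0 ...
    print(red_demosaic.astype(int))  # recovered:  pixel 7 comes out ~127, not 255

For a green/white edge, the two green samples per 2x2 block (and the luminance they carry) hide most of this, which is exactly why the red stitching suffers while the white/green stitching looks fine.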

--
Mike
http://www.ddisoftware.com
 
We're talking about edge detail where 255,0,0 meets 0,0,0 (or
values close to those). A sharp red/black edge isn't going to
benefit much from a 10% shift in any RGB channel, or at least, it's
certainly not going to look like the sharp edge that it really is.
Glancing around my room, I see very few super-saturated reds, greens, or blues (more earth tones actually). Looking out the window is pretty much the same: one really blue sky (but not pure blue), although there is one red stop sign visible.

I think that once again, and as someone else mentioned, we're back to arguing theoretical worst-case vs real-world. It would be interesting to see a spectral analysis of a batch of photos and determine the percentage of super-saturated pure reds, greens, and blues, where your objections would hold true, vs the remainder of the image.
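If someone wanted to run that census, a crude first cut is easy: count pixels whose dominant channel is near full scale while the other two are near zero. A rough sketch (Python with NumPy and PIL assumed; the thresholds here are arbitrary picks, not any standard definition of "super-saturated"):

    import numpy as np
    from PIL import Image

    def pure_primary_fraction(path, hi=230, lo=40):
        """Fraction of pixels that are near-pure red, green, or blue."""
        rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.int16)
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        pure_r = (r > hi) & (g < lo) & (b < lo)
        pure_g = (g > hi) & (r < lo) & (b < lo)
        pure_b = (b > hi) & (r < lo) & (g < lo)
        return float((pure_r | pure_g | pure_b).mean())

Run that over a batch of typical photos and you'd have a number to argue about, instead of what happens to be visible from our respective windows.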

It's also pretty obvious that for the Bayer pattern you're always going to have at least a one-pixel "alias" separation between severe edge/color transitions. Although to be fair, you'll in all likelihood have the same issue with a Foveon chip, as the odds are pretty low that any given color transition will hit a set of sensor boundaries perfectly. As such you'd still have at least a one-pixel-wide red-black aliasing layer (see the boxer photo for an example).

It's also pretty obvious that Karl has a point (you guys have been busy this morning), in that the Foveon pixel well size is no accident. They probably don't know how to generate adequate yields on larger sensors, and they can't make smaller pixels w/o killing performance (ISO 200 max).

As such, a Bayer system could still brute-force the issue by decreasing the well size and increasing the sampling frequency, thus making its transition boundaries "smaller" (and we're back to the whole 3MP vs 6MP argument).
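As a toy version of that brute-force trade, the edge sketch from earlier in the thread can be rerun at two sampling pitches (again Python/NumPy, purely illustrative):

    import numpy as np

    red = np.array([255] * 8 + [0] * 8, dtype=float)
    x_all = np.arange(16)

    for pitch in (2, 1):  # Bayer-like red pitch vs. full (X3-like) sampling
        xk = np.arange(0, 16, pitch)
        rec = np.interp(x_all, xk, red[xk])
        smeared = int(np.count_nonzero((rec > 1) & (rec < 254)))
        print(f"pitch {pitch}: {smeared} smeared pixel(s) at the edge")

Halving the pitch removes the smeared band in this toy case, but in 2-D it costs 4x the photosites, which is the 3MP vs 6MP argument in miniature.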

Unfortunately, Canon has made things a little more difficult, in that we don't know the sensor performance of the D60, nor do we really know the demosaicing algorithm used. As such, a lot of this is conjecture.

BTW, how are you "bayerizing" your images, and how are you getting them into qimage to demosaic?
 
What one should notice is how poorly, relative to the Bayer (by
Fill Factory), the X3 appears to separate the colors. The
“color” of a given pixel has to be computed as a function of the
other color sensors. Notice for the Foveon how little difference
there is in the Blue and Green responses and that there is still a
significant amount of each color detected in the other sensor.
Very roughly speaking, the amount of crosstalk between the colors
detected by the various sensors has to be subtracted out to get a
given color. The more the curves overlap, the harder it is to
determine the actual color.
I have to side with the "F" team here. I think the "cross-talk" is unavoidable, as without it I don't think the various frequencies of light would penetrate successfully from one color region to the next. And if you in fact do have a large blue component and a significant green component, then you are mapping into cyan space, as the cross-over would indicate.

It seems as if the Foveon isn't really sampling distinct color layers, but color regions.

I'm also operating in a bit of a vacuum here, as w/o more specific data from Canon, I don't know which set of conditions is at work. You can use a low-pass filter (as in the Fill Factory graph) and work the numbers in software. Most algorithms, however, seem to be based on a high-pass filter (as Mike Chaney posted) and use software to interpolate for the missing color AND luminance channels.

Finally, the Fill Factory graph shows higher "cross-talk" on the blue-green and green-red boundaries, which indicates you could be in trouble if the color you're trying to resolve is magenta.
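For reference, the simplest demosaicing scheme of all is plain bilinear interpolation of each sparse channel; real converters do something much smarter, but it shows where the missing values come from. A sketch (Python with NumPy and SciPy assumed; RGGB layout, no edge-directed tricks):

    import numpy as np
    from scipy.ndimage import convolve

    def demosaic_bilinear(mosaic):
        """Bilinear demosaic of an RGGB mosaic into an H x W x 3 image."""
        h, w = mosaic.shape
        r = np.zeros((h, w))
        g = np.zeros((h, w))
        b = np.zeros((h, w))
        r[0::2, 0::2] = mosaic[0::2, 0::2]   # red at even row, even col
        g[0::2, 1::2] = mosaic[0::2, 1::2]   # green fills the checkerboard
        g[1::2, 0::2] = mosaic[1::2, 0::2]
        b[1::2, 1::2] = mosaic[1::2, 1::2]   # blue at odd row, odd col
        k_rb = np.array([[.25, .5, .25], [.5, 1., .5], [.25, .5, .25]])
        k_g = np.array([[0., .25, 0.], [.25, 1., .25], [0., .25, 0.]])
        return np.dstack([convolve(r, k_rb), convolve(g, k_g), convolve(b, k_rb)])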
 
Glancing around my room, I see very few super-saturated reds,
greens, or blues (more earth tones actually).
I see them everywhere. I have packs of Canon Photo Paper Pro in my office that are stacked and the red edges are interlaced with the black shadow between the packs. My daughter is wearing a shirt that has a cartoon character wearing a red shirt with fine black lines/stitching. There is a black phono cable where the cable goes into a red plastic plug. Red pens with black detail. Blue Epson photo cartridge boxes with black shadows behind them. Etc. Etc.
Looking out the
window is pretty much the same: one really blue sky (but not pure
blue), although there is one red stop sign visible.
There are lots of times when I want to shoot a red flower against a blue sky. I often choose something else because I know what's going to happen to the image where the petals meet the blue sky.

We could argue about this stuff all day... what you see... what I see. The fact remains that it is an issue and needs to be addressed in testing. I don't see any need to keep discussing the magnitude of the problem, because it does exist, and I have plenty of examples of it in real images (and I wasn't "looking" for the problem when I shot them either).
BTW, how are you "bayerizing" your images, and how are you getting
them into qimage to demosaic?
That is explained on the page where I have the samples. Basically, the process involves simulating the effect of an AA filter and then discarding 2 of the 3 colors on the SD9 image in a Bayer pattern. I then run the result through QP's NEF decoder to interpolate, minus the color profile application and sharpening.
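For anyone wanting to try this at home, a minimal sketch of the general recipe (not Mike's actual code; a Gaussian blur is a crude stand-in for a real birefringent AA filter, and the 0.7 sigma is a guess):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def bayerize(rgb, aa_sigma=0.7):
        """Blur like an AA filter, then keep 1 of 3 channels per pixel (RGGB)."""
        soft = gaussian_filter(rgb.astype(float), sigma=(aa_sigma, aa_sigma, 0))
        mosaic = np.zeros(soft.shape[:2])
        mosaic[0::2, 0::2] = soft[0::2, 0::2, 0]  # red
        mosaic[0::2, 1::2] = soft[0::2, 1::2, 1]  # green
        mosaic[1::2, 0::2] = soft[1::2, 0::2, 1]  # green
        mosaic[1::2, 1::2] = soft[1::2, 1::2, 2]  # blue
        return mosaic

The mosaic can then be fed to whatever demosaicing routine you want to test, e.g. the bilinear sketch earlier in the thread.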

--
Mike
http://www.ddisoftware.com
 
BTW, that was not intended as a swipe against Michael Long.

It's just that Sony dug themselves a hole wide enough that it's tough to cover over in people's eyes when they expect to see a "red problem".

But that's a conversation for another thread and another forum. :-)
The red car example is from a D1. In addition, the red stitching
is not a blowout problem.
--

Ulysses
 
The red car example is from a D1. In addition, the red stitching
is not a blowout problem.
I never said it did not happen, only that such cases would be rare. And of course the effect is going to be a lot more noticeable with only 2.6 MP.

That said, I do find these examples far more persuasive than a contrived resolution chart.

--
Erik
 
Sorry to rain on your crusade, Karl, but the overlap doesn't hurt
your ability to recognize colors as long as every distinct hue can
be mapped to a distinct R, G, B combination in the sensor's space.
There is a practical issue in constructing a transformation between
these spaces but as I've pointed out to you many times, the problem
of mapping between spaces is a common problem with well-known
solutions.
I did not say that the problem could not be solved, nor did I say that there should be no overlap. I was saying, and it has been known for a long time, that a problem with using the depth of silicon to separate the colors is that it does not do a good job of it.

The response curves are clearly non-optimal, you know that. The CFA people can try to tune the filters for the curves that they want, whereas Foveon is a bit more stuck with the silicon characteristics. I would also assume that the scatter within the silicon contributes to the problem (many path lengths to get to the diode at a given depth).

It is not that it is impossible to separate colors, but that having response curves so far from optimal hurts the ability to do so. It reduces the "effective signal" of what the sensors would otherwise see and makes them more prone to noise.
Your claim that the sensor would have trouble distinguishing
between cyan and blue is just wrong. From the very graphs that you
provided, it's clear that for cyan, the sensor will see some red,
and much more green than blue. While for blue, the sensor will see
no red, and much more blue than green.
Once again, I did not say it was impossible, but rather that it is more difficult. More difficult in turn means that at low light levels, where the signal to noise is worse, there is more of a chance for the X3 to get the color wrong.
More formally, all we need for the sensor to do its job in terms of
resolving correct hues is for there to be a bijection between hues
in the sensor's space and hues in perceptual space. We can think
of each color's response curve as a basis function. What matters
is that the three layers form a basis that spans the perceptual
space. If they do, we can do a change of basis into, e.g., sRGB.
Yep, you certainly can, only with less accuracy for a given signal to noise level.
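Karl's accuracy point can be made concrete: recovering the scene colors means inverting a 3x3 matrix built from the layer responses, and the more the response curves overlap, the bigger the entries of that inverse, so sensor noise is amplified on its way into RGB. A toy comparison (NumPy; both matrices are invented response mixes for illustration, not measured Foveon or CFA data):

    import numpy as np

    # Rows: what each layer/filter sees of pure scene R, G, B (made up)
    separated = np.array([[1.0, 0.2, 0.0],
                          [0.2, 1.0, 0.2],
                          [0.0, 0.2, 1.0]])
    overlapping = np.array([[1.0, 0.7, 0.3],
                            [0.6, 1.0, 0.7],
                            [0.3, 0.7, 1.0]])

    for name, M in (("separated", separated), ("overlapping", overlapping)):
        # condition number ~ worst-case factor by which the inversion
        # magnifies noise in the raw sensor readings
        print(name, "condition number:", round(float(np.linalg.cond(M)), 1))

Both matrices are invertible, so the bijection described above exists in both cases; the overlapping one simply pays for it with a much larger noise multiplier.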
I could be persuaded by such an argument, but this is a relatively
subtle argument that will depend upon a lot of data we just don't
have. Until then, I don't think it serves anybody to spread FUD.
There is a lot of data on Foveon that we do not have. I am not spreading FUD; I am just responding to all the HYPE.

--
Karl
 
My opinion, please read.
The natural counter-example would be the Canon D-30 sensor. It presumably would have had the same 1st-generation issues, but the noise was surprisingly well controlled. Other companies seem to have a better handle on this problem than Foveon. With all of their supposed expertise, it's easier to imagine that they have a harder problem than that they are less competent.

--
Erik
 
We could argue about this stuff all day... what you see... what I
see. The fact remains that it is an issue and needs to be
addressed in testing.
Actually, you responded to the least interesting part of the message.

I was more interested in the Foveon performance issues and the possibility of "solving" the Bayer artifacts by oversampling.
 
Exactly, Erik,

It is not that the Foveon guys are "dumb"; they appear to be very smart to have gotten this far. I don't know why it is so hard for people to understand that the X3 concept, while solving one problem, creates some new and unique problems, some of which are harder or more noise prone than what the Bayer people have to deal with.

Take the example of color response: Foveon has to deal with the properties of the silicon's separation of light, while the Bayer people can independently tune their filters with the dyes they use.

Another issue for the X3 is that there are 3 sets of transistors with 3 times the number of read lines per pixel. This is a MAJOR source of noise. I don't know the order in which they read, but one has to suspect that, whether read at the same time or at different times, reading one color has to affect the state of the other colors.

I'm not saying that any of this is unsolvable given time and technology, but like most new concepts, it creates some new problems along with the benefits it hoped to bring.

--
Karl
 
Hi Erik,

Canon made many digicams before the D30! They had at least 10 years of experience before making the D30.

My point is that you have to deal with noise at the SYSTEM LEVEL: from the sensor, analog amplification, and analog-to-digital conversion to the software, NOT ONLY THE SENSOR. Are you sure that Canon didn't use any kind of noise reduction algorithm (for normal exposure mode) in their Dxx series?

Seems to me that Foveon has something to deal with in the on-chip A/D converter and the analog layout structure of the photosite.

So
 
Hi Erik,

Canon made many digicams before the D30! They had at least 10
years of experience before making the D30.
Not in CMOS sensors. That is a whole different ball of wax from CCD. IIRC, in all previous cameras Canon used other vendors' sensors.
My point is that you have to deal with noise at the SYSTEM LEVEL:
from the sensor, analog amplification, and analog-to-digital
conversion to the software, NOT ONLY THE SENSOR.
So Foveon/Sigma could not manage to hire anyone who was competent in this field? This is not Foveon's first product. Presumably Sigma had the Foveon reference design to work from. From the F7 data sheet: "Foveon has pioneered the use of CMOS sensors for high quality image capture."
Are you sure that Canon didn't use
any kind of noise reduction algorithm (for normal exposure mode) in
their Dxx series?
I don't care how they did it. If the solution they chose was somehow not applicable to X3, then that would be the whole point, wouldn't it?
Seems to me that Foveon has something to deal with in the on-chip
A/D converter and the analog layout structure of the photosite.
So we are back to the same choices: either there is something different about X3 where the usual rules don't quite apply OR Foveon/Sigma are having trouble with something that seems to be a pretty standard problem that other vendors have solved.

It's pretty embarrassing when your marketing makes a big deal over how much more sensitive the X3 technology is supposed to be and the first product seems to be less sensitive than the competition.

--
Erik
 
Sorry,
Another issue for the X3 is that there are 3 sets of transistors
with 3 times the number of read lines per pixel. This is a MAJOR
source of noise. I don't know the order in which they read, but one
has to suspect that, whether read at the same time or at different
times, reading one color has to affect the state of the other colors.
Why would you make this statement? How can 3 sets of transistors and 3 read lines be MAJOR noise sources? It is just the same as reading the signal from 3 separate sensors.

Actually, at the same time, differential amplification techniques eliminate this issue easily.
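For readers who have not met the term, "differential amplification" here presumably means something like correlated double sampling: read each pixel twice (reset level, then signal level) and subtract, so that offsets and common-mode noise shared by both reads cancel. A toy model of why that helps (Python/NumPy; whether it also cancels the layer-to-layer coupling Karl describes is exactly the point in dispute):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    signal = 1000.0                  # nominal signal level
    offset = rng.normal(0, 50, n)    # common-mode / reset noise, shared
    read = rng.normal(0, 5, (2, n))  # independent noise on each sample

    reset_sample = offset + read[0]
    signal_sample = offset + signal + read[1]

    print("single-ended std:", signal_sample.std())  # ~50: offset dominates
    print("differential std:",
          (signal_sample - reset_sample).std())      # ~7: offset cancels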
Be careful please!
So.
 
Another issue for the X3 is that there are 3 sets of transistors
with 3 times the number of read lines per pixel. This is a MAJOR
source of noise. I don't know the order in which they read, but one
has to suspect that, whether read at the same time or at different
times, reading one color has to affect the state of the other colors.
Why would you make this statement?
Because it is true. In the case of the X3, the diodes and transistors are physically near and/or on top of each other.
How can 3 sets of transistors and 3
read lines be MAJOR noise sources?
As per the physical locality above, and the fact that the more read lines there are, the more charge is moved and thus the more noise sources there are.
It is just the same as reading the
signal from 3 separate sensors.
First, there are more of them and thus more noise. Second, they are physically closer to each other and thus there are more ways to have coupling.
Actually, at the same time, differential amplification techniques
eliminate this issue easily.
Well then, if it is so easy, we are back to WHY they are stuck at ISO 400. My answer is that it is a combination of effects and not their inability to do circuit design.
Be careful please!
I wish you would be more careful.
--
Karl
 
