FOVEON - Dead in the Water?

only 1/3 of a sample though,

It's a compromise: either interpolate between fewer true-colour samples, or between more highly compromised (66%) samples.

That's why I don't have a problem with Foveon claiming 10MP, because Bayer claims 6MP. Both are by interpolation. The problem is people have MP envy, and don't realise there is actually very little difference between a 10MP image and a 6MP image anyway (remembering that to double the resolution of a 6MP image, you need a 24MP image).
technically, I can't see why you can't interpolate just as well
between the pixels in Foveon as you can in bayer.
You can sample between any two points if you want. The difference
in Bayer is that they actually have a sample there, which means
that the demosaic engine has some information that allows it to
better distinguish where edges are.
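That "sample there" point can be sketched in code. This is an illustrative toy only, not any camera's actual demosaic: in an RGGB Bayer mosaic, half the sites carry a measured green value, and the missing green values are estimated from measured neighbours.

```python
import numpy as np

# Illustrative toy only, not any camera's actual demosaic: estimate
# green at red/blue sites of an RGGB Bayer mosaic by averaging the
# measured green neighbours.

def interpolate_green(raw):
    """raw: 2-D array of grey-level samples laid out as RGGB."""
    h, w = raw.shape
    green = np.zeros_like(raw, dtype=float)
    for y in range(h):
        for x in range(w):
            if (x + y) % 2 == 1:              # a real green sample here
                green[y, x] = raw[y, x]
            else:                             # red or blue site: interpolate
                neighbours = [raw[y + dy, x + dx]
                              for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                              if 0 <= y + dy < h and 0 <= x + dx < w]
                green[y, x] = sum(neighbours) / len(neighbours)
    return green
```

Real demosaic engines are far more sophisticated (edge-directed, gradient-corrected), but even this toy shows the structural difference: the estimate at each missing site is anchored by true samples immediately adjacent to it.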
Even
Sony are trying to find other ways to improve colour in CCDs with
their Cyan detector.
Or are they trying to get out from under patent licensing?

--
Thom Hogan
author, Nikon Field Guide & Nikon Flash Guide
author, Complete Guides to the Nikon D100, D1, D1h, & D1x and
Fujifilm S2
http://www.bythom.com
 
Bayers have had a few generations to improve,
Foveon (using the term loosely) is really in its infancy.
I thought we covered this already. Layered sensors didn't begin
with Foveon, they actually predate Bryce Bayer's work. Many of the
companies with Bayer (or other filter array pattern) sensors in
production today also have research (published and unpublished) and
even patents in layered sensors.
I haven't been in any development discussions before. Still, layered sensors would not have had as much development and production as Bayer has seen in recent years with the digital camera explosion.
Even
Sony are trying to find other ways to improve colour in CCDs with
their Cyan detector.
A very interesting development indeed, since 3 color (i.e. Bayer)
filter sensors already outperform silicon depth color separation
sensors (i.e. Foveon) in terms of color accuracy.
not from what I have seen
Give it 5-10 years and we'll all be using, or wanting, 'Foveon' type
design (even if it's not Foveon, assuming they don't survive...) in
our Pro cameras, and be ashamed at the quality we used to think was
great!
I think it's a basic SITS problem, and the known drawbacks of
filter array sensors are best solved by multiplicity. Make the CFA
cells smaller, and you can solve the chroma aliasing problems by
using cells small enough so that the lens resolution limit becomes
the antialiasing filter (in which case CFA and Foveon sensors are
on equal footing) or by employing alias-proof designs such as
pseudorandom filter layouts.

The solutions to the Foveon noise and color accuracy problems are
not so simple.
but hardly an impossible task. In technology, 5-10 years is a very long time. Do you think creating dual-layer writable DVD media was easy? You'll have it in your hands this year.
 
ah, stop and think about it, of course RAW data has no colour!
Each detector for each colour is the same type of detector; it
doesn't detect colour, it detects light levels. Only because
there are colour filters above the sensors does the post-processing
module (in camera or computer) convert the grey levels to a colour
level. So in pure RAW data, there is actually no
colour...
Not true with Foveon. Each layer is assigned to a certain color - that is the fundamental principle of that kind of sensor.
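For the CFA case described above, the grey-to-colour assignment can be sketched in a few lines. This is a sketch only: the raw file holds one grey value per photosite, and colour membership comes from the filter-pattern metadata, not from the sample itself.

```python
import numpy as np

# Sketch of the CFA case: one grey value per photosite, with colour
# assigned afterwards from the RGGB pattern metadata. The result is
# three sparse planes; a demosaic step would then fill the gaps.

def split_rggb(raw):
    h, w = raw.shape
    pattern = [["R", "G"], ["G", "B"]]          # RGGB tile
    planes = {c: np.zeros((h, w)) for c in "RGB"}
    for y in range(h):
        for x in range(w):
            planes[pattern[y % 2][x % 2]][y, x] = raw[y, x]
    return planes
```

Note that nothing in `raw` itself says "red" or "green"; only the pattern lookup does, which is exactly the point made above.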

--
no text
 
You see, that's the problem: it's not a 3MP camera. You call your Bayer 6MP when it is no more 6MP than the Foveon is 10MP. The difference is you are brainwashed into your interpretation of MP by the dominance of Bayer sensors, so you have allowed it to become a standard in your mind (resolution, not sensor count, irrespective of what the sensor senses), even though it is not a formally recognised standard in any industry.

I do agree though, I wish they would release a full frame, 6x3MP sensor, and Nikon would let it into one of their bodies...

If Foveon doesn't crash and burn due to marketing, hopefully Nikon will buy it from the liquidator and allow them to develop it with the LBCAST and end up with the best of both worlds... Nikon have the R&D capabilities, to make the best from it, and I don't want Canon with it ;-)

PS: I don't understand why Sigma doesn't make Nikon and Canon mount versions of their SD10. They cut themselves off from a HUGE market opportunity by using their own lens mount. It would be worth having two body variants for the extra sales it would generate.
Why not call it what it is: the very best 3mp camera ever made,
with qualities that make it "comparable" to 6mp DSLRs. Of course
both systems have their flaws and strengths, but if Foveon would
get their act together and make sensors at the same MP output size
as Bayer there would not be any of these arguments. A 6mp X3 would
blow away a 6mp Bayer. But right now, there is no reason to get
excited over X3 technology. You did the right thing buying into the
Nikon lens system over the Sigma lens system. Would it be nice if
there were a Foveon-based Nikon DSLR? Sure, but there is no pressure
to do so when the stuff Nikon currently has is just as good. That
is why Sigma is doomed to be the only X3 player for some time. Fuji
has their own amazing Super CCD, Nikon now has LBCAST, and Canon
has their wonderful CMOS sensors. They are all superb. What does
Foveon bring to the table? Nothing really at the current state of
things.

Regards,
Sean
 
ah, stop and think about it, of course RAW data has no colour!
Each detector for each colour is the same type of detector; it
doesn't detect colour, it detects light levels. Only because
there are colour filters above the sensors does the post-processing
module (in camera or computer) convert the grey levels to a colour
level. So in pure RAW data, there is actually no
colour...
We know.

You missed the point.

The point is that the derivative of the hue and saturation in an RGB triplet WRT changes in the source is very small.

Ironically, the main strength of the X3 approach is as a luminance detector. The main weakness is how poorly it measures saturation and hue.

Contrary to what people think, Bayer pattern sensors are relatively poor luminance detectors, since each pixel misses a good chunk of the incident spectrum. However, they are fairly reliable detectors of (low spatial frequency) color.

These roles are essentially reverse from what many people in the forum seem to think.

(Yes, I realize that color crosstalk is still an issue from the perspective of perceived luminance since not all wavelengths contribute equally to our perception of luminance.)
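That caveat about unequal wavelength contributions can be made concrete. The weights below are the standard Rec. 709 luma coefficients, used here purely as an illustration; a camera's pipeline would use its own calibrated values.

```python
# Sketch of the caveat above: perceived luminance is a weighted sum
# of R, G and B, and the weights are far from equal (Rec. 709 luma
# coefficients shown; illustrative, not any camera's actual numbers).

def luminance(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b
```

A pure green patch contributes roughly ten times as much to perceived luminance as an equally intense blue one (0.7152 vs 0.0722), which is why missing a "good chunk of the incident spectrum" hurts some channels' luminance contribution far more than others.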

--
Ron Parr
FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
Hey Joseph,

Joseph S. Wisniewski wrote:
...>
Third, it means an increase in sensor size, from 1.7x to 1.3x. Now,
a lot of photographers would love this. But you just have to
remember that if you nearly double sensor size, sensor cost is
going to increase. It's probably going to more than double (weird
relationship between chip size and "yield", the number of chips
that turn out good).
Very true, not to mention that we would lose our free longer
telephotos :)
not true. If you increase sensor size by simply adding more photosites around the current sensor, you can still crop the image and end up with the same image you have today with your 'extended' telephoto...

Also, as with all technologies, costs come down over time and yields increase. Imagine trying to make a 3GHz CPU 10 years ago when we were burning CPUs at 100MHz.

As you say though, Foveon's problem is staying alive and keeping up with Bayer's future development.

...
Regards,
Sean
 
Hey Hamstor...
not true. If you increase sensor size by simply adding more
photosites around the current sensor, you can still crop the image
and end up with the same image you have today with your 'extended'
telephoto...
Depends on how many pixels get added... but yes, you are correct.

Regards,
Sean
 
Also, as with all technologies, costs come down over time and
yields increase. Imagine trying to make a 3GHz CPU 10 years ago
when we were burning CPUs at 100MHz.
When a new process technology is developed, yields decrease and then increase up to a fairly stable rate. The yield rate after stabilization hasn't improved very quickly at all.

Area for a typical CPU hasn't increased significantly in the past 10 years. In some cases it has gone down. Most CPU chips today are smaller than the first Pentiums.
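The area/yield/cost relationship behind this exchange can be sketched with the simple Poisson defect model, yield = exp(-D * A). The defect density and wafer figures below are made-up illustrative numbers, not any real fab's data.

```python
import math

# Rough sketch of why larger dies cost disproportionately more, using
# the simple Poisson defect model: yield = exp(-D * A). Defect density
# and wafer figures are illustrative placeholders only.

def cost_per_good_die(area_cm2, defect_density=0.5, wafer_cost=1000.0,
                      wafer_area=700.0):
    good_fraction = math.exp(-defect_density * area_cm2)
    dies_per_wafer = wafer_area / area_cm2      # ignores edge losses
    return wafer_cost / (dies_per_wafer * good_fraction)
```

Under this model, cost per good die scales as A * exp(D * A), so doubling die area always more than doubles the cost, which is the "weird relationship between chip size and yield" mentioned earlier in the thread.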

--
Ron Parr
FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
I read Canon is in the works, but not Nikon.
What kind of adapter do you mean? Does it keep AF? What about AF-S lenses? What about lenses that do not have the aperture ring any more? Or only for certain lenses. Do you have a link? It sounds very interesting
Ya got me swinging. I read about this over on the Sigma forum.
Maybe try a search to see if it's suitable or not for real world
use. I think the Canon one is still being worked on.

Stan
 
That's a comparison between frame transfer and interline transfer CCD technology. The game is totally different for CMOS.
Okay. So how would LBcast fit into this situation...I understand
that it's not quite in either the ccd or cmos category. Any
thoughts?
I have the impression that LBcast is basically an active pixel CMOS sensor with the MOSFET amplifier replaced with a JFET amplifier. However, this is based on a quick reading of Nikon's press releases and few hard details. AFAIK, Nikon hasn't offered any...

--
Ron Parr
FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
no you don't, the transistors/junctions etc. can be layered, just as the detectors are.

If they were all on the one layer, then you would be right, but they aren't.
you ask for 'proof' yet you provide none yourself

the size is all relative; it doesn't matter if the pixel size is
actually larger than the photodetector, as Bayer is no different, so
your point seems irrelevant
Pixels in APS CMOS sensors lose some of their sensing area to
transistors, which reside on the surface of the chip. If you have
more photodiodes per pixel, you need more transistors per pixel,
which means that you sacrifice a larger fraction of the pixel area
to transistors.

--
Ron Parr
FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
hmm, no
say (with 1.5 aspect ratio)
6MP = 2000x3000
double vertical and horizontal resolution
4000x6000 = 24MP

it's a 2-squared (4x MP) relationship, not an MP-squared relationship
36MP = 4900x7350 (approx)
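The arithmetic above, spelled out:

```python
# Doubling both linear dimensions quadruples the pixel count
# (a 2-squared relationship).

w, h = 3000, 2000                        # 6MP at 3:2
assert w * h == 6_000_000
assert (2 * w) * (2 * h) == 24_000_000   # 2x linear resolution -> 4x MP
```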
The problem is people
have MP envy, and don't realise there is actually very little
difference between a 10MP image and a 6MP image anyway (remembering
to double the resolution in a 6MP image, you need a 24MP image).
To double the linear resolution of a 6MP image, you need a 36MP image.

--
Ron Parr
FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
Bayers have had a few generations to improve,
Foveon (using the term loosely) is really in its infancy.
I thought we covered this already. Layered sensors didn't begin
with Foveon, they actually predate Bryce Bayer's work. Many of the
companies with Bayer (or other filter array pattern) sensors in
production today also have research (published and unpublished) and
even patents in layered sensors.
I haven't been in any development discussions before. Still,
layered sensors would not have had as much development and
production as Bayer has seen in recent years with the digital
camera explosion.
The first path that yielded a decent, working solution "won", for lack of a better term.
Even
Sony are trying to find other ways to improve colour in CCDs with
their Cyan detector.
A very interesting development indeed, since 3 color (i.e. Bayer)
filter sensors already outperform silicon depth color separation
sensors (i.e. Foveon) in terms of color accuracy.
not from what I have seen
Definitely, from what I've measured and calculated. The Foveon and Sigma folks may have tweaked their software to produce more pleasing colors (in many, but not all, situations) but it is considerably less accurate. The Foveon sensor limits how accurately color can be recorded because the three values from the sensor are not colorimetric; there are metamerism problems. Two colors that appear identical to a human may appear wildly different to the sensor, and two colors that appear different to a human may appear identical to the sensor.

So you can tweak and fudge and apply profiles, curves, etc. and maybe get a correction where the sky is a natural shade of blue in one situation, but then a portrait subject's sky blue eyeshadow becomes turquoise.
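The metamerism point can be illustrated with a toy calculation, with all numbers made up for illustration: once two different stimuli produce identical sensor triplets, no 3x3 correction matrix (or any other function of the triplet alone) can render them as the different colours a human sees.

```python
import numpy as np

# Toy illustration: two metameric stimuli that the sensor records as
# the same triplet cannot be separated by any later correction.
# All values here are invented for illustration only.

sensor_a = np.array([0.40, 0.35, 0.30])   # stimulus A as the sensor sees it
sensor_b = np.array([0.40, 0.35, 0.30])   # metameric stimulus B: same triplet

M = np.array([[ 1.2, -0.1,  0.0],         # some chosen correction matrix
              [-0.2,  1.3, -0.1],
              [ 0.0, -0.2,  1.2]])

out_a, out_b = M @ sensor_a, M @ sensor_b  # identical, whatever M is chosen
```

Profiles and curves can only move the pair together, which is exactly the sky-versus-eyeshadow trade-off described above.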
Give it 5-10 years and we'll all be using, or wanting, 'Foveon' type
design (even if it's not Foveon, assuming they don't survive...) in
our Pro cameras, and be ashamed at the quality we used to think was
great!
I think it's a basic SITS problem, and the known drawbacks of
filter array sensors are best solved by multiplicity. Make the CFA
cells smaller, and you can solve the chroma aliasing problems by
using cells small enough so that the lens resolution limit becomes
the antialiasing filter (in which case CFA and Foveon sensors are
on equal footing) or by employing alias-proof designs such as
pseudorandom filter layouts.

The solutions to the Foveon noise and color accuracy problems are
not so simple.
but hardly an impossible task. In technology, 5-10 years is a
very long time. Do you think creating dual-layer writable DVD
media was easy? You'll have it in your hands this year.
This is because 9 gig writables score very high in market research, so the big guns like Philips and Sony are motivated to spend hundreds of millions on it.

Where's the pull for expensive R&D projects to get that Foveon noise down?

If you're a camera company, and you design two equal-cost point-and-shoot digicams, one with the 1.5mp Foveon chip and one with a 4.5mp Bayer (probably have to choose between either 4 or 5), and put them in a market research clinic, how many consumers will even be able to tell the difference?

Or if you try a similar research project between 10.5 "megasomethings" DSLRs, a 3.5mp Foveon and a 10.5mp Bayer. (Equal "megasomethings" means equal processor costs, equal frame rate, etc.)

p.s. YOU might have a dual layer DVD writer in your hands this year. I have a big lab and an "infotainment" department. Care to guess what I've got already ;)

--
Ciao!

Joe

http://www.swissarmyfork.com
 
I want a 1:1 aspect ratio so that I can crop to taste.
I wasn't aware that I could not crop my 3:2 aspect ratio pictures to taste. ;)

Seriously, a square sensor is very wasteful. Unless your final output is square, a square always gets cropped. 20% loss on an 8x10 is the best case. 23% loss on a full page. 35% loss on a tabloid. The "average loss" is going to be somewhere around 28%.

A 3:2 rectangular format cuts your losses dramatically. The 8x12 only loses 17% on the worst-case 8x10, 3% on the tabloid. On average, you lose about 8% of your expensive pixels.

The 4:3 loses about 8% for either 8x10 or tabloid. And the average loss is something on the order of 2%. (OK, maybe those "ideal format" and 4:3 folks are on to something after all).
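The crop-loss figures above follow from a little geometry. A sketch (sensor assumed landscape-oriented, print rotated to match):

```python
# Fraction of sensor area thrown away when cropping one aspect ratio
# to fit another (sensor assumed landscape; print rotated to match).

def crop_loss(sensor_w, sensor_h, print_w, print_h):
    sensor = sensor_w / sensor_h
    target = max(print_w, print_h) / min(print_w, print_h)
    kept = target / sensor if sensor > target else sensor / target
    return 1.0 - kept
```

For example, crop_loss(1, 1, 10, 8) gives 0.20, the 20% square-to-8x10 loss quoted above, and crop_loss(3, 2, 17, 11) gives about 0.03, the 3% tabloid figure.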

--
Ciao!

Joe

http://www.swissarmyfork.com
 
