Sensor ?

Hmmmm ... this was a faulty example IMHO. The strange look is due to
excessive noise reduction. No low-noise CFA image looks anything
like this.
It's from an 8MP 1/2.3" sensor at ISO 800. Imagine what an X3-equivalent 4x3 MP 1/2.3" sensor (i.e. a pixel pitch of about 2.5 µm) would look like at ISO 800.
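For anyone checking those numbers: the pitch follows from the active sensor area and the spatial pixel count. A quick sketch (the ~6.17 x 4.55 mm area assumed here for a 1/2.3" sensor is a common approximation and varies by manufacturer):

```python
import math

def pixel_pitch_um(sensor_w_mm, sensor_h_mm, megapixels):
    """Approximate pixel pitch in micrometres, assuming square pixels."""
    area_um2 = (sensor_w_mm * 1000.0) * (sensor_h_mm * 1000.0)
    return math.sqrt(area_um2 / (megapixels * 1e6))

# 8 MP Bayer on 1/2.3" -> roughly 1.9 um pitch
print(round(pixel_pitch_um(6.17, 4.55, 8.0), 2))
# 4 MP spatial locations (X3-style) on 1/2.3" -> roughly 2.6 um pitch
print(round(pixel_pitch_um(6.17, 4.55, 4.0), 2))
```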

--
Erik
 
If the SD14 is considered a 4.7 MP sensor (capturing 3 colors at each
pixel), then wouldn't a 12 MP Bayer sensor really be 4.0 MP x 3
colors = 12? I have a new SD14 with an 18-50 EX and also own a
10D and a 20D, and having taken identical shots I can see why people
love this sensor. Yes, the body is not as perfected or polished as a
Canon or Nikon, but that IQ is impressive and the color is spot on.
Such large files for 4.7 MP; lots of bits.
--

I really don't like how the term 'pixel' is used for sensors. I would be much happier if they used the same term as for monitors: one pixel is an array of 3 subpixels (red, green, blue). See Wikipedia: http://en.wikipedia.org/wiki/Pixel

Then a Sigma SD14 would be a 4.7 Mpixel camera with 3 subpixels per pixel. A 14 Mpixel camera would be a 3.5 Mpixel camera with 4 subpixels per pixel (2 green, 1 red, 1 blue).

I think it would also be more honest towards the customer, as monitors are also sold counting pixels where one pixel holds all primary colours, i.e. 3 subpixels. No one would accept an LCD monitor sold with the count of the subpixels instead of the pixels.
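The arithmetic behind that convention, sketched out (treating a 2x2 GRBG cell as one Bayer "pixel" is just the convention proposed here, not an established standard):

```python
# Foveon X3: each spatial location stacks 3 subpixels (R, G, B)
sd14_locations = 4.7e6
sd14_subpixels = sd14_locations * 3          # 14.1 million photosites

# Bayer: the advertised "14 MP" already counts individual photosites
bayer_subpixels = 14e6
bayer_pixels = bayer_subpixels / 4           # one 2x2 GRBG cell per "pixel"

print(sd14_subpixels / 1e6)   # 14.1
print(bayer_pixels / 1e6)     # 3.5
```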
 
I really don't like how the term 'pixel' is used for sensors. I would
be much happier if they used the same term as for monitors: one
pixel is an array of 3 subpixels (red, green, blue). See Wikipedia:
http://en.wikipedia.org/wiki/Pixel

Then a Sigma SD14 would be a 4.7 Mpixel camera with 3 subpixels per
pixel. A 14 Mpixel camera would be a 3.5 Mpixel camera with 4
subpixels per pixel (2 green, 1 red, 1 blue).
That is what I would like too. So, say, the G9 would be advertised as 12 mega-subpixels,
and the DP1 as 14 mega-subpixels.

Counting 4 subpixels as one pixel in a Bayer CFA is a bit problematic, though, since they contribute to neighbouring pixels too.
I think it would also be more honest towards the customer, as monitors
are also sold counting pixels where one pixel holds all primary
colours, i.e. 3 subpixels. No one would accept an LCD monitor sold
with the count of the subpixels instead of the pixels.
I would not be happy if every pixel of my 1600x1200 LCD monitor were only a single (R, G or B) color.
 
I really don't like how the term 'pixel' is used for sensors. I would
be much happier if they used the same term as for monitors: one
pixel is an array of 3 subpixels (red, green, blue). See Wikipedia:
http://en.wikipedia.org/wiki/Pixel
A monitor is a bad example. Both methods of counting pixels are used, and there are several configurations of how color is created - at least for CRT monitors. For CRT monitors you have DPI and not PPI - because the color dots are not aligned to the pixels at all.

Use the example of a color image file instead - then you get the right notion.
Then a Sigma SD14 would be a 4.7 Mpixel camera with 3 subpixels per
pixel. A 14 Mpixel camera would be a 3.5 Mpixel camera with 4
subpixels per pixel (2 green, 1 red, 1 blue).
I fully agree on the Sigma. But dividing Bayer by 3 is strange IMHO. There exists no natural divisor, so in practice 1 has to be used.

This makes it impossible to compare Foveon and Bayer using pixel counting, which is a good thing IMHO.
I think it would also be more honest towards the customer, as monitors
are also sold counting pixels where one pixel holds all primary
colours, i.e. 3 subpixels. No one would accept an LCD monitor sold
with the count of the subpixels instead of the pixels.
Yet they do. Care to try to count the pixels of the LCD screen on your camera?

--
Roland
 
It's from an 8MP 1/2.3" sensor at ISO 800.
No it is not - at least not unmodified. It is heavily smoothed - like using some artistic filter in Photoshop. I assume that the original image is very noisy - but sharp. I would rather have that to start with and do my own noise reduction.
Imagine what an X3-equivalent
4x3 MP 1/2.3" sensor (i.e. a pixel pitch of about 2.5 µm) would
look like at ISO 800.
What was the pitch of the "Polaroid" camera? Was it 5 µm?

--
Roland
 
Then you have to take the conversion algorithm into account.
According to a white paper on Foveon's web pages - the conversion is
based on using the middle layer for luminosity and then extracting
downsampled difference channels for color information.
As far as I remember, this white paper clearly states that this approach was intended for fast processing with limited computation power, while high-end cameras and PC RAW conversion software would use different methods.

IMHO this is clearly aimed at compact DSC / cell phone applications and not SLRs.

Furthermore, consider that this article is much newer than the SD9 or SD10, and as we know the SD14 raw files still worked with the old raw processors (just with less heavy noise reduction), so it seems very unlikely that they actually used this in any of the current cameras.

And btw, another key aspect is that with the method proposed in the paper one starts with "full" sampling of luma and chroma information and then blurs the chroma a bit. The impact of this is quite different from starting with an incomplete sampling of luma and chroma and interpolating most of it.
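To illustrate that first approach, here is a toy sketch: keep luma at full resolution, downsample the colour-difference channels, then upsample them back. This is only a YCbCr-style analogy, not Foveon's actual math:

```python
import numpy as np

# Toy full-res-luma / downsampled-chroma pipeline on a random 8x8 RGB image.
rng = np.random.default_rng(0)
rgb = rng.random((8, 8, 3))

luma = rgb.mean(axis=2)              # stand-in for the luminance channel
chroma = rgb - luma[..., None]       # colour-difference channels

# 2x2 box downsample of chroma, then nearest-neighbour upsample back
small = chroma.reshape(4, 2, 4, 2, 3).mean(axis=(1, 3))
blurred = small.repeat(2, axis=0).repeat(2, axis=1)

recon = luma[..., None] + blurred    # full-res luma, softened chroma
print(recon.shape)                   # (8, 8, 3)
```

Note that the reconstruction keeps luma exactly while only the colour is blurred, which is why this kind of processing can look sharp yet lose fine colour detail.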

--
http://www.pbase.com/dgross

 
As far as I remember, this white paper clearly states that this
approach was intended for fast processing with limited
computation power, while high-end cameras and PC RAW conversion
software would use different methods.
Actually - I don't think so. The paper is slightly fuzzy about this. Yes - it says that the method is suitable to use in smaller cameras. But - it also states that the downsampling is important for noise reduction. So ...
Furthermore consider that this article is much newer than the SD9 or
SD10, and as we know the SD14 raw files still worked with the old raw
processors (just with less heavy noise reduction), so it seems very
unlikely that they actually used this in any of the current cameras.
I don't understand your reasoning there. How can the fact that the paper is new make it likely that new cameras do not use what's in it? And ... the format of the RAW files has nothing to do with the conversion from RAW to RGB.
And btw. another key aspect is that with the method proposed in the
paper one starts with "full" sampling of luma and chroma information
and then blurs the chroma a bit. The impact of this is quite
different from starting with a incomplete sampling of luma and chroma
and interpolating most of it.
Yes - that's right. I just wanted to give the full picture. Some think Foveon images are non-AA, full native RGB images - without any trace of blurring except what the lens does. They are not.

--
Roland
 
Now - I think you are trying to be difficult - and choose not to
understand what I mean :)
No, I'm emphasizing that anything coming out of X3 processing (particularly in-camera processing) for similar sensor size will also be heavily modified.
But - what we saw there was heavily modified.
Of course. No one has yet managed to get good per-pixel detail out of tiny pixels.

--
Erik
 
Of course. No one has yet managed to get good per-pixel detail out of
tiny pixels.
In good lighting my Fujifilm F20 makes rather nice images - but when it gets dimmer, the image looks like an acrylic painting.

--
Roland
 
As far as I remember, this white paper clearly states that this
approach was intended for fast processing with limited
computation power, while high-end cameras and PC RAW conversion
software would use different methods.
Actually - I don't think so. The paper is slightly fuzzy about this.
Yes - it says that the method is suitable to use in smaller cameras.
As far as I remember, it describes that raw processing so far amplifies noise because of the RGB conversion and therefore requires advanced noise reduction.

It also says that for RAW processing such multipass methods are used. So I don't really see how to read into this that it is used for the SD14 / DP1.
Furthermore consider that this article is much newer than the SD9 or
SD10, and as we know the SD14 raw files still worked with the old raw
processors (just with less heavy noise reduction), so it seems very
unlikely that they actually used this in any of the current cameras.
I don't understand your reasoning there. How can the fact that the
paper is new make it likely that new cameras do not use what's in it?
There is also some evidence that they don't use it. First of all, processing takes even longer on the SD14 than on previous cameras, suggesting that there is even more processing. The resulting images from the SD14 look like the old processing with better noise reduction on top, etc.
Yes - that's right. I just wanted to give the full picture. Some
think Foveon images are non-AA, full native RGB images - without any
trace of blurring except what the lens does. They are not.
It is interesting that you state this as fact when you have no hard evidence.
To me all this seems very unlikely for the SD14 / DP1.

Should be possible to test this with a synthetic raw file...

--
http://www.pbase.com/dgross

 
Yes - that's right. I just wanted to give the full picture. Some
think Foveon images are non-AA, full native RGB images - without any
trace of blurring except what the lens does. They are not.
It is interesting that you state this as fact when you have no hard
evidence.
Hmmmm ... is it a native RGB sensor?

OK - OK - that's nitpicking :) You meant that I have no hard evidence that they do chroma blur. But - I meant exactly what I wrote (including the RGB part) and I do not want to state 100% that they do any chroma blurring - because I do not know that.

I would be VERY surprised if the noise reduction they DO have does NOT include any kind of chroma blurring.
To me all this seems very unlikely for the SD14 / DP1.
Hmmmm ... unlikely that they use (1) the two-pipeline method, (2) chroma downsampling, and/or (3) chroma blurring?
Should be possible to test this with a synthetic raw file...
And - as a matter of fact - I do have a program that can easily be adapted to generate such a synthetic file :) What test pattern do you suggest?

Just a caveat there. The RAW compression method Sigma uses is not lossless. It is based on progressive delta coding - and some ringing might be possible at edges. (But come to think of it - if all lines are horizontal, then the coding will be lossless.)
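For intuition on that parenthesis, here is a generic row-wise delta coder (not Sigma's actual format): plain integer delta coding round-trips losslessly, and for an image of pure horizontal bands nearly every delta row is zero, so there is nothing for a lossy stage to damage:

```python
def delta_encode(rows):
    """Store each row as its difference from the previous row."""
    prev = [0] * len(rows[0])
    out = []
    for row in rows:
        out.append([a - b for a, b in zip(row, prev)])
        prev = row
    return out

def delta_decode(deltas):
    """Invert delta_encode by accumulating the differences."""
    prev = [0] * len(deltas[0])
    rows = []
    for d in deltas:
        prev = [a + b for a, b in zip(d, prev)]
        rows.append(list(prev))
    return rows

img = [[10, 10, 10], [10, 10, 10], [20, 20, 20]]   # horizontal bands
enc = delta_encode(img)
assert delta_decode(enc) == img    # integer delta coding is lossless
print(enc)                         # [[10, 10, 10], [0, 0, 0], [10, 10, 10]]
```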

--
Roland
 
