Latest news about FF sigma sensor: dual pixel design

FDecker Senior Member • Posts: 1,652
Re: Latest news about FF sigma sensor: dual pixel design

Rudi wrote:

Scottelly wrote:

Rudi wrote:

Scottelly wrote:

xpatUSA wrote:

Kim Yee wrote:

For me, it's good news. Meaning there's new life in the Sigma camera and its ideology, though changed, or nicely put, 'evolved'.

This RB zebra combination sensor may need demosaicing, but it might turn out to be a very easy and simple algorithm compared to a Bayer CFA sensor (I have no idea how Bayer works, but it should be a harder algorithm, given the 4x4 or even 6x6 of the Fujifilm X-Trans). This is just a 1/2 x 1/2 algorithm, and it's all in an orderly fashion. Won't that be better?

So if this is true, won't this be a 2:1:1 'color info' machine? It's just an evolution/compromise, or whatever, of the Merrill.

True, it's sad that it's no longer a three-layer tech, but I personally think the Merrill is good but too niche, which makes it less marketable.

As @FDecker says, it is impossible to make a trichromatic "RGB" sensor without three layers, sorry Kim.

It has three layers, but it just is not capturing information from all three layers at any one spot, Ted, from what I can tell.

No ???

Even if the illustration in the OP were of an actually proposed physical sensor, the depths shown for the photocells are incorrect and extremely misleading.

Foveon's system works with depths of 0.2 µm, 3.2 µm and 8 µm, and this has not changed much since the F7 in the SD9 camera. In fact, Foveon has stated publicly that the Quattro depths are no different from those of the Merrill.

Thanks for the info, Ted.

I recommend this paper by Gilblom et al. to readers of this thread for a fuller understanding of the relationship between depth and layer spectral sensitivity - it beats guesswork and hearsay any time.
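For anyone who doesn't want to chase the paper: the depth/color relationship comes from Beer-Lambert absorption in silicon - longer wavelengths penetrate deeper. Here is a toy sketch, using rough illustrative absorption lengths (my numbers, not Foveon's measured responses):

```python
import math

# Rough, illustrative absorption lengths (1/alpha) for silicon, in microns.
# Real values vary with temperature and doping; see Gilblom et al. for the
# measured Foveon layer responses.
absorption_length_um = {"450nm_blue": 0.4, "550nm_green": 1.5, "650nm_red": 3.5}

# Junction depths quoted above for the Foveon stack (microns).
depths_um = [0.0, 0.2, 3.2, 8.0]

def absorbed_between(length, top, bottom):
    """Beer-Lambert: fraction of incoming photons absorbed between two depths."""
    return math.exp(-top / length) - math.exp(-bottom / length)

for band, length in absorption_length_um.items():
    shares = [absorbed_between(length, depths_um[i], depths_um[i + 1])
              for i in range(3)]
    print(band, ["%.2f" % s for s in shares])
```

The output makes the key point: the three layers' spectral responses overlap heavily. Depth gives each layer a bias toward blue, green or red, not a clean separation, which is why a strong (noise-amplifying) color matrix is needed downstream.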

Rudi, I don't know what that is, but the way I have been visualizing the new design is as a sensor with three layers: every pixel on the middle/green layer is captured; on the top/blue layer, every other pixel is captured horizontally; and on the bottom/red layer, the pixels captured are the ones below the blue-layer pixels that are NOT captured. In other words, there are alternating stripes of blue pixels and red pixels being captured, so at each pixel location only two colors are measured - green plus either blue or red.

It also means that a red pixel is captured next to every pixel, and a blue pixel is captured next to every pixel. Half the pixels are made up of blue and green, and their red component can be averaged from the two red pixels adjacent to that pixel. The same goes for the red pixels: every red pixel has an associated green pixel above it, and its blue component can be averaged from the two adjacent blue pixels. This would no doubt offer less noise. Would this mean the color detail would be better than a Quattro sensor offers? I don't know, but I would guess it is significantly better. It might be about the same, but if there are other benefits while maintaining Quattro image quality, then the sensor is worth producing.

Another thing to consider: if low-res mode bins four pixel locations, then each binned pixel can be made from two red, two blue, and four green samples. That means an average of two red samples can be used to produce the low-res red value, rather than just one red pixel, which could be noisy. The same goes for blue. The green value would be the average of four, the way today's Quattro works for blue in low-res mode. Ultimately this seems to me like a better way to bin, producing slightly less noise by averaging it out to a degree.

Add the speed-up, and the ability to process the data as if the sensor were a Bayer-pattern CFA sensor - for feeding the viewfinder and a video-compression chip when capturing video - and this could all be a major benefit, if the sensor can handle the heat that all the processing may produce. I don't know whether the data coming off the sensor actually produces much heat, though. I haven't noticed my SD Quattro H getting hot unless I shoot a lot of photos in a short period of time, so I suspect the heat is generated by the processor, not the sensor.
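If I follow Scott's description, here is a minimal sketch of that interpolation (my reading of the layout, not anything Sigma has confirmed): green measured everywhere, blue and red in alternating columns, and each missing chroma value filled by averaging its neighbouring stripes:

```python
import numpy as np

def demosaic_zebra(green, blue_stripes, red_stripes):
    """green: H x W plane, measured at every location.
    blue_stripes: H x (W/2) plane, the even columns of the top layer.
    red_stripes:  H x (W/2) plane, the odd columns of the bottom layer."""
    h, w = green.shape
    blue = np.zeros((h, w))
    red = np.zeros((h, w))
    blue[:, 0::2] = blue_stripes   # measured blue columns
    red[:, 1::2] = red_stripes     # measured red columns
    for x in range(w):
        missing = blue if x % 2 else red      # channel not measured here
        nbrs = [n for n in (x - 1, x + 1) if 0 <= n < w]
        # Average the one or two adjacent measured stripes.
        missing[:, x] = sum(missing[:, n] for n in nbrs) / len(nbrs)
    return np.dstack([red, green, blue])
```

Note that every location ends up with one measured green, one measured chroma, and one interpolated chroma - which is where the "guess one channel in three" figure further down comes from.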

Well Scott, did you read the paper Ted referred to?

What you're talking about is the new sensor concept they presented.

... that means they have to guess at least 30% of the color values in the captured image (one of the three channels at every location, i.e. about 33%), whereas the three-layer design gives you an exact, direct measurement for each pixel location.

The larger your pixel, the higher its signal output, which means you can derive better/higher signal values for your RGB output. The problem with the three-layer design is the relatively low signal from small pixels, like those in APS-C-size sensors (processing noise). With larger pixels you can realize better color information (a better S/N ratio).
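A toy shot-noise comparison makes that concrete (all numbers are assumed for illustration - the pixel pitches are my guesses, and this ignores read noise and the color-matrix amplification a layered sensor needs):

```python
import math

electrons_per_um2 = 100   # assumed exposure level

# Assumed pitches: ~4.3 um (Quattro-like APS-C top layer) vs 6 um (24 Mpx FF).
for name, pitch_um in [("APS-C", 4.3), ("full frame", 6.0)]:
    signal = electrons_per_um2 * pitch_um ** 2   # signal scales with pixel area
    snr = signal / math.sqrt(signal)             # shot noise = sqrt(signal)
    print(f"{name}: {signal:.0f} e-, shot-noise SNR {20 * math.log10(snr):.1f} dB")
```

With these assumptions the full-frame pixel comes out roughly 3 dB ahead on shot noise before any processing - modest, but it compounds once the color matrix amplifies the noise.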

So I assume the problem that makes the whole thing slow and generates heat is the amount of data to be handled by the processing circuitry, and that might be even more the case if they make a 24 x 36 mm sensor with 24 Mpx (too much information to be processed at the same time). This would require the development of an all-new (multicore) processing engine with higher processing speed and lower power consumption - not so cheap to realize. So they decided to go the easy way with the Bayveon sensor design.
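Back-of-envelope on that data load (the 12-bit ADC depth and 30 fps live-view rate are my illustrative assumptions, not from the patent):

```python
mpix = 24e6     # proposed full-frame resolution
layers = 3      # a Foveon-style stack reads three values per location
bits = 12       # assumed ADC depth
fps = 30        # assumed live-view / video rate

gbytes_per_s = mpix * layers * bits * fps / 8e9
print(f"{gbytes_per_s:.1f} GB/s raw off the sensor")   # ~3.2 GB/s
```

Compared with a Bayer readout of one value per location, the stack triples the raw bandwidth, which supports the point about processing speed and heat.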

Rudi.

Once again... It seems to be a patent for a higher-speed processing pipeline for the existing Foveon three-layer sensor, NOT a new sensor.
