Erik Magnuson
Guest
> Why do X3F raw images have to be processed sequentially? After all,
> unlike Bayer sensors, photosites aren't dependent on the value of
> neighboring photosites for their values.

But the colors are dependent on the other colors. If they were going to process in parallel, they would process pixels in parallel, not colors. There is also more going on than the simple 3x3 color conversion: sharpening, blurring, and (gasp) noise reduction. The preview bitmap of an SD9/SD10/x530 appears to be what you get from a simple color-matrix operation; it's interesting to compare that to SPP output.
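To illustrate the point about the 3x3 conversion, here is a minimal NumPy sketch (the matrix values are invented for illustration, not Sigma's actual coefficients). Each output channel mixes all three layer readings at a photosite, which is why the colors can't simply be farmed out to independent per-color processors:

```python
import numpy as np

# Hypothetical 3x3 conversion matrix (made-up values, not Sigma's):
# each output channel is a weighted mix of all three layer signals.
M = np.array([
    [ 1.6, -0.4, -0.2],
    [-0.3,  1.5, -0.2],
    [-0.1, -0.5,  1.6],
])

def convert(raw):
    """Apply the 3x3 color matrix to an H x W x 3 image."""
    h, w, _ = raw.shape
    # Each output value mixes the three layer values at one photosite:
    # pixels are independent of each other, but colors are not.
    return (raw.reshape(-1, 3) @ M.T).reshape(h, w, 3)
```

Feed it a pure "green" signal and you get non-zero red and blue out: the cross-channel dependence in a nutshell.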
> Why couldn't Sigma use more than one processor to process the
> signal in parallel? Maybe use three of 'em... one for each color?
The idea is to reduce the parts count, not increase it. The way everyone else does it is to use a single chip that contains both a regular CPU core and some specialised signal-processing logic. (It also looks like the SD14 is a smaller body: no room for three processors, much less the power/heat budget.)

> Three of the current processors used in the SD10 have got to be a
> lot cheaper than three brand-new, super-fast processors...
> In short, adding more buffer memory and a couple of processors, and
> some smart software engineering could make the SD14 a very fast
> camera indeed.

If Sigma did not invest in the mechanical R&D for a mirror/shutter faster than 3 FPS, then there is little point in faster processing.
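For what it's worth, the "parallel by pixels, not by colors" alternative is easy to sketch: split the image into strips of rows and hand each strip to a worker. This is a toy NumPy/thread-pool illustration with a made-up matrix, standing in for extra processors, not the camera's actual pipeline:

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

# Made-up 3x3 color matrix, standing in for the real conversion.
M = np.array([
    [ 1.6, -0.4, -0.2],
    [-0.3,  1.5, -0.2],
    [-0.1, -0.5,  1.6],
])

def process_strip(strip):
    # A strip of rows depends only on its own photosites, so strips
    # can be handed to separate cores/processors independently.
    h, w, _ = strip.shape
    return (strip.reshape(-1, 3) @ M.T).reshape(h, w, 3)

def process_parallel(raw, workers=4):
    strips = np.array_split(raw, workers)   # split by rows (pixels)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return np.concatenate(list(pool.map(process_strip, strips)))
```

The strip-wise result is bit-identical to doing the whole frame sequentially, which is exactly why pixel-parallelism works where color-parallelism doesn't.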
--
Erik