LuLa Review

If you look at how the current SPP is written I'm pretty sure that's exactly what Sigma did (it was totally rewritten in C#).
Hmmmm .... this sounds unlikely. It looks the same and it quacks the same. And they are no software gurus at Sigma. Is it not more likely that it is the same software, slightly tweaked?
 
How do you know they rewrote it in C#? Do you have other insights into why they chose that language, given that many of their users count on Mac support, so they'd know they'd have to use something like Mono?
I'm a little surprised so many people questioned this aspect; there are a lot of signs that it's written in C# (using Mono, in fact). The main one I see: inside the Mac application bundle there's a folder called "MonoBundle" or something along those lines (which did not exist in prior versions), containing tons of Mono DLLs. You can also find a number of Mono assemblies with SPP code further inside, and if you profile the application using Xcode Instruments it's pretty obvious Mono is being used.
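For anyone who wants to check this themselves, here's a rough sketch of the kind of scan I mean. The folder and file names are just the clues described above; the exact bundle layout is an assumption and varies between Mono packaging tools:

```python
import tempfile
from pathlib import Path

def find_mono_evidence(app_path):
    """Scan a macOS .app bundle for the clues mentioned above: folders
    named like 'MonoBundle' and .dll assemblies. Treat the names as
    assumptions, not a spec -- Mono packaging layouts vary."""
    hits = []
    for p in Path(app_path).rglob("*"):
        if (p.is_dir() and "mono" in p.name.lower()) or p.suffix.lower() == ".dll":
            hits.append(p)
    return hits

# Demo against a fake bundle layout; for the real check, point it at
# the SPP .app in /Applications instead.
fake = Path(tempfile.mkdtemp()) / "Fake.app" / "Contents" / "MonoBundle"
fake.mkdir(parents=True)
(fake / "mscorlib.dll").write_text("")
evidence = find_mono_evidence(fake.parents[1])
```

Profiling with Xcode Instruments (look for `mono`-prefixed symbols in the call tree) is the stronger confirmation; the bundle scan is just the quick first look.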
To your second point, Foveon's in Silicon Valley, so if they wanted to hire passionate programmers, they could do so in a heartbeat. But they do not list software as one of their strengths. To quote their website, they are composed of:

a multidisciplinary, innovative team comprising knowledge in silicon technology, chip design, image processing, firmware creation, hardware design, and manufacturing all working together
Actually, Silicon Valley is the worst place to hire good programmers, since they are very expensive there and you are competing against the likes of Google and Apple to acquire and retain them. They need to be there to keep the engineering talent that is vital to them, though...
The closest thing in that list to 'software engineering' is 'image processing', but those are not even close to the same thing. They do not list software jobs on their employment boards (last time I checked, they wanted another hardware chip person), or even any jobs at the current time. Of course, the website hasn't been updated since 2010, so it may just be that they are bad at maintaining a web presence. This is not the outward appearance of a company that will admit that they have a software problem in the first place, but more of a bunch of hardware guys who think that the software will just sort itself out.
it's a sign of a company that outsources a lot of software, probably most of SPP.
Or maybe Sigma writes SPP using technical information that Foveon gives them? Is that why they have the occasional Kanji showing up in the UI?
The exact structure is hard to say but it's probably along those lines.
 
I'm relatively new to the Foveon world, but I think I've been doing well by using SPP only to export the RAWs to TIFFs and then work on the TIFFs in Lightroom as I would with my Canon files. Is this not advisable?
 
It is, but you should set the white balance first and, especially, recover highlights, because anything clipped is lost in the TIFF. Also, SPP is already doing some sharpening. So there are a few adjustments that should be done before exporting to LR.
 
Aha, thanks. I was suspecting some highlights were getting lost.
 
If that were the case, version 6 would already have support for older cameras.
That is a valid argument, I have to admit.

But why make a totally new version with the same problems?
 
I'm a little surprised so many people questioned this aspect,
Maybe because it is surprising.
there are a lot of signs that it's written in C# (using Mono, in fact). The main one I see: inside the Mac application bundle there's a folder called "MonoBundle" or something along those lines (which did not exist in prior versions), containing tons of Mono DLLs. You can also find a number of Mono assemblies with SPP code further inside, and if you profile the application using Xcode Instruments it's pretty obvious Mono is being used.
Hmmmmm .... OK ... it seems you have good evidence.

Then we only have to try to understand why they made a total makeover that behaves the same.
 

--
/Roland
X3F tools:
http://www.proxel.se/x3f.html
https://github.com/rolkar/x3f


Not gonna lie, I find that totally mind bottling.





 
I suspect that demosaicing is less taxing, processor-wise, than the math involved in processing X3F files.
Maybe.

But you can do optimisations:
  • You can redraw the part you see first.
  • You can redraw with lower resolution.
  • You can have a pipelined approach where the last parts are e.g. white balance, easy stuff that can be recomputed fast, maybe even in the GPU.
  • You can redraw the image while recomputing, showing the effect directly.
One strange thing is that it takes forever to write a 20 MP TIFF. Are they recomputing the image from the raw data all over again?
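To illustrate the pipelined idea above, here's a minimal sketch: the expensive raw decode is cached once, and only the cheap last stage (white balance here) plus a low-resolution subsample re-runs when a slider moves. The class and stage names are purely illustrative; nothing here is SPP's actual design:

```python
import numpy as np

class PreviewPipeline:
    """Cache the expensive decode once; recompute only the cheap last
    stages when a slider moves. Illustrative only, not SPP's design."""

    def __init__(self, raw):
        self._raw = raw
        self._decoded = None  # filled in lazily by the expensive stage

    def _decode(self):
        # Stand-in for the costly sensor-data -> linear RGB conversion;
        # in a real pipeline this is the part worth caching.
        if self._decoded is None:
            self._decoded = self._raw.astype(np.float64) / 4095.0
        return self._decoded

    def preview(self, wb_gains=(1.0, 1.0, 1.0), scale=4):
        # Low-resolution redraw: subsample before the cheap stages, so
        # a slider change only pays for white balance on a small image.
        img = self._decode()[::scale, ::scale]
        return np.clip(img * np.asarray(wb_gains), 0.0, 1.0)
```

With a structure like this, dragging a white-balance slider never touches the expensive decode again, which is exactly the point of putting the easy stuff last in the pipeline.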
 
He liked the Merrill, but he was inaccurate in his estimation of its resolution: "What we're left with is a high quality pocketable 15MP camera that punches well above its weight; more like a 26MP camera in terms of image quality." The fact is, it is more like a 36 MP camera with an AA filter.
He was perfectly right. Do you own a DP Merrill and/or a 36 MP Sony-sensor based camera?
I do. And in my opinion, after careful tests, the DP2m is comparable to a 24 MP full-frame sensor without an AA filter. So an estimated "26MP camera" with AA sounds reasonable.

Not even the DP Quattro can match the resolution of a Sony A7r.
Most people forget to use a great lens on the Bayer camera before comparing it with the Sigma DP.


Well, your experience doesn't match the test images very well. I'll let people make their own judgments, based on test images. I exported 16 bit (per color) TIFF images from the raw files to do this test. Notice I did not use the red-over-blue swatch in the DPreview test images, which is the Merrill's strong point and the Bayer CFA weak point in that image. These are based on the test images from Imaging-Resource.

These images show that the SD1 is a lot closer to the D800 than the D610 (please make sure you view them full-size):

Nikon D610 test image from Imaging-Resource (upscaled to 30,000 pixels across).

Sigma SD1 test image from Imaging-Resource (upscaled to 30,000 pixels across).

Nikon D800 test image from Imaging-Resource (upscaled to 30,000 pixels across).

As you can see, the SD1 image here is MUCH closer to the D800 image than the D610 image, and that's with a zoom lens on the SD1. The D610 has the advantage of at least two years of extra development, too. The 50mm f/1.4 Art would make a difference on the SD1, bringing it closer to the results of the DP2 Merrill. I think the DP2 Merrill would match the D800, but Imaging-Resource did not test that camera. The DP2 Quattro matches the D800E for sure . . . and it might even beat it. My results have shown me that it produces some parts of the test image better than the D810 or the Sony A7r . . . or even the Pentax 645D or 645Z (no moiré in the green part at the top of the beer bottle label, or in other parts of the image).

There will always be people who say you have to process it more, or something like that. Well, I say that you have to be as scientific as possible, so I try not to do anything differently to one image vs. another. I have upscaled the images to the same horizontal size, because that shows the detail differences best when we look at JPEGs online.

Do you have any samples or links for me to see?
 
I suspect that demosaicing is less taxing, processor-wise, than the math involved in processing X3F files.
How so? Demosaicing involves calculating R, G, B values at every pixel, right? While the Foveon sensor just gives you R, G, B for every pixel right off the sensor, all ready to go! There's no further calculation necessary, AFAIK! At least this is how Foveon has been marketed. So either A) they lied about that, or B) they're really bad at writing software!
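For the curious, "calculating R, G, B values at every pixel" for a Bayer sensor looks roughly like this. A minimal bilinear sketch for the green channel only, assuming an RGGB mosaic; real demosaicers are far more sophisticated:

```python
import numpy as np

def bilinear_demosaic_green(raw):
    """Estimate green at every pixel of an RGGB Bayer mosaic by
    averaging the green neighbours at red/blue sites. A minimal sketch
    of why Bayer data needs interpolation at all; real demosaicers are
    far more sophisticated (and also reconstruct red and blue)."""
    h, w = raw.shape
    green = np.zeros((h, w), dtype=np.float64)
    # In an RGGB mosaic, green samples sit where (row + col) is odd.
    gmask = (np.add.outer(np.arange(h), np.arange(w)) % 2) == 1
    green[gmask] = raw[gmask]
    # Pad so edge pixels can look at their neighbours safely.
    padded = np.pad(raw, 1, mode="edge")
    pmask = np.pad(gmask, 1, mode="constant")  # pad ring counts as "not green"
    for y, x in zip(*np.nonzero(~gmask)):
        # Up/down/left/right neighbours, in padded coordinates.
        neighbours = [(y, x + 1), (y + 2, x + 1), (y + 1, x), (y + 1, x + 2)]
        vals = [padded[r, c] for r, c in neighbours if pmask[r, c]]
        green[y, x] = sum(vals) / len(vals)
    return green
```

Doing this (times three channels, with edge-aware weighting) is real work, but it's simple, local arithmetic, which is why it can be fast.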
 
Do you have any samples or links for me to see?
Yes. Not studio samples, but real-life samples.
Sony A7r with the 35mm Zeiss vs. Sigma DP2m (45mm)

If you look at the crops (DP2m interpolated to 24MP), you will notice a slight advantage for the A7r; now imagine how much bigger the advantage would be with a 45mm lens attached to the Sony.

A7r_dp2m_crops.jpg


A7r_dp2m.jpg
 
I suspect that demosaicing is less taxing, processor-wise, than the math involved in processing X3F files.
How so? Demosaicing involves calculating R, G, B values at every pixel, right? While the Foveon sensor just gives you R, G, B for every pixel right off the sensor, all ready to go! There's no further calculation necessary, AFAIK! At least this is how Foveon has been marketed. So either A) they lied about that, or B) they're really bad at writing software!
I would say they misled people (lied) about how simple it is to produce full-color pixels from each x-y spatial location on the sensor. Otherwise the camera could produce TIFF and JPEG images very quickly and easily . . . right in the camera. It's obviously not as simple as capturing R, G, and B values from the sensor and that's that. There is white balance involved. There are lots of other things involved too, I suspect.

Frankly, I think you are oversimplifying the situation, and you are guilty of the same thing Sigma is guilty of . . . misleading.
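To make the point concrete: the three Foveon layers have heavily overlapping spectral responses, so getting display RGB takes at least white balance plus a 3x3 correction matrix. The matrix below is made up for illustration; the real one is proprietary:

```python
import numpy as np

# Hypothetical 3x3 layer-to-RGB matrix. The real Foveon layer spectral
# responses overlap heavily, and the camera's actual matrix is
# proprietary; these numbers are made up for illustration only.
LAYER_TO_RGB = np.array([
    [ 1.8, -0.9,  0.1],
    [-0.5,  2.0, -0.5],
    [ 0.1, -0.7,  1.6],
])

def layers_to_rgb(layer_signals, wb_gains=(1.0, 1.0, 1.0)):
    """Turn per-pixel top/middle/bottom layer signals (rows of an Nx3
    array, normalised 0..1) into display RGB: white balance, then the
    3x3 matrix, then clipping. The point: a Foveon pixel is not
    'ready to go' straight off the sensor."""
    x = np.asarray(layer_signals, dtype=np.float64) * np.asarray(wb_gains)
    return np.clip(x @ LAYER_TO_RGB.T, 0.0, 1.0)
```

The large off-diagonal terms are the story here: untangling the layers amplifies noise, which is one plausible reason the "RGB at every pixel" marketing hides quite a bit of math.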
 
I agree. Irony is not all that easy. And often not so funny.
 
