These new Sigma samples are AMAZING

. . . you truly do not understand this technology.
Are you still claiming that images from the SD10
have an "obvious" reduction in sharpness, and that this is due to
"an AA filter or denoise software or both"?
Yes
Do you think that the only way to reduce noise at the sensor level is through the use of "an AA filter or denoise software or both"? If you do, you better sit down with some papers on CMOS technology. There are other ways, believe me.

--
Laurence Φ€ 08 LL

http://www.pbase.com/lmatson/sd9_images
http://www.pbase.com/sigmasd9/root
http://www.pbase.com/cameras/sigma/sd9
http://www.beachbriss.com (eternal test site)
 
I have a Fuji S2 that I love, but....
every time I see these samples from a Foveon camera I wonder,
"Imagine what you could do if the Foveon were in a Nikon F100 body."
Can you point out one that is 'amazing'? Haven't seen one yet that
has really impressed me that much.
I've only seen the line test at Imaging Resource so far, and it still has annoying jaggies and luminance moire that are busy on the monitor.

The fact is, to eliminate these aliasing artifacts, they have to make Foveon output look like Bayer output with real anti-aliasing filters. Just a microfilter is not enough.
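To make the AA-filter point concrete, here is a rough sketch (Python with numpy; the frequencies and sizes are made-up illustration numbers, not measurements from either camera). Detail above the sensor's Nyquist limit, sampled with no pre-filter, shows up as a false, coarser pattern; a blur of roughly one pixel pitch before sampling (what an AA filter approximates) leaves the same false pattern noticeably weaker instead.

import numpy as np

# Toy 1-D scene: detail at 0.7 cycles/pixel, above the 0.5 cycles/pixel
# Nyquist limit of a sensor taking one sample per pixel.
oversample = 16                                  # fine grid points per sensor pixel
n_pix = 256
x = np.arange(n_pix * oversample) / oversample   # position in pixel units
scene = np.cos(2 * np.pi * 0.7 * x)

# No AA filter: sample the scene directly at the pixel positions.
raw = scene[::oversample]

# Crude AA filter: average over about one pixel pitch before sampling.
kernel = np.ones(oversample) / oversample
filtered = np.convolve(scene, kernel, mode="same")[::oversample]

def dominant_freq(samples):
    spectrum = np.abs(np.fft.rfft(samples - samples.mean()))
    return np.argmax(spectrum) / len(samples)    # cycles per pixel

print(dominant_freq(raw))             # ~0.3: an aliased pattern that was never in the scene
print(np.ptp(raw), np.ptp(filtered))  # the pre-filtered samples still alias, but noticeably weaker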
--
John
 
Are you still claiming that images from the SD10
have an "obvious" reduction in sharpness, and that this is due to
"an AA filter or denoise software or both"?
Yes
Do you think that the only way to reduce noise at the sensor level
is through the use of "an AA filter or denoise software or both"?
If you do, you better sit down with some papers on CMOS technology.
There are other ways, believe me.
Laurence Laurence Laurence. Is the air thin where you are? : )

I never stated that the only way to reduce sensor noise is by AA or denoise, now did I?
 
Below is the original and 4X enlargement of an unsharpened SD9
image using experimental fractal (IFS) interpolation. I have
applied the same interpolation to both sharpened and unsharpened
SD10 samples from Phil. The SD10 images do not interpolate as well
as the SD9 image. Unfortunately, I can't show you because of
Phil's copyright.

As some of you might know, it is possible to produce more detail
than predicted by Nyquist theory by incorporating statistical a
priori knowledge of natural images. In this case, it is that
natural images contain self similarity, particularly sharp edges.
In the case of the SD10, the Sigma people seem to have thwarted
this.
Thwarted what? That fractal upsizing looks horrible. I can see the edges of the original aliased SD9 pixels. The color looks like watercolors.

--
John
 
I was very very impressed with the net images, but when I downloaded
and upsized the files to 6MP I saw output pretty much similar to
what you get with today's DSLRs. I'm still a fan, but not as much as
I used to be.
That's exactly my response too, which is why I got a Canon. I would still maintain that the X3 will produce slightly more detail, but it has such poor performance at high ISO and in long exposures that any positives are lost, IMHO.

The biggest improvement I have seen from the SD10 images is the colour.
--

 
. . . why would you suggest they are using "an AA filter or denoise software or both"? Let's just assume for the sake of argument that they are not. How are you going to account for your other claim that the images have an "obvious" reduction in sharpness?

You are basing this conclusion on the disappearing target that the images are less sharp. That's where the ice, and not the air, is thin.
Are you still claiming that images from the SD10
have an "obvious" reduction in sharpness, and that this is due to
"an AA filter or denoise software or both"?
Yes
Do you think that the only way to reduce noise at the sensor level
is through the use of "an AA filter or denoise software or both"?
If you do, you better sit down with some papers on CMOS technology.
There are other ways, believe me.
Laurence Laurence Laurence. Is the air thin where you are? : )
I never stated that the only way to reduce sensor noise is by AA or
denoise, now did I?
--
Laurence Φ€ 08 LL

http://www.pbase.com/lmatson/sd9_images
http://www.pbase.com/sigmasd9/root
http://www.pbase.com/cameras/sigma/sd9
http://www.beachbriss.com (eternal test site)
 
Shouldn't you have started by acknowledging and apologizing for the unjustified accusation you made?
Instead you make new hasty claims...
. . . why would you suggest they are using "an AA filter or denoise
software or both"? Let's just assume for the sake of argument that
they are not. How are you going to account for your other claim
that the images have an "obvious" reduction in sharpness?
The images can be "soft" without my accounting for how they got that way, even if we assume, for no reason, what you suggest.
You are basing this conclusion on the disappearing target that the
images are less sharp. That's where the ice, and not the air, is
thin.
I have the images on my hard drive. They ain't going anywhere. As I have stated before, my opinion is not based on imaging-resource's shifting conclusions. I am more than adequately qualified to assess the quality myself.
 
Yes, God forbid that he should be able to think for himself, Laurence : )

Do you even know why you are so desperate to quiet down criticism of the SD10?
 
How can I make such a statement?

Simple, I look at the samples in the most objective way I can.

The SD10 has, so far as the samples I've seen (and that includes quite a few from pbase), produced good to mediocre results whereas the SD9 has produced poor (night images) to staggering (studio) results.

Take a look at Phil's SD10 samples - if anyone can produce excellent results, Phil can! Image IMG00247.jpg is very good, the highlights are well controlled and overall balance is good. Now, take a look at IMG07174.jpg, the image is soft throughout, not a good result at 100% size.

Take a look at Phil's SD9 effort with IMG01462.jpg, it's sharp, real sharp! Superb!

You can soften a sharp image but there is a limit to sharpening a soft image.

There are three rules to good photography ...
1 - The image
2 - The image
3 - The image

Norman
Norman,

These images are just coming out. To the best of my knowledge,
there are around 40 quality images from the SD10 available.

Have you seen these?

http://www.pbase.com/moonlights

Furthermore, Dave changed his conclusion and most of the pundits
have quieted down since, for what it's worth.

--
Laurence Φ€ 08 LL

http://www.pbase.com/lmatson/sd9_images
http://www.pbase.com/sigmasd9/root
http://www.pbase.com/cameras/sigma/sd9
http://www.beachbriss.com (eternal test site)
--
N Hart
 
I don't know why we don't measure color resolution. The Sigma would likely trounce most Bayer cameras. Since Bayer sensors have relatively few red and blue sites, that is likely the area where they would have the most trouble, along with gradual color changes, like flowers.

In other words, if a Sigma has 3.43M red samples while a 6M Bayer has only 1.5M reds, 1.5M blues, and 3M greens, there is at all times less real color information available for the image. Sure, you can interpolate the Bayer data, but the raw data is less. The luminance (B&W) data is higher with the Bayer, though it cuts off differently due to antialiasing.

I'd like to see more tests along these lines.
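Here is a back-of-the-envelope version of the comparison (a Python sketch using just the figures quoted above; nothing here is measured from an actual camera):

# Raw colour samples per channel, before any interpolation.
x3_locations = 3.43e6        # full-colour locations on the X3 (figure from above)
bayer_photosites = 6.0e6     # photosites on a 6M Bayer mosaic

x3 = {"red": x3_locations, "green": x3_locations, "blue": x3_locations}
bayer = {
    "red": bayer_photosites / 4,    # 1 of every 4 photosites is red
    "green": bayer_photosites / 2,  # 2 of every 4 are green
    "blue": bayer_photosites / 4,   # 1 of every 4 is blue
}

for channel in ("red", "green", "blue"):
    print(f"{channel:5s}  X3: {x3[channel] / 1e6:.2f}M   Bayer: {bayer[channel] / 1e6:.2f}M")
# Everything beyond these raw counts on the Bayer side comes from
# demosaicing (interpolation), which is exactly the point above.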

Stan
 
Norman,

Have you read any of the provisos floating around Phil's review? Something about pre-production?

Did you notice where he took them? Do you know how much time he had?

At Photokina 2002, he took a bunch of images to feed the masses. Some were pretty good, but all of them were torn apart pixel by pixel. Over time, people finally realized that the camera was pretty good.

Some of the photographers put a similar message up too.

http://www.pbase.com/sigmasd9/dick_lyon_sd10

Of course, he would be no judge of sharpness or the quality of the camera, I guess.

This is starting all over again. You are welcome to your opinion, but it is certainly based on pretty narrow criteria. Nice that Dave gets some company in his rush to judgment.
You can soften a sharp image but there is a limit to sharpening a
soft image.
Boy, that reads like the anti-Bayer mantra. Not everyone in that camp agrees with you.

http://forums.dpreview.com/forums/read.asp?forum=1019&message=5738856
There are three rules to good photography ...
1 - The image
2 - The image
3 - The image
Thanks for the lesson. I bookmarked that.

--
Laurence Φ€ 08 LL

http://www.pbase.com/lmatson/sd9_images
http://www.pbase.com/sigmasd9/root
http://www.pbase.com/cameras/sigma/sd9
http://www.beachbriss.com (eternal test site)
 
Once you get past the theory and into real-world engineering, and throw sensory perception into the mix, there is clearly a lot more going on in digital imaging than first meets the eye.

For instance, why does Bayer work so well? The theoretical weaknesses have been thrashed out ad infinitum on this forum, yet....

...Properly taken and processed, Bayer images are, I find, superb. The 1Ds, for instance, is brilliant. If only it weren't so costly.

I also don't really see that the SD9 has better colour than (for example) my D100. In fact I often think it has worse colour. The only conclusion I can draw from all this confusion is that the SD9 colour is not quite as good as theory suggests, or Bayer is pretty damn clever, or both.

The only thing I know for sure is all digital cameras seem to give better colour than scanned film unless you really put a lot of effort into balancing it later.

Hats off to Mr Bayer and chums AND to the Foveon crowd; whatever you prefer, they're both marvellous!!
I don't know why we don't measure color resolution. The Sigma
would likely trounce most Bayer cameras. Since Bayer sensors have
relatively few red and blue sites, that is likely the area where
they would have the most trouble, along with gradual color
changes, like flowers.

In other words, if a Sigma has 3.43M red samples while a 6M Bayer
has only 1.5M reds, 1.5M blues, and 3M greens, there is at all
times less real color information available for the image. Sure,
you can interpolate the Bayer data, but the raw data is less. The
luminance (B&W) data is higher with the Bayer, though it cuts off
differently due to antialiasing.

I'd like to see more tests along these lines

Stan
 
How can I make such a statement?

Simple, I look at the samples in the most objective way I can.

The SD10 has, so far as the samples I've seen (and that includes
quite a few from pbase), produced good to mediocre results whereas
the SD9 has produced poor (night images) to staggering (studio)
results.
The range in image quality you speak of is due to two sources: firstly, the skill of the photographer, and secondly the limitations of the camera. The problem I see with your conclusion about the new SD10 is that you seem to discount the first cause as a reason for the range of results. (This assumes that a third variable, the light being imaged, is constant).
Take a look at Phil's SD10 samples - if anyone can produce
excellent results, Phil can! Image IMG00247.jpg is very good, the
highlights are well controlled and overall balance is good. Now,
take a look at IMG07174.jpg, the image is soft throughout, not a
good result at 100% size.

Take a look at Phil's SD9 effort with IMG01462.jpg, it's sharp,
real sharp! Superb!

You can soften a sharp image but there is a limit to sharpening a
soft image.
Of all the variables that contribute to variations in image sharpness, why do you conclude that it is some inbuilt difference between the SD9 and SD10 that is solely responsible?
 
We could chew up a great deal of bandwidth arguing this point. At
the end of the day, however, it is the image that counts. There is
a lot of Lomo stuff around that I think is stupendous.
There has already been a great deal of bandwidth used on this topic over the last year. :)

But you are right, it is the image that counts, and many times sharpness is not the main concern. However, I have never seen an SD9 image of a person at the same distances as in James Russel's shots that was any more detailed. Also, keep in mind these are resized down to 3MP, so he had to throw away over 9 million pixels (I know that is not 9MP worth of information, as the S2 is more like 9MP of total actual resolution, not 12MP)! This was throwing away actual information, though, and that can make the images look a little soft, although I do not see the softness that you see. It was also information that, no matter how sharp the SD9 images may be, the SD9 was not able to capture in the first place.

People fail to realize that even though a Bayer sensor has to interpolate the true color of each pixel, each R, G, or B sensor is still picking up discrete spatial information. Since the Foveon RGB sensors are stacked, they do not have to interpolate the color of the pixel, true, but each RGB sensor in the X3 is getting the SAME spatial information; the X3 is just looking at different wavelengths at the one position that all three RGB sensors share.

Thus the Bayer interpolation is an interpolation, true, but it starts with MORE data (resolution-wise) than the Foveon sensor has available when you interpolate up to 6MP in Photoshop or whatever. See what I mean? While it is true that a 6MP sensor has only 1.5MP of red, 1.5MP of blue, and 3MP of green, those are still 6 million DISCRETE sensors in total. No matter how you look at it, the 3MP X3 sensor can never have more than 3 million DISCRETE sensor positions. Color fidelity is better with the X3, but it is not higher resolution than a 6MP Bayer sensor. To show this, shoot in grayscale: the Bayer sensor no longer has to interpolate color, and all 6 million pixels are recording detail. The Foveon, in comparison, would still have only 3 million points of detail. Yes, it may have over 9 million sensors in total, but they occupy only 3 million discrete positions.

I guarantee that if you interpolate a 3MP Foveon image up to 12MP, the S2 will show much more resolution than the Foveon. You can even see it in the res charts.
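To put rough numbers on the grayscale point (a Python sketch; the grid sizes are illustrative assumptions, roughly 2268 x 1512 for the X3 and 3008 x 2000 for a typical 6MP Bayer body, not measured values):

# Discrete luminance sampling positions, which is what matters for a
# grayscale or res-chart comparison.
x3_w, x3_h = 2268, 1512          # assumed X3 grid
bayer_w, bayer_h = 3008, 2000    # assumed 6MP Bayer grid

print("X3 positions:   ", x3_w * x3_h)        # about 3.4 million
print("Bayer positions:", bayer_w * bayer_h)  # about 6.0 million

# Ignoring AA-filter and demosaicing losses, the geometric Nyquist limit
# in line pairs per picture height is simply rows / 2.
print("X3 Nyquist:   ", x3_h // 2, "lp/ph")
print("Bayer Nyquist:", bayer_h // 2, "lp/ph")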

Regards,
Sean
 
You have enumerated an incomplete list of the APPLICATIONS of superresolution, not DEFINED it.

Faking details is not a problem as long as the fake details do not replace the real details that are visible in the original. The easiest way for the layman to understand the incorporation of a priori knowledge into an enlargement is to think of a good painter such as Ivan Albright. Suppose you give him an unenlarged natural image and tell him to paint an enlargement. He will incorporate every detail he sees and provide details that he doesn't see based on his experience with similar subject matter such as faces, trees, etc. Some of these details will be wrong, but statistically he will be right more often than not. And the fake details will be consistent with what he sees, in that, when you take an image of his painting and shrink it back to original size, the result will be a practically perfect match for the original that you gave him to paint.

I don't pretend that IFS is as good as Ivan Albright, but it is a step in this direction.
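The "shrink it back and it should match" requirement can be written down directly. A small Python/numpy sketch, with a plain nearest-neighbour enlarger standing in for whatever interpolator (IFS or otherwise) you actually use:

import numpy as np

def downsample2x(img):
    # Average 2x2 blocks: a crude model of shrinking back to the original size.
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rng = np.random.default_rng(0)
original = rng.random((8, 8))

# Stand-in enlarger: nearest-neighbour 2x upscale. Any smarter method that
# invents plausible detail could be dropped in here instead.
enlarged = np.kron(original, np.ones((2, 2)))

# The consistency test: invented detail is acceptable as long as averaging
# the enlargement back down reproduces what was actually captured.
print(np.allclose(downsample2x(enlarged), original))   # True for this enlarger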
Most generally you should google "MAP (maximum a posteriori) estimators."
Most specifically you should google "superresolution."
Ahh... To my knowledge, superresolution works for restoring an image
from a series of lower-quality images, like a video. Thus, temporal
resolution is sacrificed for the spatial one.

Now, we are not talking about multiple images of the same thing here,
are we? Another case where it can be used is correcting a known
problem with the imager. (Granted, you can treat reconstruction from
a Bayer sensor as a subset of this case, but normally in-camera
algorithms and RAW converters take care of it.)

Still another case where it is usable is "compressing"
[downsampling] an image with known parameters and upsampling it
later on. In that case the technique is similar to image compression.

In our case, when there is no series of images and no known
distortion parameters, any superresolution would amount to faking
details. I am not sure that should be our goal (especially if the
results do not even look nice ;)
--
Author of SAR Image Processor
http://www.general-cathexis.com
 
Below is the original and 4X enlargement of an unsharpened SD9
image using experimental fractal (IFS) interpolation. I have
applied the same interpolation to both sharpened and unsharpened
SD10 samples from Phil. The SD10 images do not interpolate as well
as the SD9 image. Unfortunately, I can't show you because of
Phil's copyright.

As some of you might know, it is possible to produce more detail
than predicted by Nyquist theory by incorporating statistical a
priori knowledge of natural images. In this case, it is that
natural images contain self similarity, particularly sharp edges.
In the case of the SD10, the Sigma people seem to have thwarted
this.
Thwarted what? That fractal upsizing looks horrible. I can see
the edges of the original aliased SD9 pixels. The color looks like
watercolors.

--
John
Thwarted the preservation of sharp edges.
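As a toy illustration of what an edge prior buys you (Python with numpy; this has nothing to do with the actual IFS code, it just shows the general idea): if you assume a priori that an edge is a hard step, the upsized edge stays a one-sample transition instead of the ramp that plain interpolation produces.

import numpy as np

# Low-resolution samples of a step edge.
low = np.array([0., 0., 0., 1., 1., 1.])

# Plain linear interpolation to 4x: the edge is smeared across several samples.
x_hi = np.linspace(0, len(low) - 1, 4 * (len(low) - 1) + 1)
linear = np.interp(x_hi, np.arange(len(low)), low)

# Edge-prior upsizing: assume the underlying edge is a hard step and snap
# each new sample to the nearer of the two plateau values.
prior_based = np.where(linear < 0.5, 0.0, 1.0)

print(linear[8:13])       # [0.   0.25 0.5  0.75 1.  ]  a gradual ramp
print(prior_based[8:13])  # [0. 0. 1. 1. 1.]            edge kept to one step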
--
Author of SAR Image Processor
http://www.general-cathexis.com
 
Hi Brian

"Skill of the photographer" (?). Hmmm..... I think Phil is both skilled and experienced enough to get the best out of any digicam! Why not take a look at ....

http://www.imaging-resource.com/PRODS/SD9/FULLRES/SD9FARLF.HTM

http://www.imaging-resource.com/PRODS/SSD10/FULLRES/SD10FARLs.HTM

..... and spot the difference!

"Limitations of the camera" .... Well,yes! I've held off buying a digital SLR until I felt that my film cameras were under threat. That threat, I'm glad to say, had come close with the launch of the SD9 (despite it's limitations). The SD10 does not, I feel, come as close as the SD9 - I am, I must say, disappointed!

"Light being constant"... I couldn't agree more! I can't wait for Phil's SD10 test card gallery as well as the "compared to" section of his review.

Bi4now

Norman
How can I make such a statement?

Simple, I look at the samples in the most objective way I can.

The SD10 has, so far as the samples I've seen (and that includes
quite a few from pbase), produced good to mediocre results whereas
the SD9 has produced poor (night images) to staggering (studio)
results.
The range in image quality you speak of is due to two sources:
firstly, the skill of the photographer, and secondly the
limitations of the camera. The problem I see with your conclusion
about the new SD10 is that you seem to discount the first cause as
a reason for the range of results. (This assumes that a third
variable, the light being imaged, is constant).
Take a look at Phil's SD10 samples - if anyone can produce
excellent results, Phil can! Image IMG00247.jpg is very good, the
highlights are well controlled and overall balance is good. Now,
take a look at IMG07174.jpg, the image is soft throughout, not a
good result at 100% size.

Take a look at Phil's SD9 effort with IMG01462.jpg, it's sharp,
real sharp! Superb!

You can soften a sharp image but there is a limit to sharpening a
soft image.
Of all the variables that contribute to variations in image
sharpness, why do you conclude that it is some inbuilt difference
between the SD9 and SD10 that is solely responsible?
--
N Hart
 
Faking details is not a problem as long as the fake details do not
replace the real details that are visible in the original. The
easiest way for the layman to understand the incorporation of a
priori knowledge into an enlargement is to think of a good art
painter such as Ivan Albright. Suppose you give him an unenlarged
natural image and tell him to paint an enlargement. He will
incorporate every detail he sees and provide details that he
doesn't see based on his experience with similar subject matter
such as faces, trees, etc. Some of these details will be wrong,
but statistically he will be right more often than not. And, the
fake details will be consistent with what he sees, in that, when
you take an image of his painting and shrink it back to original
size, the result will be a practically perfect match for the
original that you gave him to paint.
I am sure that many people working professionally in imaging would strongly disagree with you. I do not want to repeat myself, but if you do not have priors like known lens deformation, compression parameters, etc., you won't get anything useful out of your sub-Nyquist details. Look at your fractal enlargement - it's pretty self-explanatory.
 
