it's still a 3MP camera

Not at all, it would be no different than any 10.2MP Bayer,
A 10MP Bayer would have 10 million spatially separated samples to work with. The SD9 only has 3.5 million spatial samples. There is no SW magic that can "unstack" the sensors to make them detect more spatial information.
Further, you might be surprised,
I'd be extremely surprised. Over 50 years of science and engineering knowledge would be obsolete. New textbooks would have to be written.
you might be able to obtain a stunning image this way by
modeling the falloff rate considering many adjacent pixels.
That buys you exactly zero additional spatial information. Please, please do some more reading on sampling theory before hypothesizing magic algorithms.
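
For the curious, here is a rough 1-D sketch of the point in Python (toy numbers only, nothing to do with any real sensor geometry): detail finer than the Nyquist limit of the coarse grid is already gone at capture time, and interpolating onto a finer grid afterwards just resamples the aliased version.

import numpy as np

# Stand-ins for a 10M-sample grid vs a 3.5M-sample grid (1-D, toy sizes).
fine_n, coarse_n = 1000, 350
x_fine = np.linspace(0.0, 1.0, fine_n, endpoint=False)
x_coarse = np.linspace(0.0, 1.0, coarse_n, endpoint=False)

freq = 300.0                                  # cycles per frame, above the coarse Nyquist of 175
truth = np.sin(2 * np.pi * freq * x_fine)     # what the fine grid would record
coarse = np.sin(2 * np.pi * freq * x_coarse)  # what the coarse grid records (aliased)

# The hypothetical "software magic": upsample the coarse capture to the fine grid.
upsampled = np.interp(x_fine, x_coarse, coarse)

# Correlation with the true fine-grid signal is near zero: the detail is not recovered.
print(np.corrcoef(truth, upsampled)[0, 1])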
I disagree, it would qualify and rightly so.
You can disagree all you want, Sigma still cannot advertise it that way.
Perfect example of marketing (in that case quite intentionally
dishonest) making an enormous difference.
Well, we'll see how much difference it really makes. From the limited images we've seen so far, I think that reviewers will still compare the F700 to 3 and 4 MP cameras and not 6.
Even then, the reviews are mixed
and there is not even a consensus that it's better than merely 6MP
bayers. So 10M would just make it look worse.
How so?
At this point, I'm just speechless.

--
Erik
 
Not at all, it would be no different than any 10.2MP Bayer,
A 10MP bayer would have 10 million spatially separated samples to
work with. The SD9 only has 3.5 million spatial samples. There is
no SW magic that can "unstack" the sensors to make them detect more
spatial information.
The truth is, what the Foveon does currently is a Bayer interpolation of 10.2MP without spatial artifacting, and thus no need to scale up. But the point that it is simply a special case of Bayer interpolation (where the offsets = 0) is too easily lost on pro reviewers with little mathematical background.
Further, you might be surprised,
I'd be extremely surprised. Over 50 years of science and
engineering knowledge would be obsolete. New textbooks would have
to be written.
They've already been rewritten. Bayer interpolation produces tremendous results. I'm not saying it would be as good as quality data in the first place, but it would produce a fine result and force the shouldn't-be-so-stunning realization that 10.2M sensors can produce a true 10.2MP image, just like all the other camera manufacturers claim.
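
For anyone who hasn't looked at how these algorithms actually work, here is a minimal bilinear sketch in Python (a toy, assuming an RGGB mosaic and using numpy/scipy; no camera ships anything this crude): each missing color at a photosite is just the average of the nearest measured samples of that color.

import numpy as np
from scipy.signal import convolve2d

def bilinear_demosaic(raw):
    """raw: 2-D float array holding an RGGB Bayer mosaic."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    # Which color each photosite actually measured (RGGB tiling).
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)
    for c, mask in enumerate([r_mask, g_mask, b_mask]):
        plane = np.where(mask, raw, 0.0)
        kernel = np.ones((3, 3))
        # Average the measured samples of this color inside each 3x3 window.
        total = convolve2d(plane, kernel, mode='same')
        count = convolve2d(mask.astype(float), kernel, mode='same')
        estimate = total / np.maximum(count, 1e-9)
        rgb[..., c] = np.where(mask, raw, estimate)   # keep measured values as-is
    return rgb

Real cameras use far more sophisticated edge-directed methods than this, which is why the results look as good as they do.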
you might be able to obtain a stunning image this way by
modeling the falloff rate considering many adjacent pixels.
That buys you exactly zero additional spatial information. Please,
please do some more reading on sampling theory before hypothesizing
magic algorithms.
You're claiming that all 10.2M sensors are co-located? Of course there is spatial separation; it just depends which sensor you elect to use. It's very simple to do, especially with the power of a desktop.
I disagree, it would qualify and rightly so.
You can disagree all you want, Sigma still cannot advertise it that
way.
That would make them as bad as everyone else, I know.
Perfect example of marketing (in that case quite intentionally
dishonest) making an enormous difference.
Well, we'll see how much difference it really makes. From the
limited images we've seen so far, I think that reviewers will still
compare the F700 to 3 and 4 MP cameras and not 6.
It's not that good. The 3MP brick pattern has been tested for years with SuperCCD3; I even own one.
Even then, the reviews are mixed
and there is not even a consensus that it's better than merely 6MP
bayers. So 10M would just make it look worse.
How so?
At this point, I'm just speechless.
I'm surprised you think marketing the SD-9 as a true 10.2MP camera, using the identical pixel-count standard and identical interpolation technique as all existing Bayers, with no exceptions except Fuji (who actually double even that pixel count), would seem so repulsive to people.

Ask yourself, if the SD-9 truly only had 3.5M sensors by the Bayer standard of counting R, G, and B sensors separately, would you buy it? Could it produce the images it produces? Of course not.
 
So...what does taste better, Chicken or Beef?

Is there a right answer?
Absolutely, there is a right answer for everyone.
Let me re-phrase....

Is there a universal right answer?

Of course not.....
Dr. H.

I was in the same situation too, ready to call in the order for a
10D. I am sleeping on it now, because of the SD9 images I've seen.
Same thing happened to me. An errant wave washed me over to
sigma-photo.com, and after I saw flower-eye's lips there was no
going back.

The reviews of the 10D made it look very appealing, and it
certainly is very appealing, until you see a few SD-9 images. I
think pro internet and print mags are in a very difficult position.
How do you tell your cash cows the chicken tastes better?
Personal opinion of course, but I was not prepared for what the SD9
presented to me. I've been all over photo sites like pbase, and
after wending my way through several camera types, the SD9, to my
eye, is producing a better image.

IMHO, drhiii
the SD9.

for the same money, I'll take 10D

unless the next version gives me 6MP, for the same money
--
John
http://www.mankman.com
Canon EOS 10D
Canon Powershot S30
Sony DSC-F707

Equipment list in profile...subject to change on a daily basis ;^)

Duct tape is like the Force. It has a light side, a dark side, and
it holds the universe together


 
Not at all, it would be no different than any 10.2MP Bayer,
A 10MP bayer would have 10 million spatially separated samples to
work with. The SD9 only has 3.5 million spatial samples. There is
no SW magic that can "unstack" the sensors to make them detect more
spatial information.
Further, you might be surprised,
I'd be extremely surprised. Over 50 years of science and
engineering knowledge would be obsolete. New textbooks would have
to be written.
you might be able to obtain a stunning image this way by
modeling the falloff rate considering many adjacent pixels.
That buys you exactly zero additional spatial information. Please,
please do some more reading on sampling theory before hypothesizing
magic algorithms.
I disagree, it would qualify and rightly so.
You can disagree all you want, Sigma still cannot advertise it that
way.
Perfect example of marketing (in that case quite intentionally
dishonest) making an enormous difference.
Well, we'll see how much difference it really makes. From the
limited images we've seen so far, I think that reviewers will still
compare the F700 to 3 and 4 MP cameras and not 6.
Even then, the reviews are mixed
and there is not even a consensus that it's better than merely 6MP
bayers. So 10M would just make it look worse.
How so?
At this point, I'm just speechless.

--
Erik
--
http://www.domgross.de
please don't run away because of the cheap design of the first page :)
ICQ UIN: 289647506
 
Not at all, it would be no different than any 10.2MP Bayer,
A 10MP bayer would have 10 million spatially separated samples to
work with.
The SD9 only has 3.5 million spatial samples. There is
no SW magic that can "unstack" the sensors to make them detect more
spatial information.
Actually, there is, but it involves exploiting CA (chromatic aberration) and the end result is that you have bands of differing resolutions.

I use the same type of math in my own Bayer demosaic algorithms, and will soon be applying it to an SD-9 color interpolation algorithm.

The way the Foveon sensor works, CA manifests as areas where colors may be totally out of gamut, and interpolated to very strange values. I'm working to fix that.
Further, you might be surprised,
I'd be extremely surprised. Over 50 years of science and
engineering knowledge would be obsolete. New textbooks would have
to be written.
Actually, there are ways of doing this that fall within current texts. Imagine the red, green, and blue are out of registration by 1 full pixel in the corners of the image (not hard to imagine with many of the wider-range zooms). Now, let's make the CA nice and linear, so that at 1/2 the corner radius the misalignment is 1/2 pixel. Then an algorithm that does radial interpolation to resize the red, green and blue layers back into alignment will have a full pixel offset in the corners, which won't increase resolution, but should eliminate a 1 pixel color fringe. And in the 1/2 pixel "ring" there will be an effective doubling of resolution.
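
Very roughly, in Python (a toy sketch with made-up numbers, not my actual code): resample each color plane about the image center so that a misregistration growing linearly with radius is undone.

import numpy as np
from scipy.ndimage import map_coordinates

def undo_radial_magnification(plane, s):
    """Remove a radial magnification of factor s from one color plane."""
    h, w = plane.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    # The detail that belongs at radius r actually landed at radius r*s,
    # so sample the input there (order=1 gives bilinear interpolation).
    src_y = cy + (yy - cy) * s
    src_x = cx + (xx - cx) * s
    return map_coordinates(plane, [src_y, src_x], order=1, mode='nearest')

# Example: if red lands 1 pixel too far out at the corner of an SD9-sized
# frame (2268 x 1512), shrink the red plane by the matching factor.
h, w = 1512, 2268
corner_r = np.hypot(h / 2.0, w / 2.0)
red_scale = (corner_r + 1.0) / corner_r
# red_fixed = undo_radial_magnification(red_plane, red_scale)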
you might be able to obtain a stunning image this way by
modeling the falloff rate considering many adjacent pixels.
That buys you exactly zero additional spatial information. Please,
please do some more reading on sampling theory before hypothesizing
magic algorithms.
SG10 definitely needs to do that. You don't have to be out on the cutting edge like me to realize how many errors are in his arguments.

I've clipped the stuff about advertising claims. As has been pointed out, the JCII has strict rules on how these things are defined, and the SD9 can't be marketed in 1/2 the world as anything other than a 3.4 MP camera. Sigma would look outrageously bad if they marketed it as different resolutions in different countries.

And, one more time, according to well established criteria of information theory (and even by Foveon's own admission) the Foveon sensor has a 1.7x advantage in pixel density over a Bayer. So you can reasonably compare a 3.4MP Foveon to a 5.8MP Bayer. That it holds its own so well against the 6MP Bayer cameras shows that Bayer algorithms aren't as mature as they should be at this point in the game. There is no way you can compare a 3.4MP Foveon to a 10MP Bayer.
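
The arithmetic, for the record (nominal counts; the 1.7 factor is the estimate discussed above, not something this snippet derives):

foveon_spatial = 3.43e6        # SD9: 2268 x 1512 full-color photosites
density_advantage = 1.7
bayer_equivalent = foveon_spatial * density_advantage
print(bayer_equivalent / 1e6)  # ~5.8 MP, nowhere near 10 MP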

--
Ciao!

Joe
 
The truth is, what the Foveon does currently is a Bayer
interpolation of 10.2MP without spatial artifacting, and thus no
need to scale up.
The truth is, that it is nothing of the sort.
But the point that it is simply a special case
of Bayer interpolation (where the offsets = 0), is too easily lost
on pro reviewers with little mathematical background.
The lack of solidity of this argument is not lost on me. I have more than a little mathematical background: BSEE Lawrence 1984, MSEE Oakland 1989, postgrad work in psychoacoustics, DSP, and optics. Employed in psychoacoustics and psychophysics for 2 decades, designer of several optical systems, main architect of 2 cutting edge speech recognition systems.

Your turn, ante up.

p.s. A quick lesson: Bayer sensors have twice as many green pixels as red or blue; the split is 1/2 green, 1/4 red, 1/4 blue. So the green resolution of a typical 6MP Bayer camera is 3MP, very close to the 3.4 of a Foveon. The Foveon quality is more a testament to the low quality of the interpolation algorithms used in Bayer cameras than any magic inherent in the Foveon design.
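
Rough numbers behind the p.s. (nominal sensor sizes, just for illustration):

bayer_total = 6.3e6                  # a typical "6MP" Bayer sensor
green_sites = bayer_total / 2        # RGGB: half the sites are green
red_sites = blue_sites = bayer_total / 4
foveon_sites = 3.43e6                # SD9: 2268 x 1512 stacked full-color sites
print(green_sites / 1e6, foveon_sites / 1e6)   # ~3.2 vs ~3.4 million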

--
Ciao!

Joe
 
Without listing all the differences and repeating things I have
stated before, if I were given the choice of keeping one only of
our crop of DSLR's we have, I would keep the SD-9, because it is
capable of producing incredible results, better than the 10D and
better than the 1D.
The strength of the 1D is not resolution, it's its ability to produce excellent results at 8 frames per second, with a 45-point AF system. With an electronic shutter to hit 1/500 sec X-sync, a 1/16000 sec max shutter speed, very clean high ISO performance, etc., it is a go-anywhere, shoot-anything camera.

The 10D has problems. Properly focused, it holds its own pretty well against an SD-9 (looking at the prints, the only thing that really matters).

--
Ciao!

Joe
 
Joseph S. Wisniewski wrote:
Joe, nice inputs here as usual.
And, one more time, according to well established criteria of
information theory (and even by Foveon's own admission) the Foveon
sensor has a 1.7x advantage in pixel density over a Bayer. So you
can reasonably compare a 3.4MP Foveon to a 5.8MP Bayer. That it
holds its own so well against the 6MP Bayer cameras shows that Bayer
algorithms aren't as mature as they should be at this point in the
game. There is no way you can compare a 3.4MP Foveon to a 10MP
Bayer.
Do you know how Sigma came to that 1.7x advantage? Is this calculation done on the basis of an AA filter or not?

-
Geir
 
I think so too. Someone posted this estimation and somehow it got connected with Foveon or Sigma. I think this whole issue is much too complex to reduce to one number. Maybe we can put it this way:

The resolution of the X3 is a constant 3.5MP, while the resolution of a Bayer sensor might differ from scene to scene. If it is mostly B&W information that makes the picture, like the MTF charts, it is high (some of the optics experts can place a number here), and the more the picture depends on the resolution of the chrominance data (see the colored MTF charts on outbackphoto), the lower it gets.
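
A tiny 1-D illustration in Python of why the Bayer number moves around (toy signal, one RGRG row, nothing camera-specific): luminance detail is seen by every photosite, but a pure red pattern is only sampled at every other site, so its Nyquist limit is half as high.

import numpy as np

n = 64
x = np.arange(n)
pattern = np.sin(2 * np.pi * 20 * x / n)   # 20 cycles across 64 sites

# Luminance case: every site responds, and 20 cycles is below the
# full-grid Nyquist limit of 32 cycles, so the pattern is resolved.
luma = pattern

# Red-only case: just the 32 red sites respond (Nyquist limit 16 cycles),
# so the 20-cycle pattern aliases and interpolation cannot undo that.
red_sites = x[0::2]
red_reconstructed = np.interp(x, red_sites, pattern[0::2])

print(np.corrcoef(pattern, red_reconstructed)[0, 1])   # well below 1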
Do you know how Sigma came to that 1.7x advantage? Is this
calculation done on the basis of an AA filter or not?
Wasn't that just a mis-quote from a talk?

j
--
http://www.domgross.de
please don't run away because of the cheap design of the first page :)
ICQ UIN: 289647506
 
I thought about something similar. After reading sg10's statements I thought: why not make the three layers slightly different sizes to get a higher resolution? Isn't that similar? Maybe this is part of the data that DCRAW ignores? If we have a look at the first versions of CRW / DCRAW that could decompress X3F, the edges of objects were definitely in different positions in comparison to SPP. Now they are in similar positions; maybe this smoothing (bottom-to-top, smooth right-to-left ...) is the trick, and this would also explain why the output of DCRAW is a lot softer than SPP's. IMHO this can't be the sharpening algorithm alone that makes that difference.

Does someone see any sense in the above?

Dominic
Not at all, it would be no different than any 10.2MP Bayer,
A 10MP bayer would have 10 million spatially separated samples to
work with.
The SD9 only has 3.5 million spatial samples. There is
no SW magic that can "unstack" the sensors to make them detect more
spatial information.
Actually, there is, but it involves exploiting CA (chromatic
aberration) and the end result is that you have bands of differing
resolutions.

I use the same type of math in my own Bayer demosaic algorithms,
and will soon be applying it to an SD-9 color interpolation
algorithm.

The way the Foveon sensor works, CA manifests as areas where colors
may be totally out of gamut, and interpolated to very strange
values. I'm working to fix that.
Further, you might be surprised,
I'd be extremely surprised. Over 50 years of science and
engineering knowledge would be obsolete. New textbooks would have
to be written.
Actually, there are ways of doing this that fall within current
texts. Imagine the red, green, and blue are out of registration by
1 full pixel in the corners of the image (not hard to imagine with
many of the wider-range zooms). Now, let's make the CA nice and
linear, so that at 1/2 the corner radius the misalignment is 1/2
pixel. Then an algorithm that does radial interpolation to resize
the red, green and blue layers back into alignment will have a
full pixel offset in the corners, which won't increase resolution,
but should eliminate a 1 pixel color fringe. And in the 1/2 pixel
"ring" there will be an effective doubling of resolution.
you might be able to obtain a stunning image this way by
modeling the falloff rate considering many adjacent pixels.
That buys you exactly zero additional spatial information. Please,
please do some more reading on sampling theory before hypothesizing
magic algorithms.
SG10 definitely needs to do that. You don't have to be out on the
cutting edge like me to realize how many errors are in his
arguments.

I've clipped the stuff about advertising claims. As has been
pointed out, the JCII has strict rules on how these things are
defined, and the SD9 can't be marketed in 1/2 the world as anything
other than a 3.4 MP camera. Sigma would look outrageously bad if
they marketed it as different resolutions in different countries.

And, one more time, according to well established criteria of
information theory (and even by Foveon's own admission) the Foveon
sensor has a 1.7x advantage in pixel density over a Bayer. So you
can reasonably compare a 3.4MP Foveon to a 5.8MP Bayer. That it
holds its own so well against the 6MP Bayer cameras shows that Bayer
algorithms aren't as mature as they should be at this point in the
game. There is no way you can compare a 3.4MP Foveon to a 10MP
Bayer.

--
Ciao!

Joe
--
http://www.domgross.de
please don't run away because of the cheap design of the first page :)
ICQ UIN: 289647506
 
Actually, there are ways of doing this that fall within current
texts. Imagine the red, green, and blue are out of registration by
1 full pixel in the corners of the image (not hard to imagine with
many of the wider-range zooms). Now, let's make the CA nice and
linear, so that at 1/2 the corner radius the misalignment is 1/2
pixel. Then an algorithm that does radial interpolation to resize
the red, green and blue layers back into alignment will have a
full pixel offset in the corners, which won't increase resolution,
but should eliminate a 1 pixel color fringe. And in the 1/2 pixel
"ring" there will be an effective doubling of resolution.
Perhaps my imagination is failing me. I can see how this might work if you make some assumptions about the target. But otherwise, how do you tell between the CA and real data? And if you can't reliably detect the difference, aren't we back to arguing over the difference between resolution and aliasing error?

--
Erik
 
I thought about something similar. After reading sg10's statements
I thought: why not make the three layers slightly different sizes
to get a higher resolution? Isn't that similar? Maybe this is part
of the data that DCRAW ignores? If we have a look at the first
versions of CRW / DCRAW that could decompress X3F, the edges of
objects were definitely in different positions in comparison to
SPP. Now they are in similar positions; maybe this smoothing
(bottom-to-top, smooth right-to-left ...) is the trick, and this
would also explain why the output of DCRAW is a lot softer than
SPP's. IMHO this can't be the sharpening algorithm alone that makes
that difference.

Does someone see any sense in the above?
Yes, I have noticed this as well. It also seems to me that the R, G and B positions of the pure RAW data are not entirely "on top of each other". Maybe I did something wrong when I researched these issues, but it is a fact that the Photo Pro software handles the RAW data better than Dave Coffin's utility.

-
Geir
 
Yes, I have noticed this as well. It also seems to me that the R, G
and B positions of the pure RAW data are not entirely "on top of
each other". Maybe I did something wrong when I researched these
issues, but it is a fact that the Photo Pro software handles the
RAW data better than Dave Coffin's utility.
I mean, the R, G and B positions appeared to be displaced in a linear fashion. CA wouldn't displace the RGB layers in a linear fashion, but more in a circular fashion.
 
The strength of the 1D is not resolution, it's an ability to
produce excellent results at 8 frames per second,
and when shot in RAW (like the SD9) and developed with Capture One it is more than a match for anything bar a 1DS .. sharp as a tack and every pixel putting in 110%, colour, dynamic range and exposure are Unsurpassed (even BY the 1DS IMO) ..

--
Please ignore the Typos, I'm the world's worst Typist

The No1 Dedicated 1D forum in the UK -------->

http://www.1dforum.co.uk/php/phpBB2/

 
and when shot in RAW (like the SD9) and developed with Capture One
is more than a match for anything bar a 1DS .. sharp as a tack and
every pixel putting in 110% , colour, dynamic range and exposure
are Unsurpassed (even BY the 1DS IMO) ..
Yes, the 1D should be capable of something. The 1D is still priced at $10000 (body only) in Norway, and the 1DS can be yours for $13000. But then again, the 10D goes for $2500. I don't know about the SD9, because I'm still not able to find any resellers at all in Norway for this camera.

-
Geir
 
Please ignore the Typos, I'm the world's worst Typist
Maybe so, but I'm the one who misspelled "fair". So there!

--
Ciao!

Joe
 
And, one more time, according to well established criteria of
information theory (and even by Foveon's own admission) the Foveon
sensor has a 1.7x advantage in pixel density over a Bayer.
Do you know how Sigma came to that 1.7x advantage? Is this
calculation done on the basis of an AA filter or not?
I'm not sure how they got it, but it's close enough to my own calculations for me to be comfortable with the number. Mine are based on the amount of high frequency information lost by an AA filter, and the amount of luminance information that can be reconstructed by an optimal Bayer interpolation algorithm. The hypothesized figure of 1.707 was determined by a very ugly circular integration. It was then checked by analysis of a test set of images selected at random from our online image library (which covers everything from machine vision, landscapes, art nudes, architecture, etc.). Surprisingly, you only have to run a few hundred images before it converges quite nicely.

And yes, we assumed an AA filter in the Foveon case also, although horizontally and vertically optimal, instead of diagonally as in the Bayer case, so the spatial frequency limit was 1.414 times higher.

Running the Foveon case without an AA filter gives much higher numbers for distortion. Although the image may appear sharper, this is false sharpness, similar to "comfort noise".
--
Ciao!

Joe
 
as your complete ignorance of anything DSP has already been well
demonstrated throughout this thread.

Best of luck to ya..

--Steve
Yup,
Where's the moron now??

Funny how cowards act. He would post every minute here. Now that HE'S WRONG.. he runs.

Clay P.
 

Keyboard shortcuts

Back
Top