Reflections on directly comparing Foveon and Bayer Images

I don't see how temporal aliasing is a problem with a stationary subject. It's good enough for astronomers.

Scanners
Why do you think the vertical design makes a difference? I'm curious.

The difference between Foveon/scanning back and Bayer is the lack
of a CFA and AA filter. This means no loss of sharpness approaching
Nyquist and a lack of colour aliasing because of the alignment of
spatial and colour detail and losing the requirement for
interpolation. The vertical array bit is simply a means to achieve
this, not an end in itself...
Of course it is a means to an end. As you indicate, the VFA
contributes

"a lack of colour aliasing because of the alignment of
spatial and colour detail and losing the requirement for
interpolation."
The need for interpolation occurs because the CFA model cannot
record the full color spectrum at any x,y grid point. To do that
you need the z axis, the vertical positioning of the B, G and R
"sensors." The vertical placement allows you to gather full chroma
and luma data at each x,y point without interpolation. I think that
is a significant factor and I cannot think of any other way to do
it (the designs that "rotate" filters in front of a sensor, where
the rotation can be accomplished electronically rather than with
tiny wheels, introduce temporal aliasing). If you can, I would love
to hear how it can be done.
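To make the interpolation point concrete, here is a minimal sketch of bilinear demosaicing, assuming an RGGB mosaic layout and simple averaging kernels (real cameras use far more sophisticated algorithms). The point is that two of the three colour values at every point are estimates; a stacked sensor has no missing values to fill in:

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw):
    """Reconstruct RGB from an RGGB Bayer mosaic (2D float array).
    Two of the three colour values at every x,y point are averaged
    from neighbours -- the step a stacked sensor skips entirely."""
    h, w = raw.shape
    r = np.zeros((h, w)); r[0::2, 0::2] = 1.0  # red photosites
    b = np.zeros((h, w)); b[1::2, 1::2] = 1.0  # blue photosites
    g = 1.0 - r - b                            # green photosites

    # Standard bilinear kernels: average of 2 or 4 neighbours where
    # the colour was not recorded, the recorded value where it was.
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    return np.dstack([convolve(raw * r, k_rb, mode='mirror'),
                      convolve(raw * g, k_g, mode='mirror'),
                      convolve(raw * b, k_rb, mode='mirror')])
```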

Pete
 
Perhaps you are right, but this should be part of the wash-up and design considerations for follow-up tests! Let's not jump to conclusions...

;-)
The only way I can think of doing that is by challenging the viewer
to reliably demonstrate they can pick the Foveon image out of a
pack when they don't know what camera produced the image.
The problem is that this is not exactly the same question as which
is better.

I don't recall the cite or more details, but I once read about one
of the audio A/B tests where a pair of "golden ears" could reliably
detect a difference when normal listeners did not. The reason was
that they used a recording with which he was intimately familiar.
It was easy for him to detect the subtle differences from what he
was used to. From then on, it was just a matter of confirming his
biases. His score dropped considerably (but was still better than
50/50) when they played unfamiliar music (and different music
types).

I think with scrutiny, many of us could identify the Foveon images
rather reliably. If we were doing science, I think we'd also need
to introduce a time element. Show the images only for a short time
and then have them scored as better/worse.
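For what it's worth, the mechanics of such a timed, scored session are easy to script. A rough sketch, where the file names and the 1-5 rating scale are my own placeholders and the actual timed display is delegated to whatever viewer you trust:

```python
import csv
import random

# Hypothetical file names -- substitute your own matched test shots.
foveon = [f"foveon_{i:02d}.jpg" for i in range(10)]
bayer = [f"bayer_{i:02d}.jpg" for i in range(10)]

# Mix and shuffle so the scorer cannot infer anything from the order.
trials = [(f, "foveon") for f in foveon] + [(f, "bayer") for f in bayer]
random.shuffle(trials)

with open("scores.csv", "w", newline="") as fh:
    sheet = csv.writer(fh)
    sheet.writerow(["trial", "file", "sensor", "score"])
    for n, (fname, sensor) in enumerate(trials, 1):
        # Display fname here for a fixed short interval, then collect
        # a 1-5 better/worse rating from the viewer.
        score = input(f"Trial {n} of {len(trials)}: rate the image 1-5: ")
        sheet.writerow([n, fname, sensor, score])
```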

--
Erik
 
The only way I can think of doing that is by challenging the viewer
to reliably demonstrate they can pick the Foveon image out of a
pack when they don't know what camera produced the image.
The problem is that this is not exactly the same question as which
is better.
Defining what better means is never an easy task.
I don't recall the cite or more details, but I once read about one
of the audio A/B tests where a pair of "golden ears" could reliably
detect a difference when normal listeners did not. The reason was
that they used a recording with which he was intimately familiar.
It was easy for him to detect the subtle differences from what he
was used to. From then on, it was just a matter of confirming his
biases. His score dropped considerably (but was still better than
50/50) when they played unfamiliar music (and different music types).

I think with scrutiny, many of us could identify the Foveon images
rather reliably. If we were doing science, I think we'd also need
to introduce a time element. Show the images only for a short time
and then have them scored as better/worse.
I am still not convinced anyone could reliably ID which images in a stack of 8x10s were from a Foveon sensor and which were from a Bayer sensor. And the same goes for images on a computer screen. Lots of guys use 1280-wide displays, so that may be a good size, but I would go for a 1600 screen too.

At a larger size it might be possible to look at the rolloff and draw some conclusions. But I suspect it would also be possible to use some CS action to get better rolloff in Bayer images. The problem is that such actions would probably be useless for anything but pixel peepers.

But at some point you need to ask what the images are being used for. If the answer is a 4x6, an 8x10, or a 1280 screen, then that is the size that I claim should be used for the ID test.

And it is possible to make the claim that golden eyes (or golden ears) should be excluded from such tests as outliers. Cameras and the images they produce are not for the exceptional, but for normal folks who buy and view them. I imagine there is a very small market for outliers, and a much larger market for normal folks.
 
There is speculation in audio that one reason LP may sound more dynamic than CD despite inferior SNR is because the background "vinyl roar" is a more effective reference than silence.

Perhaps, similarly, random grain is something the eye can fasten onto in areas of very smooth tone that otherwise might be too clean and give rise to the perception of a 'plastic' look?

It could be true... ;-)

One thing I am sure about is that our perceptions are very plastic and easily fooled - especially by ourselves. Has anyone seen the famous video of the person in the gorilla suit gate-crashing a ball-throwing exercise? It is used to demonstrate inattentional blindness in action. Before showing the video, ask a question such as "count how many times the people in the video throw the ball to each other". Afterwards, ask the audience if they saw anything unusual. Typically over 50% will not have noticed the gorilla suit. The brain is odd like that...
We know that noise/grain can add to the perception of sharpness and
detail of images. So if something is smoother, is it more or less
accurate?
Interesting observation, Erik, and the reason why in some of my
Photoshop illustrations, I add noise (very subtly) to instill more
"realism" to the finished art.
--
Kindest regards, Jim Roelofs

I colour my world with Foveon, everyday!
Please visit my gallery at http://www.pbase.com/jrdigitalart/
 
Very good points. Only three comments. First, my suggestion of "verisimilitude" is a hypothesis, an attempt to understand why I find Foveon images so captivating, and I do not have the same reaction to the Bayer images I have seen. (And here I accept that it is very hard to use jpeg-compressed images on the internet, though it is not impossible. I was seriously considering a Nikon D100 when an early SD9 image caught my attention.) I have a good D200 print sent to me by a friend and have printed a good number of 20D prints for my nephew. The D200 is closer than the 20D but still not there. But I do start and end with the simple fact that I like what I see when I look at good Foveon images.

Secondly, I do not claim that only Foveon sensor images can produce the sense of "verisimilitude." Indeed, as I have noted a number of times, I am rather surprised that those people who have access to high-end Bayer cameras and Foveon cameras are not reporting that the Bayer cameras produce "realistic" images to the degree that Foveon cameras do. Though Laurence M has said he has been impressed by some D2X prints.

I do not claim all Foveon images do so. Any image with poor resolution or poor colors, etc. will not have it. Poor images can be produced by any camera. But enough people producing Foveon images seem to be reporting it, and enough non-photographers looking at my prints report it (some of whom have no idea the picture was produced by a Foveon camera), that I suspect it is not simply "bias." And there are things that are "objectively" noted showing differences (for example, the difference in performance of the Foveon sensors and Bayer on color test images (there is a technical article on the Foveon web site about it), and the sharp "pixel roll-off" displayed by Foveon images). It is just that since I am not a scientist I will leave these quantitative debates to those who have more technical skills than I do. I just like what I see, and I am curious enough about why that I think about it and like to debate ideas when I have them.

Finally, I do remember the SD9 blue-sky debates. In some cases the difference was in the sky. Even clear blue skies do show subtle differences, and the Foveon sensors are very adept at recording subtle differences in chroma and luma. Some were problems of some LCD monitors that created noticeable banding. (Since I was using a good CRT monitor, it was not until I checked on my Dell portable's LCD that I saw this.)

But some were undoubtedly the result of the fact that the SD9 was extra UV-sensitive, a problem fixed with the SD10. Actually I was never bothered that much by the fact that the skies were sometimes a little deeper blue, having used a polarizer to get a similar effect for years. But if people just dismissed all sky anomalies as user error, they were in error.

I believe the goal of all debate is to get closer to the truth than you can on your own. But debates are combative by nature, and so it is easy to go over the line into informal fallacies of all sorts.

Pete
 
Some of these shots do come close to the feel I like (the building shot and the water tower, and the guy in front of the chairs, for example). But there is a sense of liveliness I miss even with these. Look, the 5D is a good camera and does produce good images. But for me the Foveon images just feel better. As people have noted, this may be the result of the smoothing caused by the AA filter, the way Bayer handles details as you go "beyond Nyquist," etc., but for me it is just a feeling of liveliness in the image that I miss. I had a similar sense of "flatness" when looking at Clint Thayer's D200 "Train to Nowhere" print he sent me.

The good pictures in this gallery are very good indeed (this gallery is a test of the 5D and many of the shots are presented to highlight some point, so there is a great range of overall quality as you would expect.)

The 5D, the D200, the D80, the D2X, and the continuing saga of the 1D series are all getting closer. But for me they are not yet there. To give you a sense of what I like, I do remember the SD9 image that kept haunting me as I considered the Nikon D100. It was one of the first images from the SD9 that Sam Pyrtle posted:

http://www.pbase.com/sigmadslr/image/16177710/original

And here is one of Chunsum's SD14 shots I really like:

http://www.pbase.com/sigmadslr/image/75896458/original

All of this is subjective, but that is where we start and end: looking at pictures and liking or disliking them overall.

Pete
 
True, but very little is stationary. It will be interesting to see when it gets out of the lab and into a real camera.
Pete
 
I think one (the?) key here is preference, as between the single-pixel potential of the Sigma and the watercolor look of the CFA. This in turn makes the proposed double-blind test questionable, or certainly more problematic, if the question is at all which image the viewer prefers, or more subtly, which image the viewer is more comfortable with, or more accustomed to.

In regard to testing, if the issue is at all which image is "sharper," are we asking a human viewer to provide a resolution test? What are the criteria for that? Wouldn't some people be much better than others? Are we going to use the uninformed as testers, or those with photographic experience, or both? Do we give extra weight to expert opinion? Or do we exclude them as outliers, as suggested, so only the ignorant are relevant? How about simple differences in vision? Do we test for that first? I will add that my 90-year-old father, who has been a photographer for over 70 of those years and currently uses a Nikon D80, has commented repeatedly on the detail captured by my SD10, and wondered how I do it. (And this after his cataract surgery and lens implantation, so I think he sees better than most younger people...)

I am also reminded of a comment by Clint Thayer, long ago on this forum, that the Foveon effect was much more pronounced for close-ups than for shots at a distance: there are many comments regarding the excellence of the Sigma camera for macro shots and feather details. But foliage, even at a distance, seems better with the Foveon.

For myself, I prefer the Foveon. But then, I liked my Mamiya 7 images too.
 
Hi Pete,

What I was trying to show was that I could not look at some of Ron's images or some of the images you linked to, or many other images I have seen on the internet and be sure I could say which ones came from which camera. And if the images were printed I would be even less sure which images came from which camera.
 
Good points. In theory double-blind tests seem relatively simple and easy to do, but even here there are all sorts of variables. To mention another perceptual bias that showed up in the early debates over the SD9: some people are very sensitive to stair-stepping aliasing, and even though it is relatively rare in real-world photographs, they are repelled by its presence. Given the fact that the SD9's xy grid was modest, that condemned the camera in their judgment. Once you realized that was the basis for the judgment, the limits of such judgments were clear, but without that we had some wonderfully noisy crossed monologues!

To go back to my original point. Direct comparisons can be of use, but they are very tricky to set up and interpret, so caution is advised. This does not mean we should not try to make them, just that modesty in claims is advisable.

Good night all!

Pete
 
Very good points. Only three comments. First, my suggestion of
"verisimilitude" is a hypothesis, an attempt to understand why I
find Foveon images so captivating, and I do not have the same
reaction to the Bayer images I have seen.
Actually, exploring this hypothesis is something I'm very interested in.
But I do start and end with the simple fact that
I like what I see when I look at good Foveon images
That may be enough for you. I'm one of those annoying analytical people who want to know if the effect is reproducible.
Indeed, as I have noted a number of
times, I am rather surprised that those people who have access to
high-end Bayer cameras and Foveon cameras are not reporting that
the Bayer cameras produce "realistic" images to the degree that
Foveon cameras do.
But your sample set is those who share your tastes and have some investment in Foveon/Sigma. Almost by definition, they have not found a way to process mosaic images to their satisfaction. Their Sigma images may excel, but do you think their other images are technically top-notch as well? What about those that have tried X3 and moved on in the other direction?
and enough non-photographers looking
at my prints report it (some of whom have no idea the picture was
produced by a Foveon camera), that I suspect it is not simply
"bias."
Non-photographers say that about my prints as well. Perhaps most people are just easily impressed by anything in focus ;-)
And there are things that are "objectively" noted showing
differences (for example, the difference in performance of the
Foveon sensors and Bayer on color test images (there is a technical
article on the Foveon web site about it), and the sharp "pixel
roll-off" displayed by Foveon images).
What puzzles me as an analyst is that if there were something inherent in the sensor, then it should be more obvious in the side-by-side tests. An effect that vanishes or diminishes under more controlled circumstances is always a red flag to the skeptic.
But if people just dismissed all sky anomalies as user
error, they were in error.
Again, no controls were ever done. And then we get into the nature of reality vs. perception. If the SD9 was more sensitive to something than the human eye, is that a good thing? Did it reflect the sky as you perceive it? As most perceive it (e.g. assuming your images are not just for you alone)? As you wanted to artistically render it (yeah, getting deep blue w/o a polarizer was occasionally valuable)? Much of the debate seemed to be an exercise in rationalization.

To me there are several possible hypotheses, not all mutually exclusive:

a) there is something special about X3 that cannot be easily reproduced.

b) there is something about X3 that makes this more likely or easier to produce than with mosaics. However, well processed mosaic images may be functionally equivalent.

b1) there is something about X3, but it's also subject specific. The side by side tests are always of boring subjects that never "pop" anyway.

c) there is nothing significant. It's mainly selective bias (e.g. when you see a good X3 image you attribute it to this effect. When you see a good mosaic image, you dismiss it as a fluke.) Good sharp images in contrasty light will always have this look. That's why every brand claims to have it at one time or another.

d) the differences exist but are mainly subjective. Different people look at different cues to appreciate some photos.

d1) Because Sigma owners value sharpness and the 3D effect so highly, they tend to work for these images more often and show them more often.

Alas, while there are demonstrable differences, there is nothing conclusive to eliminate any of these hypotheses. There are not enough directly comparable images, and bias is hard to measure w/o a lot of time and money. It does not help that the best photographers in the general community are also the ones least likely to make high-resolution work (particularly raw files) available.

--
Erik
 
In regard to testing, if the issue is at all which image is
"sharper," are we asking a human viewer to provide a resolution
test?
Aren't we taking photographs for human appreciation? If the measurements do not correlate with what we prefer, then we're measuring the wrong thing.
Are we going to use the uninformed as
testers, or those with photographic experience, or both?
Who is the audience you want to reach/impress?

--
Erik
 
You can eliminate all variables except the sensor-camera system by using a bellows and identical lens for both systems. You can also use that same system to take a range of photos from +-10mm or whatever you wish from the presumed sharpest focus for each system. You can also use software to measure several characteristics of the resulting images.

I'm pretty sure you will find that there are differences and that the differences will be somewhat consistent over a sample of different images.

Pete, I don't think the commonly used term "3D" means something like a real 3D image but rather something synonymous with "pop".

As I've said multiple times, Mike Chaney's resolution tests show why CFA images may be less consistent in resolution across a range of colors, and that can go a long way toward explaining some of the difference people describe. It wouldn't be that difficult to put a large set of resolution test patterns under a print, maybe on the paper before it is printed, then sample those patterns with software to look for differences.
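As one hypothetical example of the software step (this is not Mike Chaney's method, just a crude stand-in metric): the variance of a Laplacian filter gives a single repeatable acutance number you could compare across the two systems for the same target:

```python
import numpy as np
from PIL import Image
from scipy.ndimage import laplace

def sharpness(path):
    """Variance of the Laplacian over the grayscale image: a crude but
    repeatable acutance score for the same target shot on each system."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    return laplace(gray).var()

# Hypothetical file names; a higher score means more high-frequency
# detail at the same framing and scan resolution.
print(sharpness("foveon_pattern.tif"), sharpness("bayer_pattern.tif"))
```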

Dave Millier, have you done any testing yet?

Mike
'America is not at war,
The Marine Corps is at war;
America is at the mall.'
 
Peter

Yes, double blind testing and interpretation can go wrong sometimes.

But...

...bearing in mind the unreliability of perceptions, blind testing has proven itself time and time again as the best way of pulling out truth from fantasy.

People who dismiss the validity of double blind test results (when properly done) are almost certainly peddling snake oil. Read the Boston Audio Society articles or do a search for "Peter Belt effect" for some gobsmacking examples....
Good points. In theory double-blind tests seem relatively simple
and easy to do, but even here there are all sorts of variables. To
mention another perceptual bias that showed up in the early debates
over the SD9: some people are very sensitive to stair-stepping
aliasing, and even though it is relatively rare in real-world
photographs, they are repelled by its presence. Given the fact that
the SD9's xy grid was modest, that condemned the camera in their
judgment. Once you realized that was the basis for the judgment,
the limits of such judgments were clear, but without that we had
some wonderfully noisy crossed monologues!

To go back to my original point. Direct comparisons can be of use,
but they are very tricky to set up and interpret, so caution is
advised. This does not mean we should not try to make them, just
that modesty in claims is advisable.

Good night all!

Pete
 
Need the camera first!
You can eliminate all variables except the sensor-camera system by
using a bellows and identical lens for both systems. You can also
use that same system to take a range of photos from +-10mm or
whatever you wish from the presumed sharpest focus for each system.
You can also use software to measure several characteristics of the
resulting images.

I'm pretty sure you will find that there are differences and that
the differences will be somewhat consistent over a sample of
different images.

Pete, I don't think the commonly used term "3D" means something
like a real 3D image but rather something synonymous with "pop".

As I've said multiple times, Mike Chaney's resolution tests show
why CFA images may be less consistent in resolution across a range
of colors, and that can go a long way toward explaining some of the
difference people describe. It wouldn't be that difficult to put
a large set of resolution test patterns under a print, maybe on
the paper before it is printed, then sample those patterns with
software to look for differences.

Dave Millier, have you done any testing yet?

Mike
'America is not at war,
The Marine Corps is at war;
America is at the mall.'
 
Peter

Yes, double blind testing and interpretation can go wrong sometimes.

But...

...bearing in mind the unreliability of perceptions, blind testing
has proven itself time and time again as the best way of pulling
out truth from fantasy.

People who dismiss the validity of double blind test results (when
properly done) are almost certainly peddling snake oil. Read the
Boston Audio Society articles or do a search for "Peter Belt
effect" for some gobsmacking examples....
I agree with this 100%. In a former life when I had a proper job... European ops manager for a UK plc... blind testing of new products was an absolute requirement before product launch. I remember, on more occasions than I care to admit, products failing to launch due to unfavorable blind testing...
best
--
Geoff Roughton

'Always look on the bright side of life...'
 
You can eliminate all variables except the sensor-camera system by
using a bellows and identical lens for both systems. You can also
use that same system to take a range of photos from +-10mm or
whatever you wish from the presumed sharpest focus for each system.
You can also use software to measure several characteristics of the
resulting images.
When a topic similar to this one was being discussed on FM, I mentioned that I have modified two Sigma 1.4x TCs by switching the rear plates so I can mount my EF lenses on my Sigmas and my SA lenses on my Canons. The point was made that sensor size and detector size would not be held constant. It seems the same could be said for using a bellows instead of a TC. And the Canons produce an image with more pixels than a Sigma image, so software measurements on images with different dimensions could lead to objections too. And when I posted two images of a test pattern shot with both a Canon and a Sigma, complaints were lodged that the Canon should be closer than the Sigma because the crop factors were not the same (1.7 v 1.3). When I reshot the images, complaints were lodged that the images were not shot at the same focal length.

I was kinda burned out by this point, but it was clear to me that eliminating variables was not as easy a task as I had originally assumed.
I'm pretty sure you will find that there are differences and that
the differences will be somewhat consistent over a sample of
different images.
I am not sure, but I would not be surprised if that was the case. Bottom line is I just don't know.
Pete, I don't think the commonly used term "3D" means something
like a real 3D image but rather something synonymous with "pop".
Agree with this.
As I've said multiple times, Mike Chaney's resolution tests show
why CFA images may be less consistent in resolution across a range
of colors, and that can go a long way toward explaining some of the
difference people describe. It wouldn't be that difficult to put
a large set of resolution test patterns under a print, maybe on
the paper before it is printed, then sample those patterns with
software to look for differences.
Would the pattern put under a 5D image measuring 4368x2912 be the same size as the pattern put under an SD14 image measuring 2640x1760? Would you resize one up, one down, or both up and down a little? How would you do the resizing if you did? Not to mention how sharpening and other post-processing would be done.
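Whichever policy is picked, the one defensible rule is to apply it identically to both files. A sketch of one option, resampling both images to a common intermediate size with the same filter; the 3504x2336 midpoint and the Lanczos filter are my assumptions, not a settled answer:

```python
from PIL import Image

# A size roughly midway between the 5D's 4368x2912 and the SD14's
# 2640x1760, so one file is resampled down and the other up.
COMMON = (3504, 2336)

def normalise(path_in, path_out, size=COMMON):
    """Resample to a common size with one filter, so neither camera's
    file gets a different interpolation treatment before comparison."""
    Image.open(path_in).resize(size, Image.LANCZOS).save(path_out)

# Hypothetical file names.
normalise("5d_pattern.tif", "5d_common.tif")
normalise("sd14_pattern.tif", "sd14_common.tif")
```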

All these issues and many more were raised at FM in a thread that went on for over 100 pages (there is no 150-post limit at FM like there is here). Larry was very active in that thread and will probably back me up that there was no final consensus reached.
Dave Millier, have you done any testing yet?
If Dave does try to do some tests I would suggest first trying to address some of the issues that came up in the FM thread.

What would the image be of, a test pattern or a real-life subject? Which lenses/converters are to be used? How will the focal length, FOV, and distance-from-subject issues be solved? What post-processing (including sharpening) and resizing will be used, if any? What software will be used for the final test?

And are there any other issues that may cause complaints about the test design?

This is why I suggested, say, 10 Bayer images and 10 Foveon images at a specified resolution or print size, to be identified by sensor source, as a reasonable test.

I am still firmly convinced no one could consistently ID which image came from which camera in a blind test.
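That claim is at least testable. With 10+10 images, pure guessing averages 10 correct out of 20, and a quick binomial calculation shows how many correct IDs it would take before "consistently" means anything:

```python
from math import comb

def p_at_least(k, n=20, p=0.5):
    """Chance of getting k or more IDs right out of n by pure guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

for k in range(10, 21):
    print(f"{k:2d}/20 correct: p = {p_at_least(k):.4f}")
# Around 15/20 the one-sided p-value drops below 0.05; anything less
# is statistically consistent with guessing.
```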
 
People who dismiss the validity of double blind test results (when
properly done) are almost certainly peddling snake oil. Read the
Boston Audio Society articles or do a search for "Peter Belt
effect" for some gobsmacking examples....
Peter Belt, now there's a name I haven't heard for a few years. Jimmy Hughes from Hi-Fi Answers was quite often talking about him.

I always remember the negatively charged paper clip, amongst other things; mind you, I do think he was on to something.

I tried some of his products on my Linn/Naim tri-amped system (yes, I still listen to LPs and they still sound better than 99% of CDs on the Linn LP12) but could never really hear a difference. But boy, my Linn/Naim system is still a joy to listen to even after 15 years of use.
 
