Artifacts and haloing are more likely to happen in Bayer-based
cameras because of the way the interpolation algorithms deal with
sensor data (which is mostly guessing with a lot of averaging
out). The averaging-out process creates thicker edge details
(haloing) worsened by sharpening AND clumpier details (artifacts)
worsened during the JPEG transform.
OK, so first: when you say haloing, I am assuming from your further
comments that you do not in fact mean sharpening effects, which is
what most people mean when they talk about halos. You seem to mean
soft edges (which I would not call halos). If you do in fact mean
sharpening, then (1) if you get halos you have oversharpened,
regardless of the sensor, and (2) halos from USM occur on sharp
edge boundaries, meaning they are more likely on a Foveon if you
think the Foveon is sharper.
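To make the USM point concrete, here is a toy 1-D sketch (my own
numbers, not any camera's or editor's actual sharpening pipeline)
of why the over/undershoot people call a halo only appears where
there is a hard step for the blurred copy to disagree with:

import numpy as np

# A hard step edge, a small blur, then classic unsharp masking:
# sharpened = original + amount * (original - blurred).
edge = np.array([10.0] * 8 + [200.0] * 8)
kernel = np.array([1.0, 2.0, 1.0]) / 4.0    # crude Gaussian-ish blur
blurred = np.convolve(edge, kernel, mode="same")
sharpened = edge + 1.5 * (edge - blurred)   # amount = 1.5, picked arbitrarily

print(sharpened[6:10])                      # the four pixels around the step

Run it and the pixel just before the step is driven well below 10
and the pixel just after well above 200 -- the dark and light
fringes you see along oversharpened edges. On a smooth ramp the
(original - blurred) term is nearly zero and nothing happens, so
the sharper the edge, the bigger the halo, whichever sensor
produced it.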
If you're talking about soft edges, then again I am not following
why you think you would get JPEG artifacts. 8x8 block artifacts
must be irrelevant, because those are caused by too much
compression regardless of anything else. DCT-type artifacts are
caused by dropping too many higher-frequency terms. If you think
Bayer images are softer, then they will have fewer artifacts, not
more.
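Here is a toy 1-D version of that argument (my own numbers, not
the JPEG spec's tables): take the same edge sharp and blurred, run
the DCT-II that JPEG's 8x8 transform is built from, and compare
how much energy sits in the high-frequency terms that heavy
quantization crushes first.

import numpy as np

def dct_ii(x):
    # Plain DCT-II of an 8-sample row, the 1-D building block of JPEG's 8x8 DCT.
    n = len(x)
    k = np.arange(n)
    return np.array([np.sum(x * np.cos(np.pi * (k + 0.5) * u / n))
                     for u in range(n)])

sharp = np.array([10, 10, 10, 10, 200, 200, 200, 200], float)  # hard edge
soft = np.array([10, 10, 30, 80, 130, 180, 200, 200], float)   # same edge, blurred

for name, row in (("sharp", sharp), ("soft", soft)):
    c = dct_ii(row)
    print(name, "energy in coefficients 4-7:",
          round(float(np.sum(c[4:] ** 2)), 1))

The sharp row puts far more energy into the terms that get thrown
away, so it rings more when they are dropped; the soft row has
little there to lose. Which is exactly why "Bayer is softer" and
"Bayer gets more DCT artifacts" pull in opposite directions.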
Lastly there is chroma subsampling. I haven't thought about how
this would affect either one differently.
When you say "mostly guessing with a lot of averaging out," this is
trivializing the work that goes on. I believe you are doing this
because it makes it sound bad. A common phrase heard in
creationism is "evolution is only a theory," which is meant to make
one think that it is poorly supported. Similarly I could talk
about how the Foveon doesn't even capture red, green, and blue, and
instead makes all sorts of guesses about the values based on the
muddy brown, dark brown, and lilac that it does capture. This
obviously omits the theory behind why one can correctly do this.
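Since we're both waving at "the theory," here is the linear-algebra
core of the Foveon side in a few lines. The 3x3 mixing matrix is
completely made up by me -- it just stands in for the layers'
overlapping spectral responses, it is NOT Foveon's actual
calibration:

import numpy as np

# Made-up mixing matrix: rows are top/middle/bottom layers, columns are
# how strongly each layer responds to R, G, B. Not real X3 calibration data.
mix = np.array([[0.2, 0.4, 0.9],   # top layer leans blue
                [0.3, 0.8, 0.4],   # middle layer leans green
                [0.9, 0.5, 0.2]])  # bottom layer leans red

true_rgb = np.array([0.8, 0.3, 0.1])        # the scene color at one pixel
layer_signals = mix @ true_rgb              # the muddy, overlapping raw readings
recovered = np.linalg.solve(mix, layer_signals)  # invert the known mixing

print(np.allclose(recovered, true_rgb))     # True -- exact in the noise-free case

A calibrated model plus well-understood math, not arbitrary
guessing -- and the same sort of statement can be made for Bayer
interpolation, except that its model is spatial rather than
spectral. (Noise amplification through that inverse is a real
issue, but that is a separate debate.)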
Clumpier details? Could you be a tad more specific? And exactly
why does JPEG pick up on them when it doesn't pick up on sharp
edges? Do remember that the Foveon has some aliasing as well (of a
different type) -- a single-pixel-wide object can easily span 4
sensor pixels, giving each one 1/4 of its value.
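That 1/4 split isn't hand-waving, by the way -- it falls straight
out of a pure area-sampling model (toy code below; no AA filter or
lens blur assumed, just a bright one-photosite-wide square landing
on the corner between four photosites):

import numpy as np

# 4x4 photosites, each modeled as a 2x2 block of finer sub-pixels.
fine = np.zeros((8, 8))
fine[3:5, 3:5] = 100.0          # a 1-photosite-wide object centered on a corner

# Each photosite simply integrates (averages) its own 2x2 patch of the scene.
sensor = fine.reshape(4, 2, 4, 2).mean(axis=(1, 3))
print(sensor)

Four neighboring photosites each read 25 -- a quarter of the
object's 100 -- so the detail is smeared across a 2x2 block even
though every photosite measures full color. It's a different kind
of aliasing than Bayer color moire, but aliasing all the same.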
Once again you seem to think that you can join a debate and be the
judge at the same time. I guess you find yourself on the winning
side more often that way. While slightly more grown up than
"nyanny nyanny you can't catch me," it seems in the same vein.
Let's forget any audience you are playing to, and just convince me.
Why does sharpening cause more artifacts on Bayer sensors than
Foveon sensors? Why does JPEG cause more artifacts on Bayer
sensors than Foveon sensors? These are your assertions -- my
current belief is that these aren't true. Convince me with
logical arguments and I will be wiser for it. I apologize for the
rather testy nature of this response, but it's just one of those
days.