SD14 JPEG interpolation

Just BTW, you can discern a subpixel continuation of the leftmost (vertical) antenna pole on the TIFF if you zoom a bit (or sharpen). JPEG seems to have eaten it :)

--
--------------------------------------------
Ante Vukorepa
 
Have to go soon, and hope we are all coming to a smoothed point in
the discussion.
...and see that we have, below.

A pleasure to talk with you, Ante, and I think we are going to enjoy having you around ;)

All best,
Clive
 
Fuji's interpolation had a very solid reason ;)

Speaking of which, it's interesting to note Fuji S3Pro's interpolated 12MP had approx. the same resolving power as a "normal" 9MP bayer, in part thanks to the fact it resolved best in horizontal and vertical directions (unlike the normally oriented Bayer which resolves best in diagonal directions).
As long as multiple JPG sizes are available in-camera (namely 1X),
having an upsampled size is no big deal IMO. As others have
mentioned, it might be handy for direct-to-print.

As far as causing potential confusion, I don't think it's going to
cause that much turmoil anyway. Fuji did fine with their
upsampling for a while.

--
http://www.madmaxmedia.com
--
--------------------------------------------
Ante Vukorepa
 
But the important part here is, the AA filters aren't that exact
either :)
First of all, as silly as it sounds (for such a simple element of
the imager), the technology has changed over the course of years.
Recent cameras really do behave noticeably differently than
older cameras at and around extinction. The way RAW processing
algorithms handle those frequencies changed a lot too, although it
changed in widely different ways for different companies (which
makes comparisons even more pointless at times).
Technology has changed and improved. But it cannot recapture detail the AA filter has already disposed of, or that has been lost by a CFA discarding key wavelengths at a particular location. That information is gone, and only Hollywood has the algorithms to bring such detail back when the sensor neither received nor stored it.
What i DO know is that 5D actually reproduces some pixel-wide
details and even some sub-pixel sized ones (in certain conditions)
that 20D just did not and wasn't expected to (even bearing in mind
that, while 5D does have a larger pixel count, at the same time it
has a much lower pixel density).
It is possible for this to occur if the subject matter is right, but not consistent - I never argued some parts were not sharp, just that if the whole thing is not really at that level of sharpness you cannot say it's delivering "as advertised" sharpness. Also of course it helps that they have a 13MP sensor and are only outputting a 12MP image, which leads to somewhat increased sharpness by dropping away some of the blur as they downsample.
My bottom point is this - let's not expect interpolated miracles
from SD14's sensor. At least until we can scrutinize some real
world comparisons :)
I don't expect anything other than the behaviour I see today with the SD-10. I'm not looking at miracles, only repetition and consistency of behaviour with an increased pixel count.
I'm sure it will perform as an 8-9 MP Bayer, but without the Bayer
nasties, but i just refuse to believe it will do as good as a 13 or
14 MP one.
Again, using the green photosites alone you can tell that 8 is simply too low a number. Even 9 is a bit too low. This is simply an extrapolation of past behaviour, where a 3.2x3 Foveon image held somewhat more detail than a 6MP bayer image with 3MP of green photosites. Some arguments exist over how much more detail, and I don't want to go there - I am only saying what I believe the floor on detail equivalence is, and here I am doing so totally independent of artifacts.

By using the figure "8MP" you are ignoring what the existing sensors do and discounting lots and lots of real world tests people have done with existing imagers. Again, an increase in pixel count in a Foveon imager leads to a simple linear increase in performance, so it's pretty easy to say what it will do. What is not easy is to compare that against the moving target of real-world bayer cameras at higher MP ratings; that's why we have to wait for the real camera to get a good idea what it can do comparatively.

I am not looking for a miracle from the Foveon sensor so much as I am simply not expecting greatly improved performance against a given MP rating from the bayer ones; like I said I suspect the detail "per MP" is somewhat lower for the higher MP bayer cameras because of stronger AA filters.

--
---> Kendall
http://InsideAperture.com
http://www.pbase.com/kgelner
http://www.pbase.com/sigmasd9/user_home
 
You just nailed my exact feelings here, people seem compelled to
perform imaginary comparisons. Very frustrating.
You can do imaginary comparisons. You can, for example, make two different assumptions.

1. The SD14 is as good as a SD10, pixel per pixel.

2. The SD14 is as perfect as it gets per pixel.

Then you can compare an imaginary output created under any of those assumptions. It is possible to do.

In both cases you can get (almost the same) spatial resolution.

Then you can analyse noise behaviours if you so wish.

You could even write computer simulations to compare a Bayer 10 Mpixel camera with the SD14 under those assumptions.

And actually - this will (in some respects) tell more than any RAW example files coming from Foveon/Sigma. Those might look good - but they cannot be compared - so it might be an illusion made by a very skilled photographer.

Photos from the same camera look very different for some curious reason. It just looks like some people must be lucky and buy better samples :)

--
Roland
http://klotjohan.mine.nu/~roland/
 
...you cannot say it's delivering "as advertised" sharpness. Also of course it helps
that they have a 13MP sensor and are only outputting a 12MP image,
which leads to somewhat increased sharpness by dropping away some
of the blur as they downsample.
Actually, they aren't. The image is 12.7 MP, the sensor is 12.8 MP. There is no downsampling happening there.
Again using the green photosites alone you can tell that 8 is
simply too low a number. Even 9 is a bit too low. This is simply
an extrapolation of past behaviour where a 3.2x3 Foveon image held
somewhat more detail than a 6MP bayer image with 3MP of green
photosites. Some arguments exist over how much more detail, and I
But here's what's wrong with the "extrapolation" and what i've been talking about. It depends on data we've seen from Bayer cameras that are contemporary to the SD10 :)

Things have changed somewhat in between then and now. Some new Bayer cameras aren't as bad as the old ones. Some are worse. Some new RAW converters make Bayer images look better than before. Some make them look downright disgusting (but attractive to the mass market).

Bah... Let's just wait for the SD14 and see. I'd really love to get a review sample at some point and do a direct comparison :(
(i've contacted the local Sigma distributor and got a "maybe")
I am not looking for a miracle from the Foveon sensor so much as I
am simply not expecting greatly improved performance against a
given MP rating from the bayer ones; like I said I suspect the
detail "per MP" is somewhat lower for the higher MP bayer cameras
because of stronger AA filters.
Just BTW, that's not really a proven trend (AA filters getting stronger, that is). Things aren't that simple - each company has a different policy regarding AA filters, and different AA filters get applied to different product ranges (amateur, semi-pro, pro) on top of that. Like i said, i'm pretty much 99% convinced the AA filter in the 5D is much, MUCH weaker than the 20D's was. And the 20D seemed to have a bit weaker filter than the 10D.

--
--------------------------------------------
Ante Vukorepa
 
That is why the 14 MP JPEG image capability of the SD14 is
interesting. People can argue all they want about Foveon versus
Bayer, photosites versus pixels, etc., but it all comes down to
image quality... at least that is what Sigma is angling for.
So the 14 MP JPEG mode is a kind of demo mode? A kind of showcase for the technology? A method to make unbelievers understand?

--
Roland
http://klotjohan.mine.nu/~roland/
 
ACR: [image]

DCRAW: [image]
The differences are subtle (it seems ACR has gotten a bit better resolution-wise since the last time i've taken a 100% peek) but you can easily discern them if you layer the pics on top of each other and flip between them. Take a look at the roof texture and the woods in the background.

--
--------------------------------------------
Ante Vukorepa

My (perpetually) temporary gallery can be found here - http://www.flickr.com/photos/orcinus/
 
That's actually the best explanation i've heard in a while...

An interpolated, 14 MP pic would be a great way to explain to the "average joe user" what per-pixel resolution/sharpness is, as well as how much information there really is in the "native resolution" Foveon image.

Could be especially useful on elderly "i'm-not-going-to-zoom-at-100%-because-that's-pixel-peeping" photographers ;)))
So the 14 MP JPEG mode is a kind of demo mode? A kind of showcase
for the technology? A method to make unbelievers understand?

--
--------------------------------------------
Ante Vukorepa

My (perpetually) temporary gallery can be found here - http://www.flickr.com/photos/orcinus/
 
Exactly. You are falling into the trap of training yourself to
"spot the artifact." Does this really serve an artistic purpose?
Yes, because to me, art is attention to detail and consistency of vision. The difference to me between a truly great picture and one that is merely nice is paying exact attention to each and every element present in an image, especially color elements in a color image with a mixture of hard lines. As I said, the lines coming in and out of detail really annoy me almost as much as color issues like that, so seeing the antennas as they are in a real print - I would not like it at all.
For a 30" wide print with 4368 pixels, you are talking about 145
ppi. That maze pattern is ~ 10 pixels wide. So 1/14th of an inch
on paper. Or less than 2mm. Exactly how close are you planning to
stand to that print?
The pattern is 10 pixels wide - and 27 high. That is close enough to see at what I would consider to be a decent viewing distance. I can tell because it's pretty noticeable at 100% on a 19" 1600x1200 monitor, as are the issues with the antennas.

The problem is that the brain subconsciously picks up on details like this that are small but visible, and they can detract from the image.

Also, if I am doing an image for myself I can control my viewing and know what I will ignore. It's like cleaning around the house, knowing what I will find acceptable for myself - but there is a whole different standard of cleaning when it comes to company. And so it is for images: if I am to present an image to the world I am a lot more careful about what is in it, at least if I consider it to be really good and I want it to be thought of as art rather than pure documentation.
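For concreteness, the print-size arithmetic in the quoted paragraph is easy to check (a quick sketch; the 4368-pixel width is the 5D output width cited above):

```python
# How large a ~10 px feature ends up on a 30" wide print.
width_px = 4368      # image width in pixels
print_in = 30        # print width in inches
ppi = width_px / print_in            # ~145.6 pixels per inch

feature_px = 10                      # the maze pattern is ~10 px wide
feature_in = feature_px / ppi        # ~0.069 in, i.e. roughly 1/14th inch
feature_mm = feature_in * 25.4       # ~1.7 mm on paper

print(round(ppi), round(feature_mm, 1))  # 146 1.7
```

So both figures in the quote (about 145 ppi, just under 2 mm) hold up.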
Yes, 17% more pixels (linear) will result in 17% more linear
resolution. Guess what? That's the same difference Phil measures
between the 350D and the 400D (JPEG, in the D80 review.) So the
SD14 will compare to the D80/D200 pretty much like the SD10 does to
the 20D.
That does not follow though.

You are equating the 17% linear increase in Foveon pixels with bayer output pixel increases, so you need to roughly double the foveon increase (or at least not leave it at 17%)... thus yielding a comparison more like the SD-10 vs. the 300D against the SD-14 vs. the D200.

If you look at the linear increase between the 300D (6MP) and the 400D (10MP) you'll find it's only around 27%, for example.
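The "linear increase" numbers being traded here can be checked from published output dimensions (a sketch; the pixel widths are taken from the cameras' spec sheets and worth double-checking):

```python
import math

def linear_increase_pct(w_old, w_new):
    """Percent increase in linear (one-dimensional) pixel count."""
    return (w_new / w_old - 1) * 100

# Output widths in pixels:
sd10_vs_sd14 = linear_increase_pct(2268, 2640)        # SD10 -> SD14, ~16.4%
canon_300d_vs_400d = linear_increase_pct(3072, 3888)  # 300D -> 400D, ~26.6%

# Same thing from megapixel counts: linear scale goes as sqrt(MP ratio)
from_mp = (math.sqrt(10.1 / 6.3) - 1) * 100           # ~26.6%

print(round(sd10_vs_sd14, 1), round(canon_300d_vs_400d, 1), round(from_mp, 1))
```

Which is where the ~17% (SD10 to SD14) and ~27% (300D to 400D) figures come from.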
Phil has a big database of measured resolution numbers for CFA
cameras - for the same class of camera, the correlation between
linear number of pixels and measured resolution is very good. YMMV,
but if anything the more modern cameras have better tuned AA
filters.
Yes, but we all know well the arguments about B&W resolution charts. These are pretty much the best possible case for demosaicers and algorithms that recover detail, because every photosite is helping resolve the edges of those lines. However...

Within the same camera class (say, comparing all the Canons) it has seemed to increase roughly linearly, so I'm willing to forgo the assumption that the AA filter strength has changed very much as MPs have risen. I think extinction measurements would actually be the more accurate way to look for this trend, but it doesn't seem like the 300D extinction figures are as high as they really would be with a better chart, so again I'm willing to hold back on that.

However, you can see in test measurements, when looking across camera bodies, that the AA filter (and/or processing techniques) is having a pretty significant effect on detail resolved; look at the 5D resolution chart compared to other cameras.

http://www.dpreview.com/reviews/canoneos5d/page31.asp

This does show some difference in LPH resolved, where the D2X is somewhat ahead, while in theory the 5D has slightly more pixels. Also note the extinction resolution is much higher for the D2X.
There would be differences, but they would be swamped by the more
common differences of lenses and processing. The "optimal"
processing chain will be different (e.g. a little more sharpening
for one vs. the other) and the tradeoff between artifacts (e.g.
moire vs. halos) but that's going to be very subjective. You can
find differences if you pixel peep each image to a high enough
degree, but it's difficult to argue that they are significant in
prints. (This is where I got into that futile p*ssing match with
Lin - Phil for example, does not call these differences
significant.)
From the LPH results I'm willing to accept that, though again I think the real world yields surprises that make that difference less academic than it would appear from just the charts. That, to me, is the difference between what Phil sees and what Lin sees.

--
---> Kendall
http://InsideAperture.com
http://www.pbase.com/kgelner
http://www.pbase.com/sigmasd9/user_home
 
Just BTW, are Phil's resolution measurements taken from a converted RAW file or from JPEG? Because sometimes i have a nagging feeling he's using JPEG for some of them (which would be absolutely pointless).
--
--------------------------------------------
Ante Vukorepa

My (perpetually) temporary gallery can be found here -
http://www.flickr.com/photos/orcinus/
 
Fuji's interpolation had a very solid reason ;)

Speaking of which, it's interesting to note Fuji S3Pro's
interpolated 12MP had approx. the same resolving power as a
"normal" 9MP bayer, in part thanks to the fact it resolved best in
horizontal and vertical directions (unlike the normally oriented
Bayer which resolves best in diagonal directions).
More like 6.5MP in the real world. A resolution chart might conveniently favor the tilted matrix, but against even remotely irregular scenes, I think it does little to nothing. I lean toward nothing, because I would prefer the standard bayer 6MP output for its lack of artifacts.

Check this.
http://www.dpreview.com/reviews/fujifilms3pro/page21.asp

Fuji's claims are even more hot air than some of the Foveon ones I see here. When put to a fair test, the hot air escapes.
 
Yes because to me, art is attention to detail and consistency of
vision.
Sometimes. There are some paintings where you want to see the brushstrokes. And others where it's not so important.
As I said the lines coming in
and out of detail actually really annoys me almost as much as color
issues like that so seeing the antenna as they are in a real print
- I would not like it at all.
Perhaps. I'd still like you to consider that you might be looking for trouble and finding it whether it really matters or not. (The same thing can be said for excessive attention to X3 aliasing issues.) That said, dcraw is not my favorite converter because of some of these issues. There are other converters that are better at artifact suppression and still have good sharpness.
You are equating the 17% linear increase in Foveon pixels with
bayer output pixel increases, so you need to roughly double the
foveon increase (or at least not leave it at 17%)...
Think about that again. A percentage increase is independent of the magnitude of the values. 2 vs. 2.34 is a 17% increase. Just as 1 vs. 1.17. Dividing the absolute values by 2 does not change the percent differences.
If you look at the linear increase between the 300D (6MP) and the
400D (10MP) you'll find it's only around 27%, for example.
http://www.dpreview.com/reviews/nikond80/page28.asp

350D: 1850 lines, 400D: 2200 lines. That's 17%. However the linear pixel difference is only 12.5%, so some of that is AA filter and/or processing and/or measurement error.
These are pretty much the best possible case for
demosaicers and algorithms that recover detail, because every
photosite is helping resolve the edges of those lines.
Sure. I would not put too much weight on the absolute numbers, but the amount of improvement over time vs. number of pixels should apply to both color and B&W resolution (unless you are suggesting the algorithms have been tweaked to improve one at the expense of the other...)
I think extinction measurements
Extinction measurements are probably the least accurate and relevant on these charts. They are much more algorithm dependent as you see when you look at the raw results.
This does show some difference in LPH resolved, where the D2X is
somewhat ahead, while in theory the 5D has slightly more pixels.
Also note the extinction resolution is much higher for the D2X.
Yes, that's why Lin claimed the raw resolution of the D2X was higher than the 5D. But that's just Canon's default in-camera treatment of detail beyond the Nyquist limits.

Look at
http://www.dpreview.com/reviews/canoneos5d/page20.asp

"The biggest difference among the RAW converters was how they handled 'information' beyond nyquist, beyond the absolute resolution limit of the camera. RIT did the same as the camera and blurred it to 'be on the safe side [...]"
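As an illustration of what 'information beyond Nyquist' means in practice: once sampled, a pattern finer than half the sampling rate is indistinguishable from a coarser alias, so a converter has to guess what to render there (a minimal sketch, not tied to any particular camera):

```python
import math

fs = 100.0           # samples per picture-width (the sensor's sampling rate)
f_hi = 70.0          # detail frequency beyond Nyquist (fs / 2 = 50)
f_alias = fs - f_hi  # the 30-cycle pattern it masquerades as

hi = [math.cos(2 * math.pi * f_hi * k / fs) for k in range(12)]
lo = [math.cos(2 * math.pi * f_alias * k / fs) for k in range(12)]

# sample-for-sample, the two patterns are identical after sampling
assert all(abs(a - b) < 1e-9 for a, b in zip(hi, lo))
```

A converter can blur such frequencies away "to be on the safe side" or render them as (false) detail; both choices show up on charts.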
From the LPH results I'm willing to accept that, though again I
think the real world yields surprises that make that difference
less academic than it would appear from just the charts.
Well, we also have the user side-by-side raw tests. And they tend to echo Phil's results. But there is also the matter of interpretation. If you find certain imperfections more objectionable than others, you will score images differently.

--
Erik
 
Just BTW, are Phil's resolution measurements taken from a converted
RAW file or from JPEG?
All JPEG (except where the camera does not support JPEG, like the SD9/10.) And all using the "default image parameters".
he's using JPEG for some of them (which would be absolutely
pointless).
Not pointless, but certainly of limited usefulness for advanced users. I would venture that most camera buyers still never change the defaults or use raw. However, those tests have become a smaller and smaller part of his reviews. Most of the time is now spent on the sample scene, which is done with both raw and JPEG.

--
Erik
 
Well, like i said - "in part thanks to the fact it resolved best in horizontal and vertical directions" :)

6.5 sounds a tad too low, though, even accounting for the favourable matrix orientation. I had an S3 for quite a while back when i still had a 20D, so i did a few side by side real world comparisons. In most real world situations you could resize S3's 12 MP output to 8 MP and get slightly more per-pixel detail than from the 20D's "native" 8 MP output.

There were, of course, scenes that looked simply horrible (mostly irregular textures), but those were a relatively minor percentage.
--
--------------------------------------------
Ante Vukorepa

My (perpetually) temporary gallery can be found here -
http://www.flickr.com/photos/orcinus/
 
BTW, bear in mind that Phil's comparison is JPEG based.
 
Well, like i said - "in part thanks to the fact it resolved best in
horizontal and vertical directions" :)

6.5 sounds a tad too low, though, even accounting for the favourable
matrix orientation. I had an S3 for quite a while back when i still
had a 20D, so i did a few side by side real world comparisons. In
most real world situations you could resize S3's 12 MP output to 8
MP and get slightly more per pixel details than from 20D's "native"
8 MP output.

There were, of course, scenes that looked simply horrible (mostly
irregular textures), but those were a relatively minor percentage.
I don't really buy this 45 degree Fuji stuff.

OK - if you photo architecture - face on. Then most lines are vertical or horizontal. You win!

But - if there is perspective in the picture - then most horizontal lines become slanted lines. You lose!

The BS about the eye being more sensitive to horizontal and vertical resolution I do not believe in the slightest. I have read the original scientific report that is the ground for this - and it is surely a fake. The facts and figures were totally wrong, and if you recomputed them correctly, no difference could be found. And this report was accepted at a conference! Ouch.
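For reference on the geometry being argued (separately from the perceptual claim): rotating a square lattice 45 degrees really does pack its samples sqrt(2) closer along the horizontal and vertical axes, at the cost of coarser diagonal sampling. A quick sketch:

```python
import math

p = 1.0  # pitch of the square photosite lattice

# Axis-aligned lattice: points (i*p, j*p) -> distinct x positions every p.
# Rotated 45 deg: points ((i - j) * p / sqrt(2), (i + j) * p / sqrt(2))
# -> distinct x (and y) positions every p / sqrt(2).
pts = {((i - j) * p / math.sqrt(2), (i + j) * p / math.sqrt(2))
       for i in range(-4, 5) for j in range(-4, 5)}
xs = sorted({round(x, 9) for x, _ in pts})
spacing = min(b - a for a, b in zip(xs, xs[1:]))

print(round(p / spacing, 3))  # ~1.414: sqrt(2)x finer horizontal sampling
```

Whether that geometric gain survives real scenes and real demosaicing is exactly what's disputed above.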

--
Roland
http://klotjohan.mine.nu/~roland/
 
Erik, you know, this resolution area for cameras is getting to be a bit like the Mad Hatter's tea party.

I couldn't imagine what you meant by Raw packages 'handling information beyond the Nyquist', so I looked at Phil's page you linked.

I'm still not sure what I see there, or what greater import it may have. Most of the 'make higher resolution' converter outputs look like straight aliasing on the resolution chart lines - what used to be criticized on the SD9. Except, how could it be there quite that way given the blur filter? Has somebody come up with an algorithm for beating resolution charts?

This seems it could be related to the issue with the 5D we had been discussing, where 1 px width items like television antennae etc. fade in and out of presence, for example.

I looked at some of Phil's 5D samples and could also see artifacts like that. It in fact looks like the blur filter isn't adequate - some rainbow moirés would back that up. It's hard to find samples with antennas from recent cameras like the D200 to see if other brands are similar.

I haven't the time to go digging into all this, but I'm interested if you have some kind of answer.

Kind regards,
Clive
 
Except, how could it be there quite that
way given the blur filter? Has somebody come up with an algorithm
for beating resolution charts?
It's the same point I've been trying to make for years: the processing matters a lot!

1. You overestimated how much was due to the AA filter and how much was due to conservative default processing. The AA filter is not a perfect cut filter, and its strength varies between models.

2. The B&W charts are the easiest case. Remember the "black flags" on the cruise ship photos? There are limits: they can recover some detail, but not always its color.

3. Yes, there is aliasing, but it's a lot gentler than the X3 aliasing.
This seems it could be related to the issue with the 5D we had been
discussing, where 1 px width items like television antennae etc.
fade in and out of presence, for example.
Which we called the "twisted rope" effect when applied to the SD9/10. None of these sensors has a 100% fill factor.
I looked at some of Phil's 5D samples and could also see artifacts
like that. It in fact looks like the blur filter isn't adequate -
some rainbow moirés would back that up.
It's always a compromise between sharpness, detail, and artifact suppression. Not just in the AA filter or even the pixel organization, but also the demosaicing algorithm, upsizing, and sharpening. Nobody gets it perfect, but there are a lot of different approaches that have their own subtle (and not so subtle) differences.

--
Erik
 
These are all things I would know - I would think you'd remember that, in fact, from the cruise ship flags episode so long ago.

I asked you because the comparisons on the Phil page referenced seem a lot odder to me:

http://www.dpreview.com/reviews/canoneos5d/page20.asp

I'm not sure we are any closer to understanding how information past Nyquist fails to produce the bunched-up blur areas and lowered resolution-line counts on this contemporary CFA and standard charts. I can only say it looks odd to me, and I have to wonder how much special-casing is going on.

On the fill factor issue, that is exactly what I'm getting at on the 5D, plus its marginal blur filter - the fill factor in fact looks quite low, causing the disappearing stretches of low-pixel-width lines. I think the SD10 does quite a bit better, likely due to its microlenses.

Could be that I'm having one of those late-night moments where we naturally question things - which is very valuable for our growth of insight - or don't you agree?

Time for sleep now in any case - good night, Erik.

Regards,
Clive
 
