SD14 JPEG interpolation

ACR:
http://o.orcinus.googlepages.com/Zeiss_35_f8_crop_ACR.jpg

DCRAW:
http://o.orcinus.googlepages.com/Zeiss_35_f8_crop_DCRAW.jpg

The differences are subtle (it seems ACR has gotten a bit better
resolution-wise since the last time I took a 100% peek), but you
can easily discern them if you layer the pics on top of each other
and flip between them. Take a look at the roof texture and the
woods in the background.
Thanks, I had wondered if I was doing something wrong. This is similar to what I see, except that I get more noise, but I have a G6.

Really? I overlapped them and pixel-peeped. I can't find anything that looks like more detail in the DCraw image. I see more noise, and that gives a more textured look. Nothing wrong with that. When I do ACR with my G6 with the NR at zero, I get that, and I find it preferable to totally smoothed-out sections. Did you have all NR off in ACR?

Not just that, but I see the occasional demosaic artifact in DCraw and not in ACR, though it is way better than when DCR was using a gradient demosaic.
 
Aliasing artefacts are partly due to different handling of frequencies above Nyquist. That's what makes the DCRAW output different: you get subpixel details, but you risk moire/aliasing artefacts here and there.
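As a quick illustration of that folding, here is a minimal Python sketch (the numbers are arbitrary, not from any camera): a frequency above Nyquist, sampled at rate fs, produces exactly the same sample values as its folded alias below Nyquist, which is why the converter cannot tell them apart.

```python
import math

fs = 100.0                  # sampling rate (arbitrary units)
f_hi = 65.0                 # above Nyquist (fs / 2 = 50)
f_alias = fs - f_hi         # 35.0: where the energy folds back to

# Sampling a frequency above Nyquist yields the very same sample values
# as its folded alias (with inverted sign here, since 65 = 100 - 35):
samples_hi = [math.sin(2 * math.pi * f_hi * n / fs) for n in range(16)]
samples_lo = [-math.sin(2 * math.pi * f_alias * n / fs) for n in range(16)]
```

Whether a converter blurs that folded energy away or renders it (risking moire) is exactly the design choice discussed above.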

Zoom to about 400% (the contrast is rather low in that part) and try looking at the vertical antenna pole on the extreme left (just next to the image border). Flip between the ACR and DCRAW output and you'll notice the subpixel extension that comes out of the thicker pillar just isn't there in the ACR image.

Also, some of the noise you're seeing in the DCRAW image isn't really noise but texture that just gets smeared in ACR (the background trees and some parts of the wall and roof are good examples).

But like I said, the differences seem to have been reduced a lot over the past few versions of ACR.

--
--------------------------------------------
Ante Vukorepa

My (perpetually) temporary gallery can be found here -
http://www.flickr.com/photos/orcinus/
 
Think about that again. A percentage increase is independent of the
magnitude of the values: 2 vs. 2.34 is a 17% increase, just as 1
vs. 1.17 is. Dividing the absolute values by 2 does not change the
percentage differences.
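A minimal sketch of the arithmetic above, just to make the point concrete:

```python
def pct_increase(old, new):
    """Percent increase from old to new, independent of magnitude."""
    return (new - old) / old * 100.0

# 2 -> 2.34 and 1 -> 1.17 are both ~17% increases; halving the
# absolute values does not change the percentage.
big = pct_increase(2.0, 2.34)
small = pct_increase(1.0, 1.17)
```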
http://www.dpreview.com/reviews/nikond80/page28.asp

350D: 1850 lines, 400D: 2200 lines. That's 17%. However the linear
pixel difference is only 12.5%, so some of that is AA filter and/or
processing and/or measurement error.
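As a hedged check of the linear-pixel figure, assuming the published horizontal pixel counts of 3456 (350D) and 3888 (400D):

```python
# Linear (one-dimensional) pixel gain between the two sensors,
# assuming the published image widths: 350D = 3456 px, 400D = 3888 px.
w_350d, w_400d = 3456, 3888
linear_gain_pct = (w_400d - w_350d) / w_350d * 100.0   # 12.5%
```

Any measured resolution gain beyond that 12.5% would have to come from the AA filter, the processing, or measurement error, as noted above.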
Yes, I realized after I posted that I got my numbers all wrong on that; I'm going to have to ponder it all some more.
Extinction measurements are probably the least accurate and
relevant on these charts. They are much more algorithm dependent as
you see when you look at the raw results.
Least accurate and relevant as far as detail goes, yes - I'm only thinking here of hints that the AA filter is stronger or not, since it seems like no processing could pull back detail (even false detail) an AA filter had dropped, whereas, as you noted, different processing is otherwise better at bringing back detail that might have been lost due to inferior processing of the same image.
Look at
http://www.dpreview.com/reviews/canoneos5d/page20.asp

"The biggest difference among the RAW converters was how they
handled 'information' beyond nyquist, beyond the absolute
resolution limit of the camera. RIT did the same as the camera and
blurred it to 'be on the safe side [...]"
Thanks for that, I'll have a look.

--
---> Kendall
http://InsideAperture.com
http://www.pbase.com/kgelner
http://www.pbase.com/sigmasd9/user_home
 
I asked you because the comparisons on the Phil page referenced
seem a lot odder to me:
I guess I was surprised that they would seem odd to you because they are the natural consequence of what you saw way back then.
I'm not sure we are any closer to understanding how information
past Nyquist does not produce a bunched-up blur area series
Why do you think it would always produce bunched-up blurs? Because that's what you've seen with simpler in-camera processing? You've just assumed that was mostly due to the AA filter. Like any blur, this is not going to generate perfect gray, just reduced contrast. However, attempting to restore that contrast can have negative effects: look at the moire and the artifacts in the DPP version (and the maze artifacts in other shots).
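The blur-versus-gray point can be sketched in a toy 1-D example (the kernel here is purely illustrative, not any camera's actual AA filter): a mild blur over the finest alternating pattern lowers its contrast but does not flatten it to uniform gray, so a converter can still try to restore it, at the risk of amplifying artifacts.

```python
# Finest alternating pattern the grid can hold, blurred by a mild
# illustrative low-pass kernel (circular boundary for simplicity).
pattern = [0.0, 1.0] * 8
kernel = [0.125, 0.75, 0.125]

n = len(pattern)
blurred = [
    sum(kernel[k] * pattern[(i + k - 1) % n] for k in range(3))
    for i in range(n)
]
contrast_in = max(pattern) - min(pattern)    # 1.0
contrast_out = max(blurred) - min(blurred)   # reduced, but not zero
```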
I can only say it looks odd to me, and I
have to wonder how much special-casing is going on.
There is plenty of evidence that it works on more than just charts. Why do you think these converters are so popular?
I think the SD10 is quite a bit better, and likely due
to its microlenses.
The Canons have microlenses too. I'm not sure that the effective fill factors are all that different. But the SD10 will have the advantage on some color combinations (e.g. against a blue sky) and this is a potential artifact for some algorithms.
Time for sleep now in any case - good night, Erik.
Have some sweet full measured color dreams.

--
Erik
 
Like it's been said before, no AA filter is 100% efficient :)

But that inefficiency (dropoff) extends both below and above Nyquist. It seems to me that modern cameras rely a lot on software in the AA department.

The increase of the AA effect you see in a lot of new cameras could, in fact, be due to increasingly conservative handling of above-extinction frequencies in in-camera software, since most samples presented on various review sites and forums are in-camera JPEGs.
Least accurate and relevant as far as detail yes - I'm only
thinking here of hints the AA filter is stronger or not since it
seems like no processing could pull back detail (even false) an AA
filter had dropped, whereas, as you noted, different processing is
otherwise better at bringing back detail that might have been lost
due to inferior processing in the same image.
--
--------------------------------------------
Ante Vukorepa

My (perpetually) temporary gallery can be found here -
http://www.flickr.com/photos/orcinus/
 
Well, buy it or don't buy it, but that's what I saw using both a 45-degree matrix and a "normally" oriented one simultaneously, for quite a while.

Sorry if it doesn't fit your (pre)conceptions ;)

And just BTW, I don't photograph architecture that often, and certainly not face-on.
I don't really buy this 45 degree Fuji stuff.

OK - if you photograph architecture face-on, then most lines are
vertical or horizontal. You win!

But - if there is perspective in the picture - then most horizontal
lines become slanted. You lose!
--
--------------------------------------------
Ante Vukorepa

My (perpetually) temporary gallery can be found here -
http://www.flickr.com/photos/orcinus/
 
Maybe they should build a 45-degree Foveon chip. Then they would have a legitimate reason to output double the pixel count. :-)
 
The very same idea occurred to me an hour ago, but I was afraid of
starting another flame :)))
Rotating the Foveon sensor by any amount would yield the same result in terms of detail, because it does not record lines "better" in any one direction - a benefit of not sampling different wavelengths at different spatial locations.

Rotating a Bayer sensor helps because of how vertical/horizontal lines fall across the grid... but I too am dubious as to the full extent of the benefit offered there.
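A small geometry sketch of that point, ignoring the color filter layout entirely: rotating a square sampling grid by 45 degrees shrinks the horizontal (and vertical) component of the nearest-neighbour spacing from one full pitch to pitch / sqrt(2), which is where the claimed gain on horizontal/vertical lines comes from - at the cost of the diagonal directions.

```python
import math

# Nearest-neighbour lattice vector of a square grid, rotated 45 degrees;
# its horizontal component is what sets effective H/V sample spacing.
pitch = 1.0
theta = math.radians(45)
dx = pitch * math.cos(theta)   # pitch / sqrt(2) after rotation
```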

--
---> Kendall
http://InsideAperture.com
http://www.pbase.com/kgelner
http://www.pbase.com/sigmasd9/user_home
 
