SD9 Resolution prediction

Could be... more likely I misunderstood the chart - I (apparently
mistakenly) believed that the "2000 lines" mark meant 2000 black
lines alternating with 2000 white lines, whereas the data you just
pointed me to indicates that it really meant 1000 black and 1000
white lines (i.e. 1000 line pairs).
Phil's chart is lines per picture height. The 20 means 2000 lines, and the lines are alternating black and white. 2000 lines means only 1000 line pairs.

Theoretically, it's possible for a 2000 pixel high image to show 2000 lines. Possible, but not very likely.
 
Imagine a single wire stretched across a piece of white paper.
Taken with a Bayer CFA sensor, the wire would be at least two pixels
wide all the way across the image (plus some aliasing to keep the line
smooth). With an X3 sensor the line would be one pixel wide.
Not exactly correct; if it were, the theoretical limit of a Bayer sensor would be 1/2 the one-dimensional nominal resolution, whereas in fact it approaches 1/sqrt(2).

In two dimensions, Bayer's resolution approaches 1/2 the nominal resolution, rather than 1/4 as this post implies.

This is because the 2x2 averaging required by Bayer is done on overlapping groups; each pixel's output is used in the calculation of four averages. If each sensor were only used in the calculation of a single average (i.e. the 2x2 groups were non-overlapping) this post would be correct.

Because of the overlapping 2x2 groups, each output pixel has a correlation of 0.50 with the value of each adjacent output pixel. Since the input array and the output array have the same number of elements, the 0.50 correlation indicates that a certain amount of information has been discarded.

Although Bayer would never correctly render an array of evenly spaced 1-pixel wide lines, the minimum spacing isn't quite as wide as 2.0 pixels. That is to say, there would be (at the extreme limit of resolution) a great deal of aliasing, but a Bayer sensor is able to resolve an arbitrarily oriented grating where each line approaches 1.414 pixels in width.
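That 0.50 figure is easy to verify numerically. A minimal sketch, assuming a white-noise input image so that any correlation in the output comes from the overlapping 2x2 averaging alone:

```python
import numpy as np

# White-noise "sensor" input: any correlation in the output is
# introduced purely by the overlapping 2x2 averaging.
rng = np.random.default_rng(0)
a = rng.standard_normal((1000, 1000))

# Overlapping 2x2 block means: each input pixel feeds four outputs.
out = (a[:-1, :-1] + a[:-1, 1:] + a[1:, :-1] + a[1:, 1:]) / 4

# Correlation between horizontally adjacent output pixels; adjacent
# outputs share 2 of their 4 inputs, so this should come out near 0.50.
r = np.corrcoef(out[:, :-1].ravel(), out[:, 1:].ravel())[0, 1]
print(round(r, 2))
```

The shared-input argument predicts exactly 0.5: adjacent outputs have covariance 2/16 of the input variance against an output variance of 4/16.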
 
I looked at that, and it doesn't make any sense. How can 2048
pixels show more than 2000 lines?
As Phil explained it to me below... It takes the Foveon chip only one pixel to resolve black (0,0,0) or white (255,255,255), but it will take a Bayer sensor 2x2 pixels to resolve the same RG/GB. Keeping that in mind, a 2x2 Foveon pixel area can be BW/WB, whereas a 2x2 Bayer area can only be black or white. So with 2048 pixels, you can have up to 1024 black/white pairs, or 2048 lines, with the Foveon chip.
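That mental model (each non-overlapping 2x2 quad read as a single output pixel) can be sketched like this. Note this is a toy reading, not a real demosaic: real algorithms use overlapping neighborhoods, which is why the thread's 1/sqrt(2) figure beats the naive 1/2.

```python
import numpy as np

# Scene: 1-pixel-wide vertical black/white lines (B/W, so all channels equal).
W = 8
scene = np.tile(np.where(np.arange(W) % 2 == 0, 255.0, 0.0), (W, 1))

# Foveon-style: full color measured at every pixel, stripes survive.
foveon_luma = scene
assert foveon_luma[0].tolist() == [255, 0, 255, 0, 255, 0, 255, 0]

# Naive Bayer reading: treat each non-overlapping RGGB quad as ONE pixel.
r = scene[0::2, 0::2]                       # R sample of each quad
g = (scene[0::2, 1::2] + scene[1::2, 0::2]) / 2  # two G samples averaged
b = scene[1::2, 1::2]                       # B sample of each quad
bayer_luma = (r + g + b) / 3                # one value per quad: half the width

print(bayer_luma[0])  # the stripes collapse to a flat mid-grey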

--
jc
Sony F707
http://www.reefkeepers.org/gallery/f707
http://www.reeftec.com/gallery
 
Which is why I've got 1D samples of line detail approaching 1.5 pixels wide.
  • DL
Although Bayer would never correctly render an array of evenly
spaced 1-pixel wide lines, the minimum spacing isn't quite as wide
as 2.0 pixels. That is to say, there would be (at the extreme
limit of resolution) a great deal of aliasing, but a Bayer sensor
is able to resolve an arbitrarily oriented grating where each line
approaches 1.414 pixels in width.
 
Agreed, but how do you represent 1.414 pixels?
--
Phil Askey
Editor / Owner, dpreview.com
 
Here's two crops, both fsc and 400% in PS, from my 1D.
(original full size image at



;)

Example 1 at 400% in PS
http://www.coastphotos.com/pixel/example1.jpg

and full size crop
http://www.coastphotos.com/pixel/example1fsc.jpg

Example 2 at 400% in PS
http://www.coastphotos.com/pixel/example2.jpg

and full size crop
http://www.coastphotos.com/pixel/example2fsc.jpg

These are straight out of the camera. Zero sharpening in camera; blowups are as displayed in Photoshop. The first one, in particular, is impressive. There's more aliasing going on in the second, but it still shows line detail less than 2 pixels wide, albeit a bit roughly.
  • DL
 
Agreed, but how do you represent 1.414 pixels?
Huh? We're talking frequency here. One inscribes lines that are 1.414 times as wide as the pixel pitch, taking into account the lens system's magnification.

Using a 1:1 macro lens, the physical line size/spacing for Canon DSLRs are as follows:

D30: .015mm
D60: .011mm
EOS-1D: .017mm
EOS-1Ds: .013mm

-Z-
 
Here's my simulation of the difference between X3 and Bayer. This is ONLY a simulation; the source image is actually a downsampled D60 image. These are purely guesses... we'll see the real truth when I get a review camera.

Simulated X3 sensor:



Simulated Bayer CFA sensor:



The second crop was created by downsampling the first shot by 71% and then upsampling it back to the original size. This is approximately what we'd expect to lose from Bayer interpolation. Also bear in mind that the camera may sharpen the second image; we haven't done that.
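The down-then-up trick can be mimicked in one dimension with plain linear interpolation. This is only a rough stand-in for Photoshop's resampler, but the 71% factor (roughly 1/sqrt(2)) makes the point:

```python
import numpy as np

# A 1-pixel-period test signal: alternating 255/0 "lines".
n = 64
signal = np.where(np.arange(n) % 2 == 0, 255.0, 0.0)

# Downsample to 71% of the original length (~1/sqrt(2)), then back up,
# via linear interpolation.
m = int(round(n * 0.71))
x_small = np.linspace(0, n - 1, m)
small = np.interp(x_small, np.arange(n), signal)
restored = np.interp(np.arange(n), x_small, small)

# The finest detail is attenuated: away from the endpoints, the
# restored signal no longer swings the full 0..255 range.
print(signal[:8])
print(np.round(restored[:8], 1))
```

The full-contrast alternation survives only as a reduced-amplitude wobble, which is roughly the loss Phil's 71% simulation is meant to model.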


;)

Example 1 at 400% in PS
http://www.coastphotos.com/pixel/example1.jpg
and full size crop
http://www.coastphotos.com/pixel/example1fsc.jpg

Example 2 at 400% in PS
http://www.coastphotos.com/pixel/example2.jpg
and full size crop
http://www.coastphotos.com/pixel/example2fsc.jpg

These are straight out of the camera. Zero sharpening in camera;
blowups are as displayed in Photoshop. The first one, in
particular, is impressive. There's more aliasing going on in the
second, but it still shows line detail less than 2 pixels wide,
albeit a bit roughly.
  • DL
Agreed, but how do you represent 1.414 pixels?
--
Phil Askey
Editor / Owner, dpreview.com
 
Nothing to do with it.
Agreed, but how do you represent 1.414 pixels?
Huh? We're talking frequency here. One inscribes lines that are
1.414 times as wide as the pixel pitch, taking into account the lens
system's magnification.

Using a 1:1 macro lens, the physical line size/spacing for Canon
DSLRs are as follows:

D30: .015mm
D60: .011mm
EOS-1D: .017mm
EOS-1Ds: .013mm

-Z-
--
Phil Askey
Editor / Owner, dpreview.com
 
Here's my simulation of the difference between X3 and Bayer. This
is ONLY a simulation, the source image is actually a downsampled
D60 image. These are purely guesses.. we'll see the real truth
when I get a review camera.
I don't disagree that the Foveon will perform better than the same "resolution" (meaning image x-y) Bayer, only that it won't be 3x better, and probably not even 2x better AS FAR AS detail. The image may well appear 2x better (or more) because it will be cleaner, with less aliasing and better color, and this will affect our perception of sharpness (resolution).

I'm very excited about the Foveon, and if I had thought that one of the major manufacturers (Canon, Nikon, Kodak) might adopt it soon I would have waited. But when it became clear that Sigma would be the only choice for the immediate future I went ahead and bought a 1D in June. The Sigma multiplier was a killer for my UWA habits and I wasn't anxious to invest in a bunch of Sigma glass.

I'm still hoping one of the big 3 does eventually come around, because I've never been comfortable with the -idea- of Bayer. It seems to work reasonably well in practice (witness the 1D), but at the expense of aliasing and moire (even in my sample image - look at the boot of the right rider), and I'm sure the Foveon can provide a superior image.
  • DL
 
The second crop was created by downsampling the first shot by 71%
and then upsampling it back to the original size.. This is
approximately what we'd expect to lose from Bayer interpolation.
This is not the 100% difference you implied earlier. To get back to the point of my examples, take your "thought" example of a single line:

"Imagine a single wire stretched across a piece of white paper.. Taken with a Bayer CFA sensor the wire would be at least two pixels wide all the way across the image (plus some alias to keep the line smooth). With an X3 sensor the line would be one pixel wide."

This is wrong, or at least oversimplified. With Bayer, theoretically, a black line would be one pixel of black, with the adjacent rows roughly 25% grey, appearing thinner than a 2-pixel-wide line. The point of my example was to show that Bayer is capable of producing nearly 1-pixel-wide detail, and also to answer your question "how is a 1.4 pixel wide line represented?". Answer: with a "shaded" adjacent pixel (aliasing). Our eye perceives this as a narrower line.
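One way to picture a "1.414-pixel line" is area sampling: each pixel records the fraction of its width the line covers. A toy sketch (assuming an ideally sharp line and box-like pixels; the 2.3 offset is arbitrary):

```python
# Fraction of each 1-unit-wide pixel covered by a dark line spanning
# [x0, x0 + width] along one axis (box-filter / area sampling).
def coverage(x0, width, n_pixels):
    cov = []
    for p in range(n_pixels):
        left, right = max(p, x0), min(p + 1, x0 + width)
        cov.append(max(0.0, right - left))
    return cov

# A 1.414-pixel-wide line starting at x = 2.3:
cov = coverage(2.3, 1.414, 6)
print([round(c, 3) for c in cov])
# Total "ink" is conserved: one mostly-dark pixel plus a partially
# shaded neighbor, which is the aliasing the post describes.
```

The eye reads the dark pixel plus its shaded neighbor as a line somewhere between one and two pixels wide.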
  • DL
 
The arguments presented here are all very interesting even though most of them are wayyy over my head! It seems as if this new Foveon chip is more efficient than current CCD/CMOS chips.

In a world of numbers it sounds like it is doomed to failure. In this way it mirrors the CPU market, where higher and higher clock speeds are more important than efficiency.
Here is a solid prediction:

The SD9 will have:

Horizontal LPH
Vertical LPH

Why?

Because there is NO way around Nyquist... or, to explain it to
those readers who have never heard of Nyquist... think of alternating
black & white lines. Even if they are perfectly aligned with the
photo receptors, it will take one photo receptor to read "white" and
one to read "black"... so you CANNOT resolve (on a rectangular
layout) more line pairs than 1/2 the number of photo
receptors on that axis.
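The alignment caveat matters: at exactly two samples per line pair, what you capture depends entirely on phase. A sketch with an ideal point-sampling "sensor":

```python
import numpy as np

# A grating at exactly the Nyquist frequency: one sample per line,
# two samples per line pair.
x = np.arange(16)
aligned = np.cos(np.pi * x)            # receptors centered on the lines
shifted = np.cos(np.pi * x + np.pi/2)  # grating shifted by half a pixel

print(aligned[:4])               # alternates +1/-1: lines resolved
print(np.round(shifted[:4], 6))  # essentially 0 everywhere: lines vanish
```

This is why resolution charts show contrast fading to nothing right at the Nyquist mark rather than cutting off cleanly.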
 
Agreed, I'd guess about 1.5x better, especially considering this is Foveon's first iteration of the technology.
--
Phil Askey
Editor / Owner, dpreview.com
 
What if it were a black line running over a red, green or blue background?

White is too easy :p
--
Phil Askey
Editor / Owner, dpreview.com
 
What if it were a black line running over a red, green or blue
background?

White is too easy :p
Yeah, color complicates it, as does (I think) contrast. You'll notice in my example that the almost-single-line detail worked a lot better in example 1, where it was basically grey and low contrast.

I would love to have a Foveon sensor. I'm just not ready to accept the compromises that the Sigma entails. And as resolution continues to climb, I'm not sure it'll matter any more from a practical standpoint. History is rife with examples of superior technology that missed its window of opportunity.
  • DL
 
Phil,

If you join in the argument you will lay yourself open to shouts of "bias" when you do a real test on the Sigma, if it comes out better than Canon/Nikon etc.
Better to keep to the sidelines and let time win or lose the argument for you.
Looking forward to your in-depth reviews of all the new cameras.
Regards,
Kevin.
 
Here's my simulation of the difference between X3 and Bayer. This
is ONLY a simulation, the source image is actually a downsampled
D60 image. These are purely guesses.. we'll see the real truth
when I get a review camera.

Simulated X3 sensor:

My interpolation of the same thing, but using Mitchell. No sharpening or any other processing done, other than a resize to 312x213 and then back to 440x300, just like you said you did. Of course I used your JPEG image, did the processing, then recompressed, so there may be a few minor losses there.


Simulated Bayer CFA sensor:



The second crop was created by downsampling the first shot by 71%
and then upsampling it back to the original size. This is
approximately what we'd expect to lose from Bayer interpolation.
Also bear in mind that the camera may sharpen the second image; we
haven't done that.
I think the one I did using Mitchell interpolation is better. Still softer, obviously. Still, note the thin middle line in the upper left crane. It has disappeared entirely from your image but is still present in mine. I'm not sure what changing the interpolation algorithm really means in this context, however. But obviously you chose one to use, so I figured I might as well choose a decent one.
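"Mitchell" here is presumably the Mitchell-Netravali cubic with B = C = 1/3; the kernel itself is short enough to write out (a sketch of the filter only, not of any particular application's resampler):

```python
def mitchell(x, B=1/3.0, C=1/3.0):
    # Mitchell-Netravali cubic reconstruction kernel (B = C = 1/3 default).
    x = abs(x)
    if x < 1:
        return ((12 - 9*B - 6*C) * x**3
                + (-18 + 12*B + 6*C) * x**2
                + (6 - 2*B)) / 6
    if x < 2:
        return ((-B - 6*C) * x**3
                + (6*B + 30*C) * x**2
                + (-12*B - 48*C) * x
                + (8*B + 24*C)) / 6
    return 0.0

# Weights at integer taps sum to 1 (up to rounding), so the filter
# neither brightens nor darkens flat areas.
print(round(mitchell(0) + 2 * mitchell(1), 12))
```

The B = C = 1/3 choice is the classic compromise between ringing (C too large) and blurring (B too large), which fits the "decent one" remark above.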

I am in the middle of the following experiment, which may or may not go anywhere:

1) take an N-MP original image, preferably not from a camera.

2) downscale this (with averaging) 2x and call this a N/2-MP Foveon image.

3) apply a CFA filter and interpolation algorithm of choice to the original -- i.e. apply a RG/GB color filter, then reconstruct the other color values. This is now the N-MP Bayer interpolated image.

Compare the images in 2 and 3 and see what results we get. Vary the number used in step 2 to try different things, like how a 3MP Foveon would compare to an 11MP 1Ds. The big sticking point for me is the interpolation algorithms. I have complete textual descriptions of quite a few, maddeningly incomplete descriptions of many more, and code for exactly one (Coffin's Canon RAW converter). I'll give it a try and see if anything works.
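Steps 2 and 3 of this experiment can be scaffolded in a few lines. A sketch: `scene` is a random stand-in for the N-MP original, and the demosaic step is deliberately left out, since which algorithm to use is the open question:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for the N-MP original (RGB, not from a camera).
scene = rng.uniform(0, 255, (8, 8, 3))

# Step 2: 2x downscale with averaging -> the "N/2-MP Foveon" image,
# full color measured at every output pixel.
foveon = scene.reshape(4, 2, 4, 2, 3).mean(axis=(1, 3))

# Step 3: apply an RGGB color filter array -- keep ONE channel per
# pixel; a demosaic algorithm must then reconstruct the other two.
mosaic = np.zeros(scene.shape[:2])
mosaic[0::2, 0::2] = scene[0::2, 0::2, 0]  # R sites
mosaic[0::2, 1::2] = scene[0::2, 1::2, 1]  # G sites
mosaic[1::2, 0::2] = scene[1::2, 0::2, 1]  # G sites
mosaic[1::2, 1::2] = scene[1::2, 1::2, 2]  # B sites
```

From here the comparison is `foveon` (N/2 MP, 3 channels measured) against whatever a chosen demosaic recovers from `mosaic` (N MP, 1 channel measured).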

One last comment on this thread. It is theoretically possible, given pure white and black lines, for a CFA interpolation algorithm to give you the limit of resolution, for example 1000 line pairs given 2000 linear pixels (or 2000 lines in the terminology used here). After all, some data is being captured at every sensor: on a line where everything is 0, even if the values of green on both sides were 255, the red pixel being 0 should result in some attenuation of the green signal.

I just checked a handy algorithm, "linear interpolation with Laplacian second-order correction terms" (Hamilton and Adams), and my numbers come out that in the green channel we'd see alternating 255 and 64 if exactly every other line was 255 and 0 coming in. This is a pretty big difference. No, it isn't as nice as the 255 and 0 given by a 3-color pixel, but it does show that we could determine there are alternating lines.
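A sketch of that green-channel step (my own transcription of the Hamilton-Adams gradient-corrected interpolation, with ties broken by averaging; the exact numbers depend on that tie-breaking choice and on how the lines land on the mosaic, so don't expect the 255/64 figures exactly):

```python
import numpy as np

def ha_green(m):
    """Green plane via Hamilton-Adams gradient-corrected interpolation
    (RGGB layout; borders left unfilled; ties broken by averaging)."""
    H, W = m.shape
    G = np.zeros((H, W))
    gmask = np.zeros((H, W), bool)
    gmask[0::2, 1::2] = True   # G sites on R rows
    gmask[1::2, 0::2] = True   # G sites on B rows
    G[gmask] = m[gmask]
    for y in range(2, H - 2):
        for x in range(2, W - 2):
            if gmask[y, x]:
                continue
            c = m[y, x]  # co-located red or blue sample
            dh = abs(m[y, x-1] - m[y, x+1]) + abs(2*c - m[y, x-2] - m[y, x+2])
            dv = abs(m[y-1, x] - m[y+1, x]) + abs(2*c - m[y-2, x] - m[y+2, x])
            if dh < dv:    # smoother horizontally: interpolate along the row
                G[y, x] = (m[y, x-1] + m[y, x+1]) / 2 + (2*c - m[y, x-2] - m[y, x+2]) / 4
            elif dv < dh:  # smoother vertically: interpolate along the column
                G[y, x] = (m[y-1, x] + m[y+1, x]) / 2 + (2*c - m[y-2, x] - m[y+2, x]) / 4
            else:          # tie: average both directions
                G[y, x] = ((m[y, x-1] + m[y, x+1] + m[y-1, x] + m[y+1, x]) / 4
                           + (4*c - m[y, x-2] - m[y, x+2] - m[y-2, x] - m[y+2, x]) / 8)
    return G

# 1-pixel black/white vertical lines; the scene is B/W, so the raw
# mosaic equals the pattern no matter which filter sits on each pixel.
stripes = np.tile(np.where(np.arange(8) % 2 == 0, 255.0, 0.0), (8, 1))
G = ha_green(stripes)
print(G[3, 2:6])  # measured 255s with attenuated values between them
```

The reconstructed green channel still alternates, supporting the point above: the lines come out attenuated, not erased.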
 
The arguments presented here are all very interesting even though
most of them are wayyy over my head! It seems as if this new
Foveon chip is more efficient than current CCD/CMOS chips.
In a world of numbers it sounds like it is doomed to failure. In
this way it mirrors the CPU market where higher and higher clock
speeds is more important than efficiency.
Yep. Before Intel we had IBM POWER vs. DEC Alpha, with exactly the same thing going on. The POWER did a LOT in each cycle, while the Alpha chose a very different route and did less, but enabled great clock-speed scaling. But at least in that market most people used benchmarks. Now in the consumer market it is exactly that -- MHz sells. It's kind of like selling cars based on their maximum rpm.

In printers the same thing happens with resolution. There is so much more, like color depth (1-bit per pixel vs. 2-bit vs. 8-bit) and in inkjet, drop size. But you'll see people buying a 1200dpi 1-bit printer instead of a 600dpi 8-bit printer, even though the latter produces better images. Hmm, less resolution but more information per pixel. Imagine that. Everyone wants higher dpi, whether they need it or not.

Scanners are similar with insane resolutions being touted.

I feel we should quote Nigel here: "But this one goes to 11..."
 
Rock on baby
 
