If this is not false advertising, what is?

While looking for information on "back CCDs", I ran into
literally thousands of papers online that attempt to analyze the
MTF functions of sensors, just as people do with film and lenses.
It's a little difficult, but there are excellent papers discussing
techniques to compensate for aliasing, etc., so it is doable. Of
course, it took years for someone to round up enough stereo
manufacturers to form the IHF, so I don't know how we're going to
get camera manufacturers into a rating program.
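
(For anyone who wants to try this at home, the core of an
edge-based MTF measurement is only a few lines. Here is a minimal
sketch in Python/NumPy; the tanh edge profile is synthetic, made up
purely for illustration, not real sensor data.)

import numpy as np

def mtf_from_edge(edge_profile):
    """Estimate MTF from a 1-D edge spread function (ESF).

    Differentiate the ESF to get the line spread function (LSF),
    window it, then take the normalized FFT magnitude.
    """
    lsf = np.diff(edge_profile)            # ESF -> LSF
    lsf = lsf * np.hanning(len(lsf))       # reduce spectral leakage
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]                    # normalize so MTF(0) = 1

# Synthetic example: a slightly blurred edge sampled at 64 points.
x = np.linspace(-4, 4, 64)
esf = 0.5 * (1 + np.tanh(x / 0.8))         # stand-in for a measured edge
print(mtf_from_edge(esf)[:8])              # low-frequency end of the MTF
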
Well, it would address part of the problem, but it wouldn't deal with the R and G and B sensors vs. R or G or B sensors that people are so hot under the collar about here.

Does anybody ever quote MTF for each colour separately from grayscale MTF?
 
The idea is good, but I bet the brand-name companies won't quote MTF for each colour separately, since it would only show their weakness.
Well, it would address part of the problem, but it wouldn't deal
with the R and G and B sensors vs. R or G or B sensors that people
are so hot under the collar about here.

Does anybody ever quote MTF for each colour separately from
grayscale MTF?
--
Thomas the C.Wolf 8^)
Gallery:
http://www.pbase.com/sigmasd9/c_wolf
http://www.pbase.com/c_wolf
 
This gets at the heart of the misunderstanding many "Bayer people"
have.
As soon as you start labeling people like this, most people stop
following your arguments. You should be more careful. I've read
some of your other posts, and you usually have valid points.
True. I am chastened. I do try not to use words that have a "slant"
to them. I will have to try harder.
Be careful how hard you try. You should have seen me about 3 years ago, when I got overly into "gender neutral" writing. I dodged around gendered pronouns by using the non-gendered plurals "they" and "their" most of the time, even when talking about only one person. I even tried the gender-neutral "xir" instead of "him" or "her".
Again, I apologize. I do most of my work on Imax images
Cool. Tell us more...
(where 6MP
is pretty low) so I jumped to the erroneous conclusion that you
really were looking at a 6 megapixel monitor (just not thinking
clearly) and that you thought two million were red (i.e. were
confusing pixels with phosphor dots). I now understand what you
meant, and I agree.
I'm getting a better idea of where you're coming from, too.
Again, try to avoid prejudicial terms like colors "guessed from
neighboring pixels". Foveon sensors need color guessing too, to
compensate for the metamerism issues.
On this one, I'm not so sure I should feel guilty: perhaps I should
have said "interpolated" but -- and the last thing I want is to
start yet another argument over subtle distinctions of definition
-- "interpolation" is very close to "intelligent guess". But I do
concede the point: it is a "slanted" way of putting it.

As far as Foveon chips guessing, the Foveon process derives x
values from x measurements (where x is 300 for a 10x10 image). The
Bayer process derives x values from x/3 measurements. Both
processes need to do some math, but the amount of math isn't of
the same magnitude at all.
This is true. The magnitude is different. It's just not different in the direction you believe it is. True, the Bayer sensor needs some spatial interpolation. Algorithms for this run the gamut from very simple (as few as four math operations per pixel) to enormously complex. (I'm working on a method that uses 256-sample sinc interpolators and needs about 1000 math operations per pixel.) Algorithms actually used in cameras and post-processing programs are somewhere in between these extremes.
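
To make the "very simple" end of that range concrete, here is a toy
sketch in Python/NumPy (my own illustration, not any camera's actual
pipeline) of bilinear green interpolation on an RGGB mosaic: each
missing green value is just the average of its four green
neighbours, a handful of operations per pixel.

import numpy as np

def interpolate_green(mosaic):
    """Bilinear green-channel interpolation for an RGGB Bayer mosaic.

    At red and blue sites the missing green value is estimated as the
    average of the four green neighbours (up, down, left, right).
    Border pixels are left untouched to keep the sketch short.
    """
    h, w = mosaic.shape
    green = mosaic.astype(float)
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            if (r + c) % 2 == 0:           # red or blue site in RGGB
                green[r, c] = (mosaic[r - 1, c] + mosaic[r + 1, c] +
                               mosaic[r, c - 1] + mosaic[r, c + 1]) / 4.0
    return green

# Toy 6x6 mosaic of raw sensor values.
raw = np.arange(36, dtype=float).reshape(6, 6)
print(interpolate_green(raw))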

But this doesn't include what it takes to get a good image out of the sensor. The Foveon sensor typically requires red sharpening (as mentioned by Peter Ventura of Foveon, http://www.alt-vision.com/r/documents/5074-35.pdf ), a well documented characteristic of sensors used in scientific applications. That's heavy math.
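
(To give a rough idea of what a single-plane sharpening step
involves, here is a generic unsharp-mask sketch in Python/NumPy.
This is just my own illustration of the kind of operation involved,
not Foveon's or Sigma's actual algorithm.)

import numpy as np

def sharpen_plane(plane, amount=1.0):
    """Generic unsharp mask on a single colour plane.

    Blur with a 3x3 box average (built from shifted copies; edges
    wrap around in this sketch), then add back a scaled copy of the
    detail (original minus blur).
    """
    p = plane.astype(float)
    blurred = sum(np.roll(np.roll(p, dy, axis=0), dx, axis=1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    return p + amount * (p - blurred)

# Example: sharpen only the red plane of a toy RGB image.
rgb = np.random.rand(8, 8, 3)
rgb[..., 0] = sharpen_plane(rgb[..., 0], amount=0.8)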

But not as bad as the math needed to get the colors right. There are a number of ways to get from "sensor space" to "color space" mathematically. The most common way is a simple 3x3 matrix multiplication. That means just nine MACs (multiplications and accumulations) per pixel. It works very well when the spectral characteristics of the sensor are analogous to those of the human eye. This is true of any good Bayer sensor (an advantage of being able to use both inorganic and organic filters). But it's not true of the human eye. That same Gilblom, Yoo, and Ventura paper, paragraph 2.1.2.1 points out that such a matrix transform requires a very strong matrix (not good for noise or dynamic range).
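
A minimal sketch of that matrix step, in Python/NumPy (the
coefficients below are made-up placeholders, not any real camera
calibration), just to show where the nine MACs per pixel come from:

import numpy as np

# Hypothetical sensor-space -> colour-space matrix. A "strong" matrix
# has large off-diagonal terms, which also amplifies noise.
M = np.array([[ 1.6, -0.4, -0.2],
              [-0.3,  1.5, -0.2],
              [-0.1, -0.5,  1.6]])

def sensor_to_rgb(sensor_pixels):
    """Apply a 3x3 colour matrix: nine multiply-accumulates per pixel."""
    # sensor_pixels: (..., 3) array of raw channel values
    return sensor_pixels @ M.T

raw = np.random.rand(4, 4, 3)        # toy 4x4 image, 3 channels per pixel
rgb = sensor_to_rgb(raw)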

Several of us working independently on software to process images (including Kok Chen and myself) observed that linear combinations of sensor values (the aforementioned 3x3 matrix) are not sufficient to produce accurate color because of the very unusual spectral response curves of the X3 sensor. Generation of an interpolation surface (as used in an ICC profile) produces less objectionable color. It is obvious looking at pictures that have been processed by SPP that it uses such an interpolation surface approach, not a matrix approach. This is computationally intensive, at least 192 math operations/pixel.
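
To give a sense of what an interpolation-surface approach costs per
pixel, here is a simplified trilinear lookup into a 3-D table,
sketched in Python/NumPy. This is my own toy illustration of how an
ICC-style CLUT works, not SPP's actual code.

import numpy as np

def apply_3d_lut(pixel, lut):
    """Trilinear interpolation into a 3-D colour lookup table.

    pixel: (r, g, b) values in [0, 1].  lut: (N, N, N, 3) table
    mapping sensor values to output colours.  Each output is a blend
    of the 8 table entries surrounding the input point.
    """
    n = lut.shape[0] - 1
    pos = np.clip(np.asarray(pixel, dtype=float), 0.0, 1.0) * n
    i = np.minimum(pos.astype(int), n - 1)     # lower-corner indices
    f = pos - i                                # fractional offsets
    out = np.zeros(3)
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = ((f[0] if dr else 1 - f[0]) *
                     (f[1] if dg else 1 - f[1]) *
                     (f[2] if db else 1 - f[2]))
                out += w * lut[i[0] + dr, i[1] + dg, i[2] + db]
    return out

# Identity LUT on a 9x9x9 grid, just to show the shape of the data.
grid = np.linspace(0.0, 1.0, 9)
lut = np.stack(np.meshgrid(grid, grid, grid, indexing='ij'), axis=-1)
print(apply_3d_lut((0.25, 0.5, 0.75), lut))    # roughly [0.25, 0.5, 0.75]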

The Foveon sensor requires other post processing. It does not handle overloads gracefully (the strange gray and white "blowouts" in bright colors on SD-9 prints). It requires algorithms that evaluate blowouts, make decisions, and apply rules to make the blowouts look more pleasant. Fortunately, the new version of SPP (shipping about the same time as the SD-10) has these new capabilities. Larger blowouts can be fixed by "inpainting" algorithms.
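
As a crude illustration of the "evaluate, decide, apply rules" idea,
here is a hypothetical Python/NumPy sketch of one simple rule: push
pixels with a clipped channel toward neutral so the blowout reads as
a bright highlight rather than a false colour. Again, this is my own
toy example, not what SPP actually does.

import numpy as np

def soften_blowouts(rgb, clip=0.98):
    """Blend pixels that have a clipped channel toward neutral.

    Where any channel exceeds 'clip', mix the pixel 50/50 with its
    own brightest channel value, so the blowout looks like a bright
    highlight instead of an oddly coloured patch.
    """
    out = rgb.astype(float)
    blown = (out > clip).any(axis=-1)              # pixels with clipping
    peak = out.max(axis=-1, keepdims=True)         # per-pixel brightness
    out[blown] = 0.5 * out[blown] + 0.5 * peak[blown]
    return out

img = np.random.rand(4, 4, 3)
img[0, 0] = [1.0, 0.3, 0.2]          # a clipped, strongly coloured pixel
print(soften_blowouts(img)[0, 0])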

Metamerism can be compensated for by "memory color" algorithms. Evaluate a picture. A large bluish area at the top must be sky. If its colors are outside the range of RGB values typical of a sky, transform it. Same with skin, grass, etc. Identify and correct.
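
A toy version of that "identify and correct" step might look like
the Python/NumPy sketch below. The reference sky colour and the
bluish-pixel test are arbitrary placeholders, not anyone's shipping
algorithm.

import numpy as np

# Hypothetical "memory colour" for clear sky (placeholder value).
SKY_RGB = np.array([0.45, 0.65, 0.95])

def correct_sky(rgb, strength=0.3):
    """Pull bluish pixels in the top third of the frame toward SKY_RGB."""
    out = rgb.astype(float)
    top = out[: out.shape[0] // 3]                 # view of the top third
    bluish = (top[..., 2] > top[..., 0]) & (top[..., 2] > top[..., 1])
    top[bluish] = (1 - strength) * top[bluish] + strength * SKY_RGB
    return out

img = np.random.rand(9, 9, 3)
corrected = correct_sky(img)
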
The original point I was trying to make is that I have noticed a
number of posts (although not a large percentage) from people who
appear to believe that a Bayer pixel is in fact made up of four
sensors (similar to the relationship between phosphor dots and
pixels on a monitor).
What a strange thing to believe...
This is what triggered my misunderstanding of
your post.
We've all made assumptions that have sent us off in weird directions at times.
--
Ciao!

Joe

http://www.swissarmyfork.com
 
From my observation, Foveon successfully pulled the detail out of
random textures (e.g. foliage or grass at a far distance), where
Bayer has to guess and covers the weakness with antialiasing.
My guess is that here the X3 sensor just displayed artifacts that look like detail, since there's no AA filter to blur detail the sensor can't accurately capture.

If you look at a full-size version of Phil's res chart, you can see the same thing.

http://www.dpreview.com/reviews/sigmasd9/page23.asp

Take a close look at the SD9 images and you can see aliasing artifacts fairly clearly, but it still looks like there's detail past where the artifacts start.

--
Jeremy Kindy
 
From my observation, Foveon successfully pulled the detail out of
random textures (e.g. foliage or grass at a far distance), where
Bayer has to guess and covers the weakness with antialiasing.
My guess is that here the X3 sensor just displayed artifacts that
look like detail, since there's no AA filter to blur detail the
sensor can't accurately capture.
If I were to weigh your guess against the observation of a guy who actually compared prints of the sample images, ... never mind, I'll do it myself.

j
 
But this doesn't include what it takes to get a good image out of
the sensor. The Foveon sensor typically requires red sharpening (as
mentioned by Peter Ventura of Foveon,
http://www.alt-vision.com/r/documents/5074-35.pdf ), a well
documented characteristic of sensors used in scientific
applications. That's heavy math.
Hey, Joe, thanks for that reference. Interesting reading. But I can't find what you're referring to about "red sharpening". Do they use different words that I could search for?

How heavy is the math? Just a single-plane local sharpening filter? Or something more elaborate?
But not as bad as the math needed to get the colors right. There
are a number of ways to get from "sensor space" to "color space"
mathematically. The most common way is a simple 3x3 matrix
multiplication. That means just nine MACs (multiplications and
accumulations) per pixel. It works very well when the spectral
characteristics of the sensor are analogous to those of the human
eye. This is true of any good Bayer sensor (an advantage of being
able to use both inorganic and organic filters). But it's not true
of the human eye. That same Gilblom, Yoo, and Ventura paper,
paragraph 2.1.2.1 points out that such a matrix transform requires
a very strong matrix (not good for noise or dynamic range).
What is it you're saying is not true of the human eye? That its spectral characteristics are not analogous to those of itself? Please clarify.

So you're saying they do just use a matrix? Does the stronger matrix require heavier math than a weaker matrix?
Several of us working independently on software to process images
(including Kok Chen and myself) observed that linear combinations
of sensor values (the aforementioned 3x3 matrix) are not
sufficient to produce accurate color because of the very unusual
spectral response curves of the X3 sensor. Generation of an
interpolation surface (as used in an ICC profile) produces less
objectionable color. It is obvious looking at pictures that have
been processed by SPP that it uses such an interpolation surface
approach, not a matrix approach. This is computationally intensive,
at least 192 math operations/pixel.
Cool. Until yours is ready, I'll stick with PhotoPro.

j
 
Good. Let's take this to your concurrent thread at
http://forums.dpreview.com/forums/read.asp?forum=1027&message=6503328

j
Well, OK, some interpretation: that page is, as its subtitle says,
"A discussion of the ambiguous meanings of pixel and megapixel,"
not a justification for a uniquely correct answer. If
I can ever get a clear definition of "pixel" from you, I'd like to
compare and contrast it to these, and see what shakes out.
Hey j, I think I've finally got the clear definition of "pixel".
And it doesn't even involve buying a $220 book (or getting my copy
back from the guy who borrowed it).

It's from RLG, the "Research Libraries Group", and they seem to be
a pretty authoritative outfit, founded by Columbia, Harvard, and Yale
Universities and The New York Public Library and claiming a
membership of "over 160 universities, national libraries, archives,
historical societies, and other institutions".

http://www.rlg.org/rlg.html

This is part of their series of "Guides to Quality in Visual
Resource Imaging". Those are frighteningly well researched, quote
ISO standards, technical papers, you name it.

http://www.rlg.org/visguides/visguide3.html

I've quoted the entire section 2 "Basic Terminology". It pretty
clearly defines a "pixel" as a spatial entity, and color as an
attribute of a pixel.

First, three sentences, out of context ;) ;)

"The pixel dimensions of the image are its width and height in
pixels. The density of the pixels, i.e., the number of pixels per
unit length on the document, is the spatial sampling frequency, and
it may differ for each axis".

"The value of each pixel represents the brightness or color of the
original object..."

Then the entire "Basic Terminology" section, with those three
sentences in context.

--- begin quote ---

2.0 Some Basic Terminology

Digital images are composed of discrete picture elements, or
pixels, that are usually arranged in a rectangular matrix or array.
Each pixel represents a sample of the intensity of light reflected
or transmitted by a small region of the original object. The
location of each pixel is described by a rectangular coordinate
system in which the origin is normally chosen as the upper left
corner of the array and the pixels are numbered left-to-right and
top-to-bottom, with the upper left pixel numbered (0,0).

It is convenient to think of each pixel as being rectangular and as
representing an average value of the original object's reflected or
transmitted light intensity within that rectangle. In actuality,
the sensors in most digital image capture devices do not "see"
small rectangular regions of an object, but rather convert light
from overlapping nonrectangular regions to create an output image.

A document or another object is converted into a digital image
through a periodic sampling process. The pixel dimensions of the
image are its width and height in pixels. The density of the
pixels, i.e., the number of pixels per unit length on the document,
is the spatial sampling frequency, and it may differ for each axis.

The value of each pixel represents the brightness or color of the
original object, and the number of values that a pixel may assume
is the number of quantization levels. If the illumination is
uniform, the values of the pixels in a gray-scale image correspond
to reflectance or transmittance values of the original. The values
of the pixels in a color image correspond to the relative values of
reflectance or transmittance in differing regions of the spectrum,
normally in the red, green, and blue regions. A gray-scale image
may be thought of as occupying a single plane, while a color image
may be thought of as occupying three or more parallel planes.

A bitonal image is an image with a single bit devoted to each pixel
and, therefore, only two levels—black and white. A gray-scale image
may be converted to a bitonal image through a thresholding process
in which all gray levels at or below a threshold value are
converted to 0-black—and all levels above the threshold are
converted to 1-white. The threshold value may be chosen to be
uniform throughout the image, that is, "global thresholding" or it
may be regionally adapted based on local features, or "adaptive
thresholding." Although many high-contrast documents may be
converted to bitonal image form and remain useful for general
reading, most other objects of value to historians and researchers
should probably not be. Too much information is lost during
thresholding.

These concepts are illustrated in fig.1, which contains a
gray-scale image of the printed word "Gray" and a color image of
the printed word "Color." A portion of the gray-scale image has
been enlarged to display the individual pixels. A bitonal image has
been created from the enlarged section to illustrate the consequent
loss of information caused by thresholding. As may be observed, the
darker "r" remains recognizable, but the lighter "a" is rendered
poorly. The color image is also displayed as three gray-scale
images in its red, green, and blue planes. Note that the pixels
corresponding to the color red are lighter in the red plane image
and similarly for the green and blue plane images.

Figure 1. A gray-scale image with a portion enlarged, a bitonal
image of the same portion, and a color image with its three color
planes shown separately.

--- end quote ---
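
(Incidentally, the "global thresholding" they describe is about one
line of code. A minimal Python/NumPy sketch, using an arbitrary
threshold of 128 for an 8-bit gray-scale image:)

import numpy as np

def to_bitonal(gray, threshold=128):
    """Global thresholding: levels above the threshold become 1
    (white); levels at or below it become 0 (black)."""
    return (gray > threshold).astype(np.uint8)

gray = np.random.randint(0, 256, (8, 8))   # toy 8-bit gray-scale image
print(to_bitonal(gray))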

--
Ciao!

Joe

http://www.swissarmyfork.com
 
