Sense & Sensors in Digital Photography

Thought this might be of interest, especially the section "How many
pixels do you need?"

http://db.tidbits.com/getbits.acgi?tbart=07860

--
John P. Sabo
[email protected]
Interesting! But Polaroid shows a deceptive pixel resolution in its
X530 specifications (2460x1836).
You're right, that is incorrect! 2460x1836 is 4.5MP, the total
number of photosites on the X530's sensor, not how many "pixels" it
has, which of course is only 1.5MP.
Perhaps it was a simple error on Polaroid's part, or maybe Polaroid
is simply trying to hype the abilities of the X530 out of all
proportion to help it sell?
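For anyone who wants to check the arithmetic, here is a quick sketch (Python, assuming the X530 uses a Foveon X3 sensor with three stacked photosites per pixel location, as discussed above):

```python
# Back-of-the-envelope check of the X530 numbers discussed above.
# Assumption: a Foveon X3 sensor, i.e. three stacked photosites
# (one each for R, G, B) at every spatial pixel location.
width, height = 2460, 1836                         # figures quoted by Polaroid
photosites = width * height
print(f"photosites:        {photosites / 1e6:.1f} M")      # ~4.5 M
print(f"full-color pixels: {photosites / 3 / 1e6:.1f} M")  # ~1.5 M
```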
Or maybe they are doing what every Bayer manufacturer is doing: interpolating the recorded image to final dimensions equal to the number of individual sensors, not complete triples, then pretending there is enough data to support it.
 
Yep, pretty good article, and his review of the SD10 is good too.
Considering it was supposed to be "How to Choose a Digital Camera", I thought it was useless.

It looked more like a justification of his SD10 purchase than anything else.

It's understandable that SD10 owners would see it in a favorable light.

-=s=-
 
In any case, the eye is most sensitive to high frequencies in the
green region of the spectrum, so he is wrong, and his explanation of
why this should be a myth is not much more than hot air: no facts,
no proof that would really contradict what is known, just nothing.
The double green hoopla is simply wrong; it was fabricated to
rationalize to buyers the obvious inefficiency of a 2D sensor
design (2x2 scalable pixel elements) when using a 3 primary color
model, so I agree with him there too. The reason there is double
green is because you must double a primary color in a 2x2 mosaic if
using a 3 color model; there is no choice.
The double green is based upon scientifically established facts that you can't deny. The eye is more sensitive to degradation of spatial resolution in the green range than in other ranges. That's just a fact.

There is no requirement to use a 2x2 mosaic, so your argument is based upon a misunderstanding of the science and a false premise about the requirements of sensor design.
If more green was always better, sensors would be all green.
The claim has never been that, "more green is always better." You know that. The above statement is at best silly and irrelevant and at worst a dishonest rhetorical device.
As has already been pointed out, it really doesn't matter what the
eye is most sensitive to, because a digital image is an emitter not
a receptor.
It matters because the images are viewed by eyes.
Further, the emission has identical amounts of RGB,
because the media (prints and monitors) require complete RGB at
every pixel location.
Irrelevant.
Neither medium can display anything but
complete RGB triples; where there is not enough R and B to form a
complete triple, they are digitally interpolated. There is no
objective way to know if the digital placeholders/guesses were
right or wrong, but in any case, there is the exact same amount of
red, green, and blue in the final emission.
The amount of red, green and blue energy will depend upon the image, but that's irrelevant to the point that degradation in green resolution is much more noticeable to us than degradation in blue or red.
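For what it's worth, the interpolation being described above can be sketched in a few lines; this is a minimal bilinear demosaic of an RGGB mosaic (my own illustration, assuming NumPy and SciPy, not any particular camera's converter):

```python
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(raw: np.ndarray) -> np.ndarray:
    """Fill in the two unmeasured channels at every photosite of an
    RGGB Bayer mosaic by averaging the measured neighbors."""
    h, w = raw.shape
    out = np.zeros((h, w, 3))
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.50, 1.0, 0.50],
                       [0.25, 0.5, 0.25]])
    for ch, mask in enumerate((r_mask, g_mask, b_mask)):
        known = np.where(mask, raw, 0.0)
        num = convolve2d(known, kernel, mode="same")
        den = convolve2d(mask.astype(float), kernel, mode="same")
        # Keep measured values; everything else is a weighted guess.
        out[..., ch] = np.where(mask, raw, num / np.maximum(den, 1e-9))
    return out
```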
It's really just common sense. If you cram 50M sensors on to a
sensor that is a millionth of an inch wide, I dare you to try to
hand hold it without blurring/overlapping two of them. If you
spread 50M sensors over a sensor the size of the solar system, it
can jiggle a little without overlapping most of them.
The fact remains that within the range of sensor sizes typically used in cameras, the rule of thumb of using 1/(35mm equivalent focal length) as your slowest hand held shutter speed holds quite well. If you read what he said, you might think that it's significantly harder to get blur free handheld shots with small sensor cameras, but this is just false.

The reason is quite simple: If I put two cameras with an equivalent field of view on tripods right next to each other and I move both cameras equal amounts, the image will change equal amounts. If they didn't, you would have a real contradiction: You could spin one camera around 360 degrees, then spin the other around 360 degrees, and they would both see different images. However, this would be impossible since by spinning both around 360 degrees, they must both be seeing the same thing as when you started.
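A small numerical version of that argument, as a sketch only (pinhole model, small angles, equal fields of view and equal pixel counts assumed):

```python
import math

def blur_in_pixels(sensor_width_mm, pixels_across, fov_deg, shake_deg):
    # Focal length that gives this field of view on this sensor width.
    focal_mm = (sensor_width_mm / 2) / math.tan(math.radians(fov_deg) / 2)
    shift_mm = focal_mm * math.radians(shake_deg)   # image shift on the sensor
    pixel_pitch_mm = sensor_width_mm / pixels_across
    return shift_mm / pixel_pitch_mm

# A 36 mm wide sensor vs. a 7.2 mm one, same field of view, same pixel count:
for width_mm in (36.0, 7.2):
    print(width_mm, round(blur_in_pixels(width_mm, 3000, 60, 0.05), 2))
# Both lines print the same pixel shift: the sensor width cancels out.
```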

--
Ron Parr
FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
In any case, the eye is most sensitive to high frequencies in the
green region of the spectrum, so he is wrong, and his explanation of
why this should be a myth is not much more than hot air: no facts,
no proof that would really contradict what is known, just nothing.
The double green hoopla is simply wrong; it was fabricated to
rationalize to buyers the obvious inefficiency of a 2D sensor
design (2x2 scalable pixel elements) when using a 3 primary color
model, so I agree with him there too. The reason there is double
green is because you must double a primary color in a 2x2 mosaic if
using a 3 color model; there is no choice.
The double green is based upon scientifically established facts that
you can't deny.
The science is that you can't fit 3 types of pegs into four holes without doubling one. Once you realize one had to be doubled, you can explore the wisdom of choosing green. And again, if more green was desirable, the sensor would be all green.
The eye is more sensitive to degradation of
spatial resolution in the green range than in other ranges. That's
just a fact.

There is no requirement to use a 2x2 mosaic, so your argument is
based upon a misunderstanding of the science and a false premise
about the requirements of sensor design.
If more green was always better, sensors would be all green.
The claim has never been that, "more green is always better." You
know that. The above statement is at best silly and irrelevant
and at worst a dishonest rhetorical device.
So why not 55%? Why not 63%? Why not 78%? Why not 94%? Why not 100%?

A: 3 pegs, 4 holes.
As has already been pointed out, it really doesn't matter what the
eye is most sensitive to, because a digital image is an emitter not
a receptor.
It matters because the images are viewed by eyes.
Not necessarily, you can scan them, you can take pictures of them with film, you can do all sorts of things that require another accurate emission to a receptor that isn't an eyeball. Are you claiming that if I take a picture of a digital photo with film it will look all wrong?
Further, the emission has identical amounts of RGB,
because the media (prints and monitors) require complete RGB at
every pixel location.
Irrelevant.
Only if you want to ignore the obvious reality that all the pixels are complete RGB triples in the end due to interpolation.
Neither medium can display anything but
complete RGB triples; where there is not enough R and B to form a
complete triple, they are digitally interpolated. There is no
objective way to know if the digital placeholders/guesses were
right or wrong, but in any case, there is the exact same amount of
red, green, and blue in the final emission.
The amount of red, green and blue energy will depend upon the
image, but that's irrelevant to the point that degradation in green
resolution is much more noticeable to us than degradation in blue
or red.
It's really just common sense. If you cram 50M sensors on to a
sensor that is a millionth of an inch wide, I dare you to try to
hand hold it without blurring/overlapping two of them. If you
spread 50M sensors over a sensor the size of the solar system, it
can jiggle a little without overlapping most of them.
The fact remains that within the range of sensor sizes typically
used in cameras, the rule of thumb of using 1/(35mm equivalent
focal length) as your slowest hand held shutter speed holds quite
well.
That's because snipping the border off your picture doesn't make it more blurry, but you lose sensors/film area when you crop them away. If you compress the same number of sensors into some smaller image area, you more accurately sense small errors and it becomes much more difficult to avoid sensor overlap. Another way to look at it is that the smaller area of the final image has the same amount of blur as the full frame view, but in percentage terms the movement is proportionately bigger.
If you read what he said, you might think that it's
significantly harder to get blur free handheld shots with small
sensor cameras, but this is just false.

The reason is quite simple: If I put two cameras with an
equivalent field of view on tripods right next to each other and I
move both cameras equal amounts, the image will change equal
amounts. If they didn't, you would have a real contradiction: You
could spin one camera around 360 degrees, then spin the other
around 360 degrees, and they would both see different images.
However, this would be impossible since by spinning both around 360
degrees, they must both be seeing the same thing as when you
started.
They are, but the camera with the 1 millionth inch sensor dimension, and the same number of photosites, will only image a tiny little completely blurred area. Unless you hold it millions of times more still.
 
He means full-color pixels/photosites. Foveon has 3.4m, with 3
stacked photosites each. This Bayer also has 3.4m, with 4
photosites each: 2 green, 1 red, 1 blue in a square. As discussed
ad nauseam on this forum, the Kodak interpolates this back into
13.7m pixels right away.
That is oversimplified and wrong; the 4 photosites do not form 1 pixel in the 14n. You can't simply add them up to an RGB triple.
The SD10 does not, but SPP can with double
size processing. I won't bother getting into which has more
resolution at that point.
Counting pixels does not say much about resolution, and therefore it was not my intention to discuss resolution; I was just pointing to a very wacky concept of pixel counting in this article.
Then we have his BS on the green sampling; just have a look at the
original Bayer patent or try it yourself in Photoshop and you will
see that he is just wrong. Or have a look here:
http://www.stanford.edu/~esetton/experiments.htm

In any case, the eye is most sensitive to high frequencies in the
green region of the spectrum, so he is wrong, and his explanation of
why this should be a myth is not much more than hot air: no facts,
no proof that would really contradict what is known, just nothing.
What does that have to do with eyes?
Everything. The destruction of data that was there is the same no matter which of the three color channels you subsample, but if you do it with the green one you see it, because you destroy the high frequency detail in the channel in which the eye notices it.

It does not matter if this is about image compression or vision, because image compression will always have to work on what the eye can see and what it cannot (that's the stuff that can be left out and therefore exploited for compression).
I think you meant this: http://www.stanford.edu/~esetton/hvs.htm
And that shows it is the luminance, not the color, that the eye
perceives most easily. Green is a bit easier, but not much,
according to this page.
This page does not explicitly state that green is of high importance for luminance, but the luminance as we see it is mostly made up of green. Also notice how much less is "resolved" in blue and red; this leaves only green as being important. The follow-up page (the one I linked) is pretty specific on this:

"The image that is the less distorted is the third one where the B component has been subsampled. This was expected, because we have explained that the Blue-Yellow channel is the one the eye is the less sensitive to at high frequencies. The image that is the most distorted is the second one where the G component has been subsampled."

On the last point: no matter if it is a misunderstanding or not, it is wrong, and for somebody who writes an article to debunk some myths it is unacceptable that he starts so many myths.
--
http://www.pbase.com/dgross (work in progress)
http://www.pbase.com/sigmasd9/dominic_gross_sd10

 
The double green is based upon scientifically established facts that
you can't deny.
The science is that you can't fit 3 types of pegs into four holes
without doubling one. Once you realize one had to be doubled, you
can explore the wisdom of choosing green.
The game is not one of trying to fit 3 pegs into 4 holes, so this comment is irrelevant.
And again, if more
green was desirable, the sensor would be all green.
As you know, the argument is not that more green is always more desirable. Attacking a silly and irrelevant position doesn't strengthen yours.
The claim has never been that, "more green is always better." You
know that. The above statement is at best silly and irrelevant
and at worst a dishonest rhetorical device.
So why not 55%? Why not 63%? Why not 78%? Why not 94%? Why
not 100%?

A: 3 pegs, 4 holes.
25/50/25 is a pretty decent approximation to the relative importance of these wavelength ranges to our perception of luminance, but it's only an approximation. We could probably afford to trade some blue for red, but it might not be worth the bother.

So, yes, a regular pattern of some kind is favored over a completely irregular one in some cases, but as you've been told before:
  • There is nothing special about 2x2. It might have been easier to deal with in the old days, but it's irrelevant today.
  • One of the alternatives against which Bayer compared in his patent was, in fact, a striped sensor with equal amounts of alternating RGB and one of the reasons why it was considered inferior was, 'the sampling patterns which result tend to provide a disproportionate quantity of information regarding basic color vectors to which the eye has less resolving power, e.g., "blue" information relative to "green" information.'
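For concreteness, a tiny sketch of the standard Bayer tiling and the channel proportions it yields (my own illustration, not taken from the patent):

```python
import numpy as np

tile = np.array([["R", "G"],        # the 2x2 RGGB tile...
                 ["G", "B"]])
cfa = np.tile(tile, (4, 4))         # ...repeated over an 8x8 patch of sensor
for color in "RGB":
    print(color, f"{(cfa == color).mean():.0%}")   # R 25%, G 50%, B 25%
```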
As has already been pointed out, it really doesn't matter what the
eye is most sensitive to, because a digital image is an emitter not
a receptor.
It matters because the images are viewed by eyes.
Not necessarily, you can scan them, you can take pictures of them
with film, you can do all sorts of things that require another
accurate emission to a receptor that isn't an eyeball. Are you
claiming that if I take a picture of a digital photo with film it
will look all wrong?
Not to a human.
Further, the emission has identical amounts of RGB,
because the media (prints and monitors) require complete RGB at
every pixel location.
Irrelevant.
Only if you want to ignore the obvious reality that all the pixels
are complete RGB triples in the end due to interpolation.
I can't comment on this because it doesn't make sense.
The fact remains that within the range of sensor sizes typically
used in cameras, the rule of thumb of using 1/(35mm equivalent
focal length) as your slowest hand held shutter speed holds quite
well.
That's because snipping the border off your picture doesn't make it
more blurry, but you lose sensors/film area when you crop them
away. If you compress the same number of sensors into some smaller
image area, you more accurately sense small errors and it becomes
much more difficult to avoid sensor overlap. Another way to look
at it is that the smaller area of the final image has the same
amount of blur as the full frame view, but in percentage terms the
movement is proportionately bigger.
No - this is precisely what I'm arguing against: Two cameras with different size sensors and the same field of view are about equally sensitive to camera shake. This follows both logically and experimentally. If you don't believe me, just try it. At best, you'll get a slight advantage with a heavier camera, because it has more inertia.
If you read what he said, you might think that it's
significantly harder to get blur free handheld shots with small
sensor cameras, but this is just false.

The reason is quite simple: If I put two cameras with an
equivalent field of view on tripods right next to each other and I
move both cameras equal amounts, the image will change equal
amounts. If they didn't, you would have a real contradiction: You
could spin one camera around 360 degrees, then spin the other
around 360 degrees, and they would both see different images.
However, this would be impossible since by spinning both around 360
degrees, they must both be seeing the same thing as when you
started.
They are, but the camera with the 1 millionth inch sensor
dimension, and the same number of photosites, will only image a
tiny little completely blurred area. Unless you hold it millions
of times more still.
We have assumed that both cameras have the same field of view, and I have proved that in this case the same amount of motion causes the same amount of shift in the image for both cameras, so your comment either contradicts the assumptions of the experiment or is just wrong.

--
Ron Parr
FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
First of all, let's go back to the part I quoted...

"There I have compared cameras with a Foveon (SD10) and a Bayer
(14n) sensor containing the same number of pixels - pixels, not
cells. Both have 3.4 million pixels (although the Bayer has 13.8
million cells)."

Either he counts pixels as spatially distinct locations on the sensor
surface, in which case he would be right about the SD10 having 3.4
million pixels but wrong about the 14n, which in this case has 13.8
million. Or he could count photosensors, which would put the SD10 at
10.2 million and the 14n at 13.8 million. But I see no way to put it
like the author did; this is just deceiving rubbish.
Or, he could be defining pixels the way Foveon defines pixels:
Nowhere in the document you linked is a comment made about how to count pixels in a CFA camera. Define a pixel any way you want, but Foveon made no statement there that supports your point.

In fact, the cited guideline shows that a 14MP camera is indeed a 14MP camera:

"( 1 – a ) “Number of Effective Pixels”
The number of pixels on the image sensor which receive input
light through the optical lens, and which are effectively
reflected in the final output data of the still image."

The only thing this document really gives advice on is how to count pixels in an X3.
If more green was always better, sensors would be all green.
You could not see color in that case, so you have to find a reasonable trade-off. Here it was to give more pixels to the color that contributes most to high frequency / fine detail in human vision.
As has already been pointed out, it really doesn't matter what the
eye is most sensitive to, because a digital image is an emitter not
a receptor.
You are confused; that can also be a reason why it actually matters. It is because a PC screen "emits" an image to be seen by humans that JPEG compression works so well.
It's really just common sense. If you cram 50M sensors on to a
sensor that is a millionth of an inch wide, I dare you to try to
hand hold it without blurring/overlapping two of them. If you
spread 50M sensors over a sensor the size of the solar system, it
can jiggle a little without overlapping most of them.
Still wrong. It just does not matter what you project on. Maybe I should do the math for you since you refuse to do it... it is pretty basic trigonometry.
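The trigonometry in question fits in a couple of lines; a sketch under the same same-field-of-view assumption used earlier in the thread:

```python
import math

# Image shift as a fraction of the frame width for a given angular shake.
# With a pinhole model, shift = f * theta and f = (W / 2) / tan(fov / 2),
# so the sensor width W cancels:  fraction = theta / (2 * tan(fov / 2)).
def shake_fraction(fov_deg: float, shake_deg: float) -> float:
    return math.radians(shake_deg) / (2 * math.tan(math.radians(fov_deg) / 2))

print(f"{shake_fraction(60, 0.05):.3%}")   # the same for any sensor size
```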
Or how about this:
"Electronic sensors pick up random fluctuations in light that we
cannot see. These show up on enlargements like grain in film"

nonsense again.
He's just oversimplifying an explanation of noise. He is
essentially correct: it's due to random fluctuations in lots of
things. Semantically, you can nitpick any explanation of noise
endlessly.
It is certainly not random fluctuations in LIGHT. So he is wrong, no matter how you try to spin it.

--
http://www.pbase.com/dgross (work in progress)
http://www.pbase.com/sigmasd9/dominic_gross_sd10

 
The double green is based upon scientifically established facts that
you can't deny.
The science is that you can't fit 3 types of pegs into four holes
without doubling one. Once you realize one had to be doubled, you
can explore the wisdom of choosing green.
The game is not one of trying to fit 3 pegs into 4 holes, so this
comment is irrelevant.
It is in the case of the Bayer mosaic. There is no choice but to double a primary. That is why there is a doubled primary. Except for Sony, who are going to great lengths to fix the double green problem by splitting the fourth hole into two half pegs, blue and green.
And again, if more
green was desirable, the sensor would be all green.
As you know, the argument is not that more green is always more
desirable. Attacking a silly and irrelevant position doesn't
strengthen yours.
So more is better and more is worse. Makes no sense.
The claim has never been that, "more green is always better." You
know that. The above statement is at best silly and irrelevant
and at worst a dishonest rhetorical device.
So why not 55%? Why not 63%? Why not 78%? Why not 94%? Why
not 100%?

A: 3 pegs, 4 holes.
25/50/25 is a pretty decent approximation to the relative
importance of these wavelength ranges to our perception of
luminance, but it's only an approximation. We could probably
afford to trade some blue for red, but it might not be worth the
bother.
Funny that Sony disagrees with you; they split the fourth hole into two half pegs, blue and green, giving them (the largest manufacturer of Bayer sensors in the world) a 37/37/25% split.

Obviously they agree, we need less green.
So, yes, a regular pattern of some kind is favored over a completely
irregular one in some cases, but as you've been told before:
  • There is nothing special about 2x2. It might have been easier to
deal with in the old days, but it's irrelevant today.
Well, Sony's RGBE attempt to fix the double green problem would indicate that maybe it is very hard/expensive to break away from a 2x2 scalable unit.
As has already been pointed out, it really doesn't matter what the
eye is most sensitive to, because a digital image is an emitter not
a receptor.
It matters because the images are viewed by eyes.
Not necessarily, you can scan them, you can take pictures of them
with film, you can do all sorts of things that require another
accurate emission to a receptor that isn't an eyeball. Are you
claiming that if I take a picture of a digital photo with film it
will look all wrong?
Not to a human.
So if I evaluate and compare the emission the final print generates with reality using a computer it will actually be way too green, right? Secretly, all good cameras skew their images toward too much green, but we just can't see it, right?

Or could it be that the Bayer mosaic digitally inserted blue and red where none was sensed? Wouldn't it be better to know those values since you have to insert them anyway? So how is not knowing better than knowing?
Only if you want to ignore the obvious reality that all the pixels
are all complete RGB triples in the end due to interpolation.
I can't comment on this becaue it doesn't make sense.
Ok, then don't. But don't pretend there is more of a green component in RGB output media than any other color; all components are there, and where the camera had a deficiency it was interpolated. There is no way to know if the guesses were right or wrong, but all the components are equally weighted in the resulting image.
That's because snipping the border off you picture doesn't make it
more blurry, but you lose sensors/film area when you crop them
away. If you compress the same number of sensors into some smaller
image area, you more accurately sense small errors and it becomes
much more difficult to avoid senosr overlap. Another way to look
at it is that the smaller area of the final image has the same
amount of blur as the full frame view, but in percentage terms the
movement is proportionately bigger.
No - this is precisely what I'm arguing against: Two cameras with
different size sensors and the same field of view are about equally
sensitive to camera shake. This follows both logically and
experimentally. If you don't believe me, just try it. At best,
you'll get a slight advantage with a heavier camera, because it has
more inertia.
I've tried many times; the SDs are very challenging to hand hold, and this is exactly why. "Heavy" has nothing to do with sensor size.
They are, but the camera with the 1 millionth inch sensor
dimension, and the same number of photosites, will only image a
tiny little completely blurred area. Unless you hold it millions
of times more still.
We have assumed that both cameras have the same field of view, and
I have proved that in this case the same amount of motion causes the
same amount of shift in the image for both cameras, so your comment
either contradicts the assumptions of the experiment or is just
wrong.
If you think a cropping DSLR has the same FOV, you need to rethink.

So am I correct to say that you have now reached my position, that a DSLR cropper with the same number (or more) of sensors is harder to hand hold than a full frame (or any larger sensor, to a proportional degree) DSLR?

A simple yes or no would be nice. The answer really should be pretty obvious.
 
Would Rochester still get the deal?

Would the ecclesiasts still lay down the canon?

Would there still be a nick on the heath?

The pace would be slower; that's for sure. But everyone would still think it too fast.
Thought this might be of interest, especially the section "How many
pixels do you need?"

http://db.tidbits.com/getbits.acgi?tbart=07860

--
John P. Sabo
[email protected]
--
Laurence

There is a tide in the affairs of men,
Which, taken at the flood, leads on to fortune;
Omitted, all the voyage of their life
Is bound in shallows and in miseries.

http://www.pbase.com/lmatson/root
http://www.pbase.com/sigmasd9/root
http://www.pbase.com/cameras/sigma/sd10
http://www.pbase.com/cameras/sigma/sd9
http://www.beachbriss.com (eternal test site)
 
The game is not one of trying to fit 3 pegs into 4 holes, so this
comment is irrelevant.
It is in the case of the Bayer mosaic. There is no choice but to
double a primary. That is why there is a doubled primary. Except
for Sony, who are going to great lengths to fix the double green
problem by splitting the fourth hole into two half pegs, blue and
green.
Fitting 3 pegs into 4 holes isn't the problem; it's one form of the solution. The problem is how to best capture useful luminance information using a pattern of color filters.
As you know, the argument is not that more green is always more
desirable. Attacking a silly and irrelevant position doesn't
strengthen yours.
So more is better and more is worse. Makes no sense.
By any chance, have you heard of Goldilocks?
25/50/25 is a pretty decent approximation to the relative
importance of these wavelength ranges to our perception of
luminance, but it's only an approximation. We could probably
afford to trade some blue for red, but it might not be worth the
bother.
Funny that Sony disagrees with you; they split the fourth hole into
two half pegs, blue and green, giving them (the largest
manufacturer of Bayer sensors in the world) a 37/37/25% split.

Obviously they agree, we need less green.
I've explained this to you before in other threads, so I'll do the condensed version here: The E filter is passing a different range of wavelengths and the reason is for greater color accuracy, not greater resolution of detail.
Well, Sony's RGBE attempt to fix the double green problem would
indicate that maybe it is very hard/expensive to break away from a
2x2 scalable unit.
Actually, it's a good indication of how little there is to be gained by doing this. There's no noticeable improvement in detail from the images from the RGBE sensor over those from cameras with the same sensor and a traditional RGB filter. The main reason for the E filter was color accuracy, and there might even be some examples where it helps with this, but it doesn't appear to have caught on. Sony's latest sensors (1/1.8" 7MP, and Nikon D2X) are RGB.
So if I evaluate and compare the emission the final print generates
with reality using a computer it will actually be way too green,
right? Secretly, all good cameras skew their images toward too
much green, but we just can't see it, right?
No. It will have higher resolution in the green wavelengths.
Or could it be that the Bayer mosaic digitally inserted blue and
red where none was sensed? Wouldn't it be better to know those
values since you have to insert them anyway? So how is not knowing
better than knowing?
Nobody is saying a Bayer pattern is better than sensing the entire spectrum at each pixel. The point is that if you are only going to sample the spectrum at one range of frequencies per pixel, a system that emphasizes the green range somewhat more heavily than others is a smarter way to allocate resources.

You know that I like the Foveon idea. What you need to understand is that liking the Foveon idea does not imply thinking that the Bayer pattern was a bad solution to the problem of using just one filter per pixel.
Ok, then don't. But don't pretend there is more of a green
component in RGB output media than any other color; all components
are there, and where the camera had a deficiency it was interpolated.
There is no way to know if the guesses were right or wrong, but all
the components are equally weighted in the resulting image.
Light emitting output media will display R, G and B for each pixel and we will be most sensitive to high frequency detail in the G range.
No - this is precisely what I'm arguing against: Two cameras with
different size sensors and the same field of view are about equally
sensitive to camera shake. This follows both logically and
experimentally. If you don't believe me, just try it. At best,
you'll get a slight advantage with a heavier camera, because it has
more inertia.
I've tried many times; the SDs are very challenging to hand hold, and
this is exactly why. "Heavy" has nothing to do with sensor size.
"Heavy," has to do with camera size. If, for the same field of view, you have more blur from the smaller sensor camera, it is entirely because the camera is lighter and an impulse from your hand tremors moves the camera more.

For me it makes no difference. Within reasonable weight ranges, my hand seems to move the same amount regardless of what I'm holding, so the acceptable handheld shutter speed for reasonably weighted cameras is determined by field of view alone. Many people have had similar experiences, but you will occasionally find somebody who has more difficulty with very light cameras.
If you think a cropping DSLR has the same FOV, you need to rethink.
I can't parse that sentence.

As I've indicated clearly, the discussion assumes that both cameras have the same field of view.
So am I correct to say that you have now reached my position, that
a DSLR cropper with the same number (or more) of sensors is harder
to hand hold than a full frame (or any larger sensor, to a
proportional degree) DSLR?
Again, I can't fully parse what you're saying. I'm saying that if both cameras capture the same image (in terms of field of view) when held still, they will have the same sensitivity to blur from camera shake.

The point is to compare two cameras taking similar pictures.

--
Ron Parr
FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
It is in the case of the Bayer mosaic. There is no choice but to
double a primary. That is why there is a doubled primary. Except
for Sony, who are going to great lengths to fix the double green
problem by splitting the fourth hole into two half pegs, blue and
green.
Fitting 3 pegs into 4 holes isn't the problem; it's one form of the
solution. The problem is how to best capture useful luminance
information using a pattern of color filters.
Luminance is only part of the problem. Capturing only luminance is clearly less advantageous than capturing both luminance and useful color information.
As you know, the argument is not that more green is always more
desirable. Attacking a silly and irrelevant position doesn't
strengthen yours.
So more is better and more is worse. Makes no sense.
By any chance, have you heard of Goldilocks?
This soup is too cold, this soup is even colder. Which does she like?
I've explained this to you before in other threads, so I'll do the
condensed version here: The E filter is passing a different range
of wavelengths and the reason is for greater color accuracy, not
greater resolution of detail.
Exactly, Sony says less green is better.
Actually, it's a good indication of how little there is to be
gained by doing this. There's no noticeable improvement in detail
from the images from the RGBE sensor over those from cameras with
the same sensor and a traditional RGB filter. The main reason for
the E filter was color accuracy, and there might even be some
examples where it helps with this, but it doesn't appear to have
caught on. Sony's latest sensors (1/1.8" 7MP, and Nikon D2X) are
RGB.
If it doesn't appear to be much different, that also proves double green is a waste.
So if I evaluate and compare the emission the final print generates
with reality using a computer it will actually be way too green,
right? Secretly, all good cameras skew their images toward too
much green, but we just can't see it, right?
No. It will have higher resolution in the green wavelengths.
And lower everywhere else. It's a zero sum game.
Nobody is saying a Bayer pattern is better than sensing the entire
spectrum at each pixel. The point is that if you are only going to
sample the spectrum at one range of frequencies per pixel, a
system that emphasizes the green range somewhat more heavily than
others is a smarter way to allocate resources.
Correct at last, Ron. Bayer is making the best of a bad situation, not optimizing the sensor. We agree at last.
I've tried many times; the SDs are very challenging to hand hold, and
this is exactly why. "Heavy" has nothing to do with sensor size.
"Heavy," has to do with camera size. If, for the same field of
view, you have more blur from the smaller sensor camera, it is
entirely because the camera is lighter and an impulse from your
hand tremors moves the camera more.
That has nothing to do with what we are talking about. You said that all else equal, there is no difference between cameras with different sensor sizes. I think even you have realized that is wrong: a more severe cropper with the same MPs or more is proportionately more difficult to hand hold, because the blur/shake distance is a proportionately larger fraction of the sensor dimensions.
If you think a cropping DSLR has the same FOV, you need to rethink.
I can't parse that sentence.
You didn't realize DSLRs with smaller sensors crop more?
As I've indicated clearly, the discussion assumes that both
cameras have the same field of view.
How can a cropping DSLR and a non-cropping DSLR have the same FOV? I think you need to go back and re-read what we are talking about. The question is: is a DSLR with a smaller overall sensor size more susceptible to camera shake, all else being equal (camera, lens, MPs, all being the "else")?

The answer is yes. Sorry.
So am I correct to say that you have now reached my position, that
a DSLR cropper with the same number (or more) of sensors is harder
to hand hold than a full frame (or any larger sensor, to a
proportional degree) DSLR?
Again, I can't fully parse what you're saying.
It's a trend when you realize you are wrong.
 
Luminance is only part of the problem. Capturing only luminance is
clearly less advantageous than capturing both luminance and useful
color information.
Of course, that's why we're using a color filter array in the first place and since our sensitivity to color variations is provably not as high as our sensitivity to luminance variations, we have some slack to play with the sampling rate of the colors if it helps with luminance.
Exactly, Sony says less green is better.
No. Sony is claiming that sampling the color spectrum at 4 points instead of 3 gives greater color accuracy. This would apply to Foveon X3 too, BTW.
If it doesn't appear to be much different, that also proves double
green is a waste.
No - it shows that E is still capturing useful luminance information. That double green is useful has already been proved by the examples on the web page provided earlier in this thread.
No. It will have higher resolution in the green wavelengths.
And lower everywhere else. It's a zero sum game.
Except that our perception of detail does not favor all wavelengths equally. That's the point.
Correct at last, Ron. Bayer is making the best of a bad situation,
not optimizing the sensor. We agree at last.
We have never been in disagreement that sampling the power spectrum at 3 points per pixel is nice. You will never find a statement from me suggesting otherwise.

What we have consistently disagreed about is the usefulness of favoring green in color filter arrays.
That has nothing to do with what we are talking about. You said
that all else equal, there is no difference between cameras with
different sensor sizes.
My description of the situation has been precise and correct throughout. I can't say the same for yours, or even your above version, which is subject to multiple interpretations.
I think even you have realized that is
wrong: a more severe cropper with the same MPs or more is
proportionately more difficult to hand hold, because the blur/shake
distance is a proportionately larger fraction of the sensor
dimensions.
Again, it's impossible to assess your ambiguous statements. All else cannot be equal. Either the focal lengths are equal or the fields of view are equal. It cannot be both if the sensor size is different. All I can do is to restate the precise version to which I have stuck consistently. For the same field of view, you have the same sensitivity to camera shake.
If you think a cropping DSLR has the same FOV, you need to rethink.
I can't parse that sentence.
You didn't realize DSLRs with smaller sensors crop more?
I said that I couldn't parse your sentence. That's all I said.
As I've indicated clearly, the discussion assumes that both
cameras have the same field of view.
How can a cropping DSLR and a non-cropping DSLR have the same FOV?
Nobody in this thread ever said they did.
I think you need to go back and re-read what we are talking about.
The question is: is a DSLR with a smaller overall sensor size more
susceptible to camera shake, all else being equal (camera, lens,
MPs, all being the "else")?

The answer is yes. Sorry.
If you mean to say that the focal length stays constant regardless of the change in sensor size, then I should point out that this particular case was explicitly ruled out earlier in the discussion. (Note also that the original article was comparing 35mm film to small, e.g. 1/1.8" sensor, cameras, for which there can be little doubt that the intention is to compare situations where the field of view is equivalent, since the range of focal lengths in typical use has almost no overlap.)

Since you are now raising this new situation of using the same focal length with different size sensors, I will point out the following:
  • In this case, the cropped sensor sees a crop of what the large sensor would see.
  • For a given amount of camera shake, the amount of blur captured by both sensors is identical because what one camera captures is simply a crop of what the other camera would capture.
  • If you enlarge the crop so that it is the same output size as an image from the uncropped sensor, you will now have two different images that are the same physical size, and the blur from the image with more enlargement will be a larger portion of the total image size.
To put it plainly, if you take a head and shoulders portrait and enlarge it to 20x30 inches, then take a crop of the nose and enlarge it to 20x30 inches, you are more likely to notice camera shake in the big nose.

Fortunately, we have the option of keeping our field of view constant by using a wider angle lens, letting us create albums that are pictures of people and not just noses. This is why we do comparisons that keep the FOV constant.
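A short numeric version of the two bullets above (a sketch; the numbers are purely illustrative):

```python
import math

# Same lens (same focal length) and same angular shake on two sensor widths.
focal_mm = 50.0
shake_rad = math.radians(0.05)
blur_mm = focal_mm * shake_rad          # identical blur in mm for both sensors

for sensor_width_mm in (36.0, 24.0):    # "full frame" vs. a 1.5x crop
    fraction = blur_mm / sensor_width_mm
    print(f"{sensor_width_mm} mm sensor: blur = {fraction:.2%} of frame width")
# The cropped frame must be enlarged more, so the same blur in mm becomes a
# larger share of the final print -- the "big nose" comparison above.
```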
So am I correct to say that you have now reached my position, that
a DSLR cropper with the same number (or more) of sensors is harder
to hand hold than a full frame (or any larger sensor, to a
proportional degree) DSLR?
Again, I can't fully parse what you're saying.
It's a trend when you realize you are wrong.
LOL

I will say that over time I have developed some familiarity with your rhetorical tactics. You often state something in an intentionally ambiguous manner with the apparent goal of obtaining temporary agreement on the ambiguous statement. You then try to turn the ambiguity to your favor and suggest that the other party has argued inconsistently.

I've seen this little game before, and we saw it again above, where you tried to get me to agree to an obviously ambiguous statement in which it was, in fact, clearly impossible for "all else" to be equal.

I'm not playing. I won't guess what you mean or agree to imprecise statements just so you can try to turn them around.

--
Ron Parr
FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
I will say that over time I have developed some familiarity with
your rhetorical tactics. You often state something in an
intentionally ambiguous manner with the apparent goal of obtaining
temporary agreement on the ambiguous statement. You then try to
turn the ambiguity to your favor and suggest that the other party
has argued inconsistently.

I've seen this little game before, and we saw it again above, where
you tried to get me to agree to an obviously ambiguous statement in
which it was, in fact, clearly impossible for "all else" to be
equal.

I'm not playing. I won't guess what you mean or agree to imprecise
statements just so you can try to turn them around.
Is that how he keeps sucking me into his nonsense? Oh, I feel like such a fool now not to have seen him for what he was. Thanks, Ron, for helping me with the first step of getting over him.

You'll see in the news forum he's been dragging me through the same mess he's got going with you here, with his half-truths and imparsibles.

j
 
He means full-color pixels/photosites. Foveon has 3.4m, with 3
stacked photosites each. This Bayer also has 3.4m, with 4
photosites each: 2 green, 1 red, 1 blue in a square. As discussed
ad nauseam on this forum, the Kodak interpolates this back into
13.7m pixels right away.
That is oversimplified and wrong; the 4 photosites do not form 1
pixel in the 14n. You can't simply add them up to an RGB triple.
I understand both you and the writer of this article. I like his concept. This industry is still relatively new and it reminds me of the wattage debate of the late 60's.

For the youngsters: At the time, to get good music you had to have a component set, that is, turntable, tuner, amplifier, etc. The more watts your amp put out, the prouder the owner. Then people began to recognize the signal-to-noise factor, and you then needed both power and less distortion.

Originally there were IHF watts (Institute of High Fidelity); I see this represented today as Bayer. There were no standards, so advertisers called it (the watts) whatever they wanted. Someone got bright and created a formula for watts, and it was called RMS (root mean square). This reminds me of Foveon. It took a few years, but RMS is so accepted now that no one even asks which standard you are using.

The author of this article has used some common sense in grouping the photosites and redefining a pixel. This is something that should be done eventually. If it takes 4 pixels to determine correct color, then why not use this standard? If you want to argue that it only takes 3 pixels to determine color, then fine. I don't care which.

I do think that this author should be respected for his opinion and I am a little disappointed in the way he is being disrespected by some.
 
Luminance is only part of the problem. Capturing only luminance is
clearly less advantageous than capturing both luminance and useful
color information.
Of course, that's why we're using a color filter array in the first
place and since our sensitivity to color variations is provably
not as high as our sensitivity to luminance variations, we have
some slack to play with the sampling rate of the colors if it helps
with luminance.
There is no opportunity cost with regard to luminance, so it doesn't matter. If you imbalance the colors of the sensors, you lose color resolution due to no complementary data. If you split the same number of sensors 33/33/33%, you get both usable luminance and usable color data from every sensor. Any imbalance is bad.
Exactly, Sony says less green is better.
No. Sony is claiming that sampling the color spectrum at 4 points
instead of 3 gives greater color accuracy. This would apply to
Foveon X3 too, BTW.
Right, the biggest manufacturer of Bayer sensors says double-green is a problem worth solving. That really is all one needs to know.
Correct at last, Ron. Bayer is making the best of a bad situation,
not optimizing the sensor. We agree at last.
We have never been in disagreement that sampling the power spectrum
at 3 points per pixel is nice. You will never find a statement
from me suggesting otherwise.
There is no way to distinguish a vertical array from a horizontal array when there is a 33/33/33% RGB split. Neither requires interpolative upscaling of the weak channels to align channel topologies during demosaicing; simply, every adjacent triplet can be combined 1:1:1. They are one and the same design, so we agree.
 
There is no opportunity cost with regard to luminance, so it
doesn't matter. If you imbalance the colors of the sensors, you
lose color resolution due to no complementary data. If you split
the same number of sensors 33/33/33%, you get both usable luminance
and usable color data from every sensor. Any imbalance is bad.
That's just wrong. You get less luminance information and luminance is more important to our perception of detail.
No. Sony is claiming that sampling the color spectrum at 4 points
instead of 3 gives greater color accuracy. This would apply to
Foveon X3 too, BTW.
Right, the biggest manufacturer of Bayer sensors says double-green
is a problem worth solving. That really is all one needs to know.
Wrong again. The biggest manufacturer of Bayer pattern sensors says the 3 color filters may not be enough for good color fidelity. If correct, this would apply to Foveon X3 too.
There is no way to distinguish a vertical array from a horizontal
array when there is a 33/33/33% RGB split. Neither requires
interpolative upscaling of the weak channels to align channel
topologies during demosaicing; simply, every adjacent triplet can
be combined 1:1:1. They are one and the same design, so we agree.
I can't understand anything you said in the above paragraph.

--
Ron Parr
FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
There is no opportunity cost with regard to luminance, so it
doesn't matter. If you imbalance the colors of the sensors, you
lose color resolution due to no complementary data. If you split
the same number of sensors 33/33/33%, you get both usable luminance
and usable color data from every sensor. Any imbalance is bad.
That's just wrong. You get less luminance information and
luminance is more important to our perception of detail.
You might get a little less overall luminance from red and green and blue vs green alone, but the lack of usable color info from only having green makes any imbalance much less desirable, which is why sensors are not all green.
No. Sony is claiming that sampling the color spectrum at 4 points
instead of 3 gives greater color accuracy. This would apply to
Foveon X3 too, BTW.
Right, the biggest manufacturer of Bayer sensors says double-green
is a problem worth solving. That really is all one needs to know.
Wrong again. The biggest manufacturer of Bayer pattern sensors
says the 3 color filters may not be enough for good color
fidelity. If correct, this would apply to Foveon X3 too.
Regardless of what you think about the Foveon concept, it shows the 25/50/25% RGB Bayer pattern is less desirable than a 37/37/25% RGB split, according to Sony. It follows that a 33/33/33% would be ideal, since you'd make use of every sensor for both color and luminance.
There is no way to distinguish a vertical array from a horizontal
array when there is a 33/33/33% RGB split. Neither requires
interpolative upscaling of the weak channels to align channel
topologies during demosaicing; simply, every adjacent triplet can
be combined 1:1:1. They are one and the same design, so we agree.
I can't understand anything you said in the above paragraph.
It's a trend when you are proven wrong. It's simple: you said you agree that sensing full color at every pixel location is better than the Bayer 25/50/25% RGB split, which cannot do that. That means you also agree that a 33/33/33% mosaic, if such a thing were possible, would also be ideal, since it is indistinguishable from a vertical array that builds one full color pixel per every 3 sensors--since you simply combine every RGB triple into 1 full color pixel with no need for interpolation, either way.
 
You might get a little less overall luminance from red and green
and blue vs green alone, but the lack of usable color info from
only having green makes any imbalance much less desirable, which is
why sensors are not all green.
You get a lot less luminance information with equal amounts of R, G, and B. Read up:

http://www.poynton.com/notes/colour_and_gamma/ColorFAQ.html#RTFToC9

We've already covered why sensors aren't all green. Why do you keep raising this ridiculous point?
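To make the Poynton point concrete, the standard luma weighting gives green far more influence on perceived lightness than red or blue; a small sketch using the Rec. 709 coefficients discussed in that FAQ:

```python
# Rec. 709 luma coefficients (see the Poynton color FAQ linked above).
REC709 = {"R": 0.2126, "G": 0.7152, "B": 0.0722}

for name, weight in REC709.items():
    print(name, f"{weight:.0%}")        # R 21%, G 72%, B 7%

r, g, b = 200, 100, 50
luma = REC709["R"] * r + REC709["G"] * g + REC709["B"] * b
print(round(luma, 1))                    # green dominates the result
```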
Regardless of what you think about the Foveon concept, it shows the
25/50/25% RGB Bayer pattern is less desirable than a 37/37/25% RGB
split, according to Sony. It follows that a 33/33/33% would be
ideal, since you'd make use of every sensor for both color and
luminance.
This is, again, incorrect. How many times do I have to explain this?

Sony is using the E filter in an attempt to improve color fidelity, NOT resolution. If anything, the problem that they are fixing is not one of too much green, but too much red:

http://www.cybershotf828.com/detail/rgbe.html

The issue is most likely the dip in the response of our red cones between 450 and 500 nm. Most color filters (including those in Foveon X3) don't have a dip in response in this range and detecting this range could, in principle, improve color fidelity by DECREASING the red when there is a high recorded value in the "emerald" range.

An even mix of the standard red, green and blue filters would decrease luminance response and would NOT fix the negative red problem.
There is no way to distinguish a vertical array from a horizontal
array when there is a 33/33/33% RGB split. Neither requires
interpolative upscaling of the weak channels to align channel
topologies during demosaicing; simply, every adjacent triplet can
be combined 1:1:1. They are one and the same design, so we agree.
I can't understand anything you said in the above paragraph.
It's a trend when you are proven wrong.
You can't have a trend without any data points. However, it is amusing that the times when you are typing total gibberish seem to coincide with the times when you believe you are proving me wrong.
It's simple: you said you
agree that sensing full color at every pixel location is better
than the Bayer 25/50/25% RGB split, which cannot do that. That
means you also agree that a 33/33/33% mosaic,
No - it does not mean that I agree to this. The second sentence does not follow logically from the first.
if such a thing were
possible, would also be ideal since it is indistinguishable from a
vertical array that builds one full color pixel per every 3
sensors--since you simply combine every RGB triple into 1 full
color pixel with no need for interpolation, either way.
33/33/33 RGB is obviously possible for a color filter array and also obviously not a good choice for a color filter array because it allocates fewer resources to the wavelengths that most influence our perception of detail.

It has been clearly demonstrated many times that we barely notice decreased blue resolution, while decreased green resolution causes very visible degradation. What further proof could you possibly want?

--
Ron Parr
FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
