Sense & Sensors in Digital Photography

Thought this might be of interest, especially the section "How many
pixels do you need?"

http://db.tidbits.com/getbits.acgi?tbart=07860

--
John P. Sabo
[email protected]
--Richard wrote: Thanks for the link. I found it very informative, and it revealed a lot of what I suspected but couldn't prove. With film, the lens was the most important part of the imaging process; with digital it's a whole new ball game: hardware, software, etc., and more confusing. I hope more experts in this field read this article and put some of the disputes to rest.

Soon! Oh Soon the light. Ours to shape for all time, ours the right. The sun will lead us, our reason to be here.
 
While it seems he got the big picture right, this guy is pretty confused. Just one quote to illustrate it:

"There I have compared cameras with a Foveon (SD10) and a Bayer (14n) sensor containing the same number of pixels - pixels, not cells. Both have 3.4 million pixels (although the Bayer has 13.8 million cells)."

--
http://www.pbase.com/dgross (work in progress)
http://www.pbase.com/sigmasd9/dominic_gross_sd10

 
Maybe, but in your chosen example, the author is correct. Certainly, if all else is equal, the smaller sensor will be more difficult to hand-hold with respect to camera shake. Sensor density will be higher, meaning pixel pitch will be smaller, so it takes less shake for camera movement to smear detail across neighbouring photosites.

What other problems did you have with it?
 
Thought this might be of interest, especially the section "How many
pixels do you need?"

http://db.tidbits.com/getbits.acgi?tbart=07860

--
John P. Sabo
[email protected]
Interesting! But Polaroid shows a deceptive pixel resolution in its
X530 specifications (2460x1836).
You're right, that is incorrect! 2460x1836 is 4.5mp, the total number of photosites on the X530's sensor, not how many "pixels" it has, which of course is only 1.5mp.
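For what it's worth, the arithmetic behind those two figures checks out. A quick sketch, just redoing the numbers from the post above:

```python
# Sanity-checking the X530 numbers quoted above (taking the post's
# reading at face value: 2460x1836 counts photosites, not pixels).
width, height = 2460, 1836

photosites = width * height   # total photosites on the sensor
pixels = photosites // 3      # Foveon stacks 3 photosites per pixel location

print(f"photosites: {photosites:,}")  # 4,516,560 -> the advertised "4.5mp"
print(f"pixels:     {pixels:,}")      # 1,505,520 -> roughly 1.5mp
```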

Perhaps it was a simple error on Polaroid's part, or maybe Polaroid is simply trying to hype the abilities of the X530 out of all proportion to help it sell?

Let's face it, with it being sold at £300, it needs all the hype it can get!

Regards

DSG
--
http://sigmasd10.fotopic.net/
 
First of all, let's go back to the part I quoted...

"There I have compared cameras with a Foveon (SD10) and a Bayer (14n) sensor containing the same number of pixels - pixels, not cells. Both have 3.4 million pixels (although the Bayer has 13.8 million cells)."

Either he counts pixels as spatially distinct locations on the sensor surface, in which case he would be right about the SD10 having 3.4 million pixels but wrong about the 14n, which then has 13.8 million. Or he could count photosensors, which would put the SD10 at 10.2 million and the 14n at 13.8 million. But I see no way to put it like the author did; this is just deceiving rubbish.

Then we have his BS on the green sampling; just have a look at the original Bayer patent or try it yourself in Photoshop and you will see that he is just wrong. Or have a look here: http://www.stanford.edu/~esetton/experiments.htm

In any case, the eye is most sensitive to high frequencies in the green region of the spectrum, so he is wrong, and his explanation of why this should be a myth is not much more than hot air: no facts, no proof that would really contradict what is known, just nothing.

Then we have the camera shake stuff, which is again just wrong: if the number of pixels and the field of view of the picture are equal, the size of the sensor just does not matter for camera shake. If you still disagree, you could just do the math on it.
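Here is one way that math could go, as a rough sketch (the 40-degree field of view, the shake angle, and the sensor widths are just illustrative assumptions): for a fixed pixel count and field of view, the focal length scales with the sensor, so a given angular shake always smears the image by the same number of pixels.

```python
import math

def blur_in_pixels(shake_deg, sensor_width_mm, pixels_wide, fov_deg):
    # Focal length needed to get this field of view on this sensor:
    focal_mm = (sensor_width_mm / 2) / math.tan(math.radians(fov_deg / 2))
    # Image shift on the sensor caused by the angular shake:
    shift_mm = focal_mm * math.tan(math.radians(shake_deg))
    # Convert the shift to pixels via the pixel pitch:
    pitch_mm = sensor_width_mm / pixels_wide
    return shift_mm / pitch_mm

# Same pixel count and field of view, very different sensor widths:
big = blur_in_pixels(0.01, 36.0, 3000, 40)   # full-frame-sized sensor
small = blur_in_pixels(0.01, 7.2, 3000, 40)  # small compact sensor

print(big, small)  # the same: sensor size cancels out
```

The sensor width cancels algebraically: the blur comes out as pixels_wide * tan(shake) / (2 * tan(fov/2)), with no sensor size in it.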

Or how about this:

"Electronic sensors pick up random fluctuations in light that we cannot see. These show up on enlargements like grain in film"

nonsense again.
--
http://www.pbase.com/dgross (work in progress)
http://www.pbase.com/sigmasd9/dominic_gross_sd10

 
I really can't agree with the author's statement that about 4 good megapixels is all my vision needs and can distinguish at any given distance.

Put a black hair on a white paper and check from what distance you can still see it. Then think how wide your field of vision is at this distance.

Look out of the window and check from what distance you can distinguish, say, an electrical wire; how many 'pairs of wires' would you need to fill your field of vision?

Or, I have in front of me a 1600x1200 LCD monitor. From 2 feet I can distinguish the dots in a dotted line. At this distance it would take about a 6x6 matrix of monitors to fill my field of vision, so it would take roughly a 72-megapixel screen to create an illusion of reality with all the detail I can see. Considering that I can see individual black lines from a much greater distance, I would even say that around 200 megapixels would be an adequate amount of information for a single picture.
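The arithmetic behind that figure is simple enough to redo in a couple of lines (using the monitor resolution and 6x6 grid from the post):

```python
# Redoing the estimate above: a 6x6 grid of 1600x1200 monitors.
screen_px = 1600 * 1200        # one monitor: ~1.92 million pixels
grid = 6 * 6                   # screens needed to fill the field of vision
total_mp = screen_px * grid / 1e6

print(f"{total_mp:.1f} megapixels")  # 69.1 -> the "about 72" in the post
```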

That is my understanding of 'how many pixels do I need'. The current digital cameras, be it Foveon or Bayer, are quite far from that. A good scan from a good 8x10 slide would come close.
 
Thought this might be of interest, especially the section "How many
pixels do you need?"

http://db.tidbits.com/getbits.acgi?tbart=07860

--
John P. Sabo
[email protected]
Interesting! But Polaroid shows a deceptive pixel resolution in its
X530 specifications (2460x1836).
You're right, that is incorrect! 2460x1836 is 4.5mp, the total
number of photosites on the X530's sensor, not how many "pixels" it
has, which of course is only 1.5mp.
Perhaps it was a simple error on Polaroid's part, or maybe Polaroid
is simply trying to hype the abilities of the X530 out of all
proportion to help it sell?

Let's face it, with it being sold at £300, it needs all the hype it
can get!

Regards

DSG
--
http://sigmasd10.fotopic.net/
-
Italo B.
 
I once did some math on it, and the result was that the lower limit would be an ideal (!) 5-6 megapixels in order to reach the limits of what can be distinguished by the eye (assuming that you are always far enough away to see the whole picture). Of course that would mean either a 6mp Foveon or a 12mp Bayer, which is actually getting pretty close to the resolution of a carefully scanned 35mm slide...
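One way such a back-of-envelope calculation could go; note that the assumptions here, roughly 1-arcminute acuity and a viewing distance equal to the picture diagonal, are mine, not necessarily the poster's:

```python
import math

acuity_arcmin = 1.0      # ~20/20 vision resolves about 1 arcminute of angle
w, h = 3.0, 2.0          # 3:2 aspect picture, arbitrary units
dist = math.hypot(w, h)  # view from a distance equal to the diagonal

def span_arcmin(side):
    # Full angle subtended by one side of the picture, in arcminutes
    return 2 * math.degrees(math.atan((side / 2) / dist)) * 60

px_w = span_arcmin(w) / acuity_arcmin   # ~2700 pixels wide
px_h = span_arcmin(h) / acuity_arcmin   # ~1860 pixels high
print(px_w * px_h / 1e6)                # ~5 megapixels
```

Under these assumptions the answer lands in the same 5-6 megapixel range.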

--
http://www.pbase.com/dgross (work in progress)
http://www.pbase.com/sigmasd9/dominic_gross_sd10

 
I once did some math on it and the result was that the lower limit
would be about ideal (!) 5-6 megapixel in order to go to the limits
of what can be distinguished by the eye (assuming that you are
always far enough away to see the whole picture). Of course
that would mean either a 6 mp Foveon or a 12mp Bayer, which is
actually getting pretty close to the resolution of a carefully
scanned 35mm slide...
--
http://www.pbase.com/dgross (work in progress)
http://www.pbase.com/sigmasd9/dominic_gross_sd10

 
I once did some math on it and the result was that the lower limit
would be about ideal (!) 5-6 megapixel in order to go to the limits
of what can be distinguished by the eye
Then the assumptions behind your math must be different from mine.
actually getting pretty close to the resolution of a carefully
scanned 35mm slide...
I make 16x24" prints quite often, and the resolution of carefully scanned 35mm slides is not enough. I mean, they look pretty good, but a carefully scanned 2x3" slide looks much better, and this is not some theoretical difference: any person with no photographic knowledge immediately sees it. I haven't tried prints of this size from large format (4x5 or 8x10), but I assume that people who take the trouble of shooting those formats know what they are doing.
 
I once did some math on it and the result was that the lower limit
would be about ideal (!) 5-6 megapixel in order to go to the limits
of what can be distinguished by the eye
Then the assumptions behind your math must be different from mine.
Yes indeed, I don't think we need a 6 x 6 matrix of screens (assuming they are around 20") to cover the human field of view at 2 feet (roughly 60cm) distance. I think maybe two would be enough to cover the part that you really "see".

--
http://www.pbase.com/dgross (work in progress)
http://www.pbase.com/sigmasd9/dominic_gross_sd10

 
First of all, let's go back to the part I quoted...

"There I have compared cameras with a Foveon (SD10) and a Bayer
(14n) sensor containing the same number of pixels - pixels, not
cells. Both have 3.4 million pixels (although the Bayer has 13.8
million cells)."

Either he counts pixels as spatially distinct locations on the
sensor surface, in which case he would be right about the SD10
having 3.4 million pixels but wrong about the 14n, which then has
13.8 million. Or he could count photosensors, which would put the
SD10 at 10.2 million and the 14n at 13.8 million. But I see no way
to put it like the author did; this is just deceiving rubbish.
He means full-color pixels/photosites. The Foveon has 3.4m, with 3 stacked photosites each. This Bayer also has 3.4m, with 4 photosites each: 2 green, 1 red, 1 blue in a square. As discussed ad nauseam on this forum, the Kodak interpolates this back into 13.7m pixels right away. The SD10 does not, but SPP can with double-size processing. I won't bother getting into which has more resolution at that point.
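Put as plain arithmetic, the ways of counting discussed in this thread come out like this (using the figures quoted in the posts, which is why the rounding doesn't quite line up between replies):

```python
# Three ways to count, using the numbers quoted in this thread.
sd10_locations = 3.4e6                  # spatial pixel locations on the Foveon
sd10_photosites = sd10_locations * 3    # 3 stacked photosites per location

kodak_photosites = 13.8e6               # one filtered photosite per location
kodak_rgb_tiles = kodak_photosites / 4  # one 2x2 GRGB tile per "full" pixel

print(sd10_photosites / 1e6)  # 10.2 million photosites on the SD10
print(kodak_rgb_tiles / 1e6)  # ~3.45 million full-color tiles on the 14n
```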
Then we have his BS on the green sampling; just have a look at the
original Bayer patent or try it yourself in Photoshop and you will
see that he is just wrong. Or have a look here:
http://www.stanford.edu/~esetton/experiments.htm

In any case, the eye is most sensitive to high frequencies in the
green region of the spectrum, so he is wrong, and his explanation
of why this should be a myth is not much more than hot air: no
facts, no proof that would really contradict what is known, just
nothing.
What does that have to do with eyes? It shows that the greater number of pixels (green, in a Bayer sensor), when sabotaged, affects the picture the most. But they are adjusting the output (the digital picture), not the input (the eye). That's why the page is about compression algorithms, not vision.

I think you meant this: http://www.stanford.edu/~esetton/hvs.htm

And that shows it is luminance, not color, that the eye perceives most easily. Green is a bit easier, but not much, according to this page.
Then we have the camera shake stuff, which is again just wrong: if
the number of pixels and the field of view of the picture are
equal, the size of the sensor just does not matter for camera
shake. If you still disagree, you could just do the math on it.

Or how about this:
"Electronic sensors pick up random fluctuations in light that we
cannot see. These show up on enlargements like grain in film"

nonsense again.
That does look like a misunderstanding of digital noise. Unless he's just talking about luminance variations from pixel to pixel.
 
First of all, let's go back to the part I quoted...

"There I have compared cameras with a Foveon (SD10) and a Bayer
(14n) sensor containing the same number of pixels - pixels, not
cells. Both have 3.4 million pixels (although the Bayer has 13.8
million cells)."

Either he counts pixels as spatially distinct locations on the
sensor surface, in which case he would be right about the SD10
having 3.4 million pixels but wrong about the 14n, which then has
13.8 million. Or he could count photosensors, which would put the
SD10 at 10.2 million and the 14n at 13.8 million. But I see no way
to put it like the author did; this is just deceiving rubbish.
Or, he could be defining pixels the way Foveon defines pixels:

"Accepted Definitions: Picture Element (pixel) - an RGB triple in a sampled image." http://www.x3f.info/technotes/x3pixel/pixelpage.html

In which case he is exactly right: the SD10 has 3.4M and the 14n has 3.3M.
Then we have his BS on the green sampling; just have a look at the
original Bayer patent or try it yourself in Photoshop and you will
see that he is just wrong. Or have a look here:
http://www.stanford.edu/~esetton/experiments.htm

In any case, the eye is most sensitive to high frequencies in the
green region of the spectrum, so he is wrong, and his explanation
of why this should be a myth is not much more than hot air: no
facts, no proof that would really contradict what is known, just
nothing.
The double-green hoopla is simply wrong; it was fabricated to rationalize to buyers the obvious inefficiency of a 2D sensor design (2x2 scalable pixel elements) when using a 3-primary color model, so I agree with him there too. The reason there is double green is that you must double one primary color in a 2x2 mosaic when using a 3-color model; there is no choice.
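That "no choice" point can be shown in a couple of lines: with three primaries and a repeating 2x2 tile, the pigeonhole principle forces one primary to appear twice. A minimal sketch:

```python
from collections import Counter

# The standard Bayer 2x2 tile: four cells, three primaries, so one
# primary (green, in Bayer's design) necessarily appears twice.
tile = [["G", "R"],
        ["B", "G"]]

counts = Counter(color for row in tile for color in row)
print(counts)  # Counter({'G': 2, 'R': 1, 'B': 1})
```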

If more green was always better, sensors would be all green.

As has already been pointed out, it really doesn't matter what the eye is most sensitive to, because a digital image is an emitter, not a receptor. Further, the emission has identical amounts of RGB, because the media (prints and monitors) require complete RGB at every pixel location. Neither medium can display anything but complete RGB triples; where there is not enough R and B to form a complete triple, they are digitally interpolated. There is no objective way to know if the digital placeholders/guesses were right or wrong, but in any case there is the exact same amount of red, green, and blue in the final emission.
Then we have the camera shake stuff, which is again just wrong: if
the number of pixels and the field of view of the picture are
equal, the size of the sensor just does not matter for camera
shake. If you still disagree, you could just do the math on it.
It's really just common sense. If you cram 50M photosites onto a sensor that is a millionth of an inch wide, I dare you to hand-hold it without blurring/overlapping two of them. If you spread 50M photosites over a sensor the size of the solar system, it can jiggle a little without overlapping most of them.
Or how about this:
"Electronic sensors pick up random fluctuations in light that we
cannot see. These show up on enlargements like grain in film"

nonsense again.
He's just oversimplifying an explanation of noise. He is essentially correct: it's due to random fluctuations in lots of things. Semantically, you can nitpick any explanation of noise endlessly.
 
