Argument about pixels seeks comments

Laurence Matson wrote:

The myth that pixel count is not related to resolution is wrong. The
pixel count in a picture is the upper limit for resolution.
Roland,

Being "related" to resolution and "equating" to resolution are different things. The pixel count in a picture only relates to resolution via the number of discrete samplings and the quality of these samplings. Duplicating existing datum points and moving them to another location doesn't affect resolution in any positive way as is evidenced by the results of testing a 3.4 megapixel count Foveon sample against a 6.3 megapixel count CFA sample for true optical measured b&w resolution. That the 3.4 megapixel count Foveon actually has "more" overall measured optical resolution across the spectrum demonstrates rather conclusively that having a greater pixel count in a picture does not equate to having greater resolution.

Only by convenience of similarity of technology may we compare "resolution" indirectly via display pixel count in sensors, and only when these pixels are handled in a very similar way. When Fuji designed their "Super CCD" there was a measurable increase in optical resolution in horizontal and vertical measurements (somewhat at the expense of diagonal) in their interpolated output over their "native" output, and Fuji disliked the term "interpolation," choosing instead "extrapolation," because strictly from a mathematical perspective it differs from classical interpolation. So here we have a case of measurable increase via a different way of collecting datum points and manipulating the collection. Foveon has the number of discrete collection points they claim. I'm not going to get into yet another argument about what constitutes a "pixel," and regardless of what we choose to call them, Foveon gets more "resolution" per output pixel than CFA-based technology. So using the output pixel matrix size to "equate" to optical resolution is obviously false.

I don't believe any of us who use Sigma cameras really believe that output pixel count isn't "related" to resolution, but it's only meaningful to use this as a guideline for resolution when comparing same type technology.

Best regards,

Lin
The pixel count for imagers doesn't really exist anymore after the
current fuzzification of the meaning of a pixel.
I certainly do not want to get in the middle of another exciting
marketing ploy discussion; Roland seems to have launched that
successfully. But if morons like this with great titles add their two
Groschen to the pot, all is lost.
You are already in :) And - it was Jason that started it, not me :)

Look at my posts here - I have consistently advised not to discuss
it. I have even said that it is uninteresting, as it is a matter of
personal opinion.

My answer to Guido had nothing to do with counting pixels. It was
just to tell Guido that you don't combine four values into one pixel
when doing Bayer CFA reconstruction. It's not made that way.
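Roland's point can be sketched in code: in a naive bilinear demosaic (a simplified illustration assuming an RGGB layout, not any camera's actual converter), every photosite yields exactly one output pixel, and only the two primaries not measured at that site are interpolated from neighbours:

```python
import numpy as np

def box3_sum(a):
    """Sum of each 3x3 neighbourhood (zero-padded at the edges)."""
    h, w = a.shape
    p = np.pad(a, 1)
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def demosaic_bilinear(raw):
    """Naive demosaic of an RGGB Bayer mosaic.

    Every photosite becomes one output pixel: the primary measured at
    that site is kept, and the two missing primaries are averaged from
    neighbouring sites. Four values are never binned into one pixel.
    """
    h, w = raw.shape
    masks = [np.zeros((h, w), bool) for _ in range(3)]
    masks[0][0::2, 0::2] = True   # red sites
    masks[1][0::2, 1::2] = True   # green sites (two per 2x2 block)
    masks[1][1::2, 0::2] = True
    masks[2][1::2, 1::2] = True   # blue sites
    out = np.empty((h, w, 3))
    for c, mask in enumerate(masks):
        known = np.where(mask, raw, 0.0)
        counts = box3_sum(mask.astype(float))
        out[..., c] = np.where(mask, raw, box3_sum(known) / counts)
    return out
```

A flat gray field comes back as a full-color image with exactly as many output pixels as there were photosites, which is the counting convention Roland describes.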

--
Roland
 
If you do a B&W conversion on an SD14 image it is 4.7 MP. Its resolution is that of a 4.7 MP B&W sensor. The SD9/10 is a 3.4 MP camera if you do a B&W conversion on the image. The resolution charts show that. They also show that an 8 to 9 MP CFA has the same resolution as a 4.7 MP SD14.

So how does adding color increase the resolution? It doesn't.

Sigma and Foveon are in somewhat of a pickle in explaining the difference between their sensor and the CFA. However, I think they bought themselves a lot of trouble with the claim that the SD14 is more than what it is. The reviewers are going to lay waste to that claim - and they should. Don't kill the messenger when they point out the Emperor has no clothes.

The marketing department must be running Sigma now.
Of course it's a 4.7 MP camera, just like the SD10 is 3.4 MP and the
Fuji S5 is a 6 MP camera - and the Fuji would be even more arguable
since it does have 12 million photosites.

O.

--
http://www.flickr.com/photos/ollivr/
http://www.flickrleech.net/user/ollivr
http://bighugelabs.com/flickr/dna.php?username=84419270@N00
--
Truman
http://www.pbase.com/tprevatt
 
Perhaps we should "feel" the resolution?
Don't worry about this. David Miller and myself are the only two who
are capable of finding the truth. (Okay, maybe Mike Charnery, too.
But he only took a picture of a flower. ;) We will use a novel
technique NEVER used before in photography. We will look at actual
prints at varying sizes! ;)
Looking is very subjective and prone to errors :P

To really appreciate the true nature of images - blind tests need to
be used!

--
Roland
--
Galleries and website: http://www.whisperingcat.co.uk/mainindex.htm
 
You would have to be blind not to be able to tell if one print is exhibiting more detail than another. Simple as that. Whether or not you know one is from the SD14 or the 14NX is irrelevant. But if someone can show me how to strip EXIF data off of files using a Mac, I'd be happy to do so.
Don't worry about this. David Miller and myself are the only two who
are capable of finding the truth. (Okay, maybe Mike Charnery, too.
But he only took a picture of a flower. ;) We will use a novel
technique NEVER used before in photography. We will look at actual
prints at varying sizes! ;)
Looking is very subjective and prone to errors :P

To really appreciate the true nature of images - blind tests need to
be used!

--
Roland
 
Call me old-fashioned, but perhaps resolution should be thrown out altogether. In my own mind I use "defined detail capture." In other words, if you photograph a billboard but can't read the words on it, you haven't captured "defined detail," meaning there isn't enough detail for you to define what the object is. If another imaging system allows you to read the words, then it is--in practical terms--higher resolution. It gets more complicated when you throw in landscapes, etc., and then print at varying sizes.
Don't worry about this. David Miller and myself are the only two who
are capable of finding the truth. (Okay, maybe Mike Charnery, too.
But he only took a picture of a flower. ;) We will use a novel
technique NEVER used before in photography. We will look at actual
prints at varying sizes! ;)
Looking is very subjective and prone to errors :P

To really appreciate the true nature of images - blind tests need to
be used!

--
Roland
--
Galleries and website: http://www.whisperingcat.co.uk/mainindex.htm
 
I think we all agree that using MP as a measure of resolution was
useful in the early days but isn't really now.

The problem I imagine for some people is it seems dishonest to use a
different definition of MP from everyone else (even if it is
legitimate).

To keep the moral highground, yet still be able to market
successfully, maybe Foveon should abandon the term MP altogether?
Whilst the article had a nasty tone and was a bit hazy about the
facts, it did perhaps successfully get across the sense of unease
people might have with Foveon's sudden uprating of their sensor pixel
counts.

It's a difficult one all round. I certainly sympathise with
Foveon/Sigma's dilemma...
snip

Hello, David,

It seems your monsoons (or someone's) are selectively drenching the US Midwest today - at least in spotty fashion. Hope you on the other hand are drying out.

While I have a rudimentary understanding of information theory - if that's an appropriate term - as it applies to digital imaging sensors, I must confess that I've not found a singular resource or body of resources which might provide an essential foundation for our conversations. Mike Chaney, Sandy F, and others have provided some very helpful references. And, to be realistic, if designing or reverse engineering digital imaging technology were easy, presumably any of us with a few billion in local currency could do it.

But to you or any who might be interested, beyond basic discussions of Nyquist and Shannon, are there some generally available and somewhat comprehensible sources (online would be nice) which are product neutral to expand the grounding of those who choose to actively debate or at least be knowledgeable spectators?

Kind regards,
--
Ed_S
http://www.pbase.com/ecsquires
 
Hi Truman,

Resolution is defined as the amount of discrete information which may be observed, counted, identified, etc., in a capture. It's not limited in any way to black and white information but includes "all" information.

When detail is missing, whether that detail be color or black and white, resolution is affected. If you have "X" number of red lines and blue lines which converge precisely like black and white lines, and one sensor can detect them while another can't, the sensor which detects them undeniably has greater "resolution" for these colors.

In any image captured there is black and white information as well as color information. It's not simply a matter of "coloring" the black and white information, which would add no relevant discrete detail (as opposed to color), only change the color. However changing the color "could" lower the amount of overall information detectable by the sensor if the sensor can not "detect" certain colors as well as other colors.

Color is an integral part of "resolution." That we use b&w charts to test resolution is a convention born from film days when the assumption was that the sensing media (film) was a constant and that differences in "resolution" were based primarily on optical differences (lenses) with contribution in a minor way from relative flatness of the media at the film plane. With the advent of electronic sensors, we have a whole new ball game with the media itself highly variable in its ability to detect not only black and white detail but also color detail.

Equating display pixel count to resolution was a convention originating from the convenience of a rather nice correlation between display pixel count in disparate sensors and measured optical black and white resolution. No one thought or cared much about color resolution because it was fairly well a "constant" among the various CFA sensors. It wasn't the "same" as black and white resolution and varied a good deal among RGB, but these variances were somewhat consistent. But then along comes a new technology, Foveon, where color resolution "is" consistent with little or no important variance between RGB. And, the Foveon sensor does indeed better detect overall color than the CFA equivalents in tested b&w resolution measurements. Therefore the Foveon "system" does produce higher color "resolution" than its CFA equivalent in measured b&w optical resolution.

Bottom line is that color indeed "does" count......

Lin
So how does adding color increase the resolution? It doesn't.

Sigma and Foveon are in somewhat of a pickle in explaining the
difference between their sensor and the CFA. However, I think they
bought themselves a lot of trouble with the claim that the SD14 is
more than what it is. The reviewers are going to lay waste to that
claim - and they should. Don't kill the messenger when they point out
the Emperor has no clothes.

The marketing department must be running Sigma now.
Of course it's a 4.7 MP camera, just like the SD10 is 3.4 MP and the
Fuji S5 is a 6 MP camera - and the Fuji would be even more arguable
since it does have 12 million photosites.

O.

--
http://www.flickr.com/photos/ollivr/
http://www.flickrleech.net/user/ollivr
http://bighugelabs.com/flickr/dna.php?username=84419270@N00
--
Truman
http://www.pbase.com/tprevatt
 
hi Ed ,
http://www.cambridgeincolour.com/tutorials/diffraction-photography.htm
here you can read :

'Recall that a digital sensor utilizing a bayer array only captures one primary color at each pixel location, and then interpolates these colors to produce the final full color image. As a result of the sensor's anti-aliasing filter (and the Rayleigh criterion above), the airy disk can have a diameter approaching about 2 pixels before diffraction begins to have a visual impact (assuming an otherwise perfect lens, when viewed at 100% onscreen).'
and

'Another complication is that bayer arrays allocate twice the fraction of pixels to green as red or blue light. This means that as the diffraction limit is approached, the first signs will be a loss of resolution in green and in pixel-level luminance.'
read also:
http://www.foveon.com/
http://www.cambridgeincolour.com/tutorials/sensors.htm
http://en.wikipedia.org/wiki/Foveon_X3_sensor
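The quoted rule of thumb can be put into numbers. A small sketch, assuming a mid-green wavelength of 550 nm and a roughly SD14-class pixel pitch of 7.8 µm (both values are my assumptions, not from the tutorial):

```python
def airy_disk_um(f_number, wavelength_nm=550):
    """Airy disk diameter to its first minimum: d = 2.44 * lambda * N."""
    return 2.44 * (wavelength_nm / 1000.0) * f_number

def visibly_diffraction_limited(f_number, pixel_pitch_um):
    """The tutorial's rule of thumb: diffraction starts to have a visual
    impact once the Airy disk diameter approaches ~2 pixel widths."""
    return airy_disk_um(f_number) >= 2.0 * pixel_pitch_um

# With a 7.8 um pitch, f/11 is still under the ~2-pixel threshold
# (Airy disk ~14.8 um vs. 15.6 um), while f/22 is clearly past it.
print(airy_disk_um(11))
print(visibly_diffraction_limited(11, 7.8), visibly_diffraction_limited(22, 7.8))
```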
Guido
 
Well IMO one should not confuse pixels, resolution and photosites. All different things.

Pixels are the smallest unit displayable on a screen. The MP count of a camera is the number of native output pixels. Hence, claiming that the SD10 has 10 megapixels is hogwash. It has 3.4MP. Period.

Resolution is a different thing, since resolution (in its original sense) will be lower if you blur a 14MP image (but the number of pixels remains the same). However, resolution and pixels are often used synonymously, which is even half true since the resolution is in the best case limited by the number of pixels. Maybe one should distinguish between display resolution and recorded resolution (the latter being lower on normal cameras, e.g. due to the AA filter).

Similarly, my scanner has a CLAIMED optical resolution of 4800 dpi but this is not so in reality. While it can give an output of 4800 dpi, there is not more detail in the scan than in a 2400 dpi scan. At best, it gives maybe 1800 dpi true resolution I would say, if even. It gives a helluva lot more megapixels though.
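The display-vs-recorded distinction can be sketched numerically: blurring leaves the pixel count untouched while destroying the finest detail the grid could hold (a toy example, not modelled on any particular scanner or camera):

```python
import numpy as np

# A 1-pixel-period black/white grating: the finest detail the grid can record.
grating = np.tile([0.0, 1.0], 50)        # 100 display "pixels"

def box_blur(signal, k=3):
    """Moving-average blur; the number of output pixels is unchanged."""
    padded = np.pad(signal, k // 2, mode="edge")
    return np.convolve(padded, np.ones(k) / k, mode="valid")

blurred = box_blur(grating)

# Same pixel count before and after, but the black-to-white swing of the
# finest pattern collapses from 1.0 to about a third: the "recorded
# resolution" dropped even though the "display resolution" did not.
print(len(grating), len(blurred))
print(grating.max() - grating.min(), blurred.max() - blurred.min())
```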

And a photosite would be the smallest single spatial location on a sensor.

At least that is how I understand it.
O.
So how does adding color increase the resolution? It doesn't.

Sigma and Foveon are in somewhat of a pickle in explaining the
difference between their sensor and the CFA. However, I think they
bought themselves a lot of trouble with the claim that the SD14 is
more than what it is. The reviewers are going to lay waste to that
claim - and they should. Don't kill the messenger when they point out
the Emperor has no clothes.

The marketing department must be running Sigma now.
Of course it's a 4.7 MP camera, just like the SD10 is 3.4 MP and the
Fuji S5 is a 6 MP camera - and the Fuji would be even more arguable
since it does have 12 million photosites.

O.

--
http://www.flickr.com/photos/ollivr/
http://www.flickrleech.net/user/ollivr
http://bighugelabs.com/flickr/dna.php?username=84419270@N00
--
Truman
http://www.pbase.com/tprevatt
--
http://www.flickr.com/photos/ollivr/
http://www.flickrleech.net/user/ollivr
http://bighugelabs.com/flickr/dna.php?username=84419270@N00
 
Ed, if readers google search **** lyon they will find his website with many relevant links to photography and science papers and online videos.

In reference to an email exchange I've had, I've obtained and linked online in my pbase to several years worth of Carver Mead audio files, his presentations at conferences on a variety of topics.

I recommend in particular listening first to 2005; he discusses not only computer and technology history but general photonics and particle and quantum and 'coherence' theory.

If you haven't attended Cal Tech with Richard Feynman and Carver Mead lecturing decades ago, this gives you an idea... or a science refresher course for some. See my photo gallery here for the links: http://www.pbase.com/sandyfleischman/telecosm_at_squaw_valley_sd10&page=1

I'm listening to CM 2005 at the moment, burned it to DVD for some repeat online learning too ;-)
Best regards, Sandy
[email protected]
http://www.pbase.com/sandyfleischman
http://www.flickr.com/photos/sandyfleischmann
 
hi Ed ,
http://www.cambridgeincolour.com/tutorials/diffraction-photography.htm
here you can read :
'Recall that a digital sensor utilizing a bayer array only captures
one primary color at each pixel location, and then interpolates these
colors to produce the final full color image. As a result of the
sensor's anti-aliasing filter (and the Rayleigh criterion above), the
airy disk can have a diameter approaching about 2 pixels before
diffraction begins to have a visual impact (assuming an otherwise
perfect lens, when viewed at 100% onscreen).'
and
'Another complication is that bayer arrays allocate twice the
fraction of pixels to green as red or blue light. This means that as
the diffraction limit is approached, the first signs will be a loss
of resolution in green and in pixel-level luminance.'
read also:
http://www.foveon.com/
http://www.cambridgeincolour.com/tutorials/sensors.htm
http://en.wikipedia.org/wiki/Foveon_X3_sensor
Guido
Guido,

Thank you - I shall peruse with interest later in the weekend. Sounds like good material! Must now collect myself and head off to a church fundraiser (fish fry) for the afternoon.

As maternal grandmother (sainted) used to say,
"All contributions gratefully accepted."

Kind regards,
--
Ed_S
http://www.pbase.com/ecsquires
 
Color is an integral part of "resolution."
Not only color, but contrast as well.
That we use b&w charts to test resolution is a convention born from film days
It's also understood to be an upper limit.
assumption was that the sensing media (film) was a constant
Film resolution was measured as well (e.g. Pan-X vs. Tri-X).
And, the Foveon sensor does indeed better detect overall
color than the CFA equivalents in tested b&w resolution measurements.
Well, that's dependent both on the color combinations used and what you define as the CFA "equivalent." And therein lies the rub: which color combinations do you measure, and then how do you weight the results? If B&W is the best case, then Red/Black or Red/Blue are the worst. Mike Chaney used Black vs. White, Red, Green, Blue, Cyan, Magenta, and Yellow and used a simple average. Pop Photo measured other color combinations and then gave a subjective answer based on what they think are common photographic scenes. Not surprisingly, they came to rather different conclusions. (And we've not even addressed how to account for aliasing issues.)

And finally what's equivalent. Do we compare based on number of sensor readings? Price class? Price vs. features vs. time?

There is no one answer. A single number cannot capture all aspects of performance.
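Erik's weighting point can be made concrete. With hypothetical per-target figures (every number below is invented purely for illustration), an equal-weight average and a green-heavy, luminance-style weighting summarize the same measurements quite differently:

```python
# Hypothetical resolution figures for one camera against different chart
# color combinations -- invented numbers, purely for illustration.
measurements = {"b&w": 1400, "red": 1100, "green": 1350,
                "blue": 1050, "cyan": 1250, "magenta": 1150, "yellow": 1300}

# Chaney-style summary: every color combination counts equally.
simple_average = sum(measurements.values()) / len(measurements)

# A luminance-style weighting (roughly Rec. 601 proportions) leans on
# green, so weakness in red/blue detail is penalized far less.
weights = {"red": 0.299, "green": 0.587, "blue": 0.114}
weighted = sum(measurements[c] * w for c, w in weights.items())

print(round(simple_average, 1), round(weighted, 1))
```

Neither number is wrong; they simply encode different opinions about which detail matters, which is exactly why reviewers reach different conclusions from similar raw measurements.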

--
Erik
 
Color is an integral part of "resolution."
Not only color, but contrast as well.
Somewhat, although early Zeiss lenses produced low contrast but "greater" resolution than their Japanese competition even though the Japanese lenses were much preferred for aesthetic quality because of higher contrast.
That we use b&w charts to test resolution is a convention born from film days
It's also understood to be an upper limit.
It is probably the upper limit; the point being that CFA sensors do best at b&w and poorer at color, so overall resolution is less than the b&w resolution reported by resolution chart evaluations. B&W measurements tell only "part" of the story of usable resolution. Real world prints are the final test for the photographer.
assumption was that the sensing media (film) was a constant
Film resolution was measured as well (e.g. Pan-X vs. Tri-X).
But the media was nonetheless a "constant". Sensors are not a constant.
And, the Foveon sensor does indeed better detect overall
color than the CFA equivalents in tested b&w resolution measurements.
Well, that's dependent both on the color combinations used and what
you define as the CFA "equivalent." And therein lies the rub: which
color combinations do you measure and then how do you weight the
results? If B&W is the best case, then Red/Black or Red/Blue are the
worst. Mike Chaney used Black vs. White, Red, Green, Blue, Cyan,
Magenta, and yellow and used a simple average. Pop Photo measured
other color combinations and then gave a subjective answer based on
what they think are common photographic scenes. Not surprisingly,
they came to rather different conclusions. (And we've not even
addressed how to account for aliasing issues.)
Actually, the statement stands. The Foveon sensor does indeed better detect "overall" color than the CFA equivalents. Equivalents in this case being parity in B&W resolution measurement.
And finally what's equivalent. Do we compare based on number of
sensor readings? Price class? Price vs. features vs. time?
We compare equivalence as described above.
There is no one answer. A single number cannot capture all aspects
of performance.
A single number can, however, be used as an indicator of performance. That's the basis of statistics. Of course it can't be used to accurately describe any single event, but it does give the reader some semblance of understanding of what to expect in general. Photographs are all different. Some have more green, some more blue and some more red. The point being that a sensor which reveals all the color resolution as well as it reveals black and white resolution gives the viewer a more realistic expectation of accuracy in the photo. Whether that is "desirable" is another question altogether. I find that my CFA cameras tend to overemphasize greens and produce images where vegetation appears greener than the "reality" I remember. I find that my Foveon based cameras tend to mute the greens more than I remember. The "truth" is somewhere in the middle for me. Of course that's why "God" invented Photoshop - LOL

Lin
 
The Foveon sensor does indeed better
detect "overall" color than the CFA equivalents. Equivalents in this
case being parity in B&W resolution measurement.
Methinks I can sense some CFA envy :)

Why not give all that up? Why not just look at what X3 can do and talk about that?

Why always compare to CFA when motivating measures for X3?

When it comes to resolution and counting pixels, CFA is the strange one and X3 is the simple one.

--
Roland
 
I believe you can lose the exif data if you save as tif in PS. You may have to go to the file setting though. I know I lost some of my exif data last year after working on shots in PS and I think that's why it happened (saved back to jpg from the tifs).
Don't worry about this. David Miller and myself are the only two who
are capable of finding the truth. (Okay, maybe Mike Charnery, too.
But he only took a picture of a flower. ;) We will use a novel
technique NEVER used before in photography. We will look at actual
prints at varying sizes! ;)
Looking is very subjective and prone to errors :P

To really appreciate the true nature of images - blind tests need to
be used!

--
Roland
 
Well, Fuji was describing their S3/S4/S5 dSLRs as 12 MP cameras. If what counts is the resolution of the JPEG that can be produced in-camera, the SD14 is legitimately a 14 MP camera.

So, the legitimate case can be made for calling the SD14 a 14 MP camera... but I also think that it is not a very good 14 MP camera in terms of resolution for that number of pixels (picture elements). One would get a better image by taking the raw 4.7 MP file and up-rezzing it using a third party application.

I also think that using resolution in terms of pixel count is very inadequate.
--
'Do you think a man can change his destiny?'
'I think a man does what he can until his destiny is revealed.'
 
I gave them a very short answer:
10 MP Bayer =
5 million pixels for green,
2.5 million pixels each for blue and red.
Foveon = 4.7 M for green, for red and for blue.
Not entirely true. Neither Bayer CFA nor Foveon X3 makes pictures
directly from the sensor. Therefore none of the methods results in
picture elements (i.e. pixels). Therefore you cannot count pixels on
the sensor. You can only count pixels in the picture output - i.e.
after conversion.

That's when it gets messy. Conversion can be done several ways. And
that's why opinions vary.

My opinion is that you shall count pixels in the native unscaled
outputs. That way of counting (unfortunately) favours Bayer CFA.

Foveon fans and manufacturers do not like that. So - they choose
another way of counting that (unfortunately) favours Foveon X3.

Both are useless for comparison. Still the debate goes on :)

--
Roland
Accepting your statement, there's still a difference: With the Foveon, I start with three pieces of data, and end up with three pieces. Not exactly the same, but close. With the Bayer, I start with one piece of information, and end up with three... Obviously the other two have been invented somewhere - estimated by data from surrounding sources. So while in detail the argument may not be exact, I think the broad sense of it holds up - the Foveon is getting three times as much information per output pixel.
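The sample arithmetic Guido and Walter are trading can be sketched directly (a sketch of the counting conventions only, not of any converter's behavior):

```python
def bayer_channel_samples(total_mp):
    """Per-primary photosite counts on a Bayer CFA: half the sites
    sample green, a quarter each sample red and blue."""
    return {"green": total_mp / 2, "red": total_mp / 4, "blue": total_mp / 4}

def foveon_channel_samples(locations_mp):
    """Foveon X3 measures all three primaries at every spatial location."""
    return {c: locations_mp for c in ("red", "green", "blue")}

def samples_per_output_pixel(channel_samples, output_mp):
    """Measured (not interpolated) values available per native output pixel."""
    return sum(channel_samples.values()) / output_mp

print(bayer_channel_samples(10))   # Guido's 5 / 2.5 / 2.5 split
# Bayer: one measured value per output pixel; X3: about three.
print(samples_per_output_pixel(bayer_channel_samples(10), 10))
print(samples_per_output_pixel(foveon_channel_samples(4.7), 4.7))
```

This is Walter's "three pieces of data versus one" in numbers: per native output pixel, the Bayer pipeline starts from one measurement and estimates two, while the X3 starts from three.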
Walter
 
Roland,

I have over 30 CFA digital cameras. I have two X3 digital cameras; that's hardly a case for CFA "envy" - LOL.

The post is in response to the very annoying and inflammatory statement by the Senior Technical Editor at EDN, who "should" know better.

It's a simple fact that with identical b&w resolution chart measurements, the X3 has higher color resolution. Nothing complicated, just simple measurement observations.

Best regards,

Lin
The Foveon sensor does indeed better
detect "overall" color than the CFA equivalents. Equivalents in this
case being parity in B&W resolution measurement.
Methinks I can sense some CFA envy :)

Why not give all that up? Why not just look at what X3 can do and
talk about that?

Why always compare to CFA when motivating measures for X3?

When it comes to resolution and counting pixels, CFA is the strange
one and X3 is the simple one.

--
Roland
 
Somewhat, although early Zeiss lenses produced low contrast but
"greater" resolution than their Japanese competition even though the
Japanese lenses were much preferred for aesthetic quality because of
higher contrast.
That's assuming your definition of resolution is extinction resolution. If you believe MTF50 is a better match to the human visual system, you get a different answer.
But the media was nonetheless a "constant". Sensors are not a constant.
Sort of. You still had to pay attention to which media and processing was used when comparing results from different testers.
Actually, the statement stands. The Foveon sensor does indeed better
detect "overall" color than the CFA equivalents. Equivalents in this
case being parity in B&W resolution measurement.
Is that an interesting baseline for equivalence? That means comparing the SD14 to the 30D (using in-camera JPEG or a conservative converter). If Canon increases the resolution on the 30D replacement (which is likely), that will mean there will not be any more direct equivalents in production. The whole issue of this thread is that by emphasizing 14MP they are inviting other comparisons.
A single number can, however be used as an indicator of performance.
Only to the degree that number correlates to our perceptions.
The point being that a sensor which reveals all
the color resolution as well as it reveals black and white resolution
gives the viewer a more realistic expectation of accuracy in the
photo.
That contradicts the conventional description of how our visual system works, e.g. humans do not have the same ability to perceive detail in all colors. Thus far your point is a minority opinion. That's why most reviewers do not accept it.

--
Erik
 
