Sense & Sensors in Digital Photography

You might get a little less overall luminance from red, green, and
blue vs green alone, but the lack of usable color info from
only having green makes any imbalance much less desirable, which is
why sensors are not all green.
You get a lot less luminance information with equal amounts of R,
G, and B. Read up:

http://www.poynton.com/notes/colour_and_gamma/ColorFAQ.html#RTFToC9
Interesting, it also says you cannot weight green more heavily because the eye is "extraordinarily sensitive to blue colors." And "If you have an RGB image but have no information about its chromaticities, you cannot accurately reproduce the image."

It also emphasizes that you cannot have luminance or color information from a single component, without superposition of all three components together. The "luminance" of the green component alone is not luminance at all; to characterize it as such is wrong. Luminance requires R+G+B, as does chrominance.
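For reference, the Rec. 709 luma coefficients discussed in Poynton's FAQ make the asymmetry between the primaries concrete; a minimal sketch (the coefficients are the standard ones, the function name is mine):

```python
# Rec. 709 luma coefficients for linear-light RGB
# (green carries ~71.5% of luminance, red ~21.3%, blue ~7.2%).
R_W, G_W, B_W = 0.2126, 0.7152, 0.0722

def luma(r, g, b):
    """Relative luminance of a linear RGB triple."""
    return R_W * r + G_W * g + B_W * b

print(luma(0.0, 1.0, 0.0))                         # green alone: ~0.7152
print(luma(1.0, 0.0, 0.0) + luma(0.0, 0.0, 1.0))   # red + blue:  ~0.2848
```

Note that this weighted sum needs all three components, which is exactly the point being made: no single channel is luminance by itself.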
We've already covered why sensors aren't all green. Why do you
keep raising this ridiculous point?
Yet, you keep saying more green is better. You can't say more green is better without saying that all green is best.
Regardless of what you think about the Foveon concept, it shows the
25/50/25% RGB Bayer pattern is less desirable than a 37/37/25% RGB
split, according to Sony. It follows that a 33/33/33% would be
ideal, since you'd make use of every sensor for both color and
luminance.
This is, again, incorrect. How many times do I have to explain this?

Sony is using the E filter in an attempt to improve color fidelity,
NOT resolution. If anything, the problem that they are fixing is
not one of too much green, but too much red:
Sony changed because the new sensor is better. It's very simple.
I can't understand anything you said in the above paragraph.
It's a trend when you are proven wrong.
You can't have a trend without any data points. However, it is
amusing that the times when you are typing total gibberish seem to
coincide with your times when you believe you are proving me wrong.
It's simple: you said you
agree that sensing full color at every pixel location is better
than the Bayer 25/50/25% RGB split which cannot do that. That
means you also agree that a 33/33/33% mosaic,
No - it does not mean that I agree to this. The second sentence
does not follow logically from the first.
if such a thing were
possible, would also be ideal since it is indistinguishable from a
vertical array that builds one full color pixel per every 3
sensors--since you simply combine every RGB triple into 1 full
color pixel with no need for interpolation, either way.
33/33/33 RGB is obviously possible for a color filter array and
also obviously not a good choice for a color filter array because
it allocates fewer resources to the wavelengths that most influence
our perception of detail.

It has been clearly demonstrated many times that we barely notice
decreased blue resolution, while decreased green resolution causes
very visible degradation. What further proof could you possibly
want?
Then you should never have agreed that sensing all colors at every point is better. But you did; the only logical interpretation of that statement is that a 33/33/33% split is best, since there is no other way to do it. Horizontal or vertical array, it doesn't matter: if you have a 33/33/33% mosaic or vertical array, each output pixel becomes a straight add of 3 adjacent sensors, no interpolation is required, and you've sensed the full spectrum at every pixel location, each with a 3-sensor aperture.
 
Interesting, it also says you cannot weight green more heavily
because the eye is "extraordinarily sensitive to blue colors."
It does not say that you cannot weigh green more heavily.

It does suggest that using fewer bits for blue might be a bad idea. Don't confuse bit depth with spatial resolution.
And
"If you have an RGB image but have no information about its
chromaticities, you cannot accurately reproduce the image."

It also emphasizes that you cannot have luminance or color
information from a single component, without superposition of all
three components together. The "luminance" of the green component
alone is not luminance at all; to characterize it as such is wrong.
Luminance requires R+G+B, as does chrominance.
None of this is inconsistent with the reasons for using the Bayer pattern.
We've already covered why sensors aren't all green. Why do you
keep raising this ridiculous point?
Yet, you keep saying more green is better. You can't say more
green is better without saying that all green is best.
What sort of crazy talk is that?

Suppose that having a large kitchen is more important to you than having a large bedroom. Does this mean that you should buy a house that's all kitchen and has no bedroom?
Sony is using the E filter in an attempt to improve color fidelity,
NOT resolution. If anything, the problem that they are fixing is
not one of too much green, but too much red:
Sony changed because the new sensor is better. It's very simple.
1) It's not obvious that it's better, 2) if you think it's better, then you should want a fourth layer in your X3 sensor for the same reason, and 3) any improvement is not because favoring green was bad, but because the red filter doesn't have the correct "negative red" response.
It has been clearly demonstrated many times that we barely notice
decreased blue resolution, while decreased green resolution causes
very visible degradation. What further proof could you possibly
want?
Then you should have never agreed that sensing all colors at every
point is better.
Why?
But you did, the only logical interpretation of
that statement is that a 33/33/33% split is best, since there is no
other way to do it.
This doesn't make sense. How do you get from sampling at three ranges per pixel being better than 1 to an even balance of red, green, and blue filters in a CFA? Sampling the spectrum at 1000 points per pixel would also be better, but this doesn't mean that we should have 1000 different colors in a CFA.

BTW, I don't know what the last "it" refers to in your sentence above.
Horizontal or vertical array, it doesn't
matter, if you have 33/33/33% mosaic or vertical array, each output
pixel becomes a straight add of 3 adjacent sensors, no
interpolation is required and you've sensed the full spectrum at
every pixel location, each with a 3 sensor aperture.
I'm not sure what a "3 sensor aperture" is, but if you have 33% R, G, B then you can, at best, resolve detail at 1/3 the nominal MP rating of the sensor. If you have a Bayer pattern, then you can, in MOST cases, resolve detail at close to 50% of the nominal MP rating of the sensor, but only 25% in the worst case. This is why a 3.5MP X3 sensor behaves like a 6-8 MP Bayer pattern sensor for most real images, and better than a 6-8 MP Bayer pattern sensor primarily for artificial test patterns.

The reason, again, is that detail in the green range is much more noticeable to us than detail at the ends of the visible spectrum.
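The sample-count arithmetic behind resolution figures like these can be sketched as follows (a toy calculation, not a demosaicing model; the names and the square-root density measure are my own simplification):

```python
import math

# Fraction of photosites assigned to each channel in the two
# layouts under discussion.
BAYER = {"R": 0.25, "G": 0.50, "B": 0.25}   # 25/50/25 mosaic
EVEN  = {"R": 1/3,  "G": 1/3,  "B": 1/3}    # hypothetical 33/33/33 split

def linear_density(fraction):
    """Per-axis sampling density of a channel, relative to a sensor
    that sampled that channel at every photosite."""
    return math.sqrt(fraction)

# Bayer samples green at ~71% of full linear resolution;
# an even split drops that to ~58%.
print(round(linear_density(BAYER["G"]), 2))   # 0.71
print(round(linear_density(EVEN["G"]), 2))    # 0.58
```

Under this rough measure, moving green sites to red and blue trades away density in the channel that dominates perceived detail.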

--
Ron Parr
FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
Interesting, it also says you cannot weight green more heavily
because the eye is "extraordinarily sensitive to blue colors."
It does not say that you cannot weigh green more heavily.

It does suggest that using fewer bits for blue might be a bad idea.
Don't confuse bit depth with spatial resolution.
Enter the third soon to be aborted argument. Spatial resolution also requires all three components, a horizontal array with 1 red, 1 green, 1 blue sensor has the same spatial resolution as a vertical array with 1 red, 1 green, 1 blue sensor. Unless you measure pure black, the lack of light, which is exactly why Bayer resolution is commonly measured using black lines instead of blue or red, or even green for that matter.
And
"If you have an RGB image but have no information about its
chromaticities, you cannot accurately reproduce the image."

It also emphasizes that you cannot have luminance or color
information from a single component, without superposition of all
three components together. The "luminance" of the green component
alone is not luminance at all; to characterize it as such is wrong.
Luminance requires R+G+B, as does chrominance.
None of this is inconsistent with the reasons for using the Bayer
pattern.
Great, it seems we agree.
We've already covered why sensors aren't all green. Why do you
keep raising this ridiculous point?
Yet, you keep saying more green is better. You can't say more
green is better without saying that all green is best.
What sort of crazy talk is that?

Suppose that having a large kitchen is more important to you than
having a large bedroom. Does this mean that you should buy a house
that's all kitchen and has no bedroom?
Absolutely. The idea is optimization of a zero-sum game: if a kitchen is most important, then the best house is all kitchen. Any house with less kitchen will lose when evaluated by such a criterion.
Sony is using the E filter in an attempt to improve color fidelity,
NOT resolution. If anything, the problem that they are fixing is
not one of too much green, but too much red:
Sony changed because the new sensor is better. It's very simple.
1) It's not obvious that it's better, 2) if you think it's better,
then you should want a fourth layer in your X3 sensor for the same
reason,
That would skew the already even distribution, not make it more even.
and 3) any improvement is not because favoring green was
bad, but because the red filter doesn't have the correct "negative
red" response.
It really doesn't matter how you slice their reasoning, Sony obviously believes 37/37/25% RGB is better than 25/50/25% RGB. That really is all anyone needs to know to understand this, unless you think the largest manufacturer of Bayer sensors is irrational.
It has been clearly demonstrated many times that we barely notice
decreased blue resolution, while decreased green resolution causes
very visible degradation. What further proof could you possibly
want?
Then you should have never agreed that sensing all colors at every
point is better.
Why?
But you did, the only logical interpretation of
that statement is that a 33/33/33% split is best, since there is no
other way to do it.
This doesn't make sense. How do you get from sampling at three
ranges per pixel being better than 1, to an even balance of red
green and blue filters in a CFA?
Because each output pixel would then have a 3 sensor aperture, no interpolation required when you can physically superimpose. Same reason Foveon doesn't need to interpolate a lack of weak channels.
Sampling the spectrum at 1000
points per pixel would also be better, but this doesn't mean that
we should have 1000 different colors in a CFA.
Especially when you first assume a primary color model that is 997 colors short of a 1000.
Horizontal or vertical array, it doesn't
matter, if you have 33/33/33% mosaic or vertical array, each output
pixel becomes a straight add of 3 adjacent sensors, no
interpolation is required and you've sensed the full spectrum at
every pixel location, each with a 3 sensor aperture.
I'm not sure what a, "3 sensor aperture," is,
1 output pixel would need

R
GB

Why would you interpolate with an even color distribution?
but if you have 33%
R, G, B then you can, at best, resolve detail at 1/3 the nominal MP
rating of the sensor.
Which is the best you can ever do, unless you think digital interpolation adds optical resolution. Each output pixel requires three sensors for optical accuracy, not one sensor, interpolating a deficiency in certain color channels can never add optical information or resolution--except in special cases, like a B&W chart--no chrominance.
If you have a Bayer pattern, then you can,
in MOST cases, resolve detail at close to 50% of the nominal MP
rating of the sensor, but only 25% in the worst case. This is why a
3.5MP X3 sensor behaves like a 6-8 MP Bayer pattern sensor for most
real images, and better than a 6-8 MP Bayer pattern sensor
primarily for artificial test patterns.

The reason, again, is that detail in the green range is much more
noticeable to us than detail at the ends of the visible spectrum.
Then you should have all green.

And not according to your own source which points out that the eye is "extraordinarily sensitive to blue colors."

Let's do a real-life test: can you see any lines below?

Canon 1Ds:



Sigma SD9:

 
...what goes on in that little head of his when he says things like this:
Absolutely. The idea is optimization of a zero-sum game: if a
kitchen is most important, then the best house is all kitchen. Any
house with less kitchen will lose when evaluated by such a criterion.
but it does explain why these arguments are so long and so pointless.

S, you probably know that I'm a Sigma fan, too, and I'm happy to agree that the Foveon X3 is better than Bayer in various ways. But it doesn't help the argument to remove all credibility by denying the reality of what's good about the Bayer approach. I would have thought that after several years of pushback on your approach you would have learned. Or maybe you're just there as a shill to give us someone to talk reality to...

And by the way, Sony's RGBE pattern is not necessarily a reflection of their opinion of what's better. It's a marketing gimmick, primarily, and all the evidence to date seems to indicate that the regular RGB Bayer is better, because it can get luminance detail to higher frequencies without confusion with color detail, and because the color noise is less. As Ron points out, the only potential advantage is color accuracy, which is unrelated to the reasons why more green samples are advantageous in the Bayer pattern.

j
 
It does suggest that using fewer bits for blue might be a bad idea.
Don't confuse bit depth with spatial resolution.
Enter the third soon to be aborted argument. Spatial resolution
also requires all three components, a horizontal array with 1 red,
1 green, 1 blue sensor has the same spatial resolution as a
vertical array with 1 red, 1 green, 1 blue sensor.
What do you think you're measuring the spatial resolution of here? We're most sensitive to spatial resolution in luminance, and green contributes most to luminance, so it pays to try harder to get this right.
Unless you
measure pure black, the lack of light, which is exactly why Bayer
resolution is commonly measured using black lines instead of blue
or red, or even green for that matter.
I don't understand this sentence.
Suppose that having a large kitchen is more important to you than
having a large bedroom. Does this mean that you should buy a house
that's all kitchen and has no bedroom?
Absolutely. The idea is optimization of a zero-sum game: if a
kitchen is most important, then the best house is all kitchen. Any
house with less kitchen will lose when evaluated by such a criterion.
Like many other things you've said here, this is obviously wrong. Any reasonable person understands the difference between favoring A over B and preferring A exclusively over B. Hint: most houses have both kitchens and bedrooms, yet the ratio of kitchen size to bedroom size varies to satisfy individual preferences.
Sony is using the E filter in an attempt to improve color fidelity,
NOT resolution. If anything, the problem that they are fixing is
not one of too much green, but too much red:
Sony changed because the new sensor is better. It's very simple.
1) It's not obvious that it's better, 2) if you think it's better,
than you should want a fourth layer in your X3 sensor for the same
reason,
That would skew the already even distribution, not make it more even.
Adding a fourth layer to Foveon's sensor would skew what? How? You're getting strictly more accurate information about the power spectrum.
and 3) any improvement is not because favoring green was
bad, but because the red filter doesn't have the correct "negative
red" response.
It really doesn't matter how you slice their reasoning, Sony
obviously believes 37/37/25% RGB is better than 25/50/25% RGB.
That really is all anyone needs to know to understand this, unless
you think the largest manufacturer of Bayer sensors is irrational.
Emerald is sampling a specific range of frequencies. These are not 1/2 blue and 1/2 green.

Do you understand that there is a difference between spectral cyan light and 50% blue + 50% green light?
This doesn't make sense. How do you get from sampling at three
ranges per pixel being better than 1, to an even balance of red
green and blue filters in a CFA?
Because each output pixel would then have a 3 sensor aperture, no
interpolation required when you can physically superimpose.
It's always better to have more information than less. This fact alone doesn't imply anything about the design of a CFA.
Same
reason Foveon doesn't need to interpolate a lack of weak channels.
That's not even a sentence. I don't know what you're talking about.
Sampling the spectrum at 1000
points per pixel would also be better, but this doesn't mean that
we should have 1000 different colors in a CFA.
Especially when you first assume a primary color model that is 997
colors short of a 1000.
Huh?
1 output pixel would need

R
GB

Why would you interpolate with an even color distribution?
Well, I'm not sure how this is relevant to any of the points you're trying to make, but the answer is that you'd need to interpolate because you need to assign 3 color components to each pixel, but you are sampling only one component at any point.
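That interpolation step can be sketched for the green channel of an RGGB mosaic. This is a deliberately crude neighbor-averaging pass (real demosaicing algorithms are far more elaborate); the function name and list-of-lists representation are mine:

```python
def interpolate_green(raw):
    """Fill in green at every photosite of an RGGB mosaic by averaging
    the green 4-neighbors (a crude sketch, not a real demosaicer)."""
    h, w = len(raw), len(raw[0])

    def is_green(y, x):
        # RGGB tile: G at (even row, odd col) and (odd row, even col)
        return (y + x) % 2 == 1

    green = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if is_green(y, x):
                green[y][x] = raw[y][x]
            else:
                # every in-bounds 4-neighbor of a non-green site is green
                nbrs = [raw[j][i]
                        for j, i in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                        if 0 <= j < h and 0 <= i < w and is_green(j, i)]
                green[y][x] = sum(nbrs) / len(nbrs)
    return green

# A flat gray scene should come back unchanged:
flat = [[100.0] * 4 for _ in range(4)]
print(all(v == 100.0 for row in interpolate_green(flat) for v in row))  # True
```

The estimated values at red and blue sites are exactly the interpolation being debated: plausible guesses from neighbors, not measurements.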
but if you have 33%
R, G, B then you can, at best, resolve detail at 1/3 the nominal MP
rating of the sensor.
Which is the best you can ever do, unless you think digital
interpolation adds optical resolution.
It's well known to do better than this, and not because interpolation adds resolution, but because the green pixels are capturing so much of the information relevant to our perception of detail.
Each output pixel requires
three sensors for optical accuracy, not one sensor, interpolating a
deficiency in certain color channels can never add optical
information or resolution--except in special cases, like a B&W
chart--no chrominance.
What's "optical accuracy"? If it is something that is related to our ability to perceive detail in real pictures, then your statement is generally false, as proved by test images (not test patterns) at dpreview.
The reason, again, is that detail in the green range is much more
noticeable to us than detail at the ends of the visible spectrum.
Then you should have all green.
Why?
And not according to your own source which points out that the eye
is "extraordinarily sensitive to blue colors."
You read it, but you didn't understand it. You're confusing bit depth with spatial resolution again.
Lets do a real life test, can you see any lines below?
I was wondering how long it would take for you to trot those out again.

We've discussed those before; you still don't get that this is not "real life." When you look out your window, is that what you see?

--
Ron Parr
FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
Ron, I admire your "patience"...
Yes, Dominic, he hasn't even started bashing the USA. It can be
done.
Bashing the USA?
I think he's referring to Dominic's brashness in criticizing the actions of our administration. Sometimes I do a little of that myself, but I wouldn't accept calling it bashing the USA, and would hope that it wouldn't come up in or be relevant to a technical discussion.

j
 
Absolutely. The idea is optimization of a zero-sum game: if a
kitchen is most important, then the best house is all kitchen. Any
house with less kitchen will lose when evaluated by such a criterion.
but it does explain why these arguments are so long and so pointless.

S, you probably know that I'm a Sigma fan, too, and I'm happy to
agree that the Foveon X3 is better than Bayer in various ways. But
it doesn't help the argument to remove all credibility by denying
the reality of what's good about the Bayer approach. I would have
thought that after several years of pushback on your approach you
would have learned. Or maybe you're just there as a shill to give
us someone to talk reality to...
Personally attacking people with no point in mind at all is certainly the worst way to go.
And by the way, Sony's RGBE pattern is not necessarily a reflection
of their opinion of what's better. It's a marketing gimmick,
primarily, and all the evidence to date seems to indicate that the
regular RGB Bayer is better, because it can get luminance detail to
higher frequencies without confusion with color detail, and because
the color noise is less. As Ron points out, the only potential
advantage is color accuracy, which is unrelated to the reasons why
more green samples are advantageous in the Bayer pattern.

j
 
...what goes on in that little head of his when he says things like
this:
Absolutely. The idea is optimization of a zero-sum game: if a
kitchen is most important, then the best house is all kitchen. Any
house with less kitchen will lose when evaluated by such a criterion.
but it does explain why these arguments are so long and so pointless.
Yes. I'm amazed that anyone continues to debate him at length. It reminds me of the Gollum character in Lord of the Rings, who can only focus on his "precious," which largely blinds him to all else.

It is pretty easy to measure specific differences between an X3 and Bayer sensor. And point to the advantages and shortcomings of each. It becomes a bit more subjective when deciding which is the better photographic tool though. Specific situations can be found that favor specific designs.

In the end, all engineering is the art of compromise. This seems to escape the thought processes of some people. There is no such thing as "no compromise" engineering. The Foveon X3 and Bayer sensors are both significant compromises.
S, you probably know that I'm a Sigma fan, too, and I'm happy to
agree that the Foveon X3 is better than Bayer in various ways. But
it doesn't help the argument to remove all credibility by denying
the reality of what's good about the Bayer approach.
So true. I also like the concept of the X3 sensor a lot. But all of my cameras use a Bayer mask (actually a CMYG mask - so much for the three pegs four holes whatever Sony does is rational bit. Sony has produced at least three schemes. Are two of them irrational? Oh criminey - I'm getting sucked into the vortex!!!!)
I would have
thought that after several years of pushback on your approach you
would have learned.
At some point you have to ask yourself the question of whether it makes any sense at all to engage in the discussion.

--
Jay Turberville
http://www.jayandwanda.com
 
Sony
has produced at least three schemes. Are two of them irrational?
Oh criminey - I'm getting sucked into the vortex!!!!)
Why not read their literature:

"In the 4-color filter CCD, a filter with the Emerald (E) color is added to the conventional 3-color RGB filter, in order to reduce the color reproduction errors and to record natural images closer to the natural sight perception of the human eye. As a result, the characteristics of the CCD color filter become much closer to those of the human visual system, achieving dramatic reduction in the color reproduction error."

Sony clearly thinks that the 25/50/25% mosaic is a poor performer compared to a 37/37/25% mosaic, especially as it pertains to human perception. Regardless of the poor performance of the 828 system on noise, CA, and price, their announcement above was intended to be camera independent.
 
He means full-color pixels/photosites. Foveon has 3.4m, with 3
stacked photosites each. This bayer also has 3.4m, with 4
photosites each. 2 green, 1 red, 1 blue in a square. As discussed
ad nauseum on this forum, the Kodak interpolates this back into
13.7m pixels right away.
That is oversimplified and wrong; the 4 photosites do not form 1
pixel in the 14n. You can't simply add them up to an RGB triple.
I understand both you and the writer of this article. I like his
concept. This industry is still relatively new and it reminds me
of the wattage debate of the late 60's.

For the youngsters: At the time to get good music you had to have
a component set. That is, turntable, tuner, amplifier etc. The
more watts your amp put out the prouder the owner. Then people
began to recognize the signal to noise factor and you then needed
power and less distortion.

There originally was IHF watts (Institute of High Fidelity), I see
this represented today as Bayer. There were no standards
advertisers called it (the watts) what they wanted. Someone got
bright and created a formula for Watts and it was called RMS (Root
mean Square) This reminds me of Foveon. It took a few years, but
RMS is so accepted now that no one even asks which standard you are
using.

The author of this article has used some common sense in grouping
the photo sites and redefining a pixel. This is something that
should be done eventually. If it takes 4 pixels to determine
correct color then why not use this standard. If you want to argue
that it only takes 3 pixels to determine color, then fine. I don't
care which.

I do think that this author should be respected for his opinion and
I am a little disappointed in the way he is being disrespected by
some.
Especially since Foveon already took the same stance as the author and remains widely respected because their image quality commands it:

http://www.x3f.info/technotes/x3pixel/pixelpage.html

Most notably:

"Accepted definitions: pixel - an RGB triple in a sampled color image"

"... in reaction to customers wondering why Foveon and Sigma did not clearly state that the SD9 was either a 3.4 megapixel or a 10.2 megapixel camera, we have had more experience with the proposed terminology in the marketplace. Many catalogs and reviews have not been able to accommodate the proposed terminology changes, and needed to put a single number into a megapixel slot; unfortunately they sometimes chose the 3.4 MP number. This number is very misleading, as it suggests that the SD9 is in some sense in the same category as 3 to 4 MP cameras, when in fact the SD9 is delivering image resolution and sharpness that is outstanding in the DSLR category of 6 to 14 MP.

In response to this misleading information in the marketplace, Sigma and Foveon now agree and insist that if only a single megapixel number can be used, then the 10.2 MP number, based on the number of photodetectors, is the only possibility. It is an objective count of the same kind of detector elements as are usually counted as megapixels. It is incomplete in that it does not fully represent the novel organization of pixel sensors into stacks of three, which allows image capture free of color artifacts and allows all the sharpness to fit naturally into a smaller output file. When more information can be used, a notation such as "10.2 MP (3.4 MP Red + 3.4 MP Green + 3.4 MP Blue)" is appropriate."
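The megapixel accounting quoted above reduces to simple multiplication (figures taken from the Foveon note; variable names are mine):

```python
# Stacked-sensor megapixel accounting, per the Foveon note quoted above.
locations_mp = 3.4   # full-color pixel locations (RGB triples)
layers = 3           # stacked R, G, B photodetectors per location

photodetectors_mp = locations_mp * layers
print(round(photodetectors_mp, 1))   # 10.2 -- the headline figure
```

Whether one counts locations (3.4 MP) or photodetectors (10.2 MP) is exactly the naming dispute the note addresses.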
 
Sometimes I do a little of that
myself, but I wouldn't accept calling it bashing the USA, and would
hope that it wouldn't come up in or be relevant to a technical
discussion.
Technical discussion? Give me a break. As soon as Sigmasd9 participates in a thread it is hardly "technical" and really no "discussion"; what we have in this case is one individual insisting on his esoteric pseudo-knowledge about digital imaging.

--
http://www.pbase.com/dgross (work in progress)
http://www.pbase.com/sigmasd9/dominic_gross_sd10

 
Ron, I admire your "patience"...
Yes, Dominic, he hasn't even started bashing the USA. It can be
done.
Bashing the USA?
Nasty remarks about the current US Administration would be a better description of what he is talking about; he is just, as always, using words a bit loosely... Knowing how quarrelsome and stubborn our resident troll can be, I wonder why he never commented on any of this "bashing" directly. It is just his joker for situations like this, where he drove the discussion against the next best wall by repeating his pseudo-knowledge a bit too often.

--
http://www.pbase.com/dgross (work in progress)
http://www.pbase.com/sigmasd9/dominic_gross_sd10

 
Sony
has produced at least three schemes. Are two of them irrational?
Oh criminey - I'm getting sucked into the vortex!!!!)
Why not read their literature:

"In the 4-color filter CCD, a filter with the Emerald (E) color is
added to the conventional 3-color RGB filter, in order to reduce
the color reproduction errors and to record natural images closer
to the natural sight perception of the human eye. As a result, the
characteristics of the CCD color filter become much closer to those
of the human visual system, achieving dramatic reduction in the
color reproduction error."

Sony clearly thinks that the 25/50/25% mosaic is a poor performer
compared to a 37/37/25% mosaic, especially as it pertains to human
perception. Regardless of the poor performance of the 828 system
on noise, CA, and price, their announcement above was intended to
be camera independent.
Even if we actually believed Sony's marketing material as fact, your conclusions still wouldn't follow since, among other things, RGBE is not a 37/37/25 mosaic.

--
Ron Parr
FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
Sony
has produced at least three schemes. Are two of them irrational?
Oh criminey - I'm getting sucked into the vortex!!!!)
Why not read their literature:

"In the 4-color filter CCD, a filter with the Emerald (E) color is
added to the conventional 3-color RGB filter, in order to reduce
the color reproduction errors and to record natural images closer
to the natural sight perception of the human eye. As a result, the
characteristics of the CCD color filter become much closer to those
of the human visual system, achieving dramatic reduction in the
color reproduction error."

Sony clearly thinks that the 25/50/25% mosaic is a poor performer
compared to a 37/37/25% mosaic, especially as it pertains to human
perception. Regardless of the poor performance of the 828 system
on noise, CA, and price, their announcement above was intended to
be camera independent.
Even if we actually believed Sony's marketing material as fact,
Now you know more than the largest maker of Bayer sensors. I hope you don't mind if I trust Sony more than a few Canon diehards who refuse to accept that they're being sold yesterday's technology at premium prices.
your conclusions still wouldn't follow since, among other things,
RGBE is not a 37/37/25 mosaic.
It's actually 0/100/0.
 
Sometimes I do a little of that
myself, but I wouldn't accept calling it bashing the USA, and would
hope that it wouldn't come up in or be relevant to a technical
discussion.
Technical discussion? Give me a break, as soon as Sigmasd9
participates in a thread it is hardly "technical" and really no
"discussion", what we do have in this case is one individual
insisting on its esoteric pseudo knowledge about digital imaging.
Personal attacks don't help; obviously lots of people are interested in subjects that bore you--just write nothing, like mom said.
 
