How does everyone feel about interpolated pictures?

Considering the fact that it has a tilted square pattern, the
vertical and horizontal frequency of alternating lines is obtained
by multiplying the alternating pixel frequency by sqrt(2) ~ 1.4.

Consequently it has the horizontal and vertical resolution of a
camera with a 3Mp*1.4 ~ 4.2Mp sensor.

I think the Fuji can be fairly positioned somewhere between the 3Mp
and 4Mp cameras, closer to the 4Mp class.
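Taken at face value, the claimed arithmetic is this quick sketch (whether the sqrt(2) factor is legitimate is exactly what the rest of the thread disputes):

```python
import math

pixels = 3_000_000                      # photosites in the 3Mp sensor
factor = math.sqrt(2)                   # claimed gain in H/V line frequency
print(round(factor, 1))                 # ~1.4
print(round(pixels * factor / 1e6, 1))  # ~4.2, the claimed "equivalent" Mp
```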
Sergei, the ratio of cells/area isn't changed just because you rotate the square. In fact, rotating the square will slightly REDUCE the vertical and horizontal resolution, favoring the diagonal.

It's pretty easy to demonstrate - draw yourself a little chart, then turn it 45 degrees. Notice how the horizontal and vertical cells are now farther apart, and the diagonal ones closer.

The real advantage of the "Honeycomb" pattern is that it allows the individual cells to be larger and therefore more sensitive to light. There's no change in overall resolution.
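The little-chart experiment can be checked numerically: build a unit square lattice, rotate it 45 degrees, and measure the nearest-neighbour distances along the horizontal and along the diagonal. A minimal sketch (the 4x4 lattice size and the brute-force pair search are my own simplifications):

```python
import math

def lattice(n, rotate=False):
    """Unit square lattice of n x n points, optionally rotated 45 degrees."""
    pts = []
    c = math.cos(math.pi / 4)
    for i in range(n):
        for j in range(n):
            if rotate:
                pts.append((c * (i - j), c * (i + j)))
            else:
                pts.append((float(i), float(j)))
    return pts

def min_step(pts, direction, tol=1e-9):
    """Smallest distance between distinct points lying along `direction`."""
    dx, dy = direction
    best = float("inf")
    for ax, ay in pts:
        for bx, by in pts:
            vx, vy = bx - ax, by - ay
            d = math.hypot(vx, vy)
            if d < tol:
                continue
            # keep only pairs collinear with the requested direction
            if abs(vx * dy - vy * dx) < tol:
                best = min(best, d)
    return best

square = lattice(4)
tilted = lattice(4, rotate=True)
print(min_step(square, (1, 0)))  # horizontal step on the square grid: 1.0
print(min_step(tilted, (1, 0)))  # horizontal step after rotation: ~1.414
print(min_step(tilted, (1, 1)))  # diagonal step after rotation: ~1.0
```

Rotation does swap the two distances, exactly as described above.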
 
Actually, most 3mp cameras have no trouble at all recording all of
the available image information in a 3mp file format. They're not
losing anything. Since their CCD geometries match the format of
the image data in the file, they can map the cells of the CCD
directly to the file format.
Jim, the Fuji Super CCD in 6M mode also has no geometric
corrections to make to remap. Each output pixel is created from the
colour information of 4 surrounding CCD elements or 4 surrounding
interpolated CCD elements.
They have to do some interpolation in
the color space, but other than that it is a direct feed to the
image file.
This interpolation in the colour space is just as significant, only
here each output pixel gets its colour information from 4 adjacent
CCD elements, and these vary according to where that pixel is in
space. It is not a straight geometric mapping as I see it.
Yes, I can see your point. It is a fairly challenging problem in sampling/information theory to determine exactly how to record the maximum amount of image detail from the raw CCD data - regardless of the geometry. I'm attempting to differentiate between the methods used according to their geometry, yet in reality the processes are quite similar - in fact, more similar than I had realized.

It's a very interesting problem. Ideally, you want to use as few cells as possible in creating the RGB value for an individual recorded pixel - since every time you combine information from multiple cells, you give up some of the raw resolution. Yet if you use just three cells for each value (the ideal situation, it would seem), the result, and in fact the process, becomes identical for either CCD geometry.

But that suggests that in fact there is no benefit whatsoever for using the 6mp format. Hmmm.
 
Considering the fact that it has a tilted square pattern, the
vertical and horizontal frequency of alternating lines is obtained
by multiplying the alternating pixel frequency by sqrt(2) ~ 1.4.

Consequently it has the horizontal and vertical resolution of a
camera with a 3Mp*1.4 ~ 4.2Mp sensor.

I think the Fuji can be fairly positioned somewhere between the 3Mp
and 4Mp cameras, closer to the 4Mp class.
Sergei, the ratio of cells/area isn't changed just because you
rotate the square. In fact, rotating the square will slightly
REDUCE the vertical and horizontal resolution, favoring the
diagonal.

It's pretty easy to demonstrate - draw yourself a little chart,
then turn it 45 degrees. Notice how the horizontal and vertical
cells are now farther apart, and the diagonal ones closer.

The real advantage of the "Honeycomb" pattern is that it allows the
individual cells to be larger and therefore more sensitive to
light. There's no change in overall resolution.
Thank you for disputing, but I drew a little chart.
Will you agree that the number of red lines per area of the blue
square is bigger than the number of black lines divided by the area
of the same square?

[image: chart of red and black line patterns over a blue square]

I only want to say that more vertical and horizontal LINES can be resolved in this manner.

Abstractly speaking, the Fuji camera cannot recognize whether you imposed the red pattern or the black pattern over its SuperCCD.

But you can see that the red pattern has more lines per inch than the black one. So you can grab more information in the horizontal and vertical directions. Also you can see that JPEG does not like abrupt junctions in the picture and blurs them; it has cut the highest frequencies, because JPEG considers them less important for humans.

Believe me, sometimes it is possible to prove such incredible things
that you could never imagine. Mathematics is a speculative science,
but it really works.

I personally would prefer a true honeycomb pattern because of its perfection, and the largest possible cells it allows, but I can do without.
 
Sergei, the ratio of cells/area isn't changed just because you
rotate the square. In fact, rotating the square will slightly
REDUCE the vertical and horizontal resolution, favoring the
diagonal.

It's pretty easy to demonstrate - draw yourself a little chart,
then turn it 45 degrees. Notice how the horizontal and vertical
cells are now farther apart, and the diagonal ones closer.

The real advantage of the "Honeycomb" pattern is that it allows the
individual cells to be larger and therefore more sensitive to
light. There's no change in overall resolution.
Thank you for disputing, but I drew a little chart.
Will you agree that the number of red lines per area of the blue
square is bigger than the number of black lines divided by the area
of the same square?
Sure, I'll agree to that. Now, what have you demonstrated?
But you can see that the red pattern has more lines per inch than the black one.
True. Your drawing has more red lines than black ones.
So you can grab more information in the horizontal and vertical directions.
False. Sorry, there's no connection between these two statements.

Sergei, pay attention to the individual cells - that's what matters. You've twisted the grid by 45 degrees. Fine. But then you've drawn in a new set of horizontal and vertical lines. Why? There are no more cells in a given area of the CCD than there were before. There's no basis for pretending that it has more resolution. It doesn't.

I could twist it another 45 degrees and add more lines in between the black ones. And you could then twist it 45 more degrees and add more lines in between the red ones. But it's all meaningless. The only thing that governs resolution is the number of points sampled over a given area.

And that hasn't changed.
 
For that one here's what I did:

NeatImage,
Resample to 300%,
USM 500%, 1 pixel, 0,
SmartBlur 3.0, 8.0, High,
Resample to 3Mp (2048x1536),
Apply USM to taste, e.g. 100%, 0.3 pixel, 0

If someone has an easier method with comparable results, please let us know..
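For anyone without these exact tools: the two USM steps in the recipe amount to "original plus amount times (original minus blurred)". A stdlib-only sketch of that operation on a tiny grayscale raster (the 3x3 box blur, the clamping to 0..255, and the sample values are my simplifications; Photoshop's USM uses a Gaussian radius and a threshold):

```python
def box_blur(img):
    """3x3 box blur with edge clamping on a 2-D list of grayscale values."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = n = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += img[yy][xx]
                        n += 1
            out[y][x] = acc / n
    return out

def unsharp_mask(img, amount):
    """USM: add back `amount` of the difference between image and its blur."""
    blurred = box_blur(img)
    return [[max(0.0, min(255.0, p + amount * (p - b)))
             for p, b in zip(row, brow)]
            for row, brow in zip(img, blurred)]

# A soft vertical edge: sharpening should steepen it on both sides.
edge = [[50.0, 50.0, 150.0, 150.0] for _ in range(4)]
sharp = unsharp_mask(edge, amount=1.0)
```

After the call, the dark side of the edge gets darker and the bright side brighter, which is what the "haloing" of heavy USM settings looks like in practice.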
Hi Jared, could you tell us exactly what you did to these pictures,
and how?

Your results seem to be better than mine. I tried to do the same on
the 602 Chinatown pic, but although I'm happy with my result, yours
is better, so I'd like to know.

Thx
 
This is one of the better explanations I've seen:

http://groups.google.com/groups?q=g:thl1173120767d&dq=&hl=en&safe=off&selm=9pndck%249hd%241%40lily.cs.ubc.ca

Basically there's no free lunch; you get increased resolution
horizontally and vertically at the expense of diagonal resolution.
Agreed, the overall resolution doesn't change - there is a tradeoff. And it is an interesting explanation. But it appears that he's got his numbers reversed.

Take a look at it yourself. Imagine, or draw, a typical square CCD. Note the distance between the individual cells vertically, call it "1 unit". The same "1 unit" is the distance between the cells horizontally. And from basic trigonometry, you can see that the diagonal distance between cells is the hypotenuse of that isosceles right triangle, which is 1.414 units.

Now rotate the cells, and the distances are swapped. The "Honeycomb" arrangement actually increases the cell-to-cell distance in the horizontal and vertical directions, and reduces it diagonally.

Actually, though, I'm oversimplifying his explanation. He takes my point into account, but goes on to claim that the resolution isn't dependent on the individual sensor cells, but is somehow connected with the fact that they are organized in rows. And since the diagonal cells are organized into "more" rows, they somehow gather more resolution.

But there's no basis for such an assumption. Sampling theory doesn't deal in rows, it deals in points. Individual cells. Single samples. And, in the case of an image, sampling theory deals specifically with samples distributed over an area. If you choose to consider the fact that there are now more rows of cells, you also need to consider that the number of cells in each of those rows has been reduced, and by exactly the same proportion. Like you said, there's no free lunch.

I'm glad we're having this discussion, it brings up some interesting points. Because in reality, the only thing that matters is how many cells are spread out across the image. And from that standpoint, the CCD geometry is completely irrelevant. Which is the same conclusion that came from another discussion earlier today. Interesting. I'm beginning to believe it's true. You guys are beginning to change my mind, but not in the direction you intend. :-)
 
This is one of the better explanations I've seen:

http://groups.google.com/groups?q=g:thl1173120767d&dq=&hl=en&safe=off&selm=9pndck%249hd%241%40lily.cs.ubc.ca

Basically there's no free lunch; you get increased resolution
horizontally and vertically at the expense of diagonal resolution.
Agreed, the overall resolution doesn't change - there is a
tradeoff. And it is an interesting explanation. But it appears
that he's got his numbers reversed.

Take a look at it yourself. Imagine, or draw, a typical square
CCD. Note the distance between the individual cells vertically,
call it "1 unit". The same "1 unit" is the distance between the
cells horizontally. And from basic trigonometry, you can see that
the diagonal distance between cells is the hypotenuse of that
isosceles right triangle, which is 1.414 units.

Now rotate the cells, and the distances are swapped. The
"Honeycomb" arrangement actually increases the cell-to-cell
distance in the horizontal and vertical directions, and reduces it
diagonally.

Actually, though, I'm oversimplifying his explanation. He takes my
point into account, but goes on to claim that the resolution
isn't dependent on the individual sensor cells, but is somehow
connected with the fact that they are organized in rows. And since
the diagonal cells are organized into "more" rows, they somehow
gather more resolution.

But there's no basis for such an assumption. Sampling theory
doesn't deal in rows, it deals in points. Individual cells.
Single samples. And, in the case of an image, sampling theory
deals specifically with samples distributed over an area. If you
choose to consider the fact that there are now more rows of cells,
you also need to consider that the number of cells in each of those
rows has been reduced, and by exactly the same proportion. Like
you said, there's no free lunch.

I'm glad we're having this discussion, it brings up some
interesting points. Because in reality, the only thing that
matters is how many cells are spread out across the image. And
from that standpoint, the CCD geometry is completely irrelevant.
Which is the same conclusion that came from another discussion
earlier today. Interesting. I'm beginning to believe it's true.
You guys are beginning to change my mind, but not in the direction
you intend. :-)
One more attempt to incline you in the right direction.

Every sensor somehow approximates a line by a sequence of pixels
(more generally by a set of pixels).

Let's solve the inverse problem - we are in a kindergarten, trying to join cells of a checkered piece of paper with a line. But there is also an additional requirement - the lines should be as close to each other as possible, and parallel. You can try it in your spare time.

Tip: See the picture with red and black lines.

In real-life pictures, even lines are unlikely to appear. But there
is an extreme example of such a picture, called a test pattern.
The common measure of resolution is the number of distinctly resolved
lines per some interval (one inch or one millimetre, for instance).
The number of red lines per interval is greater than the number of
black lines. No magic.
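The line counting here can be made concrete. For a unit square lattice, the family of parallel lines of direction (p, q) (with gcd(p, q) = 1) drawn through every cell is spaced 1/sqrt(p^2 + q^2) apart - a standard lattice-geometry fact - so the 45-degree family is sqrt(2) denser than the horizontal one:

```python
import math

def line_spacing(p, q):
    """Distance between adjacent parallel lines of direction (p, q)
    drawn through every point of a unit square lattice (gcd(p, q) = 1)."""
    return 1.0 / math.hypot(p, q)

horizontal = line_spacing(1, 0)  # the "black" lines: spacing 1.0
diagonal = line_spacing(1, 1)    # the "red" lines: spacing ~0.707
print(diagonal / horizontal)     # ~0.707, i.e. sqrt(2) more lines per inch
```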
 
Let's solve the inverse problem - we are in a kindergarten, trying
to join cells of a checkered piece of paper with a line. But there
is also an additional requirement - the lines should be as close to
each other as possible, and parallel. You can try it in your spare
time.

Tip: See the picture with red and black lines.
Thanks, Sergei. Yes, I grasp your concept completely. I also understand Nyquist well enough to know that it is completely irrelevant. I can see that I'm not going to convince you, but I'm still glad we've had this discussion - it has helped me clarify it in my own mind.

You're so certain that you're right that I'm not even sure that you're reading what I'm writing. But I'll try once more just in case.

You and Jared and Dave Martindale claim that the Fuji gets increased resolution because the cells of the CCD, when considered diagonally, form lines that are closer together than the lines that run squarely through the CCD. And I claim that this fact, while true, is meaningless in terms of resolution. Now I can't prove that to you without getting way over my head in sampling theory. But maybe I can get you to see the folly with an example.

Let's just carry your idea to its extreme. Here's a grid:

A B C D E
F G H I J
K L M N O
P Q R S T
U V W X Y

(Unfortunately, the browser's proportional font may make this slightly less than square, but let's call it square for the purposes of discussion.)

Now, the argument is that the lines connecting CGK and DHLP and EIMQU are closer together than the lines connecting ABCDE and FGHIJ. That's true.

But why should we stop with just the diagonals? Look at the lines BK and GP and CLU. These are closer together still. And how about the lines BP and GU - closer still!

Imagine how many lines we can get, and how close they'll become, if we make this a much larger grid - containing, say, 3 million cells. If your argument held, we'd have nearly infinite resolution!

But it doesn't hold. Because each of these sets of lines, while closer together, also has individual cells that are farther apart, and by exactly the same ratios.

The resolution doesn't change.
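The "exactly the same ratios" claim can be put in numbers. For any lattice direction (p, q), the parallel lines get closer together by the factor sqrt(p^2 + q^2), while the samples along each line get sparser by the same factor, so samples per unit area never change. A sketch:

```python
import math

def line_gap(p, q):
    """Spacing between adjacent parallel lattice lines of direction (p, q)."""
    return 1.0 / math.hypot(p, q)

def sample_step(p, q):
    """Spacing between adjacent lattice points along one such line."""
    return math.hypot(p, q)

# The gain in line density is paid back exactly in sample sparsity:
for p, q in [(1, 0), (1, 1), (2, 1), (3, 1)]:
    density = (1.0 / line_gap(p, q)) * (1.0 / sample_step(p, q))
    print((p, q), round(line_gap(p, q), 3), round(sample_step(p, q), 3),
          round(density, 9))  # density is 1.0 for every direction
```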
 
We've already established there is no free lunch. The SuperCCD increases resolution in one direction while losing it in another. The fact that human perception is more sensitive to resolution in the direction that the SuperCCD excels is what makes it worthwhile. While there may be additional gains if this technique is taken further, we will always have to factor in the practicalities of implementation - CPU's in cameras only have so much power.

Here's a good diagram on page 6:
http://www.fujifilm.com/JSP/fuji/epartners/bin/3rdGenSuperCCD_2.pdf
The resolution doesn't change.
 
We've already established there is no free lunch. The SuperCCD
increases resolution in one direction while losing it in another.
I believe that there is some truth to that. But the explanation offered by drawing diagonal lines through the sensors does not contribute to explaining it. At the very best, it grossly oversimplifies the principles involved. It's quite misleading. More importantly, it also isn't as simple as just stating that "resolution is gained horizontally and vertically, while lost diagonally". The actual scenario is quite a bit more complex, with tradeoffs being made in many different "directions".
While there may be additional gains if this technique is taken further....
That's a very interesting point. I think it would be quite possible for other manufacturers to take exactly this same approach, acquiring more resolution from their standard CCDs. But, I think that they would have to not only double the filesize, but quadruple it, to get the increase in resolution captured.

And right there you have a very clear argument for the advantage of the "honeycomb" orientation. It allows Fuji to accomplish something that would be rather impractical for a more traditional design.
Yes, I've seen it. As in most marketing materials, the diagram grossly misrepresents the reality. It's comical. If the cells in the Fuji CCD were really that tightly packed, it really would be a 6mp sensor!
 
Let's to solve the inverse problem - we are in a kindergarten and
trying to join cells of a checkered piece of paper with a line. But
also there is an additional requirement - the lines should be as
closer each to other as possible and parallel. You can try it in
your spare time.

Tip: See the picture with red and black lines.
Thanks, Sergei. Yes, I grasp your concept completely. I also
understand Nyquist well enough to know that it is completely
irrelevant. I can see that I'm not going to convince you, but I'm
still glad we've had this discussion - it has helped me clarify it
in my own mind.

You're so certain that you're right that I'm not even sure that
you're reading what I'm writing. But I'll try once more just in
case.

You and Jared and Dave Martindale claim that the Fuji gets
increased resolution because the cells of the CCD, when considered
diagonally, form lines that are closer together than the lines that
run squarely through the CCD. And I claim that this fact, while
true, is meaningless in terms of resolution. Now I can't prove
that to you without getting way over my head in sampling theory.
But maybe I can get you to see the folly with an example.

Let's just carry your idea to its extreme. Here's a grid:

A B C D E
F G H I J
K L M N O
P Q R S T
U V W X Y

(Unfortunately, the browsers proportional font may make this
slightly less than square, but lets call it square for the purposes
of discussion.)

Now, the argument is that the lines connecting CGK and DHLP and
EIMQU are closer together than the lines connecting ABCDE and
FGHIJ. That's true.

But why should we stop with just the diagonals? Look at the lines
BK and GP and CLU. These are closer together still. And how about
the lines BP and GU - closer still!
B, P, G and U more likely form segments BG and KP, or it may also be a line BGKP. Try to fill those cells with black color on white paper and you will see this line. Ask somebody else what he/she sees.

You know that a sensor produces discrete signals, and using them we are trying to reconstruct the original picture, which is most probably a line BPGU in this case.
Imagine how many lines we can get, and how close they'll become, if
we make this a much larger grid - containing, say, 3 million cells.
If your argument held, we'd have nearly infinite resolution!
Prove this theorem, or prove that the statement above cannot be proved.

You are only trying to take me at my word. We can impose lines over the sensor as close as we want, but then we cannot reconstruct these lines using the information read from the sensor, because such close lines are beyond the resolution limit.

My picture demonstrates the resolution limit of the SuperCCD sensor.
But it doesn't hold.
Please do not mix up cause and consequence.

Because each of these sets of lines, while
closer together, also have individual cells that are farther apart,
and by exactly the same ratios.
This is exactly the proof of why we cannot have infinite resolution. You did it!
We cannot reconstruct these lines because the individual cells
are not perceived as lines.

I love the way you are thinking. Please do just one little step further!
 
We've already established there is no free lunch. The SuperCCD
increases resolution in one direction while losing it in another.
I believe that there is some truth to that. But the explanation
offered by drawing diagonal lines through the sensors does not
contribute to explaining it. At the very best, it grossly
oversimplifies the principles involved. It's quite misleading.
More importantly, it also isn't as simple as just stating that
"resolution is gained horizontally and vertically, while lost
diagonally". The actual scenario is quite a bit more complex, with
tradeoffs being made in many different "directions".
I completely agree that reality is always more complex than an abstraction. But I am just explaining one particular problem of reality.
An abstraction that casts aside all unnecessary complexity of the
real object serves us perfectly.
While there may be additional gains if this technique is taken further....
That's a very interesting point. I think it would be quite
possible for other manufacturers to take exactly this same
approach, acquiring more resolution from their standard CCDs.
Why not? Kodak did this at the beginning of the digicam era.
But,
I think that they would have to not only double the filesize, but
quadruple it, to get the increase in resolution captured.
Owing to quantum effects, it will only insignificantly increase the amount of gathered information. Actually, natural noise exceeds this increase.
And right there you have a very clear argument for the advantage
of the "honeycomb" orientation. It allows Fuji to accomplish
something that would be rather impractical for a more traditional
design.
I cannot understand what you said here.
Yes, I've seen it. As in most marketing materials, the diagram
grossly misrepresents the reality. It's comical.
You know that it is strictly prohibited by law to lie in advertising materials. You could sue Fuji for it. Do you know that they have already taken the "6Mp" label off their cameras?
If the cells in
the Fuji CCD were really that tightly packed, it really would be a
6mp sensor!
Fuji has a 6Mp sensor, but it costs more.
 
SergeiK wrote:

Sergei, some of what you've written suggests to me that we are using the word "lines" in two different ways. You seem to be considering the effect of attempting to resolve "real" lines in an image. While this may be the ultimate goal, I have not used the word "lines" in that sense in our discussion. When I speak of "lines", I'm simply talking about the imaginary lines that we use in thinking of the organization of the cells in the CCD.

I'm trying hard to understand what you've written, but I'm afraid that point may get in the way of our understanding.
Imagine how many lines we can get, and how close they'll become, if
we make this a much larger grid - containing, say, 3 million cells.
If your argument held, we'd have nearly infinite resolution!
Prove this theorem ....
Sergei, that's exactly what I've done. I've shown that if your contention is correct, then by extension the CCD must have nearly infinite resolution.

Your basic assertion is this: The individual cells in the CCD, when considered in diagonal lines, can be shown to create lines that are closer together than the lines that extend horizontally and vertically through the square of the sensor. And since those lines are closer, the resolution in that dimension is improved.

All I've done is to reapply your same principle to all the other possible lines of sensors that exist in the matrix of cells that make up the CCD. After all, there's nothing "special" about those diagonals. So just extend the same principle to every possible line that can be drawn through the individual points of the matrix. The result is that there are lines nearly everywhere, which, by your assertion, means more and more resolution.

But since we know that the CCD does not have nearly infinite resolution, then your assertion cannot be correct. And not only can it not be correct for the infinite, but it also can't be correct for even the first step.

Either the principle works or it doesn't. You can't have it both ways. By (correctly) refuting my proposal of infinite resolution, you are equally (correctly) refuting your own suggestion of increased resolution due to the closeness of the diagonals.
We can impose lines over the sensor as close as we want, but then
we cannot reconstruct these lines using the information read from
the sensor, because such close lines are beyond the resolution
limit.
Right - that's a part of my point.
We cannot reconstruct these lines because the individual cells
are not perceived as lines.
Yes, exactly.
I love the way you are thinking. Please do just one little step further!
I'll invite you to do the same!
 
I'm trying hard to understand what you've written, but I'm afraid
that point may get in the way of our understanding.
Imagine how many lines we can get, and how close they'll become, if
we make this a much larger grid - containing, say, 3 million cells.
If your argument held, we'd have nearly infinite resolution!
Prove this theorem ....
Sergei, that's exactly what I've done. I've shown that if your
contention is correct, then by extension the CCD must have nearly
infinite resolution.

Your basic assertion is this: The individual cells in the CCD,
when considered in diagonal lines, can be shown to create lines
that are closer together than the lines that extend horizontally
and vertically through the square of the sensor. And since those
lines are closer, the resolution in that dimension is improved.

All I've done is to reapply your same principle to all the other
possible lines of sensors that exist in the matrix of cells that
make up the CCD. After all, there's nothing "special" about those
diagonals. So just extend the same principle to every possible
line that can be drawn through the individual points of the matrix.
The result is that there are lines nearly everywhere, which, by
your assertion, means more and more resolution.

But since we know that the CCD does not have nearly infinite
resolution, then your assertion cannot be correct. And not only
can it not be correct for the infinite, but it also can't be
correct for even the first step.

Either the principle works or it doesn't. You can't have it both
ways. By (correctly) refuting my proposal of infinite resolution,
you are equally (correctly) refuting your own suggestion of
increased resolution due to the closeness of the diagonals.
We can impose lines over the sensor as close as we want, but then
we cannot reconstruct these lines using the information read from
the sensor, because such close lines are beyond the resolution
limit.
Right - that's a part of my point.
We cannot reconstruct these lines because the individual cells
are not perceived as lines.
Yes, exactly.
I love the way you are thinking. Please do just one little step further!
I'll invite you to do the same!
I think I know now why you do not accept the parallel-lines model.

In fact this is just a model for explaining the possible higher frequency of captured data in the horizontal and vertical directions. This simple model just gives us the upper limit of the increase in resolution, which is exactly sqrt(2).
Maximum resolution is achieved using a striped b/w test pattern.

The actually obtained increase in resolution should be calculated in 2-dimensional space, and it is lower than 1.4.
Then noise reduction, interpolation of output pixel colours and JPEG
compression decrease it yet more.

At last, I only want you not to deny that a proper geometrical
arrangement of cells allows resolving more detail along preferred
axes, unless the sensor is stochastically organized.

I noticed that you rather prefer natural physical explanations,
contrary to my speculations based on abstract knowledge.
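The sqrt(2)-in-one-direction, paid-for-in-the-other tradeoff can also be phrased in Nyquist terms. A sketch under the simplifying assumption that resolution along a direction is set by the spacing of the sample rows perpendicular to the test lines (real demosaicing is messier):

```python
import math

def nyquist(pitch):
    """Highest line frequency resolvable with sample-row spacing `pitch`
    (Nyquist: one line pair needs two samples)."""
    return 1.0 / (2.0 * pitch)

a = 1.0  # nearest-neighbour cell pitch, the same for both layouts

# Spacing of sample rows perpendicular to the test lines:
square = {"h/v": a, "diagonal": a / math.sqrt(2)}
tilted = {"h/v": a / math.sqrt(2), "diagonal": a}  # a 45-degree rotation swaps them

for name, rows in (("square", square), ("tilted", tilted)):
    for axis, pitch in rows.items():
        print(name, axis, round(nyquist(pitch), 3))
```

Equal cell counts, equal total information; the tilted layout just moves its Nyquist budget from the diagonal onto the horizontal and vertical axes.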
 
For that one here's what I did:

NeatImage,
Resample to 300%,
USM 500%, 1 pixel, 0,
SmartBlur 3.0, 8.0, High,
Resample to 3Mp (2048x1536),
Apply USM to taste, e.g. 100%, 0.3 pixel, 0

If someone has an easier method with comparable results, please let
us know..
Aha, that NeatImage again; maybe I should take a look at that program.

Thx for the info.
 
At last, I only want you not to deny that a proper geometrical
arrangement of cells allows resolving more detail along preferred
axes, unless the sensor is stochastically organized.
I'm still wrestling with that. I think I can see a scenario in which it might be true, but I haven't thought it through completely yet. It seems more likely to me that any specific geometric pattern of cells would cause a range of effects, some of which would benefit and some of which would hinder resolution along any particular axis.
I noticed that you rather prefer natural physical explanations,
contrary to my speculations based on abstract knowledge.
Quite the contrary - my explanation takes place entirely in the abstract. I started with your abstract model, then carried it to its logical extreme to expose the fallacy.

Sergei, I've done my best to explain this, but I think I've taken it as far as I can. Thanks for the discussion - I really appreciate the opportunity to get deeply into some of these issues.
 
At last, I only want you not to deny that a proper geometrical
arrangement of cells allows resolving more detail along preferred
axes, unless the sensor is stochastically organized.
I'm still wrestling with that. I think I can see a scenario in
which it might be true, but I haven't thought it through completely
yet. It seems more likely to me that any specific geometric
pattern of cells would cause a range of effects, some of which
would benefit and some of which would hinder resolution along any
particular axis.
I noticed that you rather prefer natural physical explanations,
contrary to my speculations based on abstract knowledge.
Quite the contrary - my explanation takes place entirely in the
abstract. I started with your abstract model, then carried it to
its logical extreme to expose the fallacy.

Sergei, I've done my best to explain this, but I think I've taken
it as far as I can. Thanks for the discussion - I really
appreciate the opportunity to get deeply into some of these issues.
Time For A CUP OF TEA!!
http://www.pbase.com/djkenny
 
Interesting arguments here that are rather like the debates in the Middle Ages about the sun going round the earth and the earth being flat.

Some people cannot accept that breakthroughs in standard technology, tweaking a pattern or design to gain extra benefits and taking an innovative approach with existing technology can reap real benefits - not just in CCD technology but elsewhere.

In the European Union, any product making a claim for its performance MUST, under consumer law, be able to meet that claimed performance CONSISTENTLY.

Therefore the 6900z and the 602 must have/will have to have demonstrably and consistently provable 6MP output, no matter how it is achieved.

Nikon, Canon, Olympus and the rest and their supporters and all the detractors of the Fuji 6MP claims can debate the merits and demerits and come up with all the theories they wish, but not a single person or company has had the faith in their arguments to test Fuji's claims in court.

Had any of the other manufacturers any real doubts, they would have, at relatively small cost, buried Fuji in a single court case.

So Fuji will continue, as they have done from the start, to sell the products as 6MP output cameras and, until someone proves to the satisfaction of the law that they are not, no opinions, ideas, theories or blind prejudices matter.

PhilB
At last, I only want you not to deny that a proper geometrical
arrangement of cells allows resolving more detail along preferred
axes, unless the sensor is stochastically organized.
I'm still wrestling with that. I think I can see a scenario in
which it might be true, but I haven't thought it through completely
yet. It seems more likely to me that any specific geometric
pattern of cells would cause a range of effects, some of which
would benefit and some of which would hinder resolution along any
particular axis.
I noticed that you rather prefer natural physical explanations,
contrary to my speculations based on abstract knowledge.
Quite the contrary - my explanation takes place entirely in the
abstract. I started with your abstract model, then carried it to
its logical extreme to expose the fallacy.

Sergei, I've done my best to explain this, but I think I've taken
it as far as I can. Thanks for the discussion - I really
appreciate the opportunity to get deeply into some of these issues.
 
In the European Union, any product making a claim for its
performance MUST, under consumer law, be able to meet that claimed
performance CONSISTENTLY.

Therefore the 6900z and the 602 must have/will have to have
demonstrably and consistently provable 6MP output, no matter how
it is achieved.
Fuji has never, ever claimed that these were 6mp cameras, for exactly the reasons that you cite.
 
