resolution thoughts

  • Thread starter: Bob Williams
I just got back from the wedding and reception. I am very
relieved to see someone else got to fight with the invader
clowns from wherever they come from, and that sense
returned while I was away and not involved. I have not had
time to look at the German tests and comparisons yet, so I
can't comment on them. Anyway, we now have my original
radiating target idea, the two other charts linked here and in
the F100 thread, this German chart, and the progressively
smaller randomly mixed dot target to experiment with.
On the color blind reviewer comment: photo editing programs
are not color blind.
Eric's comments are right on. I shot 101 shots in a garden at the
botanical gardens today, at 1:00 in the afternoon. The shadows,
backlighting, random sun, sunny water, and white stone bridge
highlights were a nightmare. Add to that, the ceremony was under
a willow, where many of the faces have foliage across them. The
miracle is that I managed to pull anything usable out of a non-posed
event at all under the conditions. I had a good time trying, though.
This is with a two-year-old camera. Using the cameras is what is
important, but understanding the truth about our next purchase
is also important.
That's my story, and I'm sticking to it.
After all of the BS being thrown around about the resolution of
the various cameras lately, I have been looking for an example of
exactly how little the resolution chart shots reflect real-world
shots, and one that also shows exactly how hard it is to compare
cameras based on this test. I think this one works pretty well.



The left side is aligned pretty close to the sensor array.
The bottom, top, and right side are all at different angles to the
photosite array because of the position of the camera. The
differences are obvious.

For the resolution chart to have any real world meaning, the test
would have to be done in a fixed jig, with some way of centering
every camera's lens in the jig, and a way to make sure
that the lens was perfectly square to the chart. A tripod is nowhere
near enough for this to work. Just the change in lens position
in the camera body from camera to camera is enough to change
the results by a large percentage. A rotation of even 1 degree
changes everything, and the best resolution is not always found
to be directly square with the image in the viewfinder or LCD.
Any movement from square with the chart causes the problems seen
in this image.
Without a complex test jig, the resolution chart tests are
meaningless.
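You can actually watch this happen without building anything. The
following is a minimal sketch (my own construction, not a standard test
tool) that renders a square-wave line pattern at a chosen rotation,
box-filters it down to a hypothetical 256 x 256 photosite grid, and
prints the standard deviation of the sampled result. It assumes a plain
monochrome sensor with square, gapless photosites -- no lens blur, no
Bayer interpolation -- and the texture number still moves with nothing
but the rotation angle.

```python
import numpy as np

def sampled_texture(freq_cycles_per_px, angle_deg, n=256, oversample=8):
    """Render a square-wave line pattern at the given rotation on a fine
    grid, average it down to an n x n photosite grid (a crude box-filter
    sensor model), and return the standard deviation of the result."""
    size = n * oversample
    y, x = np.mgrid[0:size, 0:size] / oversample  # coords in photosite units
    theta = np.radians(angle_deg)
    # Position along the axis perpendicular to the lines, in cycles.
    phase = (x * np.cos(theta) + y * np.sin(theta)) * freq_cycles_per_px
    pattern = (np.floor(phase * 2) % 2).astype(float)  # 0/1 square wave
    # Average oversample x oversample blocks to simulate photosites.
    sampled = pattern.reshape(n, oversample, n, oversample).mean(axis=(1, 3))
    return sampled.std()

# Lines at 0.45 cycles/pixel, just under the grid's Nyquist limit:
for angle in (0.0, 0.5, 1.0, 2.0, 5.0):
    print(f"rotation {angle:4.1f} deg -> std dev {sampled_texture(0.45, angle):.3f}")
```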
I have the original image if anyone wants it to see for themselves.
As it is reduced in size, though, the differences become more and more
apparent. This image uses no fine lines, or changing spacing
between them. I was too lazy to do that design.
--
I just want a camera that works.
 
Does your Photo Editor have artificial intelligence that will tell you the result?

People will look at the output, a hardcopy of very specific dimensions, and proclaim the result from the test!

My comment was merely an observation of caution; it sounded like a good test, BUT who reads the result from the test?

I take your point, though: a program will have to be written that takes the recorded image and declares the results...

This is what should happen with the current test. The converging lines should be scanned past a central line (point) of the digicam's imaging system, and when the square waveform degrades below a certain criterion, absolute resolution has been exceeded.

As for extinction res ... should anyone care when mush turns into real mush.
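For what it's worth, the "declare the result" program would not need any
intelligence, artificial or otherwise. Here is a minimal sketch under
assumed conditions (a grayscale crop of the wedge, laid out so the lines
converge down the rows; the file name is hypothetical): it computes the
Michelson contrast of each row, using percentiles instead of raw min/max
so noise does not dominate, and reports the first row where the recorded
square wave degrades below the criterion.

```python
import numpy as np

def resolution_limit(wedge, criterion=0.1):
    """Scan a wedge crop row by row.  The lines converge down the image,
    so spatial frequency rises with the row index; return the first row
    where the Michelson contrast of the recorded square wave falls
    below the criterion -- the 'absolute resolution exceeded' point."""
    for row_index, row in enumerate(np.asarray(wedge, dtype=float)):
        hi = np.percentile(row, 95)  # bright bars, robust to outliers
        lo = np.percentile(row, 5)   # dark bars
        if hi + lo == 0:
            continue
        if (hi - lo) / (hi + lo) < criterion:  # Michelson contrast
            return row_index
    return None  # never degraded below the criterion in this crop

# Hypothetical usage, assuming Pillow is available:
# from PIL import Image
# wedge = np.asarray(Image.open("wedge_crop.png").convert("L"))
# print(resolution_limit(wedge))
```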
 
Eric's comments are right on. I shot 101 shots in a garden at the
botanical gardens today, at 1:00 in the afternoon. The shadows,
backlighting, random sun, sunny water, and white stone bridge
highlights were a nightmare. Add to that, the ceremony was under
a willow, where many of the faces have foliage across them. The
miracle is that I managed to pull anything usable out of a non-posed
event at all under the conditions. I had a good time trying, though.
This is with a two-year-old camera. Using the cameras is what is
important, but understanding the truth about our next purchase
is also important.
Sounds like interesting shooting, Bob. Any chance of posting some of the results of your travails together with the EXIF? It might do a lot to demo where the limits of the camera are. When I'm trying something difficult I never know whether it's me or the camera. (Usually me!)
Regards,
DaveMart
 
Geraint, you are missing the point. The present resolution test
was never designed to do what it is being used to do. It was
designed to test lens resolution, period. It has been said here that
in certain cases, they went to a finer-grained film to make the
results valid. In this case, we are talking about recording media with
random image capture, compared to a fixed grid, with the results
being greatly influenced by the grid alignment with the chart.
I don't have a lot of time to mess with this, but I already know that
the score a camera comes up with on the present chart will be
changed to a large degree if you simply rotate the camera 2 degrees
right or left. To use this test, you would have to have a heavy
jig, with the chart fixed in the jig in relation to the camera lens
mount. You would have to design the jig so that the camera was
held in place by the lens in such a way as to keep it both pointed
at the correct center of the target, and so that the lens was
square to the target. These things are easily accomplished, even
if such a jig would not be cheap. There would then need to be
a way to line up the grid on the sensor array with the lines on the
target, so that you are testing the same thing on each camera.
At that point, you would have a decent test that would identify
both lens issues, and the ability of the sensor to capture this kind
of image. This is where things get kinda crazy. How do you align
the sensor grid with the lines on the target? How do you repeat it
accurately from camera to camera? The idea is for a test that does
not have as stringent operator requirements, because the test is
designed to eliminate the variations except being reasonably
square to the target, and centered. For the record, this is not
for the D7i's benefit. It is for my benefit. The present controversial
postings claiming a 40 percent advantage for the 707 merely
determined the timing. These claims are clearly wrong, but in some
ways, the results of the currently used test support this error.
As far as results go, a standard deviation program run in four
directions would leave little doubt on the radiating target. I think
the random-dots target would be plainly visible. The current radiating
target linked in the F100 thread shows the resolution change with
direction across the CCD very well, but misses a couple of things I
would like to see. A standardized chart that is an image of
fine-grained wood with randomly placed color and texture targets would
tell us more than the present test does. Still thinking, reading,
searching, and trying to evaluate what would be the optimum.
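To make the standard-deviation idea concrete, here is one rough way it
might be scored on a radiating target. This is a sketch only: the star
center, the radii, and a grayscale NumPy array are all assumed inputs.
It walks a circle at each radius and computes the standard deviation of
the pixel values in four 90-degree sectors; the deviation stays high
where the spokes are still resolved and collapses once they blur
together, so comparing the sectors shows the resolution changing with
direction across the photosite grid.

```python
import numpy as np

def directional_falloff(img, cx, cy, radii, samples=720):
    """For a radiating target centered at (cx, cy), sample a circle at
    each radius and return the standard deviation of the pixel values
    in four 90-degree sectors.  High deviation means the spokes are
    still resolved; it collapses toward zero once they blur to mush."""
    angles = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    img = np.asarray(img, dtype=float)
    falloff = {}
    for r in radii:
        xs = np.clip(np.rint(cx + r * np.cos(angles)).astype(int), 0, img.shape[1] - 1)
        ys = np.clip(np.rint(cy + r * np.sin(angles)).astype(int), 0, img.shape[0] - 1)
        ring = img[ys, xs]
        falloff[r] = ring.reshape(4, samples // 4).std(axis=1)
    return falloff

# Hypothetical usage on a grayscale star image centered at (1024, 768):
# print(directional_falloff(star_img, 1024, 768, radii=range(20, 201, 20)))
```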

David, I would have to have permission before posting any of the
wedding shots. It is my son's wedding, but that doesn't change
the ethics of posting things like that without permission. I will
tell ya, I managed about 80 percent usable images, but on most,
it is a good thing I used RAW. The shot I would post is from
the side, with the Willow over the top. Parts of the willow
are shaded, parts bright, and other parts are backlit. The effect
is kinda strange. If we had posed shots, I could have done
very well with the conditions and the surrounding scenery. As
it is, it was a shoot on the fly sort of thing, and even the good
shots have things like the other relative with the video camera
in the background. Full suits, tuxes, and such do not lend themselves
to long extended posing sessions outdoors when it is 98 degrees
where you are! I suspect, with the results I see in my files
and knowing I had instant feedback, that the film shooters are in
big trouble!
 
Geraint, you are missing the point.
The point is: how can the test be made relevant?

The ability of an optical system to record a square wave pattern does have relevance to digital optical systems. If performed correctly, it would also show up processing defects like sharpening halos. It's just that the current method is a very poor implementation.

The failure with this test is people pretending it produces scientific results when the experiment has no scientific rigour. It is fairly easy to design an experiment based on this test that does have more scientific rigour. The only problem is that people want instant results, but science takes time.
I think the random-dots target would be plainly visible.
Only if you have perfect colour perception!
 
Does your Photo Editor have artificial intelligence that will tell
you the result?
Blur is blur, regardless of color.
People will look at the output, a hardcopy of very specific
dimensions, and proclaim the result from the test!
They always will, regardless of what anyone comes up with.
My comment was merely an observation of caution; it sounded like a
good test, BUT who reads the result from the test?
Who reads the result from the tests today?
I take your point, though: a program will have to be written that
takes the recorded image and declares the results...
Hardly possible. We're measuring how good a camera is at doing something that in the end gives subjectively pleasing results.
This is what should happen with the current test. The converging
lines should be scanned past a central line (point) of the
digicam's imaging system, and when the square waveform degrades
below a certain criterion, absolute resolution has been exceeded.
First of all - what square waveform? Do you even know what you're suggesting here?

Second - this is exactly the problem. Measuring absolute resolution using an image that stresses the interpolation algorithm more than the resolution will not get you the resolution - it will give you the ability of the interpolation algorithm to perform under artificial conditions. And what does that tell you?

--
Jesper
 
Geraint, you are missing the point.
The point is: how can the test be made relevant?
Well, duh!
The ability of an optical system to record a square wave pattern
does have relevance to digital optical systems.
Geraint, what are you talking about? What is a "square wave pattern"?

Any recording of patterns approaching the Nyquist frequency of the sensor will simply highlight what the interpolation algorithm does to data at the edge of the Nyquist frequency - nothing more.
If performed
correctly, it would also show up processing defects like sharpening
halos. It's just that the current method is a very poor
implementation.
That will only show up if the pattern is much larger.
The failure with this test is people pretending it produces
scientific results when the experiment has no scientific rigour.
No. The failure with this test is that the test is broken. How it's performed matters little.
It is fairly easy to design an experiment based on this test that does
have more scientific rigour. The only problem is that people want
instant results, but science takes time.
Show me where there is a law of nature saying "science takes time". That's not true. A well-designed test can be used to good effect with little preparation beyond making sure there is proper light.
I think the random-dots target would be plainly visible.
Only if you have perfect colour perception!
What does color perception have to do with anything? When the pattern turns to smears or mush, that will be plainly visible - regardless of color.

--
Jesper
 
As one prepared to be shot down in flames, I have to say that the cameras that do best in the apparently flawed resolution tests appear to me, at least, to be the ones that have the best images in terms of perceived sharpness and resolving power. Now this may be a coincidence, but in my simplistic way it all seems to tie nicely together. Surely if a camera can pull out extra detail over another, that must count for something? And if it does it on a test chart AND in general shots, that must count for something, too?

Ideally, my present-perfect camera would be a D7i able to produce the most detail in an image (how, where, or why it gets it or is tested is completely irrelevant to me). I don't think it quite does in its class, by a few percent. I don't know where the 40% comes from, but I think that is out of the question.

I now have a D7i on order, as previously mentioned, so have no axe to grind. But I am willing to live with a little less detail in exchange for its many advantages.

Laurie
 
As one prepared to be shot down in flames, I have to say that the
cameras that do best in the apparently flawed resolution tests
appear to me, at least, to be the ones that have the best images in
terms of perceived sharpness and resolving power. Now this may be a
coincidence, but in my simplistic way it all seems to tie nicely
together. Surely if a camera can pull out extra detail over another,
that must count for something? And if it does it on a test chart
AND in general shots, that must count for something, too?
You're absolutely right, and there's no argument there; however, when you use the charts to the extreme and try to determine super-fine resolution from them they break down.

That's really what started this whole debate. No one is arguing that what the charts show at a glance isn't true, but when someone manages to derive from them that the 707 has 40% better resolution than the D7i, that does highlight a problem with them.
Ideally, my present-perfect camera would be a D7i able to produce
the most detail in an image (how, where, or why it gets it or is
tested is completely irrelevant to me). I don't think it quite does
in its class, by a few percent. I don't know where the 40% comes
from, but I think that is out of the question.
I definitely agree. And the 40% comes from using the charts - in a way they weren't ever designed to be used.
I now have a D7i on order, as previously mentioned, so have no axe
to grind. But I am willing to live with a little less detail in
exchange for its many advantages.
I have one too. I'm sure you won't be disappointed; it's an excellent camera.

--
Jesper
 
I think the random-dots target would be plainly visible.
Only if you have perfect colour perception!
What does color perception have to do with anything? When the
pattern turns to smears or mush, that will be plainly visible -
regardless of color.
DUH.... Swede...

This test is dependent on the colour dots becoming grey....

You need to be able to separate colours perfectly to judge that point.
 
The ability of an optical system to record a square wave pattern
does have relevance to digital optical systems.
Geraint, what are you talking about? What is a "square wave pattern"?

Any recording of patterns approaching the Nyquist frequency of the
sensor will simply highlight what the interpolation algorithm does
to data at the edge of the Nyquist frequency - nothing more.
Duh ... Swede ...

What you are aiming to record is black, white, black, white of equal
width...

Spatially, this is a square waveform...
 
Does your Photo Editor have artificial intelligence that will tell
you the result?
Blur is blur, regardless of color.
Not when it's dependent on being able to separate colours... which was the root of the suggested test... the colours become grey.
People will look at the output, a hardcopy of very specific
dimensions, and proclaim the result from the test!
They always will, regardless of what anyone comes up with.
But is that good enough?
My comment was merely an observation of caution; it sounded like a
good test, BUT who reads the result from the test?
Who reads the result from the tests today?
People do, but is that good enough?
I take your point, though: a program will have to be written that
takes the recorded image and declares the results...
Hardly possible. We're measuring how good a camera is at doing
something that in the end gives subjectively pleasing results.
Very easily possible... the objective is to make it objective.
This is what should happen with the current test. The converging
lines should be scanned past a central line (point) of the
digicam's imaging system, and when the square waveform degrades
below a certain criterion, absolute resolution has been exceeded.
First of all - what square waveform? Do you even know what you're
suggesting here?
Yes ... I work in the field of digital signal processing.
Second - this is exactly the problem. Measuring absolute resolution
using an image that stresses the interpolation algorithm more
than the resolution will not get you the resolution - it will give
you the ability of the interpolation algorithm to perform under
artificial conditions. And what does that tell you?
Irrelevant; all we are interested in is the resolving power of the total imaging system... the results we get.
--
Jesper
 
Which goes to show the subjectivity of preference.

Actually the D7 test is a better comparison since it was taken in similar conditions. Either way it would put me off the F707 immediately. But I can see why other folks feel differently. The only cam that came close to the D7 / D7i is the G2 which is truly outstanding.

The F707 is pulling more detail out of the charts and the D7 is pulling more detail out of the objects (wood, metal mesh, etc.). This kinda convinces me, anyway, that the cameras are set up intentionally to provide for different preferences.

I put the D7i shot through NeatImage and it looks pretty good without all that NOISE:)

Steve
 
Laurie, you will most certainly get a lot out of your D7i, and I don't think the resolution loss will kill you, since it seems to be well below the average photographer's handheld threshold... mine, anyway!

Steve
 
Actually it measures two separate things:

When the points become grey or randomly coloured (when they cannot interpolate full colour information)

When they become indistinguishable (total non-linear resolution)

It would be interesting to see which happens first, but I would guess the former. However, it would probably be better to arrange two separate tests, one with black dots and one with coloured dots.

I have only seen tests that distinguish black on white. It is entirely possible that some cameras are better at colour distinction and worse at contrast distinction, depending on how the algorithm is written.

I keep looking at various camera shots, and some lose a lot of surface detail (where contrast is low) but maintain lots of edge detail (where it's higher). This may also relate to the noise reduction algorithm. For my money the D7i is the other way round, possibly because it doesn't do much noise reduction.

Of course, using red, green, and blue dots is not a prerequisite. You could use other mixed shades of differing intensity too. Yellow, magenta, and cyan, for instance?

Even if you are colour blind, you only have to blow the pic up and test the pixel colour value in Photoshop :)
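In fact that check can be automated outright. As a rough sketch (the
file name and crop box are hypothetical), the snippet below crops a
region of the dot target and reports its average chroma, taken here as
max channel minus min channel per pixel. Dots the camera still separates
keep high chroma; regions the interpolation has turned grey collapse
toward zero, so no colour perception is needed to read the result.

```python
import numpy as np
from PIL import Image  # Pillow assumed available

def mean_chroma(path, box):
    """Crop a region of the dot target and report its average chroma,
    taken as max channel minus min channel per pixel.  High chroma
    means the colour dots were still separated; near zero means the
    camera rendered the region grey."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    x0, y0, x1, y1 = box  # hypothetical crop coordinates
    region = rgb[y0:y1, x0:x1]
    return (region.max(axis=2) - region.min(axis=2)).mean()

# Hypothetical usage: compare a coarse-dot crop against a fine-dot crop.
# print(mean_chroma("dot_target_shot.jpg", (0, 0, 100, 100)))
# print(mean_chroma("dot_target_shot.jpg", (400, 400, 500, 500)))
```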

And be nice! Duh :)

Steve
 
Duh ... Swede ...

What you are aiming to record is black, white, black, white of equal
width...

Spatially, this is a square waveform...
Ok, I see where you're coming from - and where the misunderstanding comes from.

Would you argue that a good way to test, for example, an MP3 algorithm would be to feed it a square sound wave close to the Nyquist frequency of the sampled waveform and then see if the algorithm can reproduce it?

Of course not - that test would not test anything that is relevant in the real world - on the contrary, an algorithm that does this well would most likely turn music into mush.

In this case it's not quite that extreme, but it's getting close. Measuring the resolution of a digital representation of objects with patterns approaching the Nyquist frequency is not best done by alternating high and low value inputs - this will not measure anything that is really useful, any more than measuring square waves in music reproduction systems will.

So we're looking for something that has more relevance, and so far I've seen a couple of good ideas here.

If you work in DSP, you should know how this all works, and why feeding square-wave data to a sampler at or above its Nyquist frequency is, well, not gonna tell you much.

In other words, no, that's not what I'm aiming to record - that's what I want to get away from as a test pattern, because after having gone through the camera it's of dubious utility.
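To put a number on it, here is a tiny sketch of the sampling problem
itself: the same square wave, at 90% of the sampler's Nyquist frequency,
sampled with two slightly different phase offsets. The two sample
sequences come out completely different, which is exactly why a
near-Nyquist square-wave target measures alignment and interpolation
behaviour rather than real resolving power.

```python
import numpy as np

fs = 100.0                 # sample rate (samples per unit length)
f_signal = 0.9 * (fs / 2)  # square wave at 90% of the Nyquist frequency
n = np.arange(20)

# Two phase offsets, as fractions of one signal cycle.  The signal is
# identical; only its alignment to the sampling grid changes.
for phase in (0.13, 0.37):
    t = n / fs + phase / f_signal
    samples = np.sign(np.sin(2 * np.pi * f_signal * t)).astype(int)
    print(f"phase {phase:.2f}:", samples)
```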

--
Jesper
 
I'm just saying that test is probably MORE void than any test you've
shown me. When one image is taken at a different range, and the D7i
images are closer, there will be more detail than in the same shot taken
at the exact same range as the 707, regardless of any other factor. You
can't fairly compare when one has the advantage of being closer,
especially in those line tests, but in detail as well.

B A H
 
Yep, as I said, they changed the test specs. The D7 and F707 were both shot on their "normal" default quality setting. They are a much better comparison. Now they shoot at "high", so there is no direct comparison between the D7i and F707.

The other part of the thread was discussing this; this was just another example of a chart test, not well executed in this case, but interesting nonetheless.

Steve
--
Steve
 
I tried the radiating lines test in black, green, red, blue, and a
composite with a red-green-blue pattern with a different colored
line every three degrees. I used too thick a line, and the
results are not usable on this first test. The G1 was able to follow
all of the colors to what appears to be a sharp point at the
intersection at 100 percent. At 600 percent, you can see that
none of them actually reach a one-pixel point, but mostly fall off
at about 2 to 3 pixels. I will try it again tomorrow using much thinner
lines. The only thing plainly shown is the difference between the
lines aligned with the grid array, which have relatively smooth
sides, and the ones off line with the array, which are jagged.
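For anyone who wants to try the same thing, a target like this is easy
to generate rather than draw by hand. A minimal sketch assuming Pillow
is installed; the canvas size, spoke length, and line width are guesses,
and as noted above even a 2-pixel line may still be too thick:

```python
import math
from PIL import Image, ImageDraw  # Pillow assumed available

SIZE = 2000                  # hypothetical canvas size in pixels
CX = CY = SIZE // 2          # lines radiate from the center
R = SIZE // 2 - 50           # spoke length
COLORS = [(255, 0, 0), (0, 160, 0), (0, 0, 255)]  # red, green, blue cycle

img = Image.new("RGB", (SIZE, SIZE), "white")
draw = ImageDraw.Draw(img)
for i, deg in enumerate(range(0, 360, 3)):  # a different color every 3 degrees
    a = math.radians(deg)
    end = (CX + R * math.cos(a), CY + R * math.sin(a))
    draw.line([(CX, CY), end], fill=COLORS[i % 3], width=2)
img.save("radiating_target.png")
```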
 
