Starting the squabble anew: film vs. digital

But maybe you intended this in a more light-hearted fashion than it reads
to me. If so, I resolve not to be offended, and forgive.
Yep, I did. Sorry if that wasn't clear.
When I said digital prints were indistinguishable from photographic
prints, I hope I said, or at least hinted, that I meant to the casual
observer. The majority of the people reading this can see the difference,
but the people you took your photos for - your family, friends, most
clients - cannot tell.
The big problem is, of course, the light-coloured areas, which can look awful.
Ah, okay. No problems then. I find myself occasionally amazed by what people will pay for - a friend had wedding pictures taken, and to my eyes they were not acceptable, but he seemed quite happy with them.

Chris
 
If people are rating film resolution based on crystal count, then the
estimate is too high. If three crystal colour types are present, then the
actual count must be divided by at least three.
Ah, but there is a difference. In colour film, there are multiple (three or four) layers, each of which is sensitive to a different frequency range. This would be like having three separate CCDs in a camera, one for each colour (which is done in broadcast-quality video cameras, and in one of the Minolta digicams).

As for the analysis of film resolution, while it is true that there is a random distribution and a random crystal size, it is highly likely that these are bounded by certain max and min values, and that they obey a statistical distribution, which could be measured. Because of this, they would be susceptible to statistical analysis methods if anyone were to actually take the time to do it.
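To make the statistical-analysis idea concrete, here is a rough Python sketch (my own illustration, not anything from the thread): it draws crystal diameters from a bounded distribution (a uniform one, chosen purely as a placeholder), then estimates sample statistics such as the mean grain area and a crude crystals-per-area bound. The size limits are made-up numbers.

```python
import random
import statistics

def sample_crystal_sizes(n, min_um=0.2, max_um=2.0, seed=42):
    """Draw n crystal diameters (micrometres) from a bounded
    distribution; uniform is a placeholder, not a claim about film."""
    rng = random.Random(seed)
    return [rng.uniform(min_um, max_um) for _ in range(n)]

sizes = sample_crystal_sizes(100_000)
mean_diameter = statistics.fmean(sizes)

# With bounded sizes, the mean grain area gives a rough upper bound
# on how many crystals fit in a square millimetre of one layer.
mean_area_um2 = statistics.fmean(d * d for d in sizes)
crystals_per_mm2 = 1e6 / mean_area_um2
```

Any measured distribution (lognormal is a common choice for grain sizes) could be dropped in for the uniform placeholder without changing the analysis.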

Chris
 
David S wrote:

Ouch, I just reread what I wrote; my apologies for the typos. I shall double proofread postings from now on, I shall double proofread postings from now on....
My only point to make at this time is that I do not believe a crystal, no
matter what colour depth it carries, is the equivalent of a digital pixel.
Firstly, the crystal only carries one colour, while a digital pixel carries
several - up to 4 in the case of CMYK, or 3 for RGB.
Actually, the camera makers have been cheating. They specify the megapixels by CCD cell count, but each CCD cell only records one color. So a single crystal from a single layer and a single CCD cell are equivalent. I don't know how the crystal count is derived, but I'm looking forward to the CCD count being a few orders of magnitude (base 2) above the crystal count; then, regardless of distribution pattern, digital wins in terms of accuracy.

While digital and chemical-based film are both improving, I do believe digital film is progressing at a much more rapid pace. There are still people who are convinced that vinyl "sounds" better than CD; I will never argue with that, and I can respect their opinion. Then there are people who are convinced that vinyl is more accurate; that I'll have to argue with. At this point, I do believe digital is a few orders of magnitude behind, but I don't expect it'll be too long before digital catches up and surpasses chemical film in strict accuracy measurements. As for what pleases the eyes more, that's up to anybody's taste.
From the perspective of entropy: the higher the randomness, the lower the
level of order. Lower order means less information in the system. The
information in a digital image is easily understood, but film contains
less information than the sum of its crystals; the laws of entropy
dictate that it must. Although film may have a higher 'real' resolution
now, you cannot equate a single crystal to a single digital pixel. It
takes a group of crystals to equal the information in a pixel.
entropy!?! now you are making my brain hurt!!!
Thanks for a stimulating discussion.

gordon
 
But maybe you intended this in a more light-hearted fashion than it reads
to me. If so, I resolve not to be offended, and forgive.
Yep, I did. Sorry if that wasn't clear.
Good to hear Chris and thanks for the clarification.
When I said digital prints were indistinguishable from photographic
prints, I hope I said, or at least hinted, that I meant to the casual
observer. The majority of the people reading this can see the difference,
but the people you took your photos for - your family, friends, most
clients - cannot tell.
The big problem is, of course, the light-coloured areas, which can look awful.
Ah, okay. No problems then. I find myself occasionally amazed by what
people will pay for - a friend had wedding pictures taken, and to my eyes
they were not acceptable, but he seemed quite happy with them.

Chris
Unfortunately for us, our standards are set so high that we can see the flaws in anything. This is a blessing for others but a curse to us. Most would probably think we were crazy to have such a discussion in the first place.
 
If people are rating film resolution based on crystal count, then the
estimate is too high. If three crystal colour types are present, then the
actual count must be divided by at least three.
Ah, but there is a difference. In colour film, there are multiple (three
or four) layers, each of which is sensitive to a different frequency
range. This would be like having three separate CCDs in a camera, one
for each colour (which is done in broadcast-quality video cameras, and in
one of the Minolta digicams).
Good point. Multiple overlapping layers are better. This would also aid in the positional accuracy of recorded information. It depends on how the resolution of film is measured. It would have to be based on the crystal content of a single layer.

Of course, NOW that you reminded me of how the CCD works, we have to rethink digital camera resolutions, since some interpolation has been performed in-camera. It would seem to follow that the information content of a digital image is less than, not equal to, the sum of its pixels.

Darn it !!! I was beginning to feel like we were getting somewhere :-o
As for the analysis of film resolution, while it is true that there is a
random distribution and a random crystal size, it is highly likely that
these are bounded by certain max and min values, and that they obey a
statistical distribution, which could be measured. Because of this, they
would be susceptible to statistical analysis methods if anyone were to
actually take the time to do it.

Chris
Chris,

I wonder if film uses the same trick as the CCD, having more green sensors to carry the detail, since the human eye is more sensitive to green. In photographic prints, the blotchiness of a pure blue sky can be obvious while other objects in the same image are sharp.

The one problem with a random distribution is that there is no actual maximum and minimum peak at any one point. A location could contain no crystals or all the crystals. Where randomness works in our favour is that these two extremes are all but infinitely unlikely. The likely crystal distributions are close to even, but an absolutely even distribution is just as unlikely as the other two extremes. The result is that the resolution at any one point cannot be determined until after processing. With digital, I know (or at least suspect) in advance - even if just to know that it is not as good as film - for now.
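The "all but infinitely unlikely" extremes can be put in numbers. A minimal sketch (my own, with made-up counts): treat each of N crystals as landing in one of M equal cells uniformly at random, and compute the binomial probability that a given cell gets none of them, all of them, or exactly the average number.

```python
from math import comb

def p_exact_count(n_crystals, n_cells, k):
    """Probability that a given cell receives exactly k of n_crystals
    when each crystal lands in one of n_cells uniformly at random."""
    p = 1 / n_cells
    return comb(n_crystals, k) * p**k * (1 - p) ** (n_crystals - k)

N, M = 1000, 100                       # hypothetical counts, illustration only
p_empty = p_exact_count(N, M, 0)       # cell gets no crystals: ~4e-5
p_all   = p_exact_count(N, M, N)       # cell gets every crystal: vanishing
p_mean  = p_exact_count(N, M, N // M)  # cell gets exactly the average (10)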

P.S. Did I forget to mention anywhere around here that I also think photographs are beautiful?
 
Gordon,

Please see my response to Chris. After reading both your messages, I muddled them together in my head and made a single reply. The bit about the CCD should probably be aimed more at you, but Chris also alluded to this important point.
My only point to make at this time is that I do not believe a crystal, no
matter what colour depth it carries, is the equivalent of a digital pixel.
Firstly, the crystal only carries one colour, while a digital pixel carries
several - up to 4 in the case of CMYK, or 3 for RGB.
Actually, the camera makers have been cheating. They specify the
megapixels by CCD cell count, but each CCD cell only records one color. So
a single crystal from a single layer and a single CCD cell are equivalent.
I don't know how the crystal count is derived, but I'm looking forward to
the CCD count being a few orders of magnitude (base 2) above the crystal
count; then, regardless of distribution pattern, digital wins in terms of
accuracy.

While digital and chemical-based film are both improving, I do believe
digital film is progressing at a much more rapid pace. There are still
people who are convinced that vinyl "sounds" better than CD; I will
never argue with that, and I can respect their opinion. Then there are
people who are convinced that vinyl is more accurate; that I'll have to
argue with. At this point, I do believe digital is a few orders of
magnitude behind, but I don't expect it'll be too long before digital
catches up and surpasses chemical film in strict accuracy measurements.
As for what pleases the eyes more, that's up to anybody's taste.
From the perspective of entropy: the higher the randomness, the lower the
level of order. Lower order means less information in the system. The
information in a digital image is easily understood, but film contains
less information than the sum of its crystals; the laws of entropy
dictate that it must. Although film may have a higher 'real' resolution
now, you cannot equate a single crystal to a single digital pixel. It
takes a group of crystals to equal the information in a pixel.
entropy!?! now you are making my brain hurt!!!
Thanks for a stimulating discussion.

gordon
 
If people are rating film resolution based on crystal count, then the
estimate is too high. If three crystal colour types are present, then the
actual count must be divided by at least three.
Ah, but there is a difference. In colour film, there are multiple (three
or four) layers, each of which is sensitive to a different frequency
range. This would be like having three separate CCDs in a camera, one
for each colour (which is done in broadcast-quality video cameras, and in
one of the Minolta digicams).
Good point. Multiple overlapping layers are better. This would also aid
in the positional accuracy of recorded information. It depends on how
the resolution of film is measured. It would have to be based on the
crystal content of a single layer.

Of course, NOW that you reminded me of how the CCD works, we have to rethink
digital camera resolutions, since some interpolation has been performed
in-camera. It would seem to follow that the information content of a
digital image is less than, not equal to, the sum of its pixels.

Darn it !!! I was beginning to feel like we were getting somewhere :-o
As for the analysis of film resolution, while it is true that there is a
random distribution and a random crystal size, it is highly likely that
these are bounded by certain max and min values, and that they obey a
statistical distribution, which could be measured. Because of this, they
would be susceptible to statistical analysis methods if anyone were to
actually take the time to do it.

Chris
Chris,

I wonder if film uses the same trick as the CCD, having more green
sensors to carry the detail since the human eye is more sensitive to
green. In photographic prints the blotchiness of the pure blue sky can
be obvious while other objects in the same image are sharp.

The one problem with a random distribution is that there is no actual
maximum and minimum peak at any one point. The location could contain no
crystals or all the crystals. Where randomness works in our favour is
that these two extremes are all but infinitely unlikely. The likely crystal
distributions are close to even but an absolutely even distribution is
just as unlikely as the other two extremes. The result is that the
resolution at any one point cannot be determined until after processing.
With digital I know (or at least suspect) in advance - even if just to
know that it is not as good as film - for now.
I just LOVE this thread. Guess we're all geeks here, with an avid interest in, and appreciation of, all types of photography.

Okay, it's time for another two cents from me:

Here's what I am throwing out for consumption... In a real world digital photograph, with current CCDs and the Bayer pattern, resolution may be determinable in advance, but only with GREAT difficulty.

Let me lay a little background, and for now assume a fairly pure example. The typical method of measuring/determining resolution is by the number of line pairs (a pair is a white line adjacent to a black line) per millimeter that can be 'seen'. Because of the random distribution of sensing elements in film (crystals), the determination of line pairs per millimeter is not clean, and the measurement has to use a detectable contrast level between adjacent lines. With pixels (there's an assumption buried here which is the main thrust of this thread node - we'll get back to it), it takes four pixels to adequately detect a line pair. Why four? If you try to use one pixel for each white line and one pixel for each black line, you're fine as long as the lines fall square on the pixels; however, shift the lines by half a pixel, and all you detect is a nice uniform grey. Four pixels per line pair fixes that, though we're starting to go into the area where contrast measurement might have to be called upon. (Yikes! Not with pixels...)

Okay, it's time to start picking at our assumptions. If a medium truly has a random distribution of sensors, the perceived resolution ought to be independent of orientation of the line pair screen with respect to the sensor medium. i.e. if you can detect 50 line pairs per millimeter, you can probably do that no matter how you rotate the resolution chart.

With a cartesian sensor matrix, the best resolution is when the lines are parallel to an edge of the sensor matrix. BUT, resolution should decrease as you rotate the resolution target so the lines are no longer parallel with an edge, and reach a minimum resolution when the target is lined up along the matrix diagonal. If 50 line pairs per millimeter can be detected when the target is parallel to an edge, that falls to 35.355 line pairs per millimeter when the chart is lined up along the diagonal. (that's 50 over the square root of two)

We're done with the first test target, made up only of black and white lines. What happens if our resolution target is made up of black and RED lines? Since film (and, I hope, future digital cameras) can detect all colors at each sensor location, its resolution should be roughly the same for this kind of target. If we could detect 50 black and white line pairs per millimeter, we ought to be able to detect 50 black and red line pairs per millimeter. But most current CCDs use a Bayer pattern and a reconstructing algorithm. A quarter of the sensors are red, a quarter blue, and a half green. With the prevalent CCD technology, our 50 black and white line pair per millimeter detection could fall to 12.5 black and red line pair per millimeter detection. Black and blue line pairs should be the same number as black and red line pairs, black and green line pairs should be 25 line pairs per millimeter. Now rotate the target towards the sensor diagonal...
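The arithmetic in the last two paragraphs can be laid out as a small sketch (my own, using only the numbers quoted above: 50 lp/mm in black and white, the 1/4-1/4-1/2 Bayer split, and the diagonal factor of the square root of two).

```python
import math

def bayer_channel_lp(bw_lp_per_mm, sensor_share):
    """Line-pair figure for one Bayer channel, using the post's rule of
    scaling the black-and-white figure by that channel's share of the
    sensors (1/4 red, 1/4 blue, 1/2 green)."""
    return bw_lp_per_mm * sensor_share

bw = 50.0                          # detectable black/white line pairs per mm
red   = bayer_channel_lp(bw, 0.25)   # black-and-red target: 12.5 lp/mm
blue  = bayer_channel_lp(bw, 0.25)   # black-and-blue: same as red
green = bayer_channel_lp(bw, 0.5)    # black-and-green: 25 lp/mm
diagonal = bw / math.sqrt(2)         # target rotated to the matrix diagonal
```

Rotating a single-colour target toward the diagonal would compound both factors, which is exactly why the paragraph ends where it does.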

That's it for simple test targets. Now go out and take a photograph of white houses with red tile roofs surrounded by apple trees with intensely green foliage and apples in varying stages of ripeness. And maybe throw in a blue sky with multicolored parrots flying through it. With a good color meter in hand, a fair amount of analysis (possibly with a CRAY), and exact knowledge of both the Bayer pattern and the reconstructing algorithm, you could probably determine the average resolution in any small region. The small region next to it would probably be different.
P.S. Did I forget to mention anywhere around here that I also think
photographs are beautiful?
Indeed they are, both film and digital...
 
You brought up some good points and measuring methods. I just want to be picky about a few minor points.
it takes four pixels to adequately detect a line
pair. Why four? If you try to use one pixel for each white line and one
pixel for each black line, you're fine as long as the lines fall square
on the pixels; however, shift the lines by half a pixel, and all you
detect is a nice uniform grey. Four pixels per line pair fixes that,
though we're starting to go into the area where contrast measurement might
have to be called upon. (Yikes!, not with pixels...)
The Nyquist rate only requires that the sample rate be > 2 times the max resolution for a perfect reconstruction. For a repeating pattern of line pairs, it only requires > 2 pixels per pair. 2.x pixels per pair is sufficient if a perfect filter is available. Four may be a good real-world requirement, but not the minimum requirement.
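A quick numerical illustration of the point (my own sketch, not from the thread): point-sample a sinusoidal line-pair pattern at exactly 2 samples per pair, where the result depends entirely on phase, and at 2.1 samples per pair, where the peaks reappear once enough cycles are captured.

```python
import math

def sample_sine(cycles_per_mm, samples_per_mm, n, phase=0.0):
    """Point-sample a sinusoidal 'line pair' pattern at n positions."""
    return [math.sin(2 * math.pi * cycles_per_mm * i / samples_per_mm + phase)
            for i in range(n)]

# Exactly 2 samples per pair: the result depends entirely on phase.
flat = sample_sine(1.0, 2.0, 100, phase=0.0)          # every sample ~0: flat grey
full = sample_sine(1.0, 2.0, 100, phase=math.pi / 2)  # alternating +1 / -1

# Slightly above Nyquist: given enough cycles, the peaks reappear, so the
# frequency is recoverable in principle (with a good reconstruction filter).
above = sample_sine(1.0, 2.1, 1000)
peak = max(abs(s) for s in above)
```

The flat-grey case at exactly 2 samples per pair is the same failure mode described later in the thread for one-shot synchronous sampling.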
With a cartesian sensor matrix, the best resolution is when the lines are
parallel to an edge of the sensor matrix. BUT, resolution should
decrease as you rotate the resolution target so the lines are no longer
parallel with an edge, and reach a minimum resolution when the target is
lined up along the matrix diagonal.
Yes, the per-line resolution is lower at 45 degrees, but the lines of detection are closer together. For example, if you draw parallel horizontal lines to connect two rows of pixels, the lines are 1 pixel unit apart. If you draw diagonal lines, the samples on each line are the square root of 2 (1.414) apart, but the lines are (square root of 2)/2 (0.707) apart. The two-dimensional resolution is conserved for a group of diagonal lines.
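The geometry is easy to verify (a tiny sketch of my own): on a unit-pitch grid, diagonal sample rows are farther apart along each row but closer together between rows, and the samples-per-area figure comes out unchanged.

```python
import math

# On a unit-pitch square grid, connect pixels along a 45-degree diagonal:
along = math.sqrt(2)        # spacing between samples on one diagonal line
between = math.sqrt(2) / 2  # spacing between adjacent diagonal lines

# Samples per unit area are conserved regardless of orientation:
axis_aligned_density = 1 / (1.0 * 1.0)
diagonal_density = 1 / (along * between)
```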

gordon
 
it takes four pixels to adequately detect a line
pair. Why four? If you try to use one pixel for each white line and one
pixel for each black line, you're fine as long as the lines fall square
on the pixels; however, shift the lines by half a pixel, and all you
detect is a nice uniform grey. Four pixels per line pair fixes that,
though we're starting to go into the area where contrast measurement might
have to be called upon. (Yikes!, not with pixels...)
The Nyquist rate only requires that the sample rate be > 2 times the max
resolution for a perfect reconstruction. For a repeating pattern of line
pairs, it only requires > 2 pixels per pair. 2.x pixels per pair is
sufficient if a perfect filter is available. Four may be a good real-world
requirement, but not the minimum requirement.
In the audio world, there is a movement to go to 96 kHz (and 24 bits for the dynamic range, ...and so such things as digital volume controls don't hurt things) as a way to accurately reproduce music. Trusted reviewers (I trust them, anyway) claim that for the first time, digital audio actually sounds like its analog counterpart. Hearing is supposed to only go to 20 kHz, but 44.1 kHz (Nyquist rate) CD sampling doesn't seem to cut it - at least for all people. (Side note: when I switched to CDs from LPs, I was disappointed by the sound - even though I got a good CD player and the rest of my audio chain was the same. Yet, I stopped playing LPs because of the CD convenience factor, and CDs were just SO cool.) The Nyquist rate may be okay as a theoretical construct, but does it really work? We're not theoretical. And if there are 2.1 pixels, say, per line pair, what does the image of the resolution chart look like? Can you, as an observer, easily tell what the frequency of the target lines is supposed to be?

Does the Nyquist rate require point samples? (I honestly don't know.) Pixels are fat samples.
With a cartesian sensor matrix, the best resolution is when the lines are
parallel to an edge of the sensor matrix. BUT, resolution should
decrease as you rotate the resolution target so the lines are no longer
parallel with an edge, and reach a minimum resolution when the target is
lined up along the matrix diagonal.
Yes, the per-line resolution is lower at 45 degrees, but the lines of
detection are closer together. For example, if you draw parallel
horizontal lines to connect two rows of pixels, the lines are 1 pixel unit
apart. If you draw diagonal lines, the samples on each line are the square
root of 2 (1.414) apart, but the lines are (square root of 2)/2 (0.707)
apart. The two-dimensional resolution is conserved for a group of
diagonal lines.
Yep, I thought about that some more after I posted it, and I think you are right. However, I also suspect that now we're REALLY starting to get into the area of using a contrast threshold as a way to determine the frequency of the target line pairs. The pixel centers may lie upon a black line (or a white line), but the corners may not; such pixels would be less of a pure black (or white). A resolution target is likely to look much less crisp. As above, the underlying target resolution may be determinable through technical analysis, but what do our photographs look like?

Ed
 
The Nyquist rate only requires that the sample rate be > 2 times the max
resolution for a perfect reconstruction. For a repeating pattern of line
pairs, it only requires > 2 pixels per pair. 2.x pixels per pair is
sufficient if a perfect filter is available. Four may be a good real-world
requirement, but not the minimum requirement.
Minor problem with your statement. While it is true that Nyquist says you need only > 2 times the max resolution, what you've neglected is that lines/mm are not sine waves, but square waves. This means that to faithfully represent the square wave you need much higher sampling than twice the actual lines/mm value. If you were sampling sine waves, then a bit more than twice the max resolution would be sufficient.
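Chris's point can be checked numerically (my own sketch): a square wave's Fourier series contains odd harmonics above the fundamental, so sampling at just over twice the line-pair frequency necessarily discards real content.

```python
import math

def square_wave(x):
    """Unit square wave with period 1 (the profile of ideal line pairs)."""
    return 1.0 if (x % 1.0) < 0.5 else -1.0

def harmonic_amplitude(k, n=100_000):
    """Numerically estimate the k-th Fourier sine coefficient."""
    return (2 / n) * sum(square_wave(i / n) * math.sin(2 * math.pi * k * i / n)
                         for i in range(n))

b1 = harmonic_amplitude(1)  # fundamental: 4/pi
b2 = harmonic_amplitude(2)  # even harmonics vanish
b3 = harmonic_amplitude(3)  # 4/(3*pi): real energy above the fundamental
```

Since b3 is about a third of the fundamental's amplitude, a sampler that only preserves frequencies up to the fundamental renders the square wave as a sine.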

Chris
 
I believe there is some confusion regarding sampling theory and reconstruction as it applies to the CCD in our digital cameras. Below is a cut-and-paste of a posting I made about two months ago on this same subject.

**********************

Sampling theory is correct (by definition). What happens when you asynchronously sample a signal at a rate ever so slightly higher than twice the signal frequency? If you apply an infinite number of samples, and wait an infinite amount of time, it is possible to accurately reconstruct the signal. Nobody should have a problem with this concept.

Now here is my main point. We do not have an asynchronous sampler in the case of the digital camera. In this case, a fixed spatial frequency is focused on the CCD plane, and has a fixed phase relationship with respect to the CCD cells. We have a synchronous sampler. It is a one shot deal when you press the shutter button. We don’t have an infinite amount of time. We can’t shift (dither) the image cells relative to the input spatial frequency and reconstruct the image.

If the spatial frequency of the image is ever so slightly less than the sampling frequency we associate with the CCD cells, we are just as likely to sample two maximums (highlight) as two minimums (shadow), and anything in between. Depending on the phase relationship between the image spatial frequency and the CCD sampler, we will have nothing but a flat field (all white, black, or something in between) every time we press the shutter.

The comments that follow pertain to synchronous sampling. If the sample rate is 3X, the modulation transfer function of the sampler is much greater compared to the 2X+ case. In spite of this good news, if your life depended on it, no matter what slice of 3X data was examined, you could not correctly reconstruct the signal in your mind (this does not exclude the possibility of guessing correctly). One period of a sine wave sampled at 3X looks nothing like a sine wave at the output of the sampler. At 4X, for a sinusoidal input, you would have a pretty good chance of guessing the input was a sine wave by looking at the output of the sampler. At 6X, even my grandmother would get it right. At 10X, I say ship it.
*************************

At any rate, above is my input regarding the sampling/reconstruction subject as it applies to the CCD.
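A small numerical companion to Joe's point (my own sketch, not his): with synchronous one-shot sampling at exactly 2X, the measured energy depends entirely on the fixed phase, while at 3X the RMS of the samples is phase-independent, even though one period of 3X samples doesn't *look* like a sine.

```python
import math

def one_period_samples(samples_per_cycle, phase):
    """Synchronously sample one period of a unit-amplitude sine wave."""
    n = samples_per_cycle
    return [math.sin(2 * math.pi * i / n + phase) for i in range(n)]

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

# At 2X, the measured amplitude depends entirely on the (fixed) phase:
rms_2x_worst = rms(one_period_samples(2, 0.0))          # ~0: flat field
rms_2x_best  = rms(one_period_samples(2, math.pi / 2))  # full amplitude

# At 3X, the RMS is phase-independent (1/sqrt(2)), even though the
# sampled shape would be hard to recognize as a sine by eye.
rms_3x_a = rms(one_period_samples(3, 0.0))
rms_3x_b = rms(one_period_samples(3, 1.234))
```

This matches the modulation-transfer observation above: 3X rescues the signal's energy from the phase lottery, even though visually reconstructing the waveform still takes more samples.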

Joe Kurkjian
The Nyquist rate only requires that the sample rate be > 2 times the max
resolution for a perfect reconstruction. For a repeating pattern of line
pairs, it only requires > 2 pixels per pair. 2.x pixels per pair is
sufficient if a perfect filter is available. Four may be a good real-world
requirement, but not the minimum requirement.
Minor problem with your statement. While it is true that Nyquist says
you need only > 2 times max resolution, what you've neglected is that
lines/mm are not sine waves, but square waves. This means that to
faithfully represent the square wave you need much higher sampling than
twice the actual lines/mm value. If you were sampling sine waves, then a
bit more than twice the max resolution would be sufficient.

Chris
 
Chris, Joe,

Tried to slip a fast one by you guys; good thing you do pay attention. Yes, the Nyquist limit and the FFT only apply to an infinitely repeating signal in the time domain. So I did sneak in a qualifier of:
For a repeating pattern of line pairs, it only requires > 2 pixels per pair.
The repeating pattern is the key. For a single line pair, the limit of 4 in the original and Joe's posts would apply. However, a typical spatial frequency measurement is a frequency measurement, not a position (phase) measurement. A typical test pattern uses a group of lines, not a single pair of lines. As long as the number of cycles is longer than the reconstruction filter, it is possible to reconstruct the original frequency with Nyquist samples.

The example of a flat field of 0's is actually a sampling rate of exactly 2, a violation of Nyquist. The example should be a slowly changing sample set, which would alias to a low frequency if insufficient samples are used, but can be reconstructed perfectly if enough samples are provided. (I think; please correct me if I'm wrong again.)

That said, Nyquist is indeed not enough, because there is no such thing as a perfect filter. Any kind of interpolating reconstruction filter has a sloping stop band to reject images. The usable bandwidth of any sampling system is usually less than .5 (2 pixels per line pair), but .25 sounds excessive (4 pixels per line pair). I think the right answer is somewhere in between.
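In pixel terms, those bandwidth fractions convert directly (a trivial sketch of my own, just to make the "somewhere in between" concrete):

```python
def pixels_per_line_pair(usable_fraction):
    """Convert a usable bandwidth, expressed as a fraction of the
    sampling rate (cycles per pixel), into pixels per line pair."""
    return 1.0 / usable_fraction

nyquist_limit = pixels_per_line_pair(0.5)   # 2 pixels per pair: theoretical floor
conservative  = pixels_per_line_pair(0.25)  # 4 pixels per pair: the safe figure
in_between    = pixels_per_line_pair(0.4)   # 2.5 pixels per pair: a typical filter
```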

In the specific cases of a single line pair and square waves vs. sine waves, these are cases where generic sampling theory and signal processing fail. In these cases, both the frequency and the instantaneous phase are important. Joe's post covers this point. A similar problem I run into is natural vs. synthetic (graphics, text) video. Scenes in nature just don't contain many fine lines and sharp transitions. That's why we've been getting away with low-res TV signals for so long. Just remember, TV resolution is only 640x480 interlaced at 60 FPS. But when it comes to text in credits, it looks horrible. I am not too concerned about test patterns and electric cables in my photos; I would rather give up some bandwidth for a better filter that removes more noise for a better picture. That's the kind of trade-off I make in the video world, and I hope the camera people do too. I guess this is why actual photo evaluation is more important than test patterns in a camera evaluation. All the test patterns and extreme cases provide hints of the capability, but in the end, it's the picture that matters.

I think I may be rambling and arguing in circles again. I think deep down we do agree that it's the subjective evaluation of photos that is important. You can theorize all you want, but a picture is worth a thousand words and 500 equations.

gordon
-------------------------------------
I believe there is some confusion regarding sampling theory and
reconstruction as it applies to the CCD in our digital cameras. Below is
a cut and paste of a posting I made about two months ago on this same
subject.

**********************
Sampling theory is correct (by definition). What happens when you
asynchronously sample a signal at a rate ever so slightly higher than
twice the signal frequency? If you apply an infinite number of samples,
and wait an infinite amount of time, it is possible to accurately
reconstruct the signal. Nobody should have a problem with this concept.
[Joe, good explanation of the sampling theory limit; sorry I missed it the first time around]
*************************

At any rate, above is my input regarding the sampling/reconstruction
subject as it applies to the CCD.
The Nyquist rate only requires that the sample rate be > 2 times the max
resolution for a perfect reconstruction. For a repeating pattern of line
pairs, it only requires > 2 pixels per pair. 2.x pixels per pair is
sufficient if a perfect filter is available. Four may be a good real-world
requirement, but not the minimum requirement.
Minor problem with your statement. While it is true that Nyquist says
you need only > 2 times max resolution, what you've neglected is that
lines/mm are not sine waves, but square waves. This means that to
faithfully represent the square wave you need much higher sampling than
twice the actual lines/mm value. If you were sampling sine waves, then a
bit more than twice the max resolution would be sufficient.

Chris
 
Ahh, audio. That's where I started my DSP pursuits.
Hearing is supposed to only
go to 20 kHz, but 44.1 kHz (Nyquist rate) CD sampling doesn't seem to cut
it - at least for all people. (Side note: when I switched to CDs from
LPs, I was disappointed by the sound - even though I got a good CD player
and the rest of my audio chain was the same. Yet, I stopped playing LPs
because of the CD convenience factor and CDs were just SO cool.)
I had a humbling experience. I played a test CD that contains clips like wobble tones at stepped frequencies and noise levels. I found my ears are only good to 16.x kHz and -70 dB noise. I checked with other folks in my group, and a few of us are in the same ballpark. I think the 20 kHz figure is overrated, and the 100 dB+ SNR type of high-end equipment is nothing more than a status symbol. Until the average Joe can "hear" 18k and -80 dB, save your money and stick with CD. I am sure there are a few people with golden ears out there, but chances are, it's not you (Ed: a general you, not a personal you, and I do know people who claim to hear over 20 kHz and the alias noise from a CD). Heck, even MP3 is good enough when I'm not paying attention.
The Nyquist rate may be okay as a theoretical construct, but does it really work?
We're not theoretical. And if there are 2.1 pixels, say, per line pair,
what does the image of the resolution chart look like? Can you, as an
observer, easily tell what the frequency of the target lines is supposed
to be?
I can tell you that sampling theory works a lot better in the audio world than in video/graphics. Sound is more cyclical and less phase-dependent. It was one of the biggest adjustments I had to make when switching from audio to video.

First off, the resolution chart needs to be greyscale sine waves. If it's black and white lines, they are impulses and square waves and contain infinite frequency content, and no matter what CCD resolution you have, there cannot be an absolutely correct reconstruction. This is not that bad a restriction; remember, the human eye and optical lenses are band-limiting filters. There are no real impulses and square waves.

For a sine wave, the resolution chart would look like a LOW frequency signal at 2.1 samples per cycle when you zoom in far enough. But anything that close to Nyquist is typically not in the usable bandwidth. The usable bandwidth of a sampling system is usually between .25 and .5; let's say 0.4 is a typical case. So what happens is that somewhere around 3 pixels per line pair, things start to grey out. Having the high-frequency stuff merge into a flat field is not that bad, because your eye also merges high-frequency dots into a flat field, as in half-toning. A single line gets filtered out, which is what happens when we see things. If you are seeing a low-frequency pattern in a high-frequency section of lines, that's aliasing and a sign of a poorly designed system.
Does the Nyquist rate require point samples? (I honestly don't know.)
Pixels are fat samples.
Pixels are fat samples, and in signal processing terms, that's a sample-and-hold. That, in itself, is a low-pass filter; its frequency response is a sinc, to be exact.
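That sinc can be written down directly (my own sketch): the frequency response of a box-shaped pixel aperture (a zero-order hold of one pixel pitch) is sin(pi f p)/(pi f p), which already attenuates at the Nyquist frequency and has its first null at the sampling frequency.

```python
import math

def zoh_magnitude(f, pitch=1.0):
    """Magnitude response of a zero-order hold (a 'fat' pixel of the
    given pitch): |sin(pi f p) / (pi f p)|, with f in cycles per pixel."""
    x = math.pi * f * pitch
    return 1.0 if x == 0 else abs(math.sin(x) / x)

dc   = zoh_magnitude(0.0)  # no attenuation at DC
nyq  = zoh_magnitude(0.5)  # already ~-3.9 dB at the Nyquist frequency
null = zoh_magnitude(1.0)  # first null at the sampling frequency
```

So the fat pixel itself gently rolls off detail well before Nyquist, which is consistent with things "greying out" around 3 pixels per line pair.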
Yep, I thought about that some more after I posted it, and I think you
are right. However, I also suspect that now we're REALLY starting to get
into the area of using a contrast threshold as a way to determine the
frequency of the target line pairs. The pixel centers may lie upon a
black line (or a white line) but the corners may not; such pixels would
be less of a pure black (or white). A resolution target is likely to
look much less crisp. As above, the underlying target resolution may be
determinable through technical analysis, but what do our photographs look
like?
I am not too hung up on black and white lines. And I suspect most digital cameras' maximum recordable black and white line pairs is a bit lower than pixel_width/2. So the 4 pixels per pair may be closer to the truth after all. And with enough points, a contrast threshold is not necessary.

This may come as a surprise, but a good implementation of a resolution test is not based only on the finest line pairs a camera can resolve. Rather, in my own order of importance: Is there cross color (rainbow)? Is the highest-frequency area a flat grey with no discernible pattern (aliasing noise)? Is the transition from distinguishable lines to flat grey smooth (artifacts)? Where does the flat grey start (max frequency; this should also define how sharp the transitions can get on any given edge)?

gordon
 
Well,

While you guys were having a pleasant chat last night, I was working on a film vs. digital simulation using an Excel spreadsheet. Hope you have access to Excel. I thought it would be easier just to e-mail it to you to look at. I will also post this text part of the message in the forum.

I will describe what the spreadsheet does.

The sheet contains two 20x20 matrices and a 10x10 matrix:

On the upper right we have an image which is to be photographed. In the example I am sending, it is a sloppy 6. The image can be changed to your liking to test various results.

The upper left is a simulation of randomly placed crystals. Each cell has a 50/50 chance of containing a crystal, and if it contains a crystal it will be turned on (exposed) if it also lines up with a part of the image on the upper right. To try a different crystal pattern, enter a space or backspace in an unoccupied cell (one with no formula either). The spreadsheet will recalculate the random placement of crystals and display the new results.

The lower left represents a CCD type of sensor array. It looks like a 20x20 matrix, but the formulas are arranged so as to behave as a 10x10 matrix, with each 'pixel' being a 2x2 square.

The 'image' in the upper right is the one I drew before entering the formulas, so it was not fixed up to make the 'pixel' representation look more accurate. There are images you can input that look worse than this one, but as long as the image is not too detailed, the pixel representation is quite consistent.

The 'crystal' representation is hit and miss. Some formula refreshes show an excellent rendition of the image. In many, the image is unrecognisable.

I believe this illustrates the limitations of using randomly placed sensors to store information. On the small scale you do not know what you will get. It is interesting to note that in this simulation there will on average be 20x20/2 = 200 crystals. The pixel matrix always contains 10x10 = 100 pixels. I believe the pixel image is generally more coherent while having only half the number of sensors. The ratio of sensors to coherent information may be as high as 4 to 1 in favour of pixels. Maybe even higher.

Although illustrative, there are some flaws with this analysis:

1 - It is a binary representation. Real images have colour depth.

2 - Real crystals are not restricted to a square matrix and can overlap one another.

3 - Under present technology, the relative difference between pixel and crystal sizes is much greater than in the simulation.

4 - The probability of the presence of a crystal is assumed to be 50/50 in this simulation, which may not jibe with reality.

I am sure there are other flaws here, but it is an interesting experiment.
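For anyone without Excel, the spreadsheet's logic can be sketched in a few lines of Python (my reconstruction from the description above; the test image is a plain filled square rather than the "sloppy 6", and the random seed is fixed so the run is repeatable):

```python
import random

random.seed(1)
N = 20
# Toy "image" to be photographed: a filled square in a 20x20 frame.
image = [[1 if 6 <= r < 14 and 6 <= c < 14 else 0 for c in range(N)]
         for r in range(N)]

# Film layer: each cell has a 50/50 chance of holding a crystal;
# a crystal is exposed only where it coincides with a lit image cell.
crystals = [[random.random() < 0.5 for c in range(N)] for r in range(N)]
film = [[1 if crystals[r][c] and image[r][c] else 0 for c in range(N)]
        for r in range(N)]

# Sensor layer: a 10x10 "pixel" grid, each pixel averaging a 2x2 block.
sensor = [[sum(image[2*r + i][2*c + j] for i in range(2) for j in range(2)) / 4.0
           for c in range(10)] for r in range(10)]

captured = sum(map(sum, film))
total = sum(map(sum, image))
print(captured, total)  # film records only a random subset of lit cells
```

Re-seeding and re-running shows the same hit-and-miss behaviour the poster describes: the film layer sometimes renders the square well and sometimes poorly, while the averaging pixel grid is consistent every run.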
 
Kudos on some strong work. Don't give up now though, as there are some additional assumptions that might make your demonstration more closely approximate reality.

1) The total area of a CCD is much smaller than the total area of film. Could you model the effects of the naturally occurring imperfections in lenses and how these might differentially affect CCDs and film? The tolerances on digital camera lenses would have to be much better to deliver the same quality. Digital cameras may have these exceptional lenses, but if so, I feel really ripped off that many of my lenses cost more than a consumer-level digital camera.

2) A large percentage of the grid in a CCD is not light sensitive, so to properly model the CCD, the edges of your little boxes should not actually touch. Perhaps someone at Sony or Nikon would be good enough to provide you with the algorithm they use to guess how the rest of the grid is filled. I know that sampling error can be statistically modeled. (I am not just talking about how the camera guesses the right colors, but also how it guesses how to fill in the empty space.)

I suspect that the algorithm makes the output appear more random than the input. If you looked at the raw output of a CCD, it would not look particularly good. The camera makers really work wonders to extract as much info as they do from the CCD, and make it appear much as we expect film to look.

Imagine how good film would look if we scanned at the limit of its resolving power, and then post-processed it to the same degree as is done inside a digital camera. I suspect I could make stunningly detailed prints the size of my house!
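As far as I know, no vendor publishes their fill-in algorithm, but the general idea of estimating unmeasured grid positions can be sketched as simple neighbour interpolation (a toy 1-D illustration of the concept, not any camera's actual method; the numbers are made up):

```python
# Only every other position is "light-sensitive" (None = no sensor there);
# gaps are estimated by averaging the measured horizontal neighbours.
measured = [10, None, 14, None, 18, None, 22]

filled = list(measured)
for i, v in enumerate(filled):
    if v is None:
        left = filled[i - 1]
        right = filled[i + 1] if i + 1 < len(filled) else left
        filled[i] = (left + right) / 2.0

print(filled)  # [10, 12.0, 14, 16.0, 18, 20.0, 22]
```

Real cameras do something far more sophisticated in two dimensions and across colour channels, but the principle is the same: the smooth output conceals how sparse the raw measurements were.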
 
Dino:

I just noticed your post here from 3 days ago...

Apparently, some of the purple fringing problem has to do with a "microlens" over each pixel within the CCD. Phil Askey explained it a while back here:

http://www.dpreview.com/forums/read.asp?forum=1001&message=278022&query=microlens+purple

By the way, I did review photos from my 35mm camera (a Nikon N4004S, using an inexpensive Sigma 28-80mm Zoom Lens), and was unable to find any that exhibited the purple fringing problem that I see in photos from the Digital Cameras that I've tried so far (in the last couple of months, I've tried an Epson 3000z, an Olympus 2500L, and a Nikon 990).

However, in fairness to the Digital Cameras, most photos come out looking great. Only the outdoor photos that I take, including treelines against a bright sky, seem to be unacceptable in quality to me. Others don't seem to notice or mind it as much as I do.
(the world is very warped with glasses and -6 diopter corrections)
With a slightly larger wire, or a tree limb, the 35mm camera would show
the actual color of the wire or limb.

With a digital camera, it'll probably be purple.

That's my biggest complaint. I've now tried 3 digital cameras (Epson
3000z, Olympus 2500L, Nikon 990), and I can't seem to take outdoor photos
that include treetops against a bright sky, without seeing blue/purple
leaves and limbs. Same thing for powerlines against a bright sky -- lots
of purple.

Since I don't plan on taking photos of thin wires or powerlines, I
couldn't care less about them. However, I do care about taking photos of
landscapes, and the quality of the existing digicams is not up to my
expectations.

When digital cameras get around this blue/purple problem, offer the
same exposure/focus capabilities as 35mm cameras at a reasonable price,
and let me print an 11x14 or larger print, then I'll be happy....

The 3.3MP images are detailed enough for me, with the exception of the
blue/purple fringing... That's my biggest complaint, and they all seem to
have this problem now.
If you have any doubts about 3 MP cameras being equal to 35mm film, read
Discovery mag, August 2000, 'The Chemistry of Photography', pages 24-27.
Film is not analog, it's binary digital!! It takes at least a 3x3 matrix
of crystals to give a 'gray scale'. That's a division by 9 of the number
of crystals to give a 512-level gray scale. Cells (pixels) in CCD and
CMOS sensors are analog, with more possible bits per cell (the A-D
converter resolution).

The article is wrong about the 'best' digital cameras, but we can forgive
them this time.

Yes, it is possible to capture a bright reflection off a thin wire that
would show up on 35mm film, but you could not tell much from it, because
you would not have enough information about the wire, only that it was
there. In the digital world, you wouldn't see the wire until the number
of pixels went up 2 or 3 fold, but when you did, you would know more
than just that it was there.

Yes, it will be another few years before digital surpasses film completely,
and by a wide enough margin that film is left to history buffs, but it
will come, and sooner rather than later. We are standing on the edge now!!!
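The arithmetic behind the 3x3 claim, for the curious (this is a sketch of the article's binary-crystal model as I read it, not an exact account of film chemistry):

```python
# In the binary-crystal model, each crystal is simply on or off, so one
# crystal carries 1 bit of tone. Grouping a 3x3 block of crystals into a
# single tonal cell trades spatial resolution for tonal depth:
crystals_per_cell = 3 * 3          # divide the crystal count by 9
levels = 2 ** crystals_per_cell    # 9 binary crystals -> 2^9 gray levels
print(crystals_per_cell, levels)   # 9 512
```

That is where the post's "division by 9" and "512-level gray scale" figures come from: spatial resolution drops by a factor of 9 in exchange for 512 distinguishable tones per cell.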
 
The problem is more from the main lens. The more you bend the light to get it to the sensor, the more the problem will happen (like needing a -6 diopter correction). A smaller sensor needs the light bent more than a larger sensor.

That's why you don't hear complaints about purple fringe with the D1: larger sensor. Give manufacturers a few more years and they will fix the problem after people complain enough.
 
I'm debating now on whether or not to wait for a camera like the new Canon EOS-D30 when it becomes available, since it will be using a much larger, APS size CMOS sensor, according to the info Phil Askey posted in his preview. Phil devoted an entire page to the new sensor here:

http://www.dpreview.com/articles/canond30/default.asp?page=3

I've got my new Nikon 990 up for sale now. I thought that I had found a buyer, but they backed out at the last minute, so I placed an ad in the Atlanta paper (to avoid the hassles of long distance payment and shipping, etc.)...

As soon as my Nikon 990 is sold, I'll make a decision on the next digicam to try. Right now, I'm debating on whether to try one of the Sony DSC-505 series digicams (which seem to have less fringing, compared to many other digicams), or wait on a camera with a larger sensor, like the EOS-D30 (which would cost me a lot more money than I really wanted to spend on a digital camera).
 
An additional note: the more you bend the light, the more the lens acts like a prism. We all know what a prism does to light. Rainbows!!

By the way, if you look closely, you will get a red tinge on the other side of the object if the object is a little large and the light is coming from right behind it.

Those glasses I got really have changed a lot about what I thought about camera lenses.
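The prism effect can be put into numbers with Snell's law: blue light sees a slightly higher refractive index than red, so it bends more at each glass surface, and the colours land in slightly different places. A small Python sketch (the indices are illustrative crown-glass-like values, not measurements of any particular lens):

```python
import math

def refract(theta_in_deg, n):
    # Snell's law for air -> glass: sin(t_in) = n * sin(t_out).
    # Returns the refracted angle inside the glass, in degrees.
    return math.degrees(math.asin(math.sin(math.radians(theta_in_deg)) / n))

n_blue, n_red = 1.53, 1.51   # blue sees a slightly higher index than red
t_blue = refract(40.0, n_blue)
t_red = refract(40.0, n_red)
print(t_red - t_blue)  # a fraction of a degree: blue bends more than red
```

That fraction of a degree, multiplied across every element in a lens and focused onto a tiny sensor, is enough to smear a high-contrast edge into a coloured fringe.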
 
Dino:

By the way, I am an eyeglass wearer also. You may want to make sure that your glasses are using "high index" lenses. I cannot wear the normal polycarbonate lenses most optical centers like to use (due to the rainbow effect you are describing), but don't have the problem wearing "high index" lenses. Apparently, they are much more accurate than the normal lightweight lenses.

I also don't have a problem when wearing contact lenses. I wear PBH (Pilkington Barnes-Hind) Hydrocurve 3 "Toric" lenses, since I am nearsighted with astigmatism. In fact, when wearing these contact lenses, my vision is better than that of anyone I know.

Of course, it takes a lot of "fitting" by someone that knows what they are doing, to get the "lens rotation" problem solved perfectly, to get vision as accurate as mine wearing contacts.

Because of the difficulty that I've had in the past with "multiple fittings" required to get perfect, corrected vision, I've been reluctant to try the new Lasik eye surgery. I figure that if they can't get my contacts right without multiple tries, then how are they going to get the surgery right... So, I'll wait until the technology is more mature before risking the loss of my good vision with surgery (which is outstanding, with correctly fitted contact lenses). Although, I admit that the surgery is very tempting, to avoid the hassles of eyeglass or contact lens wear.
 
