Equivalence

Well, if you think about it, you are actually proving my point - that
sensor size alone does not determine noise/light gathering
characteristics.
No one ever said it did. I mean, that's just stupid. Otherwise the
Sony and Nikon DSLRs (1.5x) would have less noise than Canon's 1.6x
DSLRs!
Well, actually "they" (you and Lee Jay) did,
http://forums.dpreview.com/forums/read.asp?forum=1029&message=24091042

http://forums.dpreview.com/forums/read.asp?forum=1029&message=23350027

at least until a month or so ago when I gave up banging my head against the wall about seeing empirically measured stops of noise difference between the 5D and various other cameras (specifically the 20D/30D generation "1.3 stops", but also extrapolating to P&S) which were part of your theory, and based solely on surface area.

As the essay is a work in progress, I will let you get to it... but without knowing the math behind image downsizing / spatial frequency (which you are also apparently still not solid on http://forums.dpreview.com/forums/read.asp?forum=1029&message=24401617

), and without even knowing if we are in the right theory ballpark, I can only go on the 2/3 stop advantage from the raw number of pixels of equivalent sensitivity from this
http://forums.dpreview.com/forums/read.asp?forum=1029&message=24211024

which, if accurate, has direct implications regarding "equivalence" since stopping down 1.3 stops carries a 2/3-stop noise penalty, which shifts the balance away from equilibrium (and is then also different in each sensor comparison case).

The issue I have - and why I would rather not participate in these "duologues" - is that it looks a whole lot like a desired conclusion is working backwards to find its own support in some math, but glossing over the hard math of the applied designs and their output, or perhaps just ignoring it altogether. Since it all has to tie in with empirical results, if you are going to proselytize so vehemently, you really ought to spend the time to refine this aspect, and not advocate a generalization of noise response across sensor designs. The work can be done.
Doesn’t this prove that noise/light gathering is actually not
determined by size but by sensor design?
No! The 5D and 30D have sensors of the same design and generation.
Which has better noise performance?
indeed... and by how much?

--
-CW
 
as i proved beyond a shadow of a doubt in the discussion forum at luminous landscape on that thread, with photos.

short version (sorry for the click-through links, but i didn't want to upload and host the samples again myself)

here's a crop at f/8:
http://luminous-landscape.com/forum/index.php?act=Attach&type=post&id=1633
here's the same crop with myhrvold's gaussian blur:
http://luminous-landscape.com/forum/index.php?act=Attach&type=post&id=1677

and here's a crop at f/22, which myhrvold suggested should look the same as the blurred one above (or alternatively, the same as if the entire photo was resized to 2mp, which looks the same, so i won't include it here, though i did in the original):
http://luminous-landscape.com/forum/index.php?act=Attach&type=post&id=1635

this isn't a case of being led astray by attending to math over pragmatics, it's a case of using the wrong mathematical model, one which fails utterly to model the real situation (most probably because it does not take into account the fact that the picture pixels are not the same as the sensor photosites, but rather reconstructions/interpolations from multiple photosites).
 
All right, more explanation, as requested. :)
And you have NOT increased the spatial frequency of the signal, but
instead you have increased the frequency range of the signal; as
such, both low-frequency and high-frequency signal will be present,
whereas only higher-frequency noise will be present (and
lower-frequency noise will be notably absent). If you continue
increasing the spatial frequency of the noise until you reach the
spatial resolution of the output image, you then get rid of the noise
altogether and are left with nothing but signal.
Think of high-frequency noise as fine sandpaper, and of low-frequency noise as coarse sandpaper. As one increases the spatial frequency of the noise, the noise becomes finer--relatively smaller specks in the final image. And yet, the signal itself remains both coarse and fine--for example, the entire head of the person we are photographing along with its details (eyes, nose, freckles). If the size of the specks of noise drops below the size of the pixels, the noise effectively goes poof.
Not true; you need NOT increase the size of the photosites in order
to improve the signal-to-noise ratio. Increasing the amount of data
gathered is enough.
More photosites provides more spatial resolution. Resolution provides more information that can be "spent" fighting noise. For example, if we have a sensor four times the area of another, and the size of the photosites is the same, then the bigger sensor also has four times as many photosites. For each pixel in the smaller sensor, there are four corresponding pixels in the bigger sensor. Now, let's say one pixel in the smaller sensor is noisy. You lose all that data. For the bigger sensor, you have four pixels representing the same information, so when you lose one, the other three still have the information. The loss of information on the bigger sensor is for smaller detail that the smaller sensor may not have even captured due to its lower resolution. As a result, one can preserve more detail with the bigger sensor for the same amount of noise, or get rid of more noise and keep the same amount of detail; but either way, the signal-to-noise ratio is bigger with the bigger sensor than with the smaller one.
You have more noise, but you also have more signal. Signal is
coherent (and thus adds up to meaningful values), whereas noise is
random (and thus tends to cancel itself out). As such, the
signal-to-noise ratio increases as you add both signal and noise.
Let's say you want to find out who won an election between three candidates. You do this by calling people. But half the people in the country will know for sure and tell you the truth (signal), and the other half of the people won't know and will guess (noise). If you only call two people, you won't know for sure--they could easily both be guessing, and be guessing wrong; or even if one is right and the other one is wrong, you won't be able to tell. But if you call 10,000 people, 5,000 will tell you the same thing (the truth), and the other 5,000 will give you different answers, some that match what the first 5,000 people told you and some that contradict it. But the pattern will be clear: whoever is mentioned the most is the one who won the election. So even though the percentage of people who are guessing is the same, the more people you call, the more certain you will be of the result.
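The polling analogy is easy to check numerically. Here is a toy simulation (the `poll` function and all its parameters are my own illustration, not anything from the thread): half the respondents answer truthfully, half guess uniformly at random, and the plurality answer converges to the truth as the sample grows.

```python
import random

def poll(n, truth="A", candidates=("A", "B", "C"), seed=0):
    """Simulate n phone calls: half report the truth (signal),
    half guess uniformly at random (noise)."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n):
        if rng.random() < 0.5:
            votes.append(truth)                   # coherent signal
        else:
            votes.append(rng.choice(candidates))  # random noise
    return max(set(votes), key=votes.count)       # plurality winner

print(poll(2))       # with only 2 calls the result is unreliable
print(poll(10_000))  # with 10,000 calls the truth reliably wins: A
```

The truthful half all pile onto the same answer while the guesses spread out, so the more calls you make, the larger the gap between the true winner and everyone else.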

I hope this helps.

Victor
 
Well, if you think about it, you are actually proving my point - that
sensor size alone does not determine noise/light gathering
characteristics.
No one ever said it did. I mean, that's just stupid. Otherwise the
Sony and Nikon DSLRs (1.5x) would have less noise than Canon's 1.6x
DSLRs!
The sensors being compared in the links are "close enough" to the same design and generation that sensor area is, by far, the dominant force in determining noise.
at least until a month or so ago when I gave up banging my head
against the wall about seeing empirically measured stops of noise
difference between the 5D and various other cameras (specifically the
20D/30D generation "1.3 stops", but also extrapolating to P&S) which
were part of your theory, and based solely on surface area.
It is not "extrapolation", it is hard fact supported with pics. Do take a look:

http://forums.dpreview.com/forums/read.asp?forum=1029&message=21446344

http://forums.dpreview.com/forums/read.asp?forum=1029&message=21440105
As the essay is a work in progress, I will let you get to it... but
without knowing the math behind image downsizing / spatial frequency
(which you are also apparently still not solid on
http://forums.dpreview.com/forums/read.asp?forum=1029&message=24401617
), and without even knowing if we are in the right theory ballpark,
I do not understand the whole of it, but I do know, for sure, that the theory is in the right "ballpark", as vor's reply demonstrated.
I can only go on the 2/3 stop advantage from the raw number of pixels
of equivalent sensitivity from this
http://forums.dpreview.com/forums/read.asp?forum=1029&message=24211024
which, if accurate, has direct implications regarding "equivalence"
since stopping down 1.3 stops carries a 2/3-stop noise penalty, which
shifts the balance away from equilibrium (and is then also different
in each sensor comparison case).
Yes, the fill factor is critical. However, different sensors do not have different fill factors, as evidenced by the pics I linked above.

And, again, do not confuse total image noise with per-pixel noise.
The issue I have - and why I would rather not participate in these
"duologues" - is that it looks a whole lot like a desired conclusion
is working backwards to find its own support in some math, but
glossing over the hard math of the applied designs and their output,
or perhaps just ignoring it all together.
Not at all. Both theory and experiment agree. Simply because I have not laid out all the details does not mean I am wrong. That's the same logic that creationists use to say that evolution is incorrect.
Since it all has to tie in with empirical results, if you are going to
proselytize so vehemently, you really ought to spend the time to refine
this aspect, and not advocate a generalization of noise response across
sensor designs. The work can be done.
I don't know what that means.
Doesn't this prove that noise/light gathering is actually not
determined by size but by sensor design?
No! The 5D and 30D have sensors of the same design and generation.
Which has better noise performance?
indeed... and by how much?
Simple: 1 1/3 stops. Take another look at the pics I linked in this thread.

--
--joe

http://www.josephjamesphotography.com
http://www.pbase.com/joemama/

Please feel free to criticize, make suggestions, and edit my photos. If you wish to use any of my photos for any purpose other than editing in these forums, please ask.
 
(and fwiw you still haven't bothered to
mention that both the xxd and 5d cameras have a special set of
autofocus sensors which only function at f/2.8 or wider, besides the issue
of absolute light levels.)
I for one don't take the advantage promised by high precision sensors at face value. I've already been in an argument over it, but if you're interested you can read my final thoughts on the matter here:

http://forums.dpreview.com/forums/read.asp?forum=1029&message=24275778
 
since i have avoided lenses which are slower than f/2.8 (mostly for separate reasons, but the special sensors are a contributing consideration), i can't provide direct confirmation that the wider sensors provide a clear benefit. but i can infer, imperfectly, from a few tangential points:

the center point works a whole heck of a lot better than the peripheral points. yeah, i know, it's cross and they aren't, but i don't know if that accounts for all of the difference i am seeing.

faster lenses permit better (ie faster and more accurate) focus in lower light. i don't have any reason to attribute this to the special sensors; i am sure mostly it is because the lens lets in more light. but it does mean you see a difference in focus performance between faster and slower lenses when light is a limiting factor.

i doubt canon would bother to manufacture these sensors if they didn't help (weak argument, but probably true nonetheless).

the deeper dof at f/4 may cover for less accurate focus, meaning that the slower sensors are 'good enough' for a slow lens (i assume that this is basically your point, which is a good one). but: focus is always at a point, not, technically, on a range of points. the rest of the dof within a given coc is in fact out of focus, simply defined as 'good enough'. higher precision (and given the geometry of the sensors, probably higher accuracy as well) should give you more reliable, repeatable, predictable results (ie less wandering of actual pof within the range). and while personally i am not a sharpness nut, i do care about better control of the actual point of focus.

finally, the rebel line (xt/i) are known to have difficulty focussing with very fast lenses, where the 20/30d doesn't. that would indicate that the added sensors do have a practical effect.

anyway, as i said, i don't know exactly how much of an effect the special sensors have... but i like being able to use them.

of course, if we just get a mkIII, the entire issue is moot (at these f/stops, anyway...).
 
Equivalence does mean equal, but things are rarely equal. That's the
point. At its worst, larger sensors merely equal smaller sensors (in
terms of IQ). All other times they are better.
here, again, you say that your point is that different size sensors are not equivalent after all... larger is never worse and often better.

yet above, you claimed you were not arguing in favor of larger sensors.

so, do you see why your readers are confused? (i doubt that anyone on this forum read your essay without coming to the conclusion that you were making an argument in favor of larger sensors.) this is why i suggested you make your goal (that is to say, the positive point you want to argue for, not simply a list of mistakes you want to refute) clear at the start of your argument.
 
And you have NOT increased the spatial frequency of the signal, but
instead you have increased the frequency range of the signal; as
such, both low-frequency and high-frequency signal will be present,
whereas only higher-frequency noise will be present (and
lower-frequency noise will be notably absent). If you continue
increasing the spatial frequency of the noise until you reach the
spatial resolution of the output image, you then get rid of the noise
altogether and are left with nothing but signal.
Think of high-frequency noise as fine sandpaper, and of low-frequency
noise as coarse sandpaper. As one increases the spatial frequency of
the noise, the noise becomes finer--relatively smaller specks in the
final image. And yet, the signal itself remains both coarse and
fine--for example, the entire head of the person we are photographing
along with its details (eyes, nose, freckles). If the size of the
specks of noise drops below the size of the pixels, the noise
effectively goes poof.
I don't quite understand this, but something is wrong here! How could the size of noise suddenly "drop below the size of the pixels"? No matter how many pixels you have, noise will always have the same size, or frequency, relative to the pixel-size!
Not true; you need NOT increase the size of the photosites in order
to improve the signal-to-noise ratio. Increasing the amount of data
gathered is enough.
More photosites provides more spatial resolution. Resolution
provides more information that can be "spent" fighting noise. For
example, if we have a sensor four times the area of another, and the
size of the photosites is the same, then the bigger sensor also has
four times as many photosites. For each pixel in the smaller sensor,
there are four corresponding pixels in the bigger sensor. Now, let's
say one pixel in the smaller sensor is noisy. You lose all that
data. For the bigger sensor, you have four pixels representing the
same information, so when you lose one, the other three still have
the information. The loss of information on the bigger sensor is for
smaller detail that the smaller sensor may not have even captured due
to its lower resolution. As a result, one can preserve more detail
with the bigger sensor for the same amount of noise, or get rid of
more noise and keep the same amount of detail; but either way, the
signal-to-noise ratio is bigger with the bigger sensor than with the
smaller one.
You have more noise, but you also have more signal. Signal is
coherent (and thus adds up to meaningful values), whereas noise is
random (and thus tends to cancel itself out). As such, the
signal-to-noise ratio increases as you add both signal and noise.
Let's say you want to find out who won an election between three
candidates. You do this by calling people. But half the people in
the country will know for sure and tell you the truth (signal), and
the other half of the people won't know and will guess (noise). If
you only call two people, you won't know for sure--they could easily
both be guessing, and be guessing wrong; or even if one is right and
the other one is wrong, you won't be able to tell. But if you call
10,000 people, 5,000 will tell you the same thing (the truth), and
the other 5,000 will give you different answers, some that match what
the first 5,000 people told you and some that contradict it. But
the pattern will be clear: whoever is mentioned the most is the one
who won the election. So even though the percentage of people who
are guessing is the same, the more people you call, the more certain
you will be of the result.

I hope this helps.

Victor
 
since i have avoided lenses which are slower than f/2.8 (mostly for
separate reasons, but the special sensors are a contributing
consideration), i can't provide direct confirmation that the wider
sensors provide a clear benefit. but i can infer, imperfectly, from a
few tangential points:

the center point works a whole heck of a lot better than the
peripheral points. yeah, i know, it's cross and they aren't, but i
don't know if that accounts for all of the difference i am seeing.

faster lenses permit better (ie faster and more accurate) focus in
lower light. i don't have any reason to attribute this to the special
sensors; i am sure mostly it is because the lens lets in more light.
but it does mean you see a difference in focus performance between
faster and slower lenses when light is a limiting factor.

i doubt canon would bother to manufacture these sensors if they
didn't help (weak argument, but probably true nonetheless).
No argument with any of the above.
the deeper dof at f/4 may cover for less accurate focus, meaning that
the slower sensors are 'good enough' for a slow lens (i assume that
this is basically your point, which is a good one). but: focus is
always at a point, not, technically, on a range of points. the rest
of the dof within a given coc is in fact out of focus, simply
defined as 'good enough'. higher precision (and given the geometry of
the sensors, probably higher accuracy as well) should give you more
reliable, repeatable, predictable results (ie less wandering of
actual pof within the range). and while personally i am not a
sharpness nut, i do care about better control of the actual point of
focus.
Of course it would be great if the high precision sensors also increased accuracy and repeatability, but this is not promised by Canon to my knowledge, and I've not seen anyone actually claim better accuracy due to using a lens which is f2.8 or faster; I've seen claims that there is no significant difference, however. Also, to counter your "weak argument" with another, I doubt Canon would neglect to mention it in their marketing blurb if these sensors increased performance in other ways than just 1/3 DoF precision. :-)

What I would like to know is the relationship between depth-of-focus and depth-of-field for a given output size (say 8x12") so we'd have a better idea what Canon is actually promising us.
finally, the rebel line (xt/i) are known to have difficulty focussing
with very fast lenses, where the 20/30d doesn't. that would indicate
that the added sensors do have a practical effect.
Yes: faster lenses need the more precise sensors much more than slow ones do, but engaging better sensors to counter the narrower DoF is hardly an argument for choosing fast glass over slow glass. Well it's an argument, but a circular one, and not very compelling with respect to the equivalence discussion.
anyway, as i said, i don't know exactly how much of an effect the
special sensors have... but i like being able to use them.
Well, I don't mind the special sensors being there, if that's what you thought! :-) But I would be interested in knowing how much of an effect they have, especially before making any purchasing decisions based on them.
of course, if we just get a mkIII, the entire issue is moot (at these
f/stops, anyway...).
 
Equivalence does mean equal, but things are rarely equal. That's the
point. At its worst, larger sensors merely equal smaller sensors (in
terms of IQ). All other times they are better.
here, again, you say that your point is that different size sensors
are not equivalent after all... larger is never worse and often
better.
You misinterpret: larger sensors always have at least as good IQ as smaller sensors. But larger sensor systems do not always have the operational advantages of the smaller sensor systems.
yet above, you claimed you were not arguing in favor of larger
sensors.
Arguing in favor of larger sensors for when IQ is paramount, but not arguing in favor of larger sensor systems in terms of the operational limitations in getting the image.
so, do you see why your readers are confused? (i doubt that anyone on
this forum read your essay without coming to the conclusion that you
were making an argument in favor of larger sensors.) this is why i
suggested you make your goal (that is to say, the positive point you
want to argue for, not simply a list of mistakes you want to refute)
clear at the start of your argument.
Yes, yes, and I thank you for that. I have been busily rewording the essay to make these points more clear. Please have another read and tell me what you think:

http://www.josephjamesphotography.com/equivalence/

--
--joe

http://www.josephjamesphotography.com
http://www.pbase.com/joemama/

Please feel free to criticize, make suggestions, and edit my photos. If you wish to use any of my photos for any purpose other than editing in these forums, please ask.
 
But I would be interested in knowing how much of an
effect they have, especially before making any purchasing decisions
based on them.
As would I. I mention this in my essay, but also say that I have no idea of what practical value it has. I do know that in low light, the center AF point on both the 20D I used to own, and the 5D I currently own, is worlds better than the outer AF points.

However, it's been a very long time since I've owned a lens slower than f / 2.8. I briefly owned a 300 / 4L IS, but only used it in very good light, and also briefly owned a 24-105 / 4L IS, but never used it in ultra low light, either. In fact, I never even owned a lens slower than f / 2.8 when I had a 20D.

--
--joe

http://www.josephjamesphotography.com
http://www.pbase.com/joemama/

Please feel free to criticize, make suggestions, and edit my photos. If you wish to use any of my photos for any purpose other than editing in these forums, please ask.
 
i still don't get why you're reacting so defensively. you asked for
suggestions, i offered some. there's a difference between critiquing
the logic of an argument (which wasn't the thrust of my suggestions)
and critiquing the rhetorical persuasiveness of it. in the latter
case, the relevant evidence is whether or not your readers follow
(that is, come along with) the argument, not whether you can defend
each individual point.
Point taken. Hopefully, I've reworked my essay (in large part due to your criticism) to have effectively addressed these issues.
why wouldn't you take my comments as the most useful thing they could
constitute for any author, which is a window into how one's writing
is perceived by another reader in good faith? that's the most
important sort of evidence a writer interested in improving their
writing can get.
I humbly apologize, and thank you for taking the time and effort to help me in this endeavor!

--
--joe

http://www.josephjamesphotography.com
http://www.pbase.com/joemama/

Please feel free to criticize, make suggestions, and edit my photos. If you wish to use any of my photos for any purpose other than editing in these forums, please ask.
 
All right, more explanation, as requested. :)
Give me some time to digest this and either understand it, or come up with intelligent questions.

Once again, please accept my apologies for "blowing off" your initial effort. Quite rude of me, to tell the truth, and I feel pretty bad about that.

--
--joe

http://www.josephjamesphotography.com
http://www.pbase.com/joemama/

Please feel free to criticize, make suggestions, and edit my photos. If you wish to use any of my photos for any purpose other than editing in these forums, please ask.
 
I don't quite understand this, but something is wrong here! How could
the size of noise suddenly "drop below the size of the pixels"? No
matter how many pixels you have, noise will always have the same
size, or frequency, relative to the pixel-size!
That was just wording used for the sandpaper analogy.

If you have a sensor four times larger than another sensor, but with the same pixel size, you will end up with four times as many pixels on the larger sensor.

Now, each pixel in both sensors has the same amount of noise - which is random.

You can take the larger sensor's picture and down-sample it to the size of the smaller sensor's image. By doing this, you average 4 pixels together for every output pixel. Averaging these pixels together reduces the random noise.

Now what the engineers could do instead is combine these four pixels together in hardware, making each pixel on the larger sensor 4 times larger. This would end up creating a sensor with the same number of pixels as the smaller one, but with a 2-stop noise advantage - all from being 4 times larger.
 
But I would be interested in knowing how much of an
effect they have, especially before making any purchasing decisions
based on them.
As would I. I mention this in my essay, but also say that I have no
idea of what practical value it has. I do know that in low light,
the center AF point on both the 20D I used to own, and the 5D I
currently own, is worlds better than the outer AF points.

However, it's been a very long time since I've owned a lens slower
than f / 2.8. I briefly owned a 300 / 4L IS, but only used it in
very good light, and also briefly owned a 24-105 / 4L IS, but never
used it in ultra low light, either. In fact, I never even owned a
lens slower than f / 2.8 when I had a 20D.

--
I'm not likely to buy glass slower than f2.8 again either (with the possible exception of 24-105 for FF to keep the versatility I now have with the 17-55), so if/when I make the move to FF I won't be doing it to use equivalent lenses; in this sense the whole precision AF sensor discussion is moot to me. But I'm a sucker for truth!
 
I'm not likely to buy glass slower than f2.8 again either (with the
possible exception of 24-105 for FF to keep the versatility I now
have with the 17-55), so if/when I make the move to FF I won't be
doing it to use equivalent lenses; in this sense the whole precision
AF sensor discussion is moot to me. But I'm a sucker for truth!
That is exactly me! However, let me give you a heads up on a possible future for you: when I got the 5D, I sold my 35 / 1.4L to fund a 24-105 / 4L IS. I so enjoyed the Sigma 18-50 / 2.8 on the 20D, that I thought a 15-66 / 2.5 IS equivalent on FF would be pure heaven.

I was wrong.

In the end, as you, I switched systems to give me capabilities that were not even available on the 20D. I shoot wide open almost exclusively, and the shallow DOF of my 5D pics is unattainable with a smaller sensor.

I did not go FF to get equivalent images, I went FF to get "better" (for my tastes) images. But, it's important to note that I can still take the exact same types of pics I took with my 20D with a 5D, with no loss in IQ, and usually much higher IQ. I do not need to suffer more vignetting, softer edges, or more distortion -- three factors that rarely mattered to me anyway! : )

--
--joe

http://www.josephjamesphotography.com
http://www.pbase.com/joemama/

Please feel free to criticize, make suggestions, and edit my photos. If you wish to use any of my photos for any purpose other than editing in these forums, please ask.
 
Yeah but if" For example, ISO 100 on 1.6x is equivalent to ISO 250 on
35mm FF since 100(1.6^2) = 256", then does that really mean that ISO
1600 is equivalent to ISO 4100 on FF?
That is exactly correct. How do you get ISO 4100 on a 5D? Well, you shoot at ISO 3200 (and -1/3 ev if you need the shutter speed, which you most likely do, or you'd not be at that ISO), and then convert using +1/3 ev.

What if you don't shoot RAW? I don't know. I assume you just up the gamma in post. Maybe some software will up the ev of a jpg. Honestly, it's been so long since I've shot jpg, and when I did, this issue never came up for me.
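For reference, the arithmetic being applied in both cases is just ISO times the square of the crop factor. A trivial sketch (the function name is mine, for illustration only):

```python
def equivalent_iso(iso, crop_factor):
    """Map an ISO on a cropped sensor to the full-frame ISO giving
    the same total light, per the ISO * crop_factor^2 rule quoted
    above."""
    return iso * crop_factor ** 2

print(equivalent_iso(100, 1.6))   # ~256, i.e. "ISO 250" in 1/3 stops
print(equivalent_iso(1600, 1.6))  # ~4096, the "ISO 4100" figure above
```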

--
--joe

http://www.josephjamesphotography.com
http://www.pbase.com/joemama/

Please feel free to criticize, make suggestions, and edit my photos. If you wish to use any of my photos for any purpose other than editing in these forums, please ask.
 
I'm not likely to buy glass slower than f2.8 again either (with the
possible exception of 24-105 for FF to keep the versatility I now
have with the 17-55), so if/when I make the move to FF I won't be
doing it to use equivalent lenses; in this sense the whole precision
AF sensor discussion is moot to me. But I'm a sucker for truth!
That is exactly me! However, let me give you a heads up on a
possible future for you: when I got the 5D, I sold my 35 / 1.4L to
fund a 24-105 / 4L IS. I so enjoyed the Sigma 18-50 / 2.8 on the
20D, that I thought a 15-66 / 2.5 IS equivalent on FF would be pure
heaven.

I was wrong.

In the end, as you, I switched systems to give me capabilities that
were not even available on the 20D. I shoot wide open almost
exclusively, and the shallow DOF of my 5D pics is unattainable with a
smaller sensor.
Yes, I might be better off keeping the 17-55 & 20D for when I need the versatility, and as a "TC" for my 70-200, and dedicating FF for fast primes. I find your shallow DoF photography inspiring by the way.
I did not go FF to get equivalent images, I went FF to get "better"
(for my tastes) images. But, it's important to note that I can still
take the exact same types of pics I took with my 20D with a 5D, with
no loss in IQ, and usually much higher IQ. I do not need to suffer
more vignetting, softer edges, or more distortion -- three factors
that rarely mattered to me anyway! : )
I do find it encouraging to know that I wouldn't be giving anything up by moving to FF, and I appreciate the effort you and Lee have taken to explain this since I wasn't aware of it before reading some of the equivalence posts. So while some are irritated by the "gospel" you guys have been spreading in numerous threads, my photography could well end up benefiting from it.
--
--joe

http://www.josephjamesphotography.com
http://www.pbase.com/joemama/

Please feel free to criticize, make suggestions, and edit my photos.
If you wish to use any of my photos for any purpose other than
editing in these forums, please ask.
 
If you increase the size of the sensor and leave the photo-sites the
same size then you will have exactly the same amplitude of noise.
However, since you now have more photo-sites you now have more noise.
That's right, you have a larger volume of noise that is exactly the
same amplitude. If you print the image the same size as the smaller
sensor all you have done is increased the spatial frequency of the
noise.
And you have NOT increased the spatial frequency of the signal, but
instead you have increased the frequency range of the signal; as
such, both low-frequency and high-frequency signal will be present,
whereas only higher-frequency noise will be present (and
lower-frequency noise will be notably absent). If you continue
increasing the spatial frequency of the noise until you reach the
spatial resolution of the output image, you then get rid of the noise
altogether and are left with nothing but signal.
You do understand what "spatial" means, don't you?
If you increase the size of the photo-site then, and only then, will
you improve the signal to noise ratio (provided the amplification
hardware is the same).
Not true; you need NOT increase the size of the photosites in order
to improve the signal-to-noise ratio. Increasing the amount of data
gathered is enough.
Using a larger photo-site increases the signal to noise ratio. Averaging neighboring pixels increases the signal to noise ratio and is essentially the same as having a larger photo-site to begin with.
The notion that a larger surface gives you
less noise is false. The same is true of film; if you increase the
size the grain remains exactly the same, you just now have more of it.
You have more noise, but you also have more signal. Signal is
coherent (and thus adds up to meaningful values), whereas noise is
random (and thus tends to cancel itself out). As such, the
signal-to-noise ratio increases as you add both signal and noise.
This is why noise reduction by image averaging works. Check out this
article for more details:
If you multiply the signal by 1.6 and the noise by 1.6 then you have exactly the same signal to noise ratio.
This article is about taking two or more images and averaging them together. This is using a temporal variable to reduce noise and has nothing to do with sensor size. It would work equally well for any sensor.

--
Whoever said 'a picture is worth a thousand words' was a cheapskate.

http://www.pbase.com/dot_borg
 
