Grok2: what is equivalent in "Equivalence"? It's not Enlargement, nor Aperture

...

4. The actual detection (i.e. some form of converting the electrical voltages to something useful) is done by a "receiver". In the case of an antenna, this is a radio, a tv, a crystal set, a scientific instrument ... in other words, anything that usefully "measures" the voltage.
In addition to "measuring the voltage", it's known as demodulating.
Demodulation is the act of extracting the original information-bearing signal from a modulated carrier wave. This is something useful and is part of a "radio" process. In the context of providing an analogy, it is unnecessary - only the concept of "detection" is needed, which does not necessarily mean extracting superimposed additional information from a carrier (although I cannot think of a practical example of this off the top of my head - the only one that pops to mind is when the antenna is part of a transmission system).
In the case of the camera sensor system, the same thing happens ... the A/D converter converts the voltage value of the electrons in the pixel's capacitor into something useful ... this is nothing more than the ADU recorded in raw.
To me, the sensor does not do that. Not even the ADC; it's the demosaicing and the raw conversion math that is analogous to "demodulating".
The demosaic process comes after the recording of the ADU in raw, which is the end of the sensor's participation. It is a "post-processing" concept that transforms the ADU data to another (probably RGB) form. It is very loosely similar to using an equalizer, after the audio has been demodulated, to change the nature of the sound.

--
tony
http://www.tphoto.ca
 
It's kinda hard to respond to the group's ridicule individually, so I'll start this subheading...

The question I've been mulling over is "how does the relative size of the sensor affect its performance?"

Well, the short answer is: "it collects more light".

Well, that's rather obvious and ipso facto correct (like a bigger bucket collects more water), but it does not really answer "which performance metric of the sensor does the size affect?"

Well, it's the SNR: collect more light, get higher SNR.

Again, that's a description of the definition of Photon Shot Noise SNR. It does not answer "how does the size of a sensor affect its performance?", which is my question.
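
As an aside, here is a minimal sketch of what that definition says, assuming pure Poisson shot noise (the photon counts are made-up illustration values, not measurements):

```python
# Photon shot noise is Poisson: for a mean signal of N photons the noise is
# sqrt(N), so SNR = N / sqrt(N) = sqrt(N). Collect 4x the light, double the SNR.
import math

for photons in (100, 400, 1600):        # illustrative values only
    noise = math.sqrt(photons)          # Poisson standard deviation
    print(f"{photons:5d} photons -> noise {noise:5.1f}, SNR {photons / noise:5.1f}")
```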

So while reading up on photoelectrons - thanks to GB - it occurred to me that the aggregate sensor Q.E. (a performance metric of a sensor) is affected by the sensor size, and so I thought I could build an answer to my question.

One can always multiply some number by another, but it would be more meaningful if another engineering discipline did the same thing.

And so I found the analogue in the antenna gain description.

http://www.phys.hawaii.edu/~anita/new/papers/militaryHandbook/antennas.pdf

The gain of an antenna with losses is given by:

G = η · (4π · A) / λ²

where η is the efficiency, A is the physical aperture area, and λ is the wavelength.

Aha, there are words like Efficiency (Q.E.) and Physical aperture area (Size).
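
Here's a rough sketch of that gain formula in code; every input is an assumed illustration value (roughly a 1 m² dish at 10 GHz), just to show the shape of the relation:

```python
# Aperture-antenna gain: G = eta * 4*pi*A / lambda^2 - all inputs assumed.
import math

eta = 0.55          # efficiency (dimensionless), assumed
area = 1.0          # physical aperture area in m^2, assumed
wavelength = 0.03   # metres (10 GHz), assumed

gain = eta * 4 * math.pi * area / wavelength**2
print(f"G = {gain:.0f}, i.e. {10 * math.log10(gain):.1f} dBi")
```

Bigger physical aperture, more gain - which is the parallel I'm drawing to sensor size.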

I realize you are all chomping at the bit to say "aperture - see, an antenna is like a lens!" but bear with me ;-)

What does a digital sensor do? It converts EM energy into an electrical signal level (voltage).

What does a radio antenna do? It converts EM energy into an electrical signal level (voltage).

The parallel is striking. In fact, if one stuck an image of a 4/3 sensor on a phased-array radar photo, not too many would be the wiser ;-)

So, what does a lens do?
Two photos of the same scene made with the same amount of light where the sensor and supporting hardware add in the same amount of additional electronic noise will have the same noise.

What does the lens do? It collects light from the scene and projects it on the sensor.

What determines how much light from the scene passes through the lens onto the sensor? The aperture diameter of the lens and how much of that light gets absorbed/scattered by the lens elements.

What does the sensor do? It converts photons (light) into electrons (electric signal) where more signal results in less noise.

What determines how much light becomes the signal? The QE of the sensor.

What does the supporting hardware do? It digitizes and records that signal into the image file.

What effect does this have on the noise of the signal? It results in additional electronic noise that becomes significant for the portions of the photo made with very little light.

What effect does the pixel count have on noise? More pixels result in higher noise frequencies that are blurred out with fewer pixels, and result in more electronic noise for a given portion of the photo (see here for details and here for examples).

So if two photos are made with the same amount of light, have sensors with the same QE and read noise, then the noise will be the same? Yes
And what is a more meaningful way to understand why that is so? I think it's not because f/2 is equivalent to f/4.
-- see here, here, and here for demonstrations of exactly that (probably just coincidence, though ;-) ).

What makes this so hard to understand? I'll be kind and not say. ;-)
See above. It's not that I don't understand. I just don't think your explanation is very good. Sorry.
 
If you don't get this this time, I'm afraid you are too stubborn to learn.

The reason a large sensor beats a small sensor is because it requires a longer focal length for the same field-of-view, which means you end up with a larger aperture diameter (a larger "antenna" if you like)
And the point is, if one pairs a smaller sensor with a larger aperture diameter, it still works like a small sensor, so the larger aperture diameter by itself is not doing much.

BTW, the dish is not the antenna; it's called a reflector. Yagi antennas have reflectors that do not generate an electrical signal themselves but help the antenna element generate more electrical signal than it would without them.
Dish acts similarly to the parasitic elements of a yagi. The presence of the reflectors in a yagi does not allow the antenna to "generate" more electrical signal. The energy going into the driven element is the same regardless of the number of reflectors. When that energy has a wavelength appropriate to the array size, the reflectors do indeed, by constructive interference, allow that fixed energy to be radiated in a more directional fashion, thus resulting in the "gain" over the antenna without reflectors. But no new energy is generated. The augmented energy in the direction of the yagi is gotten at the expense of energy that would otherwise be radiated in other directions. Thus, a yagi has augmented energy radiated in its principal direction and similarly diminished energy radiated in other directions, the least being off the side of the beam.
Thanks for the more in-depth explanation. I learned some more.

A receiving antenna produces a current in the antenna induced by passing electromagnetic energy through electromagnetic induction. Differing antennas can, again by constructive interference, be designed to provide a "gain" in responding to electromagnetic energy coming from a specific direction. This "gain" is again achieved at the expense of the antenna's being less responsive to energy coming from other directions. This gain is measured relative to the response of a perfect (lossless) dipole (in dBd) or that of a perfect (lossless) isotropic element (in dBi).
Yes.
As far as I can see, there is no useful analogy between the actions of antennas and lenses or antennas and camera sensors – other than the fact that both involve inputs and outputs (as indeed does my body with food and waste). The physical phenomena and their processes of action are otherwise of such different natures that any attempt to create an analogy is to create obfuscation.
I don't completely agree.

As has been pointed out by numbers of posters in this thread, the basics of equivalence are really very simple and straightforward.
Simple and straightforward, except for "equivalent aperture".
If one cannot understand them as straight goods, one has got problems.
I have solved my problem without "equivalent aperture". Instead I presented sensor "gain", different from Q.E.
If one has to try to understand them by introducing an irrelevant and befuddling analogy, one has problems with problems of their own.
I was hoping to share my thought and not my problem.

 
I would suggest you study this example and try to grok equivalent aperture. It should make sense ... all of the necessary metrics are there.

http://www.dpreview.com/forums/post/56324170
 
If one cannot understand them as straight goods, one has got problems.
I have solved my problem without "equivalent aperture". Instead I presented sensor "gain", different from Q.E.
I believe it was you who introduced Occam's Razor into this discussion. But what you're doing here is introducing an unnecessary (and completely dubious) complexity.
If one has to try to understand them by introducing an irrelevant and befuddling analogy, one has problems with problems of their own.
I was hoping to share my thought and not my problem.
It's a nice hope, but a vain one. You're just sharing your problem.
 
Thanks to all, and keep the good info coming - I just wanted to say how great this has been for me, trying to express my ideas and to counter the counterpoints.

Some of my last responses are out of sequence, as I seem to have missed some posts.

Again, I sincerely want to thank all before this thread reaches 149, as I do not plan to start another one on this subject. Hope some of you can find comfort in that ;-)

I'd like to think that I stayed mostly on (my) topic and hope that any ribbing was somewhat "good natured". Certainly, I hope I was not rude to anyone.

Regards,
 
So if two photos are made with the same amount of light, have sensors with the same QE and read noise, then the noise will be the same? Yes
And what is a more meaningful way to understand why that is so?
To understand why what is so? Why the same amount of light results in the same noise, all else equal? Well, MBP, the "more meaningful way to understand why that is so" is to understand what noise is and what causes noise -- the very same two points that have been explained to you over and over and over...
I think it's not because f/2 is equivalent to f/4.
This demonstrates, once again, your complete and utter lack of regard for what is being explained to you. For a given [diagonal] angle of view, f/2 on mFT has the same aperture (entrance pupil) diameter as f/4 on FF, which means the same amount of light from a given scene passes through the lens onto the sensor for a given exposure time and lens transmission.

For a specific example, the same amount of light will fall on the sensor at 50mm f/2 (t/2.2) 1/100 on mFT as will fall on the sensor at 100mm f/4 (t/4.5) 1/100 on FF, for a given scene.
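
In code form, the arithmetic behind that example is trivial (a sketch, using the focal lengths and f-numbers from the example above):

```python
# Aperture (entrance pupil) diameter = focal length / f-number
def pupil_diameter_mm(focal_length_mm: float, f_number: float) -> float:
    return focal_length_mm / f_number

print(pupil_diameter_mm(50, 2.0))    # 25.0 mm: 50mm f/2 on mFT
print(pupil_diameter_mm(100, 4.0))   # 25.0 mm: 100mm f/4 on FF - same diameter
```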

Of course, don't think for a minute that I don't think the above explanation, the above example, or the photos in the links immediately below will make one whit of difference -- it will all be ignored, again.
see here, here, and here for demonstrations of exactly that (probably just coincidence, though ;-) ).

What makes this so hard to understand? I'll be kind and not say. ;-)
See above. It's not that I don't understand.
In fact, it is not merely that you don't understand, but that you *actively* refuse to understand. There are three sets of photos linked above that all demonstrate exactly what has been told to you by several people in countless threads and countless posts, and the best you can come up with is "correlation is not causality". That's not merely ignorance -- it's *willful* ignorance.

I mean, how can anyone look at the photos above, all made with the same amount of light and displaying all but the exact same amount of noise, and dismiss them as mere coincidence? That's either extreme intellectual dishonesty, an extreme lack of cognitive capacity, or both.
I don't think your explanation is very good. Sorry.
That's like a Creationist telling a scientist that their explanation for the age of the Earth isn't very good because the science doesn't jibe with their belief in the Bible.
 
Nicely said!

However, I think there is no end to this type of thread - it's too open-ended.

equivalence how?

size?

weight?

cost?

MFD?

DOF?

SNR?

theft?

mugging?

exercise hand-weights?

cool factor?

likelihood of finding matching colors body/lens

likelihood of finding desirable colors body/lens

AF speed

AF dim light speed

FPS

1080p/4K/8K/16K video?

slow-motion?

battery life?

MF by-wire vs MF non-wire?

IBIS?

MF focus peaking

MF magnification

WIFI

remote control

remote viewing

flying/swimming/crawling/jumping/hovering drones

and so on and so on
 
Yes, that's why I'd rather think in terms of a metric than "equivalence"... ;-)
 
If one cannot understand them as straight goods, one has got problems.
I have solved my problem without "equivalent aperture". Instead I presented sensor "gain", different from Q.E.
I believe it was you who introduced Occam's Razor into this discussion. But what you're doing here is introducing an unnecessary (and completely dubious) complexity.
I have simplified "equivalent exposure" and "collect more photons" into one metric: "gain".

For example, a resistor is specified in "ohms" and "watts" instead of "ohms" and "size".

Also the "gain" would include the sensor difference of various light collecting area relative to the overall sensor size. So "gain" may not be simply be the exact overall dimension ratio.
If one has to try to understand them by introducing an irrelevant and befuddling analogy, one has problems with problems of their own.
I was hoping to share my thought and not my problem.
It's a nice hope, but a vain one. You're just sharing your problem.

--
gollywop
http://g4.img-dpreview.com/D8A95C7DB3724EC094214B212FB1F2AF.jpg
 
...

4. The actual detection (i.e. some form of converting the electrical voltages to something useful) is done by a "receiver". In the case of an antenna, this is a radio, a tv, a crystal set, a scientific instrument ... in other words, anything that usefully "measures" the voltage.
In addition to "measuring the voltage", it's known as demodulating.
Demodulation is the act of extracting the original information-bearing signal from a modulated carrier wave. This is something useful and is part of a "radio" process. In the context of providing an analogy, this is unnecessary - only the concept of "detection" is which does not necessarily mean to extract super-imposed additional information from a carrier (although I cannot think of a practical example of this off the top of my head - the only one that pops into my mind is when the antenna is part of a transmission system).
I see it a bit differently - the photon is the carrier, and it is being "modulated" by the different reflectances and shades of the scene.

In the case of the camera sensor system, the same thing happens ... the A/D converter converts the voltage value of the electrons in the pixel's capacitor into something useful ... this is nothing more than the ADU recorded in raw.
To me, the sensor does not do that. Not even the ADC; it's the demosaicing and the raw conversion math that is analogous to "demodulating".
The demosaic process comes after the recording of the ADU in raw, which is the end of the sensor's participation. It is a "post-processing" concept that transforms the ADU data to another (probably RGB) form. It is very loosely similar to using an equalizer, after the audio has been demodulated, to change the nature of the sound.
An AM signal may be converted to a voltage by the antenna and converted to raw digital data by an ADC without demodulation first; then, using a DSP algorithm, the raw digital data can be demodulated and the recovered voice data output to a DAC so that we can hear it.
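
As a sketch of what I mean, with made-up signal parameters (a 10 kHz carrier standing in for the station, a 300 Hz tone standing in for the voice):

```python
# Digitize an AM signal first, demodulate later in DSP: rectify, then low-pass.
import numpy as np

fs = 100_000                                  # assumed ADC sample rate, Hz
t = np.arange(0, 0.02, 1 / fs)
message = 0.5 * np.sin(2 * np.pi * 300 * t)   # the "voice" being carried
carrier = np.sin(2 * np.pi * 10_000 * t)
raw = (1 + message) * carrier                 # what the antenna + ADC record

rectified = np.abs(raw)                       # envelope detection, step 1
kernel = np.ones(10) / 10                     # ~1 carrier period moving average
envelope = np.convolve(rectified, kernel, mode="same")
recovered = envelope - envelope.mean()        # drop the DC; send this to a DAC
```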

 
A resistor is specified in "ohms" and "watts" and not "ohms" and "size".
I don't think your explanation is very good. Sorry.
That's like a Creationist telling a scientist that their explanation for the age of the Earth isn't very good because the science doesn't jibe with their belief in the Bible.
I hereby proclaim "GB's law", analogous to "Godwin's law". :-P
 
If one cannot understand them as straight goods, one has got problems.
I have solved my problem without "equivalent aperture". Instead I presented sensor "gain", different from Q.E.
I believe it was you who introduced Occam's Razor into this discussion. But what you're doing here is introducing an unnecessary (and completely dubious) complexity.
I have simplified "equivalent exposure" and "collect more photons" into one metric: "gain".

For example, a resistor is specified in "ohms" and "watts" instead of "ohms" and "size".
These are not equivalent specifications. The "size" of a resistor is its physical dimension and is not a meaningful measure of its power rating. I have, say, 10K ohm 5-watt resistors of different sizes depending on the material they're made of. Indeed, I have resistors of the same ohm/power ratings of different sizes even when made of the same material.

And while you fail to see it, Occam's Razor cuts you out: you have made something that is actually quite simple and straightforward ugly, ungainly (pun intended), and of dubious meaning and value.

--
gollywop
http://g4.img-dpreview.com/D8A95C7DB3724EC094214B212FB1F2AF.jpg
 
One thing that went exactly as I thought it would: the thread has demonstrated that using an inapt, inexact "analogy" to understand something is destined to lead to misunderstanding.

What you wanted was something more than an analogy - analogies are not particularly useful for understanding physical systems. More correct would be that you were searching for a "model" of the behavior of sensors. The dazzling thing is that the model is indeed the physical description of the sources of noise that people have provided dozens of times to mostlyboring. The precision of that model can be tested by measuring the individual properties and comparing the overall noise to that predicted by the model. I've not done that, but my guess is that, like audio noise in an amplifier system, the model prediction is pretty close to the output of the system.
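
For concreteness, a sketch of that model with assumed numbers (signal already in electrons, read noise in electrons): the independent noise sources add in quadrature, so

```python
# Predicted pixel noise: shot noise and read noise add in quadrature.
import math

def predicted_noise_e(signal_e: float, read_noise_e: float) -> float:
    shot = math.sqrt(signal_e)                    # Poisson shot noise
    return math.sqrt(shot**2 + read_noise_e**2)   # independent sources combine

for signal in (10, 100, 10_000):                  # assumed signals, electrons
    print(signal, round(predicted_noise_e(signal, read_noise_e=3.0), 1))
# Read noise dominates in the shadows; shot noise dominates at high signal.
```

and that prediction is exactly the kind of thing one could test against measured frames.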

Try modeling the behavior of a freshly picked orange with a similarly sized and shaped rock. It might be analogous in some way, but it doesn't allow me to predict much about how I experience an orange.

Don
 
And I get that. And I am saying that it works because a smaller sensor has less "gain"
Please do not misuse the word gain - it just causes confusion. It may well mean what you think it means in some contexts, but not in the context of image sensors. Gain is simply the amount of voltage per electron.

The size of the image sensor does not dictate gain, nor does changing the size of the sensor change it.

Why not use the same terminology as everyone else in the room, or what the industry uses? It would make discussion easier and clearer.
so it requires a stronger signal to have the same output level.
This is again very confusing. The sensor size is not relevant to the SNR if the amount of light is fixed, in this context. In the context of formats, SNR is simply a function of the number of photons captured.
Sorry that it's confusing; think about what a sensor does - it converts incident photons to photoelectrons. If the Q.E. is the same at the pixel level, what accounts for more photoelectrons from a larger sensor?
Do you have any idea what QE is? It simply tells us how many electrons are excited by a photon hitting a photodiode
Yes, known as "photoelectric effect", of the theory which none other than Einstein won Nobel prize. P-N junction was not invented yet, however.
- a typical number is in the ballpark of 0.5 (or 50%). It's not something "at the pixel level" and something else "at some other mystical level".

The reason why bigger sensors may collect more light is because they are bigger.
Yes.
Why is that so hard to understand?
Not hard.
A bigger sensor also has a larger signal-holding capacity (meaning it can collect more photons before overexposure).
Yes. "Collecting more photons" is OK as a figurative expression, but photons are not collected and stored like water in a barrel; they are converted. There are no more photons afterward.
I've told you already that a photon has a chance of QE of exciting an electron. The electron is stored, just like water in a barrel.

Why not read what I write and what others write instead of just writing?

Yes, a better metric makes comparison more meaningful.
Why do you incorrectly think that units of area are not metrics?
Is it relevant?
Do you now understand and accept that units of area are metrics?
So I'm suggesting an analogous metric from antennas: "gain", which is determined by the size and directivity of the antenna design.
Why the fixation on antennae? They are pointy, flexible and hard; image sensors are flat and stable.
The Rubber Ducky is not the only type of antenna, but the analogy lies in the fact that an antenna converts EM energy to an electrical signal just as an imaging sensor converts EM energy to an electrical signal.
I'm not that interested in antennae. I'm interested in image sensors.
How about talking about image sensors and signal and noise?
Because radio communications was fixated on signal and noise long before imaging sensors.
So if we were talking about modern car engines you'd talk about steam engines or horse carriages?

There is no such sensor-size-dependent "gain" in image sensors.

If we have sensor A with one pixel and sensor B with two pixels that are 100% identical to the pixel of sensor A, sensor B will collect twice the light with the same exposure settings and have a 1.41 times larger SNR (which we can measure in the temporal domain to be sure).
What I am telling you is that it's more consistent with other engineering disciplines to think of that as more "gain" than as more "SNR".
Huh. That is just ridiculous.

Gain amplification does not increase or decrease SNR. The information in the signal captured by the pixels is quantized, discrete. If you amplify the signal of 42 by a factor of 2, it's now 84, but there is no new information - the noise is also amplified by a factor of 2, and the SNR remains the same.

You do realize that your use of the term "gain" is very nonstandard, and you don't even bother to define it in any reasonable way (like a mathematical formula) - this does not make matters any clearer, quite the opposite. Image sensors already have a property called conversion gain, the amplification of the PGA is also often called gain in itself, and now you want yet another, totally obscure gain to be introduced for no sane reason whatsoever.
Because the two pixels will generate twice the photoelectrons of one pixel, like an amplifier with 2x gain will output twice the voltage.
Of course the pixels do have different spatial locations, though in this context that is not relevant.

Regardless, what you just said is a false analogy: if you amplify a voltage you do not add any new information, and the SNR remains the same. However, if you add another pixel you double the amount of information and improve the SNR by a factor of sqrt(2).

So your gain system collapses.

No SNR needs to be considered yet.
False.

SNR is an inherent property of light itself. Since we're concerned with the information in the light, the photon shot noise needs to be considered.

If you double the signal by collecting twice the light (be that a larger exposure or a larger sensor or whatever), you'll improve the SNR by a factor of approximately 1.41.

This is because signal adds up, while noise adds up in quadrature. Or S=s1+s2 while N=sqrt(n1^2+n2^2).

However, if you just double the signal by amplifying it ("using a larger gain"), you do not improve the SNR at all.
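
Here is the same point as a tiny sketch (the numbers are illustrative):

```python
# Amplifying vs. capturing more: only the latter improves SNR.
import math

s, n = 100, math.sqrt(100)     # one capture: signal 100 photons, shot noise 10

# Case 1: amplify by 2 - signal and noise scale together, SNR unchanged.
print((2 * s) / (2 * n))       # 10.0

# Case 2: capture a second, independent 100 photons - signals add linearly,
# noises add in quadrature: N = sqrt(n1^2 + n2^2).
print((s + s) / math.sqrt(n**2 + n**2))   # 14.14... -> SNR up by sqrt(2)
```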
Anyhow:
  • A pixel collects light as electrons - this is all the information we get
  • It's converted to voltage according to the design parameters of the sensor ("conversion gain"). No new information content appears.
  • A programmable gain amplifier (PGA) may amplify the signal in the analogue domain - this is done to reduce the influence of noise from the next step in the imaging chain. It does not add to the information content of the data.
  • An analogue-to-digital converter (ADC) converts the signal into digital numbers. No new information is added to the signal.
I realize that across the pond analogue == analog; I meant "analogous", as in "in a similar manner".
No idea what you mean by that, as the clippety-clip seems to have clipped the relevant part of the discussion. The ADC is the device in the sensor (or off the sensor) which converts the voltage to numbers. The numbers don't care about the size of the sensor.

Regardless, please re-read the bullet points.

Think of information - collecting light is about collecting information. Collecting twice the information and merely amplifying the data by a factor of two do not produce the same SNR, as you seem to think they do.
At which point in the above list do you think the "mystical sensor-size-dependent antenna gain" appears and influences the information content of the data?
The reason radio engineers "invented" SNR and antenna gain is to predict whether the receiving system will be able to receive the "information content" being transmitted.
Please talk about image sensors. This is not a radio forum. The comment and the previous one are just obfuscation.

If sensor A has a signal of 100 photons and sensor B has a signal of 200 photons, then regardless of the size of the sensors, sensor B has the higher SNR.
200 photons over a football field may have more SNR than 100 photons over a ping-pong table, but what does it mean?
I've told you that already. Please read what I write - it's boring to repeat. How large the capturing device is, is irrelevant in this context. You just have 100 or 200 photons collected.
  • Bigger sensor may collect more light - size matters here
  • Signal and noise are simply metrics of the data that has been collected - at this point, size is not part of the function in any form or shape, ping pong or not.
  • For output - the print - size is a relevant part of the visual impact of noise to the observer
Please think hard about the three points above.
And please think about what the radio engineers were trying to do
This is not a radio forum.
, was transmit "information content" and why SNR and antenna gain was important to them and in an analogues manner, imaging sensor is very much like an antenna.
It is obvious you do not understand what information is.

Boosting the signal and capturing more signal are not the same thing. For some reason you seem to think they are. Maybe the radios are confusing you.
Anyhow, here is an example of data from two image sensors - one the size of a ping-pong table, the other the size of a football field.

...I don't see any inherent difference between the information captured by the football field and the data captured by the ping-pong table, do you?
Yes, there is an obvious difference; the football field will be much darker than the ping-pong table, for the same number of photons from each.
And that is absolutely irrelevant for the SNR. Both systems collect the same information.

I've explained this to you many times and you still ignore it. It's been explained to you many times by others and you keep ignoring it. Either you have serious issues with understanding logic and reason, or with reading comprehension, or you're just trolling (at this point, at least). I'd bet on trolling.

But I'll repeat it one more time. Please try to understand this, and then understand why the football-field and ping-pong-table comparison is nonsensical.

  1. The image sensor collects photons
  2. These excite electrons - one per captured photon
  3. Each photoelectron is simply a minimal unit of information
  4. The more of them are captured, the more noise there will be - however, the signal goes up at a faster rate, thus SNR improves with more light
  5. From electrons we go through voltage to simple numbers via the ADC
  6. The SNR is fixed at this point
  7. So far there is no need to consider the size of a ping-pong table or a football stadium at all
  8. Now we download the data to a computer
  9. We process it as we wish
  10. Now we print the resulting photograph - finally, size is relevant to how we humans perceive the SNR
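
If it helps, here is a toy model of steps 1-6 above. Every parameter in it is an assumed illustration value (QE, conversion gain, read noise, ADC step - none taken from any real sensor):

```python
# Photons in, digital numbers out - note that sensor size appears nowhere.
import numpy as np

rng = np.random.default_rng(1)

def pixel_pipeline(mean_photons, qe=0.5, conv_gain_uv_per_e=50.0,
                   read_noise_e=2.0, adc_uv_per_dn=25.0):
    photons = rng.poisson(mean_photons)            # 1. light arrives (shot noise)
    electrons = rng.binomial(photons, qe)          # 2-3. one electron per captured
                                                   #      photon, probability QE
    electrons = electrons + rng.normal(0.0, read_noise_e)  # readout adds noise
    voltage = electrons * conv_gain_uv_per_e       # 5. conversion gain, uV per e-
    return max(0, round(voltage / adc_uv_per_dn))  # 5. ADC -> digital number

print([pixel_pipeline(1000) for _ in range(5)])    # 6. SNR is fixed by now
```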
Both mean and standard deviation are similar and would be even more similar if I had bothered to use more photons, but I didn't want to fill this post with numbers.
A definition is not necessarily the truth...
I didn't define anything, just offered an example.

Fact is that 100 photons captured have a signal of 100 and noise of 10, thus the SNR is also 10.

If you amplify the signal of 100 to 200, you also amplify noise to 20, thus SNR will still be 10.

If you however capture 200 photons, the signal will again be 200, but the noise will now be only 14.1.

Do you agree or disagree with that fact?
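
For what it's worth, here is a quick Monte Carlo check of those numbers (pure shot noise assumed, no read noise):

```python
# Simulate: capture 100 photons, amplify by 2, or capture 200 photons.
import numpy as np

rng = np.random.default_rng(0)
trials = 1_000_000

captured_100 = rng.poisson(100, trials)
captured_200 = rng.poisson(200, trials)
amplified = 2 * captured_100                   # gain of 2, no new photons

for name, data in [("100 captured", captured_100),
                   ("2x amplified", amplified),
                   ("200 captured", captured_200)]:
    print(name, round(data.mean() / data.std(), 1))   # the measured SNR
# ~10.0, ~10.0 and ~14.1: amplification leaves SNR unchanged,
# capturing twice the light improves it by sqrt(2).
```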
 
A resistor is specified in "ohms" and "watts" and not "ohms" and "size".
And my junk is measured in inches. So +++++++ what? Something is clearly not right with you -- it's as if you're randomly spouting out things you read in some book on amateur radio, all the while ignoring what every single person is telling and showing you.
I don't think your explanation is very good. Sorry.
That's like a Creationist telling a scientist that their explanation for the age of the Earth isn't very good because the science doesn't jibe with their belief in the Bible.
I hereby proclaim "GB's law", analogous to "Godwin's law". :-P
You *actively* refuse to acknowledge *facts* presented to you. You *actively* refuse to acknowledge photos demonstrating those facts. You *actively* obfuscate matters by bringing up matters that are completely irrelevant. Is there a name for a "law" that covers that? If not, I'd like to offer a suggestion.
 
...

4. The actual detection (i.e. some form of converting the electrical voltages to something useful) is done by a "receiver". In the case of an antenna, this is a radio, a tv, a crystal set, a scientific instrument ... in other words, anything that usefully "measures" the voltage.
In addition to "measuring the voltage", it's known as demodulating.
Demodulation is the act of extracting the original information-bearing signal from a modulated carrier wave. This is something useful and is part of a "radio" process. In the context of providing an analogy, this is unnecessary - only the concept of "detection" is which does not necessarily mean to extract super-imposed additional information from a carrier (although I cannot think of a practical example of this off the top of my head - the only one that pops into my mind is when the antenna is part of a transmission system).
I see it a bit differently - photon is the carrier and is being "modulated" by the different reflectance and the shades of the scene.
Indeed, photons are "intensity modulated".

In the opening post, you said"

"Again, not quite the same; it's always been about semantics and it may be about just semantics, but I think words have meanings and consequences."

The significant problem throughout this thread is that you take an Alice in Wonderland view of "meanings and consequences". You choose a specific meaning/interpretation of a word and bend it inappropriately to suit your "current state of mind". To you, it means exactly what you want when you say it -- and, at another time, it means something else.

The function of a definition is to provide a common ground for verbal communication. Since your "interpretation" of a definition varies to suit your instantaneous mental process, you cannot "agree" with the commonly accepted interpretation ... and this variability also allows you to contradict yourself between posts. Words have meanings and consequences. However, they only do so if people agree on the meanings.
In the case of the camera sensor system, the same thing happens ... the A/D converter converts the voltage value of the electrons in the pixel's capacitor into something useful ... this is nothing more than the ADU recorded in raw.
To me, the sensor does not do that. Not even the ADC; it's the demosaicing and the raw conversion math that is analogous to "demodulating".
The demosaic process comes after the recording of the ADU in raw, which is the end of the sensor's participation. It is a "post-processing" concept that transforms the ADU data to another (probably RGB) form. It is very loosely similar to using an equalizer, after the audio has been demodulated, to change the nature of the sound.
An AM signal may be converted to a voltage by the antenna and converted to raw digital data by an ADC without demodulation first; then, using a DSP algorithm, the raw digital data can be demodulated and the recovered voice data output to a DAC so that we can hear it.
What has that got to do with the process we are discussing?

Yes, that is possible... you can always convert AM (or other frequency) to a voltage and transcribe it to an ADU with some sort of ADC. That completes the recording.

I present that the last participation of the sensor is when the ADC records the ADU values to the raw buffer. The sensor is no longer of any use for the image capture. End of story.

What you do with the data recording is up to you. You could "post process" it in Raw Digger to analyze the numeric data in the raw file. That is a complete process. You could process the raw data in ACR to generate a JPG/TIFF. You can process that in Photoshop. You could print. You could mount the print in a frame and hang it on the wall.

I could easily stipulate that "demodulating" is not finished until my print is behind glass on the wall. All of this is playing with Alice in Wonderland verbiage. I don't agree with your use of "demodulation" ... and I know, because I am Humpty Dumpty: "it means just what I choose it to mean - neither more nor less".

None of this arm-waving conjecture about "demodulating" is of any value. For the purposes of a sensor system (which I do think we are talking about), the end of the road for the sensor is the generation of the raw data ADU information.

"Demodulation" (as well as many other words you use throughout this thread) is a well defined and understood process. It is best to understand this meaning than to ascribe your personal (inconsistent) loose interpretation. Similarly your use of "gain" is well defined (in spite of the fact that there are two clear definitions - one from the analogue world and one from the sensor technology world). Otherwise, it is very difficult to be on the same page.

--
tony
http://www.tphoto.ca
 
One of my favorites is a comment I heard from a radio preacher who supported segregation with the analogy of an ice-cube tray: without the inserted cubical dividers you could only with great difficulty carry the tray without spilling the water, but with the dividers things became much more stable. I still find myself laughing at the stupidity of that fellow (and his quite ardent followers).

One might have countered that, since he apparently believes in physical analogies, what would he say to alloys, where it is the combination of several elements that results in one of superior strength?

As you intimate, physical analogies are very dangerous and readily misleading, particularly when you don't actually understand the physical phenomenon being analogized. And then too, why the heck do you need an analogy when you've got the straight facts?

--
gollywop
http://g4.img-dpreview.com/D8A95C7DB3724EC094214B212FB1F2AF.jpg
 
Anyhow, here is an example of data from two image sensors - one the size of a ping-pong table, the other the size of a football field.

...I don't see any inherent difference between the information captured by the football field and the data captured by the ping-pong table, do you?
Yes, there is an obvious difference; the football field will be much darker than the ping-pong table, for the same number of photons from each.
You did not answer the question. Is there a difference in the information of these two captures? If there is, what kind of difference? If not, then why is the size of the capturing device relevant in this context? Or is it?

Also, since it was sensors that captured those photons, they were not reflected; thus their contribution (or actually their non-contribution) is not relevant to the brightness of the football field or the ping-pong table. The brightness of those is a function of how much light is reflected, not captured, and since that information was not presented to you, your conclusion about the relative brightness levels was wrong.
 
