Grok2 what is equivalent in "Equivalence" and it's not Enlargement nor Aperture

I'd like a sensor with a QE of 2 please ;-)
2 football fields or 2 ping-pong tables?
Logistically, ping-pong tables are much more practical, especially if they're the fold up kind...
Sure.


Wait, I thought it over and I'd like one of these instead. It's not even full frame so it's probably noisy but I'll deal with that ;-) We really need a sarcasm font here...



Dragonfly_1080px_Oct-2014.jpg
 
The only problem is that purely refractive optics can't be faster than f/0.5. Other than that, there's no conceptual problem with equivalence and the fact that real-world lenses often cannot be found to make small sensors equivalent to larger sensors is obviously an advantage of large sensors.
One gets the same result whether one thinks of equivalent aperture or equivalent gain of Q.E.;
That makes no sense whatsoever. There is no such thing as "equivalent gain of QE".
The difference is, equivalent gain of Q.E. does not require any lens for the same result.
QE has a definition, and it can't be larger than one.
As I said, gain can be normalized to some sensor size like the isotropic antenna.
Gain is voltage/electron.
Gain = output level/input level
Any attempt to associate an "isotropic antenna" to sensors is rather silly. That is like trying to associate the number of boxcars moving in the rail system to the way you appreciate Mozart.

If you wish to make an analogy, the correct one is to associate an antenna with a lens. Both are designed to "focus" EM energy.
Not quite; the main function of an antenna is to convert EM energy into an electrical signal. The design of an antenna may or may not include a "focusing" element, and one is not necessary for an antenna to work.

And an imaging sensor also can "work" without a lens. And in this context, the usefulness of "gain" is more apparent. At the same light intensity*, a larger sensor generates more photoelectrons than a smaller sensor. As gain typically means the ratio of output/input, I think it's appropriate to say a larger sensor has more "gain". I think "gain" is more meaningful than "collecting more light" because the defining attribute of an imaging sensor is that it converts photons to photoelectrons rather than "collecting photons".

* I realize that in a camera context a larger sensor may be looked at as being illuminated with stronger light, but logically it's not so, as in the case of sensor performance without a lens. The photoelectric effect is a function of the intensity of light (lux = lumen/m²) and the wavelength.**
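As a hedged illustration of that footnote: at fixed illuminance, the photoelectron count scales with sensor area and QE. A minimal Python sketch, where the photons-per-lux constant and the sensor dimensions are purely illustrative assumptions, not real photometric conversions:

```python
# Illustrative sketch: at equal illuminance, photoelectron count scales with area.
# The photon-flux constant below is an assumption, not a real photometric figure.
PHOTONS_PER_LUX_SEC_MM2 = 10_000  # assumed illustrative constant

def photoelectrons(illuminance_lux, area_mm2, exposure_s, qe=0.5):
    """Photoelectrons generated: photon count scaled by quantum efficiency."""
    photons = PHOTONS_PER_LUX_SEC_MM2 * illuminance_lux * area_mm2 * exposure_s
    return photons * qe

small = photoelectrons(100, 8.8 * 6.6, 1 / 60)    # small-sensor area (illustrative)
large = photoelectrons(100, 36.0 * 24.0, 1 / 60)  # full-frame-sized area
print(round(large / small, 1))  # ratio equals the area ratio, ~14.9
```

The constants cancel in the ratio, so only the area ratio survives, which is the footnote's point.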
They are both passive componentry. A sensor is like the detector attached to the antenna. The antenna/lens and the detector/sensor are both plausible and independent analogies.
There is no additional detector in an antenna design, e.g. the "detector" of a dish antenna is a horn antenna with an LNA (low-noise amplifier). That horn antenna is what makes a dish an antenna ;-)
As you well know, the gain of a sensor must be less than one -- from the simple fact of Quantum Efficiency.
Yes, as a photodiode is a passive device, as is an antenna (there is no "amplification gain", and that's why antenna gain is defined relative to a "standard").
And, of course, the actual sensor performance in terms of photon collection is independent from the subsequent amplification and A/D conversions.
 
I'm with you, it's all pretty simple and you're preaching to the choir. I meant he was moving his goal posts. Well, they're not really goal posts, they're... I don't know, I don't have a good metaphor for it, but every time someone explains something to him, he comes up with some nutty reason for why it's not the real reason. I wish I'd been keeping track of all the things he's gotten wrong so far... This has been great entertainment while I'm working on rendering fluid simulations :-) I don't really see an end to this; he's unable to admit that he's wrong and he's like some kind of Rube Goldberg of photography.

You know what's better than this? Trying to explain fluid flow and pumps to people; you've got electricity AND fluids, and people don't understand either of them, not even the people that sell the pumps. At least we don't have to worry about the speed of light here :-)
It's not that I've been moving the goal post; I've been wondering what the goal post is and I found it.

WHEREAS 1. A sensor metric should be of sensor function, not merely the dimensional difference.

WHEREAS 2. A larger scene area is not "more light" as in stronger light, per light intensity (lux = lumen/m²).

WHEREAS 3. The photoelectric effect is a function of light intensity (not of a larger lit area) and wavelength.

THEREFORE 4. Defining the output SNR as the ambient SNR reduced by the f/stop (less intensity), reduced by Q.E. being less than one, and further reduced by the relative "gain" of sensor size (and further reduced by the read amp/ADC chain) is more logical and consistent with other engineering disciplines.

I hope the above is sufficiently bulletized... ;-)
 
I think you're a lost-cause, but if you're talking about without a lens, then light intensity (illuminance) will be the same regardless of sensor size. In that case, the larger sensor works the same as a solar panel, and more area simply collects more light. No gain/QE BS, just more area. Period.
 
If you don't get this this time, I'm afraid you are too stubborn to learn.

The reason a large sensor beats a small sensor is because it requires a longer focal length for the same field-of-view, which means you end up with a larger aperture diameter (a larger "antenna" if you like)
And the point is, if one puts a smaller sensor behind a larger aperture diameter, it still works like a small sensor, so the larger aperture diameter by itself is not doing much.

BTW, the dish is not the antenna; it's called a reflector. Yagi antennas have reflectors that do not generate an electrical signal but help the antenna element generate more electrical signal than without the reflectors.
I corrected the typo.
Humm, the additional elements do not generate more electrical signal. They act as a lens focusing the direction of the constant signal applied to the antenna feed-point.
I meant the antenna element would generate more electrical signal (higher gain) with the reflecting elements, which do not generate an electrical signal themselves.
at the same f-stop. Since there are practical and theoretical limits on how fast f-stops can be, the larger sensor enables a larger aperture diameter. The larger aperture diameter collects more light just as a larger antenna area does, and more light means a higher signal-to-noise ratio.

That's it. Don't make it any more complicated than that.
 
Number of photons per second per square millimeter goes with f-stop (and scene illumination, obviously). So, in the same period (1/shutter speed), the larger sensor will have more photons falling on it than a smaller sensor, given the same f-stop of the lens in front of it.

So, for the same scene, the same shutter speed, and the same f-stop, a larger sensor will have more photons hitting it.

Given the same quantum efficiency, that will generate more electrons and thus lower noise.

A larger sensor is "less noisy" for this reason - it collects more light at the same f-stop.

To equalize the number of photons collected by different sized sensors, one can change the f-stop in proportion to the sensor size. A 2x larger (linear dimension) sensor can withstand a 2x larger f-stop (f/4 versus f/2, for example) and collect the same light, thus producing the same noise. Thus, we say these two f-stops are "equivalent" on these different formats. As a bonus, they also generate the same DOF from the same field-of-view, making them even more equivalent. Further, they create the same diffraction-softening, further demonstrating their equivalence.

That's it. No big deal. Same as teleconverters.

--
Lee Jay
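Lee Jay's equivalence arithmetic can be checked with a short sketch. The illuminance, areas, and crop factor below are illustrative assumptions; the point is only that scaling the f-stop by the crop factor equalizes total light across formats:

```python
import math

# Sketch of format equivalence: scale the f-stop by the crop factor to
# equalize total light (and thus shot-noise SNR) across sensor sizes.
def equivalent_f_stop(f_stop, crop_factor):
    """f-stop on the larger format that admits the same total light."""
    return f_stop * crop_factor

def total_light(illuminance, area, f_stop, shutter_s):
    """Relative total light: exposure (proportional to t/f^2) times sensor area."""
    return illuminance / f_stop**2 * shutter_s * area

crop = 2.0                                       # illustrative 2x crop factor
small_area, large_area = 225.0, 225.0 * crop**2  # mm^2; linear 2x -> area 4x

lt_small = total_light(1000, small_area, 2.0, 1 / 60)  # f/2 on the small format
lt_large = total_light(1000, large_area,
                       equivalent_f_stop(2.0, crop), 1 / 60)  # f/4 on the large
print(math.isclose(lt_small, lt_large))  # True: equivalent f-stops, same light
```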
Thanks, and yes, it's not difficult. I'm thinking in terms of which attribute of a sensor its size changes to make it different, and my definition of "gain" as that changing attribute provides a more consistent and analogous parameter.
You misuse the word gain, making your own definition that doesn't parse well for those of us who understand gain.

Your bewildering ramble, with multiple "analogies", is muddled by non-analogous so-called "analogies." If you think that you understand this better with that illogical line of thought, then tuck it in and let it go. If you think we are going to follow that word salad and rejoice at its clarity, then you're showing your continuing lack of understanding.

Don
Well, I've been wondering if I'm using "attribute" correctly, but I'm pretty sure about the use of "antenna gain", even though it's somewhat different from "amplifier gain". What about the "antenna gain" do you find erroneous? I'm really looking for that kind of feedback.

Thanks.
I'm not playing the game of tweaking your non-analogies into something different but similarly non-analogous. I've watched your game for a couple of threads now, and you clearly want to hang onto something off the mark for understanding digital photo noise, for reasons that are totally unclear to me. I guess it's partially stubbornness and partially a misguided attempt to twist and distort the digital image noise problem into a problem you think you understand. I think you're getting a brain hernia trying to do that heavy lift, because there is very little "there" there. Sometimes the way through such issues is to fall out of love with inapt analogies and start from simple starting realities: photons, light intensity, areas, statistics of photon counting, read noise (amplifier noise), etc.

My suggestion, which I am now pretty certain you will not take, is to go through a few of the many good discussions on digital image noise and follow the arithmetic (math is a bit of an overstatement of what it takes -- more like a bit of statistical arithmetic). If you do the unexpected, you will likely come out with concepts that are an order of magnitude more useful than your inexact "analogies."
 
I think you're a lost-cause, but if you're talking about without a lens, then light intensity (illuminance) will be the same regardless of sensor size. In that case, the larger sensor works the same as a solar panel, and more area simply collects more light. No gain/QE BS, just more area. Period.
More area collecting more light is an observation; how to compare what that means is typically described as a "metric".
You mean a metric like how many photons are collected?
That a larger area collects more does not a metric make.
Yes, it is.
And no one yet has explained what is different regarding 200 photons over a football field vs. 100 photons over a ping-pong table.
A factor of two, and a change in SNR of sqrt(2).
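That one-liner follows from Poisson (shot-noise) statistics, and a quick sketch shows the areas never enter:

```python
import math

# Poisson shot noise: SNR = N / sqrt(N) = sqrt(N), independent of sensor area.
def shot_noise_snr(photons):
    return photons / math.sqrt(photons)  # equals sqrt(photons)

snr_football = shot_noise_snr(200)  # 200 photons over a football field
snr_pingpong = shot_noise_snr(100)  # 100 photons over a ping-pong table
print(round(snr_football / snr_pingpong, 3))  # sqrt(2) ~ 1.414
```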
 
If you don't get this this time, I'm afraid you are too stubborn to learn.

The reason a large sensor beats a small sensor is because it requires a longer focal length for the same field-of-view, which means you end up with a larger aperture diameter (a larger "antenna" if you like)
And the point is, if one puts a smaller sensor behind a larger aperture diameter, it still works like a small sensor, so the larger aperture diameter by itself is not doing much.

BTW, the dish is not the antenna; it's called a reflector. Yagi antennas have reflectors that do not generate an electrical signal but help the antenna element generate more electrical signal than without the reflectors.
I corrected the typo.
Humm, the additional elements do not generate more electrical signal. They act as a lens focusing the direction of the constant signal applied to the antenna feed-point.
I meant the antenna element would generate more electrical signal (higher gain) with the reflecting elements, which do not generate an electrical signal themselves.
at the same f-stop. Since there are practical and theoretical limits on how fast f-stops can be, the larger sensor enables a larger aperture diameter. The larger aperture diameter collects more light just as a larger antenna area does, and more light means a higher signal-to-noise ratio.

That's it. Don't make it any more complicated than that.
 
... a bunch of other things.
They are both passive componentry. A sensor is like the detector attached to the antenna. The antenna/lens and the detector/sensor are both plausible and independent analogies.
There is no additional detector in an antenna design, e.g. the "detector" of a dish antenna is a horn antenna with an LNA (low-noise amplifier). That horn antenna is what makes a dish an antenna ;-)
The horn (or dish) is a waveguide... it is not "an antenna"; it is a component used for focusing energy (even though we refer to the entire system as an "antenna"). It is like the reflector/director of a Yagi. In a loose sense, the actual "real antenna" sensing element is the 1/4-wave element at the "end of the horn" or the dipole of the Yagi.

The horn's (or dish's) LNA is an auxiliary device to pre-amplify the weak signal before it is sent to a "receiver" along the lossy (i.e. noisy) transmission line. The loss in the cable can often be on the order of 3 to 10 or more dB per hundred feet, depending on the cable size, characteristics, and frequency of use (with a few "etc" thrown in). The LNA simply amplifies the signal without otherwise changing its characteristics. It is there only to mitigate the transmission line losses.

If you wish to make a parallel to the camera system, here is my best guess that might work.

1. The camera lens is like the horn or yagi reflector/director. The photon energy is simply guided in a desirable fashion.

2. The sensor pixel receives the light in the way that the dipole/ground-plane component of the antenna works. The significant difference is that the pixel receives and accumulates an electric charge over a period of time, just like a capacitor. The dipole of the antenna does not accumulate anything ... it either immediately re-radiates the electrical currents or (by simple electrical jiggery-pokery) allows the current to be passed to a feed line.

3. The effect of the LNA's amplification in overcoming the feedline losses is similar to the sensor system's passing the accumulated electrical charge to a low-noise amplifier to amplify the electrical voltage as determined by the ISO setting of the camera.

4. The actual detection (i.e. some form of converting the electrical voltages to something useful) is done by a "receiver". In the case of an antenna, this is a radio, a tv, a crystal set, a scientific instrument ... in other words, anything that usefully "measures" the voltage.

In the case of the camera sensor system, the same thing happens ... the A/D converter converts the voltage value of the electrons in the pixel's capacitor into something useful ... this is nothing more than the ADU recorded in raw.

(( :) hummm. you could say that this is a quantum mechanical manifestation in the real world ... you don't know anything until it is measured ... and then the only thing you really know is what the measured value was :) ))
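The four-stage analogy above maps onto a simple pipeline: photons to electrons (QE), electrons to voltage (conversion gain, then the ISO amplifier), and voltage to ADU (the A/D "measurement"). A hedged Python sketch; every constant here (QE, conversion gain, ISO amplification, ADC step, bit depth) is an illustrative assumption, not a real sensor's value:

```python
# Illustrative sketch of the sensor chain described above:
# photons -> electrons (QE) -> voltage (conversion gain, ISO amp) -> ADU (A/D).
def imaging_chain(photons, qe=0.5, conv_gain_uV_per_e=50.0,
                  iso_amp=4.0, adc_uV_per_adu=100.0, adc_bits=14):
    electrons = photons * qe                     # photoelectric conversion
    voltage_uV = electrons * conv_gain_uV_per_e  # charge-to-voltage readout
    amplified_uV = voltage_uV * iso_amp          # low-noise amplifier (ISO)
    adu = int(amplified_uV / adc_uV_per_adu)     # A/D "measurement"
    return min(adu, 2**adc_bits - 1)             # clip at ADC full scale

print(imaging_chain(1000))  # 1000 photons -> 500 e- -> 25000 uV -> 100000 uV -> 1000 ADU
```

Nothing downstream of the QE step adds information; each stage only rescales or quantizes what the pixel already captured.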
As you well know, the gain of a sensor must be less than one -- from the simple fact of Quantum Efficiency.
Yes, as a photodiode is a passive device, as is an antenna (there is no "amplification gain", and that's why antenna gain is defined relative to a "standard").
?? Good gosh. Amazing understanding and logic. I QUIT !!!!

--
tony
http://www.tphoto.ca
 
I'm with you, it's all pretty simple and you're preaching to the choir. I meant he was moving his goal posts. Well, they're not really goal posts, they're... I don't know, I don't have a good metaphor for it, but every time someone explains something to him, he comes up with some nutty reason for why it's not the real reason. I wish I'd been keeping track of all the things he's gotten wrong so far... This has been great entertainment while I'm working on rendering fluid simulations :-) I don't really see an end to this; he's unable to admit that he's wrong and he's like some kind of Rube Goldberg of photography.

You know what's better than this? Trying to explain fluid flow and pumps to people; you've got electricity AND fluids, and people don't understand either of them, not even the people that sell the pumps. At least we don't have to worry about the speed of light here :-)
It's not that I've been moving the goal post; I've been wondering what the goal post is and I found it.

WHEREAS 1. A sensor metric should be of sensor function, not merely the dimensional difference.

WHEREAS 2. A larger scene area is not "more light" as in stronger light, per light intensity (lux = lumen/m²).

WHEREAS 3. The photoelectric effect is a function of light intensity (not of a larger lit area) and wavelength.

THEREFORE 4. Defining the output SNR as the ambient SNR reduced by the f/stop (less intensity), reduced by Q.E. being less than one, and further reduced by the relative "gain" of sensor size (and further reduced by the read amp/ADC chain) is more logical and consistent with other engineering disciplines.

I hope the above is sufficiently bulletized... ;-)
The above reads more like the following, albeit without the rhyme, meter, imagery, or appeal to the imagination:

'Twas brillig, and the slithy toves
Did gyre and gimble in the wabe:
All mimsy were the borogoves,
And the mome raths outgrabe.

"Beware the Jabberwock, my son!
The jaws that bite, the claws that catch!
Beware the Jubjub bird, and shun
The frumious Bandersnatch!"

He took his vorpal sword in hand:
Long time the manxome foe he sought --
So rested he by the Tumtum tree,
And stood awhile in thought.

And, as in uffish thought he stood,
The Jabberwock, with eyes of flame,
Came whiffling through the tulgey wood,
And burbled as it came!

One, two! One, two! And through and through
The vorpal blade went snicker-snack!
He left it dead, and with its head
He went galumphing back.

"And, has thou slain the Jabberwock?
Come to my arms, my beamish boy!
O frabjous day! Callooh! Callay!'
He chortled in his joy.

'Twas brillig, and the slithy toves
Did gyre and gimble in the wabe;
All mimsy were the borogoves,
And the mome raths outgrabe.
 
I think you're a lost-cause, but if you're talking about without a lens, then light intensity (illuminance) will be the same regardless of sensor size. In that case, the larger sensor works the same as a solar panel, and more area simply collects more light. No gain/QE BS, just more area. Period.
More area collecting more light is an observation; how to compare what that means is typically described by a "metric". That a larger area collects more does not a metric make.
http://www.merriam-webster.com/dictionary/metric

2 : a standard of measurement

So, yes, sensor area is a metric. Did you mean sensor area is not a metric for SNR? Well, how many times has it been explained to you that Total Light Recorded (N) = Exposure x Sensor Area x QE, where SNR = N / sqrt N = sqrt N? So, again, sensor area is, indeed, a metric, just not the whole story, but that has been explained to you, explicitly, over and over and over...
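The relation quoted there can be written out directly. The units below are illustrative (exposure as photons per mm²); the structure is exactly N = Exposure x Sensor Area x QE with SNR = sqrt(N):

```python
import math

# N = Exposure x Sensor Area x QE; shot-noise SNR = N / sqrt(N) = sqrt(N).
# Units are illustrative: exposure in photons per mm^2.
def recorded_electrons(exposure_photons_per_mm2, area_mm2, qe):
    return exposure_photons_per_mm2 * area_mm2 * qe

n = recorded_electrons(100.0, 864.0, 0.5)  # full-frame-sized area, QE 50%
snr = n / math.sqrt(n)
print(n, round(snr, 1))  # 43200.0 207.8
```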
And no one yet has explained what is different regarding 200 photons over a football field vs. 100 photons over a ping-pong table.
The above is a lie. You claim that no one has explained what is different regarding 200 photons over a football field vs. 100 photons over a ping-pong table when, in fact, it has not only been explained to you, it has been explained to you, explicitly, multiple times. More light recorded means a higher SNR, and thus less noise: SNR = N / sqrt N = sqrt N, where N is the number of electrons recorded.

So, to be more specific still, even though I am under no illusion that it will make any difference to you since you *actively* ignore this over and over and over and over..., if the QE is 50%, then 200 photons will release 100 electrons, resulting in an SNR of 100 / sqrt 100 = sqrt 100 = 10:1, which is less noisy than 100 photons releasing 50 electrons, resulting in an SNR of 50 / sqrt 50 = sqrt 50 ≈ 7:1.
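The arithmetic in that paragraph, as a one-function sketch:

```python
import math

# SNR = sqrt(electrons), where electrons = photons * QE.
def snr_from_photons(photons, qe=0.5):
    electrons = photons * qe
    return math.sqrt(electrons)

print(round(snr_from_photons(200)))  # 200 photons -> 100 e- -> 10 (10:1)
print(round(snr_from_photons(100)))  # 100 photons -> 50 e-  -> 7 (~7:1)
```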
BTW, though I'm not sure why there is animosity and hostility, I apologize for it.
The animosity comes from you *actively* ignoring what has been explained to you over and over. The animosity comes from this willful ignorance being practiced over literally hundreds (if not thousands) of posts in tens of threads. This animosity would be a problem for me if I was not able to adjust my perspective to view it from the standpoint of "entertainment".
 
And I get that. And I am saying that it works because a smaller sensor has less "gain"
Please do not misuse the word gain - it just causes confusion. It may well mean what you think it means in some contexts, but not in the context of image sensors. Gain is simply the amount of voltage per electron.

The size of the image sensor does not dictate gain, nor does changing the size of the sensor change it.

Why not use the same terminology with everyone else in the room or what the industry uses? It would make discussion easier and clearer.
so it requires stronger signal to have the same output level.
This is again very confusing. The sensor size is not relevant to the SNR if the amount of light is fixed in this context. In the context of formats SNR is simply a function of the number of photons captured.
Sorry that it's confusing; think about what a sensor does - it converts incident photons to photoelectrons. If the Q.E. is the same at the pixel level, what accounts for more photoelectrons from a larger sensor?
Do you have any idea what QE is? It simply tells us how many electrons are excited by a photon hitting a photodiode - a typical number is in the ballpark of 0.5 (or 50%). It's not something "at pixel level" and something else "at some other mystical level".

The reason why bigger sensors may collect more light is that they are bigger. Why is that so hard to understand? A bigger sensor also has a larger signal-holding capacity (meaning it can collect more photons before overexposure).
So you say it's because the larger sensor collects more light, which is an obvious observation but not a metric.
Let's see some definitions of metric:

"A system or standard of measurement" .

"Often, metrics. a standard for measuring or evaluating something, especially one that uses figures or statistics"

"a standard of measurement"

Why do you incorrectly think that units of area are not metrics?

So I'm suggesting an analogous metric from an antenna, "gain", which is determined by the size and directivity of the antenna design.
Why the fixation on antennae? They are pointy, flexible and hard; image sensors are flat and stable.

How about talking about image sensors and signal and noise?

There is no such sensor-size-dependent "gain" in image sensors.

If we have sensor A with one pixel and sensor B with two pixels that are 100% identical to the pixel of sensor A, sensor B will collect twice the light with the same exposure settings and have a 1.41 times larger SNR (which we can measure in the temporal domain to be sure).

Anyhow:
  • A pixel collects light as electrons - this is all the information we get
  • It's converted to voltage according to the design parameters of the sensor ("conversion gain"). No new information content appears.
  • A programmable gain amplifier (PGA) may amplify the signal in the analogue domain - this is done to reduce the influence of noise from the next step in the imaging chain. It does not add to the information content of the data.
  • An analogue-to-digital converter (ADC) converts the signal into digital numbers. No new information is added to the signal.
At which point in the above list do you think the "mystical sensor-size-dependent antenna gain" appears and influences the information content of the data?

If sensor A has a signal of 100 photons and sensor B has a signal of 200 photons, regardless of the size of the sensors, sensor B has the higher SNR.
200 photons over a football field may have more SNR than 100 photons over a ping-pong table, but what does it mean?
I've told you that already. Please read what I write - it's boring to repeat. How large the capturing device is, is irrelevant in this context. You just have 100 or 200 photons collected.
  • A bigger sensor may collect more light - size matters here
  • Signal and noise are simply metrics of the data that has been collected - at this point, size is not part of the function in any form or shape, ping pong or not.
  • For output - the print - size is a relevant part of the visual impact of noise on the observer
Please think hard of the three points above.

Anyhow, here is an example of data from two image sensors - one the size of a ping-pong table, the other the size of a football field.

S(pingpong) = [0.033,0.659,0.873,0.735,0.992,0.352,0.543,0.479,0.28,0.955,0.425,0.129,0.86,0.196,0.448,0.269,0.821,0.746,0.81,0.121,0.686,0.324,0.314,0.908,0.171,0.628,0.57,0.91,0.953,0.958,0.88,0.41,0.328,0.109,0.718,0.89,0.913,0.083,0.342,0.392,0.125,0.5,0.668,0.61,0.928,0.596,0.666,0.138,0.498,0.616,0.385,0.833,0.16,0.099,0.884,0.985,0.385,0.675,0.683,0.174,0.087,0.187,0.294,0.678,0.633,0.396,0.199,0.404,0.307,0.609,0.436,0.105,0.674,0.232,0.828,0.116,0.19,0.497,0.333,0.515,0.923,0.911,0.259,0.466,0.004,0.913,0.321,0.456,0.742,0.964,0.024,0.683,0.106,0.867,0.842,0.66,0.359,0.785,0.922,0.345]

S(football) = [0.217,0.4,0.28,0.905,0.871,0.844,0.762,0.968,0.641,0.873,0.116,0.991,0.528,0.275,0.676,0.462,0.006,0.257,0.137,0.409,0.592,0.411,0.661,0.284,0.356,0.38,0.634,0.08,0.385,0.696,0.243,0.295,0.114,0.535,0.836,0.221,0.048,0.756,0.37,0.561,0.739,0.316,0.667,0.477,0.8,0.091,0.38,0.423,0.466,0.416,0.361,0.665,0.181,0.527,0.645,0.478,0.536,0.216,0.585,0.05,0.771,0.424,0.784,0.177,0.254,1.0,0.772,0.893,0.834,0.31,0.6,0.053,0.184,0.68,0.17,0.671,0.573,0.564,0.094,0.912,0.323,0.065,0.764,0.644,0.752,0.221,0.946,0.851,0.87,0.705,0.564,0.494,0.849,0.472,0.334,0.238,0.081,0.958,0.435,0.192]

I don't see any inherent difference between the information captured by the football field and the data captured by the ping-pong table, do you? Both the mean and the standard deviation are similar, and would be even more similar if I had bothered to use more photons, but I didn't want to fill this post with numbers.
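The same point can be reproduced with simulated data. The two lists above are specific samples; the sketch below just draws fresh uniform [0,1] samples (seeded for repeatability) and shows that nothing in their statistics depends on which "sensor" they are attributed to:

```python
import random
import statistics

# Two sets of uniform [0,1] samples, arbitrarily labeled by "sensor" size.
random.seed(42)  # seeded so the run is repeatable
s_pingpong = [random.random() for _ in range(100)]
s_football = [random.random() for _ in range(100)]

for name, s in (("pingpong", s_pingpong), ("football", s_football)):
    print(name, round(statistics.mean(s), 2), round(statistics.stdev(s), 2))
# Both means come out near 0.5 and both stdevs near 0.29 -- nothing in the
# numbers reveals the size of the "sensor" that collected them.
```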
 
 
Number of photons per second per square millimeter goes with f-stop (and scene illumination, obviously). So, in the same period (1/shutter speed), the larger sensor will have more photons falling on it than a smaller sensor, given the same f-stop of the lens in front of it.

So, for the same scene, the same shutter speed, and the same f-stop, a larger sensor will have more photons hitting it.

Given the same quantum efficiency, that will generate more electrons and thus lower noise.

A larger sensor is "less noisy" for this reason - it collects more light at the same f-stop.

To equalize the number of photons collected by different sized sensors, one can change the f-stop in proportion to the sensor size. A 2x larger (linear dimension) sensor can withstand a 2x larger f-stop (f/4 versus f/2, for example) and collect the same light, thus producing the same noise. Thus, we say these two f-stops are "equivalent" on these different formats. As a bonus, they also generate the same DOF from the same field-of-view, making them even more equivalent. Further, they create the same diffraction-softening, further demonstrating their equivalence.

That's it. No big deal. Same as teleconverters.
 
Rereading this post, I'm still rather flummoxed by the level of animosity and hostility for suggesting a metric for a sensor :-(
The metrics have been given, and you ignore them. Thus the animosity. Once again, here are the relevant sensor metrics (feel free to consult the dictionary for what a metric is):
  • Area
  • QE (quantum efficiency -- the proportion of light falling on a sensor that is recorded)
  • Electronic noise (the additional noise added by the sensor and supporting hardware)
  • pixel count
  • microlenses
  • CFA
BTW, it took none other than Albert Einstein himself to explain the photoelectric effect. It may seem easy now, but it was not. BTW, he got the Nobel Prize for his theory of the photoelectric effect. This was big then and still is.
The only relevance the photoelectric effect has in this "discussion" is that one photon releases at most one electron.
 
And I get that. And I am saying that it works because a smaller sensor has less "gain"
Please do not misuse the word gain - it just causes confusion. It may well mean what you think it means in some contexts, but not in the context of image sensors. Gain is simply the amount of voltage per electron.

The size of the image sensor does not dictate gain, nor does changing the size of the sensor change it.

Why not use the same terminology with everyone else in the room or what the industry uses? It would make discussion easier and clearer.
so it requires stronger signal to have the same output level.
This is again very confusing. The sensor size is not relevant to the SNR if the amount of light is fixed in this context. In the context of formats SNR is simply a function of the number of photons captured.
Sorry that it's confusing; think about what a sensor does - it converts incident photons to photoelectrons. If the Q.E. is the same at the pixel level, what accounts for more photoelectrons from a larger sensor?
Do you have any idea what QE is? It simply tells us how many electrons are excited by a photon hitting a photodiode
Yes, known as "photoelectric effect", of the theory which none other than Einstein won Nobel prize. P-N junction was not invented yet, however.
- a typical number is in the ballpark of 0.5 (or 50%). It's not something "at pixel level" and something else "at some other mystical level".

The reason why bigger sensors may collect more light is because they are bigger.
Yes.
Why is that so hard to understand?
Not hard.
A bigger sensor also has a larger signal-holding capacity (meaning it can collect more photons before overexposure).
Yes. "Collecting more photons" is OK as a figurative expression, but photons are not collected and stored like water in a barrel. They're converted; there are no more photons.


So you say it's because the larger sensor collects more light, which is an obvious observation but not a metric.
Let's see some definitions of metric:

"A system or standard of measurement" .

"Often, metrics. a standard for measuring or evaluating something, especially one that uses figures or statistics"

"a standard of measurement"
Yes, a better metric makes comparison more meaningful.

Why do you incorrectly think that units of area are not metrics?
Is it relevant?
So I'm suggesting an analogous metric from an antenna: "gain", which is determined by the size and directivity of the antenna design.
Why the fixation on antennae? They are pointy, flexible and hard; image sensors are flat and stable.
A Rubber Ducky is not the only type of antenna, but the analogy lies in the fact that an antenna converts EM energy to an electrical signal, just as an imaging sensor converts EM energy to an electrical signal.

How about talking about image sensors and signal and noise?
Because Radio Communications was fixated on signal and noise long before imaging sensors existed.

There is no such sensor-size-dependent "gain" in image sensors.

If we have sensor A with one pixel and sensor B with two pixels that are 100% identical to the pixel of sensor A, sensor B will collect twice the light with the same exposure settings and have a 1.41 times larger SNR (which we can measure in the temporal domain to be sure).
What I am telling you is that it's more consistent with other engineering disciplines to think of that as more "gain" than more "SNR", because the two pixels will generate twice the photoelectrons of one pixel, just as an amplifier with 2x gain outputs twice the voltage. No SNR needs to be considered yet.

Anyhow:
  • Pixel collects light as electrons - this is all the information we get
  • It's converted to voltage according to the design parameters of the sensor ("conversion gain"). No new information content appears.
  • A programmable gain amplifier (PGA) may amplify the signal in the analogue domain - this is done to reduce the influence of noise from the next step in the imaging chain. It does not add to the information content of the data.
  • An analogue-to-digital converter (ADC) converts the signal into digital numbers. No new information is added to the signal.
I realize that across the pond analogue == analog; I meant analogue as in "in a similar manner".
At which point in the above list do you think (or wonder) the "mystical sensor-size-dependent antenna gain" appears and influences the information content of the data?
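The four steps above can be sketched end to end. Every parameter here is an assumed design value for illustration, not that of any real sensor; the point is that sensor area appears in none of the steps:

```python
QE = 0.5                 # quantum efficiency, bounded by 1 (assumed)
CONVERSION_GAIN = 60e-6  # volts per electron (assumed)
PGA_GAIN = 4.0           # programmable analogue gain setting (assumed)
ADC_FULL_SCALE = 1.0     # ADC input range in volts (assumed)
ADC_BITS = 12

def read_out(photons):
    electrons = round(photons * QE)       # step 1: all the information we get
    volts = electrons * CONVERSION_GAIN   # step 2: conversion gain
    volts *= PGA_GAIN                     # step 3: PGA scales, adds no information
    code = round(volts / ADC_FULL_SCALE * (2**ADC_BITS - 1))
    return min(code, 2**ADC_BITS - 1)     # step 4: ADC quantises, adds no information

print(read_out(2000))  # 983: nowhere did the sensor's area enter the chain
```

Each step is a fixed function of photon count alone; a bigger sensor changes the photon count it receives, not any gain term in this chain.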


The reason Radio Engineers "invented" SNR and antenna gain is to predict whether the receiving system will be able to receive the "information content" being transmitted.
If sensor A has signal of 100 photons and sensor B has signal of 200 photons, regardless of the size of the sensors sensor B has higher SNR.
200 photons over a football field may have a higher SNR than 100 photons over a ping-pong table, but what does that mean?
I've told you that already. Please read what I write - it's boring to repeat. How large the capturing device is, is irrelevant in this context. You just have 100 or 200 photons collected.
  • Bigger sensor may collect more light - size matters here
  • Signal and noise are simply metrics of data that has been collected - at this point size is not part of the function in any form or shape, ping pong or not.
  • For output - the print - size is a relevant part of the visual impact of noise to the observer
Please think hard of the three points above.
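The arithmetic behind the second point is just shot-noise statistics: for Poisson-limited light, signal is N and noise is sqrt(N), so SNR = sqrt(N) regardless of the area over which the N photons landed. A tiny sketch:

```python
import math

def shot_noise_snr(photons):
    # Photon shot noise is Poisson: signal = N, noise = sqrt(N),
    # so SNR = N / sqrt(N) = sqrt(N). Sensor area appears nowhere.
    return photons / math.sqrt(photons)

print(shot_noise_snr(100))  # 10.0
print(shot_noise_snr(200))  # ~14.1 - higher, football field or ping-pong table
```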
And please think about what the Radio Engineers were trying to do: transmit "information content". That is why SNR and antenna gain were important to them, and in an analogous manner, an imaging sensor is very much like an antenna.

Anyhow, here is an example of data from two image sensors - one the size of a ping pong table, the other the size of a football field.

...I don't see any inherent difference between the information captured by the football field and the data captured by the ping pong table, do you?
Yes, there is an obvious difference; the football field will be much darker than the ping-pong table for the same number of photons from each.
Both mean and standard deviation are similar and would be even more similar if I had bothered to use more photons, but I didn't want to fill this post with numbers.
A definition is not necessarily the truth...


--
Abe R. Ration - amateur photographer, amateur armchair scientist, amateur camera buff
http://aberration43mm.wordpress.com/
 
