PS: The full-frame sensor has 2.32 times the area of APS-C, or 1.52
times the linear dimensions. Besides the 1.52x gain in resolution, the
2.32x larger area reduces the noise power by a factor of 2.32 through
averaging.
I'm not sure what you mean, but this sounds like a misconception.
For a typical solid-state sensor (or most any other conceivable technology), the statistical noise IN EACH PIXEL would be inversely proportional to the square root of the total photon fluence captured by each pixel of the sensor during the exposure (see notes below). That is, the standard deviation of the random fluctuation in charge in any collection of similarly exposed pixels, expressed as a percentage of the pixel charge, would be inversely proportional to the square root of the per-pixel photon fluence.
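To make that scaling concrete, here is a minimal Python/NumPy sketch of pure Poisson shot noise. The photon counts are arbitrary illustrative numbers, not measurements from any real sensor, and everything else (dark current, read noise, the items in the notes below) is ignored:

import numpy as np

rng = np.random.default_rng(0)

# Many identically illuminated pixels, at two different per-pixel
# photon counts (arbitrary illustrative values).
for mean_photons in (1_000, 10_000):
    pixels = rng.poisson(mean_photons, size=1_000_000)
    rel_sigma = pixels.std() / pixels.mean()   # relative fluctuation
    print(mean_photons, rel_sigma, 1 / np.sqrt(mean_photons), rel_sigma**2)

The second and third printed columns agree (relative noise goes as 1/sqrt(N) per pixel), and the last column, the squared figure, falls by 10x when the per-pixel count rises by 10x, which is the "power" version discussed next.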
If you define the "noise" in terms of the "power spectrum," as is usually done in signal analysis, then you would square this number, eliminating that square root. Then this starts to look like what you said (but see notes below). However, it is the area of the INDIVIDUAL PIXELS that matters, not the area of the detector.
So if you compare an APS-C and a full-size sensor, EACH OF WHICH HAS THE SAME NUMBER OF PIXELS, then what you say is true. But if each of those two sensors uses the same size pixels, then the noise level is the same for each sensor. (You could probably post-process the larger image to produce a lower-resolution, lower-noise image, though.)
The point is that you have to trade off resolution for noise reduction. You don't get both just by going to a larger sensor.
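To illustrate both halves of that comparison, here is another hedged NumPy sketch. It assumes equal pixel size on both sensors, pure shot noise, and a flat exposure; the 1.52x linear factor comes from your numbers above, and everything else is made up for illustration:

import numpy as np

rng = np.random.default_rng(1)
mean_photons = 1_000   # per-pixel illumination; same pixel size assumed

def rel_noise(img):
    """Standard deviation as a fraction of the mean signal."""
    return img.std() / img.mean()

# "APS-C": 1000 x 1000 pixels; "full frame": 1520 x 1520 pixels of the
# SAME size (roughly the 1.52x linear factor quoted above).
small = rng.poisson(mean_photons, size=(1000, 1000)).astype(float)
large = rng.poisson(mean_photons, size=(1520, 1520)).astype(float)
print("same pixel size:", rel_noise(small), rel_noise(large))  # ~equal

# Trade resolution for noise: average 2x2 blocks of the larger image.
binned = large.reshape(760, 2, 760, 2).mean(axis=(1, 3))
print("after 2x2 binning:", rel_noise(binned))  # about half the noise

The two unbinned images show the same per-pixel noise; only after throwing away resolution by averaging 2x2 blocks does the larger image pull ahead, which is exactly the trade-off I mean.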
The reduction of noise might even cause a more significant improvement
in image quality. There should be additional factors which, combined
together, provide such a great improvement.
Notes: As if that weren't enough complexity, I glossed over some things:
1. The noise level is also temperature dependent. For a reverse-biased PN junction, there would be an exponential factor involving the ratio of an energy scale (the bias or band-gap energy) to the thermal energy kT. (A rough numerical sketch of this follows after these notes.)
2. Noise comes from different sources. I think my discussion here applies equally to, for example, random statistical fluctuations in photon flux and noise due to external sources, such as ambient ionizing radiation that creates spurious electron-hole pairs in the pixels.
3. Diffraction effects will be smaller for the larger sensor. Some types of lens aberrations would have sensor-size-dependent effects.
4. I'm not sure whether the actual pixel values that get stored in the image file are proportional to the pixel fluence or to the square of the fluence. I do know that it is NOT exponentially dependent on the fluence (which is related to the "expose to the right" rule for digital photography).
5. My discussion implicitly assumes that the effects of each photon event on a pixel are statistically independent. That assumption is good at low ISO but gets weak as ISO increases, basically because the arrival of one photon changes the bias or charge on the detector element, which alters how it will respond to the next photon that arrives. At very high ISO, there can even be cross-talk between adjacent pixels, leading to image "bleeding."
6. Sensors have pixels sensitive to each of the three primary colors. That fact alone doesn't affect my discussion, but note too that there is typically a different number of pixels for each primary color (a typical Bayer array has twice as many green pixels as red or blue).
7. Much of my discussion also applies to old-fashioned chemical photography.
8. What else did I forget?
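Regarding note 1, here is one last hedged sketch. It uses a common rule-of-thumb model of my own choosing, dark current scaling roughly as exp(-Ea/kT), with an assumed activation energy near half the silicon band gap; none of these numbers come from any real sensor's data sheet:

import numpy as np

k_B = 8.617e-5   # Boltzmann constant in eV/K
E_a = 0.55       # assumed activation energy in eV (about half the Si band gap)

def dark_current_factor(T_kelvin, T_ref=293.0):
    """Dark-current scaling relative to a reference temperature (assumed model)."""
    return np.exp(-E_a / (k_B * T_kelvin)) / np.exp(-E_a / (k_B * T_ref))

for T in (273.0, 293.0, 313.0):
    print(T, dark_current_factor(T))

With these assumptions, cooling by 20 K cuts the dark-current factor to roughly a fifth and warming by 20 K roughly quadruples it; the exact rate depends entirely on the assumed activation energy.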
--
Thomas D. Shepard, Sc.D.