Detail Man
Dear Edward,

From the above-referenced paper (Section 2.1, PDF page 3, journal page 453), where the authors are clearly speaking very specifically about "pixel-level" temporal measurements:

As most know, I hold a contrary view about pixel size vs. SNR.
... to make sure I have not missed anything, I did a Google search on "noise camera pixel size" and went to the sites that carried some level of authority:
Stanford paper - larger pixels = better SNR
http://white.stanford.edu/~brian/papers/pdc/pixelSize_SPIE00.pdf
DR increases roughly as the square root of pixel size, since both C and reset noise (kTC) increase approximately linearly with pixel size.
True only to the extent that all of the internal readout/ADC noise sources are random in nature.
SNR also increases roughly as the square root of pixel size since the RMS shot noise increases as the square root of the signal. These curves demonstrate the advantages of choosing a large pixel.
True only to the extent that the Photon Shot Noise existing within the light itself dominates noise.
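As a quick sanity check on that caveat, here is a minimal sketch (in Python, with a purely hypothetical photon-flux figure of my own choosing) of how SNR scales under the assumption that photon shot noise dominates all other noise sources:

```python
import math

def shot_limited_snr(area_um2, photon_flux=1000.0):
    """SNR of a shot-noise-limited photosite.

    Assumes signal electrons scale linearly with collection area
    (photon_flux is a hypothetical illustration value, in e-/um^2
    per exposure), and that the only noise present is photon shot
    noise, whose RMS value equals sqrt(signal).
    """
    signal = photon_flux * area_um2
    return signal / math.sqrt(signal)  # = sqrt(signal)

# Quadrupling the collection area doubles the SNR (square-root law):
snr_small = shot_limited_snr(4.0)
snr_large = shot_limited_snr(16.0)
print(snr_large / snr_small)  # -> 2.0
```

Note that nothing in the arithmetic cares whether area_um2 describes one photosite or an entire active-area - which is precisely the "equivalence" at issue.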
.
I attempt (for the Nth time, it seems) to inspire you to squarely face the following "equivalence". In the quoted text from the Stanford paper above, I substitute every occurrence of the term "pixel" with the phrase "image-sensor active-area" (shown in [brackets] below to mark the changes made):
DR increases roughly as the square root of [image-sensor active-area] size, since both C and reset noise (kTC) increase approximately linearly with [image-sensor active-area] size.
SNR also increases roughly as the square root of [image-sensor active-area] size since the RMS shot noise increases as the square root of the signal. These curves demonstrate the advantages of choosing a large [image-sensor active-area].
Remaining (for the moment) in the world of temporal noise measurements that such single-photosite analysis necessitates, explain to us why the above [substitutions] would in any way be different when applied to image-sensor active-area sizes of arrays of multiple photosites.
.
Enter the spatial (inter-photosite measurement) domain:
With microlens array assemblies, 100% optical fill-factor is not an unreasonable assumption.
In the spatial domain (as opposed to the temporal domain - the only domain possible with single photosite analysis), explain to us why the above [substitutions] would in any way be different when applied to image-sensor active-area sizes of arrays of multiple photosites.
You have never responded to that query (put to you specifically on several previous occasions) ...
If you cannot identify what differences demonstrably exist between "individual photosite size" and "image-sensor active-area sizes of arrays of multiple photosites" (as those phrases are used in the original quotes, and in my modified versions, appearing above), then no meaningful case whatsoever can be made for any sort of unique "primacy" of single-photosite analysis.
What matters in analyzing image-sensor performance are spatial measurements (inter-photosite measurements performed over some given measurement time), not temporal single-photosite measurements.
RAW photosite data relates to image-data formed from an image-sensor active-area consisting of an array of multiple photosites. It is not (in applications of interest) about individual photosite output - that is, unless we are discussing a single-photosite ("cyclops") imaging device using one photodetector.
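The spatial-aggregation point can be sketched analytically. This is a minimal illustration assuming 100% fill-factor, negligible read noise (so that photon shot noise dominates), and purely hypothetical electron counts:

```python
import math

def snr_single(total_electrons):
    """Shot-noise-limited SNR of one photosite collecting
    total_electrons: signal / sqrt(signal) = sqrt(signal)."""
    return math.sqrt(total_electrons)

def snr_array(n_photosites, electrons_each):
    """SNR of the summed output of n independent shot-noise-limited
    photosites: signals add linearly, noise variances add, so
    SNR = (n * e) / sqrt(n * e) = sqrt(n * e)."""
    total = n_photosites * electrons_each
    return total / math.sqrt(total)

# One large photosite vs. a 4x4 array of small photosites covering
# the same total active-area under the same illumination - the two
# printed values are identical (sqrt(16000), about 126.5):
print(snr_single(16000))
print(snr_array(16, 1000))
```

The arithmetic is indifferent to how the active-area is subdivided, which is why the "pixel" vs. "image-sensor active-area" substitution above changes nothing.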
Well, it looks like you will never answer the above query. I do understand why. There is no answer that differentiates all that "monster pixel worship" from "monster sensor worship" (and the dreaded "total light"). Since that equivalence does indeed hold (at 100% fill-factor, anyway, down to around 2-micron pixel size or so), "pixel size" pretty much relates to spatial frequency resolution only.
Note that each and every image-sensor design is a different creation. Thus, Read Noise and Full Well Capacity (the determinants of "engineering Dynamic Range") depend more on the particular design and process variables than on individual photosite-aperture size itself.
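To make that concrete, here is a small sketch of "engineering Dynamic Range" computed from Full Well Capacity and Read Noise (the figures used are hypothetical design values, not measurements from any particular sensor):

```python
import math

def engineering_dr_db(full_well_e, read_noise_e):
    """Engineering dynamic range in dB: 20 * log10(FWC / read noise)."""
    return 20.0 * math.log10(full_well_e / read_noise_e)

def engineering_dr_stops(full_well_e, read_noise_e):
    """The same ratio expressed in photographic stops (log base 2)."""
    return math.log2(full_well_e / read_noise_e)

# Two hypothetical designs with very different full wells can land
# on the same DR if design/process choices preserve the ratio of
# full well capacity to read noise:
print(engineering_dr_stops(60000, 3.0))  # ~14.29 stops
print(engineering_dr_stops(30000, 1.5))  # ~14.29 stops (same ratio)
```

It is the FWC-to-read-noise *ratio* (a design/process outcome) that sets "engineering DR", not photosite-aperture size as such.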
Google is more than just "your friend". It also has the ability to humble one considerably, and to reveal just how little one (self included) may know about a subject until one thinks long and hard about what is really going on. Yours truly has been thus humbled many, many times.
.
Demonstrable realities:
There are, in fact, a couple of tangible (as in "real") areas where "big pixel enthusiasts" can actually "hang their hat" on solid ground, and real, actual concerns (as opposed to vaporous floobydust).
(1) For sub-micron photosites (unless specific fabrication processes, which do not yet appear to have been implemented, are employed), there is the matter of Random Telegraph Noise in the MOSFET source-follower amps, which actually *does* present the most significant technological barrier to the "honey, I shrunk the pixels again" bit. Read Eric Fossum's posts as well as Wang's paper (quoted by me) in the thread where this post exists:
http://www.dpreview.com/forums/post/52427150
(2) Lens-system diffraction effects do indeed "exact a price" when it comes to spatial frequency resolution (the magnitude of the MTF response) for smaller photosite-apertures:
http://www.dpreview.com/forums/thread/3475094
However, note that "big pixels" themselves exact a very similar spatial frequency resolution "cost". Therefore, pixels can just as well be "too big" as they can be "too small". Think about it sometime.
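For scale, a one-line sketch of the first-null Airy disk diameter, 2.44 * lambda * N (the f-number and wavelength here are merely illustrative choices):

```python
def airy_disk_diameter_um(f_number, wavelength_nm=550.0):
    """Diameter (in microns) of the first Airy null: 2.44 * lambda * N,
    with lambda converted from nm to microns."""
    return 2.44 * (wavelength_nm * 1e-3) * f_number

# At f/8 in green light the Airy disk spans roughly 10.7 microns -
# larger than many photosite pitches, so diffraction (rather than
# photosite aperture) can set the spatial-frequency resolution limit:
print(round(airy_disk_diameter_um(8.0), 2))  # -> 10.74
```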
.
That's all, folks,
DM ...
