> snip... But sensor SIZE does affect image DR, I've never said anything
> else. But given a sensor size, then the pixel size may not affect image
> DR, particularly in how much detail you can make out from the noise N
> stops below clipping in the final image, viewed at a fixed size.
> To me, there seems to be a contradiction, or I don't get it at all. :s
>
> Everything else being equal, if we have more, smaller pixels, each
> pixel will have less DR. (DR defined as clipping level divided by the
> noise floor.) But if we downsize the pixels to a lower-Mp image, each
> new pixel will have less noise, because the noise averages out. So the
> DR increases. This is the point people are missing.

I re-read it and can't find any errors in what I wrote, but "everything
being equal" is of course not clear-cut, so is open for interpretation.
There are two separate issues:
1) How to compare fairly if the Mp count differs between two cameras.
Pixel DR is not fair.
2) What constitutes everything else being equal, or how does the
read noise, FWC etc. vary with pixel size.
> Given a certain sensor size (disregarding fill factor), let's look at a
> 3Mp and a 12Mp model for an easy pixel binning setup. While shot
> noise would be higher on the 12Mp sensor per pixel, binned (or per
> area) the difference would indeed be averaged out, let's assume
> completely. But the read noise would, everything else being equal, be 4
> times as much, given the same quality electronics.

Yes.

Let's say the 3Mp camera has a full-well capacity of 4096 electrons
and a read noise of 4 electrons. Then we have a pixel DR of 10 stops
(2^10 = 4096/4).
One notion of everything being equal would be the 12Mp pixels
having a FWC of 1024 electrons and a read noise of 4 electrons.
Then each pixel has a DR of 8 stops (2^8 = 1024/4).
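The pixel DR figures above are just the base-2 log of full-well capacity over read noise. A minimal sketch in Python, using the numbers from the example (function name is mine, purely illustrative):

```python
from math import log2

def pixel_dr_stops(full_well_e, read_noise_e):
    """Pixel DR in stops: clipping level divided by noise floor, in log2."""
    return log2(full_well_e / read_noise_e)

# Numbers from the example above:
print(pixel_dr_stops(4096, 4))  # 3Mp pixel: 10.0 stops
print(pixel_dr_stops(1024, 4))  # 12Mp pixel, same 4 e- read noise: 8.0 stops
```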
However, the point is that the read noise relates to the pixel
capacitance, which corresponds to well capacity. So absolute
read noise will tend to scale down with pixel size, roughly
in proportion to the pixel pitch.
Semiconductor expert bobn2 has written about that in
some posts, e.g. here:
http://forums.dpreview.com/forums/read.asp?forum=1000&message=28533465
So, in this case we can expect 2 electrons of read noise
with the 12Mp sensor, giving 9 stops of pixel DR (2^9 = 1024/2).
Now, many people, including you-know-who, would still say: "Look at
that pixel-stuffed 12Mp sensor, it lost a stop of DR against the 3Mp sensor,
let's have our 3Mp sensors back!"
But if the 12Mp image is downsampled to 3Mp, the (uncorrelated) read
noise will, by the laws of statistics, add in quadrature:
sqrt(2^2 + 2^2 + 2^2 + 2^2) = 2*sqrt(4) = 4 electrons, not 8. The max
possible signal is 4x1024 = 4096, and we are back at 10 stops of DR
(2^10 = 4096/4). [reference 1]
> OTOH there might be some kind of proportionality (but not
> linear) between pixel size and the read noise.

Typically, with current technology, as seen above, it IS linear in pixel pitch.
> Secondly, even though the shot noise is averaged out to a certain
> degree, if the binning is not done on the sensor itself before the
> A/D converter, potentially more noise is introduced to the equation.

bobn2 again: "Downstream amplifier noise is not dependent on pixel size.
However, if an area in a final image is made by integrating the
contribution of a larger number of pixels (i.e. a higher pixel density
sensor) the effect will be to smooth the amplifier noise contributions,
resulting in lower noise for the higher pixel density sensor."
> Fill factor is another aspect; AFAIK achieving a higher fill factor
> with a lower Mp count is easier. Yes, microlenses come to mind, but
> light is still lost between the lenses.

Some people claim it's easier to make efficient microlenses for small
pixels, which would compensate for this. In the future, big-pixel
microlenses might catch up, of course.
> So I still don't understand how sensor size affects DR. To me (putting
> Bayer aside) one binned pixel should have virtually the same DR as a
> whole sensor (not like noise, which benefits a lot from averaging).
> IOW, as I understand this, DR is related to pixel size, not sensor
> size.

From [reference 1] above, we see that binning demonstrates how
pixel DR goes up from the effect of having more pixels (whether
we bin them or just look at them better). Our eyes will do the
binning when looking at a high-res print. And having more
pixels will be advantageous for the perceived DR, or "DR per area".
Pixel DR is not what defines the final image.
EDIT: And this also shows why a larger sensor with the same
pixel density will have more "image DR".
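Following the same logic, a back-of-envelope sketch of that larger-sensor gain: with the same pixel design, a sensor with N times the area puts N times as many pixels into each patch of the final image, so uncorrelated read noise averages down by sqrt(N) and "image DR" gains half a stop per doubling of area. The function name and the rough 2.3x APS-C-to-FF area ratio are my own illustration, assuming read-noise-limited shadows:

```python
from math import log2

def image_dr_gain_stops(area_ratio):
    """Extra "image DR" from a larger sensor at the same pixel density:
    area_ratio times more pixels per patch of the final image, so
    uncorrelated noise averages down by sqrt(area_ratio)."""
    return 0.5 * log2(area_ratio)

print(image_dr_gain_stops(4))    # 4x the area: 1.0 stop more image DR
print(image_dr_gain_stops(2.3))  # roughly APS-C -> FF: ~0.6 stops
```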
And another point: software binning doesn't improve the image in
any way; on the contrary, it throws resolution away. But it demonstrates
what's there in the data all the time, so it's a useful concept.
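A quick Monte Carlo sketch of that "what's there in the data" point, simulating a dark patch with the 2 e- per-pixel read noise figure used above (the noise model and numbers are illustrative, not from any real sensor):

```python
import random

random.seed(0)

# Dark patch of a 12Mp-style frame: zero signal, 2 e- Gaussian read
# noise per pixel (illustrative numbers from the example above).
pixels = [random.gauss(0.0, 2.0) for _ in range(40000)]

# Software-bin by summing groups of four pixels (12Mp -> 3Mp).
binned = [sum(pixels[i:i + 4]) for i in range(0, len(pixels), 4)]

def stdev(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

print(stdev(pixels))  # ~2 e- per original pixel
print(stdev(binned))  # ~4 e- per binned pixel: sqrt(4)*2, not 4*2
```

The binned noise grows only as sqrt(n) while the maximum signal grows as n, which is exactly why the summed data shows the extra stop of DR.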
EDIT:

> Hmmm, so the 12Mp APS-C CMOS is Sony but the FF version not? I
> thought all Nikon sensors were made at the same place but they
> customized one part of the process.

There are allegedly some tell-tale signs/marks when a sensor is made by
Sony, if it is scrutinised under a microscope, and these marks are lacking
on the D3 sensor.
Just my two öre
Erik from Sweden