spider-mario
Senior Member
>>> No. Exposure is signal per unit area. The same exposure will saturate all sensors with the same QE at the same point.

>> I have never denied that. But exposure is not what I meant by “light”; photons are.

> But that's not what you see in the image, that's only what's recorded. The image is just a number code that's reproduced by a printer or display.

Yes, and it just so happens that the convention that we often use to quantify how the recorded information maps to a reproduced tone (ISO 12232) is defined in terms of focal plane exposure. But it wouldn’t have to be the case. Neither is really what you see in the image.
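To put rough numbers on the photons-versus-exposure distinction, here is a quick Python sketch; the photon density and sensor areas are assumptions chosen for illustration, not measurements from any particular camera.

```python
# Illustrative only: assumed photon density and sensor areas.
# At equal focal plane exposure the photon count per unit area is the same,
# so the total number of photons collected scales with sensor area.

photons_per_um2 = 2500                                 # assumed density near clipping
sensor_areas_mm2 = {"Four Thirds": 225, "full frame": 864}

for name, area_mm2 in sensor_areas_mm2.items():
    total_photons = photons_per_um2 * area_mm2 * 1e6   # 1 mm^2 = 1e6 um^2
    print(f"{name}: {total_photons:.2e} photons over the whole frame")
```

Same exposure, same photon density, but roughly 3.8× as many photons in total on the larger sensor.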
> Pure white is the brightest you are going to get, whatever the camera records.

What is white?
>> It seems logical that looking at light per unit area would appear to nullify a linear advantage of a larger area.

> No, because with a larger area, you record more light overall, so the signal-to-noise ratio is higher over the total area of the image.

Yes, that’s exactly my point. It would appear to nullify the advantage, when in fact “it generally is the case that larger sensors can hold more light before clipping, for the image as a whole or for any fixed fraction of it”.
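A small sketch of that “more light overall” point, assuming photon shot noise is the only noise source; the photon counts are made-up illustrative values.

```python
import math

# Shot-noise-only model: SNR scales with the square root of the photon count.
photons_small = 1.0e9                  # assumed total photons on the smaller sensor
photons_large = 3.84 * photons_small   # ~3.84x the area (2x crop) at the same exposure

snr_small = math.sqrt(photons_small)
snr_large = math.sqrt(photons_large)
print(f"whole-image SNR advantage of the larger sensor: {snr_large / snr_small:.2f}x")
```

That roughly 2× SNR over the whole image is what the extra area buys at equal exposure.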
>> My point is precisely that exposure is probably not the relevant thing to look at for such comparisons, if only because equal exposures on differently-sized sensors do not produce equivalent images.

> Equivalent in what terms? And what do you mean by 'exposure'?

“Equivalent” in terms of angle of view, DOF, photon shot noise, diffraction, motion blur. “Exposure” as in “focal plane exposure”, generally expressed in lx·s, which I believe is the same definition that you are using.
So, if you have the same exposure on a larger sensor, you have at least one of: a wider angle of view, a shallower DOF, a longer exposure time. If you equalize those parameters, you have similar noise and a lower exposure on the larger sensor (thus more room for highlights).
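As a concrete, hypothetical example with an assumed 2× crop factor between the two formats (the starting settings are arbitrary):

```python
# Hypothetical settings; the 2x crop factor and starting values are assumptions.
crop = 2.0
small = {"focal_mm": 25.0, "f_number": 2.8, "shutter_s": 1 / 100}

# Equivalent settings on the larger format: same angle of view, same DOF,
# same shutter speed, and therefore the same total light reaching the sensor.
large = {
    "focal_mm": small["focal_mm"] * crop,
    "f_number": small["f_number"] * crop,
    "shutter_s": small["shutter_s"],
}

# At a given scene luminance and shutter speed, focal plane exposure
# scales with 1 / f_number^2, so the larger sensor sees a lower exposure.
exposure_ratio = (small["f_number"] / large["f_number"]) ** 2
print(f"exposure on the larger sensor: {exposure_ratio:.2f}x")   # 0.25x, i.e. two stops less
```

Same total light, so similar photon shot noise, but two stops further from clipping if the per-area saturation level is similar.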
>>> The clipping point is the same. On a 14-bit sensor, it’s 16383 units, which translates to RGB 255.

>> On most current sensors, it’s 2000-3000 electrons per µm². Does it really mean much that they all use most of their 14-bit range to represent the number of electrons from an individual photosite? Of course they do, why use so many bits otherwise?

> They use more bits to reduce quantisation error.

Which would be pointless if they didn’t use the whole range, right? Hence why they use approximately the same numerical range for a given bit depth, not because they clip at the same point.
> But that's bits per pixel, not bits per square micrometer. And some FF sensors have 100,000 electrons per photosite.

Yes, photosites of 35 µm². What is your point?
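Those figures fit together. A back-of-the-envelope sketch, using assumed round numbers (a per-area full well towards the top of the 2000-3000 e-/µm² range, and a 35 µm² photosite):

```python
# Back-of-the-envelope figures; both inputs are assumptions, not measurements.
full_well_per_um2 = 2850          # electrons per um^2, within the 2000-3000 range above
photosite_area_um2 = 35           # a full-frame sensor with large photosites
full_well = full_well_per_um2 * photosite_area_um2   # ~100,000 electrons

raw_max = 2**14 - 1               # 16383 on a 14-bit ADC
gain = full_well / raw_max        # electrons represented by one raw unit at base ISO
print(f"full well ~{full_well} e-, ~{gain:.1f} e- per raw unit")
```

The raw numbers top out near 16383 either way; what differs between sensors is how many electrons each raw unit stands for.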
>> (I say “most” of the range because it turns out that the RAWs from my G1 X III only go up to 15871, and those from my K-70, to 16313.)

> Black level offset varies between cameras, and Canon's are generally higher, but the difference is not significant. The question is: what does the maximum number represent when converted to image RGB?

That would depend on how you convert it.
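For instance, here is a minimal sketch of the usual normalisation step before any tone curve; the black and white levels are assumed example values, not any specific camera's metadata.

```python
def normalise(raw_value, black_level=512, white_level=15871):
    """Map a raw sample to [0, 1]; the chosen white level decides what ends up at RGB 255."""
    value = (raw_value - black_level) / (white_level - black_level)
    return min(max(value, 0.0), 1.0)

# Both values land on 255 after scaling to 8 bits, because of the white-level choice:
print(round(normalise(15871) * 255))   # 255
print(round(normalise(16383) * 255))   # 255 (clamped)
```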
>>> A larger signal does not make the exposure brighter, but the larger signal from a larger sensor DOES have less noise. With less noise you can underexpose more to preserve highlights and adjust in processing.

>> “Underexpose” is such a loaded term. If a given exposure gives you the same noise as an equivalent (higher) exposure on a smaller sensor, and that exposure was fine on the small sensor, then what makes the lower (but still equivalent) exposure on the larger sensor “underexposed”? “Under” compared to what?

> But that isn't the answer, so what's your point?

What is the answer, then?
If the answer is “compared to what the sensor could have held”, then I believe that my point is made.
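In numbers, with assumed illustrative values and the same 2× crop factor as before:

```python
import math

# Assumed per-area saturation level and collected signal; illustrative only.
saturation_per_um2 = 2500              # electrons per um^2 at clipping
collected_small = 2000                 # electrons per um^2 on the smaller sensor
collected_large = collected_small / 4  # equivalent shot on a sensor with 4x the area

for name, collected in [("smaller sensor", collected_small),
                        ("larger sensor", collected_large)]:
    headroom_stops = math.log2(saturation_per_um2 / collected)
    print(f"{name}: {headroom_stops:.1f} stops of highlight headroom")
```

Same total light and the same whole-image shot noise, but roughly two extra stops before anything clips, which is all I mean by “what the sensor could have held”.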


