Daaaaaaaamn! Check out the D5200 sensor!

Started Jan 24, 2013 | Discussions thread
Iliah Borg Forum Pro • Posts: 24,916
Re: Aptina?

bobn2 wrote:

Jack Hogan wrote:

I believe the answer lies in a different unity gain being applied at ISO 100 versus the rest, resulting in a different G in Bob's formula: take a look at the 'relative gain' column in my table to see what I mean. Viewed from the point of view of the raw data, the sensitivity of the sensor changes differently than expected at ISO 100.

I'm not sure what you mean by 'unity gain' - the gain is the gain.

From Emil's page (I changed the symbols to reflect those in the formula you showed above)

"G = U/Iso where Iso is the ISO setting, and U is a constant (which differs for different camera models) known as the unity gain."

That formula works on the assumption that ISO is well controlled, and also that G is a simple constant, not some function of the illumination, which is what it is once the sensor characteristic becomes non-linear.

Remember that the non-linearity I am talking about would only manifest itself at the top end of the scale, near saturation level; you wouldn't see it anywhere else. Remember also that DxOmark's curves are themselves fitted to their measured data, so if the 100% saturation point was off, the shoulder might be flattened.

Yes to both comments, but if I understand correctly DxO says that they actually accurately measure the Exposure at which the sensor - the Raw data really - saturates (i.e. at the top end of the scale) for the various in-camera ISO settings. It'd be dumb to change the most accurate data points you have, no?

It might be dumb, but I guess that they do it. DxO says that they use transmissive step wedges as their targets. One variation of the test protocol would be to first find the saturation point by varying the illumination level (either with a variable-brightness source or ND filters in front of a calibrated source), then place the step wedge in front of that source to get the other readings. I suspect that this is what they do, because the data points always seem to sit at a fixed proportion of saturation.

I don't know what the spacing of the steps is, maybe 1/4 stop, in which case if the shoulder is in that last 1/4 stop it is likely to be smoothed away. Once you go into the curve-fitting business, you lose control over which data points you keep: if the top one is an outlier, it will go. I guess that they are fitting against a known function, not using regression.
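A toy numerical illustration of that smoothing effect, with entirely made-up numbers: a sensor that is linear except for a compressive shoulder in the last ~1/4 stop before saturation, sampled at 1/4-stop wedge steps and then fitted with a straight line. The fit pulls the top point back toward linearity, so the shoulder vanishes from the fitted curve.

```python
def response(exposure, sat=1.0):
    """Toy sensor: linear below 84% of saturation, half slope above (the shoulder)."""
    knee = 0.84 * sat
    if exposure <= knee:
        return exposure
    return knee + 0.5 * (exposure - knee)

# Exposures spaced 1/4 stop apart, from saturation downward
exposures = [2 ** (-k / 4) for k in range(8)]
signals = [response(x) for x in exposures]

# Ordinary least-squares straight-line fit through the sampled points
n = len(exposures)
mx = sum(exposures) / n
my = sum(signals) / n
sxx = sum((x - mx) ** 2 for x in exposures)
sxy = sum((x - mx) * (y - my) for x, y in zip(exposures, signals))
slope = sxy / sxx
intercept = my - slope * mx

# The fitted line over-predicts the measured top point: shoulder smoothed away
print(f"measured at saturation: {signals[0]:.3f}, "
      f"fit predicts: {slope * exposures[0] + intercept:.3f}")
```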

The behavior of the sensor is also dependent on the scene and stray light in the camera. Say, a better SNR formula is:

SNR = P*Q(e)*t / sqrt((P + B)*Q(e)*t + D*t + N(r)^2)

where P is the incident photon flux (photons/pixel/second), B is background photon flux, Q(e) represents the sensor quantum efficiency, t is the integration time (seconds), D is the dark current value (electrons/pixel/second), and N(r) represents read noise (electrons rms/pixel).
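The formula above translates directly into code. The example values below are hypothetical, chosen only to show how background (stray-light) flux B degrades the SNR while leaving the signal term untouched:

```python
from math import sqrt

def snr(P, B, Qe, t, D, Nr):
    """SNR = P*Qe*t / sqrt((P + B)*Qe*t + D*t + Nr^2)

    P:  incident photon flux (photons/pixel/s)
    B:  background (stray-light) photon flux (photons/pixel/s)
    Qe: quantum efficiency (0..1)
    t:  integration time (s)
    D:  dark current (electrons/pixel/s)
    Nr: read noise (electrons rms/pixel)
    """
    signal = P * Qe * t
    noise = sqrt((P + B) * Qe * t + D * t + Nr ** 2)
    return signal / noise

# Hypothetical numbers: same scene, with and without stray light in the box
clean = snr(P=10_000, B=0, Qe=0.5, t=0.1, D=1.0, Nr=3.0)
flare = snr(P=10_000, B=2_000, Qe=0.5, t=0.1, D=1.0, Nr=3.0)
print(f"SNR without stray light: {clean:.1f}, with stray light: {flare:.1f}")
```

Note that B adds only to the noise term under the square root, which is why flare skews measurements near saturation without being visible in the mean signal.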

Out of curiosity I tried a measurement protocol which differs from my usual one (shining a very narrow, focused, monochromatic green beam onto the sensor and controlling the time the light is on; 24 points in the last stop before saturation), and instead used an ND wheel at the port of an integrating sphere with a halogen light. First, if saturation is measured without a filter (given that the sphere is used), there is a discontinuity in the curve. Second, I observed more flare in the camera box compared to my usual method, and the results are skewed. I think DxO's results are more qualitative than quantitative; if they want to do a quantitative job, their measurement setup needs refactoring.
