Sense and Sensitivity
R Butler | Technology | Published Sep 1, 2011
Sensitivity (ISO) in digital imaging seems to be the subject of quite a lot of confusion - it's becoming common to hear talk of manufacturers 'cheating with ISO.' So we thought it made sense to look at why sensitivity appears hard to pin down, why we use the definition we do, and how it's actually not as complicated as it can sometimes seem.
ISO in Photography
Before we get too carried away with the intricacies of ISO standards, it makes sense to step back and consider how we use sensitivity in photography. Sensitivity is the connection between the physical exposure (how much light you let in) and the brightness of the final image. As such it joins shutter speed and aperture as one of the three factors that define exposure.
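To put numbers on that relationship: sensitivity, shutter speed and aperture trade off against each other in stops. The sketch below is illustrative only - `exposure_stops` is a made-up helper, not a standard formula - but it checks that one stop less light plus one stop more ISO leaves final image brightness unchanged:

```python
import math

def exposure_stops(shutter_s, f_number, iso):
    """Relative image brightness in stops: light gathered plus sensitivity.

    Light gathered scales with shutter time and inversely with the square
    of the f-number; sensitivity is expressed relative to ISO 100.
    """
    return math.log2(shutter_s / f_number ** 2) + math.log2(iso / 100)

# Halving the shutter time (one stop less light) while doubling ISO
# lands at the same brightness:
same = math.isclose(exposure_stops(1/125, 8, 400),
                    exposure_stops(1/250, 8, 800))
print(same)  # -> True
```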
Sensors and sensitivity
For ISO to relate exposure to final image brightness, we have to think about the inherent sensitivity of the digital sensor, and this is where it risks becoming rather removed from photographic concerns.
Much of the complication arises from the fact that there is no 'correct' way of exposing a sensor. Sensors have a capacity for converting light into electrical charge - limited at the upper end by the point at which the sensor becomes saturated (and cannot convert any more photons of light into electrical charge) and extending down until the signal is drowned out by electrical noise. The upper, saturation limit of the sensor's response defines the brightest light intensity that can be turned into meaningful data in the final image. However, this doesn't tell us much about how to expose the sensor - simply exposing to retain the brightest highlights won't necessarily produce a correctly exposed image; we need to work out how to correctly expose a middle grey.
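That usable range - from noise floor to saturation - can be sketched with a toy model. Every number here (full-well capacity, read noise, quantum efficiency) is invented for illustration, not taken from any real camera:

```python
import math

# Assumed, illustrative sensor characteristics:
FULL_WELL = 40000   # electrons a photosite can hold before saturating
READ_NOISE = 4      # electrons of electrical noise at the bottom end

def photosite_signal(photons, quantum_efficiency=0.5):
    """Electrons collected for a given photon count, clipped at saturation.

    Response is linear until the well fills; beyond that, extra light
    produces no additional signal (blown highlights).
    """
    return min(photons * quantum_efficiency, FULL_WELL)

# Dynamic range is the ratio of saturation to noise floor, in stops:
print(round(math.log2(FULL_WELL / READ_NOISE), 1))  # -> 13.3
```

Note that nothing in this model says where middle grey should sit within those ~13 stops - which is exactly the problem the article goes on to describe.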
And there is an added complication before we can get to that point. Sensors respond to light in a very different way to the human visual system - they respond in a linear fashion (twice the amount of light gives twice the signal), whereas the brain doesn't interpret things that way. To turn this linear data into a convincing image, a tone curve that attempts to map the data back to the way the eyes respond has to be applied.
This tone curve converts the sensor's output to the final image brightness, which means it also defines how the sensor needs to be exposed. (In fact there is a subtle interplay between the sensor's inherent sensitivity, its dynamic range, the tone curve and the camera's metering.)
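As an illustration of how strongly a tone curve reshapes linear data, here is the standard sRGB transfer function - real cameras apply their own proprietary curves, but the principle is the same: doubling the linear signal comes out far less than twice as bright.

```python
def srgb_encode(linear):
    """Map a linear light value in [0, 1] to sRGB-encoded brightness.

    This is the standard sRGB transfer curve: near-linear for very dark
    values, then roughly a 1/2.4 power law for the rest of the range.
    """
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

print(round(srgb_encode(0.18), 3))  # 18% linear (mid-grey) -> 0.461
print(round(srgb_encode(0.36), 3))  # twice the light -> 0.634, not 0.922
```

Doubling the light adds only about a third more encoded brightness - which is why the choice of tone curve matters so much when relating exposure to the final image.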
A standard with shades of grey
So this is what ISO is defining when you use it: it's combining considerations of the sensor's sensitivity with the effects of the tone curve and metering so that you can get the correct final image brightness with your chosen exposure.
However, the standard set down by the International Organization for Standardization (ISO 12232:2006, as it happens) contains five separate definitions, each of which can produce a different answer for the same camera. Thankfully, only three of these definitions are widely used and only two, closely related definitions are used by camera makers.
ISO, courtesy of CIPA
The two definitions of ISO that are actually used by camera manufacturers (and are reported by their cameras) are based on the brightness of cameras' JPEG output. Both definitions come from standards developed by the Japanese camera trade body CIPA, which were adopted by ISO in 2006. The first definition is probably the simplest and most intuitive, and it's called Standard Output Specification. Essentially, it defines ISO as the camera behaviour that renders middle grey at the correct brightness (as we've just described and pretty much the same way as it did for film).
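In rough numerical terms, the Standard Output Specification works like this (a simplified sketch of the SOS calculation as commonly described, not a substitute for the standard's full measurement procedure; level 118 is the 8-bit sRGB value corresponding to a correctly rendered middle grey):

```python
def standard_output_sensitivity(h_midgrey_lux_seconds):
    """SOS as commonly stated: S = 10 / H, where H is the focal-plane
    exposure (in lux-seconds) at which the camera's JPEG renders
    middle grey at digital level 118 in 8-bit sRGB."""
    return 10.0 / h_midgrey_lux_seconds

# e.g. if 0.1 lx.s at the sensor yields level 118, the setting is ISO 100;
# half the exposure for the same output means twice the sensitivity:
print(standard_output_sensitivity(0.1))   # -> 100.0
print(standard_output_sensitivity(0.05))  # -> 200.0
```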
The other definition (Recommended Exposure Index) is fairly similar but is designed to accommodate multi-zone/pattern metering systems. These metering systems aren't based on trying to represent middle grey and instead aim to achieve whatever the manufacturer considers to be 'correct' exposure. As such they can't be measured because the definition is pretty much circular: whatever the camera chooses is right, by definition.
So what about the others?
The only other definition of ISO you're ever likely to encounter is one that can be used for RAW data. The problem is that it's based on a combination of the sensor's saturation point and a generic tone curve - which isn't necessarily the tone curve your camera's JPEGs or metering are based on. So, discrepancies between this figure and your camera's reported ISOs aren't the result of under- or over-reporting of ISO; they're a measure of how different your camera's tone curve is from this generic tone curve.
Why do I need to worry?
If you use the camera's JPEGs, or a RAW converter that acknowledges the manufacturer's rendering intent (and that includes many popular RAW converters), then chances are you're going to get the ISO that your camera tells you. So rather than measuring a slightly obscure aspect of sensor performance, our tests are based on the Standard Output Specification - the definition the camera manufacturers use, that your camera reports and that, chances are, you rely on.
What happens when I change the ISO?
Traditionally, ISO has been changed by amplifying the sensor's output before it is converted to digital data. However, it is also possible to mathematically manipulate the data once it has been digitised - many 'extended ISO' settings and some intermediate ISO values between full stops (e.g. 250 and 320) do just that.
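A minimal sketch of that digital manipulation - illustrative only, since no real camera's pipeline is this simple, and the 12-bit depth is an assumption:

```python
# Assumed 12-bit raw data: values run from 0 to 4095.
BIT_DEPTH_MAX = 2 ** 12 - 1

def digital_push(raw_values, stops):
    """Scale already-digitised values up by a number of stops.

    Unlike analogue amplification before the ADC, this can only
    redistribute the existing digital levels - and anything pushed
    past the maximum simply clips.
    """
    gain = 2 ** stops
    return [min(int(v * gain), BIT_DEPTH_MAX) for v in raw_values]

# A third-of-a-stop 'intermediate' ISO (e.g. 250 from a base of 200):
print(digital_push([100, 1600, 3900], 1 / 3))  # -> [125, 2015, 4095]
```

Note the last value: a bright tone near saturation clips to the maximum, which is one reason digitally derived ISO settings can cost highlight headroom.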