Sensitivity (ISO) in digital imaging seems to be the subject of quite a lot of confusion - it's becoming common to hear talk of manufacturers 'cheating with ISO.' So we thought it made sense to look at why sensitivity appears hard to pin down, why we use the definition we do, and how it's actually not as complicated as it can sometimes seem.

ISO in Photography

Before we get too carried away with the intricacies of ISO standards, it makes sense to step back and consider how we use sensitivity in photography. Sensitivity is the connection between the physical exposure (how much light you let in) and the brightness of the final image. However, any brightening is applied after the image has been captured on the sensor, so scene illumination, shutter speed and aperture end up playing the biggest roles in determining image quality.
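The 'how much light you let in' part can be made concrete. A rough sketch, using the commonly quoted ISO 12232 approximation for focal-plane exposure (the constant q ≈ 0.65 and the specific numbers below are illustrative, not tied to any particular camera):

```python
# Focal-plane exposure in lux-seconds, per the commonly quoted
# ISO 12232 approximation H = q * L * t / N^2, where L is scene
# luminance (cd/m^2), t is shutter time (s), N is the f-number and
# q ~= 0.65 lumps together lens transmission, vignetting and so on.
def exposure(luminance, shutter_s, f_number, q=0.65):
    return q * luminance * shutter_s / (f_number ** 2)

# Halving the shutter time halves the exposure; ISO only decides how
# brightly that exposure is then rendered.
assert abs(exposure(1000, 1/125, 8) / exposure(1000, 1/250, 8) - 2) < 1e-9
```

Note that ISO appears nowhere in this calculation: it enters only afterwards, when the camera decides how bright to render the exposure it received.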

Sensors and sensitivity

For ISO to relate exposure to final image brightness, we have to think about the inherent sensitivity of the digital sensor, and this is where it risks becoming rather removed from photographic concerns.

Much of the complication arises from the fact that there is no 'correct' way of exposing a sensor. Sensors have a capacity for converting light into electrical charge - limited at the upper end by the point at which the sensor becomes saturated (and cannot convert any more photons of light into electrical charge) and extending down until the signal is drowned out by electrical noise. The upper, saturation limit of the sensor's response defines the brightest light intensity that can be turned into meaningful data in the final image.
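A toy sketch of that usable range, with purely illustrative numbers for well capacity and read noise (real sensors vary widely):

```python
import math

FULL_WELL = 50_000   # electron capacity at saturation (illustrative figure)
READ_NOISE = 5       # read-noise floor in electrons (illustrative figure)

def electrons(photons, qe=0.5):
    """Photons to stored charge: linear until the well saturates."""
    return min(photons * qe, FULL_WELL)

# The response is linear below saturation...
assert electrons(20_000) == 2 * electrons(10_000)
# ...but above it, extra light produces no extra data:
assert electrons(200_000) == electrons(400_000) == FULL_WELL

# Usable range between the noise floor and saturation, in stops:
dynamic_range_stops = math.log2(FULL_WELL / READ_NOISE)   # roughly 13.3
```

Nothing in this model says where 'middle grey' should sit between those two limits - that choice is made later, by the tone curve and metering.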

However, although exposing to this limit (a technique called 'exposing to the right') can be an effective way to expose a Raw file, few (if any) cameras provide the necessary tools to expose in this manner. Cameras' light meters (even 'highlight' metering modes) are based on JPEG output, and the majority of them, per the ISO standard, are designed to 'correctly' expose a middle grey.

And there is an added complication before we can get to that point. To make an image that looks good on a monitor, or as a print, curves are applied that compensate for the response of the monitor (to make the tonal response look more like the original scene) and to make the image look desirably contrasty.

The camera's tone curve converts the sensor's output to the final image brightness, which means it also defines what Raw value represents middle grey and hence how the sensor needs to be exposed. (In fact there is a subtle interplay between the sensor's inherent sensitivity, its dynamic range, the tone curve and the camera's metering.)
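To see how a curve, rather than the sensor, decides what 'middle grey' means, consider the standard sRGB transfer function (camera JPEG curves differ from this - they are typically more contrasty - but the principle is the same):

```python
def srgb_encode(linear):
    """The standard sRGB tone curve, applied to linear data in [0, 1]."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

# An 18% (middle-grey) linear value emerges at roughly 46% of full
# scale - level 118 in an 8-bit file, not level 46. Which Raw value
# 'is' middle grey is therefore a property of the curve, not the sensor.
assert round(srgb_encode(0.18) * 255) == 118
```

A camera whose curve places middle grey lower in the Raw range leaves more highlight headroom, and its metering has to expose accordingly - which is exactly the interplay described above.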

A standard with shades of grey

So this is what ISO is defining when you use it: it's combining considerations of the sensor's sensitivity with the effects of the tone curve and metering so that you can get the correct final image brightness with your chosen exposure.

However, the standard set down by the International Organization for Standardization (ISO 12232:2006, as it happens) contains five separate definitions, each of which can produce a different answer for the same camera. Thankfully, only three of these definitions are widely used, and only two, closely related definitions are used by camera makers.

ISO, courtesy of CIPA

The two definitions of ISO that are actually used by camera manufacturers (and are reported by their cameras) are based on the brightness of cameras' JPEG output. Both definitions come from standards developed by the Japanese camera trade body CIPA, which were adopted by ISO in 2006. The first definition is probably the simplest and most intuitive, and it's called Standard Output Sensitivity. Essentially, it defines ISO as the camera behaviour that renders middle grey at the correct brightness (as we've just described, and pretty much the same way as it did for film).

The other definition (Recommended Exposure Index) is fairly similar but is designed to accommodate multi-zone/pattern metering systems. These metering systems aren't based on trying to represent middle grey, and instead aim to achieve whatever the manufacturer considers to be 'correct' exposure. As such, REI can't be independently measured, because the definition is essentially circular: whatever the camera chooses is right, by definition.

So what about the others?

The only other definition of ISO you're ever likely to encounter is one that can be used for Raw data. The problem is that it's based on a combination of the sensor's saturation point and a generic tone curve - which isn't necessarily the tone curve your camera's JPEGs or metering are based on. So, discrepancies between this figure and your camera's reported ISOs aren't the result of under- or over-reporting of ISO; they're a measure of how different your camera's tone curve is from this generic tone curve.
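This saturation-based speed also reduces to a simple relation, and its constants show where the 'generic tone curve' assumption hides (again, the example exposure is illustrative):

```python
import math

def saturation_based_iso(h_sat):
    """ISO 12232 saturation-based speed: S = 78 / H_sat, where H_sat
    is the focal-plane exposure (lux-seconds) that just saturates
    the sensor."""
    return 78 / h_sat

# Compared with SOS (S = 10 / H_sos), the constants imply a fixed
# log2(78/10) ~= 2.96 stops between middle grey and clipping. A camera
# whose own tone curve leaves more or less highlight headroom than that
# will 'disagree' with this figure - without anyone cheating.
implied_headroom_stops = math.log2(78 / 10)

assert saturation_based_iso(0.78) == 100
```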

Why do I need to worry?

If you use the camera's JPEGs, or a Raw converter that acknowledges the manufacturer's rendering intent (and that includes many popular Raw converters), then chances are you're going to get the ISO that your camera tells you. So rather than measuring a slightly obscure aspect of sensor performance, our tests are based on the Standard Output Sensitivity definition that the camera manufacturers use, that your camera reports and that, chances are, you rely on.

What happens when I change the ISO?

Traditionally, ISO has been changed by amplifying the sensor's output before it is converted to digital data. However, it is also possible to mathematically manipulate the data once it has been digitised - many 'extended ISO' settings and some intermediate ISO values between full stops (e.g. 250 and 320) do just that.
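The digital route is just arithmetic on the numbers coming out of the converter. A minimal sketch (the 12-bit clipping point and the specific values are illustrative, and real cameras' processing is more sophisticated than a bare multiply):

```python
def digital_push(raw_values, stops, max_val=4095):
    """'Push' already-digitised values by a fraction of a stop - the
    general idea behind some intermediate and extended ISO settings.
    Values pushed past the clipping point (here a 12-bit maximum)
    are lost, and noise is scaled up along with the signal."""
    gain = 2 ** stops
    return [min(round(v * gain), max_val) for v in raw_values]

# A 1/3-stop push, e.g. deriving ISO 250 from an ISO 200 analogue base.
# Note the brightest value clips: digital pushes cost highlight headroom.
assert digital_push([100, 2000, 4000], 1/3) == [126, 2520, 4095]
```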
