Dynamic range and the various ways of trying to capture and represent it are the topic of many a heated discussion on the forums. We spoke to Apical, a company working on this challenge whose technologies are incorporated in cameras from the biggest brands, to find out what it is doing to address the matter. We think this interview with managing director Michael Tusch will help shine a little light on this shadowy corner of image processing.

Out of the shadows

You may not have heard of the British imaging technology company Apical but, if you've used a camera made in the last couple of years, there's every chance you've used technology it has developed. This is because a wide variety of cameras, from compacts through to pro-level DSLRs, incorporate its processing algorithms. You may know it as D-Lighting, Shadow Adjustment Technology or Dynamic Range Optimization, but the underlying technology starts with Apical.

We visited Apical's headquarters and spoke to its managing director, Michael Tusch, to find out what the company is trying to do to improve the dynamic range capabilities of digital cameras.

Dynamic range has become a hot topic amongst the camera-owning cognoscenti, who often worry that their camera's sensor cannot capture the full range of tones in a scene, from the deepest shadows to the brightest highlights. Yet even where that range is captured, the use of simple, whole-image tone curves often fails to carry all of it through to the final image, with important detail disappearing in the shadows.

'At its most simple, what our algorithms do is called dynamic range compression - trying to render extremes of light and dark in the way that the human eye would interpret them,' Tusch explains. As it turns out, the company's starting point was human vision rather than digital imaging: it was originally spun out from a research group working on modeling biological imaging systems. 'The human eye is highly adaptive to light, and is able to interpret great extremes of light and dark,' says Tusch. 'The main thrust of that work was developing algorithms that could model the human visual system, and the researchers realized that some aspects of that could be applied to digital image processing.'

'Our aim is to enable digital cameras to produce the most natural image in the most natural way,' he says. 'Other people were trying to do this at the time but all the existing tools either required manual intervention or had processing demands that were unsuitable for cameras. We created a model that achieved this, was suitable for use in a digital camera and could be included in the image processing pipeline. And it did this in a highly adaptive manner, so it's not something the user had to adjust or worry about.'

From small beginnings...

Given that its system was an adaptive one that required no intervention, it probably shouldn't come as a surprise that Apical's technology first appeared in compact cameras. Nikon was the first digicam customer, building its D-Lighting system on the technology in its Coolpix range. 'By 2004 all of the Coolpix range included D-Lighting, and later Olympus's Mju series started to incorporate our technologies - with their implementation called Shadow Adjustment Technology.'

'People started to realize there is more to creating a realistic-looking image than just tone curves and demosaicing,' says Tusch. As anyone who has tried 'rescuing' an image in post-processing using curves will know, it's difficult (and often impossible) to preserve extremes of light and shade in the same image without the result looking 'flat' and unrealistic. The problem, Tusch says, is that steepening the tone curve to increase contrast in the dark part of the image means flattening the tone curve somewhere in the midtones or highlights, leading to a loss of contrast and color in those regions. 'Most tone mapping algorithms steadily reduce local contrast as the strength of dynamic range compression is increased.'
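To see the trade-off Tusch describes in numbers, note that the slope of a tone curve at a given brightness is the contrast gain applied at that brightness: if the curve is steep in the shadows, it must be shallow somewhere above them. The short Python sketch below (our own illustration, not code from Apical) applies a simple power-law 'shadow-lifting' curve and prints its slope at three brightness levels:

```python
import numpy as np

# Illustrative only: a global tone curve applied identically to every pixel.
# The curve's slope at a given luminance is the contrast gain at that level,
# so a curve steep enough to open up the shadows is necessarily flatter
# (slope < 1) in the midtones and highlights.

def global_tone_curve(x, gamma=0.5):
    """Simple power-law 'shadow-lifting' curve on normalized luminance (0..1)."""
    return np.power(x, gamma)

def contrast_gain(x, gamma=0.5, eps=1e-4):
    """Numerical slope of the curve at level x."""
    return (global_tone_curve(x + eps, gamma) - global_tone_curve(x, gamma)) / eps

for level, name in [(0.05, "deep shadow"), (0.50, "midtone"), (0.90, "highlight")]:
    print(f"{name:12s} contrast gain: {contrast_gain(level):.2f}")

# Typical output: roughly 2.2x gain in the deep shadows, but only ~0.7x in the
# midtones and ~0.5x in the highlights - the 'flat' look described above.
```

Lowering the exponent opens the shadows further, but only at the cost of flattening the upper tones even harder, which is exactly why a single global curve cannot do the whole job.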

Apical attempts to get around this problem by assessing the local environment of each pixel, considering its relationship to its immediate neighbors as well as that neighborhood's context within the image as a whole. 'To give it its technical name, our product, Iridix, is a space-variant dynamic range compression algorithm. What the eye does, in camera language, is apply a different gain based on the local environment of a pixel - for example, a pixel with a specific R,G,B value in a bright region would be interpreted very differently to a pixel with the same R,G,B value in the shadows.' Iridix aims to mimic this response: it assesses how each pixel relates to the ones around it, in order to work out the local contrast, while also working out how that region fits in with the rest of the image.

[Comparison image: 'Single adjustment curve applied' vs. 'Iridix result', with options to show the global adjustment curve and Iridix's local adjustments]
Using a single, homogeneous curves adjustment to try to correct the entire image results in a slightly flat, washed-out looking image. Iridix calculates a different tone curve adjustment for each part of the image (varying slightly from pixel to pixel) in an attempt to preserve local contrast all the way across the image, lifting the detail in the shadow regions without the whole image looking washed out.
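As a concrete illustration of the space-variant idea, the deliberately crude Python sketch below computes a per-pixel gain from a blurred local luminance estimate, so that the same R,G,B value is lifted strongly in a dark neighborhood and left almost untouched in a bright one. The function name, parameters and Gaussian-blur neighborhood are our own assumptions; Iridix itself is proprietary and considerably more sophisticated:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# A generic space-variant gain sketch - NOT the Iridix algorithm. It only
# illustrates Tusch's point that the gain applied to a pixel depends on its
# surroundings, so identical R,G,B values in a bright region and in a
# shadow region are treated differently.

def space_variant_compress(rgb, strength=0.6, sigma=25):
    """rgb: float array of shape (H, W, 3) with values in 0..1."""
    luma = rgb.mean(axis=2)                      # crude luminance estimate
    local = gaussian_filter(luma, sigma=sigma)   # local brightness context
    # Dark neighborhoods receive gain well above 1; bright neighborhoods
    # receive gain close to 1, roughly preserving the white point.
    gain = (local + 1e-3) ** (-strength)
    out = rgb * gain[..., None]
    return np.clip(out, 0.0, 1.0)
```

Note everything this toy version does not attempt: it can lift near-black regions toward gray, clip color where the gain is large, and create halos around high-contrast edges when the blur radius is wide - precisely the kinds of problems in the four-point list that follows.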

'To achieve this, and ensure the image looks natural, there are four non-trivial factors that need to be considered,' explains Tusch:

  1. The preservation of the black and white points of the image (to prevent color clipping and avoid true blacks becoming gray)
  2. The preservation of true color
  3. The exact preservation of local contrast
  4. The complete elimination of any spatial artifacts, such as halos

'Getting any of these four wrong results in an unnatural-looking image. For example, if you look at tone mapping algorithms for high dynamic range imagery, they often produce rather unnatural, "painterly" images - this is simply the result of an inappropriate algorithm. The results should look completely natural.'

...to the hands of the professionals

The technology is no longer confined to compact cameras, however. 'We had faith that it would be a very useful tool for professional photographers,' says Tusch, 'if well implemented and well explained.' And to illustrate his point, he notes that the company's technology has become widespread in the DSLR sector, appearing in products from Sony and Olympus, amongst others. 'We've built an image engine that produces image quality suitable for professional photographers. However, we wouldn't advocate this always being enabled, because you may not always want the photograph to represent the scene the way it appeared - you may wish to present a more creative interpretation.

'We simply aim to achieve pictures which look as close as possible to what you see with your eyes when you take the shot. And, of course, studio photographers simply don't need such a feature because they have complete control over the lighting of their scenes. But for photographers shooting outdoors or, for instance, at weddings, where not all the lighting can be controlled, we think it can be useful.'

'And we've had a very strong positive response. That's because, we think, the manufacturers have worked hard to figure out how to apply it for a professional audience. I think [the professional photographers] realize it's not a touch-up; it's a scientifically backed, quantitative way of improving image quality and getting the result you want.'

Unsurprisingly, Tusch says there is a philosophical distinction between the two sectors: 'In compact cameras, the expectation is that the camera will capture the scene as you saw it, without you having to think about it, so it tends to be an on/off feature. I think with DSLRs there's a lot more interest in being able to control this technology in a quantitative way (although it still has a self-adapting element to it). That way, it can become a tool that, once understood, can be used by photographers to optimize their image, just as they would with any other setting.' In the same way that photographers have demanded that ISO be as easy to access and change as shutter speed or aperture, Tusch says he hopes that in the future, dynamic range compression will be a turn of a thumb-wheel away.

He also believes the technology will improve: 'We're on version 6 of our product now but we're always trying to raise the technical bar - as the processing gets faster we want to get more sophisticated in terms of the algorithms we can use. But it's not just the processing that prevents the use of more complex algorithms - there are also huge technical challenges relating to how data gets moved around inside the camera, especially as you get more megapixels.' But Tusch is confident that the manufacturers share his company's ambitions to keep improving the final results: 'We're a technology provider, we work in many different areas of imaging. It's up to the manufacturers how they want to apply our technologies but the thing we love about working with DSLR companies is that, although (like everyone else) they have strong cost pressures, they will absolutely not compromise on image quality.'

Interview by Richard Butler