Researchers at the Massachusetts Institute of Technology have proposed a new way to solve the problem of bright skies burning out in landscape photographs. Using what they call Unbounded High Dynamic Range photography, they are working on the idea of photodiodes that reset themselves once they reach their capacity for recording light, so that they can carry on recording - and resetting again if necessary.

Once the exposure is complete, the pixel reports how much charge it currently holds, along with how many times it had to reset. Using that information, a much clearer picture of the brightness of different areas of a scene can be built up than if the photodiode had simply reached maximum charge and stopped recording.
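To make that concrete, here is a minimal sketch of the readout-and-reconstruct idea in Python. The full-well value and function names are our own assumptions for illustration, not details taken from the paper:

```python
# Illustrative sketch of a self-resetting pixel. FULL_WELL is an
# arbitrary 8-bit-style capacity we chose, not a figure from the paper.
FULL_WELL = 255  # maximum charge one pixel can hold, arbitrary units

def modulo_pixel_readout(scene_brightness):
    """What a self-resetting pixel would report after the exposure:
    the residual charge left in the well, plus the reset count."""
    resets = scene_brightness // (FULL_WELL + 1)
    residual = scene_brightness % (FULL_WELL + 1)
    return residual, resets

def reconstruct(residual, resets):
    """Recover the true brightness from residual charge + reset count."""
    return resets * (FULL_WELL + 1) + residual

# A pixel hit by light far beyond its capacity:
residual, resets = modulo_pixel_readout(1000)
print(residual, resets)               # 232 3
print(reconstruct(residual, resets))  # 1000 - brightness recovered
```

A conventional pixel in the same situation would simply saturate and report 255, losing any distinction between a bright cloud and the sun itself.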

Scientists are working on the idea of a modulo camera that they claim will defy the usual relationship between bit depth and dynamic range, while also producing better-quality images than the multi-shot exposure-combination method photographers commonly use to increase the range of tones in an image. The research paper published by the institute claims that the banding and artifacts that often occur in areas of dramatic tonal transition in multi-shot images can be avoided. We run into this issue quite commonly when merging drastically different exposures that contain harsh boundaries between bright and dark areas. And although the modulo camera can record high dynamic range in a single exposure, with three exposures it can represent a 24-stop brightness range.
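Why bit depth need not cap dynamic range is easiest to see with a toy example. The paper's actual recovery algorithm is more sophisticated; the simple neighbour-to-neighbour unwrap below is our own illustrative stand-in, and it assumes brightness changes between adjacent pixels are small compared with the wrap interval:

```python
# Toy demo: an 8-bit readout wraps at 256, yet a smoothly varying
# scene can still be unwrapped into an unbounded brightness signal.
WRAP = 256  # one wrap interval for an 8-bit readout

def unwrap_row(wrapped):
    """Recover an unbounded signal from modulo-256 samples,
    assuming the first pixel has not wrapped."""
    out = [wrapped[0]]
    for w in wrapped[1:]:
        prev = out[-1]
        # pick the unwrapped value closest to the previous pixel
        out.append(w + WRAP * round((prev - w) / WRAP))
    return out

# A bright gradient climbing far past 8-bit range:
true_row = list(range(0, 1200, 40))         # 0, 40, ..., 1160
wrapped_row = [v % WRAP for v in true_row]  # what the sensor stores
print(unwrap_row(wrapped_row) == true_row)  # True
```

The stored values never exceed 8 bits, yet the recovered gradient spans well beyond 8-bit range, which is the essence of the 'defying bit depth' claim.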

A simulated idea of what the researchers aim to achieve

We think the natural evolution of this type of algorithm would involve pixels that can undergo, and 'remember', an arbitrary number of resets. Such a sensor could essentially simulate a very high pixel full-well capacity. This would allow every pixel to effectively collect more charge; a lot of it, in fact, if you can give the camera enough light with a long exposure or a very bright lens. Ultimately, this would mean the sensor as a whole could record more light than a traditional sensor of the same size. And since we know that most of the noise in our images comes from shot noise, whose contribution is reduced the more light you collect, a smaller sensor could punch above its weight in terms of image quality. In other words, extending pixel capacities (kind of like ISO 64 on the Nikon D810) is a way of getting the noise, and more importantly the signal-to-noise, performance of a larger sensor, provided you can actually give the camera the extra light. Unfortunately, the method outlined in this paper doesn't appear to explicitly do this, but one can hope that such technology is on the horizon.
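As a quick sanity check on that shot-noise argument, here is a small simulation. Photon arrival is Poisson-distributed, so a pixel's signal-to-noise ratio grows with the square root of the photons it collects; the well capacities below are arbitrary numbers we picked for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def snr(photons_collected, trials=100_000):
    """Empirical SNR of a pixel collecting this many photons on average."""
    samples = rng.poisson(photons_collected, trials)
    return samples.mean() / samples.std()

# A conventional pixel topping out at a 30k-electron full well vs. a
# hypothetical self-resetting pixel that keeps counting to 480k (16x more):
for well in (30_000, 480_000):
    print(f"{well:>7} e-  SNR ~ {snr(well):.0f}")
# 16x the light buys roughly sqrt(16) = 4x the SNR, i.e. two stops cleaner.
```

That square-root relationship is exactly why collecting more total light, whether via a bigger sensor or a deeper effective well, translates directly into cleaner images.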

Although the paper mostly shows simulated images created in software, it also includes some 256x256-pixel photographs that demonstrate what the idea can manage in real life.

For more information, read the full paper on the MIT website (PDF).

There is a video that explains the concept here: