Earlier this week, Japanese patent blog Egami reported that Olympus has patented a technology that would allow the photographer to selectively adjust the exposure of different scene areas during an exposure. This might allow for a more balanced exposure of scenes where cameras normally struggle (and where the photographer might otherwise have to resort to high dynamic range or multiple exposure techniques).

In scenes like the one pictured below, it's hard to avoid overexposing bright areas or underexposing dark areas in a single exposure, due to the limited dynamic range of current imaging sensors. Selectively controlling the exposure of dark and bright objects at the time of capture is one way to deal with those hardware limitations, and could be of huge interest to high dynamic range (HDR) photographers. We've had some time to sit down with the original patent, read between the lines, and conjecture about what the actual implementation might be.

Sunset at Deception Pass, WA. This was a high dynamic range scene that required multiple exposures (1s, 4s, 15s) to composite a final image in Adobe Photoshop. This sort of scene could benefit from Olympus' technology that allows for selective exposure times for various scene elements. [Photo: Rishi Sanyal]

The patent appears to be a natural extension of Olympus' current 'Live Time' (or 'Live Bulb') feature, where the camera reads the sensor at set intervals (0.5, 1, 2, 4, 8, 15, 30, or 60s), continuously adding the exposures until the photographer stops the process with a press of the shutter button. This mode is quite useful for, say, landscape photographers shooting a scene that requires a long exposure but who might not know the ideal exposure time beforehand. In such cases, Olympus' 'Live Time' feature allows the camera to continue accruing exposure until the photographer decides, via the LCD display or histogram, that a good exposure has been achieved.
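To make that accumulation idea concrete, here's a rough numpy sketch of how interval-based exposure accrual works in principle. The scene values, clipping level, and interval count are entirely our own illustration, not anything specified by Olympus.

```python
# A minimal sketch of 'Live Time'-style accumulation, assuming a sensor whose
# accumulated charge can be previewed non-destructively after each interval.
# All names and values here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
scene_flux = rng.uniform(0.0, 1.0, size=(4, 6))   # light arriving at each pixel per second (made-up scene)
interval_s = 1.0                                   # one of the selectable intervals (0.5-60s)
full_well = 10.0                                   # arbitrary clipping level

accumulated = np.zeros_like(scene_flux)
previews = []
for _ in range(15):
    accumulated += scene_flux * interval_s                 # charge keeps building between read-outs
    previews.append(np.clip(accumulated, 0, full_well))    # what the LCD/histogram would show

# In 'Live Time' the photographer watches these previews and presses the
# shutter button once the overall exposure looks right.
print(previews[-1])
```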

The problem is that neither 'Live Time', nor any other traditional exposure mode in today's cameras, can do much in the way of holding back the exposure of, say, the sun in the photo of Deception Pass above. It's just too bright compared to the rest of the scene. So as we accrue a 4s exposure for the clouds and a 15s exposure for the trees in the photo above, the sun and surrounding areas would almost certainly clip to white. This sort of scenario - where there is a huge difference in brightness between important areas of a scene - is exactly what we believe this new patent from Olympus is designed to address.

The technology outlined in the patent appears to use a form of multi-exposure technique to control the exposure of objects in a scene. The user selects, via a touchscreen, how much exposure each object receives. This can effectively balance the exposure of objects in a scene that would otherwise be rendered at extremely different brightnesses using traditional methods. The 'problematic' scenario is shown in the uppermost image below, while Olympus' 'fix' is shown in the image below it:

Image from the Olympus patent that shows a scenario where buildings in the foreground are in danger of being overexposed if the shutter is left open long enough to capture interesting fireworks. [Source: Egami]
An overall schematic of Olympus' technology to selectively control exposure of scene elements using a touchscreen interface. The user paints in the portions of the image (buildings) in danger of being overexposed, while the rest of the scene accrues exposure until an optimal image is achieved (the balanced exposure on the right). [Source: Egami]

We had a look at the original patent (translated by Google) and, while there is mention of varying the photoelectric conversion sensitivity, of more interest is the repeated use of the phrase '[reading the] image sensor sequentially' in the patent. We think this suggests that a final image is assembled from many images read sequentially off the sensor. This would be a logical extension of Olympus' current 'Live Time' implementation where, instead of allowing pixels to accrue exposure over a number of intervals (nondestructively reading the sensor after each interval to display the accumulated exposure), the sensor is destructively read (or reset) after each interval. The exposures from each interval could then be added together selectively; that is, with the input of the photographer as to what scene elements require more, or less, exposure. You might think of it as a way to take 'Live Time' and add user input into the process to imbue some 'intelligence' into the synthesis of the final image. That's incredibly exciting.
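Purely to illustrate our speculation about sequential, destructive reads, here's a minimal sketch in which each interval's signal is read off, the sensor is reset, and the sub-exposures are stored for later combination. Every name and value is hypothetical, and the noise model is a stand-in.

```python
# A speculative sketch of the 'sequential read' idea: read and reset the sensor
# after every interval, keeping each sub-exposure so it can be combined
# selectively later. Not taken from the patent text.
import numpy as np

rng = np.random.default_rng(1)
scene_flux = rng.uniform(0.0, 1.0, size=(4, 6))    # hypothetical per-interval signal at each pixel
n_intervals = 10

sub_exposures = []
for _ in range(n_intervals):
    frame = scene_flux + rng.normal(0, 0.02, scene_flux.shape)  # one interval's read, with read noise
    sub_exposures.append(frame)                                  # sensor is then reset ('destructive' read)

stack = np.stack(sub_exposures)    # shape (n_intervals, H, W), ready for selective combination
```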

Patent images (see below) suggest that this 'user input' - the process of selecting which scene elements receive longer or shorter exposures - is done via some interaction with the histogram (a graph that essentially represents pixels based on brightness value). We imagine that these sorts of operations could be performed via a connected smartphone or tablet to avoid camera shake during exposure.
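If the selection really does happen on the histogram, one plausible mechanism is that a chosen brightness range is simply mapped back onto the pixels that fall within it. The sketch below shows that idea; the function name and threshold values are our own invention, not anything from the patent.

```python
# One guess at how a histogram selection could become a pixel mask: flag every
# pixel whose preview brightness falls inside the range the photographer chose.
import numpy as np

def mask_from_histogram_selection(preview, lo, hi):
    """Return a boolean mask of pixels whose preview brightness lies in [lo, hi]."""
    return (preview >= lo) & (preview <= hi)

preview = np.linspace(0.0, 1.0, 24).reshape(4, 6)               # stand-in for a live preview frame
bright_mask = mask_from_histogram_selection(preview, 0.8, 1.0)  # e.g. the sun and its surroundings
```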

This image from the patent suggests that the photographer selects regions of the histogram - which represents scene elements based on brightness - that should receive more, or less, exposure. [Source: Egami]

To more specifically visualize how a photographer might benefit from this technology, let's return to our shot of sunset at Deception Pass and imagine how we might use the technology suggested in the patent. Using 'Live Time', we set up intervals of 1s exposures to be sequentially read off the sensor and added (digitally) to form the final image. We use the touchscreen to indicate that the sun, or any bright regions on the histogram, should receive only one interval of exposure. Meanwhile, the rest of the scene should receive 10 intervals, or should simply continue accruing exposure until we feel a balanced exposure has been achieved. All of this seems feasible without significant technological breakthroughs in sensor design if the image were built up digitally. That is, from the 10 or so sub-exposures, add together the signals from pixels that require longer exposures, while averaging the signals from pixels requiring shorter exposures. This (digital) method would circumvent the need to actually expose different pixels for different amounts of time at the hardware level (something that, while theoretically possible, is extremely difficult to do).
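To show how such a digital composite could be assembled, here's a short sketch that sums the sub-exposures where more exposure is wanted and averages them where the scene is already bright enough (keeping single-interval brightness but reducing noise). It assumes the kind of stack and mask from the sketches above; again, this is our guess at an implementation, not the method described in the patent.

```python
# Speculative digital compositing: sum where we want a longer effective
# exposure, average where one interval's worth of exposure is already enough.
import numpy as np

def selective_composite(stack, bright_mask):
    """stack: (N, H, W) sub-exposures; bright_mask: (H, W) True where less exposure is wanted."""
    summed = stack.sum(axis=0)       # long effective exposure: trees, clouds
    averaged = stack.mean(axis=0)    # one-interval brightness, lower noise: the sun
    return np.where(bright_mask, averaged, summed)

rng = np.random.default_rng(2)
stack = rng.uniform(0.0, 0.1, size=(10, 4, 6))    # ten 1s sub-exposures (hypothetical values)
bright_mask = np.zeros((4, 6), dtype=bool)
bright_mask[0, :2] = True                          # pretend the sun occupies these pixels
final = selective_composite(stack, bright_mask)
```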

At this point, we're being very speculative about the actual implementation of the patent (assuming it is ever implemented at all); nonetheless, the implications are exciting, as this technology might give photographers using 'Live Time' to capture high dynamic range scenes the luxury of getting a proper exposure in-camera, rather than manually assembling HDR images from multiple shots in post-processing. In the end, any technology that allows for selective exposure of scene elements at the time of capture has the potential to provide better image quality in scenes containing subjects of significantly varying brightness.

What might it mean?

Looking to the future, the utility of this technology could increase further with faster sensor readout rates and the higher processing power of future generations of cameras. Improvements in these areas would make the system more robust - for scenes requiring shorter overall exposures, for example. However, it would be hard to use this technique for moving subjects, or for handheld shots without proper image alignment. Even so, we think the implications are intriguing.

What do you think? Let us know in the comments section below.