Here's an idea for a sensor/firmware-based solution that would allow a camera to capture unlimited dynamic range.
As I understand it, photosites on sensors are only capable of capturing a finite amount of light. The best analogy I have heard is that a photosite can be conceptualized as a bucket. Light "flows" into the bucket, and if it reaches its capacity while the shutter is open (and the sensor is active), the firmware interprets this as the maximum brightness when producing the final image (or RAW file). The result of this interpretation is often blown highlights (or at least lost information in the upper tonal regions).
Here's my idea: If the sensor and firmware were able to capture and interpret both the amount of light captured during the exposure AND the time it took to capture that amount of light, the full dynamic range in the scene could be estimated by the firmware when producing the image. Here's an example to show how I envision this working (I'll use 8-bit values to represent the amount of light captured, just to illustrate the concept):
Current Technology - A group of photosites reaches the maximum value of 255 during a 1/125 second exposure. The firmware translates this into pure white in the final file, resulting in blown highlights.
Proposed Technology - The same group of photosites and light exposure as above, but the sensor/firmware combination also records the fact that the maximum value of 255 was reached in 1/250th of a second. This would allow the firmware to estimate that the actual amount of light at that point is approximately 510 (since the maximum value was reached in half the exposure time). This information could then be recorded in the RAW file. Tonal curves could be applied to manipulate the final output either during post-processing, or immediately by the firmware to produce JPEGs.
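To make the arithmetic concrete, here's a minimal sketch in Python of the extrapolation I'm describing. It's purely illustrative: the function name, parameters, and the assumption that light arrives at a constant rate during the exposure are mine, not anything from an actual sensor or firmware API.

```python
def estimate_true_signal(measured_value, exposure_time, time_to_saturation, max_value=255):
    """Extrapolate the 'true' brightness of a photosite that saturated early.

    measured_value     -- the clipped reading (equals max_value if the bucket filled)
    exposure_time      -- total shutter-open time in seconds
    time_to_saturation -- time at which the photosite hit max_value (None if it never did)
    """
    if time_to_saturation is None or measured_value < max_value:
        # The photosite never clipped, so the reading is already accurate.
        return measured_value
    # Assume light arrived at a constant rate: scale the clipped value by how much
    # longer the shutter stayed open after the bucket was already full.
    return max_value * (exposure_time / time_to_saturation)

# The example from above: 255 reached in 1/250 s during a 1/125 s exposure
print(estimate_true_signal(255, 1/125, 1/250))  # -> 510.0
```

The estimated value (510 in this example) is what would get written to the RAW file in place of the clipped 255.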
The result of this technology should be greatly enhanced detail in the high-key areas of a scene, resulting in much greater dynamic range. It should also allow for much greater control over how bright areas are represented in the final image. Overexposure could become a relic of the past!
Naturally, this idea would require additional in-camera processing power (to keep processing times reasonable), and larger RAW files (to record the additional information). However, I find it difficult to believe that either of these requirements would be significant barriers to commercialization of this technology. Heck, if this technology reduced frame rates significantly, you could always add a custom menu option to turn it off. I doubt that landscape photographers using tripods would mind!
Thoughts? Criticisms? Demands for Pentax to put this into the K30D/K1D? All are welcome!
- Scott Price.