Here's my idea: If the sensor and firmware were able to capture and
interpret both the amount of light captured during the exposure AND
the time it took to capture that amount of light, the full dynamic
range in the scene could be estimated by the firmware when producing
the image. Here's an example to show how I envision this working
(I'll use 8-bit values to represent the amount of light captured, just
to illustrate the concept):
Current Technology - A group of photosites reaches the maximum value
of 255 during a 1/125 second exposure. The firmware translates this
into pure white in the final file, resulting in blown highlights.
Proposed Technology - The same group of photosites and light exposure
as above, but the sensor/firmware combination also records the fact
that the maximum value of 255 was reached in 1/250th of a second.
This would allow the firmware to estimate that the actual amount of
light at that point is approximately 510 (since the maximum value was
reached in half the exposure time). This information could then be
recorded in the RAW file. Tonal curves could be applied to
manipulate the final output either during post-processing, or
immediately by the firmware to produce JPEGs.
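The extrapolation step above can be sketched in a few lines. This is a minimal, hypothetical illustration (the function name and linear-response assumption are mine, not an existing camera API): if a photosite saturates before the shutter closes, scale the maximum value by the ratio of the full exposure time to the time-to-saturation.

```python
def estimate_true_value(raw_value, exposure_time, time_to_max, max_value=255):
    """Estimate the 'true' light value for a photosite.

    Hypothetical sketch assuming a linear sensor response: if the
    photosite hit max_value before the exposure ended, extrapolate
    linearly -- saturating in half the exposure time implies roughly
    twice the light.
    """
    if raw_value < max_value:
        return float(raw_value)  # no clipping; the recorded value stands
    return max_value * (exposure_time / time_to_max)

# Worked example from the text: 255 reached in 1/250 s of a 1/125 s exposure
print(estimate_true_value(255, 1 / 125, 1 / 250))  # 510.0
```

Real photosites are not perfectly linear near saturation, so firmware would likely apply a calibrated response curve rather than this straight ratio, but the principle is the same.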
The result of this technology should be greatly enhanced detail in
the high-key areas of a scene, and therefore much greater dynamic
range. It should also allow much finer control over how bright
areas are represented in the final image. Overexposure could become
a relic of the past!