Unlimited Dynamic Range

Hello_Photo
Ontario, CA
Here's an idea for a sensor/firmware-based solution that would allow a camera to capture unlimited dynamic range.

As I understand it, photosites on sensors are only capable of capturing a finite amount of light. The best analogy I have heard is that a photosite can be conceptualized as a bucket. Light "flows" into the bucket and if it reaches its capacity while the shutter is open (and the sensor is activated), the firmware interprets this as being the maximum brightness in the image when producing the final image (or RAW file). The result of this interpretation is often blown highlights (or at least lost information in the upper tonal regions).

Here's my idea: if the sensor and firmware were able to capture and interpret both the amount of light captured during the exposure AND the time it took to capture that amount of light, the full dynamic range of the scene could be estimated by the firmware when producing the image. Here's an example of how I envision this working (I'll use 8-bit values to represent the amount of light captured, just to illustrate the concept):

Current Technology - A group of photosites reaches the maximum value of 255 during a 1/125 second exposure. The firmware translates this into pure white in the final file, resulting in blown highlights.

Proposed Technology - The same group of photosites and light exposure as above, but the sensor/firmware combination also records the fact that the maximum value of 255 was reached in 1/250th of a second. This would allow the firmware to estimate that the actual amount of light at that point is approximately 510 (since the maximum value was reached in half the exposure time). This information could then be recorded in the RAW file. Tonal curves could be applied to manipulate the final output either during post-processing, or immediately by the firmware to produce JPEGs.
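To make the arithmetic concrete, here's a minimal sketch of the extrapolation in Python - the names and values are made up purely to illustrate the idea:

```python
# Sketch of the proposed extrapolation, using the 8-bit example above.
# All names and values are illustrative, not from any real sensor.

FULL_WELL = 255          # maximum value a photosite can record
EXPOSURE = 1.0 / 125     # shutter speed in seconds

def estimate_brightness(recorded, time_to_fill=None):
    """Estimate scene brightness at one photosite.

    recorded     -- value read out after the exposure (0..FULL_WELL)
    time_to_fill -- seconds until the site saturated, or None if it never did
    """
    if time_to_fill is None:
        return recorded                    # never clipped; value is trusted
    # Clipped: extrapolate, assuming light arrived at a constant rate.
    return FULL_WELL * (EXPOSURE / time_to_fill)

print(estimate_brightness(180))               # -> 180 (no clipping)
print(estimate_brightness(255, 1.0 / 250))    # -> 510.0 (filled in half the time)
```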

The result of this technology should be greatly enhanced detail in the high-key areas of a scene, resulting in much greater dynamic range. It should also allow for much greater control over how bright areas are represented in the final image. Overexposure could become a relic of the past!

Naturally, this idea would require additional in-camera processing power (to keep processing times reasonable), and larger RAW files (to record the additional information). However, I find it difficult to believe that either of these requirements would be significant barriers to commercialization of this technology. Heck, if this technology reduced frame rates significantly, you could always add a custom menu option to turn it off. I doubt that landscape photographers using tripods would mind!

Thoughts? Criticisms? Demands for Pentax to put this into the K30D/K1D? All are welcome!
--
Scott Price
 
This could only be implemented on CMOS sensors, which already have circuitry at the photosite; CCD sensors don't. CMOS sensors have less surface available for capturing light because of that circuitry, although miniaturisation and better hardware noise reduction have countered this disadvantage. However, adding the extra circuitry you suggest may reduce the surface further, which would also have an impact on dynamic range.

Furthermore, the read-out currently happens after the exposure and so doesn't interfere with it, whereas what you suggest would need a read-out process that works during the exposure, or at least a processor that keeps a timer and monitors (or is triggered by) the overflow. That means extra electronics and extra risk of interference, resulting in extra noise in the shadows, so even less DR.

Perhaps this will be implemented in the future, but I'm not sure technology is ready for it.

Just some thoughts from someone not hindered by extensive technical knowledge... ;-)

Wim

--
Belgium, GMT+1

 
Great idea BUT - have you thought that measuring each of many millions of photosites continually during the exposure may just be a step too much for existing technology? If you don't do this, then how do you know the precise instant that any one of them becomes full? Unless I am missing something, this sounds like a very, very large and fast A/D converter.

Dave.
 
As I understand it, photosites on sensors are only capable of
capturing a finite amount of light. The best analogy I have heard is
that a photosite can be conceptualized as a bucket. Light "flows"
into the bucket and if it reaches its capacity while the shutter is
open (and the sensor is activated), the firmware interprets this as
being the maximum brightness in the image when producing the final
image (or RAW file).
Well, it's not so easy. AFAIK the photosite first collects photons. The collected photons are converted to a very small electrical charge by a photodiode. To be able to actually read the charge, it has to be amplified; only after amplification is it converted to a digital value.

If I got that right, this process can be done only once - after the exposure is finished. And there is no possibility to measure a state, e.g. after only half of the exposure time has expired - the necessary amplification prior to the A/D conversion makes it impossible to keep collecting photons afterwards.

--
Phil

GMT +1
http://picturesandstories.blogspot.com
 
Thanks for your thoughts!
This can only be implemented on CMOS sensors that already have
technology on the photosite. CCD sensors don't have this. CMOS
sensors have less surface available for capturing light due to this,
although current miniaturisation and the ability to have better
hardware noise reduction have countered this disadvantage. However,
adding the extra technology you suggest may reduce the surface more,
which will also have impact on dynamic range.
Like you, I'm not hindered by an abundance of technical knowledge either, but I'm not sure how this idea would reduce the surface area. It would just be a matter of interpreting the information produced by the sensor.
Furthermore currently the read-out of the exposure is done after the
exposure and as such doesn't interfere during the exposure, whereas
with what you suggest you would need a read-out process that could
work during exposure, or at least some processor that would keep a
timer and monitor/be triggered by the overflow. This means extra
electronics and extra risk of interference, resulting in extra noise
in the shadows, so even less DR.
That makes sense, although I'm not sure the same result couldn't be achieved with some sort of "burst algorithm" rather than additional electronics. For example, the moment a photosite overflows, it sends its location to the camera's processor, which timestamps it (site X,Y overflowed at 1/XXX seconds) for use in creating the final image. Again, I'm not technical, so I'm not sure what this would do to the efficiency of the whole process.
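Here's a toy sketch of how I imagine that burst reconstruction working - the photon rates, sites, and names are all invented just to show the flow of information:

```python
# Toy simulation of the "burst" idea: each photosite fires one overflow
# event (timestamp, x, y) the moment it fills, and the processor
# reconstructs values afterwards. All numbers are invented.

FULL_WELL = 255
EXPOSURE = 1.0 / 125

# Hypothetical scene: photon arrival rate per site (counts per second)
rates = {(0, 0): 20_000, (0, 1): 60_000, (1, 0): 5_000}

# During the exposure: overflowing sites report themselves once.
events = {}                                  # (x, y) -> overflow timestamp
for (x, y), rate in rates.items():
    t_fill = FULL_WELL / rate                # when this site would fill
    if t_fill < EXPOSURE:
        events[(x, y)] = t_fill

# After the exposure: extrapolate clipped sites, trust the rest.
image = {}
for (x, y), rate in rates.items():
    if (x, y) in events:
        image[(x, y)] = FULL_WELL * (EXPOSURE / events[(x, y)])
    else:
        image[(x, y)] = rate * EXPOSURE      # normal, unclipped readout

print(image)   # approximately {(0, 0): 160, (0, 1): 480, (1, 0): 40}
```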
Perhaps this will be implemented in the future, but I'm not sure
technology is ready for it.

Just some thoughts from someone not hindered by extensive technical
knowledge... ;-)

Wim

--
Belgium, GMT+1

 
Great idea BUT – have you thought that measuring each of the many
millions of photosites continually during the exposure time may just
be a step too much for existing technology. If you don't do this, then
how do you know the precise instant that any one becomes full?
Unless I am missing something, this sounds like a very, very large
and fast A/D converter.
I don't think that a continuous monitoring mechanism would be necessary - just a means to timestamp the overflow the moment it happens. I described a possible idea in my response to Wim above. However, I do see that significant processing power (and buffer, perhaps) would be necessary to resolve the traffic jam of information flowing into the ADC if the image was badly overexposed.
 
As I understand it, photosites on sensors are only capable of
capturing a finite amount of light. The best analogy I have heard is
that a photosite can be conceptualized as a bucket. Light "flows"
into the bucket and if it reaches its capacity while the shutter is
open (and the sensor is activated), the firmware interprets this as
being the maximum brightness in the image when producing the final
image (or RAW file).
Well, it's not so easy. AFAIK the photosite first collects light
photons. The collected photons are converted to a very small
electrical charge by a photodiode. To be able to actually read the
charge it has to be amplified; and only after amplification it's
converted to a digital value.

If I got that right this process can be done only once - after
exposure is done. And there is no possibility to measure a state e.g.
after only half of the exposure time has expired - the necessary
amplification prior to the a/d conversion doesn't make it possible to
collect additional photons afterwards.
Thanks Phil! See my response to Wim for an idea to get around this. Basically, it involves each photosite reporting to the ADC that it is "full" the moment that it happens.
 
Great idea BUT – have you thought that measuring each of the many
millions of photosites continually during the exposure time may just
be a step too much for existing technology. If you don't do this, then
how do you know the precise instant that any one becomes full?
Unless I am missing something, this sounds like a very, very large
and fast A/D converter.
I don't think that a continuous monitoring mechanism would be
necessary - just a means to timestamp the overflow the moment it
happens. I described a possible idea in my response to Wim above.
However, I do see that significant processing power (and buffer,
perhaps) would be necessary to resolve the traffic jam of information
flowing into the ADC if the image was badly overexposed.
Timestamping directly would be hard, but one could imagine catching part of the spill in a second bucket when the primary bucket overflows. If this bucket catches e.g. 1/8th of the spill flow and is half full after the exposure, then the pixel was overexposed two stops. This is equivalent to having two kinds of photoreceptors, large sensitive ones and smaller insensitive ones, but that would be very hard to lay out on the sensor matrix; each would need its own microlenses, etc.

If it is possible to register the spill per pixel, then that would in theory be interesting, but I assume it is very hard to implement on a chip, and even if it is possible, it may take up space that is better used for larger receptors.
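A quick sketch of the two-bucket arithmetic - the capacities and the 1/8 spill ratio are invented purely to illustrate the reconstruction:

```python
import math

# Sketch of the "second bucket" arithmetic. Capacities and the spill
# ratio are made up; real designs would differ.

PRIMARY_CAP = 1024      # counts the main bucket can hold
SPILL_RATIO = 8         # spill bucket receives 1/8 of the overflow flow

def reconstruct(primary, spill):
    """Recover total light and stops of overexposure from both readings."""
    if primary < PRIMARY_CAP:
        return primary, 0.0                     # never overflowed
    total = PRIMARY_CAP + SPILL_RATIO * spill   # undo the 1/8 division
    return total, math.log2(total / PRIMARY_CAP)

print(reconstruct(512, 0))      # -> (512, 0.0): unclipped
print(reconstruct(1024, 384))   # -> (4096, 2.0): two stops overexposed
```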

But who knows what some engineers are cooking up in their labs. I'm sure they try all kinds of smart ideas all the time, but most of course will not be effective in the real world.
--

Tom - http://www.pentaxphotogallery.com/tomvijlbrief
 
Well, it's not so easy. AFAIK the photosite first collects light
photons. The collected photons are converted to a very small
electrical charge by a photodiode. To be able to actually read the
charge it has to be amplified; and only after amplification it's
converted to a digital value.

If I got that right this process can be done only once - after
exposure is done. And there is no possibility to measure a state e.g.
after only half of the exposure time has expired - the necessary
amplification prior to the a/d conversion doesn't make it possible to
collect additional photons afterwards.
Thanks Phil! See my response to Wim for an idea to get around this.
Basically, it involves each photosite reporting to the ADC that it is
"full" the moment that it happens.
I seriously doubt - with my very own very limited knowledge of the technology - that such a triggering/reporting mechanism could work at all. If I understand the basic technology correctly, there is no chance of detecting that a photosite is "full": first you collect photons; then you convert them to a very weak electric current using a photodiode; then you have to amplify this weak current to be able to read it (everything is analog up to this point - no possibility to "store" or "copy" any info for further use!). That means you can't check every millisecond whether each photodiode has collected the maximum number of photons - the act of reading the values itself empties the "buckets", and they have to be filled from scratch for the next reading. And for a single photosite to report to the ADC, it must be read somehow ...

Sure hope I make myself clear, and that somebody with more insight into this technology can verify whether my line of thinking is correct.

--
Phil

GMT +1
http://picturesandstories.blogspot.com
 
Timestamping directly would be hard, but one could imagine catching a
part of the spill when the primary bucket overflows in a second
bucket. If this bucket catches eg 1/8th of the spill flow and it is
half full after the exposure, then the pixel was overexposed two
stops.
This is equivalent to having two kinds of photoreceptors,
large sensitive ones and smaller insensitive ones, but that would be
very hard to lay out on the sensor matrix, each would need their own
microlenses, etc.
This is exactly what Fuji did, very successfully, with their SuperCCD-SR sensors - their special arrangement of the photosites allowed tiny R-pixels to be incorporated alongside normal-sized S-pixels, with the R-pixels being less sensitive than the S-pixels. That means the R-pixels would overflow later than the S-pixels, and with some interpolation (S- and R-pixels being near each other, but not at the exact same position) this very effectively increased dynamic range - without the need to time the "bucket full" moment ...
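A rough sketch of how two such sensitivities might be merged - the 4-stop sensitivity gap and the simple switch-over rule are my guesses, not Fuji's actual processing:

```python
# Rough sketch of merging a sensitive S-pixel with a less sensitive
# R-pixel, SuperCCD-SR style. The sensitivity ratio and the switch-over
# rule are assumptions for illustration only.

S_MAX = 4095            # 12-bit S-pixel readout ceiling
SENS_RATIO = 16         # assume the R-pixel is 4 stops less sensitive

def merge(s_value, r_value):
    if s_value < S_MAX:
        return s_value                   # S-pixel still has headroom
    return r_value * SENS_RATIO          # S clipped: scale up the R reading

print(merge(2000, 125))    # -> 2000 (S-pixel trusted)
print(merge(4095, 500))    # -> 8000 (highlight recovered from the R-pixel)
```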

For whatever reasons, the SR sensors never got into the real mass market ... dynamic range doesn't seem to be the main worry of the standard consumer. But professional wedding photographers sure like the Fujifilm FinePix IS Pro, the latest SLR from Fuji using a SuperCCD-SR sensor - even if it has "only" 6 real MP.

--
Phil

GMT +1
http://picturesandstories.blogspot.com
 
Timestamping directly would be hard, but one could imagine catching a
part of the spill when the primary bucket overflows in a second
bucket. If this bucket catches eg 1/8th of the spill flow and it is
half full after the exposure, then the pixel was overexposed two
stops.
This is equivalent to having two kinds of photoreceptors,
large sensitive ones and smaller insensitive ones, but that would be
very hard to lay out on the sensor matrix, each would need their own
microlenses, etc.
This is exactly what Fuji did very successfully with their
SuperCCD-SR sensors - their special way of placing the photosites
allowed for incorporation of tiny R-pixels in addition to normal
sized S-pixels, with the R-pixels being less sensitive than the
S-pixels. That means that the R-pixels would overflow later than the
S-pixels, and with some interpolation done (S- and R-pixels being
near to each other, but not at the exact same position) this very
effectively increased dynamic range - without the need to time the
"bucket full" moment ...

For whatever reasons the SR-sensors never got into the real mass
market ... dynamic range doesn't seem to be the main worry of the
standard consumer. But professional wedding photographers sure like
the Fujifilm FinePix IS Pro, the latest SLR from Fuji using a
SuperCCD-SR sensor - even if it has "only" 6 real MP.
Interesting, I didn't know that. One problem with the approach (either two buckets that fill at different rates, or a dedicated spill buffer) is that the secondary bucket will probably have much less resolution in order to keep it small.

If the main bucket records 1024 levels (10 bits) and the slow bucket can record e.g. 8 times more photons (3 stops), then it is likely it will only be able to record e.g. 128 levels. Those 128 levels have to cover 3 full stops, which will result in a very coarse recording of the highlights, with all kinds of banding artifacts, etc.
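The arithmetic behind that worry, as a quick check with the same numbers:

```python
# Quick check of the banding worry: spread 128 levels linearly over the
# photon range of 3 extra stops (1024 up to 8192 counts) and see how
# few levels land in each stop.

MAIN_FULL = 1024                       # counts at main-bucket saturation
SPILL_LEVELS = 128
span = MAIN_FULL * 8 - MAIN_FULL       # 7168 extra counts to cover
step = span / SPILL_LEVELS             # 56 counts per recorded level

for stop in range(1, 4):
    lo, hi = MAIN_FULL * 2**(stop - 1), MAIN_FULL * 2**stop
    print(f"extra stop {stop}: {(hi - lo) / step:.0f} levels")
# extra stop 1: 18 levels, 2: 37 levels, 3: 73 levels
# (compare: the main bucket's top stop alone spans 512 levels)
```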
--

Tom - http://www.pentaxphotogallery.com/tomvijlbrief
 
I think it would be a mistake to write this concept off as "impossible" or "unlikely to happen" just because the current technology isn't capable of performing the required functions in a reasonable amount of time. Twenty years ago most of us would have scoffed at the idea that we would someday be taking "digital" pictures of the kind and quality that we do today. Who knows what new technology will be available in the next 5 years? My son-in-law has a PhD in chemistry, specializes in nanotechnology, and some of the things he talks about are downright amazing...
--
Look at the picture, not the pixels...
http://www.lkeithr.zenfolio.com
 
Interesting - you could have a file format that goes something like R, G, B, and then R-clip, G-clip, B-clip, assuming the clip time can be recorded... However, would it also be possible to have an exposure mode that waits until all pixels have clipped and then re-aligns them in firmware? The problem there is that the exposure time would be too long for anything but still objects.
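A sketch of what such a per-pixel record might look like - all field names, sizes, and units are invented:

```python
import struct

# Sketch of a per-pixel RAW record carrying clip times alongside the
# usual channel values. Layout: three 16-bit channel values plus three
# 16-bit clip times in microseconds (0 meaning "never clipped").
PIXEL_FMT = "<HHHHHH"

def pack_pixel(r, g, b, r_clip_us=0, g_clip_us=0, b_clip_us=0):
    return struct.pack(PIXEL_FMT, r, g, b, r_clip_us, g_clip_us, b_clip_us)

record = pack_pixel(4095, 4095, 1200, r_clip_us=4000, g_clip_us=6500)
print(len(record), "bytes per pixel")    # 12 bytes, double the plain R,G,B
```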
--
Mike from Canada

'I like to think so far outside the box that it would require a telephoto lens just to see the box!' ~ 'My Quote :)'



http://www.airliners.net/search/photo.search?sort_order=views%20DESC&first_this_page=0&page_limit=180&&emailsearch=mighty_mike88%40hotmail.com&thumbnails=
 
I think it would be a mistake to write this concept off as
"impossible", or "unlikely to happen" because the current technology
isn't capable of performing the required functions in a reasonable
amount of time. Twenty years ago most of us would have scoffed at
the idea that we would someday be taking "digital" pictures of the
kind and quality that we do today. Who knows what new technology
will be available in the next 5 years? My son-in-law has a PhD in
chemistry and specializes in nanotechnology and some of the things he
talks about are downright amazing...
--
Look at the picture, not the pixels...
http://www.lkeithr.zenfolio.com
I agree. I hope I did not sound like I was writing off the idea - I just don't think it will be possible quite yet.

Dave.
 
Interesting, I didn't know that. One problem with the approach
(either two buckets who are filled at different rates, or a dedicated
spill buffer) is that the secondary bucket will probably have much
less resolution in order to keep it small.

If the main bucket records 1024 levels (10bits) and the slow bucket
can record eg 8 times more photons (3 stops) then it is likely that
it will only be able to record eg 128 levels. In these 128 levels are
3 full stops recorded, which will result in a very coarse recording
of these highlights, resulting in all kinds of banding artifacts, etc.
It may well be that the only difference between the R- and S-pixels on a SuperCCD-SR sensor is the size of the microlenses catching the photons for the underlying photodiodes - if so (I can't verify it right now, just a surmise), the photodiodes may be identical, and only the size of the microlenses decides how many photons are collected into the individual "buckets". In fact, the gain in dynamic range with the SR sensors is measurable - it really works, and I've never heard of coarse highlight recording with those sensors.

--
Phil

GMT +1
http://picturesandstories.blogspot.com
 
That seems like a lot of new technology for what could easily be obtained simply by doing two quick exposures with an electronic shutter and processing the HDR automatically in-camera. But the files would be massive and post-processing very necessary.

Although unlimited dynamic range sounds great (we'd never have to set exposure again!), sometimes it's nice to have all your pixels in a bunch - monitors, prints, and human eyes do not exhibit unlimited dynamic range.

-Matt

--

... interested in .... photographs? Heh? Know what a mean? Photographs? (He asked him knowingly). Nudge nudge, snap snap, grin grin, wink wink, say no more, say no more, know what a' mean? Know what a' mean?

http://www.pentaxphotogallery.com/home#section=ARTIST&subSection=183820&subSubSection=0&language=EN
 
You wouldn't tend to see as much banding as you'd think, because brightness perception is not linear with photon flux but probably closer to logarithmic. (No, I don't know what the actual response curve is, but I'd guess it's fairly complicated.)

This means that less precision is required for bright things than for dim things. This is also why noise is most noticeable in dark areas of an image, even though the absolute (thermal and readout) noise levels are pretty much constant across all brightnesses. Two bits of noise out of a four-bit number are far more destructive than two bits of noise out of a twelve-bit number.
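A quick numeric check of that last point (the noise amplitude here is made up):

```python
# The same absolute noise is a much larger fraction of a small number
# than of a large one.

NOISE = 4                                # "two bits" of noise, ~4 counts

for bits in (4, 12):
    full_scale = 2**bits - 1
    print(f"{bits}-bit value: noise is {NOISE / full_scale:.1%} of full scale")
# 4-bit value: noise is 26.7% of full scale
# 12-bit value: noise is 0.1% of full scale
```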

As to the "overflow" sort of design, a direct digital timestamp mechanism is likely to be quite impractical (in terms of electrical noise induced on the chip, switching power consumption, and possibly chip real estate). Slightly more practical might be an analog version, where a capacitor charges at a steady rate while the photosite is being exposed and is stopped by a comparator when the photosite reaches saturation. This would still have power supply issues (lots of current to charge all those capacitors), and capacitors are typically relatively large components on an IC, but it might be doable.

The main difficulty with any timed overflow sort of design is that, while effective at preserving highlights, it is terrible at capturing dark shadow details...you either need to have a hybrid scheme where both the captured charge in the photocell is read along with the overflow time (if any), or else need to expose long enough for enough photons to hit (practically) every photosite sufficiently to cause it to overflow. It also means that your effective shutter speed varies for different portions of the image, which could cause interesting artefacts.
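A sketch of that hybrid readout, with invented names, units, and values:

```python
# Hybrid scheme: use the accumulated charge when a site never
# overflowed, and the overflow time when it did.

FULL_WELL = 4095
EXPOSURE = 1.0 / 125

def photon_flux(charge, overflow_t=None):
    """Estimated photons per second at one photosite."""
    if overflow_t is None:
        return charge / EXPOSURE             # shadows/midtones: use charge
    return FULL_WELL / overflow_t            # highlights: use time-to-fill

print(photon_flux(100))                      # dim site: 12500.0
print(photon_flux(4095, overflow_t=0.002))   # bright site: 2047500.0
```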
--
--DrewE
 
I think it would be a mistake to write this concept off as
"impossible", or "unlikely to happen" because the current technology
isn't capable of performing the required functions in a reasonable
amount of time.
I agree whole-heartedly. People sometimes say I'm a naysayer when it comes to miraculous technology, but it isn't so. I'll tell you you're getting taken if you believe somebody is getting around the laws of thermodynamics, but for something like this, count on it. We can already do this with HDR techniques, so it isn't a question of possibility, but of economy of production and practicality of application.
Twenty years ago most of us would have scoffed at
the idea that we would someday be taking "digital" pictures of the
kind and quality that we do today.
I would have been oblivious to the discussion because my pictures looked like cr@p back then anyway. Film P&S cameras were the worst.
Who knows what new technology
will be available in the next 5 years? My son-in-law has a PHD in
chemistry and specializes in nanotechnology and some of the things he
talks about are downright amazing...
The societal implications are difficult to deal with as well. See: http://en.wikipedia.org/wiki/Technological_singularity

--

Through the window in the wall
Come streaming in on sunlight wings
A million bright ambassadors of morning
 
I am hoping this patent will make its way into a DSLR sensor. Being able to count how many times the "buckets" fill would be nice at times. It would take some new in-camera processing of this information - maybe only allowing it to count up to 2 "buckets" in a DRE mode, and having no limit on the count in an HDR mode.
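A sketch of the counting idea - the self-resetting bucket and the capped "DRE mode" are my guesses at how it might work, not details from the patent:

```python
# A photosite that resets itself on overflow while a counter tracks
# the resets. All names and the cap are illustrative.

FULL_WELL = 1024

def read_site(total_photons, max_fills=None):
    """Return (fill count, residual charge) for a self-resetting site."""
    fills, residual = divmod(total_photons, FULL_WELL)
    if max_fills is not None and fills > max_fills:
        fills, residual = max_fills, FULL_WELL - 1   # counter saturated
    return fills, residual

fills, residual = read_site(5000)                    # "HDR mode": no cap
print(fills * FULL_WELL + residual)                  # -> 5000, fully recovered
print(read_site(5000, max_fills=2))                  # "DRE mode": (2, 1023)
```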

Might be some interesting times ahead.

Dave
There are some very interesting recent Samsung patents regarding DR.
--
'This is more serious than I thought ... but it is still fun!'
http://www.pbase.com/rupertdog Take a look- It's Free!
 
