Dynamic Range Compression - thoughts!

Thread starter: Anders Borum (Guest)
Hello!

This question has been on my mind for quite some time. Here are some (perhaps quite interesting) thoughts...

Take a look at what the industry is offering us these days. We have pro digital backs, well-built, rugged and high-quality SLRs with sharp, defined outputs. Very nice.

However, it seems like few of the engineers who built these marvelous beasts are up to the task of offering advanced dynamic range compression in our images - an important aspect of digital photography.

Blown-out highlights are impossible to recover, even for a Photoshop expert (I'm not counting the clone-stamp technique - that's cheating). We cannot edit what's not there - it's that simple.

Let's face it, white t-shirts in bright sunshine are a killer, right? Everybody knows this - and if not, they will some day. It's also a killer to carry reflectors wherever you go - even more so if you're off on holiday with your camera. Tossing around a reflector just to 'save the highlights' is, simply put, not popular (a pro on a job may of course find it useful, naturally).

So how do we save those highlights?

It would be nice to photograph a scene with e.g. a lot of shadow and bright sky and be able to reproduce that dynamic range. One could use the well-known tripod-bracketing approach, but mostly this isn't a viable solution - especially with moving subjects in the frame.

I know there are technical difficulties in producing a CCD/CMOS layout with either 1) a higher bit depth or 2) a sensor layout where each photosite is actually made up of 4 individual sensors, each with its own sensitivity - but it's probably quite doable with today's technology.

For instance (just to be clear here), where today's CCD/CMOS layouts would have a green sensor, a layout supporting dynamic range compression would have e.g. 4 sensors at that position. The first sensor would be very sensitive, the second a little less, and so forth. The fourth sensor would be left with little sensitivity, and would thus be able to capture the differences in extremely bright light.

Imagine a scene with a lot of shadow/sunlit areas. The sensors on the CCD/CMOS that are very sensitive to light would capture the detail in the shadows, but otherwise be completely burnt out (charged) where the scene is hit by the bright sun. However, the sensors that are very insensitive would capture the details in the sunlit areas of the scene, and otherwise be completely discharged in the dark shadows.

With some clever algorithms, one could compress and merge the 4 different ranges to that of e.g. our monitors/printers so that we could see the entire scene in its beauty.
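One way such a merge could work (a minimal sketch, not any manufacturer's algorithm): for each site, take the reading from the most sensitive sensor that has not saturated, rescale it by that sensor's sensitivity onto a common linear scale, then log-compress the result into a displayable range. The sensitivities, `FULL_SCALE`, and the sample readings below are all illustrative assumptions.

```python
import math

# Hypothetical relative sensitivities of the four co-sited sensors.
SENSITIVITIES = [1.0, 1/4, 1/16, 1/64]
FULL_SCALE = 255  # saturation level of a single (8-bit) readout

def merge_site(readings):
    """Merge four readings of one pixel site into a single linear value.

    Uses the most sensitive sensor that has not saturated, rescaled by
    the inverse of its sensitivity so all readings share one scale.
    """
    for value, s in zip(readings, SENSITIVITIES):
        if value < FULL_SCALE:          # not blown out at this sensitivity
            return value / s            # rescale to the common linear scale
    return readings[-1] / SENSITIVITIES[-1]  # everything saturated

def compress(linear, max_linear):
    """Log-compress a merged linear value back into 0..255 for display."""
    return round(255 * math.log1p(linear) / math.log1p(max_linear))

# Example: a deep shadow (only the sensitive sensors see anything) and
# bright sun (the sensitive sensors saturate, the insensitive one doesn't).
shadow = [12, 3, 0, 0]
sun    = [255, 255, 255, 180]
max_linear = FULL_SCALE / SENSITIVITIES[-1]   # brightest representable value

merged = [merge_site(shadow), merge_site(sun)]
print(merged)                                  # [12.0, 11520.0]
print([compress(v, max_linear) for v in merged])  # [67, 246]
```

Both sites end up inside the displayable range, while the underlying linear values span far more than 8 bits - which is exactly the compression step described above.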

As I see it, we have CCDs with 3000x2000 pixels today. Applying the CCD/CMOS layout as described above, each sensor would have to be represented with 4 sensors (each having a different sensitivity).

Cutting the CCD/CMOS in half on each axis would effectively yield a dynamic range CCD/CMOS with 1500x1000 pixels - but offer a huge gain in dynamic range compared to the conventional CCD/CMOS.

As the sensors effectively become 1/4 of their original size, the signal to noise ratio becomes a major player.

Now and then, I'd rather have 1500x1000 with dynamic range compression than the 2000x1300 offered by my trusty D1. The rest is left up to the development curve of technology.

Remember, it's not all about getting more pixels - the quality of those pixels counts just as much!

--
with regards
anders lundholm · [email protected]
the sphereworx / monoliner experience
 
Anders, there was an announcement here some time ago of an imager with fairly low resolution but high dynamic range.

Of course, if one has a sufficiently noise-free sensor (large cells) one can just expose for the highlight detail and recover the shadows later. I have heard that one can use a compositing technique to recover the data from the Canon D30 in linear mode - anyone here care to comment?

With my D1X, my impression is that when you work with raw (Capture) you have dynamic range which is better than Ektachrome-process color slide film, so the present state, while not ideal, is not catastrophic.

Edmund
 
I've been thinking about something similar. Actually, you wouldn't even have to go that far to get what you are describing - at least for CCD imagers. The only thing that ISO changes is the amplification, which is applied after the charge is read off the chip.

Of course, the above would only help when you are operating at ISOs higher than the minimum (well, at the minimum you'd get better shadow detail). The problem is that at the base ISO, the blown-out whites are caused by the cells saturating. Making cells smaller (i.e. less sensitive) would also make the wells in which the charge is stored smaller.

If you masked off part of the cells, you could avoid saturation, however you'd be reducing sensitivity. While that would seem to help, it brings the image signal to lower levels and means the noise (which remained at a somewhat constant level) becomes a larger percentage of the signal. This decrease in the signal to noise ratio in turn decreases the dynamic range of the sensor - bringing you back to where you started.

Also, if you're willing to decrease resolution, the best way to increase dynamic range is simply to double the size of the pixels as-is. Remember, a signal is the sum of the actual light recorded during the integration and the noise. Every reading has a potential error of ± a certain percentage dictated by the signal-to-noise ratio. Make the sensor larger and it collects more light, so the SNR is higher. With your saturation level remaining the same (as it would if you didn't change anything else), and the error being smaller, you can cut that data into more slices and still have meaningful data.
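This trade-off can be put into rough numbers. Dynamic range in stops is approximately log2(full-well capacity / read noise), so quadrupling the pixel area - and thus, roughly, the well capacity - at constant read noise buys about two stops. The electron counts below are illustrative assumptions, not measured figures for any real chip:

```python
import math

def dynamic_range_stops(full_well_e, read_noise_e):
    """Dynamic range in stops: log2 of full-well capacity over read noise."""
    return math.log2(full_well_e / read_noise_e)

# Illustrative electron counts for a small and a 4x-area pixel.
small_pixel = dynamic_range_stops(full_well_e=20_000, read_noise_e=20)
big_pixel   = dynamic_range_stops(full_well_e=80_000, read_noise_e=20)

print(f"small pixel: {small_pixel:.1f} stops")   # ~10.0 stops
print(f"large pixel: {big_pixel:.1f} stops")     # ~12.0 stops
```

The gain is exactly log2(4) = 2 stops, independent of the absolute numbers, which is why halving resolution on each axis is such a direct route to more dynamic range.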

Interesting discussion though :)
Thomas Sapiano,
[email protected]
 
Hi Anders,

I think your thought on an increased dynamic range is not unreasonable. Unfortunately we are working with a consumer product that is developing about as fast as it can. For years the transparency films were made with very low dynamic range. We called them professional products because they required special exposure techniques and lighting to give you a good image. We are now blessed with the newest emulsions that have a great dynamic range and virtually no grain. That took 40 years!

The CCD is going the same way, but the numbers will drive the development. Very few products will be of the high quality you ask for, because the amateur only needs a 5x7 or 8x10 print with a quality that is already better than what standard 35mm film can offer - and all this on your home computer. That took 5 years!

I see the future as stopping at the 5MP camera for amateur digicams and developing into a more expensive, high-quality product for the pro. The CCDs will be better, but at a price that will not be acceptable to the point-and-shoot crowd. CMOS will take over in that market and the CCD will get the upscale products. The dynamic range will be enhanced with tricks like multiple shots at lower exposures to provide overlay imaging in the fashion you suggest, but again at a cost. There are scan backs with a 12-stop range, so the CCD will mature much the same way. Just give it some time.
Rinus
 
Anders,

I would agree that this is one of the most important areas for current SLRs. By the way, the Kodak 760 has somewhat more dynamic range than the D1/D1X.

Uwe
 
I really do not know that much about imagers, but I always thought that since astronomers use the method of overlaying multiple images to get rid of noise and show great detail, it would only be a matter of time before manufacturers introduce multiple exposures on a CCD by means of a fast electronic shutter (rather than mechanical) and blend these images into a single fantastic frame. It may prevent very high shutter speeds, but it would keep the noise down, and instead of a longer exposure you could simply overlay more images. It could be a progressive scan mode that repeats. Does that look like it has a future?
Rinus
 
Hi! It's that confounded ISO carry-over from the film days again. Using these digital sensors at a chosen ISO does increase convenience tremendously, at a cost of flexibility. I have been trying to justify using the RAW format rather than the much faster/more compact JPEG format by testing under- and over-exposure situations on an EOS D30. Conclusion: the RAW format does give you increased dynamic range when you over-expose (I have not found any practical advantage for RAW in under-exposure situations, since I post-process all my images for exposure and color balance anyway, JPEG or RAW). It is possible to recover reasonably acceptable images (not "fully" satisfactory, but nonetheless "reasonable") even when you over-expose RAW by two stops - not so with JPEG. So, presumably the way to routinely maximise the available dynamic range (on a D30 anyway) is to shoot RAW and download into a 48-bit linear TIFF. ALL the dynamic range your camera is capable of is there, captured, and available for playing with. Provided your PC is fast enough and you have PLENTY of RAM (!), this does not slow your work throughput enormously.

To increase dynamic range further - though this would slow you down when you get back to your PC - you could auto-bracket your exposure, both frames onto RAW. The first exposure would be the "correct" one, the second under-exposed at, say, a 2x or 4x faster shutter speed. By compositing the two images later one can, in theory anyway, extend the captured detail by 4 stops into the over-exposure side. By using auto exposure bracketing, and with little action in the image, it MAY be possible to composite without having to align the images. One has to test this out in the field; aligning images to subpixel accuracy is a real pain. That's also why the second exposure has to be under-exposed by using a faster shutter speed - I assume closing down the lens would mess up the optics (changes in depth of field and such). Auto-bracketing and that 1 GB Microdrive should not add much effort at the camera stage, and one can always decide later, in front of the PC, whether the extra effort is really worth it. Compositing in this way should, if anything, decrease rather than increase noise. My understanding is that compositing two images (equal exposure) taken immediately one after the other doubles the image signal but increases random noise by only a factor of sqrt(2), i.e. 1.4 times. So compositing a correct and an under-exposed image should still reduce rather than increase noise, but to a lesser extent.
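The compositing step described above might be sketched like this in linear space, assuming two perfectly aligned frames and a hypothetical 12-bit RAW clipping level: keep the normal frame wherever it is unclipped, and rebuild clipped pixels from the under-exposed frame multiplied back up by the exposure ratio.

```python
# Composite a normal and a 2-stops-underexposed frame (linear values,
# perfectly aligned) into one extended-range frame. Pure-Python sketch;
# real RAW data would also need demosaicing and alignment first.

CLIP = 4095          # saturation level of a hypothetical 12-bit linear RAW
RATIO = 4            # 2 stops faster shutter -> 1/4 the exposure

def composite(normal, under):
    out = []
    for n, u in zip(normal, under):
        if n < CLIP:
            out.append(n)            # normal frame holds valid data here
        else:
            out.append(u * RATIO)    # rebuild the blown value from the
                                     # under-exposed frame, rescaled
    return out

normal = [100, 2000, 4095, 4095]     # last two pixels are clipped
under  = [25,   500, 1500, 4000]     # same scene at 1/4 the exposure

print(composite(normal, under))      # [100, 2000, 6000, 16000]
```

The composited values run past the single-frame clipping level, which is the "extra stops on the over-exposure side" the paragraph above describes.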

Now who really wants all that hassle? 10-micron-square pixels (like in a D30) already give us reasonable well depth (i.e. dynamic range) when using RAW + 48-bit linear TIFF. Just give us 3600x2400 (= 8.6 megapixels) of these pixels to cover the full 35mm frame. Such a sensor should be fabricatable at a reasonable cost (that's really what is holding things up) in the next 2 to 3 years. It would also match the resolution of our best lenses - say, 50 lines/mm over the entire frame. OK, let's not delve into MTF right here ;-). For people who need a faster-recycling camera (e.g. sports) they can make one with 20-micron-square pixels (1800x1200 = 2.2 megapixels), and the well depth will be 4 times as much (an extra two stops of dynamic range). But then others will want something in between... I believe the technology is available today; it's only the cost of all those discarded imperfect sensors that gets in the way.
 
That's essentially what we're discussing, but at a more automated level. By sampling at multiple ISOs off the same CCD readout, the images will be aligned, identical and simple to integrate. What I suggested is technically possible; however, that doesn't mean I think it's going to be integrated into a camera anytime soon.
Exactly - the reason why I never bothered to bring up what I proposed above before this was mentioned was practicality. While technically possible, such a system would require double the ADC equipment, a much more sophisticated amplifier with two separately tunable gain stages, as well as a much more sophisticated datapath to handle the additional data. Additionally, the added complexity would introduce more noise into the puzzle - and we are already fighting an uphill battle against that. All of this would add a significant amount of cost, in both manufacturing and engineering.

As much as I'd like to see a feature like this in a camera, I think it would be of limited usefulness. Current digital cameras already have quite a large dynamic range as it stands; while this would improve that range at the upper ISO bands, the utility would really be limited to special circumstances. Kodak's DCS series can comfortably push/pull ±2 stops within the dynamic range captured in its raw files - and from your description it sounds like the D30 can do similarly. A system like this would allow you to go for ±4 stops or more. However, the added complexity would inevitably increase the cost of the camera, and these cameras are already fighting to stay competitive. Although it would no doubt be useful to have a camera that could shoot at several ISOs simultaneously, it is too much of a niche feature to justify the costs.

As for full-frame sensors, there are a lot of issues other than the simple cost of fabrication. However, that has been discussed ad nauseam, so it's likely something for another discussion :)
 
Well, not knowing the details of any of the manufacturers' particular imager chips, it's hard to say. However, I can't see any reason why it would be impossible. Canon's 1D CCD looks to simply be a larger version of the D1H CCD; if Canon is providing 8 fps with more data to read out from essentially the same chip, the limitation is in the processing of the image after the fact. Now, if you are simply adding the readouts of successive frames rather than processing or reading out each one, you could theoretically provide high 'framerates'. Averaging the images would be a little slower, but it could be done with a sufficiently powerful microprocessor. With ILT CCDs like Canon's and Nikon's, you could leave the mechanical shutter and mirror open for the compound exposure to further increase speed. This could likely be performed solely in firmware, so it wouldn't be that impractical. However, not having the internal details of either camera, I can't really say for sure one way or the other.

However, AFAIK this will really only help with long exposures: even if you could pull 20 cps (captures per second - they really aren't frames, so to speak), you would still require a very static scene for it to operate properly. That would make it something of a niche feature for long-exposure needs; however, Canon did integrate dark-frame subtraction in the D30, and the simplicity of integration could make it a feasible feature for firmware upgrades.
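The sqrt(N) behaviour behind Rinus's stacking idea is easy to check with a small simulation (synthetic Gaussian read noise, not real sensor data): averaging 16 equal captures should leave the signal at the same level while cutting random noise by about a factor of 4.

```python
import random
import statistics

random.seed(0)
SIGNAL, NOISE_SIGMA, N_FRAMES, N_PIXELS = 100.0, 10.0, 16, 5000

def capture():
    """One synthetic frame: a constant signal plus Gaussian read noise."""
    return [SIGNAL + random.gauss(0, NOISE_SIGMA) for _ in range(N_PIXELS)]

frames = [capture() for _ in range(N_FRAMES)]
stacked = [sum(col) / N_FRAMES for col in zip(*frames)]  # per-pixel average

single_noise = statistics.stdev(frames[0])
stacked_noise = statistics.stdev(stacked)
print(f"single-frame noise:   {single_noise:.2f}")   # ~10
print(f"16-frame stack noise: {stacked_noise:.2f}")  # ~10/sqrt(16) = ~2.5
print(f"improvement: {single_noise / stacked_noise:.1f}x")  # ~4x
```

This is the same reason astronomers stack frames: the mean signal stays put while uncorrelated noise averages down, so the effective dynamic range on the shadow end grows with the number of captures.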
 
Hello!

This question has been on my mind for quite some time. Here goes
some (perhaps quite interesting) thoughts..

Take a look at what the industry is offering us these days. We have
pro digital backs, well-built, rugged and high-quality SLRs with
sharp, defined outputs. Very nice.

However, it seems like few of the engineers who built these
marvelous beasts are up to the task of offering advanced dynamic
range compression in our images - an important aspect of digital
photography.

Blown out highlights are impossible to recover, even by the
photoshop expert (I'm not considering the stamping technique - it's
like cheating). We cannot edit whats not there - it's that simple.

Lets face it, white tshirts and bright sunshine is a killer, right?
Everybody knows this, if not - they will some day. It's a killer
too, to carry reflectors wherever you go - even more so if you're
off on holiday, taking your camera with you. Tossing around a
reflector, at the excuse of 'saving highlights' is simply put not
popular (a pro on a job may ofcourse find this usefull, naturally).

So how do we save those highlights?

It would be nice to photograph a scene with e.g. a lot of
shadow/bright sky and being able to reproduce that dynamic range.
One coule use the well-known tripod-bracketing-approach, but
mostly, this isn't a viable solution. Especially with dynamic
subjects in the frame.

I know there are technical difficulties in producing a CCD/CMOS
layout with either 1) a higher bit depth or 2) developing a sensor
layout, where each sensor is actually made up of 4 individual
sensors - each having their own sensitivity - but probably very
do-able with todays technology.

For instance (just to be clear here), where todays CCD/CMOS layouts
would have a green sensor, a CCD/CMOS layout supporting dynamic
range compression would have e.g. 4 sensors at that position. The
first sensor would be very sensitive, the second a little less and
so forth. The fourth sensor would be left with little sensivity,
thus being able to capture the differences in extremely bright
light.

Imagine a scene with a lot of shadow/sunlit areas. The sensors on
the CCD/CMOS that are very sensitive to light would capture the
detail in the shadows, but otherwise be completely burnt out
(charged) where the scene is hit by the bright sun. However, the
sensors that are very insensitive would capture the details in the
sunlit areas of the scene, and otherwise be completely discharged
in the dark shadows.

With some clever algorithms, one could compress and merge the 4
different ranges to that of e.g. our monitors/printers so that we
could see the entire scene in its beauty.

As I see it, we have CCDs with 3000x2000 pixels today. Applying the
CCD/CMOS layout as described above, each sensor would have to be
represented with 4 sensors (each having a different sensitivity).

Cutting the CCD/CMOS in half on each axis would effectively yield a
dynamic range CCD/CMOS with 1500x1000 pixels - but offer a huge
gain in dynamic range compared to the conventional CCD/CMOS.

As the sensors effectively become 1/4 of their original size, the
signal to noise ratio becomes a major player.

Now and then, I'd rather have 1500x1000 with dynamic range
compression than the 2000x1300 offered by my trusty D1. The rest
is left up to the development curve of technology.

Remember, it's not all about getting more pixels - the quality of
those pixels counts just as much!
--
with regards
anders lundholm · [email protected]
the sphereworx / monoliner experience
i see three options for today (sorta mentioned above)
1 - a photo artist could fix the blown-out areas or find extra info there

2 - take multiple exposures - this is a really practical technique anyway for difficult-to-expose scenes - relatively easy to do in photoshop - we can now get a troublesome dark spot correctly exposed along with the highlights and any other area a camera would have trouble with - you can effectively get almost any exposure this way

3 - stop down while shooting - slightly underexpose - it's very easy to pull out detail in the shadows and you save your highlights - i would recommend this anyway when there's a danger of blowing the highlights - but someone may say something different
  • i think for now you have to think completely differently about shooting digital - you don't have all the choices of films and their effects - this kind of control has been moved to the digital darkroom - at least for now
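Option 3 above can be sketched in a few lines - a minimal example assuming a linear image normalized to 0..1, one stop of deliberate underexposure, and an illustrative gamma value:

```python
import numpy as np

def lift_shadows(img, stops_under=1.0, gamma=0.7):
    """Sketch of option 3: the frame was underexposed by `stops_under`
    stops to protect the highlights; push it back up, then apply a
    gamma curve that opens the shadows. `img` is a float array, 0..1."""
    pushed = np.clip(img * (2.0 ** stops_under), 0.0, 1.0)  # restore exposure
    return pushed ** gamma  # gamma < 1 brightens the dark tones the most

# A deep shadow, a midtone and a bright area, each one stop under:
frame = np.array([0.02, 0.25, 0.45])
lifted = lift_shadows(frame)
```

The deep shadow gains the most from the gamma step, while the near-highlight value comes back close to (but not over) full scale - which is the point of underexposing in the first place.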
 
Kodak holds many patents in CCD design (especially in highlight control), so the Japanese makers can only make toys.

http://www.dpreview.com/reviews/kodakdcs760/page13.asp
Hello Everybody,

There is an old adage: garbage in, garbage out. If you work wisely in digital and work with it, the dynamic range is quite amazing compared to slide films.

I find I seldom have problems with the dynamic range but that's because I work with the camera not against it. You can sit around and bemoan the dynamic range or take the time and learn how to work with it.

I very carefully select my images, giving the camera "perfect" images, and the camera always gives me back perfectly exposed material with greater dynamic range than I ever had with positive films.

If you simply feed your camera garbage it will give you garbage back.

Reflect on this instead of trying to dream up "if only" situations. For the long-time, seasoned photographer, digital is a wondrous event with amazing qualities.

Learn to work with it, because I can assure you it's much better than you are!

Stephen
 
Right on, Stephen! Compared to my days in the Bowery with my Graflex, flash bulbs and film holders, you young photographers are using lightweight dream machines!!!
 
Do tell me, Weegee!

Are you really THE very famous Weegee?

Sure sounds like it could be ; )

Stephen