Anders Borum
Guest
Hello!
This question has been on my mind for quite some time. Here are some (perhaps quite interesting) thoughts...
Take a look at what the industry is offering us these days. We have pro digital backs, well-built, rugged and high-quality SLRs with sharp, defined outputs. Very nice.
However, it seems that few of the engineers who built these marvelous beasts have taken on the task of offering advanced dynamic range compression in our images - an important aspect of digital photography.
Blown-out highlights are impossible to recover, even for a Photoshop expert (I'm not counting the clone-stamp technique - that's cheating). We cannot edit what's not there - it's that simple.
Let's face it, white T-shirts in bright sunshine are a killer, right? Everybody knows this - and if not, they will some day. It's also a killer to carry reflectors wherever you go - even more so if you're off on holiday with your camera. Tossing around a reflector for the sake of 'saving highlights' is, simply put, not popular (though a pro on a job may of course find it useful).
So how do we save those highlights?
It would be nice to photograph a scene with, say, a lot of shadow and bright sky and be able to reproduce that dynamic range. One could use the well-known tripod-bracketing approach, but mostly this isn't a viable solution - especially with moving subjects in the frame.
I know there are technical difficulties in producing a CCD/CMOS layout with either 1) a higher bit depth or 2) a design where each sensor site is actually made up of 4 individual sensors, each with its own sensitivity - but it is probably very doable with today's technology.
For instance (just to be clear here), where today's CCD/CMOS layouts would have a single green sensor, a layout supporting dynamic range compression would have, say, 4 sensors at that position. The first sensor would be very sensitive, the second a little less, and so forth. The fourth sensor would be left with very little sensitivity, and would thus be able to capture the differences in extremely bright light.
Imagine a scene with a mix of deep shadow and sunlit areas. The very sensitive sensors on the CCD/CMOS would capture the detail in the shadows, but would be completely burnt out (fully charged) wherever the scene is hit by the bright sun. The very insensitive sensors, on the other hand, would capture the detail in the sunlit areas and would register essentially nothing in the dark shadows.
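To make that capture model concrete, here is a toy sketch in Python. The sensitivities, full-well level and sample radiances are all made-up, illustrative numbers - not a proposal for actual hardware values:

import numpy as np

# Toy model: one pixel site made up of 4 sub-sensors, each roughly
# two stops less sensitive than the previous one. All numbers here
# are assumptions, chosen only to illustrate the idea.
sensitivities = np.array([1.0, 0.25, 0.0625, 0.015625])

def capture(radiance, sensitivity, full_well=1.0):
    """Linear response of one sub-sensor, clipped at its full-well charge."""
    return np.clip(radiance * sensitivity, 0.0, full_well)

# Four sample radiances: deep shadow, midtone, bright, blazing sun.
scene = np.array([0.02, 0.5, 4.0, 40.0])
responses = np.stack([capture(scene, s) for s in sensitivities])
print(responses)
# The most sensitive sub-sensor resolves the 0.02 shadow but clips at
# 4.0 and 40.0; the least sensitive one still reads 0.625 at 40.0.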
With some clever algorithms, one could merge the 4 different ranges and compress the result to the range of, say, our monitors and printers, so that we could see the entire scene in its beauty.
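Continuing the toy example above, a minimal sketch of such a merge might look like this. The noise floor and the simple x/(1+x) compression curve are my own assumptions, not the 'clever algorithm' a real camera would ship:

def merge_exposures(responses, sensitivities, full_well=1.0):
    """Estimate scene radiance per pixel from the 4 sub-sensor readings.

    For each pixel, keep the readings that are neither clipped nor
    buried in the noise floor, divide out each sub-sensor's
    sensitivity, and average. A real merge would weight by SNR.
    """
    noise_floor = 0.01 * full_well          # assumed, not measured
    radiance = np.zeros(responses.shape[1])
    for i in range(responses.shape[1]):
        valid = (responses[:, i] > noise_floor) & (responses[:, i] < full_well)
        if not valid.any():                  # everything clipped or dark
            valid[:] = True
        radiance[i] = np.mean(responses[valid, i] / sensitivities[valid])
    return radiance

def tone_compress(radiance):
    """Squeeze the merged range into [0, 1] for a monitor or print,
    here with a simple global x / (1 + x) curve."""
    return radiance / (1.0 + radiance)

hdr = merge_exposures(responses, sensitivities)
print(hdr)                 # recovers [0.02, 0.5, 4.0, 40.0]
print(tone_compress(hdr))  # display-ready values in [0, 1]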
As I see it, we have CCDs with 3000x2000 pixels today. Applying the layout described above, each pixel would have to be represented by 4 sensors (each with a different sensitivity).
Halving the pixel count on each axis would effectively yield a dynamic-range CCD/CMOS with 1500x1000 pixels - a big drop in resolution, but a huge gain in dynamic range compared to the conventional design.
As each sensor effectively shrinks to 1/4 of its original size, the signal-to-noise ratio becomes a major player.
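A quick back-of-the-envelope calculation shows why (the photon counts are assumed round numbers, just to show the scaling):

import math

# Halving the pixel pitch cuts the light-gathering area to 1/4, so a
# sub-sensor collects ~1/4 of the photons. Photon shot noise goes as
# sqrt(signal), so SNR = signal / sqrt(signal) = sqrt(signal).
full_well_large = 20000                  # photons, assumed full-size pixel
full_well_small = full_well_large / 4    # quarter-area sub-sensor

snr_large = math.sqrt(full_well_large)   # ~141:1
snr_small = math.sqrt(full_well_small)   # ~71:1 - one stop worse
print(f"SNR: {snr_large:.0f}:1 vs {snr_small:.0f}:1")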
Now and then, I'd rather have 1500x1000 with dynamic range compression than the 2000x1300 offered by my trusty D1. The rest is left up to the development curve of technology.
Remember, it's not all about getting more pixels - the quality of those pixels counts just as much!

--with regards
anders lundholm · [email protected]
the sphereworx / monoliner experience