DR, digital photography’s greatest weakness?

The Kodak 14n/SLRn/SLRc (8 stops) easily matches film's dynamic range, and scanning backs deliver 11 stops, far exceeding film's 5-7 stops.
and I am betting it really won't. I shoot RAW, and while it is
better at preserving the available DR of digital captures, it
still does not provide negative-film DR.

I guess I'll resort to the double shot method since there doesn't
appear to be a real answer to this.

I am pretty much a die-hard digital guy; like I said, all
digital for the past 5 years, so there is no need to sell me on
its benefits, print quality, cost, etc. I am a believer to the
tune of several thousand dollars.

The problem is that there are real-world scenes that today's
crop of 6MP DSLRs simply do not handle well. I guess that is
reality, and it is pointless to discuss since most people seem
to have come from slide film and are used to it.

Thanks for everyone's input and if you solve this issue, keep me in
mind!

--
Thanks!
Mark
Latest Gallery
http://www.radphotos.net/newstuff/Page.html
Personal Favorites
http://www.radphotos.net/S2GAL/Page.html
 
The requirement for commercial images had always largely been for
transparencies. Very few commercial customers accepted prints.
So did stock, magazines, and most of the fine-art landscape crowd when they wanted color.

The print crowd is made up primarily of the portrait and wedding folk.

And of course, the mass market...
 
Where is the evidence for the 8 stops in the Kodak dSLR?
If it exists, how do they achieve it?

Stuart
 
I shoot all RAW, so I appreciate the comment, but I tell you,
even hours in Photoshop with a single shot can't compare with
the DR of film. I think that all reviews on DPReview should
have a DR/latitude test. The baseline should be film until
something can prove that it can beat it.
As I understand it, DR in digital is purely a function of bit depth, which is why RAWs allow for more DR (usually 12-bit-per-color-per-pixel vs. 8-bit for ordinary JPEGs), so it would be somewhat pointless to compare DR on digitals - you can just state their maximum bit depths. These will vary to some degree based on color settings, but should be pretty similar. I could be wrong on this, but I think I'm at least on the right track.

I think you may be mythologizing film too much when you imagine that most print films could capture a range vastly superior to RAWs, let alone anywhere near what nature can present you with. It's a rare film that will let you catch shadow detail as well as white, fluffy clouds.

What might be throwing you off is that digital is more similar to slide film in that it errs slightly on the side of shadows, and is more likely to blow highlights, while prints err on the side of highlights and are more likely to lose shadow detail (based on whatever mid range you're exposing for), thus you blow highlights much less often with prints in practical use.
 
When an outdoor scene is very contrasty, I just set my camera to lower contrast, meter for zone V, expose for zone V, and tweak in Photoshop afterwards. Lots of my images are commercially printed.

Earl
I would like some expert opinions on blowing highlights using digital.

I know you can underexpose to save highlights, and that's my
current mode of operation, but I am not really happy with it. I
have used shadow recovery with decent results, but sometimes
the contrast of a scene is so great that this method leaves you
with a poor-looking photo.

I know you can take two shots and combine them, but I don't
really like that either; it seems like a lot of work.

Maybe I am just disillusioned but I have seen great single shot
photos that appear to capture a ton of range from the darks to the
highlights, am I missing something? Is there some basic
fundamental aspect of photography I am missing where on digital you
can have it all?

Alas, I did some side-by-side comparisons on this last trip to
Colorado of film versus my digital, and I have to say that the
film vastly outperformed my digital's ability to capture the
range of contrast in partly cloudy midday mountain shots. So
much so that I am likely to continue carrying a film camera
with me on these trips where I know I am likely to need more
ability to capture dynamic range.

So is this truly digital photography’s greatest weakness?

--
Thanks!
Mark
Latest Gallery
http://www.radphotos.net/newstuff/Page.html
Personal Favorites
http://www.radphotos.net/S2GAL/Page.html
 
I would like some expert opinions on blowing highlights using digital.
Seems to me that the manufacturers should have a mode that maps the 12-bit dynamic range to 8 bits. This would give the needed dynamic range and also virtually eliminate the need to process raw images for dynamic range with two partial images.

Seems to me that we already have a "contrast" setting, which could be the ideal mechanism for this.

tony
 
I respectfully disagree. Ultimately, we must see visual evidence. In fact, I think the whole subject has remained murky because evidence is never presented.
Ansel Adams and others produced much visual evidence re DR in film.

A simple way to start would be to show test scenes, like Imaging Resource does. A better way would be to photograph a textured neutral gray object, opening and closing various numbers of f-stops to see when detail disappears. More sophisticated methods are of course possible.

Unless we show visual evidence and then correlate it with bit depth or other electronic data, we will always be unclear about this vital issue.

Stuart
 
Bit depth, contrary to what you state, is not really related to DR. Bit depth is the ability to capture a certain number of points between the brightest and darkest pixel.

DR is the ability to push the brightest and darkest points further apart.

The advantage of bit depth is the ability to fill in between the bright and dark end points with more varied points.
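As a rough numeric sketch of that distinction (the full-well capacity and read-noise numbers below are invented for illustration, not the specs of any real chip):

```python
import math

# DR is set by the sensor itself: the ratio between the largest
# signal a pixel well can hold and the noise floor below which
# detail drowns. Bit depth only sets how finely that span is sliced.
full_well_electrons = 40000.0   # hypothetical well capacity
read_noise_electrons = 30.0     # hypothetical noise floor

dr_stops = math.log2(full_well_electrons / read_noise_electrons)
print(f"sensor DR: {dr_stops:.1f} stops")

# A 12-bit file just places more points between those same two
# end points than an 8-bit file does; it does not move them apart.
for bits in (8, 12):
    print(f"{bits}-bit: {2 ** bits} tonal steps across the same range")
```

Quadrupling the bit depth here would change the step count, not the ten-or-so stops this hypothetical sensor can span.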

 
Seems to me that the manufacturers should have a mode that maps the
12 bit dynamic range to 8 bit. This would give needed dynamic
range and also virtually eliminate the need to process raw images
for dynamic range with two partial images.
I think that's a JPEG. If you think about it, you'll see that the 12-bit output has to be mapped to 8-bit in order for it to work at all.

The loss comes more from a) mapping sixteen 12-bit values to one 8-bit value; and b) mapping linear values to a non-linear output curve.
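A minimal sketch of that two-part mapping, using a plain power-law curve as a stand-in for whatever proprietary tone curve a real camera actually applies:

```python
def linear12_to_jpeg8(value_12bit: int, gamma: float = 2.2) -> int:
    """Map a 12-bit linear sensor value to an 8-bit output value
    through a simple power-law tone curve."""
    linear = value_12bit / 4095.0          # normalize to 0..1
    encoded = linear ** (1.0 / gamma)      # non-linear output curve
    return round(encoded * 255.0)

# Many distinct 12-bit inputs collapse onto each 8-bit output, but
# the collapse is uneven: the curve spends output codes on shadows
# and starves highlights, which is exactly where clipping hurts.
shadow_codes = {linear12_to_jpeg8(v) for v in range(0, 256)}
highlight_codes = {linear12_to_jpeg8(v) for v in range(3840, 4096)}
print(len(shadow_codes), "output codes for the darkest 256 inputs")
print(len(highlight_codes), "output codes for the brightest 256 inputs")
```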
 
Bit depth as you state is not really related to DR. Bit depth is
the ability to capture a certain number of points between the
brightest and darkest pixel.
Or to state it another way, a higher bit depth allows you to specify those points (and those in between) more precisely.
 
I think that the bit-depth conversation is interesting, but film and CCDs deal with light differently. I'm not sure about CMOS, as I have not studied it much.

I like the comments about slide versus print film, as I think that CCDs can be compared in that regard.

My experience with CCDs in astronomy is that they are very good in low light, so that makes them more like slide film.

They are, however, very sensitive to bright light; this is why there are anti-blooming and NABG (non-anti-blooming) chips.

When a well saturates on a blooming chip like the D70's, it can spill into adjacent pixels, causing nasty white blotches.

On a non-blooming chip, the wells simply stop when they are full.

If you think about this, that is a problem, because if your shadow areas are not done exposing, it would be difficult to balance out correctly.
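That spill behavior can be sketched as a toy one-dimensional simulation (the full-well figure and the half-overflow-per-neighbor rule are arbitrary illustrative choices, not measurements of any real chip):

```python
FULL_WELL = 1000  # hypothetical electron capacity of one pixel well

def expose(row_photons, anti_blooming=True, spill=0.5):
    """Toy 1-D exposure: every well clips at FULL_WELL. Without an
    anti-blooming gate, each neighbor receives a fraction of the
    overflow -- the white blotches around blown highlights."""
    wells = list(row_photons)
    if not anti_blooming:
        for i, photons in enumerate(row_photons):
            overflow = max(0, photons - FULL_WELL)
            for j in (i - 1, i + 1):
                if 0 <= j < len(wells):
                    wells[j] += overflow * spill
    return [min(w, FULL_WELL) for w in wells]

scene = [100, 100, 5000, 100, 100]        # one grossly blown pixel
print(expose(scene, anti_blooming=True))   # [100, 100, 1000, 100, 100]
print(expose(scene, anti_blooming=False))  # [100, 1000, 1000, 1000, 100]
```

With the gate, only the blown pixel clips; without it, the overflow floods the neighbors too.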

CCDs are really very limited due to their linear nature. Hence Fuji's effort with the S3 to add extra pixels that simulate a film-like non-linear exposure curve.

What I find rather funny is that the metering systems are not better engineered to meter the scene in such a way as to guide how the data is interpreted after it is collected on the CCD and translated into an image.

I.e., if I know that I need 11 stops between the brightest and darkest portions of the photo, then why not apply a non-linear curve that simulates film and translates the scene into the available bit depth, rather than saying you get a perfect exposure for the available range? It should at least be an option in future digitals.

Really silly in my mind that Fuji would feel the need for a special sensor to do this when it could likely be done in the metering system.

Just my 2 cents on the subject.

--
Thanks!
Mark
Latest Gallery
http://www.radphotos.net/newstuff/Page.html
Personal Favorites
http://www.radphotos.net/S2GAL/Page.html
 
The problem is not the metering system, and your suggestion below will not currently work, because the information (11 stops) is beyond what the sensor is able to capture. Imagine having in one view the sun and a black coal mine. Your eyes cannot see the detail in the sun (although it is there), nor can they see the black cat in the coal mine. The DR is beyond your vision, and no matter what your eye does to correct it, it cannot see both ends at the same time. The same is true for digital sensors (CCD, CMOS, or film for that matter), although their end points of visibility span a much shorter range than your eye's. No type of non-linear curve can reclaim what is not there.
What I find rather funny is that the metering systems are not
better engineered to meter the scene in such a way as to guide
how the data is interpreted after it is collected on the CCD
and translated into an image.

I.e., if I know that I need 11 stops between the brightest and
darkest portions of the photo, then why not apply a non-linear
curve that simulates film and translates the scene into the
available bit depth, rather than saying you get a perfect
exposure for the available range? It should at least be an
option in future digitals.
 
I.e., if I know that I need 11 stops between the brightest and
darkest portions of the photo, then why not apply a non-linear
curve that simulates film and translates the scene into the
available bit depth, rather than saying you get a perfect
exposure for the available range?
You answered your own question: "that is a problem because if your shadow areas are not done exposing it would be difficult to balance out correctly."

Or to put it another way, you can't map what you didn't record to start with. A good exposure stops just before a significant portion of the pixel wells saturate/clip. As such, you only have that period of time to get an exposure, and due to that, you only have as much "data" as your "darkest" wells can record.

So you can't map 11-stops worth of data if you didn't record it. (Well, you could, but you're going to comb your midtones doing it.)

Making the pixel more sensitive doesn't help, as the "bright" pixels just fill faster.
Really silly in my mind that Fuji would feel the need for a special
sensor to do this when it could likely be done in the metering
system.
Not silly at all. They're doing what you're talking about, but they need TWO separate measurements from sensors of different sensitivity to do it.
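A sketch of that two-measurement idea (the clip point, sensitivity ratio, and merge rule are invented for illustration; Fuji's actual S/R pixel combination is not public):

```python
CLIP = 4095  # 12-bit clip point for either reading

def merge_dual(high_sens: int, low_sens: int, ratio: int = 16) -> int:
    """Combine a sensitive reading (clean shadows, clips early) with
    one 'ratio' times less sensitive (holds highlights). Where the
    sensitive pixel clipped, substitute the scaled-up insensitive
    value, extending usable range by log2(ratio) = 4 stops here."""
    if high_sens < CLIP:
        return high_sens        # sensitive reading is still valid
    return low_sens * ratio     # reconstruct the blown highlight

print(merge_dual(high_sens=2000, low_sens=125))   # 2000: unclipped
print(merge_dual(high_sens=4095, low_sens=1024))  # 16384: recovered
```

The point is that the highlight value comes from a second physical measurement; no curve applied to the first, clipped reading alone could produce it.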
 
Ansel Adams and others produced much visual evidence re DR in film.
After he wrestled it to the ground. Ansel's work is not by any means the result of a "straight" shot with casual development and printing. He had to work hard to compress the vast tonal range of a natural scene into a very short range that he arbitrarily divided into 10 segments.

He didn't torture his materials any less than a digital photographer using a RAW image, C1, and Photoshop to their maximum potentials.

--
RDKirk
'There's nothing worse than a brilliant image of a fuzzy concept.' --Ansel Adams
 
RD,

My point is that we need to stop the electronic-fantasy approach to this subject and use a visual approach. What is blown out to one person may not be blown out to another. The film guys had no bit depths, etc., to talk about, and they produced visual (granted, tortured) evidence of their assertions. We would then see what electronically corresponds to blown-out highlights for most people. As they say, "a picture is worth a thousand words."

Stuart
 
But you're assuming the normal rules of exposure would apply.

If I exposed longer I could catch it all and translate.

Imagine that today’s rules did not apply for a minute...

I meter a scene and learn what its hottest and coolest points are.

Rules aside, I could expose both parts optimally and combine them, right? Isn't this what we sort of do when we combine two photos in PS?
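Per pixel, the two-shot combine amounts to something like this sketch (the three-stop bracket and the simple substitution rule are illustrative assumptions, not how Photoshop actually blends):

```python
def blend_exposures(short_exp, long_exp, stops_apart=3, clip=255):
    """Per-pixel blend of two frames of the same scene: keep the
    long exposure's clean shadows, and wherever it has clipped,
    substitute the short exposure scaled back up to match."""
    factor = 2 ** stops_apart
    merged = []
    for s, l in zip(short_exp, long_exp):
        if l < clip:
            merged.append(l)           # long exposure: good shadows
        else:
            merged.append(s * factor)  # short exposure saves the sky
    return merged

long_row = [40, 180, 255, 255]    # sky pixels have blown out
short_row = [5, 22, 60, 100]      # same pixels, 3 stops darker
print(blend_exposures(short_row, long_row))   # [40, 180, 480, 800]
```

Note the merged values exceed the 8-bit range; they would still have to be tone-mapped back down for output.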

I have thought about this more this afternoon, and a film curve is not the answer; it's a unique per-scene curve that is needed.

The problem with applying curves in software is that they are geared to affect the light as a whole. So if I am adjusting highlights, I am adjusting all highlights, whether they need it or not.

I love the SHO Pro plugin because it allows me to pull up shadows while leaving highlights alone. But I still pull all shadows up whether they need it or not; again, I am adjusting the whole of the dark end of my scene. BTW, I have effectively vastly improved my DR using this gem of a plugin.

Rules aside, what if there were a way to adjust only the areas that needed it? If you think about it, Nikon already has some intelligence in its metering, right? The claim is that they have thousands of scenes they compare against.

But what if we broke the DR up into channels, based off a metering CCD whose purpose was to look at my scene and break it up into a series of curves, like an equalizer?
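The equalizer idea could be sketched like this, with the tonal range cut into bands that each get their own gain (the band count and gain values are arbitrary illustrative choices):

```python
def zone_equalizer(pixels, gains=(2.0, 1.3, 1.0, 0.9), max_val=255):
    """Cut the tonal range into len(gains) equal bands and apply a
    separate gain to each, like sliders on an equalizer: shadows
    get lifted hard, highlights get pulled back slightly."""
    band_width = (max_val + 1) / len(gains)
    out = []
    for p in pixels:
        band = min(int(p // band_width), len(gains) - 1)
        out.append(min(round(p * gains[band]), max_val))
    return out

print(zone_equalizer([10, 70, 130, 240]))   # [20, 91, 130, 216]
```

Unlike a global curve, only the bands that "need it" are touched; a real implementation would also have to smooth the band boundaries to avoid banding.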

If you think about it, what Fuji is doing is in reality incorporating another CCD into the main imaging chip to do this same thing. It basically kicks in when the primary pixels get saturated and keeps track of the overflow so that the picture can later be normalized.

That has the advantage of getting the exact scene the chip records analyzed, whereas with metering you could move slightly by the time you click the shutter. But Fuji is not really leveraging this concept the way I described it could be done.

If information like this were stored in the RAW files, then an endless stream of processing rules could be applied to simulate an image with greater DR. You have to think hard about that to believe it, because you are relying on a metering chip, which could have an assortment of pixel types, to understand an even greater DR than the eye can see, and to use that analysis to control and interpret the exposure on a CCD with a lower DR and map that to an even lower bit depth. Is it a real representation of the scene? Nope! It would be an interpretive translation.

It's also not to say that the additional information could not be used to create a greater simulated bit depth, so you could go from 12 to 16, 24, 32, etc.

Fuji's method isn't silly, but maybe it could be done in a more cost-effective way, since their method uses valuable future pixel real estate to analyze the images. It's interesting to note, though, that in their solution it is the upper end of the dynamic range they are focusing on, which ought to tell us all something we already know: that's the problem end of the range.

Maybe this makes more sense now; maybe it's still not clear.

--
Thanks!
Mark
Latest Gallery
http://www.radphotos.net/newstuff/Page.html
Personal Favorites
http://www.radphotos.net/S2GAL/Page.html
 
So what ways do camera manufacturers have to approach the problem?

You could try a pickoff mirror, like the ones frequently used in astronomy that pick off part of the scene and direct it to another, smaller CCD used to guide images.

You could use a second CCD next to the primary, but that just won’t work well in this case.

You could do as Fuji has chosen, which is to deploy special pixels into the main imager CCD, but that may eventually stunt your ability to increase megapixels.

You could do it as part of the metering and catch the scene before the mirror flips up, maybe again after it flips down, and store it all, but your scene may have changed, especially with fast-moving subjects.

Another approach, of course, is to build pixels with more dynamic range.

Of course, yet another would be to take two pictures and combine them onboard, either with a stereo setup or in series, the latter having problems with motion again.

I think the motion problem, though, is more of an 80/20-rule case, where the DR is less of a problem, but it does happen outdoors with a bright sky.

How else do you want to solve it? Your thoughts would make interesting conversation.

--
Thanks!
Mark
Latest Gallery
http://www.radphotos.net/newstuff/Page.html
Personal Favorites
http://www.radphotos.net/S2GAL/Page.html
 
How about a better chip that records a larger DR?
 
