kwik
wrote:
Sorry to sidetrack the discussion, but do you happen to know what the
DR of the human eye is under various circumstances and how these
figures are derived?
It is human nature to reach first for the extreme answer, and that happens often in this area.
Sure: we can squint at the sun for a very brief moment with our eyelids nearly squeezed shut, putting up with the big multicoloured afterimages; and several hours later, we can hang around for ten minutes in the dark while our eyes adapt, and then pick out a faint planet in the night sky.
But this is our visual system's *capability*, not its dynamic range, IMO. A digital camera can also photograph the sun for a brief moment, and next it can make a long exposure in almost perfect darkness - and with zero adaptation time between. We don't consider that to be dynamic range; we consider it to be adjustability, because the two captures were not simultaneous.
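To put that distinction in concrete (if entirely hypothetical) numbers, here is a minimal Python sketch, assuming a sensor that records roughly a 4000:1 luminance ratio in a single exposure and shutter speeds from 1/8000 s to 30 s - illustrative figures only, not measurements of any real camera:

```python
import math

def stops(ratio):
    """Convert a luminance ratio to photographic stops (log base 2)."""
    return math.log2(ratio)

# Dynamic range: brightest vs darkest recordable in ONE capture.
single_exposure_ratio = 4000            # hypothetical sensor, ~12 stops
print(f"dynamic range   ~ {stops(single_exposure_ratio):.1f} stops")

# Adjustability: the window reachable across SEPARATE captures
# (e.g. 1/8000 s vs a 30 s exposure) is far wider, but the two
# captures are not simultaneous, so it is not dynamic range.
shortest, longest = 1 / 8000, 30        # hypothetical shutter speeds, seconds
print(f"exposure window ~ {stops(longest / shortest):.1f} stops of shift")
```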
The same is strictly true for our eyes and brain, but because our visual experiences are always inseparably grouped - cumulative - we think about something *seen* more or less at once, when really we have *scanned it for a while* [using many different and complex processes, including adaptation, edge detection, daring inference and sheer invention - for example, filling in our "blind spot" and reconciling binocular vision into a single impression. There's a lot we will report having seen, which experiment shows we simply haven't.]
Furthermore, we can't directly compare the subjective aspects of looking at a compressed-brightness representation (a print, say) with a full-brightness-range scene in reality, without also considering the effects of that reduced brightness range. If we could duplicate that for our eyes - a low-contrast filter - then we could look at the bright sun and the deep shadows all at the same time; but we can't. So that's not a fair comparison.
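To make the "compressed brightness" idea concrete, here is a minimal sketch of that kind of tone compression, assuming round-number ranges of about 1,000,000:1 for a sunlit scene and 100:1 for a reflective print; the numbers and the simple log curve are illustrative assumptions, not a model of any real printing process:

```python
import math

# Assumed, round-number ranges for illustration only.
SCENE_MIN, SCENE_MAX = 1.0, 1_000_000.0   # ~20 stops of scene luminance
PRINT_MIN, PRINT_MAX = 1.0, 100.0         # ~6.6 stops a print can reflect

def compress(luminance):
    """Map scene luminance into the print's range with a simple log tone curve."""
    t = math.log2(luminance / SCENE_MIN) / math.log2(SCENE_MAX / SCENE_MIN)
    return PRINT_MIN * (PRINT_MAX / PRINT_MIN) ** t

for lum in (1, 1_000, 1_000_000):         # deep shadow, midtone, highlight
    print(f"scene {lum:>9} -> print {compress(lum):6.1f}")
```

The whole twenty-stop scene gets squeezed into the print's narrow band - the "low contrast filter" effect described above, which our eyes have no equivalent for.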
So about all we can do is to look at the information the human eye can gather during a mere second or two (allowing "local scanning" adaptation but not "overall scene" adaptation), from a single extremely contrasty scene, and judge how well a picture from a digital camera, *viewed with a similar luminance range*, compares in information content when also viewed for the same period. Most monitors cannot do this, and (for sure) a print cannot, unless we subjectively and imaginatively "enter its world" in the viewing, which has no counterpart in viewing direct reality.
But there are specialist displays now that show something remarkably *like* looking at the world - a dazzling sun is dazzling to look at, deep shadows are dim and obscure - when we view this display in all the same flawed and problematic ways that we view reality. A brief impression is easily represented within our usual digital camera DR - with blown sky, filled shadows and all: after all, the sky is a mere dazzle when we see it peripherally as we look at the land; and while we look at the clouds instead, the land is a dark blur. All cameras can do somewhat better than that.
True, a longer, more exploratory and forensic examination will always reveal when something is a photograph and not a live scene, even in ideal viewing circumstances; but to expect everything from a photograph that we expect from a real scene is to miss the point of photography IMO - to stray into the realm of simulation, not representation.
RP