The Zone System versus digital cameras' dynamic range

Well explained. Thanks!

Does the Canon 20D use 8 bit or 16 bit?

I saw people use software to combine over exposed and under exposed pictures of the same scene together to gain the DR. Does Photoshop take care of this non-linear factor? Or does it simply pick the best parts and put them together?
Another take on this subject (last para is the kicker)

Some people think of digital imaging as solid state film. This
isn't the case. The following explanation by Bruce Lindbloom may
help you understand what's going on with the above technique...
(expose to the right)

For film based photography, the highlight end of the scale is
compressed by the shoulder portion of the D/log E curve. So as
brighter and brighter objects are photographed, the highlight
detail gets gradually compressed more and more until eventually the
film saturates. But up until that point, the highlight compression
progresses in a gradual fashion.

Solid state sensors in digital cameras behave very differently. As
light falls on a sensor, a charge either accumulates or dissipates
(depending on the sensor technology). Its response is well behaved
right up until the point of saturation, at which time it abruptly
stops. There is no forgiveness by gradually backing off, as was the
case with film.

Because of this difference, setting up the exposure using an 18%
gray card (as is typically done with film) does not work so well
with a digital camera. You will get better results if you set your
exposure such that the whitest white in the scene comes close to,
but not quite reaching, the full digital scale (255 for 8-bit
capture, 65535 for 16-bit capture). Base the exposure on the
highlight for a digital camera, and a mid-tone (e.g. 18% gray card)
for a film camera.
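
To make the "expose to the right" advice concrete, here is a rough Python sketch of the kind of check involved. The function name and the clipping threshold are made up for illustration, and real raw data would of course need to be decoded first:

import numpy as np

def highlight_headroom(pixels, bit_depth=8, clip_fraction=0.001):
    # pixels: array of integer values from the capture
    full_scale = 2 ** bit_depth - 1
    clipped = np.mean(pixels >= full_scale)  # fraction of pixels at full scale
    peak = int(pixels.max())
    if clipped > clip_fraction:
        return "clipped: %.2f%% of pixels at %d; reduce exposure" % (clipped * 100, full_scale)
    headroom = np.log2(full_scale / max(peak, 1))  # stops left before clipping
    return "peak %d of %d, about %.2f stops of headroom" % (peak, full_scale, headroom)

The idea is to increase exposure until the reported headroom approaches zero without the clipped fraction ever exceeding the threshold.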

--
Charlie
--
Allen
 
I'm not sure what you meant by "live". I assume it means an image
available immediately and/or continuously.

But if it exists, it's military, and we never know for sure what
they have, until (if) they decide to declassify it twenty years
later or so.
OR THE NEW YORK TIMES REVEALS THE "SECRETS" !!!
a live satellite image?
or is that just a Hollywood myth?

Ian

--

--
Thanks for reading .... JoePhoto

( Do You Ever STOP to THINK --- and FORGET to START Again ??? )
 
As interspersed below:
Does the Canon 20D use 8 bit or 16 bit?
It's not quite so simple as that. Although some photo editors output TIFF image files and work internally with a choice of 8 or 16 bits per colour channel (16 preferred when doing major exposure or colour adjustments, in order to avoid round-off errors and too-coarse steps in tonality, called banding), most modern cameras capture the data at 12- or 14-bit resolution. Some older cameras captured only 10 bits or even as little as 8 bits (very old). The Canon 20D/30D, in common with most DSLRs, captures 12 bits; almost all Sony and Fuji cameras (including compacts) capture 14 bits.

However, 14 bits isn't necessarily all that much better, given that even at minimum ISO sensitivity the least significant bits contain mostly random noise generated by the randomness of light photons striking the small photosites. The main advantage of any bits beyond 12 is that they allow the output from interpolation and filtering/sharpening functions to be a little smoother.

In-camera processed images are always output with 8 bits per colour channel, whether in JPEG compressed format or uncompressed TIFF format. That isn't all that bad for properly exposed images: because of the non-linear Tone Response Curve (TRC), the represented tonal range is greater than the roughly 8 stops that a linear 8-bit encoding (256 levels) could represent, and generally covers something like 10 or 11 stops, with the number of levels per distinguishable tone change to our eyes approximately equalized across the range. With proper attention to camera settings to maximize captured range, and to post processing to display this range, one can produce images with details in about 7 to 8 zones of the original scene for a compact camera and the full 9 zones for a DSLR, both using low sensitivity settings for the lowest noise/grain floor threshold.
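
If it helps to see the effect of the non-linear TRC in numbers, this little Python illustration compares how deep into the shadows one-stop steps remain distinguishable in 8 bits. A plain 1/2.2 power curve stands in here for a camera's real, unknown curve:

import numpy as np

stops = np.arange(0, 13)           # 0 to 12 stops below full scale
linear = 1.0 / (2.0 ** stops)
for s, v in zip(stops, linear):
    lin8 = int(round(v * 255))                   # linear 8-bit code
    gam8 = int(round((v ** (1 / 2.2)) * 255))    # gamma-encoded 8-bit code
    print("-%2d stops: linear code %3d, gamma code %3d" % (s, lin8, gam8))

Stored linearly, everything more than about 8 stops down collapses into codes 0 and 1; through the gamma-style curve, steps even 12 stops down still get distinct codes, which is the sense in which 8-bit TRC output can represent 10 or 11 stops.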
I saw people use software to combine over exposed and under
exposed pictures of the same scene together to gain the DR. Does
Photoshop take care of this non-linear factor? Or does it simply
pick the best parts and put them together?
Since Photoshop works in floating point for the High Dynamic Range (HDR) function, one would assume that it would apply a linearization function before doing the combining, then apply a non-linear function to produce the final expected JPEG/TIFF type of output. This would be a bit of a guess on the software's part, in that the true Tone Response Curve (TRC) of the camera is not generally known. That said, errors would be reasonably small if the same best-guess function is used to linearize the data and its inverse is used to produce the final viewable image.
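
For what it's worth, a minimal Python sketch of that guessed-linearization approach might look like this. The gamma of 2.2 and the hat-shaped weighting are assumptions for illustration, not Photoshop's actual method:

import numpy as np

GAMMA = 2.2  # guessed TRC; the camera's true curve is unknown

def to_linear(img8):
    return (img8 / 255.0) ** GAMMA

def from_linear(lin):
    # clips anything above full scale; a real HDR tool would tone-map instead
    return np.clip(lin, 0, 1) ** (1.0 / GAMMA) * 255.0

def merge_exposures(frames, ev_offsets):
    # frames: list of 8-bit arrays; ev_offsets: stops relative to the base frame
    acc = np.zeros(frames[0].shape, dtype=float)
    wsum = np.zeros_like(acc)
    for img, ev in zip(frames, ev_offsets):
        lin = to_linear(img.astype(float)) / (2.0 ** ev)  # normalize exposure
        w = 1.0 - np.abs(img / 255.0 - 0.5) * 2.0         # trust mid-tones most
        acc += w * lin
        wsum += w
    return from_linear(acc / np.maximum(wsum, 1e-6))

So it is less "pick the best parts" than a weighted average in linear light, with near-black and near-white pixels in each frame counting least.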

Hope this helps, GordonBGood
 
...the part of the zone system most applicable to the digital age:
previsualization.
Ansel is spinning in his grave.

Pre-visualization is redundant! It's VISUALIZATION. It's like saying pre-drinking water. :-D

--
Eric

Ernest Hemingway's writing reminds me of the farting of an old horse. - E.B. White
 
That's not quite true. You have to adopt something like the
approach that you'd take when shooting with slide film, by metering
for the highlights and controlling shadows in development rather
than the other way around, but that doesn't mean that you're in
some sense limited in the application of the system.
If anything, digital makes some aspects of the Zone system easier
because you're free to reprocess your raw file as many times as you
like. While it's still possible to make exposure mistakes, you're
protected against the equivalent of development mistakes. You can
also fine-tune development in a way that was completely impossible
with film. The biggest problem is that you have too much freedom,
so that there's a strong temptation to keep fiddling instead of
getting a good picture and moving on to the next one.
Point taken, but the fact of the matter is that what the sensor collects (RAW) is all you get. You can't change the exposure and alter the electricity hitting the sensor to make it more or less sensitive to various parts of the spectrum differently to achieve a visualization. The post-processing in ACR or Aperture or whatever is very similar to processing film based on exposure choices beforehand, but not quite. My point about it being like slide film isn't that it's similar in contrast---digital is way, way more forgiving---but that it is what you get when you take the photo, and that's it. It's up to you to make the best of it by exposing properly and knowing what you're doing in Photoshop (or whatever lesser software one might use).

--
Eric

Ernest Hemingway's writing reminds me of the farting of an old horse. - E.B. White
 
??

If you are a scientist, the translation of an exponential scale to a digital representation should be obvious. Either you map the linear function (wasteful of bits) or you map the log. Zones = logs, it is that easy.
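
In code form, assuming middle gray (18%) is placed on Zone V:

import math

def zone(luminance, middle_gray=0.18):
    # each zone is one stop, i.e. a doubling of luminance
    return 5 + math.log2(luminance / middle_gray)

print(zone(0.18))      # 5.0 -> Zone V (middle gray)
print(zone(0.18 * 2))  # 6.0 -> one stop brighter, Zone VI
print(zone(0.18 / 4))  # 3.0 -> two stops darker, Zone III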
--
Stephen M Schwartz
SeattleJew.blogspot.com
 
Sorry, you are wrong. Ansel worked very hard at seeing the scene AS IT WOULD BE PRINTED. That is previsualization.
--
Stephen M Schwartz
SeattleJew.blogspot.com
 
The best way to do it is this:
Take a white fluffy towel and set the camera on a tripod.

Meter for the towel, take your picture, then open up 1 stop, take another, open up another stop, etc.
Then go back to your original exposure, close down 1 stop, then another, etc.

When back at your PC, look for the first and last zone that have detail visible on your fluffy towel; this is your useful dynamic range.

Note the number of stops, but one caveat: remember digital cameras aren't set for mid gray (18%) but are nearer 12% (I think).
--
http://www.digitalcamera.netfirms.com http://www.pbase.com/mark_antony/root
 
The program looks really intriguing. However, I have never heard
of it. Do you use it as your sole program? How is it in handling
B&W - I am scanning a large number of old B&W prints and they are
hard to work with.
--
Don
Hi Don,

I don't use Lightzone as my sole program, although it is my main editor. At this time, it doesn't handle my camera's raw files (Sigma's x3f format). So I use SPP as a raw converter to produce a 16-bit TIFF. Following raw conversion, I do all my editing in Lightzone. But when I want to print, I have to take an exported TIFF into PSE3---Lightzone currently has a bug (actually a Java bug) in the print driver for my DesignJet 30. They're working on this and should have it fixed shortly.

It's great for B&W. I find it easy to do B&W conversions and manipulations. If your scanner is putting out good quality scans (good dynamic range without blowing out highlights), my guess is that you'll like the software, once you "get" the workflow.

See my following post with regards to the workflow in LZ.

--
Jim
 
JLK,

Actually Lightzone is what aroused my interest in the zone system,
especially because I tried to use it and had not much clue of what
was going on. But again, coming from the remote sensing field,
histograms and curves are very natural to me, which is probably why
I didn't quite get the ZS at first.
When I first tried Lightzone, I had the same reaction (especially coming from Photoshop---or as a scientist, from apps like ImageJ). Once you figure out how Lightzone works, you'll probably find it a better application for quickly getting to finished photos. There are several tutorials on Lightzone on the web, and I'd suggest that you follow through on a couple.

Lightzone is a non-destructive photo editor---changes you make to the image are captured as "adjustment layers" applied on top of the base photograph (see the sketch after this list). In practice this means:

1. All the LZ tools are equal in that they all create these layers (automatically) when you apply a tool to the image.

2. All tools can be used either globally or regionally, using the very powerful "region masking" tools in LZ. That means that local contrast adjustments are quite easy by bringing up a ZoneMapper, using the region selector to grab the area you want to adjust, and then doing the adjustment. The cool thing is that this is true for all tools (see 1)---so if you want to do a local white balance adjustment, or local sharpening or blurring---it's easy.

3. Most all of the tools can have a "blend mode" applied to them.

4. LZ works in a 16-bit linear color space, which means it's less prone to color shifting and other artifacts for various adjustments.

5. The editing workflow is "non-linear". Because all steps are non-destructive layers, you're free to go back at any time to change something you did earlier, or change the order of steps to achieve a different result.
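
A toy Python sketch of that model (not Lightzone's actual internals; the names are invented, and images and masks are assumed to be numpy arrays scaled 0..1):

import numpy as np

class EditStack:
    def __init__(self, base_image):
        self.base = base_image          # the untouched original
        self.layers = []                # (name, function, optional mask)

    def add(self, name, fn, mask=None):
        self.layers.append((name, fn, mask))

    def render(self):
        img = self.base.copy()
        for name, fn, mask in self.layers:
            out = fn(img)
            if mask is not None:        # regional: blend by mask (0..1)
                img = out * mask + img * (1 - mask)
            else:                       # global
                img = out
        return img

# hypothetical usage: a ZoneMapper-style brightening limited to a sky region
# stack = EditStack(photo)
# stack.add("zonemapper", lambda im: np.clip(im * 1.2, 0, 1), mask=sky_mask)
# result = stack.render()

Because the base image is never touched and each layer is just a stored operation, editing any layer or reordering the list and re-rendering gives the non-linear workflow described in point 5.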

It took me a few hours to "get" this Lightzone paradigm. Now that I have, I far prefer it to working in Photoshop. It lets me get to my visualized photograph much faster than I could in PS.

--
Jim
 
Mapping functions were not the problem. My question was how applicable the ZS is to digital in a conceptual manner, especially because, coming from the "digital" world instead of film, histograms seem to be more accurate than the ZS, so why bother? But now I see that they complement each other.

In fact, I wonder if for DSLRs the user is able to upload custom mapping functions, to optimize the dynamic range for different light conditions. Is it possible?
??

If you are a scientist, the translation of an exponential scale to
a digital representation should be obvious. Either you map the
linear function (wasteful of bits) or you map the log. Zones =
logs, it is that easy.
--
Stephen M Schwartz
SeattleJew.blogspot.com
 
Hi all,

I've been reading about the famed Zone System lately, and it is
mentioned many times that the original system had ten zones, but
that for certain types of film the number of zones is reduced, due
to the reduced dynamic range.
I learned B&W photography from Adams' Basic Photo books, which are still in print and available for under $15 ea. from Walmart.com. Buy and read them if you really want to understand his zone system.

The reason that the Ansel Adams zone system has 10 zones, each an f/stop apart, is that it was designed around a #2 grade of print paper, which could reproduce a 10-stop range of densities from a negative.

The 10-stop range of the #2 paper was a constant. In the AA Zone system the exposure and development of the print are also treated as constants. The f/stop range of the scene, and the development of the negative which controlled the density of the highlights, were the variables.

With testing one first finds the neg. development time which will reproduce a 10-stop range of brightness correctly on #2 paper. "Correct" is defined by Adams as zone 0 reproducing as pure black (max density of the print paper) and zone 9 as pure white, with zone 1 reproducing the deepest shadow detail and zone 8 the brightest highlight with perceptible texture. That neg. development time, whatever it turns out to be, is then defined as "normal" development.

Not all scenes have a 10-stop range of brightness so it was necessary in Adams' system to expand or contract it via the density range of the negative to fit the constant of the ten stop range of the #2 paper.

Here's the point where you need to let go of the zone = f/stop idea. The 10 zones for scene visualization are arbitrary print tonal values, not fixed f/stops.

A scene with bright sunlit highlights and deep shadows might measure 12 f-stops between the brightest and darkest tonal values you wish to reproduce as pure black and white on the print. The same scene in overcast light might measure only 8 stops difference in light intensity between the same shadow and highlight areas. But in both cases the scene will have 10 previsualization zones if you want solid black in the shadows and paper white in the brightest specular highlights. Note that the second scene might not look to the eye as if it had pure black and white tones, due to the flat lighting, but it could be manipulated to reproduce so on the print. That's the true essence and magic of the AA zone system and what he means by "previsualization": you don't accept the "normal" reproduction range of the scene lighting, you bend it to your creative vision with some systematic control over technique.

In the first case where the scene measures 12 stops it would be necessary to develop the negative less than "normal" so the brighter areas are low enough in density on the negative to fit the #2 paper. In the second it would be necessary to boost the density of the highlights of the negative to compensate for the flat lighting with more development to get pure black and white tones in the same areas on the #2 print.

The time needed to do both was determined via careful controlled testing. A zone system darkroom usually had a notebook or chart on the wall listing the -2 stop, -1 stop, normal, +1 stop, and +2 stop development times.
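
As a hypothetical version of that wall chart in Python (the labels follow the common N+/N- convention; the actual times come from each photographer's own film and developer tests):

DEV_CHART = {
    12: ("N-2", "reduced development; compresses a 12-stop scene to 10"),
    11: ("N-1", "slightly reduced development"),
    10: ("N",   "normal development"),
    9:  ("N+1", "extended development; expands a flat 9-stop scene"),
    8:  ("N+2", "strongly extended development"),
}

def development_for(scene_range_stops):
    label, note = DEV_CHART.get(scene_range_stops, ("?", "outside this chart"))
    return "%d-stop scene -> %s (%s)" % (scene_range_stops, label, note)

print(development_for(12))  # N-2: compress to fit #2 paper
print(development_for(8))   # N+2: expand to fit #2 paper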

cont....
 
continuation...

In "The Print" Adams teaches how to expose a print consistently with the minimum about of exposure needed to make the clearest part of the negative, the film base + chemical fog density (FB+F) the blackest tone that the paper can produce, but any densities above that are reproduced as a gray tone. Nailing the exposure so the darkest shadow detail reproduced at the densities just above FB+F was as critical to the success of a ZS shot as nailing the highlight exposure is with digital.

With digital the dynamic range of the sensor is the constant, just as the 10-stop range of the #2 paper was in Adams'. The differences are two-fold: 1) the sensor may not have the same 10-stop range which defined "normal" for #2 paper; and 2) there's no way to increase the range of light intensity which can be recorded when the image is captured, as there was with negative development.

With digital you must give the maximum exposure which will retain detail in print value 8, the lightest highlight detail. The simplest way I've found to do that is to toss a white terry towel into the scene and bracket until the detail is blown, then back off 1/3 stop.

When you capture the optimal highlight exposure, where the shadow detail falls is limited by the range of the sensor, which gets us to your question below....
I wonder then, how do you know the dynamic range of your digital
camera, then, in terms of f-stops? Is it specified somewhere in the
manuals, is there a "standard" or you have to determine it by
testing?
First you need a 1-degree spot meter and an outdoor scene with consistent lighting and at least a 10-stop range of brightness between pure white and shadows devoid of detail. Measure many different areas in the scene and take notes to record their f-stop readings. Then shoot a bracketed series to make sure you get a file with correctly exposed highlights. Take the file with the best exposed highlight detail, print it, and look at the shadow detail, or more importantly find the transition zone where the shadow detail disappears. Refer back to your notes to find the f/stop reading for that area, and for the brightest plain white area in the photo. The difference is your useable (i.e., what you can actually perceive on a print) dynamic range in f/stops. Now, knowing the DR of the camera, what can you do with the information?

Well, you'll know the difference between the f/stop used for the aperture to capture the file and the reading of the brightest highlight with detail, print value 8. It will be several f-stops different. Knowing that delta factor will allow you to base your exposure off a spot meter reading of a textured highlight rather than a gray card. You'd take the reading with the spot meter on the brightest highlight in the scene where you wish to retain texture. The meter will give you the f/stop which will make it gray. You then add the delta factor to adjust the exposure to compensate for the fact you actually want to reproduce it as print value 8. That will not automatically be 3 stops (i.e. zone 8 - zone 5)! It will depend entirely on the DR of the sensor.
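
The arithmetic is simple enough to sketch in Python (the +2.7 stop delta below is a made-up example; yours comes from your own testing):

def place_highlight(metered_shutter, delta_stops):
    # metered_shutter: shutter time (s) the spot meter suggests for the
    # highlight, which would render it middle gray; delta_stops: the tested
    # compensation that places it at print value 8 instead
    return metered_shutter * (2.0 ** delta_stops)  # more exposure = lighter tone

# e.g. meter says 1/500 s for the highlight; tested delta is +2.7 stops
print(place_highlight(1 / 500, 2.7))  # ~0.013 s, about 1/77 s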

Also, by knowing the practical DR of the camera you can meter the textured highlight and know from your test how many stops away from it the shadow detail will be lost. There's not anything you can do in the camera to expand the DR, but knowing in advance of shooting and chimping where in the scene detail will be lost can clue you in to the need to add fill flash, or to shoot on a tripod and take two separate exposures, one for the highlights and a second with more exposure, for post-processing blending. If you know the area where you want shadow detail is 3 stops below the range of a single exposure, you'll know to add at least three stops for the second exposure.

So to recap.

Zones in the AA system are desired values on the print, not one-f/stop increments of brightness in the scene. The stops = zones equivalence in Adams' system was due to the fact that #2 paper reproduced a 10-stop range.

You need to expose for optimum highlight detail, measure the scene, make a print and determine where the detail disappears to find the DR of your camera for practical (not scientific) purposes. There might be detail in the file or visible differences on screen at 100%, but if you can't see it on the print it's not being reproduced.

If you know the practical DR and have a spot meter you can base exposure off the brightest textured highlight then measure the shadows to know in advance where print value 1 shadow detail will be lost on the print. You can't increase the total range of scene reflectance captured with detail unless you stack separate bracketed exposures.

CG
 
First, it's pretty pointless to consider using the zone system if you are not using a hand-held 1-degree reflection spot meter to measure specific areas of the scene.

If you have a 1-degree meter a more practical way to determine the DR is to simply take readings and notes of every brightness value in a test scene. Then shoot a bracketed series (changing f-stop), but only to ensure that you get a file with properly exposed highlights with detail (i.e. print value 8).

Then

1) Compare the f/stop which produced the best exposed file with the meter reading for the print value 8 highlight. That difference is the "delta" factor to be used as exposure compensation for future exposures, based on reading the area of the scene you wish to place at print value 8.

2) Print the file and discover with your eyeballs where in the print the shadow detail disappears. Go back to your notes and find the f/stop reading for that area. The difference between it and the reading from the brightest highlight in the scene is your camera's real-world DR.

Zones are on the print; f/stops are differences in illumination...

"Zones" in the AA zone system are not one f/stop differences in scene illumination but rather an arbitrary division of the tonal range from pure black to white into 10 separate desired tonal values on a final print.

The convention at the time Adams and White developed the zone system was to develop the negative the same regardless of the illumination range of the scene and then pick the paper grade which matched the resulting density range of the negative after the fact. The same approach was later used with filter modified "polycontrast" papers.

The essence of the AA system is manipulating the film negative development, and the resulting negative highlight density, to make any brightness range match the 10-stop range of the #2 paper Adams (and Minor White) used as the constant, and knowing in advance what degree of over- or under-development was needed.

If the scene measured 12 stops difference between highlight and shadow it didn't have 12 zones; it had 2 stops more light than could be reproduced on #2 paper from a negative with "normal" (10-stop range) development. Lowering the neg development time had the effect of compressing the 12-stop brightness range into the 10-stop range of densities the #2 paper could reproduce. The correlation between the X-stop compression or expansion of the scene range and the negative development time needed to pull it off was determined with controlled testing. It's a bit difficult to grasp this if you've never used a spot meter to determine the range of a scene or gotten your hands wet in a darkroom.

If you actually own a copy of Adams' Basic Photo books, read the foreword and "description of terms" section in the front of The Negative. In my 1971 printing of the second edition Adams tells the reader not to use the word "zone" but to use "value" instead. I suspect that the change in terms was a result of Adams realizing that the coincidence of #2 paper reproducing 10 stops and there being 10 arbitrary zones for previsualizing tone was confusing people, but the publisher was too lazy or frugal to re-edit the books to conform with the new terminology. 35 years later it is still apparently confusing a lot of people whose understanding of the zone system is word of mouth. Buy the books; they are a good read even in the digital age and you'll understand how the system actually was designed to work.

So regardless of whether your camera is capable of recording 6, 8, 10, 12, or more f/stops, the photos it reproduces will all have 10 tonal values (i.e. print zones) between black and white if you use the system Adams invented for previsualization.

With digital you can determine the range of illumination with a spot meter just as Adams did, and expose for highlight detail based on the spot reading just as he based his on the shadow detail. But what you can't do with digital is change the DR of the sensor the way that development changed the DR of the negative. The best you can do is determine in advance where in the shadows detail will be lost.

CG
Try this. Using a tripod and a well lit wall and camera in manual
mode:

Take a picture, correctly metered.
Take another, one stop over exposed, then another 2 stops over, etc.,
until the picture is totally 'white' (255,255,255).
Now do the same in the under-exposed direction until 0,0,0.

Count the number of pictures from 0,0,0 to 255,255,255.

Voila! Your camera's dynamic range, in stops and zones. Notice where
the correctly metered picture falls. Use this to add exposure
compensation, after you learn how to previsualize and decide where
you want the metered object to fall.
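
The counting step can even be automated; a rough Python sketch, assuming the frames are loaded as 8-bit numpy arrays and noting that the near-black/near-white thresholds here are guesses:

import numpy as np

def usable_stops(frames, low=2, high=253):
    # frames: 8-bit arrays shot one stop apart, darkest to lightest
    def has_detail(img):
        m = img.mean()
        return low < m < high and img.std() > 1.0  # some tonal variation left
    return sum(has_detail(f) for f in frames)

Each frame that is neither solid black nor solid white counts as one stop of usable range.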

--
Appreciating the gifts you have been given is the blessing.
 
As follows:
With digital you can determine the range of illumination with a
spot meter just as Adams did, and expose for highlight detail based
on the spot reading just as he based his on the shadow detail. But
what you can't do with digital is change the DR of the sensor the
way that development changed the DR of the negative. The best you
can do is determine in advance where in the shadows detail will be
lost.
Very good explanation, Chuck! Yes, it's true that you can't expand a digital camera's maximum captured DR in f-stops, so there is indeed a finite limit to the amount of shadow detail that can be captured, even for the best exposed single captured image. However, given that correct exposure using a good DSLR at low ISO sensitivities, where sensor noise in the shadows isn't too much of a problem, can capture about 10 f-stops of scene DR, the problem in printing is usually how to compress this DR into the limited DR of prints.

Yes, Ansel Adams' ZS could represent 12 f-stops of scene DR on #2 paper, but that does not mean that there is a range of 12 f-stops in reflectance values between the whitest area on the paper and the area with the maximum density; it is more likely to be about half of that, or 6 f-stops. So that means that he effectively was compressing the captured scene DR into the DR of the paper. That is effectively what we are doing when we use "black point compensation" to compress the DR of our digital images into the actual range of densities of our prints.
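
Conceptually, black point compensation is just a rescale of the image's tonal range into the print's real density range. A rough Python sketch of the idea (real ICC black point compensation works in the profile connection space, and the paper limits below are invented examples):

import numpy as np

def black_point_compensate(img, paper_black=0.04, paper_white=0.95):
    # img: float array scaled 0..1; paper_black/white: the print's real limits
    lo, hi = img.min(), img.max()
    scaled = (img - lo) / max(hi - lo, 1e-6)   # normalize the image's range
    return paper_black + scaled * (paper_white - paper_black)

Without this, every image tone darker than the paper's deepest achievable black would simply clip to it.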

Yes, with digital we may not be able to quite capture as much shadow detail as we could with film using the ZS (ignoring the noise floor thresholds of film grain vs. digital noise), but the difference may be barely noticeable on prints given their reduced DR.

Regards, GordonBGood
 
In fact, I wonder if for DSLRs the user is able to upload custom
mapping functions, to optimize the dynamic range for different
light conditions. Is it possible?
You're talking about the ability to upload custom Tone Response Curves (TRCs), a capability only available for a few high-end Nikons and (I think) Canons. It isn't available on current consumer level cameras. Check the reviews on this site of higher end cameras to see which have this capability. However, these curves would only apply to in-camera processed images (JPEGs). With raw image capture formats, you are free to apply any TRC you desire when you post process the data to produce viewable/printable images.

Hope this helps, GordonBGood
 
If you know the practical DR and have a spot meter you can base
exposure off the brightest textured highlight then measure the
shadows to know in advance where print value 1 shadow detail will
be lost on the print. You can't increase the total range of scene
reflectance captured with detail unless you stack separate
bracketed exposures.
It gets a bit more involved, though, in digital color. The color of the lighting and the color of the subject affect the dynamic range. If you're shooting RAW, with most digital cameras, blue is only 1/4 the strength of red or green in the capture. This means that there is a 2-stop difference in sensitivity between blue, and red or green (or orange or yellow, etc). So, if your highlight is blue, and your shadow is yellow, you will have 4 stops more DR than you will if your highlight is yellow, and your shadow is blue.
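
The arithmetic behind that 4-stop figure, using the 1/4 channel strength from above (the exact ratio varies by camera):

import math

channel_strength = {"red": 1.0, "green": 1.0, "blue": 0.25}  # illustrative

for name, s in channel_strength.items():
    extra = math.log2(1.0 / s)  # extra stops before this channel clips
    print("%s: %.0f extra stop(s) of highlight headroom" % (name, extra))

So a blue highlight paired with a yellow shadow can span about 4 more stops than the reverse: roughly 2 extra stops of highlight headroom in the weak blue channel, plus roughly 2 extra stops of shadow signal in the strong red/green channels.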

If the shadows and highlights are unsaturated, the light color still has an effect. The native WB of most digital cameras is a purplish-magenta (red light most prominent, green least). That gives the widest, full-channel DR, and any deviation from it starts to make color channels more or less sensitive relative to each other for a greyscale subject. Deviations have their good and bad points; they limit the DR of full-color output, but increase the DR of B&W output, or of color images that can have greyscale extended highlights and even shadows (I'm not sure that any current RAW converters utilize the latter, though).

--
John

 
It gets a bit more involved, though, in digital color. The color
of the lighting and the color of the subject affect the dynamic
range.
The points you make are all good and illustrate the "hidden" variables. It also is why I think a practical test of DR by taking your camera and 1-degree spot meter out into the backyard and then making a print from what is captured for eyeball evaluation is far more useful to a photographer interested in such things than the DR shots of the Stouffer transmission scale now included in the reviews.

It's also worth pointing out to those who have never shot B&W film that B&W photographers were also very aware of the effect of color on the DR and tonal rendition of their subjects. The glowing luminosity of some of Adams' foliage photos was a result of using a green filter to lighten it more than it would normally be perceived. From the knowledge gained by reading Adams' books, back when I shot B&W I routinely used a yellow filter to compensate for the film's greater-than-eye sensitivity to blue, switching to orange or red when a darker sky was desired, such as in Adams' Yosemite photos. But one wouldn't want to use a red filter to darken the sky if there was a red fire truck in the foreground, because the truck would be rendered unnaturally lighter in tone by the filter. Some DSLRs have similar filter effects selectable in their B&W capture modes, but you can also do the same with Lab > RGB channel blending in PS.

Personally I don't see much point in trying to apply the classic zone system to digital.

Even when I shot B&W I used a modified version of it for use with roll film. I previsualized and spot metered the scene to determine its range vs. the standard 10-stop range of #2 paper, but developed all the negs "normal" to fit #2 regardless of the brightness range of the scene. I did that because I used polycontrast paper with a color head on my enlarger which allowed me to dial in a precise filter pack for any range of density on a negative. I determined the filter packs needed for any negative density range by making test prints of the same type of Stouffer scale Phil now uses for his camera DR tests.

After my roll film negs were developed to standard #2 conditions I made a carefully controlled contact sheet on #2 grade polycontrast paper exposed for max black at film base + fog. I already knew from my shooting notes what the range of each frame would be, but the standard "baseline" #2 contact sheet made it quite easy to visually confirm whether the exposure was correct for the shadow detail and whether the scene would fit #2 paper. Using a spot enlarging meter allowed me to separately read FB+F and the brightest detailed highlight I wanted at print value 8. Noting the difference between the highlight and shadow readings and consulting the graph made from my paper tests told me what filter pack was needed to exactly match the density range of the negative.

I use a similar approach with digital. I still think in terms of the classic B&W 10 print values when evaluating a scene, but I don't bother with spot readings. I just try to carefully expose for the highlights using highlight texture and the histogram as a guide, erring on the side of slight under exposure, which is easily correctable. In most photos I take, the important detail is in the highlights and midtones, so lack of detail down into the darkest shadows isn't something I'm typically concerned with. If I anticipate wanting more shadow detail than the natural lighting will produce, I use fill flash for the subject in the foreground when practical. If there is a situation where shadow detail is critical I'll bracket exposure and blend in Photoshop.

CG
 
