Adobe CS 2.0 now official - on their web site

  • Thread starter: Andrew Booth
Er, my question was why the photoaddict stated that CS2 requires a 64-bit operating system, which would be a bit of a problem for Windows users, since there is no 64-bit version of Windows.

But I think I answered this question myself by reading further. What he meant, I think, is that you need a 64-bit operating system if you have more than 2 gigs of RAM and want to address beyond 2 gigs. But only up to 3.5 gigs, which I agree is a foolish limitation: a 64-bit operating system should be able to address terabytes, shouldn't it?

About HDR: what you describe sounds like the Fred Miranda DRI Pro plugin I have been using for a while, except that it can only open two images and doesn't know about the new 32-bit floating point format.

Wayne Larmon
Let me quote Photofocus.com's explanation of the Beta. This is
the way I understand it to work. It simply saves you tons of
time if this is your thing - and it is mine!

"With digital, we started exposing for different parts of the scene
and then manually dragging each image into Photoshop, stacking them
and creating a series of complicated masks to hide or reveal parts
of the image that didn't fit in our final product.

Now, it's as easy as opening CS 2.0 and selecting FILE > AUTOMATE >
MERGE TO HDR. You select two or more images to merge and Photoshop
automatically finds the parts of each exposure and merges them
together.

This new tool allows you to achieve the ultimate in dynamic range
with automatic conversion of exposures to 32-Bit High Dynamic Range
(HDR) images. That's right, I said 32-bit! When Adobe gave me a
private demonstration of this feature in Beta, I don't think they
realized how important it was. This will be a real time saver.

Using HDR you can effectively get 12-14 stops of latitude into your
digital images and maintain incredible quality. You can now take
control of the full detail from the deepest shadows to the
brightest highlights, and everywhere in between. You can use Merge
to HDR to even create 32-bit HDR images from your current digital
camera, by automatically combining a series of regular exposures."

Get it?
 
This version will support 3.5G, the next will add an astounding 3.5G more, and then the next one another 3.5G. :-) Upgrade treadmill.

They could easily have said: OK, on 64-bit machines the maximum supported RAM is 1TB from now on. If they already have 64-bit address space support, all they'd have to do is remove the hardcoded limit, I'm sure.
 
Once a file is stored that includes areas below 0 and above "255," that is useful data to keep around. When you boost the brightness of shadow areas, you can reveal parts of what was below zero, and when you adjust highlight tones, you can reveal what had been above 255. Even if your final output was going to be an 8-bit JPEG, the extra dynamic range would still make a big, visible difference to your digital darkroom work.

Working with images that have a dynamic range beyond that of your monitor is no harder than working with images that have more pixels than your monitor. In both cases, the problem can be solved with a few extra sliders in the interface.
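
To make that concrete, here's a minimal Python sketch (hypothetical data and scaling, not Adobe's implementation) of how an exposure slider on floating point data can reveal detail that was clipped at "255":

    import numpy as np

    # Hypothetical HDR pixel values stored as floating point; values
    # below 0.0 and above 1.0 (i.e. beyond 0..255 once scaled) are
    # kept in the file instead of being clipped away.
    hdr = np.array([-0.2, 0.05, 0.5, 1.0, 2.5, 8.0], dtype=np.float32)

    def display_8bit(pixels, exposure_stops=0.0):
        # Shifting exposure by N stops multiplies linear values by
        # 2**N; only at display time do we clip to the 8-bit range.
        scaled = pixels * (2.0 ** exposure_stops)
        return np.clip(scaled * 255.0, 0, 255).astype(np.uint8)

    print(display_8bit(hdr))                     # highlights clip to 255
    print(display_8bit(hdr, exposure_stops=-3))  # pull down 3 stops and
                                                 # the "above 255" detail
                                                 # becomes visible

The clipping happens only at display time; the file itself never loses the out-of-range data.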

-jeremy
 
You are right that you can take two separate exposures, and
combine them, but that is VERY different from getting a larger
dynamic range, and has NOTHING to do with processing bit depth.
In Adobe's words "32-bit High Dynamic Range (HDR) support: Create and edit 32-bit images, and combine multiple exposures into a single, 32-bit image with expanded range—from the deepest shadows to the brightest highlights."

The HDR support is essential to the functionality of combining bracketed sets of images into a new HDR image with a much greater dynamic range. If you didn't have floating point values that could go below "0" and above "255" then you couldn't have the wider dynamic range.
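
For anyone curious how a merge like that can work in principle, here's a toy Python sketch (my own illustration, not Adobe's actual algorithm): divide each linear exposure by its shutter time to estimate scene radiance, then average, trusting mid-tones most and ignoring clipped pixels.

    import numpy as np

    def merge_to_hdr(ldr_images, exposure_times):
        num = np.zeros_like(ldr_images[0], dtype=np.float64)
        den = np.zeros_like(num)
        for img, t in zip(ldr_images, exposure_times):
            # Hat weighting: trust values near mid-gray, distrust
            # values near 0.0 (noise) or 1.0 (clipped).
            w = 1.0 - np.abs(2.0 * img - 1.0)
            num += w * (img / t)   # radiance estimate from this shot
            den += w
        return num / np.maximum(den, 1e-6)

    # Hypothetical bracketed set (values 0..1, assumed linear; real
    # files would need the camera's tone curve undone first):
    dark   = np.array([0.01, 0.10, 0.90])   # 1/4 s: shadows noisy
    mid    = np.array([0.04, 0.40, 1.00])   # 1 s:   highlight clipped
    bright = np.array([0.16, 1.00, 1.00])   # 4 s:   only shadows usable
    print(merge_to_hdr([dark, mid, bright], [0.25, 1.0, 4.0]))
    # -> [0.04, 0.4, 3.6]

Note the merged value of 3.6, well above the 1.0 "white point" of any single exposure; that is exactly the below-0/above-255 data the floating point format exists to keep.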

-jeremy
 
That's just precision. Dynamic range is applicable to physical phenomena, and to them only. When you take a picture, your sensor captures certain dynamic range, from the darkest to the lightest color in the image. When your picture is displayed on the screen, the screen has certain dynamic range, from the darkest to the brightest luminance it can display. Finally, when you print, your print has dynamic range, too.

The number of measurements between the endpoints (which we can call precision) is irrelevant. Even if you have a really bright monitor you only need one bit to exploit its full dynamic range. Same with the printout. 0 means pure black, 1 means pure white. There you go, full dynamic range.

HDR should really have been called HP - high precision. It brings huge advantages when you want to do some heavy post-processing without losing much detail. But the potential of this technology will only be fully unleashed when cameras themselves capture images using 96bit floating point triplets.
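
To put numbers on the distinction (the luminance figures here are made up for illustration): dynamic range is set by the endpoints, precision by the step count between them.

    import math

    # Hypothetical monitor going from 0.1 to 400 cd/m^2:
    stops = math.log2(400 / 0.1)   # ~12 stops of dynamic range
    levels = 2 ** 1                # a 1-bit signal: just 2 levels
    print(f"{stops:.1f} stops of range, {levels} levels of precision")

    # A 16-bit file has 65536 levels, but if 0 maps to that same black
    # and 65535 to that same white, the dynamic range is unchanged --
    # more precision, not more range.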
 
HDR should really have been called HP - high precision. It brings
huge advantages when you want to do some heavy post-processing
without losing much detail. But the potential of this technology
will only be fully unleashed when cameras themselves capture
images using 96bit floating point triplets.
Why? Isn't the output from Bayer demosaicking effectively infinite precision, depending on how you adjust the raw conversion parameters?

That aside, the benefits of this technology can easily be exploited with current cameras, if people put them on tripods where they belong and enable bracketing. Landscape images are typically the kind that need a bunch of stops, and landscapes are usually bracketable.

Honestly, reading about this has inspired me to dump the cruddy Sunpak tripod that I hate with a passion and get a half-decent Bogen tripod, so I'm more likely to be able to bracket my landscape images.

Wayne Larmon
 
No, it's very much finite (which you can see if you try to work on severely underexposed pictures - posterization is unavoidable). In the best case you get 2^12 levels per pixel before interpolation. Realistically, you can expect the lowest two bits to be nothing but noise, or even to be discarded by some cameras, which brings it down to 2^10 = 1024 levels. While that is still much more than our eye is able to discern, let's not forget that these measurements are linear and our eye is not, so you get ridiculously low precision at the dark end of the dynamic range, even with interpolation. 2^10 levels also limits the "near zero" end, because at some point pixel readings become just too small to be recorded.
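
To put numbers on it, here's a quick sketch of how badly linear encoding starves the shadows (10-bit example, purely illustrative):

    # In a 10-bit linear encoding (1024 levels), each stop down from
    # clipping gets half the codes of the stop above it.
    levels = 1024
    for stop in range(1, 11):
        top = levels >> (stop - 1)      # brightest code in this stop
        bottom = levels >> stop         # dimmest code in this stop
        print(f"stop {stop:2d}: {top - bottom:4d} levels")
    # stop 1 gets 512 levels; stop 10 gets exactly 1 -- posterization
    # in the shadows, right where the eye's nonlinearity wants more.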

It's not like FP can solve everything there. There are physical limits to what can be captured; sometimes there are just no electrons to reliably count at all. Before they do FP, I'd like to see at least 16- or 20-bit ADCs in cameras.
 
As of 4/5/05...

According to a post by Dave Cross on the new NAPP CS2 forum, "Not yet...as soon as we know, we'll let NAPP members know."

It seems odd this was not hammered out by Adobe/NAPP by the time CS2 was announced.

I believe there was a $30 NAPP member discount for the CS upgrade.

Let's hope: a $20-$30 break is still $20-$30.
 
You forgot the lens distortion fix and the rating capabilities.
http://www.adobe.com/products/creativesuite/main.html

"Image editing with Adobe Photoshop CS2

Revolutionary Vanishing Point
Achieve amazing results in a fraction of the time with the
groundbreaking Vanishing Point, which lets you clone, brush, and
paste elements that automatically match the perspective of any
image area.

Multiple layer control
Select and move, group, transform, and warp objects more
intuitively by clicking and dragging directly on the canvas. Easily
align objects with Smart Guides.

Smart Objects
Perform nondestructive scaling, rotating, and warping of raster and
vector graphics with Smart Objects. Even preserve the editability
of high-resolution vector data from Adobe Illustrator software.

Multi-image digital camera raw file processing
Accelerate your raw file workflow with simultaneous processing of
multiple images while you continue working. Import images into your
choice of formats, including Digital Negative (DNG); enjoy
automatic adjustments to exposure, shadows, and brightness and
contrast; and much more.

Image Warp
Easily create packaging mock-ups or other dimensional effects by
wrapping an image around any shape or stretching, curling, and
bending an image using Image Warp."
--
beam me up scotty

im giving it all shes got captain
 
Sorry, but I still don't think you understand the point of an HDR file.
That's just precision. Dynamic range is applicable to physical
phenomena, and to them only. When you take a picture, your sensor
captures certain dynamic range, from the darkest to the lightest
color in the image. When your picture is displayed on the screen,
the screen has certain dynamic range, from the darkest to the
brightest luminance it can display. Finally, when you print, your
print has dynamic range, too.
I agree with this, but the difference is that all previous image file formats (with the possible exception of RAW) are designed to map that dynamic range (typically that of the intended media) across the full range of the file data. For example, if a TIFF file has a pixel of #0, it will print black; if the pixel is #255, it will print nothing (white). This limits the potential dynamic range of the file format to about 5 stops. Clearly, you can squeeze however many stops you want into whatever number of bits you have (8 or 16), but displayed normally it would be a very flat, low-contrast image, and in the case of an 8-bit file, there would be coarse steps between gradations.
The number of measurements between the endpoints (which we can call
precision) is irrelevant. Even if you have a really bright
monitor you only need one bit to exploit its full dynamic range.
Same with the printout. 0 means pure black, 1 means pure white.
There you go, full dynamic range.
Yes, full dynamic range of your screen. But what if you captured something brighter than your screen can display? What number does that get? In a TIFF it will still get the number 1. You'll never see this thing brighter than 1. I'd like to record and store (but not necessarily display) a number 2! HDR does this (if I got it right).
HDR should really have been called HP - high precision. It brings
huge advantages when you want to do some heavy post-processing
without losing much detail. But the potential of this technology
will only be fully unleashed when cameras themselves capture
images using 96bit floating point triplets.
Don't pay too much attention to the bit depth or the floating point issue. 16 bits (which I would consider fairly high precision) are plenty for mapping a typical 5-stop image, even if you stretch out the contrast a bit (bringing up detail in the shadows, for example, assuming the details are within the 5-stop range). The big deal is that because there is so much more dynamic range in the world than a normally mapped TIFF can hold, we need a way to capture and store it all! Now, at least, we have a file format to store it.
 
That's just precision.
Precision is just whether you use 8 bits, 16 bits or 32 bits per channel. Switching to Floating Point and storing values vastly beyond the range of tones visible on your monitor or capturable by your camera is HDR.
Dynamic range is applicable to physical
phenomena, and to them only.
HDRI are digital files. Visit http://www.debevec.org for information on how HDRI are created and used on the computer.
When you take a picture, your sensor
captures certain dynamic range, from the darkest to the lightest
color in the image.
...but that low dynamic range image is just a starting point - you can bracket several shots at different shutter speeds, capturing usable detail that was lost to underexposure or overexposure in the LDR, then assemble the differently exposed set of images into one HDR file that has all of the information, from which all of the exposures, and many in between, could be recovered.
When your picture is displayed on the screen,
the screen has certain dynamic range, from the darkest to the
brightest luminance it can display.
...and that limitation is easily worked around in software, by providing users with sliders to choose which sub-range of the HDRI they view. Working with files of greater dynamic range than your monitor is no more of a physical impossibility than working with images of a higher resolution than your monitor.
The number of measurements between the endpoints (which we can call
precision) is irrelevant.
Right, that's just precision, and that's irrelevant. I'm surprised you even brought the issue up.
Even if you have a really bright
monitor you only need one bit to exploit its full dynamic range.
Same with the printout. 0 means pure black, 1 means pure white.
There you go, full dynamic range.
That's true - and in your hypothetical example, a two-bit file that could store a value of 2 would give you high dynamic range.
HDR should really have been called HP - high precision. It brings
huge advantages when you want to do some heavy post-processing
without losing much detail. But the potential of this technology
will only be fully unleashed when cameras themselves capture
images using 96bit floating point triplets.
Those would be advantages of a high precision file, but HDRI goes far beyond that by allowing you to recover and work with detail from far outside of the dynamic range possible in a single shot with your camera.

-jeremy
 
Thanks for that useful post, Mathius. Reading your post I saw that some of my later reply was redundant.

One small thing I'd add: you actually can store HDRI data in a floating point .tif file. TIFF is a pretty flexible format. Right now, when I try to open a floating point TIFF with HDRI in Photoshop CS, it just clips off the HDR info and rounds to 16-bit, but software that can read and write floating point TIFF files already includes Mental Ray and Shake, and I'd hope CS2 will as well.
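
Here's a minimal round-trip sketch, assuming the Python tifffile package (any writer that supports 32-bit float samples would do):

    import numpy as np
    import tifffile  # assumed package; writes/reads float32 TIFFs

    # TIFF can carry 32-bit float samples, so HDR values above 1.0
    # survive a round trip instead of being clipped.
    hdr = np.array([[0.02, 0.5, 7.5]], dtype=np.float32)  # 7.5 >> "white"
    tifffile.imwrite("hdr_test.tif", hdr)
    back = tifffile.imread("hdr_test.tif")
    assert np.array_equal(hdr, back)  # the above-1.0 highlight survives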

-jeremy
 
