New article on color management

Started Apr 26, 2013 | Discussions
Hugowolf
Forum Pro • Posts: 11,197
Re: Yeah...
In reply to gollywop, Apr 28, 2013

gollywop wrote:

Great Bustard wrote:

gollywop wrote:

Great Bustard wrote:

gollywop wrote:

Great Bustard wrote:

...this is something I've always wondered about.  Whenever you expand the range of colors, you necessarily decrease the distance between colors, for a given bit depth.  So, the question, then, is when the range of colors matters more than the gradations of color, and vice-versa, as a function of existing colorspaces.

The simple solution, of course, is to use 16-bits.

I take that as a given!

That does fine even with ProPhoto RGB.

So 16 bits per channel gives at least enough color separation in PPRGB?  Is the implication, then, that 16 bits is overkill for less expansive color spaces like sRGB?

Well, there's more to it than that – as you well know, there always is.  The real advantage of 16-bit images is the ability of the colors to survive tonal manipulations without posterization.  This would apply equally well to sRGB as to a broader space.  I doubt you can ever consider the potential of 16-bits overkill, but I suspect, even with sRGB you can consider the potential of 8-bits to be underkill.

Here's a nifty demonstration of the "survivability factor"

http://www.photoshopessentials.com/essentials/16-bit/

What I'm asking, I guess, is if 16 bit sRGB will ever have any practical advantage over a larger 16 bit colorspace due to the finer gradations of the colors it does represent.

Well, I have certainly encountered significant posterization in large expanses of blue skies under aggressive tone mapping when PP-ing an 8-bit sRGB jpeg.  This is not an uncommon experience.

And, unfortunately, once an image starts life as an 8-bit image, you don't gain a heck of a lot just converting to 16 bits; all those gradations in between don't suddenly get created.  If you're going to use 16-bits with sRGB, you want to shoot raw and process with 16-bits from the outset.
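A quick numeric sketch of that point (my illustration, not from the thread): scaling an 8-bit file up to 16 bits just spreads the same 256 levels across the wider range; no in-between gradations appear.

```python
# Converting an 8-bit ramp to 16 bits: the usual scaling is v * 257,
# which maps 0 -> 0 and 255 -> 65535 but creates no new levels.

levels_8bit = list(range(256))             # every value an 8-bit channel can hold
as_16bit = [v * 257 for v in levels_8bit]  # standard 8-to-16-bit scaling

print(len(set(as_16bit)))         # still only 256 distinct levels out of 65536
print(as_16bit[1] - as_16bit[0])  # adjacent levels remain 257 counts apart
```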

As to just what color differences humans can see, and how fine practical gradations can be, we humans are apparently much more sensitive to small shifts in blues than reds, and more in reds than greens. So the whole notion of JND (just noticeable difference) in color is very much wavelength related. Clearly going to 16-bits is a huge boost. It gives a 256-fold increase in the number of divisions over 8-bits, and none of the color spaces is anywhere near 256 times larger in linear dimension.

You can get an idea of all this from the MacAdam ellipses:

http://en.wikipedia.org/wiki/MacAdam_ellipse

but beware in assessing the diagram they give there that they have exaggerated the sizes of the ellipses ten fold.

Interesting as it may be, the MacAdam work doesn't address metameric differences: colors may be indistinguishable within their range, yet may produce visible differences when placed next to other colors.

And on the use of either or both 16-bit and a large space like ProPhoto RGB, surely the concern is not with final reproduction (print, screen, projection) but with the ability to manipulate without clipping during editing – working space as opposed to output space. There are many operations which, when performed in a space as restrictive as sRGB, can push colors out of gamut; had the image been in a more expansive space, subsequent operations could have landed those colors back in the expected output gamut.

And, although the spaces are based on integer values, processing is done with floating point numbers.

Brian A

gollywop
Veteran Member • Posts: 5,440
Re: Yeah...
In reply to Great Bustard, Apr 29, 2013

Great Bustard wrote:

gollywop wrote:

Great Bustard wrote:

What I'm asking, I guess, is if 16 bit sRGB will ever have any practical advantage over a larger 16 bit colorspace due to the finer gradations of the colors it does represent.

Well, I have certainly encountered significant posterization in large expanses of blue skies under aggressive tone mapping when PP-ing an 8-bit sRGB jpeg.  This is not an uncommon experience.

For sure.  But I'm talking about 16 bit files.

And, unfortunately, once an image starts life as an 8-bit image, you don't gain a heck of a lot just converting to 16 bits; all those gradations in between don't suddenly get created.  If you're going to use 16-bits with sRGB, you want to shoot raw and process with 16-bits from the outset.

That's what I mean.  Will sRGB ever have an advantage over aRGB or PPRGB due to its finer gradations under those circumstances?

As to just what color differences humans can see, and how fine practical gradations can be, we humans are apparently much more sensitive to small shifts in blues than reds, and more in reds than greens. So the whole notion of JND (just noticeable difference) in color is very much wavelength related. Clearly going to 16-bits is a huge boost. It gives a 256-fold increase in the number of divisions over 8-bits, and none of the color spaces is anywhere near 256 times larger in linear dimension.

You can get an idea of all this from the MacAdam ellipses:

http://en.wikipedia.org/wiki/MacAdam_ellipse

but beware in assessing the diagram they give there that they have exaggerated the sizes of the ellipses ten fold.

I'm fully aware of the advantages of 16 bits vs 8 bits.  What I'm wondering is if 16 bits representing a less expansive colorspace might not have advantages, on occasion, over 16 bits representing a larger colorspace, due to the finer gradations of the smaller colorspace.

If any advantage existed, it could only be for an image that lay completely within the sRGB gamut to begin with.  I don't recall ever having used a wider gamut (with 16-bits) on such an image that would have benefited from using sRGB.  However, one can speculate that, if any such advantage existed, it would occur if it were necessary to massively expand the tonal range during processing.  This might occur, for example, if one had shot a very high DR scene very dark to preserve highlights, with a major part of the image down a number of EV.  A large so-called exposure compression in such a case might benefit from having a 16-bit sRGB instead of a 16-bit something broader.
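To put a number on the massive-tonal-expansion case (my illustration, with made-up values): data shot 4 EV dark occupies only the bottom 1/16 of the 16-bit range, and pushing it back up opens gaps between the surviving levels — which is where finer per-step gradations would help.

```python
# Values captured 4 EV below full scale occupy 0..4095 in a 16-bit file.
# A +4 EV push multiplies by 16: the range fills, but only 4096 distinct
# levels survive, spaced 16 counts apart -- coarser gradations.

dark = list(range(4096))         # shadow data, 4 EV down
pushed = [v * 16 for v in dark]  # +4 EV "exposure" push

print(len(set(pushed)))       # 4096 distinct levels
print(pushed[1] - pushed[0])  # now 16 counts between adjacent levels
print(max(pushed))            # 65520, near full scale
```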


gollywop

gollywop
Veteran Member • Posts: 5,440
Re: Yeah...
In reply to Hugowolf, Apr 29, 2013

Hugowolf wrote:

gollywop wrote:

Great Bustard wrote:

gollywop wrote:

Great Bustard wrote:

gollywop wrote:

Great Bustard wrote:

...this is something I've always wondered about.  Whenever you expand the range of colors, you necessarily decrease the distance between colors, for a given bit depth.  So, the question, then, is when the range of colors matters more than the gradations of color, and vice-versa, as a function of existing colorspaces.

The simple solution, of course, is to use 16-bits.

I take that as a given!

That does fine even with ProPhoto RGB.

So 16 bits per channel gives at least enough color separation in PPRGB?  Is the implication, then, that 16 bits is overkill for less expansive color spaces like sRGB?

Well, there's more to it than that – as you well know, there always is.  The real advantage of 16-bit images is the ability of the colors to survive tonal manipulations without posterization.  This would apply equally well to sRGB as to a broader space.  I doubt you can ever consider the potential of 16-bits overkill, but I suspect, even with sRGB you can consider the potential of 8-bits to be underkill.

Here's a nifty demonstration of the "survivability factor"

http://www.photoshopessentials.com/essentials/16-bit/

What I'm asking, I guess, is if 16 bit sRGB will ever have any practical advantage over a larger 16 bit colorspace due to the finer gradations of the colors it does represent.

Well, I have certainly encountered significant posterization in large expanses of blue skies under aggressive tone mapping when PP-ing an 8-bit sRGB jpeg.  This is not an uncommon experience.

And, unfortunately, once an image starts life as an 8-bit image, you don't gain a heck of a lot just converting to 16 bits; all those gradations in between don't suddenly get created.  If you're going to use 16-bits with sRGB, you want to shoot raw and process with 16-bits from the outset.

As to just what color differences humans can see, and how fine practical gradations can be, we humans are apparently much more sensitive to small shifts in blues than reds, and more in reds than greens. So the whole notion of JND (just noticeable difference) in color is very much wavelength related. Clearly going to 16-bits is a huge boost. It gives a 256-fold increase in the number of divisions over 8-bits, and none of the color spaces is anywhere near 256 times larger in linear dimension.

You can get an idea of all this from the MacAdam ellipses:

http://en.wikipedia.org/wiki/MacAdam_ellipse

but beware in assessing the diagram they give there that they have exaggerated the sizes of the ellipses ten fold.

Interesting as it may be, the MacAdam work doesn't address metameric differences: colors may be indistinguishable within their range, yet may produce visible differences when placed next to other colors.

And on the use of either or both 16-bit and a large space like ProPhoto RGB, surely the concern is not with final reproduction (print, screen, projection) but with the ability to manipulate without clipping during editing – working space as opposed to output space. There are many operations which, when performed in a space as restrictive as sRGB, can push colors out of gamut; had the image been in a more expansive space, subsequent operations could have landed those colors back in the expected output gamut.

You are absolutely correct.  Although one can often counter problematic adjustments of one sort with another.

The same problem, of course, can plague wider spaces.  If the image as first read fills the histogram, there is always the danger of processing out of bounds regardless of bit depth.

And, although the spaces are based on integer values, processing is done with floating point numbers.

In processors like RPP, RawTherapee, and PhotoNinja, yes.  In ACR/PS, I don't think so.  Adobe still uses integer arithmetic as far as I know.  But I'd certainly be willing to be guided to documentation that says otherwise.


gollywop

Hugowolf
Forum Pro • Posts: 11,197
Re: Yeah...
In reply to gollywop, Apr 29, 2013

gollywop wrote:

And, although the spaces are based on integer values, processing is done with floating point numbers.

In processors like RPP, RawTherapee, and PhotoNinja, yes.  In ACR/PS, I don't think so.  Adobe still uses integer arithmetic as far as I know.  But I'd certainly be willing to be guided to documentation that says otherwise.

I would be surprised if it were otherwise. It is standard practice to map 0-255 to 0-1, it is just much easier to algorithmically process that way.

Brian A

gollywop
Veteran Member • Posts: 5,440
Re: Yeah...
In reply to Hugowolf, Apr 29, 2013

Hugowolf wrote:

gollywop wrote:

And, although the spaces are based on integer values, processing is done with floating point numbers.

In processors like RPP, RawTherapee, and PhotoNinja, yes.  In ACR/PS, I don't think so.  Adobe still uses integer arithmetic as far as I know.  But I'd certainly be willing to be guided to documentation that says otherwise.

I would be surprised if it were otherwise. It is standard practice to map 0-255 to 0-1, it is just much easier to algorithmically process that way.

Surprise is indeed fun. But I'm waiting for documentation that supports the surprise.

Meanwhile normalizations do not require floating point.

http://en.wikipedia.org/wiki/Fixed-point_arithmetic
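For what the fixed-point link amounts to in practice, here is a minimal sketch (mine, not anyone's actual pipeline): "normalized" 0–1 values stored as plain integers with an implied binary scale, so no floating-point unit is needed.

```python
# Q16 fixed point: the integer 65536 stands for 1.0, so fractional
# values are handled entirely with integer multiply and shift.

SCALE = 1 << 16  # 1.0 in Q16

def to_fixed(x8):
    """Map an 8-bit channel value (0..255) into Q16 [0, 1]."""
    return (x8 * SCALE) // 255

def fixed_mul(a, b):
    """Multiply two Q16 numbers, shifting the product back to Q16."""
    return (a * b) >> 16

half = to_fixed(128)                  # roughly 0.5 in Q16
print(fixed_mul(half, half) / SCALE)  # roughly 0.25 -- integer math throughout
```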

Hugowolf
Forum Pro • Posts: 11,197
Re: Yeah...
In reply to gollywop, Apr 29, 2013

gollywop wrote:

Hugowolf wrote:

gollywop wrote:

And, although the spaces are based on integer values, processing is done with floating point numbers.

In processors like RPP, RawTherapee, and PhotoNinja, yes.  In ACR/PS, I don't think so.  Adobe still uses integer arithmetic as far as I know.  But I'd certainly be willing to be guided to documentation that says otherwise.

I would be surprised if it were otherwise. It is standard practice to map 0-255 to 0-1, it is just much easier to algorithmically process that way.

Surprise is indeed fun. But I'm waiting for documentation that supports the surprise.

Meanwhile normalizations do not require floating point.

It just may well be the way I have always dealt with it. But I am pretty sure I have heard of others doing it this way, perhaps from Eric Chan.

For me it has to do with closure. Integers are not closed under division. (Addition, subtraction, and multiplication of integers result in integers, but not so with division.) So to be consistent, it would be silly to do some operations with integer arithmetic and make exceptions when division is needed.
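A two-line illustration of the closure point (my example): averaging two 8-bit channel values.

```python
# Integer division truncates: averaging adjacent levels silently drops
# the fractional half that float (or fixed-point) math would keep.

p, q = 100, 101
print((p + q) // 2)  # 100 -- the .5 is discarded
print((p + q) / 2)   # 100.5 with float division
```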

If I have time, I will look for sources.

Brian A

gollywop
Veteran Member • Posts: 5,440
Re: Yeah...
In reply to Hugowolf, Apr 29, 2013

Hugowolf wrote:

gollywop wrote:

Hugowolf wrote:

gollywop wrote:

And, although the spaces are based on integer values, processing is done with floating point numbers.

In processors like RPP, RawTherapee, and PhotoNinja, yes.  In ACR/PS, I don't think so.  Adobe still uses integer arithmetic as far as I know.  But I'd certainly be willing to be guided to documentation that says otherwise.

I would be surprised if it were otherwise. It is standard practice to map 0-255 to 0-1, it is just much easier to algorithmically process that way.

Surprise is indeed fun. But I'm waiting for documentation that supports the surprise.

Meanwhile normalizations do not require floating point.

It just may well be the way I have always dealt with it. But I am pretty sure I have heard of others doing it this way, perhaps from Eric Chan.

For me it has to do with closure. Integers are not closed under division.

Closure has nothing to do with it.  Floating-point numbers are not closed under division either (unless of infinite precision -- both mantissa and exponent).  I'm sure they are using normalizations, particularly for things like blends, but they're likely carrying out the operations using fixed-point arithmetic, not floating point.
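As a sketch of the integer-only normalization gollywop means for blends (my example, not Adobe's actual code): an alpha in 0–256 stands in for a 0–1 blend factor, so the whole blend runs in integer arithmetic.

```python
# Fixed-point alpha blend of two 8-bit values: alpha256/256 plays the
# role of a 0..1 float, with the +128 term giving round-to-nearest.

def blend(a, b, alpha256):
    return (a * alpha256 + b * (256 - alpha256) + 128) >> 8

print(blend(200, 100, 128))  # 50/50 mix of 200 and 100 -> 150
print(blend(200, 100, 256))  # alpha = 1.0 -> 200
```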

If I have time, I will look for sources.

Ok


gollywop

Hugowolf
Forum Pro • Posts: 11,197
Re: Yeah...
In reply to gollywop, Apr 29, 2013

gollywop wrote:

Hugowolf wrote:

gollywop wrote:

Hugowolf wrote:

gollywop wrote:

And, although the spaces are based on integer values, processing is done with floating point numbers.

In processors like RPP, RawTherapee, and PhotoNinja, yes.  In ACR/PS, I don't think so.  Adobe still uses integer arithmetic as far as I know.  But I'd certainly be willing to be guided to documentation that says otherwise.

I would be surprised if it were otherwise. It is standard practice to map 0-255 to 0-1, it is just much easier to algorithmically process that way.

Surprise is indeed fun. But I'm waiting for documentation that supports the surprise.

Meanwhile normalizations do not require floating point.

It just may well be the way I have always dealt with it. But I am pretty sure I have heard of others doing it this way, perhaps from Eric Chan.

For me it has to do with closure. Integers are not closed under division.

Closure has nothing to do with it.  Floating-point numbers are not closed under division (unless of infinite precision -- both mantissa and exponent).

No, but the errors involved are less spectacular than those of truncation.
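One way to see how "spectacular" truncation errors get (my toy example): apply an edit and its exact inverse repeatedly, under integer truncation versus float.

```python
# Darken by 10% and undo it, twenty times over. Integer truncation
# loses a level on the first round trip and never gets it back;
# float round-off stays far below one level.

x_int, x_float = 201, 201.0
for _ in range(20):
    x_int = (x_int * 9) // 10    # truncating darken
    x_int = (x_int * 10) // 9    # truncating undo
    x_float = (x_float * 0.9) / 0.9

print(x_int)              # 200 -- off by a full level
print(round(x_float, 6))  # 201.0 to six decimals
```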

Brian A

Hugowolf
Forum Pro • Posts: 11,197
Re: Yeah...
In reply to Hugowolf, Apr 29, 2013

Hugowolf wrote:

gollywop wrote:

Hugowolf wrote:

gollywop wrote:

Hugowolf wrote:

gollywop wrote:

And, although the spaces are based on integer values, processing is done with floating point numbers.

In processors like RPP, RawTherapee, and PhotoNinja, yes.  In ACR/PS, I don't think so.  Adobe still uses integer arithmetic as far as I know.  But I'd certainly be willing to be guided to documentation that says otherwise.

I would be surprised if it were otherwise. It is standard practice to map 0-255 to 0-1, it is just much easier to algorithmically process that way.

Surprise is indeed fun. But I'm waiting for documentation that supports the surprise.

Meanwhile normalizations do not require floating point.

It just may well be the way I have always dealt with it. But I am pretty sure I have heard of others doing it this way, perhaps from Eric Chan.

For me it has to do with closure. Integers are not closed under division.

Closure has nothing to do with it.  Floating-point numbers are not closed under division (unless of infinite precision -- both mantissa and exponent).

No, but the errors involved are less spectacular than those of truncation.

Honestly, this is not something I just came up with. I wish I was that bright. Even Java (yes even Java) has an RGB(float, float, float) object.

Brian A

gollywop
Veteran Member • Posts: 5,440
Re: Yeah...
In reply to Hugowolf, Apr 29, 2013

Hugowolf wrote:

gollywop wrote:

Hugowolf wrote:

gollywop wrote:

Hugowolf wrote:

gollywop wrote:

And, although the spaces are based on integer values, processing is done with floating point numbers.

In processors like RPP, RawTherapee, and PhotoNinja, yes.  In ACR/PS, I don't think so.  Adobe still uses integer arithmetic as far as I know.  But I'd certainly be willing to be guided to documentation that says otherwise.

I would be surprised if it were otherwise. It is standard practice to map 0-255 to 0-1, it is just much easier to algorithmically process that way.

Surprise is indeed fun. But I'm waiting for documentation that supports the surprise.

Meanwhile normalizations do not require floating point.

It just may well be the way I have always dealt with it. But I am pretty sure I have heard of others doing it this way, perhaps from Eric Chan.

For me it has to do with closure. Integers are not closed under division.

Closure has nothing to do with it.  Floating-point numbers are not closed under division (unless of infinite precision -- both mantissa and exponent).

No, but the errors involved are less spectacular than those of truncation.

And that is supposed to prove . . . ?

What you might think about, however, is how PS manages to have almost instant response with its sliders, whereas the raw processors that we know use floating point (RPP, RawTherapee, PhotoNinja, to name a few) all require a significant processing lag after making an adjustment.


gollywop

gollywop
Veteran Member • Posts: 5,440
Re: Yeah...
In reply to Hugowolf, Apr 29, 2013

Hugowolf wrote:

Hugowolf wrote:

gollywop wrote:

Hugowolf wrote:

gollywop wrote:

Hugowolf wrote:

gollywop wrote:

And, although the spaces are based on integer values, processing is done with floating point numbers.

In processors like RPP, RawTherapee, and PhotoNinja, yes.  In ACR/PS, I don't think so.  Adobe still uses integer arithmetic as far as I know.  But I'd certainly be willing to be guided to documentation that says otherwise.

I would be surprised if it were otherwise. It is standard practice to map 0-255 to 0-1, it is just much easier to algorithmically process that way.

Surprise is indeed fun. But I'm waiting for documentation that supports the surprise.

Meanwhile normalizations do not require floating point.

It just may well be the way I have always dealt with it. But I am pretty sure I have heard of others doing it this way, perhaps from Eric Chan.

For me it has to do with closure. Integers are not closed under division.

Closure has nothing to do with it.  Floating-point numbers are not closed under division (unless of infinite precision -- both mantissa and exponent).

No, but the errors involved are less spectacular than those of truncation.

Honestly, this is not something I just came up with. I wish I was that bright. Even Java (yes even Java) has an RGB(float, float, float) object.

Yep.  But PS began in 1988 and Java in 1995.  One heck of a lot happened in computer processing over that time, and PS had already locked itself into an older technology.  Note, however, that they have introduced 32-bit floating-point processing for the new HDR stuff.


gollywop

Detail Man
Forum Pro • Posts: 14,950
Re: Hope this may help a bit ...
In reply to gollywop, Apr 29, 2013

gollywop wrote:

Hugowolf wrote:

gollywop wrote:

And, although the spaces are based on integer values, processing is done with floating point numbers.

In processors like RPP, RawTherapee, and PhotoNinja, yes.  In ACR/PS, I don't think so.  Adobe still uses integer arithmetic as far as I know.  But I'd certainly be willing to be guided to documentation that says otherwise.

I would be surprised if it were otherwise. It is standard practice to map 0-255 to 0-1, it is just much easier to algorithmically process that way.

Surprise is indeed fun. But I'm waiting for documentation that supports the surprise.

Meanwhile normalizations do not require floating point.

http://en.wikipedia.org/wiki/Fixed-point_arithmetic

Regarding the 2010 processing, Ken is quite a knowledgeable chap:

kenw wrote:

I'm really curious to watch what RT and RPP start to be able to do with floating point processing and cameras like the K5 which have incredibly low read noise compared to something like ACR/LR that uses fixed point algorithms optimized for speed.

http://www.dpreview.com/forums/post/39144044

Regarding the 2012 processing, certain floating-point functionality (to some extent) exists:

... Lr 4.1 and ACR 7.1 have the ability to import and render floating-point HDR images.  Supported formats are TIFF and DNG.  (If you have HDR images in other formats like OpenEXR or Radiance, you can use Photoshop or other tools to convert them to TIFF.)  Supported bit depths are 16, 24, and 32 bits per channel.

If you're using Photoshop's Merge to HDR Pro feature to perform the "merge" step, be sure to choose 32-bit output in the top-right popup menu of the HDR Pro dialog box.  This will generate a floating point (but not yet tone-mapped) image, which you can then use ACR or LR to render and tone map using the new PV 2012 controls.

-Eric Chan, Reply #2 on: May 31, 2012, 09:44:14 AM

http://www.luminous-landscape.com/forum/index.php?topic=67528.0

Apparently, when Eric and the other engineers were working on the V4 PV2012 develop module sliders, they used 32-bit floating-point HDR images to make sure the sliders had sufficient range. Then they finalized the basic module. After releasing V4 they realized the power was there for handling 32-bit floating-point HDR files, so they incorporated it into V4.1.

-dmward, 23rd of September 2012, Post 15

http://photography-on-the.net/forum/showthread.php?t=1230468

Eric Chan has informed me that there are two image-processing pipelines in Lightroom: output-referred, and scene-referred. Raw files get the scene-referred pipeline. Integer TIFFs get the output-referred pipeline. Therefore, the TIFF test images are getting a different set of processing than LR applies to raw files.

http://blog.kasson.com/?m=yhnqcvxwewk

Great Bustard
Forum Pro • Posts: 22,413
OK, this is good stuff.
In reply to Hugowolf, Apr 29, 2013

Hugowolf wrote:

gollywop wrote:

Great Bustard wrote:

What I'm asking, I guess, is if 16 bit sRGB will ever have any practical advantage over a larger 16 bit colorspace due to the finer gradations of the colors it does represent.

Well, I have certainly encountered significant posterization in large expanses of blue skies under aggressive tone mapping when PP-ing an 8-bit sRGB jpeg.  This is not an uncommon experience.

And, unfortunately, once an image starts life as an 8-bit image, you don't gain a heck of a lot just converting to 16 bits; all those gradations in between don't suddenly get created.  If you're going to use 16-bits with sRGB, you want to shoot raw and process with 16-bits from the outset.

As to just what color differences humans can see, and how fine practical gradations can be, we humans are apparently much more sensitive to small shifts in blues than reds, and more in reds than greens. So the whole notion of JND (just noticeable difference) in color is very much wavelength related. Clearly going to 16-bits is a huge boost. It gives a 256-fold increase in the number of divisions over 8-bits, and none of the color spaces is anywhere near 256 times larger in linear dimension.

You can get an idea of all this from the MacAdam ellipses:

http://en.wikipedia.org/wiki/MacAdam_ellipse

but beware in assessing the diagram they give there that they have exaggerated the sizes of the ellipses ten fold.

Interesting as it may be, the MacAdam work doesn't address metameric differences: colors may be indistinguishable within their range, yet may produce visible differences when placed next to other colors.

That's fascinating.  You mean I can put two colors next to each other, not see a difference, but put those two colors next to a different color, and then notice a difference?  Trippy!

And on the use of either or both 16-bit and a large space like ProPhoto RGB, surely the concern is not with final reproduction (print, screen, projection) but with the ability to manipulate without clipping during editing – working space as opposed to output space. There are many operations which, when performed in a space as restrictive as sRGB, can push colors out of gamut; had the image been in a more expansive space, subsequent operations could have landed those colors back in the expected output gamut.

OK, so if you have an sRGB monitor, can you *accurately* edit the photo in PPRGB or aRGB?  Or might there be problems when converting from PPRGB to sRGB as gollywop hinted at?

In other words, it seems like you're saying there's an advantage to work in a wider colorspace, like PPRGB or aRGB, even if your final colorspace is sRGB.

Jack Hogan
Senior Member • Posts: 3,627
Ground control to Major Tom...
In reply to Hugowolf, Apr 29, 2013

Hugowolf wrote:  There are many operations which, when performed in a space as restrictive as sRGB, can push colors out of gamut; had the image been in a more expansive space, subsequent operations could have landed those colors back in the expected output gamut.

Hi Brian, I've been wondering about what I have highlighted above in your statement for a while.  In huge spaces like ProPhoto, rendering and adjustment operations can throw colors far and wide from the edge of the output gamut compared to their relative starting position in camera space.

When the time comes to land them back in the output gamut, a color lost in space doesn't remember where it came from, and (I assume) shoots for the bright star in the 'middle' of the small triangle  - the direction is just a guess.

Compare this to moving to the output space at the very start and then applying rendering/adjustments.  Chances for error would intuitively appear to be minimized in this case, no?

Jack

Hugowolf
Forum Pro • Posts: 11,197
Re: Ground control to Major Tom...
In reply to Jack Hogan, Apr 29, 2013

Jack Hogan wrote:

Hugowolf wrote:  There are many operations which, when performed in a space as restrictive as sRGB, can push colors out of gamut; had the image been in a more expansive space, subsequent operations could have landed those colors back in the expected output gamut.

Hi Brian, I've been wondering about what I have highlighted above in your statement for a while.  In huge spaces like ProPhoto, rendering and adjustment operations can throw colors far and wide from the edge of the output gamut compared to their relative starting position in camera space.

When the time comes to land them back in the output gamut, a color lost in space doesn't remember where it came from, and (I assume) shoots for the bright star in the 'middle' of the small triangle  - the direction is just a guess.

Compare this to moving to the output space at the very start and then applying rendering/adjustments.  Chances for error would intuitively appear to be minimized in this case, no?

No. Consider what happens when one of the triplet gets pushed to 0 or 255 – in other words, clipped. Once you have hit a boundary, that is it.
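Brian's point in code form (my sketch, with made-up gains): once an edit clips a channel at the boundary, the inverse edit cannot recover the original value.

```python
# Boost a channel, clamp to the 8-bit boundary, then apply the exact
# inverse gain. A clipped value has forgotten where it came from.

def boost(v, gain):
    return min(255, max(0, round(v * gain)))

print(boost(boost(200, 1.25), 0.8))  # 200: no clipping, fully reversible
print(boost(boost(240, 1.25), 0.8))  # 204: 240 * 1.25 clipped at 255, info gone
```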

Brian A

gollywop
Veteran Member • Posts: 5,440
Re: Ground control to Major Tom...
In reply to Hugowolf, Apr 29, 2013

Hugowolf wrote:

Jack Hogan wrote:

Hugowolf wrote:  There are many operations which, when performed in a space as restrictive as sRGB, can push colors out of gamut; had the image been in a more expansive space, subsequent operations could have landed those colors back in the expected output gamut.

Hi Brian, I've been wondering about what I have highlighted above in your statement for a while.  In huge spaces like ProPhoto, rendering and adjustment operations can throw colors far and wide from the edge of the output gamut compared to their relative starting position in camera space.

When the time comes to land them back in the output gamut, a color lost in space doesn't remember where it came from, and (I assume) shoots for the bright star in the 'middle' of the small triangle  - the direction is just a guess.

Compare this to moving to the output space at the very start and then applying rendering/adjustments.  Chances for error would intuitively appear to be minimized in this case, no?

No. Consider what happens when one of the triplet gets pushed to 0 or 255 – in other words, clipped. Once you have hit a boundary, that is it.

But that's not the point, Brian.  Jack is assuming he's starting with a properly exposed raw file, and, in the first instance, processing it, say, in sRGB so to tone map properly into the space, then taking it into the image processor (if needed) and making adjustments that don't push beyond clipping.

He's opposing this to processing in a wider space, making adjustments that seem ok within that space, but then having problems making any sensible conversion down to sRGB for, say, a final web-oriented image.


gollywop

gollywop
Veteran Member • Posts: 5,440
Re: Hope this may help a bit ...
In reply to Detail Man, Apr 29, 2013

Detail Man wrote:

gollywop wrote:

Hugowolf wrote:

gollywop wrote:

And, although the spaces are based on integer values, processing is done with floating point numbers.

In processors like RPP, RawTherapee, and PhotoNinja, yes.  In ACR/PS, I don't think so.  Adobe still uses integer arithmetic as far as I know.  But I'd certainly be willing to be guided to documentation that says otherwise.

I would be surprised if it were otherwise. It is standard practice to map 0-255 to 0-1, it is just much easier to algorithmically process that way.

Surprise is indeed fun. But I'm waiting for documentation that supports the surprise.

Meanwhile normalizations do not require floating point.

http://en.wikipedia.org/wiki/Fixed-point_arithmetic

Regarding the 2010 processing, Ken is quite a knowledgeable chap:

kenw wrote:

I'm really curious to watch what RT and RPP start to be able to do with floating point processing and cameras like the K5 which have incredibly low read noise compared to something like ACR/LR that uses fixed point algorithms optimized for speed.

http://www.dpreview.com/forums/post/39144044

Thanks, DM for that quote.  It is completely in line with my understanding about ACR/LR/PS that they used fixed-point calculations from the beginning to keep processing speed in check.  When they began writing PS, that was a real issue, and some platforms didn't even have FPUs in their basic models.  For a good period thereafter, the use of an FPU would have made the operation very slow, and there wouldn't have been anything like a reasonable response to the slider actions.

And, of course, once all that code became ensconced in the product, it would have been (and apparently still is) very difficult to change it.

The speed issue is still with us.  RPP and RT and PN are good examples of the kind of response one gets when using real floating-point processing.  People who are into their processing are willing to put up with the delays, but most people aren't, and I suspect Adobe is well aware of this attitude.

Regarding the 2012 processing, certain floating-point functionality (to some extent) exists:

... Lr 4.1 and ACR 7.1 have the ability to import and render floating-point HDR images.  Supported formats are TIFF and DNG.  (If you have HDR images in other formats like OpenEXR or Radiance, you can use Photoshop or other tools to convert them to TIFF.)  Supported bit depths are 16, 24, and 32 bits per channel.

If you're using Photoshop's Merge to HDR Pro feature to perform the "merge" step, be sure to choose 32-bit output in the top-right popup menu of the HDR Pro dialog box.  This will generate a floating point (but not yet tone-mapped) image, which you can then use ACR or LR to render and tone map using the new PV 2012 controls.

-Eric Chan, Reply #2 on: May 31, 2012, 09:44:14 AM

http://www.luminous-landscape.com/forum/index.php?topic=67528.0

Apparently, when Eric and the other engineers were working on the V4 PV2012 develop module sliders, they used 32-bit floating point HDR images to make sure there was sufficient slider capability. Then they finalized the basic module. After releasing V4 they realized the power was there for handling HDR 32-bit floating point files, so they incorporated it into V4.1.

-dmward, 23rd of September 2012, Post 15

http://photography-on-the.net/forum/showthread.php?t=1230468

Eric Chan has informed me that there are two image-processing pipelines in Lightroom: output-referred, and scene-referred. Raw files get the scene-referred pipeline. Integer TIFFs get the output-referred pipeline. Therefore, the TIFF test images are getting a different set of processing than LR applies to raw files.

http://blog.kasson.com/?m=yhnqcvxwewk

Yes, the HDR stuff was written de novo and has to use exponent-based numbers to deal with the necessarily wide range of DR stops that characterizes HDR.  There is no way around it.  It is also the case that many of PS's "normal" features are simply not available when working in the HDR mode, not until you have done a final tone-mapping into a regular psd image.
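The need for exponent-based numbers can be seen with a quick back-of-the-envelope calculation (rough normalized-float limits assumed; the exact figures don't matter):

```python
import math

# A 16-bit integer spans log2(65535) ~ 16 stops of linear range.
int16_stops = math.log2(65535)

# A 32-bit float spans roughly 3.4e38 down to 1.2e-38 via its exponent,
# on the order of 250 stops -- far more than any integer format.
float32_stops = math.log2(3.4e38 / 1.2e-38)

assert float32_stops > 10 * int16_stops
```

That headroom is why a merged-but-not-yet-tone-mapped HDR image has to live in floating point until it is mapped back into a regular integer working image.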


gollywop

Jack Hogan
Senior MemberPosts: 3,627Gear list
Color Space Color Gradations
In reply to Great Bustard, Apr 29, 2013

Great Bustard wrote:  What I'm wondering is if 16 bits representing a less expansive colorspace might not have advantages, on occasion, over 16 bits representing a larger colorspace, due to the finer gradations of the smaller colorspace.

You mean which color space makes better use of the available bits with histograms such as these (sRGB to the left, ProPhoto to the right)?

As far as I know, any issues that ProPhoto had with posterization went away when PS switched to 16 bits, so I would guess that if there is an advantage to the higher color resolution of the smaller space it is not noticeable in practice.
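A back-of-the-envelope sketch of the code spacing involved (assuming, purely for illustration, that ProPhoto spans roughly twice sRGB's per-channel range; real gamuts are three-dimensional and more complex):

```python
# Hypothetical simplification of per-channel range; not exact gamut math.
srgb_range, prophoto_range = 1.0, 2.0

step_8bit_srgb = srgb_range / 255       # spacing between adjacent 8-bit sRGB codes
step_16bit_pp = prophoto_range / 65535  # 16-bit ProPhoto: ~2x coarser than...
step_16bit_srgb = srgb_range / 65535    # ...16-bit sRGB, but both are very fine

# Even with the wider range, 16-bit ProPhoto steps are still ~128x finer
# than 8-bit sRGB steps, so the coarsening is invisible in practice.
assert step_16bit_pp < step_8bit_srgb / 100
```

Which fits Jack's observation: whatever advantage the smaller space has in gradation, at 16 bits both spacings are far below anything that shows up as posterization.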

Jack

crames
Regular MemberPosts: 192
Re: Camera-->sRGB versus Camera-->ProPhoto-->sRGB
In reply to Jack Hogan, Apr 29, 2013

Jack Hogan wrote:

Since in 99% of cases people are either printing or viewing in sRGB, it would be interesting to know whether it makes a practical difference in day to day use to work in sRGB from the very start versus using ProPhoto/other large color space to work in and convert to sRGB at the end of PP. Besides gamut, there are also other considerations when choosing one working (editing) color space over another.

Besides gamut, there are other things to consider when choosing a working/editing space.

Unlike sRGB, ProPhoto RGB was designed to minimize hue shifts that can occur from "common image manipulations such as tonescale modifications, color balance adjustments, sharpening, etc." (ref: Kodak's ROMM)

sRGB might not be the best color space in which to do white-balancing. Another reference here. (sRGB has the same primaries as ITU-R BT.709 compared in these references.)

I don't know if these make a big difference, but ProPhoto makes me feel better.

Cliff

gollywop
Veteran MemberPosts: 5,440
Re: Camera-->sRGB versus Camera-->ProPhoto-->sRGB
In reply to crames, Apr 29, 2013

crames wrote:

Jack Hogan wrote:

Since in 99% of cases people are either printing or viewing in sRGB, it would be interesting to know whether it makes a practical difference in day to day use to work in sRGB from the very start versus using ProPhoto/other large color space to work in and convert to sRGB at the end of PP. Besides gamut, there are also other considerations when choosing one working (editing) color space over another.

Besides gamut, there are other things to consider when choosing a working/editing space.

Unlike sRGB, ProPhoto RGB was designed to minimize hue shifts that can occur from "common image manipulations such as tonescale modifications, color balance adjustments, sharpening, etc." (ref: Kodak's ROMM)

Indeed. On the other hand, there can be even more significant color shifts after processing in ProPhoto RGB when you try to shoehorn the result into sRGB.  If it doesn't get you coming, it'll get you going.

sRGB might not be the best color space in which to do white-balancing. Another reference here. (sRGB has the same primaries as ITU-R BT.709 compared in these references.)

I'm not sure what you're talking about here, Cliff.  If you're shooting raw and doing white-balancing in ACR, the result is the same regardless of which working space you've chosen.  If you're talking about white-balancing a jpeg in PS -- well, there you're on your own.
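For what it's worth, the usual argument about white balance and color spaces is that channel gains behave differently on gamma-encoded versus linear data. A minimal sketch with a hypothetical red-channel gain (gamma 2.2 as a stand-in for sRGB's actual curve):

```python
# Hypothetical warm-up gain for the red channel; illustrative values only.
gain_r = 1.5
linear_r = 0.2

correct = (linear_r * gain_r) ** (1 / 2.2)  # gain applied to linear data, then encoded
wrong = (linear_r ** (1 / 2.2)) * gain_r    # gain applied to the encoded value

# Scaling the gamma-encoded value overshoots in the midtones, so the
# two results disagree -- which is why raw converters apply white
# balance on linear data before any working-space encoding.
assert wrong > correct
```

This is consistent with gollywop's point: since ACR white-balances the linear raw data, the choice of output working space doesn't enter into it.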


gollywop
