how many of you use aRGB?

Neither the raw colour data nor demosaicing is linear (in the same
sense that RGB is not perceptually linear) unless you resort to the
simplest of algorithms.

To see how complex the problem of colour restoration is, and how large
the error can sometimes be, you may want to look at sensor metamerism.
Indeed. Demosaicing is an educated guess about missing coordinates in color space based on "typical" properties of images. Metamerism differs for human vision spectral response functions and digital camera sensor spectral response functions, and so color space transforms are necessarily an educated guess as to what the human vision version is given the digital camera version. These don't seem to me to be particularly related inference problems, and so I'm curious as to the reasoning that leads to the conclusion that it's better to do one before the other. You are saying that it's better to do the color space transform first. Why?

--
emil
http://theory.uchicago.edu/~ejm/pix/20d/
 
To represent 14 bits coded in linear space one needs 15+ bits in gamma 2.2 space.
But that's not the issue. These holes you are talking about are in the shadows whereas the extra bits are needed to prevent data loss in the highlights. You are deliberately changing the subject here.

Oh, and BTW, ProPhoto isn't gamma 2.2, it's 1.8. 14 bits fits almost perfectly into 15 bits of gamma 1.8. That, conveniently, is what we have so there's no problem there. Of course, there is a bit more loss with gamma 2.2 which you advocate with BetaRGB.
Basing the conversion and demosaicing on Y, one does not create any holes at all. That is one of the reasons we use L*-based spaces.
If you convert linear encoding into nonlinear encoding you create holes somewhere. When you expand 14 bit linear to 15 bit, you create holes. It hurts nothing. This whole "holes" thing is a diversion.
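
Both effects are easy to count by brute force. A quick Python sketch (my own illustration, assuming a pure power-law 2.2 curve; real encodings such as sRGB add a linear toe, so the exact counts differ):

import numpy as np

# Push every 14-bit linear code through a gamma 2.2 encode into 15 bits,
# then count unused output codes ("holes") and merged inputs.
linear = np.arange(2 ** 14)                     # all 14-bit linear codes
enc = np.round((linear / 16383.0) ** (1 / 2.2) * 32767).astype(np.int64)

used = np.unique(enc)
print("unused 15-bit codes (holes):", 2 ** 15 - used.size)
print("14-bit inputs merged together:", linear.size - used.size)

gaps = np.diff(used)
print("widest hole:", gaps.max(), "codes, near output level", used[np.argmax(gaps)])
# The widest holes sit at the bottom of the scale (the shadows), while the
# merged inputs sit at the top (the highlights), where the curve flattens
# and adjacent linear codes start landing on the same output code.
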
You may want to broaden the circle of those you are listening to.
I would prefer to listen only to those who aren't intentionally misleading. I notice you don't advocate gamma 1.0 spaces. Why is that if you think these holes are so bad?
Really, as if the size does not matter. Especially when it is the size of the holes.
The size of the holes in ProPhoto is smaller because the granularity is worse. I am aware of your consistent anti-ProPhoto bent. This "hole" business is a deliberate distraction. I have yet to see any evidence from you that ProPhoto in 16-bit actually causes any kind of demonstrable image degradation. Instead, all you offer is FUD, some of it here.

It is amusing to me that you hate ProPhoto so much yet you advocate a remarkably large space in BetaRGB, a space that, because of gamma 2.2, has even larger holes and more data loss in the coding. Prove that using ProPhoto damages images where BetaRGB does not.
 
To represent 14 bits coded in linear space one needs 15+ bits in gamma 2.2 space.
But that's not the issue.
That's for sure, nothing is an issue for you :)

Get your facts straight, do not misinterpret what you fail to understand, and get some practical experience before feeding the forum with second-hand knowledge.
These holes you are talking about are in
the shadows whereas the extra bits are needed to prevent data loss in
the highlights
BS
Oh, and BTW, ProPhoto isn't gamma 2.2, it's 1.8. 14 bits fits almost
perfectly into 15 bits of gamma 1.8.
Then you should ask yourself why Adobe, even for their working space that is a linear clone of ProPhoto RGB, uses 20+ bits.
Of course, there is a bit more
loss with gamma 2.2 which you advocate with BetaRGB.
Once again you do not know what you are talking about.
If you convert linear encoding into nonlinear encoding you create
holes somewhere.
Same, lack of understanding.
I have yet to see any
evidence from you that ProPhoto in 16-bit actually causes any kind of
demonstrable image degradation.
The proof is right here, it is just that you skipped your homework. Try doing something practical instead of typing nonsense.

--
http://www.libraw.org/
 
You are saying that it's better
to do the color space transform first.
A primary colours transform is not the same as a colour space transform. The educated guess is easier (including keeping tables more compact) when you have alignment to a perceptually uniform grid and keep calculations in a perceptually uniform space. Decisions in demosaicing are based on the distance between colour coordinates of pixels, and you are better off scaling them so that this perceptually uniform measure works.
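
To make the point concrete, here is a toy sketch (my own illustration, not code from any converter, using CIE L* as the uniform measure; real demosaicers are far more elaborate). A gradient-driven rule picks the interpolation direction with the smaller colour difference, and the same pixels can vote differently depending on the encoding the difference is measured in:

import numpy as np

def lstar(y):
    # CIE lightness from relative luminance y in [0, 1].
    y = np.asarray(y, dtype=float)
    return float(np.where(y > 0.008856, 116 * np.cbrt(y) - 16, 903.3 * y))

# Neighbouring green samples around a missing pixel: a horizontal pair in
# the shadows, a vertical pair in the highlights (linear, normalised to 1).
h0, h1 = 0.010, 0.020   # shadows: tiny linear step, big visual step
v0, v1 = 0.80, 0.83     # highlights: bigger linear step, small visual step

lin_h, lin_v = abs(h0 - h1), abs(v0 - v1)
per_h, per_v = abs(lstar(h0) - lstar(h1)), abs(lstar(v0) - lstar(v1))

print("linear measure:     interpolate along", "H" if lin_h < lin_v else "V")
print("perceptual measure: interpolate along", "H" if per_h < per_v else "V")
# The linear measure calls the horizontal pair "closer"; the perceptually
# scaled measure says the opposite, so the two encodings give the
# interpolator different advice for the very same pixels.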

--
http://www.libraw.org/
 
What I meant was to assign the Adobe color space (I think it's called Nikon Adobe RGB) to the image. Isn't this what the camera does when you choose the different profile on the camera - it just assigns it? It doesn't affect the actual raw file numbers in any way, does it?

(I'm talking raw images here)
You want to "assign" the aRGB color space in capture, not "convert".
No, it does not work like that unless the colour filters in the camera
are based on Adobe RGB. But they are not. Nikon, for example, is very
close to NTSC - much closer than to anything else.

Generally the processing chain is like this: do white balance, apply
the primary colour and tone transform, do demosaicing, assign the
"camera profile", convert to the working colour space (sRGB, Adobe
RGB, NTSC, etc.).
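
If it helps, here is that chain sketched as runnable code; every helper below is a deliberately trivial stand-in of my own, not any real converter's internals:

import numpy as np

def develop(raw, wb_gains, cam_matrix, tone_curve, to_working):
    x = raw * wb_gains                             # 1. white balance
    x = np.einsum("ij,...j->...i", cam_matrix, x)  # 2. primary colour transform
    x = tone_curve(x)                              #    ...and tone transform
    x = demosaic(x)                                # 3. fill in missing CFA samples
    # 4. "assign camera profile" is metadata only; pixel values do not change
    return to_working(x)                           # 5. convert to the working space

demosaic = lambda x: x                 # stand-in: pretend the mosaic is full RGB
tone = lambda x: np.clip(x, 0, 1) ** (1 / 2.2)

out = develop(np.random.rand(4, 4, 3),             # fake "raw" data
              wb_gains=np.array([2.0, 1.0, 1.5]),
              cam_matrix=np.eye(3), tone_curve=tone,
              to_working=lambda x: x)              # working-space stub
print(out.shape)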

--
http://www.libraw.org/
 
Good to know. I use an input profiler (pictoColor InCamera) to create input device ICC profiles and I "assign" these to the applicable images in Capture. I guess this situation already inherently considers the camera color space.

Cheers.
Isn't this what the camera does
when you choose the different profile on the camera - it just assigns
it?
No, the raw data stays intact, but the converter (in-camera or Nikon
Capture) will convert the image from the camera colour space to Adobe RGB.

--
http://www.libraw.org/
 
I use an input profiler (pictoColor InCamera) to create
input device ICC profiles and I "assign" these to the applicable
images in Capture. I guess this situation already inherently
considers the camera color space.
That is not a simple question; the answer depends on the workflow - that is, is it Nikon Capture or Capture One, what is the exact sequence of actions, and which dialog do you use to "assign" the profile.

--
http://www.libraw.org/
 
It's Capture NX Adjust tab -> Color profile -> Apply profile.

The instructions for the tool say to assign the profile in Photoshop, which I have also done. But I tried doing it in Capture and the results appear the same. The results of applying these custom input profiles are dramatic, and they seem to produce the same results regardless of which tool I do it in. (At least visually the same.)
I use an input profiler (pictoColor InCamera) to create
input device ICC profiles and I "assign" these to the applicable
images in Capture. I guess this situation already inherently
considers the camera color space.
That is not a simple question; the answer depends on the workflow -
that is, is it Nikon Capture or Capture One, what is the exact
sequence of actions, and which dialog do you use to "assign" the profile.

--
http://www.libraw.org/
 
It's Capture NX Adjust tab -> Color profile -> Apply profile.

The instructions for the tool say to assign the profile in Photoshop,
which I have also done.
I'm not sure I agree with the workflow. Consider this.

Your output image for the profiling target was already converted to some output colour space when you got it from Capture. Now you create what can be loosely defined as a "correction profile", since your RGB data is already in some colour space. A correction profile works over the already converted data to adjust colours and tonality further. Such a profile is meant to be assigned at the stage of Photoshop input. That is, the workflow to test should be: leave Nikon Capture to output images in the same profile you used to output the image of the target (workflow consistency; that is, the converter should go through the exact same workflow with the target image as with the real images), ignore that embedded profile at the stage of Photoshop input, and assign your correction profile instead. What you are doing here is using Nikon Capture and the camera as one single input device providing you with TIFFs. You profile this combined device and assign the correct profile to its output.

You can look at it this way. Imagine Nikon Capture is in your camera (and it is there in a certain sense) and the camera output is JPEG. In this case you obviously drop the output profile that JPEGs have embedded in them and assign the new profile you just created.

--
http://www.libraw.org/
 
I can relate to the Capture NX as the camera analogy; I just tried this for the sake of seeing what it would do. For a few images I tried this and actually printed the results from both workflows; I couldn't see any difference.

In the workflow described by the tool vendor they talk about two situations: one is the Photoshop "assign profile" and the other is "your input device has already embedded the profile". Could Capture not be my input device (as in your analogy)?

http://www.pictocolor.com/UserGuides/inCamera40/UseProfileOpenImage.html#B
It's Capture NX Adjust tab -> Color profile -> Apply profile.

The instructions for the tool say to assign the profile in Photoshop,
which I have also done.
I'm not sure I agree with the workflow. Consider this.

Your output image for the profiling target was already converted to some
output colour space when you got it from Capture. Now you create what
can be loosely defined as a "correction profile", since your RGB data
is already in some colour space. A correction profile works over the
already converted data to adjust colours and tonality further. Such a
profile is meant to be assigned at the stage of Photoshop input. That
is, the workflow to test should be: leave Nikon Capture to output
images in the same profile you used to output the image of the target
(workflow consistency; that is, the converter should go through the
exact same workflow with the target image as with the real images),
ignore that embedded profile at the stage of Photoshop input, and
assign your correction profile instead. What you are doing here is
using Nikon Capture and the camera as one single input device providing
you with TIFFs. You profile this combined device and assign the
correct profile to its output.

You can look at it this way. Imagine Nikon Capture is in your camera
(and it is there in a certain sense) and the camera output is JPEG. In
this case you obviously drop the output profile that JPEGs have
embedded in them and assign the new profile you just created.

--
http://www.libraw.org/
 
I can relate to the Capture NX as the camera analogy; I just tried
this for the sake of seeing what it would do. For a few images I
tried this and actually printed the results from both workflows; I
couldn't see any difference.
Wonderful. You proved your workflow. That means you do indeed have the honest control in NX to assign a true output profile. (It still means it can be viewed as a part of the input device, but it also has the necessary degree of transparency to control it.) In this case you can rely on your workflow and re-test only when Nikon updates Capture.

--
http://www.libraw.org/
 
When you shoot RAW....it doesn't matter what color space you shoot in....it DOES matter what color space your converter is dialed in to use.

Now....why I shoot sRGB is because my rear monitor is an sRGB device....and gives me honest colors and exposure I can subjectively use out in the field to ensure I got my shot. If you use aRGB or something different, it will look flat on your camera screen.

Roman
--

'Our deepest fear is not that we are inadequate. Our deepest fear is that we are powerful beyond measure. It is our light, not our darkness that most frightens us. We ask ourselves, who are we to be brilliant, gorgeous, talented, fabulous.

Actually, who are we not to be?'

--Marianne Williamson

http://www.pbase.com/romansphotos/
 
You are saying that it's better
to do the color space transform first.
A primary colours transform is not the same as a colour space
transform. The educated guess is easier (including keeping tables more
compact) when you have alignment to a perceptually uniform grid and
keep calculations in a perceptually uniform space. Decisions in
demosaicing are based on the distance between colour coordinates of
pixels, and you are better off scaling them so that this perceptually
uniform measure works.
OK, thanks for the explanation :-)

--
emil
http://theory.uchicago.edu/~ejm/pix/20d/
 
I agree, I was thinking between posts that this is all based on exactly what "apply profile" in Capture is doing.
I can relate to the Capture NX as the camera analogy; I just tried
this for the sake of seeing what it would do. For a few images I
tried this and actually printed the results from both workflows; I
couldn't see any difference.
Wonderful. You proved your workflow. That means you do indeed have
the honest control in NX to assign a true output profile. (It still
means it can be viewed as a part of the input device, but it also has
the necessary degree of transparency to control it.) In this case you
can rely on your workflow and re-test only when Nikon updates Capture.

--
http://www.libraw.org/
 
Just curious, notwithstanding what we discussed here, when (or for what valid purpose) would anyone use "apply profile"? (In Capture NX)

Based on what I've read here, the image comes into Capture with an implicit camera profile based on the particular camera. Using "apply profile" is simply going to redefine the color based on nothing. (Again, notwithstanding a profile that was created specifically for that device/lighting conditions.)
I can relate to the Capture NX as the camera analogy; I just tried
this for the sake of seeing what it would do. For a few images I
tried this and actually printed the results from both workflows; I
couldn't see any difference.
Wonderful. You proved your workflow. That means you do indeed have
the honest control in NX to assign a true output profile. (It still
means it can be viewed as a part of the input device, but it also has
the necessary degree of transparency to control it.) In this case you
can rely on your workflow and re-test only when Nikon updates Capture.

--
http://www.libraw.org/
 
Just curious, notwithstanding what we discussed here, when (or for
what valid purpose) would anyone use "apply profile"?
(In Capture NX)
Well, one might want to use B&W or sepia profiles, or something like that. Generally I do not see any use for the assign command in a raw converter other than for a custom or generic profile. However, it depends on the implementation, and on how much one wants to use Photoshop after the conversion. In Photoshop we use false profile assignment in certain cases to address image problems like overall or shadow underexposure, and that sometimes is a very valuable addition to the manipulations a raw converter allows.

--
http://www.libraw.org/
 
To represent 14 bits coded in linear space one needs 15+ bits in gamma 2.2 space.
But that's not the issue.
That's for sure, nothing is an issue for you :)
Not true. I take issue with your deliberately disingenuous posts. You talk of "cheese holes" without explaining what they are, then switch the topic when challenged. Typical of your tactics.
Get your facts straight, do not misinterpret what you fail to
understand, and get some practical experience before feeding the forum
with second-hand knowledge.
Also typical of you to immediately respond with personal insults.
These holes you are talking about are in
the shadows whereas the extra bits are needed to prevent data loss in
the highlights
BS
Oh, and BTW, ProPhoto isn't gamma 2.2, it's 1.8. 14 bits fits almost
perfectly into 15 bits of gamma 1.8.
Then you should ask yourself why Adobe, even for their working space
that is a linear clone of ProPhoto RGB, uses 20+ bits.
Haha, you are a joke, Iliah. It is easily proven that 14-bit linear fits into 15 bits with a 1.8 gamma. You know it; you are just posturing. You and I both know why Adobe would use more bits. Extra bits preserve fractional values that would otherwise be truncated in integer arithmetic. You know, that same problem that drives you to conclude that you must use floating point?
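
Anyone can check that by brute force. A quick sketch (mine, with pure power curves assumed and no linear toe):

import numpy as np

def roundtrip_losses(in_bits, out_bits, gamma):
    # Encode every linear code into the gamma space, decode it back,
    # and count the codes that fail to survive the round trip.
    x = np.arange(2 ** in_bits)
    xmax, ymax = 2 ** in_bits - 1, 2 ** out_bits - 1
    y = np.round((x / xmax) ** (1.0 / gamma) * ymax)
    back = np.round((y / ymax) ** gamma * xmax).astype(int)
    return int(np.count_nonzero(back != x))

print("gamma 1.8, 14 -> 15 bits:", roundtrip_losses(14, 15, 1.8), "codes lost")
print("gamma 2.2, 14 -> 15 bits:", roundtrip_losses(14, 15, 2.2), "codes lost")
# Gamma 1.8 survives with zero losses; gamma 2.2 loses a handful of
# highlight codes, which is exactly the earlier "15+ bits" point.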

I like how you take everyone for idiots.
Of course, there is a bit more
loss with gamma 2.2 which you advocate with BetaRGB.
Once again you do not know what you are talking about.
There you go again. You are quite the debate specialist.
If you convert linear encoding into nonlinear encoding you create
holes somewhere.
Same, lack of understanding.
Then what are these "cheese holes" you speak of and how does your superior knowledge help them taste better?
I have yet to see any
evidence from you that ProPhoto in 16-bit actually causes any kind of
demonstrable image degradation.
The proof is right here, it is just that you skipped your homework.
Try doing something practical instead of typing nonsense.
But, by all means, don't you ever provide evidence to back up what you say. I'm not the one trying to suggest that ProPhoto has something wrong with it here. You are. Just exactly what is it about ProPhoto that is uniquely different and inherently worse than other spaces for the purpose of raw conversion? I'd like to know specific details and I'd like an explanation for how a color space like BetaRGB, one that you personally advocate, doesn't suffer the same problems.

Incidentally, you are completely wrong here:

"Imagine you have a lot of very small balls in a compact space. Converting to ProPhoto you just reposition the balls (original colours) so that they are not touching each other any more. That happens because you redistributed finite elements over the larger volume. "

The size of a ball is determined by bit depth, so when you increase gamut volume the balls can't remain the same unless bit depth is increased. If bit depth is increased so the balls remain the same size, you do not spread them out so they are no longer touching. Instead, all the balls remain in place and continue to touch while more balls are added outside the existing ones to represent the additional volume. In reality, bit depth will remain the same. The effect is coarser color granularity, not gaps and "holes in cheese". Gaps and holes appear in the shadow areas when linear data is converted to a gamma space.
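
The one-dimensional version is easy to demonstrate; a small sketch (my own illustration, linear stretch only - real gamut conversions are 3-D and nonlinear, where things are less tidy):

import numpy as np

bits = 8
small_gamut = np.linspace(0.0, 1.0, 2 ** bits)  # codes spanning a small range
large_gamut = np.linspace(0.0, 2.0, 2 ** bits)  # same code count, double range

print("step in the small gamut:", np.diff(small_gamut).max())  # ~0.0039
print("step in the large gamut:", np.diff(large_gamut).max())  # ~0.0078
# The spacing grows uniformly (coarser granularity everywhere), but no
# gaps open up between representable values. Gaps ("holes") come from
# nonlinear re-encoding, as in the shadow case above, not gamut size alone.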
 
I take issue with your deliberately disingenuous posts.
Do I care that you knew better than that? No. But some others who follow the thread might be interested to see this.

[Attached image] Target with a set of colours in Zone V captured; tints of red mark the colours found in raw, tints of green the same colours converted to ProPhoto.
Extra
bits preserve fractional values that would otherwise be truncated in
integer arithmetic
BS.

I will be waiting to see your converter using your methods, and let others compare its quality with other converters, mine included. That is the best proof of how good your understanding of the issues is.

--
http://www.libraw.org/
 
Typical response from you. No questions answered, a bunch of insults and posturing, and an obtuse piece of "data" without any context.

You are a master of BS, Iliah.
 
