JPEG and size of colorspace

Just Looking

I think I'm finally getting my head around what sg10 has been trying to say, and why we seem to be unable to communicate in the same language.

He attributes to JPEG the act of reducing from a 36-bit representation to a 24-bit (per pixel) representation, thereby reducing the "size of the color space" to 1/(2^12) of its original size (reducing by 4095/4096, as he puts it). This statement is equally true of converting to TIFF (with 8-bit R, G, and B channels), so it is not a property of JPEG compression itself, but of the conversion from raw data to a standard 24-bit rendered image format.
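Just to pin down the bookkeeping behind that 4095/4096 figure, here is a quick check in Python (nothing camera-specific, just counting representable values):

```python
# Representable per-pixel values at each bit depth.
raw_values = 2 ** 36       # 12 bits x 3 channels
rendered_values = 2 ** 24  # 8 bits x 3 channels

print(raw_values // rendered_values)     # 4096 == 2**12
print(1 - rendered_values / raw_values)  # ~0.999756, i.e. a 4095/4096 reduction
```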

Have I got it right so far, sg10?

There are some complications to this picture, though. For one, the raw data come in with a "nearly lossless" compression to about half of the original 36 bits per pixel (the 8 MB or 64 megabit raw file with 3.4 M RGB pixels is about 18 bits per pixel, not 36). Does that mean the final 24-bit space is actually an "expanded" space from the 18b raw?
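The bits-per-pixel figure above comes from arithmetic like this (the 8 MB and 3.4 million pixel locations are the same approximate numbers quoted above, not measurements):

```python
# Approximate bits per pixel in the compressed raw file.
file_bits = 8e6 * 8        # 8 MB is roughly 64 megabits
pixels = 3.4e6             # full-color pixel locations
print(file_bits / pixels)  # ~18.8 bits per pixel, versus 36 uncompressed
```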

For another thing, standard 8-bit values in an image file are not chosen from among 256 equally-spaced values. The value range is "gamma-encoded" to put more values near dark and fewer near the light end.
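Here is a small sketch of what that does to the spacing of the 256 values; it uses the standard sRGB transfer curve as a stand-in, since we don't know the exact curve the camera software applies:

```python
# How gamma encoding allocates 8-bit codes: many codes for the darkest
# linear values, very few for the brightest.
def srgb_encode(linear):
    """Standard sRGB transfer curve, linear light 0..1 -> encoded 0..1."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

dark = 255 * srgb_encode(0.01)                         # codes covering the darkest 1% of linear light
bright = 255 * (srgb_encode(1.0) - srgb_encode(0.99))  # codes covering the brightest 1%
print(round(dark), round(bright))                      # roughly 25 vs. 1
```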

So, the relationship between the "size" of the color space and the number of points that can be represented, or that are uniquely represented in the image, is nothing like the simple picture that sg10 is basing his arguments on.

I'd prefer to think of the "size" of a color space as something like a volume in Lab space. Going from a raw file with good measurements on the original subject, or scene, to a file representing a reproduction, or photograph, involves "rendering" into a color space defined by at least a color triangle, a white point, and a maximum brightness. Scene colors that are outside the gamut of the space, either by having a chromaticity outside the color triangle or a brightness outside the brightness range, need to be "clipped" or otherwise mapped into the space of colors that can be represented. Part of that operation includes the nonlinear gamma mapping and quantization to a desired number of output bits. That's what PhotoPro does to get to a TIFF or JPEG. With JPEG, the image is further compressed using techniques that others have discussed better than I can.
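To make the order of operations concrete, here is a deliberately oversimplified sketch of that rendering step; real converters do far more (white point, gamut mapping, tone curves), and the gamma value here is just a generic placeholder:

```python
# Oversimplified rendering: clip out-of-range values, gamma-encode, quantize to 8 bits.
import numpy as np

def render_to_8bit(linear_rgb, gamma=2.2):
    clipped = np.clip(linear_rgb, 0.0, 1.0)          # map out-of-range brightness into range
    encoded = clipped ** (1.0 / gamma)               # nonlinear gamma encoding
    return np.round(encoded * 255).astype(np.uint8)  # quantize each channel to 8 bits

scene = np.array([[0.002, 0.5, 1.7]])   # one pixel; 1.7 is brighter than the space allows
print(render_to_8bit(scene))            # [[ 15 186 255]]
```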

Within the volume of the colorspace, the encoding, whether TIFF or JPEG, 8 or 16 bit, determines the accuracy with which individual pixels (for TIFF) or blocks of pixels (for JPEG) are represented. When the accuracy is reduced, by encoding to fewer bits, then fewer unique colors will be found in an image, as sg10 keeps telling us. But can one see the difference? Sometimes yes, sometimes no. I don't think anyone can show us a difference between an 8b TIFF and a 16b TIFF, since the 16b has to be converted to 8b to be put on a screen or sent to a printer, in most cases. When JPEG cuts the file size by a factor of 10 and the number of unique colors by a factor of 4 or more, can you see the difference relative to an original TIFF? Not usually, though sometimes with careful inspection some differences may be findable.
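If anyone wants to repeat the unique-color count on their own files, something like this works (the file names are placeholders; it needs Pillow and NumPy):

```python
# Count the distinct RGB triples actually present in an image.
from PIL import Image
import numpy as np

def unique_colors(path):
    pixels = np.asarray(Image.open(path).convert("RGB")).reshape(-1, 3)
    return len(np.unique(pixels, axis=0))

print(unique_colors("original.tif"))    # hypothetical 8-bit TIFF from the converter
print(unique_colors("compressed.jpg"))  # the same image after JPEG compression
```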

I'm sure someone has done serious experiments to see at what JPEG compression level viewers' preferences begin to shift significantly, but I don't have data handy. It's certainly not as bad as sg10 keeps saying. On the other hand, I think some people are blind to compression artifacts, and post things whose JPEG quality is really too low. Most have found happy mediums by now.

As to whether compression can cause moire, I don't want to take sides. I bet there are some compression techniques that will do so, and it would be hard to prove that JPEG will not. But if someone is going to push such a concept, then a pair of compressed and uncompressed images to illustrate the effect would certainly be required before any credibility would attach.

j
 
I think I'm finally getting my head around what sg10 has been
trying to say, and why we seem to be unable to communicate in the
same language.
I don't think it's a language problem...
There are some complications to this picture, though. For one, the
raw data come in with a "nearly lossless" compression to about half
of the original 36 bits per pixel (the 8 MB or 64 megabit raw file
with 3.4 M RGB pixels is about 18 bits per pixel, not 36). Does
that mean the final 24-bit space is actually an "expanded" space
from the 18b raw?
I don't know about the SD9, but for all other digital cameras I know of, RAW compression is lossless.

The issue that still has not been answered is what colorspace the RAW file is in. It's almost certainly not sRGB. Until this is answered, it doesn't make much sense to talk about where the extra detail is hidden.
As to whether compression can cause moire, I don't want to take
sides. I bet there are some compression techniques that will do
so, and it would be hard to prove that JPEG will not. But if
someone is going to push such a concept, then a pair of compressed
and uncompressed images to illustrate the effect would certainly be
required before any credibility would attach.
There are plenty of ways you can mutilate an image with compression. I don't think anybody disputes this. Even if one finds an example where the compression artifacts work out just right to create a moire-like effect, this does not explain away the existence of moire on sensors without AA filters.

The fundamental problem seems to be that sg10 does not accept that sampling theory applies to the SD9. He thinks it's a magical device that defies known math and science. He is therefore grasping at straws to explain what is obvious to everybody else.

--
Ron Parr
FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
He's an ignorant nasty troll..

Best to ignore him.

Personally, I've put in my troll complaint to Phil and encourage others to do likewise. I don't mind someone with a true counterpoint, in fact I encourage that type of good discussion. But sg10's proven what he really is, and that's nothing to do with good discussion.

--Steve
 
Amazing that a guy that said "is that there are STILL people who think they can control what gets posted on a public forum..

if you don't like it, don't read it, move along.

--Steve"
would say this, don't you think?
He's an ignorant nasty troll..

Best to ignore him.

Personally, I've put in my troll complaint to Phil and encourage
others to do likewise. I don't mind someone with a true
counterpoint, in fact I encourage that type of good discussion.
But sg10's proven what he really is, and that's nothing to do with
good discussion.

--Steve
 
He's an ignorant nasty troll..
I'm not so sure. I've known lots of people with strongly held but logically untenable positions who get hung up in insecurity and try desperately to be taken seriously. This guy is doing it online, with just enough knowledge to make up numerological arguments. He sounds sometimes like a serious SD9 lover and booster, but his technical-sounding arguments are so bogus they do more harm than good. Is he clever enough to be purposely trying to tarnish others' opinion of SD9 users this way? I don't think so. Is he trolling in other forums and causing a ruckus there? Don't know, I haven't looked. If so, then maybe you're right, he's just a troll.
Best to ignore him.
probably true!
 
Could you send me a mail? Need to contact you off forum...
He's an ignorant nasty troll..

Best to ignore him.

Personally, I've put in my troll complaint to Phil and encourage
others to do likewise. I don't mind someone with a true
counterpoint, in fact I encourage that type of good discussion.
But sg10's proven what he really is, and that's nothing to do with
good discussion.

--Steve
--
Regards from Old Europe,

Dominic

http://www.pbase.com/sigmasd9/dominic_gross
 
I don't know about the SD9, but for all other digital cameras I
know of, RAW compression is lossless.
see http://www.x3f.info/sd9/v1_1/english.html#Format
The issue that still has not been answered is what colorspace the
RAW file is in. It's almost certainly not sRGB. Until this is
answered, it doesn't make much sense to talk about where the extra
detail is hidden.
not sure what your point is, Ron. Raw data doesn't generally conform to a colorspace, or at least not any standardized colorspace. It includes all the effects of the camera's sensors, like spectral sensitivities and nonlinearities. From the measurements, colors have to be "imputed" by some process that at the least picks an approximation of color matching functions to the sensor's spectral sensitivity curves. Only after making such choices can the data be put into a colorspace. So, given that the sensitivity curves have been shown, what other answers do you think would be illuminating here?
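For what it's worth, the crudest version of that "imputing" step is just a least-squares fit of a 3x3 matrix from the sensor's spectral sensitivities onto a set of color matching functions; the arrays below are placeholders, not the real Foveon curves:

```python
# Fit a 3x3 matrix mapping sensor responses to tristimulus values (least squares).
import numpy as np

wavelengths = np.arange(400, 701, 10)          # nm; hypothetical sampling grid
sensor = np.random.rand(len(wavelengths), 3)   # placeholder spectral sensitivities
cmf = np.random.rand(len(wavelengths), 3)      # placeholder color matching functions

# Solve sensor @ M ~= cmf for M; the residual measures how far the sensor is
# from being a truly colorimetric device (the Luther condition).
M, residuals, rank, _ = np.linalg.lstsq(sensor, cmf, rcond=None)
print(M)
```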

j
 
The issue that still has not been answered is what colorspace the
RAW file is in. It's almost certainly not sRGB. Until this is
answered, it doesn't make much sense to talk about where the extra
detail is hidden.
not sure what your point is, Ron. Raw data doesn't generally
conform to a colorspace, or at least not any standardized
colorspace. It includes all the effects of the camera's sensors,
like spectral sensitivities and nonlinearities. From the
measurements, colors have to be "imputed" by some process that at
the least picks an approximation of color matching functions to the
sensor's spectral sensitivity curves. Only after making such
choices can the data be put into a colorspace. So, given that the
sensitivity curves have been shown, what other answers do you think
would be illuminating here?
The RAW data exist in the space of the sensor. It's not a standard colorspace, but it's a space defined by 3 basis functions, one for each layer.

Perhaps somebody has already done this and I just haven't seen it: What is needed is an analysis of the sensitivity curve of the sensor. For CMOS devices, this is typically fairly linear until you get close to saturation, but it might be different for X3.

Once we have some approximation of gamma (or perhaps some other function) for the X3, we can figure out how the extra bits are allocated. It's almost certainly not simply dividing sRGB more finely, as the 16-bit extensions to sRGB do (here, I mean Photoshop's 16-bit version of sRGB, not sRGB-64, etc.).

It would be useful to know, in sRGB terms, where the extra precision lies. How much is out of gamut? How much is subdividing the dark areas more finely? How much is subdividing the bright areas more finely?
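One crude way to look at that question, assuming for the sake of argument that the raw values are linear and the output encoding is plain sRGB (both of which are assumptions, not established facts about the SD9):

```python
# How many 12-bit linear levels collapse onto each 8-bit sRGB code.
import numpy as np

linear = np.arange(4096) / 4095.0
srgb = np.where(linear <= 0.0031308,
                12.92 * linear,
                1.055 * linear ** (1 / 2.4) - 0.055)
codes = np.round(srgb * 255).astype(int)

counts = np.bincount(codes, minlength=256)
print(counts[:5])    # shadows: about one raw level per code -- precision is kept
print(counts[-5:])   # highlights: dozens of raw levels merge into each code
```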

I'm asking these questions b/c sg10 seems to think he can see these colors somehow. I'd like him to first explain where they are so we can then determine what kind of output device would be needed to display them. Once this is done, we can evaluate whether sg10 is really seeing all of those colors, as he claims.

--
Ron Parr
FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
The RAW data exist in the space of the sensor. It's not a standard
colorspace, but it's a space defined by 3 basis functions, one for
each layer.
I agree, it's "in the space of the sensor". But besides that not being a "standard colorspace", I argue that it's not a "colorspace" at all. The mapping from sensorRGB to something like sRGB would be well defined if sensorRGB were a colorspace. But in general, it's not, since a sensor's spectral sensitivities never exactly match the Luther criterion. That is, they are not color matching functions. So the mapping from sensorRGB to a colorspace such as sRGB or XYZ requires the input of someone's opinion about how to distribute the inherent errors of metamerism, and who-knows-what other sensor non-idealities. That's the "rendering" that maps measured raw data to colors.

Of course, if you prefer to define colorspace more liberally to include sensorRGB, I will accept that. It's just that in my narrower interpretation of colorspace, you don't get one out of a sensor.
...
I'm asking these questions b/c sg10 seems to think he can see these
colors somehow. I'd like him to first explain where they are so we
can then determine what kind of output device would be needed to
display them. Once this is done, we can evaluate whether sg10 is
really seeing all of those colors, as he claims.
Ron, no offense, but spending effort on trying to reason with sg10 seems like a waste of time all around. I do enjoy reading your viewpoints, but hate to see them being twisted around by arguments with someone who can't follow what you're saying.

There are other and more interesting reasons to seek more info about what really goes on in the X3 sensor and processing, so keep poking at that.

j
 
Ron,

Like Quixote or the windmill - I keep having trouble with this analogy - sparring with sg10 is exhausting. It would be nice to return to a discussion which surrounds the camera and its technology. I, for one, am not very gifted on the technology side and gain a lot from you, Joe, Dominic and others when not distracted by noise. Frankly, I could not give two hoots, either.

Take a deep breath and let's get on with it.
I don't think it's a language problem...
snip
The fundamental problem seems to be that sg10 does not accept that
sampling theory applies to the SD9. He thinks it's a magical
device that defies known math and science. He is therefore
grasping at straws to explain what is obvious to everybody else.
--
Laurence Φ€

http://www.pbase.com/lmatson/sd9_images
http://www.pbase.com/sigmasd9/root
http://www.pbase.com/cameras/sigma/sd9
http://www.beachbriss.com (eternal test site)
 
But besides that not
being a "standard colorspace", I argue that it's not a "colorspace"
at all. The mapping from sensorRGB to something like sRGB would be
well defined if sensorRGB were a colorspace. But in general, it's
not, since a sensor's spectral sensitivities never exactly match
the Luther criterion.
The problem with your definition is that the Luther (or Luther-Ives) criterion defines a "colorimetric" system (or color space). In the technical literature, "color space" is typically used as a much more generic term, because the descriptors don't have to be linear at all or have any reasonable transformation to a tristimulus space.

--
Erik
 
There are others on this forum who know this far more accurately than I do, who have actually used/written code to read the RAW file, but here's my go:

I am 99% sure that the RAW file is in no RGB colourspace. In simplified terms (ie. the terms I understand :-) the X3 outputs data in three channels:

Layer 1: All wavelengths within silicon's sensitive spectrum (inc. a bit of UV as we all know)

Layer 2: All wavelengths except those absorbed near the surface (so roughly R+G)

Layer 3: All wavelengths except those absorbed in the layers above (so roughly R)

These are probably not sharp cutoffs.

So it's in an RGB+RG+R colourspace. Or YCrgCr or something. How you map this to tristimulus values to convert into an RGB colourspace I have no idea, but it is obviously not simple. A few matrices should do some kind of job, though. The non-linearities in each channel's output (noise+clipping) will probably make this a bit trickier.
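Just as a toy illustration of what such a matrix might look like, and emphatically not what Sigma actually does: if the three layer signals really were the sums R+G+B, R+G, and R, a simple differencing matrix would pull them apart:

```python
# Toy example: invert an idealized "RGB+RG+R" layer model.
import numpy as np

layers_from_rgb = np.array([[1, 1, 1],    # layer 1 sees R+G+B
                            [1, 1, 0],    # layer 2 sees R+G
                            [1, 0, 0]],   # layer 3 sees R
                           dtype=float)

rgb_from_layers = np.linalg.inv(layers_from_rgb)
print(rgb_from_layers)
# [[ 0.  0.  1.]    R = layer3
#  [ 0.  1. -1.]    G = layer2 - layer3
#  [ 1. -1.  0.]]   B = layer1 - layer2
```

With soft spectral cutoffs, noise and nonlinearities, the real mapping is of course much messier than a fixed 3x3 matrix.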

I will let someone else better informed continue.
The RAW data exist in the space of the sensor. It's not a standard
colorspace, but it's a space defined by 3 basis functions, one for
each layer.
I agree, it's "in the space of the sensor". But besides that not
being a "standard colorspace", I argue that it's not a "colorspace"
at all. The mapping from sensorRGB to something like sRGB would be
well defined if sensorRGB were a colorspace. But in general, it's
not, since a sensor's spectral sensitivities never exactly match
the Luther criterion. That is, they are not color matching
functions. So the mapping from sensorRGB to a colorspace such as
sRGB or XYZ requires the input of someone's opinion about how to
distribute the inherent errors of metamerism, and who-knows-what
other sensor non-idealities. That's the "rendering" that maps
measured raw data to colors.

Of course, if you prefer to define colorspace more liberally to
include sensorRGB, I will accept that. It's just that in my narrower
interpretation of colorspace, you don't get one out of a sensor.
...
I'm asking these questions b/c sg10 seems to think he can see these
colors somehow. I'd like him to first explain where they are so we
can then determine what kind of output device would be needed to
display them. Once this is done, we can evaluate whether sg10 is
really seeing all of those colors, as he claims.
Ron, no offense, but spending effort on trying to reason with sg10
seems like a waste of time all around. I do enjoy reading your
viewpoints, but hate to see them being twisted around by arguments
with someone who can't follow what you're saying.

There are other and more interesting reasons to seek more info
about what really goes on in the X3 sensor and processing, so keep
poking at that.

j
 
Amazing that a guy that said "is that there are STILL people who
think they can control what gets posted on a public forum..

if you don't like it, don't read it, move along.

--Steve"
would say this, don't you think?
Absolutely. I'd hardly say voicing my opinion about such an obvious EXTREME case is an attempt to control what's posted.

Besides which, it's this very "freedom" which allows me to post such whining garbage.. enjoy it.

Must be great fun to take something completely out of context and use it as a flame bait, which won't be given the time of day..

FWIW, I also said this:
And followed up with a note that I "PERSONALLY" put my complaint into Phil.

Take it as you will..

--Steve
 
The problem with your definition is that the Luther (or
Luther-Ives) criterion defines a "colorimetric" system (or color
space). In the technical literature, "color space" is typically
used as a much more generic term because the descriptors don't have
to be linear at all or have any reasonable transformation to a
tristimulous space.
Erik, I'm not familiar with that non-colorimetric usage of "color space". Perhaps you can provide a typical reference. Of course, I understand that linearity is not required, but as I've seen it, there's usually a specified nonlinear relationship, as in gamma curves or the Lab-space nonlinearity functions, and a defined relationship to standard colorimetry such as XYZ coordinates.

Perhaps I'm being too strict in trying to conceptualize sensor space as something different from a color space; I'm not that familiar with the current technical literature that you know.

j
 
Erik, I'm not familiar with that non-colorimetric usage of "color
space". Perhaps you can provide a typical reference.
It's easy. Just google for "colorimetric color space" (include the quotes). If there were not a distinction, then this combination should almost never appear because it would be redundant. Also look for "device dependent color space" because these are almost never perfectly colorimetric.
Perhaps I'm being too strict in trying to conceptualize sensor
space as something different from a color space
Try here:
http://www.dpreview.com/reviews/minoltadimage7/page13.asp

--
Erik
 
There are some complications to this picture, though. For one, the
raw data come in with a "nearly lossless" compression to about half
of the original 36 bits per pixel (the 8 MB or 64 megabit raw file
with 3.4 M RGB pixels is about 18 bits per pixel, not 36). Does
that mean the final 24-bit space is actually an "expanded" space
from the 18b raw?
I don't know about the SD9, but for all other digital cameras I
know of, RAW compression is lossless.
Nikon D100 compression is slightly lossy. Nikon refers to it as "visually lossless". It's a pretty good compression: knock the 12 bits down to about 8.5 (430 discrete levels, if memory serves), then run it through a Huffman compressor.
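For anyone curious what that kind of step looks like, here is the general shape of it in a few lines; the square-root curve is a stand-in for illustration, not Nikon's actual lookup table:

```python
# Quantize 12-bit values onto roughly 430 levels through a square-root-like curve.
import numpy as np

LEVELS = 430
raw = np.arange(4096)                                        # 12-bit input values
quantized = np.round(np.sqrt(raw / 4095.0) * (LEVELS - 1)).astype(int)

print(len(np.unique(quantized)))   # only a few hundred distinct levels survive
# Shadows keep essentially full precision; highlights are merged coarsely,
# and a Huffman pass then packs the result into fewer bits per pixel.
```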

We Nikon users pretty much always turn the compression off. Not because of any image artifacts, but because Nikon did such a horrible implementation that it takes around 55 seconds to compress each raw file.

I did some experiments in the frequency domain. The compression itself really is nearly impossible to perceive.

--
Ciao!

Joe

http://www.swissarmyfork.com
 
Sam Berry wrote:
I can just second your opinion below:
There are others on this forum who know this far more accurately
that I do who have actually used/written code to read the RAW file
but...
My $0.02:

I started with physics modelling, and my respect for the Foveon people grew significantly: what they are trying to measure are differences in the attenuation of different light wavelengths in silicon. Thus, the first layer is essentially the luminance detector where most of the photons, regardless of frequency, would interact. Exponentially fewer photons would pass to the second layer, their average wavelength shifted towards the red end of the spectrum; further shift and attenuation would happen in the third layer. So you have to extract colour information from a signal which is exponentially weaker than the one measured in the first layer. Of course, in the design they played with the thickness of the layers, relative gains, etc. in order to reduce calculation errors, but the maneuvering room was limited by the thermal noise in the third layer and the sensitivity to UV in the first layer.
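Here is a toy Beer-Lambert version of that picture; the absorption lengths and layer depths are illustrative orders of magnitude only, not Foveon's actual numbers:

```python
# Exponential attenuation of photon flux with depth in silicon, per wavelength band.
import numpy as np

absorption_length_um = {"blue": 0.4, "green": 1.5, "red": 4.0}  # illustrative only
layer_depths_um = np.array([0.0, 0.2, 0.6, 2.0])                # hypothetical layer boundaries

for band, L in absorption_length_um.items():
    remaining = np.exp(-layer_depths_um / L)    # fraction not yet absorbed at each depth
    per_layer = remaining[:-1] - remaining[1:]  # fraction absorbed in each of the 3 layers
    print(band, np.round(per_layer, 2))
# Each deeper layer sees an exponentially weaker, redder mix of photons,
# which is what makes recovering colour from the lower layers delicate.
```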

They did all of that. KUDOS.

I guess I now understand (qualitatively) the limited ISO range and the sensitivity to colour balance of this wonderful camera. Mapping the detector response to sRGB for the SD9 is really a case of the tail wagging the dog, due to the relative strength of the signals available for the job.
 
Erik, I'm not familiar with that non-colorimetric usage of "color
space".
It's just one of those annoying linguistic shifts. We had one in my own field. Researchers call it "speech recognition"; the marketing folk, journalists, and end users usually say "voice recognition". We normally consider "speech recognition" to be entirely separate from "voice recognition": the first refers to understanding what was said, the second to recognizing who said it. Now, I have to deliberately swap the terms to be able to talk to my own management.
Perhaps you can provide a typical reference. Of course, I
understand that linearity is not required, but as I've seen it,
there's usually a specified nonlinear relationship, as in gamma
curves or the Lab-space nonlinearity functions, and a defined
relationship to standard colorimetry such as XYZ coordinates.

Perhaps I'm being too strict in trying to conceptualize sensor
space as something different from a color space; I'm not that
familiar with the current technical literature that you know.
I've used the term "sensor space" to describe whatever comes out of a sensor, for years. Whether the sensor is an imager, a microphone, or a position sensor. One of the cameras I designed, a rotating-filter-turret multiple-shot system, has a 7-dimensional sensor space.

There are sensor manufacturers who would like you to believe that their sensor space constitutes a "color space", in that it can be linearly transformed into CIE tristimulus values. But I have yet to see a sensor for which a simple linear operation produced errors as low as a more complex non-linear map, so I don't accept this definition.

--
Ciao!

Joe

http://www.swissarmyfork.com
 
