Just Looking
Veteran Member
I think I'm finally getting my head around what sg10 has been trying to say, and why we seem to be unable to communicate in the same language.
He attributes to JPEG the act of reducing from a 36-bit representation to a 24-bit (per pixel) representation, thereby reducing the "size of the color space" to 1/(2^12) of its original size (reducing by 4095/4096, as he puts it). This statement is equally true of converting to TIFF (with 8-bit R, G, and B channels), so it is not a property of JPEG compression itself, but of the conversion from raw data to a standard 24-bit rendered image format.
Have I got it right so far, sg10?
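To make the arithmetic explicit (just the bit-counting from the paragraph above, sketched in Python):

```python
# Number of representable colors at each per-pixel bit depth.
raw_colors = 2 ** 36       # 12 bits x 3 channels, the raw representation
rendered_colors = 2 ** 24  # 8 bits x 3 channels, TIFF or JPEG

print(rendered_colors / raw_colors)      # 1/4096, i.e. 2**-12
print(1 - rendered_colors / raw_colors)  # 4095/4096, the "reduction" cited
```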
There are some complications to this picture, though. For one, the raw data come in with a "nearly lossless" compression to about half of the original 36 bits per pixel (the 8 MB, or 64 megabit, raw file with 3.4 M RGB pixels works out to about 18 bits per pixel, not 36). Does that mean the final 24-bit space is actually an "expanded" space relative to the 18b raw?
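That bits-per-pixel figure is just division (a quick check using the file size and pixel count quoted above):

```python
# Rough bits per pixel for the raw file described above
# (decimal megabytes, as in the 8 MB ~= 64 megabit figure).
file_bits = 8e6 * 8   # 8 MB ~= 64 million bits
pixels = 3.4e6        # 3.4 M RGB pixel locations
print(file_bits / pixels)  # ~18.8 bits per pixel, about half of 36
```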
For another thing, standard 8-bit values in an image file are not chosen from among 256 equally-spaced values. The value range is "gamma-encoded" to put more values near the dark end and fewer near the light end.
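Here is a minimal sketch of what that non-uniform spacing looks like, assuming a plain 1/2.2 power law rather than any particular camera's or standard's exact curve:

```python
import numpy as np

# Gamma-encode 12-bit linear values down to 8-bit codes.
linear = np.linspace(0.0, 1.0, 4096)           # 4096 linear input levels
encoded = np.round(255 * linear ** (1 / 2.2))  # 8-bit gamma-encoded codes

# How many linear inputs share the two darkest vs. two lightest codes?
# Dark codes are finely spaced (few inputs each); light codes are coarse.
print(int(np.sum(encoded <= 1)))    # ~1 input lands in codes 0..1
print(int(np.sum(encoded >= 254)))  # ~50 inputs land in codes 254..255
```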
So, the relationship between the "size" of the color space and the number of points that can be represented, or that are uniquely represented in the image, is nothing like the simple picture that sg10 is basing his arguments on.
I'd prefer to think of the "size" of a color space as something like a volume in Lab space. Going from a raw file with good measurements of the original subject, or scene, to a file representing a reproduction, or photograph, involves "rendering" into a color space defined by at least a color triangle, a white point, and a maximum brightness. Scene colors that are outside the gamut of the space, either by having a chromaticity outside the color triangle or a brightness outside the brightness range, need to be "clipped" or otherwise mapped into the space of colors that can be represented. Part of that operation includes the nonlinear gamma mapping and quantization to a desired number of output bits. That's what PhotoPro does to get to a TIFF or JPEG. With JPEG, the image is further compressed using techniques that others have discussed better than I can.
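As a toy illustration of that render step (heavily simplified: plain clipping instead of real gamut mapping, a bare 1/2.2 gamma, 8-bit output; this is not a claim about what PhotoPro actually does internally):

```python
import numpy as np

def render(linear_rgb, max_brightness=1.0, gamma=2.2, bits=8):
    # Clip out-of-range scene values to the representable brightness range.
    clipped = np.clip(linear_rgb, 0.0, max_brightness)
    # Gamma-encode, then quantize to the desired number of output bits.
    encoded = (clipped / max_brightness) ** (1.0 / gamma)
    levels = 2 ** bits - 1
    return np.round(encoded * levels).astype(np.uint8)

# A scene value brighter than the white point simply clips to 255.
print(render(np.array([0.5, 1.0, 1.7])))  # -> [186 255 255]
```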
Within the volume of the colorspace, the encoding, whether TIFF or JPEG, 8 or 16 bit, determines the accuracy with which individual pixels (for TIFF) or blocks of pixels (for JPEG) are represented. When the accuracy is reduced, by encoding to fewer bits, then fewer unique colors will be found in an image, as sg10 keeps telling us. But can one see the difference? Sometimes yes, sometimes no. I don't think anyone can show us a difference between an 8b TIFF and a 16b TIFF, since the 16b has to be converted to 8b to be put on a screen or sent to a printer, in most cases. When JPEG cuts the file size by a factor of 10 and the number of unique colors by a factor of 4 or more, can you see the difference relative to an original TIFF? Not usually, though sometimes with careful inspection some differences may be findable.
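Anyone who wants to check the unique-color numbers on their own files can count them directly (a sketch using the Pillow and NumPy libraries; the file names are placeholders):

```python
import numpy as np
from PIL import Image

def unique_colors(path):
    # Flatten the image to a list of RGB triples and count distinct ones.
    rgb = np.asarray(Image.open(path).convert("RGB")).reshape(-1, 3)
    return len(np.unique(rgb, axis=0))

# Placeholder file names: compare an original TIFF to its JPEG version.
print(unique_colors("original.tif"))
print(unique_colors("compressed.jpg"))
```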
I'm sure someone has done serious experiments to see at what JPEG compression level viewers' preferences begin to shift significantly, but I don't have data handy. It's certainly not as bad as sg10 keeps saying. On the other hand, I think some people are blind to compression artifacts, and post things whose JPEG quality is really too low. Most have found a happy medium by now.
As to whether compression can cause moiré, I don't want to take sides. I bet there are some compression techniques that will do so, and it would be hard to prove that JPEG will not. But if someone is going to push such a concept, then a pair of compressed and uncompressed images illustrating the effect would certainly be required before any credibility would attach.
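If someone does want to make such a demonstration pair, the round trip is easy to script (another Pillow sketch; the quality setting and file names are placeholders):

```python
from PIL import Image, ImageChops

# Round-trip an uncompressed original through JPEG, then difference the
# two so any compression artifacts (moire patterns or otherwise) stand out.
original = Image.open("original.tif").convert("RGB")
original.save("roundtrip.jpg", quality=75)  # placeholder quality level
roundtrip = Image.open("roundtrip.jpg").convert("RGB")

diff = ImageChops.difference(original, roundtrip)
print(diff.getextrema())     # per-channel (min, max) pixel differences
diff.save("difference.tif")  # inspect this for patterned artifacts
```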
j