Question About JPEG Compression

Started Jan 28, 2013 | Questions thread
peevee1 Veteran Member • Posts: 6,247
Re: Question About JPEG Compression

w6rlf wrote:

Thanks to all.

tclune, you explained something that seems to clear up the seeming "contradiction" that has confused me. You talk about two kinds of processes here when you shoot jpegs:

-The processes that degrade the image by throwing away information.

-The "Huffman encoding", which you indicate, while "lossless", actually does most of the size compression. I might liken it to zipping a file.

Therefore: When I open a jpeg in Photoshop, then save it at the highest quality setting, what I'm getting back is the SIZE compression done by the "Huffman encoding". What I'm NOT getting back is the image quality lost by the other processes. Right?

No, actually. Huffman coding is applied at every quality setting. What Photoshop's highest quality setting preserves is the data that was interpolated from the more heavily compressed original, not the real detail. And retaining that interpolated (as opposed to real) data is pointless: it can always be interpolated again on the fly, and in fact every modern viewing device does exactly that.
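To make the lossless stage concrete, here is a toy Huffman coder in Python. This is only a sketch: real JPEG applies Huffman coding to quantized DCT coefficients using standard or optimized tables, not to raw bytes like this.

```python
import heapq
from collections import Counter

def huffman_code(data: bytes) -> dict:
    """Build a Huffman code (symbol -> bit string) for the given data."""
    freq = Counter(data)
    # Heap entries: (frequency, tiebreaker, tree); a tree is a symbol or a pair.
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate one-symbol input
        return {heap[0][2]: "0"}
    count = len(heap)
    while len(heap) > 1:                    # repeatedly merge the two rarest
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        count += 1
        heapq.heappush(heap, (f1 + f2, count, (t1, t2)))
    codes = {}
    def walk(tree, prefix=""):              # read the codes off the tree
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix
    walk(heap[0][2])
    return codes

data = b"aaaaaaaabbbccd"                    # skewed distribution compresses well
codes = huffman_code(data)
bits = "".join(codes[b] for b in data)
print(len(data) * 8, "->", len(bits), "bits")   # 112 -> 23 bits
```

Frequent symbols get short codes and rare ones longer codes, and decoding recovers the input exactly. That is why re-running this stage at a different quality setting neither loses nor restores image detail.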

Now, about the loss of color information from the 4:2:2 encoding. The thing is, on most modern sensors (except the Foveon in Sigmas) there is no full color information in each pixel to begin with: each photosite sees only green, red or blue (there used to be some variations with cyan filters, but the principle is the same). So the 4:2:2 chroma subsampling in JPEG roughly matches the color resolution of the Bayer filter on modern sensors. If you produce, say, a 16 mpix full-color picture from a 16 mpix Bayer sensor, 2/3 of the color information is interpolated, i.e. invented rather than measured, and 4:2:2 JPEG encoding throws it out for a good reason, without losing any real detail.
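A minimal sketch of what 4:2:2 subsampling does to the data volume, on one row of four hypothetical pixels (real JPEG does this per block after the RGB-to-YCbCr conversion):

```python
# One row of four pixels (hypothetical RGB values).
row = [(200, 30, 40), (198, 32, 44), (20, 180, 60), (24, 176, 58)]

def rgb_to_ycbcr(r, g, b):
    # Full-range JPEG (JFIF) conversion matrix.
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

ycc = [rgb_to_ycbcr(*px) for px in row]

ys  = [p[0] for p in ycc]                                        # keep every luma sample
cbs = [(ycc[i][1] + ycc[i + 1][1]) / 2 for i in range(0, 4, 2)]  # one Cb per 2 pixels
crs = [(ycc[i][2] + ycc[i + 1][2]) / 2 for i in range(0, 4, 2)]  # one Cr per 2 pixels

full = 3 * len(row)                   # 12 samples without subsampling
kept = len(ys) + len(cbs) + len(crs)  # 8 samples with 4:2:2
print(f"{kept}/{full} samples kept")  # a third of the data gone before any DCT
```

Luminance, where the eye (and the Bayer sensor) has the most real resolution, is kept at full resolution; only the color difference channels are halved horizontally.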

Now, a 15 mpix Foveon sensor does capture three colors at every pixel location, so 4:2:2 encoding at 15 mpix would lose real information. But when Sigma presents those files as 45 mpix, 4:2:2 does not lose much.

Also, all lenses at most apertures, and most lenses at all apertures, never reach the resolution of modern high-resolution sensors, so in most cases even the luminance information is not there to begin with (not to mention the color information smeared by chromatic aberrations, which are always present to some degree). On big sensors, some expensive lenses stopped down a few stops can reach the sensor resolution, and there some loss might be a problem. Not on point-and-shoots.

Now, converting the 12-14 bits from the sensor down to 8 bits is a real loss, but mostly a problem for postprocessing (that is why it is better to manipulate the original RAW than the JPEG). 8 bits do not compress the range of the picture to 8 EV, because both the sRGB and Adobe RGB tone curves are nonlinear (roughly gamma/power-law), not linear, so code values are spread unevenly across the stops. It should also be noted that the static luminance range of a print on paper is less than 7 EV, the real static range of LCD displays is close to that too, and humans cannot distinguish as many as 9 EV within a single picture either (people distinguish about 400 levels of light under absolutely ideal laboratory conditions on a big screen, which is a little more than 8 bits can encode, but less than 9).
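A quick sketch of that uneven allocation, using the standard sRGB transfer function:

```python
def srgb_encode(linear):
    """sRGB transfer function: linear light in [0, 1] -> encoded value in [0, 1]."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

# 8-bit code value at the top of each stop below white (1, 1/2, 1/4, ...).
stops = [1 / 2 ** n for n in range(9)]
levels = [round(srgb_encode(v) * 255) for v in stops]
print(levels)  # code values fall slowly at first: many levels per stop
```

A linear 8-bit encoding would put the stop below white at code 128 and leave only a handful of codes for the deep shadows; the sRGB curve instead keeps dozens of code values in each of the first several stops.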

How aggressively the higher spatial frequencies are thrown out depends on the specific camera. Some tests (on these forums) compared SuperFine JPEG vs Fine JPEG on a certain 16 mpix camera, both developed after the fact from the same RAW (but using the in-camera algorithm). The file size certainly grew significantly, but when the files were decompressed back to bitmaps, the (minor) differences amounted to only a few dozen pixels out of 16 million. Hence any further reduction in lossy compression beyond SuperFine JPEG would be meaningless.
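The throwing out of higher spatial frequencies happens when DCT coefficients are quantized. A 1-D toy version below (real JPEG uses an 8x8 2-D DCT and a per-frequency quantization table keyed to the quality setting; the sample values and steps here are made up for illustration):

```python
import math

def dct8(block):
    """Orthonormal 1-D DCT-II of 8 samples."""
    out = []
    for k in range(8):
        s = sum(block[n] * math.cos(math.pi * k * (2 * n + 1) / 16) for n in range(8))
        scale = math.sqrt(1 / 8) if k == 0 else math.sqrt(2 / 8)
        out.append(scale * s)
    return out

# A gentle gradient plus a small alternating ripple (fine detail).
block = [103, 99, 107, 103, 111, 107, 115, 111]
coeffs = dct8(block)

results = {}
for q in (2, 16):  # fine step vs coarse step (higher compression)
    results[q] = [round(c / q) for c in coeffs]
    print(f"step {q}: {results[q].count(0)} of 8 coefficients quantize to zero")
```

At the fine step the ripple survives in the high-frequency coefficients; at the coarse step those coefficients all round to zero, and the long runs of zeros are exactly what the lossless entropy stage then packs so cheaply.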
