Fine vs Super Fine revisited

Started Jan 25, 2013 | Discussions thread
OP Don_Campbell Senior Member • Posts: 2,707
Re: Fine vs Super Fine revisited

VisionLight wrote:

Thanks for a very enlightening post, Don. As I have written here a number of times, including in the mini-reviews, in some pixel-peeping instances you can see the benefit of Superfine and in others you don't. I normally keep my default set to Superfine because of the times when it does make a difference. With a large card in the camera, there is not much downside for me.

It is my understanding that Canon uses compression of 5 bits per pixel for Superfine and 3 bits per pixel for Fine. The 67% increase in bits per pixel translates into the approximate increase in file size: a 2.4 meg file in Fine becomes a 4.1 meg file for the same image in Superfine.

Saying that it is 5 bits per pixel vs. 3 bits is basically counting file size rather than understanding the process of JPEG compression. I highly recommend the JPEGsnoop discussions and the Wikipedia JPEG article.

By my estimation, this also means that in Fine mode for every 100 original pixels, about 23-24 remain in the compressed state, while in Superfine, 33 or so pixels remain in the compressed state. So Superfine should have an additional 10 pixels per 100 of quality vs compression.

No, that is missing the basics of JPEG compression. There are several processes involved in the compression, and only one of them is lossy: applying quantization tables to the discrete cosine transform (DCT) coefficients and rounding the results to integers. So in theory you could make the quantization step lossless (not in an actual JPEG engine, but in principle) and there would still be a fair amount of compression from the later, lossless entropy-coding stage: compression without loss. That is a reduction in file size without loss of any detail, just more efficient storage.
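To make "compression without loss" concrete, here is a minimal sketch of a lossless round-trip on highly redundant data. JPEG itself uses Huffman (or arithmetic) coding for this stage; zlib's DEFLATE is used here purely as an illustrative stand-in:

```python
import zlib

# A flat region of an image is highly redundant: 4096 identical bytes.
data = bytes([128]) * 4096

packed = zlib.compress(data)

# Lossless: the round-trip restores the data exactly,
# yet the packed form is far smaller than the original.
assert zlib.decompress(packed) == data
print(len(data), "bytes ->", len(packed), "bytes, no detail lost")
```

Real image data is less redundant than this, so the gain is smaller, but the principle is the same: the entropy-coding stage shrinks the file without discarding anything.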

Now, since you know a lot more about JPEG compression than I do, am I understanding this correctly? And if so, would you expect a more obvious difference in the results? I realize that the vast majority of the quality scale has already been lost in both cases, but it still seems to me that there's a "big" difference between the two levels of compression.

Sorry if your original post already answered my question and I missed it; please point me to it if so. Otherwise, do the additional bits per pixel at this level not really make a difference?



The lossy step in JPEG compression is the application of quantization matrices and the rounding of the results to integers. As applied, it preserves the changes in lightness/darkness (luminance) that give the eye/brain the ability to see detail, and it "worries" less about the actual values, which the eye detects with less sensitivity. This is especially true for chrominance data.
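A toy sketch of that quantize-and-round step follows. The coefficient values and both quantization tables are invented for illustration (real JPEG tables are 8x8 matrices scaled by the quality setting); the point is only that a finer table loses less when the rounded values are multiplied back:

```python
# Hypothetical DCT coefficients for one row of a block (made-up values).
coeffs = [520.0, -31.4, 12.7, -6.2, 3.9, -1.8, 0.6, -0.2]

# Invented quantization tables: larger divisors = coarser rounding.
fine_q      = [16, 22, 28, 35, 44, 55, 70, 90]   # coarse, "Fine"-like
superfine_q = [8, 11, 14, 18, 22, 28, 35, 45]    # finer, "Superfine"-like

def quantize_roundtrip(coeffs, table):
    # Quantize: divide by the table entry and round to an integer
    # (this rounding is the only lossy operation), then dequantize
    # by multiplying back.
    return [round(c / q) * q for c, q in zip(coeffs, table)]

for name, table in [("fine", fine_q), ("superfine", superfine_q)]:
    restored = quantize_roundtrip(coeffs, table)
    err = sum(abs(c - r) for c, r in zip(coeffs, restored))
    print(f"{name:10s} restored={restored} total error={err:.1f}")
```

Notice that many small coefficients round to zero either way; long runs of zeros are exactly what the lossless entropy-coding stage then packs very efficiently.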

So you need to embrace the concept that JPEG conversion selectively reduces the components of the image that the eyes/brain are less sensitive to, and retains the components the eye/brain uses to detect detail. You can reduce those components, and the file size, a lot before your eye detects the changes.

