Started 9 months ago | Discussions thread
LuxLuthor Regular Member • Posts: 127


The conventional wisdom is to always shoot raw, since raw has the most information. Raw files are big, two or three times larger than a very high quality JPEG. Of course, to actually do anything with a raw file, one must create a JPEG or TIF image, and TIF files are even larger than the raw file. There are other problems with proprietary raw formats. Usually the format is not published, and third-party software does not work correctly with the raw image; many features are not implemented at all. In addition, with mirrorless cameras the images must be corrected for lens distortions. Many lenses designed for mirrorless cameras are built with the expectation that software will correct their distortions; these lenses would not work so well on film.

So one idea: why not use the camera created JPEG image and bypass all the problems with proprietary formats, raw converters, image distortion, etc.? What could go wrong?


Let's examine that idea from a practical standpoint.

Why do people use a raw converter?

Below is a common reason, from DPReview:

“You really can't get anything from an image if the data simply isn't there.
FWIW though, I do find that overall, the OOC JPEGs aren't necessarily as good as the hype should suggest. Many of mine have blown highlights, which I originally thought was just a need to use EXR modes, or at least DR400. However, when shooting Raw (using Raw +JPEG, rather than an embedded JPEG), it's clear that the highlight and shadow detail is actually there in the image captured by the sensor. Indeed, in most cases I find that it's actually unnecessary to shoot M size and DR400 to retain highlights, shooting L size and DR100 in Raw format returns almost the same highlight detail. Which is why now I mostly use the Raw images, rather than the JPEG version, and indeed I may soon shoot Raw only.”

In short, improperly encoded JPEGs with blown highlights or little shadow detail are a very common reason people use raw. They know they can go back to the raw file and recover information that was never properly encoded into the JPEG.

When scene info is encoded into a JPEG, one must be careful not to throw scene info away, because then it is gone forever. The same is true of raw. This is the main reason people shoot and store raw images: improperly encoded JPEGs.
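To make "gone forever" concrete, here is a minimal Python sketch (the scene values are made up for illustration) of why clipped highlights cannot be recovered from an 8-bit encode:

```python
# Hypothetical linear scene values in the range 0.0-2.0; anything above
# 1.0 is highlight detail beyond the JPEG's white point.
scene = [0.05, 0.5, 1.0, 1.4, 1.9]

# 8-bit encoding clips everything above 1.0 -- that detail is gone forever.
encoded = [min(255, round(v * 255)) for v in scene]
print(encoded)        # [13, 128, 255, 255, 255]

# Decoding cannot tell the two clipped highlights apart anymore.
decoded = [v / 255 for v in encoded]
print(decoded[-2:])   # both clipped highlights come back as 1.0
```

The raw file keeps the 1.4 and 1.9 distinct; an exposure or curve adjustment at encode time (as discussed below) is the only way to keep them distinct inside a JPEG.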


On the Internet you will see various authors claiming the human eye can see 10 to 20 million colors. Bullcrap! Where are the references? Why do they say this? From what I have read ( ), the human eye can only distinguish 2 to 3 million colors. Many colors in the RGB color space are not discernible and are seen as one color. Accounting for this, using MacAdam ellipses, the human eye's color space has only 2 to 3 million distinct colors. JPEGs can encode over 16 million colors, as can raw formats, so I don't think color is a problem with a properly encoded JPEG.
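The arithmetic behind the "over 16 million" figure is just 8 bits per channel across three channels; a quick check against the 2-3 million distinguishable colors claimed above:

```python
# JPEG: 8 bits per channel, three channels.
jpeg_colors = (2 ** 8) ** 3
print(f"{jpeg_colors:,} encodable colors")   # 16,777,216

# Versus roughly 3 million distinguishable colors per the
# MacAdam-ellipse argument above:
print(round(jpeg_colors / 3_000_000, 1))     # ~5.6x headroom
```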

Output images

Nobody can see your images in raw format until they are converted to a format that your computer or printer can use. That's why there are all these different raw converters; some you pay for, some are free. All of these output images are constrained to a limited range of colors and brightness. Most monitors can only output the colors in the sRGB palette; maybe a few can reproduce the Adobe RGB palette. Printers have an even more restricted range of colors. Again, JPEGs can be made to contain adequate scene info for output images, more than printers can reproduce. (You may need to find a printer who accepts TIFs for actual printing; in fact, many printers will convert whatever format you give them to a JPEG.) Here is an argument about 8-bit versus 16-bit printer drivers; in the end nobody could tell a difference. So printing or emailing can't be a reason to prefer raw over a properly encoded JPEG.

Information capture

So what do we gain by using raw images? JPEG encoding was designed by imaging experts to capture so much visual information that the JPEG is virtually indistinguishable from the uncompressed original. So what is the problem with just using JPEGs? It is a compact storage format designed to carry the most visual information per bit stored, more so than raw.

As we have seen, one reason people use the proprietary raw format (and then must use a raw converter, which may or may not work 100% with that format) is that information, namely highlight and shadow detail, is thrown away during JPEG encoding. Can this be fixed?

Another problem, which is pretty much in the past: if you edit a JPEG, save it as a JPEG, then open the edited JPEG, re-edit, and save again as a JPEG, you can wind up with little visual artifacts in the final image. But nowadays most editing programs are non-destructive; they apply a recipe to the original data, and the final image can optionally be output as a very large TIF file if need be. So that problem is gone.
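The generational-loss effect is easy to demonstrate yourself. This is a small sketch, assuming Pillow is installed; the image content and quality setting are arbitrary choices for illustration:

```python
import io
from PIL import Image

# A small synthetic image with a sharp color edge
# (hard edges are where JPEG artifacts show up most).
img = Image.new("RGB", (64, 64), (200, 60, 60))
for x in range(32, 64):
    for y in range(64):
        img.putpixel((x, y), (20, 180, 220))

def resave(im, quality=85):
    """One 'open, save as JPEG' generation, entirely in memory."""
    buf = io.BytesIO()
    im.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

# Ten edit/save generations versus a single save.
gen = img
for _ in range(10):
    gen = resave(gen)
base = resave(img)

# Worst per-channel drift accumulated across the generations.
drift = max(abs(a - b)
            for p1, p2 in zip(base.getdata(), gen.getdata())
            for a, b in zip(p1, p2))
print("max channel drift after 10 generations:", drift)
```

With a non-destructive workflow you only ever pay for one encode, so this drift never accumulates.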

Are there other problems? This website ( ) asked the same question. I think it's worth reading. He came to the conclusion that:

“In ALL of these tests, the majority of image info is in the mid-range of the plot, so if you expose well, do not blow out the highlights, and have only 8 or so stops of dynamic range to deal with, the jpegs produces excellent images that are mostly indistinguishable from 16-bit tif images, and even the ISO 100 results would be great.”

That was based on him using a 2005 Canon 1D Mark II, and cameras have advanced since then.

There were two areas where he saw that raw images would have an advantage. One was the darkest shadows, where the JPEG showed quantization noise because there are so few levels down there. Of course, the shadows are very dark, so is that a practical problem? And if so, how do we fix it? See this graph:
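You can see why there are "so few levels" in the shadows with a little counting exercise. This sketch uses a simple power-law (gamma 2.2) curve as a stand-in for the actual JPEG transfer curve, and counts how many of the 256 codes land in each stop of scene brightness:

```python
GAMMA = 2.2  # simple power-law stand-in for the JPEG transfer curve

def codes_in_stop(stop):
    """Count 8-bit codes whose linear value falls in [2^-stop, 2^-(stop-1))."""
    lo, hi = 2.0 ** -stop, 2.0 ** -(stop - 1)
    return sum(1 for v in range(256) if lo <= (v / 255) ** GAMMA < hi)

for stop in range(1, 9):
    print(f"stop {stop} below white: {codes_in_stop(stop)} codes")
```

The brightest stop gets dozens of codes while the eighth stop down gets only a handful, which is exactly where the quantization noise he measured comes from.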

The other area where he found the Canon 1D Mark II TIF image better than the JPEG was dynamic range; the highlight area in the TIF had a little more room. It varies from camera to camera, as the Canon S60's JPEG output was closer to its TIF output. Can this be fixed?


How do we ensure the original scene can be encoded into a JPEG image without much information loss?

Color palette

I would choose a slightly larger color palette, i.e. Adobe RGB. This captures more colors than you can print or view on almost any monitor, but it's not too wide. Colors outside the palette are discarded, which is why you may want the slightly larger palette. Of course, images should generally be exported into the sRGB color space for use, as the whole world uses that space.
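"Colors outside the palette are discarded" can be shown numerically. The sketch below converts the most saturated Adobe RGB green into sRGB using the standard published linear-light matrices (D65 white point); the negative components in the result mean that color simply has no sRGB representation:

```python
# Standard linear-light matrices (D65): Adobe RGB -> XYZ, then XYZ -> sRGB.
ADOBE_TO_XYZ = [(0.5767, 0.1856, 0.1882),
                (0.2974, 0.6274, 0.0753),
                (0.0270, 0.0707, 0.9911)]
XYZ_TO_SRGB = [(3.2406, -1.5372, -0.4986),
               (-0.9689, 1.8758, 0.0415),
               (0.0557, -0.2040, 1.0570)]

def mat_vec(m, v):
    return tuple(sum(row[i] * v[i] for i in range(3)) for row in m)

# The most saturated green Adobe RGB can express:
adobe_green = (0.0, 1.0, 0.0)
srgb = mat_vec(XYZ_TO_SRGB, mat_vec(ADOBE_TO_XYZ, adobe_green))
print(srgb)  # negative R and B: this color is outside the sRGB gamut
```

Capturing in Adobe RGB keeps such colors available for printing later, even though an sRGB export has to clip or remap them.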

Nikon Active D-Lighting

This feature dynamically adjusts the scene transfer curve, and sometimes the exposure, so that more of the scene info lands in the midrange and is not clipped. This allows for better JPEG encoding. If you Google it, there are lots of people saying that it just underexposes the image. But what is really happening is a problem with the proprietary raw file formats: the raw file is being improperly treated by the user's image processing software, which comes from a different company, not Nikon. Nikon's own software handles it correctly, of course. What the feature seems to primarily do is change the transfer curve, and sometimes it will reduce the exposure to expose properly for the highlights, i.e. not blow them out. If it's not a contrasty scene, it doesn't do much; the exposures will be the same with it on or off, and the transfer curve may be modified a little. Other manufacturers probably have something similar; I don't know.
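A toy model of the "pull exposure down, then lift the curve" behavior, in Python. This is NOT Nikon's actual algorithm; the stop counts and lift exponent are made-up parameters, just to show why the result is not simple underexposure:

```python
def adl_sketch(linear, protect_stops=1.5, lift=0.75):
    """Toy model of Active D-Lighting-style processing: reduce exposure
    so highlights don't clip, then lift the curve to restore midtones."""
    protected = linear / (2 ** protect_stops)  # highlights no longer clip
    return min(1.0, protected ** lift)         # gamma-style lift back up

# A highlight 1 stop above white clips in a straight encode,
# but survives the protect-then-lift treatment:
plain = min(1.0, 2.0)
with_adl = adl_sketch(2.0)
print(plain, round(with_adl, 3))  # 1.0 vs 0.771 -- detail retained
```

Software that reads only the raw exposure value sees the pull-down and calls it "underexposed", because it never applies the matching curve lift.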

Neutral and flat picture control

By default, in-camera JPEGs are created so they look more intense, snappy, punchy. However, this increased contrast pushes a lot of shadow detail deeper into the dark areas of the JPEG and pushes highlight areas of the scene to maximum values, throwing some data away. And those are exactly the JPEG areas that can be problematic, as we have just seen. The flat and neutral picture controls don't do this; relative to the standard picture control, they maintain lots of shadow and highlight detail. See the curves:
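The shadow-crushing effect of a contrasty curve is easy to count. This sketch uses a generic smoothstep-style S-curve as a stand-in for a "standard" picture control (it is not any manufacturer's actual curve):

```python
def s_curve(x):
    """Punchy stand-in for a 'standard' control: smoothstep contrast boost."""
    return x * x * (3 - 2 * x)

def flat(x):
    """Stand-in for a 'flat' control: no added contrast."""
    return x

shadows = [i / 255 for i in range(16)]            # the 16 deepest tones
std = [round(s_curve(v) * 255) for v in shadows]
flt = [round(flat(v) * 255) for v in shadows]
print("standard keeps", len(set(std)), "distinct shadow codes")
print("flat keeps    ", len(set(flt)), "distinct shadow codes")
```

The S-curve collapses those sixteen shadow tones into just a few output codes, which is the information loss the flat and neutral controls avoid.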

Master Image

The conventional wisdom is that the raw image is like film: it has to be developed, or tweaked, to support the final photographic statement. We can likewise make a well-encoded master JPEG which captures almost all the scene info and can be tweaked to support the final photographic statement. How do we do this? We turn on Active D-Lighting, probably auto or normal, and choose a neutral or flat picture control. This produces an in-camera JPEG with lots of information. It looks less intense, but can easily be tweaked to be snappy and contrasty if you like. The difference is that this JPEG has a lot more info in the shadows and highlights than a standard in-camera JPEG. In a practical sense it is virtually indistinguishable from a similar image made from the raw data. Much less data has been thrown away. The flat transfer curve will always save more of the original scene information, but it will look flat and must be tweaked more for final output. Sometimes the neutral transfer curve looks just right and requires no work, if you are not too fussy.

You can check how much of the scene is in danger of being clipped, as most software has a feature which displays pixels in the extreme highlight or shadow areas. For instance, Nikon Capture NX-D allows this: Image > Show Lost Highlights. (Some software is not too accurate; for instance, anything above 240 may be marked as in the danger zone.) You can also shoot the same scene with different transfer curves (standard, neutral, flat) and observe how these different scene tone mapping curves affect the histogram.
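A "show lost highlights" check amounts to counting pixels near the extremes. Here is a trivial Python sketch; the pixel values and thresholds are invented for illustration:

```python
# Hypothetical 8-bit luminance samples from an image.
pixels = [12, 240, 255, 128, 250, 3, 255, 90]

HIGHLIGHT = 250   # warning threshold; some software already warns above 240
SHADOW = 5

blown = sum(1 for p in pixels if p >= HIGHLIGHT)
blocked = sum(1 for p in pixels if p <= SHADOW)
print(f"{blown}/{len(pixels)} pixels at/above {HIGHLIGHT}, "
      f"{blocked}/{len(pixels)} at/below {SHADOW}")
```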


We have seen that the problem areas in a JPEG-encoded photograph are the highlights and shadows. We have also seen that camera manufacturers, at least Nikon, provide ways to overcome these issues. We also know that many third-party software packages do not properly deal with the proprietary raw formats; they mostly ignore some parameters. This lets us have the camera create JPEGs which overcome the highlight and shadow detail losses, and leads us to using a properly encoded JPEG, instead of a raw image, as a master that can be further tweaked to give better expression of the scene. One important advantage of a JPEG, besides space, is that most software knows how to handle JPEGs, whereas that is not true for raw formatted images, with their lens corrections, extra features like Active D-Lighting, and so on.

I would say: make your own tests and see whether a tweaked, well-encoded JPEG is, in a practical sense, indistinguishable from a tweaked raw image.

Also, all these features like Active D-Lighting keep improving, so don't let somebody who tried them years ago with incompatible image processing software tell you they are not reliable. I remember the Nikon FA, the first camera with matrix metering. I tried it, but center-weighted metering was more reliable, so the FA got sent back. Nikon kept working on it, and today matrix metering is very reliable; I use it all the time.

I have more testing to do, and it is on my list. Whenever I get to it…
