How do I process RAW images in the same way my camera processes JPEGs?

stephband

I have a Panasonic Lumix S5. It captures a RAW file and saves both that and a JPEG version of the same image. I have filters switched off. When I import the two files into an image editor they look completely different. Sometimes, I want to recreate the way my camera has processed the JPEG while editing the RAW image but I'm struggling to understand what it is doing and why.

An example. Here's an image I shot last night. It's not terribly good, but it illustrates the problem...



The JPEG produced by the camera



A screenshot of the RAW image



We can see that the lava river in the JPEG produced by the camera is yellow, as it was on the night, whereas the RAW image has regions that are bleached white. Presumably it still carries the data to unbleach those white patches.

At first I thought this was simply a question of white balance, which I assume is not applied to the RAW image. But adjusting the white balance in an image editor does not solve the problem.

So what is the camera doing and how can I recreate what it is doing?
 
The most infamous current cooking of 'raw' files is noise reduction applied to Canon R-series 'raw' files, although in the past other companies have done similar things.
The Lumix does some (optional) extra-extra noise reduction at long exposures. I had understood that was applied to the RAW file (I think I remember it doing that with JPEG turned off), but I will have to test again.

(It blocks the camera while it processes, so it can get in the way of the continuity of successive images in a timelapse, and if I were to discover that it is not applied to the raw version I could switch it off without losing anything.)
 
This is interesting. I was under the impression that many of these processes would be deterministic, but you are saying that there is a degree of interpretation involved.
Yes, indeed, there is quite a significant degree of interpretation involved. Maybe the most important is white balance. The camera cannot know what is due to the color of the subject itself versus the color of the light hitting it. When set to auto white balance, it has to make an educated guess. Sometimes it guesses accurately, sometimes not. Sometimes the camera's guess is pleasing, but sometimes not. Take the raw file into different raw converters, set them to 'as shot', and you will see that they report different values for temperature and tint. And of course, either in camera or in a raw converter, you can choose or set your own values, whether set by eye or by photographing and sampling something like the white-balance tiles on a ColorChecker Passport (a regular ColorChecker does not have any truly neutral tiles).
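
To put a number on what 'setting white balance' amounts to: at heart it is just a per-channel multiplication of the linear image data. A toy sketch, assuming a demosaiced linear RGB image already held in a NumPy array; the gain values are made up purely for illustration, not anything a particular camera uses:

```python
import numpy as np

def apply_white_balance(rgb_linear, r_gain, b_gain, g_gain=1.0):
    """Scale each channel of a linear RGB image by a white-balance gain.

    rgb_linear: float array of shape (H, W, 3), values in [0, 1].
    The gains would normally come from the camera's 'as shot' metadata
    or be chosen by the user; the values used below are illustrative only.
    """
    gains = np.array([r_gain, g_gain, b_gain], dtype=np.float32)
    balanced = rgb_linear * gains          # per-channel multiplication
    return np.clip(balanced, 0.0, 1.0)     # keep values in display range

# Example with made-up gains: warm the image slightly.
img = np.random.rand(4, 6, 3).astype(np.float32)   # stand-in for real data
warmer = apply_white_balance(img, r_gain=1.8, b_gain=1.2)
```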

And although the camera may know the precise spectral filtration characteristics of the red, green, and blue filters over some (a quarter, half, and a quarter, respectively) of the pixels, a third-party raw converter may not.
For example, I had thought that when I set lens correction to ON in the camera, I am relying on the camera to distort (or undistort) the image. And that is true for the in-camera JPEG. But I have just found a switch in RAW Power, 'Apple Lens Correction', and it comes on by default when viewing a RAW file ... and I guess that is using a different process to do lens correction.
This is very much an 'it depends' point, with different cameras, lenses, and raw converters behaving differently.
The original issue appears to be about mapping the gamut of the RAW image to the wide gamut of my MacBook's screen. I am effectively using Apple's RAW processes, as that is what both the application RAW Power and Pixelmator Pro rely upon.
And unless the screen is hardware-calibrated-and-profiled, and you're using it in the ambient light conditions under which it was calibrated and profiled, and you're using fully color-managed software, you can't really know what you're getting. The Mac OS ecosystem seems to be ahead of the Windows ecosystem in this regard, but AFAIK there have been reports of some of the newer Mac OS versions 'breaking' some color management.
 
Camera RAW is unprocessed. The computer software has to do all corrections, including converting the image (commonly a Bayer mosaic) to RGB for all pixels, and applying lens corrections. (Lens corrections typically include distortion and vignetting. Lateral color correction, which gives one type of color fringing, may also be done.)
Jim Kasson (a sometimes-moderator and regular participant at various times in the medium format, Nikon, and Sony forums) has shown pretty well that very few camera 'raw' files are truly raw, and the real question is how much and what type of processing has occurred. The most infamous current cooking of 'raw' files is noise reduction applied to Canon R-series 'raw' files, although in the past other companies have done similar things. Other processing includes, but is not limited to, substituting and interpolating to fill PDAF pixels. So although pretty much any 'raw' is much less cooked than a JPEG, it's rarely actually raw, and the question is what and how much has been done.
Do you have any links about Canon R raw in-camera noise reduction ready to hand?

I certainly see plenty of noise in R5 high-ISO raws, so apparently Canon doesn't get too carried away. (I imagine, perhaps incorrectly, that the camera can't do AI noise reduction. It's far from instantaneous on a real PC, IMHO.)

I might be able to find some links by searching the forums, but apparently my skill in getting good results from the forum's search engine is poor. I sometimes resort to Google instead (requiring dpreview in the search).
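
Going back to the quoted point that the computer software has to do all the corrections, including converting the Bayer mosaic to RGB: here is a minimal sketch of the core of a raw conversion, using the third-party rawpy (LibRaw) bindings. This is just one illustrative tool, not what anyone in this thread is necessarily using, and the filename is a placeholder:

```python
import rawpy
import imageio

# Placeholder filename; any raw file that LibRaw understands will do.
with rawpy.imread("some_file.RW2") as raw:
    rgb = raw.postprocess(
        use_camera_wb=True,                  # apply the camera's "as shot" WB multipliers
        no_auto_bright=True,                 # don't auto-stretch the exposure
        output_color=rawpy.ColorSpace.sRGB,  # map camera RGB into a standard color space
        output_bps=16,                       # keep 16 bits so highlight detail survives
    )   # demosaics the Bayer mosaic to per-pixel RGB along the way

# Lens corrections (distortion, vignetting, lateral CA) would be a further
# step on top of this; they are not applied here.
imageio.imwrite("developed.tiff", rgb)
```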
 
The most infamous current cooking of 'raw' files is noise reduction applied to Canon R-series 'raw' files, although in the past other companies have done similar things.
The Lumix does some (optional) extra-extra noise reduction at long exposures. I had understood that was applied to the RAW file (I think I remember it doing that with JPEG turned off), but I will have to test again.

(It blocks the camera while it processes, so it can get in the way of the continuity of successive images in a timelapse, and if I were to discover that it is not applied to the raw version I could switch it off without losing anything.)
Canon cameras can take, and subtract, a dark frame to reduce fixed-pattern noise for very long exposures. Is that what your Panasonic camera is doing?

(It's typically done in astrophotography, and it is the lowest order correction applied to thermal imagers.)
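
For anyone curious, that kind of long-exposure noise reduction is conceptually just a per-photosite subtraction of a second, shutter-closed exposure. A toy sketch, assuming the two raw frames are already loaded as NumPy arrays; this is not any particular camera's implementation:

```python
import numpy as np

def subtract_dark_frame(exposure, dark_frame, black_level=0):
    """Remove fixed-pattern noise (hot pixels, amp glow) by subtracting a
    dark frame shot with the same exposure time and temperature.

    Both inputs are raw photosite arrays of the same shape; the result is
    clamped so it never drops below the sensor's black level.
    """
    corrected = exposure.astype(np.int32) - (dark_frame.astype(np.int32) - black_level)
    return np.clip(corrected, black_level, None).astype(exposure.dtype)

# Illustrative use with synthetic data standing in for two raw captures.
light = np.random.randint(512, 4096, size=(8, 8), dtype=np.uint16)
dark = np.random.randint(512, 600, size=(8, 8), dtype=np.uint16)
clean = subtract_dark_frame(light, dark, black_level=512)
```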
 
Camera RAW is unprocessed. The computer software has to do all corrections, including converting the image (commonly a Bayer mosaic) to RGB for all pixels, and applying lens corrections. (Lens corrections typically include distortion and vignetting. Lateral color correction, which gives one type of color fringing, may also be done.)
Jim Kasson (a sometimes-moderator and regular participant at various times in the medium format, Nikon, and Sony forums) has shown pretty well that very few camera 'raw' files are truly raw, and the real question is how much and what type of processing has occurred. The most infamous current cooking of 'raw' files is noise reduction applied to Canon R-series 'raw' files, although in the past other companies have done similar things. Other processing includes, but is not limited to, substituting and interpolating to fill PDAF pixels. So although pretty much any 'raw' is much less cooked than a JPEG, it's rarely actually raw, and the question is what and how much has been done.
Do you have any links about Canon R raw in-camera noise reduction ready to hand?
Sure. I know Photons to Photos has published about this. Take a look at, e.g., a PtP chart comparing the Canon R5, Nikon Z7 II, and Sony A7R IV:

https://www.photonstophotos.net/Charts/PDR.htm#Canon EOS R5,Nikon Z 7II,Sony ILCE-7RM4

As you can see, the R5 data points are downward-pointing triangles where "triangle down indicates noise reduction" but the Z7 II and A7R IV are circles.
 
Thanks, that is useful. Regarding the histogram: in RAW Power it does change when you switch Gamut Map OFF, but in Pixelmator it does not change when you do the equivalent and switch EDR mode ON. I am beginning to suspect Pixelmator may be at fault here.
A link to your RAW would go a long way in helping us help you.
Yes, sorry, it is here...

stephen.band/test/PANA4186.RW2

My question now is, how do you guys compose bracketed photos together, for example to create an HDR edit of several exposures? The software I have will process one RAW file at a time. In Pixelmator Pro, you can switch wide gamut on (EDR - ON) for a single RAW layer, but as soon as you add another layer it is automatically switched off. Do the RAW files need to be processed and 'flattened' before you edit them together?
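
For what it's worth, one common programmatic route is exactly what you describe: develop each raw to an ordinary RGB image first, then align and merge the developed frames. A hedged sketch using OpenCV's exposure-fusion tools (the filenames are placeholders, and this is not what Pixelmator or RAW Power do internally):

```python
import cv2

# Already-developed (raw-converted) bracketed frames; filenames are placeholders.
paths = ["under.tif", "normal.tif", "over.tif"]
frames = [cv2.imread(p) for p in paths]

# Align the frames to correct small shifts between handheld exposures.
cv2.createAlignMTB().process(frames, frames)

# Mertens exposure fusion blends the best-exposed parts of each frame
# without needing a true HDR radiance map or a separate tone-mapping step.
fused = cv2.createMergeMertens().process(frames)   # float image, roughly in [0, 1]
cv2.imwrite("fused.tif", (fused * 65535).clip(0, 65535).astype("uint16"))
```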
With PhotoLab, you can copy and paste raw processing settings, so you can ensure that all the images in a stack are identically processed.

I put your raw file through PhotoLab 5, using several different settings for the Smart Lighting (which controls how the background is lit):



Your camera JPEG. Has this been cropped? It appears to have been.



PhotoLab 5, Smart Lighting OFF



PhotoLab 5, SL=25



PL5, SL=50





PL5, SL=75

Which one you prefer might depend on whether you want the spectators to be seen or hidden.

You will see that these images are wider and higher than your original, which you presumably cropped?

You can see that PL5 has dealt with the highlights better:



PhotoLab has retained much more detail in the bright areas
 
Camera RAW is unprocessed. The computer software has to do all corrections, including converting the image (commonly a Bayer mosaic) to RGB for all pixels, and applying lens corrections. (Lens corrections typically include distortion and vignetting. Lateral color correction, which gives one type of color fringing, may also be done.)
Jim Kasson (a sometimes-moderator and regular participant at various times in the medium format, Nikon, and Sony forums) has shown pretty well that very few camera 'raw' files are truly raw, and the real question is how much and what type of processing has occurred. The most infamous current cooking of 'raw' files is noise reduction applied to Canon R-series 'raw' files, although in the past other companies have done similar things. Other processing includes, but is not limited to, substituting and interpolating to fill PDAF pixels. So although pretty much any 'raw' is much less cooked than a JPEG, it's rarely actually raw, and the question is what and how much has been done.
Do you have any links about Canon R raw in-camera noise reduction ready to hand?
Sure. I know Photons to Photos has published about this. Take a look at, e.g., a PtP chart comparing the Canon R5, Nikon Z7 II, and Sony A7R IV:

https://www.photonstophotos.net/Charts/PDR.htm#Canon EOS R5,Nikon Z 7II,Sony ILCE-7RM4

As you can see, the R5 data points are downward-pointing triangles where "triangle down indicates noise reduction" but the Z7 II and A7R IV are circles.
Thanks.

Seems strange. Noise reduction in the R5 raws below ISO 600, but not above (until 102400)?
 
The application RAW Power tells me the red channel clips in the river, but the other channels do not.
The JPEGs you included tell a different story. In the OOC JPEG, the red and green channels are both blown in yellow areas of the flow. In the reddish areas the green channel isn't blown but the red channel is blown or close to blown. The blue channel is generally not blown at all in the flowing lava but it's blown in the white erupting part. (You get white when all three channels are blown).

In the version you edited the big problem is that the blue channel is mostly blown as well as the red and green channels in the flowing lava. The difference is quite clearly illustrated by looking at the blue channel only:



Top=OOC JPEG; Bottom=Edited Version

Simply by eliminating the blue channel altogether, the flow is yellow throughout:



Top=OOC JPEG; Bottom=Edited version with blue channel removed

Of course, it's not necessary to completely kill the blue channel. You could just reduce the brightness (highlight especially) in the blue channel and get a very satisfactory rendering. It's not clear why the blue channel is pushed so much lighter in the edited version vs. the OOC JPEG version. Something about how you processed or WB'd or the basic camera profiles you're using in the editors you're playing with.
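
To illustrate that kind of targeted adjustment, a minimal sketch assuming the converted image is already loaded as a float NumPy array in [0, 1]; the threshold and factor are made-up values for illustration, not a recipe:

```python
import numpy as np

def tame_blue_highlights(rgb, threshold=0.8, factor=0.6):
    """Pull down only the brightest blue values so near-white lava falls
    back toward yellow, leaving shadows and midtones alone.

    rgb: float array of shape (H, W, 3) with values in [0, 1].
    """
    out = rgb.copy()
    blue = out[..., 2]                     # view into the blue channel
    hot = blue > threshold                 # only the blown / near-blown blues
    blue[hot] = threshold + (blue[hot] - threshold) * factor
    return out
```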
However, I just discovered the control 'Gamut Map'. When I set that to OFF the river becomes yellow.
That doesn't make sense to me because you'd expect the ON setting to prevent the clipping. It seems to be doing the opposite of what it's supposed to do. A weird labeling issue perhaps?
In Pixelmator, setting EDR (Extended Dynamic Range) to ON appears to do the same thing.
That makes more sense.
I am on a recent Macbook Pro with a high dynamic range screen. This is clearly something to do with mapping gamut to the dynamic range of my screen.
No, it's not a monitor DR issue. It's an issue with how you're processing the files. As you get better at understanding how to manipulate the raw converter/editor tools you'll be able to better match and improve upon what comes straight out of the camera.
 
Thank you, that's helpful.
You will see that these images are wider and higher than your original, which you presumably cropped?
No, these are straight from the camera. But I shoot in a 16:9 frame on a 4:3 sensor. The 16:9 crop is preserved when importing the RAW file into RAW Power or into Pixelmator, or when previewing in the Finder on a Mac. So the crop info must be stored in the RAW data even though the full 4:3 image is there. Clearly it isn't preserved when you import into PhotoLab. It also isn't preserved if I convert my RAW files to .dng files.
 
The application RAW Power tells me the red channel clips in the river, but the other channels do not.
The JPEGs you included tell a different story. In the OOC JPEG, the red and green channels are both blown in yellow areas of the flow. In the reddish areas the green channel isn't blown but the red channel is blown or close to blown. The blue channel is generally not blown at all in the flowing lava but it's blown in the white erupting part. (You get white when all three channels are blown).

In the version you edited the big problem is that the blue channel is mostly blown as well as the red and green channels in the flowing lava. The difference is quite clearly illustrated by looking at the blue channel only:
Yes. We are getting our wires crossed a little bit, and it's partly my fault. Neither of these are edited. The second is a screenshot of a preview of the raw file, showing how it displays (blown out) on my screen. The actual raw file is here (sorry, I didn't post it earlier)

stephen.band/test/PANA4186.RW2

The raw file says only red clips, yet it displays as white, until I switch those switches. This appears to be because of the way the gamut is mapped to my screen.
 
Thank you, that's helpful.
You will see that these images are wider and higher than your original, which you presumably cropped?
No, these are straight from the camera. But I shoot in a 16:9 frame on a 4:3 sensor. The 16:9 crop is preserved when importing the RAW file into RAW Power or into Pixelmator, or when previewing in the Finder on a Mac. So the crop info must be stored in the RAW data even though the full 4:3 image is there. Clearly it isn't preserved when you import into PhotoLab. It also isn't preserved if I convert my RAW files to .dng files.
But note that your image was also noticeably narrower, and not just shallower, than the PhotoLab rendering. Your camera and raw rendering have unnecessarily lost information from the sides of the image, and not just the top and bottom as you intended. It's made your lens appear narrower than the actual hardware is capable of. That's a sign of poor lens distortion correction in both the camera and the raw processor.
 
I think that is lens correction doing that. The Mac appears to preview the raw image with lens correction ON if it was on inside the camera.
 
I think that is lens correction doing that. The Mac appears to preview the raw image with lens correction ON if it was on inside the camera.
My images also had lens correction ON. I could have turned it off, but didn't. It's just that the in-camera lens correction is poor.
 
The application RAW Power tells me the red channel clips in the river, but the other channels do not.
The JPEGs you included tell a different story. In the OOC JPEG, the red and green channels are both blown in yellow areas of the flow. In the reddish areas the green channel isn't blown but the red channel is blown or close to blown. The blue channel is generally not blown at all in the flowing lava but it's blown in the white erupting part. (You get white when all three channels are blown).

In the version you edited the big problem is that the blue channel is mostly blown as well as the red and green channels in the flowing lava. The difference is quite clearly illustrated by looking at the blue channel only:
Yes. We are getting our wires crossed a little bit, and it's partly my fault. Neither of these are edited.
Poor wording on my part to refer to the one you processed from raw as "edited". "Converted" or "processed" would have been more precise terms to use.
The second is a screenshot of a preview of the raw file, showing how it displays (blown out) on my screen.
That should make no difference. The preview (and, hence, your screenshot) is using the same RGB values as are generated when you save the previewed file to an output file (e.g. as a JPEG or TIFF). If you're skeptical, just save a JPEG from the same file/settings as you were using when you did the screengrab. The RGB values in both the JPEG and the screengrab also saved as a JPEG will be the same. If they're not, something very odd is going on.
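
If you want to run that check numerically rather than by eye, a quick sketch with Pillow and NumPy; the filenames are placeholders, and you need to pick crop boxes that cover the same patch of lava in each file, since the screengrab may be scaled and include window chrome:

```python
import numpy as np
from PIL import Image

def mean_rgb(path, box):
    """Average RGB over a crop box given as (left, top, right, bottom)."""
    img = Image.open(path).convert("RGB").crop(box)
    return np.asarray(img, dtype=np.float64).reshape(-1, 3).mean(axis=0)

# Placeholder paths and crop boxes; choose boxes that frame the same
# region of the image in each file.
jpeg_region = (100, 100, 200, 200)
grab_region = (150, 180, 250, 280)
print("exported JPEG:", mean_rgb("exported.jpg", jpeg_region))
print("screengrab:   ", mean_rgb("screengrab.png", grab_region))
```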
The actual raw file is here (sorry, I didn't post it earlier)
stephen.band/test/PANA4186.RW2

The raw file says only red clips,
I checked the raw in Rawdigger, which is the best way to readily tell what's going on at the raw level. The histograms and clipping warnings you see in most converters are based on gamma corrected and profiled data. They also often apply a certain hidden amount of automatic highlight recovery so that it may appear to you that some highlights aren't clipped when they actually are. Per Rawdigger, quite a bit of the flowing lava is blown in raw in both the red and green channels and most of erupting area is completely blown in the red channel. There's only a very tiny bit of the erupting area that's blown in the blue channel and virtually none of the flowing lava is blown in the blue channel.
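
As a rough illustration of the kind of check Rawdigger does, here is a sketch using the rawpy (LibRaw) bindings: it counts, per CFA channel, how many photosites sit at the file's reported white level, before any demosaicing, white balance, or profile is applied. This is only an approximation; Rawdigger itself is more careful about black and white levels:

```python
import numpy as np
import rawpy

CHANNEL_NAMES = {0: "R", 1: "G", 2: "B", 3: "G2"}

with rawpy.imread("PANA4186.RW2") as raw:   # the file linked above
    values = raw.raw_image_visible    # raw photosite values, pre-demosaic
    colors = raw.raw_colors_visible   # CFA channel index of each photosite
    white = raw.white_level           # saturation value reported by the file

    for idx, name in CHANNEL_NAMES.items():
        channel = values[colors == idx]
        if channel.size == 0:
            continue
        clipped = np.count_nonzero(channel >= white)
        print(f"{name}: {clipped}/{channel.size} photosites at white level "
              f"({100.0 * clipped / channel.size:.2f}%)")
```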
yet it displays as white, until I switch those switches.
Any of the neutrally colored lava is neutral ("white" as you refer to it) precisely because all three channels are roughly the same (i.e., similar numeric RGB values). The mystery is why those switches should need to be switched at all in order for the lava to appear yellow. It definitely wasn't necessary to do anything extraordinary in ACR (the Adobe converter) and others in the thread have reported success with other converters. With default ACR settings, I'm not really seeing anything wonky with the blue channel like what appears in your screengrab. There may be some relatively minor out of gamut issues in sRGB but they're not the type that would result in the lava looking less yellow. It's still something of a mystery to me why things are so far off for you as a default/starting point in the converters you've tried.
This appears to be because of the way the gamut is mapped to my screen.
As I probably unsuccessfully explained, it has nothing to do with your monitor. Those roughly equal values in all three channels tell us that the monitor is behaving perfectly correctly to display the lava flow as white on ANY reasonably well-calibrated monitor or print. The problem is further upstream than how the data is being fed to and interpreted by the monitor.
 
I know it's not the main subject of the thread but I rather like the effect with the foreground lightened to show the watchers.

(attached image: the raw file rendered with the foreground lightened to show the watchers)
 
The application RAW Power tells me the red channel clips in the river, but the other channels do not.
The JPEGs you included tell a different story. In the OOC JPEG, the red and green channels are both blown in yellow areas of the flow. In the reddish areas the green channel isn't blown but the red channel is blown or close to blown. The blue channel is generally not blown at all in the flowing lava but it's blown in the white erupting part. (You get white when all three channels are blown).

In the version you edited the big problem is that the blue channel is mostly blown as well as the red and green channels in the flowing lava. The difference is quite clearly illustrated by looking at the blue channel only:
Yes. We are getting our wires crossed a little bit, and it's partly my fault. Neither of these are edited. The second is a screenshot of a preview of the raw file, showing how it displays (blown out) on my screen. The actual raw file is here (sorry, I didn't post it earlier)
stephen.band/test/PANA4186.RW2

The raw file says only red clips, yet it displays as white, until I switch those switches. This appears to be because of the way the gamut is mapped to my screen.
Finally, I've found your RAW, and I decided to cook it.

First is your raw cooked in SilkyPix with initial settings, which means that I did not move any sliders.



(attached image: SilkyPix, initial settings)



Second is the product from the ON1 RAW developer with initial settings, just to compare results from two different programs under initial settings.



(attached image: ON1, initial settings)



And the third photo is the result of my intervention in the ON1 development with two touches only: first is AI tone adjustment, second is AI NoNoise, which I apply to all RAW shots.



(attached image: ON1 with AI tone adjustment and AI NoNoise)

And the conclusion is up to you.

--
If you want to be equal, you have to be better...
 
Yes, I do too. I have longer exposures of the same image I was going to stack to create something like this.
 
That was Capture One with just a few simple tweaks. Exposure increased by a small amount. Highlight minus 35, White minus 46, Shadow plus 67, Black plus 57. Clarity plus 22. Could probably improve with a bit more time spent.
 
That was Capture One with just a few simple tweaks. Exposure increased by a small amount. Highlight minus 35, White minus 46, Shadow plus 67, Black plus 57. Clarity plus 22. Could probably improve with a bit more time spent.
That rendering seems very noisy?
 
Thank you for doing this. I like the SilkyPix import, it has the closest colouring to how I remember the scene on Saturday. (I like how you edited the last one too - I have multiple exposures I was going to stack to create something like this).

Overall, the conclusion to this thread seems to be, "buy more tools", which I'm slightly surprised about, because until today I thought they were all doing much the same thing, and assumed MacOS processes to be among the best.

SilkyPix explicitly supports my camera, but seems to have multiple websites which is a bit confusing. ON1 on the face of it looks like it might be a richer tool for composing HDR stacks, and it appears to support Photoshop plugins (Panasonic has a plug-in for importing HLG images, another thing I wanted to play with). But it's hard to tell. Time to download and try them out!

Thanks for your help!
 
