Options for Denoising High ISO RAW Files

Just shoot in jpeg. Modern cameras do a terrific job of processing and you can adjust noise reduction to your preference when you set up your camera.
No thanks!
 
Just shoot in jpeg. Modern cameras do a terrific job of processing and you can adjust noise reduction to your preference when you set up your camera.
I have nothing against JPEG shooting, though I always shoot in raw. However, it is impossible to apply the same quality and amount of noise reduction to the JPEG as to the raw file.
 
Just shoot in jpeg. Modern cameras do a terrific job of processing and you can adjust noise reduction to your preference when you set up your camera.
Compared to shooting JPEG, I can get 1.5 to 2 stops better noise reduction by shooting RAW and processing with DXO PL7. This lets me shoot at ISO 5000-6400 instead of ISO 1600 with my 1" sensor cameras, and up to ISO 25600 with my full frame instead of ISO 6400-8000 when shooting JPEG.

One thing to remember is that the camera spends a fraction of a second processing the JPEG, while a far more powerful computer will spend 10-30 seconds processing a RAW to JPEG. It's inevitable that the RAW-processed results will be superior. Whether the improvement is enough for you to care is another matter.
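As a quick sanity check of the stop arithmetic above (my own illustration; the ISO figures are the poster's, the helper function below is hypothetical), each stop of extra noise headroom doubles the usable ISO:

```python
# Quick check: each stop of noise-reduction headroom doubles the usable ISO.
def iso_after_stops(base_iso, stops):
    """Hypothetical helper: usable ISO after gaining `stops` stops of headroom."""
    return base_iso * 2 ** stops

for base in (1600, 6400, 8000):
    print(base, "->", round(iso_after_stops(base, 1.5)), "to", round(iso_after_stops(base, 2)))
# 1600 -> 4525 to 6400    (roughly the ISO 5000-6400 range quoted for 1" sensors)
# 6400 -> 18102 to 25600  (consistent with ISO 25600 quoted for full frame)
# 8000 -> 22627 to 32000
```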

--
Tom
 
Just shoot in jpeg. Modern cameras do a terrific job of processing and you can adjust noise reduction to your preference when you set up your camera.
I guess you don't use the same cameras that I use. Inferior in-camera JPEG noise reduction is widespread, and software solutions are far more economical than replacing otherwise good cameras and lenses with the newest products.
 
... To my eye, Topaz consistently produces denoised images that look more "smooshed" or lacking in the same amount of detail as LrC's, and LrC's denoised images also have better contrast. This makes sense because excess noise will produce a perceived decrease in contrast, so better denoising should result in an image with better apparent contrast.
In your examples Topaz does look worse than Lightroom. Are you using the most recent version of "Topaz Photo AI"? Just curious, because in my examples it didn't smoosh the details away the way it did in yours. I had it set to "75" out of a possible 100.
I agree. The examples posted by SCoombs don't represent what Photo AI can do when it's optimally adjusted.
It really is subjective. Some people don't mind the noise. Personally if I can keep the details AND get rid of the noise, that's my goal. If I have to sacrifice some details, then I'd rather keep some of the noise.
Noise obscures detail. The best use of the best state-of-the-art noise reduction improves the sense of detail when the noise is reduced. Of course, going too far makes it look worse.
 
These days it has become a matter of personal opinion. Here's a video from a respected birder on these forums.

Start at 13:08


I looked her up. She is not an unknown influencer.

 
(snip)

I looked her up. She is not an unknown influencer.

But she is annoying. :-|
She may be, but her calls on why files lose competitions can be more annoying if it happens to us.
I didn't mean to suggest otherwise. Maybe I'm getting to be too old to respond positively to a form of what someone else referred to, many years ago, as "cute power".

I wish that she had posted her RAW so that I could experiment. (Maybe using DXO Photolab DeepPrime XD.)
 
It is (to give a snappy description of a more complicated process) replacement from a different source.

The hint is in the name - DeepPrime uses what it has learnt from a library of images to add detail it thinks is appropriate for your image. This will work fine for things like bird feathers, as most image databases have an adequate number of birds. Remember, though, that the regenerated detail is an estimate, not what is called "ground truth" in technical publications. A photograph of a parrot may end up with fine detail being added from a pigeon-blackbird-duck source.

So when does this really matter? Examples:

Textures - Any texture it hasn't seen in its database, it will bias towards something it has seen in its database. Expect surprises. This also goes for things like macro photography of unusual insects, or other animals and plants. If it's not in the database, it will essentially get mistaken for a mix of the closest matches.

Faces - Pores and hair (even fine hair) will get biased towards the mean of what's in the database. A hairless person will get hairier, a poreless person will get larger pores, etc. If you use AI-based upscaling solutions, you may find yourself having to address your clients' questions.
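To make that "biased towards the database" point concrete, here is a toy numerical sketch (entirely my own, nothing to do with DxO's actual model): for Gaussian noise and a Gaussian prior fitted to "training" pixels, the optimal denoised estimate is a weighted blend of the observation and the training mean, so content unlike the training data gets pulled toward what the model has already seen.

```python
# Toy illustration (not DxO's algorithm): an MMSE denoiser under a Gaussian prior
# pulls every noisy observation toward the training-set mean.
import numpy as np

rng = np.random.default_rng(0)

# "Training database": pixel values the model has seen (mostly mid-grey, mean ~0.4)
train = rng.normal(loc=0.4, scale=0.1, size=100_000)
prior_mean, prior_var = train.mean(), train.var()

noise_var = 0.05 ** 2          # assumed sensor noise level

def mmse_denoise(noisy):
    """MMSE estimate for a Gaussian prior + Gaussian noise:
    a weighted blend of the observation and the prior mean."""
    w = prior_var / (prior_var + noise_var)
    return w * noisy + (1 - w) * prior_mean

# An "unusual" pixel the database rarely contains (e.g. a very bright highlight)
true_value = 0.95
noisy = true_value + rng.normal(0, np.sqrt(noise_var))
print(f"true {true_value:.2f}  noisy {noisy:.2f}  denoised {mmse_denoise(noisy):.2f}")
# The denoised value lands between the observation and the training mean:
# detail the model hasn't seen gets nudged toward what it has seen.
```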
 
(snip)

I looked her up. She is not an unknown influencer.

But she is annoying. :-|
She may be, but her calls on why files lose competitions can be more annoying if it happens to us.
I didn't mean to suggest otherwise. Maybe I'm getting to be too old to respond positively to a form of what someone else referred to, many years ago, as "cute power".

I wish that she had posted her RAW so that I could experiment. (Maybe using DXO Photolab DeepPrime XD.)
That would have helped. Again, it's just her opinion, but I did find it interesting.
 
It is (to give a snappy description of a more complicated process) replacement from a different source.

The hint is in the name - DeepPrime uses what it has learnt from a library of images to add detail it thinks is appropriate for your image. This will work fine for things like bird feathers, as most image databases have an adequate number of birds. Remember, though, that the regenerated detail is an estimate, not what is called "ground truth" in technical publications. A photograph of a parrot may end up with fine detail being added from a pigeon-blackbird-duck source.
You've made assumptions that might not be true. Rather than adding detail that's been synthesized from a library of random images, I think DxO's AI has learned how noise itself affects images in general at the pixel level. If the library contains images that were created noiselessly (good lighting at base ISO) and also images of the same scenes created with noise (lower light, higher ISOs), the software can learn how to 'undo' the pixel level changes caused by noise. DxO has a large library of such images as a result of the parent company's work in sensor and lens analysis.

AI noise reduction from other developers might work differently if they don't have a similarly large number of 'before and after' images for the same kind of analysis. I have no way of knowing.
So when does this really matter? Examples:

Textures - Any texture it hasn't seen in its database, it will bias towards something it has seen in its database. Expect surprises. This also goes for things like macro photography of unusual insects, or other animals and plants. If it's not in the database, it will essentially get mistaken for a mix of the closest matches.

Faces - Pores and hair (even fine hair) will get biased towards the mean of what's in the database. A hairless person will get hairier, a poreless person will get larger pores, etc. If you use AI-based upscaling solutions, you may find yourself having to address your clients' questions.
Whether you're right or I'm right or we're both right, caveats like those apply. The results will be a best guess in any case, with the guesses getting more difficult as the noise level increases.
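For what it's worth, the "noisy/clean pairs" idea described above can be sketched in a few lines (a toy of my own, not DxO's implementation): a small network is trained to map noisy patches back to their clean counterparts, and at inference its output is an estimate learned from those pairs, not ground truth.

```python
# Minimal sketch of supervised denoising on noisy/clean pairs (toy data, tiny model).
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_pairs(n=256, size=32, sigma=0.1):
    """Synthetic stand-in for a calibration library: smooth 'clean' patches
    plus versions of the same patches with Gaussian noise added."""
    clean = torch.rand(n, 1, size, size)
    clean = nn.functional.avg_pool2d(clean, 5, stride=1, padding=2)  # smooth them
    noisy = clean + sigma * torch.randn_like(clean)
    return noisy, clean

# A very small convolutional denoiser
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=3e-3)

noisy, clean = make_pairs()
for step in range(300):                     # a few quick passes over the toy set
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(noisy), clean)
    loss.backward()
    opt.step()

# At inference the network outputs its best *estimate* of the clean patch;
# on content unlike the training patches, that estimate is only a guess.
test_noisy, test_clean = make_pairs(n=8)
with torch.no_grad():
    estimate = model(test_noisy)
print("MSE of noisy input :", nn.functional.mse_loss(test_noisy, test_clean).item())
print("MSE of denoised out:", nn.functional.mse_loss(estimate, test_clean).item())
```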
 
It is (to give a snappy description of a more complicated process) replacement from a different source.

The hint is in the name - DeepPrime uses what it has learnt from a library of images to add detail it thinks is appropriate for your image. This will work fine for things like bird feathers, as most image databases have an adequate number of birds. Remember, though, that the regenerated detail is an estimate, not what is called "ground truth" in technical publications. A photograph of a parrot may end up with fine detail being added from a pigeon-blackbird-duck source.
You've made assumptions that might not be true. Rather than adding detail that's been synthesized from a library of random images, I think DxO's AI has learned how noise itself affects images in general at the pixel level.
No, what you are describing is essentially the original Prime noise reduction. Have you looked into what it does?
 
It is (to give a snappy description of a more complicated process) replacement from a different source.

The hint is in the name - DeepPrime uses what it has learnt from a library of images to add detail it thinks is appropriate for your image. This will work fine for things like bird feathers, as most image databases have an adequate number of birds. Remember, though, that the regenerated detail is an estimate, not what is called "ground truth" in technical publications. A photograph of a parrot may end up with fine detail being added from a pigeon-blackbird-duck source.
You've made assumptions that might not be true. Rather than adding detail that's been synthesized from a library of random images, I think DxO's AI has learned how noise itself affects images in general at the pixel level. If the library contains images that were created noiselessly (good lighting at base ISO) and also images of the same scenes created with noise (lower light, higher ISOs), the software can learn how to 'undo' the pixel level changes caused by noise. DxO has a large library of such images as a result of the parent company's work in sensor and lens analysis.
No, what you are describing is essentially the original Prime noise reduction. Have you looked into what it does?
Is there a detailed description somewhere of exactly what PRIME does?

Likewise, is there a detailed description somewhere of exactly what DeepPRIME (or XD) does - and does not do? This is in the user manual for PhotoLab 6:

DxO DeepPRIME and DeepPRIME XD noise reduction (Deep, for deep learning, and derived from Probabilistic Raw IMage Enhancement) goes even further in noise processing. Based on artificial intelligence and neural network technology, its algorithms have been trained using the millions of images produced by DxO over many years, for laboratory analysis. DeepPRIME XD has been developed to let you extract even more detail (XD: eXtra Details).

DeepPRIME and DeepPRIME XD both use denoising and demosaicing in a holistic approach that consists of analyzing image problems in full context, rather than focusing solely on the problems of digital noise.


That says DeepPRIME and XD do benefit from predictive algorithms trained on DxO's own laboratory images in addition to new AI and neural network technology. It doesn't say the results are just synthesized from a library of random images.
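For readers unfamiliar with the term, here is a toy sketch of what "demosaicing" refers to (my own illustration; per the manual excerpt above, DeepPRIME handles it jointly with denoising rather than as the separate step shown here). A Bayer sensor records only one colour per photosite, and the missing colour data has to be interpolated.

```python
# Toy demosaicing sketch: sample an RGB "scene" through an RGGB Bayer pattern,
# then reconstruct full colour by averaging the samples in each 2x2 cell.
import numpy as np

rng = np.random.default_rng(2)
H = W = 8
rgb = rng.random((H, W, 3))               # pretend scene with full colour everywhere

# Bayer capture: each photosite keeps only one channel (RGGB layout)
mosaic = np.zeros((H, W))
mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]   # R
mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]   # G
mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]   # G
mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]   # B

# Naive reconstruction per 2x2 cell (real converters interpolate far more carefully)
out = np.zeros_like(rgb)
for y in range(0, H, 2):
    for x in range(0, W, 2):
        out[y:y+2, x:x+2, 0] = mosaic[y, x]                            # the R sample
        out[y:y+2, x:x+2, 1] = (mosaic[y, x+1] + mosaic[y+1, x]) / 2   # the two G samples
        out[y:y+2, x:x+2, 2] = mosaic[y+1, x+1]                        # the B sample

# Note that any noise on a single photosite is spread to neighbouring output pixels,
# which is one reason treating denoising and demosaicing together can help.
print("reconstruction RMSE:", np.sqrt(np.mean((out - rgb) ** 2)))
```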
 
These days it has become a matter of personal opinion. Here's a video from a respected birder on these forums.

Start at 13:08


I looked her up. She is not an unknown influencer.

Just like noses, we all have one, and that applies to opinions as well. I dumped Adobe several years ago for a good reason (I feel). With the likes of DXO Photolab 7 and ON1 as alternatives (throw in Topaz Photo AI as a backup), I get superior results: not only retention of detail after virtually complete removal of noise, but a better overall final image, period.

There are dyed-in-the-wool Adobe folks out there, and I get that. First and foremost, I simply want to say that digital noise is NOT something our eyes see when viewing a scene naturally. Therefore, as far as I'm concerned, it has NO place in a final image. We can argue that one till the cows come home. Bottom line: it is mechanically INDUCED, and I feel it has no place in a final image. Getting to that point with a noisy image takes technique and some work at times.

This gal's video isn't something I'll agree with whatsoever. There are as many who will argue her points as she may have. We all tend to find something to defend our beloved outlooks on gear and its performance. You use what you want. All I know is that for final images I'll stick with the big three I mentioned, for the detail left over after complete removal of noise. And NO, I don't agree that a noise-free image automatically looks like AI. I feel it represents what my eyes saw in the real world.
 
It is (to give a snappy description of a more complicated process) replacement from a different source.

The hint is in the name - DeepPrime uses what it has learnt from a library of images to add detail it thinks is appropriate for your image. This will work fine for things like bird feathers, as most image databases have an adequate number of birds. Remember, though, that the regenerated detail is an estimate, not what is called "ground truth" in technical publications. A photograph of a parrot may end up with fine detail being added from a pigeon-blackbird-duck source.
Poppycock.
So when does this really matter? Examples:

Textures - Any texture it hasn't seen in its database, it will bias towards something it has seen in its database. Expect surprises. This also goes for things like macro photography of unusual insects, or other animals and plants. If it's not in the database, it will essentially get mistaken for a mix of the closest matches.

Faces - Pores and hair (even fine hair) will get biased towards the mean of what's in the database. A hairless person will get hairier, a poreless person will get larger pores, etc. If you use AI-based upscaling solutions, you may find yourself having to address your clients' questions.
 
