Reilly Diefenbach
No thanks! Just shoot in JPEG. Modern cameras do a terrific job of processing, and you can adjust noise reduction to your preference when you set up your camera.
I have nothing against JPEG shooting, though I always shoot in raw. However, it is impossible to apply the same quality and amount of noise reduction to the JPEG as to the raw file.
Compared to shooting JPEG, I can get 1 1/2 to 2 stops better noise reduction when shooting raw and processing with DxO PhotoLab 7. This allows me to shoot at ISO 5000-6400 instead of ISO 1600 with my 1" sensor cameras, and up to ISO 25600 with my full-frame camera instead of the ISO 6400-8000 I'd use when shooting JPEG.
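As a quick sanity check of the stops arithmetic (each stop doubles the usable ISO), here is a back-of-the-envelope sketch in Python using the JPEG ceilings quoted above; nothing here is measured, it just restates the poster's numbers.

```python
# Each stop of extra noise latitude doubles the usable ISO:
# usable_iso_raw = usable_iso_jpeg * 2 ** stops_gained
jpeg_ceilings = {
    '1" sensor': 1600,   # JPEG ceiling quoted for the 1" sensor cameras
    "full frame": 6400,  # lower end of the quoted ISO 6400-8000 JPEG ceiling
}

for sensor, iso_jpeg in jpeg_ceilings.items():
    for stops in (1.5, 2.0):
        print(f"{sensor}: ISO {iso_jpeg} + {stops} stops -> ISO {iso_jpeg * 2 ** stops:.0f}")
# 1" sensor: ISO 1600 + 1.5 stops -> ISO 4525
# 1" sensor: ISO 1600 + 2.0 stops -> ISO 6400
# full frame: ISO 6400 + 1.5 stops -> ISO 18102
# full frame: ISO 6400 + 2.0 stops -> ISO 25600
```

Those figures line up with the ISO 5000-6400 and ISO 25600 numbers in the post, so the claim is internally consistent with a 1.5 to 2 stop gain.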
I guess you don't use the same cameras that I use. Inferior in-camera JPEG noise reduction is widespread, and software solutions are far more economical than replacing otherwise good cameras and lenses with the newest products.
I agree. The examples posted by SCoombs don't represent what Photo AI can do when it's optimally adjusted.

In your examples Topaz does look worse than Lightroom. Are you using the most recent version of Topaz Photo AI? Just curious, because in my examples it didn't smoosh the details away the way it did in yours. I had it set to "75" out of a possible 100...

To my eye, Topaz consistently produces denoised images that look more "smooshed", or lacking in detail, compared to LrC's, and LrC's denoised images also have better contrast. This makes sense because excess noise produces a perceived decrease in contrast, so better denoising should result in an image with better apparent contrast.
Noise obscures detail. Used well, state-of-the-art noise reduction improves the sense of detail when the noise is reduced. Of course, going too far makes it look worse.

It really is subjective. Some people don't mind the noise. Personally, if I can keep the details AND get rid of the noise, that's my goal. If I have to sacrifice some details, then I'd rather keep some of the noise.
But she is annoying. :-|

(snip) I looked her up. She is not an unknown influencer.
She may be, but her calls on why files lose competitions can be more annoying if it happens to us.
That's fine, since it is just an opinion based on your experience.

And Adobe... they can take a long walk off a short pier, worthless as far as I'm concerned.
I didn't mean to suggest otherwise. Maybe I'm getting to be too old to respond positively to a form of what someone else referred to, many years ago, as "cute power".
That would have helped. Again, just her opinion, but I did find it interesting.
I wish that she had posted her raw file so that I could experiment. (Maybe using DxO PhotoLab's DeepPRIME XD.)
You've made assumptions that might not be true. Rather than adding detail that's been synthesized from a library of random images, I think DxO's AI has learned how noise itself affects images in general at the pixel level. If the library contains images that were created noiselessly (good lighting at base ISO) and also images of the same scenes created with noise (lower light, higher ISOs), the software can learn how to 'undo' the pixel-level changes caused by noise. DxO has a large library of such images as a result of the parent company's work in sensor and lens analysis. (A rough sketch of this clean/noisy pair idea follows the examples below.)

It is (to give a snappy description of a more complicated process) replacement from a different source.
The hint is in the name - DeepPrime uses what it has learnt from a library of images to add detail it thinks is appropriate for your image. This will work fine for things like bird feathers, as most image databases have an adequate number of birds. Remember, though, that the regenerated detail is an estimate, not what is called "ground truth" in technical publications. A photograph of a parrot may end up with fine detail being added from a pigeon-blackbird-duck source.
Whether you're right or I'm right or we're both right, caveats like those apply. The results will be a best guess in any case, with the guesses getting more difficult as the noise level increases.

So when does this really matter? Examples:
Textures - Any texture it hasn't seen in its database will get biased towards something it has seen there. Expect surprises. This also goes for things like macro photography of unusual insects, or other animals and plants: if it's not in the database, it will essentially get mistaken for a mix of the closest matches.
Faces - Pores and hair (even fine hair) will get biased towards the mean of what's in the database. A hairless person will get hairier, a poreless person will get larger pores, etc. If you use AI-based upscaling solutions, you may find yourself having to address your clients' questions.
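To make the 'learn to undo noise from matched clean/noisy pairs' idea a few posts up a bit more concrete, here is a minimal, generic sketch of supervised pair-based denoising. It assumes PyTorch, uses random tensors in place of real image pairs, and illustrates the general technique only; it is not a description of DxO's actual DeepPRIME (or PRIME) pipeline.

```python
# Minimal sketch of pair-based denoising: a small CNN learns to map noisy crops
# (high ISO) back to clean crops (base ISO) of the same scenes.
# Illustrative only; not DxO's implementation.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        # Predict the noise and subtract it (residual learning), so the network
        # only has to model the pixel-level corruption, not the whole image.
        return x - self.net(x)

def train_step(model, optimizer, noisy, clean):
    # noisy, clean: float tensors of shape (batch, 3, H, W) from the same scenes
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(noisy), clean)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random data standing in for real clean/noisy pairs:
model = TinyDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
clean = torch.rand(8, 3, 64, 64)
noisy = clean + 0.1 * torch.randn_like(clean)   # synthetic noise for the demo
for step in range(5):
    print(train_step(model, opt, noisy, clean))
```

Whether this regression-style training or a more generative, detail-synthesizing approach better describes DeepPRIME is exactly what the thread is debating; the sketch only shows the pair-based setup itself.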
No, what you are describing is essentially the original PRIME noise reduction. Have you looked into what it does?
Is there a detailed description somewhere of exactly what PRIME does?
Just like noses, we all have one. That would apply to opinions as well. I dumped Adobe several years ago for a good reason (I feel). With the likes of DxO PhotoLab 7 and ON1 as an alternative (throw in Topaz Photo AI as a backup), I get superior results: not only retention of detail after virtually complete removal of any noise, but better overall results in the final image, period.

These days it has become a personal opinion. A respected birder on these forums.
Start at 13:08
I don't understand... one is called Jana and the other, Jess. Seems to be a mismatch.
Poppycock.

"It is (to give a snappy description of a more complicated process) replacement from a different source."