Will Apple's NPU ever be used by Adobe SW?

No, it doesn’t. There’s twice as much waiting with Denoise.
I haven't played with DXO for many years. When I tested it, it didn't actually do the Denoise until you exported the picture, so you could not actually see what the output was going to look like. Has that changed?
PhotoLab 8 has a larger Loupe tool that you can move around the image to preview the effect of DP on part of the image. This is obviously not as nice as Denoise's full-image preview, but, OTOH, you don't have to wait for it. You can apply the DP instruction to a large batch of images, and as you click through them you can immediately use the Loupe tool to preview the effect. You don't have to wait, because the actual full-image processing doesn't happen until export, and the Loupe preview is generated on the fly.
That could explain why DXO does the DeepPRIME denoise so fast
It doesn't "do" DeepPRIME when you choose it. All edits to images with DxO are simply instructions that aren't actually executed until export, which is why the entire process is non-destructive, IOW you can change edits at any time. The processing happens on export, which is why exporting images with DP applied takes longer than without it.
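To make the deferred model concrete, here's a minimal sketch, in Swift with made-up names (nothing to do with DxO's actual code): edits are stored as plain data, the loupe runs them over a small crop on demand, and the full frame isn't touched until export.

    import Foundation

    struct Image {
        var width: Int, height: Int
        var pixels: [Float]                    // grayscale, row-major, for brevity
    }

    // Edits are instructions, not immediate pixel operations.
    enum Edit {
        case exposure(ev: Float)
        case denoise(strength: Float)          // stand-in for a DeepPRIME-style pass
    }

    struct EditList {
        var edits: [Edit] = []                 // changing this list is instant: no pixels touched

        // Execute the instruction list over whatever pixels we're handed.
        func render(_ input: Image) -> Image {
            var out = input
            for edit in edits {
                switch edit {
                case .exposure(let ev):
                    let gain = Float(pow(2.0, Double(ev)))
                    out.pixels = out.pixels.map { $0 * gain }
                case .denoise(let s):
                    // Toy stand-in for real NR: blend each pixel toward its neighbor.
                    for i in out.pixels.indices.dropFirst() {
                        out.pixels[i] = (1 - s) * out.pixels[i] + s * out.pixels[i - 1]
                    }
                }
            }
            return out
        }

        // Loupe preview: render only a small crop, so there's no full-image wait.
        // (Bounds checks omitted for brevity.)
        func loupePreview(of image: Image, x: Int, y: Int, size: Int) -> Image {
            var crop = Image(width: size, height: size,
                             pixels: Array(repeating: 0, count: size * size))
            for row in 0..<size {
                for col in 0..<size {
                    crop.pixels[row * size + col] =
                        image.pixels[(y + row) * image.width + (x + col)]
                }
            }
            return render(crop)
        }

        // Export: the whole frame finally gets processed; the waiting happens here.
        func export(_ image: Image) -> Image { render(image) }
    }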
and the export so slow. Not taking sides, just being Curious George.
On even modest Macs with Apple Silicon processors, DeepPRIME 3 is pretty quick at 5-15 seconds, and even the more demanding max-detail DeepPRIME XD2s is about as fast as Denoise, at 10-30 seconds.
Frankly, I suspect both products are fast enough to satisfy most people.
Given that "most people" are not event pros, that seems right. But, for those of us who have to produce large batches of finished images on tight deadlines (sports, event and wedding shooters), the 2x speed advantage of DxO is hard to ignore. In many cases, I can deliver on the same night rather than processing overnight and delivering the next day. Also, when I'm processing onsite between shooting sessions for same-day delivery at a conference I'm covering, 2x can be the difference between NR and no NR.

OTOH, I've found one workflow in which Denoise isn't slower and brings an extra benefit: it works on my Sony 26MP mRAW files about as fast as DeepPRIME 3 runs on my Sony 61MP RAW files from my a7RV and a7CR. DxO doesn't work with mRAW. I actually prefer 24MP-33MP files for event work, so the ability to shoot mRAW and still have access to advanced NR is an advantage of the new Denoise. OTOH, I may have to give up some other valuable DxO tools (Lens Sharpness, Distortion, Smart Lighting), so I'm on the fence about which workflow to choose. It may come down to the specific requirements of a given job.

--
Event professional for 20+ years, travel & landscape enthusiast for 30+, stills-only.
http://jacquescornell.photography
http://happening.photos
 
No, it doesn’t. There’s twice as much waiting with Denoise.
I haven't played with DXO for many years. When I tested it, it didn't actually do the Denoise until you exported the picture, so you could not actually see what the output was going to look like. Has that changed?
PhotoLab 8 has a larger Loupe tool that you can move around the image to preview the effect of DP on part of the image. This is obviously not as nice as Denoise's full-image preview, but, OTOH, you don't have to wait for it. You can apply the DP instruction to a large batch of images, and as you click through them you can immediately use the Loupe tool to preview the effect. You don't have to wait, because the actual full-image processing doesn't happen until export, and the Loupe preview is generated on the fly.
It's still a loupe view, compared to LrC where you can see the entire image in real time.
That could explain why DXO does the DeepPRIME denoise so fast
It doesn't "do" DeepPRIME when you choose it. All edits to images with DxO are simply instructions that aren't actually executed until export, which is why the entire process is non-destructive, IOW you can change edits at any time. The processing happens on export, which is why exporting images with DP applied takes longer than without it.
With PL you still have to wait for it to export. I've tried to make this point many times with other posters, and they never seemed to get it. Thanks for this, and it doesn't matter if it is good or bad for either developer. It's just a fact.

Some like to wait for it to export. I prefer LrC, which applies Denoise during development, letting me see the entire file at all times. I prefer the instant LrC exports.
and the export so slow. Not taking sides, just being Curious George.
On even modest Macs with Apple Silicon processors, DeepPRIME 3 is pretty quick at 5-15 seconds, and even the more demanding max-detail DeepPRIME XD2s is about as fast as Denoise, at 10-30 seconds.
Frankly, I suspect both products are fast enough to satisfy most people.
Given that "most people" are not event pros, that seems right. But, for those of us who have to produce large batches of finished images on tight deadlines (sports, event and wedding shooters), the 2x speed advantage of DxO is hard to ignore. In many cases, I can deliver on the same night rather than processing overnight and delivering the next day. Also, when I'm processing onsite between shooting sessions for same-day delivery at a conference I'm covering, 2x can be the difference between NR and no NR.

OTOH, I've found one workflow in which Denoise isn't slower and brings an extra benefit: it works on my Sony 26MP mRAW files about as fast as DeepPRIME 3 runs on my Sony 61MP RAW files from my a7RV and a7CR. DxO doesn't work with mRAW. I actually prefer 24MP-33MP files for event work, so the ability to shoot mRAW and still have access to advanced NR is an advantage of the new Denoise. OTOH, I may have to give up some other valuable DxO tools (Lens Sharpness, Distortion, Smart Lighting), so I'm on the fence about which workflow to choose. It may come down to the specific requirements of a given job.
--
Fail Forward
 
The short answer is no, I don't; they would've already done it by now.

My opinion? They should; wasted silicon otherwise.

The trouble? Adobe needs folks who can write MSL, which is C++-based. That's why this hasn't happened and probably never will.
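For anyone curious what that looks like: MSL (Metal Shading Language) really is a C++14-derived dialect. The toy snippet below is my own illustration, not anything from Adobe; the Swift host compiles a trivial MSL kernel at runtime, and the C++ flavor (templates, attributes, pointers) is plain to see.

    import Metal

    // A trivial MSL compute kernel, held as a string and compiled at runtime.
    let mslSource = """
    #include <metal_stdlib>
    using namespace metal;

    // Halve every element of a buffer; note the C++-style attribute syntax.
    kernel void halve(device float *data [[buffer(0)]],
                      uint id [[thread_position_in_grid]]) {
        data[id] *= 0.5;
    }
    """

    let device = MTLCreateSystemDefaultDevice()!          // the Mac's GPU
    let library = try! device.makeLibrary(source: mslSource, options: nil)
    let pipeline = try! device.makeComputePipelineState(
        function: library.makeFunction(name: "halve")!)
    // From here you'd dispatch it with a command queue and compute encoder as usual.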
 
No, it doesn’t. There’s twice as much waiting with Denoise.
I haven't played with DXO for many years. When I tested it, it didn't actually do the Denoise until you exported the picture, so you could not actually see what the output was going to look like. Has that changed?
PhotoLab 8 has a larger Loupe tool that you can move around the image to preview the effect of DP on part of the image. This is obviously not as nice as Denoise's full-image preview, but, OTOH, you don't have to wait for it. You can apply the DP instruction to a large batch of images, and as you click through them you can immediately use the Loupe tool to preview the effect. You don't have to wait, because the actual full-image processing doesn't happen until export, and the Loupe preview is generated on the fly.
It's still a loupe view, compared to LrC where you can see the entire image in real time.
No kidding. I said as much.
That could explain why DXO does the DeepPRIME denoise so fast
It doesn't "do" DeepPRIME when you choose it. All edits to images with DxO are simply instructions that aren't actually executed until export, which is why the entire process is non-destructive, IOW you can change edits at any time. The processing happens on export, which is why exporting images with DP applied takes longer than without it.
With PL you still have to wait for it to export.
No kidding. I thought we had both already agreed that with DxO you wait at the end, and with Adobe you wait at the beginning.
I've tried to make this point many times with other posters, and they never seemed to get it. Thanks for this, and it doesn't matter if it is good or bad for either developer. It's just a fact.

Some like to wait for it to export. I prefer LrC, which applies Denoise during development, letting me see the entire file at all times. I prefer the instant LrC exports.
Since we’re now repeating things that had already been clarified, I’ll follow your lead by repeating that DxO takes half as long.
and the export so slow. Not taking sides, just being Curious George.
On even modest Macs with Apple Silicon processors, DeepPRIME 3 is pretty quick at 5-15 seconds, and even the more demanding max-detail DeepPRIME XD2s is about as fast as Denoise, at 10-30 seconds.
Frankly, I suspect both products are fast enough to satisfy most people.
Given that "most people" are not event pros, that seems right. But, for those of us who have to produce large batches of finished images on tight deadlines (sports, event and wedding shooters), the 2x speed advantage of DxO is hard to ignore. In many cases, I can deliver on the same night rather than processing overnight and delivering the next day. Also, when I'm processing onsite between shooting sessions for same-day delivery at a conference I'm covering, 2x can be the difference between NR and no NR.

OTOH, I've found one workflow in which Denoise isn't slower and brings an extra benefit: it works on my Sony 26MP mRAW files about as fast as DeepPRIME 3 runs on my Sony 61MP RAW files from my a7RV and a7CR. DxO doesn't work with mRAW. I actually prefer 24MP-33MP files for event work, so the ability to shoot mRAW and still have access to advanced NR is an advantage of the new Denoise. OTOH, I may have to give up some other valuable DxO tools (Lens Sharpness, Distortion, Smart Lighting), so I'm on the fence about which workflow to choose. It may come down to the specific requirements of a given job.
 
Since we’re now repeating things that had already been clarified, I’ll follow your lead by repeating that DxO takes half as long.
I am curious as to how you determine that, if DXO actually does the denoise as part of the export process. Are you comparing the time to generate the small preview to Lightroom's time to do the entire finished image? That is probably fair, as long as the preview window shows a representative portion of your image.

Do you have the current version of both products on your computer to get a valid comparison? I don't. My only hands-on knowledge of DXO is many years out of date.

--
George
.
Feel free to retouch any photograph I post in these forums. They probably need it. :)
 
The short answer is no, I don't; they would've already done it by now.

My opinion? They should; wasted silicon otherwise.

The trouble? Adobe needs folks who can write MSL, which is C++-based. That's why this hasn't happened and probably never will.
I thought this was an Apple Neural Engine issue that affected not just Adobe. Last I read, DXO found a workaround but never really solved the colour-shifting problem itself. Perhaps it's easier to solve a colour shift with a temporary fix than a deep-shadow noise quality issue. Perhaps Adobe wants Apple to fix the actual issue instead of using a band-aid. I'm not sure if anyone has all the answers, because most companies can be tight-lipped about these things.

It was enabled for about 3 months and worked great, IMO. When Adobe disabled it, there were discussions at the Adobe Community forums. Some said they did not see any issues and wanted a user control to enable/disable it. So it did work, and it was fast, except for one reported quality issue.

As we know, Adobe is not a one-app company, and no one knows what their priorities are. They aren't sitting around. Maybe they contracted someone with experience in the code you mentioned, and they could not solve it either. I don't know if that happened, but companies do that all of the time.

I still have confidence it will be enabled. I just got a new Mac Mini M4 based on that happening.
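FWIW, the enable/disable switch doesn't have to be exotic. In Core ML, the app chooses which compute units a model may run on, so steering a model away from the Neural Engine can be a one-line configuration change. A sketch only, with a hypothetical model name, not Adobe's code:

    import CoreML

    // Hypothetical compiled Core ML model standing in for a denoise network.
    let modelURL = URL(fileURLWithPath: "Denoise.mlmodelc")

    let config = MLModelConfiguration()
    // .all lets Core ML schedule work across CPU, GPU, and the Neural Engine (ANE).
    // An app that hits an ANE-specific bug can restrict itself to CPU + GPU,
    // which is one plausible mechanism behind "disabling" ANE support.
    config.computeUnits = .cpuAndGPU       // instead of .all or .cpuAndNeuralEngine

    let model = try! MLModel(contentsOf: modelURL, configuration: config)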
 
No, it doesn’t. There’s twice as much waiting with Denoise.
I haven't played with DXO for many years. When I tested it, it didn't actually do the Denoise until you exported the picture, so you could not actually see what the output was going to look like. Has that changed?
PhotoLab 8 has a larger Loupe tool that you can move around the image to preview the effect of DP on part of the image. This is obviously not as nice as Denoise's full-image preview, but, OTOH, you don't have to wait for it. You can apply the DP instruction to a large batch of images, and as you click through them you can immediately use the Loupe tool to preview the effect. You don't have to wait, because the actual full-image processing doesn't happen until export, and the Loupe preview is generated on the fly.
It's still a loupe view, compared to LrC where you can see the entire image in real time.
No kidding. I said as much.
That could explain why DXO does the DeepPRIME denoise so fast
It doesn't "do" DeepPRIME when you choose it. All edits to images with DxO are simply instructions that aren't actually executed until export, which is why the entire process is non-destructive, IOW you can change edits at any time. The processing happens on export, which is why exporting images with DP applied takes longer than without it.
With PL you still have to wait for it to export.
No kidding. I thought we had both already agreed that with DxO you wait at the end, and with Adobe you wait at the beginning.
I was just thanking you. You are the first to actually say that in any conversations I've had with others.
I've tried to make this point many times with other posters, and they never seemed to get it. Thanks for this, and it doesn't matter if it is good or bad for either developer. It's just a fact.

Some like to wait for it to export. I prefer LrC, which applies Denoise during development, letting me see the entire file at all times. I prefer the instant LrC exports.
Since we’re now repeating things that had already been clarified, I’ll follow your lead by repeating that DxO takes half as long.
and the export so slow. Not taking sides, just being Curious George.
On even modest Macs with Apple Silicon processors, DeepPRIME 3 is pretty quick at 5-15 seconds, and even the more demanding max-detail DeepPRIME XD2s is about as fast as Denoise, at 10-30 seconds.
Frankly, I suspect both products are fast enough to satisfy most people.
Given that "most people" are not event pros, that seems right. But, for those of us who have to produce large batches of finished images on tight deadlines (sports, event and wedding shooters), the 2x speed advantage of DxO is hard to ignore. In many cases, I can deliver on the same night rather than processing overnight and delivering the next day. Also, when I'm processing onsite between shooting sessions for same-day delivery at a conference I'm covering, 2x can be the difference between NR and no NR.

OTOH, I've found one workflow in which Denoise isn't slower and brings an extra benefit: it works on my Sony 26MP mRAW files about as fast as DeepPRIME 3 runs on my Sony 61MP RAW files from my a7RV and a7CR. DxO doesn't work with mRAW. I actually prefer 24MP-33MP files for event work, so the ability to shoot mRAW and still have access to advanced NR is an advantage of the new Denoise. OTOH, I may have to give up some other valuable DxO tools (Lens Sharpness, Distortion, Smart Lighting), so I'm on the fence about which workflow to choose. It may come down to the specific requirements of a given job.
 
The short answer is no, I don't; they would've already done it by now.

My opinion? They should; wasted silicon otherwise.

The trouble? Adobe needs folks who can write MSL, which is C++-based. That's why this hasn't happened and probably never will.
I thought this was an Apple Neural Engine issue that affected not just Adobe. Last I read, DXO found a workaround but never really solved the colour-shifting problem itself. Perhaps it's easier to solve a colour shift with a temporary fix than a deep-shadow noise quality issue. Perhaps Adobe wants Apple to fix the actual issue instead of using a band-aid. I'm not sure if anyone has all the answers, because most companies can be tight-lipped about these things.

It was enabled for about 3 months and worked great, IMO. When Adobe disabled it, there were discussions at the Adobe Community forums. Some said they did not see any issues and wanted a user control to enable/disable it. So it did work, and it was fast, except for one reported quality issue.

As we know, Adobe is not a one-app company, and no one knows what their priorities are. They aren't sitting around. Maybe they contracted someone with experience in the code you mentioned, and they could not solve it either. I don't know if that happened, but companies do that all of the time.
I still have confidence it will be enabled. I just got a new Mac Mini M4 based on that happening.
I kinda base this on the Canon R5. When it was released, it took Adobe a year to add Canon colour profiles. How could something take a year for that? Quite a few blamed it on Adobe's laziness. Whatever works for some. I don't have an answer for that either, but if someone could prove it was laziness, or that my theory was right, I'd be happy. Possible legal issues? You know what happens when corporate giants' lawyers get together.

I only say that because one day the R5 got the profiles. Adobe then added R3 RAW support and colour profiles the day the camera was released. RAW support and colour profiles were added 6 weeks before the R1 and R5II were released. I downloaded the R5II RAW files from this site long before the camera started shipping just to mess around.

Until there is actual proof of what really happened with the R colour profiles, or what is happening with the NE, it's just guessing. Hope I guessed correctly with my Mac Mini. :-D
 
I LOVE the new Denoise. I am not as concerned about speed in Denoise as I am about the results. Adobe's Denoise has always given me natural-looking images with no artifacts or plastic look. The only reason I keep Topaz now is for non-raw files.
 
I LOVE the new Denoise. I am not as concerned about speed in Denoise as I am about the results. Adobe's Denoise has always given me natural-looking images with no artifacts or plastic look. The only reason I keep Topaz now is for non-raw files.
Agree. I see LR Denoise as the tool to reduce noise and gain sharpness without ever overdoing it (artifacts)!
Most reviewers comparing Adobe to others stop after the denoising is complete. You can do so much more with the texture, clarity, and sharpening modules, including the detail and masking sliders.
 
Since we’re now repeating things that had already been clarified, I’ll follow your lead by repeating that DxO takes half as long.
I am curious as to how you determine that, if DXO actually does the denoise as part of the export process.
Export times are longer with DP applied than without. Also, you can change NR parameters at any point, and there's no waiting for reprocessing, because processing doesn't happen until export.
Are you comparing the time to generate the small preview to Lightroom's time to do the entire finished image? That is probably fair, as long as the preview window shows a representative portion of your image.

Do you have the current version of both products on your computer to get a valid comparison? I don't. My only hands-on knowledge of DXO is many years out of date.
I do. I will try to make time to run a batch of images through both apps and report timing, as these apps have been updated since the last time I did this.
 
The short answer is no, I don't; they would've already done it by now.

My opinion? They should; wasted silicon otherwise.

The trouble? Adobe needs folks who can write MSL, which is C++-based. That's why this hasn't happened and probably never will.
I thought this was an Apple Neural Engine issue that affected not just Adobe. Last I read, DXO found a workaround but never really solved the colour-shifting problem itself. Perhaps it's easier to solve a colour shift with a temporary fix than a deep-shadow noise quality issue. Perhaps Adobe wants Apple to fix the actual issue instead of using a band-aid. I'm not sure if anyone has all the answers, because most companies can be tight-lipped about these things.

It was enabled for about 3 months and worked great, IMO. When Adobe disabled it, there were discussions at the Adobe Community forums. Some said they did not see any issues and wanted a user control to enable/disable it. So it did work, and it was fast, except for one reported quality issue.

As we know, Adobe is not a one-app company, and no one knows what their priorities are. They aren't sitting around. Maybe they contracted someone with experience in the code you mentioned, and they could not solve it either. I don't know if that happened, but companies do that all of the time.
I still have confidence it will be enabled. I just got a new Mac Mini M4 based on that happening.
It's possible Adobe has changed their ways, absolutely.

I will say, as I've gotten older, I've learned that people, companies, and organizations don't change their ways; they just get older too. The exception being a change of leadership, from the top.
 
The short answer is no, I don't; they would've already done it by now.

My opinion? They should; wasted silicon otherwise.

The trouble? Adobe needs folks who can write MSL, which is C++-based. That's why this hasn't happened and probably never will.
I thought this was an Apple Neural Engine issue that affected not just Adobe. Last I read, DXO found a workaround but never really solved the colour-shifting problem itself. Perhaps it's easier to solve a colour shift with a temporary fix than a deep-shadow noise quality issue. Perhaps Adobe wants Apple to fix the actual issue instead of using a band-aid. I'm not sure if anyone has all the answers, because most companies can be tight-lipped about these things.

It was enabled for about 3 months and worked great, IMO. When Adobe disabled it, there were discussions at the Adobe Community forums. Some said they did not see any issues and wanted a user control to enable/disable it. So it did work, and it was fast, except for one reported quality issue.

As we know, Adobe is not a one-app company, and no one knows what their priorities are. They aren't sitting around. Maybe they contracted someone with experience in the code you mentioned, and they could not solve it either. I don't know if that happened, but companies do that all of the time.
I still have confidence it will be enabled. I just got a new Mac Mini M4 based on that happening.
I kinda base this on the Canon R5. When it was released, it took Adobe a year to add Canon colour profiles. How could something take a year for that? Quite a few blamed it on Adobe's laziness. Whatever works for some. I don't have an answer for that either, but if someone could prove it was laziness, or that my theory was right, I'd be happy. Possible legal issues? You know what happens when corporate giants' lawyers get together.

I only say that because one day the R5 got the profiles. Adobe then added R3 RAW support and colour profiles the day the camera was released. RAW support and colour profiles were added 6 weeks before the R1 and R5II were released. I downloaded the R5II RAW files from this site long before the camera started shipping just to mess around.

Until there is actual proof of what really happened with the R colour profiles, or what is happening with the NE, it's just guessing. Hope I guessed correctly with my Mac Mini. :-D
Adobe took a very long time to add color profiles for non-legacy sensor holdouts (EOS R, RP) to CR3, and they're still half-baked.

The advent of CR3 coincided with both the termination of the partnership with Canon and the great reset of the Bay Area's workforce, when the quiet non-compete labor agreement between the major tech companies in the valley was dissolved. Shortly after? Adobe introduced the whole cloud-based licensing model, too.

Seems with every company, when they get too big for their britches, this sort of thing happens (raising prices, cutting R&D/support).
 
The short answer is no, I don't; they would've already done it by now.

My opinion? They should; wasted silicon otherwise.

The trouble? Adobe needs folks who can write MSL, which is C++-based. That's why this hasn't happened and probably never will.
I thought this was an Apple Neural Engine issue that affected not just Adobe. Last I read, DXO found a workaround but never really solved the colour-shifting problem itself. Perhaps it's easier to solve a colour shift with a temporary fix than a deep-shadow noise quality issue. Perhaps Adobe wants Apple to fix the actual issue instead of using a band-aid. I'm not sure if anyone has all the answers, because most companies can be tight-lipped about these things.

It was enabled for about 3 months and worked great, IMO. When Adobe disabled it, there were discussions at the Adobe Community forums. Some said they did not see any issues and wanted a user control to enable/disable it. So it did work, and it was fast, except for one reported quality issue.

As we know, Adobe is not a one-app company, and no one knows what their priorities are. They aren't sitting around. Maybe they contracted someone with experience in the code you mentioned, and they could not solve it either. I don't know if that happened, but companies do that all of the time.
I still have confidence it will be enabled. I just got a new Mac Mini M4 based on that happening.
It's possible Adobe has changed their ways, absolutely.
I don't know if Adobe is lazy at times, or dealing with things out of their control like legal issues, or anything in between. Until anyone can prove anything, it's just guessing. I had read Adobe was waiting for Canon to provide the new CR3 camera colour info. Who knows, but a year is a very long time, and then all of a sudden RAW support and camera colour were coming out before cameras even shipped. Again, just guessing on my part, but there had to be something going on. Typically, Adobe provided RAW support for new bodies about 6 weeks after release. So did C1 Pro and DXO.
I will say, as I've gotten older, I've learned that people, companies, and organizations don't change their ways; they just get older too. The exception being a change of leadership, from the top.
If you don't adapt, you are done. I can provide all types of examples. The closest one we can relate to is Kodak, which invented the digital camera.

At one time, DXO did not update PL with the latest PureRAW goodies in the spring version upgrade. You had to wait until fall for the next version of PL and pay for it. The last few years, DXO has been updating PL with the new PureRAW tools. I'm sure they would have preferred to wait until fall, but they may have had some upset customers.
 
The short answer is no, I don't; they would've already done it by now.

My opinion? They should; wasted silicon otherwise.

The trouble? Adobe needs folks who can write MSL, which is C++-based. That's why this hasn't happened and probably never will.
I thought this was an Apple Neural Engine issue that affected not just Adobe. Last I read, DXO found a workaround but never really solved the colour-shifting problem itself. Perhaps it's easier to solve a colour shift with a temporary fix than a deep-shadow noise quality issue. Perhaps Adobe wants Apple to fix the actual issue instead of using a band-aid. I'm not sure if anyone has all the answers, because most companies can be tight-lipped about these things.

It was enabled for about 3 months and worked great, IMO. When Adobe disabled it, there were discussions at the Adobe Community forums. Some said they did not see any issues and wanted a user control to enable/disable it. So it did work, and it was fast, except for one reported quality issue.

As we know, Adobe is not a one-app company, and no one knows what their priorities are. They aren't sitting around. Maybe they contracted someone with experience in the code you mentioned, and they could not solve it either. I don't know if that happened, but companies do that all of the time.
I still have confidence it will be enabled. I just got a new Mac Mini M4 based on that happening.
I kinda base this on the Canon R5. When it was released, it took Adobe a year to add Canon colour profiles. How could something take a year for that? Quite a few blamed it on Adobe's laziness. Whatever works for some. I don't have an answer for that either, but if someone could prove it was laziness, or that my theory was right, I'd be happy. Possible legal issues? You know what happens when corporate giants' lawyers get together.

I only say that because one day the R5 got the profiles. Adobe then added R3 RAW support and colour profiles the day the camera was released. RAW support and colour profiles were added 6 weeks before the R1 and R5II were released. I downloaded the R5II RAW files from this site long before the camera started shipping just to mess around.

Until there is actual proof of what really happened with the R colour profiles, or what is happening with the NE, it's just guessing. Hope I guessed correctly with my Mac Mini. :-D
Adobe took a very long time to add color profiles for non-legacy sensor holdouts (EOS R, RP) to CR3, and they're still half-baked.
Sorry, I responded to this in another post. I'm having a hectic morning. I know. I had an R5. But then I downloaded R5II RAW files with camera colour profiles 6 weeks before they started shipping. I eventually adopted Adobe Color, and I now use the Adobe Adaptive Color profile, which is AI-based.
The advent of CR3 coincided with both the termination of the partnership with Canon and the great reset of the Bay Area's workforce, when the quiet non-compete labor agreement between the major tech companies in the valley was dissolved. Shortly after? Adobe introduced the whole cloud-based licensing model, too.
Ahhh. Interesting. Thanks for the info. I knew it had to be more than Adobe just not getting around to it.
Seems with every company, when they get too big for their britches, this sort of thing happens (raising prices, cutting R&D/support).
Luckily, people who were customers before Jan 15 were grandfathered in at the 2018 price for the Photo Plan. I'll still be paying the same amount as I did. Not so great for new customers, but if someone does not need PS, they made it still reasonable to get LrC.

--
Fail Forward
 
I just read Adobe is nearing their two-month update cycle, so LrC 14.5 should be out soon. We'll see what we get.
 
I thought this was an Apple Neural Engine issue that affected not just Adobe. Last I read, DXO found a workaround but never really solved the colour-shifting problem itself. Perhaps it's easier to solve a colour shift with a temporary fix than a deep-shadow noise quality issue. Perhaps Adobe wants Apple to fix the actual issue instead of using a band-aid. I'm not sure if anyone has all the answers, because most companies can be tight-lipped about these things.
I, too, thought this was an Apple bug in the Neural Engine itself. The last I heard, none of the big three were using it. I had not heard that DXO had found a workaround. I would not be in too much of a hurry to criticize the Adobe programmers.
 
No, it doesn’t. There’s twice as much waiting with Denoise.
I haven't played with DXO for many years. When I tested it, it didn't actually do the Denoise until you exported the picture, so you could not actually see what the output was going to look like. Has that changed?
PhotoLab 8 has a larger Loupe tool that you can move around the image to preview the effect of DP on part of the image. This is obviously not as nice as Denoise's full-image preview, but, OTOH, you don't have to wait for it. You can apply the DP instruction to a large batch of images, and as you click through them you can immediately use the Loupe tool to preview the effect. You don't have to wait, because the actual full-image processing doesn't happen until export, and the Loupe preview is generated on the fly.
It's still a loupe view, compared to LrC where you can see the entire image in real time.
No kidding. I said as much.
That could explain why DXO does the DeepPRIME denoise so fast
It doesn't "do" DeepPRIME when you choose it. All edits to images with DxO are simply instructions that aren't actually executed until export, which is why the entire process is non-destructive, IOW you can change edits at any time. The processing happens on export, which is why exporting images with DP applied takes longer than without it.
With PL you still have to wait for it to export.
No kidding. I thought we had both already agreed that with DxO you wait at the end, and with Adobe you wait at the beginning.
I was just thanking you.
Ah, well, that's nice to hear. I misinterpreted your thanks as something else.
You are the first to actually say that in any conversations I've had with others.
You were the first to note that the processing happens at different stages in each app's workflow, and that's a useful clarification.

Seems we're on the same page now.
I've tried to make this point many times with other posters, and they never seemed to get it. Thanks for this, and it doesn't matter if it is good or bad for either developer. It's just a fact.

Some like to wait for it to export. I prefer LrC, which applies Denoise during development, letting me see the entire file at all times. I prefer the instant LrC exports.
Since we’re now repeating things that had already been clarified, I’ll follow your lead by repeating that DxO takes half as long.
and the export so slow. Not taking sides, just being Curious George.
On even modest Macs with Apple Silicon processors, DeepPRIME 3 is pretty quick at 5-15 seconds, and even the more demanding max-detail DeepPRIME XD2s is about as fast as Denoise, at 10-30 seconds.
Frankly, I suspect both products are fast enough to satisfy most people.
Given that "most people" are not event pros, that seems right. But, for those of us who have to produce large batches of finished images on tight deadlines (sports, event and wedding shooters), the 2x speed advantage of DxO is hard to ignore. In many cases, I can deliver on the same night rather than processing overnight and delivering the next day. Also, when I'm processing onsite between shooting sessions for same-day delivery at a conference I'm covering, 2x can be the difference between NR and no NR.

OTOH, I've found one workflow in which Denoise isn't slower and brings an extra benefit: it works on my Sony 26MP mRAW files about as fast as DeepPRIME 3 runs on my Sony 61MP RAW files from my a7RV and a7CR. DxO doesn't work with mRAW. I actually prefer 24MP-33MP files for event work, so the ability to shoot mRAW and still have access to advanced NR is an advantage of the new Denoise. OTOH, I may have to give up some other valuable DxO tools (Lens Sharpness, Distortion, Smart Lighting), so I'm on the fence about which workflow to choose. It may come down to the specific requirements of a given job.
 
I thought this was an Apple Neural Engine issue that affected not just Adobe. Last I read, DXO found a workaround but never really solved the colour-shifting problem itself. Perhaps it's easier to solve a colour shift with a temporary fix than a deep-shadow noise quality issue. Perhaps Adobe wants Apple to fix the actual issue instead of using a band-aid. I'm not sure if anyone has all the answers, because most companies can be tight-lipped about these things.
I, too, thought this was an Apple bug in the Neural Engine itself. The last I heard, none of the big three were using it. I had not heard that DXO had found a workaround. I would not be in too much of a hurry to criticize the Adobe programmers.
My thoughts exactly. Nothing wrong with a workaround, but it is still a band-aid covering an underlying problem that is not the fault of the big three.

When I once complained about how DXO was slow in getting RAW updates out for some new bodies, the DXO crew gave me heck. :-D They told me DXO will release things when they are good and ready. I saved that card, and I'm playing it here. :-)
 
No, it doesn’t. There’s twice as much waiting with Denoise.
I haven't played with DXO for many years. When I tested it, it didn't actually do the Denoise until you exported the picture, so you could not actually see what the output was going to look like. Has that changed?
PhotoLab 8 has a larger Loupe tool that you can move around the image to preview the effect of DP on part of the image. This is obviously not as nice as Denoise's full-image preview, but, OTOH, you don't have to wait for it. You can apply the DP instruction to a large batch of images, and as you click through them you can immediately use the Loupe tool to preview the effect. You don't have to wait, because the actual full-image processing doesn't happen until export, and the Loupe preview is generated on the fly.
It's still a loupe view, compared to LrC where you can see the entire image in real time.
No kidding. I said as much.
That could explain why DXO does the DeepPRIME denoise so fast
It doesn't "do" DeepPRIME when you choose it. All edits to images with DxO are simply instructions that aren't actually executed until export, which is why the entire process is non-destructive, IOW you can change edits at any time. The processing happens on export, which is why exporting images with DP applied takes longer than without it.
With PL you still have to wait for it to export.
No kidding. I thought we had both already agreed that with DxO you wait at the end, and with Adobe you wait at the beginning.
I was just thanking you.
Ah, well, that's nice to hear. I misinterpreted your thanks as something else.
I didn't exactly say thanks. I was still in a bit of a fencing mode with you, which I enjoy. :-D
You are the first to actually say that in any conversations I've had with others.
You were the first to note that the processing happens at different stages in each app's workflow, and that's a useful clarification.

Seems we're on the same page now.
Yep. Well, we are, and you are one of the first to acknowledge it. When I tried to discuss this with others, they just didn't get it. Couldn't get past the micro and look at the macro, I guess. It all matters when it comes to time.
I've tried to make this point many times with other posters, and they never seemed to get it. Thanks for this, and it doesn't matter if it is good or bad for either developer. It's just a fact.

Some like to wait for it to export. I prefer LrC, which applies Denoise during development, letting me see the entire file at all times. I prefer the instant LrC exports.
Since we’re now repeating things that had already been clarified, I’ll follow your lead by repeating that DxO takes half as long.
and the export so slow. Not taking sides, just being Curious George.
On even modest Macs with Apple Silicon processors, DeepPRIME 3 is pretty quick at 5-15 seconds, and even the more demanding max-detail DeepPRIME XD2s is about as fast as Denoise, at 10-30 seconds.
Frankly, I suspect both products are fast enough to satisfy most people.
Given that "most people" are not event pros, that seems right. But, for those of us who have to produce large batches of finished images on tight deadlines (sports, event and wedding shooters), the 2x speed advantage of DxO is hard to ignore. In many cases, I can deliver on the same night rather than processing overnight and delivering the next day. Also, when I'm processing onsite between shooting sessions for same-day delivery at a conference I'm covering, 2x can be the difference between NR and no NR.

OTOH, I've found one workflow in which Denoise isn't slower and brings an extra benefit: it works on my Sony 26MP mRAW files about as fast as DeepPRIME 3 runs on my Sony 61MP RAW files from my a7RV and a7CR. DxO doesn't work with mRAW. I actually prefer 24MP-33MP files for event work, so the ability to shoot mRAW and still have access to advanced NR is an advantage of the new Denoise. OTOH, I may have to give up some other valuable DxO tools (Lens Sharpness, Distortion, Smart Lighting), so I'm on the fence about which workflow to choose. It may come down to the specific requirements of a given job.
 
