Performance tests on PhotoLab 9

Digital Nigel

With previous releases of PhotoLab, unless you used the new features, performance didn't usually change much compared to the previous version, and sometimes even improved (e.g., DeepPRIME sometimes sped up). I had assumed that PL9 would be the same, but noticed that it seemed to be a bit slower than PL8, both to start up and to export. So I decided to do some proper tests.

I assembled a set of 30 raw images, all shot by me, using five of my Sony cameras: fifteen 20mp images (from two 1" sensor cameras), five 24mp (APS-C), five 42mp (FF) and five 61mp (FF). The average was 31mp, which seems fairly representative. For most of the tests, I ran a batch of all 30 images, though for the last test, I only ran six.
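For anyone checking the arithmetic, the average image size of that test set works out like this (a quick sketch; the breakdown of counts and sizes is taken from the description above):

```python
# Test-set composition: (number of images, megapixels per image)
counts_and_mp = [(15, 20), (5, 24), (5, 42), (5, 61)]

total_images = sum(n for n, _ in counts_and_mp)    # 30 images
total_mp = sum(n * mp for n, mp in counts_and_mp)  # 935 megapixels in total
average_mp = total_mp / total_images

print(f"{total_images} images, {total_mp} mp total, average {average_mp:.1f} mp")
# 30 images, 935 mp total, average 31.2 mp
```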

With both PL8.9 and PL9.1, I followed the recommendation of allowing a maximum of two simultaneously processed images. The two versions were installed in different folders on an SSD, while the images were on an internal HDD. I ran only one version at a time, so they weren't stealing resources from each other. In both cases, I processed a couple of images after starting up PL, before running the timed benchmark. This is because I've learned that the first image exports more slowly than subsequent images, presumably because of the time required to load the PL export code into RAM.

The machine has an AMD Ryzen Threadripper 1920X 12-core processor with 32GB RAM. The GPU is an NVIDIA GeForce RTX 4060 with 8GB of VRAM. I notice that Windows assigns 8GB of the main RAM as additional GPU memory, so there's a total of 16GB shared GPU memory.

The first test had no local adjustments. I ran it with both DeepPRIME 3 and XD2s.

I then added some of the more intelligent non-AI local adjustments in PL8, including a graduated filter, a control line with a subtractive control point, and a multi-circle control point. These masks take some computation, based on image content. I applied the same adjustments to all 30 images. With PL9, they needed to be converted to the new engine format. I couldn't find a way to do this conversion for a whole folder, so I had to open each image in turn and agree to the conversion. It's easy enough, but tedious.

Finally, for six of the images, I applied some of the AI masks in PL9; I had to limit myself to the ones that work on my machine. This had to be done image by image, which is why I only did six. In this case, I only used DeepPRIME 3.

In all cases, I have reported performance using the exported mp/sec measure; obviously, the higher the number, the faster the export. These are the rather shocking results I found:
Chart.jpg
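To be explicit about the metric: mp/sec is just the total megapixels exported divided by the elapsed export time. A minimal sketch (the 935 mp batch total matches my test set above; the 300 s timing is purely hypothetical, for illustration):

```python
def export_throughput(total_megapixels: float, elapsed_seconds: float) -> float:
    """Throughput in exported megapixels per second (higher = faster)."""
    return total_megapixels / elapsed_seconds

# Hypothetical example: the full 30-image batch (935 mp) finishing in 300 s.
print(round(export_throughput(935, 300), 1))  # 3.1 mp/sec
```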
So, the results are that with straightforward, day-to-day images that need no local adjustments and for which DP3 is ideal, PL9 runs at only 46% of the speed of PL8! This was a real shock — I wasn't expecting more than a 10% difference. This means that even if I buy PL9, I'll continue to use PL8 for most images. The difference narrowed with XD2s, with PL9's speed only dropping by 39%, rather than 54%. In particular, PL9 suffers very little performance degradation when using XD2s compared to DP3. Overall, with a typical mix of DP3 and XD2s images, PL8 is about twice as fast as PL9.

I then applied the non-AI masks, and repeated the tests. This significantly degraded the PL8 speed, but not PL9. So PL8 was still significantly faster, but with a narrowing lead. Overall, PL8 is about a third faster if you have masks.

Finally, I tested the effect of using AI masks in PL9. Unfortunately, the export process fails if I use the complex object-recognition masks, so I had to limit myself to the click-on-a-shape sub-masks. These were slower still, but only by about 20%.

It's hard to measure, but PL9 also seems much less responsive than PL8. It often pauses briefly before jerking into action again. Sometimes the cursor freezes for a few seconds, so you can't even switch to other apps. But I've not encountered any machine crashes.

So, the lesson is that PL9 is much slower than PL8, whatever you do. But the relative degradation is lowest with the most complex tasks. It seems that PL9 has been re-engineered to be optimised for complex edits and NR, but is much less efficient with straightforward images.

Separately, I've found that PL9 (including 9.2) is much less stable than PL8. It crashes much more often, fails to export many images that use the new AI masks, and often gives the catch-all Internal error message even when you're not using the AI masks. This may be connected with my GPU (I'm using the latest NVIDIA studio driver that's supposed to fix the problems), or may also happen with other hardware.

These tests were all on my Windows 10 machine which has what probably counts as a low-to-middling spec by modern standards. I plan to replace it next year, with a much faster model. I can't say if other Windows machines will suffer similar reliability and performance problems (perhaps less so if they have more VRAM?), and I have no idea if any of these problems exist with the macOS version (but suspect it's better). The YouTube reviewers mainly seem to use Macs, and they don't mention these problems, so the Mac version may well be much better, or at least better tested.

My advice to anyone is to take full advantage of the 30-day free trial before buying. Perhaps it'll work better on their hardware than mine, but probably not if it's only a PC laptop. Try it with some complex AI edits, including pre-defined masks like Sky or People. Don't just edit the images, but make sure you always export the edited image. You might be surprised how often it fails to recognise the sky, or people, or animals. Basically, only use this category of mask as a last resort, and prepare to be let down.

If anyone is interested, I can make available an image that won't export from my PC, but might from their higher-specified PC or Mac.
 
Wow, I have to say that most new software can have issues, but your testing is not complimentary and upgrading is very hard to justify. 😔

I've been looking on the DXO forums etc but can't find much response/feedback from DXO as to what the heck is going on. Have you seen anything?

Ian
 
I had the trial version of PL9 when I was still using PL8 on my old Windows 10 PC (2016) with an "upgraded" NVIDIA RTX 2060. When I started using the AI masking, export (DeepPRIME) took well over 30 sec per image. The real-time AI masking functions also often came with delays, and there were many crashes during export.

So I decided not to go for the Windows security update services and bought a completely new PC with an Intel i10 Ultra processor and an NVIDIA RTX 5080 video card. Well, that worked!

Using DeepPrime XD2S: Average 1.1 sec/ 20 MP image (RX10iv); 3.3 sec/ 60 MP image (A7Rv).
Note: in PL9's performance settings I chose the option emphasising the GPU (instead of Automatic), which turned out to be a bit faster.
In the beginning there were some crashes during export, but in recent weeks no more problems.
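Converting those per-image times into the mp/sec measure used in the chart above (a quick check, assuming the stated sensor sizes of 20mp and 60mp):

```python
# Per-image export time -> throughput in megapixels per second.
def mp_per_sec(image_mp: float, seconds_per_image: float) -> float:
    return image_mp / seconds_per_image

print(round(mp_per_sec(20, 1.1), 1))  # RX10iv: 18.2 mp/sec
print(round(mp_per_sec(60, 3.3), 1))  # A7Rv:   18.2 mp/sec
```

Interestingly, the two figures come out essentially identical, which suggests export time on that machine scales roughly linearly with image size.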
 
Thanks for doing these, they are interesting results for sure.

I have an M4 Max Mac Studio and both versions of PL - if I get some time I will try some (likely less scientific) comparisons.
 
Digital Nigel's test results for PL 9 are certainly very disappointing...and quite surprising in that one would have expected to see some processing speed ups. Here I am with PL7E, waiting for BF to upgrade to PL9. But, I'm now going to be very hesitant about that upgrade, as I am sure many others will be, also. I'm wondering if it is worthwhile to ask DxO, directly, what they think of Nigel's results? I have found that, most of the time, their help has been rather good, and would hope that they would give us a reasonable response to the significant slowdown that was observed for an "upgraded" product.
 
Haven't tried v9 yet but I notice they've just released v8.10 . Downloading that now and I will be trying v9.x in the next few days ...
 
Nice work. Thanks. 8 feels snappy overall compared to 9 (including 9.2), but the magnitude of the difference is surprising. In keeping with this, presets in 8 load almost instantly; I have yet to get all the presets in 9 to load on my M1 Max. One person on the DXO forums hypothesized the performance hits were from accessing new AI-associated libraries. Not being a programmer, I cannot comment on the merits of this. Nevertheless, hope DXO can sort this soon.
 
One person on the DXO forums hypothesized the performance hits were from accessing new AI-associated libraries.
Those have certainly become massive in terms of required storage space for the app. Something like 10 times the requirement for v8 and earlier.
 
Digital Nigel's test results for PL 9 are certainly very disappointing...and quite surprising in that one would have expected to see some processing speed ups. Here I am with PL7E, waiting for BF to upgrade to PL9. But, I'm now going to be very hesitant about that upgrade, as I am sure many others will be, also. I'm wondering if it is worthwhile to ask DxO, directly, what they think of Nigel's results? I have found that, most of the time, their help has been rather good, and would hope that they would give us a reasonable response to the significant slowdown that was observed for an "upgraded" product.
What really surprised and disappointed me was the halving of export performance, even when you have no local adjustments at all. It means that even if I (reluctantly) buy PL9 in the BF sale, I’ll only use it for the few images that really need the AI masking. For everything else, I’ll keep using PL8, as it starts up quicker, is much more responsive, is very reliable, and exports about twice as quickly.
 
Well, to tell you the truth, guys, my curiosity about how DxO would respond to DN's testing efforts has gotten the better of me, and I have forwarded a response request to DxO through their help support network. I am quite sure that they will respond rather quickly.
 
Well, to tell you the truth, guys, my curiosity about how DxO would respond to DN's testing efforts has gotten the better of me, and I have forwarded a response request to DxO through their help support network. I am quite sure that they will respond rather quickly.
I'll be very interested to hear their response! I've not actually contacted them about my findings, as I'm more bothered by the fact that V9 fails to export so often.
 
Hi Nigel, I think you should send the logs of PL9's failed exports to DxO for troubleshooting. I am not sure if it is related to VRAM or other configuration settings, as I have not experienced the failed exports myself. However, I can confirm that with 20 AI masks, PL9 slows to an unbearable crawl, as it uses the CPU for AI mask inference.
 
I'll be very interested to hear their response! I've not actually contacted them about my findings, as I'm more bothered by the fact that V9 fails to export so often.
Just opened a support ticket with DxO. Gave them this site to ponder and asked them to reply to your findings. As there is a big time difference between us and them, I don't expect any replies before tomorrow (i.e., if they hop on it). Aside from asking them to explain the slowdown, I also specifically asked if they intend (or are able) to remedy it quickly.
 