Digital Nigel
With previous releases of PhotoLab, performance didn't usually change much from one version to the next unless you used the new features, and sometimes it even improved (e.g. DeepPRIME sometimes sped up). I had assumed PL9 would be the same, but noticed that it seemed a bit slower than PL8, both to start up and to export. So I decided to do some proper tests.
I assembled a set of 30 raw images, all shot by me, using five of my Sony cameras. These consisted of fifteen 20mp images (from two 1" sensor cameras), five 24mp (APS-C), five 42mp (FF) and five 61mp (FF). The average was 31mp, which seems fairly representative. For most of the tests, I ran a batch of all 30 images, though for the last test, I only ran six.
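As a sanity check on that average (my own arithmetic, nothing to do with PhotoLab itself), the stated mix works out like this:

```python
# Test-set mix as described above: megapixels -> number of images.
mix = {20: 15, 24: 5, 42: 5, 61: 5}

total_mp = sum(mp * count for mp, count in mix.items())  # megapixels in the whole batch
num_images = sum(mix.values())
average_mp = total_mp / num_images

print(total_mp, num_images, round(average_mp, 1))  # 935 30 31.2
```

So the 30-image batch totals 935mp, averaging just over 31mp per image.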
With both PL8.9 and PL9.1, I followed the recommendation of allowing a maximum of two simultaneously processed images. Both versions were installed in different folders on an SSD, while the images were on an internal HDD. I ran only one version at a time, so they weren't stealing resources from each other. In both cases, I processed a couple of images after starting up PL, before running the timed benchmark. This is because I've learned that the first image exports more slowly than subsequent images, presumably because of the time required to load the PL export code into RAM.
The machine has an AMD Ryzen Threadripper 1920X 12-core processor with 32GB RAM. The GPU is an NVIDIA GeForce RTX 4060 with 8GB of VRAM. I notice that Windows assigns 8GB of the main RAM as additional GPU memory, so there's a total of 16GB shared GPU memory.
The first test had no local adjustments. I ran it with both DeepPRIME 3 and XD2s.
I then added some of the more intelligent non-AI local adjustments in PL8, including a graduated filter, a control line with subtractive control point and a multi-circle control point. These masks take some computation, based on image content. I applied the same adjustments to all 30 images. With PL9, they needed to be converted to the new engine format. I couldn't find a way to do this conversion for a whole folder, so had to open each image in turn, and agree to the conversion. It's easy enough, but tedious.
Finally, for six of the images, I applied some of the AI masks in PL9; I had to limit myself to the ones that work on my machine. This had to be done image by image, which is why I only did six. In this case, I only used DeepPRIME 3.
In all cases, I have reported performance using the exported mp/sec measure. Obviously, the higher the number, the faster the export. These are the rather shocking results I found:
So, the results are that with straightforward, day-to-day images that need no local adjustments and for which DP3 is ideal, PL9 runs at only 46% of the speed of PL8! This was a real shock — I wasn't expecting more than a 10% difference. This means that even if I buy PL9, I'll continue to use PL8 for most images. The difference narrowed with XD2s, with PL9's speed only dropping by 39%, rather than 54%. In particular, PL9 suffers very little performance degradation when using XD2s compared to DP3. Overall, with a typical mix of DP3 and XD2s images, PL8 is about twice as fast as PL9.
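Two notes on reading these numbers (my own sketch, with a made-up timing purely as an example): mp/sec is simply total exported megapixels divided by elapsed export time, and "runs at 46% of PL8's speed" is the same statement as "a 54% drop" (likewise, a 39% drop means 61% of the speed):

```python
def export_throughput(total_megapixels: float, elapsed_seconds: float) -> float:
    """Exported megapixels per second; higher is faster."""
    return total_megapixels / elapsed_seconds

def slowdown(relative_speed: float) -> float:
    """Convert 'runs at X of the old speed' into the equivalent drop."""
    return round(1.0 - relative_speed, 2)

# The 30-image batch totals 935mp; the 300-second timing is a made-up example.
print(round(export_throughput(935, 300), 2))  # 3.12 mp/sec

print(slowdown(0.46))  # 0.54 -> PL9 at 46% of PL8's speed is a 54% drop
print(slowdown(0.61))  # 0.39 -> 61% of the speed is a 39% drop
```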
I then applied the non-AI masks, and repeated the tests. This significantly degraded PL8's speed, but not PL9's. So PL8 was still significantly faster, but with a narrowing lead. Overall, PL8 is about a third faster if you have masks.
Finally, I tested the effect of using AI masks in PL9. Unfortunately the export process fails if I use the complex object recognition masks, so I had to limit myself to the click-on-a-shape sub-masks. These were slower still, but only by about 20%.
It's hard to measure, but PL9 also seems much less responsive than PL8. It often pauses briefly before jerking into action again. Sometimes the cursor freezes for a few seconds, so you can't even switch to other apps. But I've not encountered any machine crashes.
So, the lesson is that PL9 is much slower than PL8, whatever you do. But the relative degradation is lowest with the most complex tasks. It seems that PL9 has been re-engineered to be optimised for complex edits and NR, but is much less efficient with straightforward images.
Separately, I've found that PL9 (including 9.2) is much less stable than PL8. It crashes much more often, fails to export many images that use the new AI masks, and often gives the catch-all "Internal error" message even when you're not using the AI masks. This may be connected with my GPU (I'm using the latest NVIDIA Studio driver, which is supposed to fix the problems), or it may also happen with other hardware.
These tests were all on my Windows 10 machine, which probably counts as a low-to-middling spec by modern standards. I plan to replace it next year with a much faster model. I can't say if other Windows machines will suffer similar reliability and performance problems (perhaps less so if they have more VRAM?), and I have no idea if any of these problems exist with the macOS version (but suspect it's better). The YouTube reviewers mainly seem to use Macs, and they don't mention these problems, so the Mac version may well be much better, or at least better tested.
My advice to anyone is to take full advantage of the 30-day free trial before buying. Perhaps it'll work better on your hardware than mine, but probably not if it's only a PC laptop. Try it with some complex AI edits, including the pre-defined masks such as Sky or People. Don't just edit them; make sure you always export the edited image. You might be surprised how often it fails to recognise the sky, or people, or animals. Basically, only use this category of mask as a last resort, and prepare to be let down.
If anyone is interested, I can make available an image that won't export from my PC, but might from their higher-specified PC or Mac.