In my personal experience, at least in the mid-level tiers of GPUs, there is no significant perceptible difference between pushing AI-enhanced pixels through nVidia or AMD GPUs when processing single images. Raw CPU horsepower has made a more perceptible difference.
You can try this at home, kids:
If you use something like Topaz Sharpen AI, you can set the preference for GPU or CPU processing and see whether you perceive any difference in throughput. When I do that and watch the system monitors, my CPU is being thrashed with all cores at full bore while the nVidia 3xxx GPU barely warms up. There might be a few seconds of difference processing a complex image, but barely enough to take a sip of water.
That nVidia card replaced an AMD GPU several generations older, which bought one or two sips of water of difference in throughput. Perceptible, but not thirst-quenching.
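If you want numbers rather than glances at Task Manager, a rough utilization logger like the sketch below will show the same pattern. It is only a sketch: it assumes Python with the psutil package installed and nvidia-smi on the PATH, and the 30-second sampling window is arbitrary. Run it in one terminal while the editor processes an image in another; a CPU column pinned near 100 next to a GPU column idling in the single digits tells the story.

```python
# Rough utilization logger: run this in one terminal while Sharpen AI (or any
# other editor) chews on an image in another, then eyeball the two columns.
# Assumes the psutil package is installed and nvidia-smi is on the PATH.
import subprocess

import psutil


def gpu_utilization_percent() -> float:
    """Ask nvidia-smi for the current GPU utilization (0-100)."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    )
    return float(out.stdout.strip().splitlines()[0])


if __name__ == "__main__":
    print("  CPU%   GPU%")
    for _ in range(30):                          # roughly 30 seconds of samples
        cpu = psutil.cpu_percent(interval=1)     # averaged over the 1 s window
        gpu = gpu_utilization_percent()
        print(f"{cpu:6.1f} {gpu:6.1f}")
```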
While video editors will usually run measurably faster on CUDA (nVidia's proprietary GPU compute platform) for GPU rendering, it is not clear whether image editing programs objectively work better with CUDA or whether other factors determine throughput, such as OpenGL, a cross-vendor GPU software standard. Apple has its proprietary Metal for GPU acceleration, but it fills much the same role as those standards on the x86 side.
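For what it's worth, this is roughly how an AI image tool built on a framework like PyTorch would choose between those paths; the sketch below is illustrative only (commercial editors ship their own runtimes and heuristics) and assumes PyTorch is installed.

```python
# Minimal sketch of how an AI image tool built on PyTorch might pick a compute
# backend: CUDA on nVidia, Metal (via MPS) on Apple silicon, plain CPU otherwise.
# Illustrative only; commercial editors ship their own runtimes and heuristics.
import torch


def pick_device() -> torch.device:
    if torch.cuda.is_available():            # nVidia's proprietary CUDA path
        return torch.device("cuda")
    if torch.backends.mps.is_available():    # Apple's Metal Performance Shaders
        return torch.device("mps")
    return torch.device("cpu")               # fallback: raw CPU horsepower


if __name__ == "__main__":
    device = pick_device()
    print(f"Inference would run on: {device}")
    # Typical usage: model.to(device), input_tensor.to(device)
```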
AMD and Intel are pushing for a universal x86 replacement for CUDA, not for us pixel pushers but for the big bucks in data center AI processing.
Unfortunately, the best comparative GPU benchmark for mere mortals remains the Puget Systems Photoshop scores, but that script in no way resembles how actual humans interact with Photoshop, and it does not cover other programs that are more heavily threaded or that claim, and seem to have, more significant GPU off-loading.
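A crude alternative is to time your own workflow on your own images. The sketch below assumes Python with Pillow installed and a hypothetical folder of JPEGs; note that Pillow's sharpen runs on the CPU only, so it measures CPU-side throughput rather than GPU off-loading, but it at least uses your files instead of a canned script.

```python
# Do-it-yourself throughput check: time a sharpen pass over your own folder of
# images instead of trusting a canned benchmark script. Pillow's UnsharpMask
# runs on the CPU only, so this measures CPU-side throughput, not GPU
# off-loading. The folder path is a placeholder; point it at your own files.
import time
from pathlib import Path

from PIL import Image, ImageFilter

SOURCE = Path("~/Pictures/benchmark_set").expanduser()   # hypothetical folder

start = time.perf_counter()
count = 0
for path in sorted(SOURCE.glob("*.jpg")):
    with Image.open(path) as im:
        im.filter(ImageFilter.UnsharpMask(radius=4, percent=150, threshold=2))
        count += 1
elapsed = time.perf_counter() - start

if count:
    print(f"{count} images in {elapsed:.1f} s ({count / elapsed:.2f} images/s)")
else:
    print(f"No .jpg files found in {SOURCE}")
```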
In my humble opinion, if you don't play games, $500 is a lot of money for a difference in single-image processing throughput you may not even perceive compared to a $300 (!) GPU. If you do play games, the 7800 XT seems like the best value in today's disappointing GPU world, which explains why it's sold out everywhere.