Got An AMD RX7800XT?

Batdude

Has anyone purchased this GPU? I am REALLY curious how this puppy performs with Lightroom Classic. Everything I'm hearing about this new card is really, really good, including the price.



Are AMD GPUs that bad that nobody here uses one? I’m just having a really hard time believing that these cards are no good in 2023. Just saying.
 
The GPU is pretty good, but it doesn't have an equivalent to tensor cores and some of the other baked-in hardware.

I'll touch on the gaming side briefly, not because it's directly connected, but because it illustrates the point.

For rendering a game with traditional lighting effects, AMD cards are better dollar for dollar than NVIDIA. But if you want to play something with heavy ray tracing, they fall way behind.

And that's the basic issue with creative workflows. NVIDIA is effectively the standard, so you can be pretty sure that if an app ships on PC it will work with an NVIDIA card. They also do a lot with CUDA, and now tensor cores, for acceleration.

With AMD cards you're gambling on whether a given app will work at all, and on how well optimized it will be if it does.

Intel is an even more extreme example of this with Arc. It's actually really fast hardware for the money, but between the driver issues and the lack of support from developers, you often can't use it.

Basically, if you want to go AMD, I'd look for specific benchmarks for the apps you want to use, and then weigh the risk that if a new app comes out you might not be able to use it, or it could run slowly.
 
That's why I posted this: to ask people who are running Lightroom with it. I'm well aware of each brand's GPU capabilities, yet I don't see any "proof" because nobody has a 7800 XT. I'm not asking for assumptions.

Regarding Nvidia, I don't see what the big deal is. All I hear about Nvidia is that their drivers are superior and this and that, but from all the research I've been doing for months now, no matter how many more hundreds of CUDA cores and tensor cores a card has, and no matter how much more expensive it is, most Nvidia GPUs perform pretty much exactly the same. That's certainly the case when using a GPU for (most) photography software. At least that's what I've been told, mainly with LR.

I mean, everyone knows that Adobe LR uses the GPU at a minimal level, so this software doesn't use an Nvidia GPU to its (full) potential. So how badly can an AMD card perform? hahaha 🤣

What I would like to see is this particular 7800 XT running LR, to see how it performs. The card is supposed to be damn good from what everyone is saying, and it's not that expensive.
 
It won't give you exact numbers, but with a bunch of 7000-series cards out there, you should be able to see how they do compared to the equivalent NVIDIA GPUs and get a good idea. It'll depend on how much Adobe utilizes AI hardware vs. general compute in Lightroom, along with how well optimized each solution is.

As to the benefit of dedicated hardware: with video encoding, an old integrated GPU on an Intel chip was about half as fast as a 3060 Ti. The reason is that Intel's Quick Sync dedicated video encoder is really good, even though the GPU itself is a tiny fraction as powerful.
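
If anyone wants to reproduce that kind of encoder comparison on their own machine, a rough timing sketch like the one below is enough. It assumes an ffmpeg build with both the h264_qsv (Quick Sync) and h264_nvenc (NVIDIA) encoders enabled, plus a local test clip named input.mp4; those are assumptions about your setup, not something from this thread.

# Rough timing harness for comparing hardware video encoders via ffmpeg.
# Assumes ffmpeg is on the PATH with h264_qsv and h264_nvenc enabled,
# and that a test clip named input.mp4 exists in the current directory.
import subprocess
import time

ENCODERS = ["h264_qsv", "h264_nvenc"]  # add "libx264" for a CPU-only baseline

for enc in ENCODERS:
    cmd = [
        "ffmpeg", "-y", "-hide_banner", "-loglevel", "error",
        "-i", "input.mp4",
        "-c:v", enc,
        f"out_{enc}.mp4",
    ]
    start = time.perf_counter()
    subprocess.run(cmd, check=True)  # encode the clip with this encoder
    elapsed = time.perf_counter() - start
    print(f"{enc}: {elapsed:.1f} s")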
 
Hopefully this AI thing will some day become smart enough to say on its own, "Hey, why is everything running so slow here? We can do better, come on, speed it up" hahaha 🤣

Other than that, AI simply scares me, and the only thing it reminds me of is The Terminator 😬
Resistance is futile.
 
In my personal experience, at least at the mid-level tiers of GPUs, there is not a significant perceptible difference pushing AI-enhanced pixels through nVidia or AMD GPUs while processing single images. Raw CPU horsepower has been more perceptible.

You can try this at home, kids:

If you use something like Topaz Sharpen AI, you can set the preference for GPU or CPU processing primacy and see if you perceive any difference in throughput. When I do that and watch the software monitors, my CPU is always being thrashed, all cores at full bore, while the nVidia 3xxx GPU barely warms up. There might be a few seconds of difference processing a complex image, but barely enough to take a sip of water.
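
For anyone who wants numbers instead of just eyeballing the monitors, a minimal logging sketch like the one below can record CPU and GPU load once a second while a batch runs. It assumes the psutil package is installed and that nvidia-smi is on the PATH; both are assumptions about your setup rather than anything Topaz requires.

# Minimal utilization logger to run alongside a Sharpen AI (or similar) batch.
# Assumes psutil is installed and nvidia-smi is available on the PATH.
import subprocess

import psutil

def gpu_percent() -> str:
    """Ask nvidia-smi for the current GPU utilization percentage."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True,
    )
    return out.stdout.strip()

print("cpu%, gpu%")
for _ in range(120):  # sample once a second for two minutes
    # cpu_percent(interval=1) blocks for one second and returns average load
    print(f"{psutil.cpu_percent(interval=1):.0f}, {gpu_percent()}")

Start the logger, run the same batch once with GPU primacy and once with CPU primacy, and compare the two runs.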

That nVidia card replaced an AMD GPU several generations older. That resulted in one or two sips of water difference in throughput. Perceptible, but not thirst quenching.

While video editors will usually run measurably faster on CUDA (nVidia's proprietary GPU compute platform) for GPU rendering, it is not clear whether image editing programs objectively work better with CUDA, or whether other factors determine throughput, such as OpenGL, a cross-vendor GPU software standard. Apple has its proprietary Metal API for GPU acceleration, but it's modeled on similar standards from the x86 world.

AMD and Intel are pushing for a universal replacement for CUDA on the x86 side, not for us pixel pushers but for the big bucks in data-center AI processing.

Unfortunately, the best comparative GPU benchmark for mere mortals remains the Puget Systems Photoshop scores, but that script in no way resembles how mere humans interact with Photoshop, and it does not include other programs that are more heavily threaded or that claim, and seem to have, more significant GPU off-loading.

In my humble opinion, if you don't play games, $500 is a lot of money for a difference you may not perceive in single-image processing throughput compared to a $300 (!) GPU. If you do play games, the 7800 XT seems like the best value in today's disappointing GPU world, which explains why it's sold out everywhere.
 
I COMPLETELY agree with everything you said. $500 for a GPU where you will barely notice a difference is indeed a lot of money, and I don't feel comfortable throwing my money away that way. Incredibly, for what I do, my old GTX 1070 is performing very decently, and although I would like something more powerful, it just isn't going to benefit me in a huge way.

I am not a PC or GPU "expert," but over the last few months I have learned a lot about GPUs and the photography software I'm using.

I have been reading several articles about what AMD is working on and how the next-gen GPUs might be RDNA 5, etc. It sounded very interesting, and who knows what Nvidia will keep doing with their newer GPUs as well. So after doing soooo much research, I figured that right now (for me) it is not the time to buy the existing GPU hardware; instead I will wait for something better that will actually be worth it. I'm in no rush.

Only if there are REALLY good Black Friday or Amazon Prime deals coming up might I pull the trigger on something decent.
 
I too have been searching for RDNA performance numbers and have found it difficult. The closest I've come across is one reply on Reddit where someone says their 7900 XT can process their Canon files with Denoise AI in 5 seconds. I don't remember the camera model, but I think it's ~20-24 MP.

Some may say the GPU isn't that important, but I would argue that some processes are very much GPU-bound. My computer takes 1:20 for a single raw file (Denoise AI). I cover burlesque and other night shows that produce hundreds of photos, and at the moment I can only denoise roughly 45-50 per hour, whereas a GPU that only takes 0:15 could do 240 per hour. Keep in mind the CPU is idle while this is going on.
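
The arithmetic behind those rates is just the per-file time divided into an hour; a quick sketch using the figures quoted above (80 s and 15 s, which are this thread's examples, not new measurements):

# Back-of-the-envelope denoise throughput at a given per-file time.
def files_per_hour(seconds_per_file: float) -> float:
    return 3600 / seconds_per_file

for label, secs in [("current machine, 1:20/file", 80), ("faster GPU, 0:15/file", 15)]:
    print(f"{label}: {files_per_hour(secs):.0f} files/hour")
# prints roughly 45 and 240 files/hour respectively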
 
Yes, I have seen that post. I think almost any decent $400-and-up GPU can do that; 20-24 MP files are not large, and 5 seconds is not bad at all. I'm assuming those are raw files.

Anyway, I'm not worried about it. I'm just going to get a next-generation GPU; they should be worth it.
 
