Graphics cards . . .

For all this denoise/enhance/speed stuff we've been talking about, what should I be looking at if I were to choose between the RTX 3070 and the AMD RX 6800? Personally, with everything going up in cost, one of the first things I look at is electricity/power consumption. Southern California Edison keeps jacking up prices and it's only going to get worse, so I would like to stay below 300 watts hahaha! :-)

No but seriously, which one would you pick and why? I have no clue and don't understand what ray tracing and CUDA cores do, the bit rate, which ones have more RAM, blah blah blah. All this stuff is really confusing to me, I admit. What I would like to know is very simple: which one is the fastest and costs less?

Thanks.
From the minimal words from Adobe's Eric Chan...

https://blog.adobe.com/en/publish/2023/04/18/denoise-demystified

a1fbffc014de4f8e9199f012f69963c4.jpg

I don't know the AMD products, but the RTX 3070 has Tensor cores. All my attempts at using ACR AI Noise Reduction come in at sub-10-second runs.

Peter
Usually, AI technology makes use of the CUDA cores of NVIDIA cards. AMD has ROCm. I don't know whether the two technologies are compatible or which one is better.




There are dedicated AI cores in NVIDIA (RTX) and Apple Silicon chips that speed it up a lot. Think of it like a hardware-accelerated video encoder: the Intel iGPU lets my Surface Pro 6 export a video at only half the speed of an RTX 3060 Ti, even though its general-purpose GPU power is next to nothing in comparison.

The one interesting thing about that blurb PMB posted is that it doesn't mention specific support for the AI cores in Intel's Arc GPUs. Those are in some ways the most powerful for their price, but this is where they run into issues, with developers not optimizing for them and, on the other end, Intel's own drivers not always being that great.



Back to the OP: I can say an RTX 3060 Ti is enough for this. I'd wait a little, though, since the 4060 Ti is likely coming out in the next few weeks, so it'll either be a better buy or push 3060 Ti prices lower.

The 3060, especially refurbished, is getting to be a decent deal, and the added VRAM could help long-term usability, though I can't say how much of a performance hit it would mean for Lightroom.
 
And this is what an AI NR run looks like in iCUE ...

e4b6a723996a4a5e8a5ae11ad90fd58c.jpg

The run, about 10 seconds long, begins at the rising edge of the fan curves (blue) and ends at the falling edge of the temp curves (orange). The load seems flat during the run. The fans run on for another 15 seconds or so. There are actually three fans on this GPU, but for some reason iCUE only reports two. Oh well.

Peter
 
No but seriously, which one would you pick and why? I have no clue and don't understand what ray tracing and CUDA cores do, the bit rate, which ones have more RAM, blah blah blah. All this stuff is really confusing to me, I admit.
TL;DR:
Where NVIDIA is concerned, CUDA cores and CUDA acceleration are still available, but they're dated technology now. The three most recent generations of GPUs have Tensor cores, which are more applicable to AI tasks for image processing. At least when the software developers actually mean AI in the context of machine learning and aren't just using it as a hollow marketing term.

CUDA cores are nothing special these days other than being from NVIDIA. "CUDA cores" is just the name of the "normal" cores on the GPU die (processor). Granted, these normal cores number in the thousands and are arranged in a massively parallel architecture that can be "programmed against" to do computational tasks, not just graphics processing. That was essentially the point of the CUDA-enabled software you saw years ago.
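As a loose illustration of that "same small operation applied across thousands of elements" idea, here's a toy sketch in Python. This is not how GPU code is actually written; a thread pool merely stands in for the many cores, and `brighten` is a made-up per-pixel operation.

```python
from concurrent.futures import ThreadPoolExecutor

def brighten(value: float) -> float:
    # The per-"core" work: one small, identical computation per pixel.
    return round(min(value * 1.2, 1.0), 4)

pixels = [0.1, 0.5, 0.9, 0.95]
# A GPU runs thousands of these simultaneously, one per core;
# a thread pool is a crude stand-in here.
with ThreadPoolExecutor() as pool:
    result = list(pool.map(brighten, pixels))
print(result)  # [0.12, 0.6, 1.0, 1.0]
```

The point is only the shape of the work: many independent, identical calculations, which is exactly what a massively parallel architecture is good at.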

As PMB posted in his initial reply to you, via an excerpt from Adobe, current-generation AI applications tend to make use of Tensor cores on the GPU instead of CUDA cores. Assuming Tensor cores are available, of course. Tensor cores are more or less specific to machine learning, and therefore to AI tasks.

CUDA cores can be used to accelerate the computation of "finite" things that are specific and already defined. Take, for example, Bicubic Sharper. That algorithm is a specifically defined set of math rules that runs the same calculations every time. I doubt Bicubic Sharper actually uses CUDA acceleration, but hopefully you get the idea: it's the same computation every time, regardless of input.
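To make that concrete, here's a toy Python sketch of a fixed resampling kernel from the bicubic family (the Catmull-Rom cubic). Whether Bicubic Sharper uses this exact formula is an assumption for illustration only; the point is that the weight depends solely on the sample offset, never on the image content.

```python
def catmull_rom(t: float) -> float:
    # Weight for a sample at distance t from the output pixel.
    # Pure math: no image data enters the formula.
    t = abs(t)
    if t < 1:
        return 1.5 * t**3 - 2.5 * t**2 + 1
    if t < 2:
        return -0.5 * t**3 + 2.5 * t**2 - 4 * t + 2
    return 0.0

# The same offset always yields the same weight, whether the
# surrounding pixels are sky, skin, or noise:
print(catmull_rom(0.5))  # 0.5625, every single time
```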

Tensor cores, OTOH, can be used to accommodate inference, or "conditionally making stuff up" based on inputs. It's the latter that you'd likely want for "using AI" in tasks such as noise reduction and upscaling of images. Throw [Image A] at a procedure on the Tensor cores and it will do something different (and hopefully more appropriate) than what that same Tensor core procedure will do for [Image B].
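By contrast with a fixed kernel, here's a toy Python sketch of input-dependent behavior, the flavor of thing inference enables. This is a hand-written heuristic for illustration, not an actual neural network, and the thresholds are made up.

```python
def adaptive_denoise(pixel: float, neighbor_avg: float, local_variance: float) -> float:
    # Smooth hard in flat regions (probably noise), gently near detail.
    strength = 0.8 if local_variance < 0.01 else 0.2
    return round((1 - strength) * pixel + strength * neighbor_avg, 4)

# Identical pixel value, different surroundings, different result:
print(adaptive_denoise(0.5, 0.52, 0.002))  # 0.516 (flat area, pulled toward neighbors)
print(adaptive_denoise(0.5, 0.52, 0.200))  # 0.504 (near an edge, barely touched)
```

A real denoising model conditions on far richer context than one variance number, but the principle is the same: the output depends on what's in the image, not just where the sample sits.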
 
My 8-year-old Dell XPS 8700 had the original minimal GPU. No real issue until I tried Topaz Sharpen AI: even with 24GB of RAM and an i7 processor, the program took minutes to sharpen an image. Spending $400 or more on a GPU did not make sense, plus the power supply was not adequate. Research showed two cards worth considering: the GTX 1050 and the GTX 1650. The 1650 was newer tech, an estimated 1/3 faster, and $50-75 more. However, some upgraders reported BIOS issues with the 1650 and no problems at all with the 1050.

I bought and installed a new 1050 for about $170 (cheaper now; used ones really cheap). Processing time went from 2 minutes+ to 25 seconds or less. Not great, but good enough for a program I don't use constantly. Maybe I should have tried the 1650. No issues with the power supply either.

Greg
 
The GT 1030 is a card that uses little electrical power (30W). It has no PCIe power connector.

What's your PSU?

If you went with a Radeon RX 6600, it would require a single 8-pin (often 6+2-pin) PCIe power connector. AMD recommends a 450W PSU or better.

Another 8GB card could be an RX 580. Similar power requirements. Cheaper (a generation older), but with significantly lower performance.

Some software (I believe Canon Digital Photo Professional, for example) requires an nVidia card. Maybe a GTX 1660. Those are "Turing" cards, like the RTX 20x0 series, but without the hardware ray tracing. (I know of no photo-processing software that uses hardware ray tracing.) They have 6GB of VRAM; I'd prefer one over a 4GB 1650.
Thanks for that.

My PSU is rated at 500W, so it's probably OK without being outstanding.

As things stand DxO and Topaz work quickly enough that I can live with it and I don't denoise that many images anyway.

All I'm after is a sensible improvement overall (it'll never, ever get used for gaming), and a more marked improvement in the case of Adobe.

Therefore, I can't justify parting with a great deal of money!

That RX 6600's a bit beyond my anticipated budget but the GTX 1660 is a bit more like it.

Someone else suggested an RX 580 (570?) which again can be had quite cheaply.

Any good?

"It's good to be . . . . . . . . . Me!"
With these higher-power-PSU PCs, have you measured the energy usage for a typical workflow? Import files, AI NR, maybe other AI work, say edits through LR, and export.

I'm looking to upgrade a personal laptop but I could manage a tower. However these big PSU numbers are a concern for me.
 
I regard PSU requirements as covering the possible peak power draw, rather than sizing for a typical workflow.

I check power usage via my UPS. The maximum power draw spec'd for the CPU and GPU seem to match reality pretty well.

The settings for both matter. My RTX 4090 is normally limited to 450W max, but it can be OC'd to 600W. (Not all 4090s permit that.) An I9-13900K's maximum power draw can be controlled through the motherboard's BIOS settings.

(Not braggin'. The total cost of my primary hobby PC is a fraction of the cost of a single Canon RF big prime. I don't own any $9k plus lenses.)
 
I'm looking to upgrade a personal laptop but I could manage a tower. However these big PSU numbers are a concern for me.
As far as I know, very few laptop GPUs are internally upgradable. Unless portability is truly needed, a desktop/tower PC is IMO a better choice for high performance, especially with GPU-intensive software; more room for effective cooling systems too.
 
I'm looking to upgrade a personal laptop but I could manage a tower. However these big PSU numbers are a concern for me.
For laptops, the most common possible upgrades are the RAM and the drives.

There is an upgradable discrete graphics card format for laptops, MXM . However, most laptops with discrete graphics have them soldered to the motherboard. An MXM module would be useful to you only if your laptop came with one. That's highly unlikely.
 
I'm looking to upgrade a personal laptop but I could manage a tower. However these big PSU numbers are a concern for me.
As far as I know, very few laptop GPUs are internally upgradable.
The upgrade would be by selling my machine and buying another. My laptop does allow an upgrade, but it's not going to give me very much gain.

Unless portability is truly needed, a desktop/tower PC is IMO a better choice for high performance, especially with GPU-intensive software; more room for effective cooling systems too.

If I reflect back to the question I have over energy usage - this is a very important element for me.

My current Xeon laptop has worked its fans' little socks off for 7 years; it is portable and was quite a beast when new. Specifically purchased for EM modelling.
 
I'm looking to upgrade a personal laptop but I could manage a tower. However these big PSU numbers are a concern for me.
For laptops, the most common possible upgrades are the RAM and the drives.

There is an upgradable discrete graphics card format for laptops, MXM. However, most laptops with discrete graphics have them soldered to the motherboard. An MXM module would be useful to you only if your laptop came with one. That's highly unlikely.
The upgrade is not the key element exactly. It is the power usage of a tower PC (I've just seen 600W just for a GFX card).

My laptop can have its GFX upgraded but it won't gain me very much.

I'm looking to upgrade by means of a new machine. I'm concerned over power usage of a tower.
 
The upgrade is not the key element exactly. It is the power usage of a tower PC (I've just seen 600W just for a GFX card).

My laptop can have its GFX upgraded but it won't gain me very much.

I'm looking to upgrade by means of a new machine. I'm concerned over power usage of a tower.
You don't have to specify top-of-the-line CPUs or GPUs; lower level components can be chosen to use less energy. That also makes the PC less expensive, easier to cool and potentially quieter.
 
I'm looking to upgrade by means of a new machine. I'm concerned over power usage of a tower.
Towers are no more hungry than laptops: the chosen components are the source of heat, not the form factor. According to Kill-A-Watt, my tower gaming rig draws about 200W going full tilt. Half of that is the RTX 3050 (which uses a laptop nVidia chip BTW). I selected an i5 CPU because the i9 is a ridiculously wasteful power hog.

Decide on your desired energy budget and build accordingly.
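For what it's worth, that budgeting step can be as simple as adding up vendor TDP/board-power figures against a target. All the wattages below are illustrative assumptions, not measurements of any real build:

```python
budget_w = 300  # hypothetical whole-system target under load

# Assumed per-component peak draws (check vendor spec sheets for real parts):
parts = {
    "CPU (65 W-class i5)": 65,
    "GPU (RTX 3050-class board power)": 130,
    "Motherboard, RAM, SSD, fans": 50,
}

total = sum(parts.values())
print(f"{total} W estimated peak, {budget_w - total} W headroom")
# 245 W estimated peak, 55 W headroom
```

Peak figures rarely all hit at once, so the real-world draw is usually lower still, but summing peaks gives a safe upper bound for choosing a PSU.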

--
Canon, Nikon, Contax RTS, Leica M, Sony, Profoto
 
You don't have to specify top-of-the-line CPUs or GPUs; lower level components can be chosen to use less energy. That also makes the PC less expensive, easier to cool and potentially quieter.
Thanks Austinian. I would be grateful if anyone has any measured data from a PC.

For context this is the type of machine I was looking at.

 
Thanks Austinian. I would be grateful if anyone has any measured data from a PC.

For context this is the type of machine I was looking at.

https://www.ebay.co.uk/itm/20432284...hSttH_LRP6&var=&widget_ver=artemis&media=COPY
All I can suggest for any laptop is to look at the specified wattage of its power supply--that's probably close to its maximum power consumption under load. Not always, but that's my best guess.
 
Thanks Austinian. I would be grateful if anyone has any measured data from a PC.

For context this is the type of machine I was looking at.

https://www.ebay.co.uk/itm/20432284...hSttH_LRP6&var=&widget_ver=artemis&media=COPY
The specs on that laptop aren't very complete.

The most power-hungry mobile Gen10 i7 that I can find is an i7-10875H, rated at 45W max. (Quite a bit for a mobile CPU.) Other CPUs, much less.

The laptop has only an integrated GPU, so it's not ideal for photo processing by AI software (Topaz, DXO Photolab, some of the newer Adobe features).

If you visit the Dell website, you can search on laptops with discrete graphics. This is a US page: Laptop Computers | Dell USA

I don't know about the UK, but in the US you could consider a Dell refurb. (Maybe a safer purchase than eBay.) The US site has some XPS models with discrete GPUs.
 
All I can suggest for any laptop is to look at the specified wattage of its power supply--that's probably close to its maximum power consumption under load. Not always, but that's my best guess.
It's the power consumption of the tower PC that I am looking for. If, when batch processing, it's sat using a kilowatt, then I would avoid that route and look to a laptop.

The very high power consumption figures I see for GFX cards and processors are a concern.
 
The specs on that laptop aren't very complete.
Indeed, I accidentally shared the wrong machine. It was only meant as an example.

The most power-hungry mobile Gen10 i7 that I can find is an i7-10875H, rated at 45W max. (Quite a bit for a mobile CPU.) Other CPUs, much less.
My current Dell laptop, 7 years old, has a graphics card, a Xeon processor, 32GB of RAM and a 0.5TB SSD. For a large batch of images to process, using a single screen with the brightness down, it uses around 110W.
The laptop has only an integrated GPU, so it's not ideal for photo processing by AI software (Topaz, DXO Photolab, some of the newer Adobe features).
Yes indeed. My mistake sorry.
If you visit the Dell website, you can search on laptops with discrete graphics. This is a US page: Laptop Computers | Dell USA

I don't know about the UK, but in the US you could consider a Dell refurb. (Maybe a safer purchase than eBay.) The US site has some XPS models with discrete GPUs.
There are some refurb companies that specialise in Dell in the UK which is probably going to be my outlet.

I could manage a tower, but that potential for a big jump in energy is pretty off-putting (if it's correct).

Thanks fella.
 
I'm looking to upgrade by means of a new machine. I'm concerned over power usage of a tower.
Towers are no more hungry than laptops: the chosen components are the source of heat, not the form factor. According to Kill-A-Watt, my tower gaming rig draws about 200W going full tilt. Half of that is the RTX 3050 (which uses a laptop nVidia chip BTW). I selected an i5 CPU because the i9 is a ridiculously wasteful power hog.

Decide on your desired energy budget and build accordingly.
I don't have the data for the energy profiles of modern tower PCs. I know what our laptops use.

200W is 175% of my laptop's, and I'd think that if one were building a tower, would you not choose something like an i9-13900?

Your machine sounds like its average is much less than the peak?
 
It's the power consumption of the tower PC is what I am looking for. If when batch processing it's sat using a kw then I would avoid that route and look to the laptop.

The very high power consumption figures I see for GFX cards and processors are a concern.
No generalization is possible for that AFAIK; it all depends on the components and the workloads. "Tower" means nothing in itself except the shape of the case; I could put any parts I wanted into a "tower" case. Low power, high power, whatever.
 
No generalization is possible for that AFAIK; it all depends on the components and the workloads. "Tower" means nothing in itself except the shape of the case; I could put any parts I wanted into a "tower" case. Low power, high power, whatever.
I've provided the workload by means of an image-processing path. So run that process for 100 images, 500 images, whatever, and then work out the energy per image.

The tower would have a suitable setup for LR AI NR (that's where the story starts).

So now you have a basic spec and a defined system to process.

The next job is to collect data on typical energy usage. That's the stage we're at, and I'm asking for data.
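Since nobody has posted measured data yet, here at least is the arithmetic for turning a measurement into energy per image, sketched in Python. All the numbers (250W average draw, a 25-minute batch, 500 images, a 30p/kWh tariff) are placeholder assumptions, not measurements:

```python
def energy_per_image_wh(avg_watts: float, batch_seconds: float, num_images: int) -> float:
    # Average draw (W) x batch duration (h), divided across the images.
    return avg_watts * (batch_seconds / 3600) / num_images

per_image = energy_per_image_wh(250, 25 * 60, 500)
print(f"{per_image:.3f} Wh per image")  # 0.208 Wh per image

# At a hypothetical 30p/kWh tariff, the whole batch costs:
batch_pence = 250 * (25 * 60 / 3600) / 1000 * 30
print(f"about {batch_pence:.1f}p for the batch")  # about 3.1p for the batch
```

Plug in a Kill A Watt (or UPS) reading for the average draw and a stopwatch time for the batch, and the per-image figure falls out directly.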
 
