New System Build - Reality Check

SushiEater wrote:

Do you realize that any kind of benchmarking is like getting a Ferrari and driving it at 65 mph?

Some of the benchmark operations are never going to be used in real life, or at least not very frequently.
Yet those scores are included. If I were doing something in Photoshop over and over again that required one specific operation, I might consider doing some research on it. But most photographers are at best using standard operations that mostly require CPU power and nothing else.
Yeah... The OP's video card is already much faster than needed, as I pointed out in this post:

http://www.dpreview.com/forums/post/51955264

As for the CPU, it's not hard to find a variety of benchmarks of it with image editing apps (AnandTech, Tom's Hardware, etc.).

But, even though it's not the absolute fastest CPU on some tests like that, as time passes, I'd expect it to move up in the ranks (as some of the existing apps are not written well enough to spin off more threads to take advantage of the 6 physical cores in the Core i7 3930K yet, but future versions will probably take better advantage of more cores).
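
Just to illustrate what "spinning off more threads" means in practice, here's a toy Python sketch (purely illustrative, not anything from a real editor's code): the same per-tile operation only uses one core unless the program explicitly farms the tiles out to a pool of workers, one per core.

    from multiprocessing import Pool, cpu_count

    def brighten_tile(tile):
        # stand-in for a CPU-heavy per-tile image operation
        return [min(255, int(px * 1.1)) for px in tile]

    if __name__ == "__main__":
        tiles = [[i % 256] * 100_000 for i in range(24)]  # fake image tiles
        with Pool(cpu_count()) as pool:   # one worker process per core
            results = pool.map(brighten_tile, tiles)  # tiles processed in parallel

An app that loops over the tiles in a single thread leaves five of a 3930K's six cores idle; the pooled version scales (roughly) with core count, at least for operations that split up cleanly.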

In any event, the AMD CPU mentioned is just not in the same class as a Core i7 3930K (even if looking at single threaded speed, where AMD tends to lag well behind newer Intel CPU designs as far as performance/thread).

For many users, it may be just fine. But, it looks like the OP wants a "top of the line" type of system with some future proofing for newer apps that will take better advantage of more cores later (and the Core i7 3930K should be a pretty good choice).

If it were my money, I'd probably go with a much cheaper Core i7 3xxx model (probably just buying a refurbished XPS 8500 from Dell for around $700 with a Core i7 3770, 2TB drive, 8GB of memory, and a reasonably fast video card like their OEM version of the GT 640 with 1GB of GDDR5). Then, I'd add a fast SSD and more memory to it.

That way, I'd have a system that gives me around 90% of the performance for the apps I use more often for well under half the cost of the system the OP is building. ;-)

The last 10% or so increase in speed tends to cost you a lot more (just as you may spend a lot more for an automobile that's a little better in some areas).

But, then again, time is money to many users (so spending 2 or 3 times as much for a PC to get a little better performance may be well worth it to them in time savings over the life of the PC).

As for me, I'd probably just buy an off-the-shelf refurb, add some memory and an SSD, and end up spending less than half of that for a system that's close to 90% as fast. ;-)

I built my own computers for years (so many of them I've lost count). But, lately (for about the last 6 or 7 years), I haven't bothered, since I can find good deals on refurbs from Dell Outlet (for a lot less than I could build equivalent machines for), and just add memory, video card, hard drives, etc. as needed to them (saving a lot of money over buying the parts and putting one together from scratch myself).

On the downside, the BIOS setups Dell uses don't allow overclocking. But, to me, that's not a big deal (as you're saving so much money, you can upgrade to a newer generation CPU more often when buying refurbs from Dell Outlet -- especially if you watch for coupon codes to get even more off the already discounted refurb pricing).

I've bought 3 desktops and 2 laptops that way so far (refurbished from Dell Outlet, waiting for coupon codes for even more off), just upgrading memory, drives and video cards myself (and even swapping out the PSU in one system for a higher wattage model supporting a higher end video card). If you're a good shopper, you can find some pretty good deals that way (ending up with a system for less than you could buy the parts for). ;-)

--
JimC
------
 
Last edited:
Jim Cockfield wrote:

In any event, a solution like the A10-6800K is not even in the same class as the system the OP is building (as the CPU he's going with is more than twice as fast, and his existing video card is close to 4 times as fast as the video chipset integrated with that AMD A10-6800K solution).
You're probably correct. But it doesn't matter a bean how fast your GPU is if the software isn't using it. And, let's face it, most software doesn't.

Folks are buying millions of mid-to-high-end GPUs that are doing basically nothing. You gotta hand it to the marketers at Nvidia and AMD. Folks are spending big bucks on stuff they think they need but don't.

Now, if you're a gamer or doing lots of video encoding that could benefit from a GPU, the higher end products make sense. And, yes, it's likely that more software will take advantage of GPUs over time.

But I submit that most of us are overspending on the GPU component (me included). I'd encourage folks to go with something smaller, quieter, cheaper rather than fill the case with a power-hungry and noisy monster.
 
Jim, I would suggest you read this complete article on the issue. It is not as simple as it may first appear. There is CPU horsepower, GPU horsepower, and communication bandwidth involved. And there is each processor's support for C, OpenGL, and OpenCL, and how much of each kind of code is used by the software of interest.

And I'm not suggesting that high CPU horsepower is unimportant, or that it can't make up for weak GPU horsepower. All I am saying is that, with the right code, GPU horsepower can make a huge difference.

I was disputing the statement that GPUs are for gamers only. Photo editing uses them too, and I think the trend is to use them more and more.
 
Last edited:
Ron AKA wrote:

Jim, I would suggest you read this complete article on the issue. It is not as simple as it may first appear. There is CPU horsepower, GPU horsepower, and communication bandwidth involved. And there is each processor's support for C, OpenGL, and OpenCL, and how much of each kind of code is used by the software of interest.

And I'm not suggesting that high CPU horsepower is unimportant, or that it can't make up for weak GPU horsepower. All I am saying is that, with the right code, GPU horsepower can make a huge difference.

I was disputing the statement that GPUs are for gamers only. Photo editing uses them too, and I think the trend is to use them more and more.
The OP already has a card that's more than fast enough. See my post about it here:

http://www.dpreview.com/forums/post/51955264

BTW, some tests show that the Fermi architecture used by the GTX 560 Ti is actually better for OpenCL-related processing than the newer Kepler designs, when comparing cards that test at about the same speed for gaming purposes (for some reason, the newer Nvidia designs don't test as fast at OpenCL as the last generation).

In any event, I just don't see the point of the OP changing video cards (or going to a CPU solution with integrated video that's even slower), as the GTX 560 Ti he already has (and plans on reusing in the new system) should be plenty fast enough.

--
JimC
------
 
Last edited:
As already mentioned in more than one post in this thread, the OP already has a video card that's plenty fast enough (and that he plans on reusing in the new build).

See my post here about it:

http://www.dpreview.com/forums/post/51955264

Of course, as time passes, I would expect to see more and more software that takes better advantage of the GPU to help out with processing.

But, in the OP's case, I'd stick with the card he already has for now, as I suggested.

Then, if he starts using software later that takes better advantage of the GPU, he'd probably be able to get a card with a better price/performance ratio than currently available models, as each newer generation of video cards tends to give you more "bang for the buck" (not to mention better performance/watt).
 
Jim Cockfield wrote:

The OP already has a card that's more than fast enough.
I don't know the card, and that could very well be true. That is not the point I made. I am suggesting that the GPU is an important factor in photo editing performance. It is not just for gamers.

The second part of the point is that most of us, myself included, have no idea what to look for in a graphics card for photo editing. We just go out and buy some random card that most likely was made for a kid playing video games, and hope it works.

If we want to take a step beyond that, we need to learn how photo editing software is written, and what kind of card will use the code efficiently. I am learning that GPU support for OpenCL is important.

Does the OP's card support the latest version of OpenCL, or more importantly the version used to program CS6? I don't know, but I think it is meaningful.
 
The GTX 500 series cards all support DirectX 11, OpenGL 4.3, and OpenCL 1.1 with current drivers.

CS6 requires OpenGL 2.0 and OpenCL 1.1.

So, the OP's GTX 560 Ti card is fine for use with CS6.
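
For anyone who wants to double-check their own card, here's a quick Python sketch that lists the OpenCL version each device's driver reports. It uses the third-party pyopencl package (you'd need to install it first), and the output shown in the comment is just an example:

    import pyopencl as cl  # third-party package: pip install pyopencl

    for platform in cl.get_platforms():
        for dev in platform.get_devices():
            # dev.version is a string like "OpenCL 1.1 CUDA"
            print(dev.name, "-", dev.version)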

The Nvidia 400 series cards also support the same specs (provided you're using updated drivers). I'm using a slower GT 440 with 1GB of GDDR5 (not the slower GDDR3 most GT 440 cards use), and I doubt I'd see much benefit upgrading it for use with apps like CS6 either.

That's because you'll tend to see diminishing returns with faster cards once you get to cards around that speed with apps like CS6. IOW, you may buy a card that's 3 or 4 times as fast and only see a percent or two improvement in processing speed with that kind of application.

There are multiple generations of cards that would work fine with CS6, including the OP's GTX 560 Ti, which is roughly twice as fast as you'd need for maximum performance using GPU accelerated features in CS6.

IOW, anything faster than the OP already has is not going to help performance with apps like CS6.

For other purposes (gaming, etc.), a faster card can help out. Heck, I run tasks using BOINC (Berkeley Open Infrastructure for Network Computing) software for some projects that use my GPU to help processing speed, and a faster card than I have now would be very helpful.

But, for GPU accelerated features using CS6, a faster card is not necessary (once you get to a mid range card, anything faster is a waste of money).

--
JimC
------
 
Last edited:
Jim Cockfield wrote:

The GTX 500 series cards all support DirectX 11, OpenGL 4.3, and OpenCL 1.1 with current drivers.

CS6 requires OpenGL 2.0 and OpenCL 1.1.

So, the OP's GTX 560 Ti card is fine for use with CS6.

The Nvidia 400 series cards also support the same specs (provided you're using updated drivers). I'm using a slower GT 440 with 1GB of GDDR5 (not the slower GDDR3 most GT 440 cards use), and I doubt I'd see much benefit upgrading it for use with apps like CS6 either.

That's because you'll tend to see diminishing returns with faster cards once you get to cards around that speed with apps like CS6. IOW, you may buy a card that's 3 or 4 times as fast and only see a percent or two improvement in processing speed with that kind of application.

There are multiple generations of cards that would work fine with CS6, including the OP's GTX 560 Ti, which is roughly twice as fast as you'd need for maximum performance using GPU accelerated features in CS6.

IOW, anything faster than the OP already has is not going to help performance with apps like CS6.

For other purposes (gaming, etc.), a faster card can help out. Heck, I run tasks using BOINC (Berkeley Open Infrastructure for Network Computing) software for some projects that use my GPU to help processing speed, and a faster card than I have now would be very helpful.

But, for GPU accelerated features using CS6, a faster card is not necessary (once you get to a mid range card, anything faster is a waste of money).

--
JimC
------
If we are talking photography, agreed, a fast card is not needed. But you may want to consider an AMD FirePro, so that some day, when the upgrade to a 10-bit color monitor happens, the OP has a GPU that can drive it. You also want to consider the ports on the GPU (DisplayPort, Dual-Link DVI, etc.) in the event multiple hi-res monitors are considered. These fast "gaming" GPUs do not address certain needs that may have to be considered if/when the OP does want to be "future proof" and elects to acquire a hi-res (2560x1440), 10-bit color, and/or multi-monitor setup.
 
Last edited:
Jim Cockfield wrote:

But, Microsoft dropped that feature (XP Mode) with Win 8 Pro. Sorry, you'd need to have your own copy of XP to install under Win 8 Pro (using the built-in Windows Hyper-V, Oracle VirtualBox, VMware Player, or similar virtualization software).

That's one reason I think Win 8 Pro is a ripoff compared to Win 7 Pro (since Microsoft no longer allows you to download a free copy of XP in the form of XP mode that runs in a Virtual Machine as they did with Win 7 Pro).

Heck, they don't even include codecs for commercial DVD playback with Win 8 Pro (unless you buy the optional Media Pack).
Jeez Jim, give it a break. We all know you're miffed because MS dropped XP mode and you can't play DVDs with Windows Media Player — this is like the third or fourth time you've mentioned it in this thread alone.

And you're apparently convinced that the Modern UI is the spawn of Satan; we get that too. But you need to accept the fact that people can use Win8 efficiently and productively. I wrapped my 58-year-old brain around it in less than a week, admittedly with the aid of StartIsBack (so I could learn the new UI at my leisure). And while you may seize on that as an AHA! I can tell you that I rely on SiB rarely now, primarily using it for its integrated search. The fragmented search of Win8 is my lone remaining niggle, and that's apparently been addressed in 8.1.

My boots take me directly to the new Start screen instead of the desktop, and what I see is what I want to see, with no evidence of Modern/Metro apps in sight.

Between this, pinned apps on the taskbar and the re-enabled Quick Launch, I have everything I regularly use at my fingertips, one click away — which grants me much faster access than scrolling through last century's hierarchical Start Menu.

I don't expect you to embrace Win8, especially if you really do need XP mode (but it's hard for me to imagine that you don't have a copy of XP you could run in a VM under Win8, probably ending up with better functionality in the process). I just wish you'd tone down the negative rhetoric and stop scaring people off from giving the new OS a go. The "awfulness" of Win8 is approaching urban myth status, in part because of comments like yours.

I entered into the Win8 arena with a skeptical mindset but, after using it since March, you couldn't pay me to go back to 7.
 
Last edited:
Thanks all. I am going to stick with the video card I have now for the time being. If/when I need to upgrade as teseg suggests, I will probably buy a card/monitor combination at that time, using whatever the latest technology is then available.
 
teseg wrote:

If we are talking photography, agreed a fast card is not needed, but may want to consider an AMD FirePro so that some day when the upgrade to a 10-bit color monitor happens the OP has a GPU that can render it. Also want to consider the ports on the GPU... Displayport, Duel Link DVI, etc.. in the event multiple hi-res monitors are considered. These fast "gaming" GPUs do not address certain needs that may need to be considered if/when the OP does want to be "future proof" and elects to acquire a hi-res (2560x1440) 10-bit color and/or multi-montitor setup.
Or, Nvidia Quadro.

FirePro and Quadro cards are a bit overpriced (and that's being nice about it). AMD tends to give more "bang for the buck" with their FirePro cards.

But, both Quadro and FirePro cards are expensive compared to consumer cards with equivalent performance (IOW, the price/performance ratio stinks compared to the consumer lineups, i.e., the Radeon and GeForce series cards).

Unfortunately, even if the hardware is capable of more advanced features (e.g., 10-bit-per-channel color via a DisplayPort), Nvidia and AMD only enable 10-bit-per-channel output in the drivers for their pro series cards (FirePro or Quadro).

Now, with Linux, you can get 10-bit-per-channel color with Nvidia's consumer-level GeForce series cards if they have a DisplayPort. But that's not the case with their drivers for Windows.
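
(For anyone curious, under Linux it's roughly a matter of telling X to run at a 30-bit depth in xorg.conf. Something like the fragment below; the identifier is just a placeholder for whatever your existing config uses:)

    Section "Screen"
        Identifier "Screen0"   # placeholder; match your existing config
        DefaultDepth 30        # 30-bit = 10 bits per color channel
    EndSection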

Personally, I think 10-bit-per-channel color is more trouble than it's worth. IOW, I wouldn't spend more money to get that feature, since very few Windows apps are going to support it anyway. Photoshop CS6 does (using Quadro or FirePro cards), but most apps don't; and it just wouldn't be worth it to me to see 30-bit color on screen in only a handful of apps, as it only impacts what you see on screen, not what's in the output files.

Some people like it. Not me (I wouldn't spend a lot more to get a pro series card to see 10-bit-per-channel color on screen with apps like Photoshop, when even the Windows desktop is limited to 8 bits per channel).
 
Always a good strategy. They are virtually guaranteed to get cheaper and more powerful, and you have already paid for the one you have. They are not a big deal to switch.
 
Jim Cockfield wrote:

Some people like it. Not me (I wouldn't spend a lot more to get a pro series card to see 10-bit-per-channel color on screen with apps like Photoshop, when even the Windows desktop is limited to 8 bits per channel).

--
JimC
------
A little over $100 for a last-generation FirePro V3900 is not a lot in my book. I agree few apps tap into 10-bit today, but I could see it growing in the future, which is what this thread is about.
 
Would there be any speed advantage to the following:

Exchange the 128 GB SSD for another 256 GB drive, and then set them up as a RAID-0 array as the boot/program/cache drive?

(I would still be keeping the RAID-1 array for my two data drives)

Benefits? Dangers?

Thanks!

--
James
 
Last edited:
Doublehelix wrote:

Would there be any speed advantage to the following:

Exchange the 128 GB SSD for another 256 GB drive, and then set them up as a RAID-0 array as the boot/program/cache drive?

(I would still be keeping the RAID-1 array for my two data drives)

Benefits? Dangers?
There is practically no performance advantage to placing two SSDs in RAID 0. With HDDs, you benefit because RAID 0 overlaps seeks and rotational latency; those delays are almost non-existent with SSDs.

On the downside:

* It's more costly.

* You consume two (valuable) SATA III ports and drive bays versus one.

* The chances of a downed system from SSD failure are doubled.

* It's rather likely to prevent TRIM from working (see the note below on how to check).

In other words... it's just silly!

But we seem to have an epidemic of RAIDitis...
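
On the TRIM point: under Windows 7/8 you can check whether the OS is issuing TRIM notifications with this command (a result of 0 means TRIM is enabled):

    fsutil behavior query DisableDeleteNotify

Note that this only shows the OS-level setting; whether a RAID driver actually passes TRIM commands through to the drives is a separate question.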
 
I think running two Samsung 840 Pro 128 GB SSDs in RAID 0 will give you better performance than running a 128 and a 256 as two independent drives. And, it will save you $100. I think reliability is not an issue, as 2 times 0 is still 0.

But, on the other hand, you are into the really marginal benefit range once you install just one 128 GB SSD. Anything beyond that is probably just playing to ratchet up benchmark results.
 
malch wrote:

In other words... it's just silly!
Sooooo... what are you really trying to say??? LOL!!! :-D

Got the message loud and clear... thanks for your great response.
 
Ron AKA wrote:

But, on the other hand, you are into the really marginal benefit range once you install just one 128 GB SSD. Anything beyond that is probably just playing to ratchet up benchmark results.
Yep, I am just going to stick with one. Thanks for that link, very informative.
 
