Which CPU?

Hi Jim, thank you for your help and guidance.

What would your thoughts be on the i7-4820K now that it has been released?

There is only about a £25 price difference between the 4770 and the 4820K.

My sincere thanks to you.
 
BPB wrote:

What would your thoughts be on the i7-4820K now that it has been released?
Personally, I'd still lean towards the Core i7 4770 (or 4770K if you plan on overclocking).

There are a few reviews with lots of benchmarks out now, and *every* benchmark I've looked at shows that the Core i7 4770 is slightly faster than the Core i7 4820K, except for memory bandwidth type benchmarks.

IOW, because the Core i7 4820K can use Quad Channel Memory Addressing (with 4 or 8 DIMMs), you get higher memory bandwidth.

But, the problem is that it just doesn't matter for running any app. Dual Channel Addressing works just fine.
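
As a rough back-of-the-envelope illustration (assuming DDR3-1600 memory, as in the OP's later parts list), theoretical peak bandwidth simply scales with the channel count:

    # Theoretical peak DRAM bandwidth: channels x transfer rate (MT/s) x 8 bytes per transfer.
    def peak_bandwidth_gb_s(channels, megatransfers_per_s):
        return channels * megatransfers_per_s * 8 / 1000.0

    print(peak_bandwidth_gb_s(2, 1600))  # dual channel DDR3-1600 -> 25.6 GB/s
    print(peak_bandwidth_gb_s(4, 1600))  # quad channel DDR3-1600 -> 51.2 GB/s

Real applications rarely come close to saturating even the dual channel figure, which is why the extra bandwidth doesn't show up in application benchmarks.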

So, I'd still lean towards the latest Core i7 4770 CPU, with two exceptions:

1. You want more than 32GB of memory (as a number of Socket LGA 2011 MBs will support 8 DIMM slots so you can go up to 8x8GB for 64GB total)

2. You want to use multiple Video Cards in an SLI or Crossfire Config for gaming (as the X79/Socket 2011 boards have more PCIe lanes available to allow more full speed slots).

But, for the rest of the world (single video card, 32GB of memory or less), I'd stick with the Core i7 4770 or 4770K, as it's faster anyway and most Motherboards support up to 32GB of memory (using 4x8GB).

--
JimC
------
 
Thank you Jim,

I will spend some time tomorrow and have a good study of the system that I can put together based on your advice.

I am willing to stump up the cash for a really effective specification, and in all honesty I like to have some additional headroom and additional resources, a built-like-a-tank approach. It all comes from my many years in industrial electrical engineering.

OK I'm off to do some comparisons.

My thanks again.
 
You may also want to have a look at this thread at another forum. It shows how various real world computer configurations scored for performance (including mine in the currently last post). For where you are at, I would suggest focusing on the CPU score and what they are using for a CPU. It gives you a reasonable relative idea of how different CPUs compare.

I only saw one computer that actually stood out from the rest and scored in the 9's for CPU speed. If you really want to go to the max, that is probably what it takes. It seems to me it was tested a few times, somewhere around page 30 of that thread. It looks like it is a dual-socket motherboard, potentially using two 8-core Xeon CPUs. You will find they are in a little different price class though.
 
Thanks Ron AKA,

Some interesting info in that link. Still need to go through it a couple of more times.
 
BPB wrote:

Thanks Ron AKA,

Some interesting info in that link. Still need to go through it a couple of more times.
I looked through the link too, and the only processor that caught my eye and may be worth investigating is the i7-3930K. It would provide a noticeable step up in performance from the group below it, but at a price. The i7-3960X would also provide a noticeable step up and a very slight edge over the i7-3930K; however, the price doubles again for a rather trivial benefit, so I would not suggest it.
 
Ron AKA wrote:

I total your equipment list up to £1,985. Here is what I would suggest as an alternate from that same supplier:

Processor - FX-8350 - £154

Motherboard - Asus Sabertooth 990FX R2.0 - £137

Cooler - Stock AMD - 0

RAM - G.Skill RipJawsX 2x8 GB, DDR3-1866 - £114

Power Supply - Novatech 600W - £55

Graphics Card x 2 - Sapphire HD 6450 - £60

SSD - Samsung 840 Evo 250 GB - £148

External Backup/Storage - WD My Book Live 3TB - £124

Internal D: Drive* - 2 x Seagate Barracuda 1TB - £100

Hot Swap Tray - Not Needed - 0

Card Reader - Akasa Pro 5.25 - £27

Case - CoolerMaster K350 - £35

Monitors - 2 x Dell U2412 IPS 24" - £454

* Drives configured in Windows 8 as 1 2TB Storage Space drive

This totals £1,408.
I think there are a number of processors which provide about the same level of performance: the i7-3770K, FX-8350, i7-4770, and i7-4770K. The relative ranking will depend on the degree of overclocking applied, if they can be overclocked. The performance differences among them are rather trivial. Out of this group I would still suggest that the FX-8350 is easily the best choice, as it provides the same performance at a significantly lower price.

However I gather you want to take a step above the crowd. The i7-3820 and i7-4820K would be a very small step above that group, again depending on willingness to overclock.

But, the clear step above the crowd for performance is the i7-3930K, or the latest version of it, the i7-4930K, and especially if you are willing to give it a one click auto overclock. It does come at a significant price premium. It gets you up into the socket 2011 motherboard which does have some future proofing advantages, like more room for RAM. So what would be the impact on that equipment list above, based on the i7-4930K?

Processor - i7-4930K - £445

Motherboard - Asus P9X79 - £193

The total system cost increases by £347 to £1,755. You can probably lower that price by shopping around at Amazon and the like. But this option actually does give you an increase in power.
 
Hi all,

Having read and followed up on the advice given to me through this thread so far, and what I have been able to see in other threads, I have looked at a lot of CPUs and graphics solutions, including on-board graphics and separate GPU cards.

I liked the idea of the i7-3930/4930 CPUs, but after all that has been said, even I can see that these are way over the top for photo editing, and even for video editing should I get into it.

Ron AKA, I really can see where you are pointing me with your advice on the AMD 8350 CPU. I must admit to having a bit of doubt about AMD processors, because the only PC that has ever just up and died on me contained an AMD unit, and that is what failed. I just don't seem to be able to have confidence in AMD at the moment, but that could change.

Regardless of that, I have spent some time pricing up various specifications based on my original spec but changing CPUs, GPUs, motherboards, coolers, RAM quantity, etc., and the price differential between using an i7-4820K with a Gigabyte motherboard or an AMD 8350 with an equivalent Gigabyte motherboard is only about £128.

So right or wrong (and I am sure you folks will tell me which) here is the latest configuration.

Novatech Hot Swap Internal Mobile Tray for 3.5" SATA HDD - Black x 2 @ £12.98 = £25.97

Akasa Interconnect Pro 5.25 Inch Card Reader x 1 = £27.19

Sharkoon T9 Value Gaming Case Black with Green LED Fans - (No PSU) x 1 = £50.99

Novatech PowerStation Black Edition 750W Silent ATX2 Modular Power Supply x 1 = £64.99

Toshiba DT01ACA300 3TB 64MB Cache Hard Drive SATA 6 Gb/s 7200rpm - OEM x 1 = £79.98

Crucial Ballistix Tactical 16GB (2x8GB) DDR3 PC3-12800 C8 1600MHz Dual Channel Kit x 2 (for 32GB) @ £123.98 = £247.97

LiteOn IHAS124-14 24x DVD+/-RW SATA Black - OEM x 1 = £12.98

Samsung 840 Evo Basic 250GB Solid State Hard Drive 2.5" Basic Kit with Data Migration Magician Software - Retail x 1 = £147.98

Intel Core i7-4820K 3.40GHz (Ivy Bridge-E) Socket LGA2011 Processor - Retail x 1 = £257.98

GIGABYTE GA-X79-UD3 Intel X79 (Socket 2011) Motherboard x 1 = £159.98

Arctic Freezer i30 CPU Cooler x 1 = £32.99

GIGABYTE GeForce GT 610 Silent 1GB GDDR3 x 2 @ £35.99 = £71.98

Total Price including VAT etc. = £1180.98 UK

I have opted for two GPUs to provide flexibility for monitor connections.

I might change the case to a Coolermaster Sileo 500 if I can get one.

I have also omitted the monitors from the spec at the moment because I am still looking at the 16:10 aspect monitors as suggested.

So what do you think now?
 
BPB wrote:

So right or wrong (and I am sure you folks will tell me which) here is the latest configuration.

As a fellow engineer, I appreciate what you are doing, and I see you are now also considering the 80:20 rule. Some comments:

HDD - I would still highly recommend using Windows 8 Storage Spaces to manage your drives. You will get much better (roughly double) performance by using two drives configured as one "Simple" drive; a Simple space stripes data across the drives, much like RAID 0, which is where the speed comes from (and, like RAID 0, it has no redundancy, so keep backups). Yes, it will cost a few quid more, but I think it is worth it. The HDD will be the lowest performing part of your system, and anything you can do to speed it up helps. A WD VelociRaptor drive will not be as fast as two basic drives in Storage Space Simple mode. Also, my research into drives suggested that the Seagate Barracuda is about as fast as they get (at a good price) for 7200 rpm units. The two I have are working flawlessly in the Storage Space configuration.
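
If you want to sanity-check the difference yourself once the machine is built, here is a rough sketch of a sequential read test in Python (the file path is just an example; use a test file larger than your RAM, otherwise the OS file cache will flatter the numbers):

    import time

    # Time sequential reads of an existing large file on the volume under test.
    PATH = "D:/testfile.bin"    # example path only - point this at your own multi-GB file
    CHUNK = 8 * 1024 * 1024     # 8 MiB per read

    total_bytes = 0
    start = time.time()
    with open(PATH, "rb", buffering=0) as f:
        while True:
            block = f.read(CHUNK)
            if not block:
                break
            total_bytes += len(block)
    elapsed = time.time() - start
    print(f"{total_bytes / 1e6:.0f} MB in {elapsed:.1f} s -> {total_bytes / 1e6 / elapsed:.0f} MB/s")

Run it against a file on a single drive and then against one on the two-drive Simple space, and the difference should be obvious.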

RAM - Make sure you buy sticks big enough that you do not fill your board. I believe the socket 2011 boards will take 64 GB, but you do not want to start off with all the slots full of lower capacity sticks. It becomes expensive to upgrade later.

CPU - If you must go Intel, I think the i7-4820K is a good choice. You should be able to get a 10% or so overclock with an auto setting. I would recommend doing it, provided the auto tune leaves all the power management (scaling back voltage, etc.) intact and in auto. That way it behaves almost exactly the same as a standard clocked CPU, but still goes harder when it needs to. If you do, I'm sure you will get a small step in performance over the rest of the crowd.

Motherboard - Having used both Gigabyte and Asus boards, I am currently partial to Asus. Their AI Suite is excellent for monitoring and controlling your motherboard. In particular, FanExpert is really good at making the fans on the CPU and case behave. You can set up your own custom fan speed vs temperature profile. It makes a big difference in noise. I recently made some changes in the BIOS setup of mine and it reset FanExpert back to its default settings. I couldn't figure out why all of a sudden I had a noisy PC. Then, when I realized what was going on, I reset it to my custom profile (which was not lost) and the quiet was restored. Here is a video on how it works. Can't imagine any engineer not loving this stuff...

Fan Expert 2

There are different versions of FanExpert so that is worth checking out if you are evaluating Asus boards. I don't think Gigabyte has an equivalent system.

On the specific board, I noticed that some of the custom systems are using the Asus P9X79. It looks like a good board to me with very up to date hardware. Check the details out further here at the Asus site. It has other features such as external power supply eSATA ports. This board has Fan Xpert+ which seems to be even more advanced. Some details here.

Yes, it costs a bit more, but the motherboard is not where you want to cheap out.

CPU Cooler - Yes, it is worth getting a decent cooler as this is a 130 watt CPU. I can't comment on that specific cooler. Usually it comes down to the number and size of the heat pipes, and the size of the fan(s). I have the CoolerMaster TPC-812 and it works well, but it is not the easiest to install. While it is a bit of a clunky site, check out Frosty Tech and their reviews and ratings of various coolers. Last, check that the cooler will fit the socket 2011 processor.

Graphics Card - That is a decent choice, but there is no need, and probably a disadvantage, in installing two cards. It will just be more heat load in your case. Just buy a simple 5 quid adaptor to convert the HDMI or DisplayPort output to DVI. Check that the card can run two monitors, but virtually any card can. For future proofing I would do some research into the card's support for OpenGL and OpenCL; get the latest version support that you can. I don't know for certain, but I suspect that support may be baked into the hardware and not upgradeable with a driver. Use the money saved on the extra graphics card to get the better motherboard.
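
As a rough sketch of that kind of check (this assumes the third-party pyopencl package is installed; the supported OpenGL version is easiest to read straight off the card's spec page), something like this will list what the installed driver reports for OpenCL:

    # List the OpenCL platforms/devices the installed drivers expose, and the version they report.
    # Requires the third-party "pyopencl" package (pip install pyopencl).
    import pyopencl as cl

    for platform in cl.get_platforms():
        for device in platform.get_devices():
            print(f"{platform.name}: {device.name} -> {device.version}")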

Case - Look for one that has one, or at most two, 120 mm (or bigger) fans that will plug into the motherboard. Fans on the front will be noisier. Having the power supply mounted at the bottom of a tower is better than at the top. Look for USB 3.0 ports on the case that are convenient, and potentially eSATA if that is what you plan to use. Also look for cases that have a removable panel behind the CPU area and a false bottom to route the wires. Not essential, but nice.

That is about all I can think of. Hope that helps. I think you are getting close to a good system.
 
Jim Cockfield wrote:

Intel went from Sandy Bridge, to Ivy Bridge and now Haswell architectures, with improvements with each newer generation.
This is correct for the mainstream desktop processors; Haswell is the most recently released "4th Gen Core" CPU.

They just skipped a generation with the LGA 2011 CPU models (moving from Sandy Bridge E to Haswell E, skipping the Ivy Bridge E generation entirely, even though they originally had Ivy Bridge E series models on the roadmaps).

This is wrong. Intel did not skip Ivy-E. The i7-4960X is the Ivy Bridge-E top of the line desktop CPU, and the i7-4820K is also Ivy Bridge-E. Haswell-E has not yet launched and won't launch for quite a while, and it will require a different chipset when it does. This is why the X79 chipset works with both Sandy-E and Ivy-E, versus what you've seen with Haswell, which has a different pin count and chipset than Ivy Bridge did.

The Ivy-E CPU, while one design generation behind the Haswell CPU, benefits from a number of workstation/high-end desktop features:

1) Ivy-E offers up to 6 cores and 12 threads, Haswell up to 4 cores and 8 threads. For software that benefits from more than 8 threads you will see nearly linear performance improvements with Ivy-E, theoretically up to almost 50% more than Haswell (a rough scaling sketch follows below this list).

2) Ivy-E has 40 lanes of PCIe versus 16, both Gen3 - this is important if you have multiple graphics cards, or plug in RAID controllers or any other super high bandwidth PCIe devices.

3) Ivy-E supports 64GB memory, Haswell 32GB

4) The CPU cache for Ivy-E is larger than Haswell's, so for certain applications with a large working set there will be significant performance benefits.

If none of these 4 architectural features impact your usage, then indeed, the latest design generation, i.e. Haswell, would be the better choice.
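
As a rough sketch of point 1, Amdahl's law puts numbers on "nearly linear" (this assumes equal per-core throughput, which in practice slightly flatters Ivy-E, since Haswell's per-core IPC is a bit higher):

    # Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the parallel fraction of the work.
    def speedup(p, threads):
        return 1.0 / ((1.0 - p) + p / threads)

    for p in (1.00, 0.95, 0.80):
        gain = speedup(p, 12) / speedup(p, 8) - 1.0
        print(f"parallel fraction {p:.2f}: 12 threads vs 8 -> +{gain:.0%}")
    # prints roughly +50%, +31%, and +12%

So the "almost 50%" is the ceiling for perfectly parallel work; mostly-parallel workloads land somewhere below it.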

Both are also significantly overclockable. I have both my Ivy-E 4960X and my Haswell 4770K running at 4.5GHz, which isn't a particularly aggressive overclock, as I didn't push the voltages beyond the "auto" settings.

Roland.
 
BPB wrote:

So right or wrong (and I am sure you folks will tell me which) here is the latest configuration.
Bad idea.

As I've already mentioned, the Core i7 4770 series CPUs are faster for virtually any purpose. Look at any benchmarks for applications, and they are *all* faster on the Core i7 4770 versus a Core i7 4820.

The *only* exception would be synthetic (not real application) benchmarks for memory bandwidth (where the Core i7 4820 tests faster due to quad channel addressing).

But, for virtually any real world application (single threaded or multi-threaded), the Core i7 4770 CPUs are *always* slightly faster than the Core i7 4820 series CPUs. That goes for office apps, video editing apps, image editing apps, or any other benchmarks you want to look at. The Core i7 4770 (or Core i7 4770K if you want to overclock) will *always* outperform the Core i7 4820.

As for your motherboard, it only has 4 DIMM slots. So, it only supports up to 32GB of memory (4x8GB). So, there is no advantage there either. If you needed more than 32GB of memory, then an 8 slot Socket 2011 Motherboard using a Core i7 4820 would be worth consideration over the Core i7 4770 CPU.

But, with the type of setup you listed, there is no benefit compared to a setup using a Core i7 4770 or 4770K instead (since motherboards for the Core i7 4770 CPUs will also support up to 32GB).

Let's discuss video cards...

Yes, you do have more PCIe lanes with the Core i7 4820. The problem is you'd need to use 3 or 4 very high end cards in a Crossfire or SLI config before you *might* (and I stress *might*) see any benefit. You've selected the very slowest card in the current Nvidia lineup in a two card configuration.

Guess what? Even if you went to an SLI setup using dual Nvidia Titan cards ($1K cards), the Core i7 4770 would *still* be faster. Here's one page testing gaming performance that way.

http://www.anandtech.com/show/7255/intel-core-i7-4960x-ivy-bridge-e-review/5

Benchmarks for an Nvidia Titan (8113 on passmark series tests):

http://www.videocardbenchmark.net/gpu.php?gpu=GeForce+GTX+Titan

Benchmarks for Nvidia GT 610 (346 on passmark series tests):

http://www.videocardbenchmark.net/gpu.php?gpu=GeForce+GT+610

IOW, even if you used dual cards in an SLI configuration that are *23 times as fast* as the dual GT 610 cards you have listed, a system using the Core i7 4770K would *still* be faster with those same two cards in it.

You're not going to see any benefit from the extra PCIe lanes you'd have with the Core i7 4820 unless you set up something like a Quad SLI config (using 4 very fast cards) with a board like an Asus Rampage IV Extreme, over what you'd have with a motherboard supporting the Core i7 4770 or 4770K. Even then the benefit would be questionable unless you also needed more than 32GB of memory (and the motherboard you're looking at does *not* support a Quad SLI config, and it does *not* support more than 32GB of memory).

So, my suggestion would be to save your money and go with a Core i7 4770 (or 4770K if you want to overclock), as you'd need to overclock the Core i7 4820 just to have processing speed as fast as a stock speed Core i7 4770 without any overclocking anyway.

There are lots of benchmarks around that show the Core i7 3820 versus the Core i7 4770. If you haven't found them on your own, I'd dig them back up again and post links to them.

The Core i7 4770 (or 4770K if you want to overclock) is *always* going to be slightly faster compared to the Core i7 4820 for virtually any real world app you'd want to test.

IOW, there is *no* benefit to the Core i7 4820 unless you want to go with more than 32GB of memory (and the motherboard you listed won't go more than 32GB anyway as it's a 4 slot board), or want to use something like a Quad SLI configuration for gaming with very high end video cards and/or use multiple SATA Controller cards that can take advantage of the extra PCIe lanes.

For the rest of the world (including you), the Core i7 4770 is a better bet.

If you were looking at something like an Asus Rampage IV Extreme using more than 32GB of memory and 3 or 4 high end video cards in an SLI or Crossfire config for gaming, then the story might be different. But, for your type of setup, there's no benefit to the config you listed.

More on your dual video cards... they're the very slowest desktop video cards in the current Nvidia 6xx series lineup, using a very narrow 64 bit memory bus, very few CUDA cores, etc.

Why do you think they'd give you more flexibility?

Look... as previously mentioned, I'd just go with a system using a single GTX 650 Ti instead. That card gives you one HDMI port (capable of 4K resolution, which is something you do *not* get with the GT 610), a dual link DVI-I port (meaning it can be used with a VGA adapter for older displays, or give you digital output to a newer display at 2560x1600 resolution), as well as a dual link DVI-D port (again, allowing a display using up to 2560x1600 resolution).

Specs for the GTX 650 Ti:

http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-650ti/specifications

Specs for the GT 610:

http://www.geforce.com/hardware/desktop-gpus/geforce-gt-610/specifications

Do you need more than 3 displays connected at the same time?

If not, just buy a single GTX 650Ti instead, as you could have two displays at up to 2560x1600 connected via the DVI ports (and one of them could be an older analog display using a VGA adapter from the dual link DVI-I port at a lower 2048x1536 resolution), and one of them could even be one of the newer 4K displays via the HDMI port.

So, why go with dual GT 610 cards instead? The 650 Ti is approximately 8 times as fast on most benchmarks, gives you support for 3 displays at the same time, and also supports the newer 4K models via its HDMI output (3840x2160 at 30Hz or 4096x2160 at 24Hz).

You don't get 4K output with the [dramatically slower] GT 610, and the GT 610 only supports OpenGL 4.2 (whereas the GTX 650 Ti supports the newer OpenGL 4.3 standard).

Heck... the price for a single GTX 650 Ti is probably pretty close to the dual GT 610 solution you're looking at anyway. So, I fail to see the point of going with a very slow card like that GT 610, when it doesn't even support the latest OpenGL standards, ability to output higher 4K resolution, etc.

Now... looking at your current applications, it doesn't look like most of them support GPU acceleration. But, if you use Roxio Creator a lot, it does. The 2010 version of it (announced in August 2009) supported GPU acceleration for its video transcoding features using either Nvidia CUDA or AMD Stream, with performance up to *5 times as fast* as using the CPU alone.

The latest versions also support acceleration using your GPU (Graphics Processing Unit; a.k.a., your video card).

So, it makes no sense to use an *extremely* slow card like the GT 610 if you have software that can take advantage of a faster video card.

I haven't seen any benchmarks of Roxio products for how they compare with various video cards (other than their marketing material claiming performance up to 5 times as fast using Nvidia CUDA or AMD Stream). But, if you look at similar products using the GPU, you'll see that the GT 610 is not very fast compared to other cards.

For example, Adobe Premiere Pro CS6 can use the GPU for things like timeline rendering and transcoding via Nvidia CUDA. See this review of how cards compare:

http://www.pugetsystems.com/labs/articles/Adobe-Premiere-Pro-CS6-GPU-Acceleration-162/

Basically, the GT 610 does help compared to the CPU alone. But, you really need to move into a mid range card like a GTX 650 to see the full benefit of apps using Nvidia CUDA for speeding up things like video transcoding.

Once you get up to a card as fast as a GTX 650, you will tend to see diminishing returns for most video transcoding purposes (or image editing apps that use GPU acceleration for things like raw conversion and filter applications).

Since the GTX 650 Ti is around 40% faster for most other purposes (gaming, etc.) compared to the GTX 650, it should make a very good choice for you (with support for 3 displays). For most other purposes, a GTX 650 Ti is around 8 times as fast as a GT 610.

If you want even faster with more ports, look at the GTX 660 instead (which you mentioned looking at in your very first post to this thread), as it also includes a DisplayPort (in addition to its HDMI port, dual link DVI-I port and dual link DVI-D port). I just don't see where you went from a good choice to a bad choice (GTX 660 versus dual GT 610 cards) if you care about performance for apps that can take advantage of your video card.

IOW, unless you need more than 3 displays, I'd go with a GTX 650 Ti (as it's even faster than a GTX 650, which tends to give max performance with most image editing and video editing apps that can use GPU acceleration), and it would give you support for 3 displays (one of them could be an older VGA display, and one could be a newer 4K resolution display).

Also, as time passes, I'd expect to see more and more applications taking advantage of GPU acceleration. Roxio Creator products have since 2010, Photoshop filters have since CS4 (with more GPU accelerated features with each newer release), Corel AfterShot Pro does for many of its features like RAW conversion, DxO Optics Pro uses GPU acceleration, CS6 Premiere Pro does, and the list goes on and on. Heck, even most Internet browsers can now use your GPU to speed up video rendering.

So, it makes zero sense to buy dual GT 610 cards from my perspective, when you can buy a single card supporting multiple displays at the same time for about the same price, and end up with a video card that's dramatically faster.

--
JimC
------
 
Note that the Core i7 4770 is faster than the Core i7 4820 for things like image editing.

http://www.hardwarecanucks.com/foru...-i7-4930k-i7-4820k-ivy-bridge-e-review-7.html

As pointed out in my last post pointing to another review, even if you had dual Nvidia Titan Cards in an SLI Config, the Core i7 4770 is still faster than a Core i7 4820 for things like gaming, too.

IOW, you're not going to see any benefit to the extra PCIe lanes the Core i7 4820 supports for virtually any application you'd use. You'd probably need to have a Quad SLI config with 4 very fast cards on a board like the Asus Rampage IV Extreme before you *might* see a benefit to going with the Core i7 4820 instead of the Core i7 4770.

For your type of use, I'd go with a Core i7 4770 instead, as the motherboards are probably less expensive anyway.

Now, if you wanted more than 32GB of memory, then the Core i7 4820 may be worth considering. But, the motherboard you're looking at does not support more than 32GB of memory anyway (as it only has 4 DIMM slots, unlike some of the X79/Socket 2011 MBs with 8 DIMM slots on them).

BTW, you left out the cost of the Operating System (64 bit Win 7, 8, etc.) in your configuration. Don't forget that you need to buy that part when looking at budget, unless you want to use a free OS like Linux (which is what I use the vast majority of the time, even though I also have 64 bit Win 7 installed in a multi-boot config). ;-)

--
JimC
------
 
Jim Cockfield wrote:

As I've already mentioned, the Core i7 4770 series CPUs are faster for virtually any purpose. Look at any benchmarks for applications, and they are *all* faster on the Core i7 4770 versus a Core i7 4820.
Only when you cherry-pick your benchmarks, as you seem to like to do. The sun has set on high performance single core CPUs. They only benefit games and old, out of date software. Even games are changing to use multiple cores. Very basic average benchmarks do not support your argument. The i7-4820K has a 10% advantage out of the box, and since the i7-4770 cannot be overclocked, a simple auto tune, one mouse click, will give you another 10%, for a total of about a 20% advantage over the i7-4770. Is that a huge advantage? No, but saying the i7-4770 is faster is just wrong.

CPU Comparison
As for your motherboard, it only has 4 DIMM slots. So, it only supports up to 32GB of memory (4x8GB). So, there is no advantage there either. If you needed more than 32GB of memory, then an 8 slot Socket 2011 Motherboard using a Core i7 4820 would be worth consideration over the Core i7 4770 CPU.
The P9X79 has 8 memory slots, as you can see:

[Image: Asus P9X79 motherboard showing eight DIMM slots]


Let's discuss video cards...
Please... All that discussion about video cards is irrelevant. This is not a gaming box. The basic card the OP has selected will do the job easily. He just needs to check to see that it supports up to date OpenGL and OpenCL standards. Even less of a video card would do the job. For future proofing just buy a new card if you update monitors. It will be a small cost compared to the high end monitors that potentially could not be used with the selected card.
 
They're all very close to each other (Core i7 4820 versus Core i7 4770, etc.).

IOW, you're probably not going to see any real world difference between them.

But, I would not spend more money for the Core i7 4820 solution unless you plan on installing more than 32GB of memory (and you'd need to go with a different MB than you had listed for that purpose, as the MB you selected only has 4 DIMM Slots anyway and so you'd only be able to install up to 32GB with it).

So, from my perspective, it makes no sense to go with your solution, when you can get a solution using the Core i7 4770 for less money anyway.

My biggest concern would be your video card config (dual GT 610 cards). As pointed out, that's a *very* slow card compared to other current Nvidia models. For most apps that can take advantage of GPU Acceleration, you need to move up to something like a GT 650 for best performance.

Something like a GT 645 using faster 128 Bit GDDR5 would probably be fine, too (as it would test about the same as a GTX 650)

But, a GT 610 is going to be much slower with most any app that takes advantage of your GPU.

As pointed out, more and more applications will be able to use the GPU as time passes. Even the Roxio Creator software you have has used GPU Acceleration via Nvidia CUDA or AMD Stream technology since their 2010 release (claiming performance up to 5 times as fast as the CPU alone). You'll see GPU acceleration used by many other similar apps (for example, Premiere Pro CS6 timeline rendering is up to 10 times as fast as the CPU alone with faster cards supporting Nvidia CUDA technology).

You'll see similar improvements with many other applications (DXO Optics Pro, Corel AfterShot Pro and others) for raw conversion and still image processing, too. But, for best results, you need to move into something like a GTX 650 level card (anything faster tends to give minimal improvement, but you're still going to see performance a *lot* faster with something like a GTX 650 versus the GT 610 with apps supporting GPU Acceleration).

So, as suggested, I'd probably look at something like a GTX 650 Ti (faster than the GTX 650, so you have a bit of "future proofing"), as it's got support for 3 monitors at the same time, versus something like the dual GT 610 solution you're looking at (as the GT 610 is a *very* slow card in comparison, and you could probably buy a [dramatically faster] single GTX 650 Ti for close to the same price as two of the very slow GT 610 cards).

--
JimC
------
 
Ron AKA wrote:
Jim Cockfield wrote:

As I've already mentioned, the Core i7 4770 series CPUs are faster for virtually any purpose. Look at any benchmarks for applications, and they are *all* faster on the Core i7 4770 versus a Core i7 4820.
Only when you cherry-pick your benchmarks, as you seem to like to do. The sun has set on high performance single core CPUs. They only benefit games and old, out of date software. Even games are changing to use multiple cores. Very basic average benchmarks do not support your argument. The i7-4820K has a 10% advantage out of the box, and since the i7-4770 cannot be overclocked, a simple auto tune, one mouse click, will give you another 10%, for a total of about a 20% advantage over the i7-4770. Is that a huge advantage? No, but saying the i7-4770 is faster is just wrong.

CPU Comparison
You're the one posting links to synthetic benchmarks, versus benchmarks for real world applications.

Sure, because the Core i7 4820 has quad channel memory addressing, it's going to have higher scores with some benchmarks.

But, for virtually any real world application (multi-threaded or single threaded), the Core i7 4770 is going to test slightly faster (video editing, image editing, office apps, etc. etc. etc.).

Also, I said "or the 4770K if you want to overclock" multiple times in previous posts. If you want to overclock, sure, go with a Core i7 4770K versus 4770.
As for your motherboard, it only has 4 DIMM slots. So, it only supports up to 32GB of memory (4x8GB). So, there is no advantage there either. If you needed more than 32GB of memory, then an 8 slot Socket 2011 Motherboard using a Core i7 4820 would be worth consideration over the Core i7 4770 CPU.
The P9X79 has 8 memory slots, as you can see:
Sheesh.

Pay attention. You're not even mentioning the correct brand of motherboard, much less the correct model of MB the OP is looking at.

You just mentioned and posted a photo of an Asus P9X79.

The OP has the Gigabyte GA-X79-UD3 in the latest parts list posted. That MB only has 4 DIMM slots, which support up to 32GB of total memory (4x8GB). See the specs for it here:

http://www.gigabyte.com/products/product-page.aspx?pid=4050#sp

Here's the post I'm responding to (note the MB in that parts list - the Gigabyte GA-X79-UD3, not the Asus P9X79).

http://www.dpreview.com/forums/post/52280465
Let's discuss video cards...
Please... All that discussion about video cards is irrelevant. This is not a gaming box. The basic card the OP has selected will do the job easily. He just needs to check to see that it supports up to date OpenGL and OpenCL standards.
Again, pay attention.

I've already pointed out that the GT 610 in his latest parts list does not support the latest OpenGL 4.3 standards, whereas the GTX 650 Ti that I suggested does. I also pointed out that the GTX 650 Ti supports 4K video output (and the much slower GT 610 does not).

The GTX 650 Ti is also much faster (even for simple image editing apps that support GPU Acceleration), or for video transcoding purposes (and the Roxio Media Creator software the OP is using supports GPU Acceleration using Nvidia CUDA technology -- meaning the GTX 650 TI would be much faster than the GT 610). Look at benchmarks for any app that supports GPU accelerated video transcoding using Nvidia CUDA (for example, the Premiere Pro CS6 benchmarks that I linked to), and you'll see that the GT 610 is going to be much slower than a mid range card like a GTX 650 or faster.

From a cost perspective, it would probably cost you about the same amount for a single card like a GTX 650 or 650 Ti versus dual GT 610 cards anyway (like the OP had in the last parts list).

So, from my perspective, it makes zero sense to go with the card setup the OP is looking at, when a single card solution offers multiple benefits (dramatically faster, supports newer OpenGL standards, higher video resolutions, etc.). That's what I was responding to (the OP's latest parts list), with suggestions on faster alternatives. But, you don't seem to be paying any attention to the details.

--
JimC
------
 
I have read all the posts, and it is obvious you haven't. You have often recommended the i7-4770 processor with the integrated graphics HD 4600 processor. And I agree that that graphics processor, while on the lower end for performance, is fine for image editing. It does not have the latest OpenGL and OpenCL standards, but it does meet the minimum Adobe has set for CC & CS6.

Again, there is no need for a gaming computer video card. It is simply a waste of money. As some hard evidence, instead of a long post saying nothing, here is my computer's CPU load when processing a RAW file with Sony Image Data Converter. As you can see, all 8 cores are working pretty hard - 98%. Clearly I'm getting my money's worth out of the processor.



[Screenshot: Process Explorer CPU load graph, FX-8350 at ~98%]



And here is what the GPU is doing with the same image editing task, while driving a 1920 x 1200 display. It is using all of 140 MB of memory, which is what it was using before I started the task, and is loaded to a whopping 0.8%! And this is with a mid range graphics card, similar to what the OP is suggesting. To put it bluntly, the graphics card is doing nothing other than displaying a pretty picture, without even breaking a sweat... The card is way more than is needed for image editing, and basically a waste of the extra money spent on it, as would be the money spent on those gaming cards you keep recommending.



[Screenshot: Process Explorer GPU load graph, Radeon HD 7770 at ~0.8%]
 

Ron AKA wrote:

I have read all the posts, and it is obvious you haven't. You have often recommended the i7-4770 processor with the integrated graphics HD 4600 processor.
You're obviously confusing me with another member.

I've never recommended using HD 4600 graphics. I've always suggested solutions using a discrete video card, even with the latest computers. That's because more and more applications are taking advantage of GPU acceleration now (still image processing, video editing apps, and even your internet browser).

Much of the time, I suggest a box like the XPS 8700 (and used to suggest the XPS 8500, and before that, the XPS 8300). Those *all* come with dedicated video cards.

Personally, I would not use a solution without a dedicated video card.

Now... some users may not need one [yet]. But, I use multiple applications that do take better advantage of a dedicated GPU. For example, I use Corel AfterShot Pro for image management and raw conversion, and it works better with a faster video card (using OpenCL for raw conversion, etc.).

The OP also mentioned using Roxio Creator (which has supported Nvidia CUDA or AMD Stream technology since the 2010 release to allow much faster transcoding compared to using the CPU alone).

We'll see more and more apps going that route as time passes (able to use the GPU versus the CPU for faster processing); not just image editing or video transcoding apps either, as many other kinds of apps will start to leverage GPU power.

Heck, I even use Nvidia CUDA for things like helping out with work on projects for cancer research and more, as I have my PC set up to use BOINC (Berkeley Open Infrastructure for Network Computing) software so that it works on tasks for that kind of thing from various projects. For example, I have it set up to work on a variety of projects from the World Community Grid (projects related to cancer research, solar research, clean water research, etc.), and some of them can leverage the GPU for much faster processing of tasks assigned to my PC.

From my perspective, why not let those kinds of projects use my CPU and GPU when I'm not busy (you can easily set up profiles so that your computer resources are only used when you don't need them). I don't mind helping out, as it's very simple to install the BOINC software and assign projects to it. In the case of some of the cancer research projects, work gets done dramatically faster if you allow your GPU (video card) to be used for that processing, as some of them can make use of Nvidia CUDA.
And I agree that that graphics processor, while on the lower end for performance, is fine for image editing. It does not have the latest OpenGL and OpenCL standards, but it does meet the minimum Adobe has set for CC & CS6.
Look at benchmarks. The GTX 650 or faster cards are better. The GT 610 is slow in comparison.

CS6 Benchmarks:

http://www.pugetsystems.com/labs/articles/Adobe-Photoshop-CS6-GPU-Acceleration-161/

CS6 Premiere Pro Benchmarks:

http://www.pugetsystems.com/labs/articles/Adobe-Premiere-Pro-CS6-GPU-Acceleration-162/

There's not a lot of difference between a GT 610 or GTX 650 with CS6 for still image editing.

But, as you can see, once you start getting into video rendering with apps that use Nvidia CUDA (like CS6 Premiere Pro), the benefit of a faster card like the GTX 650 is obvious.

So, since the OP's latest parts list included dual GT 610 cards, and the Roxio Creator software the OP is using has been able to use Nvidia CUDA for video related features like transcoding for several years, it makes a lot more sense to go with a single GTX 650 or GTX 650 Ti instead (as the cost difference would be minimal between dual GT 610s and a single GTX 650 Ti or GTX 650).

As pointed out, the GTX 650 Ti I suggested supports 3 displays at the same time anyway (and also gives you 4K video output via the HDMI port, with 2560x1600 output via the two dual link DVI ports), support for a newer OpenGL 4.3 standard, etc.

The GTX 650 Ti also has much wider memory bandwidth compared to the GT 610, around 10 times the processing cores, and much more. So, I'd go with the much faster single card solution instead for a variety of reasons (as the cost difference would be minimal compared to the dual GT 610 solution in the OP's latest parts list).

--
JimC
------
 
Hi Jim,

I already have an unopened full retail copy of Windows 7 Professional with both 32 bit and 64 bit DVDs included. I bought this a short time ago and was going to use it to upgrade my old machine from Vista, but decided not to waste it on an old system.

Hence the reason I have not included any allowance in any of my specs.

I must admit that I am getting a little confused again over graphics card levels.

Would the HD 4600 graphics on the 4770 CPU be good enough for what I want to do, without any independent cards?

Thanks again for your input.
 
Hi Jim,

It looks like my last post crossed with your post. You have just answered the question that I asked.

Thank you.
 
Jim Cockfield wrote:
Ron AKA wrote:

I have read all the posts, and it is obvious you haven't. You have often recommended the i7-4770 processor with the integrated graphics HD 4600 processor.
You're obviously confusing me with another member. I've never recommended using HD 4600 graphics.
That is too bad, as I thought we finally agreed on something ;-).

Could you do a graphics editing task with some image editing software and show me the Process Explorer graph showing the card actually doing something? That is something I have not managed to do. If I had MuseMage I would be interested to see if it really could load the graphics processor. They claim they actually do. Photoshop claims to use OpenGL, but based on their requirements, it is obviously not to any great extent.

Sorry, I still fail to see the benefit of using a gaming card to edit images...
 
