Intel/AMD higher performance integrated GPU

  • Thread starter: Henry Richardson
The one differentiator is process size: Apple and AMD have an advantage over Intel, which has struggled to move to sub-10 nm manufacturing. It is somewhat murky, since marketing has blurred what it actually means to be at 7 nm, or at 5 nm as Apple is. Any advantage here is likely to be short-lived.
Intel "10 nm" is about the same as TSMC "7 nm". These "nanometers" are not actual sizes of the transistors or other features in the IC, and hence these numbers are not comparable across companies. So I don't think Intel has really been "struggling" that much: Intel and AMD are currently at roughly the same actual density.
Yes, as already noted.

But it took Intel a long time to get to that "10 nm," well beyond schedule, and it led to a major leadership change at the company. And their chips still draw roughly double the wattage of AMD's.
They're actually a lot closer for a given level of performance right now. Compare the i7 with the Ryzen 5800X or 5900X: the i7 is more efficient for lighter workloads.

The issue is the i9, where Intel went outside the efficiency envelope to take the performance crown. And it's up against the 5950X, which, thanks to binning, is usually more efficient than the 5900X.
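To make "outside the efficiency envelope" concrete, here's a tiny performance-per-watt sketch. The numbers are invented purely for illustration, not measured results:

```python
# Hypothetical numbers for illustration only -- not benchmark results.
configs = {
    "chip at a sane power limit":     {"score": 100, "watts": 125},
    "same chip pushed for the crown": {"score": 108, "watts": 240},
}

for name, c in configs.items():
    points_per_watt = c["score"] / c["watts"]
    print(f"{name}: score {c['score']} at {c['watts']} W -> {points_per_watt:.2f} points/W")

# Roughly 8% more performance for nearly double the power:
# that's what falling out of the efficiency envelope looks like.
```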

I'm still really happy with my 3900X, especially compared to what was out there at the time, and AMD still has strong points overall. But Alder Lake is a pretty great achievement from Intel. The big question is how it will scale down to 15 W and 45 W mobile chips.
 
The big question to me is whether the next AMD release puts distance in again, or whether Intel keeps the momentum. The 5900X was released over a year ago. And will the next one be a drop-in option for our 3900X motherboards, and worth doing?

Alder Lake's gains rely in large part on DDR5, which is still a bit scarce. You have to pay for it, at present. By the time that's not true, AMD
 
The V-Cache Zen processors have me really intrigued, because that could be a huge boost for certain workloads, especially since the 5900X itself isn't much of an improvement for my main bottleneck (Lightroom imports). So fingers crossed it has a big impact there.

That being said, if you look at Puget Systems' Lightroom and Photoshop benchmarks, the i7-12700K does really well with DDR4, with scores right up there with the 5900X/5950X. I'd expect some gain with DDR5, as with the i9, but it isn't needed for a solid system, and you don't have the heat issues of the i9, which pushes a good amount of additional power for that last little bit of performance.

Really, there's no wrong answer right now. If you don't need many motherboard features you can get a B550 board for a good bit less than a Z690 and you're good to go, and there are other configurations that work best for each use case. I hope they keep leapfrogging each other like this; the additional pressure from ARM will only help push innovation.
 
They will do what their manufacturing partners want them to do. It used to be that every I/O function was a separate card. Most people don't need a graphics card to browse the internet, send email, and do word processing. Most laptops sold don't have discrete graphics cards, and I'd be surprised if desktops don't wind up the same way.

There will still be a need for discrete GPUs for gamers, creators, and a few other applications.

Morris
 
I wonder if Intel and AMD will start integrating a higher-performance GPU into their processors, particularly when you consider what Apple has done with the M1 and the current very high prices and shortage of GPUs.

My last few laptops with Intel processors have had integrated GPUs, but they are low performance:
  • i7-8565U - Intel UHD 620 (also has an Nvidia GeForce MX250 with 2 GB GDDR5)
  • i5-6200U - Intel HD Graphics 520
  • i7-3630QM - Intel HD Graphics 4000 (also has an Nvidia GeForce GT 650M with 2 GB)
  • i5-520M - Intel Graphics Media Accelerator HD
It has been years since I have had a desktop. These days, from what I read, there is a shortage of GPUs and/or very high prices. It would be helpful for those of us who use the Topaz products, DxO, etc., which make heavy use of the GPU, to have better integrated ones and not need a separate card. Discrete cards would then be mostly of interest to high-end users doing video editing, gamers, etc.

Apple came out with its M1 processor last year, which integrates a multi-core ARM CPU, a multi-core GPU, very high-speed RAM, etc., all on one chip. Topaz and DxO run well on it.

I have often wondered why Intel and AMD have such low-performance integrated GPUs on their processors. I have heard that the newest Intel processors have Iris Xe GPUs, which are a bit better, but still not as good, I think.
I don't know how relevant these benchmarks are for use with Topaz, DxO, Lightroom, Photoshop, etc., but this comparison covers the MX250 vs. MX450 vs. Iris Xe:

https://www.videocardbenchmark.net/...Force-MX450-vs-Intel-Iris-Xe/4076vs4296vs4265

[Screenshot: videocardbenchmark.net comparison of the GeForce MX250, GeForce MX450, and Iris Xe]

Unfortunately it does not include the Apple M1. Anyone know of some comparisons?

--
Henry Richardson
http://www.bakubo.com
 
Here are a couple of screenshots from a YouTube video showing the difference in performance between the M1 and an XPS i7 with Iris Xe for video processing.



[Screenshots: video processing times, Apple M1 vs. XPS i7 with Iris Xe]



--
Fronterra Photography Tours
One Lens, No Problem
The Point and Shoot Pro
 
Can you post a link to the video? It would be interesting to dig into this, especially if those are early emulated versions of the apps, to see the performance gains, since native versions didn't even arrive until the spring/summer, well after the M1 launch. In DPReview's test against a high-end gaming laptop, the M1 was slower with DaVinci Resolve, but only by a little: 0:56 vs. 1:04.

 

He does test both emulated and optimized versions; the optimized times are the ones shown here. I will try to link the video today. The M1 is not some magical fairy-dust chip. It is a great piece of tech, but it cannot overcome the raw power of Intel on heavy tasks.

--
Fronterra Photography Tours
One Lens, No Problem
The Point and Shoot Pro
 
The announcement is going on right now. I'm a bit behind and still on mobile, but AMD is finally moving its integrated graphics to RDNA2 and claiming an average 2x increase in performance. Since Vega is so old, that is believable, but the big thing to wait for is how specific tasks do with it. DDR5 memory is also helping that performance.

Edit: and they have up to 12 RDNA2 CUs. That's 50% more than Valve's upcoming Steam Deck, and it looks like it could run at much higher clock speeds.
 
Those DaVinci benchmarks are obsolete, because Resolve 17.4 delivered native M1 support in late October 2021. I don't know about Adobe Premiere. According to this tweet, Resolve 17.4 is 5x faster, although they don't say 5x faster than what, exactly.

Bottom line: a high-end GPU will outperform an M1 Pro or Max for many graphics tasks, while helping to heat your home.
 
My notebook or workstation does not get hot at all.
 
Just to add to this now that Intel has had its presentation: it looks like they are going all in on CPU performance this generation, and the iGPU will be about the same as 11th gen.

It makes sense, though, since they will be focusing on shipping dedicated GPUs this year. If (and that's a big if) their claims about the performance gain from combining the integrated and dedicated GPUs for creative tasks hold true, and other companies support it, that might help them punch above their weight in creative workloads. Not exactly the topic of this thread, but it sounds promising.
 
Sorry, but I don't agree that Intel Xe sounds promising. Looking at the leaks, Intel's discrete graphics are still way behind AMD and Nvidia.

When Intel presented Xe as modular, scaling from 1 tile to 2 and 4 tiles, they were claiming they would compete in the high-end graphics space. I was like... W*F??

Single-tile performance is about 10 TFLOPS, but at that time Nvidia was already selling the RTX 3080 at 29.7 TFLOPS and the RTX 3090 at 35 TFLOPS. So four tiles of Xe would be needed to compete with a single RTX 3090. And at the current CES, Nvidia just announced the RTX 3090 Ti crunching 40 TFLOPS on a single card. How much power would a four-tile Xe need to crunch the same 40 TFLOPS?
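As a back-of-the-envelope check of that claim, here's a minimal sketch using the figures quoted above (the ~10 TFLOPS-per-tile number is the estimate from this discussion, and it optimistically assumes perfectly linear multi-tile scaling):

```python
import math

XE_TFLOPS_PER_TILE = 10.0  # rough single-tile Xe figure quoted in this thread

# Peak FP32 throughput of the Nvidia cards mentioned above.
targets = {
    "RTX 3080": 29.7,
    "RTX 3090": 35.0,
    "RTX 3090 Ti": 40.0,
}

for card, tflops in targets.items():
    tiles = math.ceil(tflops / XE_TFLOPS_PER_TILE)  # tiles needed under linear scaling
    print(f"{card} ({tflops} TFLOPS): at least {tiles} Xe tiles")
```

Even under that generous assumption it takes four tiles to reach 3090-class throughput, which is exactly where the power-draw question comes in.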

My conspiracy theory is that the graphics engineer Intel poached from AMD was actually a toxic tumour. Good riddance for AMD! Hahahahaha
 
I don't think any of the rumours claimed the initial launch was aimed at the 3090 or even the 3080. The high-end card is aimed at the 3070.

From the sounds of it, they have enough production and parts lined up that they'll pretty much out-ship AMD for second place. Remember, right now just having ANYTHING on the shelf means you'll sell it. If the new lineup actually hits 3070 levels at reasonable prices, that's a bonus.

The stuff aimed at the 3090, or more likely the Nvidia A line, is next generation. But if you can afford an A6000, that's a totally different market.

The other point, considering this isn't a gaming forum, is that the AMD cards don't tend to test very well in many non-gaming benchmarks. The question is: will the Intel cards?
 
I don't recall it being rumours; Intel did claim to aim their Xe discrete graphics at the high, mid, and low end against AMD and Nvidia, with Xe scalable enough to combine 2 tiles and 4 tiles.

For the record, the RTX 3070 has 20.3 TFLOPS and the "lowly" RTX 3060 has 12.7 TFLOPS. The AMD 6600 XT has 10.6 TFLOPS. Unless Intel's definition of a high-end graphics solution is just 10 TFLOPS. They even claimed their high end will be used in their supercomputer for AI and ML. After reading that I flipped! What kind of low-caliber people are running Intel now??

The recently announced Intel DG2-448 is speculated to be about 16 TFLOPS based on the specs. It's not even on sale today, and the RTX 3070 already has a Ti version doing 21.7 TFLOPS. I don't think even the highest-end DG2-512 can touch the "old" RTX 3080. From what it looks like so far, to me Intel is even behind AMD: the 6800 XT has 20.7 TFLOPS.

I read that Intel is producing Xe in-house on their 10 nm process. I'm not sure how much power the cards will draw in order to hit 20 TFLOPS; gauging from the current Alder Lake power draw, likely on par with AMD and Nvidia.

BTW, I have the AMD Vega 64 and have done some home videos at 4K, and it's really not bad at all. I don't do benchmarks, though.
 
I don't think any of the rumours claimed the initial launch was aimed at the 3090 or even the 3080. The high end card is aimed at the 3070.
The x60 tier represents the midpoint of the market, so having a goal of matching the x70 is a reasonable target, especially for a company with a history of low-performance iGPUs. If Intel can produce in quantity, a 3070 analog would serve a giant market that is currently sitting unserved due to the lack of supply.
 
The supercomputer is using a totally different setup; it's from the HPC division.

While I'm sure there is some crossover, look at the machine on that page: it's not a gaming card.

I can edit 4K video on my four-year-old iGPU, so just saying "edit" doesn't mean much.
 
BTW, the GPUs are made on TSMC 6 nm, IIRC; Intel outsourced at least this generation.
 
I don't see how that's a massive loss. The 3090, while a great halo card for the lineup, is something few people will buy. The 3060-3070 range is where the bulk of sales are, so if Intel can get a solid first-generation product into the biggest part of the market, that would be a win for them.

Say their top end is closer to an RTX 3070, which is what I've been seeing on rumor sites. If they can do that at a price lower than the 3070's, or in today's market just have stock available, it'll be a success for Intel.

And on the creator side, if this iGPU/dGPU cooperation pans out, it would let the card punch above its weight: while a 3070 in terms of gaming, it might be more like a 3080 for some content-creation workloads. That would be a big selling point.

TFLOPS, like MHz, is best used as a measuring stick within a single architecture. This video is about gaming, but it includes tests showing a 4.3 TFLOPS GPU outperforming 6 TFLOPS GPUs.
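To illustrate why that happens, here's a toy model of "effective" throughput: peak TFLOPS scaled by how well a given workload actually utilizes each architecture. The utilization factors are invented for illustration; only the peak numbers echo the video's example:

```python
# Toy model: effective throughput = peak TFLOPS * workload utilization of the architecture.
# Utilization factors are invented for illustration only.
gpus = [
    {"name": "architecture A card", "peak_tflops": 4.3, "utilization": 0.85},
    {"name": "architecture B card", "peak_tflops": 6.0, "utilization": 0.55},
]

for g in gpus:
    effective = g["peak_tflops"] * g["utilization"]
    print(f"{g['name']}: {g['peak_tflops']} peak TFLOPS -> ~{effective:.2f} effective TFLOPS")

# The 4.3 TFLOPS part comes out ahead, which is the kind of result the video shows.
```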


And then there's the whole issue of driver and application support, which can have a major impact on performance. Intel says they're making a huge push on that front, but we'll have to see.
 