Intel/AMD higher performance integrated GPU

  • Thread starter: Henry Richardson (Guest)
I wonder if Intel and AMD will start integrating a higher-performance GPU into their processors, particularly given what Apple has done with the M1 and the current very high prices and shortage of GPUs.

My last few laptops with Intel processors have had integrated GPUs, but they are low-performance:
  • i7-8565U - Intel UHD 620 (also has an Nvidia GeForce MX250 with 2 GB GDDR5)
  • i5-6200U - Intel HD Graphics 520
  • i7-3630QM - Intel HD Graphics 4000 (also has an Nvidia GeForce GT 650M with 2 GB)
  • i5-520M - Intel Graphics Media Accelerator HD
It has been years since I have had a desktop. These days, from what I read, there is a shortage of GPUs and/or very high prices. It would be helpful for those of us who use the Topaz products, DxO, etc., which make heavy use of the GPU, to have better integrated GPUs and not need separate ones. Separate cards would then mostly be of interest to high-end users doing video editing, gaming, etc.

Apple came out with its M1 processor last year, which integrates a multi-core ARM RISC CPU, a multi-core GPU, very high-speed RAM, etc., all on one chip. Topaz and DxO run well on it.

I have often wondered why Intel and AMD have such low-performance integrated GPUs on their processors. I have heard that the newest Intel processors have Iris GPUs, which are a bit better, but still not as good, I think.

--
Henry Richardson
http://www.bakubo.com
 
I wonder if Intel and AMD will start integrating a higher-performance GPU into their processors, particularly given what Apple has done with the M1 and the current very high prices and shortage of GPUs.
Intel made big progress with the Iris Xe in its 11th-gen Core CPUs, which you referenced. Iris Xe performance varies across Core processors, but according to videocardbenchmark.net it is slightly faster than the MX250 and slightly slower than the MX350. My daughter has it in her LG Gram 17, which stays comparatively quiet even with many tabs open in Google Chrome.

AMD followed suit with Radeon GPUs on the Ryzen "G" series. Some of them are faster than the Iris Xe, though most are not. Also, I'm not sure they are suitable for laptops.

The Apple M1 is about as fast as a GTX 1050 Ti, which is good but not great.
I have often wondered why Intel and AMD have such low-performance integrated GPUs on their processors. I have heard that the newest Intel processors have Iris GPUs, which are a bit better, but still not as good, I think.
Perhaps Intel and AMD didn't care because Nvidia GPUs were cheap and good? They are no longer both...

I have no further theories, but perhaps others do.

P.S. Remember when Google Chrome was the fastest browser? Ha ha ha ha ha...
 
AMD iGPUs are considerably faster than Intel's offerings, on the order of 3-4x. However, you cannot get them on the highest-performing Ryzen CPUs, while Intel does make an iGPU an option on its best desktop CPUs.

Intel has also put effort into including on-chip support for common graphics functions, just as Apple did with the M1. So long as your needs fit those functions, great. When they do not, a discrete GPU runs circles around it.

Let's also dispense with the notion that the M1 is this groundbreaking performer. It's not by accident that Apple wants to focus on single-threaded performance figures, or performance per watt. The high-end AMD and Intel CPUs are still in a very different class, and if you can get a 3080 or a 6800, you don't care about the performance per watt that the M1/M1X delivers for portable users.

CPU design is all about tradeoffs: how much die area you give to cache, CPU cores, and GPU cores, and how much wattage you allow. Support for legacy code is baggage; if you're willing to jettison the past, you get some quick gains, but your customers have to pay the price for it.

The one differentiator is process size: Apple and AMD have an advantage over Intel, which has struggled to move to sub-10 nm manufacturing. The picture is somewhat cloudy in that marketing has made it fuzzy what it means to do 7 nm, or 5 nm as Apple does. Advantages here are likely to be short-term ones.
 
I wonder if Intel and AMD will start integrating a higher-performance GPU into their processors, particularly given what Apple has done with the M1 and the current very high prices and shortage of GPUs.
Intel made big progress with the Iris Xe in its 11th-gen Core CPUs, which you referenced. Iris Xe performance varies across Core processors, but according to videocardbenchmark.net it is slightly faster than the MX250 and slightly slower than the MX350. My daughter has it in her LG Gram 17, which stays comparatively quiet even with many tabs open in Google Chrome.

AMD followed suit with Radeon GPUs on the Ryzen "G" series. Some of them are faster than the Iris Xe, though most are not. Also, I'm not sure they are suitable for laptops.

The Apple M1 is about as fast as a GTX 1050 Ti, which is good but not great.
I have often wondered why Intel and AMD have such low-performance integrated GPUs on their processors. I have heard that the newest Intel processors have Iris GPUs, which are a bit better, but still not as good, I think.
Perhaps Intel and AMD didn't care because Nvidia GPUs were cheap and good? They are no longer both...
I suspect that, with the way things are now, Intel and AMD may start focusing on this.
I have no further theories, but perhaps others do.
 
I wonder if Intel and AMD will start integrating a higher-performance GPU into their processors, particularly given what Apple has done with the M1 and the current very high prices and shortage of GPUs.
Intel made big progress with the Iris Xe in its 11th-gen Core CPUs, which you referenced. Iris Xe performance varies across Core processors, but according to videocardbenchmark.net it is slightly faster than the MX250 and slightly slower than the MX350. My daughter has it in her LG Gram 17, which stays comparatively quiet even with many tabs open in Google Chrome.

AMD followed suit with Radeon GPUs on the Ryzen "G" series. Some of them are faster than the Iris Xe, though most are not. Also, I'm not sure they are suitable for laptops.

The Apple M1 is about as fast as a GTX 1050 Ti, which is good but not great.
I have often wondered why Intel and AMD have such low-performance integrated GPUs on their processors. I have heard that the newest Intel processors have Iris GPUs, which are a bit better, but still not as good, I think.
Perhaps Intel and AMD didn't care because Nvidia GPUs were cheap and good? They are no longer both...
I suspect that, with the way things are now, Intel and AMD may start focusing on this.
I have no further theories, but perhaps others do.
Just for the record, the M1's GPU is rated at about 2.6 TFLOPS, while the mobile GTX 1050 Ti is about 2.48 TFLOPS. Both belong in the entry-level graphics tier. The iGPUs from Intel and AMD are not even close to this figure: Vega 11 and Xe with 96 EUs are only around 1.7 TFLOPS, really baby stuff.

The M1 Pro's 16-core GPU is 5.2 TFLOPS and the M1 Max is 10.4 TFLOPS.
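
For context on where those figures come from: theoretical FP32 throughput is usually estimated as shader ALUs × clock × 2 (a fused multiply-add counts as two floating-point ops per cycle). Here is a minimal sketch of that arithmetic; the ALU counts and clocks are approximate public figures, so treat the outputs as ballpark only:

```python
# Rough theoretical FP32 throughput: ALUs * clock (GHz) * 2 FLOPs per cycle (FMA).
# ALU counts and clocks below are approximate public figures, not exact specs.
def tflops(fp32_alus: int, clock_ghz: float) -> float:
    return fp32_alus * clock_ghz * 2 / 1000.0  # GFLOPS -> TFLOPS

gpus = {
    "Apple M1 (8-core GPU)":      (1024, 1.278),  # 8 cores x 128 ALUs
    "GTX 1050 Ti Mobile":         (768,  1.62),   # 768 CUDA cores at boost clock
    "Radeon Vega 11 (Ryzen APU)": (704,  1.24),   # 11 CUs x 64 ALUs
    "Intel Iris Xe (96 EU)":      (768,  1.10),   # 96 EUs x 8 ALUs
}

for name, (alus, clock) in gpus.items():
    print(f"{name:28s} ~{tflops(alus, clock):.2f} TFLOPS")
```

That lands right on the 2.6 / 2.48 / ~1.7 TFLOPS numbers quoted above.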

For Intel and AMD to create an SoC of that caliber will take some years. AMD is in a very good position, though, IMO. Given its position in both graphics and CPUs, it could create high-end parts for workstations. I'm thinking of an SoC with integrated Radeon Pro graphics for entry-level, mid-range, and high-end workstation laptops. Something like a Threadripper Pro laptop chip with 10 performance cores, 4 efficiency cores, and a Radeon Pro variant integrated, running 48 GB of embedded DDR5 ECC RAM, fabricated on a 5 nm GAA process.

That would be totally awesome!
 
I wonder if Intel and AMD will start integrating a higher-performance GPU into their processors, particularly given what Apple has done with the M1 and the current very high prices and shortage of GPUs.

My last few laptops with Intel processors have had integrated GPUs, but they are low-performance:
  • i7-8565U - Intel UHD 620 (also has an Nvidia GeForce MX250 with 2 GB GDDR5)
  • i5-6200U - Intel HD Graphics 520
  • i7-3630QM - Intel HD Graphics 4000 (also has an Nvidia GeForce GT 650M with 2 GB)
  • i5-520M - Intel Graphics Media Accelerator HD
It has been years since I have had a desktop. These days, from what I read, there is a shortage of GPUs and/or very high prices. It would be helpful for those of us who use the Topaz products, DxO, etc., which make heavy use of the GPU, to have better integrated GPUs and not need separate ones. Separate cards would then mostly be of interest to high-end users doing video editing, gaming, etc.

Apple came out with its M1 processor last year, which integrates a multi-core ARM RISC CPU, a multi-core GPU, very high-speed RAM, etc., all on one chip. Topaz and DxO run well on it.

I have often wondered why Intel and AMD have such low-performance integrated GPUs on their processors. I have heard that the newest Intel processors have Iris GPUs, which are a bit better, but still not as good, I think.
A couple of reasons:

- Apple owns the high-end market; high-end PC laptops aren't really profitable.

- Contrary to what you might think, sticking a mobile 3080 in a high-end laptop is extremely cheap compared to making an APU with a 3080-level GPU inside. Such an APU would cost on the order of $100 million just to start production, whereas the mobile 3080 costs almost nothing up front because it's off-the-shelf; you only pay a few hundred dollars per unit sold. So if you're only selling, say, 10k units, it's much cheaper to just stick a 3080 in there.

Think of it this way: the M1 Max is only usable in roughly three products, the MacBook Pro 14, the MacBook Pro 16, and an iMac Pro. You need to sell a lot of just those three products to justify the hundreds of millions spent on the chip. On the other hand, a cheap part like the i5-6200U can go into hundreds and hundreds of different products, which makes much more sense for PC manufacturers.
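
To make that concrete, here is a quick back-of-the-envelope amortization sketch. The dollar figures are purely illustrative assumptions (the ~$100M NRE from above plus made-up per-unit costs), but they show why low volume favours the off-the-shelf dGPU:

```python
# Hypothetical cost comparison: custom APU (big one-time NRE, lower per-unit cost)
# vs. off-the-shelf mobile dGPU (no NRE, a few hundred dollars per unit sold).
NRE_CUSTOM_APU = 100_000_000  # assumed one-time design/tape-out cost
PER_UNIT_APU = 150            # assumed per-unit silicon cost of the big APU
PER_UNIT_DGPU = 400           # assumed price paid per off-the-shelf mobile GPU

for units in (10_000, 100_000, 1_000_000, 10_000_000):
    apu_per_unit = NRE_CUSTOM_APU / units + PER_UNIT_APU
    print(f"{units:>10,} units: custom APU ${apu_per_unit:>9,.0f}/unit  "
          f"vs  off-the-shelf dGPU ${PER_UNIT_DGPU}/unit")
```

At 10k units the amortized APU cost is absurd; only at Apple-scale volumes does the custom chip become cheaper than buying a discrete GPU for every unit.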
 
Apple doesn't own anything. They're smaller than the Chromebook market.

High-end GPUs have high power needs. Even a middle-of-the-road 3070 is over 200 watts peak. Putting that into a laptop would kill your battery in a blink.

Intel has room to push iGPU performance up, and they have incentive. Rumours are that Meteor Lake will do this. AMD has less incentive, since every person using an iGPU is one less person buying a discrete GPU from AMD. But no matter what, extreme high-end and even mid-range GPU performance is going to stay on discrete cards because of the power needs.

If you look at what Intel has done over the last three generations of laptop chips, they've increased not just the performance of the iGPU but also improved the media encoder/decoder. That means even if you put a discrete card in there, you still benefit from the iGPU. Expect that trend to continue if they finally ship their desktop cards in a few months. Offloading some work to the iGPU while the discrete GPU does the heavy lifting would seem a smart move by Intel and would hurt only the competition.
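
As a rough illustration of that kind of offload: even with a discrete card installed, you can push video encoding onto the iGPU's Quick Sync media engine, for example via ffmpeg. A minimal sketch, assuming an ffmpeg build with QSV support; the file names are placeholders:

```python
import subprocess

# Encode with the Intel iGPU's Quick Sync engine (QSV) via ffmpeg, leaving the
# discrete GPU free for rendering/compute. Assumes ffmpeg was built with QSV;
# "capture.mp4" / "encoded.mp4" are placeholder file names.
subprocess.run([
    "ffmpeg",
    "-hwaccel", "qsv",     # decode on the iGPU media engine
    "-i", "capture.mp4",
    "-c:v", "h264_qsv",    # encode with Quick Sync H.264
    "-b:v", "8M",
    "encoded.mp4",
], check=True)
```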
 
Apple doesn't own anything. They're smaller than the Chromebook market.

High-end GPUs have high power needs. Even a middle-of-the-road 3070 is over 200 watts peak. Putting that into a laptop would kill your battery in a blink.
The 3070 Mobile has a TDP of 115-125 W, depending on frequency.
 
Apple doesn't own anything. They're smaller than the Chromebook market.
In 2021, Chromebook shipments will be around 40 million, vs. 7 million for MacBooks. [Statista]

Most Chromebooks are used for school, so I never see them in Internet cafes and seldom see them in website visit statistics.
High-end GPUs have high power needs. Even a middle-of-the-road 3070 is over 200 watts peak. Putting that into a laptop would kill your battery in a blink.
The 3070 Mobile has a TDP of 115-125 W, depending on frequency.
What discrete GPU has the highest performance to power ratio?
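
One rough way to rank that would be benchmark score divided by board power. A sketch of the calculation; the scores and wattages below are placeholder guesses for illustration, not measured numbers:

```python
# Rough performance-per-watt ranking: benchmark score / typical board power.
# All scores and wattages are placeholder, illustrative values only.
cards = {
    "RTX 3070 (desktop)": (13500, 220),
    "RTX 3070 Mobile":    (11500, 125),
    "RTX 3080 (desktop)": (17500, 320),
    "GTX 1050 Ti":        (6300, 75),
    "Radeon RX 6800":     (16000, 250),
}

ranked = sorted(cards.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
for name, (score, watts) in ranked:
    print(f"{name:20s} {score / watts:6.1f} points per watt")
```

Unsurprisingly, power-limited mobile parts tend to come out on top of a ratio like this, since they sit lower on the voltage/frequency curve.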
 
This will be changing soon. Intel is putting a ton of effort into its GPU architecture for its upcoming discrete cards to compete with NVIDIA and AMD. Xe is the first step in that, and while driver support is still being worked on, it's a huge step up from the older architecture. And future generations are already well into development.

AMD is sitting on a huge jump right now. They've been using Vega graphics, which are two generations back. The closest thing to a modern, fully featured APU is what's going into the Steam Deck, which uses RDNA 2. But this will likely be added to their regular consumer APUs sooner rather than later.
 
Given the price and performance of desktop Alder Lake with Xe graphics, I do not see how AMD competes without adding an integrated GPU across its entire desktop line.

They already have a successful chiplet design with Infinity Fabric, so adding an iGPU should not be all that difficult. I believe that current AMD APUs are monolithic rather than chiplet-based, but AMD has been making noise about going chiplet for its next generation of GPUs.
 
Apple doesn't own anything. They're smaller than the Chromebook market.
You can't compare high-end with bottom-tier trash; of course you will sell more trash, because most people buy the cheapest option.

The correct comparison is how much of the $1000+ market is owned by Apple.
High-end GPUs have high power needs. Even a middle-of-the-road 3070 is over 200 watts peak. Putting that into a laptop would kill your battery in a blink.
err so?

You realize you can run a chip at basically arbitrary power levels, right? That you can run a 3070 at 50 W or 500 W? You know the only difference is that lower power gives you higher efficiency but lower total performance?

It's 250 W only because that's the sweet spot where the cost of the chip and the cost of the cooling/power systems are minimized for a desktop environment.
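
That sweet-spot argument follows from the usual first-order CMOS model: dynamic power scales roughly as C·V²·f while performance scales only roughly with f, so dialing the same chip down buys efficiency and dialing it up costs efficiency. A simplified sketch; the voltage/frequency points and the constant are illustrative, not measured:

```python
# First-order dynamic power model: P ~ C * V^2 * f; performance ~ f.
# Operating points and the capacitance constant are illustrative only.
C = 130.0  # arbitrary constant chosen so "stock" lands near a desktop 3070's ~220 W

operating_points = [
    # (label, frequency GHz, core voltage V)
    ("downclocked", 1.2, 0.75),
    ("stock",       1.8, 0.95),
    ("overclocked", 2.1, 1.10),
]

for label, f, v in operating_points:
    power = C * v * v * f
    efficiency = f / power  # "performance" per watt, arbitrary units
    print(f"{label:12s} {f:.1f} GHz @ {v:.2f} V -> {power:5.0f} W, "
          f"efficiency {efficiency * 100:.2f} (arb. units)")
```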
Intel has room to push iGPU performance up, and they have incentive.
No they don't, not with the size of chips they're making. If they want better GPUs, they're going to have to make bigger chips, which will be more expensive, and which PC laptops won't be able to afford to use.
Rumours are that Meteor Lake will do this. AMD has less incentive, since every person using an iGPU is one less person buying a discrete GPU from AMD. But no matter what, extreme high-end and even mid-range GPU performance is going to stay on discrete cards because of the power needs.
No, they are staying on discrete cards because that's what they can afford to make. Look at the PS5 and Xbox Series X: they both have high-end GPUs in an SoC, and those chips are over a year old now.

They don't sell this as a socketable CPU because there's no market for it. But if you separate out the GPU, that chip becomes much cheaper and easier to sell.

Also, since you clearly don't understand: there are two paths to high performance, a bigger chip or more power; you don't necessarily have to use more power. It's just that heat pipes are usually much cheaper than chip area, so cost-sensitive markets like gaming GPUs push the chips to higher and higher power but lower and lower efficiency.
If you look at what Intel has done over the last three generations of laptop chips, they've increased not just the performance of the iGPU but also improved the media encoder/decoder.
Yeah, obviously; everybody is improving performance just because of better technology.

The real question is whether they are making bigger chips, and no, they're not.
That means even if you put a discrete card in there, you still benefit from the iGPU. Expect that trend to continue if they finally ship their desktop cards in a few months. Offloading some work to the iGPU while the discrete GPU does the heavy lifting would seem a smart move by Intel and would hurt only the competition.
 
Given the price and performance of desktop Alder Lake with Xe graphics, I do not see how AMD competes without adding an integrated GPU across its entire desktop line.

They already have a successful chiplet design with Infinity Fabric, so adding an iGPU should not be all that difficult. I believe that current AMD APUs are monolithic rather than chiplet-based, but AMD has been making noise about going chiplet for its next generation of GPUs.
That's the one area where it's not as needed, at least during normal times when GPUs are easy to get. They offer the G series of chips with iGPUs, and that handles the low to mid range, while the higher-end parts mostly go into systems that use a discrete GPU; Intel also sells a lot of F-series chips with no iGPU in this space.

It would be great, just for debugging or small-form-factor builds, if you didn't need a discrete GPU, and it would be a bonus if they offered something like Quick Sync. But that market isn't really pushing for it like the laptop space is. And that's where AMD has seen its tech lead shrink, with Xe vs. Vega.
 
Apple doesn't own anything. They're smaller than the Chromebook market.
You can't compare high-end with bottom-tier trash; of course you will sell more trash, because most people buy the cheapest option.

The correct comparison is how much of the $1000+ market is owned by Apple.
First of all, it's not nice to call Apple products trash. But you're not the first.

BTW, $1K laptops are the low end. Even $2K isn't high end.
No they don't, not with the size of chips they're making. If they want better GPUs, they're going to have to make bigger chips, which will be more expensive, and which PC laptops won't be able to afford to use.
You might want to look at history. Virtually every one of my CPUs has cost less than the system it replaced.
They don't sell this as a socketable CPU because there's no market for it. But if you separate out the GPU, that chip becomes much cheaper and easier to sell.

Also, since you clearly don't understand: there are two paths to high performance, a bigger chip or more power; you don't necessarily have to use more power. It's just that heat pipes are usually much cheaper than chip area, so cost-sensitive markets like gaming GPUs push the chips to higher and higher power but lower and lower efficiency.
Wow. High-end GPUs are about a lot more than power. Go look at the Nvidia A6000, for example. THAT's a high-end card. It uses less power than the consumer version, and it's no bigger.

Not that you'll see something like that in an Apple system. Too low end.

The real question is whether they are making bigger chips, and no, they're not.
They're NOT? Have you looked at current sockets? Have you looked at the replacements coming in two generations?
 
The one differentiator is process size: Apple and AMD have an advantage over Intel, which has struggled to move to sub-10 nm manufacturing. The picture is somewhat cloudy in that marketing has made it fuzzy what it means to do 7 nm, or 5 nm as Apple does. Advantages here are likely to be short-term ones.
Intel "10 nm" is about the same as TSMC "7 nm". These "nanometers" are not actual sizes of the transistors or other features in the IC, and hence these numbers are not comparable across companies. So I don't think Intel has been "struggling" really that much: Intel and AMD are both at the same actual density currently.

"Intel 10nm" has about 100 million transistors per mm square.

"TSMC 7nm" has about 96.5 million transistors per mm square.

"TSMC 5nm" has about 173 million transistors per mm square.

And it will probably become increasingly unclear whether these should even be compared in terms of "area" rather than "volume", as ICs become less two-dimensional.
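
The ratios between those densities tell the story more directly than the node names; a trivial sketch using the figures quoted above:

```python
# Transistor densities in millions of transistors per mm^2 (figures quoted above).
density = {
    "Intel 10nm": 100.0,
    "TSMC 7nm":    96.5,
    "TSMC 5nm":   173.0,
}

baseline = density["Intel 10nm"]
for node, mtr in density.items():
    print(f"{node:10s} {mtr:6.1f} MTr/mm^2  ({mtr / baseline:.2f}x Intel 10nm)")
```

So Intel 10 nm and TSMC 7 nm are essentially a wash, and Apple's TSMC 5 nm parts are roughly 1.7x denser than either.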
 
Intel "10 nm" is about the same as TSMC "7 nm". These "nanometers" are not actual sizes of the transistors or other features in the IC, and hence these numbers are not comparable across companies.
Sounds reminiscent of the old "megahertz war", when Intel debuted its NetBurst architecture, which broke the CPU pipeline into tiny steps and dramatically increased clock speeds. That's when the world learned that clock speed does not equal performance.
This time Intel is just changing the names to be closer to TSMC's.


But Intel is doing a pretty good job now. They are competitive with AMD again while being a process node behind.

With the single-core gains, big.LITTLE-style efficiency, and the emphasis on GPU development, it looks like they're really going to put up a fight against ARM-based CPUs. I just hope that by the 13th or 14th gen it's all in a good place. That's when I'll probably be upgrading my Surface Pro, which will benefit a lot more from that than my desktop.
 
Apple doesn't own anything. They're smaller than the Chromebook market.
You can't compare high-end with bottom-tier trash; of course you will sell more trash, because most people buy the cheapest option.

The correct comparison is how much of the $1000+ market is owned by Apple.
First of all, it's not nice to call Apple products trash. But you're not the first.

BTW, $1K laptops are the low end. Even $2K isn't high end.
A totally irrelevant argument for argument's sake; that usually happens when someone has nothing substantial to say.

Call it low end then, whatever you like; we're talking about the $1000+ segment, got it?
No they don't, not with the size of chips they're making. If they want better GPUs, they're going to have to make bigger chips, which will be more expensive, and which PC laptops won't be able to afford to use.
You might want to look at history. Virtually every one of my CPUs has cost less than the system it replaced.
What? You have some serious misunderstanding going on here.
They don't sell this as a socketable CPU because there's no market for it. But if you separate out the GPU, that chip becomes much cheaper and easier to sell.

Also, since you clearly don't understand: there are two paths to high performance, a bigger chip or more power; you don't necessarily have to use more power. It's just that heat pipes are usually much cheaper than chip area, so cost-sensitive markets like gaming GPUs push the chips to higher and higher power but lower and lower efficiency.
Wow. High-end GPUs are about a lot more than power. Go look at the Nvidia A6000, for example. THAT's a high-end card. It uses less power than the consumer version, and it's no bigger.
What? When did I say high-end cards are nothing more than power?
Not that you'll see something like that in an Apple system. Too low end.
OK... what?
The real question is whether they are making bigger chips, and no, they're not.
They're NOT? Have you looked at current sockets? Have you looked at the replacements coming in two generations?
Yes, they're not any bigger.
 
The one differentiator is process size: Apple and AMD have an advantage over Intel, which has struggled to move to sub-10 nm manufacturing. The picture is somewhat cloudy in that marketing has made it fuzzy what it means to do 7 nm, or 5 nm as Apple does. Advantages here are likely to be short-term ones.
Intel "10 nm" is about the same as TSMC "7 nm". These "nanometers" are not actual sizes of the transistors or other features in the IC, and hence these numbers are not comparable across companies. So I don't think Intel has been "struggling" really that much: Intel and AMD are both at the same actual density currently.
Yes, as already noted.

But it took Intel a long time to get to that "10", well beyond schedule, and it resulted in some major leadership changes at the company. And it still takes them roughly double the wattage of AMD.
 
