Monday, February 10th 2020

Intel Xe Graphics to Feature MCM-like Configurations, up to 512 EU on 500 W TDP

A reportedly leaked Intel slide, published via Digital Trends, has given us a load of information on Intel's upcoming take on the high-performance graphics accelerator market - whether in its server or consumer iterations. Intel's Xe has already been the cause of much discussion in a market that has only really seen two real competitors for ages now - the arrival of a third player with the muscle and brawn of Intel against the already-established NVIDIA and AMD would surely spark competition in the segment, and competition is the lifeblood of advancement, as we've recently seen with AMD's Ryzen CPU line.

The leaked slide reveals that Intel will be looking to employ a Multi-Chip-Module (MCM) approach for its high-performance "Arctic Sound" graphics architecture. The GPUs will be available in configurations of up to four "tiles" (the name Intel is giving each module), joined via Foveros 3D stacking (first employed in Intel's Lakefield). The leaked slide shows Intel's approach starting with a 1-tile GPU (with only 96 of its 128 total EUs active) for the entry-level market (at 75 W TDP), a la the DG1 SDV (Software Development Vehicle).
Then we move towards the midrange market with a 1-tile, 128 EU unit (150 W), a 2-tile, 256 EU unit (300 W) for enthusiasts, and finally a 4-tile unit with up to 512 EUs - a 400-500 W beast reserved for the data center. This last one is known to be data-center-only because the leaked slide (assuming it's legitimate) points to a 48 V input voltage, which isn't available on consumer platforms. By design, each EU has access to the equivalent of eight graphics processing cores, so a 512 EU part would address the equivalent of 4,096 cores. That's a lot of addressable hardware, but we'll see if both the performance and the power efficiency are there in the final products - we hope they are. Sources: Digital Trends, via Videocardz
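For a quick back-of-the-envelope view of the rumored lineup - a minimal sketch assuming the slide's eight-cores-per-EU figure and the reported EU counts and TDPs, none of which Intel has confirmed:

```python
# Hypothetical sketch of the leaked Xe lineup. The 8-cores-per-EU figure
# comes from the slide; the EU counts, TDPs, and segments are rumored.
CORES_PER_EU = 8

lineup = [
    # (tiles, active EUs, TDP in watts, segment)
    (1,  96,  75, "entry level (DG1 SDV-like)"),
    (1, 128, 150, "midrange"),
    (2, 256, 300, "enthusiast"),
    (4, 512, 500, "data center (48 V input)"),
]

for tiles, eus, tdp, segment in lineup:
    cores = eus * CORES_PER_EU  # EU count times core-equivalents per EU
    print(f"{tiles}-tile: {eus:3d} EUs -> {cores:4d} core-equivalents, "
          f"{tdp:3d} W, {segment}")
```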

49 Comments on Intel Xe Graphics to Feature MCM-like Configurations, up to 512 EU on 500 W TDP

#26
Midland Dog
ppn
Then it will be a 300 W card against a hypothetical GTX 2660 Super at very close to 100 W (7 nm+, 2048 CUDA cores, 128-bit).
There won't be GTX _660 SKUs this time; I expect RTX will be coming to the 3050.
Posted on Reply
#27
cucker tarlson
Khonjel
At worst, Intel produces a mediocre product but AMD falls behind because Intel does driver support very well.
Or there's a performance penalty on every driver with new architectural vulnerabilities.

But on a serious note, NVIDIA has been laughing at how easy they've had it these last couple of years.

Even if Intel is slower, merely producing new cards would invigorate the market.

AMD has no cards above the TU106-equivalent RX 5700 XT. That's how sad it is.
Posted on Reply
#28
theoneandonlymrk
Wow, Intel are in the shiz. A 500 W 2070? My old Vega 64 would piss on it, and yet it still gets called shit by you lot for its massive power draw (total BS in the hands of a tuner, btw).

This is going to need at least two respins before it's consumer-ready. See you in 2021, Intel GPU. Shame.
Posted on Reply
#29
Vya Domus
lemonadesoda
CUDA and OpenCL are severely limited for decision-based algorithms, are not great at AI, and have appalling latency for certain tasks. They are good at processing vast quantities of data with rudimentary transformations.
It's not the software, it's the hardware that is limited in performance or capability for things such as branching and synchronization across threads. If you make it so that it can do those things well, it becomes worse at everything else. Trying to address a broader compute spectrum is a bad idea; we already have high-core-count CPUs meant for those workloads.
Posted on Reply
#30
Khonjel
cucker tarlson
AMD has no cards above the TU106-equivalent RX 5700 XT. That's how sad it is.
The biggest slam dunk in my memory is when GK104 was so powerful it became the GTX 680, while its predecessor GF104/GF114 was a mere GTX 460/560 Ti. **104 has been NVIDIA's high-end chip ever since. It's kinda like watching Mercedes dominate F1 since 2014.
theoneandonlymrk
Wow, Intel are in the shiz. A 500 W 2070? My old Vega 64 would piss on it, and yet it still gets called shit by you lot for its massive power draw (total BS in the hands of a tuner, btw).

This is going to need at least two respins before it's consumer-ready. See you in 2021, Intel GPU. Shame.

At least Intel has the money to burn and improve, even if the rumor is true. And a few months ago there were rumors that both NVIDIA and AMD are working on MCM GPUs.

The last high-end AMD card that actually delivered was the R9 290X. Let that sink in. The R9 390X was a bloody rebadge, and later the R9 Fury, Radeon VII, and Vega were late and duds.
Posted on Reply
#31
theoneandonlymrk
Khonjel
The biggest slam dunk in my memory is when GK104 was so powerful it became the GTX 680, while its predecessor GF104/GF114 was a mere GTX 460/560 Ti. **104 has been NVIDIA's high-end chip ever since. It's kinda like watching Mercedes dominate F1 since 2014.

At least Intel has the money to burn and improve, even if the rumor is true. And a few months ago there were rumors that both NVIDIA and AMD are working on MCM GPUs.

The last high-end AMD card that actually delivered was the R9 290X. Let that sink in. The R9 390X was a bloody rebadge, and later the R9 Fury, Radeon VII, and Vega were late and duds.
You can talk whatever shite you want, mate, but do try to keep on topic. We're talking Intel and a 500 W GPU here.

How about you comment on that? AMD and NVIDIA mean naught to my point, so stick your point back up your nose and stop trying to start flame wars.
Posted on Reply
#32
kapone32
I, for one, hope that Intel does have a competitive product, if for nothing else than to drive down prices.
Posted on Reply
#33
kings
theoneandonlymrk
Wow, Intel are in the shiz. A 500 W 2070? My old Vega 64 would piss on it, and yet it still gets called shit by you lot for its massive power draw (total BS in the hands of a tuner, btw).

This is going to need at least two respins before it's consumer-ready. See you in 2021, Intel GPU. Shame.

Have you read the news piece? The 400-500 W figure is for a data center solution; the consumer cards will top out at 300 W for the high-end one.

Where did you come up with a 500 W card for Vega 64 performance?
Posted on Reply
#34
bug
Chrispy_
Those are looking like some pretty grandiose plans for a company that hasn't launched a GPU in 22 years.

I'm hoping for competition as much as the next guy, but let's see if they can get the baby steps right and make a viable dGPU that people might want to buy first.

After all, if it's not a success, Intel will just can it, and all of these roadmap ideas will be archived like Larrabee was.
They've released plenty of GPUs. They didn't release dGPUs, but that doesn't mean they've been out of the loop for 20+ years.
TheGuruStud
They forgot the tiny print: 75% of power consumption is the interconnect. Efficiency is trash, we won't really produce this, but we have to market something.
Infinity Fabric accounts for a lot of the power draw in Zen designs, too. That doesn't mean AMD can't produce Zen.
Posted on Reply
#35
TheUn4seen
kings
Have you read the news piece? The 400-500 W figure is for a data center solution; the consumer cards will top out at 300 W for the high-end one.

Where did you come up with a 500 W card for Vega 64 performance?
To me it looks suspiciously like the guy is trying to justify a disappointing purchase motivated by hype. I'm of course speculating, but I've personally met a few people just like that, and not only in the PC hardware space.

As for the Intel GPUs, I fail to see any point in speculating based on an unconfirmed snippet of information. On the one hand, Intel can throw money at the problem; on the other, it's easy to ruin a perfectly good product with a small oversight.
Posted on Reply
#36
theoneandonlymrk
TheUn4seen
To me it looks suspiciously like the guy is trying to justify a disappointing purchase motivated by hype. I'm of course speculating, but I've personally met a few people just like that, and not only in the PC hardware space.

As for the Intel GPUs, I fail to see any point in speculating based on an unconfirmed snippet of information. On the one hand, Intel can throw money at the problem; on the other, it's easy to ruin a perfectly good product with a small oversight.
@kings 400-500 W is for the data center only, and these are all rumors; other rumors say 1-4 tiles will make up DG2, the consumer high end. But back to 400-500 W: is that going to compare well to Arcturus or NVIDIA's Hopper? I doubt it. Rumors also indicate that the original use case for Intel's GPU (streaming) has left the waiting room already, and that the GPGPU facilities of this chip cost it efficiency and die space. We will see, but the rumors are not sounding good to me.

@TheUn4seen I owned an FX-8350 and I owned Vega; neither used the wattage in practice that people perceived, but they got slammed (rightly so, on efficiency). My Vega 64 has done millions of Folding@home work units all day, every day since purchase at 110-150 W, and does 4K gaming on every game I have played at 180-250 W. If it works, it works. I bought and installed a 1080 Ti that gets 10-15% better FPS, but it would have cost me 10-15% more. I'm fine with my over-hyped (true, but mostly by others than AMD) GPU.

So then, why is Intel different? Why shouldn't they get stick for guzzling wattage? Answer that one question.


I am certainly not hyping this, and I certainly don't want Intel to fail, but my disappointment in what's been shown (DG1 and this) is what it is, and as I said, I now think Intel is a non-starter in the GPU space until 2021 at the earliest (I think I'm being optimistic here, btw), unfortunately.

AND other than the Foveros bit of this tech, it sounds less interesting as a commercial purchase than any of its competition, BY FAR.


AND EVERYONE keeps saying Intel has the money. True, they certainly do - let's see what the shareholders have to say in 2021, shall we? They love throwing their dividend at the wall hoping it sticks - hence the massive reorganization lately. It was all going to plan, eh?

IT takes years to bring change into a chip architecture; not a year, not two - years, MANY years, like 3-5. If they f up, like Larrabee, they can't change tack that easily and, despite it all, will have to crack on. So by now, positive news regarding their GPUs is what we need to be seeing, not this.
Posted on Reply
#37
Super XP
Wow, 500 W, that is way too much.
Posted on Reply
#38
ratirt
MCM and two chips. I'm interested in how this chip will work. Will it be recognized as one chip, or as two separate ones like SLI? I wonder how the link would work.
AMD is trying (or maybe was) to get two chips connected via IF. Is Intel trying to beat AMD to the punch?
Posted on Reply
#39
londiste
ratirt
Will it be recognized as one chip, or as two separate ones like SLI? I wonder how the link would work.
Multiple chips.

Having multiple chips presented as one GPU has benefits for gaming and other real-time rendering. For GPGPU/HPC/AI workloads it generally does not matter, and presenting the chips as they physically are is probably more beneficial for control and efficiency.

If Intel has figured out how to efficiently combine multiple chips into one GPU, they have a pretty good jump on both AMD and NVIDIA, who have been trying to get there for at least a decade.
Posted on Reply
#40
ratirt
londiste
If Intel has figured out how to efficiently combine multiple chips into one GPU, they have a pretty good jump on both AMD and NVIDIA, who have been trying to get there for at least a decade.
This is what I care about. I know it hasn't been done so far, and I'm curious. It's not a matter of gaming or workstation stuff; I just want to know if they have managed to do it and if it works OK.
Posted on Reply
#41
londiste
ratirt
This is what I care about. I know it hasn't been done so far, and I'm curious. It's not a matter of gaming or workstation stuff; I just want to know if they have managed to do it and if it works OK.
I am willing to bet they have not.

AMD and NVIDIA have put more time and effort into this than anyone, and they do not have a viable solution for combining multiple dies into a single GPU. Even if the solution were something exotic and expensive, considering what workstation cards go for, either one of them would have deployed it.
Posted on Reply
#42
ratirt
londiste
I am willing to bet they have not.

AMD and NVIDIA have put more time and effort into this than anyone, and they do not have a viable solution for combining multiple dies into a single GPU. Even if the solution were something exotic and expensive, considering what workstation cards go for, either one of them would have deployed it.
Probably. I was curious how they have managed to get the chips connected and what it looks like. Maybe it is too early yet.
Posted on Reply
#43
theoneandonlymrk
ratirt
Probably. I was curious how they have managed to get the chips connected and what it looks like. Maybe it is too early yet.
Directly via Foveros and EMIB interconnects - chip-edge-type connections which likely incorporate through-silicon vias at the edge of the silicon - so any tile-based rendering would be per tile, and there does not seem to be a separate managing (for want of a better word) chip, so the control (to regulate frame presentation) must be built into the chips or the driver.
Posted on Reply
#44
ratirt
theoneandonlymrk
Directly via Foveros and EMIB interconnects - chip-edge-type connections which likely incorporate through-silicon vias at the edge of the silicon - so any tile-based rendering would be per tile, and there does not seem to be a separate managing (for want of a better word) chip, so the control (to regulate frame presentation) must be built into the chips or the driver.
So, the way it used to be. Well, in that case, nothing special.
Posted on Reply
#45
theoneandonlymrk
https://wccftech.com/exclusive-intel-xe-hp-4-tile-500w-gpu-eu-count-leaked-no-its-not-512/

Quoted from WCCFTech:
"Here are the actual EU counts of Intel's various MCM-based Xe HP GPUs along with estimated core counts and TFLOPs:
  • Intel Xe HP (12.5) 1-Tile GPU: 512 EUs [Est: 4,096 cores, 12.2 TFLOPs assuming 1.5 GHz, 150 W]
  • Intel Xe HP (12.5) 2-Tile GPU: 1024 EUs [Est: 8,192 cores, 20.48 TFLOPs assuming 1.25 GHz, 300 W]
  • Intel Xe HP (12.5) 4-Tile GPU: 2048 EUs [Est: 16,384 cores, 36 TFLOPs assuming 1.1 GHz, 400 W/500 W]"
I take back my mocking if this bit is true; they may have something in 2021 that might compete. It still needs some die shrinking and power optimization, but this sounds much more promising as a development step.
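For what it's worth, those TFLOPs estimates do check out arithmetically - a rough sketch, assuming 8 cores per EU and 2 FLOPs per core per clock (one fused multiply-add), with the clock speeds being WCCFTech's guesses, not confirmed figures:

```python
# Sanity check of the quoted TFLOPs estimates. Assumes 8 cores per EU and
# 2 FLOPs per core per clock (one FMA); clocks are WCCFTech's assumptions.
FLOPS_PER_CORE_PER_CLOCK = 2
CORES_PER_EU = 8

configs = [
    # (tiles, EUs, assumed clock in GHz)
    (1,  512, 1.50),
    (2, 1024, 1.25),
    (4, 2048, 1.10),
]

for tiles, eus, ghz in configs:
    cores = eus * CORES_PER_EU
    tflops = cores * FLOPS_PER_CORE_PER_CLOCK * ghz / 1000  # GFLOPs -> TFLOPs
    print(f"{tiles}-tile: {eus} EUs, {cores} cores @ {ghz} GHz -> {tflops:.2f} TFLOPs")
```

That prints roughly 12.29, 20.48, and 36.04 TFLOPs, which lines up with the quoted 12.2/20.48/36 figures.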
Posted on Reply
#46
yeeeeman
Up to 512 EUs × 4 for 500 W. The 500 W figure is for the 4-chiplet variant, and each chiplet is 512 EUs.
There is no way in this world for a 96 EU iGPU to fit into a 15 W TDP while a 512 EU part requires 500 W.
Posted on Reply
#47
theoneandonlymrk
yeeeeman
Up to 512 EUs × 4 for 500 W. The 500 W figure is for the 4-chiplet variant, and each chiplet is 512 EUs.
There is no way in this world for a 96 EU iGPU to fit into a 15 W TDP while a 512 EU part requires 500 W.
It's rumoured the consumer version will top out at two tiles, 1024 EUs, and a 300 W TDP.
Posted on Reply
#48
Vayra86
theoneandonlymrk
https://wccftech.com/exclusive-intel-xe-hp-4-tile-500w-gpu-eu-count-leaked-no-its-not-512/

Quoted from WCCFTech:
"Here are the actual EU counts of Intel's various MCM-based Xe HP GPUs along with estimated core counts and TFLOPs:
  • Intel Xe HP (12.5) 1-Tile GPU: 512 EUs [Est: 4,096 cores, 12.2 TFLOPs assuming 1.5 GHz, 150 W]
  • Intel Xe HP (12.5) 2-Tile GPU: 1024 EUs [Est: 8,192 cores, 20.48 TFLOPs assuming 1.25 GHz, 300 W]
  • Intel Xe HP (12.5) 4-Tile GPU: 2048 EUs [Est: 16,384 cores, 36 TFLOPs assuming 1.1 GHz, 400 W/500 W]"
I take back my mocking if this bit is true; they may have something in 2021 that might compete. It still needs some die shrinking and power optimization, but this sounds much more promising as a development step.
But this is like 4 times what is stated in the article on TPU, and the source is WCCFTech.

So now we suddenly think Intel can cram this into a 500 W TDP? And how would this be positioned now?! 512 EUs as entry level? 1024 EUs midrange, and 2048 EUs... data center?! Where is the enthusiast tier in this picture? So many questions.

Bags of salt required
Posted on Reply
#49
theoneandonlymrk
Vayra86
But this is like 4 times what is stated in the article on TPU, and the source is WCCFTech.

So now we suddenly think Intel can cram this into a 500 W TDP? And how would this be positioned now?! 512 EUs as entry level? 1024 EUs midrange, and 2048 EUs... data center?! Where is the enthusiast tier in this picture? So many questions.

Bags of salt required
No way man, those are rock-solid facts, defo. :P

The 500 W TDP would be for the 4-tile data center part; the highest-end consumer part is 2 tiles at 300 W. I think I'll pass, personally, but perhaps they might surprise us. Fingers crossed.
Posted on Reply