Tuesday, July 24th 2018

Intel is Giving up on Xeon Phi - Eight More Models Declared End-Of-Life

Intel's Xeon Phi lineup, which started life as Larrabee, has never seen commercial success despite big promises from the blue giant that its programming model would be more productive for developers coming from x86. In the meantime, NVIDIA GPUs have taken over the world of supercomputing, with the latest-generation Volta decimating Intel's Xeon Phi offerings.

Intel's plan was to release a new generation of Xeon Phi called "Knights Hill" on a 10 nanometer process. However, constant delays in ramping up 10 nm, paired with generally low demand for Xeon Phi, forced the company to abandon the project. Now Intel announces that it is stopping production of eight currently shipping Xeon Phi models.
Affected are Xeon Phi 7210, 7210F, 7230, 7230F, 7250, 7250F, 7290 and 7290F, which are socketed CPUs that go into accelerator designs. The accelerator cards using a graphics-card-like form factor were already cancelled a while ago. Interestingly, the change notice mentions "Market demand for the products [...] have shifted to other Intel products"... yeah, well, those products don't exist at Intel. The company has nothing comparable in its lineup, and the only successor I could think of is the GPU project that Raja Koduri and his minions are working on, which is not expected before 2019, probably later.

29 Comments on Intel is Giving up on Xeon Phi - Eight More Models Declared End-Of-Life

#1
FreedomEclipse
~Technological Technocrat~
https://www.youtube.com/watch?v=CdqoNKCCt7A
Posted on Reply
#3
Vya Domus
despite big promises from the big blue giant that its programming model would be more productive for developers coming from x86
They make it sound as if CUDA or OpenCL required some deep and complex knowledge of GPU architecture that would be bothersome for developers to learn. "x86 developer" doesn't even mean much today; seriously, that's a line I wouldn't have been surprised to hear 30 years ago, when most software written was ISA-dependent and you could say there was such a thing as an "x86 developer". That goes to show how antiquated their thinking was.

Intel is living in the past when they could shove x86 everywhere and this failure was the result of that.
Posted on Reply
#5
RejZoR
AMD knew well to invest in GPUs for compute, which is why they bought ATi back then... Intel on the other hand stubbornly insisted on x86 everything...
Posted on Reply
#6
dj-electric
It's almost like Intel knows what its future of GPU-based computing will look like...
Posted on Reply
#7
jabbadap
The US government played its own part in killing it: China was buying those until Intel was forced to stop delivering them.

All of the EOLed parts are Knights Landing, not the mixed-precision Knights Mill for deep learning. So HPC out, DL in.
Posted on Reply
#8
Nioktefe
I don't think Xeon Phi was a failure; they experimented with a lot of features regarding the scaling of many cores, which they could not have developed with Skylake cores that are too big to fit that many on 14 nm.

10/7 nm Xeons with a lot more cores will be close to the core counts of those Xeon Phis, so all of that experience is not going to go to waste.

Also, this architecture should scale insanely well compared to the Epyc architecture, which requires complex rework to change the CCX layout (either that, or they increase the CCX count, which further complicates the interconnect).
Posted on Reply
#9
Vya Domus
Nioktefe said:
which they could not have developed with Skylake cores that are too big
That was never the intent; Phi cores are meant to be small and stripped down of features.
Posted on Reply
#10
Nioktefe
Vya Domus said:
That was never the intent; Phi cores are meant to be small and stripped down of features.
Yes, but in the future they will scale that big with their classic line of cores; that's when the Xeon Phi experience comes in, despite it being a fail as a product.
Posted on Reply
#11
DeathtoGnomes
I remember there was a huge buzz about these chips, then nothing, like they dropped off the face of the earth.
Posted on Reply
#12
R0H1T
Nioktefe said:
I don't think Xeon Phi was a failure, they exprerimented a lot of features regarding scaling of many cores, wich they could not have developped with Skylakes core that are too big to fit that much on 14nm

10/7nm xeon with a lot more cores will be close to the numbers of cores in those xeon phi so all of that experience is not gonna go to waste

Also this architecture should scale insanely good compared to Epyc architecture wich require complex rework to change ccx layout (either that or they inscrease ccx count, wich further complexify the interconnect)
Think of them as an Atom and an enterprise Xeon tacked together as one, minus the latter's IPC. IIRC the last-gen Phi had AVX-512?
DeathtoGnomes said:
I remember there was a huge buzz about these chips, then nothing, like they dropped off the face of the earth.
That's because it was pretty bad at anything compared to what NVIDIA was doing with Tesla, Quadro et al.
Posted on Reply
#13
RejZoR
DeathtoGnomes said:
I remember there was a huge buzz about these chips, then nothing, like they dropped off the face of the earth.
Because CPUs are basically dead for large-scale compute. GPUs have so much more brute compute power that there is no point in fiddling around with CPU designs.
Posted on Reply
#14
CheapMeat
They were used in a lot of HPC systems, so I can see why so many are dismissive, since they're not "gamer/enthusiast". I always found them interesting, but it seems like they were hard to work with, at least the card variant.
Posted on Reply
#15
Prima.Vera
When your crappy CPU costs more than an NVIDIA Tesla GPU (which is at least 100 times faster in compute), you know you have a problem....:laugh::laugh::laugh:
Posted on Reply
#16
Jism
The technique was great, however outdated, and GPUs generally did things faster and with more precision compared to the Phi.
Posted on Reply
#18
Xx Tek Tip xX
Who was actually going to buy a cheap one off eBay for a joke, then realise you need one of those overpriced Xeons?
Posted on Reply
#19
DeathtoGnomes
Xx Tek Tip xX said:
Who was actually going to buy a cheap one off eBay for a joke, then realise you need one of those overpriced Xeons?
a fanboi ? no wait! I know this one.
Posted on Reply
#20
Xx Tek Tip xX
DeathtoGnomes said:
a fanboi ? no wait! I know this one.
Fanboy? Tell that to my FM2+, AM3+ and AM3 rigs, kid, and my 1151, 2066 and 1366 rigs.

DeathtoGnomes said:
a fanboi ? no wait! I know this one.
AMD FX user?
I challenge you: take a shot at my X5650 - it will obliterate anything you've probably got.
Posted on Reply
#21
Caring1
Xx Tek Tip xX said:
Fanboy? Tell that to my FM2+, AM3+ and AM3 rigs, kid, and my 1151, 2066 and 1366 rigs.


AMD FX user?
I challenge you: take a shot at my X5650 - it will obliterate anything you've probably got.
I believe he was answering your question, not having a go at you.
Posted on Reply
#22
Xx Tek Tip xX
Caring1 said:
I believe he was answering your question, not having a go at you.
Well, he should probably edit, because I considered getting a Xeon Phi and he assumed a "fanboi" would.
Posted on Reply
#23
Vayra86
Xx Tek Tip xX said:
Well, he should probably edit, because I considered getting a Xeon Phi and he assumed a "fanboi" would.
Well, in that case, wasn't he pretty accurate after all?

This was a hilarious bit of miscommunication tbh :D
Posted on Reply
#24
Xx Tek Tip xX
Vayra86 said:
Well, in that case, wasn't he pretty accurate after all?

This was a hilarious bit of miscommunication tbh :D
Pretty accurate? Nope. To be honest with you, I've gone through more AMD systems than Intel, but always preferred Intel until AM4 dominated and whooped ass.
Posted on Reply
#25
Vayra86
Xx Tek Tip xX said:
Pretty accurate? Nope. To be honest with you, I've gone through more AMD systems than Intel, but always preferred Intel until AM4 dominated and whooped ass.
So you're a fanboy of both then ;)
Posted on Reply