
Intel Reveals the "What" and "Why" of CXL Interconnect, its Answer to NVLink

There is some level of anti-Intel obsession here. Like Intel owes something to anybody, meanwhile Nvidia and AMD proprietary solutions are treated as "meh, nothing to see, look away". Yes, CCIX is AMD's baby, and others are "contributors".
CXL, besides Intel, has already gained a lot of support from other big names interested in computing, so put that in perspective:
https://www.computeexpresslink.org/members

ARM, Google, Cisco, Facebook, Alibaba, Dell, HP, Huawei, Lenovo, Microsoft, Microchip... are they all into giving Intel free money???
A standard is only as strong as the money behind it and its adoption by industry. The better standard (by all measures) will win.
I will add to that: a standard is an agreed-upon way of doing things. The trouble is, it's hard to do stuff for the first(ish) time and get everybody else to agree.
So oftentimes, when companies decide to go at it by themselves, it's not because they're after your cash (well, they are in the end), but because they need a product out there.
 

I think you missed your own point. Intel locks down their tech all the time, Thunderbolt being the latest. They have a proprietary 200G network adapter as well, and a long history of this. AMD, meanwhile, is the opposite: FreeSync, OpenCL support, OpenGL, TressFX, etc. I don't care who makes it, as long as it is truly open. AMD has a good record of going open; Intel, not so much. I do admire your faith that this will definitely and without question be the first time without strings attached. I just don't share it.
 
You have a really black or white view of things there.
Intel had an open source video driver for Linux long before AMD. Also: https://en.wikipedia.org/wiki/Thunderbolt_(interface)#Royalty_situation
AMD has to go the open route. They're the underdog, they can't sell closed solutions. If things changed, I'm pretty sure they'd reconsider their approach.
 
So Intel should be nationalized, and then the government should provide all those standards for free to everyone else in the world, the way everyone freely uses GPS.
Got it.
 
This is not for your desktop, Steeevo; this is for servers, where the bandwidth isn't so much about single-device performance as about device-to-device performance. x8 may be fine for a single GPU to not lose performance, but not if it wants to work with 15 others and compete against NVLink. This is also Intel railroading rather than joining the other consortiums, which are already open standards now, not "to be opened" with a 2nd gen. This is a desperate lock-in attempt for their Cascade Lake failings.
I don't believe PCIe 4.0 will have such a bottleneck even for huge server farms ~ Why AMD EPYC Rome 2P Will Have 128-160 PCIe Gen4 Lanes

Having said that, each use case is different, so while some enterprises may need the extra lanes, most should have plenty to spare with PCIe 4.0, perhaps with the exception of (extreme) edge cases.

Some key points wrt competing solutions ~
https://www.openfabrics.org/images/eventpresos/2017presentations/213_CCIXGen-Z_BBenton.pdf
https://www.csm.ornl.gov/workshops/openshmem2017/presentations/Benton - OpenCAPI, Gen-Z, CCIX- Technology Overview, Trends, and Alignments.pdf
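The PCIe 4.0 bandwidth claim above can be sanity-checked with some quick arithmetic. A minimal sketch, using the commonly cited effective per-lane rates for each PCIe generation (the 128-160 lane counts come from the linked EPYC Rome article; the helper function is purely illustrative):

```python
# Effective per-lane throughput in GB/s, one direction,
# after 128b/130b encoding overhead.
PCIE_GBPS_PER_LANE = {
    "3.0": 0.985,   # 8 GT/s per lane
    "4.0": 1.969,   # 16 GT/s per lane
    "5.0": 3.938,   # 32 GT/s per lane
}

def aggregate_bandwidth(gen: str, lanes: int) -> float:
    """Total one-direction bandwidth in GB/s for `lanes` lanes of generation `gen`."""
    return PCIE_GBPS_PER_LANE[gen] * lanes

# EPYC Rome 2P: 128-160 lanes of PCIe 4.0
print(round(aggregate_bandwidth("4.0", 128)))  # ≈ 252 GB/s
print(round(aggregate_bandwidth("4.0", 160)))  # ≈ 315 GB/s
```

Roughly a quarter of a terabyte per second of aggregate I/O either way, which is why a Gen4 bottleneck looks unlikely outside edge cases.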
 
You have a really black or white view of things there.
Intel had an open source video driver for Linux long before AMD. Also: https://en.wikipedia.org/wiki/Thunderbolt_(interface)#Royalty_situation
AMD has to go the open route. They're the underdog, they can't sell closed solutions. If things changed, I'm pretty sure they'd reconsider their approach.

LOL, so open solutions are only for losers who have no choice but to market that way? What does that say about Thunderbolt? Do you think they dropped the royalties because it was such a rousing success? It's not about black, white, blue, green, or whatever. Although I'll grant you, Nvidia makes Intel look like choir boys when it comes to this. I really couldn't care less about who comes up with what, as I said. And true innovation should be rewarded, but warming over tech similar to others' in order to lock players out of one market or another is not right, and just not something I support. Optane is an original idea and more power to them for leveraging it; CXL is not.
 
All big talk and nothing concrete. To date there isn't even one mobo with PCIe 4.0 out there, not to mention CPUs that support it (yet).
 
Do not put words in my mouth. Open solutions are generally better. But they are contingent on existing expertise and on participants agreeing with each other. Public companies, on the other hand, are primarily accountable to their shareholders and have to think about profit first.
 
When open solutions build their first CPU, then good for them.
Until then, if Intel adds support for something in their CPUs, it will become a de facto standard.

And yes, building for profit works; it provides money for future research and development.

Open solutions don't work by themselves; they are supported by the evil non-open products. Nobody likes to work for free; even the kids in their parents' bedrooms want money for new phones, movie tickets with their dates, gas money...
 
I don't believe PCIe 4.0 will have such a bottleneck even for huge server farms ~ Why AMD EPYC Rome 2P Will Have 128-160 PCIe Gen4 Lanes

Having said that, each use case is different, so while some enterprises may need the extra lanes, most should have plenty to spare with PCIe 4.0, perhaps with the exception of (extreme) edge cases.

Some key points wrt competing solutions ~
https://www.openfabrics.org/images/eventpresos/2017presentations/213_CCIXGen-Z_BBenton.pdf
https://www.csm.ornl.gov/workshops/openshmem2017/presentations/Benton - OpenCAPI, Gen-Z, CCIX- Technology Overview, Trends, and Alignments.pdf


Intel is skipping PCIe 4.0 and going straight to 5.0 with this custom "optional" alternative proprietary protocol. So while AMD's PCIe 4.0 with CCIX and 128+ PCIe lanes** is enough for accelerators, Intel has at most 80 lanes of PCIe 3.0 per Cascade Lake 2P or 4P... They need more, and can catch up to/pass AMD with 80 lanes of PCIe 5.0.

**160 lanes require cutting CPU interconnects from four to three. While the flexibility is nice, this is far from optimal for compute-intensive setups. Naples suffered from interconnect saturation with NVMe devices; while the bandwidth has doubled, the core count has as well. Time will tell if the RAM speed bump and the I/O die bring enough of an improvement to offset the loss of an interconnect.

AMD also has a four-GPU Infinity Fabric ring bus that takes the load off the CPU. Infinity Fabric is very similar to CCIX in being an alternative, lower-latency protocol over PCIe.

Also, whoever mentioned no PCIe 4.0 boards being on the market is only half correct: no x86 boards, but PowerPC has had them for a while, and I think a few ARM boards too.
Second fun fact: PowerPC chips have NVLink interconnects on die, so rather than connecting to NVLink GPUs through a PCIe switch... they are part of the mesh.
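The "80 lanes of PCIe 5.0 vs. 128 lanes of PCIe 4.0" trade-off above works out roughly like this. A back-of-the-envelope sketch (per-lane rates are the commonly cited effective figures, not vendor data; the 80-lane Gen5 part is the hypothetical from the post):

```python
# Fewer lanes of a faster generation can still come out ahead.
# Effective per-lane rates in GB/s, one direction (after 128b/130b encoding):
GEN4_GBPS = 1.969   # PCIe 4.0, 16 GT/s per lane
GEN5_GBPS = 3.938   # PCIe 5.0, 32 GT/s per lane

amd_rome   = 128 * GEN4_GBPS   # AMD Rome: 128 lanes of PCIe 4.0
intel_gen5 =  80 * GEN5_GBPS   # hypothetical Intel part: 80 lanes of PCIe 5.0

print(f"128 x Gen4: {amd_rome:.0f} GB/s")    # ≈ 252 GB/s
print(f" 80 x Gen5: {intel_gen5:.0f} GB/s")  # ≈ 315 GB/s
```

Since each generation doubles the per-lane rate, 80 Gen5 lanes carry about as much as 160 Gen4 lanes, which is the arithmetic behind "catch up/pass AMD".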
 
Intel is skipping PCIe 4.0 and going straight to 5.0 with this custom "optional" alternative proprietary protocol.
They have PCIe 4.0 lined up; however, Intel's road-maps change so often, and they have so many overlapping ones, that it's virtually impossible to say what they'll "release" next or whether it'll just be a paper launch.

[Image: intel_hpc_roadmap_xeon.png]

PowerPC chips have NVLink interconnects on die
Yes, and NVLink was designed with IBM; it's a GPU-GPU and GPU-CPU interconnect. That's why I said it's nothing like CXL; it's more akin to IF.
 