
Intel Reveals the "What" and "Why" of CXL Interconnect, its Answer to NVLink

btarunr

Editor & Senior Moderator
CXL, short for Compute Express Link, is an ambitious new interconnect technology for removable high-bandwidth devices, such as GPU-based compute accelerators, in a data-center environment. It is designed to overcome many of the technical limitations of PCI-Express, not the least of which is bandwidth. Intel sensed that its upcoming family of scalable compute accelerators under the Xe brand needs a specialized interconnect, which it wants to push as the next industry standard. The development of CXL was also triggered by compute-accelerator majors NVIDIA and AMD already having similar interconnects of their own, NVLink and Infinity Fabric, respectively. At a dedicated event dubbed "Interconnect Day 2019," Intel put out a technical presentation that spelled out the nuts and bolts of CXL.

Intel began by describing why the industry needs CXL, and why PCI-Express (PCIe) doesn't suit its use-case. For a client-segment device, PCIe is perfect, since client-segment machines don't have many devices or very large memory, and their applications don't have a very large memory footprint or scale across multiple machines. PCIe falls short in the data-center, when dealing with multiple bandwidth-hungry devices and vast shared memory pools. Its biggest shortcomings are isolated memory pools for each device and inefficient access mechanisms. Resource-sharing is almost impossible. Sharing operands and data between multiple devices, such as two GPU accelerators working on a problem, is very inefficient. And lastly, there's latency, lots of it. Latency is the biggest enemy of shared memory pools that span multiple physical machines. CXL is designed to overcome many of these problems without discarding the best part of PCIe: the simplicity and adaptability of its physical layer.



CXL uses the PCIe physical layer, and has a raw on-paper bandwidth of 32 Gbps (32 GT/s) per lane, per direction, which aligns with the PCIe gen 5.0 standard. The link layer is where all the secret sauce is. Intel worked on new handshake, auto-negotiation, and transaction protocols that replace those of PCIe, designed to overcome the shortcomings listed above. With PCIe gen 5.0 already standardized by the PCI-SIG, Intel could share CXL IP back to the SIG with PCIe gen 6.0. In other words, Intel admits that CXL may not outlive PCIe, and until the PCI-SIG can standardize gen 6.0 (around 2021-22, if not later), CXL is the need of the hour.
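
As a quick sanity check on those numbers, here is a back-of-the-envelope estimate of usable link bandwidth, sketched in Python. It assumes PCIe 5.0's 32 GT/s signaling and 128b/130b line encoding (carried over from gen 3.0/4.0), and ignores protocol overheads such as framing and flow control, so real-world figures would be lower:

# Back-of-the-envelope PCIe 5.0 / CXL bandwidth estimate.
# Assumes 32 GT/s per lane and 128b/130b encoding; framing, flow
# control, and CRC overheads would lower these numbers further.

RAW_RATE = 32e9          # transfers per second, per lane, per direction
ENCODING = 128 / 130     # usable fraction after 128b/130b line encoding

def usable_gb_per_s(lanes):
    """Usable bandwidth in GB/s, per direction, for a given lane count."""
    return RAW_RATE * ENCODING * lanes / 8 / 1e9

for lanes in (1, 4, 8, 16):
    print(f"x{lanes:<2} ~= {usable_gb_per_s(lanes):.1f} GB/s per direction")
# Prints roughly: x1 ~= 3.9, x4 ~= 15.8, x8 ~= 31.5, x16 ~= 63.0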



The CXL transaction layer consists of three multiplexed sub-protocols that run simultaneously over a single link. They are: CXL.io, CXL.cache, and CXL.memory. CXL.io deals with device discovery, link negotiation, interrupts, register access, etc., which are basically the tasks that get a machine to work with a device. CXL.cache deals with a device's access to the local processor's memory. CXL.memory deals with the processor's access to non-local memory (memory controlled by another processor or another machine).
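
To make "three multiplexed sub-protocols on a single link" concrete, here is a toy Python sketch. The three message classes mirror the sub-protocols named above, but the flit structure and payloads are hypothetical, purely to illustrate the idea; Intel has not published the actual wire format:

# Toy model of three CXL sub-protocols interleaved on one shared link.
# The Flit structure and payloads are illustrative only, not the real format.

from dataclasses import dataclass
from enum import Enum

class SubProtocol(Enum):
    IO = "CXL.io"          # discovery, negotiation, interrupts, register access
    CACHE = "CXL.cache"    # device access to the local processor's memory
    MEMORY = "CXL.memory"  # processor access to non-local memory

@dataclass
class Flit:
    proto: SubProtocol
    payload: bytes

def demux(link_traffic):
    """Split the interleaved stream from the shared link into per-protocol queues."""
    queues = {p: [] for p in SubProtocol}
    for flit in link_traffic:
        queues[flit.proto].append(flit.payload)
    return queues

# All three sub-protocols travel over the same physical link, interleaved:
traffic = [Flit(SubProtocol.IO, b"config-read"),
           Flit(SubProtocol.CACHE, b"host-mem-load"),
           Flit(SubProtocol.MEMORY, b"remote-mem-store")]
print({p.value: q for p, q in demux(traffic).items()})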



Intel listed out use-cases for CXL, beginning with accelerators with memory, such as graphics cards, GPU compute accelerators, and high-density compute cards. All three CXL transaction-layer protocols are relevant to such devices. Next up are FPGAs and NICs. CXL.io and CXL.cache are relevant here, since network stacks are processed by processors local to the NIC. Lastly, there are the all-important memory buffers. You can imagine these devices as "NAS, but with DRAM sticks." Future data-centers will consist of vast memory pools shared between thousands of physical machines and accelerators. CXL.memory and CXL.cache are relevant here. Much of what makes the CXL link layer faster than PCIe is its optimized stack (lower processing load for the CPU). The CXL stack is built from the ground up with low latency as a design goal.
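
For reference, those device-class-to-protocol mappings can be recapped as data; this simply restates the slide as described above (it is not any real API):

# Which CXL sub-protocols Intel lists as relevant for each device class.
USE_CASES = {
    "accelerators with memory (GPUs, compute cards)": ["CXL.io", "CXL.cache", "CXL.memory"],
    "FPGAs and NICs":                                 ["CXL.io", "CXL.cache"],
    "memory buffers ('NAS, but with DRAM sticks')":   ["CXL.memory", "CXL.cache"],
}
for device, protocols in USE_CASES.items():
    print(f"{device}: {', '.join(protocols)}")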

View at TechPowerUp Main Site
 
You don't even have PCIe gen4 yet & you're gunning for gen6; reminds me of that "10nm coming soon" promise :rolleyes:
The current estimate is that, due to 4.0 and 5.0 being less than two years apart, 4.0 (officially announced in June 2017) would get overwhelmed by 5.0 (the final spec is expected to be ratified in Q1 2019). Compare this to 3.0, which dates from November 2010.

There have been rumors that Intel intends to skip PCI-Express 4.0 completely.
 
The current estimate is that, due to 4.0 and 5.0 being less than two years apart, 4.0 (officially announced in June 2017) would get overwhelmed by 5.0 (the final spec is expected to be ratified in Q1 2019). Compare this to 3.0, which dates from November 2010.

There have been rumors that Intel intends to skip PCI-Express 4.0 completely.
Yes & looking at that S/A article, Intel seems to want to lock people into CXL, a proprietary lookalike of CCIX. Besides, we already have PCIe 4.0 CPUs, GPUs, SSDs(?) & accelerators out there. Yet Intel does what it knows best: looking out only for themselves!
 
Yes & looking at that S/A article, Intel seems to want to lock people into CXL, a proprietary lookalike of CCIX. Besides, we already have PCIe 4.0 CPUs, GPUs, SSDs(?) & accelerators out there.
First, S/A and Charlie are not exactly objective about anything Intel ;)
CXL is a protocol on top of PCI-e 5.0, similar to CCIX on top of PCI-e 4.0 (at least in its current iteration). Whether Intel has something nefarious in mind, we will have to wait and see. They make it sound like an evolution of CXL, or something similar, is what they would like to eventually see in PCI-e 6.0 proper.

What we already have is not exactly optimal for the purpose. Intel does talk about why they want a new interconnect. Putting this on Intel is a bit strange, as CCIX comes from quite literally the same pain points, but from AMD, ARM, Qualcomm, Xilinx etc. There are also other interconnects like IF or NVLink.
 
SA may have a good dose of anti-Intel bias, but that does not mean they are wrong. ;)

So if CCIX is so similar, why are they "modifying it" themselves rather than joining the party with everyone else?
 
CCIX is an open standard, likewise Gen-Z IIRC, lest you've forgotten what happened to FireWire, TB, G-Sync & so many others before these. Firstly, CXL will lock users into the Intel ecosystem; secondly, there will be a CXL "tax"; & lastly, with Intel controlling pretty much the entire consortium, it's their way or the highway. I'm sure there are other technical differences, but on the face of it I see no reason why CXL should be preferred over Gen-Z or CCIX atm.
 
SA may have a good dose of anti-Intel bias, but that does not mean they are wrong. ;)
So if CCIX is so similar, why are they "modifying it" themselves rather than joining the party with everyone else?
You are right about S/A and Charlie being right sometimes. Just not all the time, and they are clickbaity with their headlines.

I have not had a chance to read the entire CCIX spec (a simple search doesn't turn it up, and I have not jumped through enough hoops to get the full document), and the CXL spec is not public AFAIK. While having their own version of everything is probably part of it, from what has been revealed the solution seems to be somewhat different. Intel's approach is no doubt geared or optimized to their specific needs.
 
First, S/A and Charlie are not exactly objective about anything Intel ;)
CXL is a protocol on top of PCI-e 5.0, similar to CCIX on top of PCI-e 4.0 (at least in its current iteration). Whether Intel has something nefarious in mind, we will have to wait and see. They make it sound like an evolution of CXL, or something similar, is what they would like to eventually see in PCI-e 6.0 proper.

What we already have is not exactly optimal for the purpose. Intel does talk about why they want a new interconnect. Putting this on Intel is a bit strange, as CCIX comes from quite literally the same pain points, but from AMD, ARM, Qualcomm, Xilinx etc. There are also other interconnects like IF or NVLink.

Maybe they feel the pressure from the big blue.
 
So basically for desktop users this is not important in the next 5-10 years?
 
So basically for desktop users this is not important in the next 5-10 years?
It is never going to be relevant to desktop users.
 
First, S/A and Charlie are not exactly objective about anything Intel ;)
CXL is a protocol on top of PCI-e 5.0, similar to CCIX on top of PCI-e 4.0 (at least in its current iteration). Whether Intel has something nefarious in mind, we will have to wait and see. They make it sound like an evolution of CXL, or something similar, is what they would like to eventually see in PCI-e 6.0 proper.

What we already have is not exactly optimal for the purpose. Intel does talk about why they want a new interconnect. Putting this on Intel is a bit strange, as CCIX comes from quite literally the same pain points, but from AMD, ARM, Qualcomm, Xilinx etc. There are also other interconnects like IF or NVLink.

Charlie Demerjian is one of the best tech analysts in my opinion. Dude sure knows his stuff. Sure, he is pretty critical of Intel, though I truly think it is well justified, as Intel has proved over and over again that they are unethical as hell. Though to be fair, I am liking the new Intel better, as they kinda seem to be getting better and more streamlined under the new management.

You are right about S/A and Charlie being right sometimes. Just not all the time, and they are clickbaity with their headlines.
This is also not exactly true, because they are subscription-based, so they hardly rely on clickbait, as that type of traffic doesn't net them anything.
 
You don't even have PCIe gen4 yet & you're gunning for gen6; reminds me of that "10nm coming soon" promise :rolleyes:
Yes, because this news piece is totally about PCIe 6.0 :rolleyes:
 
I guess you didn't see the slides, nor the promise of making CXL open (a standard?) by gen 6.0?
With PCIe gen 5.0 already standardized by the PCI-SIG, Intel could share CXL IP back to the SIG with PCIe gen 6.0
But of course you didn't -
[attached slide image]


So if Intel doesn't get their way, this will likely end up like TB, without the USB bailout :rolleyes:
 
Remind me again how the couple-percent performance delta in our own resident PCIe bandwidth testing from W1zz shows we don't yet need more bandwidth to GPUs, unless you want to stack a bunch together, which has never really scaled well, and even then it's more a matter of resources and management than bandwidth...


Sounds like Intel wants to make standards that offer little benefit but cost a lot to license.
 
The latency is not the same as bandwidth. In many applications that use the GPU to accelerate CPU calculations, even at the desktop level, I am already seeing latency effects. CPU, GPU, and memory usage are not maxed out at 100%, yet some apps cannot go higher in utilization.
That's why we "don't need more bandwidth": latency kills any speed gain we could get from it.

Intel proposing this to be incorporated into the PCIe standard is nothing nefarious, since they are already members of the PCI-SIG consortium:
http://pcisig.com/membership/member-companies?combine=intel
I don't see how this translates into Intel "wanting to get a fee".

As for people that bash Intel just because they feel it's "cool" and think they "know better"... whatever inflates your ego is fine to put online, for everyone to see.
 
This isn't going to be incorporated into PCIe anytime soon, at the earliest in gen 6.0 & only if Intel feels generous. This is as proprietary as TB was at launch, and there are also competing standards which are in fact open.
 
This isn't going to be incorporated into PCIe anytime soon, at the earliest in gen 6.0 & only if Intel feels generous. This is as proprietary as TB was at launch, and there are also competing standards which are in fact open.
It's as open as NVLink and Infinity Fabric, which this competes with ;)
 
NVLink & IF aren't CXL's direct competitors; it's CCIX & Gen-Z, though the point about proprietary is 100% valid.
 
Whereas the others have ones that work well with existing standards, Intel wants to abandon PCIe...
 
There is some level of anti-Intel obsession here. Like Intel owes something to anybody, meanwhile NVIDIA's and AMD's proprietary solutions are looked at as "meh, nothing to see, look away". Yes, CCIX is AMD's baby, and the others are "contributors".
CXL, besides Intel, has already gained a lot of support from other big names interested in computing, so put that in perspective:
https://www.computeexpresslink.org/members

ARM, Google, Cisco, Facebook, Alibaba, Dell, HP, Huawei, Lenovo, Microsoft, Microchip... they are all into giving Intel free money???
A standard is only as strong as the money behind it and its adoption by the industry. The better standard (by all measures) will win.
 
Remind me again how the couple-percent performance delta in our own resident PCIe bandwidth testing from W1zz shows we don't yet need more bandwidth to GPUs, unless you want to stack a bunch together, which has never really scaled well, and even then it's more a matter of resources and management than bandwidth...


Sounds like Intel wants to make standards that offer little benefit but cost a lot to license.


This is not for your desktop, Steeevo; this is for servers, where the bandwidth isn't so much about single-device performance as device-to-device performance. x8 may be fine for a single GPU to not lose performance, but not if it wants to work with 15 others and compete against NVLink. This is also Intel railroading rather than joining the other consortiums... which are already open standards now, not "to be opened" in a 2nd gen. This is a desperate lock-in attempt to cover their Cascade Lake failings.
 
There is some level of anti-Intel obsession here. Like Intel owes something to anybody, meanwhile NVIDIA's and AMD's proprietary solutions are looked at as "meh, nothing to see, look away". Yes, CCIX is AMD's baby, and the others are "contributors".
CXL, besides Intel, has already gained a lot of support from other big names interested in computing, so put that in perspective:
https://www.computeexpresslink.org/members

ARM, Google, Cisco, Facebook, Alibaba, Dell, HP, Huawei, Lenovo, Microsoft, Microchip... they are all into giving Intel free money???
A standard is only as strong as the money behind it and its adoption by the industry. The better standard (by all measures) will win.

Plenty of AMD bias here too
 