News Posts matching #CXL


Intel "Sapphire Rapids" Brings PCIe Gen 5 and DDR5 to the Data-Center

In what might be the mother of all ironies, prior to the effective death sentence dealt to it by the U.S. Department of Commerce, Huawei's server business developed an ambitious product roadmap for its FusionServer family, aligned with Intel's enterprise processor roadmap. The roadmap describes in great detail the key features of these processors, such as core counts, platform, and I/O. The "Sapphire Rapids" processor will introduce the biggest I/O advancements in close to a decade when it releases sometime in 2021.

With an as-yet unannounced CPU core count, the "Sapphire Rapids-SP" processor will introduce DDR5 memory support to the data-center, aiming to double bandwidth and memory capacity over the DDR4 generation. The processor features an 8-channel (512-bit wide) DDR5 memory interface. The second major I/O introduction is PCI-Express gen 5.0, which not only doubles per-lane bandwidth over gen 4.0 to 32 GT/s, but also brings a constellation of data-center-relevant features that Intel is pushing out in advance as part of the CXL interconnect. CXL rides on the PCIe gen 5.0 physical layer, so the two are practically identical at the electrical level.
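To put these figures in perspective, here is a rough back-of-the-envelope sketch (in Python) of the theoretical peak bandwidths involved; the DDR5-4800 speed grade is purely an assumption for illustration, since Intel has not confirmed memory speeds for "Sapphire Rapids."

```python
# Back-of-the-envelope peak-bandwidth sketch for the figures quoted above.
# The DDR5-4800 data rate is an assumption for illustration only.

DDR5_DATA_RATE_MT_S = 4800          # assumed mega-transfers per second per pin
CHANNEL_WIDTH_BITS  = 64            # one DDR5 channel (two 32-bit sub-channels)
CHANNELS            = 8             # 8-channel = 512-bit interface, as stated

ddr5_peak_gb_s = DDR5_DATA_RATE_MT_S * 1e6 * (CHANNEL_WIDTH_BITS / 8) * CHANNELS / 1e9
print(f"8-channel DDR5 peak: ~{ddr5_peak_gb_s:.0f} GB/s")   # ~307 GB/s

# PCIe gen 5.0: 32 GT/s per lane with 128b/130b encoding.
PCIE5_GT_S = 32
ENCODING   = 128 / 130
LANES      = 16

pcie5_x16_gb_s = PCIE5_GT_S * 1e9 * ENCODING / 8 * LANES / 1e9
print(f"PCIe 5.0 x16 peak (per direction): ~{pcie5_x16_gb_s:.0f} GB/s")  # ~63 GB/s
```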

Intel Reveals the "What" and "Why" of CXL Interconnect, its Answer to NVLink

CXL, short for Compute Express Link, is an ambitious new interconnect technology for removable high-bandwidth devices, such as GPU-based compute accelerators, in a data-center environment. It is designed to overcome many of the technical limitations of PCI-Express, the least of which is bandwidth. Intel sensed that its upcoming family of scalable compute accelerators under the Xe brand needs a specialized interconnect, which it wants to push as the next industry standard. The development of CXL was also triggered by compute-accelerator majors NVIDIA and AMD already having similar interconnects of their own, NVLink and Infinity Fabric, respectively. At a dedicated event dubbed "Interconnect Day 2019," Intel put out a technical presentation that spelled out the nuts and bolts of CXL.

Intel began by describing why the industry needs CXL, and why PCI-Express (PCIe) doesn't suit this use-case. For a client-segment device, PCIe is perfect, since client machines don't have many devices, very large memory pools, or applications with a memory footprint that scales across multiple machines. PCIe fails big in the data-center, when dealing with multiple bandwidth-hungry devices and vast shared memory pools. Its biggest shortcoming is isolated memory pools for each device, with inefficient access mechanisms. Resource-sharing is almost impossible; sharing operands and data between multiple devices, such as two GPU accelerators working on a problem, is very inefficient. And lastly, there's latency, lots of it, which is the biggest enemy of shared memory pools that span multiple physical machines. CXL is designed to overcome many of these problems without discarding the best part about PCIe: the simplicity and adaptability of its physical layer.
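As a purely conceptual sketch (not any real accelerator API), the following toy model counts the bytes moved when one accelerator's result is consumed by another: staged through host memory over plain PCIe, versus read in place from a coherent, CXL-style shared pool.

```python
# Conceptual sketch (not a real API): why isolated per-device memory pools are
# inefficient when two accelerators share an operand. Over plain PCIe the data
# is staged through host memory; with a coherent CXL-style pool the second
# device can load it where it already lives.

OPERAND_BYTES = 256 * 1024 * 1024   # e.g. a 256 MB intermediate result

def bytes_moved_pcie_style(size):
    # Device A writes its result to host memory, device B reads it back:
    # two full copies across the link, plus a host staging buffer.
    device_a_to_host = size
    host_to_device_b = size
    return device_a_to_host + host_to_device_b

def bytes_moved_cxl_style(size):
    # With cache-coherent access to a shared pool, device B reads the data
    # in place; at most one traversal of the link and no staging copy.
    return size

print(f"PCIe-style staging: {bytes_moved_pcie_style(OPERAND_BYTES) / 2**20:.0f} MB moved")
print(f"CXL-style sharing:  {bytes_moved_cxl_style(OPERAND_BYTES) / 2**20:.0f} MB moved")
```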

Intel Releases Compute Express Link (CXL) 1.0, New Interconnect Protocol that Enables PCIe gen 5.0

Intel has been working on CXL, short for Compute Express Link, for over four years now. The new interconnect protocol has been donated to a newly formed consortium of tech companies for release as the CXL 1.0 standard. Its protocol layer will pave the way for PCI-Express gen 5.0 to sustain its bandwidth-growth target of being twice as fast as PCIe gen 4.0. CXL 1.0 is out to compete with other established PCIe-alternative slot standards, such as NVLink from NVIDIA and Infinity Fabric from AMD. It has one killer advantage, though: CXL 1.0 is pin-compatible and backwards-compatible with PCI-Express, and uses the PCIe physical layer and electrical interface.

This reduces hardware upgrade costs for data-centers. CXL maintains memory coherency between the CPU's memory space and memory on installed devices. The CXL Consortium (a special interest group, or SIG) includes data-center and cloud-computing giants such as Alibaba, Cisco, Dell EMC, Facebook, Google, HPE, Huawei, Microsoft, and of course Intel. CXL will be used both as a socketed/slotted interface for add-on cards and GPU boards, and as an embedded interface. We estimate the bandwidth of CXL to be 32 GT/s per lane, or four times that of PCIe gen 3.0, keeping in line with PCIe gen 5.0 bandwidth-growth estimates.
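That per-lane figure simply follows PCIe's generational doubling cadence; a minimal sketch of the arithmetic, using raw signalling rates and ignoring encoding overhead:

```python
# Per-lane signalling rates across recent PCIe generations (raw GT/s, before
# encoding overhead), showing the doubling cadence and the "four times
# PCIe gen 3.0" figure quoted above.
rates_gt_s = {"PCIe 3.0": 8, "PCIe 4.0": 16, "PCIe 5.0 / CXL 1.0": 32}

for gen, rate in rates_gt_s.items():
    print(f"{gen:>20}: {rate:2d} GT/s per lane ({rate / rates_gt_s['PCIe 3.0']:.0f}x gen 3.0)")
```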