News Posts matching #Compute Express Link


Intel Reveals the "What" and "Why" of CXL Interconnect, its Answer to NVLink

CXL, short for Compute Express Link, is an ambitious new interconnect technology for removable high-bandwidth devices, such as GPU-based compute accelerators, in a data-center environment. It is designed to overcome many of the technical limitations of PCI-Express, the least of which is bandwidth. Intel sensed that its upcoming family of scalable compute accelerators under the Xe brand needs a specialized interconnect, which Intel wants to push as the next industry standard. The development of CXL is also triggered by compute accelerator majors NVIDIA and AMD already having similar interconnects of their own, NVLink and Infinity Fabric, respectively. At a dedicated event dubbed "Interconnect Day 2019," Intel put out a technical presentation that spelled out the nuts and bolts of CXL.

Intel began by describing why the industry needs CXL, and why PCI-Express (PCIe) doesn't suit its use-case. For a client-segment device, PCIe is perfect, since client-segment machines don't have too many devices or too much memory, and their applications don't have a very large memory footprint or scale across multiple machines. PCIe fails big in the data-center, when dealing with multiple bandwidth-hungry devices and vast shared memory pools. Its biggest shortcoming is isolated memory pools for each device, and inefficient access mechanisms. Resource-sharing is almost impossible. Sharing operands and data between multiple devices, such as two GPU accelerators working on a problem, is very inefficient. And lastly, there's latency, lots of it. Latency is the biggest enemy of shared memory pools that span multiple physical machines. CXL is designed to overcome many of these problems without discarding the best part about PCIe - the simplicity and adaptability of its physical layer.
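
To make that contrast concrete, here is a minimal, self-contained C sketch of the two access models. It is purely illustrative: the names (stage_copy_pcie, read_shared, shared_pool) are hypothetical stand-ins for driver-managed staging copies versus coherent load/store access, not any real PCIe or CXL API.

    /* Conceptual sketch only: contrasts the PCIe model (explicit staging copies
     * through host memory) with the cache-coherent load/store model CXL aims for.
     * All names here are hypothetical, not a real driver or CXL API. */
    #include <stdio.h>
    #include <string.h>

    #define POOL_SIZE 16

    /* PCIe-style: each accelerator owns an isolated buffer; sharing an operand
     * means copying it out to host memory and back into the other device. */
    static void stage_copy_pcie(double host[], const double dev_a[],
                                double dev_b[], size_t n)
    {
        memcpy(host, dev_a, n * sizeof(double));   /* device A -> host */
        memcpy(dev_b, host, n * sizeof(double));   /* host -> device B */
    }

    /* CXL-style: devices and the CPU see one coherent pool, so the second
     * accelerator simply loads the operand in place; no staging copy needed. */
    static double read_shared(const double shared_pool[], size_t i)
    {
        return shared_pool[i];
    }

    int main(void)
    {
        double host[POOL_SIZE], dev_a[POOL_SIZE] = {3.14}, dev_b[POOL_SIZE] = {0};
        double shared_pool[POOL_SIZE] = {3.14};

        stage_copy_pcie(host, dev_a, dev_b, POOL_SIZE);  /* two copies, extra latency */
        printf("PCIe-style staged value: %f\n", dev_b[0]);
        printf("CXL-style direct load:   %f\n", read_shared(shared_pool, 0));
        return 0;
    }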

Intel Driving Data-Centric World with New 10nm Intel Agilex FPGA Family

Intel announced today a brand-new product family, the Intel Agilex FPGA. This new family of field programmable gate arrays (FPGA) will provide customized solutions to address the unique data-centric business challenges across embedded, network and data center markets. "The race to solve data-centric problems requires agile and flexible solutions that can move, store and process data efficiently. Intel Agilex FPGAs deliver customized connectivity and acceleration while delivering much needed improvements in performance and power for diverse workloads," said Dan McNamara, Intel senior vice president, Programmable Solutions Group.

Customers need solutions that can aggregate and process increasing amounts of data traffic to enable transformative applications in emerging, data-driven industries like edge computing, networking and cloud. Whether it's through edge analytics for low-latency processing, virtualized network functions to improve performance, or data center acceleration for greater efficiency, Intel Agilex FPGAs are built to deliver customized solutions for applications from the edge to the cloud. Advances in artificial intelligence (AI) analytics at the edge, network and the cloud are compelling hardware systems to cope with evolving standards, support varying AI workloads, and integrate multiple functions. Intel Agilex FPGAs provide the flexibility and agility required to meet these challenges and deliver gains in performance and power.

Intel Releases Compute Express Link (CXL) 1.0, New Interconnect Protocol that Enables PCIe gen 5.0

Intel has been working on CXL, short for Compute Express Link gen 1, for over four years now. The new interconnect protocol was donated to a new consortium of tech companies for release as the CXL 1.0 standard. Its protocol layer will pave the way for PCI-Express gen 5.0 to sustain its bandwidth growth target of being twice as fast as PCIe gen 4.0. CXL 1.0 is out to compete with other established PCIe-alternative slot standards such as NVLink from NVIDIA, and Infinity Fabric from AMD. It has one killer advantage, though: CXL 1.0 is pin-compatible and backwards-compatible with PCI-Express, and uses the PCIe physical layer and electrical interface.

This reduces hardware upgrade costs for data-centers. CXL maintains memory coherency between the CPU's memory-space and memory on installed devices. The CXL Consortium, or SIG, includes data-center and cloud-computing giants, including Alibaba, Cisco, Dell EMC, Facebook, Google, HPE, Huawei, Microsoft, and of course Intel. CXL will be used both as a socketed/slotted interface for add-on cards and GPU boards, and as an embedded interface. We estimate CXL bandwidth at 32 Gbps per lane, or four times that of PCIe gen 3.0, keeping in line with PCIe gen 5.0 bandwidth growth estimates.
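
As a back-of-the-envelope check on that estimate, the short C program below works out per-lane and x16 throughput from the public PCI-SIG signalling rates (8 GT/s for PCIe 3.0, 32 GT/s for PCIe 5.0, both with 128b/130b encoding). It assumes CXL inherits the PCIe 5.0 electricals; the figures are standard PCIe rates, not numbers from Intel's presentation.

    /* Rough bandwidth arithmetic, assuming CXL 1.x reuses PCIe 5.0 signalling:
     * 32 GT/s per lane with 128b/130b line coding, versus 8 GT/s for PCIe 3.0. */
    #include <stdio.h>

    int main(void)
    {
        const double encoding = 128.0 / 130.0;          /* 128b/130b line coding */
        const double gen3_gtps = 8.0, gen5_gtps = 32.0; /* GT/s per lane         */

        double gen3_lane = gen3_gtps * encoding / 8.0;  /* GB/s per lane, one way */
        double gen5_lane = gen5_gtps * encoding / 8.0;

        printf("PCIe 3.0:       %.2f GB/s per lane, %.1f GB/s per x16 link\n",
               gen3_lane, gen3_lane * 16);
        printf("PCIe 5.0 / CXL: %.2f GB/s per lane, %.1f GB/s per x16 link (%.0fx gen3)\n",
               gen5_lane, gen5_lane * 16, gen5_gtps / gen3_gtps);
        return 0;
    }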