News Posts matching #Compute Express Link


CXL Consortium Releases Compute Express Link 3.0 Specification to Expand Fabric Capabilities and Management

The CXL Consortium, an industry standards body dedicated to advancing Compute Express Link (CXL) technology, today announced the release of the CXL 3.0 specification. The CXL 3.0 specification expands on previous technology generations to increase scalability and to optimize system level flows with advanced switching and fabric capabilities, efficient peer-to-peer communications, and fine-grained resource sharing across multiple compute domains.

"Modern datacenters require heterogeneous and composable architectures to support compute-intensive workloads for applications such as Artificial Intelligence and Machine Learning, and we continue to evolve CXL technology to meet industry requirements," said Siamak Tavallaei, president, CXL Consortium. "Developed by our dedicated technical workgroup members, the CXL 3.0 specification will enable new usage models in composable disaggregated infrastructure."

OpenCAPI Consortium Merges Into CXL

The industry has been undergoing significant changes in computing. Application-specific hardware acceleration is becoming commonplace, and new memory technologies are influencing the economics of computing. To address the need for an open architecture that allows full industry participation, the OpenCAPI Consortium (OCC) was founded in 2016. The architecture it defined allowed any microprocessor to attach to coherent user-level accelerators and advanced memories, and was agnostic to the processor architecture. In 2021, OCC announced the Open Memory Interface (OMI). Based on OpenCAPI, OMI is a serial-attached near-memory interface that provides low-latency, high-bandwidth connections for main memory.

In 2019, the Compute Express Link (CXL) Consortium was launched to deliver an industry-supported cache-coherent interconnect for processors, memory expansion, and accelerators. In 2020, the CXL and Gen-Z Consortiums announced plans to implement interoperability between their respective technologies, and in early 2022, Gen-Z transferred its specifications and assets to the CXL Consortium.

Kioxia Launches Second Generation of High-Performance, Cost-Effective XL-FLASH Storage Class Memory Solution

Kioxia Corporation, the world leader in memory solutions, today announced the launch of the second generation of XL-FLASH, a Storage Class Memory (SCM) solution based on its BiCS FLASH 3D flash memory technology, which significantly reduces bit cost while providing high performance and low latency. Product sample shipments are scheduled to start in November this year, with volume production expected to begin in 2023.

The second-generation XL-FLASH achieves a significant reduction in bit cost by adding new multi-level cell (MLC) functionality with 2 bits per cell, in addition to the single-level cell (SLC) operation of the existing model. The maximum number of planes that can operate simultaneously has also increased from the current model, allowing for improved throughput. The new XL-FLASH will have a memory capacity of 256 gigabits.

Samsung Electronics Introduces Industry's First 512GB CXL Memory Module

Samsung Electronics, the world leader in advanced memory technology, today announced its development of the industry's first 512-gigabyte (GB) Compute Express Link (CXL) DRAM, taking an important step toward the commercialization of CXL, which will enable extremely high memory capacity with low latency in IT systems. Since introducing the industry's first CXL DRAM prototype with a field-programmable gate array (FPGA) controller in May 2021, Samsung has been working closely with data center, enterprise server and chipset companies to develop an improved, customizable CXL device.

The new CXL DRAM is built with an application-specific integrated circuit (ASIC) CXL controller and is the first to pack 512 GB of DDR5 DRAM, featuring four times the memory capacity and one-fifth the system latency of the previous Samsung CXL offering. "CXL DRAM will become a critical turning point for future computing structures by substantially advancing artificial intelligence (AI) and big data services, as we aggressively expand its usage in next-generation memory architectures including software-defined memory (SDM)," said Cheolmin Park, Vice President of Memory Global Sales & Marketing at Samsung Electronics, and Director of the CXL Consortium. "Samsung will continue to collaborate across the industry to develop and standardize CXL memory solutions, while fostering an increasingly solid ecosystem."

Montage Technology Delivers the World's First CXL Memory eXpander Controller

Montage Technology, a leading data processing and interconnect IC design company, today announced that it has delivered the world's first Compute Express Link (CXL) Memory eXpander Controller (MXC). The device is designed to be used in Add-in Cards (AIC), backplanes or EDSFF memory modules to enable significant scaling of memory capacity and bandwidth for data-intensive applications such as high-performance computing (HPC) and artificial intelligence (AI). The MXC is a Type 3 CXL DRAM memory controller. It is compliant with both the DDR4 and DDR5 JEDEC standards, is designed to the CXL 2.0 specification, and supports PCIe 5.0 specification speeds. The MXC provides a high-bandwidth, low-latency interconnect between the CPU and CXL-based devices, allowing them to share memory for higher performance, reduced software stack complexity, and lower data center TCO.

Montage Technology's President, Stephen Tai said, "CXL is a key technology that enables innovative ways to do memory expansion and pooling which will play an important role in next-generation server platforms. I'm very excited that Montage is the first company in the industry to successfully deliver the MXC chip, which signals we are making a critical step towards advancing the CXL interconnect technology to the memory market." CXL Consortium's President, Siamak Tavallaei said, "The CXL Consortium is excited to see continued CXL specification adoption to enable technologies and solutions such as the CXL DRAM Memory eXpander Controller." Montage Technology is working closely with industry-leading memory manufacturers to deliver advanced memory products based on the CXL MXC and help develop a robust memory ecosystem around CXL.

Tanzanite Silicon Solutions Demonstrates Industry's First CXL Based Memory Expansion and Memory Pooling Products

Tanzanite Silicon Solutions Inc., the leader in the development of Compute Express Link (CXL) based products, is unveiling its architectural vision and product roadmap with an SoC mapped to an FPGA proof-of-concept vehicle demonstrating Memory Expansion and Memory Pooling with multi-host CXL-based connectivity. Explosive demand for memory and compute to meet the needs of emerging applications such as Artificial Intelligence (AI), Machine Learning (ML), blockchain technology, and the metaverse is outpacing monolithic systems. A disaggregated data center design with composable components for CPU, memory, storage, GPU, and XPU is needed to provide flexible, dynamic pooling of resources to meet the varying demands of heterogeneous workloads in an optimal and efficient manner.

Tanzanite's visionary Tanzanoid architecture and purpose-built "Smart Logic Interface Connector" (SLIC) SoC enable independent scaling and sharing of memory and compute in a pool, with low latency within and across server racks. The Tanzanite solution provides a highly scalable architecture for exascale-level memory capacity and compute acceleration, supporting multiple industry-standard form factors, including E1.S, E3.S, memory expansion boards, and memory appliances.

CXL Consortium & Gen-Z Consortium Sign Letter of Intent to Advance Interconnect Technology

High performance computing continues to evolve—meeting the ever-increasing demand for high efficiency, low-latency, rapid and seamless processing. The Gen-Z Consortium was founded in 2016 to create a next-generation fabric capable of bridging existing solutions while enabling new, unbounded innovation in an open, non-proprietary standards body.

In 2019, the CXL Consortium launched to deliver Compute Express Link (CXL), an industry-supported cache-coherent interconnect designed for processors, memory expansion, and accelerators. The CXL Consortium and the Gen-Z Consortium established a joint memorandum of understanding (MOU) providing an opportunity for collaboration to define bridging between the protocols. This took the form of a joint working group that encouraged creativity and innovation between the two organizations toward the betterment of the industry as a whole.

Samsung Introduces Industry's First Open-source Software Solution for CXL Memory Platform

Samsung Electronics Co., Ltd., the world leader in advanced memory technology, today introduced the first open-source software solution, the Scalable Memory Development Kit (SMDK), that has been specially designed to support the Compute Express Link (CXL) memory platform. In May, Samsung unveiled the industry's first CXL memory expander that allows memory capacity and bandwidth to scale to levels far exceeding what is possible in today's server systems. Now, the company's CXL platform is being extended beyond hardware to offer easy-to-integrate software tools, making CXL memory much more accessible to data center system developers for emerging artificial intelligence (AI), machine learning (ML) and 5G-edge markets.

The CXL interconnect is an open, industry-backed standard that enables different types of devices such as accelerators, memory expanders and smart I/O devices to work more efficiently when processing high-performance computational workloads. "In order for data center and enterprise systems to smoothly run next-generation memory solutions like CXL, development of corresponding software is a necessity," said Cheolmin Park, vice president of the Memory Product Planning Team at Samsung Electronics. "Today, Samsung is reinforcing its commitment toward delivering a total memory solution that encompasses hardware and software, so that IT OEMs can incorporate new technologies into their systems much more effectively."

Penetration Rate of Ice Lake CPUs in Server Market Expected to Surpass 30% by Year's End as x86 Architecture Remains Dominant, Says TrendForce

While the server industry transitions to the latest generation of processors based on the x86 platform, the Intel Ice Lake and AMD Milan CPUs entered mass production earlier this year and were shipped to certain customers, such as North American CSPs and telecommunication companies, at a low volume in 1Q21, according to TrendForce's latest investigations. These processors are expected to begin seeing widespread adoption in the server market in 3Q21. TrendForce believes that Ice Lake represents a step up in computing performance from the previous generation due to its higher scalability and support for more memory channels. In addition, the new normal that emerged in the post-pandemic era is expected to drive clients in the server sector to partially migrate to the Ice Lake platform, whose share in the server market is expected to surpass 30% in 4Q21.

New Intel XPU Innovations Target HPC and AI

At the 2021 International Supercomputing Conference (ISC), Intel is showcasing how the company is extending its lead in high performance computing (HPC) with a range of technology disclosures, partnerships and customer adoptions. Intel processors are the most widely deployed compute architecture in the world's supercomputers, enabling global medical discoveries and scientific breakthroughs. Intel is announcing advances in its Xeon processor for HPC and AI as well as innovations in memory, software, exascale-class storage, and networking technologies for a range of HPC use cases.

"To maximize HPC performance we must leverage all the computer resources and technology advancements available to us," said Trish Damkroger, vice president and general manager of High Performance Computing at Intel. "Intel is the driving force behind the industry's move toward exascale computing, and the advancements we're delivering with our CPUs, XPUs, oneAPI Toolkits, exascale-class DAOS storage, and high-speed networking are pushing us closer toward that realization."

Cadence Announces New Low-Power IP for PCI Express 5.0 Specification on TSMC N5 Process

Cadence Design Systems, Inc. (Nasdaq: CDNS) today announced immediate availability of Cadence IP supporting the PCI Express (PCIe) 5.0 specification on TSMC N5 process technology. The next follow-on version on TSMC N3 process technology is expected to be taped out in early 2022. Collaboration with major customers is ongoing for N5 SoC designs targeting hyperscale computing and networking applications. The Cadence IP for PCIe 5.0 technology consists of a PHY, companion controller and Verification IP (VIP) targeted at SoC designs for very high-bandwidth hyperscale computing, networking and storage applications. With Cadence's PHY and Controller Subsystem for PCIe 5.0 architecture, customers can design extremely power-efficient SoCs with accelerated time to market.

The Cadence IP for PCIe 5.0 architecture offers a highly power-efficient implementation of the standard, with several evaluations from leading customers indicating it provides industry best-in-class power at the maximum data transfer rate of 32 GT/s and worst-case insertion loss. Leveraging Cadence's existing N7/N6 silicon-validated offering, the N5 design provides a full 512 GT/s (gigatransfers per second) power-optimized solution across the full range of operating conditions with a single clock lane.

Samsung Unveils Industry-First Memory Module Incorporating New CXL Interconnect

Samsung Electronics Co., Ltd., the world leader in advanced memory technology, today unveiled the industry's first memory module supporting the new Compute Express Link (CXL) interconnect standard. Integrated with Samsung's Double Data Rate 5 (DDR5) technology, this CXL-based module will enable server systems to significantly scale memory capacity and bandwidth, accelerating artificial intelligence (AI) and high-performance computing (HPC) workloads in data centers.

The rise of AI and big data has been fueling the trend toward heterogeneous computing, where multiple processors work in parallel to process massive volumes of data. CXL—an open, industry-supported interconnect based on the PCI Express (PCIe) 5.0 interface—enables high-speed, low latency communication between the host processor and devices such as accelerators, memory buffers and smart I/O devices, while expanding memory capacity and bandwidth well beyond what is possible today. Samsung has been collaborating with several data center, server and chipset manufacturers to develop next-generation interface technology since the CXL consortium was formed in 2019.

Microchip Announces World's First PCI Express 5.0 Switches

Applications such as data analytics, autonomous driving and medical diagnostics are driving extraordinary demands for machine learning and hyperscale compute infrastructure. To meet these demands, Microchip Technology Inc. today announced the world's first PCI Express (PCIe) 5.0 switch solutions—the Switchtec PFX PCIe 5.0 family—doubling the interconnect performance for dense compute, high-speed networking and NVM Express (NVMe) storage. Together with the XpressConnect retimers, Microchip is the industry's only supplier of both PCIe Gen 5 switches and PCIe Gen 5 retimer products, delivering a complete portfolio of PCIe Gen 5 infrastructure solutions with proven interoperability.

"Accelerators, graphic processing units (GPUs), central processing units (CPUs) and high-speed network adapters continue to drive the need for higher performance PCIe infrastructure. Microchip's introduction of the world's first PCIe 5.0 switch doubles the PCIe Gen 4 interconnect link rates to 32 GT/s to support the most demanding next-generation machine learning platforms," said Andrew Dieckmann, associate vice president of marketing and applications engineering for Microchip's data center solutions business unit. "Coupled with our XpressConnect family of PCIe 5.0 and Compute Express Link (CXL) 1.1/2.0 retimers, Microchip offers the industry's broadest portfolio of PCIe Gen 5 infrastructure solutions with the lowest latency and end-to-end interoperability."

BittWare Launches IA-840F with Intel Agilex FPGA and Support for oneAPI

BittWare, a Molex company, today unveiled the IA-840F, the company's first Intel Agilex-based FPGA card designed to deliver significant performance-per-watt improvements for next-generation data center, networking and edge compute workloads. Agilex FPGAs deliver up to 40% higher performance or up to 40% lower power, depending on application requirements. BittWare maximized I/O features using the Agilex chip's unique tiling architecture with dual QSFP-DDs (4× 100G), PCIe Gen4 x16, and three MCIO expansion ports for diverse applications. BittWare also announced support for Intel oneAPI, which enables an abstracted development flow for dramatically simplified code re-use across multiple architectures.

"Modern data center workloads are incredibly diverse, requiring customers to implement a mix of scalar, vector, matrix and spatial architectures," said Craig Petrie, vice president of marketing for BittWare. "The IA-840F ensures that customers can quickly and easily exploit the advanced features of the Intel Agilex FPGA. For those customers who prefer to develop FPGA applications at an abstracted level, we are including support for oneAPI. This new unified software programming environment allows customers to program the Agilex FPGA from a single code base with native high-level language performance across architectures."

CXL Consortium Releases Compute Express Link 2.0 Specification

The CXL Consortium, an industry standards body dedicated to advancing Compute Express Link (CXL) technology, today announced the release of the CXL 2.0 specification. CXL is an open industry-standard interconnect offering coherency and memory semantics using high-bandwidth, low-latency connectivity between host processor and devices such as accelerators, memory buffers, and smart I/O devices. The CXL 2.0 specification adds support for switching for fan-out to connect to more devices; memory pooling for increased memory utilization efficiency and providing memory capacity on demand; and support for persistent memory - all while preserving industry investments by supporting full backwards compatibility with CXL 1.1 and 1.0.
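The memory-pooling model CXL 2.0 introduces — pooled device capacity assigned to whichever host needs it, then reclaimed — can be sketched as a toy allocator. This is purely conceptual Python; the class and method names are invented for illustration and are not a real CXL API:

```python
# Toy model of CXL 2.0-style memory pooling: capacity from a pool of memory
# devices is handed out to hosts on demand and returned when released.
# Illustrative only -- names and structure are invented, not real CXL software.

class MemoryPool:
    def __init__(self, device_capacities_gb):
        self.free_gb = sum(device_capacities_gb)  # total unassigned capacity
        self.assigned = {}                        # host -> GB currently held

    def request(self, host: str, gb: int) -> bool:
        """Assign `gb` of pooled capacity to `host` if available."""
        if gb > self.free_gb:
            return False                          # pool exhausted: deny
        self.free_gb -= gb
        self.assigned[host] = self.assigned.get(host, 0) + gb
        return True

    def release(self, host: str) -> None:
        """Return all of `host`'s capacity to the pool."""
        self.free_gb += self.assigned.pop(host, 0)

pool = MemoryPool([256, 256, 512])      # three pooled devices, 1024 GB total
assert pool.request("host-a", 600)      # host A takes a large slice
assert pool.request("host-b", 400)
assert not pool.request("host-c", 100)  # only 24 GB left: request denied
pool.release("host-a")                  # host A's capacity returns to the pool
assert pool.request("host-c", 100)      # now it fits
```

The point of the sketch is the "capacity on demand" flow the specification describes: no host owns a device outright, so memory stranded on one server can be reassigned to another instead of sitting idle.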

"Datacenter architectures continue to evolve rapidly to support the growing demands of emerging workloads for Artificial Intelligence and Machine Learning, with CXL technology keeping pace to meet the performance and latency demands," said Barry McAuliffe, president, CXL Consortium. "Designed with breakthrough performance and easy adoption as guiding principles, the CXL 2.0 specification is a significant achievement from our dedicated technical work group members."

Intel Delivers Advances Across 6 Pillars of Technology, Powering Our Leadership Product Roadmap

At Intel, we truly believe in the potential of technology to enrich lives and change the world. This has been a guiding principle since the company was founded. It started with the PC era, when technology enabled the mass digitization of knowledge and networking, bringing 1 billion people onto the internet. Then came the mobile and cloud era, a disruption that changed the way we live. We now have over 10 billion devices connected to supercomputers in the cloud.

We believe the next era will be the intelligent era. An era where we will experience 100 billion intelligent connected devices. Exascale performance and architecture will make this intelligence available to all, enriching our lives in more ways than we can imagine today. This is a future that inspires and motivates me and my fellow Intel architects every day.

CXL Consortium and Gen-Z Consortium Announce MOU Agreement

The Compute Express Link (CXL) Consortium and Gen-Z Consortium today announced their execution of a Memorandum of Understanding (MOU), describing a mutual plan for collaboration between the two organizations. The agreement shows the commitment each organization is making to promote interoperability between the technologies, while leveraging and further developing complementary capabilities of each technology.

"CXL technology and Gen-Z are gearing up to make big strides across the device connectivity ecosystem. Each technology brings different yet complementary interconnect capabilities required for high-speed communications," said Jim Pappas, board chair, CXL Consortium. "We are looking forward to collaborating with the Gen-Z Consortium to enable great innovations for the Cloud and IT world."

Intel Announces New GPU Architecture and oneAPI for Unified Software Stack at SC19

At Supercomputing 2019, Intel unveiled its vision for extending its leadership in the convergence of high-performance computing (HPC) and artificial intelligence (AI) with new additions to its data-centric silicon portfolio and an ambitious new software initiative that represents a paradigm shift from today's single-architecture, single-vendor programming models.

Addressing the increasing use of heterogeneous architectures in high-performance computing, Intel expanded on its existing technology portfolio to move, store and process data more effectively by announcing a new category of discrete general-purpose GPUs optimized for AI and HPC convergence. Intel also launched the oneAPI industry initiative to deliver a unified and simplified programming model for application development across heterogeneous processing architectures, including CPUs, GPUs, FPGAs and other accelerators. The launch of oneAPI represents millions of Intel engineering hours in software development and marks a game-changing evolution from today's limiting, proprietary programming approaches to an open standards-based model for cross-architecture developer engagement and innovation.

7nm Intel Xe GPUs Codenamed "Ponte Vecchio"

Intel's first Xe GPU built on the company's 7 nm silicon fabrication process will be codenamed "Ponte Vecchio," according to a VideoCardz report. These are not gaming GPUs, but rather compute accelerators designed for exascale computing, which leverage the company's CXL (Compute Express Link) interconnect that has bandwidth comparable to PCIe gen 4.0, but with scalability features slated to come out with future generations of PCIe. Intel is preparing its first enterprise compute platform featuring these accelerators codenamed "Project Aurora," in which the company will exert end-to-end control over not just the hardware stack, but also the software.

"Project Aurora" combines up to six "Ponte Vecchio" Xe accelerators with up to two Xeon multi-core processors based on the 7 nm "Sapphire Rapids" microarchitecture, and oneAPI, a unifying API that lets a single kind of machine code address both the CPU and GPU. With Intel owning the x86 machine architecture, it's likely that Xe GPUs will feature, among other things, the ability to process x86 instructions. The API will be able to push scalar workloads to the CPU and the GPU's scalar units, and vector workloads to the GPU's vector-optimized SIMD units. Intel's main pitch to the compute market could be significantly lowered software costs from API and machine-code unification between the CPU and GPU.

Compute Express Link Consortium (CXL) Officially Incorporates

Today, Alibaba, Cisco, Dell EMC, Facebook, Google, Hewlett Packard Enterprise, Huawei, Intel Corporation and Microsoft announced the incorporation of the Compute Express Link (CXL) Consortium and unveiled the names of the newly elected members of its Board of Directors. The core group of key industry partners announced their intent to incorporate in March 2019, and they remain dedicated to advancing the CXL standard, a new high-speed CPU-to-Device and CPU-to-Memory interconnect which accelerates next-generation data center performance.

The five new CXL board members are as follows: Steve Fields, Fellow and Chief Engineer of Power Systems, IBM; Gaurav Singh, Corporate Vice President, Xilinx; Dong Wei, Standards Architect and Fellow at ARM Holdings; Nathan Kalyanasundharam, Senior Fellow at AMD Semiconductor; and Larrie Carr, Fellow, Technical Strategy and Architecture, Data Center Solutions, Microchip Technology Inc.

Intel Ships Stratix 10 DX FPGAs, VMWare Among Early Partners

Intel today announced shipments of new Intel Stratix 10 DX field programmable gate arrays (FPGA). The new FPGAs are designed to support Intel Ultra Path Interconnect (Intel UPI), PCI-Express (PCIe) Gen4 x16 and a new controller for Intel Optane technology to provide flexible, high-performance acceleration. VMware is one of many early access program participants.

"Intel Stratix 10 DX FPGAs are the first FPGAs designed to combine key features that dramatically boost acceleration of workloads in the cloud and enterprise when used with Intel's portfolio of data center solutions. No other FPGA currently offers this combination of features for server designs based on future select Intel Xeon Scalable processors," said David Moore, Intel vice president and general manager, FPGA and Power Products, Network and Custom Logic Group.

Intel Ships First 10nm Agilex FPGAs

Intel today announced that it has begun shipments of the first Intel Agilex field programmable gate arrays (FPGAs) to early access program customers. Participants in the early access program include Colorado Engineering Inc., Mantaro Networks, Microsoft and Silicom. These customers are using Agilex FPGAs to develop advanced solutions for networking, 5G and accelerated data analytics.

"The Intel Agilex FPGA product family leverages the breadth of Intel innovation and technology leadership, including architecture, packaging, process technology, developer tools and a fast path to power reduction with eASIC technology. These unmatched assets enable new levels of heterogeneous computing, system integration and processor connectivity and will be the first 10nm FPGA to provide cache-coherent and low latency connectivity to Intel Xeon processors with the upcoming Compute Express Link," said Dan McNamara, Intel senior vice president and general manager of the Networking and Custom Logic Group.

AMD Joins CXL Consortium

In a blog post, AMD's Executive Vice President and Chief Technology Officer, Mark Papermaster, announced that AMD is officially joining the Compute Express Link (CXL) Consortium. Previously, AMD joined various working groups addressing the problem of cache coherency across systems and accelerators, such as CCIX, OpenCAPI and Gen-Z. CXL is an initiative started by Intel that promises a better solution to the cache coherency problem, and many companies, now including AMD, have joined.

As workloads are split into different segments, such as machine learning and graphics processing, that require a mix of scalar and vector processing, accelerators for those workloads need better interconnection with other processors such as CPUs, where coherency and memory semantics are among the biggest challenges. AMD's goal in joining the CXL working group is to help advance the standard and possibly integrate it into future products, bringing the concept of heterogeneous computing a bit closer to reality.

Intel Reveals the "What" and "Why" of CXL Interconnect, its Answer to NVLink

CXL, short for Compute Express Link, is an ambitious new interconnect technology for removable high-bandwidth devices, such as GPU-based compute accelerators, in a data-center environment. It is designed to overcome many of the technical limitations of PCI-Express, not the least of which is bandwidth. Intel sensed that its upcoming family of scalable compute accelerators under the Xe brand need a specialized interconnect, which Intel wants to push as the next industry standard. The development of CXL was also triggered by compute accelerator majors NVIDIA and AMD already having similar interconnects of their own, NVLink and Infinity Fabric, respectively. At a dedicated event dubbed "Interconnect Day 2019," Intel put out a technical presentation that spelled out the nuts and bolts of CXL.

Intel began by describing why the industry needs CXL, and why PCI-Express (PCIe) doesn't suit this use case. For a client-segment device, PCIe is adequate: client machines don't have many devices or very large memory, and their applications don't have a very large memory footprint or scale across multiple machines. PCIe falls short in the data center, when dealing with multiple bandwidth-hungry devices and vast shared memory pools. Its biggest shortcoming is isolated memory pools for each device and inefficient access mechanisms. Resource sharing is almost impossible. Sharing operands and data between multiple devices, such as two GPU accelerators working on a problem, is very inefficient. And lastly, there's latency, lots of it. Latency is the biggest enemy of shared memory pools that span multiple physical machines. CXL is designed to overcome these problems without discarding the best part of PCIe: the simplicity and adaptability of its physical layer.
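The cost of those isolated per-device pools can be made concrete with a back-of-the-envelope model: with private memories, every handoff between two accelerators copies the working set over the link, while a coherent shared pool lets devices pass ownership and move only the data actually re-referenced. The 1% re-reference figure below is an invented assumption for illustration, not a measured number:

```python
# Toy comparison: isolated per-device memory pools vs. a coherent shared pool,
# counting bytes that cross the interconnect when two accelerators hand a
# working set back and forth. Conceptual only -- not real CXL software.

WORKING_SET = 4 * 2**30   # 4 GiB working set
HANDOFFS = 8              # times the two devices exchange intermediate results

# Isolated pools (classic PCIe model): every handoff copies the whole working
# set into the other device's private memory.
isolated_bytes_moved = HANDOFFS * WORKING_SET

# Coherent shared pool (CXL model): devices operate on one shared copy, so a
# handoff transfers ownership and moves only the data actually touched again.
# Assume, purely for illustration, 1% of the working set is re-referenced.
shared_bytes_moved = HANDOFFS * int(WORKING_SET * 0.01)

print(f"isolated pools: {isolated_bytes_moved / 2**30:.0f} GiB copied")
print(f"shared pool:    {shared_bytes_moved / 2**30:.2f} GiB transferred")
```

Under these assumptions the copy traffic drops by two orders of magnitude, which is the intuition behind the resource-sharing argument in Intel's presentation.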

Intel Driving Data-Centric World with New 10nm Intel Agilex FPGA Family

Intel announced today a brand-new product family, the Intel Agilex FPGA. This new family of field programmable gate arrays (FPGA) will provide customized solutions to address the unique data-centric business challenges across embedded, network and data center markets. "The race to solve data-centric problems requires agile and flexible solutions that can move, store and process data efficiently. Intel Agilex FPGAs deliver customized connectivity and acceleration while delivering much needed improvements in performance and power for diverse workloads," said Dan McNamara, Intel senior vice president, Programmable Solutions Group.

Customers need solutions that can aggregate and process increasing amounts of data traffic to enable transformative applications in emerging, data-driven industries like edge computing, networking and cloud. Whether it's through edge analytics for low-latency processing, virtualized network functions to improve performance, or data center acceleration for greater efficiency, Intel Agilex FPGAs are built to deliver customized solutions for applications from the edge to the cloud. Advances in artificial intelligence (AI) analytics at the edge, network and the cloud are compelling hardware systems to cope with evolving standards, support varying AI workloads, and integrate multiple functions. Intel Agilex FPGAs provide the flexibility and agility required to meet these challenges and deliver gains in performance and power.
