T0@st
News Editor
Marvell Technology, Inc., a leader in data infrastructure semiconductor solutions, today announced it is teaming with NVIDIA to offer NVLink Fusion technology to customers employing Marvell custom cloud platform silicon. NVLink Fusion is a new NVIDIA offering for integrating custom XPU silicon with NVIDIA NVLink connectivity, rack-scale hardware architecture, software and other technology, giving customers greater flexibility and choice in developing next-generation AI infrastructure.
The Marvell custom platform strategy seeks to deliver breakthrough results through unique semiconductor designs and innovative approaches. Marvell combines expertise in system and semiconductor design, advanced process manufacturing, and a comprehensive portfolio of semiconductor platform solutions and IP, including electrical and optical serializer/deserializers (SerDes), die-to-die interconnects for 2D and 3D devices, advanced packaging, silicon photonics, co-packaged copper, custom high-bandwidth memory (HBM), system-on-chip (SoC) fabrics, optical I/O, and compute fabric interfaces such as PCIe Gen 7. This lets the company create platforms in collaboration with customers that transform infrastructure performance, efficiency and value.
Marvell custom silicon with NVLink Fusion offers hyperscalers an accelerated path to custom scale-up solutions that meet the rigorous demands of model training and agentic AI inference—where outputs are driven by learned knowledge and reasoning. Hyperscalers can now seamlessly deploy these customized capabilities across their AI data center infrastructure while also leveraging their architecture investments in NVLink.
NVLink gives cloud providers an easy path to scale out AI factories to millions of custom XPUs by effectively enabling them to integrate their proprietary XPUs into NVIDIA's rack-scale systems and the NVIDIA end-to-end networking platform. The core technology of NVLink Fusion is a chiplet delivering up to 1.8 TB/s of bidirectional bandwidth.
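To put the quoted figure in perspective, here is a back-of-envelope sketch of what 1.8 TB/s of bidirectional bandwidth implies for moving model state on or off a custom XPU. The 1.8 TB/s number comes from the announcement; the even split into 0.9 TB/s per direction and the 70B-parameter FP16 example model are illustrative assumptions, not specifications from Marvell or NVIDIA.

```python
# Back-of-envelope math for the NVLink Fusion chiplet's quoted bandwidth.
# Assumption: "1.8 TB/s bidirectional" means both directions combined,
# i.e. 0.9 TB/s each way. The example model size is hypothetical.

BIDIRECTIONAL_TBPS = 1.8
per_direction_tbps = BIDIRECTIONAL_TBPS / 2   # 0.9 TB/s each way (assumed split)

# Hypothetical workload: a 70B-parameter model stored in FP16 (2 bytes/param).
params = 70e9
bytes_per_param = 2
model_bytes = params * bytes_per_param        # 140 GB of weights

# Idealized one-way transfer time, ignoring protocol overhead and contention.
seconds = model_bytes / (per_direction_tbps * 1e12)
print(f"One-way transfer of {model_bytes / 1e9:.0f} GB at "
      f"{per_direction_tbps:.1f} TB/s takes about {seconds:.2f} s")
```

The point is scale: at this class of link speed, shuttling an entire large model's weights between devices is a sub-second operation in the ideal case, which is what makes tightly coupled scale-up fabrics attractive for training and inference.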
"Marvell and NVIDIA are working together to advance AI factory integration," said Nick Kucharewski, senior vice president and general manager, Cloud Platform Business Unit at Marvell. "Through this collaboration, we offer customers the flexibility to rapidly deploy scalable AI infrastructure with the bandwidth, performance and reliability required to support advanced AI models."
"The computing landscape is being reshaped as AI is no longer an application—it is foundational to modern data centers," said Shar Narasimhan, Director of Accelerated Computing at NVIDIA. "NVLink Fusion extends NVIDIA's open platform to partners like Marvell, enabling hyperscalers to scale out AI factories to millions of GPUs, using custom silicon, NVIDIA's rack-scale systems and the NVIDIA end-to-end networking platform, to meet the world's accelerating demand for intelligence."
View at TechPowerUp Main Site | Source