News Posts matching #InfiniBand


NVIDIA and Atos Team Up to Build World's Fastest AI Supercomputer

NVIDIA today announced that the Italian inter-university consortium CINECA—one of the world's most important supercomputing centers—will use the company's accelerated computing platform to build the world's fastest AI supercomputer.

The new "Leonardo" system, built with Atos, is expected to deliver 10 exaflops of FP16 AI performance to enable advanced converged AI and HPC applications. Featuring nearly 14,000 NVIDIA Ampere architecture-based GPUs and NVIDIA Mellanox HDR 200 Gb/s InfiniBand networking, Leonardo will propel Italy to the forefront of global AI and high-performance computing research and innovation.

NVIDIA Building UK's Most Powerful Supercomputer, Dedicated to AI Research in Healthcare

NVIDIA today announced that it is building the United Kingdom's most powerful supercomputer, which it will make available to U.K. healthcare researchers using AI to solve pressing medical challenges, including those presented by COVID-19.

Expected to come online by year end, the "Cambridge-1" supercomputer will be an NVIDIA DGX SuperPOD system capable of delivering more than 400 petaflops of AI performance and 8 petaflops of Linpack performance, which would rank it No. 29 on the latest TOP500 list of the world's most powerful supercomputers. It will also rank among the world's top 3 most energy-efficient supercomputers on the current Green500 list.

Atos Launches First Supercomputer Equipped with NVIDIA A100 Tensor Core GPU

Atos, a global leader in digital transformation, today announces its new BullSequana X2415, the first supercomputer in Europe to integrate NVIDIA's next-generation Ampere GPU architecture in the form of the NVIDIA A100 Tensor Core GPU. This new supercomputer blade will deliver unprecedented computing power to boost application performance for HPC and AI workloads, tackling the challenges of the exascale era. The BullSequana X2415 blade will increase computing power by more than 2x and optimize energy consumption thanks to Atos' patented, highly efficient Direct Liquid Cooling (DLC) solution, which uses warm water to cool the machine.

Forschungszentrum Jülich will integrate this new blade into its booster module, extending its existing JUWELS BullSequana supercomputer and making it the first system worldwide to use this new technology. The JUWELS Booster will provide researchers across Europe with significantly increased computational resources. Among the projects it will fuel are the European Commission's Human Brain Project and the Jülich Laboratories of "Climate Science" and "Molecular Systems". Once fully deployed this summer, the upgraded system, operated under ParTec's ParaStation Modulo software, is expected to provide a computational peak performance of more than 70 petaflops, making it the most powerful supercomputer in Europe and a showcase for European exascale architecture.

NVIDIA Tesla A100 "Ampere" AIC (add-in card) Form-Factor Board Pictured

Here's the first picture of a Tesla A100 "Ampere" AIC (add-in card) form-factor board, hot on the heels of this morning's big A100 reveal. The AIC card is a bare PCB, to which workstation builders will add compatible cooling solutions. The PCB features the gigantic GA100 processor with its six HBM2E stacks in the center, surrounded by VRM components, with I/O on three sides. On the bottom side you will find a conventional PCI-Express 4.0 x16 host interface; above it are the NVLink fingers. The rear I/O has high-bandwidth network interfaces (likely 200 Gbps InfiniBand) by Mellanox. The tail end has hard points for 12 V power input. Find juicy details of the GA100 in our older article.

NVIDIA to Acquire Mellanox Technology for $6.9 Billion

NVIDIA and Mellanox today announced that the companies have reached a definitive agreement under which NVIDIA will acquire Mellanox. Pursuant to the agreement, NVIDIA will acquire all of the issued and outstanding common shares of Mellanox for $125 per share in cash, representing a total enterprise value of approximately $6.9 billion. Once complete, the combination is expected to be immediately accretive to NVIDIA's non-GAAP gross margin, non-GAAP earnings per share and free cash flow.

The acquisition will unite two of the world's leading companies in high performance computing (HPC). Together, NVIDIA's computing platform and Mellanox's interconnects power over 250 of the world's TOP500 supercomputers and have as customers every major cloud service provider and computer maker. The data and compute intensity of modern workloads in AI, scientific computing and data analytics is growing exponentially and has put enormous performance demands on hyperscale and enterprise datacenters. While computing demand is surging, CPU performance advances are slowing as Moore's law has ended. This has led to the adoption of accelerated computing with NVIDIA GPUs and Mellanox's intelligent networking solutions.

China Pulls Ahead of U.S. in Latest TOP500 List

The fiftieth TOP500 list of the fastest supercomputers in the world has China overtaking the US in the total number of ranked systems by a margin of 202 to 143. It is the largest number of supercomputers China has ever claimed on the TOP500 ranking, with the US presence shrinking to its lowest level since the list's inception 25 years ago.

Just six months ago, the US led with 169 systems, with China coming in at 160. Despite the reversal of fortunes, the 143 systems claimed by the US gives it a solid second-place finish, with Japan in third place with 35, followed by Germany with 20, France with 18, and the UK with 15.

NVM Express Over Fabrics Specification Released

NVM Express, Inc., the organization that developed the NVM Express specification for accessing solid-state storage technologies on a PCI Express (PCIe) bus, today announced the release of its NVM Express over Fabrics specification for accessing storage devices and systems over Ethernet, Fibre Channel, InfiniBand, and other network fabrics. NVM Express, Inc. has also recently published Version 1.0 of the NVM Express Management Interface specification.

The NVM Express over Fabrics specification extends the benefits of NVM Express beyond rack-scale architectures to datacenter-wide Fabric architectures supporting thousands of solid state devices, where using a fabric as an attach point to the host is more appropriate than using PCI Express.

Storage technologies are quickly innovating to reduce latency, providing a significant performance improvement for today's cutting-edge applications. NVM Express (NVMe) is a significant step forward in high-performance, low-latency storage I/O and reduction of I/O stack overheads. NVMe over Fabrics is an essential technology to extend NVMe storage connectivity such that NVMe-enabled hosts can access NVMe-enabled storage anywhere in the datacenter, ensuring that the performance of today's and tomorrow's solid state storage technologies is fully unlocked, and that the network itself is not a bottleneck.
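As a concrete illustration of what this looks like on the host side, later Linux systems ship an in-kernel NVMe-oF initiator driven by the `nvme-cli` tool; the sketch below attaches a remote NVMe namespace over an RDMA fabric (InfiniBand or RoCE). The IP address and subsystem NQN are placeholders, not values from the announcement.

```shell
# Discover NVMe-oF subsystems exported by a target (address and port are placeholders)
nvme discover --transport=rdma --traddr=192.168.0.10 --trsvcid=4420

# Connect to a discovered subsystem by its NVMe Qualified Name (NQN)
nvme connect --transport=rdma --traddr=192.168.0.10 --trsvcid=4420 \
    --nqn=nqn.2016-06.io.example:subsystem1

# The remote namespace now appears as a local block device alongside local NVMe drives
nvme list
```

Once connected, the remote device is indistinguishable to applications from a locally attached NVMe SSD, which is precisely the transparency the specification aims for.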

NVIDIA Launches World's First Deep Learning Supercomputer

NVIDIA today unveiled the NVIDIA DGX-1, the world's first deep learning supercomputer to meet the unlimited computing demands of artificial intelligence. The NVIDIA DGX-1 is the first system designed specifically for deep learning -- it comes fully integrated with hardware, deep learning software and development tools for quick, easy deployment. It is a turnkey system that contains a new generation of GPU accelerators, delivering the equivalent throughput of 250 x86 servers.

The DGX-1 deep learning system enables researchers and data scientists to easily harness the power of GPU-accelerated computing to create a new class of intelligent machines that learn, see and perceive the world as humans do. It delivers unprecedented levels of computing power to drive next-generation AI applications, allowing researchers to dramatically reduce the time to train larger, more sophisticated deep neural networks.

IBM, NVIDIA and Mellanox Launch Design Center for Big Data and HPC

IBM, in collaboration with NVIDIA and Mellanox, today announced the establishment of a POWER Acceleration and Design Center in Montpellier, France to advance the development of data-intensive research, industrial, and commercial applications. Born out of the collaborative spirit fostered by the OpenPOWER Foundation - a community co-founded in part by IBM, NVIDIA and Mellanox supporting open development on top of the POWER architecture - the new Center provides commercial and open-source software developers with technical assistance to enable them to develop high performance computing (HPC) applications.

Technical experts from IBM, NVIDIA and Mellanox will help developers take advantage of OpenPOWER systems leveraging IBM's open and licensable POWER architecture with the NVIDIA Tesla Accelerated Computing Platform and Mellanox InfiniBand networking solutions. These are the class of systems developed collaboratively with the U.S. Department of Energy for the next generation Sierra and Summit supercomputers and to be used by the United Kingdom's Science and Technology Facilities Council's Hartree Centre for big data research.

Intel's Post-2017 "Purley" Enterprise Platform Detailed

Intel's future enterprise computing business, post-2017, could see its 2-socket (2S), 4-socket (4S), and 8-socket-plus (8S+) platforms unified into one, codenamed "Purley." The platform will consist of multiple SKUs sharing a common socket type (Socket-P), and a new interconnect technology replacing InfiniBand that will wire together the sockets and core-logic across multiple server blades. Called Omni-Path Interconnect, the tech appears to be fiber-optic at the physical layer, with extremely thin cables, and bandwidths could start at 100 Gbps for the first generation. The controller driving it is codenamed "Storm Lake." Inter-socket communication will be handled by a newer 10.4 GT/s UPI interconnect; each socket will feature three such UPI channels. The platform will support up to eight sockets per blade, with more sockets across neighboring blades over Omni-Path.

"Purley" will feature a new platform core-logic in the form of the "Lewisburg" PCH. It will feature the new DMI3 chipset bus, which is PCI-Express 3.0 x4 at the physical layer. This chipset will support up to four 10 GbE interfaces. On the processor front will be chips based on the "Skylake" micro-architecture. Intel will carve out several silicons based on "Skylake"; the biggest one will feature 28 physical CPU cores, with HyperThreading enabling 56 logical CPUs, and, for the first time, a six-channel (384-bit wide) DDR4 integrated memory controller with support for DDR4-2666. On the flip side, this IMC supports only one DIMM per channel (DPC); the 3 DPC support from previous platforms is gone. These chips will be built on the 14 nm silicon fab process, and their TDP will range between 45 W and 165 W, depending on core counts and clock speeds.
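The memory figures above can be sanity-checked with simple arithmetic. This sketch assumes the standard DDR4 parameters of a 64-bit data bus per channel and 2666 mega-transfers per second for DDR4-2666; those constants come from the DDR4 specification, not from the article.

```python
# Back-of-envelope check of the "Purley" memory subsystem figures.
CHANNELS = 6
BITS_PER_CHANNEL = 64          # standard DDR4 data-bus width per channel
TRANSFER_RATE_MT_S = 2666      # DDR4-2666, mega-transfers per second

# Aggregate bus width: 6 channels x 64 bits = the "384-bit wide" IMC
bus_width_bits = CHANNELS * BITS_PER_CHANNEL
print(bus_width_bits)          # 384

# Peak bandwidth: transfers/s x 8 bytes per transfer per channel x channels
peak_gb_s = TRANSFER_RATE_MT_S * 1e6 * (BITS_PER_CHANNEL // 8) * CHANNELS / 1e9
print(round(peak_gb_s, 1))     # ~128.0 GB/s aggregate theoretical peak
```

The roughly 128 GB/s theoretical peak is a 50% uplift over the quad-channel IMCs of the preceding Xeon E5/E7 platforms at the same memory speed.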

Intel Reveals Details for Future HPC System Building Blocks

Intel Corporation today announced several new and enhanced technologies bolstering its leadership in high-performance computing (HPC). These include disclosure of the future-generation Intel Xeon Phi processor, code-named Knights Hill, and new architectural and performance details for Intel Omni-Path Architecture, a new high-speed interconnect technology optimized for HPC deployments.

Intel also announced new software releases and collaborative efforts designed to make it easier for the HPC community to extract the full performance potential from current and future Intel industry-standard hardware. Together, these new HPC building blocks and industry collaborations will help to address the dual challenges of extreme scalability and mainstream use of HPC while providing the foundation for a cost-effective path to exascale computing.

PLX Unveils ExpressFabric at SC12 Supercomputing Event

PLX Technology, Inc. (NASDAQ: PLXT), a leading global supplier of high-speed connectivity solutions enabling emerging data center architectures, today announced a live demonstration of its ExpressFabric technology at the SC12 supercomputing conference this week in Salt Lake City, UT. ExpressFabric is a PLX-initiated approach to extending the PCI Express (PCIe) standard from its current dominant presence inside servers, switches and storage appliances, to a central position as a fabric, uniting all hardware within the racks of data centers.

"Small to medium sized clusters -- for example, those with 20 to 1,000 CPU nodes, on one to eight racks -- are best suited for ExpressFabric technology," said Vijay Meduri, PLX vice president of engineering. "For applications that use clusters of that size, or are capable of using disaggregated clusters, ExpressFabric technology allows for complete elimination of bridges and other protocol fabrics to create the mini-clusters. The PCIe convergence model and ongoing software development that is being implemented is designed to be compatible and will co-exist with Ethernet, InfiniBand and other fabrics in the data center."

SuperMicro Announces Server and Motherboard Line for AMD Opteron 6300 Series

Super Micro Computer, Inc., a global leader in high-performance, high-efficiency server technology and green computing, has upgraded its line of A+ servers with the just-released AMD (NYSE: AMD) Opteron 6300 Series processors. The new A+ server and tower platforms deliver up to 24 percent higher performance over prior generation solutions and maximize processor density and core counts to support more VMs for public and private cloud deployments. Performance per watt is up to 40 percent higher with intelligent thermal/power monitoring and management, enabling datacenters running robust large-scale software systems to maximize performance while lowering overall TCO. With dual-processor (DP) and multi-processor (MP) architectures, Supermicro ups the performance bar for data center and high-performance computing (HPC) cluster applications.

PLX to Showcase Emerging PCIe Gen3, USB 3.0 Applications at Intel Developers Forum

PLX Technology, Inc., a leading global supplier of high-speed connectivity solutions enabling emerging data center architectures, is welcoming attendees of the Intel Developers Forum (IDF), held this week in San Francisco at the Moscone West Convention Center, to witness a live demonstration of how PCI Express (PCIe) Gen3 performance is poised to challenge, replace or enhance all other interconnect technologies in the data center, including SAS, RapidIO, InfiniBand and Ethernet.

The first demonstration utilizes PLX PCIe Gen3 switches to showcase how plug-and-play, box-to-box connectivity using cost-effective optical modules, connectors and cables -- without any special hardware or software -- can achieve the high performance required of new data center designs. The demonstration's highlights include a x4 PCIe configuration achieving 32 Gbps throughput via economical mini-SAS HD or dual-optical USB. The demonstration will also prove how "independent SSC" operation can be achieved without spread-spectrum clocking (SSC) isolation, showing that neither clock management, additional clock chips/buffers, nor protocol translations are needed, resulting in simple scalability and connectivity, lower connection costs and higher density.

PLX Expands PCI Express Gen3 Portfolio, Adds Versatile 96-, 80-, 64-Lane Switches

PLX Technology, Inc., a leading global supplier of high-speed connectivity solutions enabling emerging data center architectures, today announced a new trio of ultra-high lane count PCI Express (PCIe) Gen3 switches developed for cutting-edge markets like storage systems, high-end graphics, and communications platforms. The high-performance ExpressLane PCIe Gen3 switches include the PEX8796 (96 lanes, 24 ports), PEX8780 (80 lanes, 20 ports) and PEX8764 (64 lanes, 16 ports), which expand the PLX portfolio to 14 PCIe Gen3 devices. Designers choosing the PEX8796 switch -- touting bandwidth of 8 Gigatransfers per second, per lane, in full duplex mode -- are rewarded with amazing throughput of 1,536 gigabits per second (192 gigabytes/s), delivering performance that challenges all other interconnect technologies.
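The headline 1,536 Gbps figure for the 96-lane PEX8796 is the raw per-lane rate multiplied across lanes and both directions of the full-duplex link, which a few lines of arithmetic reproduce:

```python
# Reproduce the aggregate throughput quoted for the 96-lane PEX8796 switch.
LANES = 96
GBPS_PER_LANE = 8              # Gen3: 8 GT/s, roughly 8 Gbps raw per lane, per direction
DIRECTIONS = 2                 # full duplex counts both directions

total_gbps = LANES * GBPS_PER_LANE * DIRECTIONS
total_gbytes = total_gbps // 8
print(total_gbps, total_gbytes)   # 1536 Gbps, i.e. 192 GB/s, as in the announcement
```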

AMD Supercomputing Leadership Continues with Broader Developer Ecosystem, Latest Tech

AMD today announced its continued leadership in high performance computing (HPC) with 24 of the top 100 supercomputers in the 39th TOP500 list, announced today, while making significant strides in maturing its HPC ecosystem through the addition of several key components that enable end users to continue to benefit from best-in-class price/performance for today's demanding applications.

AMD has continued to work closely with its technology partners to significantly mature its HPC ecosystem, resulting in several new developments including LS-DYNA simulation software optimized for the AMD Opteron 6200 Series processors from Livermore Software Technology Corporation (LSTC), the addition of programming options for AMD graphics processor unit (GPU) technology from CAPS, and the announcement by Mellanox Technologies of its Connect-IB products that will deliver FDR 56 Gb/s InfiniBand speeds to AMD's solution portfolios.

NASA Scales SGI Pleiades InfiniBand Cluster to 25,000 Intel Xeon Processor Cores

SGI, the trusted leader in technical computing, announced today that with over 60 miles of InfiniBand cabling in place at the NASA Advanced Supercomputing (NAS) Division at NASA Ames Research Center at Moffett Field, Calif., a scientist was able to utilize 25,000 SGI ICE Intel Xeon processor cores on Pleiades to run a space weather simulation.

One particular area of study is magnetic reconnection, a physical process in highly conducting plasmas such as those that occur in the Earth's magnetosphere, in which the magnetic topology is rearranged and magnetic energy converted to kinetic or thermal energy. This field of research is critical, as these disturbances can disable wide scale power grids, affect satellite transmissions and disrupt airline communications.

SGI ICE Supercomputer Speeds to Benchmark Record

SGI (NASDAQ: SGI), the trusted leader in technical computing, announced today that its SGI ICE 8400 high performance computing (HPC) system with AMD Opteron 6200 processors achieved the top performance number for 64-, 32-, 16-, 8-, and 4-socket configurations on the SPECmpiL 2007 benchmark. SPEC MPI2007 is SPEC's "benchmark suite for evaluating MPI-parallel, floating point, compute intensive performance across a wide range of cluster and SMP hardware," according to the SPEC website.

The SGI ICE 8400 platform with AMD processors is a completely open platform optimized for HPC workloads and runs an off-the-shelf Linux operating system for application compatibility. Although the ICE platform can comfortably support multi-petaflop-sized installations, design considerations allow cost-effective solutions down to a half rack. Single- or dual-plane integrated InfiniBand can be cabled into four different topologies, including hypercube, enhanced hypercube, all-to-all, and fat-tree, allowing flexible network customization for a variety of workloads.
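Of the topologies listed, the hypercube is the simplest to sketch in code: in a d-dimensional hypercube of 2^d nodes, two nodes are linked exactly when their binary labels differ in a single bit. The illustration below is a generic textbook construction, not SGI's actual fabric-management code.

```python
def hypercube_neighbors(node: int, dim: int) -> list[int]:
    """Neighbors of `node` in a dim-dimensional hypercube: flip each bit once."""
    return [node ^ (1 << k) for k in range(dim)]

# In a 3-D hypercube (8 nodes), node 0 links to nodes 1, 2 and 4.
print(hypercube_neighbors(0, 3))   # [1, 2, 4]

# Every node has exactly `dim` neighbors, so a 16-node fabric (dim=4)
# needs 16 * 4 / 2 = 32 cables in total.
links = sum(len(hypercube_neighbors(n, 4)) for n in range(16)) // 2
print(links)                       # 32
```

The appeal for clusters is that the cable count grows as (n/2)·log2(n) rather than the n·(n-1)/2 of an all-to-all mesh, while the longest route between any two nodes is only log2(n) hops.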

Intel Announces InfiniBand Acquisition

Intel Corporation today announced that it has entered into a definitive agreement with QLogic to acquire the product lines of and certain assets related to its InfiniBand business. A significant number of the employees associated with this business are expected to accept offers to join Intel.

This acquisition is designed to enhance Intel's networking portfolio and provide scalable high-performance computing (HPC) fabric technology as well as support the company's vision of innovating on fabric architectures to achieve ExaFLOP/s performance by 2018. An ExaFLOP/s is a quintillion computer operations per second, a hundred times more than today's fastest supercomputers.