News Posts matching "InfiniBand"


IBM, NVIDIA and Mellanox Launch Design Center for Big Data and HPC

IBM, in collaboration with NVIDIA and Mellanox, today announced the establishment of a POWER Acceleration and Design Center in Montpellier, France to advance the development of data-intensive research, industrial, and commercial applications. Born out of the collaborative spirit fostered by the OpenPOWER Foundation - a community co-founded in part by IBM, NVIDIA and Mellanox supporting open development on top of the POWER architecture - the new Center provides commercial and open-source software developers with technical assistance to enable them to develop high performance computing (HPC) applications.

Technical experts from IBM, NVIDIA and Mellanox will help developers take advantage of OpenPOWER systems leveraging IBM's open and licensable POWER architecture with the NVIDIA Tesla Accelerated Computing Platform and Mellanox InfiniBand networking solutions. These are the class of systems developed collaboratively with the U.S. Department of Energy for the next-generation Sierra and Summit supercomputers, and to be used by the United Kingdom's Science and Technology Facilities Council's Hartree Centre for big data research.

Intel's Post-2017 "Purley" Enterprise Platform Detailed

Intel's future enterprise computing business, post-2017, could see its 2-socket (2S), 4-socket (4S), and 8-socket-plus (8S+) platforms unified into one, codenamed "Purley." The platform will consist of multiple SKUs built around a common socket type (Socket-P), plus a new interconnect technology intended to displace InfiniBand, wiring sockets and core-logic together across multiple server blades. Called Omni-Path Interconnect, the technology appears to be fiber-optic at the physical layer, with extremely thin cables, and bandwidth could start at 100 Gbps for the first generation. The controller driving it is codenamed "Storm Lake." Inter-socket communication will be handled by a newer 10.4 GT/s UPI interconnect; each socket will feature three such UPI channels. The platform will support up to eight sockets per blade, with more sockets reachable across neighboring blades over Omni-Path.

"Purley" will feature new platform core-logic in the form of the "Lewisburg" PCH. It will introduce the new DMI3 chipset bus, which is PCI-Express 3.0 x4 at the physical layer, and will support up to four 10 GbE interfaces. On the processor front will be chips based on the "Skylake" micro-architecture. Intel will carve out several dies based on "Skylake"; the biggest will feature 28 physical CPU cores, with HyperThreading enabling 56 logical CPUs, and, for the first time, a six-channel (384-bit wide) DDR4 integrated memory controller with support for DDR4-2666. On the flip side, this IMC supports only one DIMM per channel (1DPC); the 3DPC support of previous platforms is gone. These chips will be built on the 14 nm silicon fab process, and their TDP will range between 45 W and 165 W, depending on core counts and clock speeds.
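A quick back-of-the-envelope check of what the six-channel DDR4-2666 memory controller described above implies for peak bandwidth. This is an illustrative sketch based on standard DDR4 arithmetic (64-bit channels), not figures from Intel documentation:

```python
# Illustrative peak-bandwidth arithmetic for a six-channel DDR4-2666 IMC,
# as described for the big "Skylake" server die. Each DDR4 channel is
# 64 bits (8 bytes) wide; DDR4-2666 performs 2666 million transfers/s.

def peak_memory_bandwidth_gbs(channels: int, transfers_per_sec: float,
                              bus_width_bits: int = 64) -> float:
    """Peak bandwidth in GB/s: channels x bytes per transfer x transfer rate."""
    bytes_per_transfer = bus_width_bits // 8
    return channels * bytes_per_transfer * transfers_per_sec / 1e9

bw = peak_memory_bandwidth_gbs(channels=6, transfers_per_sec=2666e6)
print(f"{bw:.1f} GB/s")  # prints "128.0 GB/s"
```

The 384-bit figure quoted in the text is simply six 64-bit channels side by side.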

Source: AnandTech Forums

Intel Reveals Details for Future HPC System Building Blocks

Intel Corporation today announced several new and enhanced technologies bolstering its leadership in high-performance computing (HPC). These include disclosure of the future-generation Intel Xeon Phi processor, code-named Knights Hill, and new architectural and performance details for Intel Omni-Path Architecture, a new high-speed interconnect technology optimized for HPC deployments.

Intel also announced new software releases and collaborative efforts designed to make it easier for the HPC community to extract the full performance potential from current and future Intel industry-standard hardware. Together, these new HPC building blocks and industry collaborations will help to address the dual challenges of extreme scalability and mainstream use of HPC while providing the foundation for a cost-effective path to exascale computing.

PLX Unveils ExpressFabric at SC12 Supercomputing Event

PLX Technology, Inc. (NASDAQ: PLXT), a leading global supplier of high-speed connectivity solutions enabling emerging data center architectures, today announced a live demonstration of its ExpressFabric technology at the SC12 supercomputing conference this week in Salt Lake City, UT. ExpressFabric is a PLX-initiated approach to extending the PCI Express (PCIe) standard from its current dominant presence inside servers, switches and storage appliances, to a central position as a fabric, uniting all hardware within the racks of data centers.

"Small to medium sized clusters -- for example, those with 20 to 1,000 CPU nodes, on one to eight racks -- are best suited for ExpressFabric technology," said Vijay Meduri, PLX vice president of engineering. "For applications that use clusters of that size, or are capable of using disaggregated clusters, ExpressFabric technology allows for complete elimination of bridges and other protocol fabrics to create the mini-clusters. The PCIe convergence model and ongoing software development that is being implemented is designed to be compatible and will co-exist with Ethernet, InfiniBand and other fabrics in the data center."

SuperMicro Announces Server and Motherboard Line for AMD Opteron 6300 Series

Super Micro Computer, Inc., a global leader in high-performance, high-efficiency server technology and green computing, has upgraded its line of A+ servers with the just-released AMD (NYSE: AMD) Opteron 6300 Series processors. The new A+ server and tower platforms deliver up to 24 percent higher performance over prior-generation solutions and maximize processor density and core counts to support more VMs for public and private cloud deployments. Performance per watt is up to 40 percent higher with intelligent thermal/power monitoring and management, enabling datacenters running robust large-scale software systems to maximize performance while lowering overall TCO. With dual-processor (DP) and multi-processor (MP) architectures, Supermicro ups the performance bar for data center and high-performance computing (HPC) cluster applications.

PLX to Showcase Emerging PCIe Gen3, USB 3.0 Applications at Intel Developer Forum

PLX Technology, Inc., a leading global supplier of high-speed connectivity solutions enabling emerging data center architectures, is welcoming attendees of the Intel Developer Forum (IDF), held this week in San Francisco at the Moscone West Convention Center, to witness a live demonstration of how PCI Express (PCIe) Gen3 performance is poised to challenge, replace or enhance all other interconnect technologies in the data center, including SAS, RapidIO, InfiniBand and Ethernet.

The first demonstration uses PLX PCIe Gen3 switches to show how plug-and-play, box-to-box connectivity over cost-effective optical modules, connectors and cables -- without any special hardware or software -- can achieve the high performance required of new data center designs. Highlights include a x4 PCIe configuration achieving 32 Gbps throughput via economical mini-SAS HD or dual-optical USB cabling. The demonstration will also show how "independent SSC" operation can be achieved without spread-spectrum clocking (SSC) isolation, meaning that no clock management, additional clock chips/buffers, or protocol translations are needed -- resulting in simple scalability and connectivity, lower connection costs and higher density.
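The 32 Gbps figure above is just lanes times the per-lane signalling rate. A small illustrative sketch (not PLX material) showing the raw number and, for comparison, the effective payload rate after PCIe Gen3's 128b/130b line encoding:

```python
# Raw vs. effective throughput for a x4 PCIe Gen3 link, as in the demo above.
# PCIe Gen3 signals at 8 GT/s per lane and uses 128b/130b line encoding;
# the 32 Gbps headline figure is the raw rate, per direction.

def pcie_gen3_throughput_gbps(lanes: int, after_encoding: bool = False) -> float:
    raw = lanes * 8.0                         # 8 GT/s per lane
    return raw * (128 / 130) if after_encoding else raw

print(pcie_gen3_throughput_gbps(4))                           # 32.0 (raw)
print(round(pcie_gen3_throughput_gbps(4, after_encoding=True), 1))  # 31.5
```

The low encoding overhead (about 1.5%) is one reason Gen3's 8 GT/s nearly doubles Gen2's effective bandwidth despite only a 60% rise in signalling rate.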

PLX Expands PCI Express Gen3 Portfolio, Adds Versatile 96-, 80-, 64-Lane Switches

PLX Technology, Inc., a leading global supplier of high-speed connectivity solutions enabling emerging data center architectures, today announced a trio of ultra-high-lane-count PCI Express (PCIe) Gen3 switches developed for cutting-edge markets such as storage systems, high-end graphics, and communications platforms. The high-performance ExpressLane PCIe Gen3 switches include the PEX8796 (96 lanes, 24 ports), PEX8780 (80 lanes, 20 ports) and PEX8764 (64 lanes, 16 ports), which expand the PLX portfolio to 14 PCIe Gen3 devices. Designers choosing the PEX8796 switch -- with a signaling rate of 8 gigatransfers per second per lane, in full duplex mode -- are rewarded with throughput of 1,536 gigabits per second (192 gigabytes per second), delivering performance that challenges all other interconnect technologies.
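The PEX8796 headline numbers follow directly from lane count and per-lane rate, counted in both directions. An illustrative check:

```python
# Arithmetic behind the PEX8796 figures quoted above:
# 96 lanes x 8 Gbps per lane, counted in both directions (full duplex).

lanes = 96
gbps_per_lane = 8     # PCIe Gen3 raw signalling rate per lane
directions = 2        # full duplex: both directions counted

total_gbps = lanes * gbps_per_lane * directions
total_gbytes = total_gbps / 8   # 8 bits per byte

print(total_gbps, total_gbytes)  # prints "1536 192.0"
```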

AMD Supercomputing Leadership Continues with Broader Developer Ecosystem, Latest Tech

AMD today announced its continued leadership in high performance computing (HPC) with 24 of the top 100 supercomputers in the 39th TOP500 list, announced today, while making significant strides in maturing its HPC ecosystem through the addition of several key components that enable end users to continue to benefit from best-in-class price/performance for today's demanding applications.

AMD has continued to work closely with its technology partners to significantly mature its HPC ecosystem, resulting in several new developments including LS-DYNA simulation software optimized for the AMD Opteron 6200 Series processors from Livermore Software Technology Corporation (LSTC), the addition of programming options for AMD graphics processor unit (GPU) technology from CAPS, and the announcement by Mellanox Technologies of its Connect-IB products that will deliver FDR 56 Gb/s InfiniBand speeds to AMD's solution portfolios.

NASA Scales SGI Pleiades InfiniBand Cluster to 25,000 Intel Xeon Processor Cores

SGI, the trusted leader in technical computing, announced today that with over 60 miles of InfiniBand cabling in place at the NASA Advanced Supercomputing (NAS) Division at NASA Ames Research Center at Moffett Field, Calif., a scientist was able to utilize 25,000 SGI ICE Intel Xeon processor cores on Pleiades to run a space weather simulation.

One particular area of study is magnetic reconnection, a physical process in highly conducting plasmas such as those that occur in the Earth's magnetosphere, in which the magnetic topology is rearranged and magnetic energy converted to kinetic or thermal energy. This field of research is critical, as these disturbances can disable wide scale power grids, affect satellite transmissions and disrupt airline communications.

SGI ICE Supercomputer Speeds to Benchmark Record

SGI (NASDAQ: SGI), the trusted leader in technical computing, announced today that its SGI ICE 8400 high performance computing (HPC) system with AMD Opteron 6200 processors achieved the top performance result in 64-, 32-, 16-, 8-, and 4-socket configurations on the SPECmpiL_2007 benchmark. SPEC MPI2007 is SPEC's “benchmark suite for evaluating MPI-parallel, floating point, compute intensive performance across a wide range of cluster and SMP hardware,” according to the SPEC website.

The SGI ICE 8400 platform with AMD processors is a completely open platform optimized for HPC workloads and runs an off-the-shelf Linux operating system for application compatibility. Although the ICE platform can comfortably support multi-petaflop-sized installations, its design also allows cost-effective solutions down to a half rack. Single- or dual-plane integrated InfiniBand can be cabled into four different topologies -- hypercube, enhanced hypercube, all-to-all, and fat-tree -- allowing flexible network customization for a variety of workloads.
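Of the topologies named above, the hypercube is easy to sketch: node IDs are d-bit numbers, and each node links to the d nodes whose IDs differ from its own in exactly one bit, so the longest route is only d hops. A minimal illustration (not SGI's actual fabric-management code):

```python
# Minimal sketch of a d-dimensional hypercube topology, one of the
# InfiniBand cabling options listed above. Node IDs are d-bit integers;
# neighbors differ in exactly one bit, so flipping each bit in turn
# enumerates a node's links. Illustrative only, not SGI code.

def hypercube_neighbors(node: int, dims: int) -> list[int]:
    """Return the IDs of a node's neighbors in a dims-dimensional hypercube."""
    return [node ^ (1 << bit) for bit in range(dims)]

# In a 3-D hypercube (8 nodes), node 0 links to nodes 1, 2 and 4.
print(hypercube_neighbors(0, 3))  # prints "[1, 2, 4]"
```

Each node has exactly `dims` links, so a 2^d-node cluster needs only d ports per node -- one reason the topology scales well for mid-sized clusters.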

Intel Announces InfiniBand Acquisition

Intel Corporation today announced that it has entered into a definitive agreement with QLogic to acquire the product lines of and certain assets related to its InfiniBand business. A significant number of the employees associated with this business are expected to accept offers to join Intel.

This acquisition is designed to enhance Intel's networking portfolio and provide scalable high-performance computing (HPC) fabric technology as well as support the company's vision of innovating on fabric architectures to achieve ExaFLOP/s performance by 2018. An ExaFLOP/s is a quintillion computer operations per second, a hundred times more than today's fastest supercomputers.
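The "hundred times" claim above is a simple order-of-magnitude comparison: an ExaFLOP/s is 10^18 operations per second, while the fastest systems at the time of the announcement (early 2012) ran at roughly 10 PetaFLOP/s. An illustrative check, with the 2012 figure being an approximation rather than a sourced number:

```python
# Scale check for the exascale claim above. An ExaFLOP/s is 10^18
# floating-point operations per second; the leading supercomputers of
# early 2012 ran at roughly 10 PetaFLOP/s (approximate, for illustration).

exaflops = 1e18
petaflops_2012_leader = 10e15

print(exaflops / petaflops_2012_leader)  # prints "100.0"
```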