News Posts matching #DGX A100


NVIDIA Launches UK's Most Powerful Supercomputer

NVIDIA today officially launched Cambridge-1, the United Kingdom's most powerful supercomputer, which will enable top scientists and healthcare experts to use the powerful combination of AI and simulation to accelerate the digital biology revolution and bolster the country's world-leading life sciences industry. Dedicated to advancing healthcare, Cambridge-1 represents a $100 million investment by NVIDIA. Its first projects with AstraZeneca, GSK, Guy's and St Thomas' NHS Foundation Trust, King's College London and Oxford Nanopore Technologies include developing a deeper understanding of brain diseases like dementia, using AI to design new drugs and improving the accuracy of finding disease-causing variations in human genomes.

Cambridge-1 brings together decades of NVIDIA's work in accelerated computing, AI and life sciences, where NVIDIA Clara and AI frameworks are optimized to take advantage of the entire system for large-scale research. An NVIDIA DGX SuperPOD supercomputing cluster, it ranks among the world's top 50 fastest computers and is powered by 100 percent renewable energy.

NVIDIA Announces New DGX SuperPOD, the First Cloud-Native, Multi-Tenant Supercomputer, Opening World of AI to Enterprise

NVIDIA today unveiled the world's first cloud-native, multi-tenant AI supercomputer—the next-generation NVIDIA DGX SuperPOD featuring NVIDIA BlueField-2 DPUs. Fortifying the DGX SuperPOD with BlueField-2 DPUs—data processing units that offload, accelerate and isolate users' data—provides customers with secure connections to their AI infrastructure.

The company also announced NVIDIA Base Command, which enables multiple users and IT teams to securely access, share and operate their DGX SuperPOD infrastructure. Base Command coordinates AI training and operations on DGX SuperPOD infrastructure to enable the work of teams of data scientists and developers located around the globe.

TOP500 Expands Exaflops Capacity Amidst Low Turnover

The 56th edition of the TOP500 saw the Japanese Fugaku supercomputer solidify its number one status in a list that reflects a flattening performance growth curve. Although two new systems managed to make it into the top 10, the full list recorded the smallest number of new entries since the project began in 1993.

The entry level to the list moved up to 1.32 petaflops on the High Performance Linpack (HPL) benchmark, a small increase from 1.23 petaflops recorded in the June 2020 rankings. In a similar vein, the aggregate performance of all 500 systems grew from 2.22 exaflops in June to just 2.43 exaflops on the latest list. Likewise, average concurrency per system barely increased at all, growing from 145,363 cores six months ago to 145,465 cores in the current list.
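The "flattening growth curve" the editors describe can be quantified directly from the figures quoted above (a quick arithmetic check on the quoted numbers, nothing more):

```python
# Sanity-check the TOP500 growth figures quoted in the text.
def growth_pct(new: float, old: float) -> float:
    """Percent change from old to new."""
    return (new / old - 1) * 100

print(f"Entry level:   +{growth_pct(1.32, 1.23):.1f}% (petaflops)")
print(f"Aggregate:     +{growth_pct(2.43, 2.22):.1f}% (exaflops)")
print(f"Average cores: +{growth_pct(145_465, 145_363):.2f}%")
```

All three deltas land in the single digits or below, consistent with the smallest-turnover framing of the 56th list.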

NVIDIA Announces the A100 80GB GPU for AI Supercomputing

NVIDIA today unveiled the NVIDIA A100 80 GB GPU—the latest innovation powering the NVIDIA HGX AI supercomputing platform—with twice the memory of its predecessor, providing researchers and engineers unprecedented speed and performance to unlock the next wave of AI and scientific breakthroughs. The new A100 with HBM2E technology doubles the A100 40 GB GPU's high-bandwidth memory to 80 GB and delivers over 2 terabytes per second of memory bandwidth. This allows data to be fed quickly to A100, the world's fastest data center GPU, enabling researchers to accelerate their applications even faster and take on even larger models and datasets.
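The "over 2 terabytes per second" figure follows from the standard bandwidth formula: bus width times per-pin data rate, divided by 8 bits per byte. The sketch below assumes five active 1024-bit HBM2E stacks at roughly 3.2 Gbps per pin; the stack count and pin rate are illustrative assumptions, not official NVIDIA specifications.

```python
# Estimate peak memory bandwidth from bus width and per-pin data rate.
def mem_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: bits per transfer * transfers per second / 8."""
    return bus_width_bits * pin_rate_gbps / 8

# Assumed: five active 1024-bit HBM2E stacks at ~3.2 Gbps per pin.
bw = mem_bandwidth_gbs(5 * 1024, 3.2)
print(f"{bw:.0f} GB/s")  # 2048 GB/s, consistent with "over 2 TB/s"
```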

"Achieving state-of-the-art results in HPC and AI research requires building the biggest models, but these demand more memory capacity and bandwidth than ever before," said Bryan Catanzaro, vice president of applied deep learning research at NVIDIA. "The A100 80 GB GPU provides double the memory of its predecessor, which was introduced just six months ago, and breaks the 2 TB per second barrier, enabling researchers to tackle the world's most important scientific and big data challenges."

NVIDIA Building UK's Most Powerful Supercomputer, Dedicated to AI Research in Healthcare

NVIDIA today announced that it is building the United Kingdom's most powerful supercomputer, which it will make available to U.K. healthcare researchers using AI to solve pressing medical challenges, including those presented by COVID-19.

Expected to come online by year end, the "Cambridge-1" supercomputer will be an NVIDIA DGX SuperPOD system capable of delivering more than 400 petaflops of AI performance and 8 petaflops of Linpack performance, which would rank it No. 29 on the latest TOP500 list of the world's most powerful supercomputers. It will also rank among the world's top 3 most energy-efficient supercomputers on the current Green500 list.

AMD Reports Second Quarter 2020 Financial Results

AMD (NASDAQ:AMD) today announced revenue for the second quarter of 2020 of $1.93 billion, operating income of $173 million, net income of $157 million and diluted earnings per share of $0.13. On a non-GAAP basis, operating income was $233 million, net income was $216 million and diluted earnings per share was $0.18. "We delivered strong second quarter results, led by record notebook and server processor sales as Ryzen and EPYC revenue more than doubled from a year ago," said Dr. Lisa Su, AMD president and CEO. "Despite some macroeconomic uncertainty, we are raising our full-year revenue outlook as we enter our next phase of growth driven by the acceleration of our business in multiple markets."

NVIDIA to Build Fastest AI Supercomputer in Academia

The University of Florida and NVIDIA Tuesday unveiled a plan to build the world's fastest AI supercomputer in academia, delivering 700 petaflops of AI performance. The effort is anchored by a $50 million gift: $25 million from alumnus and NVIDIA co-founder Chris Malachowsky and $25 million in hardware, software, training and services from NVIDIA.

"We've created a replicable, powerful model of public-private cooperation for everyone's benefit," said Malachowsky, who serves as an NVIDIA Fellow, in an online event featuring leaders from both UF and NVIDIA. UF will invest an additional $20 million to create an AI-centric supercomputing and data center.

AMD EPYC Processors Ecosystem Continues to Grow with Integration into New NVIDIA DGX A100

AMD today announced that the NVIDIA DGX A100, the third generation of the world's most advanced AI system, is the latest high-performance computing system featuring 2nd Gen AMD EPYC processors. Delivering 5 petaflops of AI performance, the elastic architecture of the NVIDIA DGX A100 enables enterprises to accelerate diverse AI workloads such as data analytics, training, and inference.

NVIDIA DGX A100 leverages the high-performance capabilities of two AMD EPYC 7742 processors running at speeds of up to 3.4 GHz: 128 cores, DDR4-3200 memory and PCIe 4 support. The 2nd Gen AMD EPYC processor is the first and only current x86-architecture server processor that supports PCIe 4, providing leadership high-bandwidth I/O that is critical for high-performance computing and for connections between the CPU and other devices such as GPUs.
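To put the PCIe 4 claim in perspective, peak one-directional link bandwidth can be estimated from the lane count, transfer rate, and the 128b/130b line encoding used since PCIe 3.0. This is a hedged sketch of theoretical peaks, not measured throughput:

```python
# Estimate peak one-directional PCIe link bandwidth.
def pcie_bandwidth_gbs(lanes: int, gt_per_s: float) -> float:
    """GB/s: lanes * GT/s * 128b/130b encoding efficiency / 8 bits per byte."""
    return lanes * gt_per_s * 128 / 130 / 8

gen3 = pcie_bandwidth_gbs(16, 8.0)    # PCIe 3.0 x16
gen4 = pcie_bandwidth_gbs(16, 16.0)   # PCIe 4.0 x16
print(f"x16 gen3: {gen3:.1f} GB/s, gen4: {gen4:.1f} GB/s")
```

Doubling the per-lane transfer rate from 8 to 16 GT/s doubles the CPU-to-GPU link bandwidth, which is the practical benefit the paragraph above describes.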

NVIDIA Announces Financial Results for First Quarter Fiscal 2021

NVIDIA today reported revenue for the first quarter ended April 26, 2020, of $3.08 billion, up 39 percent from $2.22 billion a year earlier, and down 1 percent from $3.11 billion in the previous quarter. GAAP earnings per diluted share for the quarter were $1.47, up 130 percent from $0.64 a year ago, and down 4 percent from $1.53 in the previous quarter. Non-GAAP earnings per diluted share were $1.80, up 105 percent from $0.88 a year earlier, and down 5 percent from $1.89 in the previous quarter.
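The year-over-year percentages quoted above follow directly from the dollar figures (a quick arithmetic check only):

```python
# Verify the year-over-year changes quoted in the earnings summary.
def pct_change(new: float, old: float) -> float:
    """Percent change from old to new."""
    return (new / old - 1) * 100

print(f"Revenue YoY:  +{pct_change(3.08, 2.22):.0f}%")  # $3.08B vs $2.22B
print(f"GAAP EPS YoY: +{pct_change(1.47, 0.64):.0f}%")  # $1.47 vs $0.64
```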

NVIDIA completed its acquisition of Mellanox Technologies Ltd. on April 27, 2020, for a transaction value of $7 billion. It also transitioned its GPU Technology Conference to an all-digital format, drawing more than 55,000 registered participants, while NVIDIA founder and CEO Jensen Huang's keynote videos were viewed 3.8 million times in their first three days.

NVIDIA RTX 3080 Ti and GA102 "Ampere" Specs, Other Juicy Bits Revealed

PC hardware-focused YouTube channel Moore's Law is Dead published a juicy tech-spec reveal of NVIDIA's next-generation "Ampere" based flagship consumer graphics card, the GeForce RTX 3080 Ti, citing correspondence with sources within NVIDIA. The report talks of big changes to NVIDIA's Founders Edition (reference) board design, as well as what's on the silicon. To begin with, the RTX 3080 Ti reference-design card features a triple-fan cooling solution, unlike the RTX 20-series. This cooler is reportedly quieter than the RTX 2080 Ti FE cooling solution. The card pulls power from a pair of 8-pin PCIe power connectors. Display outputs include three DisplayPort connectors, plus one each of HDMI and VirtualLink USB-C. The source confirms that "Ampere" will implement a PCI-Express gen 4.0 x16 host interface.

With "Ampere," NVIDIA is developing three tiers of high-end GPUs: the "GA102" leading the pack and succeeding the "TU102," the "GA104" holding the upper-performance segment and succeeding today's "TU104," and a new chip between the two, codenamed "GA103," with no predecessor in the current generation. The "GA102" reportedly features 5,376 "Ampere" CUDA cores (up to 10% higher IPC than "Turing"). The silicon also taps into the rumored 7 nm-class fabrication node to dial up GPU clock speeds well above 2.20 GHz, even for the "GA102." Smaller chips in the series can boost beyond 2.50 GHz, according to the report. Even with the "GA102" being slightly cut down for the RTX 3080 Ti, the silicon could end up with FP32 compute performance in excess of 21 TFLOPs. The card uses faster 18 Gbps GDDR6 memory, ending up with 863 GB/s of memory bandwidth, 40% higher than that of the RTX 2080 Ti (if the memory bus width ends up 384-bit). Below are screengrabs from the Moore's Law is Dead video presentation, not NVIDIA slides.
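The bandwidth and percentage figures in the report are straightforward arithmetic: bus width times per-pin data rate, divided by 8 bits per byte. A minimal sketch, using the rumored 384-bit/18 Gbps configuration and the RTX 2080 Ti's known 352-bit/14 Gbps setup for comparison (the exact product, 384 × 18 / 8, is 864 GB/s, essentially the report's 863 GB/s figure):

```python
# Check the rumored GA102 bandwidth figures against the bandwidth formula.
def mem_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: bus width * per-pin data rate / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

ga102_bw = mem_bandwidth_gbs(384, 18.0)      # rumored: 384-bit bus, 18 Gbps GDDR6
rtx2080ti_bw = mem_bandwidth_gbs(352, 14.0)  # known: 352-bit bus, 14 Gbps GDDR6
uplift = (ga102_bw / rtx2080ti_bw - 1) * 100
print(f"{ga102_bw:.0f} GB/s, +{uplift:.0f}% vs RTX 2080 Ti")
```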

NVIDIA DGX A100 is its "Ampere" Based Deep-learning Powerhouse

NVIDIA will give its DGX line of pre-built deep-learning research workstations its next major update in the form of the DGX A100. This system will likely pack a number of the company's upcoming Tesla A100 scalar compute accelerators based on its next-generation "Ampere" architecture and "GA100" silicon. The A100 came to light through fresh trademark applications by the company. As for specs and numbers, we don't know yet. The "Volta" based DGX-2 has up to sixteen "GV100" based Tesla boards adding up to 81,920 CUDA cores and 512 GB of HBM2 memory. One can expect NVIDIA to beat this count. The leading "Ampere" part could be HPC-focused, featuring large CUDA and Tensor core counts, alongside exotic memory such as HBM2E. We should learn more about it at the upcoming GTC 2020 online event.
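The DGX-2 totals quoted above imply the per-board figures of its "GV100" Tesla accelerators (simple division, shown here as a sanity check):

```python
# Derive per-board figures from the DGX-2 system totals quoted in the text.
boards = 16
total_cuda_cores = 81_920
total_hbm2_gb = 512

print(total_cuda_cores // boards, "CUDA cores per GV100 board")  # 5120
print(total_hbm2_gb // boards, "GB HBM2 per board")              # 32
```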
Copyright © 2004-2021 www.techpowerup.com. All rights reserved.
All trademarks used are properties of their respective owners.