News Posts matching #EPYC


AMD Announces Ambitious Goal to Increase Energy Efficiency of Processors Running AI Training and High Performance Computing Applications 30x by 2025

AMD today announced a goal to deliver a 30x increase in energy efficiency for AMD EPYC CPUs and AMD Instinct accelerators in Artificial Intelligence (AI) training and High Performance Computing (HPC) applications running on accelerated compute nodes by 2025. Accomplishing this ambitious goal will require AMD to increase the energy efficiency of a compute node at a rate that is more than 2.5x faster than the aggregate industry-wide improvement made during the last five years.

Accelerated compute nodes are the most powerful and advanced computing systems in the world used for scientific research and large-scale supercomputer simulations. They provide the computing capability used by scientists to achieve breakthroughs across many fields including material sciences, climate predictions, genomics, drug discovery and alternative energy. Accelerated nodes are also integral for training AI neural networks that are currently used for activities including speech recognition, language translation and expert recommendation systems, with similar promising uses over the coming decade. The 30x goal would save billions of kilowatt hours of electricity in 2025, reducing the power required for these systems to complete a single calculation by 97% over five years.
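
As a quick sanity check of that 97% figure, here is a minimal back-of-the-envelope calculation; the baseline energy value is an arbitrary placeholder, not an AMD number:

```python
# Back-of-the-envelope check: a 30x gain in energy efficiency means each
# calculation needs roughly 1/30th of the energy it does at the baseline.
baseline_energy = 1.0      # arbitrary placeholder units per calculation (2020 baseline)
efficiency_gain = 30       # AMD's stated 2025 goal

energy_2025 = baseline_energy / efficiency_gain
reduction = 1 - energy_2025 / baseline_energy

print(f"Energy per calculation in 2025: {energy_2025:.3f} units")
print(f"Reduction vs. baseline: {reduction:.1%}")   # ~96.7%, rounded to 97% in the announcement
```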

AMD Zen 4 AM5 & SP5 CPU Coolers Spotted

Chinese cooler manufacturer Cool Server has recently listed several upcoming coolers for the AMD Zen 4 AM5 and SP5 sockets. The manufacturer has listed five AM5 coolers and four SP5 coolers, all targeted at the enterprise sector. The lineup includes several passive coolers that rely on case airflow, while the others feature high-performance fans that can get quite noisy. The AM5 socket will be introduced with the next-generation Zen 4 Ryzen processors, while the SP5 (LGA6096) socket has been prepared for the Zen 4 EPYC processors. The complete list of coolers can be found below.

Intel Xeon "Sapphire Rapids" Processor With 20 Cores Tested

Intel is slowly preparing to launch its 4th generation of Xeon Scalable processors, marking the arrival of its 10 nm designs in the server market. Codenamed Sapphire Rapids, these processors are expected to bring much-needed IPC and platform improvements so Intel can keep up with AMD's EPYC processors. Today, we are getting some first performance results, as well as some information about a specific 20-core, 40-thread Intel Xeon Sapphire Rapids SKU. The latest Xeon processor showed up in a leaked Geekbench 4 submission, which reveals further details about the chip.

Featuring 20 cores and 40 threads, the CPU has a base clock speed of 1.5 GHz. It carries as much as 40 MB of L2 cache and 75 MB of L3 cache spread across the die. The system was tested on an Intel reference platform called VulcanCity, with this configuration carrying 32 GB of DDR5 memory. The reported benchmark results are not very impressive. These numbers are easily beaten by the AMD Ryzen 9 5950X; however, this is only an engineering sample with a low clock speed, and Geekbench may simply not be optimized for this processor. You can check out some of the performance numbers below, and see the submitted results here.
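
For context, a quick calculation of the per-core cache split implied by those leaked figures (a rough illustration, not an official spec breakdown):

```python
# Per-core cache split implied by the leaked 20-core Sapphire Rapids sample.
cores = 20
l2_total_mb = 40
l3_total_mb = 75

print(f"L2 per core: {l2_total_mb / cores:.2f} MB")   # 2.00 MB per core
print(f"L3 per core: {l3_total_mb / cores:.2f} MB")   # 3.75 MB slice per core
```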

AMD MI200 "Aldebaran" Memory Size of 128GB Per Package Confirmed

The 128 GB per package memory size of AMD's upcoming Instinct MI200 HPC accelerator has been confirmed in a document released by the Pawsey Supercomputing Centre, a Perth, Australia-based facility that is popular with mineral prospecting companies located there. The centre is currently working on Setonix, a 50-petaFLOP supercomputer being put together by Hewlett Packard Enterprise, which combines over 750 next-generation "Aldebaran" GPUs (referenced only as "AMD MI-Next GPUs") and over 200,000 AMD EPYC "Milan" processor cores (the actual processor package count would be lower, and depends on the various core configs the builder is using).

The Pawsey document mentions 128 GB as the per-GPU memory, which corresponds with the rumored per-package memory of "Aldebaran." As recently illustrated by Locuza_, an enthusiast who specializes in annotating logic silicon dies, "Aldebaran" is a multi-chip module of two logic dies and eight HBM2E stacks. Each of the two logic dies, or chiplets, has 8,192 CDNA2 stream processors, adding up to 16,384 on the package; and each die is wired to four HBM2E stacks over a 4096-bit memory bus. These are 128 Gbit (16 GB) stacks, so we have 64 GB of memory per logic die and 128 GB per package. Find other drool-worthy specs of the Pawsey Setonix in the screengrab below.
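
A short sketch of that memory math, using only the rumored figures quoted above (the 1024-bit per-stack interface width is the standard HBM2E figure, added here for the bus calculation):

```python
# Memory math for the rumored "Aldebaran" package layout.
dies_per_package  = 2       # two logic dies (chiplets)
stacks_per_die    = 4       # HBM2E stacks wired to each die
stack_size_gbit   = 128     # 128 Gbit per stack
bus_per_stack_bit = 1024    # standard HBM2E stack interface width

stack_size_gb  = stack_size_gbit / 8                     # 16 GB per stack
memory_per_die = stacks_per_die * stack_size_gb          # 64 GB per logic die
memory_total   = dies_per_package * memory_per_die       # 128 GB per package
bus_per_die    = stacks_per_die * bus_per_stack_bit      # 4096-bit per die

print(f"{memory_per_die:.0f} GB per die over a {bus_per_die}-bit bus, "
      f"{memory_total:.0f} GB per package")
```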

AMD Leads High Performance Computing Towards Exascale and Beyond

At this year's International Supercomputing 2021 digital event, AMD (NASDAQ: AMD) is showcasing momentum for its AMD EPYC processors and AMD Instinct accelerators across the High Performance Computing (HPC) industry. The company also outlined updates to the ROCm open software platform and introduced the AMD Instinct Education and Research (AIER) initiative. The latest Top500 list showcased the continued growth of AMD EPYC processors for HPC systems. AMD EPYC processors power nearly 5x more systems compared to the June 2020 list, and more than double the number of systems compared to November 2020. As well, AMD EPYC processors power half of the 58 new entries on the June 2021 list.

"High performance computing is critical to addressing the world's biggest and most important challenges," said Forrest Norrod, senior vice president and general manager, data center and embedded systems group, AMD. "With our AMD EPYC processor family and Instinct accelerators, AMD continues to be the partner of choice for HPC. We are committed to enabling the performance and capabilities needed to advance scientific discoveries, break the exascale barrier, and continue driving innovation."

Google Selects 3rd Gen AMD EPYC Processors to Launch First Tau VM Instance

AMD and Google Cloud today announced T2D, the first instance in the new family of Tau Virtual Machines (VMs) powered by 3rd Gen AMD EPYC processors. According to Google Cloud, the T2D instance offers 56% higher absolute performance and more than 40% higher price-performance for scale-out workloads. The Tau VM family provides customers with a leading combination of performance, price, and easy integration. The T2D instances, using the leadership performance of 3rd Gen AMD EPYC processors, excel at workloads including web servers, containerized micro-services, data logging and processing, large-scale Java applications, and more.

"At Google Cloud, our customers' compute needs are evolving," said Thomas Kurian, CEO of Google Cloud. "By collaborating with AMD, Google Cloud customers can now leverage amazing performance for scale-out applications, with great price-performance, all without compromising x86 compatibility." "We designed 3rd Gen AMD EPYC processors to meet the growing demand from cloud and enterprise customers for high-performance, cost-effective solutions with optimal TCO," said AMD President and CEO Dr. Lisa Su. "We work closely with Google Cloud and are proud they selected AMD to exclusively power the new Tau VM T2D instance which provides customers with powerful new options to run their most demanding scale-out workloads."

AMD "Milan-X" Processor Could Use Stacked Dies with X3D Packaging Technology

AMD is in a constant process of processor development, and there are always new technologies on the horizon. Back in March 2020, the company revealed that it is working on a new X3D packaging technology that integrates both 2.5D and 3D approaches to packing semiconductor dies together as tightly as possible. Today, we are finally getting some more information about the X3D technology, as we now have the first codename of a processor featuring this advanced packaging. According to David Schor, AMD is working on a CPU that uses X3D technology with stacked dies, and it is called Milan-X.

The Milan-X CPU is AMD's upcoming product designed for data center usage. The rumors suggest that the CPU is designed for bandwidth-heavy workloads and presumably offers a lot of compute power. According to ExecutableFix, the CPU uses a Genesis-IO die to power the connectivity, which is the I/O die from EPYC Zen 3 processors. While this solution is in the works, we don't know the exact launch date of the processor. However, we could hear more about it in AMD's virtual keynote at Computex 2021. For now, take this rumor with a grain of salt.
AMD X3D Packaging Technology

Two New Security Vulnerabilities to Affect AMD EPYC Processors

AMD processors have been very good in the field of security, on par with their main competitor, Intel. However, from time to time, researchers find new ways of exploiting a security layer and making it vulnerable to all kinds of attacks. Today, we have information that two new research papers are being published at this year's 15th IEEE Workshop on Offensive Technologies (WOOT'21), happening on May 27th. Both papers concern AMD processor security; specifically, they show how AMD's Secure Encrypted Virtualization (SEV) is compromised. Researchers from the Technical University of Munich and the University of Lübeck are going to present their papers on CVE-2020-12967 and CVE-2021-26311, respectively.

While we will not know the exact details of these vulnerabilities until the papers are presented, we know exactly which processors are affected. As SEV is an enterprise feature, AMD's EPYC lineup is the main target of these two new exploits. AMD says that the affected processors are all of the EPYC embedded CPUs and the first, second, and third generations of regular EPYC processors. For third-generation EPYC CPUs, AMD has provided mitigation in SEV-SNP, which can be enabled. For prior generations, the solution is to follow best security practices and try to avoid an exploit.
AMD EPYC Processor

AMD Embedded Roadmap Lists Zen 4 EPYC CPU with 64+ Cores

The AMD embedded roadmap for 2020 - 2023 was recently leaked and reveals some interesting information about AMD's upcoming Zen 4 based EPYC server processors. The current-generation 7003 series of Zen 3 EPYC processors offers up to 64 cores and 128 threads with a TDP range of 120 W - 280 W. The next-generation EPYC 7004 "Genoa" Zen 4 processors will push the maximum core count to 96 cores and 192 threads with a maximum TDP of 320 W. The Zen 4 based EPYC processors will move from the current 8-chiplet design to a 12-chiplet design, which allows for the core-count increase but also enlarges the physical package and requires a new SP5 socket. The new EPYC 7004 series processors will also support the latest features such as 12-channel DDR5-5200 ECC memory and PCIe Gen 5.
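
For a rough sense of what those leaked figures imply, here is a small back-of-the-envelope calculation; the peak-bandwidth number assumes a standard 64-bit DDR5 channel data path and is an illustration, not a confirmed spec:

```python
# Core count and peak memory bandwidth implied by the leaked "Genoa" roadmap figures.
ccds           = 12        # chiplets per package, per the leak
cores_per_ccd  = 8
ddr5_channels  = 12
ddr5_mt_s      = 5200      # DDR5-5200, per the leak
bytes_per_beat = 8         # assumes a 64-bit data path per channel

cores   = ccds * cores_per_ccd              # 96 cores
threads = cores * 2                         # 192 threads with SMT
peak_bw_gb_s = ddr5_channels * ddr5_mt_s * bytes_per_beat / 1000

print(f"{cores} cores / {threads} threads")
print(f"Peak DDR5 bandwidth: ~{peak_bw_gb_s:.0f} GB/s")   # ~499 GB/s
```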

AMD and GlobalFoundries Wafer Supply Agreement Now Non-Exclusive, Paves Way for 7nm sIOD

AMD, in a filing with the U.S. Securities and Exchange Commission (SEC), revealed that its wafer supply agreement with GlobalFoundries has been amended. Under the new terms, AMD places orders for wafers from GlobalFoundries up to 2024, with purchase targets set for each year leading up to 2024. Beyond meeting these targets, AMD is free from all other exclusivity commitments. The agreement was previously amended in January 2019, setting annual purchase targets for 2019, 2020, and 2021, while beginning a de-coupling between AMD and GlobalFoundries. This enabled the company to source 7 nm (or smaller) chips, such as CCDs and GPUs, from other foundries, such as TSMC, while keeping GlobalFoundries exclusive for 12 nm (or larger) nodes.

The updated wafer supply agreement unlocks many possibilities for AMD. For starters, it can finally build a next-generation sIOD (server I/O die) on a more efficient node than GlobalFoundries 12LP, such as TSMC 7 nm. This transition to 7 nm will be needed as the next-gen "Genoa" EPYC processor could feature future I/O standards such as DDR5 memory and PCI-Express Gen 5, and the switching fabric for these could be too power-hungry on 12 nm. The "Zen 4" CPU core complex dies (CCDs) of "Genoa" are expected to be built on TSMC 5 nm.

AMD Clocks Highest Quarterly Growth in Server CPU Sales Against Intel in 15 Years

AMD has registered the steepest single-quarter growth in server CPU sales in 15 years, for Q1-2021, according to Mercury Research data accessed by Tom's Hardware. "While we don't often discuss average selling prices, we note that this quarter saw unusually strong price moves for AMD -- as AMD shipped fewer low-end parts and more high-end parts, as well as shipping many more server parts, the company's average selling price increased significantly," said Dean McCarron of Mercury Research. Unfortunately, the growth in its EPYC server CPU sales also coincides with a drop in notebook CPU sales. It's also interesting to note that AMD's Q1-2021 performance with server CPUs coincides with a 20% drop in revenue for Intel's Data Center Group, the business unit responsible for sales of Xeon server processors.

TSMC Employs AMD EPYC CPUs for Mission-Critical Manufacturing

Taiwan Semiconductor Manufacturing Company, the maker of various kinds of silicon products, is the manufacturer of AMD's EPYC processors. However, have you ever wondered which CPUs actually power TSMC? The answer to that question is quite simple. Today, we have come to know that TSMC is using AMD EPYC processors to power its manufacturing infrastructure and push thousands of wafers per month through its fabs. AMD has published TSMC's case study, which points out that total cost of ownership has been the main challenge for the Taiwanese company. By using AMD EPYC 7702P and 7F72 CPUs, TSMC addresses the need for both reliable and high-performing server infrastructure to power its manufacturing efforts. For research and development purposes, TSMC chose the 7F72 with 24 cores and a high clock speed of 3.2 GHz, which is ideal for the company's needs and purposes.

For more details about TSMC's choices and solutions, read the case study here.

AMD Reports First Quarter 2021 Financial Results

AMD (NASDAQ:AMD) today announced revenue for the first quarter of 2021 of $3.45 billion, operating income of $662 million, net income of $555 million and diluted earnings per share of $0.45. On a non-GAAP* basis, operating income was $762 million, net income was $642 million and diluted earnings per share was $0.52.

"Our business continued to accelerate in the first quarter driven by the best product portfolio in our history, strong execution and robust market demand," said Dr. Lisa Su, AMD president and CEO. "We had outstanding year-over-year revenue growth across all of our businesses and data center revenue more than doubled. Our increased full-year guidance highlights the strong growth we expect across our business based on increasing adoption of our high-performance computing products and expanding customer relationships."

TYAN Now Offers AMD EPYC 7003 Processor Powered Systems

TYAN, an industry-leading server platform design manufacturer and a MiTAC Computing Technology Corporation subsidiary, today introduced AMD EPYC 7003 Series Processor-based server platforms featuring efficiency and performance enhancements in hardware, security, and memory density for the modern data center.

"Big data has become capital today. Large amounts of data and faster answers drive better decisions. TYAN's industry-leading server platforms powered by 3rd Gen AMD EPYC processors enable businesses to make more accurate decisions with higher precision," said Danny Hsu, Vice President of MiTAC Computing Technology Corporation's Server Infrastructure BU. "Moving the bar once more for workload performance, EPYC 7003 Series processors provide the performance needed in the heart of the enterprise to help IT professionals drive faster time to results," said Ram Peddibhotla, corporate vice president, EPYC product management, AMD. "Time is the new metric for efficiency and EPYC 7003 Series processors are the perfect choice for the most diverse workloads, helping provide more and better data to drive better business outcomes."

AMD Announces 3rd Generation EPYC 7003 Enterprise Processors

AMD today announced its 3rd generation EPYC (7003 series) enterprise processors, codenamed "Milan." These processors combine up to 64 of the company's latest "Zen 3" CPU cores with an updated I/O controller die, and promise significant performance uplifts and new security capabilities over the previous-generation EPYC 7002 "Rome." The "Zen 3" CPU cores, AMD claims, introduce an IPC uplift of up to 19% over the previous generation, which, combined with generational increases in CPU clock speeds, brings about a significant single-threaded performance increase. The processor also comes with large multi-threaded performance gains thanks to a redesigned CCD.

The new "Zen 3" CPU complex die (CCD) comes with a radical redesign in the arrangement of CPU cores, putting all eight CPU cores of the CCD in a single CCX, sharing a large 32 MB L3 cache. This the total amount of L3 cache addressable by a CPU core, and significantly reduces latencies for multi-threaded workloads. The "Milan" multi-chip module has up to eight such CCDs talking to a centralized server I/O controller die (sIOD) over the Infinity Fabric interconnect.

AMD to Launch 3rd Gen EPYC Processors on March 15

AMD today announced that its 3rd generation EPYC enterprise processors will launch on March 15, 2021. Codenamed "Milan," these processors are expected to leverage the company's latest "Zen 3" CPU microarchitecture to significantly increase IPC (single-threaded performance), and retain compatibility with the SP3 socket. AMD set up a micro-site where it will stream the 3rd Gen EPYC processor launch event on March 15, at 11 ET (16:00 UTC). "Milan" is rumored to be AMD's final processor architecture on this socket, before transitioning to SP5 and the next-gen processor codenamed "Genoa," sometime in 2022. "Genoa" marks a switch to next-gen I/O such as DDR5 memory and PCIe Gen 5.0, along with an increase in CPU core counts.

GIGABYTE Releases 2U Server: G262-ZR0 with NVIDIA HGX A100 4-GPU

GIGABYTE Technology, (TWSE: 2376), an industry leader in high-performance servers and workstations, today announced the G262-ZR0 for HPC, AI, and data analytics. Designed to support the highest-level of performance in GPU computing, the G262-ZR0 incorporates fast PCIe 4.0 throughput in addition to NVIDIA HGX technologies and NVIDIA NVLink to provide industry leading bandwidth performance.

Intel Xeon "Sapphire Rapids" LGA4677-X Processor Sample Pictured

Here are some of the first pictures of the humongous Intel Xeon "Sapphire Rapids-SP" processor in the flesh. Pictured by YuuKi-AnS on the Chinese micro-blogging site bilibili, the engineering sample looks visibly larger than an AMD EPYC. Bound for 2021, this processor will leverage the latest generation of Intel's 10 nm Enhanced SuperFin silicon fabrication node and the latest I/O, including 8-channel DDR5 memory, a large number of PCI-Express Gen 5.0 lanes, and Compute Express Link (CXL) interconnect. Perhaps the most interesting bit of information from YuuKi-AnS is the mention of an on-package high-bandwidth memory solution. The processors will introduce an IPC uplift over "Ice Lake-SP" processors, as they use the newer "Willow Cove" CPU cores.

AMD EPYC "Milan" Processors Pricing and Specifications Leak

AMD is readying its upcoming EPYC processors based on the refined Zen 3 core. Codenamed "Milan," this processor generation is supposed to bring the same number of PCIe lanes and quite possibly similar memory support. The pricing, along with the specifications, has leaked, and we now have information on every model ranging from eight cores to the whopping 64 cores. Thanks to @momomo_us on Twitter, we got ahold of Canadian pricing leaked on the Dell Canada website. Starting from the cheapest model listed (many SKUs are missing), you would be looking at the EPYC 7543 processor with 32 cores running at 2.8 GHz, 256 MB of L3 cache, and a TDP of 225 W. Such a processor will set you back as much as 2,579.69 CAD, which is cheaper than the previous-generation EPYC 7542 that costs 3,214.70 CAD.

Whether this represents more aggressive pricing to position itself better against the competition, we do not know. The same strategy is applied to the 64-core AMD EPYC 7763 processor (2.45 GHz, 256 MB L3 cache, 280 W TDP), as the new Zen 3 based design is priced at 8,069.69 CAD, which is cheaper than the 8,180.10 CAD price tag of the AMD EPYC 7762 CPU.

AMD 32-Core EPYC "Milan" Zen 3 CPU Fights Dual Xeon 28-Core Processors

AMD is expected to announce its upcoming EPYC lineup of processors for server applications based on the new Zen 3 architecture. Codenamed "Milan," it continues AMD's use of Italian cities as codenames for its processors. Being based on the new Zen 3 core, Milan is expected to bring big improvements over the existing EPYC "Rome" design. Built on a refined 7 nm+ process, the new EPYC Milan CPUs are going to offer higher frequencies paired with high core counts. If you are wondering what Zen 3 looks like in a server configuration, look no further, because the upcoming AMD EPYC 7543 32-core processor has been benchmarked in Geekbench 4.

The new EPYC 7543 CPU is a 32-core, 64-thread design with a base clock of 2.8 GHz and a boost frequency of 3.7 GHz. The caches on this CPU are big: there is a total of 2048 KB of L1 cache (32 times 32 KB for instructions and 32 times 32 KB for data), 16 MB of L2 cache, and as much as 256 MB of L3. In the GB4 test, the single-core run produced 6065 points, while the multi-core run resulted in 111379 points. If you are wondering how that fares against something like the top-end Intel Xeon Platinum 8280 "Cascade Lake" 28-core CPU, the new EPYC Milan 7543 CPU is capable of fighting two of them at the same time. In the single-core test, the Intel Xeon configuration scores 5048 points, showing that the new Milan CPU has 20% higher single-core performance, while the multi-core score of the dual-Xeon setup is 117171 points, which is 5% faster than the AMD CPU. The reason for the higher multi-core score is the sheer number of cores that a dual-CPU configuration offers (32 cores vs 56 cores).
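
The quoted percentages can be reproduced directly from the raw Geekbench 4 scores; a minimal sketch:

```python
# Reproducing the quoted percentages from the raw Geekbench 4 scores.
epyc_st, epyc_mt = 6065, 111379      # single EPYC 7543, 32 cores / 64 threads
xeon_st, xeon_mt = 5048, 117171      # dual Xeon Platinum 8280, 56 cores total

st_lead = epyc_st / xeon_st - 1      # Milan single-core advantage
mt_gap  = xeon_mt / epyc_mt - 1      # dual-Xeon multi-core advantage

print(f"Single-core: EPYC 7543 leads by {st_lead:.0%}")   # ~20%
print(f"Multi-core: dual Xeon leads by {mt_gap:.0%}")     # ~5%
```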

AMD's Radeon RX 6700 Series Reportedly Launches in March

AMD may be finding itself riding a new wave of success caused by its accomplishments with the Zen architecture, which in turn bolstered the available R&D budget for its graphics division and thus turned the entire AMD business around. However, success comes at a cost, particularly when you don't own your own fabs and have to vie for capacity at TSMC against its cadre of other clients. I imagine that currently, AMD's HQ has a direct system of levers and pulleys that manages its chip allocation with TSMC: pull this lever and increase the number of 7 nm SoCs for the next-generation consoles; another controls the Ryzen 5000 series; and so on. As we know, demand for TSMC's 7 nm production capacity is through the roof, and AMD is finding it hard to ship enough of its Zen 3 CPUs and RDNA2 graphics cards. The reported delay for the AMD RX 6700 series may well be a result of AMD overextending its product portfolio on the 7 nm process with foundry partner TSMC.

A report coming from Cowcotland now points towards a March 2021 release for AMD's high-performance RX 6700 series, which was initially poised to see the light of day in the current month of January. The RX 6700 series will ship with AMD's Navi 22 chip, which is estimated to be half of the full Navi 21 chip (which puts it at a top configuration of 2,560 Stream Processors over 40 CUs). These cards are expected to ship with 12 GB of GDDR6 memory over a 192-bit memory bus. However, it seems that AMD may have delayed the launch of these graphics cards. One can imagine that this move happens so as not to further split the limited TSMC wafer supply between yet another chip, one which will undoubtedly have lower margins than the company's Zen 3 CPUs, EPYC CPUs, RX 6800 and RX 6900, and which doesn't have the same level of impact on its business relations as console-bound SoCs. Besides, it likely serves AMD best to put out enough of its currently-launched products to sate demand (RX 6000 series, Ryzen 5000, cough cough) than to launch yet another product with availability likely too limited relative to the existing demand.

128-Core 2P AMD EPYC "Milan" System Benchmarked in Cinebench R23, Outputs Insane Numbers

AMD is preparing to launch its next generation of EPYC processors, codenamed Milan. Being based on the company's latest Zen 3 cores, the new EPYC generation is going to deliver a massive IPC boost spread across many cores. Models are supposed to range anywhere from 16 to 64 cores, to satisfy all of the demanding server workloads. Today, thanks to a leak from ExecutableFix on Twitter, we have the first benchmark of a system containing two of the 64-core, 128-thread Zen 3 based EPYC Milan processors. Running in a 2P configuration, the processors achieved a maximum boost clock of 3.7 GHz, which is very high for a server CPU with that many cores.

The system produced an insane Cinebench R23 score of 87,878 points. With that many cores, the result is no surprise; however, we need to look at how it fares against the competition. For comparison, the Intel Xeon Platinum 8280L processor, with its 28 cores and 56 threads that boost to 4.0 GHz, can score up to 49,876 points. Of course, the scaling to that many cores may not work very well in this particular application, so we have to wait and see how it performs in other workloads before jumping to any conclusions. The launch date for these processors is unknown, so we will report as more information appears.
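
A rough per-core normalization of those scores, assuming the 49,876-point figure refers to a single 28-core chip as written above; Cinebench scaling across 128 cores is imperfect, so treat this as an illustration only:

```python
# Rough per-core view of the quoted Cinebench R23 scores (illustration only).
milan_score, milan_cores = 87878, 128    # 2x 64-core EPYC "Milan"
xeon_score,  xeon_cores  = 49876, 28     # Xeon Platinum 8280L figure quoted above

print(f"Milan 2P:   {milan_score / milan_cores:.0f} points per core")
print(f"Xeon 8280L: {xeon_score / xeon_cores:.0f} points per core")
print(f"Overall: Milan 2P scores {milan_score / xeon_score - 1:.0%} higher")
```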

AWS Implements High-Performance EPYC and Radeon Pro Processors for Graphics Optimized Workloads

AMD announced Amazon Web Services, Inc. (AWS) has expanded its AMD-based offerings with a new cloud instance for Amazon Elastic Compute Cloud (Amazon EC2): Amazon EC2 G4ad instances for graphics-optimized workloads. With this new instance, AMD now powers eight Amazon EC2 instance families across 20 global AWS Regions. AMD also announced that Amazon GameLift, a fully managed dedicated game server hosting solution, is now providing its video game hosting customers access to AMD EPYC processor-based Amazon EC2 C5a, M5a and R5a instances.

"Today we build on the strong collaboration between AMD and AWS, which started in 2017. This expansion of our cooperation is a proof point of the continued performance and capabilities that AMD provides its customers," said Forrest Norrod, senior vice president and general manager, Data Center and Embedded Solutions Group, AMD. "Amazon EC2 G4ad instances are the first powered by both AMD EPYC CPUs and Radeon Pro GPUs, and adding to the existing EPYC processor-based instances, they exemplify the ways in which AMD CPUs and GPUs provide fantastic performance and price/performance for AWS customers."

"The high-performance capabilities of the AMD EPYC CPUs and Radeon Pro GPUs are enabling AWS to create a new graphics-focused instance that help us keep our leadership price/performance offerings that our customers expect," said David Brown, Vice President, Amazon EC2, Amazon Web Services, Inc. "We're delighted to continue this great collaboration with AMD, enabling the Amazon EC2 G4ad instances to provide the industry's best price performance for graphics-intensive applications."

ASRock Rack Brings AMD EPYC CPUs to "Deep" Mini-ITX Form Factor

ASRock Rack, a branch of ASRock focused on making server products, has today launched a new motherboard that can accommodate AMD EPYC CPUs with up to 64 cores. Built on a new, proprietary form factor called "Deep Mini-ITX," the ROMED4ID-2T motherboard is just a bit bigger than a standard ITX board. Standard ITX boards measure 170 x 170 mm, while this Deep Mini-ITX form extends the board a bit, to 170 x 208.28 mm, or 6.7" x 8.2" for all of the American readers. ASRock specifies that the board supports AMD's second-generation EPYC "Rome" 7002 series processors, which use the SP3 socket (LGA4094) with 4,094 pins.

The motherboard comes with four DDR4 DIMM slots supporting R-DIMM, LR-DIMM, and NV-DIMM modules. For maximum capacity, LR-DIMMs enable up to 256 GB of memory. When it comes to expansion, you can hook up any PCIe 4.0 device to the PCIe 4.0 x16 slot. There is also an M.2 2280 slot, so you can fit one of those high-speed PCIe 4.0 x4 M.2 SSDs. For connection to the outside world, the board uses an Intel X550-AT2 controller driving two RJ45 10 GbE connectors. Storage connectivity is rounded out by two Slimline ports (PCIe 4.0 x8 or eight SATA 6 Gb/s) and four Slimline (PCIe 4.0 x8) U.2 ports.

TOP500 Expands Exaflops Capacity Amidst Low Turnover

The 56th edition of the TOP500 saw the Japanese Fugaku supercomputer solidify its number one status in a list that reflects a flattening performance growth curve. Although two new systems managed to make it into the top 10, the full list recorded the smallest number of new entries since the project began in 1993.

The entry level to the list moved up to 1.32 petaflops on the High Performance Linpack (HPL) benchmark, a small increase from 1.23 petaflops recorded in the June 2020 rankings. In a similar vein, the aggregate performance of all 500 systems grew from 2.22 exaflops in June to just 2.43 exaflops on the latest list. Likewise, average concurrency per system barely increased at all, growing from 145,363 cores six months ago to 145,465 cores in the current list.
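
The growth rates behind that "flattening curve" observation can be computed from the quoted figures; a quick sketch:

```python
# Growth rates behind the "flattening performance curve" observation.
entry_jun, entry_nov = 1.23, 1.32         # petaflops, HPL entry level
total_jun, total_nov = 2.22, 2.43         # exaflops, aggregate list performance
cores_jun, cores_nov = 145_363, 145_465   # average cores per system

print(f"Entry level:           +{entry_nov / entry_jun - 1:.1%}")   # ~7.3%
print(f"Aggregate performance: +{total_nov / total_jun - 1:.1%}")   # ~9.5%
print(f"Average concurrency:   +{cores_nov / cores_jun - 1:.2%}")   # ~0.07%
```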