News Posts matching #EPYC


GIGABYTE Releases 2U Server: G262-ZR0 with NVIDIA HGX A100 4-GPU

GIGABYTE Technology (TWSE: 2376), an industry leader in high-performance servers and workstations, today announced the G262-ZR0 for HPC, AI, and data analytics. Designed to support the highest level of performance in GPU computing, the G262-ZR0 combines fast PCIe 4.0 throughput with NVIDIA HGX technologies and NVIDIA NVLink to provide industry-leading bandwidth.

Intel Xeon "Sapphire Rapids" LGA4677-X Processor Sample Pictured

Here are some of the first pictures of the humongous Intel Xeon "Sapphire Rapids-SP" processor in the flesh. Pictured by YuuKi-AnS on the Chinese micro-blogging site bilibili, the engineering sample looks visibly larger than an AMD EPYC. Bound for 2021, this processor will leverage the latest generation of Intel's 10 nm Enhanced SuperFin fabrication node and the latest I/O, including 8-channel DDR5 memory, a large number of PCI-Express gen 5.0 lanes, and Compute Express Link (CXL) interconnect. Perhaps the most interesting bit of information from YuuKi-AnS is the mention of an on-package high-bandwidth memory solution. The processors will introduce an IPC uplift over "Ice Lake-SP" processors, as they use the newer "Willow Cove" CPU cores.

AMD EPYC "Milan" Processors Pricing and Specifications Leak

AMD is readying its upcoming EPYC processors based on the refined Zen 3 core. Codenamed "Milan", this processor generation is expected to bring the same number of PCIe lanes and quite possibly similar memory support. The pricing, along with the specifications, has leaked, and we now have information on models ranging from eight cores to a whopping 64 cores. Thanks to @momomo_us on Twitter, we got ahold of Canadian pricing leaked on the Dell Canada website. Starting with the cheapest model listed (many SKUs are missing), the EPYC 7543 processor with 32 cores running at 2.8 GHz, 256 MB of L3 cache, and a TDP of 225 W will set you back 2,579.69 CAD, which is cheaper than the previous-generation EPYC 7542 at 3,214.70 CAD.

Whether this represents more aggressive pricing to position the lineup better against the competition, we do not know. The same strategy applies to the 64-core AMD EPYC 7763 processor (2.45 GHz, 256 MB cache, 280 W TDP): the new Zen 3 based design is priced at 8,069.69 CAD, cheaper than the 8,180.10 CAD price tag of the previous-generation EPYC 7742.

AMD 32-Core EPYC "Milan" Zen 3 CPU Fights Dual Xeon 28-Core Processors

AMD is expected to announce its upcoming EPYC lineup of server processors based on the new Zen 3 architecture. Codenamed "Milan", it continues AMD's use of Italian cities as codenames for its processors. Being based on the new Zen 3 core, Milan is expected to bring big improvements over the existing EPYC "Rome" design. Built on a refined 7 nm+ process, the new EPYC Milan CPUs are set to offer better frequencies paired with high core counts. If you are wondering what Zen 3 looks like in a server configuration, look no further: the upcoming AMD EPYC 7543 32-core processor has been benchmarked in Geekbench 4.

The new EPYC 7543 CPU is a 32-core, 64-thread design with a base clock of 2.8 GHz and a boost frequency of 3.7 GHz. The caches on this CPU are big: there is a total of 2048 KB of L1 cache (32 KB of instruction and 32 KB of data cache per core), 16 MB of L2 cache, and as much as 256 MB of L3. In the GB4 test, the single-core run produced 6,065 points, while the multi-core run resulted in 111,379 points. If you are wondering how that fares against something like Intel's top-end Xeon Platinum 8280 "Cascade Lake" 28-core CPU, the new EPYC Milan 7543 is capable of fighting two of them at the same time. In the single-core test, the Intel Xeon configuration scores 5,048 points, meaning the new Milan CPU has 20% higher single-core performance, while the multi-core score of the dual-Xeon setup is 117,171 points, 5% faster than the AMD CPU. The reason for the higher multi-core score is the sheer number of cores the dual-CPU configuration offers (56 cores versus 32).
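The cache totals and percentage leads above are straightforward arithmetic; a quick sketch using the figures from the listing (rounded, illustrative only):

```python
# Cache total for the 32-core EPYC 7543 (figures from the listing above)
cores = 32
l1_per_core_kb = 32 + 32          # 32 KB instruction + 32 KB data per core
l1_total_kb = cores * l1_per_core_kb
print(l1_total_kb)                # 2048 KB of L1 cache in total

# Geekbench 4 comparison against the dual Xeon Platinum 8280 setup
epyc_st, xeon_st = 6065, 5048       # single-core scores
epyc_mt, xeon_mt = 111379, 117171   # multi-core scores
st_lead = (epyc_st / xeon_st - 1) * 100
mt_lead = (xeon_mt / epyc_mt - 1) * 100
print(f"EPYC single-core lead: {st_lead:.0f}%")      # ~20%
print(f"Dual-Xeon multi-core lead: {mt_lead:.0f}%")  # ~5%
```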

AMD's Radeon RX 6700 Series Reportedly Launches in March

AMD may be riding a new wave of success caused by its accomplishments with the Zen architecture, which in turn bolstered the R&D budget for its graphics division and turned the entire AMD business around. However, success comes at a cost, particularly when you don't own your own fabs and have to vie for capacity at TSMC against its cadre of other clients. I imagine that AMD's HQ currently has a direct system of levers and pulleys managing its chip allocation with TSMC: pull this lever and increase the number of 7 nm SoCs for the next-generation consoles; another controls the Ryzen 5000 series; and so on. As we know, demand for TSMC's 7 nm capacity is through the roof, and AMD is finding it hard to ship enough of its Zen 3 CPUs and RDNA2 graphics cards. The reported delay for the AMD RX 6700 series may well be a result of AMD overextending its product portfolio on the 7 nm process with foundry partner TSMC.

A report from Cowcotland now points towards a 1Q2021 release for AMD's high-performance RX 6700 series, which was initially poised to see the light of day in the current month of January. The RX 6700 series will ship with AMD's Navi 22 chip, estimated to be half of the full Navi 21 (which puts it at a top configuration of 2,560 Stream Processors across 40 CUs). These cards are expected to ship with 12 GB of GDDR6 memory on a 192-bit bus. However, it seems AMD may have delayed the launch of these graphics cards. One can imagine the move is meant to avoid further dividing AMD's limited TSMC wafer supply between yet another chip, one that will undoubtedly carry lower margins than the company's Zen 3 CPUs, EPYC CPUs, RX 6800 and RX 6900, and that doesn't have the same impact on business relations as the console-bound SoCs. Besides, it likely serves AMD best to put out enough of its currently-launched products to sate demand (RX 6000 series, Ryzen 5000, cough cough) rather than to launch yet another product with availability too limited for the existing demand.

128-Core 2P AMD EPYC "Milan" System Benchmarked in Cinebench R23, Outputs Insane Numbers

AMD is preparing to launch its next generation of EPYC processors, codenamed Milan. Based on the company's latest Zen 3 cores, the new EPYC generation is going to deliver a massive IPC boost spread across many cores. Models are expected to range anywhere from 16 to 64 cores to satisfy all demanding server workloads. Today, thanks to a leak from ExecutableFix on Twitter, we have the first benchmark of a system containing two of the 64-core, 128-thread Zen 3 based EPYC Milan processors. Running in a 2P configuration, the processors achieved a maximum boost clock of 3.7 GHz, which is very high for a server CPU with that many cores.

The system produced an insane Cinebench R23 score of 87,878 points. With that many cores, the number is hardly a surprise; still, we need to look at how it fares against the competition. For comparison, the Intel Xeon Platinum 8280L processor, with its 28 cores and 56 threads boosting to 4.0 GHz, can score up to 49,876 points. Of course, scaling to that many cores may not work very well in this particular application, so we have to wait and see how the chip performs in other workloads before jumping to any conclusions. The launch date for these processors is unknown, so we will report as more information appears.

AWS Implements High-Performance EPYC and Radeon Pro Processors for Graphics Optimized Workloads

AMD announced Amazon Web Services, Inc. (AWS) has expanded its AMD-based offerings with a new cloud instance for Amazon Elastic Compute Cloud (Amazon EC2): Amazon EC2 G4ad instances for graphics-optimized workloads. With this new instance, AMD now powers eight Amazon EC2 instance families across 20 global AWS Regions. AMD also announced that Amazon GameLift, a fully managed dedicated game server hosting solution, is now providing its video game hosting customers access to AMD EPYC processor-based Amazon EC2 C5a, M5a and R5a instances.

"Today we build on the strong collaboration between AMD and AWS, which started in 2017. This expansion of our cooperation is a proof point of the continued performance and capabilities that AMD provides its customers," said Forrest Norrod, senior vice president and general manager, Data Center and Embedded Solutions Group, AMD. "Amazon EC2 G4ad instances are the first powered by both AMD EPYC CPUs and Radeon Pro GPUs, and adding to the existing EPYC processor-based instances, they exemplify the ways in which AMD CPUs and GPUs provide fantastic performance and price/performance for AWS customers."

"The high-performance capabilities of the AMD EPYC CPUs and Radeon Pro GPUs are enabling AWS to create a new graphics-focused instance that help us keep our leadership price/performance offerings that our customers expect," said David Brown, Vice President, Amazon EC2, Amazon Web Services, Inc. "We're delighted to continue this great collaboration with AMD, enabling the Amazon EC2 G4ad instances to provide the industry's best price performance for graphics-intensive applications."

ASRock Rack Brings AMD EPYC CPUs to "Deep" Mini-ITX Form Factor

ASRock Rack, a branch of ASRock focused on server products, has today launched a new motherboard that can accommodate up to a 64-core AMD EPYC CPU. Built on a new, proprietary form factor called "Deep Mini-ITX", the ROMED4ID-2T motherboard is just a bit bigger than a standard ITX board. Standard ITX boards measure 170 x 170 mm, while the Deep Mini-ITX form extends the board to 170 x 208.28 mm, or 6.7" x 8.2" for all of the American readers. ASRock specifies that the board supports AMD's second-generation EPYC "Rome" 7002 series processors, which use the 4,094-pin SP3 (LGA4094) socket.

The motherboard comes with four DDR4 DIMM slots supporting R-DIMM, LR-DIMM, and NV-DIMM modules. For maximum capacity, LR-DIMMs enable up to 256 GB of memory. When it comes to expansion, you can hook up any PCIe 4.0 device to the PCIe 4.0 x16 slot. There is also an M.2 2280 slot, so you can fit one of those high-speed PCIe 4.0 x4 M.2 SSDs. For connection to the outside world, the board uses an Intel X550-AT2 controller driving two RJ45 10 GbE ports. There are also two Slimline ports (PCIe 4.0 x8 or eight SATA 6 Gb/s) and four Slimline (PCIe 4.0 x8) U.2 storage ports.

TOP500 Expands Exaflops Capacity Amidst Low Turnover

The 56th edition of the TOP500 saw the Japanese Fugaku supercomputer solidify its number one status in a list that reflects a flattening performance growth curve. Although two new systems managed to make it into the top 10, the full list recorded the smallest number of new entries since the project began in 1993.

The entry level to the list moved up to 1.32 petaflops on the High Performance Linpack (HPL) benchmark, a small increase from 1.23 petaflops recorded in the June 2020 rankings. In a similar vein, the aggregate performance of all 500 systems grew from 2.22 exaflops in June to just 2.43 exaflops on the latest list. Likewise, average concurrency per system barely increased at all, growing from 145,363 cores six months ago to 145,465 cores in the current list.
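A quick sketch of the list-over-list growth implied by those figures (percentages derived from the numbers above):

```python
# TOP500 list-over-list growth, figures from the report above
entry_jun, entry_nov = 1.23, 1.32           # petaflops, list entry level
aggregate_jun, aggregate_nov = 2.22, 2.43   # exaflops, all 500 systems
cores_jun, cores_nov = 145_363, 145_465     # average cores per system

for label, old, new in [("entry level", entry_jun, entry_nov),
                        ("aggregate performance", aggregate_jun, aggregate_nov),
                        ("avg cores per system", cores_jun, cores_nov)]:
    print(f"{label}: +{(new / old - 1) * 100:.2f}%")
```

The near-zero growth in average core count is what the report means by a flattening curve.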

AMD Announces CDNA Architecture. Radeon MI100 is the World's Fastest HPC Accelerator

AMD today announced the new AMD Instinct MI100 accelerator - the world's fastest HPC GPU and the first x86 server GPU to surpass the 10 teraflops (FP64) performance barrier. Supported by new accelerated compute platforms from Dell, Gigabyte, HPE, and Supermicro, the MI100, combined with AMD EPYC CPUs and the ROCm 4.0 open software platform, is designed to propel new discoveries ahead of the exascale era.

Built on the new AMD CDNA architecture, the AMD Instinct MI100 GPU enables a new class of accelerated systems for HPC and AI when paired with 2nd Gen AMD EPYC processors. The MI100 offers up to 11.5 TFLOPS of peak FP64 performance for HPC and up to 46.1 TFLOPS peak FP32 Matrix performance for AI and machine learning workloads. With new AMD Matrix Core technology, the MI100 also delivers a nearly 7x boost in FP16 theoretical peak floating point performance for AI training workloads compared to AMD's prior generation accelerators.

AMD Wins Contract for European LUMI Supercomputer: 552 petaflop/s Powered by Epyc, AMD Instinct

AMD has won a contract to power the LUMI supercomputer, designed for the EuroHPC Joint Undertaking (EuroHPC JU) in conjunction with 10 European countries. The contract will see AMD provide both the CPU and GPU innards of LUMI, set to be populated with next-generation AMD Epyc CPUs and AMD Instinct GPUs. The supercomputer, which is set to enter operation next year, will deliver an estimated 552 petaflop/s - higher than the world's current fastest supercomputer, Fugaku in Japan, an Arm-powered machine that reaches a peak performance of 513 petaflop/s.

The contract for LUMI's construction has been won by Hewlett Packard Enterprise (HPE), which will be providing an HPE Cray EX supercomputer powered by the aforementioned AMD hardware. LUMI's investment cost is set at 200 million euros, covering hardware, installation, and operation over the system's foreseeable lifetime. This design win marks another big contract for AMD, which was all but absent from the supercomputing space until the launch, and subsequent iterations, of its Zen architecture and the latest generations of its Instinct HPC accelerators.

QNAP Launches 24-bay U.2 NVMe NAS Featuring 2nd Gen AMD EPYC

QNAP Systems, Inc., a leading computing, networking, and storage solution innovator, today introduced the lightning-fast NVMe all-flash TS-h2490FU. With 24 drive bays for U.2 NVMe Gen 3 x4 SSDs, the TS-h2490FU provides up to 472K/205K iSCSI 4K random read/write IOPS with ultra-low latency. Equipped with four 25 GbE SFP28 and two 2.5 GbE RJ45 LAN ports, five PCIe Gen 4 slots, and 1100 W redundant power supplies, the TS-h2490FU provides exceptional hardware and connectivity. The ZFS-based QuTS hero operating system also includes powerful applications for data reduction and SSD optimization, ensuring that SSD performance and lifespan are maximized for mission-critical virtualized workloads and data centers with all-flash investments.

Los Alamos National Laboratory Deploys HPE Cray EX 'Chicoma' Supercomputer Powered by AMD EPYC Processors

Los Alamos National Laboratory has completed the installation of a next-generation high-performance computing platform, with the aim of enhancing its ongoing R&D efforts in support of the nation's response to COVID-19. Named Chicoma, the new platform is poised to demonstrate Hewlett Packard Enterprise's new HPE Cray EX supercomputer architecture for solving complex scientific problems.

"As extensive social and economic impacts from COVID-19 continue to grip the nation, Los Alamos scientists are actively engaged in a number of critical research efforts ranging from therapeutics design to epidemiological modeling," said Irene Qualters, Associate Laboratory Director for Simulation and Computing at Los Alamos. "High Performance Computing is playing a critical role by allowing scientists to model the complex phenomena involved in viral evolution and propagation."

OIST Deploys AMD EPYC Processors with Over 2 PFLOPs of Computing Power Dedicated to Scientific Research

Today, AMD and the Okinawa Institute of Science and Technology Graduate University (OIST) announced the deployment of AMD EPYC 7702 processors for use in a new high-performance computing system. The EPYC processor-based supercomputer will deliver 2.36 petaflops of computing power, which OIST plans to use for scientific research at the university. The Scientific Computing & Data Analysis Section (SCDA) of OIST plans to use the new supercomputer to support computationally intensive research ranging from bioinformatics to computational neuroscience and physics. SCDA adopted AMD EPYC after significant growth, including a 2X increase in users.

"2020 is a milestone year for OIST with new research units expanding the number of research areas. This growth is driving a significant increase in our computational needs," said Eddy Taillefer, Ph.D., Section Leader, Scientific Computing & Data Analysis Section. "Under the common resource model for which the computing system is shared by all OIST users we needed a significant increase in core-count capacity to both absorb these demands and cope with the significant growth of OIST. The latest AMD EPYC processor was the only technology that could match this core-count need in a cost-performance effective way."

AMD EPYC Processors Optimized for VMware vSphere 7.0U1

AMD today highlighted the latest expansion of the AMD EPYC processor ecosystem for virtualized and hyperconverged infrastructure (HCI) environments with VMware adding support for AMD Secure Encrypted Virtualization-Encrypted State (SEV-ES) in its newest vSphere release, 7.0U1. With the latest release, VMware vSphere now enables AMD SEV-ES, which is part of AMD Infinity Guard, a robust set of modern, hardware enabled features found in all 2nd Gen AMD EPYC processors. In addition to VM memory encryption, SEV-ES also provides encryption of CPU registers and provides VMware customers with easy-to-implement and enhanced security for their environments.

"As the modern data center continues to evolve into a virtualized, hybrid cloud environment, AMD and VMware are working together to make sure customers have access to systems that provide high levels of performance on virtualization workloads, while enabling advanced security features that are simple to implement for better protection of data," said Dan McNamara, senior vice president and general manager, Server Business Unit, AMD. "A virtualized data center with AMD EPYC processors and VMware enables customers to modernize the data center and have access to high-performance and leading-edge security features, across a wide variety of OEM platforms."

GIGABYTE, Northern Data AG and AMD Join Forces to Drive HPC Mega-Project

GIGABYTE Technology, an industry leader in high-performance servers and workstations, today announced a partnership with Northern Data AG to create an HPC mega-project with computing power of around 3.1 exaflops. GIGABYTE will supply GPU-based server systems equipped with proven AMD EPYC processors and AMD Radeon Instinct accelerators from technology partner AMD, a leading provider of high-performance computing and graphics technologies, to Northern Data.

Northern Data is developing a distributed computing cluster based on the hardware at locations in Norway, Sweden, and Canada, which in its final stage of deployment will provide FP32 computing power of around 3.1 exaflops (3.1 million teraflops, and 274.54 petaflops FP64). For comparison, the world's fastest supercomputer, the Japanese "Fugaku" (Fujitsu), delivers 1.07 exaflops FP32 and 415.3 petaflops FP64, while the second fastest, the US supercomputer "Summit" (IBM), delivers 0.414 exaflops FP32 and 148.0 petaflops FP64.
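The units in that comparison line up directly; a small sketch (figures copied from the paragraph, unit conversion only):

```python
# Peak compute figures from the paragraph above: (FP32 exaflops, FP64 petaflops)
systems = {
    "Northern Data cluster": (3.1, 274.54),
    "Fugaku (Fujitsu)":      (1.07, 415.3),
    "Summit (IBM)":          (0.414, 148.0),
}
for name, (fp32_ef, fp64_pf) in systems.items():
    # 1 exaflop = 1,000,000 teraflops, hence "3.1 million teraflops"
    print(f"{name}: {fp32_ef} EFLOPS FP32 ({fp32_ef * 1e6:,.0f} TFLOPS), "
          f"{fp64_pf} PFLOPS FP64")
```

Note the contrast: the cluster leads heavily in FP32, while Fugaku retains a large FP64 advantage.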

AMD Zen 3-based EPYC Milan CPUs to Usher in 20% Performance Increase Compared to Rome

According to a report from Hardwareluxx, whose contributor Andreas Schilling reportedly gained access to OEM documentation, AMD's upcoming EPYC Milan CPUs are bound to offer up to 20% performance improvements over the previous EPYC generation. The report claims a 15% IPC improvement, paired with an extra 5% added via operating-frequency optimization. It also claims that AMD's 64-core designs will feature a lower-clocked all-core operating mode, and an alternate 32-core mode for lightly threaded workloads where extra frequency is added to the working cores.

Apparently, AMD's approach with the Zen 3 architecture does away with L3 subdivision across CCXs; now, a full 32 MB of L3 cache is available to each 8-core Core Compute Die (CCD). AMD has apparently achieved new levels of frequency optimization with Zen 3, with higher frequency ceilings than before. This will benefit lower core-count designs the most, as they generate less heat than more core-dense designs. Milan keeps the same 7 nm manufacturing tech, DDR4, PCIe 4.0, and 120-225 W TDP range as the previous-gen Rome. It remains to be seen how these changes translate to the consumer version of Zen 3, Vermeer, later this year.

GIGABYTE Announces G242-Z11 HPC Node with PCIe 4.0

GIGABYTE Technology, an industry leader in high-performance servers and workstations, today announced the launch of the GIGABYTE G242-Z11 with PCIe 4.0, which adds to an already extensive line of G242 series servers designed for AI, deep learning, data analytics, and scientific computing. High-speed interfaces such as Ethernet, InfiniBand, and PCI Express rely on fast data transfer, and PCIe 3.0 can pose a bottleneck in some servers. The AMD EPYC family of processors brings PCIe Gen 4.0, which is valuable in servers so that high-bandwidth applications are not bottlenecked. The 2nd Gen AMD EPYC 7002 processors added PCIe Gen 4.0, and GIGABYTE offers an ever-evolving line of servers to accommodate the latest technology.

The G242-Z11 caters to the capabilities of 2nd Gen AMD EPYC 7002 series processors. It is built around a single AMD EPYC processor, including even the new 280 W 64-core (128-thread) AMD EPYC 7H12. Besides a high core count, the 7002 series has 128 PCIe lanes and natively supports PCIe Gen 4.0, which offers double the speed and bandwidth of PCIe 3.0: 16 GT/s per lane and a total bandwidth of 64 GB/s for an x16 link. As for memory, the G242-Z11 supports 8-channel DDR4 with room for up to 8 DIMMs. In this 1 DIMM-per-channel configuration, it can support up to 2 TB of memory at speeds up to 3200 MHz.
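The 64 GB/s figure follows from the per-lane rate; a back-of-the-envelope sketch (note that 64 GB/s counts the raw rate in both directions, and 128b/130b line encoding trims the usable number slightly):

```python
# PCIe 4.0 back-of-the-envelope bandwidth for an x16 link
gt_per_lane = 16      # 16 GT/s per lane (PCIe 4.0)
encoding = 128 / 130  # 128b/130b line-encoding efficiency
lanes = 16

per_lane_gbs = gt_per_lane * encoding / 8      # GB/s per lane, one direction
one_dir = per_lane_gbs * lanes
print(f"x16, one direction: {one_dir:.1f} GB/s")        # ~31.5 GB/s
print(f"x16, both directions: {one_dir * 2:.1f} GB/s")  # ~63 GB/s, marketed as 64 GB/s raw
```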

Intel Ice Lake-SP Processors Get Benchmarked Against AMD EPYC Rome

Intel is preparing to launch its next generation of server processors, and next in line is the 10 nm Ice Lake-SP CPU. Featuring "Sunny Cove" cores and up to 28 of them, the CPU is set to bring big improvements over the previous generation of server products, Cascade Lake. Today, thanks to the sharp eye of TUM_APISAK, we have a new benchmark of the Ice Lake-SP platform compared against AMD's EPYC Rome offerings. In the latest GeekBench 4 listing, an engineering sample of an unknown Ice Lake-SP model appeared with 28 cores, 56 threads, a base frequency of 1.5 GHz, and a boost of 3.19 GHz.

This model was put in a dual-socket configuration for a total of 56 cores and 112 threads, against a single 64-core AMD EPYC 7742 Rome CPU. The dual-socket Intel configuration scored 3,424 points in the single-threaded test, while the AMD configuration scored a notably higher 4,398 points. The lower score on Intel's part is possibly due to lower clocks, which should improve in the final product, as this is only an engineering sample. In the multi-threaded test, the Intel configuration scored 38,079 points, while the AMD EPYC system did worse at 35,492 points. The reason for Intel's higher result here is unknown; however, it shows that Ice Lake-SP has some potential.

AMD Confirms "Zen 4" on 5nm, Other Interesting Tidbits from Q2-2020 Earnings Call

AMD late Tuesday released its Q2-2020 financial results, which saw the company rake in revenue of $1.93 billion for the quarter and clock 26 percent YoY revenue growth. In both its corporate presentation targeted at financial analysts and its post-results conference call, AMD revealed a handful of interesting bits looking into the near future. Much of AMD's presentation focused on reassuring investors that [unlike Intel] it is promising a stable and predictable roadmap, that nothing on its roadmap has changed, and that it intends to execute everything on time. "Over the past couple of quarters what we've seen is that they see our performance/capability. You can count on us for a consistent roadmap. Milan point important for us, will ensure it ships later this year. Already started engaging people on Zen4/5nm. We feel customers are very open. We feel well positioned," said president and CEO Dr. Lisa Su.

For starters, there was yet another confirmation from the CEO that the company will launch the "Zen 3" CPU microarchitecture across both the consumer and data-center segments before year-end, which means both Ryzen and EPYC "Milan" products based on "Zen 3." Also confirmed was the introduction of the RDNA2 graphics architecture across consumer graphics segments, and the debut of the CDNA scalar compute architecture. The company started shipping semi-custom SoCs to both Microsoft and Sony, so they could manufacture their next-generation Xbox Series X and PlayStation 5 game consoles in volumes for the Holiday shopping season. Semi-custom shipments could contribute big to the company's Q3-2020 earnings. CDNA won't play a big role in 2020 for AMD, but there will be more opportunities for the datacenter GPU lineup in 2021, according to the company. CDNA2 debuts next year.

AMD Reports Second Quarter 2020 Financial Results

AMD (NASDAQ:AMD) today announced revenue for the second quarter of 2020 of $1.93 billion, operating income of $173 million, net income of $157 million and diluted earnings per share of $0.13. On a non-GAAP basis, operating income was $233 million, net income was $216 million and diluted earnings per share was $0.18. "We delivered strong second quarter results, led by record notebook and server processor sales as Ryzen and EPYC revenue more than doubled from a year ago," said Dr. Lisa Su, AMD president and CEO. "Despite some macroeconomic uncertainty, we are raising our full-year revenue outlook as we enter our next phase of growth driven by the acceleration of our business in multiple markets."

Linux Performance of AMD Rome vs Intel Cascade Lake, 1 Year On

Michael Larabel over at Phoronix posted an extremely comprehensive analysis of the performance differential between AMD's Rome-based EPYC and Intel's Cascade Lake Xeons one year after release. The battery of tests, comprising more than 116 benchmark results, pits a Xeon Platinum 8280 2P system against an EPYC 7742 2P one. Both systems were benchmarked under the Ubuntu 19.04 release, chosen as the "one year ago" baseline, and again under the newer Linux software stack (Ubuntu 20.10 daily + GCC 10 + Linux 5.8).

The benchmark conclusions are interesting. For one, Intel gained more ground than AMD over the course of the year, with the Xeon platform gaining 6% performance across releases while AMD's EPYC gained just 4% over the same period. Even so, AMD's system remains an average of 14% faster across all tests than the Intel platform, which speaks to AMD's silicon superiority. Check some benchmark results below, but follow the source link for the full rundown.
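Those relative gains compose multiplicatively rather than additively; a small illustrative sketch (the 6%, 4%, and 14% figures come from the article, while the implied year-ago lead is my own back-calculation):

```python
# Illustrative composition of the year-on-year gains reported above
xeon_gain, epyc_gain = 0.06, 0.04  # software-stack improvements over one year
amd_lead_now = 0.14                # EPYC still ~14% faster on average today

# If EPYC leads by 14% on the new stack, the lead on the old stack satisfies:
#   (1 + old_lead) * (1 + epyc_gain) / (1 + xeon_gain) = 1 + amd_lead_now
old_lead = (1 + amd_lead_now) * (1 + xeon_gain) / (1 + epyc_gain) - 1
print(f"implied lead a year ago: {old_lead * 100:.1f}%")  # ~16%
```

In other words, Intel's slightly larger software gains shaved only a couple of points off AMD's lead.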

Advanced Security Features of AMD EPYC Processors Enable New Google Cloud Confidential Computing Portfolio

AMD and Google Cloud today announced the beta availability of Confidential Virtual Machines (VMs) for Google Compute Engine powered by 2nd Gen AMD EPYC processors, taking advantage of the processors' advanced security features. Confidential VMs, the first product in the Google Cloud Confidential Computing portfolio, enable customers for the first time to encrypt data in use, while it is being processed, and not just at rest and in transit. Based on the N2D family of VMs for Google Compute Engine, Confidential VMs provide customers high-performance processing for the most demanding computational tasks while enabling encryption of even the most sensitive data in the cloud as it is processed.

"At Google Cloud, we believe the future of cloud computing will increasingly shift to private, encrypted services where users can be confident that the confidentiality of their data is always under their control. To help customers in making that transition, we've created Confidential VMs, the first product in our Google Cloud Confidential Computing portfolio," said Vint Cerf, vice president and chief internet evangelist, Google. "By using advanced security technology in the AMD EPYC processors, we've created a breakthrough technology that allows customers to encrypt their data in the cloud while it's being processed and unlock computing scenarios that had previously not been possible."

AMD 64-core EPYC "Milan" Based on "Zen 3" Could Ship with 3.00 GHz Clocks

AMD's 3rd generation EPYC line of enterprise processors, which leverages the "Zen 3" microarchitecture, could innovate in two directions: increasing performance by doing away with the CCX (compute complex) multi-core topology, and taking advantage of a newer, refined 7 nm-class node to increase clock speeds. Igor's Lab decoded as many as three OPNs of the upcoming 3rd gen EPYC series, including a 64-core/128-thread part that ships with a frequency of 3.00 GHz. The top 2nd gen EPYC 64-core part, the 7662, ships with a 2.00 GHz base frequency, 3.30 GHz boost, and a 225 W TDP. AMD is expected to unveil its "Zen 3" microarchitecture within 2020.

AMD Ryzen Threadripper PRO 3995WX Processor Pictured: 8-channel DDR4

Here is the first picture of the Ryzen Threadripper PRO 3995WX processor, part of AMD's HEDT/workstation processor launch for this year. The picture surfaced briefly on the ChipHell forums before being picked up by HXL (@9550pro). This processor is designed to compete with Intel Xeon W series processors, such as the W-3175X, and hence sits a segment above even the "normal" Threadripper series led by the 64-core/128-thread Threadripper 3990X. Besides certain features exclusive to Ryzen PRO series processors, the killer feature of the 3995WX is a menacing 8-channel DDR4 memory interface that can handle up to 2 TB of ECC memory.

The Threadripper PRO 3995WX is expected to have mostly identical I/O to the most expensive EPYC 7662 processor. As a Ryzen-branded chip, it could feature higher clock speeds than its EPYC counterpart. To enable its 8-channel memory, the processor could come with a new socket, likely sWRX8, and the AMD WRX80 chipset, although it wouldn't surprise us if these processors have some form of inter-compatibility with sTRX4 and TRX40 (at limited memory bandwidth and PCIe capabilities, of course). Sources tell VideoCardz that AMD could announce the Ryzen Threadripper PRO series as early as July 14, 2020.