News Posts matching #EPYC


MSI Showcases GPU Servers for Media and Entertainment Industry at 2024 NAB Show

MSI, a leading global server provider, is showcasing its latest GPU servers powered by AMD processors at the 2024 NAB Show, Booth #SL9137 in the Las Vegas Convention Center, from April 14-17. These servers are designed to meet the evolving needs of modern creative projects in the Media and Entertainment industry. "As AI continues to reshape the Media and Entertainment industry, it brings unprecedented speed and performance to tasks such as animation, visual effects, video editing, and rendering," said Danny Hsu, General Manager of Enterprise Platform Solutions. "MSI's GPU platforms empower content creators to execute every project with efficiency, speed, and uncompromising quality."

The G4101 is a 4U 4GPU server platform, purpose-built to unleash the full potential of creative professionals in the Media and Entertainment industry. It supports a single AMD EPYC 9004 Series processor equipped with a liquid cooling module, along with twelve DDR5 RDIMM slots. Additionally, it features four PCIe 5.0 x16 slots tailored for triple-slot graphics cards with coolers, ensuring increased airflow and sustained performance. With twelve front 2.5-inch U.2 NVMe/SATA drive bays, it offers high-speed and flexible storage options, catering to the diverse needs of AI workloads. The G4101 combines airflow spacing and closed-loop liquid cooling, making it the optimal thermal management solution for even the most demanding tasks.

AMD EPYC "Turin" 9000-series Motherboard Specs Suggest Support for DDR5 6000 MT/s

AMD's next-gen EPYC Zen 5 processor family seems to be nearing launch status—late last week, momomo_us uncovered an unnamed motherboard's datasheet; this particular model will accommodate a single 9000-series CPU—with a maximum 400 W TDP—via an SP5 socket. 500 W and 600 W limits have been divulged (via leaks) in the past, so the 400 W spec could be an error, or a "legitimate compatibility issue with the motherboard, though 400 Watts would be in character with high-end Zen 4 SP5 motherboards," according to Tom's Hardware analysis.

AMD's current-gen "Zen 4" based EPYC "Genoa" processor family—sporting up to 96-cores/192-threads—is somewhat limited by its DDR5 memory support, which tops out at 4800 MT/s. The latest leak suggests that "Turin" is upgraded quite nicely in this area compared to its predecessors: the SP5 board specs indicate DDR5 speeds of up to 6000 MT/s with 4 TB of RAM. December 2023 reports pointed to "Zen 5c" variants featuring (max.) 192-core/384-thread configurations, while larger "Zen 5" models are believed to be "modestly" specced with up to 128 cores and 256 threads. AMD has not settled on an official release date for its EPYC "Turin" 9000-series processors, but a loose launch window of "later in 2024" is expected based on timeframes presented within product roadmaps.
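For context on the jump from DDR5-4800 to DDR5-6000, here is a back-of-the-envelope bandwidth calculation. It is illustrative arithmetic only, and it assumes the SP5 platform's 12-channel, 64-bit DDR5 configuration, something the leaked datasheet itself does not spell out.

/* Peak theoretical DRAM bandwidth per socket, assuming 12 DDR5 channels
 * of 64 bits (8 bytes) each; illustrative arithmetic only. */
#include <stdio.h>

int main(void)
{
    const int channels = 12;           /* SP5 socket: 12-channel DDR5 (assumed) */
    const int bytes_per_transfer = 8;  /* 64-bit channel width */
    const double rates_mts[] = { 4800.0, 6000.0 };  /* "Genoa" vs. leaked "Turin" spec */

    for (int i = 0; i < 2; i++)
        printf("%.0f MT/s -> %.1f GB/s peak\n", rates_mts[i],
               rates_mts[i] * 1e6 * bytes_per_transfer * channels / 1e9);
    return 0;
}

Under those assumptions, the step from 4800 MT/s to 6000 MT/s lifts peak per-socket bandwidth from roughly 460.8 GB/s to 576 GB/s, a 25% increase.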

AMD Response to "ZENHAMMER: Rowhammer Attacks on AMD Zen-Based Platforms"

On February 26, 2024, AMD received new research related to an industry-wide DRAM issue documented in "ZENHAMMER: Rowhammer Attacks on AMD Zen-Based Platforms" from researchers at ETH Zurich. The research demonstrates Rowhammer attacks on DDR4 and DDR5 memory using AMD "Zen" platforms. Given the history around Rowhammer, the researchers do not consider these attacks to be a new issue.

Mitigation
AMD continues to assess the researchers' claim of demonstrating Rowhammer bit flips on a DDR5 device for the first time. AMD will provide an update upon completion of its assessment.

Dr. Lisa Su Responds to TinyBox's Radeon RX 7900 XTX GPU Firmware Problems

The TinyBox AI server system attracted plenty of media attention last week—its creator, George Hotz, decided to build with AMD RDNA 3.0 GPU hardware rather than the expected/traditional choice of CDNA 3.0. Tiny Corp. is a startup firm dealing in neural network frameworks—they currently "write and maintain tinygrad." Hotz & Co. are in the process of assembling rack-mounted 12U TinyBox systems for customers—an individual server houses an AMD EPYC 7532 processor and six XFX Speedster MERC310 Radeon RX 7900 XTX graphics cards. The Tiny Corp. social media account has engaged in numerous NVIDIA vs. AMD AI hardware debates/tirades—Hotz appears to favor the latter, as evidenced by his latest choice of components. ROCm support on Team Red's Instinct AI accelerators is fairly mature at this point in time, but it is a much newer prospect on gaming-oriented graphics cards.

Tiny Corporation's unusual leveraging of Radeon RX 7900 XTX GPUs in a data center configuration has already hit a developmental roadblock. Yesterday, the company's social media account expressed driver-related frustrations in a public forum: "If AMD open sources their firmware, I'll fix their LLVM spilling bug and write a fuzzer for HSA. Otherwise, it's not worth putting tons of effort into fixing bugs on a platform you don't own." Hotz's latest complaint was taken on board by AMD's top brass—Dr. Lisa Su responded with the following message: "Thanks for the collaboration and feedback. We are all in to get you a good solution. Team is on it." Her software engineers—within a few hours—managed to send a set of fixes in Tiny Corporation's direction. Hotz appreciated the quick turnaround, and proceeded to run a model without encountering major stability issues: "AMD sent me an updated set of firmware blobs to try. They are responsive, and there have been big strides in the driver in the last year. It will be good! This training run is almost 5 hours in, hasn't crashed yet." Tiny Corp. drummed up speculation about AMD open sourcing GPU MES firmware—Hotz disclosed that he will be talking (on the phone) to Team Red leadership.

AMD Hires Thomas Zacharia to Expand Strategic AI Relationships

AMD announced that Thomas Zacharia has joined AMD as senior vice president of strategic technology partnerships and public policy. Zacharia will lead the global expansion of AMD public/private relationships with governments, non-governmental organizations (NGOs) and other organizations to help fast-track the deployment of customized AMD-powered AI solutions to meet the rapidly growing number of global projects and applications targeting the deployment of AI for the public good.

"Thomas is a distinguished leader with decades of experience successfully creating public/private partnerships that have resulted in consistently deploying the world's most powerful and advanced computing solutions, including the world's fastest supercomputer Frontier," said AMD Chair and CEO Lisa Su. "As the former Director of the U.S.'s largest multi-program science and energy research lab, Thomas is uniquely positioned to leverage his extensive experience advancing the frontiers of science and technology to help countries around the world deploy AMD-powered AI solutions for the public good."

Supermicro Accelerates Performance of 5G and Telco Cloud Workloads with New and Expanded Portfolio of Infrastructure Solutions

Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, delivers an expanded portfolio of purpose-built infrastructure solutions to accelerate performance and increase efficiency in 5G and telecom workloads. With one of the industry's most diverse offerings, Supermicro enables customers to expand public and private 5G infrastructures with improved performance per watt and support for new and innovative AI applications. As a long-term advocate of open networking platforms and a member of the O-RAN Alliance, Supermicro's portfolio incorporates systems featuring 5th Gen Intel Xeon processors, AMD EPYC 8004 Series processors, and the NVIDIA Grace Hopper Superchip.

"Supermicro is expanding our broad portfolio of sustainable and state-of-the-art servers to address the demanding requirements of 5G and telco markets and Edge AI," said Charles Liang, president and CEO of Supermicro. "Our products are not just about technology, they are about delivering tangible customer benefits. We quickly bring data center AI capabilities to the network's edge using our Building Block architecture. Our products enable operators to offer new capabilities to their customers with improved performance and lower energy consumption. Our edge servers contain up to 2 TB of high-speed DDR5 memory, 6 PCIe slots, and a range of networking options. These systems are designed for increased power efficiency and performance-per-watt, enabling operators to create high-performance, customized solutions for their unique requirements. This reassures our customers that they are investing in reliable and efficient solutions."

Lenovo HPC Infrastructure Powers Pre-Exascale Supercomputer Marenostrum 5 to Enable New Scientific Advances and Solve Global Challenges

Lenovo (HKSE: 992) (ADR: LNVGY) has today announced that the General Purpose Partition of the MareNostrum 5, a new pre-exascale supercomputer running on Lenovo's HPC infrastructure, has been classified as the top x86 general-purpose cluster on the recently published TOP500 list of the most powerful supercomputers globally.

Officially inaugurated at Barcelona Supercomputing Center on December 21st, MareNostrum 5 has been built for the European High Performance Computing Joint Undertaking (EuroHPC JU). The pre-exascale supercomputer will bolster the EU's mission to provide Europe with the most advanced supercomputing technology and accelerate the capacity for artificial intelligence (AI) research, enabling new scientific advances that will help solve global challenges. It aims to empower a wide range of complex HPC-specific applications, from climate research and engineering to material science and earth sciences, adeptly handling tasks that extend beyond the capabilities of cloud computing.

AMD Zen 5 Linux Kernel Patches Point to Power Management Updates

AMD released its latest PMC (power management controller) driver patches for the Linux kernel, which reference yet-unreleased "Family 1Ah" processors. Phoronix believes this is the first reference to AMD's next-generation "Zen 5" microarchitecture in the PMC driver. We've already seen AMD EPYC "Turin" server processors based on "Zen 5" in the flesh, and it's likely that AMD is handing these out to some of its biggest data-center customers for testing and evaluation, before giving them some final touches and green-lighting mass production in 2024. The patches themselves are barely two new lines of code, and deal with "s2idle," a software-defined system sleep state. The EPYC "Turin" processor comes in two packages, one with up to 128 "Zen 5" cores, and another with up to 192 "Zen 5c" cores for cloud applications.
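The patch itself is not reproduced here; the snippet below is only a hypothetical sketch of the general pattern this kind of enablement follows. Linux x86 drivers typically gate new-platform support on the CPU family reported by CPUID, so recognizing "Family 1Ah" (0x1A) parts largely comes down to a check like this:

/* Hypothetical illustration, not the actual amd_pmc patch: x86 kernel code
 * commonly inspects boot_cpu_data, which carries the family reported by
 * CPUID, to decide whether a newly supported CPU family is present. */
#include <asm/processor.h>

static bool cpu_is_amd_family_1ah(void)
{
        /* boot_cpu_data.x86 is the CPU family; Family 1Ah == 0x1A ("Zen 5") */
        return boot_cpu_data.x86_vendor == X86_VENDOR_AMD &&
               boot_cpu_data.x86 == 0x1A;
}

In the real driver, a check of this sort would simply feed the existing s2idle suspend/resume handling rather than add any new functionality.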

AMD 5th Gen EPYC "Turin" Pictured: Who Needs Accelerators When You Have 192 Cores?

AMD's upcoming server processor, the 5th Gen EPYC "Turin," has been pictured; the engineering sample is probably being evaluated by the company's data-center or cloud customers. The processor has a mammoth core-count of 192-core/384-thread in its high-density cloud-focused variant that uses "Zen 5c" CPU cores. Its regular version, which uses larger "Zen 5" cores that can sustain higher clock speeds, also comes with a fairly high core-count of 128-core/256-thread, up from the 96-core/192-thread of the "Zen 4" based EPYC "Genoa."

The EPYC "Turin" server processor based on "Zen 5" comes with an updated sIOD (server I/O die), surrounded by as many as 16 CCDs (CPU complex dies). AMD is expected to build these CCDs on the TSMC N4P foundry node, which is a more advanced version of the TSMC N4 node the company currently uses for its "Phoenix" client processors, and the TSMC N5 node it uses for its "Zen 4" CCD. TSMC claims that the N4P node offers an up to 22% improvement in power efficiency over N5, as well as a 6% increase in transistor density. Each of the "Zen 5" CCDs is confirmed to have 8 CPU cores sharing 32 MB of L3 cache memory. A total of 16 such CCDs add up to the processor's 128-core/256-thread count. The high-density "Turin," meant for cloud data centers, is a whole different beast.

Intel "Emerald Rapids" Xeon Platinum 8592+ Tested, Shows 20%+ Improvement over Sapphire Rapids

Yesterday, Intel unveiled its latest Xeon data center processors, codenamed Emerald Rapids, delivering the new Xeon Platinum 8592+ flagship SKU with 64 cores and 128 threads. Packed into its fresh silicon, Intel promises boosted performance and reduced power draw. The comprehensive tech benchmarking website Phoronix essentially confirms Intel's pitch. Testing production servers running the new 8592+ showed solid gains over prior Intel models, let alone older generations still commonplace in data centers. On average, upgrading to the 8592+ increased single-socket server performance by around 23.5% compared to the previous-generation Sapphire Rapids configuration, the Xeon Platinum 8490H. The dual-socket configuration records a 17% boost in performance.

However, Intel is not alone in the data center market. AMD's 64-core offering that the Xeon Platinum 8592+ competes with is the EPYC 9554, and the Emerald Rapids chip is faster by about 2.3%. AMD's lineup doesn't stop at 64 cores, though: Genoa and Genoa-X with 3D V-Cache top out at 96 cores, while Bergamo goes up to 128 cores. On the power consumption front, the Xeon Platinum 8592+ was pulling about 289 Watts on average, compared to the Xeon Platinum 8490H average of 306 Watts. At peak, the Xeon Platinum 8592+ managed to hit 434 Watts, compared to the Xeon Platinum 8490H peak of 469 Watts. This aligns with Intel's claims of enhanced efficiency. The 64-core counterpart from AMD drew less still: the EPYC 9554 had an average power consumption of 227 Watts and a recorded peak of 369 Watts.
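As a quick sanity check of what those averages imply, the short program below (illustrative arithmetic only, not part of the Phoronix test suite) normalizes the quoted average power draw against the roughly 2.3% performance gap between the two 64-core parts:

/* Rough perf-per-watt estimate from the averages quoted above.
 * Performance is normalized to the EPYC 9554 (= 1.000); the 8592+ is
 * taken as ~2.3% faster, per the Phoronix results. */
#include <stdio.h>

int main(void)
{
    struct { const char *name; double rel_perf; double avg_watts; } cpus[] = {
        { "Xeon Platinum 8592+", 1.023, 289.0 },
        { "EPYC 9554",           1.000, 227.0 },
    };

    for (int i = 0; i < 2; i++)
        printf("%-20s %.5f relative performance per watt\n",
               cpus[i].name, cpus[i].rel_perf / cpus[i].avg_watts);
    return 0;
}

On those numbers, the EPYC 9554 still comes out ahead on average performance per watt (roughly 0.0044 versus 0.0035) despite its small raw-performance deficit.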

Intel "Sierra Forest" Xeon System Surfaces, Fails in Comparison to AMD Bergamo

Intel's upcoming Sierra Forest Xeon server chip has debuted on Geekbench 6, showcasing its potential in multi-core performance. Slated for release in the first half of 2024, Sierra Forest is equipped with up to 288 Efficiency cores, positioning it to compete with AMD's Zen 4c Bergamo server CPUs and ARM-based server chips like those from Ampere for the favor of cloud service providers (CSPs). In the Geekbench 6 benchmark, a dual-socket configuration featuring two 144-core Sierra Forest CPUs was tested. The benchmark revealed a notable multi-core score of 7,770, surpassing most dual-socket systems powered by Intel's high-end Xeon Platinum 8480+, which typically scores between 6,500 and 7,500. However, Sierra Forest's single-core score of 855 points was considerably lower, not even reaching half of that of the 8480+, which manages 1,897 points.

The difference in single-core performance is a matter of design choice, as Sierra Forest uses Crestmont-derived Sierra Glen E-cores, which are more power- and area-efficient, unlike the Golden Cove P-cores in the Sapphire Rapids-based 8480+. This choice is particularly advantageous for server environments where high core counts are crucial, as CSPs usually partition their instances by the number of CPU cores. However, compared to AMD's Bergamo CPUs, which use Zen 4c cores, Sierra Forest lacks pure computing performance, especially in multi-core workloads. Sierra Forest also lacks hyperthreading, while Bergamo offers SMT with 256 threads on the 128-core SKU. Compared against the Geekbench 6 scores of AMD's Bergamo EPYC 9754, Sierra Forest's results look a lot less impressive: Bergamo scored 1,597 points in single-core, almost double that of Sierra Forest, and 16,455 points in the multi-core benchmark, more than double. This is a significant advantage of the Zen 4c core, which cuts down on caches instead of being an entirely different core, as Intel does with its P- and E-cores. However, these are just preliminary numbers; we must wait for real-world benchmarks to see the actual performance.

Supermicro Extends AI and GPU Rack Scale Solutions with Support for AMD Instinct MI300 Series Accelerators

Supermicro, Inc., a Total IT Solution Manufacturer for AI, Cloud, Storage, and 5G/Edge, is announcing three new additions to its AMD-based H13 generation of GPU Servers, optimized to deliver leading-edge performance and efficiency, powered by the new AMD Instinct MI300 Series accelerators. Supermicro's powerful rack-scale solutions, featuring 8-GPU servers in the AMD Instinct MI300X OAM configuration, are ideal for large model training.

The new 2U liquid-cooled and 4U air-cooled servers with AMD Instinct MI300A Accelerated Processing Units (APUs) are available now, improving data center efficiencies and powering the fast-growing, complex demands of AI, LLM, and HPC workloads. The new systems contain quad APUs for scalable applications. Supermicro can deliver complete liquid-cooled racks for large-scale environments with up to 1,728 TFLOPS of FP64 performance per rack. Supermicro's worldwide manufacturing facilities streamline the delivery of these new servers for AI and HPC convergence.

AMD Brings New AI and Compute Capabilities to Microsoft Customers

Today at Microsoft Ignite, AMD and Microsoft featured how AMD products, including the upcoming AMD Instinct MI300X accelerator, AMD EPYC CPUs and AMD Ryzen CPUs with AI engines, are enabling new services and compute capabilities across cloud and generative AI, Confidential Computing, Cloud Computing and smarter, more intelligent PCs.

"AMD is fostering AI everywhere - from the cloud, to the enterprise and end point devices - all powered by our CPUs, GPUs, accelerators and AI engines," said Vamsi Boppana, Senior Vice President, AI, AMD. "Together with Microsoft and a rapidly growing ecosystem of software and hardware partners, AMD is accelerating innovation to bring the benefits of AI to a broad portfolio of compute engines, with expanding software capabilities."

TYAN Announces New Server Line-Up Powered by 4th Gen AMD EPYC (9004/8004 Series) and AMD Ryzen (7000 Series) Processors at SC23

TYAN, an industry leader in server platform design and a subsidiary of MiTAC Computing Technology Corporation, debuts its new server line-up for 4th Gen AMD EPYC & AMD Ryzen Processors at SC23, Booth #1917, in the Colorado Convention Center, Denver, CO, November 13-16.

The AMD EPYC 9004 processor features leadership performance and is optimized for a wide range of HPC, cloud-native computing and Generative AI workloads.
TYAN offers server platforms supporting the AMD EPYC 9004 processors that provide up to 128 "Zen 4c" cores and 256 MB of L3 cache for dynamic cloud-native applications with high performance, density, energy efficiency, and compatibility.

AMD EPYC CPUs Affected by CacheWarp Vulnerability, Patches are Already Available

Researchers at Graz University of Technology and the Helmholtz Center for Information Security have released their paper on CacheWarp—the latest vulnerability affecting some of the prior-generation AMD EPYC CPUs. Tracked as CVE-2023-20592, the exploit targets first-generation EPYC Naples, second-generation EPYC Rome, and third-generation EPYC Milan. CacheWarp operates by exploiting a vulnerability in AMD's Secure Encrypted Virtualization (SEV) technology, specifically targeting the SEV-ES (Encrypted State) and SEV-SNP (Secure Nested Paging) versions. The attack is a software-based fault injection technique that manipulates the cache memory of a virtual machine (VM) running under SEV. It cleverly forces modified cache lines of the guest VM to revert to their previous state. This action circumvents the integrity checks that SEV-SNP is designed to enforce, allowing the attacker to inject faults without being detected.

Unlike attacks that rely on specific guest VM vulnerabilities, CacheWarp is more versatile and dangerous because it does not depend on the characteristics of the targeted VM. It exploits the underlying architectural weaknesses of AMD SEV, making it a broad threat to systems relying on this technology for security. The CacheWarp attack can bypass robust security measures like encrypted virtualization, posing a significant risk to data confidentiality and integrity in secure computing environments. AMD has issued an update for EPYC Milan with a hot-loadable microcode patch and an updated firmware image, without any expected performance degradation. For the remaining generations, AMD states that no mitigation is available for the first or second generation of EPYC processors (Naples and Rome), since the SEV and SEV-ES features are not designed to protect guest VM memory integrity and SEV-SNP is not available on those parts.

ASRock Rack Announces Support of NVIDIA H200 GPUs and GH200 Superchips and Highlights HPC and AI Server Platforms at SC 23

ASRock Rack Inc., the leading innovative server company, today is set to showcase a comprehensive range of servers for diverse AI workloads catering to scenarios from the edge, on-premises, and to the cloud at booth #1737 at SC 23 held at the Colorado Convention Center in Denver, USA. The event is from November 13th to 16th, and ASRock Rack will feature the following significant highlights:

At SC 23, ASRock Rack will demonstrate the NVIDIA-Qualified 2U4G-GENOA/M3 and 4U8G series GPU server solutions along with the NVIDIA H100 PCIe. The ASRock Rack 4U8G and 4U10G series GPU servers are able to accommodate eight to ten 400 W dual-slot GPU cards and 24 hot-swappable 2.5" drives, designed to deliver exceptional performance for demanding AI workloads deployed in the cloud environment. The 2U4G-GENOA/M3, tailored for lighter workloads, is powered by a single AMD EPYC 9004 series processor and is able to support four 400 W dual-slot GPUs while having additional PCIe and OCP NIC 3.0 slots for expansions.

GIGABYTE Demonstrates the Future of Computing at Supercomputing 2023 with Advanced Cooling and Scaled Data Centers

Giga Computing, a subsidiary of GIGABYTE Technology and an industry leader in high-performance servers, server motherboards, and workstations, continues to be a leader in cooling IT hardware efficiently and in developing diverse server platforms for Arm and x86 processors, as well as AI accelerators. At SC23, GIGABYTE (booth #355) will showcase some standout platforms, including for the NVIDIA GH200 Grace Hopper Superchip and next-gen AMD Instinct APU. To better introduce its extensive lineup of servers, GIGABYTE will address the most important needs in supercomputing data centers, such as how to cool high-performance IT hardware efficiently and power AI that is capable of real-time analysis and fast time to results.

Advanced Cooling
For many data centers, it is becoming apparent that their cooling infrastructure must radically shift to keep pace with new IT hardware that continues to generate more heat and requires rapid heat transfer. Because of this, GIGABYTE has launched advanced cooling solutions that allow IT hardware to maintain ideal performance while being more energy-efficient and maintaining the same data center footprint. At SC23, its booth will have a single-phase immersion tank, the A1P0-EA0, which offers a one-stop immersion cooling solution. GIGABYTE is experienced in implementing immersion cooling with immersion-ready servers, immersion tanks, oil, tools, and services spanning the globe. Another cooling solution showcased at SC23 will be direct liquid cooling (DLC), and in particular, the new GIGABYTE cold plates and cooling modules for the NVIDIA Grace CPU Superchip, NVIDIA Grace Hopper Superchip, AMD EPYC 9004 processor, and 4th Gen Intel Xeon processor.

MSI Introduces New AI Server Platforms with Liquid Cooling Feature at SC23

MSI, a leading global server provider, is showcasing its latest GPU and CXL memory expansion servers powered by AMD EPYC processors and 4th Gen Intel Xeon Scalable processors, which are optimized for enterprises, organizations and data centers, at SC23, booth #1592 in the Colorado Convention Center in Denver from November 13 to 16.

"The exponential growth of human- and machine-generated data demands increased data center compute performance. To address this demand, liquid cooling has emerged as a key trend, said Danny Hsu, General Manager of Enterprise Platform Solutions. "MSI's server platforms offer a well-balanced hardware foundation for modern data centers. These platforms can be tailored to specific workloads, optimizing performance and aligning with the liquid cooling trend."

ASUS Demonstrates AI and Immersion-Cooling Solutions at SC23

ASUS today announced a showcase of the latest AI solutions to empower innovation and push the boundaries of supercomputing, at Supercomputing 2023 (SC23) in Denver, Colorado, from 12-17 November, 2023. ASUS will demonstrate the latest AI advances, including generative-AI solutions and sustainability breakthroughs with Intel, to deliver the latest hybrid immersion-cooling solutions, plus lots more - all at booth number 257.

At SC23, ASUS will showcase the latest NVIDIA-qualified ESC N8A-E12 HGX H100 eight-GPU server, powered by dual-socket AMD EPYC 9004 processors and designed for enterprise-level generative AI with market-leading integrated capabilities. Following NVIDIA's announcement of the latest H200 Tensor Core GPU at SC23, the first GPU to offer HBM3e for faster, larger memory to fuel the acceleration of generative AI and large language models, ASUS will offer an update of its H100-based system with an H200-based drop-in replacement in 2024.

Leaked Flyer Hints at Possible AMD Ryzen 9000 Series Powered by Zen 5

A curious piece of marketing material on the Chiphell forum has sent ripples through the tech community, featuring what appears to be an Alienware desktop equipped with an unannounced AMD Ryzen 9000-series processor. The authenticity of this flyer is up for debate, with possibilities ranging from a simple typo by Alienware to a fabricated image, or it could even suggest that AMD is on the cusp of unveiling its next-generation Ryzen CPUs for desktop PCs. While intrigue is high, such revelations should be approached with a big grain of salt. AMD's existing roadmap points toward a 2024 release for its Zen 5-based Ryzen desktop processors and EPYC server CPUs, which casts further doubt on the Ryzen 9000 series appearing ahead of schedule.

We have to wait for AMD's major upcoming events, starting with the "Advancing AI" event on December 6, where the company will showcase how it and its partners use AI in real applications. Beyond that are CES in January and Computex in May, but we don't have any official information on product launches in the near term. If the company is preparing anything, the Alienware flyer pictured below should be an early indication, assuming the source is genuine. For now, the doubt remains, and we should be skeptical of its authenticity.

Micron Announces 128GB DRAM Low-Latency, High-Capacity RDIMMs

Micron Technology, Inc. (Nasdaq: MU) today demonstrated its industry leadership by announcing its 32Gb monolithic die-based 128 GB DDR5 RDIMM memory featuring best-in-class performance of up to 8000 MT/s to support data center workloads today and into the future. These high-capacity, high-speed memory modules are engineered to meet the performance and data-handling needs of a wide range of mission-critical applications in data center and cloud environments, including artificial intelligence (AI), in-memory databases (IMDBs) and efficient processing for multithreaded, multicore count general compute workloads.

Powered by Micron's industry-leading 1β (1-beta) technology, the 32Gb DDR5 DRAM die-based 128 GB DDR5 RDIMM memory delivers the following enhancements over competitive 3DS through-silicon via (TSV) products:
  • more than 45% improved bit density
  • up to 24% improved energy efficiency
  • up to 16% lower latency
  • up to a 28% improvement in AI training performance

AMD Extends 3rd Gen EPYC CPU Lineup to Deliver New Levels of Value for Mainstream Applications

Today, AMD announced the extension of its 3rd Gen AMD EPYC processor family with six new offerings providing a robust suite of data center CPUs to meet the needs of general IT and mainstream computing for businesses seeking to leverage the economics of established platforms. The complete family of 3rd Gen AMD EPYC CPUs complements the leadership performance and efficiency of the latest 4th Gen AMD EPYC processors with impressive price-performance, modern security features and energy efficiency for less technically demanding business critical workloads.

The race to deliver AI and high performance computing is creating a technology gap for IT decision-makers seeking mainstream performance. To meet the growing demand for widely deployed, cost effective and proven mainstream solutions in the mid-market and in the channel, AMD is extending the 3rd Gen EPYC CPU offering to provide excellent value, performance, energy efficiency and security features for business-critical applications. The 3rd Gen AMD EPYC CPU portfolio enables a wide array of broadly deployed enterprise server solutions, supported by trusted channel sellers and OEMs such as Cisco, Dell Technologies, Gigabyte, HPE, Lenovo and Supermicro.

GIGABYTE Announces New Direct Liquid Cooling (DLC) Multi-Node Servers Ahead of SC23

Giga Computing, a subsidiary of GIGABYTE Technology and an industry leader in high-performance servers, server motherboards, and workstations, today announced direct liquid cooling (DLC) multi-node servers for the NVIDIA Grace CPU Superchip and NVIDIA Grace Hopper Superchip, along with a DLC-ready Intel-based server for the NVIDIA HGX H100 8-GPU platform and a high-density server for AMD EPYC 9004 processors. For the ultimate in efficiency, there is also a new 12U single-phase immersion tank. All these products will be at GIGABYTE booth #355 at SC23.

Just-announced high-density CPU servers include the Intel Xeon-based H263-S63-LAN1 and the AMD EPYC-based H273-Z80-LAN1. These 2U 4-node servers employ DLC for all eight CPUs, and despite the dense configuration, CPU performance achieves its full potential. In August, GIGABYTE announced new servers for the NVIDIA HGX H100 GPU, and it now adds a DLC version to the G593 series, the G593-SD0-LAX1, for the NVIDIA HGX H100 8-GPU platform.

AMD Reports Third Quarter 2023 Financial Results, Revenue Up 4% YoY

AMD (NASDAQ:AMD) today announced revenue for the third quarter of 2023 of $5.8 billion, gross margin of 47%, operating income of $224 million, net income of $299 million and diluted earnings per share of $0.18. On a non-GAAP basis, gross margin was 51%, operating income was $1.3 billion, net income was $1.1 billion and diluted earnings per share was $0.70.

"We delivered strong revenue and earnings growth driven by demand for our Ryzen 7000 series PC processors and record server processor sales," said AMD Chair and CEO Dr. Lisa Su. "Our data center business is on a significant growth trajectory based on the strength of our EPYC CPU portfolio and the ramp of Instinct MI300 accelerator shipments to support multiple deployments with hyperscale, enterprise and AI customers."

Velocity Micro Announces ProMagix G480a and G480i, Two GPU Server Solutions for AI and HPC

Velocity Micro, the premier builder of award-winning enthusiast desktops, laptops, high performance computing solutions, and professional workstations, announces the immediate availability of the ProMagix G480a and G480i - two GPU servers optimized for High Performance Computing and Artificial Intelligence. Powered by either dual 4th Gen AMD EPYC or dual 4th Gen Intel Xeon Scalable processors, these 4U form factor servers support up to eight dual-slot PCIe Gen 5 GPUs to create incredible compute power designed specifically for the highest-demand workflows including simulation, rendering, analytics, deep learning, AI, and more. Shipments begin immediately.

"By putting emphasis on scalability, functionality, and performance, we've created a line of server solutions that tie in the legacy of our high-end brand while also offering businesses alternative options for more specialized solutions for the highest demand workflows," said Randy Copeland, President and CEO of Velocity Micro. "We're excited to introduce a whole new market to what we can do."