News Posts matching #EPYC


Supermicro Announces High Volume Production of E3.S All-Flash Storage Portfolio with CXL Memory Expansion

Supermicro, Inc., a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, is delivering high-throughput, low-latency E3.S storage solutions supporting the industry's first PCIe Gen 5 drives and CXL modules to meet the demands of large AI training and HPC clusters, where massive amounts of unstructured data must be delivered to the GPUs and CPUs to achieve faster results.

Supermicro's Petascale systems are a new class of storage servers supporting the latest industry-standard E3.S (7.5 mm) Gen 5 NVMe drives from leading storage vendors for up to 256 TB of high-throughput, low-latency storage in 1U, or up to half a petabyte in 2U. Inside, Supermicro's innovative symmetrical architecture reduces latency by ensuring the shortest signal paths for data and maximizes airflow over critical components, allowing them to run at optimal speeds. With these new systems, a standard rack can now hold over 20 petabytes of capacity for high-throughput NVMe-oF (NVMe over Fabrics) configurations, ensuring that GPUs remain saturated with data. Systems are available with either 4th Gen Intel Xeon Scalable processors or 4th Gen AMD EPYC processors.

AMD Reports Second Quarter 2023 Financial Results, Revenue Down 18% YoY

AMD today announced revenue for the second quarter of 2023 of $5.4 billion, gross margin of 46%, operating loss of $20 million, net income of $27 million and diluted earnings per share of $0.02. On a non-GAAP basis, gross margin was 50%, operating income was $1.1 billion, net income was $948 million and diluted earnings per share was $0.58.

"We delivered strong results in the second quarter as 4th Gen EPYC and Ryzen 7000 processors ramped significantly," said AMD Chair and CEO Dr. Lisa Su. "Our AI engagements increased by more than seven times in the quarter as multiple customers initiated or expanded programs supporting future deployments of Instinct accelerators at scale. We made strong progress meeting key hardware and software milestones to address the growing customer pull for our data center AI solutions and are on-track to launch and ramp production of MI300 accelerators in the fourth quarter."

Inventec's C805G6 Data Center Solution Brings Sustainable Efficiency & Advanced Security for Powering AI

Inventec, a global leader in high-powered servers headquartered in Taiwan, is launching its cutting-edge C805G6 server for data centers, based on AMD's newest 4th Gen EPYC platform—a major innovation in computing power that provides double the operating efficiency of previous platforms. These innovations are timely, as the industry worldwide faces opposing challenges—on one hand, a growing need to reduce carbon footprints and power consumption; on the other, a push for ever-higher computing power and performance for AI. In fact, MIT researchers found in 2022 that a tenfold improvement in a machine learning model can require a 10,000-fold increase in computational requirements.

Addressing both pain points, George Lin, VP of Business Unit VI, Inventec Enterprise Business Group (Inventec EBG) notes that, "Our latest C805G6 data center solution represents an innovation both for the present and the future, setting the standard for performance, energy efficiency, and security while delivering top-notch hardware for powering AI workloads."

China Hosts 40% of all Arm-based Servers in the World

The escalating challenges in acquiring high-performance x86 servers have prompted Chinese data center companies to accelerate the shift to Arm-based system-on-chips (SoCs). Investment banking firm Bernstein reports that approximately 40% of all Arm-powered servers globally are currently being used in China. While most servers operate on x86 processors from AMD and Intel, there's a growing preference for Arm-based SoCs, especially in the Chinese market. Several global tech giants, including AWS, Ampere, Google, Fujitsu, Microsoft, and Nvidia, have already adopted or developed Arm-powered SoCs. However, Arm-based SoCs are increasingly attractive to Chinese firms, given the difficulty of consistently sourcing Intel's Xeon or AMD's EPYC. Chinese companies like Alibaba, Huawei, and Phytium are pioneering the development of these Arm-based SoCs for client and data center processors.

However, the US government's restrictions present some challenges. Both Huawei and Phytium, blacklisted by the US, cannot access TSMC's cutting-edge process technologies, limiting their ability to produce competitive processors. Although Alibaba's T-Head can leverage TSMC's latest innovations, it can't license Arm's high-performance computing Neoverse V-series CPU cores due to various export control rules. Despite these challenges, many chip designers are considering alternatives such as RISC-V, an unrestricted, rapidly evolving open-source instruction set architecture (ISA) suitable for designing highly customized general-purpose cores for specific workloads. Still, with the backing of influential firms like AWS, Google, Nvidia, Microsoft, Qualcomm, and Samsung, the Armv8 and Armv9 instruction set architectures continue to hold an edge over RISC-V. These companies' support ensures that the software ecosystem remains compatible with their CPUs, which will likely continue to drive the adoption of Arm in the data center space.

AMD Radeon RX 7900 GRE ASIC Smaller than Navi 31, Slightly Larger than Navi 21

The GPU at the heart of the China-exclusive AMD Radeon RX 7900 GRE (Golden Rabbit Edition) sparked much curiosity. It is a physically different GPU from the one found in desktop Radeon RX 7900 XT and RX 7900 XTX graphics cards. AMD wouldn't go through all the effort of designing a whole different GPU just for a limited-edition graphics card, which means this silicon could find greater use for the company—for example, this could be the package AMD uses for its upcoming mobile RX 7900 series. Likewise, AMD wouldn't design a first-party MBA (made by AMD) PCB for the silicon just for the RX 7900 GRE, so this PCB, with this particular version of the "Navi 31" silicon, could see a wider global launch, probably as the rumored Radeon RX 7800 XT or something else (although with a different set of specs from the RX 7900 GRE).

We compared the size of the new "Navi 31" package found in the RX 7900 GRE with those of the regular "Navi 31" powering the RX 7900 XT/XTX, the previous-generation "Navi 21" powering the RX 6900 XT, and the NVIDIA AD103 silicon powering the desktop GeForce RTX 4080. There are some interesting findings. The new "Navi 31" package is visibly smaller than the one powering the RX 7900 XT/XTX. It is a square package, compared to the larger rectangular one, and has a significantly thinner metal reinforcement brace. What's interesting is that the 5 nm GCD is still surrounded by six 6 nm MCDs. We don't know whether two of the six MCDs have been disabled, or whether they're dummies. AMD uses dummy chiplets as structural reinforcement in some of its EPYC server processors. The dummies spread some of the mounting pressure applied by the IHS or cooling solution, so the logic behind surrounding the GCD with six of these MCDs could be the same.

Zenbleed Vulnerability Affects All AMD Zen 2 CPUs

A new vulnerability has been discovered in AMD Zen 2-based CPUs by Tavis Ormandy, a Google Information Security researcher. Ormandy has named the new vulnerability Zenbleed—also known as CVE-2023-20593—and it's said to affect all Zen 2-based AMD processors, which means Ryzen 3000, 4000 and 5000-series CPUs and APUs, as well as EPYC server chips. The reason Zenbleed is of concern is that it doesn't require a potential attacker to have physical access to the computer or server in question, and it's said to be possible to trigger the vulnerability by executing JavaScript on a webpage. This means that the attack surface ends up being massive, at least when we're talking about something like a web hosting company.

Zenbleed is said to allow a potential attacker to gain access to things like encryption keys and user logins by triggering something called "the XMM Register Merge Optimization, followed by a register rename and a mispredicted vzeroupper." Apparently this requires some precision for the vulnerability to work, but because these registers are used system-wide, even a sandboxed attacker can gain access to them. AMD has already issued a patch for its EPYC server CPUs, which obviously are the most vulnerable systems in question, and the company plans to release patches for all of its Zen 2-based CPUs before the end of the year. Hit up the source links for more details about Zenbleed.
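For admins who want a quick first-pass triage while waiting on those patches, the short Python sketch below (an illustrative script assuming a Linux host and the usual /proc/cpuinfo layout, not an official detection tool) flags AMD Family 17h parts - the family that contains Zen, Zen+ and Zen 2 designs - so a flagged machine still needs a model-level check against AMD's CVE-2023-20593 guidance.

```python
# Illustrative sketch: flags AMD Family 17h CPUs (Zen/Zen+/Zen 2) on Linux.
# Not an authoritative Zenbleed detector; check AMD's CVE-2023-20593 guidance.

def read_cpuinfo(path="/proc/cpuinfo"):
    """Return the key/value fields of the first logical CPU."""
    info = {}
    with open(path) as f:
        for line in f:
            if not line.strip():
                break  # end of the first processor block
            key, _, value = line.partition(":")
            info[key.strip()] = value.strip()
    return info

def possibly_zen2(info):
    """True for AMD Family 17h parts, which include all Zen 2 designs."""
    return info.get("vendor_id") == "AuthenticAMD" and info.get("cpu family") == "23"

if __name__ == "__main__":
    cpu = read_cpuinfo()
    if possibly_zen2(cpu):
        print(f"AMD Family 17h CPU (model {cpu.get('model')}): "
              "verify Zenbleed (CVE-2023-20593) status for this model.")
    else:
        print("Not an AMD Family 17h CPU; Zenbleed does not apply.")
```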

Cerebras and G42 Unveil World's Largest Supercomputer for AI Training with 4 ExaFLOPS

Cerebras Systems, the pioneer in accelerating generative AI, and G42, the UAE-based technology holding group, today announced Condor Galaxy, a network of nine interconnected supercomputers, offering a new approach to AI compute that promises to significantly reduce AI model training time. The first AI supercomputer on this network, Condor Galaxy 1 (CG-1), has 4 exaFLOPs and 54 million cores. Cerebras and G42 are planning to deploy two more such supercomputers, CG-2 and CG-3, in the U.S. in early 2024. With a planned capacity of 36 exaFLOPs in total, this unprecedented supercomputing network will revolutionize the advancement of AI globally.

"Collaborating with Cerebras to rapidly deliver the world's fastest AI training supercomputer and laying the foundation for interconnecting a constellation of these supercomputers across the world has been enormously exciting. This partnership brings together Cerebras' extraordinary compute capabilities, together with G42's multi-industry AI expertise. G42 and Cerebras' shared vision is that Condor Galaxy will be used to address society's most pressing challenges across healthcare, energy, climate action and more," said Talal Alkaissi, CEO of G42 Cloud, a subsidiary of G42.

AMD EPYC 7003 Series CPUs Announced as Powering SAP Applications

Today, AMD announced that SAP has chosen AMD EPYC processor-powered Google Cloud N2D virtual machines (VMs) to run its cloud ERP delivery operations for RISE with SAP, further increasing adoption of AMD EPYC for cloud-based workloads. As enterprises look toward digital modernization, many are adopting cloud-first architectures to complement their on-premises data centers. AMD, Google Cloud and SAP can help customers achieve their most stringent performance goals while delivering on energy efficiency, scalability and resource utilization needs.

AMD EPYC processors offer exceptional performance, robust security features, and energy-efficient solutions for enterprise workloads in the cloud. RISE with SAP helps maximize customer investments in cloud infrastructure and, paired with AMD EPYC processors and Google Cloud N2D VMs, aims to modernize customer data centers and transform data into actionable insights, faster. "AMD powers some of the most performant and energy efficient cloud instances available in the world today," said Dan McNamara, senior vice president and general manager, Server Business Unit, AMD. "As part of our engagement with Google Cloud and SAP, SAP has selected AMD EPYC CPU-powered N2D instances to host its Business Suite enterprise software workloads. This decision by SAP delivers the performance and performance-per-dollar of EPYC processors to customers looking to modernize their data centers and streamline IT spending by accelerating time to value on their enterprise applications."

AMD Starts Software Enablement of Zen 5 Processors

According to the Linux Kernel Mailing List, AMD has started enabling its next-generation processors by submitting patches to the Linux kernel. Codenamed Family 1Ah, or Family 26 in decimal notation, the set of patches corresponds to the upcoming AMD Zen 5 core, the backbone of the upcoming Ryzen 8000 series processors. The patches carry a few interesting notes: added support for the amd64_edac (Error Detection and Correction) module and temperature monitoring; added PCI IDs for these models covering 00h-1Fh and 20h; and added the required support in the k10temp driver.

The AMD EDAC driver also points out that the Zen 5 server CPUs will max out at 12-channel memory. Model numbers 0-31 correspond to next-generation EPYC, while 40 to 79 are desktop and laptop SKUs. Interestingly, these patches are just the start, as adding PCI IDs and temperature drivers is basic enablement. With the 2024 launch date nearing, we expect to see more Linux kernel enablement efforts, especially for more complicated parts of the kernel.
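For readers wondering how "Family 1Ah" and "Family 26" line up, the displayed x86 family is the CPUID base family plus the extended family field whenever the base field reads 0xF. The minimal Python snippet below is an illustration of that standard decoding rule, not code from the kernel patches.

```python
# Standard x86 CPUID family decoding: the extended family field is added
# only when the base family field reads 0xF.

def displayed_family(base_family: int, extended_family: int) -> int:
    """Combine the CPUID base/extended family fields into the displayed family."""
    return base_family + extended_family if base_family == 0xF else base_family

assert displayed_family(0xF, 0xB) == 0x1A == 26  # Zen 5: "Family 1Ah" / "Family 26"
assert displayed_family(0xF, 0x8) == 0x17 == 23  # Zen/Zen+/Zen 2, for comparison
print(hex(displayed_family(0xF, 0xB)))           # 0x1a
```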

Oracle Introduces Next-Gen Exadata X10M Platforms

Oracle today introduced the latest generation of the Oracle Exadata platforms, the X10M, delivering unrivaled performance and availability for all Oracle Database workloads. Starting at the same price as the previous generation, these platforms support higher levels of database consolidation with more capacity and offer dramatically greater value than previous generations. Thousands of organizations, large and small, run their most critical and demanding workloads on Oracle Exadata including the majority of the largest financial, telecom, and retail businesses in the world.

"Our 12th generation Oracle Exadata X10M continues our strategy to provide customers with extreme scale, performance, and value, and we will make it available everywhere—in the cloud and on-premises," said Juan Loaiza, executive vice president, Mission-Critical Database Technologies, Oracle. "Customers that choose cloud deployments also benefit from running Oracle Autonomous Database, which further lowers costs by delivering true pay-per-use and eliminating database and infrastructure administration."

AMD EPYC Embedded Series Processors Power New HPE Alletra Storage MP Solution

AMD today announced that its AMD EPYC Embedded Series processors are powering Hewlett Packard Enterprise's new modular, multi-protocol storage solution, HPE Alletra Storage MP. AMD EPYC Embedded processors provide the performance and energy efficiency required for enterprise-class storage systems with high availability, resilience, and industry-leading connectivity and longevity.

The HPE Alletra Storage MP supports a disaggregated infrastructure with multiple storage protocols on the same hardware that can scale independently for performance and capacity. Configurable for block and file stores, HPE Alletra Storage MP gives customers the ability to deploy, manage, and orchestrate data and storage services via the HPE GreenLake edge-to-cloud platform, regardless of the workload and storage protocol. This eliminates data silos, reducing cost and complexity while improving performance.

AMD Zen 4c Not an E-core, 35% Smaller than Zen 4, but with Identical IPC

AMD on Tuesday (June 13) launched the EPYC 9004 "Bergamo" 128-core/256-thread high-density compute server processor, and with it debuted the new "Zen 4c" CPU microarchitecture. A lot had been made of Zen 4c in the run-up to yesterday's launch, such as rumors that it is a Zen 4 "lite" core with less number-crunching muscle, and hence lower IPC, and that Zen 4c is AMD's answer to Intel's E-core architectures, such as "Gracemont" and "Crestmont." It turns out that it is neither a lite version of Zen 4 nor an E-core, but a physically compacted version of the Zen 4 core, with identical number-crunching machinery.

First things first—Zen 4c has the exact same IPC as Zen 4 (that is, performance at a given clock speed). This is because its front-end, execution stage, load/store component, and internal cache hierarchy are exactly the same. It has the same 88-deep load queue and 64-deep store queue, the same 6.75K-op µop cache, the exact same INT+FP issue width of 10+6, the same INT register file, the same scheduler, and the same cache latencies. The L1I and L1D caches are the same 32 KB in size as "Zen 4," and so is the dedicated L2 cache, at 1 MB.

ASRock Rack Leveraging Latest 4th Gen AMD EPYC Processors with AMD "Zen 4c" Architecture

ASRock Rack, the leading innovative server company, today announced its support of 4th Gen AMD EPYC processors with AMD "Zen 4c" architecture and 4th Gen AMD EPYC processors with AMD 3D V-Cache technology, as well as an expanded range of new products, spanning high-density storage, GPU, and multi-node servers, all built for the new AMD processors.

"4th Gen AMD EPYC processors offer the highest core density of any x86 processor in the world and will deliver outstanding performance and efficiency for cloud-native workloads," said Lynn Comp, corporate vice president, Server Product and Technology Marketing, AMD. "Our latest family of data center processors allow customers to balance workload growth and flexibility with critical infrastructure consolidation mandates, enabling our customers to do more work, with more energy efficiency at a time when cloud native computing is transforming the data center."

Giga Computing Expands Support for 4th Gen AMD EPYC Processors

Giga Computing, a subsidiary of GIGABYTE and an industry leader in high-performance servers, server motherboards, and workstations, today announced support for the latest 4th Gen AMD EPYC processors. The new processors, based on the "Zen 4c" architecture or featuring AMD 3D V-Cache technology, enhance Giga Computing's enterprise solutions, enabling superior performance and scalability for cloud-native and technical computing applications. To date, more than thirty unique GIGABYTE systems and platforms support the latest generation of AMD EPYC 9004 processors. Over time, Giga Computing will roll out more GIGABYTE models for this platform, including more SKUs for immersion-ready servers and direct liquid cooling systems.

"For every new generation of AMD EPYC processors, GIGABYTE has been there, offering diverse platform options for all workloads and users," said Vincent Wang, Sales VP at Giga Computing. "And with the recent announcement of new AMD EPYC 9004 processors for technical computing and cloud native computing, we are also ready to support them at this time on our current AMD EPYC 9004 Series platforms."

AMD Details New EPYC CPUs, Next-Generation AMD Instinct Accelerator, and Networking Portfolio for Cloud and Enterprise

Today, at the "Data Center and AI Technology Premiere," AMD announced the products, strategy and ecosystem partners that will shape the future of computing, highlighting the next phase of data center innovation. AMD was joined on stage with executives from Amazon Web Services (AWS), Citadel, Hugging Face, Meta, Microsoft Azure and PyTorch to showcase the technological partnerships with industry leaders to bring the next generation of high performance CPU and AI accelerator solutions to market.

"Today, we took another significant step forward in our data center strategy as we expanded our 4th Gen EPYC processor family with new leadership solutions for cloud and technical computing workloads and announced new public instances and internal deployments with the largest cloud providers," said AMD Chair and CEO Dr. Lisa Su. "AI is the defining technology shaping the next generation of computing and the largest strategic growth opportunity for AMD. We are laser focused on accelerating the deployment of AMD AI platforms at scale in the data center, led by the launch of our Instinct MI300 accelerators planned for later this year and the growing ecosystem of enterprise-ready AI software optimized for our hardware."

AMD Expands 4th Gen EPYC CPU Portfolio with Processors for Cloud Native and Technical Computing Workloads

Today, at the "Data Center and AI Technology Premiere," AMD announced the addition of two new, workload optimized processors to the 4th Gen EPYC CPU portfolio. By leveraging the new "Zen 4c" core architecture, the AMD EPYC 97X4 cloud native-optimized data center CPUs further extend the EPYC 9004 Series of processors to deliver the thread density and scale needed for leadership cloud native computing. Additionally, AMD announced the 4th Gen AMD EPYC processors with AMD 3D V-Cache technology, ideally suited for the most demanding technical computing workloads.

"In an era of workload optimized compute, our new CPUs is pushing the boundaries of what is possible in the data center, delivering new levels of performance, efficiency, and scalability," said Forrest Norrod, executive vice president and general manager, Data Center Solutions Business Group, AMD. "We closely align our product roadmap to our customers' unique environments and each offering in the 4th Gen AMD EPYC family of processors is tailored to deliver compelling and leadership performance in general purpose, cloud native or technical computing workloads."

AMD EPYC "Bergamo" Uses 16-core Zen 4c CCDs, Barely 10% Larger than Regular Zen 4 CCDs

A SemiAnalysis report sheds light on just how much smaller the "Zen 4c" CPU core is compared to the regular "Zen 4." AMD's upcoming high core-count enterprise processor for cloud data-center deployments, the EPYC "Bergamo," is based on the new "Zen 4c" microarchitecture. Although it shares the same ISA as "Zen 4," the "Zen 4c" is essentially a low-power, lite version of the core, with significantly higher performance per Watt. The core is physically smaller than a regular "Zen 4" core, which allows AMD to create CCDs (CPU core dies) with 16 cores, compared to the current "Zen 4" CCD with eight.

The 16-core "Zen 4c" CCD is built on the same 5 nm EUV foundry node as the 8-core "Zen 4" CCD, and internally features two CCX (CPU core complex), each with 8 "Zen 4c" cores. Each of the two CCX shares a 16 MB L3 cache among the cores. The SemiAnalysis report states that the dedicated L2 cache size of the "Zen 4c" core remains at 1 MB, just like that of the regular "Zen 4." Perhaps the biggest finding is their die-size estimation, which puts the 16-core "Zen 4c" CCD just 9.6% larger in die-area, than the 8-core "Zen 4" CCD. That's 72.7 mm² per CCD, compared to 66.3 mm² of the regular 8-core "Zen 4" CCD.

Micron Announces High-Capacity 96 GB DDR5-4800 RDIMMs

Micron Technology, Inc., (Nasdaq: MU) today announced volume production availability of high-capacity 96 GB DDR5 RDIMMs at speeds up to 4800 MT/s, which offer double the bandwidth of DDR4 memory. By unlocking the next level of monolithic technology, the integration of Micron's high-density memory solutions empowers artificial intelligence (AI) and in-memory database workloads and eliminates the need for costly die stacking that also adds latency. Micron's 96 GB DDR5 RDIMM modules are qualified with 4th Gen AMD EPYC processors. Additionally, the Supermicro 8125GS - an AMD-based system - includes the Micron 96 GB DDR5 modules and is an excellent platform for high-performance computing, artificial intelligence and deep learning training, and industrial server workloads.

"Delivering high-capacity memory solutions that enable the right performance for compute-intensive workloads is essential to Micron's role as a leading memory innovator and manufacturer. Micron's 96 GB DDR5 DRAM module establishes a new optimized total cost of ownership solution for our customers," stated Praveen Vaidyanathan, vice president and general manager of Micron's Compute Products Group. "Our collaboration with a flexible system provider like Supermicro leverages each of our strengths to provide customers with the latest memory technology to address their most challenging data center needs."
"Supermicro's time-to-market collaboration with Micron benefits a wide variety of key customers," said Don Clegg, senior vice president, Worldwide Sales, Supermicro. "Micron's portfolio of advanced memory and storage products, aligned with Supermicro's broad server and storage innovations deliver validated, tested, and proven solutions for data center deployments and advanced workloads."

Tyan Showcases Density With Updated AMD EPYC 2U Server Lineup

Tyan, a subsidiary of MiTAC, showed off its new range of AMD EPYC-based servers with a distinct focus on compute density. These include new additions to its Transport lineup of configurable servers, which now host EPYC 9004 "Genoa" series processors with up to 96 cores each. The new additions come as 2U servers, each with a different specialty focus. First up is the Transport SX TN85-B8261, aimed squarely at HPC and AI/ML deployments, with support for up to dual 96-core EPYC "Genoa" processors, 3 TB of registered ECC DDR5-4800, dual 10GbE via an Intel X550-AT2 as well as 1GbE for IPMI, six PCIe Gen 5 x16 slots with support for four GPGPUs for ML/HPC compute, and eight NVMe drives at the front of the chassis. An optional, more storage-focused configuration, for builds without GPUs, places 24 NVMe SSDs at the front, soaking up the 96 lanes of PCIe.

TYAN Server Platforms to Boost Data Center Computing Performance with 4th Gen AMD EPYC Processors at Computex 2023

TYAN, an industry-leading server platform design manufacturer and a subsidiary of MiTAC Computing Technology Corporation, will be showcasing its latest HPC, cloud and storage platforms at Computex 2023, Booth #M0701a in Taipei, Taiwan from May 30 to June 2. These platforms are powered by AMD EPYC 9004 Series processors, which offer superior energy efficiency and are designed to enhance data center computing performance.

"As businesses increasingly prioritize sustainability in their operations, data centers - which serve as the computational core of an organization - offer a significant opportunity to improve efficiency and support ambitious sustainability targets," said Eric Kuo, Vice President of the Server Infrastructure Business Unit at MiTAC Computing Technology Corporation. "TYAN's server platforms powered by 4th Gen AMD EPYC processor enable IT organizations to achieve high performance while remaining cost-effective and contributing to environmental sustainability."

Latest TOP500 List Highlights World's Fastest and Most Energy Efficient Supercomputers are Powered by AMD

Today, AMD (NASDAQ: AMD) showcased its high performance computing (HPC) leadership at ISC High Performance 2023 and celebrated, along with key partners, its first year of breaking the exascale barrier. AMD EPYC processors and AMD Instinct accelerators continue to be the solutions of choice behind many of the most innovative, green and powerful supercomputers in the world, powering 121 supercomputers on the latest TOP500 list.

"AMD's mission in high-performance computing is to enable our customers to tackle the world's most important challenges," said Forrest Norrod, executive vice president and general manager, Data Center Solutions Business Group, AMD. "Our industry partners and the global HPC community continue to leverage the performance and efficiency of AMD EPYC processors and Instinct accelerators to advance their groundbreaking work and scientific discoveries."

Supermicro Launches Industry's First NVIDIA HGX H100 8-GPU and 4-GPU Servers with Liquid Cooling

Supermicro, Inc., a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, continues to expand its data center offerings with liquid-cooled NVIDIA HGX H100 rack-scale solutions. Advanced liquid cooling technologies entirely from Supermicro reduce the lead time for a complete installation, increase performance, and result in lower operating expenses, while significantly reducing the PUE of data centers. Power savings for a data center are estimated at 40% when using Supermicro liquid cooling solutions compared to an air-cooled data center. In addition, a reduction of up to 86% in direct cooling costs compared to existing data centers may be realized.
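To put the PUE claim in context: PUE (Power Usage Effectiveness) is total facility power divided by IT equipment power, so shrinking the cooling overhead pulls the ratio toward 1.0. The Python sketch below is purely illustrative; every number in it is a hypothetical assumption, not a Supermicro figure.

```python
# Illustrative PUE arithmetic with hypothetical numbers (not Supermicro data).
it_power_kw = 1000.0      # assumed IT equipment load
air_overhead_kw = 500.0   # assumed cooling + facility overhead when air-cooled
cooling_reduction = 0.70  # assumed fraction of that overhead removed by liquid cooling

pue_air = (it_power_kw + air_overhead_kw) / it_power_kw
pue_liquid = (it_power_kw + air_overhead_kw * (1 - cooling_reduction)) / it_power_kw

print(f"PUE, air-cooled:    {pue_air:.2f}")     # 1.50
print(f"PUE, liquid-cooled: {pue_liquid:.2f}")  # 1.15
```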

"Supermicro continues to lead the industry supporting the demanding needs of AI workloads and modern data centers worldwide," said Charles Liang, president, and CEO of Supermicro. "Our innovative GPU servers that use our liquid cooling technology significantly lower the power requirements of data centers. With the amount of power required to enable today's rapidly evolving large scale AI models, optimizing TCO and the Total Cost to Environment (TCE) is crucial to data center operators. We have proven expertise in designing and building entire racks of high-performance servers. These GPU systems are designed from the ground up for rack scale integration with liquid cooling to provide superior performance, efficiency, and ease of deployments, allowing us to meet our customers' requirements with a short lead time."

AMD EPYC 8004 Data Center "Siena" CPUs Certified for General SATA and PCI Support

Keen-eyed hardware tipster momomo_us this week spotted that an upcoming AMD data center "Siena Dense" CPU has received verification, in the general sense, for SATA and PCI support - courtesy of the Serial ATA International Organization (SATA-IO). The information dump was uploaded to SATA-IO's online database on April 6 of this year - under the heading: "AMD EPYC 8004 Series Processors." As covered by TPU mid-way through this month, the family of enterprise-grade processors, bearing the codename Siena, is expected to be an entry-level alternative to the EPYC Genoa-X range, set for launch later in 2023.

The EPYC Siena series is reported to arrive with a new socket type - SP6 (LGA 4844) - which is said to be similar in size to the older Socket SP3. The larger upcoming "Genoa-X" and "Bergamo" processors will sit in the existing Socket SP5 (LGA 6096), which 2022's EPYC Genoa lineup already uses. AMD has not made its SP6 socket official to the public, but industry figures have been informed that it can run up to 64 "Zen 4" cores. This new standard has been designed with more power-efficient tasks in mind - targeting intelligent edge and telecommunication sectors. The smaller SP6 socket will play host to CPUs optimized for operation as low as 70 W, with hungrier variants accommodated up to 225 W. This single-platform solution is said to offer 6-channel memory, 96 PCIe Gen 5.0 lanes, 48 lanes for CXL V1.1+, and 8 PCIe Gen 3.0 lanes.

Ericsson Strikes Cloud RAN Agreement with AMD

Ericsson is boosting its Open RAN and Cloud RAN ecosystem commitment through an agreement with US-based global ICT industry leader AMD. The agreement - intended to strengthen the Open RAN ecosystem and vendor-agnostic Cloud RAN environment - aims to offer communications service providers (CSPs) a combination of high performance and additional flexibility for open architecture offerings.

The Ericsson-AMD collaboration will see additional processing technologies in the Ericsson Cloud RAN offering. The expanded offering aims to enhance the performance of Cloud RAN and secure high-capacity solutions. The collaboration will enable joint exploration of AMD EPYC processors and T2 Telco accelerator for utilization in Cloud RAN solutions, while also investigating future platform generations of these technologies.

Gigabyte Extends Its Leading GPU Portfolio of Servers

Giga Computing, a subsidiary of GIGABYTE and an industry leader in high-performance servers, server motherboards, and workstations, today announced a lineup of powerful GPU-centric servers with the latest AMD and Intel CPUs, including NVIDIA HGX H100 servers with both 4-GPU and 8-GPU modules. With growing interest in HPC and AI applications, specifically generative AI (GAI), this breed of server relies heavily on GPU resources to tackle compute-heavy workloads that handle large amounts of data. With the advent of OpenAI's ChatGPT and other AI chatbots, large GPU clusters are being deployed with system-level optimization to train large language models (LLMs). These LLMs can be processed by GIGABYTE's new design-optimized systems that offer a high level of customization based on users' workloads and requirements.

The GIGABYTE G-series servers are built first and foremost to support dense GPU compute and the latest PCIe technology. Starting with the 2U servers, the new G293 servers can support up to 8 dual-slot GPUs or 16 single-slot GPUs, depending on the server model. For the ultimate in CPU and GPU performance, the 4U G493 servers offer plenty of networking options and storage configurations to go alongside support for eight (Gen 5 x16) GPUs. And for the highest level of GPU compute for HPC and AI, the G393 & G593 series support NVIDIA H100 Tensor Core GPUs. All these new dual-socket servers are designed for either 4th Gen AMD EPYC processors or 4th Gen Intel Xeon Scalable processors.