News Posts matching #Server

IBM Power11 Raises the Bar for Enterprise IT

Today, IBM revealed IBM Power11, the next generation of IBM Power servers. Redesigned with innovations across its processor, hardware architecture, and virtualization software stack, Power11 is designed to deliver the availability, resiliency, performance, and scalability enterprises demand, for seamless hybrid deployment on-premises or in IBM Cloud.

Organizations across industries have long run their most mission-critical, data-intensive workloads on IBM Power, most notably those within the banking, healthcare, retail, and government spaces. Now, enterprises face an onslaught of new technologies and solutions as they transition into the age of AI. IDC found that one billion new logical applications are expected by 2028, and the proliferation of these systems poses new complexities for companies. IBM built Power11 to deliver simplified, always-on operations with hybrid cloud flexibility for enterprises to maintain competitiveness in the AI era.

Giga Computing Unveils Liquid and Air-Cooled GIGABYTE AI Servers Accelerated by NVIDIA HGX B200 Platform

Giga Computing, an industry innovator and leader in enterprise hardware and advanced cooling solutions, today announced four new GIGABYTE servers built on the NVIDIA HGX B200 platform. This expansion of the GIGABYTE GPU server portfolio brings greater thermal design flexibility and support for the latest processors, including the new AI-optimized Intel Xeon 6 CPUs, giving customers more options as they tailor their systems for workloads and efficiency.

NVIDIA HGX B200 propels the data center into a new era of accelerated computing and generative AI. Built on NVIDIA Blackwell GPUs, the HGX B200 platform delivers up to 15X faster real-time inference on trillion-parameter models.

Intel's Server Share Slips to 63% as AMD and Arm Widen the Gap

In just a few years, AMD has gone from the underdog to Intel's most serious challenger in the server world. Thanks to its EPYC processors, AMD now captures about a third of every dollar spent on server CPUs, up from essentially zero in 2017. Over that same period, Intel's share has slipped from nearly 100% to roughly 63%, signaling a significant shift in what companies choose to power their data centers. The real inflection point came with AMD's Zen architecture: by mid-2020, EPYC had already claimed more than 10% of server-CPU revenues. Meanwhile, Intel's rollout of Sapphire Rapids Xeons encountered delays and manufacturing issues, leaving customers to look elsewhere. By late 2022, AMD was over the 20% mark, and Intel found itself under 75% for the first time in years.

Looking ahead, analysts at IDC and Mercury Research, with data compiled by Bank of America, expect AMD's slice of the revenue pie to grow to about 36% by 2025, while Intel drops to around 55%. Arm-based server chips are also starting to make real inroads, forecast to account for roughly 9% of CPU revenue next year as major cloud providers seek more energy- and cost-efficient options. By 2027, AMD could approach a 40% revenue share, Intel may fall below half the market, and Arm designs could capture 10-12%. Remember that these figures track revenue rather than unit sales: AMD's gains come primarily from high-end, high-core-count processors, whereas Intel still shifts plenty of lower-priced models. With AMD continuing to ramp its high-core-count EPYC lineup and Intel banking on the upcoming E-core Xeon 6 series to regain its footing, the fight for server-CPU supremacy is far from over. Still, Intel's once-unbeatable lead is clearly under threat.

Loongson Unveils 64-Core LS3C6000 Server CPUs to Rival Intel "Ice Lake-SP" Xeons

China's Loongson has introduced its latest server processor family, the LS3C6000 series, its most powerful domestically designed processor. These new chips use Loongson's fourth-generation microarchitecture and pack 16 64-bit superscalar LA664 cores on each die. With simultaneous multi-threading, the single-die "S" model handles 32 logical threads, while the dual-die "D" and quad-die "Q" versions support 64 and 128 threads, respectively. Operating between 2.0 GHz and 2.2 GHz, the family delivers peak double-precision FP64 performance of 844.8 GigaFLOPS for S units, 1.612 TeraFLOPS for D units, and 3.072 TeraFLOPS for Q units. Each core benefits from 64 KB of dedicated instruction cache and 64 KB of data cache, plus a 256 KB private L2 cache, while a shared 32 MB L3 cache serves all cores on a die.

Memory is served by four 72-bit DDR4-3200 channels in the S version and eight channels in the larger SKUs, while connectivity comes via 64 PCI Express lanes for S models and 128 lanes for D and Q models, alongside SPI, UART, I²C, and GPIO ports. Security is addressed through an integrated SE module with a secondary LA264 core that accelerates the SM2, SM3, and SM4 cryptographic functions. Loongson relies on its proprietary Coherent Link interconnect, which uses PCI Express-style links and board-level direct paths to scale up to 256 logical cores in multi-socket systems. Power consumption ranges from 100 to 120 W for S parts, 180 to 200 W for D parts, and 250 to 300 W for Q parts. Loongson positions the 16-core S model against Intel's aging third-generation Xeon Scalable processors, while the 64-core Q model is said to be roughly comparable to the 40-core, 80-thread Xeon Platinum 8380.
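
The quoted peak figures are internally consistent with each LA664 core sustaining 24 FP64 operations per cycle. A quick sanity check reproduces all three numbers; the per-SKU clock choices within the stated 2.0-2.2 GHz range are our assumptions, not Loongson-published per-model clocks:

```python
# Sanity-check Loongson's quoted peak FP64 numbers.
# Peak FLOPS = cores x clock (GHz) x FLOPs per core per cycle.

def peak_fp64_gflops(cores: int, ghz: float, flops_per_cycle: int = 24) -> float:
    """Theoretical peak double-precision throughput in GigaFLOPS."""
    return cores * ghz * flops_per_cycle

# FLOPs/cycle implied by the single-die "S" figure: 844.8 GFLOPS at 2.2 GHz.
implied_flops_per_cycle = 844.8 / (16 * 2.2)  # works out to 24 per core per cycle

print(peak_fp64_gflops(16, 2.2))  # S-die: matches the quoted 844.8 GFLOPS
print(peak_fp64_gflops(32, 2.1))  # D: matches the quoted ~1.612 TFLOPS
print(peak_fp64_gflops(64, 2.0))  # Q: matches the quoted 3.072 TFLOPS
```

Note that the D and Q figures only line up at slightly lower clocks than the S part, which is consistent with the 2.0-2.2 GHz range Loongson gives for the family.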

AAEON Unveils World's First 8L Dual-GPU AI Inference Server, the MAXER-5100

Leading provider of advanced AI solutions AAEON has released a new addition to its AI Inference Server product line, the MAXER-5100, the world's first 8L AI inference server equipped with two integrated GPUs. The MAXER-5100's default model pairs the 24-core, 32-thread 14th Gen Intel Core i9-14900K processor with two onboard NVIDIA RTX 2000 Ada GPUs. A barebone SKU is also available, accommodating CPUs up to 65 W from across the 12th, 13th, and 14th Gen Intel Core processor lines, along with PCIe slots for other compatible GPUs to suit project needs.

Given the processing power and AI performance the system offers, the MAXER-5100 is primarily positioned as a central server for managing multiple edge devices, particularly with its Certificate Authority (CA) support granting additional security for smart infrastructure, healthcare, and advanced manufacturing applications. Moreover, the MAXER-5100's zero-trust secure tunnel and onboard TPM 2.0 allow for encrypted data transmission between the server and multiple edge devices, as well as over-the-air updates and remote diagnostics.

Shadow Launches Neo: The Next Generation Cloud Gaming PC

SHADOW, the global leader in high-performance cloud computing, is proud to announce the launch of Neo, a brand-new cloud gaming PC offering designed to deliver next-level RTX experiences for gamers, creators, and professionals alike. Neo will officially roll out in Europe and North America starting June 16, 2025.

Building on the success of the company's previous offers, Neo replaces its widely adopted "Boost" tier and delivers major performance leaps: up to 150% more in gaming and 200% more in pro software performance. All existing Boost users are being upgraded to Neo at no additional cost, while rates for new users will start at $37.99 per month.

AMD Namedrops EPYC "Venice" Zen 6 and EPYC "Verano" Zen 7 Server Processors

AMD at its 2025 Advancing AI event name-dropped its two next generations of EPYC server processors to succeed the current EPYC "Turin" powered by Zen 5 microarchitecture. 2026 will see AMD debut the Zen 6 microarchitecture, and its main workhorse for the server segment will be EPYC "Venice." This processor will likely see a generational increase in CPU core counts, increased IPC from the full-sized Zen 6 cores, support for newer ISA, and an updated I/O package. AMD is looking to pack "Venice" with up to 256 CPU cores per package.

AMD is looking to increase the CPU core count per CCD (CPU complex die) with "Zen 6." The company plans to build these CCDs on the 2 nm TSMC N2 process node. The sIOD (server I/O die) of "Venice" implements PCI-Express Gen 6 for a generational doubling in bandwidth to GPUs, SSDs, and NICs. AMD is also claiming memory bandwidth as high as 1.6 TB/s. There are a couple of ways AMD could achieve this: increasing memory clock speeds, or giving the processor a 16-channel DDR5 memory interface, up from the current 12 channels. The company could also add support for multiplexed DIMM standards such as MR-DIMM and MCR-DIMM. All told, AMD claims a 70% increase in multithreaded performance over the current EPYC "Turin," which we assume compares the highest-performing part with its next-gen successor.
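
The 1.6 TB/s claim is easy to bound with the standard peak-bandwidth formula: channels x transfer rate x 8 bytes per 64-bit channel. The channel counts and MR-DIMM-class transfer rate below are illustrative assumptions, not AMD-confirmed configurations:

```python
# Theoretical DRAM bandwidth: channels x transfer rate (MT/s) x bus width (bytes).
# A standard DDR channel is 64 bits wide, i.e. 8 bytes per transfer.

def mem_bandwidth_gbs(channels: int, mts: int, bytes_per_transfer: int = 8) -> float:
    """Peak memory bandwidth in GB/s."""
    return channels * mts * bytes_per_transfer / 1000

print(mem_bandwidth_gbs(12, 6400))   # current 12-ch DDR5-6400: 614.4 GB/s
print(mem_bandwidth_gbs(12, 12800))  # 12-ch at MR-DIMM-class 12800 MT/s: 1228.8 GB/s
print(mem_bandwidth_gbs(16, 12800))  # 16-ch at 12800 MT/s: 1638.4 GB/s, ~1.6 TB/s
```

Only the 16-channel, multiplexed-DIMM combination lands on the claimed figure, which suggests AMD may be counting on both levers at once.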

Supermicro Delivers Liquid-Cooled and Air-Cooled AI Solutions with AMD Instinct MI350 Series GPUs and Platforms

Supermicro, Inc., a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is announcing that both liquid-cooled and air-cooled GPU solutions will be available with the new AMD Instinct MI350 series GPUs, optimized for unparalleled performance, maximum scalability, and efficiency. The Supermicro H14 generation of GPU-optimized solutions, featuring dual AMD EPYC 9005 CPUs along with AMD Instinct MI350 series GPUs, is designed for organizations seeking maximum performance at scale while reducing the total cost of ownership of their AI-driven data centers.

"Supermicro continues to lead the industry with the most experience in delivering high-performance systems designed for AI and HPC applications," said Charles Liang, president and CEO of Supermicro. "Our Data Center Building Block Solutions enable us to quickly deploy end-to-end data center solutions to market, bringing the latest technologies for the most demanding applications. The addition of the new AMD Instinct MI350 series GPUs to our GPU server lineup strengthens and expands our industry-leading AI solutions and gives customers greater choice and better performance as they design and build the next generation of data centers."

Europe Builds AI Infrastructure With NVIDIA to Fuel Region's Next Industrial Transformation

NVIDIA today announced it is working with European nations, and technology and industry leaders, to build NVIDIA Blackwell AI infrastructure that will strengthen digital sovereignty, support economic growth and position the continent as a leader in the AI industrial revolution. France, Italy, Spain and the U.K. are among the nations building domestic AI infrastructure with an ecosystem of technology and cloud providers, including Domyn, Mistral AI, Nebius and Nscale, and telecommunications providers, including Orange, Swisscom, Telefónica and Telenor.

These deployments will deliver more than 3,000 exaflops of NVIDIA Blackwell compute resources for sovereign AI, enabling European enterprises, startups and public sector organizations to securely develop, train and deploy agentic and physical AI applications. NVIDIA is establishing and expanding AI technology centers in Germany, Sweden, Italy, Spain, the U.K. and Finland. These centers build on NVIDIA's history of collaborating with academic institutions and industry through the NVIDIA AI Technology Center program and NVIDIA Deep Learning Institute to develop the AI workforce and advance scientific discovery throughout the regions.

Pegatron Unveils AI-Optimized Server Innovations at GTC Paris 2025

PEGATRON, a globally recognized Design, Manufacturing, and Service (DMS) provider, is showcasing its latest AI server solutions at GTC Paris 2025. Built on NVIDIA Blackwell architecture, PEGATRON's cutting-edge systems are tailored for AI training, reasoning, and enterprise-scale deployment.

NVIDIA GB300 NVL72
At the forefront is the RA4802-72N2, built on the NVIDIA GB300 NVL72 rack system, featuring 72 NVIDIA Blackwell Ultra GPUs and 36 NVIDIA Grace CPUs. Designed for AI factories, it boosts AI reasoning output by up to 50X. PEGATRON's in-house developed Coolant Distribution Unit (CDU) delivers 310 kW of cooling capacity with redundant hot-swappable pumps, ensuring performance and reliability for mission-critical workloads.

Supermicro Unveils Industry's Broadest Enterprise AI Solution Portfolio for NVIDIA Blackwell Architecture

Supermicro, Inc., a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is announcing an expansion of the industry's broadest portfolio of solutions designed for NVIDIA Blackwell Architecture to the European market. The introduction of more than 30 solutions reinforces Supermicro's industry leadership by providing the most comprehensive and efficient solution stack for NVIDIA HGX B200, GB200 NVL72, and RTX PRO 6000 Blackwell Server Edition deployments, enabling rapid time-to-online for European enterprise AI factories across any environment. Through close collaboration with NVIDIA, Supermicro's solution stack enables the deployment of NVIDIA Enterprise AI Factory validated design and supports the upcoming introduction of NVIDIA Blackwell Ultra solutions later this year, including NVIDIA GB300 NVL72 and HGX B300.

"With our first-to-market advantage and broad portfolio of NVIDIA Blackwell solutions, Supermicro is uniquely positioned to meet the accelerating demand for enterprise AI infrastructure across Europe," said Charles Liang, president and CEO of Supermicro. "Our collaboration with NVIDIA, combined with our global manufacturing capabilities and advanced liquid cooling technologies, enables European organizations to deploy AI factories with significantly improved efficiency and reduced implementation timelines. We're committed to providing the complete solution stack enterprises need to successfully scale their AI initiatives."

MSI Powers AI's Next Leap for Enterprises at ISC 2025

MSI, a global leader in high-performance server solutions, is showcasing its enterprise-grade, high-performance server platforms at ISC 2025, taking place June 10-12 at booth #E12. Built on standardized and modular architectures, MSI's AI servers are designed to power next-generation AI and accelerated computing workloads, enabling enterprises to rapidly advance their AI innovations.

"As AI workloads continue to grow and evolve toward inference-driven applications, we're seeing a significant shift in how enterprises approach AI deployment," said Danny Hsu, General Manager of Enterprise Platform Solutions at MSI. "With modular and standards-based architectures, enterprise data centers can now adopt AI technologies more quickly and cost-effectively than ever before. This marks a new era where AI is not only powerful but also increasingly accessible to businesses of all sizes."

ASUS Announces Key Milestone with Nebius and Showcases NVIDIA GB300 NVL72 System at GTC Paris 2025

ASUS today joined GTC Paris at VivaTech 2025 as a Gold Sponsor, highlighting its latest portfolio of AI infrastructure solutions and reinforcing its commitment to advancing the AI Factory vision with a full range of NVIDIA Blackwell Ultra solutions, delivering breakthrough performance from large-scale datacenter to personal desktop.

ASUS is also excited to announce a transformative milestone in its partnership with Nebius. Together, the two companies are enabling a new era of AI innovation built on NVIDIA's advanced platforms. Building on the success of the NVIDIA GB200 NVL72 platform deployment, ASUS and Nebius are now moving forward with strategic collaborations featuring the next-generation NVIDIA GB300 NVL72 platform. This ongoing initiative underscores ASUS's role as a key enabler of AI infrastructure, committed to delivering scalable, high-performance solutions that help enterprises accelerate AI adoption and innovation.

Rising Demand and EOL Plans from Suppliers Drive Strong DDR4 Contract Price Hikes in 2Q25 for Server and PC Markets

TrendForce's latest investigations find that DDR4 contract prices for servers and PCs are expected to rise more sharply in the second quarter of 2025 due to two key factors: major DRAM suppliers scaling back DDR4 production and buyers accelerating procurement ahead of U.S. tariff changes. As a result, server DDR4 contract prices are forecast to rise by 18-23% QoQ, while PC DDR4 prices are projected to increase by 13-18%—both surpassing earlier estimates.

TrendForce notes that DDR4 has been in the market for over a decade, and demand is increasingly shifting toward DDR5. Given the significantly higher profit margins for HBM, DDR5, and LPDDR5(X), suppliers have laid out EOL plans for DDR4, with final shipments expected by early 2026. Current EOL notifications largely target server and PC clients, while consumer DRAM (mainly DDR4) remains in production due to continued mainstream demand.

Funcom Details Dune: Awakening's Rentable Private Server System

Greetings, soon-to-be-awakened! Today, just about 72 hours before the floodgates open, we can finally share that rentable private servers will be available from the head start launch on June 5th! We've previously communicated that private servers were planned for post-launch, but we're happy to share that progress has been faster than expected. We do, however, want to manage expectations about how private servers work in Dune: Awakening. As you know, this is not your typical survival game.

Why private servers work differently in Dune: Awakening
Dune: Awakening is powered by a unique server and world structure, something we went in-depth on in a recent blog post. In short: each server belongs to a World consisting of several other servers, all of which share the same social hubs and Deep Desert. This allows us to retain a neighborhood-like feel in the Hagga Basin and provide persistent, freeform building and other server-demanding mechanics you typically see in survival games. We combine this with the large-scale multiplayer mechanics you would expect to find in MMOs, where hundreds of players meet in social hubs and the Deep Desert to engage in social activities, trade, conflict, and more.

Dell Technologies Delivers First Quarter Fiscal 2026 Financial Results

Dell Technologies (NYSE: DELL) announces financial results for its fiscal 2026 first quarter. The company also provides guidance for its fiscal 2026 second quarter and full year.

First-Quarter Summary
  • First-quarter revenue of $23.4 billion, up 5% year over year
  • First-quarter operating income of $1.2 billion, up 21% year over year, and non-GAAP operating income of $1.7 billion, up 10%
  • First-quarter diluted EPS of $1.37, flat year over year, and non-GAAP diluted EPS of $1.55, up 17%

Infineon Announces Collaboration with NVIDIA on Power Delivery Chips for Future Server Racks

Infineon Technologies AG is revolutionizing the power delivery architecture required for future AI data centers. In collaboration with NVIDIA, Infineon is developing the next generation of power systems based on a new architecture with central power generation of 800 V high-voltage direct current (HVDC). The new system architecture significantly improves energy-efficient power distribution across the data center and allows power conversion directly at the AI chip (graphics processing unit, GPU) within the server board. Infineon's expertise in power conversion solutions from grid to core, based on all relevant semiconductor materials: silicon (Si), silicon carbide (SiC), and gallium nitride (GaN), is accelerating the roadmap to a full-scale HVDC architecture.

This revolutionary step paves the way for the implementation of advanced power delivery architectures in accelerated computing data centers and will further enhance reliability and efficiency. As AI data centers already scale beyond 100,000 individual GPUs, the need for more efficient power delivery is becoming increasingly important. AI data centers will require power outputs of one megawatt (MW) and more per IT rack before the end of the decade. Therefore, the HVDC architecture, coupled with high-density multiphase solutions, will set a new standard for the industry, driving the development of high-quality components and power distribution systems.
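
The motivation for the higher bus voltage is basic circuit arithmetic: at a fixed rack power, current scales inversely with voltage, and resistive loss with the square of the current. A small sketch makes the point; the 54 V comparison point is our assumption (a common rack busbar voltage today), not a figure from Infineon:

```python
# Bus current for a given rack power and distribution voltage: I = P / V.
# Resistive loss in the busbar scales as I^2 * R, so for the same conductor,
# raising the voltage cuts loss by the square of the current ratio.

def bus_current_amps(power_w: float, volts: float) -> float:
    return power_w / volts

rack_power = 1_000_000  # 1 MW per rack, per the end-of-decade projection above

i_54v = bus_current_amps(rack_power, 54)    # roughly 18,500 A
i_800v = bus_current_amps(rack_power, 800)  # 1,250 A

# Same busbar resistance -> loss drops by the square of the current ratio,
# on the order of 220x lower resistive loss at 800 V.
loss_ratio = (i_54v / i_800v) ** 2

print(round(i_54v), round(i_800v), round(loss_ratio, 1))
```

In practice the gain is spent on thinner conductors and fewer conversion stages rather than on loss alone, but the scaling explains why megawatt racks push the industry toward HVDC.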

Dell Technologies Unveils Next Generation Enterprise AI Solutions with NVIDIA

The world's top provider of AI-centric infrastructure, Dell Technologies, announces innovations across the Dell AI Factory with NVIDIA - all designed to help enterprises accelerate AI adoption and achieve faster time to value.

Why it matters
As enterprises make AI central to their strategy and progress from experimentation to implementation, their demand for accessible AI skills and technologies grows exponentially. Dell and NVIDIA continue the rapid pace of innovation with updates to the Dell AI Factory with NVIDIA, including robust AI infrastructure, solutions and services that streamline the path to full-scale implementation.

MiTAC Computing Unveils Full Server Lineup for Data Centers and Enterprises with Intel Xeon 6 at Computex 2025

MiTAC Computing Technology Corporation, a leading server platform designer, manufacturer, and a subsidiary of MiTAC Holdings Corporation, has launched its full suite of next-generation servers for data centers and enterprises at COMPUTEX 2025 (Booth M1110). Powered by Intel Xeon 6 processors, including those with Performance-cores (P-cores), MiTAC's new platforms are purpose-built for AI, HPC, cloud, and enterprise applications.

"For over five decades, MiTAC and Intel have built a close, collaborative relationship that continues to push innovation forward. Our latest server lineup reflects this legacy—combining Intel's cutting-edge processing power with MiTAC Computing's deep expertise in system design to deliver scalable, high-efficiency solutions for modern data centers." - Rick Hwang, President of MiTAC Computing.

MSI Unveils Next-Level AI Solutions Using NVIDIA MGX and DGX Station at COMPUTEX 2025

MSI, a leading global provider of high-performance server solutions, unveils its latest AI innovations using NVIDIA MGX and NVIDIA DGX Station reference architectures at COMPUTEX 2025, held from May 20-23 at booth J0506. Purpose-built to address the growing demands of AI, HPC, and accelerated computing workloads, MSI's AI solutions feature modular, scalable building blocks designed to deliver next-level AI performance for enterprises and cloud data center environments.

"AI adoption is transforming enterprise data centers as organizations move quickly to integrate advanced AI capabilities," said Danny Hsu, General Manager of Enterprise Platform Solutions at MSI. "With the explosive growth of generative AI and increasingly diverse workloads, traditional servers can no longer keep pace. MSI's AI solutions, built on the NVIDIA MGX and NVIDIA DGX Station reference architectures, deliver the scalability, flexibility, and performance enterprises need to future-proof their infrastructure and accelerate their AI innovation."

ASUS Announces ESC A8A-E12U Support for AMD Instinct MI350 Series GPUs

ASUS today announced that its flagship high-density AI server, ESC A8A-E12U, now supports the latest AMD Instinct MI350 series GPUs. This enhancement empowers enterprises, research institutions, and cloud providers to accelerate their AI and HPC workloads with next-generation performance and efficiency—while preserving compatibility with existing infrastructure.

Built on the 4th Gen AMD CDNA architecture, AMD Instinct MI350 series GPUs deliver powerful new capabilities, including 288 GB of HBM3E memory and up to 8 TB/s of bandwidth—enabling faster, more energy-efficient execution of large AI models and complex simulations. With expanded support for low-precision compute formats such as FP4 and FP6, the Instinct MI350 series significantly accelerates generative AI, inference, and machine-learning workloads. Importantly, Instinct MI350 series GPUs maintain drop-in compatibility with existing AMD Instinct MI300 series-based systems, such as those running Instinct MI325X—offering customers a cost-effective and seamless upgrade path. These innovations reduce server resource requirements and simplify scaling and workload management, making Instinct MI350 series GPUs an ideal choice for efficient, large-scale AI deployments.
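
To put the 288 GB figure in perspective, a weights-only footprint estimate (parameters x bytes per parameter) shows why the low-precision formats matter. The numbers below are rough illustrations under that simplifying assumption, not AMD sizing guidance; real deployments also need room for activations and KV cache:

```python
# Weights-only model footprint: parameters x bytes per parameter.
# FP6 at 0.75 bytes assumes dense packing; real formats add scaling metadata.

BYTES_PER_PARAM = {"fp16": 2.0, "fp8": 1.0, "fp6": 0.75, "fp4": 0.5}
HBM_GB = 288  # HBM3E capacity per Instinct MI350 series GPU, per the article

def max_params_billions(hbm_gb: float, fmt: str) -> float:
    """Largest weights-only model (billions of parameters) fitting in HBM."""
    return hbm_gb / BYTES_PER_PARAM[fmt]

for fmt in BYTES_PER_PARAM:
    print(f"{fmt}: ~{max_params_billions(HBM_GB, fmt):.0f}B params")
# fp16 fits ~144B parameters; fp4 stretches the same 288 GB to ~576B.
```

The same arithmetic explains the upgrade-path pitch: dropping from FP16 to FP4 quadruples the model size a single GPU can hold before any system-level sharding is needed.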

MiTAC Computing Launches the Latest Scale-out AI Server G4527G6 by NVIDIA MGX at Computex 2025

MiTAC Computing Technology Corporation, a leading server platform designer and manufacturer and a subsidiary of MiTAC Holdings Corporation (TSE: 3706), will present its latest innovations in AI infrastructure at COMPUTEX 2025. At booth M1110, MiTAC Computing will display its next-level AI server platform, the MiTAC G4527G6, fully optimized for the NVIDIA MGX architecture, which supports NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs and the NVIDIA H200 NVL platform to address the evolving demands of enterprise AI workloads.

Next-Gen AI with High-Performance Computing
With the increasing adoption of generative AI and accelerated computing, MiTAC Computing introduces the latest NVIDIA MGX-based server solution, the MiTAC G4527G6, designed to support complex AI and high-performance computing (HPC) workloads. Built on Intel Xeon 6 processors, the G4527G6 accommodates up to eight NVIDIA GPUs, 8 TB of DDR5-6400 memory, sixteen hot-swappable E1.s drives, and an NVIDIA BlueField-3 DPU for efficient north-south connectivity. Crucially, it integrates four next-generation NVIDIA ConnectX-8 SuperNICs, delivering up to 800 gigabits per second (Gb/s) of NVIDIA InfiniBand and Ethernet networking—significantly enhancing system performance for AI factories and cloud data center environments.

Wistron's New U.S. Facilities for NVIDIA Servers to be Operational Next Year

Taiwanese electronics manufacturer Wistron announced on Friday that its new U.S. manufacturing facilities for NVIDIA will be ready next year, confirming it is in discussions with other potential customers as well, Reuters reports. Wistron CEO Jeff Lin made his first public statement since NVIDIA's announcement, saying, "All our progress will follow the customer's lead," and confirmed their timeline aligns with NVIDIA's expectations. The facilities will partly support NVIDIA's ambitious plan to build AI servers worth up to $500 billion in the U.S. over the next four years. NVIDIA revealed in April its strategy to establish supercomputer manufacturing plants in Texas, collaborating with Foxconn in Houston and Wistron in Dallas, with both locations expected to ramp production within 12-15 months. Wistron's board recently approved a $500 million investment in its new U.S. subsidiary.

The facilities will focus on producing high-performance computing and AI-related products, though Lin declined to name the other companies they're in talks with. When asked about U.S. restrictions on advanced chip exports to China, Lin noted that demand outside China remains robust. "We expect to grow alongside our customers... As for developments in the Middle East, most of them are essentially our indirect customers," he added. This comes as the UAE and the U.S. signed an agreement this week to build the largest AI campus outside America, potentially involving the purchase of 500,000 of NVIDIA's most advanced AI chips yearly starting in 2025. Wistron also mentioned it is considering moving notebook production to Mexico to avoid tariffs under the United States-Mexico-Canada trade agreement.

MiTAC Computing Deploys Latest AMD EPYC 4005 Series Processors

MiTAC Computing Technology Corp., a subsidiary of MiTAC Holdings Corp. and a leader in server platform design and manufacturing, introduced its latest offerings featuring the AMD EPYC 4005 Series processors. These updated server solutions offer enhanced performance and energy efficiency to meet the growing demands of modern business workloads, including AI, cloud services, and data analytics.

"The new AMD EPYC 4005 Series processors deliver the performance and capabilities our customers need at a price point that makes ownership more attractive and attainable," said Derek Dicker, corporate vice president, Enterprise and HPC Business, AMD. "We're enabling businesses to own their computing infrastructure at an economical price, while providing the performance, security features and efficiency modern workloads demand."

ASRock Rack Announces Support for AMD EPYC 4005 Series Processors

ASRock Rack Inc., the leading innovative server company, today announced support for the newly launched AMD EPYC 4005 Series processors across its extensive lineup of AM5 socket server systems and motherboards. This announcement reinforces ASRock Rack's commitment to delivering cutting-edge performance, broad platform compatibility, and long-term value to customers in data centers, growing businesses, and edge computing environments.

Built on the AMD 'Zen 5' architecture, the AMD EPYC 4005 Series features up to 16 SMT-enabled cores and supports DDR5 memory speeds up to 5600 MT/s, delivering class-leading performance per watt within constrained IT budgets. As AI becomes embedded in everyday business software, AMD EPYC 4005 Series CPUs provide the performance headroom needed for AI-enhanced workloads such as automated customer service and data analytics while maintaining the affordability essential for small businesses. The series expands the proven AMD EPYC portfolio with solutions purpose-built for growing infrastructure demands.