News Posts matching #Scalable

ASUS Announces AMD EPYC 9005-Series CPU-based Servers with MI325X Accelerators

ASUS today announced a series of servers powered by the groundbreaking AMD EPYC 9005-series processors, setting new standards in performance and density for AI-driven data center workloads. The full line-up includes the ASUS ESC A8A-E12U, supporting AMD Instinct MI325X accelerators, and ASUS ESC8000A-E13P GPU servers capable of supporting eight GPUs for large-scale AI model training, ensuring unmatched computational power. The ASUS RS520QA-E13 is a multi-node server for EDA and cloud computing, while the RS720A-E13, RS700A-E13, RS521A, and RS501A cover general-purpose tasks. These servers are engineered to deliver excellent performance across a wide range of applications, meeting the demands of the most rigorous workloads.

At the core of these servers lie AMD EPYC 9005-series processors, previously codenamed Turin and based on the Zen 5 architecture, with up to 192 cores / 384 threads and frequencies up to 5 GHz. These powerhouse CPUs are designed to elevate AI-driven data center workloads to unprecedented levels. With industry-leading vCPU density, EPYC 9005-series processors combine optimized compute and AI capabilities to deliver superior performance in AI inference, whether handling mixed or traditional tasks. This lineup diversity is unified by the x86 architecture, enabling customers to easily integrate ASUS servers powered by AMD EPYC 9005-series processors into their existing x86 infrastructure. This allows for efficient transitions and migrations within the data center, ensuring continuity and maximizing operational efficiency.

Advantech Launches AIR-310, an Ultra-Low-Profile Scalable AI Inference System

Advantech, a leading provider of edge computing solutions, introduces the AIR-310, a compact edge AI inference system featuring an MXM GPU card. Powered by 12th/13th/14th Gen Intel Core 65 W desktop processors, the AIR-310 delivers up to 12.99 TFLOPS of scalable AI performance via the NVIDIA Quadro 2000A GPU card in a 1.5U chassis (215 x 225 x 55 mm). Despite its compact size, it offers versatile connectivity with three LAN ports and four USB 3.0 ports, enabling seamless integration of sensors and cameras for vision AI applications.

The system includes smart fan management, operates in temperatures from 0 to 50°C (32 to 122°F), and is shock-resistant, capable of withstanding 3G vibration and 30G shock. Bundled with Intel Arc A370 and NVIDIA A2000 GPUs, it is certified to IEC 61000-6-2, IEC 61000-6-4, and CB/UL standards, ensuring stable 24/7 operation in harsh environments, including space-constrained or mobile equipment. The AIR-310 supports Windows 11, Linux Ubuntu 24.04, and the Edge AI SDK, enabling accelerated inference deployment for applications such as factory inspections, real-time video surveillance, GenAI/LLM, and medical imaging.

ASUS Announces ESC N8-E11 AI Server with NVIDIA HGX H200

ASUS today announced the latest marvel in the groundbreaking lineup of ASUS AI servers - ESC N8-E11, featuring the intensely powerful NVIDIA HGX H200 platform. With this AI titan, ASUS has secured its first industry deal, showcasing the exceptional performance, reliability and desirability of ESC N8-E11 with HGX H200, as well as the ability of ASUS to move first and fast in creating strong, beneficial partnerships with forward-thinking organizations seeking the world's most powerful AI solutions.

Shipments of the ESC N8-E11 with NVIDIA HGX H200 are scheduled to begin in early Q4 2024, marking a new milestone in the ongoing ASUS commitment to excellence. ASUS has been actively supporting clients by assisting in the development of cooling solutions to optimize overall PUE, guaranteeing that every ESC N8-E11 unit delivers top-tier efficiency and performance - ready to power the new era of AI.

Supermicro Launches Plug-and-Play SuperCluster for NVIDIA Omniverse

Supermicro, Inc., a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is announcing a new addition to its SuperCluster portfolio of plug-and-play AI infrastructure solutions for the NVIDIA Omniverse platform, delivering high-performance, generative AI-enhanced 3D workflows at enterprise scale. This new SuperCluster features the latest Supermicro NVIDIA OVX systems and allows enterprises to easily scale as workloads increase.

"Supermicro has led the industry in developing GPU-optimized products, traditionally for 3D graphics and application acceleration, and now for AI," said Charles Liang, president and CEO of Supermicro. "With the rise of AI, enterprises are seeking computing infrastructure that combines all these capabilities into a single package. Supermicro's SuperCluster features fully interconnected 4U PCIe GPU NVIDIA-Certified Systems for NVIDIA Omniverse, with up to 256 NVIDIA L40S PCIe GPUs per scalable unit. The system helps deliver high performance across the Omniverse platform, including generative AI integrations. By developing this SuperCluster for Omniverse, we're not just offering a product; we're providing a gateway to the future of application development and innovation."

Next-Gen Computing: MiTAC and TYAN Launch Intel Xeon 6 Processor-Based Servers for AI, HPC, Cloud, and Enterprise Workloads at COMPUTEX 2024

MiTAC Computing Technology, a subsidiary of MiTAC Holdings Corp., and its server brand TYAN, a leading server platform design manufacturer, unveil their new server systems and motherboards optimized for today's AI, HPC, cloud, and enterprise workloads at COMPUTEX 2024, Booth #M1120 in Taipei, Taiwan, from June 4 to June 7. Harnessing the power of the latest Intel Xeon 6 processors and 4th and 5th Gen Intel Xeon Scalable processors, these solutions deliver cutting-edge performance.

"For over a decade, MiTAC has worked with Intel at the forefront of server technology innovation, consistently delivering cutting-edge solutions tailored for AI and high-performance computing (HPC). The integration of Intel's latest Xeon 6 processors into our MiTAC and TYAN server platforms transforms computational capabilities, significantly enhancing AI performance, boosting efficiency, and scaling cloud operations. These advancements empower our customers with a competitive edge through superior performance and optimized total cost of ownership," said Rick Hwang, President of MiTAC Computing Technology Corporation.

Thrustmaster Unveils New AVA, Modular and Scalable Base for Flight Simulation Joysticks

Thrustmaster is thrilled to unveil its new modular and scalable base for flight simulation joysticks. AVA (Advanced Versatile Aviation) has been carefully designed to perfectly meet your needs and preferences, offering you a multitude of tailor-made configurations that are great for all types of aviation. Create your ideal configuration from the many adjustment possibilities offered by the AVA base's various interchangeable elements: the axis curve, the resistance, the travel and the damping. AVA's design was specially created for quick modification of all its elements, allowing for customization between flights, and on-the-fly adjustments.

In addition, a real ecosystem is taking shape around the AVA base - with the introduction of a metal plate designed for desktop use, an offset adapter that lets you adjust the position of the grip to 15° or 30° in the joystick's central position, as well as the announcement of future add-ons and accessories. You'll also want to discover the comprehensive bundle featuring the AVA base, metal plate and offset adapter, which includes Thrustmaster's world-famous F/A-18C Hornet grip. Excellent for military, civil, space or helicopter piloting, the AVA base will adapt to your requirements and transport you into a world of simulation that's more realistic than ever before.

Intel Prepares 500-Watt Xeon 6 SKUs of Granite Rapids and Sierra Forest

Intel is preparing to unveil its cutting-edge Xeon 6 series server CPUs, known as Granite Rapids and Sierra Forest. These forthcoming processors are set to deliver a significant boost in performance, foreshadowing a new era of computing power, albeit with a trade-off in increased power consumption. Two days ago, Yuuki_Ans posted information about the Beechnut City validation platform. Today, he updated the X thread with more information that Intel is significantly boosting core counts across its new Xeon 6 lineup. The flagship Xeon 6 6980P is a behemoth, packing 128 cores with a blistering 500 Watt Thermal Design Power (TDP) rating. In fact, Intel is equipping five of its Xeon 6 CPUs with a sky-high 500 W TDP, including the top four Granite Rapids parts and even the flagship Sierra Forest SKU, which is composed entirely of efficiency cores. This marks a substantial increase from Intel's previous Xeon Scalable processors, which maxed out at 350-385 Watts.

The trade-off for this performance boost is a dramatic rise in power consumption. By nearly doubling the TDP ceiling, Intel can double the core count from 64 to 128 cores on its Granite Rapids CPUs, vastly improving multi-core capabilities. However, this focus on raw performance over power efficiency means server manufacturers must redesign their cooling solutions to adequately accommodate Intel's flagship 500 W parts; failure to do so could lead to thermal throttling. Intel's next-gen Xeon CPU architectures are shaping up to be one of the most significant generational leaps in recent memory, but they come with a power-consumption trade-off that vendors and data centers will need to address. Densely packing thousands of these 500-Watt SKUs will create new power and thermal challenges, and we will be watching how future data center projects handle them.
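
For a rough sense of the rack-level impact, the sketch below estimates combined CPU TDP per rack for dense dual-socket 1U nodes; the node count, socket count, and the idea of comparing TDP ceilings directly are illustrative assumptions rather than figures from Intel or any vendor.

```python
# Illustrative estimate of rack-level CPU power draw (all inputs are assumptions).
def rack_cpu_power_kw(cpu_tdp_w, sockets_per_node=2, nodes_per_rack=40):
    """Return the combined CPU TDP per rack in kilowatts."""
    return cpu_tdp_w * sockets_per_node * nodes_per_rack / 1000

# Previous-gen Xeon Scalable topped out around 350-385 W; Xeon 6 pushes the ceiling to 500 W.
for tdp in (350, 385, 500):
    print(f"{tdp} W CPUs -> ~{rack_cpu_power_kw(tdp):.0f} kW of CPU TDP in a 40-node rack")
```

Even before memory, storage, networking, and cooling overheads are counted, the jump from roughly 28 kW to 40 kW of CPU TDP in this hypothetical rack shows why power delivery and cooling designs will need revisiting.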

Intel Xeon Scalable Gets a Rebrand: Intel "Xeon 6" with Granite Rapids and Sierra Forest Start a New Naming Scheme

During the Vision 2024 event, Intel announced that its upcoming Xeon processors will be branded under the new "Xeon 6" moniker. This rebranding effort aims to simplify the company's product stack, replacing the previous "x Generation Xeon Scalable" convention and aligning with the recent changes made to its consumer CPU naming scheme. The highly anticipated Sierra Forest and Granite Rapids chips will be the first processors to bear the Xeon 6 branding, and they are set to launch in the coming months. Intel has confirmed that Sierra Forest, designed entirely with efficiency cores (E-cores), remains on track for release this quarter, and Supermicro has already announced early availability and remote testing programs for these chips. Sierra Forest is set to deliver a substantial leap in performance: according to the company, it will offer a 2.4X improvement in performance per watt and a staggering 2.7X better performance per rack compared to the previous generation. This means that 72 Sierra Forest server racks will provide the same performance as 200 racks equipped with older second-gen Xeon CPUs, leading to significant power savings and a boost in overall efficiency for data centers upgrading their systems.
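
As a back-of-the-envelope check (not an Intel calculation), the quoted 2.7X per-rack gain is consistent with the 200-to-72 rack consolidation claim:

```python
# Sanity check: 200 racks of 2nd Gen Xeon divided by a 2.7x per-rack gain.
old_racks = 200
per_rack_gain = 2.7
print(f"{old_racks} / {per_rack_gain} ~= {old_racks / per_rack_gain:.0f} racks")  # ~74, close to the quoted 72
```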

Intel has also teased an exciting feature in its forthcoming Granite Rapids processors: support for the MXFP4 data format. This new precision format, backed by the Open Compute Project (OCP) and major industry players like NVIDIA, AMD, and Arm, promises a significant performance uplift, reducing next-token latency by up to 6.5X compared to fourth-gen Xeons using FP16. Additionally, Intel stated that Granite Rapids will be capable of running 70-billion-parameter Llama-2 models, a capability that could open up new possibilities in data processing; the company claims such a model, quantized to 4-bit, runs entirely on Xeon with a latency of just 86 milliseconds. While Sierra Forest is slated for this quarter, Intel has not provided a specific launch timeline for Granite Rapids, stating only that it will arrive "soon after" its E-core counterpart. The Xeon 6 branding aims to simplify the product stack and clarify performance tiers for customers as the company gears up for these major releases.
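
Assuming the 86-millisecond figure refers to next-token latency (Intel's wording is not explicit), it converts to roughly the following generation rate; this is a simple back-of-the-envelope conversion, not a published benchmark:

```python
# Convert a claimed 86 ms next-token latency into tokens per second.
next_token_latency_ms = 86
print(f"~{1000 / next_token_latency_ms:.1f} tokens/s")  # ~11.6 tokens per second
```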

Intel Xeon "Granite Rapids-SP" 80-core Engineering Sample Leaked

A CPU-Z screenshot has been shared by YuuKi_AnS—the image contains details about an alleged next-gen Intel Xeon Scalable processor engineering sample (ES). The hardware tipster noted in yesterday's post that an error had occurred in the application's identification of this chunk of prototype silicon. CPU-Z v2.09 has recognized the basics: an Intel Granite Rapids-SP processor specced with 80 cores, a 2.5 GHz max frequency, a whopping 672 MB of L3 cache, and a max TDP rating of 350 W. The reported count of 320 threads appears to be CPU-Z's big mistake here; with Hyper-Threading, an 80-core part would be expected to present 160 threads, and previous Granite Rapids leaks have given no indication of anything beyond two threads per core.

The alleged prototype status of this Xeon chip is very apparent in CPU-Z's tracking of single and multi-core performance—the benchmark results are well off the mark when compared to finalized current-gen scores (produced by rival silicon). Team Blue's next-gen Xeon series is positioned to catch up with AMD EPYC's large core counts; "Granite Rapids" has been linked to the Intel 3 foundry node, and reports from last month suggest that XCC-type processors could be configured with "counts going up to 56-core/112-threads." Micron is prepping next-gen "Tall Form Factor" memory modules, designed with future enterprise processor platforms in mind, including Intel's Xeon Scalable "Granite Rapids" family. Industry watchers posit that Team Blue will launch this series in the coming months.

MiTAC Unleashes Revolutionary Server Solutions, Powering Ahead with 5th Gen Intel Xeon Scalable Processors Accelerated by Intel Data Center GPUs

MiTAC Computing Technology, a subsidiary of MiTAC Holdings Corp., proudly reveals its groundbreaking suite of server solutions that deliver unsurpassed capabilities with the 5th Gen Intel Xeon Scalable Processors. MiTAC introduces cutting-edge signature platforms that seamlessly integrate Intel Data Center GPUs, both the Intel Max Series and the Intel Flex Series, unleashing an unparalleled leap in computing performance targeting HPC and AI applications.

MiTAC Announces its Full Array of Platforms Supporting the Latest 5th Gen Intel Xeon Scalable Processors
Last year, Intel transitioned the right to manufacture and sell products based on Intel Data Center Solution Group designs to MiTAC. MiTAC confidently announces a transformative upgrade to its product offerings, unveiling advanced platforms that epitomize the future of computing. Featuring up to 64 cores, expanded shared cache, increased UPI speeds and DDR5 support, the latest 5th Gen Intel Xeon Scalable Processors deliver remarkable performance-per-watt gains across various workloads. MiTAC's Intel Server M50FCP and Intel Server D50DNP families fully support the latest 5th Gen Intel Xeon Scalable Processors, made possible through a quick BIOS update and straightforward technical resource revisions, providing unsurpassed performance to diverse computing environments.

IBM Introduces LinuxONE 4 Express, a Value-oriented Hybrid Cloud & AI Platform

IBM has announced IBM LinuxONE 4 Express, extending the latest performance, security and AI capabilities of LinuxONE to small and medium sized businesses and within new data center environments. The pre-configured rack mount system is designed to offer cost savings and to remove client guesswork when spinning up workloads quickly and getting started with the platform to address new and traditional use cases such as digital assets, medical imaging with AI, and workload consolidation.

Building an integrated hybrid cloud strategy for today and years to come
As businesses move their products and services online quickly, they are often left with a hybrid cloud environment created by default, with siloed stacks that are not conducive to alignment across businesses or to the introduction of AI. In a recent IBM IBV survey, 84% of surveyed executives acknowledged that their enterprise struggles to eliminate silo-to-silo handoffs, and 78% said that an inadequate operating model impedes successful adoption of their multicloud platform. With the pressure to accelerate and scale the impact of data and AI across the enterprise, and improve business outcomes, another approach organizations can take is to more carefully identify which workloads should run on-premises versus in the cloud.

6th Gen Intel Xeon "Granite Rapids" CPU L3 Cache Totals 480 MB

Intel has recently updated its Software Development Emulator (now version 9.33.0)—InstLatX64 noted some intriguing cache designations for Fifth Generation Xeon Scalable Processors. The "Emerald Rapids" family was introduced at last December's "AI Everywhere" event—with sample units released soon after for review. Tom's Hardware was impressed by the Platinum 8592+ CPU's tripled L3 Cache (over the previous generation): "(it) contributed significantly to gains in Artificial Intelligence inference, data center, video encoding, and general compute workloads. While AMD EPYC generally remains the player to beat in the enterprise CPU space, Emerald Rapids marks a significant improvement from Intel's side of that battlefield, especially as it pertains to Artificial Intelligence workloads and multi-core performance in general."

Intel's SDE 9.33.0 update confirms 320 MB of L3 cache for "Emerald Rapids," but the next line down provides a major "Granite Rapids" insight: 480 MB of L3 cache, a 1.5x increase over the previous generation. Team Blue's 6th Gen (all P-core) Xeon processor series is expected to launch within the latter half of 2024. The American multinational technology company is evidently keen to take on AMD in the enterprise CPU market segment, although Team Red is already well ahead on L3 cache capacity: EPYC CPUs in Genoa and Genoa-X guises offer maximums of 384 MB and 1152 MB, respectively. Intel's recently launched "Emerald Rapids" server chips are seen as a good match for Team Red's EPYC "Bergamo" options.
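
For quick reference, the L3 capacities cited in this paragraph compare as follows; the snippet only restates the figures above and introduces no new data:

```python
# L3 cache capacities cited above, in MB, relative to Emerald Rapids.
l3_cache_mb = {
    "Xeon 'Emerald Rapids' (5th Gen)": 320,
    "Xeon 'Granite Rapids' (6th Gen)": 480,
    "EPYC 'Genoa'": 384,
    "EPYC 'Genoa-X'": 1152,
}
baseline = l3_cache_mb["Xeon 'Emerald Rapids' (5th Gen)"]
for name, mb in l3_cache_mb.items():
    print(f"{name}: {mb} MB ({mb / baseline:.2f}x)")
```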

GIGABYTE Launches Servers Powered by Intel Xeon E-2400 processors and Shares Updates to Support 5th Gen Intel Xeon Scalable Processors

Giga Computing, a subsidiary of GIGABYTE Technology and an industry leader in high-performance servers and IT infrastructure, is thrilled to present a cutting-edge series of servers optimized for the newly launched Intel Xeon E-2400 processors. These servers deliver essential computing power and dependable operation for a wide range of enterprise and edge computing workloads, all while maintaining an impressive price-to-performance ratio.

"We are thrilled to unveil our latest server product line, which is engineered to deliver unparalleled performance and reliability," said Vincent Wang, sales VP at Giga Computing. "By leveraging the power of the new Intel Xeon E processors, our servers empower businesses to elevate their computational capabilities, enabling them to achieve greater efficiency and productivity. Whether it's for enterprise applications or edge computing tasks, GIGABYTE servers are the cornerstone of innovation in the digital landscape." ⁠

TYAN Upgrades HPC, AI and Data Center Solutions with the Power of 5th Gen Intel Xeon Scalable Processors

TYAN, a leading server platform design manufacturer and a MiTAC Computing Technology Corporation subsidiary, today introduced upgraded server platforms and motherboards based on the brand-new 5th Gen Intel Xeon Scalable Processors, formerly codenamed Emerald Rapids.

The 5th Gen Intel Xeon processor has increased to 64 cores, featuring a larger shared cache, higher UPI and DDR5 memory speeds, as well as 80 lanes of PCIe 5.0. Delivering workload-optimized performance, 5th Gen Intel Xeon provides more compute power and faster memory within the same power envelope as the previous generation. "5th Gen Intel Xeon is the second processor offering inside the 2023 Intel Xeon Scalable platform, offering improved performance and power efficiency to accelerate TCO and operational efficiency," said Eric Kuo, Vice President of Server Infrastructure Business Unit, MiTAC Computing Technology Corporation. "By harnessing the capabilities of Intel's new Xeon CPUs, TYAN's 5th Gen Intel Xeon-supported solutions are designed to handle the intense demands of HPC, data centers, and AI workloads."

TYAN Unveils its Robust Immersion Cooling Solution Delivering Significant PUE Enhancement at SC23

TYAN, an industry leader in server platform design and a subsidiary of MiTAC Computing Technology Corporation, unveils an immersion cooling solution that delivers a significant PUE (Power Usage Effectiveness) enhancement and showcases its latest server platforms powered by 4th Gen Intel Xeon Scalable Processors targeting HPC, AI and cloud computing applications at SC23, Booth #1917.

Significant PUE Enhancement Shown in an Immersion-cooling Tank vs. a Conventional Air-cooled Cabinet
The immersion cooling system demonstrated live at the TYAN booth during SC23 is a 4U hybrid single-phase tank enclosure equipped with four TYAN GC68A-B7136 cloud computing servers. Compared to a conventional air-cooled cabinet, this hybrid immersion cooling system offers a substantial PUE improvement, making it an ideal mission-critical solution for users focused on energy savings and green operation.
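
PUE is defined as total facility power divided by the power drawn by the IT equipment alone. TYAN did not publish concrete figures, so the numbers in the sketch below are purely illustrative assumptions used to show how the metric is computed:

```python
# Power Usage Effectiveness = total facility power / IT equipment power.
def pue(it_power_kw, cooling_and_overhead_kw):
    return (it_power_kw + cooling_and_overhead_kw) / it_power_kw

# Illustrative numbers only -- not TYAN measurements.
print(f"Air-cooled cabinet: PUE ~ {pue(100, 60):.2f}")  # fans and CRAC units add large overhead
print(f"Immersion tank:     PUE ~ {pue(100, 10):.2f}")  # pumps and dry coolers add far less
```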

Intel "Emerald Rapids" 8592+ and 8558U Xeon CPUs with 64C and 48C Configurations Spotted

Intel's next-generation Emerald Rapids Xeon lineup is just around the corner, and more leaks are arriving as the launch nears. Today we get to see two models: the 64-core Xeon Platinum 8592+ and the 48-core Xeon 8558U. First is the Xeon Platinum 8592+, possibly Intel's top-end design with 64 cores and 128 threads. Running at a base frequency of 1.9 GHz, the CPU can boost up to 3.9 GHz. This SKU carries 488 MB of total cache, of which 120 MB is L2 and 320 MB is L3. With a TDP of 350 Watts, the CPU can be configured up to 420 Watts.

Next up is the Xeon 8558U, which has been spotted in Geekbench. The Xeon 8558U is a 48-core, 96-thread CPU with a 2.0 GHz base clock; its boost frequency has yet to be shown or enabled, likely because it is an engineering sample. It carries 96 MB of L2 cache and 260 MB of L3 cache, for a combined 356 MB. Both of these SKUs should launch with the remaining models in the Emerald Rapids family, dubbed 5th Generation Xeon Scalable, on December 14 this year.

Intel, Dell Technologies and University of Cambridge Announce Deployment of Dawn Supercomputer

Dell Technologies, Intel and the University of Cambridge announce the deployment of the co-designed Dawn Phase 1 supercomputer. Leading technical teams built the U.K.'s fastest AI supercomputer that harnesses the power of both artificial intelligence (AI) and high performance computing (HPC) to solve some of the world's most pressing challenges. This sets a clear way forward for future U.K. technology leadership and inward investment into the U.K. technology sector. Dawn kickstarts the recently launched U.K. AI Research Resource (AIRR), which will explore the viability of associated systems and architectures. Dawn brings the U.K. closer to reaching the compute threshold of a quintillion (10^18) floating point operations per second - one exaflop, better known as exascale. For perspective: Every person on earth would have to make calculations 24 hours a day for more than four years to equal a second's worth of processing power in an exascale system.
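
The person-based analogy holds up roughly if you assume one calculation per person per second and a world population of about eight billion (an illustrative check, not part of the announcement):

```python
# Rough check: people x seconds in four years vs. 10^18 operations (one exaflop-second).
world_population = 8e9                      # assumption: ~8 billion people
seconds_in_four_years = 4 * 365 * 24 * 3600
print(f"{world_population * seconds_in_four_years:.2e} calculations")  # ~1.0e+18
```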

"Dawn considerably strengthens the scientific and AI compute capability available in the U.K., and it's on the ground, operational today at the Cambridge Open Zettascale Lab. Dell PowerEdge XE9640 servers offer a no-compromises platform to host the Intel Data Center GPU Max Series accelerator, which opens up the ecosystem to choice through oneAPI. I'm very excited to see the sorts of early science this machine can deliver and continue to strengthen the Open Zettascale Lab partnership between Dell Technologies, Intel and the University of Cambridge, and further broaden that to the U.K. scientific and AI community," said Adam Roe, EMEA HPC technical director at Intel.

Supermicro Starts Shipments of NVIDIA GH200 Grace Hopper Superchip-Based Servers

Supermicro, Inc., a Total IT Solution manufacturer for AI, Cloud, Storage, and 5G/Edge, is announcing one of the industry's broadest portfolios of new GPU systems based on the NVIDIA reference architecture, featuring the latest NVIDIA GH200 Grace Hopper and NVIDIA Grace CPU Superchip. The new modular architecture is designed to standardize AI infrastructure and accelerated computing in compact 1U and 2U form factors while providing ultimate flexibility and expansion ability for current and future GPUs, DPUs, and CPUs. Supermicro's advanced liquid-cooling technology enables very high-density configurations, such as a 1U 2-node configuration with 2 NVIDIA GH200 Grace Hopper Superchips integrated with a high-speed interconnect. Supermicro can deliver thousands of rack-scale AI servers per month from facilities worldwide and ensures Plug-and-Play compatibility.

"Supermicro is a recognized leader in driving today's AI revolution, transforming data centers to deliver the promise of AI to many workloads," said Charles Liang, president and CEO of Supermicro. "It is crucial for us to bring systems that are highly modular, scalable, and universal for rapidly evolving AI technologies. Supermicro's NVIDIA MGX-based solutions show that our building-block strategy enables us to bring the latest systems to market quickly and are the most workload-optimized in the industry. By collaborating with NVIDIA, we are helping accelerate time to market for enterprises to develop new AI-enabled applications, simplifying deployment and reducing environmental impact. The range of new servers incorporates the latest industry technology optimized for AI, including NVIDIA GH200 Grace Hopper Superchips, BlueField, and PCIe 5.0 EDSFF slots."

Fujitsu Details Monaka: 150-core Armv9 CPU for AI and Data Center

Ever since the creation of A64FX for the Fugaku supercomputer, Fujitsu has been plotting its next-generation CPU design for accelerating AI and general-purpose HPC workloads in the data center. Codenamed Monaka, the CPU is slated for TSMC's 2 nm semiconductor manufacturing node. Based on the Armv9-A ISA, the CPU will feature up to 150 cores with Scalable Vector Extensions 2 (SVE2), so it can process a wide variety of vector data sets in parallel. Using a 3D chiplet design, the 150 cores will be split across multiple dies placed alongside SRAM and an I/O controller. The current width of the SVE2 implementation is unknown.

The CPU is designed to support DDR5 memory and PCIe 6.0 connectivity for attaching storage and other accelerators. To bring cache coherency to application-specific accelerators, CXL 3.0 is present as well. Interestingly, Monaka is planned to arrive in Fujitsu's FY2027. The CPU will supposedly use air cooling, meaning the design aims for power efficiency. Additionally, it is essential to note that Monaka is not the processor that will power the post-Fugaku supercomputer; that system will use a post-Monaka design, likely iterating on and refining Monaka's design principles ahead of its scheduled 2030 launch. Below are the slides from Fujitsu's presentation, in Japanese, which highlight the design goals of the CPU.

MiTAC to Showcase Cloud and Datacenter Solutions, Empowering AI at Intel Innovation 2023

Intel Innovation 2023 - September 13, 2023 - MiTAC Computing Technology, a professional IT solution provider and a subsidiary of MiTAC Holdings Corporation, will showcase its DSG (Datacenter Solutions Group) product lineup powered by 4th Gen Intel Xeon Scalable processors for enterprise, cloud and AI workloads at Intel Innovation 2023, booth #H216 in the San Jose McEnery Convention Center, USA, from September 19-20.

"MiTAC has seamlessly and successfully managed the Intel DSG business since July. The datacenter solution product lineup enhances MiTAC's product portfolio and service offerings. Our customers can now enjoy a comprehensive one-stop service, ranging from motherboards and barebones servers to Intel Data Center blocks and complete rack integration for their datacenter infrastructure needs," said Eric Kuo, Vice President of the Server Infrastructure Business Unit at MiTAC Computing Technology.

Supermicro Launches Industry Leading vSAN HCI Solution

Supermicro, Inc., a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, today announced a new VMware vSAN solution optimized to run enterprise class hyperconverged virtualized workloads. As virtualized workloads become more advanced, processing power and storage performance requirements increase, requiring greater capacity to meet application SLAs and maximize virtual machine density. This solution also utilizes the latest Intel AMX accelerator for AI workloads.

Benchmark testing conducted by Supermicro showed up to 4.7X higher I/O throughput and 8.2X lower latency on the HCIBench benchmark compared to the Supermicro X11 BigTwin, along with up to 4.9X faster image classification inference throughput on the ResNet50 model and up to 4X faster natural language processing throughput on the BERT-Large model. In addition, the superior power efficiency of the Supermicro X13 BigTwin architecture can deliver up to 3X better cost and performance within the same node footprint compared to a similar deployment based on older-generation Supermicro systems, creating a compelling case for organizations to upgrade their aging infrastructure.

MaxLinear Announces Production Availability of Panther III Storage Accelerator OCP Adapter Card

MaxLinear, Inc., a leader in data storage accelerator solutions, today announced the production release of the OCP 3.0 storage accelerator adapter card for Panther III. The ultra-low-latency accelerator is designed to speed up key storage workloads, including database acceleration, storage offload, encryption, compression, and deduplication for maximum data reduction. The Panther III OCP card is ideal for use in modern data centers, from public and edge clouds to enterprise data centers and telecommunications infrastructure, allowing users to access, process, and transfer data up to 12 times faster than without a storage accelerator. The OCP version of the card is available immediately, with a PCIe version to follow in Q3 2023.
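
To illustrate the kind of data-reduction pipeline such an accelerator offloads (deduplication followed by compression), here is a minimal software sketch of the concept; it is not MaxLinear's implementation, API, or algorithm, and the chunk size is an arbitrary assumption.

```python
# Conceptual data-reduction pipeline: split into chunks, drop duplicate chunks by hash,
# then compress the unique chunks. Panther III performs comparable steps in hardware.
import hashlib
import zlib

def reduce_data(data: bytes, chunk_size: int = 4096) -> tuple[int, int]:
    """Return (original_size, reduced_size) after deduplication and compression."""
    seen = set()
    reduced = 0
    for offset in range(0, len(data), chunk_size):
        chunk = data[offset:offset + chunk_size]
        digest = hashlib.sha256(chunk).digest()
        if digest in seen:          # duplicate chunk: a real system stores only a reference
            continue
        seen.add(digest)
        reduced += len(zlib.compress(chunk))
    return len(data), reduced

original, reduced = reduce_data(b"example block of repetitive data " * 10000)
print(f"{original} bytes -> {reduced} bytes")
```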

"In an era where the amount of data generated exceeds new storage installations by multiple fold, Panther III helps reduce the massive storage gap while improving TCO per bit stored," said Dylan Patel, Chief Analyst at SemiAnalysis.

IBM Launches AI-informed Cloud Carbon Calculator

IBM has launched a new tool to help enterprises track greenhouse gas (GHG) emissions across cloud services and advance their sustainability performance throughout their hybrid, multicloud journeys. Now generally available, the IBM Cloud Carbon Calculator - an AI-informed dashboard - can help clients access emissions data across a variety of IBM Cloud workloads such as AI, high performance computing (HPC) and financial services.

Across industries, enterprises are embracing modernization by leveraging hybrid cloud and AI to digitally transform with resiliency, performance, security, and compliance at the forefront, all while remaining focused on delivering value and driving more sustainable business practices. According to a recent study by IBM, 42% of CEOs surveyed pinpoint environmental sustainability as their top challenge over the next three years. At the same time, the study reports that CEOs are facing pressure to adopt generative AI while also weighing the data management needs to make AI successful. The increase in data processing required for AI workloads can present new challenges for organizations that are looking to reduce their GHG emissions. With more than 43% of CEOs surveyed already using generative AI to inform strategic decisions, organizations should prepare to balance executing high performance workloads with sustainability.

AIC Launches HA401-TU, a New High-availability Server Model

AIC has launched the new high-availability storage server HA401-TU, which is optimized for mission-critical, enterprise-level storage applications. This cluster-in-a-box solution features an active-active failover design that eliminates single points of failure. The HA401-TU is a 4U high-availability (HA) server with two controller nodes and support for 24 3.5" SAS 12 Gb/s drives. Each controller node is equipped with an AIC Tucana server board powered by dual 3rd Gen Intel Xeon Scalable processors and the Intel C621A chipset, supporting UPI speeds up to 11.2 GT/s. The HA401-TU provides enterprise users with a number of crucial benefits: its redundant hardware components ensure that there is no single point of failure.

Thanks to hot-swappable controller canisters, enterprises are protected from the loss of revenue that can occur when access to mission-critical data or applications is disrupted. Both controller nodes process data input/output (I/O) operations, and users experience simultaneous, balanced access to logical devices. In the event of a failover, the secondary node automatically takes over the devices, client connections, and all processes and services running on the system. This high-availability design significantly enhances overall cluster performance, enabling seamless handling of demanding workloads.
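
Conceptually, the failover path described above comes down to each controller serving I/O while watching its peer's heartbeat and importing the peer's resources if it goes silent. The sketch below is a generic, heavily simplified illustration under those assumptions, not AIC's firmware:

```python
# Simplified heartbeat-based failover monitor for a two-controller HA pair.
import time

HEARTBEAT_TIMEOUT_S = 5  # illustrative threshold

def monitor_peer(last_heartbeat, take_over):
    """Poll the peer's last heartbeat timestamp; take over its resources on timeout."""
    while True:
        if time.time() - last_heartbeat() > HEARTBEAT_TIMEOUT_S:
            take_over()   # import the peer's logical devices, client connections, and services
            break
        time.sleep(1)
```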

ASUS Unveils ESC N8-E11, an HGX H100 Eight-GPU Server

ASUS today announced ESC N8-E11, its most advanced HGX H100 eight-GPU AI server, along with a comprehensive PCI Express (PCIe) GPU server portfolio—the ESC8000 and ESC4000 series empowered by Intel and AMD platforms to support higher CPU and GPU TDPs to accelerate the development of AI and data science.

ASUS is one of the few HPC solution providers with its own all-dimensional resources that consist of the ASUS server business unit, Taiwan Web Service (TWS) and ASUS Cloud—all part of the ASUS group. This uniquely positions ASUS to deliver in-house AI server design, data-center infrastructure, and AI software-development capabilities, plus a diverse ecosystem of industrial hardware and software partners.