News Posts matching #AI


Huawei Aims to Develop Homegrown HBM Memory Amidst US Sanctions

According to The Information, in a strategic maneuver to circumvent the constraints imposed by US sanctions, Huawei is accelerating efforts to establish domestic production capabilities for High Bandwidth Memory (HBM) within China. This move addresses the limitations that have hampered the company's advancements in AI and high-performance computing (HPC) sectors. HBM technology plays a pivotal role in enhancing the performance of AI and HPC processors by mitigating memory bandwidth bottlenecks. Recognizing its significance, Huawei has assembled a consortium comprising memory manufacturers backed by the Chinese government and prominent semiconductor companies like Fujian Jinhua Integrated Circuit. This consortium is focused on advancing HBM2 memory technology, which is crucial for Huawei's Ascend-series processors for AI applications.

Huawei's initiative comes at a time when the company faces challenges in accessing HBM from external sources, impacting the availability of its AI processors in the market. Despite facing obstacles such as international regulations restricting the sale of advanced chipmaking equipment to China, Huawei's efforts underscore China's broader push for self-sufficiency in critical technologies essential for AI and supercomputing. By investing in domestic HBM production, Huawei aims to secure a stable supply chain for these vital components, reducing reliance on external suppliers. This strategic shift not only demonstrates Huawei's resilience in navigating geopolitical challenges but also highlights China's determination to strengthen its technological independence in the face of external pressures. As the global tech landscape continues to evolve, Huawei's move to develop homegrown HBM memory could have far-reaching implications for China's AI and HPC capabilities, positioning the country as a significant player in the memory field.

TSMC Celebrates 30th North America Technology Symposium with Innovations Powering AI with Silicon Leadership

TSMC today unveiled its newest semiconductor process, advanced packaging, and 3D IC technologies for powering the next generation of AI innovations with silicon leadership at the Company's 2024 North America Technology Symposium. TSMC debuted the TSMC A16 technology, featuring leading nanosheet transistors with an innovative backside power rail solution for production in 2026, bringing greatly improved logic density and performance. TSMC also introduced its System-on-Wafer (TSMC-SoW) technology, an innovative solution to bring revolutionary performance to the wafer level in addressing the future AI requirements for hyperscaler datacenters.

This year marks the 30th anniversary of TSMC's North America Technology Symposium, and more than 2,000 people attended the event, up from fewer than 100 attendees 30 years ago. The North America Technology Symposium in Santa Clara, California, kicks off TSMC Technology Symposiums around the world in the coming months. The symposium also features an "Innovation Zone," designed to highlight the technology achievements of our emerging start-up customers.

Aetina Accelerates Embedded AI with High-performance, Small Form-factor Aetina IA380E-QUFL Graphics Card

Aetina, a leading Edge AI solution provider, announced the launch of the Aetina IA380E-QUFL at Embedded World 2024 in Nuremberg, Germany. This groundbreaking product is a small form factor PCIe graphics card powered by the high-performance Intel Arc A380E GPU.

Unmatched Power in a Compact Design
The Aetina IA380E-QUFL delivers workstation-level performance packed into a low-profile, single-slot form factor. This innovative solution consumes only 50 W, making it ideal for space- and power-constrained edge computing environments. Embedded system manufacturers and integrators can leverage the 4.096 TFLOPS of peak FP32 performance delivered by the Intel Arc A380E GPU.

US Weighs National Security Risks of China's RISC-V Chip Development Involvement

The US government is investigating the potential national security risks associated with China's involvement in the development of open-source RISC-V chip technology. According to a letter obtained by Reuters, the Department of Commerce has informed US lawmakers that it is actively reviewing the implications of China's work in this area. RISC-V, an open instruction set architecture (ISA) created in 2014 at the University of California, Berkeley, offers an alternative to proprietary and licensed ISAs like those developed by Arm. This open-source ISA can be utilized in a wide range of applications, from AI chips and general-purpose CPUs to high-performance computing applications. Major Chinese tech giants, including Alibaba and Huawei, have already embraced RISC-V, positioning it as a new battleground in the ongoing technological rivalry between the United States and China over cutting-edge semiconductor capabilities.

In November, a group of 18 US lawmakers from both chambers of Congress urged the Biden administration to outline its strategy for preventing China from gaining a dominant position in RISC-V technology, expressing concerns about the potential impact on US national and economic security. While acknowledging the need to address potential risks, the Commerce Department noted in its letter that it must proceed cautiously to avoid unintentionally harming American companies actively participating in international RISC-V development groups. Previous attempts to restrict the transfer of 5G technology to China have created obstacles for US firms involved in global standards bodies where China is also a participant, potentially jeopardizing American leadership in the field. As the review process continues, the Commerce Department faces the delicate task of balancing national security interests with the need to maintain the competitiveness of US companies in the rapidly evolving landscape of open-source chip technologies.

Qualcomm Continues to Disrupt the PC Industry with the Addition of Snapdragon X Plus Platform

Qualcomm Technologies, Inc. today expands the leading Snapdragon X Series platform portfolio with Snapdragon X Plus. Snapdragon X Plus features the state-of-the-art Qualcomm Oryon CPU, a custom-integrated processor that delivers up to 37% faster CPU performance compared to competitors, while consuming up to 54% less power. This remarkable advancement in CPU performance sets a new standard in mobile computing, enabling users to accomplish more with greater efficiency. Snapdragon X Plus is also designed to meet the demands of on-device AI-driven applications, powered by the Qualcomm Hexagon NPU capable of 45 TOPS, making it the world's fastest NPU for laptops. This platform is a significant leap in computing innovation and is set to transform the PC industry.

"Snapdragon X Series platforms deliver leading experiences and are positioned to revolutionize the PC industry. Snapdragon X Plus will power AI-Supercharged PCs that enable even more users to excel as radical new AI experiences emerge in this period of rapid development and deployment," said Kedar Kondap, senior vice president and general manager of compute and gaming, Qualcomm Technologies, Inc. "By delivering leading CPU performance, AI capabilities, and power efficiency, we are once again pushing the boundaries of what is possible in mobile computing."

Dynabook Releases Hyperlight 14-inch Portégé X40L-M Laptop with Intel Core Ultra Processors and Powerful AI Integration

Dynabook Americas, Inc., the gold standard for long-lasting, professional-grade laptops, today unveiled the latest generation of its hyperlight 14-inch premium business laptop - the Portégé X40L-M. Now engineered with cutting-edge Intel Core Ultra (Series 1) processors and packing advanced AI capabilities, this powerful laptop redefines productivity, performance, and security for today's on-the-go professionals, while meeting Intel EVO platform and Windows 11 Secured-core PC standards.

"The Portégé X40L-M is a testament to Dynabook's commitment to delivering premium, cutting-edge solutions that empowers professionals to achieve more in their work," said James Robbins, General Manager, Dynabook Americas, Inc. "With the integration of Intel's latest Core Ultra processors, advanced AI capabilities, and seamless Windows 11 with Copilot integration, the Portégé X40L-M sets a new standard for productivity, performance, and innovation in the business laptop market."

AI Demand Drives Rapid Growth in QLC Enterprise SSD Shipments for 2024

North American customers are increasing their orders for storage products as energy efficiency becomes a key priority for AI inference servers. This, in turn, is driving up demand for QLC enterprise SSDs. Currently, only Solidigm and Samsung have certified QLC products, with Solidigm actively promoting its QLC products and standing to benefit the most from this surge in demand. TrendForce predicts shipments of QLC enterprise SSD bits to reach 30 exabytes in 2024—increasing fourfold in volume from 2023.

TrendForce identifies two main reasons for the increasing use of QLC SSDs in AI applications: the products' fast read speeds and TCO advantages. AI inference servers primarily perform read operations, which occur less frequently than the data writing required by AI training servers. In comparison to HDDs, QLC enterprise SSDs offer superior read speeds and have capacities that have expanded up to 64 TB.
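The fourfold growth figure implies a baseline for 2023 that the article does not state directly. A minimal sketch deriving it from the numbers quoted above:

```python
# Back-of-the-envelope check of TrendForce's QLC enterprise SSD forecast.
# Both inputs are taken from the article; the 2023 baseline is derived.

shipments_2024_eb = 30   # exabytes, TrendForce forecast for 2024
growth_factor = 4        # "increasing fourfold in volume from 2023"

implied_2023_eb = shipments_2024_eb / growth_factor
print(f"Implied 2023 QLC enterprise SSD bit shipments: {implied_2023_eb:.1f} EB")
# → Implied 2023 QLC enterprise SSD bit shipments: 7.5 EB
```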

Unreal Engine 5.4 is Now Available With Improvements to Nanite, AI and Machine Learning, TSR, and More

Unreal Engine 5.4 is here, and it's packed with new features and improvements to performance, visual fidelity, and productivity that will benefit game developers and creators across industries. With this release, we're delivering the toolsets we've been using internally to build and ship Fortnite Chapter 5, Rocket Racing, Fortnite Festival, and LEGO Fortnite. Here are some of the highlights.

Animation
Character rigging and animation authoring
This release sees substantial updates to Unreal Engine's built-in animation toolset, enabling you to quickly, easily, and enjoyably rig characters and author animation directly in engine, without the frustrating and time-consuming need to round trip to external applications. With an Experimental new Modular Control Rig feature, you can build animation rigs from understandable modular parts instead of complex granular graphs, while Automatic Retargeting makes it easier to get great results when reusing bipedal character animations. There are also extensions to the Skeletal Editor and a suite of new deformer functions to make the Deformer Graph more accessible.

Velocity Micro Announces New Line of Powerful Linux PCs

Velocity Micro, the premier builder of award-winning enthusiast desktops, laptops, high performance computing solutions, and professional workstations announces the immediate availability of a line of Linux powered PCs preinstalled with Ubuntu 22.04. Systems are designed for a wide range of customers and use cases such as individual enthusiasts, CAD and content creation professionals, machine learning/AI, and enterprise, to bring the flexibility and security of an open source OS platform into the mainstream. Desktops start at $1,429 with Linux laptops starting at $1,199. Shipments begin immediately.

"We've been preinstalling various flavors of Linux on custom server and workstation builds for years, which has really afforded us the opportunity to get hands-on to learn the OS," said Randy Copeland, President and CEO of Velocity Micro. "We're excited to bring what we've learned to market and introduce our customers to Ubuntu Linux."

PC Market Returns to Growth in Q1 2024 with AI PCs to Drive Further 2024 Expansion

Global PC shipments grew around 3% YoY in Q1 2024 after eight consecutive quarters of declines due to demand slowdown and inventory correction, according to the latest data from Counterpoint Research. The shipment growth in Q1 2024 came on a relatively low base in Q1 2023. The coming quarters of 2024 will see sequential shipment growth, resulting in 3% YoY growth for the full year, largely driven by AI PC momentum, shipment recovery across different sectors, and a fresh replacement cycle.

Lenovo's PC shipments were up 8% in Q1 2024 off an easy comparison from last year. The brand managed to reclaim its 24% share in the market, compared to 23% in Q1 2023. HP and Dell, with market shares of 21% and 16% respectively, remained flattish, waiting for North America to drive shipment growth in the coming quarters. Apple's shipment performance was also resilient, with the 2% growth mainly supported by M3 base models.

AMD "Strix Halo" Zen 5 Mobile Processor Pictured: Chiplet-based, Uses 256-bit LPDDR5X

Enthusiasts on the ChipHell forum scored an alleged image of AMD's upcoming "Strix Halo" mobile processor, and set out to create some highly plausible, though speculative, schematic slides. While "Strix Point" is the mobile processor that succeeds the current "Hawk Point" and "Phoenix" processors, "Strix Halo" is in a category of its own—offering gaming experiences comparable to discrete GPUs in the ultraportable form-factor, where powerful discrete GPUs are generally not possible. "Strix Halo" also goes head-on against Apple's M3 Max and M3 Pro processors powering the latest crop of MacBook Pros, and as a single-chip solution it has the same advantages as the M3 Max.

The "Strix Halo" silicon is a chiplet-based processor, although very different from "Fire Range". The "Fire Range" processor is essentially a BGA version of the desktop "Granite Ridge" processor—it's the same combination of one or two "Zen 5" CCDs talking to a client I/O die, and is meant for performance-through-enthusiast segment notebooks. "Strix Halo," on the other hand, uses the same one or two "Zen 5" CCDs, but pairs them with a large SoC die featuring an oversized iGPU and 256-bit LPDDR5X memory controllers not found on the cIOD. This is key to what AMD is trying to achieve—CPU and graphics performance in the league of the M3 Pro and M3 Max at comparable PCB and power footprints.
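A 256-bit LPDDR5X interface is the headline feature here, and its peak bandwidth is easy to estimate. The bus width comes from the report; the 8533 MT/s data rate is an assumption (a common LPDDR5X speed grade), not a confirmed spec:

```python
# Rough peak-bandwidth estimate for a 256-bit LPDDR5X memory interface.
# bus width: from the article; data rate: assumed speed grade, not confirmed.

bus_width_bits = 256
data_rate_mts = 8533  # assumed LPDDR5X transfer rate, MT/s

# bytes per transfer across the bus, times transfers per second
bandwidth_gbs = bus_width_bits / 8 * data_rate_mts / 1000
print(f"Peak bandwidth: ~{bandwidth_gbs:.0f} GB/s")
# → Peak bandwidth: ~273 GB/s
```

At a lower 7500 MT/s grade the same formula gives 240 GB/s, so the exact figure hinges on the speed grade AMD ships.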

Acer Expands Chromebook Plus Laptop Lineup with New 14-Inch Model Powered by Intel Core Processors

Acer today expanded its line of Chromebook Plus laptops with the Acer Chromebook Plus 514 (CB514-4H/T), providing users with a performance-minded, compact and durable model that enables them to do more with the AI-powered capabilities of ChromeOS. "The new Acer Chromebook Plus 514 (CB514-4H/T) delivers the sought-after combination of a portable design, 14-inch Full HD display and performance-minded technology that lets users get the most out of exciting capabilities offered with Chromebook Plus," said James Lin, General Manager, Notebooks, Acer Inc. "Students, businesses, families, and individuals need to be more productive, connected and empowered than ever, and can achieve this using Acer Chromebook Plus devices."

The new Acer Chromebook Plus 514 is the latest addition to Acer's lineup of Chromebook Plus laptops that offer enhanced Chromebook performance and experiences, emphasizing better hardware designs with upgraded displays and cameras paired with powerful productivity, creativity, and multimedia capabilities. Like all Acer Chromebook Plus laptops, users have the power to do more with the new Chromebook Plus 514 (CB514-4H/T). Powered by an Intel Core i3-N305 processor and an ample 8 GB of LPDDR5 RAM, the Acer Chromebook Plus 514 provides 2x the speed, memory, and storage, giving responsive performance and efficient multitasking, whether running built-in AI-powered apps like Google Docs and Photos, watching favorite shows in full HD on a 1080p display, or movie-making with LumaFusion. Plus, the processor ensures all-day enjoyment with up to 11 hours of usage on the fast-charging battery.

Intel Builds World's Largest Neuromorphic System to Enable More Sustainable AI

Today, Intel announced that it has built the world's largest neuromorphic system. Code-named Hala Point, this large-scale neuromorphic system, initially deployed at Sandia National Laboratories, uses Intel's Loihi 2 processor and aims to support research into future brain-inspired artificial intelligence (AI) while tackling challenges related to the efficiency and sustainability of today's AI. Hala Point improves on Intel's first-generation large-scale research system, Pohoiki Springs, with architectural advances that achieve over 10 times more neuron capacity and up to 12 times higher performance.

"The computing cost of today's AI models is rising at unsustainable rates. The industry needs fundamentally new approaches capable of scaling. For that reason, we developed Hala Point, which combines deep learning efficiency with novel brain-inspired learning and optimization capabilities. We hope that research with Hala Point will advance the efficiency and adaptability of large-scale AI technology." -Mike Davies, director of the Neuromorphic Computing Lab at Intel Labs

Lenovo Prepares Thinkpad T14s and Yoga Slim 14 Laptops with Qualcomm Snapdragon X Processor

Lenovo is putting the finishing touches on the Yoga Slim 7 14 2024, one of the first non-reference laptops to feature Qualcomm's latest Snapdragon X processor. Leaked images circulating on X (formerly Twitter) reveal a sleek and stylish design, with a 14-inch or 14.5-inch display encased in a slim and portable form factor. Qualcomm has previously showcased eye-catching demo reference systems in a striking red color scheme, but the Yoga Slim 7 14 2024 marks the first time a major laptop manufacturer has been seen with actual product images incorporating the Snapdragon X chip. The Yoga Slim 7 14 2024 is part of Lenovo's popular Slim laptop lineup, which includes models powered by Intel and AMD processors; the previous "Gen 8" iteration featured options for AMD Ryzen 7040 series and Intel 13th Gen Core i and Core Ultra series CPUs.

One notable addition to the Snapdragon X-powered model is the inclusion of a dedicated Microsoft Copilot button. Qualcomm has heavily touted the Snapdragon X's Neural Processing Unit (NPU) performance and its ability to directly accelerate various AI and machine learning algorithms on the device. There have been a few comparison points between Meteor Lake with Intel's NPU and Snapdragon X Elite with Qualcomm's NPU. The chipmaker's X Elite and X Plus laptop offerings are expected to arrive soon, and there are strong indications that this may happen during the Computex trade show.

Update 17:28 UTC: X user WalkingCat has posted images of Lenovo Thinkpad T14s laptop, which can be seen below.

NVIDIA Launches the RTX A400 and A1000 Professional Graphics Cards

AI integration across design and productivity applications is becoming the new standard, fueling demand for advanced computing performance. This means professionals and creatives will need to tap into increased compute power, regardless of the scale, complexity or scope of their projects. To meet this growing need, NVIDIA is expanding its RTX professional graphics offerings with two new NVIDIA Ampere architecture-based GPUs for desktops: the NVIDIA RTX A400 and NVIDIA RTX A1000.

They expand access to AI and ray tracing technology, equipping professionals with the tools they need to transform their daily workflows. The RTX A400 GPU introduces accelerated ray tracing and AI to the RTX 400 series GPUs. With 24 Tensor Cores for AI processing, it surpasses traditional CPU-based solutions, enabling professionals to run cutting-edge AI applications, such as intelligent chatbots and copilots, directly on their desktops. The GPU delivers real-time ray tracing, so creators can build vivid, physically accurate 3D renders that push the boundaries of creativity and realism.

AMD Expands Commercial AI PC Portfolio to Deliver Leadership Performance Across Professional Mobile and Desktop Systems

Today, AMD announced new products that will expand its commercial mobile and desktop AI PC portfolio, delivering exceptional productivity and premium AI and connectivity experiences to business users. The new AMD Ryzen PRO 8040 Series are the most advanced x86 processors built for business laptops and mobile workstations. In addition, AMD also announced the AMD Ryzen PRO 8000 Series desktop processor, the first AI-enabled desktop processor for business users, engineered to deliver cutting-edge performance with low power consumption.

With AMD Ryzen AI built into select models, AMD is further extending its AI PC leadership. By leveraging the CPU, GPU, and dedicated on-chip neural processing unit (NPU), new Ryzen AI-powered processors provide more dedicated AI processing power than previous generations, with up to 16 dedicated NPU TOPS (Trillions of Operations Per Second) and up to 39 total system TOPS. Commercial PCs equipped with new Ryzen AI-enabled processors will help transform user experience, offering next-gen performance for AI-enabled collaboration, content creation, and data and analytics workloads. With the addition of AMD PRO technologies, IT managers can unlock enterprise-grade manageability features to simplify IT operations and complete PC deployment faster across the organization, built-in security features for chip-to-cloud defense from sophisticated attacks, as well as unprecedented stability, reliability and platform longevity for enterprise software.
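The quoted TOPS figures imply how the AI throughput splits across the three engines. A minimal sketch, using only the numbers in the announcement (the CPU+GPU share is derived, not stated by AMD):

```python
# Splitting AMD's quoted AI throughput for Ryzen AI-enabled processors.
# Both inputs are from the announcement; the CPU+GPU share is derived.

total_system_tops = 39  # "up to 39 total system TOPS"
npu_tops = 16           # "up to 16 dedicated NPU TOPS"

cpu_gpu_tops = total_system_tops - npu_tops
print(f"CPU and GPU together contribute up to {cpu_gpu_tops} TOPS")
# → CPU and GPU together contribute up to 23 TOPS
```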

Lenovo Embraces the AI PC Era with New ThinkCentre Desktops Powered by AMD Ryzen PRO 8000 Series Desktop Processors

Lenovo has unveiled a selection of ThinkCentre desktops powered by AMD Ryzen PRO 8000 Series desktop processors with up to 16 TOPS (trillion operations per second) of integrated NPU capability dedicated to processing AI workloads, including the performance-focused ThinkCentre M75t Gen 5, the flexible ThinkCentre M75s Gen 5, and the compact ThinkCentre M75q Gen 5. Designed to meet the diverse needs of modern business, the ThinkCentre M75 Gen 5 family of desktops harnesses the AI capability of its components while optimizing energy efficiency to deliver impressive results.

"The AI PC era is already here and at Lenovo we are embracing it to unlock new possibilities," said Sanjeev Menon, vice president and general manager, Worldwide Desktop Business in Intelligent Devices Group, Lenovo. "The need for businesses to integrate AI into their operations continues to grow and our ThinkCentre M75 family of desktops, with a strong and stable power supply, the ability to upgrade components when needed, and the space to expand memory and optimize thermal management are the ideal options to enhance productivity with AI without heavy investments. Lenovo and AMD have a long-standing partnership focused on delivering value to our customers and we know users will be delighted by the leap in performance of our new desktops."

Demand for NVIDIA's Blackwell Platform Expected to Boost TSMC's CoWoS Total Capacity by Over 150% in 2024

NVIDIA's next-gen Blackwell platform, which includes B-series GPUs and integrates NVIDIA's own Grace Arm CPU in models such as the GB200, represents a significant development. TrendForce points out that the GB200 and its predecessor, the GH200, both feature a combined CPU+GPU solution, primarily equipped with the NVIDIA Grace CPU and H200 GPU. However, the GH200 accounted for only approximately 5% of NVIDIA's high-end GPU shipments. The supply chain has high expectations for the GB200, with projections suggesting that its shipments could exceed millions of units by 2025, potentially making up nearly 40 to 50% of NVIDIA's high-end GPU market.

Although NVIDIA plans to launch products such as the GB200 and B100 in the second half of this year, upstream wafer packaging will need to adopt more complex and high-precision CoWoS-L technology, making the validation and testing process time-consuming. Additionally, more time will be required to optimize the B-series for AI server systems in aspects such as network communication and cooling performance. It is anticipated that the GB200 and B100 products will not see significant production volumes until 4Q24 or 1Q25.

Sony PlayStation 5 Pro Specifications Confirmed, Console Arrives Before Holidays

Thanks to detailed information obtained by The Verge, we can today confirm previously leaked details as Sony gears up to unveil the highly anticipated PlayStation 5 Pro, codenamed "Trinity." According to insider reports, Sony is urging developers to optimize their games for the PS5 Pro, with a primary focus on enhancing ray tracing capabilities. The console is expected to feature an RDNA 3 GPU with 30 WGP running BVH8, capable of 33.5 TeraFLOPS of FP32 single-precision computing power, and a slightly quicker CPU running at 3.85 GHz, enabling it to render games with ray tracing enabled or achieve higher resolutions and frame rates in select titles. Sony anticipates GPU rendering on the PS5 Pro to be approximately 45 percent faster than on the standard PlayStation 5. The PS5 Pro GPU will be larger and utilize faster system memory to bolster ray tracing performance, boasting up to three times the speed of the regular PS5.

Additionally, the console will employ a more powerful ray tracing architecture, backed by PlayStation Spectral Super Resolution (PSSR), allowing developers to leverage graphics features like ray tracing more extensively. To support this endeavor, Sony is providing developers with test kits, and all games submitted for certification from August onward must be compatible with the PS5 Pro. Insider Gaming, the first to report the full PS5 Pro specs, suggests a potential release during the 2024 holiday period. The PS5 Pro will also feature modifications for developers regarding system memory, with Sony increasing the memory bandwidth from 448 GB/s to 576 GB/s, enhancing efficiency for an even more immersive gaming experience. For AI processing, there is a custom AI accelerator capable of 300 8-bit INT8 TOPS and 67 16-bit FP16 TeraFLOPS, in addition to an ACV audio codec running up to 35% faster.
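The memory upgrade is easy to put in percentage terms. A quick check using the two bandwidth figures quoted above:

```python
# Memory-bandwidth uplift of the PS5 Pro over the standard PS5,
# using the figures quoted in the report.

ps5_bw_gbs = 448      # standard PS5 memory bandwidth
ps5_pro_bw_gbs = 576  # reported PS5 Pro memory bandwidth

uplift_pct = (ps5_pro_bw_gbs / ps5_bw_gbs - 1) * 100
print(f"Bandwidth uplift: ~{uplift_pct:.1f}%")
# → Bandwidth uplift: ~28.6%
```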

Apple Preparing M4 Chips with AI Capabilities to Fight Declining Mac Sales

While seemingly everyone has been focused on shipping AI-enhanced products recently, one tech giant didn't appear to be bothered: Apple. However, according to Mark Gurman from Bloomberg, Apple is readying an overhaul of its Apple Silicon M-series chips to embed AI processing capabilities at the processor level. As the report indicates, Apple is preparing an update for late 2024 and early 2025 with the M4 series of chips, which will reportedly feature AI processing units similar to those found in other commercial chips. There should be three levels of the M4 series, with the entry-level M4 codenamed Donan, the mid-level M4 chip codenamed Brava, and the high-end M4 chip codenamed Hydra.

Sales of Apple Macs peaked in 2022; they declined sharply the following year and have remained flat since. The new AI PCs for Windows-based systems have been generating hype from all major vendors, which hope to introduce AI features to end users. Apple wants to be part of the revolution, and the company has already scheduled its World Wide Developer Conference for June 10th. At WWDC this year, Apple is expected to show a suite of AI-powered solutions to enable a better user experience and increase productivity. With M4 chips gaining AI enhancements, the WWDC announcements will get extra hardware acceleration. However, we must wait for the exact announcements before making further assumptions.

Meta Announces New MTIA AI Accelerator with Improved Performance to Ease NVIDIA's Grip

Meta has announced the next generation of its Meta Training and Inference Accelerator (MTIA) chip, which is designed to train and run inference on AI models at scale. The newest MTIA chip is a second-generation design of Meta's custom silicon for AI, and it is being built on TSMC's 5 nm technology. Running at a frequency of 1.35 GHz, the new chip's TDP rises to 90 Watts per package, compared to just 25 Watts for the first-generation design. Basic Linear Algebra Subprograms (BLAS) processing is where the chip shines, including matrix multiplication and vector/SIMD processing. At GEMM matrix processing, each chip delivers 708 TeraFLOPS at INT8 (presumably FP8 in the spec) with sparsity, 354 TeraFLOPS without, 354 TeraFLOPS at FP16/BF16 with sparsity, and 177 TeraFLOPS without.

Classical vector/SIMD processing is a bit slower at 11.06 TeraFLOPS at INT8 (FP8), 5.53 TeraFLOPS at FP16/BF16, and 2.76 TFLOPS at single-precision FP32. The MTIA chip is specifically designed to run AI training and inference on Meta's PyTorch AI framework, with an open-source Triton backend that produces compiler code for optimal performance. Meta uses this for all its Llama models, and with Llama3 just around the corner, it could be trained on these chips. To package it into a system, Meta puts two of these chips onto a board and pairs them with 128 GB of LPDDR5 memory. The board is connected via PCIe Gen 5 to a system where 12 boards are stacked densely. This is repeated six times in a single rack, for 72 boards and 144 chips in total, yielding 101.95 PetaFLOPS, assuming linear scaling at INT8 (FP8) precision. Of course, linear scaling is not quite possible in scale-out systems, which could bring it down to under 100 PetaFLOPS per rack.
Below, you can see images of the chip floorplan, specifications compared to the prior version, as well as the system.
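The rack-level figure follows directly from the per-chip numbers. A minimal sketch reproducing the arithmetic, with all inputs taken from the article and linear scaling assumed:

```python
# Verifying the rack-level throughput figure for Meta's second-gen MTIA.
# All inputs are from the article; scaling is assumed to be linear.

chips_per_board = 2
boards_per_system = 12
systems_per_rack = 6
tflops_per_chip = 708  # INT8 (FP8) with sparsity, per chip

chips_per_rack = chips_per_board * boards_per_system * systems_per_rack
rack_pflops = chips_per_rack * tflops_per_chip / 1000
print(f"{chips_per_rack} chips -> {rack_pflops:.2f} PetaFLOPS per rack")
# → 144 chips -> 101.95 PetaFLOPS per rack
```

Interconnect and software overheads mean real scale-out efficiency falls short of this ceiling, which is why the article pegs practical throughput at under 100 PetaFLOPS per rack.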

ASUS IoT Announces PE8000G

ASUS IoT, the global AIoT solution provider, today announced PE8000G at Embedded World 2024, a powerful edge AI computer that supports multiple GPU cards for high performance—and is expertly engineered to handle rugged conditions with resistance to extreme temperatures, vibration and variable voltage. PE8000G is powered by formidable Intel Core processors (13th and 12th gen) and the Intel R680E chipset to deliver high-octane processing power and efficiency.

With its advanced architecture, PE8000G excels at running multiple neural network modules simultaneously in real-time—and represents a significant leap forward in edge AI computing. With its robust design, exceptional performance and wide range of features, PE8000G series is poised to revolutionize AI-driven applications across multiple industries, elevating edge AI computing to new heights and enabling organizations to tackle mission-critical tasks with confidence and to achieve unprecedented levels of productivity and innovation.

Report: Global PC Shipments Return to Growth and Pre-Pandemic Volumes in the First Quarter of 2024

After two years of decline, the worldwide traditional PC market returned to growth during the first quarter of 2024 (1Q24) with 59.8 million shipments, growing 1.5% year over year, according to preliminary results from the International Data Corporation (IDC) Worldwide Quarterly Personal Computing Device Tracker. Growth was largely achieved due to easy year-over-year comparisons as the market declined 28.7% during the first quarter of 2023, which was the lowest point in PC history. In addition, global PC shipments finally returned to pre-pandemic levels as 1Q24 volumes rivaled those seen in 1Q19 when 60.5 million units were shipped.
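IDC's growth rates let us reconstruct the prior-year quarters that the report references. A minimal sketch, where the 1Q24 volume and the two rates are from the article and the earlier quarters are derived:

```python
# Reconstructing prior-year PC shipment volumes from IDC's growth figures.
# 1Q24 volume and the two rates are from the article; earlier quarters derived.

q1_2024_m = 59.8        # million units shipped, 1Q24
yoy_growth = 0.015      # 1Q24 grew 1.5% vs 1Q23
q1_2023_decline = 0.287 # 1Q23 fell 28.7% vs 1Q22

q1_2023_m = q1_2024_m / (1 + yoy_growth)
q1_2022_m = q1_2023_m / (1 - q1_2023_decline)
print(f"Implied 1Q23: {q1_2023_m:.1f}M units, implied 1Q22: {q1_2022_m:.1f}M units")
# → Implied 1Q23: 58.9M units, implied 1Q22: 82.6M units
```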

With inflation numbers trending down, PC shipments have begun to recover in most regions, leading to growth in the Americas as well as Europe, the Middle East, and Africa (EMEA). However, the deflationary pressures in China directly impacted the global PC market. As the largest consumer of desktop PCs, weak demand in China led to yet another quarter of declines for global desktop shipments, which already faced pressure from notebooks as the preferred form factor.

Google Launches Axion Arm-based CPU for Data Center and Cloud

Google has officially joined the club of custom Arm-based, in-house-developed CPUs. As of today, Google's in-house semiconductor development team has launched the "Axion" CPU based on Arm instruction set architecture. Using the Arm Neoverse V2 cores, Google claims that the Axion CPU outperforms general-purpose Arm chips by 30% and Intel's processors by a staggering 50% in terms of performance. This custom silicon will fuel various Google Cloud offerings, including Compute Engine, Kubernetes Engine, Dataproc, Dataflow, and Cloud Batch. The Axion CPU, designed from the ground up, will initially support Google's AI-driven services like YouTube ads and Google Earth Engine. According to Mark Lohmeyer, Google Cloud's VP and GM of compute and machine learning infrastructure, Axion will soon be available to cloud customers, enabling them to leverage its performance without overhauling their existing applications.

Google's foray into custom silicon aligns with the strategies of its cloud rivals, Microsoft and Amazon. Microsoft recently unveiled its own AI chip for training large language models and an Arm-based CPU called Cobalt 100 for cloud and AI workloads. Amazon, on the other hand, has been offering Arm-based servers through its custom Graviton CPUs for several years. While Google won't sell these chips directly to customers, it plans to make them available through its cloud services, enabling businesses to rent and leverage their capabilities. As Amin Vahdat, the executive overseeing Google's in-house chip operations, stated, "Becoming a great hardware company is very different from becoming a great cloud company or a great organizer of the world's information."

Micron Debuts World's First Quad-Port SSD to Accelerate Data-Rich Autonomous and AI-Enabled Workloads

Micron Technology, Inc., today announced it is sampling the automotive-grade Micron 4150AT SSD, the world's first quad-port SSD, capable of interfacing with up to four systems on chips (SoCs) to centralize storage for software-defined intelligent vehicles. The Micron 4150AT SSD combines market-leading features such as single-root input/output virtualization (SR-IOV), a PCIe Generation 4 interface and ruggedized automotive design. With these features, the automotive-grade SSD provides the ecosystem with data center-level flexibility and power.

"As storage requirements race to keep up with rich in-vehicle experiences featuring AI and advanced algorithms for higher levels of autonomous safety, this era demands a new paradigm for automotive storage to match," said Michael Basca, Micron vice president of embedded products and systems. "Building on our collaboration with the innovators redefining next-generation automotive architectures, Micron has reimagined storage from the ground up to deliver the world's first quad-port SSD - the Micron 4150AT - which provides the industry flexibility and horsepower to roll out the transformative technologies on the horizon."
Apr 29th, 2024 19:11 EDT
