News Posts matching #next-gen


NVIDIA Introduces NVIDIA RTX 2000 Ada Generation GPU

Generative AI is driving change across industries—and to take advantage of its benefits, businesses must select the right hardware to power their workflows. The new NVIDIA RTX 2000 Ada Generation GPU delivers the latest AI, graphics and compute technology to compact workstations, offering up to 1.5x the performance of the previous-generation RTX A2000 12 GB in professional workflows. From crafting stunning 3D environments to streamlining complex design reviews to refining industrial designs, the card's capabilities pave the way for an AI-accelerated future, empowering professionals to achieve more without compromising on performance or capabilities. Modern multi-application workflows, such as AI-powered tools, multi-display setups and high-resolution content, put significant demands on GPU memory. With 16 GB of memory in the RTX 2000 Ada, professionals can tap the latest technologies and tools to work faster and better with their data.

Powered by NVIDIA RTX technology, the new GPU delivers impressive realism in graphics with NVIDIA DLSS, delivering ultra-high-quality, photorealistic ray-traced images more than 3x faster than before. In addition, the RTX 2000 Ada enables an immersive experience for enterprise virtual-reality workflows, such as for product design and engineering design reviews. With its blend of performance, versatility and AI capabilities, the RTX 2000 Ada helps professionals across industries achieve efficiencies. Architects and urban planners can use it to accelerate visualization workflows and structural analysis, enhancing design precision. Product designers and engineers using industrial PCs can iterate rapidly on product designs with fast, photorealistic rendering and AI-powered generative design. Content creators can edit high-resolution videos and images seamlessly, and use AI for realistic visual effects and content creation assistance. And in vital embedded applications and edge computing, the RTX 2000 Ada can power real-time data processing for medical devices, optimize manufacturing processes with predictive maintenance and enable AI-driven intelligence in retail environments.

Latest AMD Linux Graphics Driver Patches Linked to "RDNA 4"

Phoronix head honcho, Michael Larabel, has noticed another set of interesting updates for AMD Graphics on Linux—albeit in preparation for next generation solutions: "engineers on Monday (February 5) posted a few new patch series for enabling some updated IP (intellectual property) blocks within their open-source AMDGPU Linux kernel graphics driver. This new IP is presumably part of the ongoing hardware enablement work for their next-gen RDNA 4 graphics." Team Red GitHub patches for "GFX12" targets appeared online last November, again highlighted by Larabel's investigative work—AMD engineers appear to be quite determined with their open-source software endeavors, as seen in LLVM-Project notes regarding GFX1200's enablement.

The new "IP block" updates included patches for the enabling ATHUB 4.1, LSDMA 7.0, IH 7.0, and HDP 7.0—presumably for next generation Radeon graphics solutions. Larabel provided a quick breakdown of these components: "ATHUB 4.1 is needed for clock-gating / power management features, LSDMA 7.0 is the latest IP for Light SDMA for general purpose System DMA (SDMA) on the GPU, IH 7.0 for the Interrupt Handler on the GPU, and HDP 7.0 for the Host Data Path support for CPU accessing the GPU device memory via the PCI BAR. As far as code changes, the big chunks of the work are from the auto-generated header files." He believes that AMD's engineers have largely moved on from current generation tasks: "The big version bumps for these IP blocks all the more are likely indicative of these bits being for next-gen RDNA 4 as opposed to further iterating on RDNA3 or similar." The patches could be merged into the upcoming Linux 6.9 release, possibly coinciding with a Radeon RX 8000 series rollout.

Intel Foundry Services Get 18A Order: Arm-based 64-Core Neoverse SoC

Faraday Technology Corporation, a Taiwanese silicon IP designer, has announced plans to develop a new 64-core system-on-chip (SoC) utilizing Intel's most advanced 18A process technology. The Arm-based SoC will integrate Arm Neoverse compute subsystems (CSS) to deliver high performance and efficiency for data centers, infrastructure edge, and 5G networks. This collaboration brings together Faraday, Arm, and Intel Foundry Services. Faraday will leverage its ASIC design and IP solutions expertise to build the SoC. Arm will provide the Neoverse compute subsystem IP to enable scalable computing. Intel Foundry Services will manufacture the chip using its cutting-edge 18A process, which promises best-in-class transistor performance.

The new 64-core SoC will be a key component of Faraday's upcoming SoC evaluation platform. This platform aims to accelerate customer development of data center servers, high-performance computing ASICs, and custom SoCs. The platform will also incorporate interface IPs from the Arm Total Design ecosystem for complete implementation and verification. Both Arm and Intel Foundry Services expressed excitement about working with Faraday on this advanced Arm-based custom silicon project. "We're thrilled to see industry leaders like Faraday and Intel on the cutting edge of Arm-based custom silicon development," said an Arm spokesperson. Intel SVP Stuart Pann said, "We are pleased to work with Faraday in the development of the SoC based on Arm Neoverse CSS utilizing our most competitive Intel 18A process technology." The collaboration represents Faraday's strategic focus on leading-edge technologies to meet evolving application requirements. With its extensive silicon IP portfolio and design capabilities, Faraday wants to deliver innovative solutions and break into next-generation computing design.

Korea Quantum Computing Signs IBM watsonx Deal

IBM has announced (on January 29) that Korea Quantum Computing (KQC) has engaged IBM to offer its most advanced AI software and infrastructure, as well as quantum computing services. KQC's ecosystem of users will have access to IBM's full-stack solution for AI, including watsonx, an AI and data platform to train, tune and deploy advanced AI models and software for enterprises. KQC is also expanding its quantum computing collaboration with IBM. Having operated as an IBM Quantum Innovation Center since 2022, KQC will continue to offer access to IBM's global fleet of utility-scale quantum systems over the cloud. Additionally, IBM and KQC plan to deploy an IBM Quantum System Two on-site at KQC in Busan, South Korea by 2028.

"We are excited to work with KQC to deploy AI and quantum systems to drive innovation across Korean industries. With this engagement, KQC clients will have the ability to train, fine-tune, and deploy advanced AI models, using IBM watsonx and advanced AI infrastructure. Additionally, by having the opportunity to access IBM quantum systems over the cloud, today—and a next-generation quantum system in the coming years—KQC members will be able to combine the power of AI and quantum to develop new applications to address their industries' toughest problems," said Darío Gil, IBM Senior Vice President and Director of Research. This collaboration includes an investment in infrastructure to support the development and deployment of generative AI. Plans for the AI-optimized infrastructure includes advanced GPUs and IBM's Artificial Intelligence Unit (AIU), managed with Red Hat OpenShift to provide a cloud-native environment. Together, the GPU system and AIU combination is being engineered to offer members state-of-the-art hardware to power AI research and business opportunities.

Samsung Showcases B2B Displays with Advanced Connectivity at ISE 2024

Samsung Electronics today at Integrated Systems Europe (ISE) 2024 in Barcelona is showcasing how SmartThings will bolster its B2B displays to shape the future of business connectivity. Samsung's "SmartThings for Business" exhibition emphasizes the new advancements that the cutting-edge internet-of-things (IoT) platform will offer, as well as Samsung's commitment to providing more connected, easy-to-control digital signage across industries. "In a commercial display sector where operational efficiency is key, Samsung digital signage is leveraging SmartThings to deliver next-gen connectivity and features to organizations of all sizes," said SW Yong, President and Head of Visual Display Business at Samsung Electronics. "This further expansion of the SmartThings ecosystem will serve to elevate experiences for customers and partners from a wide variety of industries."

How Businesses Can Leverage Connected Tech Through SmartThings—From the Smart Store to Smart Office
At the event, Samsung is showcasing how SmartThings enables business owners to leverage their digital signage to connect and gain more control of their smart devices across various landscapes. By offering the SmartThings connectivity feature to commercial display products such as Smart Signage and Hotel TVs, users can experience the convenience of hyper-connectivity in their business environments. These changes will include Samsung smart devices, as well as other devices that support the industry's latest IoT specifications, Matter and the Home Connectivity Alliance (HCA). Through the application of SmartThings to various business environments, Samsung contributes to the more efficient management of space and energy by transforming places of business into interconnected smart spaces. These connectivity improvements have been designed to benefit all types of business customers, from small and mid-sized business owners to enterprises. Examples of the smart spaces—including a smart store, smart office and smart hotel—are on display at Samsung's booth at ISE 2024.

Tipster Claims AMD "Kraken Point" APU Configured with Zen 5 & Zen 5c Cores

Everest (@Olrak29_) has kept track of many AMD processor families over the past couple of years—his latest insight provides an early look at the alleged internal makeup of Team Red's "Kraken Point" APU series. The rumor mill has designated these next-gen mobile processors as 2025 follow-ups to the recently launched Ryzen 8040 "Hawk Point" family of mainstream laptop APUs. The tipster's initial social media post only mentioned the presence of both Zen 5 and Zen 5c cores within Kraken Point processors, but he later clarified that a total of eight cores would include four large units and four smaller types. TPU's past coverage of Kraken Point pointed to rumors of an 8-core, 16-thread configuration, but leaked slides (from late 2023) did not mention the integration of efficiency-tuned Zen 5c "Prometheus" cores, along with presumed Zen 5 "Nirvana" cores.

Everest's continuous flow of insider information reveals that "Kraken Point" shares many "Hawk Point" traits—four workgroup processors (WGP) could be present on final retail products, granting eight compute units (8 CUs in total). He responded to a query regarding AMD's choice of integrated graphics technology—the succinct answer being RDNA 3.5. Past leaks allege that XDNA 2 will drive the NPU side of things—offering a performance range of around 45 to 50 TOPS. The Kraken Point APU is believed to be sticking with a safe monolithic die design, manufactured on an unspecified 4 nm process. Team Red is rumored to be in TSMC's order books for all sorts of next generation silicon.

Apple Reportedly in the VVIP Lane for TSMC's 2 Nanometer GAA

A DigiTimes Asia report posits that TSMC is preparing another VVIP foundry lane for Apple Inc.—insiders claim that the Taiwanese foundry giant is in the process of expanding production capacity into next-generation 2 nm territory. This expensive and time-consuming endeavor is only made possible with the reassurance of big customers being added to the foundry's order books. TSMC's 2 nm-class N2, N2P, and N2X process technologies are due in 2025 and beyond (according to recent presentation slides)—these advanced nodes are set to arrive with all sorts of innovations: nanosheet gate-all-around (GAA) transistors, backside power delivery, and super-high-performance metal-insulator-metal (SHPMIM) capacitors. According to a DigiTimes source, "Apple is widely believed to be the initial client to utilize the (next-gen) process."

Apple and NVIDIA were reported to be ahead of many important clients in the queue for TSMC's 3 nm process nodes, so it is not surprising (according to industry rumors) to see old patterns repeat. Apple is expected to update its next-generation iPhone, iPad, and Mac laptop product lines with more advanced Bionic and M-series chipsets in 2025. Last year's roster included a rollout of 3 nm TSMC silicon across Apple A17 Pro and M3 ARM-based processors.

Khronos Publishes Vulkan Roadmap 2024, Highlights Expanded 3D Features

Today, The Khronos Group, an open consortium of industry-leading companies creating advanced interoperability standards, announced the latest roadmap milestone for Vulkan, the cross-platform 3D graphics and compute API. The Vulkan roadmap targets the "immersive graphics" market, made up of mid- to high-end smartphones, tablets, laptops, consoles, and desktop devices. The Vulkan Roadmap 2024 milestone captures a set of capabilities that are expected to be supported in new products for that market, beginning in 2024. The roadmap specification provides a significant increase in functionality for the targeted devices and sets the evolutionary direction of the API, including both new hardware capabilities and improvements to the programming model for Vulkan developers.

Vulkan Roadmap 2024 is the second milestone release on the Vulkan Roadmap. Products that support it must be Vulkan 1.3 conformant and support the extensions and capabilities defined in both the 2022 and 2024 Roadmap specifications. Vulkan roadmap specifications use the Vulkan Profile mechanism to help developers build portable Vulkan applications; roadmap requirements are expressed in machine-readable JSON files, and tooling in the Vulkan SDK auto-generates code that makes it easy for developers to query for and enable profile support in their applications.
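As a rough illustration of the baseline requirement—rather than of the profile tooling itself—the C sketch below checks whether a physical device reports at least Vulkan 1.3, which every Roadmap 2024 product must be conformant to. A real application would normally rely on the auto-generated profiles library shipped with the Vulkan SDK instead of hand-rolled checks like this.

```c
/*
 * Minimal sketch: enumerate physical devices and check that each reports at
 * least Vulkan 1.3, the conformance baseline for Roadmap 2024 products.
 * Profile-specific capability checks are left to the SDK's profiles library.
 */
#include <stdio.h>
#include <vulkan/vulkan.h>

int main(void)
{
    VkApplicationInfo app = { .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
                              .apiVersion = VK_API_VERSION_1_3 };
    VkInstanceCreateInfo ici = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
                                 .pApplicationInfo = &app };
    VkInstance instance;
    if (vkCreateInstance(&ici, NULL, &instance) != VK_SUCCESS)
        return 1;

    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, NULL);
    VkPhysicalDevice devices[8];
    if (count > 8)
        count = 8;
    vkEnumeratePhysicalDevices(instance, &count, devices);

    for (uint32_t i = 0; i < count; ++i) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(devices[i], &props);
        printf("%s: Vulkan %u.%u -> %s\n", props.deviceName,
               VK_API_VERSION_MAJOR(props.apiVersion),
               VK_API_VERSION_MINOR(props.apiVersion),
               props.apiVersion >= VK_API_VERSION_1_3
                   ? "meets the Roadmap 2024 baseline"
                   : "below Vulkan 1.3");
    }

    vkDestroyInstance(instance, NULL);
    return 0;
}
```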

HBM Industry Revenue Could Double by 2025 - Growth Driven by Next-gen AI GPUs Cited

Samsung, SK hynix, and Micron are considered to be the top manufacturing sources of High Bandwidth Memory (HBM)—the HBM3 and HBM3E standards are in increasingly high demand, due to widespread deployment of GPUs and accelerators by generative AI companies. Taiwan's Commercial Times reports that there is an ongoing shortage of HBM components—but this presents a growth opportunity for smaller manufacturers in the region. Naturally, the big-name producers are expected to dive in head first with the development of next-generation models. The aforementioned financial news article cites research conducted by the Gartner group—they predict that the HBM market will hit an all-time high of $4.976 billion (USD) by 2025.

This estimate is almost double that of projected revenues (just over $2 billion) generated by the HBM market in 2023—the explosive growth of generative AI applications has "boosted" demand for the most performant memory standards. The Commercial Times report states that SK Hynix is the current HBM3E leader, with Micron and Samsung trailing behind—industry experts believe that stragglers will need to "expand HBM production capacity" in order to stay competitive. SK Hynix has shacked up with NVIDIA—the GH200 Grace Hopper platform was unveiled last summer; outfitted with the South Korean firm's HBM3e parts. In a similar timeframe, Samsung was named as AMD's preferred supplier of HBM3 packages—as featured within the recently launched Instinct MI300X accelerator. NVIDIA's HBM3E deal with SK Hynix is believed to extend to the internal makeup of Blackwell GB100 data-center GPUs. The HBM4 memory standard is expected to be the next major battleground for the industry's hardest hitters.

Intel's Next-gen Xeon "Clearwater Forest" E-Core CPU Series Spotted in Patch

Intel presented its next generation Xeon "Clearwater Forest" processor family during September's Innovation Event—its roadmap slide included other Birch Stream platform architecture options. Earlier this week, Team Blue's software engineers issued a Linux kernel patch that contains details pertaining to codenamed projects: Sierra Forest, Grand Ridge and the aforementioned Clearwater Forest. All E-Core Xeon "Sierra Forest" processors are expected to launch around the middle of 2024—this deployment of purely efficiency-oriented "Sierra Glen" (Atom Crestmont) cores in enterprise/server chip form will be a first for Intel. The Sierra Forest Xeon range has been delayed a couple of times, but the extra maturation time has allowed a jump from an initial maximum of 144 E-cores up to 288. The latest patch notes provide an early look into Clearwater Forest's basic foundations—it appears to be Sierra Forest's direct successor.

The Intel Xeon "Granite Rapids" processor family is expected to hit retail just after a Sierra Forest product launch, but the former sports a very different internal configuration—an all "Redwood Cove" P-Core setup. Phoronix posits that Sierra Forest's groundwork is clearing the way for its natural successor: "Clearwater Forest is Intel's second generation E-core Xeon...Clearwater Forest should ship in 2025 while the open-source Intel Linux engineers begin their driver support preparations and other hardware enablement well in advance of launch. With engineers already pushing Sierra Forest code into the Linux kernel and related key open-source projects like Clang and GCC since last year, their work on enabling Sierra Forest appears to be largely wrapping up and in turn the enablement is to begin for Clearwater Forest. Sent out...was the first Linux kernel patch for Clearwater Forest. As usual, for the first patch it's quite basic and is just adding in the new model number for Clearwater Forest CPUs. Clearwater Forest has a model number of 0xDD (221). The patch also reaffirms that the 0xDD Clearwater Forest CPUs are using Atom Darkmont cores."
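To put the quoted patch in context, a first-pass enablement change of this kind usually amounts to little more than a new CPU model define in arch/x86/include/asm/intel-family.h. The sketch below is illustrative only: the macro name is assumed to follow the existing INTEL_FAM6_ATOM_* convention rather than copied from the actual submission, while the 0xDD (221) value is the one quoted above.

```c
/*
 * Illustrative sketch of a first-pass enablement patch: essentially a single
 * new model-number define. Macro names follow the kernel's existing
 * INTEL_FAM6_ATOM_* convention and are assumed, not quoted from the patch.
 */
#define INTEL_FAM6_ATOM_CRESTMONT_X   0xAF  /* Sierra Forest Xeon (existing)      */
#define INTEL_FAM6_ATOM_DARKMONT_X    0xDD  /* Clearwater Forest Xeon, 221 (new)  */
```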

EdgeCortix Foresees Barrier Breaking Efficient Next-gen Edge AI Chips

EdgeCortix, the Japan-based fabless semiconductor company focused on energy-efficient AI processing, predicts that 2024 is set to be a watershed moment for Edge AI. Through its predictions for the year, EdgeCortix believes that the Edge AI landscape will be transformed during this exciting year for the industry. Next-gen AI chips, hybrid edge-cloud architectures, software supremacy and the rise of new generative-AI applications "at the edge" will revolutionize the world of business as we know it.

1. Next-Gen efficient Edge AI Chips will break barriers:
Prepare for a hardware uprising! EdgeCortix foresees next-gen energy-efficient AI chips that not only break the barriers of processing power but redefine them. These chips are not just powerful; they are customized for multi-modal generative AI and efficient language models, enabling cutting-edge AI capabilities at low power for a whole new spectrum of applications.

ASUS Kills Off NUC Extreme Range

ASUS finalized its adoption of the Intel Next Unit of Computing (NUC) product lines at a special autumn 2023 handover event. A post-ceremony statement outlined the company's vision going forward: "ASUS kicked-off its NUC business and started to take orders for NUC 10th to 13th generation systems on September 1. The new business is generating a wide variety of exciting opportunities for the company and the transition has progressed smoothly for NUC customers. The vision of the newly established ASUS NUC BU is to provide the most impactive edge computing with comprehensive commercial and AIoT solutions that can sustain the industry and businesses." Just over a week ago, TechPowerUp was granted access to next generation NUC devices at CES 2024—including ROG NUC, as well as NUC 14 Pro and NUC 14 Pro+ models. Many folks in attendance noticed a complete absence of NUC Extreme products at the ASUS Las Vegas showroom.

Online publication Fudzilla has investigated this matter—Fuad Abazovic (Editor-in-Chief) managed to chase down an ASUS spokesperson. It seems that the Taiwanese manufacturer is integrating some of its best-known branding into the NUC ecosystem, and Team Blue nomenclature is on the chopping block: "the company won't have an update to the NUC Extreme 7.5 liter device. The Raptor Canyon remains the last NUC of its kind, as ASUS has ROG Strix systems in the same ballpark. Fudzilla already covered the announcement of the NUC and NUC pro, and the ROG NUC. We were assured that the 2.5L ROG NUC will remain the fastest gaming-oriented device and that, at this plan, the company doesn't plan to develop the successor of NUC Extreme 7.5 liter. ASUS has announced ROG Strix G16CHR, its 7.5-liter desktop that comes with an air and water cooler and hosts up to Intel Core i7-14700KF Processor 3.4 GHz (33M Cache, up to 5.5 GHz, 20 cores), NVIDIA GeForce RTX 4080 16 GB GDDR6X 3x DP, 2x HDMI, and up to 64 GB RAM in 4x DDR5 U-DIMM slots."

Meta Will Acquire 350,000 H100 GPUs Worth More Than 10 Billion US Dollars

Mark Zuckerberg has shared some interesting insights about Meta's AI infrastructure buildout, which is on track to include an astonishing number of NVIDIA H100 Tensor Core GPUs. In a post on Instagram, Meta's CEO noted the following: "We're currently training our next-gen model Llama 3, and we're building massive compute infrastructure to support our future roadmap, including 350k H100s by the end of this year -- and overall almost 600k H100s equivalents of compute if you include other GPUs." That means the company will field 350,000 H100 GPUs, alongside other accelerators delivering the equivalent of roughly 250,000 more H100s of compute, for a total of almost 600,000 H100-equivalent GPUs.

The raw number of GPUs installed comes at a steep price. With the average selling price of an H100 GPU nearing 30,000 US dollars, Meta's investment will set the company back around $10.5 billion. Other GPUs will be part of the infrastructure as well, but most of the fleet will come from the NVIDIA Hopper family. Additionally, Meta is currently training the Llama 3 AI model, which will be much more capable than the existing Llama 2 family and will include better reasoning, coding, and math-solving capabilities. These models will be open-source. Later down the pipeline, as artificial general intelligence (AGI) comes into play, Zuckerberg has noted that "Our long term vision is to build general intelligence, open source it responsibly, and make it widely available so everyone can benefit." So, expect to see these models in GitHub repositories in the future.
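For reference, the headline figures reduce to simple arithmetic, sketched below. The $30,000 average selling price and the 250,000-H100-equivalent estimate for the other accelerators are the article's rough assumptions, not official Meta or NVIDIA numbers.

```c
/* Back-of-the-envelope math behind the figures in the post. The unit price
 * and the H100-equivalent estimate for non-H100 accelerators are the
 * article's rough assumptions. */
#include <stdio.h>

int main(void)
{
    const double h100_units        = 350000;  /* H100s targeted by end of 2024 */
    const double avg_price_usd     = 30000;   /* rough average selling price   */
    const double other_equivalents = 250000;  /* other GPUs, in H100 terms     */

    printf("H100 spend:             $%.1f billion\n",
           h100_units * avg_price_usd / 1e9);           /* ~10.5 */
    printf("Total H100-equivalents: %.0fk\n",
           (h100_units + other_equivalents) / 1e3);     /* ~600k */
    return 0;
}
```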

AEWIN Intros SCB-1942, a Dual Intel 5th Gen Xeon Driven Flagship Series

AEWIN is glad to announce our latest high-performance network appliance powered by Intel's latest 5th Gen Xeon Scalable processors: the SCB-1942 Series. It is a series of flagship products powered by dual Intel "Emerald Rapids" CPUs, offering up to 128 CPU cores (64 cores per CPU) for the extreme computing power pursued by the market. The SCB-1942 series comes in multiple SKUs with various PCIe slot options for great expandability to fit customers' solutions.

The SCB-1942A is a 2U, dual-socket network computing platform with 16 DDR5 memory sockets (up to 5600 MT/s) and 8 PCIe 5.0 expansion slots for AEWIN's wide range of NIC cards with 1G/10G/25G/40G/100G copper/fiber interfaces, or other accelerators and NVMe SSDs for flexible functionality enhancement. The SCB-1942A also provides the flexibility to convert two of these PCIe slots into a single standard-form-factor PCIe x16 slot, which can take off-the-shelf add-on cards for any additional functions required, including 400G NICs such as Mellanox PCIe 5.0 NICs. In addition, the SCB-1942 series supports 10 SATA drives, which also makes it suitable for various kinds of storage applications.

Hi-Rez Studios Reveals SMITE 2

Titan Forge Games and Hi-Rez Studios have made the biggest announcement in SMITE history—SMITE 2! Become a god and wage war in SMITE 2, the Unreal Engine 5-powered sequel to the legendary third-person action MOBA. If you love rewarding teamwork, challenging competition, intense action, ability-driven third-person combat, or godly mythology, you'll see why 40+ million players love SMITE.

SMITE, EVOLVED
More than 40 million players have enjoyed SMITE. Now the next generation of action MOBA begins, with immersive next-gen visuals powered by UE5, more impactful combat, and new creative gameplay options to shape the game in your own way.

Samsung Demos AI-Powered Galaxy Book4 Series at CES 2024

Samsung Electronics showcased its AI-strengthened Galaxy ecosystem at the Consumer Electronics Show (CES) 2024 in Las Vegas from January 9-12. The dedicated Mobile Experience Zone at Samsung's booth pulled attendees in for one last glimpse at the company's top innovations where visitors could experience the latest Galaxy products, including the Galaxy Book4 Series, and view how they are strengthened by partnerships with industry leaders. Samsung Newsroom visited the site and summarized the main highlights of the event. Read on to learn how the company is driving the hyper-connected mobile experience of the future.

New, Exciting AI Capabilities With the Galaxy Book4 Series
The new Galaxy Book4 Series was spotlighted near the entrance of the zone. Attendees could experience the strong AI performance and the incredibly responsive touchscreen display of the Galaxy Book4, released on January 4 in Korea. The Galaxy Book4 series is equipped with the new Intel Core Ultra Processor, offering expanded connectivity and a high-performance AI experience. The zone featured up-close demonstrations of the laptop's increased processing power, made possible by a newly-added Neural Processing Unit (NPU).

ASUS Showcases ZenWiFi BQ16 and BQ16 Pro WiFi 7 Mesh Routers

ASUS today displayed the ZenWiFi BQ16 and ZenWiFi BQ16 Pro mesh routers at CES 2024, featuring next-gen quad-band WiFi 7 speeds of up to 30 Gbps and coverage of up to 8000 sq. ft. (2pk) for future-ready smart homes. Dual 10 Gbps ports empower wired gaming as well as ultimate work-from-home productivity and 4K HDR entertainment streaming. ASUS ZenWiFi mesh routers won the coveted PC Magazine 2023 Readers' Choice Award. The authoritative publication reported that "ASUS takes a clear lead in this category," with the highest scores "for both overall satisfaction and likelihood to recommend." PCMag also stated, "ASUS scores exceptionally high for mesh coverage, WiFi speed and network management."

ASUS also announces the availability of its ROG Rapture GT-BE98 and GT-BE98 Pro WiFi 7 gaming routers. Each router is equipped with powerful quad-band AiMesh, cutting-edge technology and innovative design, ensuring that users can enjoy ultimate performance.

AMD Withholds Radeon RX 7600 XT Launch in China Amid Strong RX 6750 GRE Sales

According to the latest round of reports, AMD has decided not to include China in the initial global launch of its upcoming Radeon RX 7600 XT graphics card. The RX 7600 XT, featuring 16 GB of memory and based on AMD's newer RDNA 3 architecture, was expected to launch soon at a price of around $300. However, the company is currently re-evaluating its Chinese GPU launch strategy due to the runaway success of its existing Radeon RX 6750 Golden Rabbit Edition (GRE) series in the region. The RX 6750 GRE cards with 10 GB and 12 GB configurations retail between $269 and $289 in China, offering exceptional value compared to rival NVIDIA RTX models. AMD seems hesitant to risk undercutting sales of its popular RX 6750 GPUs by launching the newer 7600 XT.

While the RX 7600 XT promises more raw performance thanks to its RDNA 3 architecture, the RX 6750 GRE, with its RDNA 2 design, seemingly remains capable enough for most mainstream Chinese gamers. With the RX 6750 GRE still selling strongly in China, AMD has postponed the RX 7600 XT introduction for this key market. Final launch timelines for the 7600 XT in China and globally remain unconfirmed by AMD at the time of writing. The company appears to be treading cautiously amid the shifting competitive landscape.

TSMC Plans to Put a Trillion Transistors on a Single Package by 2030

During the recent IEDM conference, TSMC previewed its process roadmap for delivering next-generation chip packages packing over one trillion transistors by 2030. This aligns with similar long-term visions from Intel. Such enormous transistor counts will come through advanced 3D packaging of multiple chiplets. But TSMC also aims to push monolithic chip complexity higher, ultimately enabling 200 billion transistor designs on a single die. This requires steady enhancement of TSMC's planned N2, N2P, N1.4, and N1 nodes, which are slated to arrive between now and the end of the decade. While multi-chiplet architectures are currently gaining favor, TSMC asserts both packaging density and raw transistor density must scale up in tandem. For perspective on the magnitude of TSMC's goals, NVIDIA's GH100 GPU, at 80 billion transistors, is among today's largest chips, excluding wafer-scale designs from Cerebras.

Yet TSMC's roadmap calls for more than doubling that, first with over 100 billion transistor monolithic designs, then eventually 200 billion. Of course, yields become more challenging as die sizes grow, which is where advanced packaging of smaller chiplets becomes crucial. Multi-chip module offerings like AMD's MI300X and Intel's Ponte Vecchio already integrate dozens of tiles, with Ponte Vecchio packing 47. TSMC envisions this expanding to chip packages housing more than a trillion transistors via its CoWoS, InFO, 3D stacking, and many other technologies. While the scaling cadence has recently slowed, TSMC remains confident in achieving both packaging and process breakthroughs to meet future density demands. The foundry's continuous investment ensures progress in unlocking next-generation semiconductor capabilities. But physics ultimately dictates timelines, no matter how aggressive the roadmap.

Samsung Electronics and Red Hat Partnership To Lead Expansion of CXL Memory Ecosystem With Key Milestone

Samsung Electronics, a world leader in advanced memory technology, today announced that for the first time in the industry, it has successfully verified Compute Express Link (CXL) memory operations in a real user environment with open-source software provider Red Hat, leading the expansion of its CXL ecosystem.

Due to the exponential growth of data throughput and memory requirements for emerging fields like generative AI, autonomous driving and in-memory databases (IMDBs), the demand for systems with greater memory bandwidth and capacity is also increasing. CXL is a unified interface standard that connects processors such as CPUs and GPUs with memory devices over a PCIe-based link, and it can serve as a solution to the speed, latency and expandability limitations of existing systems.
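As a rough illustration of what CXL memory operations can look like from user software, the sketch below assumes the Linux CXL/kmem path has onlined a memory expander as a CPU-less NUMA node and uses libnuma to place an allocation on it. The node choice is illustrative and platform-dependent, and this is not Samsung's or Red Hat's validation code.

```c
/*
 * Minimal sketch: once a CXL memory expander is onlined by the kernel it
 * typically appears as a CPU-less NUMA node, and libnuma can place
 * allocations on it explicitly. Node selection here is illustrative.
 */
#include <stdio.h>
#include <string.h>
#include <numa.h>      /* link with -lnuma */

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    int cxl_node = numa_max_node();   /* assume the expander is the last node */
    size_t len = 64UL << 20;          /* 64 MiB test buffer */

    void *buf = numa_alloc_onnode(len, cxl_node);
    if (!buf) {
        fprintf(stderr, "allocation on node %d failed\n", cxl_node);
        return 1;
    }

    memset(buf, 0xA5, len);           /* touch the pages so they are actually placed */
    printf("64 MiB written through node %d\n", cxl_node);

    numa_free(buf, len);
    return 0;
}
```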

AWS and NVIDIA Partner to Deliver 65 ExaFLOP AI Supercomputer, Other Solutions

Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company (NASDAQ: AMZN), and NVIDIA (NASDAQ: NVDA) today announced an expansion of their strategic collaboration to deliver the most-advanced infrastructure, software and services to power customers' generative artificial intelligence (AI) innovations. The companies will bring together the best of NVIDIA and AWS technologies—from NVIDIA's newest multi-node systems featuring next-generation GPUs, CPUs and AI software, to AWS Nitro System advanced virtualization and security, Elastic Fabric Adapter (EFA) interconnect, and UltraCluster scalability—that are ideal for training foundation models and building generative AI applications.

The expanded collaboration builds on a longstanding relationship that has fueled the generative AI era by offering early machine learning (ML) pioneers the compute performance required to advance the state-of-the-art in these technologies.

Manufacturers Anticipate Completion of NVIDIA's HBM3e Verification by 1Q24; HBM4 Expected to Launch in 2026

TrendForce's latest research into the HBM market indicates that NVIDIA plans to diversify its HBM suppliers for more robust and efficient supply chain management. Samsung's HBM3 (24 GB) is anticipated to complete verification with NVIDIA by December this year. The progress of HBM3e, as outlined in the timeline below, shows that Micron provided its 8hi (24 GB) samples to NVIDIA by the end of July, SK hynix in mid-August, and Samsung in early October.

Given the intricacy of the HBM verification process—estimated to take two quarters—TrendForce expects that some manufacturers might learn preliminary HBM3e results by the end of 2023. However, it's generally anticipated that major manufacturers will have definite results by 1Q24. Notably, the outcomes will influence NVIDIA's procurement decisions for 2024, as final evaluations are still underway.

AMD Radeon "GFX12" RX 8000 Series GPUs Based on RDNA4 Appear

AMD is working hard on delivering next-generation products, and today its Linux team submitted a few interesting patches for GFX12 targets that made a subtle appearance on GitHub, as reported by Phoronix. These patches introduce two new discrete GPUs into the LLVM compiler for Linux, fueling speculation that these will be the first iterations of the RDNA4 graphics architecture, potentially forming part of the Radeon RX 8000 series of desktop graphics cards. The naming scheme for the new targets, GFX1200 and GFX1201, suggests a continuation of AMD's logical progression through graphics architectures: the company associated RDNA1 with GFX10 and followed suit with subsequent generations, with RDNA2 as GFX10.3 and RDNA3 as GFX11.

The development of these new GPUs is still in the early stages, indicated by the lack of detailed information about the upcoming graphics ISA or its features within the patches. Currently, the new GFX12 targets are treated much like GFX11—the patch notes that "For now they behave identically to GFX11"—implying that AMD is keeping the specifics under wraps until closer to release. The patch defining target names and ELF numbers for the new GFX1200 and GFX1201 targets is needed to enable timely support in the AMD ROCm compute stack, the AMDVLK Vulkan driver, and the RadeonSI Gallium3D driver.
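For a sense of what "target names and ELF numbers" means in practice, the C-style sketch below mirrors how LLVM models each AMDGPU target as a processor name plus an EF_AMDGPU_MACH_* machine value stored in the ELF header's e_flags. The enum name and numeric values here are placeholders, since the patch's actual assignments (which live in llvm/include/llvm/BinaryFormat/ELF.h) are not quoted in the post.

```c
/*
 * Illustrative sketch only: the numeric values are placeholders, not the
 * real EF_AMDGPU_MACH_* assignments from LLVM's ELF.h.
 */
enum example_amdgpu_mach {
    EXAMPLE_MACH_GFX1100 = 0x01,   /* placeholder for an existing RDNA 3 part */
    EXAMPLE_MACH_GFX1200 = 0x02,   /* placeholder for the new GFX12 target    */
    EXAMPLE_MACH_GFX1201 = 0x03,   /* placeholder for the new GFX12 target    */
};

/* Map a machine value back to the processor name the compiler exposes. */
static const char *example_mach_name(enum example_amdgpu_mach m)
{
    switch (m) {
    case EXAMPLE_MACH_GFX1100: return "gfx1100";
    case EXAMPLE_MACH_GFX1200: return "gfx1200";
    case EXAMPLE_MACH_GFX1201: return "gfx1201";
    }
    return "unknown";
}
```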

Leaked Flyer Hints at Possible AMD Ryzen 9000 Series Powered by Zen 5

A curious piece of marketing material on the Chiphell forum has sent ripples through the tech community, featuring what appears to be an Alienware desktop equipped with an unannounced AMD Ryzen 9000-series processor. The authenticity of this flyer is up for debate, with possibilities ranging from a simple typo by Alienware to a fabricated image, or it could even suggest that AMD is on the cusp of unveiling its next-generation Ryzen CPUs for desktop PCs. While intrigue is high, it's important to approach such revelations cautiously, with a big grain of salt. AMD's existing roadmap points toward a 2024 release for its Zen 5-based Ryzen desktop processors and EPYC server CPUs, which casts further doubt on the Ryzen 9000 series appearing ahead of schedule.

We have to wait for AMD's major upcoming events, including the "Advancing AI" event on December 6, where the company will showcase how it and its partners use AI across applications. Beyond that, we expect to hear more from AMD at events such as CES in January and Computex in May, but there is no official information on product launches in the near term. If the company is preparing anything, the Alienware flyer should be an early indication—provided the source is confirmed. For now, the doubt remains, and the flyer should be treated with skepticism.

AMD Extends 3rd Gen EPYC CPU Lineup to Deliver New Levels of Value for Mainstream Applications

Today, AMD announced the extension of its 3rd Gen AMD EPYC processor family with six new offerings providing a robust suite of data center CPUs to meet the needs of general IT and mainstream computing for businesses seeking to leverage the economics of established platforms. The complete family of 3rd Gen AMD EPYC CPUs complements the leadership performance and efficiency of the latest 4th Gen AMD EPYC processors with impressive price-performance, modern security features and energy efficiency for less technically demanding business critical workloads.

The race to deliver AI and high performance computing is creating a technology gap for IT decision-makers seeking mainstream performance. To meet the growing demand for widely deployed, cost effective and proven mainstream solutions in the mid-market and in the channel, AMD is extending the 3rd Gen EPYC CPU offering to provide excellent value, performance, energy efficiency and security features for business-critical applications. The 3rd Gen AMD EPYC CPU portfolio enables a wide array of broadly deployed enterprise server solutions, supported by trusted channel sellers and OEMs such as Cisco, Dell Technologies, Gigabyte, HPE, Lenovo and Supermicro.