News Posts matching #CDNA3

Samsung Signs $3 Billion HBM3E 12H Supply Deal with AMD

Korean media reports that Samsung Electronics has signed a 4.134 trillion Won ($3 billion) agreement with AMD to supply 12-high HBM3E stacks. AMD uses HBM stacks in its AI and HPC accelerators based on its CDNA architecture. This deal is significant, as it gives analysts some idea of the volumes of AI GPUs AMD is preparing to push into the market, provided they can estimate what share of an AI GPU's bill of materials its memory stacks account for. AMD has probably negotiated a good price for Samsung's HBM3E 12H stacks, given that rival NVIDIA almost exclusively uses HBM3E made by SK Hynix.

The AI GPU market is expected to heat up with the ramp of NVIDIA's "Hopper" H200 series, the advent of "Blackwell," AMD's CDNA3-based MI350X, and Intel's Gaudi 3 generative AI accelerator. Samsung debuted its HBM3E 12H memory in February 2024. Each stack features 12 layers, a 50% increase over the first generation of HBM3E, and offers a density of 36 GB per stack. An AMD CDNA3 chip with 8 such stacks would have 288 GB of memory on package. AMD is expected to launch the MI350X in the second half of 2024. The star attraction of this chip is its refreshed GPU tiles, built on the TSMC 4 nm EUV foundry node. It seems like the ideal product for AMD to debut HBM3E 12H on.
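For readers who want to sanity-check the capacity math, here is a minimal sketch that reproduces the figures quoted above; the per-layer density is derived from the article's numbers, and the eight-stack configuration is the one the article describes:

```python
# Back-of-the-envelope HBM3E 12H capacity math, using figures from the article.
LAYERS_PER_STACK = 12        # Samsung HBM3E 12H is a 12-high stack
STACK_CAPACITY_GB = 36       # density per stack
STACKS_PER_PACKAGE = 8       # CDNA3 package configuration cited above

per_layer_gb = STACK_CAPACITY_GB / LAYERS_PER_STACK      # 3 GB (24 Gb) per DRAM layer
package_gb = STACK_CAPACITY_GB * STACKS_PER_PACKAGE      # 288 GB on package

print(f"Per-layer density: {per_layer_gb:.0f} GB")
print(f"Memory per eight-stack package: {package_gb} GB")
```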

Lenovo Anticipates Great Demand for AMD Instinct MI300X Accelerator Products

Ryan McCurdy, President of Lenovo North America, revealed an ambitious, forward-thinking product roadmap during an interview with CRN magazine. A hybrid strategic approach is expected to create an AI fast lane on future hardware. McCurdy, a former Intel veteran, stated: "there will be a steady stream of product development to add (AI PC) hardware capabilities in a chicken-and-egg scenario for the OS and for the (independent software vendor) community to develop their latest AI capabilities on top of that hardware...So we are really paving the AI autobahn from a hardware perspective so that we can get the AI software cars to go faster on them." Lenovo—as expected—is jumping on the AI-on-device train, but it will also be diversifying its range of AI server systems with new AMD and Intel-powered options. The company has reacted to recent Team Green AI GPU supply issues, and alternative units are now in the picture: "with NVIDIA, I think there's obviously lead times associated with it, and there's some end customer identification, to make sure that the products are going to certain identified end customers. As we showcased at Tech World with NVIDIA on stage, AMD on stage, Intel on stage and Microsoft on stage, those industry partnerships are critical to not only how we operate on a tactical supply chain question but also on a strategic what's our value proposition."

McCurdy did not go into detail about upcoming Intel-based server equipment, but seemed excited about AMD's Instinct MI300X accelerator—Lenovo was previously announced as one of the early OEM takers of Team Red's latest CDNA 3.0 tech. CRN asked about the firm's outlook for upcoming MI300X-based inventory, and McCurdy responded: "I won't comment on an unreleased product, but the partnership I think illustrates the larger point, which is the industry is looking for a broad array of options. Obviously, when you have any sort of lead times, especially six-month, nine-month and 12-month lead times, there is interest in this incredible technology to be more broadly available. I think you could say in a very generic sense, demand is as high as we've ever seen for the product. And then it comes down to getting the infrastructure launched, getting testing done, and getting workloads validated, and all that work is underway. So I think there is a very hungry end customer-partner user base when it comes to alternatives and a more broad, diverse set of solutions."

Tiny Corp. Prepping Separate AMD & NVIDIA GPU-based AI Compute Systems

George Hotz and his startup operation (Tiny Corporation) appeared ready to completely abandon AMD Radeon GPUs last week, after a period of firmware-related headaches. The original plan involved the development of a pre-orderable $15,000 TinyBox AI compute cluster housing six XFX Speedster MERC310 RX 7900 XTX graphics cards, but software/driver issues prompted experimentation with alternative hardware routes. A lot of media coverage has focused on the unusual adoption of consumer-grade GPUs—Tiny Corp.'s struggles with RDNA 3 (rather than CDNA 3) were pushed further into public view after top AMD brass pitched in.

The startup's social media feed is very transparent, showcasing everyday tasks, problem-solving, and important decision-making. Several Acer Predator BiFrost Arc A770 OC cards were purchased and promptly integrated into a colorfully-lit TinyBox prototype, but Hotz & Co. swiftly moved on to Team Green pastures. Tiny Corp. has begrudgingly adopted NVIDIA GeForce RTX 4090 GPUs. Earlier today, it was announced that work on the AMD-based system has resumed—although customers were forewarned about anticipated teething problems. The surprising message arrived in the early hours: "a hard to find 'umr' repo has turned around the feasibility of the AMD TinyBox. It will be a journey, but it gives us an ability to debug. We're going to sell both, red for $15,000 and green for $25,000. When you realize your pre-order you'll choose your color. Website has been updated. If you like to tinker and feel pain, buy red. The driver still crashes the GPU and hangs sometimes, but we can work together to improve it."

Financial Analyst Outs AMD Instinct MI300X "Projected" Pricing

AMD's December 2023 launch of new Instinct series accelerators has generated a lot of tech news buzz and excitement within the financial world, but not many folks are privy to Team Red's MSRP for the CDNA 3.0-powered MI300X and MI300A models. A Citi report has pulled back the curtain, albeit with "projected" figures—an inside source claims that Microsoft has purchased the Instinct MI300X 192 GB model for ~$10,000 apiece. North American enterprise customers appear to have taken delivery of the latest MI300 products around mid-January—inevitably, closely guarded information has leaked out to reporters. SeekingAlpha's article (based on Citi's findings) alleges that Microsoft's data center division is AMD's top buyer of MI300X hardware—GPT-4 is reportedly up and running on these brand-new accelerators.

The leakers claim that businesses further down the (AI and HPC) food chain are having to shell out $15,000 per MI300X unit, but this is a bargain compared to NVIDIA's closest competing package—the venerable H100 SXM5 80 GB professional card. Team Green, similarly, does not reveal its enterprise pricing to the wider public—Tom's Hardware has kept tabs on H100 insider info and market leaks: "over the recent quarters, we have seen NVIDIA's H100 80 GB HBM2E add-in-card available for $30,000, $40,000, and even much more at eBay. Meanwhile, the more powerful H100 80 GB SXM with 80 GB of HBM3 memory tends to cost more than an H100 80 GB AIB." Citi's projection has Team Green charging up to four times more for its H100 product compared to Team Red's MI300X pricing. NVIDIA's dominant AI GPU market position could be challenged by cheaper yet still very performant alternatives—additionally, chip shortages have forced Jensen & Co. to step outside their comfort zone. Tom's Hardware reached out to AMD for comment on the Citi pricing claims—a company representative declined the invitation.
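To put the claimed price gap in perspective, here is a minimal sketch comparing the figures cited above; every number is a leak, projection, or observed street price, not an official MSRP:

```python
# Comparison of the (unofficial) pricing claims cited above.
mi300x_microsoft = 10_000               # Citi's claimed per-unit price for Microsoft
mi300x_street = 15_000                  # claimed price for smaller customers
h100_street_prices = (30_000, 40_000)   # observed H100 80 GB AIC street prices

for h100 in h100_street_prices:
    ratio = h100 / mi300x_street
    print(f"H100 at ${h100:,} is {ratio:.1f}x the claimed MI300X street price")

# Citi's "up to four times more" matches the extremes of these figures:
print(f"${h100_street_prices[1]:,} / ${mi300x_microsoft:,} = "
      f"{h100_street_prices[1] / mi300x_microsoft:.0f}x")
```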

AMD Instinct MI300X Released at Opportune Moment, as NVIDIA AI GPUs Remain in Short Supply

LaminiAI appeared to be one of the first customers to receive an initial shipment of AMD's Instinct MI300X accelerators, as disclosed by its CEO posting about functioning hardware on social media late last week. A recent Taiwan Economic Daily article states that the "MI300X is rumored to have begun supply"—we are not sure why the publication has adopted a semi-secretive tone, but a couple of anonymous sources are cited. A person familiar with supply chains in Taiwan divulged that: "(they have) been receiving AMD MI300X chips one after another...due to the huge shortage of NVIDIA AI chips, the arrival of new AMD products is really a timely rainfall." Favorable industry analysis (from earlier this month) has placed Team Red in a position of strength, due to growing interest in its very performant flagship AI accelerator.

The secrecy seems to lie in Team Red's negotiation strategies in Taiwan—the news piece alleges that big manufacturers in the region have been courted. AMD has been aggressive in a push to: "cooperate and seize AI business opportunities, with GIGABYTE taking the lead and attracting the most attention. Not only was GIGABYTE the first to obtain a partnership with AMD's MI300A chip, which had previously been mass-produced, but GIGABYTE was also one of the few Taiwanese manufacturers included in AMD's first batch of MI300X partners." GIGABYTE is expected to release two new "G593" product lines of server hardware later this year, based on combinations of AMD's Instinct MI300X accelerator and EPYC 9004 series processors.

AI Datacenters Warming Up to Instinct CDNA Causes AMD Stock to Hit Near Record High

With NVIDIA's Ampere and Hopper GPUs dominating the AI acceleration industry, compute companies are turning to AMD's Instinct CDNA series accelerators in search of alternatives, and it seems they've found one. This has financial market analysts excited, sending the AMD stock to near-record highs. AMD recently launched the Instinct MI300X and MI300A processors based on the CDNA 3 architecture, which the company claims beat NVIDIA's H100 "Hopper" processors at competitive prices. This has encouraged analysts from major financial institutions, including Barclays, KeyBanc Capital, and Susquehanna Financial Group, to raise their price targets for the AMD stock. As of market close on Jan 17 (7:59:56 PM UTC), the AMD stock stood at $160.17, near its November 2021 record high of $164.46.

AMD's data center business looks to ramp up Instinct CDNA accelerators through 2024. These large chiplet-based GPUs are built on the same 5 nm TSMC foundry node as NVIDIA's H100 "Hopper," and it's been reported that AMD might even forgo large gaming GPUs based on its Radeon RX RDNA4 architecture to maximize its foundry allocation for high-margin CDNA3 chips. The Instinct MI300X features a colossal 304 compute units, amounting to 19,456 stream processors capable of AI-relevant math formats, and 192 GB of 8192-bit HBM3 memory with 5.2 TB/s of memory bandwidth on tap.
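The headline numbers are internally consistent, as the short sketch below shows; the 64 stream processors per compute unit is the standard CDNA figure, and the per-pin rate is derived, not an official specification:

```python
# Sanity check of the MI300X shader and memory figures quoted above.
compute_units = 304
sp_per_cu = 64                 # standard stream-processor count per CDNA compute unit
bus_width_bits = 8192
bandwidth_tb_s = 5.2           # TB/s, from the article

stream_processors = compute_units * sp_per_cu            # 19,456
# Implied per-pin data rate: (bytes/s * 8 bits) / bus width
pin_rate_gbps = bandwidth_tb_s * 1e12 * 8 / bus_width_bits / 1e9

print(f"Stream processors: {stream_processors:,}")
print(f"Implied HBM3 per-pin rate: ~{pin_rate_gbps:.1f} Gbps")   # ~5.1 Gbps
```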

AMD Instinct MI300X Could Become Company's Fastest Product to Rake in $1 Billion in Sales

AMD in its post-Q3 2023 financial results call stated that it expects the Instinct MI300X accelerator to be the fastest product in AMD history to rake in $1 billion in sales, meaning the shortest time from launch for any AMD product to register $1 billion in revenue. With the MI300 series, the company hopes to finally break into the AI-driven HPC accelerator market that NVIDIA dominates, and at scale. This growth is attributable to two distinct factors: first, NVIDIA is supply-bottlenecked, and customers looking for alternatives have finally found a suitable one in the MI300 series; second, with the MI300 series, AMD has finally ironed out the software ecosystem backing hardware that looks incredible on paper.

It's also worth noting that AMD is rumored to be sacrificing its market presence in the enthusiast-class gaming GPU segment with its next generation, with the goal of maximizing its foundry allocation for HPC accelerators such as the MI300X. HPC accelerators are a significantly higher-margin class of products than gaming GPUs such as the Radeon RX 7900 XTX. The RX 7900 XTX and its refresh under the RX 7950 series are not expected to have a successor in the RDNA4 generation. "We now expect datacenter GPU revenue to be approximately $400 million in the fourth quarter and exceed $2 billion in 2024 as revenue ramps throughout the year," said Dr. Lisa Su, CEO of AMD, at the company's earnings call with analysts and investors. "This growth would make MI300 the fastest product to ramp to $1 billion in sales in AMD history."
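As a rough illustration of how quickly cumulative revenue would cross $1 billion under that guidance, here is a sketch; AMD disclosed only the ~$400 million Q4-2023 figure and the >$2 billion 2024 total, so the quarterly split for 2024 is purely hypothetical:

```python
# Cumulative MI300 revenue under AMD's guidance. Only the Q4-2023 figure
# (~$400M) and the 2024 total (>$2B) are guidance; the 2024 quarterly
# split below is a hypothetical illustration.
quarters = ["Q4-2023", "Q1-2024", "Q2-2024", "Q3-2024", "Q4-2024"]
revenue_m = [400, 500, 500, 550, 600]    # millions USD; 2024 split assumed

cumulative = 0
for quarter, revenue in zip(quarters, revenue_m):
    prev = cumulative
    cumulative += revenue
    marker = "  <- crosses $1B" if prev < 1_000 <= cumulative else ""
    print(f"{quarter}: cumulative ${cumulative / 1_000:.2f}B{marker}")
```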

Two-ExaFLOP El Capitan Supercomputer Starts Installation Process with AMD Instinct MI300A

When Lawrence Livermore National Laboratory (LLNL) announced the creation of a two-ExaFLOP supercomputer named El Capitan, we heard that AMD would power it with its Instinct MI300 accelerator. Today, LLNL published a Tweet stating, "We've begun receiving & installing components for El Capitan, @NNSANews' first #exascale #supercomputer. While we're still a ways from deploying it for national security purposes in 2024, it's exciting to see years of work becoming reality." As published images show, HPE racks filled with AMD Instinct MI300 accelerators are now showing up at LLNL's facility, and the supercomputer is expected to go operational in 2024. This could mean that the November 2023 TOP500 list update won't feature El Capitan, as system enablement would be very hard to achieve in the four months until then.

The El Capitan supercomputer is expected to run on the AMD Instinct MI300A accelerator, which features 24 Zen 4 cores, the CDNA3 architecture, and 128 GB of HBM3 memory. Four such accelerators are paired together inside each HPE node, which also gets water-cooling treatment. While we don't have many further details on the memory and storage of El Capitan, we know that the system will exceed two ExaFLOPS at peak and will consume close to 40 MW of power.
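Those two figures imply a demanding efficiency target, which the sketch below derives; the four-APU node configuration is from the article, and the result is peak-rate arithmetic, not a measured value:

```python
# Implied efficiency target from the El Capitan figures quoted above.
peak_exaflops = 2.0
power_mw = 40.0                          # megawatts

gflops_per_watt = (peak_exaflops * 1e18) / (power_mw * 1e6) / 1e9   # 50

node_hbm_gb = 4 * 128                    # four MI300A per HPE node, 128 GB each
print(f"Required efficiency: {gflops_per_watt:.0f} GFLOPS per watt at peak")
print(f"HBM3 per node: {node_hbm_gb} GB")
```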

AMD Shows Instinct MI300 Exascale APU with 146 Billion Transistors

During its CES 2023 keynote, AMD announced its latest Instinct MI300 APU, a first of its kind in the data center world. Combining the CPU, GPU, and memory elements into a single package eliminates the latency imposed by the long distances data must travel from CPU to memory and from CPU to GPU over the PCIe connection. In addition to solving some latency issues, less power is needed to move the data, providing greater efficiency. The Instinct MI300 features 24 Zen 4 cores with simultaneous multi-threading enabled, CDNA3 GPU IP, and 128 GB of HBM3 memory on a single package. The memory bus is 8192 bits wide, providing unified memory access for CPU and GPU cores. CXL 3.0 is also supported, making cache-coherent interconnects a reality.

The Instinct MI300 APU package is an engineering marvel of its own, with advanced chiplet techniques used. AMD managed 3D stacking: nine 5 nm logic chiplets sit on top of four 6 nm chiplets, with HBM surrounding them. All of this brings the transistor count up to 146 billion, representing the sheer complexity of such a design. For performance figures, AMD provided a comparison to the Instinct MI250X GPU: in raw AI performance, the MI300 claims an 8x improvement, while the performance-per-watt gain is a more modest 5x. While we do not know which benchmark applications were used, standard benchmarks like MLPerf were likely involved. For availability, AMD targets the end of 2023, when the "El Capitan" exascale supercomputer will arrive using these Instinct MI300 APU accelerators. Pricing is unknown and will be unveiled to enterprise customers first, around launch.
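Taken together, the two performance claims also imply how much more power the MI300 may draw than the MI250X; the sketch below is simple ratio arithmetic on AMD's own claims, not a disclosed TDP:

```python
# What the MI300-vs-MI250X claims imply about power draw.
perf_gain = 8.0            # claimed AI performance uplift over MI250X
perf_per_watt_gain = 5.0   # claimed efficiency uplift over MI250X

# Since performance = (performance/watt) * watts, the implied power ratio is:
power_ratio = perf_gain / perf_per_watt_gain   # 1.6x
print(f"Implied power increase over MI250X: {power_ratio:.1f}x")
```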

Intel GPU Business in a $3.5 Billion Hole, Jon Peddie Recommends Sell or Kill

Jon Peddie Research (JPR) provides some of the most authoritative and informative market research into the PC graphics hardware industry. The firm just published a scathing editorial on the future of Intel AXG (Accelerated Computing Systems and Graphics), the business tasked with the development of competitive discrete GPUs and HPC compute accelerators for Intel. Founded to much fanfare in 2016 and led by Raja Koduri, AXG has been in the news for the development of the Xe graphics and compute architecture, particularly the Xe-HPC "Ponte Vecchio" HPC accelerator, and the Arc brand of consumer discrete graphics solutions. JPR reports that Intel has invested several billion dollars into AXG to little avail, with none of its product lines bringing in notable revenue for the company. Xe-LP based iGPUs do not count, as they're integrated with client processors and their revenues are clubbed with CCG (Client Computing Group).

Intel started reporting revenues from the AXG business in Q1 2021, around which time it started selling its first discrete GPU, the Intel DG1 Xe MAX, based on the same Xe-LP architecture powering its iGPUs. The company's Xe-HPG architecture, designed for high-performance gaming, was marketed as its first definitive answer to NVIDIA GeForce and AMD Radeon. Since Q1 2021, Intel has lost $2.1 billion on AXG, with not much to show for it. The JPR article suggests that Intel missed the bus on both time-to-market and scale.
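For a rough sense of the burn rate those figures imply, here is a sketch; the quarter count assumes the reporting window runs from Q1-2021 through Q3-2022, which is an assumption based on the editorial's timing:

```python
# Average quarterly AXG loss implied by the JPR figures above.
total_loss_b = 2.1   # billions USD lost since Q1-2021, per JPR
quarters = 7         # Q1-2021 through Q3-2022 inclusive (assumed window)

avg_loss_m = total_loss_b * 1_000 / quarters
print(f"Average AXG loss per quarter: ~${avg_loss_m:.0f}M")
```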

AMD Instinct MI300 APU to Power El Capitan Exascale Supercomputer

The exascale supercomputing race is now well underway, now that the US-based Frontier supercomputer has been delivered, and we wait to see the remaining systems join the race. Today, during the 79th HPC User Forum at Oak Ridge National Laboratory (ORNL), Terri Quinn of Lawrence Livermore National Laboratory (LLNL) delivered a few insights into what the El Capitan exascale machine will look like. It seems the new powerhouse will be based on AMD's Instinct MI300 APU. LLNL targets peak performance of over two ExaFLOPs and sustained performance of more than one ExaFLOP, under 40 megawatts of power. This requires a very dense and efficient computing solution, exactly what the MI300 APU promises to be.

As a reminder, the AMD Instinct MI300 is an APU that combines Zen 4 x86-64 CPU cores, CDNA3 compute-oriented graphics, large cache structures, and HBM memory used as DRAM on a single package. This is achieved using a multi-chip module design with 2.5D and 3D chiplet integration based on the Infinity architecture. The system will essentially utilize thousands of these APUs to form one large Linux cluster. It is slated for installation in 2023, with an operating lifespan from 2024 to 2030.

AMD's Second Socket AM5 Ryzen Processor will be "Granite Ridge," Company Announces "Phoenix Point"

AMD in its 2022 Financial Analyst Day presentation announced the codename for the second generation of Ryzen desktop processors for Socket AM5: "Granite Ridge." A successor to the Ryzen 7000 "Raphael," the next-generation "Granite Ridge" processor will incorporate the "Zen 5" CPU microarchitecture, with its CPU complex dies (CCDs) built on the 4 nm silicon fabrication node. "Zen 5" will feature several core-level changes as detailed in our older article, including a redesigned front-end with greater parallelism, which should indicate a much larger execution stage. The architecture could also incorporate AI/ML performance enhancements as AMD taps into Xilinx IP to add more fixed-function hardware backing the AI/ML capabilities of its processors.

The "Zen 5" microarchitecture makes its client debut with Ryzen "Granite Ridge," and server debut with EPYC "Turin." It's being speculated that AMD could give "Turin" a round of CPU core-count increases, while retaining the same SP5 infrastructure; which means we could see either smaller CCDs, or higher core-count per CCD with "Zen 5." Much like "Raphael," the next-gen "Granite Ridge" will be a series of high core-count desktop processors that will feature a functional iGPU that's good enough for desktop/productivity, though not gaming. AMD confirmed that it doesn't see "Raphael" as an APU, and that its definition of an "APU" is a processor with a large iGPU that's capable of gaming. The company's next such APU will be "Phoenix Point."

AMD CDNA3 Architecture Sees the Inevitable Fusion of Compute Units and x86 CPU at Massive Scale

AMD in its 2022 Financial Analyst Day presentation unveiled its next-generation CDNA3 compute architecture, which will deliver something we've been expecting for a while: a compute accelerator that puts a large number of compute units for parallel processing and a large number of x86-64 CPU cores, based on a future "Zen" microarchitecture, onto a single package. The presence of CPU cores on the package would eliminate the need for the system to have an EPYC or Xeon processor at its head, so clusters of Instinct CDNA3 processors could run themselves without a host CPU and its system memory.

The Instinct CDNA3 processor will feature an advanced packaging technology that brings various IP blocks together as chiplets, each built on the node most economical for its function, without compromising on that function. The package features stacked HBM memory, and this memory is shared not just by the compute units and x86 cores, but also forms part of large shared memory pools accessible across packages. 4th Generation Infinity Fabric ties it all together.

Alleged AMD Instinct MI300 Exascale APU Features Zen4 CPU and CDNA3 GPU

Today we received information that AMD's upcoming Instinct MI300 will allegedly be available as an Accelerated Processing Unit (APU). AMD APUs are processors that combine a CPU and a GPU in a single package. AdoredTV managed to get ahold of a slide indicating that the AMD Instinct MI300 accelerator will also come in an APU option that combines Zen 4 CPU cores and a CDNA3 GPU accelerator in a single, large package. With technologies like 3D stacking, MCM design, and HBM memory, these Instinct APUs are positioned to be a high-density compute product. At least six HBM dies are going to be placed in the package, with the APU itself being a socketed design.

The leaked slide from AdoredTV indicates that the first tapeout will be complete by the end of the month (presumably this month), with the first silicon hitting AMD's labs in Q3 2022. If the silicon turns out functional, we could see these APUs available sometime in the first half of 2023. Below, you can see an illustration of the AMD Instinct MI300 GPU. The APU version will potentially be the same size, with Zen 4 and CDNA3 cores spread around the package. As the Instinct MI300 accelerator is supposed to use eight compute tiles, we could see different combinations of CPU/GPU tiles offered. As we await the launch of the next-generation accelerators, we have yet to see what SKUs AMD will bring.
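To visualize the configuration space that eight tiles open up, here is a trivial enumeration of hypothetical CPU/GPU tile splits; AMD has not disclosed which combinations, if any, will ship as SKUs:

```python
# Hypothetical CPU/GPU tile splits for an eight-tile MI300 package.
# The actual SKU mix is undisclosed; this just enumerates the possibilities.
TOTAL_TILES = 8
for cpu_tiles in range(TOTAL_TILES + 1):
    gpu_tiles = TOTAL_TILES - cpu_tiles
    print(f"{cpu_tiles} CPU tile(s) + {gpu_tiles} GPU tile(s)")
```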

AMD MI300 Compute Accelerator Allegedly Features Eight Logic Dies

AMD's next-generation MI300 compute accelerator is expected to significantly scale up logic density, according to a rumor from Moore's Law is Dead. Based on the CDNA3 compute architecture, the MI300 will be a monstrously large multi-chip module with as many as 8 logic dies (compute dies), each with its own dedicated HBM3 stack. The compute dies will be 3D-stacked on top of I/O dies that pack the memory controllers and the interconnect that handles inter-die and inter-package communication.

The report even goes on to mention that the compute dies at the top level of the stack will be built on the TSMC N5 (5 nm) silicon fabrication process, while the I/O dies below will use TSMC N6 (6 nm). At this point it's not known whether AMD will use the package substrate to wire the logic stacks to the memory stacks, or whether it will take the pricier route of a silicon interposer, but the report supports the interposer theory—that an all-encompassing interposer seats all eight compute dies, all four I/O dies (each carrying two compute dies), and the eight HBM3 stacks. An interposer is a silicon die that facilitates high-density microscopic wiring between two dies on a package, which is otherwise not possible through package substrate wiring.