
AMD RDNA 5 a "Clean Sheet" Graphics Architecture, RDNA 4 Merely Corrects a Bug Over RDNA 3

After the bad experience with chiplets, is it really a good idea to move on with them? :confused:
GPUs are very sensitive to latencies; they work best when latencies are extremely low, which means a monolithic design. Chiplets are good for CPUs, but extremely bad for GPUs.
That's why CrossFire is no longer supported.
Do they want to invent a new type of CrossFire?
With skyrocketing manufacturing costs accompanied by minimal improvements?

A Multi-GCD design is the most important thing AMD could bring out to be more competitive. Instead of developing 5-6 chips, a single block (GCD) would serve all segments, simply by putting these chips together. Billions would be saved in the process.

But it's obvious that such a design needs to drastically change the graphics processing model.



"The new patent (PDF) is dated November 23, 2023, so we'll likely not see this for a while. It describes a GPU design radically different from its existing chiplet layout, which has a host of memory cache dies (MCD) spread around the large main GPU die, which it calls the Graphics Compute Die (GCD). The new patent suggests AMD is exploring making the GCD out of chiplets instead of just one giant slab of silicon at some point in the future. It describes a system that distributes a geometry workload across several chiplets, all working in parallel. Additionally, no "central chiplet" is distributing work to its subordinates, as they will all function independently.
 
With skyrocketing manufacturing costs accompanied by minimal improvements?

Doesn't assembly of chiplets also cost quite a bit, and isn't it a more expensive production practice than simply putting a single die onto the interposer/PCB?

Also, can't they compensate by using faster architectures on older/cheaper processes?
I still don't know why they haven't released a pipecleaner 150 mm^2 GPU built on the newer TSMC N4 or TSMC N3 processes.
 
Doesn't assembly of chiplets also cost quite a bit, and isn't it a more expensive production practice than simply putting a single die onto the interposer/PCB?

Also, can't they compensate by using faster architectures on older/cheaper processes?
I still don't know why they haven't released a pipecleaner 150 mm^2 GPU built on the newer TSMC N4 or TSMC N3 processes.
Not even close.
Bleeding edge manufacturing nodes, and the price-bidding war to win allocation, mean that N4 and N3 are an order of magnitude more expensive than the interposer/assembly costs. Those are rapidly becoming irrelevant too, since AMD has been doing it for so long that it's a solved problem with plenty of experience and a relatively smooth/effortless process now.
They won't release lower-end parts on N4 and N3 simply because the profit margins for those lower end, smaller dies don't actually merit the high cost AMD pays TSMC for the flagship nodes.
 
Across the stack, this generation can be summed up by:
  • AMD: 5-10% faster raster at ~5% lower prices
  • Nvidia: 50-200% faster RT/AI/DLSS
  • All of AMD's buyers (historic low at 5% of the discrete consumer GPUs market share): Flooding all websites and forums with "WHY WON'T YOU BUY AMD IT'S 5-10% FASTER FOR 5% CHEAPER YOU MUST BE A BLIND NVIDIA FANBOY NOBODY USES RT/AI/DLSS THEY'RE SCAMS REEEEEEEEEEEEEEEEE".

Even worse is that UE5, which is touted as the next generation of game engines, requires upscaling to be playable (a 50% boost to FPS is hard to ignore when all UE5 games run poorly).
 
MCM designs work in compute, but it's a different ballgame in graphics. That's why there's not that much difference between the 6x00 and 7x00 generations.
This has nothing to do with MCM at the moment; the GCD is still monolithic, Navi31 simply hasn't added all that many more CUs.
 
After the bad experience with chiplets, is it really a good idea to move on with them?
Chiplets for graphics are not new. AMD was already using them in enterprise a year before RDNA3's launch. And if the product were flawed, that market wouldn't tolerate it. So if AMD managed to sell these like hot pancakes, it means the architecture and the execution are sturdy and reliable.

You see, AMD took the approach of designing the top architecture first and then using it for derivative product ranges. This is the VAG of silicon: EPYC/Instinct is MAN/Bugatti, while Ryzen and Radeon are the A6, Passat, Golf and Polo, and Threadripper/Radeon Pro sit somewhere between Rolls-Royce and Crafter. And this is a brilliant strategy, to be honest. This is why AMD holds on to it so much: it brought them a fortune they never had before. That's why I think AMD isn't going to cut the MCM/MCD design for consumer-grade cards (with the possibility of the lower-tier chips joining the MCM design as well), but will improve it instead, akin to how Intel is holding on with Arc. It's much cheaper and easier to keep the same approach for all products and just rectify the issues, rather than dedicate a budget to developing a separate architecture.

So I guess that although the CDNA and RDNA architectures are different, the ideas, technology, design and execution might have much in common, sans video output.
Thus the problem might be specifically with maintaining it for "multipurpose"/gaming use, where the frequencies are higher and the load is variable, so the strain on the hardware is not constant and can easily exceed the chip/link capabilities during load spikes. These are just layman's speculations, but I hope you get the point.
 
Just a matter of years to have a bit of hope, then?
The GPU market has been in such a sad state for the last ~5 years...
 
Doesn't assembly of chiplets also cost quite a bit, and isn't it a more expensive production practice than simply putting a single die onto the interposer/PCB?

Also, can't they compensate by using faster architectures on older/cheaper processes?
I still don't know why they haven't released a pipecleaner 150 mm^2 GPU built on the newer TSMC N4 or TSMC N3 processes.

Chiplets have an added assembly cost that depends on just how advanced the packaging is. The packaging used for AMD's CPUs, for example, where it's just a dumb interposer, is very cheap. A step up from that is the organic substrate used for AMD's 7000 series GPUs. This organic substrate allows them to increase the bandwidth of the link between chiplets while keeping packaging costs in check, as the substrate itself is still dumb (no logic). Those two are pretty economical. A big step above that cost-wise is CoWoS, which is the most expensive option here but likewise provides the most bandwidth. You see this kind of packaging used by AMD and Nvidia in the enterprise space, where it connects the die and HBM, or in AMD's case all the chiplets and HBM.

The cost overhead of chiplets is vastly outweighed by the cost savings. By splitting a design into smaller chiplets you increase the number of good chips yielded per wafer. The exact increase depends on the defect density of the node, but the higher the defect density, the greater the benefit chiplets bring. Even at TSMC 5nm's defect density of 0.1 defects per square cm the number of additional chips yielded is significant, let alone 3nm, which TSMC is currently having yield issues with. This goes triple for GPUs, which are huge dies that disproportionately benefit from the disaggregation that chiplets bring.
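To put rough numbers on that, here's a back-of-the-envelope sketch using the simple Poisson yield model and that 0.1 defects/cm² figure; the die sizes are illustrative, not AMD's actual ones.

```python
import math

# Back-of-the-envelope yield comparison. Assumes a simple Poisson yield model
# and TSMC 5nm's ~0.1 defects per cm^2; die sizes are illustrative only.

WAFER_DIAMETER_MM = 300   # standard wafer
D0 = 0.1                  # defects per cm^2

def gross_dies_per_wafer(area_mm2: float) -> float:
    """Approximate dies per wafer using the common edge-loss formula."""
    r = WAFER_DIAMETER_MM / 2
    return math.pi * r**2 / area_mm2 - math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * area_mm2)

def poisson_yield(area_mm2: float) -> float:
    """Fraction of dies with zero defects under a Poisson model."""
    return math.exp(-(area_mm2 / 100) * D0)  # /100 converts mm^2 to cm^2

def good_dies_per_wafer(area_mm2: float) -> float:
    return gross_dies_per_wafer(area_mm2) * poisson_yield(area_mm2)

monolithic_gpus = good_dies_per_wafer(600)       # one 600 mm^2 die per GPU
chiplet_gpus = good_dies_per_wafer(150) / 4      # four 150 mm^2 chiplets per GPU

print(f"monolithic GPUs per wafer: {monolithic_gpus:.0f}")  # ~50
print(f"chiplet GPUs per wafer:    {chiplet_gpus:.0f}")     # ~90, ignoring inter-chiplet link overhead
```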

In essence you are weighing the cost of wasted silicon against the added cost of a silicon interposer. I managed to find an estimate from 2018 which places the cost between $30 (for a medium-sized chip) and $100 (for an interposer a multiple of the reticle size, stitched together): https://www.frost.com/frost-perspectives/overview-interposer-technology-packaging-applications/

AMD's desktop CPUs qualify below that lower figure, and flagship GPUs (600mm2+) likely sit above the middle at $70-80. I would not be surprised if those costs have gone down for dumb interposers since that was published (not CoWoS though, which is in high demand).
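Combining that interposer figure with the yield sketch above and an assumed ~$17,000 N5 wafer price (a commonly cited estimate, not an official number), the trade-off looks roughly like this:

```python
# Rough cost-per-GPU comparison using the yields from the sketch above.
# The wafer price is an assumption (commonly cited estimate, not official).

WAFER_PRICE_USD = 17_000   # assumed N5 wafer price
MONOLITHIC_GPUS_PER_WAFER = 50
CHIPLET_GPUS_PER_WAFER = 90
PACKAGING_COST_USD = 75    # mid-point of the $30-$100 interposer estimate above

monolithic_cost = WAFER_PRICE_USD / MONOLITHIC_GPUS_PER_WAFER
chiplet_cost = WAFER_PRICE_USD / CHIPLET_GPUS_PER_WAFER + PACKAGING_COST_USD

print(f"monolithic silicon cost per GPU:     ~${monolithic_cost:.0f}")  # ~$340
print(f"chiplet silicon + packaging per GPU: ~${chiplet_cost:.0f}")     # ~$264
```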

Also consider that chiplets allow you to use older processes for certain parts of the chip for additional savings, and that you only have to design the chiplets, which are then used in every SKU in your lineup; both of these influence the total cost to manufacture in a positive way.
 
With skyrocketing manufacturing costs accompanied by minimal improvements?

A Multi-GCD design is the most important thing AMD could bring out to be more competitive. Instead of developing 5-6 chips, a single block (GCD) would serve all segments, simply by putting these chips together. Billions would be saved in the process.

But it's obvious that such a design needs to drastically change the graphics processing model.


"The new patent (PDF) is dated November 23, 2023, so we'll likely not see this for a while. It describes a GPU design radically different from its existing chiplet layout, which has a host of memory cache dies (MCD) spread around the large main GPU die, which it calls the Graphics Compute Die (GCD). The new patent suggests AMD is exploring making the GCD out of chiplets instead of just one giant slab of silicon at some point in the future. It describes a system that distributes a geometry workload across several chiplets, all working in parallel. Additionally, no "central chiplet" is distributing work to its subordinates, as they will all function independently.
This is seriously starting to look like those decade+ old "far future AMD roadmap" leaks were true...
 
They won't release lower-end parts on N4 and N3 simply because the profit margins for those lower end, smaller dies don't actually merit the high cost AMD pays TSMC for the flagship nodes.

Now or never? Now - "maybe". If never, it's game over for AMD.

Bleeding edge manufacturing nodes, and the price-bidding war to win allocation, mean that N4 and N3 are an order of magnitude more expensive than the interposer/assembly costs.

The only things I see AMD bleeding are the performance left on the table because the chiplets are too slow, and the market share loss connected to it.
It's about making decisions.

Maybe AMD should put the profit margins on the table and start thinking about the gamers instead?
 

Chiplets for graphics are not new. AMD was already using them in enterprise a year before RDNA3's launch. And if the product were flawed, that market wouldn't tolerate it. So if AMD managed to sell these like hot pancakes, it means the architecture and the execution are sturdy and reliable.
GPU compute for the datacenter and AI isn't particularly latency sensitive, so the latency penalty of a chiplet MCM approach is almost irrelevant and the workloads benefit hugely from the raw compute bandwidth.

GPU for high-fps gaming is extremely latency-sensitive, so the latency penalty of chiplet MCM is 100% a total dealbreaker.

AMD hasn't solved/evolved the inter-chiplet latency well enough for them to be suitable for a real-time graphics pipeline yet, but that doesn't mean they won't.
 
nGreedia is a bubble, and one day it will burst. I am VERY much looking forward to that, whether it be a year from now or 10.

You may be disappointed to hear that by the time any such bubble pops, they will remain a multi-trillion-dollar corporation.

$NVDA is priced as it is because they provide both the hardware and the software tools for AI companies to develop their products. OpenAI, for example, is a private corporation (similar to Valve), and AI is widely considered to be in its infancy. If there's one lesson here, it's not to mock a solid ecosystem.
 
Now or never? Now - "maybe". If never, it's game over for AMD.
Never, for sure.
It's simply a question of cost because low end parts need to be cheap, which means using expensive nodes for them makes absolutely zero sense.

I can confidently say that it's not happened in the entire history of AMD graphics cards, going back to the early ATi Mach cards, 35 years ago!
Look at the column for manufacturing node: the low end of each generation is always last year's product rebranded, or - if it's actually a new product rather than a rebrand - it's always on an older process node to save money.

So yes, please drop it. I don't know how I can explain it any more clearly to you. Low end parts don't get made on top-tier, expensive, flagship manufacturing nodes, because it's simply not economically viable. Companies aiming to make a profit will not waste their limited quantity of flagship node wafer allocations on low-end shit - that would be corporate suicide!

If Pirelli came across a super-rare, super-expensive, extra-sticky rubber but there was only a limited quantity of the stuff, they could use it to make 1000 of the best Formula 1 racing tyres ever seen and give their brand a huge marketing and recognition boost, OR they could waste it making 5000 more boring, cheap, everyday tyres for commuter workhorse cars like your grandma's Honda Civic.
 
Never, for sure.
It's simply a question of cost because low end parts need to be cheap, which means using expensive nodes for them makes absolutely zero sense.

I can confidently say that it's not happened in the entire history of AMD graphics cards, going back to the early ATi Mach cards, 35 years ago!
Look at the column for manufacturing node: the low end of each generation is always last year's product rebranded, or - if it's actually a new product rather than a rebrand - it's always on an older process node to save money.

So yes, please drop it. I don't know how I can explain it any more clearly to you. Low end parts don't get made on top-tier, expensive, flagship manufacturing nodes, because it's simply not economically viable. Companies aiming to make a profit will not waste their limited quantity of flagship node wafer allocations on low-end shit - that would be corporate suicide!

What is expensive today will not necessarily be expensive tomorrow. Wafer prices fall; N4 will be ancient technology in 5 or 10 years.
Saying never means you must have an alternative in mind. What is it? Making the RX 7600 on 6nm for 20 more years?

 
You may be disappointed to hear that by the time any such bubble pops, they will remain a multi-trillion-dollar corporation.

$NVDA is priced as it is because they provide both the hardware and the software tools for AI companies to develop their products. OpenAI, for example, is a private corporation (similar to Valve), and AI is widely considered to be in its infancy. If there's one lesson here, it's not to mock a solid ecosystem.

Yeah, even with this generation being one of the worst for Nvidia from a price-to-performance standpoint, they are still obliterating AMD in gaming revenue while really only focusing on AI, although at least with Nvidia some of that trickles down to their gaming cards.

Nvidia left the door wide open this generation for AMD, and they're like "nah, we love being stuck at an insignificant % of the market". It's really the total opposite of how AMD handled Zen.

We need both these companies pushing each other to make better products, but if RDNA 5 is a bust like RDNA 3, I'm not sure CDNA can save the whole GPU side at AMD... Maybe we are just seeing the ceiling for an AMD-branded GPU, regardless of how good a product AMD makes.

Who knows, maybe Nvidia will open the door even wider next generation; I've been hearing $1200-ish for a 5080 that only offers 4090 performance with less VRAM, which would be a pretty terrible product.
 
What is expensive today will not necessarily be expensive tomorrow. Wafer prices fall; N4 will be ancient technology in 5 or 10 years.
Saying never means you must have an alternative in mind. What is it? Making the RX 7600 on 6nm for 20 more years?

Ohhh, you mean on N4 once N4 is old and cheap?
Sure, that'll eventually happen. That's where N6 is right now - but it's not relevant to this discussion, is it?
 
What is expensive today will not necessarily be expensive tomorrow. Wafer prices fall; N4 will be ancient technology in 5 or 10 years.
Saying never means you must have an alternative in mind. What is it? Making the RX 7600 on 6nm for 20 more years?


But bruh, who will be interested in a 7600 10 years from now? Chrispy is right on this one. It just makes no sense.
 
But bruh, who will be interested in a 7600 10 years from now? Chrispy is right on this one. It just makes no sense.

He is right, but the point is that AMD will not be able to sell these cards. It's an unsustainable strategy, leading to a downward spiral: bleeding market share to the more popular competitor, and then being forced to exit the market segment.
 
He is right, but the point is that AMD will not be able to sell these cards. It's an unsustainable strategy, leading to a downward spiral: bleeding market share to the more popular competitor, and then being forced to exit the market segment.

The low end market is less sensitive to bleeding edge technology. People would actually rather get something tried and true here, so it works out in the end. Using earlier nodes on lower cost products is therefore a great idea.
 
The low end market is less sensitive to bleeding edge technology. People would actually rather get something tried and true here, so it works out in the end. Using earlier nodes on lower cost products is therefore a great idea.

The question is: when do you expect an RX 6600/RX 7600 owner to upgrade? Following this logic - never, or maybe in 7-10 years?
Is it fine for AMD to get gamers' purchases this rarely? If so, then it's fine.

But it would mean that the niche market will not hold for many more years. No reason to upgrade.
 
The low end market is less sensitive to bleeding edge technology. People would actually rather get something tried and true here, so it works out in the end. Using earlier nodes on lower cost products is therefore a great idea.

Yeah, the RX 580 has the highest presence of any discrete AMD GPU on the hardware survey, and there are a bunch of crappy 50/60-class cards from Nvidia in the top 20. People buy whatever they can afford at the low end, regardless of how meh it is.
The question is: when do you expect an RX 6600/RX 7600 owner to upgrade? Following this logic - never, or maybe in 7-10 years?
Is it fine for AMD to get gamers' purchases this rarely? If so, then it's fine.

But it would mean that the niche market will not hold for many more years. No reason to upgrade.

Most of AMD's low-end base is still on 580s; I think AMD would be happy with those users actually buying 7600s as it is.
 
  • Nvidia: 50-200% faster RT/AI/DLSS
Where do you get this number from? :wtf:

Now or never? Now - "maybe". If never, it's game over for AMD.



The only things I see AMD bleeding are the performance left on the table because the chiplets are too slow, and the market share loss connected to it.
It's about making decisions.

Maybe AMD should put the profit margins on the table and start thinking about the gamers instead?
Do you think chiplets are about gamers? Far from it. The post you replied to demonstrates that it's a cost saving technique, nothing else. Better yields on smaller chips, the ability to link chips made on different nodes, etc.
 
Yeah, even with this generation being one of the worst for Nvidia from a price-to-performance standpoint, they are still obliterating AMD in gaming revenue while really only focusing on AI, although at least with Nvidia some of that trickles down to their gaming cards.

Nvidia left the door wide open this generation for AMD, and they're like "nah, we love being stuck at an insignificant % of the market". It's really the total opposite of how AMD handled Zen.

We need both these companies pushing each other to make better products, but if RDNA 5 is a bust like RDNA 3, I'm not sure CDNA can save the whole GPU side at AMD... Maybe we are just seeing the ceiling for an AMD-branded GPU, regardless of how good a product AMD makes.

Who knows, maybe Nvidia will open the door even wider next generation; I've been hearing $1200-ish for a 5080 that only offers 4090 performance with less VRAM, which would be a pretty terrible product.

The duopoly must continue; Nvidia is pricing their gaming GPUs just high enough to make sure of that.

It's so easy for AMD and Nvidia to figure out their competitor's minimum prices, given that they share the same chip manufacturer (TSMC), the same GDDR manufacturer (Samsung), and the same PCB manufacturers (AIBs).

Who knows, perhaps Nvidia will charge higher margins next gen, just so Radeon can improve their terrible margins.
 
The duopoly must continue; Nvidia is pricing their gaming GPUs just high enough to make sure of that.

It's so easy for AMD and Nvidia to figure out their competitor's minimum prices, given that they share the same chip manufacturer (TSMC), the same GDDR manufacturer (Samsung), and the same PCB manufacturers (AIBs).

Who knows, perhaps Nvidia will charge higher margins next gen, just so Radeon can improve their terrible margins.

Yeah, I would really like to see a BOM cost; if it was high, it would make me feel better lol.
 