
AMD's Upcoming UDNA / RDNA 5 GPU Could Feature 96 CUs and 384-bit Memory Bus

Unless it's some earth-shattering improvement, I'm afraid I'll probably still go with Nvidia. Pure raster performance is not that relevant anymore, and the power of the RTX/DLSS ecosystem and its market penetration are just impossible to ignore and worth the premium for me. But I guess it'll be a good option for people who have time for tweaking stuff and playing with tools such as Optiscaler.
UDNA / RDNA 5 is going to have better RT than RDNA 4, so what do you mean by pure raster?

The problem is that to show off, you actually need to be in the lead. With those specs, unless there's a historically unprecedented IPC gain per WGP, that GPU isn't gonna be a threat to an RTX 5090. It'd be impressive if it released today, but by late 2026/early 2027? That's another story entirely.
I think price / performance matters more than a halo product.

Nobody wants to be spending $2500+ on GPU's anymore.
 
A little napkin math comparing N6 to N4P suggests this notion is mostly correct: in terms of viability, a cheaper old product beats trying to shrink a new one. But with something like a cut-down Navi... 55, assuming the pattern holds, the whole point is salvaging tolerably flawed dies rather than writing them off as a total loss.
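For anyone who wants to check that napkin math, here's a minimal sketch; the wafer prices, die sizes, and defect densities are placeholder assumptions for illustration, not real TSMC figures:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> int:
    """Classic dies-per-wafer approximation with an edge-loss correction."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(die_area_mm2: float, wafer_cost_usd: float,
                      defect_density_per_cm2: float) -> float:
    """Poisson yield model: yield = exp(-D0 * area)."""
    yield_rate = math.exp(-defect_density_per_cm2 * die_area_mm2 / 100)
    return wafer_cost_usd / (dies_per_wafer(die_area_mm2) * yield_rate)

# Hypothetical comparison: a mature N6 die vs a shrunk N4P version of it.
print(f"N6,  270 mm^2: ${cost_per_good_die(270, 10_000, 0.07):.0f} per good die")
print(f"N4P, 220 mm^2: ${cost_per_good_die(220, 17_000, 0.05):.0f} per good die")
```

Even granting the newer node a better defect density and a smaller die, the cheaper wafer tends to win, which is the point above.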

The 9060 XT 8GB is far outsold by its 16GB brother as-is. I don't see why AMD wouldn't shift their strategy towards fully investing these lesser-yield dies into a product that, while not a breadwinner by any definition, props up overall margin by offsetting what would otherwise be extra cost on top of the good dies.
While the 16GB versions of the 5060 Ti and 9060 XT seem to outsell the 8GB versions, I assume that with OEM systems it could be the complete opposite. The average consumer will buy a "gaming PC", and if they are lucky, that "gaming PC" won't just be an AMD APU. They will probably go for the 8GB graphics card, not because they know about VRAM and stuff and know that it is enough for them and the games they want to play, but because it will probably be $100 less than the system with the 16GB GPU.

As for flawed dies, they would need plenty of them to create a new model. If they don't have plenty, they might again create an OEM product that goes to one or two OEMs, or is sold to China, for example.
 
AMD gets gun-shy making dies larger than 500 mm², but I wonder if UDNA is going to modify the chiplet approach so that two full dies can sit on top of an interposer and act as one (instead of the RDNA3 chiplet + IO approach).

This way, two full-fledged 450 mm², 96 CU RDNA5 dies could potentially act like one 900 mm² die, but without the extra sync overhead of the chiplet approach.

It would be a sort of hybrid approach somewhere between chiplet and crossfire.
 
AMD gets gun-shy making dies larger than 500 mm², but I wonder if UDNA is going to modify the chiplet approach so that two full dies can sit on top of an interposer and act as one (instead of the RDNA3 chiplet + IO approach).

This way, two full-fledged 450 mm², 96 CU RDNA5 dies could potentially act like one 900 mm² die, but without the extra sync overhead of the chiplet approach.

It would be a sort of hybrid approach somewhere between chiplet and crossfire.
Splitting operations suited to different architectures across two different dies will likely increase latency. More likely, the pipelines will be re-mixed onto a single die, similar to GCN before the CDNA and RDNA split.
 
I think price / performance matters more than a halo product.
Well, the RDNA4 cards arguably have better ratios than comparable Blackwell cards, but I still see people jumping on nVidia like crazy, even though Blackwell is trash: a pure stop-gap generation, nothing more. The 5090 stands out because it's huge and the performance difference compared to the 5080 is too evident; it's clear that the current product stack is missing a card that is a further cut-down of the 5090 to fill the gap between it and the 5080.

The argument that favors nVidia is the feature set. People say that DLSS is superior to FSR4 even though they can't discern it while actually playing; when you point this out to them, they say, yeah, they're very close in quality, but the game support is far greater for nVidia.
So AMD can't win without parity (for game support), or at least being close enough, which at this point it isn't.
There are still people who consider nVidia better at ray tracing, even though its real advantage is in path tracing.

AMD is forced to make a flagship that can hang in there with nVidia's. If it does, it proves that nVidia can't deliver a decisive win despite all that money and all those resources. And if FSR gets closer to DLSS, a difference that's only revealed by VMAF tests won't matter.
AMD needs to concentrate on game support, getting all of these "productivity" features straightened out, and delivering a cohesive package. Yes, no CUDA, obviously, and nVidia will still lead in rendering, but if it's just that and the rest is on par, then the value of AMD GPUs will be undeniable.

And they've made mistakes this generation; small ones, but they got noticed. Like the memory temps: it just makes the products feel second-rate, even if they run within spec, and when Samsung fixed the temps, they "unfixed" the latencies and performance in synthetics was marginally affected.
It just looks amateurish. Without this issue they could have had a selling point, since the RDNA4 cards apparently can hold great GPU temps with smaller coolers than comparable nVidia cards.
AMD could have said: hey, look, our cards have similar thermal and acoustic performance, but they're more compact and lighter and have more reliable power connectors.
But how can you say your product is more reliable when the temp delta between the core and memory is ~30°C?
Nobody wants to be spending $2500+ on GPU's anymore.
Sure they do. The ones that got 5090s for $4000+, when they upgrade to a 6090, 7090 or 8090, would prefer to pay $3000. :laugh:
 
I think price / performance matters more than a halo product.

Nobody wants to be spending $2500+ on GPU's anymore.

I don't think anyone does, but what we're demanding here (advanced semiconductors on bleeding-edge nodes, backed by high-complexity software, at a low price) is an exceptionally difficult proposition. No doubt AMD's next-gen product will be good (unless it isn't, but we have to wait these two years to find out), but if you're vying for supremacy, you can pull no punches, and at that point price takes a distant back seat (if not thrown completely out of the window; see: RTX 5090).

AMD gets gun-shy making dies larger than 500 mm², but I wonder if UDNA is going to modify the chiplet approach so that two full dies can sit on top of an interposer and act as one (instead of the RDNA3 chiplet + IO approach).

This way, two full-fledged 450 mm², 96 CU RDNA5 dies could potentially act like one 900 mm² die, but without the extra sync overhead of the chiplet approach.

It would be a sort of hybrid approach somewhere between chiplet and crossfire.

500 mm² of extremely valuable TSMC silicon on a Radeon product makes absolutely zero business sense to AMD at the moment, especially if attempting to target the price ranges that their customer base demands. It's a paradoxical situation: even within the relatively low-margin gaming segment, AMD's customers are aggressively value-minded and happy to settle for inferior products as long as they save a buck, yet at the same time, they simply cannot be, and are not, the company's priority. It's a niche with lots of room to grow, as long as the fabs are sorted out. Except that they are never going to be.

Intel, like AMD, relies on TSMC and has even lower priority; Nvidia isn't interested because they have the premium segments in a chokehold; and the Chinese products from the likes of Moore Threads and Glenfly aren't competitive. While this could theoretically be solved by restarting production of GPUs that retain advanced feature sets but are built on older nodes (such as RDNA 2 or Ampere), there is zero business incentive to do so: those nodes can effortlessly be used to supply semicustom demand (i.e. game consoles and the automotive sector), and it would only serve to deflate prices on their latest products and stabilize the market, which is something shareholders are ultimately not interested in.

So unless they can make a GPU that pulls out all the stops and ticks every box, and can be sold for an equally high price, they aren't going to make one. To the hyperconscious gamer with a small wallet, an RTX 5090 might sound like an enormous investment... yet Nvidia is selling these at almost charity prices compared to what the same silicon could fetch as AI accelerators for the enterprise segment.
 
Pass the popcorn, this thread is full of shilling. Red vs Green again.

The answer to everything is "wait for the reviews"
 
Like the memory temps: it just makes the products feel second-rate, even if they run within spec, and when Samsung fixed the temps, they "unfixed" the latencies and performance in synthetics was marginally affected.
Nothing AMD can do here. Memory is memory, and this is on the memory manufacturers. As if Nvidia hasn't had high memory temps, or a major hardware controversy every RTX generation?
But how can you say your product is more reliable when the temp delta between the core and memory is ~30°C?
GPU hot spot is much closer to memory temps, and the delta is much smaller. I haven't looked at the regular GPU temp in ages; I always look at the GPU hot-spot temp.
 
Pass the popcorn, this thread is full of shilling. Red vs Green again.

The answer to everything is "wait for the reviews"

Shilling for what, anyway... it's speculation on a what-if scenario for a what-if distant future. We're at an absolute minimum two years away from the next hardware release cycle.
 
The 7900 XTX's Navi 31 already has 96 compute units.
This means the new architecture will have much beefier compute units, rather than many thin ones relying on high parallelism.

The UDNA x1 part will probably have 100 billion transistors, and hopefully will be manufactured on the newest TSMC N2P lithography node.
 
The 7900 XTX's Navi 31 already has 96 compute units.
This means the new architecture will have much beefier compute units, rather than many thin ones relying on high parallelism.

The UDNA x1 part will probably have 100 billion transistors, and hopefully will be manufactured on the newest TSMC N2P lithography node.

Haha, and you expect that for $499 plus a $50 mail-in rebate; let me guess, it'll also have 64 GB of VRAM. Parallelism is the core concept of a GPU architecture; it has never been the problem. The problem is how to make the architecture scale so that all of its execution resources are well utilized, or rather, utilized at all. That's why a GPU is so difficult to design, develop and polish. Honestly, if none of that crossed your mind while expecting a 100B-transistor N2P GPU to be affordable... I guess it's OK to dream?

That's what any AMD GPU news thread falls into, especially when the resident trolls sound off like a broken record.

If the problem is always, and consistently everyone else, then it's time for some introspection ;)
 
Well, the RDNA4 cards arguably have better ratios than comparable Blackwell cards, but I still see people jumping on nVidia like crazy, even though Blackwell is trash: a pure stop-gap generation, nothing more. The 5090 stands out because it's huge and the performance difference compared to the 5080 is too evident; it's clear that the current product stack is missing a card that is a further cut-down of the 5090 to fill the gap between it and the 5080.

This isn't a surprise; there are people who are firmly in the Nvidia camp and won't leave regardless of what the competition is putting out. The same went for Intel before Arrow Lake!

Sure they do. The ones that got 5090s for $4000+, when they upgrade to a 6090, 7090 or 8090, would prefer to pay $3000. :laugh:

While this is true, the number of people who actually did this is very small when you look at the whole market.

Don't let the people on tech forums cloud your judgement; we are all a very small part of the market.
 
This isn't a surprise; there are people who are firmly in the Nvidia camp and won't leave regardless of what the competition is putting out.

What if Nvidia exits the market and they have no choice but to jump on the other ship?

Honestly, this makes no sense. It would make sense if the average Joe were a die-hard football-team fan, but IT people behaving illogically is a mystery. Maybe they are not intelligent after all?
 
While the 16GB versions of the 5060 Ti and 9060 XT seem to outsell the 8GB versions, I assume that with OEM systems it could be the complete opposite. The average consumer will buy a "gaming PC", and if they are lucky, that "gaming PC" won't just be an AMD APU. They will probably go for the 8GB graphics card, not because they know about VRAM and stuff and know that it is enough for them and the games they want to play, but because it will probably be $100 less than the system with the 16GB GPU.
That's been the case for a while now, especially with the last two xx60 Ti cards. Often, system integrators or OEMs will shy away from directly saying it's an 8GB card in the first place, to avoid dissuading a buyer who might have heard of the whole debacle, which I find rather insidious. Such cards find success through obfuscation, a lack of clear comparison, and a target customer they know wouldn't be able to reliably appreciate the value or viability of what they're selling.

As for flawed dies, they would need plenty of them to create a new model. If they don't have plenty, they might again create an OEM product that goes to one or two OEMs, or is sold to China, for example.
Fair enough, though at 3 nm the yield rate can vary from 80% to 90%, and somewhere in there is a proportion of dies with salvageable flaws, or supposedly 'functional' dies that otherwise bin too low to be usable for a standard fully-enabled model. The volume would still likely be an order of magnitude below the main 'XT' variant, but that's comparing millions to hundreds of thousands. It would have to be uncharacteristically popular to end up chronically starved of stock (and subsequently marked up).
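For a feel of the proportions, here's a minimal sketch under a Poisson defect model; the defect density and die size are assumptions picked to land in that 80-90% range:

```python
from math import exp, factorial

D0 = 0.06        # assumed defect density, defects per cm^2
AREA = 2.7       # assumed die size in cm^2 (~270 mm^2)
lam = D0 * AREA  # expected random defects per die

def p_defects(k: int) -> float:
    """Probability that a die picks up exactly k random defects (Poisson)."""
    return exp(-lam) * lam ** k / factorial(k)

print(f"perfect dies (full SKU):      {p_defects(0):.1%}")  # ~85%
print(f"1-defect dies (salvage pool): {p_defects(1):.1%}")  # ~14%
print(f"2+ defect dies:               {1 - p_defects(0) - p_defects(1):.1%}")
```

Only a fraction of those one-defect dies will have the flaw land somewhere salvageable (a CU rather than, say, the memory controller), which is consistent with the order-of-magnitude gap described above.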

Then again, the 5050 is rather dried up in that department currently, so maybe there's merit to some of the most popular cards also being the absolute cheapest.
 
Nothing AMD can do here. Memory is memory, and this is on the memory manufacturers. As if Nvidia hasn't had high memory temps, or a major hardware controversy every RTX generation?

GPU hot spot is much closer to memory temps, and the delta is much smaller. I haven't looked at the regular GPU temp in ages; I always look at the GPU hot-spot temp.
Yeah, but nVidia was fine with GDDR6X and now with GDDR7 as well, while AMD is stuck on 20 Gbps GDDR6 (even though they could probably have had 24 Gbps in 2025). It's not like that X memory is hard-wired for nVidia or whatever. As I said, perception-wise it makes them look poor, like they can't afford the latest "shit" and have to make do with what's available.
Again, I'm not saying there's a functional problem; the cards run fine. It's just that, perception-wise, they might seem lacking to some people, like with the feature set, the whole driver debate, etc.

AMD has to fight on all fronts with UDNA: it has to dominate their previous products (RDNA4), dominate previous nVidia cards (Blackwell), severely disrupt the used market (make people prefer buying new UDNA cards over used Blackwell and used RDNA4), and (at least) match the RTX 6000 series across the whole product stack.
Otherwise they'll just pick up scraps like jackals around a lion kill.
The 7900 XTX's Navi 31 already has 96 compute units.
Old, dusty ones that are matched by the 64 in the 9070 XT. How would a 96 CU RDNA4 card perform?
Then, how would a 96 CU UDNA card perform?
AMD will never admit that they made a mistake by not making a bigger die this generation, but it's so obvious that we don't need them to admit it.
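As a back-of-envelope answer to the first question, assuming sublinear CU scaling (the exponent is a guess; real scaling depends on clocks, bandwidth, and how well the front end feeds the extra CUs):

```python
# 9070 XT (64 CU) as the baseline for a hypothetical 96 CU RDNA4 part.
BASELINE_CUS = 64
TARGET_CUS = 96
SCALING_EXPONENT = 0.8  # assumed sublinear scaling with CU count

uplift = (TARGET_CUS / BASELINE_CUS) ** SCALING_EXPONENT
print(f"hypothetical 96 CU RDNA4 vs 9070 XT: +{uplift - 1:.0%}")  # about +38%
```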

But perhaps this generation was a sacrificial one, one where they made great progress but also some mistakes. The important thing is that they got their foot in the door, and if they learn from these mistakes, then UDNA will be Radeon's Ryzen moment. It's not impossible, not even improbable. The proof is that Blackwell is a stop-gap solution while nVidia waits for 2 nm, whereas RDNA4 is clear progress over the previous gen. It's easy to miss that progress because the product stack stops at the midrange.
 
Unless it's some earth-shattering improvement, I'm afraid I'll probably still go with Nvidia. Pure raster performance is not that relevant anymore, and the power of the RTX/DLSS ecosystem and its market penetration are just impossible to ignore and worth the premium for me. But I guess it'll be a good option for people who have time for tweaking stuff and playing with tools such as Optiscaler.
You sound like you are rich, so good luck with Nvidia :)
 
If the problem is always, and consistently everyone else, then it's time for some introspection

Good ol' fashioned case of the pot calling the kettle black here. Ironic.

Anyway, and aside from that, I was misremembering the CU count of Navi 48 as 48 rather than the correct 64, so yes, unlikely to be faster than a 5090 unless it's by a hair.
 
What if Nvidia exits the market and they have no choice but to jump on the other ship?

Honestly, this makes no sense - it makes sense if the average joe is a die-hard football team fan, but IT people behaving illogically is a mystery. Maybe they are not intelligent, after all?
Nvidia exiting the dGPU market isn't very likely, even though the same Nvidia customers here cheerlead the company for AI and for no longer being a gaming GPU company, not realizing the leather-jacket man is just handing GeForce customers the leftovers.
But even IT people like to pick teams; it makes no sense to me.
the important thing is that they got their foot in the door
I think AMD getting a foothold with RDNA4 is much more important than chasing the high end, especially when high-end buyers will open their wallets for an Nvidia card regardless; just see the comments from people here saying they'll buy a GeForce card no matter what.
Good ol' fashioned case of the pot calling the kettle black here. Ironic.

Anyway, and aside from that, I was misremembering the CU count of Navi 48 as 48 rather than the correct 64, so yes, unlikely to be faster than a 5090 unless it's by a hair.
The same people who always derail these threads can't see the irony in it.
If the CUs are beefed up even more than RDNA4's, and RDNA4 already has higher IPC than Blackwell, AMD does have a chance of competing with the RTX 6000 series.
Though I'd be more interested in a chiplet-design midrange UDNA card; the high end has gotten so expensive that it's a niche of a niche of potential sales.
 
500 mm² of extremely valuable TSMC silicon on a Radeon product makes absolutely zero business sense to AMD at the moment, especially if attempting to target the price ranges that their customer base demands. It's a paradoxical situation: even within the relatively low-margin gaming segment, AMD's customers are aggressively value-minded and happy to settle for inferior products as long as they save a buck, yet at the same time, they simply cannot be, and are not, the company's priority. It's a niche with lots of room to grow, as long as the fabs are sorted out. Except that they are never going to be.

Intel, like AMD, relies on TSMC and has even lower priority; Nvidia isn't interested because they have the premium segments in a chokehold; and the Chinese products from the likes of Moore Threads and Glenfly aren't competitive. While this could theoretically be solved by restarting production of GPUs that retain advanced feature sets but are built on older nodes (such as RDNA 2 or Ampere), there is zero business incentive to do so: those nodes can effortlessly be used to supply semicustom demand (i.e. game consoles and the automotive sector), and it would only serve to deflate prices on their latest products and stabilize the market, which is something shareholders are ultimately not interested in.

So unless they can make a GPU that pulls out all the stops and ticks every box, and can be sold for an equally high price, they aren't going to make one. To the hyperconscious gamer with a small wallet, an RTX 5090 might sound like an enormous investment... yet Nvidia is selling these at almost charity prices compared to what the same silicon could fetch as AI accelerators for the enterprise segment.

All true, which is exactly why AMD keeps pursuing chiplets. It's really the holy grail for them if they can make it work, since they'd still get to make "reasonably" sized GPU dies and scale up or down by simply adding more chiplets.

RDNA3 was a really cool proof of concept. RDNA4 was originally supposed to be a chiplet design, but everything larger than the N44 got scrapped and replaced at the last minute by the N48, presumably because it was running into the same clock-speed issues RDNA3 did.

Presumably N5x was still being designed as a chiplet-based arch, so if AMD has had time to smooth out some of the clock-speed issues associated with RDNA3, there is still a chance for RDNA5/UDNA to be chiplet-based and deliver the same 800 mm² of silicon NV does, but 400 mm² at a time.
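The appeal in rough numbers: small dies can be tested and binned before packaging, so a defective 400 mm² die wastes half the silicon a defective 800 mm² one does. A minimal sketch with an assumed wafer price and defect density (packaging and interposer costs excluded):

```python
import math

WAFER_COST = 17_000  # USD per wafer, assumed
D0 = 0.06            # defects per cm^2, assumed

def dies_per_wafer(area_mm2: float, dia_mm: float = 300) -> int:
    """Dies-per-wafer approximation with an edge-loss correction."""
    r = dia_mm / 2
    return int(math.pi * r ** 2 / area_mm2
               - math.pi * dia_mm / math.sqrt(2 * area_mm2))

def good_dies(area_mm2: float) -> float:
    """Dies per wafer that survive a Poisson defect model."""
    return dies_per_wafer(area_mm2) * math.exp(-D0 * area_mm2 / 100)

mono = WAFER_COST / good_dies(800)      # one good 800 mm^2 die
pair = 2 * WAFER_COST / good_dies(400)  # two good, pre-binned 400 mm^2 dies
print(f"monolithic 800 mm^2: ${mono:.0f} of silicon per product")
print(f"2 x 400 mm^2 dies:   ${pair:.0f} of silicon per product")
```

With those assumptions the chiplet pair comes out roughly 30% cheaper in silicon, which is the margin that has to pay for the interposer and the sync overhead.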
 
I see this maybe matching a 4090; I highly doubt it'll match or beat a 5090. At $1000, though, matching a 4090 isn't bad. It's what we expected the 5080 to do; maybe this will light a fire under Nvidia.
 
No and no. They only need to bring down costs and deliver better prices for mid-range products, and that's a win. Raster is still king. Nobody cares about fake frames and distorted frames.
Have you been living in a cave for the last decade?

Do you know why nobody tests fill rate or texture rate anymore? Because raster is obsolete. It’s irrelevant.
 
I don't think it's going to happen. I don't believe RDNA 3 sold well enough to justify making high-end products again.
 
I don't think it's going to happen. I don't believe RDNA 3 sold well enough to justify making high-end products again.

Sales of the 7900 XTX were historically poor, but they picked up well after DeepSeek's public release. I think it sold almost as much as the 4080 overall, which is a ton less than the 4090 did, but a decent amount nonetheless.
 
I want to know, and obviously will have to wait to see, what kind of IPC uplift they can extract.

A node shrink for higher clocks is good, and a 50% increase in CUs is also good, but if the IPC stays the same, then even the next-gen flagship will fall behind Nvidia's current-gen flagship.
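To put numbers on that, here's the naive throughput model (performance scaling roughly with CUs x clock x IPC; every multiplier below is an assumption):

```python
# Naive gen-on-gen model: performance ~ CUs x clock x IPC.
cu_gain = 96 / 64   # rumored 96 CUs vs a 64 CU 9070 XT
clock_gain = 1.10   # assumed clock bump from the node shrink
ipc_gain = 1.00     # the pessimistic flat-IPC case described above

print(f"estimated uplift: {cu_gain * clock_gain * ipc_gain:.2f}x")  # ~1.65x
```

A flat-IPC ~1.65x over a 9070 XT lands near today's flagship tier, which is exactly why the IPC question decides whether it can catch Nvidia's next one.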

I think in order to be attractive, it needs to trade blows with, or be faster than, a 5090/6080 at more than a 5-10% discount.

In my market right now (Australia), the 9070 XT trades blows with the 5070 Ti but is 20+% cheaper.

Regardless of the tier they end up competing at, if they're not at least 20% cheaper, they're not going to entice many Nvidia owners away (or enough to swing the market share back a little).

I will throw this out there, though, for the "FSR4 is only supported by a single generation of cards" crowd: remember that DLSS didn't work on every generation of Nvidia card, nor does every feature of it run on every RTX card. By next gen, AMD will have two generations of FSR4-capable cards; Nvidia will have five for DLSS.
 