
AMD's Upcoming UDNA / RDNA 5 GPU Could Feature 96 CUs and 384-bit Memory Bus

If you think that Optiscaler is some magic wand which allows you to instantly "enable" things, without learning anything, checking the wiki, compatibility, etc., then you clearly have never used it.
As I said before, people who are interested in replacing DLSS DLLs will probably know, or be able to easily figure out, how to use these kinds of programs.

And as I also said before, though I see you avoid giving a straight answer, you make it look like someone needs to invest a couple of hours per game to use Optiscaler. So, are you saying that someone needs to spend two hours messing with stuff, with the help of Optiscaler, just to make one game work with a different upscaler?

What I would like to see is AMD starting to price their cards reasonably, like they used to.
They tried it for over a decade. People were laughing in AMD's face and kept buying Nvidia cards anyway.

"Please AMD, release something good enough but much cheaper than Nvidia, so that Nvidia lowers prices and I can buy cheaper Nvidia GPUs"
combined with
"AMD cards are trash, they are for poor people and they have the worst drivers imaginable".

I guess the age of gifts has reached its end. AMD sometimes offers better GPUs at lower prices, and people will again choose Nvidia. The same will happen now with people buying the 8GB RTX 5050 and RTX 5060 over the faster 8GB RX 9060 XT.
 
150-200 CUs please.
Not unless AMD commits to making a ~800mm² chip. I don't see it happening.
GDDR7 on the table? Surely not, this is AMD. The 384-bit bus is to compensate for the GDDR6 ;)
You do realize that we are talking about a 2027 product here, meant to come out ~1.5 years from now?
Of course it will use G7. It makes sense. Historically AMD has been the first with many memory standards.

Even when they weren't, they always moved to the new standard eventually. G7 was very new when RDNA4 launched and Nvidia likely hogged all the capacity.
It's not whether you CAN, it's whether you have TIME. I can barely scrape together a few hours per week to play games themselves, and then there is adding the necessary mods/ReShades. Deep diving into things like Optiscaler leaves very little time for playing itself (and that's aside from the fact it's not always perfect).
Download, extract, install. If you don't have the time or patience to do that, then get a console.
Personally I am still hoping for a 9080 XT / 9070 XTX with 24GB of VRAM.
Not going to happen. This would require an entirely new chip with a 384-bit bus, or 3GB G6 modules that don't exist.
We don't even know if RDNA4's memory controller supports G7, which it would need to use 3GB G7 chips and reach 24GB on a 256-bit bus.
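For what it's worth, the capacity arithmetic is straightforward: each GDDR6/GDDR7 device sits on a 32-bit channel, so the bus width fixes the module count. A quick sketch (napkin math only, not a claim about any real SKU):

```python
# Napkin math: VRAM capacity from bus width and per-device density.
# Each GDDR6/GDDR7 device uses a 32-bit interface; 3GB (24Gbit)
# densities are a GDDR7-only thing so far.
def vram_gb(bus_bits: int, density_gb: int, clamshell: bool = False) -> int:
    devices = bus_bits // 32          # one device per 32-bit channel
    return devices * density_gb * (2 if clamshell else 1)

print(vram_gb(256, 2))  # 16 GB: the 9070 XT as shipped
print(vram_gb(256, 3))  # 24 GB: needs 3GB modules, i.e. G7 support
print(vram_gb(384, 2))  # 24 GB: needs an entirely new 384-bit chip
```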
I was thinking that you are quite optimistic about the needed CU count. And then I remembered that the 5090 has 170 SMs.
Different architecture. CU and SM counts can't be directly compared. Also very different chip sizes: AMD does not like to make huge ~800mm² chips, and hence they won't have 200 CU chips.
AMD doesn't need 150-200 CUs. The 9070 XT performs more or less like a 7900 XTX based on TPU's ranking (9070 XT 6% slower). So a 9000-series card with 96 CUs would probably land between the 5080 and the 4090. With 128 CUs it would be at 5090 level of performance.
That is what I concluded as well, based on the per-CU performance of RDNA4. But that was a theoretical exercise for an RDNA4 card that will likely never exist.
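For anyone curious, here is roughly what that exercise looks like. The inputs are assumptions pulled from TPU's relative-performance summary, and the 80% scaling factor is a guess, not a measurement:

```python
# Inputs are assumptions from TPU's relative-performance chart,
# not measurements of any real 96 CU card.
cu_9070xt, cu_7900xtx = 64, 96
perf_ratio = 0.94                     # 9070 XT ~6% slower than 7900 XTX

# Per-CU throughput gain of RDNA4 over RDNA3 (clock differences folded in):
per_cu = perf_ratio * cu_7900xtx / cu_9070xt
print(f"RDNA4 per-CU vs RDNA3: ~{per_cu:.2f}x")       # ~1.41x

# Hypothetical 96 CU RDNA4, assuming ~80% scaling on the extra CUs:
projected = 1 + 0.8 * (96 / cu_9070xt - 1)
print(f"Projected vs 9070 XT: ~{projected:.2f}x")     # ~1.40x
```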
The problem is that to show off, you actually need to be in the lead. With those specs, unless there's a historically unprecedented IPC gain per WGP, that GPU isn't gonna be a threat to an RTX 5090. It'd be impressive if it released today, but by late 2026/early 2027? That's another story entirely.
A 9070 XT with 96 CUs would likely be around 4090 performance.
We can assume that per-CU performance increases in the next gen, so I would not be surprised if a 96 CU RDNA5/UDNA part lands around 5090 performance.

The key here is 2027. Nvidia will likely release the 6090 by then, and perhaps the 6080 will finally beat the 4090.
But the biggest question is price. Nvidia will likely maintain the halo product, but if AMD keeps a hard cap of $999 for their top-end card, then they will still have the advantage over the 6080 and the 4090/5090.

Even in 2027 I would imagine people would like ~5090 performance for ~$1100 (assuming MSRPs will not hold again).
 
The same will happen now with people buying the 8GB RTX 5050 and RTX 5060 over the faster 8GB RX 9060 XT.
They deserve it for not seizing the opportunity to make a $300 12 GB card. If Intel can do it at $250, and NVIDIA could do it at $300 with the RTX 3060, then so could AMD. It boggles the mind that they instead copied the same dumb/insulting product segmentation as NVIDIA.
 
A 9070 XT with 96 CUs would likely be around 4090 performance.
We can assume that per-CU performance increases in the next gen, so I would not be surprised if a 96 CU RDNA5/UDNA part lands around 5090 performance.
It's a pity that AMD did not release an enthusiast-class GPU this time. For sure there is potential for such a card with the RDNA4 improvements in place.
 
They deserve it for not seizing the opportunity to make a $300 12 GB card. If Intel can do it at $250, and NVIDIA could do it at $300 with the RTX 3060, then so could AMD. It boggles the mind that they instead copied the same dumb/insulting product segmentation as NVIDIA.
That's what I am saying. Always finding an excuse NOT to buy an AMD GPU. It boggles the mind that people find business practice X insulting when AMD does it, and justified when Intel/Nvidia does it.

No, no, no. AMD should stop offering gifts. Prices close to Nvidia, features close to Nvidia, and every new feature exclusive to new cards. After 3-5-10 years people might... MIGHT stop pointing the finger at AMD, and really, I mean REALLY, start questioning the practices of the $4 trillion company with the 75% profit margin that controls over 80% of the gaming market and influences developers, the tech press, and even big AIBs, and start doing something positive with their wallets. To this day they are just trying to find any excuse to justify using their wallet to help Nvidia's monopoly.
 
Folks like you are living memes :) Nobody cares? What was the AMD GPU market share again?
Drowning in the wave of corporate fashion imposed by Nvidia is not a smart move, nor does it make you great. Moreover, when a person is drowning, he cries for help; he doesn't make a green advertisement.
 
That's what I am saying. Always finding an excuse NOT to buy an AMD GPU. It boggles the mind that people find business practice X insulting when AMD does it, and justified when Intel/Nvidia does it.

No, no, no. AMD should stop offering gifts. Prices close to Nvidia, features close to Nvidia, and every new feature exclusive to new cards. After 3-5-10 years people might... MIGHT stop pointing the finger at AMD, and really, I mean REALLY, start questioning the practices of the $4 trillion company with the 75% profit margin that controls over 80% of the gaming market and influences developers, the tech press, and even big AIBs, and start doing something positive with their wallets. To this day they are just trying to find any excuse to justify using their wallet to help Nvidia's monopoly.
The heck are you talking about? When was the last time AMD offered any gifts? 15 years ago? From 2015 onwards, tell me in which generation AMD was offering "gifts"?

In the last 4 generations they haven't offered anything; it's the usual Nvidia price with a $9.99 discount, worse features, worse support.

DLSS Swapper in most cases does not violate anti-cheat.
Optiscaler does.
Don't really care though; I'm not going to start downloading random crap to use DLSS 4. I'm not going to do Nvidia's work. I just wanna play the game.
 
While an RDNA4 GPU with those specs would be between the 4090 and the 5090, AMD will have to at least double 9070 XT performance (especially in RT and PT) to beat the 5090's successor. With only 50% more CUs and a wider memory interface, they would have to improve performance via IPC and clocks as much as they did between RDNA3 and RDNA4, again, to do that.

I would be content with something around 5090 level, though, which seems doable with a moderate IPC improvement and those specs. Just don't go crazy on the price again.
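The implied requirement is simple arithmetic; here it is spelled out, with both the 2x target and ideal CU scaling being assumptions:

```python
# The implied requirement as plain arithmetic; both numbers are
# assumptions (a 2x overall target, and ideal scaling from 64 to 96 CUs).
target_total = 2.0                   # "double 9070 XT performance"
cu_growth = 96 / 64                  # 1.5x the CU count
needed_per_cu = target_total / cu_growth
print(f"Required IPC x clock gain per CU: ~{needed_per_cu:.2f}x")  # ~1.33x
```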
 
The problem is that to show off, you actually need to be in the lead. With those specs, unless there's a historically unprecedented IPC gain per WGP, that GPU isn't gonna be a threat to an RTX 5090. It'd be impressive if it released today, but by late 2026/early 2027? That's another story entirely.

Based on the 9060 XT to 9070 XT (@1440p), a doubling of hardware results in a ~90% gain. Even considering a worse scaling efficiency of 65-70%, it would already be faster than a 5090. That's before considering any IPC gain, a fact people seem to miss, with RDNA 4 being significantly faster than RDNA 3 per unit. I don't expect it to be faster than a 6090 or whatever halo part comes next, but it will certainly be faster than a 5090.
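To make the model explicit: this is simple linear scaling with an efficiency factor, where the ~90% figure comes from the observed 9060 XT to 9070 XT jump and everything about a 96 CU part is an assumption:

```python
# Linear-scaling model: gain = 1 + efficiency * (hw_ratio - 1).
# The 0.9 efficiency is the observed 9060 XT -> 9070 XT doubling;
# the 96 CU figures are speculation.
def projected_gain(hw_ratio: float, efficiency: float) -> float:
    return 1 + efficiency * (hw_ratio - 1)

print(projected_gain(2.0, 0.90))   # ~1.90x: the observed 32 -> 64 CU jump
print(projected_gain(1.5, 0.65))   # ~1.33x: 96 vs 64 CUs, pessimistic
print(projected_gain(1.5, 0.70))   # ~1.35x: 96 vs 64 CUs, optimistic
```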
 
The key here is 2027. Nvidia will likely release the 6090 by then, and perhaps the 6080 will finally beat the 4090.
But the biggest question is price.
100% it will beat it (I'd say about halfway between the 4090 and the 5090). Very likely a 24GB buffer (3GB chips) over a 256-bit bus. Power draw not higher than the current 5080. The risk though is that the MSRP will be $1200 instead of $1000. This is what happens when there is no competition at higher price brackets.
It's a pity that AMD did not release an enthusiast-class GPU this time. For sure there is potential for such a card with the RDNA4 improvements in place.
Absolutely. The first rule of winning a fight is to show up. They didn't even show up.
And Nvidia is preparing the Supers for January 2026. AMD doesn't have a counter for this; doubling the VRAM on the 9070 XT and overclocking the living daylights out of it would be a huge mistake, as such a card would have a horrible power draw.

A bigger die with a larger bus would've helped them compete successfully against the 5080, which is gimped by its VRAM buffer. It also would've allowed a cut-down version to slot in between the flagship and the 9070 XT, and, very importantly, to sabotage the secondary market where the 4090 reigns supreme. Offering a valid alternative to the 4090 is not negligible, and the satisfaction of a card blocking the 4090 sellers is itself worth its weight in gold.
 
100% it will beat it (I'd say about halfway between the 4090 and the 5090). Very likely a 24GB buffer (3GB chips) over a 256-bit bus. Power draw not higher than the current 5080. The risk though is that the MSRP will be $1200 instead of $1000. This is what happens when there is no competition at higher price brackets.

Absolutely. The first rule of winning a fight is to show up. They didn't even show up.
And Nvidia is preparing the Supers for January 2026. AMD doesn't have a counter for this; doubling the VRAM on the 9070 XT and overclocking the living daylights out of it would be a huge mistake, as such a card would have a horrible power draw.

A bigger die with a larger bus would've helped them compete successfully against the 5080, which is gimped by its VRAM buffer. It also would've allowed a cut-down version to slot in between the flagship and the 9070 XT, and, very importantly, to sabotage the secondary market where the 4090 reigns supreme. Offering a valid alternative to the 4090 is not negligible, and the satisfaction of a card blocking the 4090 sellers is itself worth its weight in gold.

A 96 CU part is easily going to pull 400-450W; there's no chance this theoretical card is going to pull less power than a 5080. Not to mention it'd probably be a mistake to pair it with anything less than a 384-bit bus (more power).
 
A 96 CU part is easily going to pull 400-450W; there's no chance this theoretical card is going to pull less power than a 5080. Not to mention it'd probably be a mistake to pair it with anything less than a 384-bit bus (more power).
On the current 4nm, yes, though it depends on clock speeds too, as higher-CU parts are usually clocked lower. I've seen momentary spikes as high as 600W+ on my 9070 XT (likely lasting nano- or milliseconds), which is why it's important to have a good overspecced PSU (I have a 1kW Titanium).
96 CUs on 2nm... who knows what the power will be there.
 
A 96 CU part is easily going to pull 400-450W; there's no chance this theoretical card is going to pull less power than a 5080. Not to mention it'd probably be a mistake to pair it with anything less than a 384-bit bus (more power).
I was talking about the 6080 in my first paragraph.
 
Or it could feature more, or it could feature less. It could be as fast as a 5090! Or it could be faster! Miracles on gossip street. Someone should make a Daily Star front-page parody with all the hardware rumours.
 
Or it could feature more, or it could feature less. It could be as fast as a 5090! Or it could be faster! Miracles on gossip street. Someone should make a Daily Star front-page parody with all the hardware rumours.

It's an article discussing potential new parts; you don't have to post. Novel idea, isn't it?
 
It's an article discussing potential new parts; you don't have to post. Novel idea, isn't it?
God forbid someone makes fun of yet another article about an anonymous Twitter post.
 
paired with a 384-bit bus for memory. We still don't know what type of memory AMD will ultimately use, but an early assumption could be that GDDR7 is on the table
Using 30Gbps GDDR7 modules, that would be about 1.44TB/s of memory bandwidth, which is pretty nice. If AMD manages to source 40Gbps modules by then, that'd be close to 2TB/s, which is even faster than a 5090 with its 512-bit bus and 28Gbps modules.
With 24Gbit modules, that 384-bit bus would provide a nice framebuffer of 36GB, or 72GB with clamshell (the latter will likely only be available for enterprise offerings). For the right price, this would be a really compelling product.
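For reference, those figures fall out of the usual bandwidth formula (bus width / 8 x per-pin data rate); the module speeds for a 2027 part are of course speculative:

```python
# Bandwidth = bus width (bits) / 8 * per-pin data rate (Gbps).
def bandwidth_tbs(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin / 1000   # GB/s -> TB/s

print(bandwidth_tbs(384, 30))   # ~1.44 TB/s: 30Gbps G7 on 384-bit
print(bandwidth_tbs(384, 40))   # ~1.92 TB/s: if 40Gbps can be sourced
print(bandwidth_tbs(512, 28))   # ~1.79 TB/s: RTX 5090, for comparison
```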

It's not that simple: the underlying architecture might be the same, but the silicon configuration is different. The 5090 silicon isn't used in the DGX rack used for HPC.
The X100 chips are solely meant for DC, yes, the same happened with the V100, A100, H100, etc etc. Those were also the only HPC-capable chips since they had proper FP64, unlike the other parts.
However, the GB202 is still widely used in the RTX PRO 6000, which is a really great DC part for inference and even small-scale training. It's also being widely used in workstations.
 
If we are really, really lucky it might be 50% faster than the current-gen 9070 XT.
 
If AMD manages to source 40Gbps modules by then
It's possible that 42.5Gbps modules will be in production by the time the next generation launches on the market, though that may not be early enough for them to be used. I suppose 36-37Gbps modules will be in production later this year; that is in Samsung's plans. With mass production early next year, those modules could be used in next-generation graphics cards.
 
I wouldn't expect a new product at $230, no matter how cut down it is. AMD will not waste TSMC N3E wafers on a low-end product. But we could hope for a cheaper RX 7600 8GB at $200, because I suppose TSMC's 6nm is cheaper today than it was when the 7000 series was released.

A little napkin math comparing N6 to N4P proves this notion mostly correct; in terms of viability, a cheaper old product is better than attempting to shrink a new one. Though with something like a cut-down Navi 55 (assuming the naming pattern holds), the whole point is attempting to salvage tolerably flawed dies as opposed to writing them off as a total loss.
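Roughly that napkin math, spelled out. The die sizes are the commonly reported figures for Navi 33 and Navi 44; the wafer prices are loose rumored ballparks, purely illustrative assumptions:

```python
import math

# Standard dies-per-wafer approximation (ignores defect density and
# scribe lines). Wafer prices below are rumored ballparks, not quotes.
def dies_per_wafer(die_mm2: float, wafer_d_mm: float = 300) -> int:
    r = wafer_d_mm / 2
    return int(math.pi * r**2 / die_mm2
               - math.pi * wafer_d_mm / math.sqrt(2 * die_mm2))

for node, wafer_usd, die_mm2 in [("N6, Navi 33 (~204mm²)", 10_000, 204),
                                 ("N4P, Navi 44 (~199mm²)", 17_000, 199)]:
    n = dies_per_wafer(die_mm2)
    print(f"{node}: ~{n} dies/wafer, ~${wafer_usd / n:.0f} per die")
```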

The 9060 XT 8GB is far outsold by its 16GB brother as-is. I don't see why AMD wouldn't shift their strategy towards fully investing these lesser-yield dies into a product that, while not a breadwinner by any definition, props up overall margin by offsetting what would otherwise be extra cost on top of the good dies.
 
Based on the 9060 XT to 9070 XT (@1440p), a doubling of hardware results in a ~90% gain. Even considering a worse scaling efficiency of 65-70%, it would already be faster than a 5090. That's before considering any IPC gain, a fact people seem to miss, with RDNA 4 being significantly faster than RDNA 3 per unit. I don't expect it to be faster than a 6090 or whatever halo part comes next, but it will certainly be faster than a 5090.

I for one think you're being optimistic, but a top-tier 2027 GPU needs to beat a top-tier 2025 GPU to even begin to stake a claim to leadership. Anything else won't do.
 
I for one think you're being optimistic, but a top-tier 2027 GPU needs to beat a top-tier 2025 GPU to even begin to stake a claim to leadership. Anything else won't do.

I don't think so; with those sorts of specs, a 384-bit bus with enough bandwidth should be plenty to beat a 5090. Price is everything tbh; if it's priced right it will sell. The market for $2000+ consumer GPUs is minuscule, and not a game they're going to win moving back into enthusiast-tier parts after sitting out a generation. *edit: brain decided to think Navi 48 had 48 compute units today and not the correct 64. So probably between a 4090 and a 5090.

The more interesting question is whether Nvidia will actually provide an IPC increase that benefits consumer graphics next gen, or just slap on more cores and increase clocks, while they're already pushing the limits of die size, and call it a day. Also, will AMD make a second attempt at a chiplet-style design?
 