
[TT] AMD's rumored Radeon RX 9080 XT: up to 32GB of faster GDDR7, up to 4GHz GPU clocks, 450W power

They could “do” anything.

Let’s say the Navi 4X was fully designed and was cancelled just before first silicon was ordered. You are now 18 months away from having a product on the shelves - 3 months to get first silicon back, 12 months of validation, 3 more months for production silicon.

That makes the 9080 a 2027 product. Even AMD isn’t dumb enough to spend the money and effort to release a “high end” RDNA 4 product in 2027.
Yeah that makes sense. I didn't realize it would take that much time to bring a hypothetical big Navi 4X to market if it was already design complete.
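For what it's worth, a quick back-of-the-envelope of that timeline - the start date is my assumption of roughly mid-2025, the phase lengths are the ones given above:

Python:
# Tape-out-to-shelf estimate; start date is an assumption (~mid-2025),
# phase lengths are from the post above.
start_year, start_month = 2025, 7
phases_months = {"first silicon back": 3, "validation": 12, "production silicon": 3}

total = sum(phases_months.values())              # 18 months
month_index = start_month + total
ship_year = start_year + (month_index - 1) // 12
ship_month = (month_index - 1) % 12 + 1
print(f"{total} months -> on shelves around {ship_year}-{ship_month:02d}")
# 18 months -> on shelves around 2027-01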
 
^^ Someone cranked the RTX on that jacket.. woah.
 
Bro really said

And they were using the high preset, no RT and dynamic resolution scaling lolol :roll:

But surely we've got it all wrong and it's an Nvidia fan double standard though, right? Some people either need to work on reading comprehension, or take a good hard look at how their own bias contaminates their judgement, or both.
DLSS looks like crap, dynamic all the way :roll:
 
And they were using the high preset, no RT and dynamic resolution scaling lolol :roll:

But surely we've got it all wrong and it's an Nvidia fan double standard though, right? Some people either need to work on reading comprehension, or take a good hard look at how their own bias contaminates their judgement, or both.
The double standards are indeed interesting: DRS is bad, but DLSS and fake frames are "better than native", or so the influencers say.

Some really do need to work on their bias and stop trashing AMD threads with toxic brand favoritism.

Yeah that makes sense. I didn't realize it would take that much time to bring a hypothetical big Navi 4X to market if it was already design complete.
I think it depends if AMD had a pro or workstation card ready, but it doesn't make much sense now, especially when they're working on UDNA/RDNA5.
 
The double standards are indeed interesting: DRS is bad, but DLSS and fake frames are "better than native", or so the influencers say.

Some really do need to work on their bias and stop trashing AMD threads with toxic brand favoritism.


I think it depends if AMD had a pro or workstation card ready, but it doesn't make much sense now, especially when they're working on UDNA/RDNA5.
Nobody cares about the brand. I'd be saying the exact same things if he claimed his 4060ti is getting 160 fps in spiderman 2 maxed out. There is no card currently on the planet that can do it. In fact, I bet money not even a 6090 will be able to. But nope, we should believe him because AMD. Goddamn..
 
Some really do need to work on their bias and stop trashing AMD threads with toxic brand favoritism.
It's hilarious to me that you don't even have the self-awareness to see what you're doing here. I'd ask you to self-reflect, re-read the comments and see why it's preposterous, but I get the impression you know what you're doing and don't care that it's farcical.

Someone made an incorrect statement, they were corrected, and then you, who weren't part of that at all, jumped in with toxic anti-Nvidia statements completely unrelated to the point at hand, making it seem like someone said something they didn't and painting them as a fanboy - which might have been fair if that were actually what they said.

So congratulations for winning the argument you made up in your own head I guess. For anyone with basic reading comprehension, the truth is obvious.
 
Really, I just read the last couple of pages... seems like a pointless fight. The fact that DRS is on makes it entirely unsurprising that the frame rate is what's being claimed, because, you know, the entire point of DRS is to reduce the resolution until that frame rate is met, at the cost of image quality. :)
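For anyone who hasn't looked at how these systems behave, here's a minimal sketch of the DRS idea - the target, bounds and cost model are made-up assumptions, not any particular game's implementation:

Python:
# Minimal sketch of dynamic resolution scaling: nudge the render scale until
# the frame-time target is met, trading image quality for frame rate.
TARGET_MS = 1000 / 60            # aim for 60 fps
MIN_SCALE, MAX_SCALE = 0.5, 1.0  # per-axis resolution scale limits

def update_scale(scale: float, last_frame_ms: float) -> float:
    error = last_frame_ms / TARGET_MS      # >1.0 means the frame was too slow
    scale /= error ** 0.5                  # GPU cost roughly tracks scale^2 (pixel count)
    return max(MIN_SCALE, min(MAX_SCALE, scale))

scale = 1.0
for _ in range(4):
    frame_ms = 25.0 * scale ** 2           # pretend the GPU needs 25 ms at native res
    scale = update_scale(scale, frame_ms)
    print(f"scale {scale:.2f} -> next frame ~{25.0 * scale ** 2:.1f} ms")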
 
Really, I just read the last couple of pages.

Yep, trainwreck. The thread has been derailed once again. It's supposed to be about a "Rumored" 9080 XT. Somehow, it's turned into a Spider-Man 2 benchmark thread for AMD/Nvidia fans. Have to lol though.. :rolleyes:


If we don't get back on track either me or another mod will be closing it.
 
It promises 15-40% better gaming performance than the RX 9070 XT, potentially outperforming NVIDIA’s RTX 5080 and rivaling the RTX 5080 SUPER.
Honestly, AMD should not bother going this high with RDNA 4 unless they can replace the 4090. This doesn't cut it.

Gamers who spend 80 class money buy NVIDIA. They'll need a seriously compelling product to sway that mindshare.

All they need to do this generation is flood the market with 9070s and 9060s and keep prices good. Save the better cards for UDNA.
 
Sure, MLID. And I'm the King of England. Enjoy the ad revenue from the droves of idealistic dreamers who genuinely think AMD can conjure such a card out of thin air; I guess if you ask the genie in the bottle really nicely, it's probably gonna happen. I'll eat my words if they ship anything with Navi 40.



Yup, my thoughts exactly. It's a good thing that dreaming is free, because the chance that you will run GDDR7, especially the higher 32 Gbps bin, off a GDDR6 PHY is about as high as me actually being the King of England.



There are two things to account for here: one is actual VRAM usage vs. allocation, and the other is that Monster Hunter Wilds has the most absolute dogshit port in recent memory, with grotesque RAM and VRAM requirements.

With all the commotion over VRAM recently, I've actually decided to do some research to gauge the viability of low-memory dGPUs, and I happened to have the perfect test subject on hand. If I manage to come up with a balanced suite of games that makes sense, I will make a thread about it sometime - but what I can tell you is that I've found the claims that 8 GB GPUs are no longer operable to be somewhere between greatly exaggerated and utter tripe. Most games will run on 4 GB with low enough settings, but here's a taste:

With a healthy dose of DLSS (25% scale) and low settings, I was able to run Black Myth Wukong at 4K, on a GPU that's barely a step above what you'll find in a Nintendo Switch 2:

View attachment 401934
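For context, a quick sketch of what 25% scale works out to internally, assuming the percentage is applied per axis, as render-scale sliders usually are:

Python:
# What "25% scale" at 4K renders internally (per-axis scale assumed).
native_w, native_h = 3840, 2160
scale = 0.25
print(f"internal resolution: {int(native_w * scale)}x{int(native_h * scale)}")  # 960x540
print(f"pixels shaded vs native 4K: {scale ** 2:.1%}")                          # ~6%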

Here, I'm providing the CapFrameX profiling data of this benchmark run as well (JSON included in the zip attached to this post, if you want to load it on the software yourself):

View attachment 401936
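If you want to crunch the export yourself, this is the kind of summary people usually pull out of a frame-time capture - average fps and a "1% low" computed as the average of the slowest 1% of frames, which is one common method. The list below is a placeholder, not my actual data; feed it the per-frame times in milliseconds:

Python:
def summarize(frame_times_ms: list[float]) -> dict[str, float]:
    slowest = sorted(frame_times_ms, reverse=True)
    worst_1pct = slowest[: max(1, len(slowest) // 100)]   # slowest 1% of frames
    return {
        "avg_fps": 1000 * len(frame_times_ms) / sum(frame_times_ms),
        "one_pct_low_fps": 1000 * len(worst_1pct) / sum(worst_1pct),
    }

print(summarize([16.7, 16.9, 17.1, 33.4, 16.8, 17.0]))   # placeholder frame times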

MH Wilds, on the other hand, I didn't even bother with. It was a total write-off from the start: the game is simply not functional, its memory management is terrible, and it actively malfunctions with all sorts of performance, shading and texturing issues in a VRAM-limited scenario. It also crashes extremely frequently; only one run out of the 5 I tried completed.

View attachment 401938

This indicates more of a problem with MH Wilds than a problem with low VRAM hardware in itself - this game has a very bad port and we, as gamers, must demand better.
If there is a way I can work with you on this project, I will.

I too am sick of the BS about VRAM. It's heavily pushed by HW Unboxed and Gamers Nexus. And while there is truth in what they say, they're exaggerating the issue by using ultra settings. Most games look no different between high and ultra, while ultra is a performance killer.

Anyway.... good post
 
If there is a way I can work with you on this project, I will.

I too am sick of the BS about VRAM. It's heavily pushed by HW Unboxed and Gamers Nexus. And while there is truth in what they say, they're exaggerating the issue by using ultra settings. Most games look no different between high and ultra, while ultra is a performance killer.

Anyway.... good post
Interesting to see what sort of effect this has on usage going forward.


"The GPU is no longer just getting faster – it is getting smarter. And that is the real progress."
 
The original reason I brought in Monster Hunter Wilds as a VRAM hog is that I was trying to make a point about how the 32 GB in the rumored card would be a great feature for its longevity. I would like at least 24 GB in a card that does not consume too much power and is not a fire hazard for one reason or another. 4K is rapidly becoming the next standard non-esports monitor and TV resolution, so 16 GB cards are starting to have their VRAM buffer margins eaten away by the GPU needing large enough buffers to render such big frames.

One of the features of GDDR7 is that certain non-power-of-2 chip sizes, such as 24 Gb per chip, become possible. With GDDR6X and earlier VRAM technologies, you are limited to power-of-2 sizes such as 16 Gb per chip. This can help video card vendors offer more VRAM size options. I feel that GPU vendors restricting which GPUs can get which VRAM sizes should be an antitrust issue, because they often do it to ensure that you have to pay too much for a so-called "workstation-grade" card whose main difference is VRAM size.
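To make the sizing point concrete, here's the capacity math from chip density and bus width - the 256-bit bus is my assumption based on the known Navi 48 cards, not something from the rumor:

Python:
# One GDDR device sits on each 32-bit channel; capacity = chip count x density.
def capacity_options(bus_width_bits: int, densities_gbit=(16, 24)):
    chips = bus_width_bits // 32
    for density in densities_gbit:
        gb = chips * density // 8                    # Gbit per chip -> GB total
        print(f"{density} Gb chips: {gb} GB normal, {2 * gb} GB clamshell")

capacity_options(256)
# 16 Gb chips: 16 GB normal, 32 GB clamshell
# 24 Gb chips: 24 GB normal, 48 GB clamshell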

I don't need the fastest GPU, but I would want one with big enough VRAM options and enough speed to render decent 4K for several years. By switching to GDDR7, the rumored AMD GPU could offer more size options without having to widen the memory bus even further and spend more power on the extra memory controllers. Should a GPU get overloaded by the work required to generate a real frame, AI upscaling can really help. AI upscaling is a harder sell if the GPU is stuck swapping assets in over PCIe because a game needs more graphical assets than the VRAM can hold. With the PlayStation 6 and possibly the next Xbox rolling out relatively soon, both of which I expect to have more memory, I worry about short video card lifetimes due to not having enough VRAM to handle well-optimized games in the next console generation.
 
Interesting to see what sort of effect this has on usage going forward.
Doubling PT performance with a system update is not something I thought I'd hear anytime soon.

With VRAM, what I'd actually prefer (software-wise) is optimising the relationship between VRAM and DRAM. Saving 1GB-2GB is cool and all but having easy access to an additional 32GB is even better. Currently this transition is too rough.
 
AI upscaling is a harder sell if the GPU is stuck swapping assets in over PCIe because a game needs more graphical assets than the VRAM can hold.
I'm not sure how prominently something like neural texture compression will feature in these conversations?!

Doubling PT performance with a system update is not something I thought I'd hear anytime soon.

With VRAM, what I'd actually prefer (software-wise) is optimising the relationship between VRAM and DRAM. Saving 1GB-2GB is cool and all but having easy access to an additional 32GB is even better. Currently this transition is too rough.
Yea, which is why standardization is interesting, as I imagine existing issues becoming exacerbated pretty quickly otherwise, as is the worry with "Jensen's law" and gen AI (Karen Hao's book "Empire of AI" springs to mind).
 
That's what she said....
Damn, this analogy is really fitting for VRAM, actually. In the end, they want the bigger one regardless.

If there is a way I can work with you on this project, I will.

I too am sick of the BS about VRAM. It's heavily pushed by HW Unboxed and Gamers Nexus. And while there is truth in what they say, they're exaggerating the issue by using ultra settings. Most games look no different between high and ultra, while ultra is a performance killer.

Anyway.... good post
Part of the issue those channels run into is that this is indeed a very 'nuanced' thing to report about.

They, effectively, need ultra settings to make their point clear enough.
It's very hard to show microstutter in a video, too - heck, the video itself will likely have worse frametimes than the content inside it.
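To make that concrete, here's a crude sketch of how you'd surface those hitches from a frame-time log rather than a video - the 1.8x threshold and the numbers are arbitrary, purely for illustration:

Python:
# Flag frames that take much longer than the run's typical frame - the kind
# of hitch an fps average (or a compressed video) hides.
from statistics import median

def stutter_frames(frame_times_ms: list[float], factor: float = 1.8) -> list[int]:
    typical = median(frame_times_ms)
    return [i for i, t in enumerate(frame_times_ms) if t > factor * typical]

print(stutter_frames([16.7, 16.8, 16.6, 41.2, 16.9, 16.7, 38.5, 16.8]))  # -> [3, 6]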

Another part of this VRAM discussion that many fail to grasp is the direct relation between VRAM capacity, the year in which a card releases, and the price point it gets. That, in the end, determines whether a product is a poor buy. Additionally, the relation between core power and VRAM - the overall balance of a GPU's resources - matters too.

Too much VRAM on an otherwise underpowered GPU will only sip away power that could be used to clock higher - but that issue is very limited and almost never reaches a product on shelves; the 5060 Ti 16GB is now a rare example of it - an example NOT replicated by any RDNA3 card that we consider to have a bit too much VRAM for its performance. They just have enough board power.

All of these aspects are at play right now, in different products. Nothing applies to an entire stack or gen. But it's good that we're starting to critically identify the (most) rotten apples in a GPU stack too. That 5060 Ti 8GB shouldn't exist in 2025. Simple. Either a 12GB version, or failing that a 16GB version, would have suited the perf level. IMHO that's all this VRAM debate is really about: what you should, or should not, buy. Not only for your own gaming, but also for future generations and the progress customers should demand to get a decent deal. The whole reason Nvidia has the gall to follow up a 4060 with what they have now is that there are too many sheep around.
 
A Navi 48-based GPU with 32 GB of VRAM does not make any sense unless it's aimed at AI. Navi 48 paired with 24 GB would make at least some sort of sense for gaming.
Also, 4.0 GHz would require a helluva lot of voltage. I'd say it's much more reasonable to go with a bigger die than an insanely clocked die built on TSMC N4P.
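A rough back-of-the-envelope with the usual dynamic-power approximation P ~ f * V^2 shows why - every number here is invented for illustration, not a leaked spec:

Python:
base_clock, base_volt, base_power_w = 3.0, 1.00, 300   # hypothetical N4P baseline
hi_clock, hi_volt = 4.0, 1.20                          # guess at what 4 GHz would want

scaled = base_power_w * (hi_clock / base_clock) * (hi_volt / base_volt) ** 2
print(f"~{scaled:.0f} W")   # ~576 W - which is why a bigger die at sane clocks looks saner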
 
The original reason I brought in Monster Hunter Wilds as a VRAM hog is that I was trying to make a point about how the 32 GB in the rumored card would be a great feature for its longevity. I would like at least 24 GB in a card that does not consume too much power and is not a fire hazard for one reason or another. 4K is rapidly becoming the next standard non-esports monitor and TV resolution, so 16 GB cards are starting to have their VRAM buffer margins eaten away by the GPU needing large enough buffers to render such big frames.

One of the features of GDDR7 is that certain non-power-of-2 chip sizes, such as 24 Gb per chip, become possible. With GDDR6X and earlier VRAM technologies, you are limited to power-of-2 sizes such as 16 Gb per chip. This can help video card vendors offer more VRAM size options. I feel that GPU vendors restricting which GPUs can get which VRAM sizes should be an antitrust issue, because they often do it to ensure that you have to pay too much for a so-called "workstation-grade" card whose main difference is VRAM size.

I don't need the fastest GPU, but I would want one with big enough VRAM options and enough speed to render decent 4K for several years. By switching to GDDR7, the rumored AMD GPU could offer more size options without having to widen the memory bus even further and spend more power on the extra memory controllers. Should a GPU get overloaded by the work required to generate a real frame, AI upscaling can really help. AI upscaling is a harder sell if the GPU is stuck swapping assets in over PCIe because a game needs more graphical assets than the VRAM can hold. With the PlayStation 6 and possibly the next Xbox rolling out relatively soon, both of which I expect to have more memory, I worry about short video card lifetimes due to not having enough VRAM to handle well-optimized games in the next console generation.
I don't think wildly overkill VRAM helps a card's longevity. Sufficient - the word describes itself - is just that. Sufficient. The fact that the odd game is coded and ported like absolute shit does not make a case for a different kind of GPU. In fact, it explicitly makes a case for such a product NOT to exist, because if such products exist and become mainstream, devs have no reason not to port and code like absolute shit. That's basically the counterpoint to @Dr. Dro 's argument about having the hardware on tap for a poorly coded game. There are limits to this. And we should guard them - because what developers love to do in our space is pass part of the burden of 'work' on to the end user.

No longer do we pre-cook the lighting, no, let's do it on the fly, so you get to pay for it instead of the developers' time. This is why RT is lauded as an improvement on their end. Not having to do work, and passing the bill on to you, us. A balance must be struck here, as clearly, heavily expanded hardware requirements aren't exactly helping anyone either. Look at the mess that is UE5 performance - it backfires on those same devs too.
 
Honestly, AMD should not bother going this high with RDNA 4 unless they can replace the 4090. This doesn't cut it.

Gamers who spend 80 class money buy NVIDIA. They'll need a seriously compelling product to sway that mindshare.

All they need to do this generation is flood the market with 9070s and 9060s and keep prices good. Save the better cards for UDNA.

Even if they can replace the 4090, they would be at least 4 years late at this point, all of that to most likely still lose to a 5090. That would be a lot of money invested for very minimal impact on the market. It would even go against their claim that "85% of gamers use a GPU that costs less than $700" (the 9070 XT already went against that point). This generation as a whole was supposed to give them back a bit of market share, but they still found ways to fumble that when Nvidia left them an open goal.
 
A 32GB GDDR7 card doesn't make sense this late into the pipeline.

Others have given lots of good explanations of that, but I'd like to add that, as mentioned before, 3GB modules are a possibility.
Such a product as described in the OP is likely meant for workstation use, so why not either go all in and do a clamshell of 3GB modules instead of 2GB ones (for a total of 48GB of VRAM), or simply save yourself the trouble of a clamshell design and use the regular 8 modules for a total of 24GB of VRAM?
 
I'm not sure how prominently something like neural texture compression will feature in these conversations?!
Nvidia's history of multiple proven incidents of false advertising, and of releasing half-baked features that required significant refinement before they became useful or had their serious problems solved, makes planning to use neural texture compression or any other promised Nvidia feature in real games moot until the feature is demonstrated and proven useful. Nvidia has often been innovative, but has often failed to fully develop its ideas into a ready-to-use state before releasing them.

Examples of false advertising from Nvidia include asynchronous compute support in Maxwell (though I believe this was an accidental case, because Nvidia did make a genuine try at asynchronous compute in Maxwell but did not include enough scheduling hardware to allow the feature to work as intended), the GTX 970 being advertised with 4 GB of VRAM when only 3.5 GB was usable (technically true, but the last 512 MB was practically unusable due to the way the GTX 970 was die-harvested, so it is false in practice), and the claim that the RTX 5070 has the same performance as the RTX 4090 (proven blatantly false, because frame generation worsens frame latency: it depends on AI generating and displaying frames in between the rendered frames, forcing the rendered frames to be held back until the generated ones have been displayed).

Examples of features that were half-baked and required refinement include Nvidia's implementations of asynchronous compute, ray tracing, and DLSS.

Asynchronous compute was something AMD pioneered that Nvidia had to copy in a hurry, since it created a big speedup on AMD's GPUs in Direct3D 12 by helping the GPU maximize utilization. As seen above, Nvidia made a try at it in Maxwell and completely failed due to inadequate hardware. Nvidia did disclose that it tried to make it work in Maxwell with its then-usual driver wizardry, back when its driver quality was unquestionably good, but found that the hardware was not good enough to make it work and speed things up; instead, things slowed down. Nvidia added more scheduling hardware and made it work in Pascal and later GPUs. Later, Nvidia disclosed several security vulnerabilities in Maxwell, Pascal, and Turing involving microcontrollers in those GPUs, as seen in https://nvidia.custhelp.com/app/answers/detail/a_id/5263 . I suspect that these microcontrollers are meant to manage asynchronous compute in those GPUs and that later generations, starting with Ampere, fixed the hardware security vulnerabilities.

Ray tracing in the RTX 20 series was half-baked, too slow and inefficient to be of much use in games, and games that used it got slammed for poor performance. Ray tracing in the RTX 30 series became useful in games if used judiciously, and the RTX 40 series refined it further, improving its speed and allowing it to enhance scenes even more. There are scenes that are truly impossible to render properly in real time without ray tracing.

DLSS 1.0 got all sorts of negative press for being half-baked garbage that required AI training for each game; it took DLSS 2.0 before people accepted DLSS. Newer versions improved image quality and introduced frame generation, which can be useful in games that are not too sensitive to latency but is worse than useless in latency-sensitive games such as esports titles like Street Fighter 6, Tekken 8, Counter-Strike 2, or Valorant. Some DLSS features and versions require newer hardware to run.

If neural texture compression works better than the texture compression formats most GPUs already decode in hardware, and does not make most GPUs lag from using AI hardware in novel ways it was not originally designed for, that would be great - except for those who own GPUs without AI hardware good and flexible enough to be repurposed for neural rendering. However, like other new features Nvidia is promoting, game developers should not plan to depend on it until experiments show that it beats the dedicated texture compression algorithms already baked into today's GPUs by enough to be worth the effort, storage cost, and memory of adding it, plus the required fallback paths for GPUs that can't or shouldn't use neural texture compression.
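For a sense of the scale involved, here's a rough footprint comparison between the block-compressed formats GPUs already decode in hardware and an assumed neural-compression ratio - the "4x over BC7" figure is a placeholder assumption, not a measured NTC number:

Python:
def texture_mb(width, height, bytes_per_texel, mip_chain=4 / 3):
    # a full mip chain adds roughly a third on top of the base level
    return width * height * bytes_per_texel * mip_chain / 2**20

w, h = 4096, 4096
print(f"uncompressed RGBA8       : {texture_mb(w, h, 4.0):6.1f} MB")
print(f"BC7 (1 byte per texel)   : {texture_mb(w, h, 1.0):6.1f} MB")
print(f"assumed NTC, 4x over BC7 : {texture_mb(w, h, 0.25):6.1f} MB")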

I don't think wildly overkill VRAM helps a card's longevity. Sufficient - the word describes itself - is just that. Sufficient. The fact that the odd game is coded and ported like absolute shit does not make a case for a different kind of GPU. In fact, it explicitly makes a case for such a product NOT to exist, because if such products exist and become mainstream, devs have no reason not to port and code like absolute shit. That's basically the counterpoint to @Dr. Dro 's argument about having the hardware on tap for a poorly coded game. There are limits to this. And we should guard them - because what developers love to do in our space is pass part of the burden of 'work' on to the end user.

No longer do we pre-cook the lighting, no, let's do it on the fly, so you get to pay for it instead of the developers' time. This is why RT is lauded as an improvement on their end. Not having to do work, and passing the bill on to you, us. A balance must be struck here, as clearly, heavily expanded hardware requirements aren't exactly helping anyone either. Look at the mess that is UE5 performance - it backfires on those same devs too.
First, some kinds of effects are only possible with ray tracing. There is no way for developers to pre-cook every possible scenario without blowing your storage up to infinity. However, such ray tracing effects need to be used judiciously, only where they are truly needed.

Second, there are some well-optimized Unreal Engine 5 games. Tekken 8 is a very well-optimized Unreal Engine 5 game in terms of graphics. To be fair, its initial Season 2 patch totally wrecked the game balance, making it far too offense-heavy without sufficient defensive options, and the developers are trying to fix that mess and tone down the offense. Those who ship badly optimized Unreal Engine 5 games have more work to do.

Third, I would rather splurge now on a GPU that can last me a while than buy a GPU whose VRAM buffer has low margins from the get-go, especially when a console transition is predicted to be relatively soon, which will be the start of games using more VRAM, forcing me to change my GPU and risk more damage to my motherboard in the process. I already destroyed one motherboard while trying to remove a GPU, because Corsair made a bone-headed design error in one of its cases: some sort of non-screw pin in the central motherboard stand-off hole, elevated so that a standard stand-off wouldn't fit. Combine that with the fact that multi-purpose monitors and TVs are now 4K, and 8 GB is either barely sufficient or no longer sufficient, as seen in https://www.techspot.com/review/2856-how-much-vram-pc-gaming/ even if low settings are used. At better-than-console settings, 12 GB or 16 GB is pushing it, or barely insufficient to hold everything without swapping, even if you only count memory that is actually filled with content. If you had asked me at the beginning of the PlayStation 5 and Xbox Series S | X lifetimes, I could see 8 GB of VRAM being sufficient. That does not feel true anymore with a console transition predicted to be soon. This is why I am excited for the new AMD GPU.

I can see this rumored GPU being a backup plan in case UDNA fails graphics performance tests in the lab versus RDNA 4. At least it should have sufficient VRAM to last if UDNA needs much more time in development or fails in the lab and has to be sent back to the drawing board. AMD doesn't need another Bulldozer with no backup plan.

I'm not sure how prominently something like neural texture compression will feature in these conversations?!
I just found this video of neural texture compression in action on an RTX 4090. Neural texture compression can really cut VRAM usage, but also causes a big hit to the frame rate.


Neural texture compression has a very impressive compression ratio, but will require more experimentation to see which GPUs have good enough AI hardware to use this without too much of a performance hit.
 
This generation as a whole was supposed to give them back a bit of market share, but they still found ways to fumble that when Nvidia left them an open goal.
To be fair, the fact that the 9070 cards keep selling above MSRP is a good sign for their market share.

But I would say they fumbled the 60 class. It should've been a $299 16GB XT and no 8GB XT. Save the silicon for the 16GB card and make all the 5060s look stupid in comparison. Instead we get a half-assed jab at the 5060 Ti.
 
Neural texture compression has a very impressive compression ratio, but will require more experimentation to see which GPUs have good enough AI hardware to use this without too much of a performance hit.
Yea. It will be interesting to see how it scales between platforms, along with the other advertised benefits of neural rendering.
 
Is it possible to ban AI generated posts?
 
I really dislike the way this forum software merges multiple consecutive posts from one user into a single post if nobody else posts in between. It makes the resulting conjoined post look crazy, whereas other forum software would keep the consecutive posts separate and looking normal.
 