
[TT] AMD rumored Radeon RX 9080 XT: up to 32GB of faster GDDR7, up to 4GHz GPU clocks, 450W power

I really dislike the way this forum software merges consecutive posts from the same user into one post when nobody else posts in between. The resulting conjoined post looks crazy, whereas under other forum software the separate consecutive posts would remain split and look normal.
I like it; otherwise topics can look like a monologue. It's not like you're going to read that any more eagerly than a wall of text split up by a few quotes.

Still, my experience is that those who have a sufficient attention span and know the poster can make some sense do read those walls ;)

To be fair, the fact that the 9070 cards keep selling above MSRP is a good sign for their market share.

But I would say they fumbled the 60 class. Should've been a $299 16GB XT and no 8GB XT. Save the silicon for the 16GB card and make all the 5060s look stupid in comparison. Instead we get a half-assed jab at the 5060 Ti.
Rather, 12GB at that price point would have been fine too, and still a fantastic statement. They'd have the sweet middle ground neither of these cards offers.
 
So...this may be a hot take, but I think the product positioning is stupid. A 450 watt card to compete with Nvidia is definitely a thing...but the "win" for the 9070 XT was to have more VRAM, close the RT gap, and still come in at or below the 5070. I think that was a success...but I know your mileage may vary.

Looking at the 9080 to compete with the 5080...it's not going to be hugely better. The need for 32 GB of VRAM is something...that screams home AI rather than gaming GPU. I'd love to have seen something like a 24 GB card that came within spitting distance of the 5080 on both RT and general performance, at a couple hundred dollars lower cost. That said...what seems to be pitched here is an expensive attempt to compete against Nvidia in a generation that already feels lost. Why in Hades would you invest money into that sort of Pyrrhic victory?



Side note though, the 9060 is also hot garbage in my opinion. It was mispriced at the 8GB level, and is competing with Nvidia for the most meh offering. The 16GB is pretty OK...but it's basically killed between costing too much and not having enough performance to differentiate it from previous generations, unless you want the lowest-spec 1920x1080 card that has some ability to do RT...at which point the win is to cross your eyes and pray that you can see a difference. The 5060 and 9060 offerings just seem like they are the new lowest tier of offering...which is quite depressing.
 
Wasn't there a rumor that they were developing a big Navi 4X with separate MCDs that was canceled? Seems like they could possibly bring that back to go up against a Super refresh. I believe the rumor was that it was too expensive to make, so it was canceled. Assuming that's actually true, and given they can't keep Navi 48 in stock at $100 over MSRP, perhaps that could be plausible in these market conditions. But yeah, 4 GHz N48 with 32 GB GDDR7 sounds like total nonsense.
Based on the leak from Moore's Law is Dead, the GPU will not use a chiplet design. It seems AMD is taking a more traditional approach with this GPU.
 
Pretty sure this is going to be one of their Pro cards. There were rumors like this right as the 9070 XT came out, AMD denied all of it, and then more news came out about a new Pro card with the same specs. Seems like all of that has been resurrected again. Would love it if AMD made a higher-end offering of RDNA4 though, because it's a killer arch.

Though having said all of that, I could see AMD making a cheeky move if Nvidia released 50-series Super cards.
 
Thought AMD made it clear they don't want to make a big GPU for the high end this time around?
 
Rather, 12GB at that price point would have been fine too, and still a fantastic statement. They'd have the sweet middle ground neither of these cards offers.
12 GB with GDDR6 would have required either narrowing the memory bus and running in clamshell mode, or widening the bus. Narrowing the bus and running in clamshell mode to hit a 12 GB target would have cut performance by starving the shader hardware of memory throughput, but would let the card outperform a wider-bus 8 GB card at higher resolutions or in games with lots of assets. Widening the bus to hit a 12 GB target would make the GPU and the video card more expensive, but might alleviate a memory throughput bottleneck if one exists. GDDR7 could open up more memory size options without forcing too many GPU design changes, thanks to its support for non-power-of-2 module capacities. However, being a new technology, it would have brought all the problems and costs of adopting one, including but not limited to redesigning the GPU's memory controllers.
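To make the capacity arithmetic concrete, here is a minimal sketch. It assumes 2 GB (16 Gbit) GDDR6 modules and 3 GB (24 Gbit) GDDR7 modules with one module per 32-bit channel (two per channel in clamshell mode); the bus widths are just illustrative examples, not leaked specs.

# Rough VRAM capacity arithmetic: one memory module per 32-bit channel,
# or two modules per channel when running in clamshell mode.
# Module densities are assumptions: 2 GB (16 Gbit) GDDR6, 3 GB (24 Gbit) GDDR7.
def vram_gb(bus_width_bits, module_gb, clamshell=False):
    channels = bus_width_bits // 32
    modules = channels * (2 if clamshell else 1)
    return modules * module_gb

# GDDR6 with 2 GB modules:
print(vram_gb(128, 2))                  # 8 GB  (4 modules on a 128-bit bus)
print(vram_gb(96, 2, clamshell=True))   # 12 GB (narrower bus + clamshell)
print(vram_gb(192, 2))                  # 12 GB (wider bus, pricier GPU and board)
print(vram_gb(128, 2, clamshell=True))  # 16 GB (128-bit bus in clamshell)

# GDDR7 with 3 GB modules (a non-power-of-2 density):
print(vram_gb(128, 3))                  # 12 GB on the same 128-bit bus, no clamshell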
 
Based on the leak from Moore's Law is Dead, the GPU will not use a chiplet design. It seems AMD is taking a more traditional approach with this GPU.
MLID isn’t a leaker, he’s a liar.
 
Sometimes he is quoting sources; I guess they reach out to him or vice versa. The adverts are persistent.
Reminds me of the 7990 with a 3.6 GHz boost and GDDR6 rated for 24 Gbps... it wouldn't be surprising if MLID came up with this too.
 
Based on the leak from Moore's Law is Dead, the GPU will not use a chiplet design. It seems AMD is taking a more traditional approach with this GPU.

That dude just makes stuff up lmao

Sometimes he is quoting sources; I guess they reach out to him or vice versa. The adverts are persistent.
Reminds me of the 7990 with a 3.6 GHz boost and GDDR6 rated for 24 Gbps... it wouldn't be surprising if MLID came up with this too.

99.5% chance that he did. He was getting mocked by VideoCardz on X this morning, of all the people or publications that could mock him :laugh:

 
Thought AMD made it clear they don't want to make a big GPU for the high end this time around?
They did, though with Nvidia pricing their products like crazy people and being less and less interested in gaming GPUs, I could see AMD wanting to get some more GPUs out. Seems like a good time to compete.
 
Who said it's a big die? It's just overclocked. AMD can easily release a 500 mm², 6144-shader, 384-bit graphics card. But every other generation has been limited to 256-bit cards for some reason; artificial intelligence or something.
 
Who said it's a big die? It's just overclocked. AMD can easily release a 500 mm², 6144-shader, 384-bit graphics card. But every other generation has been limited to 256-bit cards for some reason; artificial intelligence or something.
No, no, you read it wrong. It's a big LIE, not a big DIE.
 
12 GB with GDDR6 would have required either narrowing the memory bus and running in clamshell mode, or widening the bus. Narrowing the bus and running in clamshell mode to hit a 12 GB target would have cut performance by starving the shader hardware of memory throughput, but would let the card outperform a wider-bus 8 GB card at higher resolutions or in games with lots of assets. Widening the bus to hit a 12 GB target would make the GPU and the video card more expensive, but might alleviate a memory throughput bottleneck if one exists. GDDR7 could open up more memory size options without forcing too many GPU design changes, thanks to its support for non-power-of-2 module capacities. However, being a new technology, it would have brought all the problems and costs of adopting one, including but not limited to redesigning the GPU's memory controllers.
Sure, I was just reasoning from the perspective of product positioning, not so much the technical limitations.
 
Monster Hunter: Wilds already uses over 16 GB of VRAM at 1440p on a mobile RTX 5090 (not the same as a desktop RTX 5090!) at ultra settings as seen below:

24 GB is starting to look like it has slim margins. 32 GB does not look excessive in this light.
To me, 16 GB is mainstream capacity and 24-32 GB is high end. So 32 GB is a good amount to put on this card.
 
It would be interesting to see how AMD would justify larger amounts of VRAM, considering they recently said that the "Majority of gamers are still playing at 1080p and have no use for more than 8GB of memory."
 
It would be interesting to see how AMD would justify larger amounts of VRAM, considering they recently said that the "Majority of gamers are still playing at 1080p and have no use for more than 8GB of memory."
Give those same 8GB buyers an affordable 16GB GPU and watch the statistics change. Don't know if AMD/NVIDIA are that obtuse or just playing the corpo game.

In any case I don't see why anybody would say no to more capacity. Yeah, it doesn't make sense in gaming for a card that can't really do 4k 60fps max settings to have the capacity for 8k but it makes sense for other use cases like AI. Even if it's something you think you'll never use, you might want to dabble in it here and there and it's nice to have the capability.
 
It would be interesting to see how AMD would justify larger amounts of VRAM, considering they recently said that the "Majority of gamers are still playing at 1080p and have no use for more than 8GB of memory."
The majority of gamers is fucking clueless, because they don't realize this is a self-fulfilling prophecy that ends up with developers limiting their content to sell on 8GB cards.

Even despite the stagnation, developers are slowly raising the middle finger as VRAM demands do go up, and do pass 8GB even at 1080p in an increasing number of titles. They have definitely adjusted in the past years after seeing the shitshow that was the RDNA3/Ada > RDNA4/Blackwell releases. And sometimes that adjustment even had to happen post-launch, because what we have had in x60 for the last four years has been a real break from the norm. It's a complete standstill in terms of the resources developers get to work with.

FWIW I don't even get why resolution is still a key player in this discussion, because we've seen that in many games the entire VRAM demand is just either low or very high, and that doesn't change across resolutions. Sure, it gets a bit higher per step up, but a game that uses 7GB on low won't be using 20 on high. And if it does, the visual gap is substantial and low was just painful to look at.
 
It would be interesting to see how AMD would justify larger amounts of VRAM, considering they recently said that the "Majority of gamers are still playing at 1080p and have no use for more than 8GB of memory."

Schadenfreude.

On the one hand, AMD markets their 9070 XT specifically for having more VRAM. On the other, it's not necessary. Anyone who can think knows this is AMD lying to try and have it both ways...and even those who like the 9070 and XT have to admit AMD is showing their arse by pulling this crap. I...well, let's just say that my money has bought a 9070 XT, but my wallet will never open for a $300+ street-tax 9060 at 8 GB. It's just them (AMD) being frugal at the expense of the future viability of their product...which is sad. If the 8GB had been better priced it'd be a different story, but that's not where we are today.
 
The majority of gamers is fucking clueless, because they don't realize this is a self-fulfilling prophecy that ends up with developers limiting their content to sell on 8GB cards.

Even despite the stagnation, developers are slowly raising the middle finger as VRAM demands do go up, and do pass 8GB even at 1080p in an increasing number of titles. They have definitely adjusted in the past years after seeing the shitshow that was the RDNA3/Ada > RDNA4/Blackwell releases. And sometimes that adjustment even had to happen post-launch, because what we have had in x60 for the last four years has been a real break from the norm. It's a complete standstill in terms of the resources developers get to work with.

FWIW I don't even get why resolution is still a key player in this discussion, because we've seen that in many games the entire VRAM demand is just either low or very high, and that doesn't change across resolutions. Sure, it gets a bit higher per step up, but a game that uses 7GB on low won't be using 20 on high. And if it does, the visual gap is substantial and low was just painful to look at.
We'll have to see how expectations match reality when The Witcher 4 releases.

Schadenfreude.

On the one hand, AMD markets their 9070 XT specifically for having more VRAM. On the other, it's not necessary. Anyone who can think knows this is AMD lying to try and have it both ways...and even those who like the 9070 and XT have to admit AMD is showing their arse by pulling this crap. I...well, let's just say that my money has bought a 9070 XT, but my wallet will never open for a $300+ street-tax 9060 at 8 GB. It's just them (AMD) being frugal at the expense of the future viability of their product...which is sad. If the 8GB had been better priced it'd be a different story, but that's not where we are today.
"Leading analyst firm Ampere says more Switch 2 consoles will be sold this year than the PC gaming handheld landscape has sold in its entirety."

It would be interesting if these "lower tier" cards at least had the hardware to be used as testbeds (maybe showcase is the right word) for things like neural texture compression, etc., so they could punch above their weight like the Switch 2 supposedly does with its custom SoC.
 
The majority of gamers is fucking clueless, because they don't realize this is a self-fulfilling prophecy that ends up with developers limiting their content to sell on 8GB cards.

Even despite the stagnation, developers are slowly raising the middle finger as VRAM demands do go up, and do pass 8GB even at 1080p in an increasing number of titles. They have definitely adjusted in the past years after seeing the shitshow that was the RDNA3/Ada > RDNA4/Blackwell releases. And sometimes that adjustment even had to happen post-launch, because what we have had in x60 for the last four years has been a real break from the norm. It's a complete standstill in terms of the resources developers get to work with.

FWIW I don't even get why resolution is still a key player in this discussion, because we've seen that in many games the entire VRAM demand is just either low or very high, and that doesn't change across resolutions. Sure, it gets a bit higher per step up, but a game that uses 7GB on low won't be using 20 on high. And if it does, the visual gap is substantial and low was just painful to look at.
I'll play devil's advocate. Assuming you agree with the below:

5090 32 GB = good
5080 16 GB = not terrible, but should have 24
5070 Ti 16 GB = okay
5070 12 GB = not terrible, but 16 would have been preferable

Now we are left with the 5060 and the 5060 Ti. Assuming you agree that 16 GB on the 5070 Ti is fine, I don't see why a 5060 shouldn't have 8, frankly. It costs less than half of the 5070 Ti and the 9070 XT.
 
MLID isn’t a leaker, he’s a liar.
Take a claimed rumor for what it is and grow up.

I'm not saying you're wrong, it's just that we've heard it a million times. Get over it.

I really dislike the way this forum software merges consecutive posts from the same user into one post when nobody else posts in between. The resulting conjoined post looks crazy, whereas under other forum software the separate consecutive posts would remain split and look normal.
Trust me, some posts wouldn't look better if they were separated like you describe.

Especially not essays.
 

Hey guys, what about this now? :laugh:
 
It is interesting how the same people bring their love for Nvidia into AMD threads and then call people liars because they did not match Wizz's settings exactly. Let's get something clear: no reviewer uses AMD software to review their GPUs, so before you call me a liar, maybe ask them to use AMD software. A lot of people, well, just about everyone, use MSI Afterburner, and while that is great, it is not as good as AMD's software. Now I have a user telling me that I am getting 20-30 FPS in Spider-Man 2. It is like the people that say DLSS is better, or even DLAA, without knowing anything about HYPR-RX. It does not matter; I was told that TPU was more tame than Reddit, but over there you don't have Nvidia fanboys calling people liars in AMD threads. What is worse, though, is the parrots that champion this insanity. You are trying to tell me I am not having as much fun as I could gaming because my card is not as good at RT. Well, CP2077 looks fine and plays super smooth. I wonder, when you do those late-game missions, if RT is as important. Of course someone is going to say something negative to this, but I will be too busy enjoying my gaming PC on my lunch. I really don't know how much longer I will be on this site.
 