
The future of RDNA on Desktop.

I mean, of course. But that's also a matter of common sense: a 1080p gamer with a 600 W power supply doesn't have the budget or the need for a product at that price. I'll go a step beyond... that's probably the kind of gamer that the RX 9060 and 5060 "vanilla" versions are gonna target. That, indeed, would be the equivalent of installing a Ferrari engine in a VW Bug.
Who says they don't have the budget? I have friends who earn more than I do, but are on 1080p out of choice. You don't need a 30-something inch monster screen on a small desk, and there's no need to go higher than 1080p at 24" and below.

One of those friends has a 1000 W PSU and a 7800 XT inside an old-style noname grey sleeper chassis that he found next to a recycling container. Why? Because he likes it.

If your computer is of a higher-end variety and can support a 7900 XTX, that is probably still what you should buy, at least for now. That said, I expect some RX 9070 XT improvements to be relevant enough to make it the more desirable card; for example, the new Radeon Image Sharpening 2.0 feature announced last week will not be available on RDNA 3 (the driver release notes state it is exclusive to RDNA 4). I know the technical reason why, although I'm not sure it has been divulged publicly anywhere just yet, so I'll write you a rain check on that one.
That's what I mean. What's "the best" isn't all black and white.
 
[Attached screenshot: clock reading of 3919 MHz]

Oh, ok, I see what they did there. Got it. That's pretty cool. I mean, theoretically, that would only take ~26.1 Gbps RAM to run it flat out with RT (~5% more than raster)?

Probably hot as hell, though, if it's even possible with other stuff running. NGL, I'm surprised that clock is possible at all on 4 nm. That could actually use more than 16 GB of RAM.
I wonder if that's truly even possible in any practical 3D application, or rather prepping the design for 3 nm. Perhaps both. That could actually be pretty interesting.
What I'm curious about is... are they running split raster/shader clocks again? Which one is getting reported in GPU-Z? Are they doing something weird like a clock-domain setup similar to Zen 5c?

Side note, I was always trying to figure out why the 5080 was limited to ~3154 MHz average. As I've said before, the 5080 FE at stock (2640 MHz stable) requires 22 Gbps of bandwidth. At 3154 MHz it would require ~26.3 Gbps (22 × 3154/2640).
Which is a weird place to put a general cut-off. 22 Gbps is nice and obvious; beat a 20 Gbps GDDR6 design. Which they kinda-sorta didn't, but it makes sense because maybe they didn't account for the cache increase.
Yeah, that'll happen. That fake advertised clock of 2.97 GHz, insinuating the old L2, will getcha every time when it's actually 3.15 GHz (which is the difference the cache makes).
Sometimes you gotta rush an article nobody will ever read onto a forum somewhere as soon as you realize why it matters, even though nobody else probably cares... until they do (but still don't understand).
You'll see in a second why nVIDIA probably chose their clocks in general, but the clock limit and odd bandwidth requirement make even more sense if that's the actual capability of N48.
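To make the proportionality explicit, here's a minimal sketch (purely illustrative) of the estimate being used here: required memory bandwidth is assumed to scale linearly with core clock from a baseline where compute and bandwidth are balanced. The baselines (20 Gbps at a ~3.0 GHz RT clock for N48, 22 Gbps at 2640 MHz for the 5080 FE) are the figures quoted in this thread, not official numbers.

```python
# Minimal sketch: assumes required bandwidth scales linearly with core clock
# from a baseline where compute and bandwidth are balanced (this thread's premise).

def required_bandwidth(baseline_gbps: float, baseline_mhz: float, target_mhz: float) -> float:
    """Scale a known-balanced memory speed to a new core clock."""
    return baseline_gbps * target_mhz / baseline_mhz

# N48: assuming 20 Gbps is balanced at the ~3.0 GHz stock RT clock quoted above
print(round(required_bandwidth(20, 3000, 3919), 1))  # ~26.1 Gbps

# GB203 (5080 FE): 22 Gbps balanced at 2640 MHz stock
print(round(required_bandwidth(22, 2640, 3154), 1))  # ~26.3 Gbps
```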

And then, of course, for excess bandwidth it's about 16% perf on average for a doubling over what's required... blah blah blah... it decreases if they actually use it for real compute performance, but excess still helps.
Which they will sell to everyone, as they always have, as the second coming of the Flying Spaghetti Monster, when it really isn't that big of a deal (<6% extra perf currently on the 5080).
But you guys don't care about that part.
I do, so you know it doesn't actually matter much, but could have.

To actually *use* 30gbps on GB203, they would need a 3600mhz core clock w/ 10752sp.
Gives you an idea of how these designs *could* have gone. You know, that way on N4P...or 12288sp @ 3150mhz even if current '4NP'...exactly...which they also didn't give us, but could have.
Because, well, greed. At some point nVIDIA fans really should be sad when they realize the very obvious designs nVIDIA has tested... and then decided "No, fuck it, sell it about six more times until they get that".
Ofc, if they *had* made that design, the replacement Rubin would be 9216sp @ 4200/40000. But no, they'll sell each step (36000/40000) as different gens, probably, perhaps using a denser process at first...because cheaper (especially w/Micron ram), and even then the small boost later as a freakin' 6070 Super or some shit. Maybe 7070. Each as a boost. You want MORE fuckery?
You may ask yourself, why not just 12288sp @ 3360/32000, the actual speed of the fucking RAM on 4nm and the capability of even the dense process? Answer: because nVIDIA does as little as it can, to sell it again.
Because then the 12288sp part on 3nm with higher clocks would be the replacement. Not the 9216sp part. Potentially twice. Are you following?
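For what it's worth, those alternative-design numbers are internally consistent under the same back-of-the-envelope assumptions (throughput ~ shaders × clock, bandwidth need scaled from the 22 Gbps @ 2640 MHz baseline). The configurations themselves are hypotheticals from this post, not announced products; a quick check:

```python
# Sanity check of the hypothetical configurations above, under the thread's
# assumptions: compute ~ shaders x clock; bandwidth need scales from 22 Gbps @ 2640 MHz.

print(10752 * 3600 == 12288 * 3150)          # True: same compute product, two layouts
print(2640 * 30 / 22)                        # 3600.0 MHz needed to use 30 Gbps on 10752sp
print(22 * (12288 * 3360) / (10752 * 2640))  # 32.0 Gbps needed by 12288sp @ 3360 MHz
```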

Kinda just got interesting, though, imo. I still think nVIDIA needs to give people a guaranteed ~3.23-3.24 GHz clock and 24 GB to make that product, especially if AMD pulls this off (which would be nice). Otherwise the difference between that and even the next smaller Rubin (9216sp, low 3nm clocks) will do the exact same shit they keep doing to outdate GPUs (unlike AMD). They'll still probably do it again when they eventually use 40 Gbps RAM (and high clocks), but if they do it twice even after already using a massively cut-down design on Blackwell, replaced by an even smaller Rubin as an upgrade (twice)... that'd be funny.
That really is selling a possible design like 6 times, and cheap as possible for them each time.
While looking like they're improving things. And it will work. Because most people probably will not understand what I just wrote.

Don't forget, the quality of DLSS will likely magically improve as a 'feature', absolutely not an obsolescence technique, by the compute difference between the GPUs each time, relegating the older card far enough under 60 fps in newer titles (absolutely sponsored by nVIDIA) that only the new one makes the cut each time.
Because they're just that much better, guys. Shit, I left my italics on. Pretend the sentence two sentences ago was even more italics to symbolize sarcasm. Bold or underline just doesn't do it justice.


You know, just thought I'd throw it out there for about the 407th time, in case nVIDIA doesn't realize they're not going to get away with that if I can help it.
Different designs are one thing; the DLSS thing, not unlike never giving enough buffer, is fuckin' bullshit. Because they *know* people won't understand.

Will they do even 3.24+ with a 24GB model, or just try to beat AMD using similar clocks to the current 5080, which may be enough for some current games at 1440p when not limited by ram?
Probably the latter, because nVIDIA.
I have no doubt they tested 24 Gbps (and the N4P process) for the practical limit, as the 3600 MHz ideal for 30 Gbps RAM insinuates that, if not something similar or even AMD's design itself.

Either practically or 'with the power of [5070 is a 4090] AI'. On the "so we don't make Fermi again" supercomputer. That made Blackwell. The greatest GPU family in existence, some people say. At least one guy.

That guy wasn't me.

This is what I mean, though. I don't know how they know, but they ALWAYS KNOW what AMD is capable of doing. The popcorn moment is whether nVIDIA will give people that GPU clock, which might actually force them to sell *slightly* better GPUs next generation. If they don't, they're still doing even more of the same shit, even after doing the same shit most of you don't even know they already did, which likely made their products worse three times over before you even knew they existed.

What do you guys think... do you think it's possible it might actually happen with these two GPUs? Any hope of getting a decent 1440p RT card at stock?

The real truth is that you *know* AMD is trying to get there...and you *know* nVIDIA really hopes they won't have to make it happen.




 
It's no mystery that RDNA4 could have utilized more memory but that's not the point of the card.
There's no guarantee of what is to come but it would be a hilarious gut punch if the next AMD model is a 9080XT.
Preserves the 90 stack name to keep the sauce confusing to everybody, development kept very hush and made in limited units...
Ships with 20-24GB, 3.3GHz core clocks and completely edges out nvidia's 5080 refresh by high single digit % right before it drops.
First to shelf = first to sale. That's the drum beat that AMD needs to fully understand and I think they're finally starting to get it.
 
That clock is 3919 MHz. Potentially (with overclocked 24 Gbps RAM) they could literally do that in RT (if somehow they have it set up that way). The bandwidth is there, unlike the 9070 XT, which is bandwidth-limited.

Think about that. According to W1z, the current RT clock is 3 GHz. That is not a *small* increase, and it *could* actually make this a 1440p card.
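Under the same proportional assumption as earlier (bandwidth need scales with core clock from the 20 Gbps / ~3 GHz stock baseline), a rough check of whether overclocked 24 Gbps modules would cover that clock:

```python
# Rough check, assuming 20 Gbps is balanced at the ~3.0 GHz stock RT clock.
needed = 20 * 3919 / 3000      # ~26.1 Gbps needed at 3919 MHz
oc = needed / 24 - 1           # memory overclock required on 24 Gbps modules
print(round(needed, 1), f"{oc:.0%}")   # 26.1  9%
```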

The 5080 always could have been one; nVIDIA decided not to let it be. Whether they will with 24 GB... unknown. I would hope so, but you truly never know with them... they truly do as little as they have to in order to win.
So they can make the cheapest design the next time to sell as an improvement. I know some people think I'm being hyperbolic, but I'm truly not. Read what I wrote about the other potential designs.
And why they did what they did. They are literally going to replace the current 5080 with a chip a whole tier down from what they *might* have used. I mean, honestly, they *could* have made the 4090/'6080' a 4080. Instead...
Gotta sell it 6 times. 4080. 4080S. 5080. 5080 24GB. 6070. 6070S. That is amazing. It will be interesting to see the 6 generations of progress.
Of damn near nothing in reality.

N48 would actually need more RAM (unlike anything under 60 TF, i.e. the 9070 XT) if it is like this... and it would actually be extremely neat. If it didn't have more RAM, it would have the same problem as the current 5080.

I may be interpreting what they're doing incorrectly.

But I don't think I am.
 
The reason the RTX 4090 outperforms the 5080 is because its core is (sometimes significantly) more powerful, not because the 5080 is memory capacity starved. To run into the limitations of 16 GB, you currently have to go all-out, with the most extreme scenarios (and it would still fit into memory by a hair) - not to mention W1zz tested this on the 5090, where this would be about 49.5% of its capacity. On a 16 GB card it would preallocate less, and use a bit less as a result.
Yes exactly. AMD usually does add just enough VRAM to be usable by the GPU at its intended resolution. If you have to resort to 4K to get to the VRAM cap for a 1440p-intended card, maybe consider getting a 4K-intended GPU instead.
 
I recommend watching the 5070 and 9070 reviews on Gamer's Nexus, with special attention to Cyberpunk with RT. It's unplayable on the 5070, but runs fine on similarly specced 16 GB cards.

I'm not saying that Cyberpunk with RT is a must-play, but it's a good indication that VRAM capacity is not to be underestimated in every case.
 
My 4090 has never exceeded 16GB vram allocation. For gaming, 32GB is totally unnecessary and, at least for now, 16GB is enough for anything (well, maybe there are some very unique edge cases). Can't say for how long that will be the case, but for now it is. I wish the 5090 had less vram, to be honest. It would be less desirable for non-gamers and therefore demand would be lower. Obviously that's not in nvidia's best interest, but imo it's in gamers' best interest.

Besides, there's already an enterprise line; if you need super high amounts of vram, go through those channels. But I am guessing this is for the productivity individual and small business that can't afford an enterprise card. Still, I personally do not like GeForce cards with specs clearly aimed at non-gaming uses. GeForce is supposed to be the gaming line of cards.

Anyway, sorry, I guess that's off topic. To bring this back to AMD: I think 16GB is fine, developers are still trying to consider how to have 8 and 12GB cards be compatible with their games. I think 16GB will be safe for a while longer. And these cards are supposed to be mid-range. The complete vacuum of cards from everybody right now is obviously going to jack up prices beyond what was intended, but still... they said they are not going halo, and 16GB is enough for midrange now and, presumably, a while into the future as well.

And yeah, I guess for the moment, midrange is now 700+ unless you get lucky and live by a Micro Center or something, where I hear they have stock. Nowhere in my area has stock. One place offers a queue for the non-XT... that's it.

And it makes me nervous for what I'd do if my 4090 bites the dust....
 
(snip) I think 16GB is fine, developers are still trying to consider how to have 8 and 12gb cards be compatible with their games. I think 16GB will be safe for a while longer. (snip)
I would not go so far as to say that 8 GB cards are still being figured out by developers. Quite a few games I have easily use more than 8 GB, and even go right up to the 12 GB limit. 12 GB is the new 8 GB and 16 GB is the new sweet spot that 12 GB used to occupy.

I just don't think you need more than 16 GB in March 2025 for 1440p, which is what I consider to be the intended resolution for the 9070 cards. The bottleneck is elsewhere in the design, not VRAM.
 

I know 8GB is a problem, and has been for a couple of years now, and I would not recommend it for anybody wanting to play new games, but there's still a lot of people with such cards, so not all new games are completely disregarding them; that's what I meant. If you look at system requirements you will often see 8GB in the minimum area, with big sacrifices having to be made in some cases. So they are clearly still considered, even if in a diminished way.
 
To be completely fair, that was a flagship product. Don't know what people were expecting. But the rate of generation uplift NVIDIA provides is decreasing, the 40 series (pre super) and 50 series are good examples of that, whereas the gen uplift on AMD is inconsistent rather than a noticeable decline.

The next generation of GPUs will probably have either a super big generational uplift akin to what NVIDIA used to pump out, or very little. And that's just the GPU side of things; AAA developers woefully don't optimize their games till after launch 99% of the time, it seems.

It is decreasing because Nvidia are cheaping out... Blackwell was supposed to be made on TSMC 3nm and with a better design. But NVIDIA decided to change their plans when AMD cancelled the Navi 41 chip. They knew they would have no competitor, so they took it easy. The 5090 would have been around 50% more powerful on average if it was made on 3nm.
 

The point is the 4090 won't be a 4090 on 3nm. It will be a 6080 (and likely faster, so 1440p->4K up-scaling gets more consistent). Again, I think whatever they call the 5080 replacement with 18GB will be a 1440p card.
Because again, the 5080 is 10752sp @ 2640 MHz. 9216sp @ 3780 MHz is 22% faster in RT/raster, and 18GB of RAM is 12.5% more than 16GB. How far is a 5080 away from 60?
What if you turn on FG (native framerate)? OH, that's right...nVIDIA literally HIDES IT FROM YOU BC OF THIS REASON.
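For reference, the ~22% figure is just the ratio of shaders × clock (the 9216sp @ 3780 MHz part is a hypothetical configuration discussed here, not an announced product), and the 12.5% is the plain buffer increase:

```python
# Hypothetical 9216sp @ 3780 MHz part vs. the 5080 FE (10752sp @ 2640 MHz),
# under the assumption that throughput ~ shaders x clock.
uplift = (9216 * 3780) / (10752 * 2640) - 1
print(f"{uplift:.1%}")   # 22.7%
print(18 / 16 - 1)       # 0.125 -> 12.5% more VRAM
```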

Again, then 9070 xt / 128-bit cards will be 1080p. No up-scaling (in a situation like this; yes more demanding situations exist and hence why higher-end cards exist).

I don't get how other people don't see this? I think it's clear as day.

I just do not agree, and it really goes to show that a lot of people have not used RT and/or up-scaling. It's very normal. Asking for 4K RT is absurd (this gen). Pick a game and look at even a 5090.
Also, 1440p->4K up-scaling looks pretty good, even with FSR. Now 960p->1440p will look good too (it's always been okay with DLSS, but now it will be with FSR4, I think), which, again, is the point of the 9070 XT.
1440p isn't good enough (in my view, especially with any longevity and/or using more features like FG) on the 5080 because it doesn't have to be... yet. It could be, but it isn't, because there is no competition.
Now, up-scaling is even important for 1080p->4k, which is *literally the point of DLSS4*. Even for a 5080 (because of the situation above).
'5070 is a 4090' is because 1080p up-scaling IQ has improved to the point they think they can compare it (along with adding FG) to a 4090 running native 4k (in raster). That is the point of that.
Up-scaling is super important. On consoles, they have (and continue to use) DRS. This is no different than that, really. Asking for consistent native frames at high res using RT is just not realistic for most budgets.
This is the whooollleee point of why they're improving up-scaling. RT will exist, in some cases in a mandatory way. You will use up-scaling, and you will prefer it looks ok. OR, you will not play those games.
OR, you will spend a fortune on a GPU. Or you will lower other settings (conceivably quite a bit as time goes on). That's just reality.

Look, I get that some people still get hung up on things like "but gddr7" and such. GUYS, a 5080 FE needs 22gbps ram to run at stock (to saturate compute at 2640mhz). Do you know why those speeds?
Think of what AMD is putting out, and where that ram clocks. That is what you do (when you can). You put out the slowest thing you can to win; nothing more. Give nothing away you can sell as an upgrade.
Especially, as I'm showing you above, when it can be tangible. Save it for next-gen and sell it then. nVIDIA truly could give you 24GB and 3.23ghz clocks. They didn't...but they could.

This will bear out when people overclock 9070 XT's (somehow often just short of a stock 5080FE or similar to 5070ti OC) and/or there is a 24gbps card.
Somehow magically similar in many circumstances, especially if 3.47ghz or higher.
Because it's really, honestly, just MATH. It's not opinion. It's MATH. Yes, some units/ways of doing things differ; yes excess bw helps some (perhaps ~6% stock in this case?), but so do extra ROPs on N48.
I don't have the math on the ROPs; I'm sure it differs by resolution. I've never looked into it. But compute is similar; I don't know how much the TMUs help RT (yet). But the main point remains.
Compute is compute (in which 10752 @ 2640 = 8192 @ 3465). Bandwidth is bandwidth. Buffer is buffer. It's all solvable. None of this is magic, but they will try to sell you bullshit, which I am not.
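That parenthetical equivalence really is just a product check under the same shaders × clock assumption:

```python
# "Compute is compute": the two shader/clock pairs give the same raw product.
print(10752 * 2640)   # 28385280
print(8192 * 3465)    # 28385280 -> identical throughput by this metric
```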


SMH.

Sometimes I just want to dip until Rubin comes out. We'll just see...won't we. Period not question mark.
The 5080... you mean that 999,- MSRP GPU that you can't buy, and if you do, you're still going to fiddle with adapters on your current PSU, or have to upgrade it anyway alongside it... and you might be missing 12% perf due to 8 missing ROPs? That one?

Oh man, where can I sign up! Yeah, RT is really here with THAT GPU. The 6080! That'll be the day! All yours for the low-low price of 1299,-, coming soon TM with DLSS 19

Come on. I'm never gonna be buying into that bullshit "enthusiast" n00b trap. No matter how many influencers say I need to. It's just a terrible deal, and gen-to-gen progress has come to a complete and utter standstill because of 'Nvidia' pushing RT 'forward'. You have to be blind to not see this, and still keep promoting the tech. Not even Nvidia wants it to succeed; they're busy selling AI, and RT is just a fun side project. So the only way RT will really gain traction is if we do a lot less of it, so a shitty x60 that is in fact an x50 in disguise can still run it somehow. Otherwise it's dead in the water, forever the dream that will never materialize.

Even RDNA4 didn't move the needle forward; the performance level on offer, was already in the market for years. RT perf has fully stagnated - only fake frames can move it ahead now. Telling, indeed.
 
Wut? UDNA is 2027.
AMD needs to get something out ASAP. Like the 5700 XT, this is shaping up to be a short generation. N3 is set to ramp up production this year for Vera Rubin. Expect an announcement of something in a few days.
 
[..] The next node for Desktop GPUs will be UDNA. [..]
If true, yes, you hit the nail on the head: one could argue, why get RDNA4 now if a major architecture overhaul will supersede it and it will be the last RDNA architecture? If one doesn't really need a new GPU, one may wait. CUDA-like support on consumer hardware in UDNA1, let's go. With UDNA, the same team can develop for both HPC and consumer, and it should save AMD money; money is the only reason why any company would do it, not out of the goodness of their hearts (e.g. losing market share to CUDA). Supposedly PlayStation 6 may also use UDNA1, another reason to wait.
It is nice that AMD tries different things, like the chiplet-based design in RDNA3 or the architecture split of CDNA and RDNA, although they reverted both: back to UDNA (CDNA + RDNA), and back to a monolithic chip design in RDNA4 (at least so far, with the 9070 (XT)).
 
My 4090 has never exceeded 16GB vram allocation. For gaming, 32GB is totally unnecessary and, at least for now, 16GB is enough for anything. (snip) I think 16GB will be safe for a while longer. (snip)
No game has ever exceeded 16GB, because there has never been a GPU for games above 16GB.

When a developer is going to make a game, they purposely limit the use of VRAM.

Imagine if they were to make a game and didn't set a limit on the number of objects and textures in the game? They could easily use 50GB and the game would crash and crash.

If the Game Director wants the game to use a maximum of 12GB, then no developer will be able to exceed that limit.

That's the basics of the basics.
 
Which comes down to semantics; if it's the most you can have due to whatever conditions (market, technology limitations, the vendor's choice not to release anything better, etc.), it's also the best you can have.
Yeah I mean it's undeniably the fastest gaming graphics card on the market, but people could have (or invent) their own reasons why it wouldn't suit them I guess. It's certainly not for everyone, but I'd agree the word best fits here. Semantics perhaps.
 
No game has ever exceeded 16GB, because there has never been a GPU for games above 16GB.
Let's not forget about the system resource hog "Star Citizen".
 
If true, yes, you hit the nail on the head: one could argue, why get RDNA4 now if a major architecture overhaul will supersede it and it will be the last RDNA architecture? (snip)
The thing about it is, the UDNA 1 card looks to be a ways off. Everything is highly speculative at this point, but the launch window lies in the 2028 to 2031 timeframe. Several sources, including Wendell from Level1Techs, have said the new card will be a halo product consisting of two dies: one for compute loads to handle CAD-type work, and a second die for gaming, both glued together with Infinity Fabric. The graphics die may very well be a direct descendant of RDNA, but that name may well be erased from the AMD history books along with the Radeon Technologies Group. There were news stories circulating a couple of weeks ago about layoffs at RTG; it seems this is a precursor of what is to come there. Once it gets cut down to a size AMD leadership views as acceptable, the remnants will be folded into the CDNA division. Once the consolidation is complete, even the CDNA/UDNA nomenclature will disappear.

I must state that the old RDNA technology will not go away. On the contrary, it will keep being developed and new graphics dies produced under a different department, with the Radeon branding wiped out. As I opened this post: RDNA could be snoozed for a while. AMD has stated there will be an RDNA 5, but it will be for consoles and APUs only, no discrete GPUs; that launch is some 18 months away. After the RDNA 5 launch is completed, development of the UDNA card goes full steam. At that point, early 2027, the discrete GPU market could be hungry for a new series of cards, while AMD will be working on a new halo card for which it will charge top dollar. But what about the midrange market that AMD has cultivated?

We still can't gauge the sustained demand for RDNA 4. In the last 72 hours, Roman "der8auer" has revealed that the RX 9070 XT's performance can be pushed clear up to RTX 5080 levels by undervolting then overclocking the high-end RX 9070 XT cards. This claim was replicated by another overclocker in Indonesia 36 hours later. We don't know yet whether this operating regime is stable, but if it is, it could finish off the litany of screw-ups that has beset Nvidia and nail the coffin shut on the RTX 5000 series. This in turn could create an over-demand problem for the 9070 series, leading to chronic shortages for the rest of 2025 and a complete sellout before the 2025 holiday season ends. What happens then if AMD has to go back to TSMC asking for a supplemental run of new 9070 wafers? Does AMD just hand TSMC the files and say "another 1,000 wafers please", or do they tempt fate with some engineering change orders? Obviously AMD should know by the end of June how much strength the demand crush has and when they will run out of new cards.

These are still early days, but all indications are pointing to AMD having to address problems it hasn't seen in a very long time.
 
We still can't gauge the sustained demand for RDNA 4. (snip) This in turn could create an over-demand problem for the 9070 series, leading to chronic shortages for the rest of 2025 and a complete sellout before the 2025 holiday season ends.
This part right here is what I've been subconsciously locked onto for three months. Yes, the 9070 cards are great and all, but the silicon production situation is BAD bad.
I've been in a situation where skipping multiple generations has put me OUT of the support ring and it has started to impact my work, so it's kind of a panic.
We know, to no one's surprise, that TSMC is on some iteration of their 5nm technology and that it continues to run at 100% until demand falls off.
That could be anywhere between several months to a few years. There are new production strategies coming out of the woodwork too.
Just not soon enough to make any impact on the current market, maybe not even the rest of this year.
So for anyone that actually needs these cards, that train has already left the station. Gotta snipe now.
What happens then if AMD has to go back to TSMC asking for a supplemental run of new 9070 wafers? Does AMD just hand TSMC the files and say "another 1,000 wafers please", or do they tempt fate with some engineering change orders?
It would be a minor revision at worst but another AM4 situation at best.
This 9070 generation is a short one and should stay that way.
It's just not a great investment with UDNA around the corner.
Stragglers like myself, trying to retire OLD OLD cards, are the audience.
That should tell you everything.
 
UDNA and Rubin(?) are at least 2 years off from now. I understand that someone on Ada or maybe 7900 XTX may feel comfortable waiting for it, but if you're on anything weaker, the 9070 XT is arguably the card to get, 5080 if you can spare the cash.
 
I'd argue an MSRP 5070ti is better value than a 5080, and could be a contender here too.
 

Mmm, yeahh the problem is getting anything at MSRP these days. If you see any card at MSRP... and you need it, jump at the opportunity :(
 
Yeah! AMD said they moved only a small fraction of the chips/cards retailers have stockpiled since December last Thursday, but expect most outlets to be restocked soon. It sounds like MSRP cards might get scarce this go-round.
 
At what point do you suspect prices come completely unglued from reality?
It's most likely the next step but I don't have any indication for the moment.
 
If people could control their urges and not buy anything for insane money, then maybe supply would have a chance to catch up with demand and prices would eventually return to normal. I know, wishful thinking.

I'd argue an MSRP 5070ti is better value than a 5080, and could be a contender here too.
If you can find any GPU at MSRP, please let us know.
 