
RDNA4 Prediction Time!!!

It's gonna be a nice collector's piece in either case. When you look at the box on your shelf 10 years later and think "hmm, that launch was really something, oh yeah". :D

I know, I'm weird.

Best case scenario, they don't want Nvidia to drop prices till after reviews, and are going to release a much better card a week later and go checkmate muthaF$&%^#....

To be clear, that's not how I think it's going to go down, but it would be awesome if it did lol.
 
Best case scenario, they don't want Nvidia to drop prices till after reviews, and are going to release a much better card a week later and go checkmate muthaF$&%^#....

To be clear, that's not how I think it's going to go down, but it would be awesome if it did lol.
Maybe, or maybe not, we'll see, but I'm 99% sure that's what their plan is.
 
Speculation is not truth, wait until parts are out then revisit in 3-6 months
 
If I'm not mistaken, the 5090 die is 750 mm², so it's at the reticle limit of what is currently possible. The only way forward for a 6090 is to go MCM.
If the rumours are true and UDNA will be AMD's next arch, and they go back to high end/enthusiast, then it's safe to assume they will also use MCM, as in the Instinct cards.
Then those gains of 50% or more in perf are possible. The only limit will be how much they want to, and are able to, compete.
The reticle limit is 858 mm², halved to 429 mm² for High-NA EUV. But TSMC isn't moving to High-NA before N1, and AMD's approach is clearly to do separate chiplets for the memory and the GCD, certainly not MCM. Pretty sure the 6090 is back to ~600 mm² with 30K CUDA cores. Good luck beating that when the 390 mm² 9070 with 4K shaders is much slower than the 378 mm² 5080. They lose in effective use of die area.

They should have done the next one in line, a 400 mm² GCD. They had 200 and 300 mm² GCDs, so why not just double those in size and call it a day? Perfectly reasonable. The 9070 is nothing but a monolithic 7800. There are no miracles here: for it to be 50% faster while the GDDR6 remains absolutely the same, the shaders have to be 100% better in some way.
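For anyone who wants to sanity-check the area numbers being thrown around, here's a minimal Python sketch using only the die sizes quoted in this thread (rumoured figures, not confirmed specs):

```python
# Rough sanity check of the die-area talk above.
# All sizes are the rumoured/quoted figures from this thread, not confirmed specs.
reticle_limit_mm2 = 858        # standard EUV single-exposure field
high_na_limit_mm2 = 858 / 2    # High-NA EUV field is roughly halved -> 429 mm²

dies_mm2 = {
    "5090 (quoted above)": 750,
    "5080 (quoted above)": 378,
    "9070 (rumoured)": 390,
    "hypothetical doubled 300 mm² GCD": 600,
}

for name, area in dies_mm2.items():
    print(f"{name}: {area} mm², "
          f"{area / reticle_limit_mm2:.0%} of the EUV reticle, "
          f"fits a High-NA field: {area <= high_na_limit_mm2}")
```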
 
AI analyzed your post:

The reticle limit is indeed approximately 858mm² for standard EUV, and High-NA EUV has a smaller field size of 429mm²[1][3].

AMD's approach is actually evolving beyond just memory and GCD separation. A new patent shows they're exploring a three-die GPU design with multiple GPU chiplets that can function either as a single GPU or as multiple GPUs[2][5].

Regarding die sizes and performance comparisons, it's not accurate to directly correlate die size with performance. AMD's patent suggests a more complex approach using front-end dies and shader engine dies working in parallel[2], which could potentially improve efficiency beyond simple die size scaling.

## Future Manufacturing Considerations

The transition to High-NA EUV won't happen immediately, with TSMC likely implementing it after N2[3]. This timing allows manufacturers to develop new approaches to handle the smaller reticle size limitation, including the chiplet-based designs AMD is exploring[5].

The assertion about shader performance improvements needing to be 100% better with same GDDR6 is oversimplified, as performance gains can come from various architectural improvements, not just raw shader count or memory bandwidth.

Citations:
[1] High-NA EUV lithography: the next step after EUVL - IMEC https://www.imec-int.com/en/articles/high-na-euvl-next-major-step-lithography
[2] AMD patents configurable multi-chiplet GPU - Tom's Hardware https://www.tomshardware.com/pc-com...lti-chiplet-gpu-illustration-shows-three-dies
[3] Large die and High-NA EUV reticle size - Real World Tech https://www.realworldtech.com/forum/?threadid=213553&curpostid=213584
[4] Extreme ultraviolet lithography - Wikipedia https://en.wikipedia.org/wiki/Extreme_ultraviolet_lithography
[5] New AMD Patent Describes Potential Chiplet-Based GPU Design https://www.extremetech.com/gaming/new-amd-patent-describes-potential-chiplet-based-gpu-design
[6] Die space. There's a maximum reticle size. You can't make chips ... https://news.ycombinator.com/item?id=34959009
[7] EUV reticle limit : r/hardware - Reddit
[8] AMD's multi-chiplet GPU design might finally come true | Digital Trends https://www.digitaltrends.com/computing/amd-patent-shows-multi-chiplet-gpu-design/
 
It's gonna be a nice collector's piece in either case. When you look at the box on your shelf 10 years later and think "hmm, that launch was really something, oh yeah". :D

I know, I'm weird.
No, we are on the same page. I bought the 3070 Ti and 4080 12GB mostly because of the price; they were cheaper than the "other guys" at the time..

And because the internet hates them, that was the clincher.. mostly on the 4080 12GB :laugh:
 

The 4070 Ti 12GB is a way better product than the 4080 12GB was going to be, lol. You also paid much less than most people for it, around 650 USD if I'm remembering correctly. Still, the GPU is fine; people don't like it because it started at 800 USD and only came with 12GB of VRAM, the same as the 330 USD 3060 came with the previous generation, lmao.

The 3070/3070 Ti just aged poorly, with the 3070 Ti being a cash grab by Nvidia....

If it were a 16GB card, it wouldn't be losing to the 2080 Ti here, a card with a slower core. Even the 6800 XT, with much weaker RT performance, outpaces it because it doesn't get choked by VRAM.
[Attached chart: relative RT performance, 2560x1440]
 
The 4070 Ti 12GB is a way better product than the 4080 12GB was going to be, lol
Really eh? I thought they just tucked it away and changed the name :laugh:
 
Really eh? I thought they just tucked it away and changed the name :laugh:

The key was it went from $899, which would have been really bad, to $800, which was at least debatable; the only real meh things were the 4K scaling and the 12GB of VRAM.
 
It's gonna be a nice collector's piece in either case.
No we are on the same page
As someone who considers themselves a collector now, a trend I believe I'm seeing is that, especially after a card is past its reasonable 'modern' life (i.e., not in an era-specific/retro box), let's just say 8-10 years, the cards that sold in lower numbers or have a meh rep are often among the more valuable and sought after by collectors. I'd wager that's due in large part to the fact that fewer were produced, so the rarity is automatically higher, but being more sought after sure has some intrigue to it sometimes.

A couple of examples would be high-end GeForce FX series cards, or high-end ATi cards from the 9000 series to the HD 3000 series; in the scheme of things, and relative to their actual rendering power and usefulness, they're rare and expensive. I certainly don't have nearly as many ATi/AMD cards as I want for my collection, and only in recent years did I make the decision to keep all my hardware rather than sell it to fund new parts.

Back to topic, it's hard to be all that excited about any of the upcoming products, it's a rollercoaster of excitement and then pessimism about availability and pricing (more than anything).
 
AI analyzed your post:

The reticle limit is indeed approximately 858mm² for standard EUV, and High-NA EUV has a smaller field size of 429mm²[1][3].
AI agrees with your AI and asks a question.

""It seems you're diving deep into the nuances of advanced semiconductor manufacturing and AMD's innovative approaches. You're right in highlighting that die size alone doesn't determine performance, and AMD's evolving chiplet strategy introduces a level of flexibility and parallelism that could significantly improve efficiency without directly correlating to die size scaling.

Regarding the High-NA EUV transition, the timeline for TSMC's implementation post-N2 will indeed shape the future of chip manufacturing, especially considering the challenges of smaller reticle sizes. The exploration of chiplet-based designs, like the ones AMD is patenting, could offer more efficient solutions to maximize performance while circumventing the limitations of lithography.

You're also correct in addressing the oversimplification of shader performance improvements; other factors, such as architectural innovations and optimized memory subsystems, could contribute significantly to performance without needing to double the raw shader count. Would you say this multi-chiplet GPU approach could potentially outperform traditional monolithic designs in the long run, especially with the added flexibility for scaling performance based on workload?"
 
the cards that sold in lower numbers or have a meh rep are often among the more valuable and sought after by collectors

Back to topic, it's hard to be all that excited about any of the upcoming products, it's a rollercoaster of excitement and then pessimism about availability and pricing (more than anything).

Truly, I know what you're trying to say. Some cards you gotta keep. Some because they just won't die (like an overclocked 2080 Ti at 1080p), and they have use until they're antiques.

Others I keep for other reasons. I keep my MAXX and Voodoo2 for sentimental value. I also keep my 970 that erupted into flames (from, I can only assume, trying to access that last GB of memory).

#stutterfire

If I had a Vega, I'd keep that as well. Lots of interesting designs throughout GPU history, if you think about it. Many of them ATi/AMD imo. RDNA3 will prolly go on the list as well.

____________________________________________

I'm excited for RDNA4 simply because I want to know if it clocks well. If it does, why is AMD keeping it a secret? It appears fairly obvious, given the design (power potential/die size for such a small unit count), that it should.

If it doesn't, they kinda failed imo. I don't know why they're being so cagey. Either they have something pretty cool that will clock to 3.3 GHz+ or they have something pretty cheap. Perhaps both; that'd be cool.

I look forward to seeing if >375 W designs can be pushed to some crazy level like ~3.6 GHz or so. That would be neat (but only if offered on a higher-end model with 24 Gbps memory).

I truly do think that's why everyone is being so weird. I really do think the design targets 3.4 GHz+ for a 24 Gbps model, which may not even be apparent on the 9070 XT (if they limit PL/voltage/clocks in the BIOS).
The lower-end model(s) could benefit from 3.3 GHz+ clock speeds in some cases, even with 20 Gbps memory...which might be what they're banking on (the cool 'overclock' hype)...kinda like the 9800X3D.

From the second they nixed it from the presentation, I couldn't help but think something like that was the case, but they really wanted it to be a 'surprise' against a more expensive 5070 or 5070 Ti.

If it exists, they probably have to release it for $600, which is likely less than they wanted.



We shall see.
 
I think we now have enough leaks/rumors to predict the 9070XT performance, price and power with incredible accuracy. From the latest MLID video, I compiled the fps from each of the games rounded to the nearest 5 for both the 9070XT and the 7800XT:


Game (4K Ultra, raster) | 9070 XT fps | 7800 XT fps | Ratio
Watch Dogs Legion | 90 | 60 | 1.50
Far Cry 6 | 115 | 80 | 1.44
Forza Horizon 5 | 155 | 110 | 1.41
Hitman 3 | 155 | 120 | 1.29
F1 23 | 160 | 130 | 1.23
Shadow of the TR | 120 | 80 | 1.50
Borderlands 3 | 100 | 70 | 1.43
Horizon Zero Dawn | 110 | 70 | 1.57
Cyberpunk 2077 | 60 | 40 | 1.50
Average | | | 1.43

Game (4K Ultra, RT) | 9070 XT fps | 7800 XT fps | Ratio
Black Myth: Wukong | 45 | 20 | 2.25
Hitman 3 | 30 | 20 | 1.50
F1 23 | 55 | 40 | 1.38
Shadow of the TR | 80 | 50 | 1.60
Cyberpunk 2077 | 25 | 15 | 1.67
Average | | | 1.68
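If anyone wants to re-run the math, here's a minimal Python sketch over the same rounded FPS numbers from the leak (rumoured figures, not measurements); it just reproduces the two average columns:

```python
# Recompute the average uplift columns from the table above.
# FPS values are the leaked numbers rounded to the nearest 5, not measurements.
raster = {  # game: (9070 XT fps, 7800 XT fps), 4K Ultra
    "Watch Dogs Legion": (90, 60), "Far Cry 6": (115, 80),
    "Forza Horizon 5": (155, 110), "Hitman 3": (155, 120),
    "F1 23": (160, 130), "Shadow of the TR": (120, 80),
    "Borderlands 3": (100, 70), "Horizon Zero Dawn": (110, 70),
    "Cyberpunk 2077": (60, 40),
}
rt = {  # game: (9070 XT fps, 7800 XT fps), 4K Ultra + RT
    "Black Myth: Wukong": (45, 20), "Hitman 3": (30, 20),
    "F1 23": (55, 40), "Shadow of the TR": (80, 50),
    "Cyberpunk 2077": (25, 15),
}

def avg_uplift(results):
    ratios = [new / old for new, old in results.values()]
    return sum(ratios) / len(ratios)

print(f"Raster average: {avg_uplift(raster):.2f}x")  # ~1.43x
print(f"RT average:     {avg_uplift(rt):.2f}x")      # ~1.68x
```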

...

What do you think?

Those numbers look very good imo. If the 9070 XT was sold around $500 or less then it could be a Home Run for AMD. RDNA 4 would bring a big performance bump whereas RTX 50s are not much of an improvement without AI. I think it would force Nvidia to release new GPUs but cheaper (like the RTX 40 SUPER series). Also if RDNA 4 GPUs are really that cheap to manufacture then they will definitely have a clear advantage over the RTX 50s.
 
RX 7800 XT, $499, on average is 3% faster

Not a very good generational leap now is it, disregarding the prices.

They're not in the same league. The $499 7800 XT is the successor of the $479 6700 XT, not the much more expensive 6800 XT.

Is that my fault? :)

only in recent years did I make the decision to keep all my hardware rather than sell it to fund new parts.

Me too ;)

I've still got my original i7 990X, X58A-OC with its DDR3 memory, and my cards date back to my original 980 GTX. Every other piece of hardware, I really wish I had kept. They're kind of like trophies/sentimental value, fond memories to reflect back on. I always say to myself these days, "geez, I wish I hadn't sold that".

The problem is, it keeps getting more expensive to upgrade since I don't sell anything anymore. Gonna have to burgle my local IGA for an RTX 5090 :cool:

Anyways guys, I'm an old school ATi/Radeon fan so honestly can't wait for what these 9070 XT's can do and will have no hesitation on buying one if need be!
 
Anyways guys, I'm an old school ATi/Radeon fan so honestly can't wait for what these 9070 XT's can do and will have no hesitation on buying one if need be!

That's my guy. :)

Not too many of us left that can say we're old-school ATi fans. I was watching some video the other day and this youngster was going on about nVIDIA inventions like tessellation and GPU physics.

You know, their 'technology leadership since forever'. Makes me sad people don't even remember so much crap and how nVIDIA didn't even use to know how to support color output correctly.

I was like:

"Do you remember Havok and Truform my guy? Oh wait, I don't think you were born yet."

nVIDIA's marketing brainwashing, and memory attrition of different days (like before Project Green Light [limiting overclocking/AIB designs], etc.), have affected a whole generation now.

Every day this forum feels a little bit more like that guy....So, glad you're around. :lovetpu:
 
Last edited:
Makes me sad people don't even remember so much crap and how nVIDIA

When we used to run ATi and Nvidia cards back in the day, there was and still is a "Texture filtering - Quality" option in the Nvidia driver panel. When you enabled "High Quality" it matched the graphics quality of ATi cards but dropped more frames on the Nvidia cards.

Me and my mates all agreed that Nvidia's graphics were not as refined as ATi's on the "Quality" setting, and only once we set "High Quality" in the Nvidia control panel did we think the graphics were of equal quality. All benchmarks back in the day were run with the stock driver "Quality" setting, which gave Nvidia more frames but a slightly more washed-out look compared to ATi's stock driver setting.

Just little things like that we picked up ;)
 
Not too many of us left that can say we're old-school ATi fans.

Makes me sad people don't even remember so much crap and how nVIDIA didn't even use to know how to support color output correctly.

At some point, the old stuff doesn't matter as much. I too remember the image quality arguments, but those are hardly relevant today. Sure, history should be remembered correctly, but it is also just history. BTW, back then I was perplexed by people claiming to be fans of Company X or Z, and I still think those people have no shame or pride. Buy the products, but that has to be the end of it.
 
Is that my fault? :)
Never said it was anyone's fault. I just said you were comparing the wrong products.

Me too ;)

I've still got my original i7 990X, X58A-OC with its DDR3 memory, and my cards date back to my original 980 GTX. Every other piece of hardware, I really wish I had kept. They're kind of like trophies/sentimental value, fond memories to reflect back on. I always say to myself these days, "geez, I wish I hadn't sold that".
And me. :)

If I sell something, I'll spend the money anyway, and won't notice the difference, but I definitely regret it later. I wish I never sold my half-height 1650, for example. It would have been a nice upgrade over the 1030 in my bedroom HTPC. I also wish I never sold my 7800 XT (I kind of had to, but anyway). I wouldn't be looking for a new GPU now if I still had it.
 
I'm not so optimistic about 50%

I'd say 30% better price-to-performance gen on gen is about right. I'm less bothered by actual performance gains as long as I'm getting 30% more for my money.

So the 9070 should offer at least 60% better price-to-performance vs whatever 6000-series card it's priced most similarly to, MSRP-wise.

The fact that AMD fans might finally get decent image quality with upscaling would be the cherry on top.

I don't want 50% more performance if it costs me 70% more money, like the 4080 gave us, for example.
Yeah, I'm not saying it's very probable, I'm saying it's very possible. You don't even need 50% gen on gen; with 40% gen on gen you get to twice the 6800 XT's performance after two generations.
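A quick sketch of the compounding math behind the 30%/40%/50% gen-on-gen figures in these posts, nothing more:

```python
# Compounded gen-on-gen uplift over two generations.
for per_gen in (1.30, 1.40, 1.50):
    print(f"+{per_gen - 1:.0%} per gen -> {per_gen ** 2:.2f}x after two gens")
# +30% -> 1.69x, +40% -> 1.96x (roughly 2x the 6800 XT), +50% -> 2.25x
```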
 
Yeah, I'm not saying it's very probable, I'm saying it's very possible. You don't even need 50% gen on gen; with 40% gen on gen you get to twice the 6800 XT's performance after two generations.
It is doable, proven by AMD themselves. The 7800 XT and 7900 XTX both offered 50% more performance over their predecessors (the 6700 XT and 6900 XT) at the same price.
 
I've seen a few posts about Nvidia performance. Pretty sure this is an AMD thread. Please try to keep it that way.
 
Everything Ngreedia has done for the past 7 years is sell snake oil, sell you "features" because they want to become a software company, because they don't make good hardware anymore. Everything they do is overpriced, every generation is single-digit improvements, etc... So they have to sell you fake frames, fake resolution, fake AI bullshit, fake quality, fake everything.
BINGO. That's exactly what I don't understand. If all of that is true and Nvidia offers single-digit improvements, why in hell isn't AMD curb-stomping Nvidia in raw performance? This does not make any sense; Nvidia's last-generation model will be faster than the fastest cards AMD is going to release. It cannot be true that Nvidia offers single-digit improvements unless AMD offers negative improvements.
 
I just said you were comparing the wrong products.

But isn't a 7800 XT a 7800 XT, and a 6800 XT a 6800 XT? That's what I was originally comparing: those two cards, and from my perspective the 7800 XT didn't do diddly-squat over a 6800 XT, and only if you didn't originally purchase a 6800 XT was the 7800 XT an option.

looking for a new GPU now

Plenty of options soon matey :cool:
 
BINGO. That's exactly what I don't understand. If all of that is true and Nvidia offers single-digit improvements, why in hell isn't AMD curb-stomping Nvidia in raw performance? This does not make any sense; Nvidia's last-generation model will be faster than the fastest cards AMD is going to release. It cannot be true that Nvidia offers single-digit improvements unless AMD offers negative improvements.
AMD has been on a 50% price-to-performance improvement lately. You only don't see that because they F*ed up on the stupid 7800 XT model name. It should have been called the 7700 XT to match the level it was priced at.

The 9070 won't match the 4090 because it's a good two tiers below that. It's not targeting anywhere near that performance. AMD themselves said that they're not going for the high-end this time, so which one of their cards are you expecting to beat Nvidia's last gen halo?
 
AMD has been on a 50% price-to-performance improvement lately. You only don't see that because they F*ed up on the stupid 7800 XT model name. It should have been called the 7700 XT to match the level it was priced at.
But how is that possible, my man? How far behind was AMD with RDNA 2 if they got 50% performance gen on gen (with Nvidia getting single digits) and Nvidia still ended up with the faster cards? Something doesn't add up. I've been reading that since Turing, to be honest, and yet AMD's fastest card still sits at like 5th-6th place. How can they consistently offer huge improvements while Nvidia offers single digits, and Nvidia still ends up on top??
 