
Top AMD RDNA4 Part Could Offer RX 7900 XTX Performance at Half its Price and Lower Power

It's hard for me to imagine similar performance with less RAM, roughly 67% of the memory bandwidth, much lower power consumption, and a lower price, all in native rendering. And in the very next generation, not a few generations ahead. Of course, with enough software magic it could at least theoretically look like it has similar performance.

24 GB of VRAM is definitely overkill. 20 GB is OK. More VRAM doesn't mean higher performance, except at settings that eat lots and lots of VRAM and overflow the buffer.
Less memory throughput is also fine; it depends on how fast the shaders are, how much L3 cache there is, etc.
Getting higher performance with fewer resources / higher architectural efficiency has always been the case, and it is the reason for generational progress.
 
24 GB of VRAM is definitely overkill. 20 GB is OK. More VRAM doesn't mean higher performance, except at settings that eat lots and lots of VRAM and overflow the buffer.
Less memory throughput is also fine; it depends on how fast the shaders are, how much L3 cache there is, etc.
Getting higher performance with fewer resources / higher architectural efficiency has always been the case, and it is the reason for generational progress.
I agree, but progress has slowed significantly compared to the beginning of the century and, as I have specified, I do not believe that a serious difference is possible in another generation with the listed disadvantages.
 
24 GB of VRAM is definitely overkill. 20 GB is OK. More VRAM doesn't mean higher performance, except at settings that eat lots and lots of VRAM and overflow the buffer.
Less memory throughput is also fine; it depends on how fast the shaders are, how much L3 cache there is, etc.
Getting higher performance with fewer resources / higher architectural efficiency has always been the case, and it is the reason for generational progress.
It's all relative, isn't it?

Newer games with higher-resolution assets making use of more features are what's driving up VRAM requirements. Even at 4K max settings, 10GB used to be enough only a few short years ago. People who bought 3080s probably skipped the 40-series and have been suffering with 10GB for a good year or more, insofar as "suffering" is still little more than the minor inconvenience of having to compromise on some graphics settings.

I think 16GB is the new sweet spot in that it will be enough for max or high settings for a decent GPU lifespan right now. 20 and 24GB sure do feel like overkill when the consoles are making do with somewhere between 9GB and 12.5GB depending on how much system RAM the game requires. Throw some headroom into that and a 12GB card is probably fine for lower resolutions, 16GB should handle 4K, and by the time games actually need 20 or 24GB, cards like the 7900-series, 3090/4090 will lack the raw compute to actually run at 4K max settings.

We're all speculating, and this is a thread based on speculation anyway, but as someone with friends working in multiple different game studios, there's a strong focus on developing for the majority, which means devs are targeting consoles and midrange GPUs at most. If you have more GPU power on tap, you can get higher framerates and/or resolution but don't expect anything else except in rare edge cases like CP2077 where Nvidia basically dumped millions of dollars of effort and cooperation with CDPR as a marketing stunt more than a practical example of the real-world future that all games will look like.
 
I agree, but progress has slowed significantly compared to the beginning of the century and, as I have specified, I do not believe that a serious difference is possible in another generation with the listed disadvantages.

Because a 2 nm TSMC wafer costs $30,000 apiece, and 3 nm costs $20,000 apiece, while in 2004 a 90 nm wafer cost only $2,000 apiece.
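
As a rough sketch of what those wafer prices could mean per die, here's a back-of-the-envelope estimate; the ~300 mm² die size, 80% yield, and the dies-per-wafer approximation are my own assumptions, not figures from any leak:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Classic approximation for usable dies on a round wafer, ignoring scribe lines."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(wafer_price_usd: float, die_area_mm2: float, yield_rate: float = 0.8) -> float:
    """Silicon cost of one working die, assuming a flat yield rate."""
    return wafer_price_usd / (dies_per_wafer(die_area_mm2) * yield_rate)

# Quoted wafer prices; the ~300 mm² die size and 80% yield are illustrative assumptions.
for node, price in [("2 nm", 30_000), ("3 nm", 20_000), ("90 nm (2004)", 2_000)]:
    print(f"{node}: ~${cost_per_good_die(price, 300):,.0f} per ~300 mm² die")
```

Under those assumptions, even at $30,000 per wafer the raw silicon for a midrange-sized die works out to a couple of hundred dollars, before packaging, memory, board, and margins.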



2 nm and 3 nm are off limits for AMD, which means no new graphics cards and AMD exiting the GPU business.

 
Because a 2 nm TSMC wafer costs $30,000 apiece, and 3 nm costs $20,000 apiece, while in 2004 a 90 nm wafer cost only $2,000 apiece.


2 nm and 3 nm are off limits for AMD, which means no new graphics cards and AMD exiting the GPU business.

Let me question these prices. They seem too round and I certainly don't know what is written in the contracts of the companies renting capacity from TSMC.
 
Let me question something else - why was AMD's latest graphics card launched back in 2022? We are close to 2025 and there aren't even hints of anything better coming.
What has Nvidia offered?
 
Let me question something else - why was AMD's latest graphics card launched back in 2022? We are close to 2025 and there aren't even hints of anything better coming.


By your reasoning, Nvidia's latest graphics card was also launched back in 2022, 2 months before the first Radeon 7000-series offering.

I guess Nvidia have an excuse though - they're poor and they can't afford to develop new graphics cards, nor is it economically viable for them to do that with their tiny marketshare.
 
24 GB of VRAM is definitely overkill. 20 GB is OK. More VRAM doesn't mean higher performance, except at settings that eat lots and lots of VRAM and overflow the buffer.
Less memory throughput is also fine; it depends on how fast the shaders are, how much L3 cache there is, etc.
Getting higher performance with fewer resources / higher architectural efficiency has always been the case, and it is the reason for generational progress.
It all depends on what you do with the machine.

For games maybe.

I run LLMs on my machine, and 24GB of VRAM isn't enough for some of the medium to large models. So if I could get a consumer card with 32 to 48GB of VRAM that didn't cost a kidney, I would.
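
As a rough sketch of why 24GB runs out so quickly: weight memory alone is roughly parameter count times bytes per parameter, before KV cache and runtime overhead. The model sizes and the ~20% overhead factor below are illustrative assumptions:

```python
# Back-of-the-envelope VRAM estimate: weights only, plus a flat overhead factor.
# Model sizes and the 20% overhead are illustrative assumptions, not measurements.

def weights_gb(params_billions: float, bytes_per_param: float) -> float:
    return params_billions * 1e9 * bytes_per_param / 1024 ** 3

for name, params in [("7B", 7), ("13B", 13), ("34B", 34), ("70B", 70)]:
    fp16 = weights_gb(params, 2.0) * 1.2   # 16-bit weights + ~20% overhead
    q4 = weights_gb(params, 0.5) * 1.2     # ~4-bit quantized weights + overhead
    print(f"{name}: ~{fp16:.0f} GB at fp16, ~{q4:.0f} GB at ~4-bit")
```

Even 4-bit quantized, a 70B-class model lands around 40 GB under these assumptions, which is exactly the gap between a 24GB card and a hypothetical 32-48GB consumer part.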
 
It all depends on what you do with the machine.

For games maybe.

I run LLMs on my machine, and 24GB of VRAM isn't enough for some of the medium to large models. So if I could get a consumer card with 32 to 48GB of VRAM that didn't cost a kidney, I would.


The Quadro RTX 8000 is available at quite low prices. True, it's only from the first series of cards supporting DX 12.2 and its performance isn't very high today, but it does have 48GB of VRAM. In fact, I came across a classifieds ad in my home country for a configuration with the specs in the photo, for the equivalent of $4,057 USD.
 
Yes, thanks - that's the page I copied the previous table from, and cited in my comment about Nvidia's Ada architecture launching two months earlier in October 2022.

You still seem to be oblivious to the reason I quoted you in the first place.
why was AMD's latest graphics card launched back in 2022?
The latest graphics card is 2024's 7600XT.

Stop citing and complaining about architecture launch dates; architectures have spanned multiple generations of graphics cards for decades now. That's a historical fact that cannot be changed or argued with, and it's very common to see old architectures in new generations; sometimes entire new generations of graphics cards have stayed on a last-gen architecture.
 
There is no free lunch for you. You want professional cards, you pay professional money.


@Chrispy_ FYI:

Well, it would actually be cheaper to buy two 7900 XTXs, which would give you 48GB of VRAM, versus the 32GB W7800, which costs $3,700 CAD, or the 48GB W7900 at $5,426 CAD.
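
A quick cost-per-GB comparison, treating the two XTXs as a combined 48GB for LLM work (splitting a model across cards, not true shared memory); the CAD price assumed for the 7900 XTX is purely illustrative:

```python
# Cost per GB of VRAM. W7800/W7900 prices are from the post above;
# the 7900 XTX price is an assumed street price for illustration.
cards = {
    "2x RX 7900 XTX (assumed ~$1,300 CAD each)": (2 * 1300, 48),
    "W7800 32GB": (3700, 32),
    "W7900 48GB": (5426, 48),
}
for name, (price_cad, vram_gb) in cards.items():
    print(f"{name}: ${price_cad:,} CAD total, ~${price_cad / vram_gb:.0f}/GB")
```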

But this is more of a hobby for me; if it were work related, my employer would be footing the bill for the hardware :)
 
It's all relative, isn't it?

Newer games with higher-resolution assets making use of more features are what's driving up VRAM requirements. Even at 4K max settings, 10GB used to be enough only a few short years ago. People who bought 3080s probably skipped the 40-series and have been suffering with 10GB for a good year or more, insofar as "suffering" is still little more than the minor inconvenience of having to compromise on some graphics settings.

I think 16GB is the new sweet spot in that it will be enough for max or high settings for a decent GPU lifespan right now. 20 and 24GB sure do feel like overkill when the consoles are making do with somewhere between 9GB and 12.5GB depending on how much system RAM the game requires. Throw some headroom into that and a 12GB card is probably fine for lower resolutions, 16GB should handle 4K, and by the time games actually need 20 or 24GB, cards like the 7900-series, 3090/4090 will lack the raw compute to actually run at 4K max settings.

We're all speculating, and this is a thread based on speculation anyway, but as someone with friends working in multiple different game studios, there's a strong focus on developing for the majority, which means devs are targeting consoles and midrange GPUs at most. If you have more GPU power on tap, you can get higher framerates and/or resolution but don't expect anything else except in rare edge cases like CP2077 where Nvidia basically dumped millions of dollars of effort and cooperation with CDPR as a marketing stunt more than a practical example of the real-world future that all games will look like.
Why would 24 GB of VRAM, or even more, be overkill? NVIDIA is bringing the RTX 5090 with 32 GB of VRAM, and there are already games that need about 12 GB of VRAM.

At 1440p - games like Alan Wake 2, Metro Exodus Enhanced, or that space simulation game whose full name I've forgotten. So 16 GB of VRAM wouldn't be the sweet spot; it would be the minimum going forward.

But those green devils have a very strange policy: the absolute flagship gets huge VRAM, while the rest of the lineup is equipped with peanuts, like 16 or 12 GB.

AMD has always put enough VRAM on its GPUs, as it did perfectly with RDNA 3, so we can assume the upcoming RDNA 4 will do the same.

I agree with one of your statements: ray tracing is nonsense and subjective. Personally, I don't like how it looks. The performance hit is far too big, but maybe in the coming years it will even out and just be a regular option for gamers.
I have much the same interpretation. Ray tracing is way too hyped by greedy NVIDIA, those green devils. The red AMD angels should stay more focused on raster, but to become and stay competitive with the green team, they need to improve their own RT performance. There is no way around it.
 
AMD has always put enough VRAM on its GPUs, as it did perfectly with RDNA 3, so we can assume the upcoming RDNA 4 will do the same.
RDNA 4 (8800 XT) is midrange only, so it will have 16GB of memory at most.

I'm waiting for the next high-end Radeon before I move off the 7900 XTX, since I do use the additional VRAM.
 
The latest graphics card is 2024's 7600XT.

Stop citing and complaining about architecture launch dates; architectures have spanned multiple generations of graphics cards for decades now. That's a historical fact that cannot be changed or argued with, and it's very common to see old architectures in new generations; sometimes entire new generations of graphics cards have stayed on a last-gen architecture.

Err, if they relaunch the RX 580 in 2027, then for you it will be the latest. Of course, it is not. You count the date of the first graphics card with a new microarchitecture. All variants after it are late iterations and don't count.
 
Why would 24 GB of VRAM, or even more, be overkill? NVIDIA is bringing the RTX 5090 with 32 GB of VRAM, and there are already games that need about 12 GB of VRAM.

At 1440p - games like Alan Wake 2, Metro Exodus Enhanced, or that space simulation game whose full name I've forgotten. So 16 GB of VRAM wouldn't be the sweet spot; it would be the minimum going forward.

But those green devils have a very strange policy: the absolute flagship gets huge VRAM, while the rest of the lineup is equipped with peanuts, like 16 or 12 GB.

AMD has always put enough VRAM on its GPUs, as it did perfectly with RDNA 3, so we can assume the upcoming RDNA 4 will do the same.


I have much the same interpretation. Ray tracing is way too hyped by greedy NVIDIA, those green devils. The red AMD angels should stay more focused on raster, but to become and stay competitive with the green team, they need to improve their own RT performance. There is no way around it.
I probably understand what you're saying (highlighted above) about VRAM, but the way you're saying it doesn't make sense.
Sweet spot (for amount, frequency, or whatever is under discussion) usually means the point past which the returns are minimal, minuscule, or nonexistent.
I consider 12GB of VRAM the minimum and 16GB the sweet spot. 8GB is dying even at 1080p native with max settings, from what I've seen in the latest UE5+ games.
And by minimum I mean (and I believe most people do) that games don't glitch, stutter, or have selected textures downgraded.

--------------------------------------------

As for RT, I personally like it. It's small visual things like this that all add up to more realistic visuals. I like it when, passing by a pothole filled with water, everything is mirrored in it instead of being a smudged image.
Does it take too much computational power? Yes, it does. Upscaling (quality) exists.
I consider "normal" medium RT settings the sweet spot. Max settings are past that spot, and path tracing even further...
 
You count the date of the first graphics card with a new microarchitecture. All variants after it are late iterations and don't count.

If you want to count architectures, say "architectures", not "graphics cards".

A graphics card is not an architecture. Confusing these two terms highlights an absolute, non-debatable gap in your understanding.
 
I probably understand what you're saying (highlighted above) about VRAM, but the way you're saying it doesn't make sense.
Sweet spot (for amount, frequency, or whatever is under discussion) usually means the point past which the returns are minimal, minuscule, or nonexistent.
I consider 12GB of VRAM the minimum and 16GB the sweet spot. 8GB is dying even at 1080p native with max settings, from what I've seen in the latest UE5+ games.
And by minimum I mean (and I believe most people do) that games don't glitch, stutter, or have selected textures downgraded.

--------------------------------------------

As for RT, I personally like it. It's small visual things like this that all add up to more realistic visuals. I like it when, passing by a pothole filled with water, everything is mirrored in it instead of being a smudged image.
Does it take too much computational power? Yes, it does. Upscaling (quality) exists.
I consider "normal" medium RT settings the sweet spot. Max settings are past that spot, and path tracing even further...
Well, I used to play at 2K and even 4K with details on high with 8 GB of VRAM. Older games played well. The graphics cards were (or are) a Radeon Vega 64 and a Radeon VII, and an RTX 2080 too.

And on ray tracing, I agree too. I'm curious to see how it turns out. I bought an RTX 4080 Super this summer, in September, but due to my poor mental health (depressive episodes) I haven't played much since then. Now the year is almost done, and I want to upgrade the second of my 3 PCs.

The CPUs I had chosen were the Ryzen 7800X3D or 9800X3D. The bloody thing is the continuing rise in prices: from 300 bucks this summer, the 7800X3D is now at €549. The successor, the 9800X3D, was €529 when it hit the market and is now at €719 on eBay.

All this comes down to simple facts: a shortage of both processors. The old gen isn't being produced any more, and the new gen hasn't saturated the market because of the huge demand. I'll wait until January or February next year, until the prices fall again. Currently they're far too high.

Over €500 for one CPU isn't candy; it's sour cream. But if prices stay at these high levels, I have in mind investing a little more than that: for €649, the almighty 16-core 7950 is available.
 
It seems AMD has dramatically improved the transistor density using the 4nm node, which means Navi 48 will pack as many as 45 billion transistors, while the smaller Navi 44 will pack 23 billion transistors.

Navi 44: 153 mm², 22.98 BTr, 150.2 MTr/mm²
The leak is wrong, though, in claiming the die is MCM; it is not.



For a comparison:
Navi 48: 300 mm², 45.06 BTr, 150.2 MTr/mm²

Navi 31: 529 mm², 57.7 BTr
Navi 32: 346 mm², 28.1 BTr
Navi 33: 204 mm², 13.3 BTr, 65.2 MTr/mm²

Navi 10: 251 mm², 10.3 BTr

Navi 24: 107 mm², 5.4 BTr
Navi 23: 237 mm², 11.06 BTr
Navi 22: 335 mm², 17.2 BTr
Navi 21: 520 mm², 26.8 BTr

Performance estimate: Navi 44 ~ Radeon RX 7700 XT | RTX 4070
Performance estimate: Navi 48 ~ Radeon RX 7900 XT | RTX 4070 Ti S (if everything goes well)
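
For reference, the density figures above are just transistor count divided by die area; a quick sketch to reproduce them from the numbers listed:

```python
# Density = transistors / die area; figures taken from the list above.
dies = {
    "Navi 44": (153, 22.98),
    "Navi 48": (300, 45.06),
    "Navi 33": (204, 13.30),
    "Navi 31": (529, 57.70),  # chiplet total: GCD plus MCDs on different nodes
}
for name, (area_mm2, btr) in dies.items():
    print(f"{name}: {btr * 1000 / area_mm2:.1f} MTr/mm²")
```

That works out to 150.2 MTr/mm² for both Navi 44 and Navi 48, versus 65.2 MTr/mm² for Navi 33 - roughly a 2.3x densification.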

 
AMD Radeon RX 8800 XT “RDNA 4” GPU Allegedly Offers 45% Faster RT Performance Than 7900 XTX, On-Par With RTX 4080 In Raster & Lower Power

RE4 is a title that runs very well on Radeons, and it has light RT.

So we'll need to wait for third-party reviews, and for a game that doesn't favor Radeon, to get a real idea of how it will perform.
 
It won't offer XTX performance, while the power consumption will remain extremely high, even if it's around 300 W. This card needed to be 215 W max.
 
It won't offer XTX performance, while the power consumption will remain extremely high, even if it's around 300 W. This card needed to be 215 W max.
You keep forgetting to add: "In my opinion."
 
In my opinion.
It's a well-known fact. If it did, they'd have already released it. AMD fans' copium levels are now off the charts.
 