Monday, February 20th 2023

AMD RDNA4 Architecture to Build on Features Relevant to Gaming Performance, Doesn't Want to be Baited into an AI Feature Competition with NVIDIA

AMD's next-generation RDNA4 graphics architecture will retain a design focus on gaming performance, without being drawn into an AI feature-set competition with rival NVIDIA. David Wang, SVP of the Radeon Technologies Group, and Rick Bergman, EVP of Computing and Graphics Business at AMD, gave an interview to Japanese tech publication 4Gamer, in which they dropped the first hints about the direction the company's next-generation graphics architecture will take.

While acknowledging NVIDIA's momentum in the GPU-accelerated AI space, AMD said that it doesn't believe image processing and performance upscaling are the best use of the GPU's AI-compute resources, and that the client segment still hasn't found extensive use for GPU-accelerated AI (or, for that matter, CPU-based AI acceleration). AMD's own image-processing tech, FSR, doesn't leverage AI acceleration. Wang said that, with the company introducing AI acceleration hardware in its RDNA3 architecture, he hopes AI is leveraged to improve gameplay itself, such as procedural world generation, NPCs, and bot AI, adding the next level of complexity, rather than spending the hardware resources on image processing.
AMD also stressed the need to make the GPU more independent of the CPU in graphics rendering. The company has taken several steps in this direction over the past few generations, the most recent being the multi-draw indirect accelerator (MDIA) component introduced with RDNA3. Using it, software can batch multiple instanced draw commands into a single submission that the GPU processes on its own, greatly reducing CPU-level overhead; RDNA3 is up to 2.3x more efficient at this than RDNA2. Expect more innovations along these lines with RDNA4.
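To make the mechanism a little more concrete: multi-draw indirect is already exposed through standard graphics APIs, and the sketch below is a minimal illustration using OpenGL's glMultiDrawElementsIndirect. This is generic API usage, not AMD's MDIA hardware path or driver code, and it assumes a GL 4.3+ context with a VAO, vertex buffer, and index buffer already set up. The CPU fills a buffer of per-object draw parameters once and issues a single call; the GPU front end then walks the command list without further CPU involvement.

```cpp
// Minimal multi-draw indirect sketch (OpenGL 4.3+). Illustrative only; assumes a
// valid GL context, a bound VAO, and vertex/index buffers created elsewhere.
#include <GL/glew.h>   // or any other OpenGL function loader
#include <cstddef>
#include <vector>

// Per-draw parameter layout mandated by the OpenGL spec for indexed indirect draws.
struct DrawElementsIndirectCommand {
    GLuint count;          // number of indices in this draw
    GLuint instanceCount;  // number of instances to render
    GLuint firstIndex;     // starting offset into the index buffer
    GLint  baseVertex;     // value added to each index
    GLuint baseInstance;   // first instance ID
};

// Upload one command per object, then submit them all with a single API call.
void drawSceneIndirect(const std::vector<DrawElementsIndirectCommand>& cmds)
{
    GLuint indirectBuf = 0;
    glGenBuffers(1, &indirectBuf);
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuf);
    glBufferData(GL_DRAW_INDIRECT_BUFFER,
                 cmds.size() * sizeof(DrawElementsIndirectCommand),
                 cmds.data(), GL_DYNAMIC_DRAW);

    // One call replaces cmds.size() individual instanced draw calls; the GPU reads
    // the command array from the bound buffer (offset 0, tightly packed).
    glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                nullptr, static_cast<GLsizei>(cmds.size()), 0);

    glDeleteBuffers(1, &indirectBuf);
}
```

Vulkan and Direct3D 12 expose the same pattern through vkCmdDrawIndexedIndirect and ExecuteIndirect, respectively; AMD's MDIA is aimed at making this kind of GPU-driven submission cheaper.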

AMD understandably didn't say anything about the "when," "what," and "how" of RDNA4, as its latest RDNA3 architecture is just off the ground and awaiting a product ramp through 2023 into market segments spanning iGPUs, mobile GPUs, and mainstream desktop GPUs. RDNA3 currently powers the Radeon RX 7900 series high-end graphics cards, and the iGPUs of the company's latest 4 nm "Phoenix Point" Ryzen 7000-series mobile processors. You can catch the 4Gamer interview at the source link below.
Sources: 4Gamer.net, HotHardware

221 Comments on AMD RDNA4 Architecture to Build on Features Relevant to Gaming Performance, Doesn't Want to be Baited into an AI Feature Competition with NVIDIA

#151
ratirt
fevgatos: Sure, is a video with each individual core usage enough as proof?
Can you tell whether the game is actually using the e-cores, or is it the usage of Windows 11 background processes that is being shown? After you switch the e-cores off (assuming the game is using them), does the performance drop? If it does drop, by how much?
#152
fevgatos
ratirt: Can you tell whether the game is actually using the e-cores, or is it the usage of Windows 11 background processes that is being shown? After you switch the e-cores off (assuming the game is using them), does the performance drop? If it does drop, by how much?
Yes, the game is using the e-cores, it's pretty obvious to tell: 16 e-cores at 30 to 50%+ usage. And yes, in those specific games I mentioned, performance drops by around 15% with e-cores off in CPU-demanding areas of the game (apartments, Tom's Diner, etc.). That's with the 13900K; I don't know the exact numbers with a 12900K, but since that's the CPU I'm using right now, I can test it.
#153
beedoo
fevgatos: Yeah, plenty. Cyberpunk, spiderman remastered
Is that the same Spiderman remaster that runs better on a 7900XT than on an overclocked 4070ti?
#154
ratirt
fevgatos: Yes, the game is using the e-cores, it's pretty obvious to tell: 16 e-cores at 30 to 50%+ usage. And yes, in those specific games I mentioned, performance drops by around 15% with e-cores off in CPU-demanding areas of the game (apartments, Tom's Diner, etc.). That's with the 13900K; I don't know the exact numbers with a 12900K, but since that's the CPU I'm using right now, I can test it.
Funny, you still haven't shown anything, just written your theories. The 13900K and 12900K will act exactly the same with e-core and P-core utilization, since Windows is scheduling those, not the CPU itself.
According to the 13900K benchmark on TPU, that is not correct for the Cyberpunk game at any given resolution. Performance does not drop by 15% when disabling e-cores, but rather by 1.8% at 1080p.
#155
fevgatos
beedoo: Is that the same Spiderman remaster that runs better on a 7900XT than on an overclocked 4070ti?
And the same cyberpunk that runs better on a 3080 than a 7900xt
#156
Vya Domus
fevgatos: And the same cyberpunk that runs better on a 3080 than a 7900xt


1 fps dude, it runs 1 fps faster with RT. 1 fps

Meanwhile it's like 40% faster with RT off, not even worth comparing the two.

"runs better on a 3080", the nonsense rubbish you say never ceases to amaze me, you might just take the cake for the worst fanboy I've seen on this site yet.
#157
TheoneandonlyMrK
fevgatos: Never said the 4090 is a low-power card. I said it's incredibly efficient. Which it is.

Never said the 13900k is exceptional. Actually I swapped back to my 12900k cause I prefer it. What I said is that e-cores are not useless in gaming, since there are games that benefit a lot from them. Try to actually argue with what people are saying instead of constant strawmanning
You haven't said anything on topic so far.

Can this shitpost-fest be locked yet? It's clear that some would rather argue about AMD vs Nvidia or CPUs.
#158
las
fevgatos: Yeah, plenty. Cyberpunk, spiderman remastered, spiderman miles morales


You said e-cores are useless. I said they are as useless as the 2nd CCD. Can you actually give me a couple of applications where the 2nd CCD boosts performance but e-cores don't? If not, then you HAVE to admit that e-cores are as useless as the 2nd CCD.
Oh really? TechPowerUp's test showed 2% less performance in Cyberpunk with e-cores off

And if you actually run Windows 10 instead of 11, most games will perform like crap because there's no Thread Director, which is essential so e-cores are not used for stuff that actually matters
#159
fevgatos
ratirt: Funny, you still haven't shown anything, just written your theories. The 13900K and 12900K will act exactly the same with e-core and P-core utilization, since Windows is scheduling those, not the CPU itself.
According to the 13900K benchmark on TPU, that is not correct for the Cyberpunk game at any given resolution. Performance does not drop by 15% when disabling e-cores, but rather by 1.8% at 1080p.
I have videos on my channel showing e-core usage on a 13900k

I'll show you when I'm home
Vya Domus

1 fps dude, it runs 1 fps faster with RT. 1 fps

Meanwhile it's like 40% faster with RT off, not even worth comparing the two.

"runs better on a 3080", the nonsense rubbish you say never ceases to amaze me, you might just take the cake for the worst fanboy I've seen on this site yet.
So it runs slower than a 2.5 year old card. Splendid
las: Oh really? TechPowerUp's test showed 2% less performance in Cyberpunk with e-cores off

And if you actually run Windows 10 instead of 11, most games will perform like crap because there's no Thread Director, which is essential so e-cores are not used for stuff that actually matters
I don't really care what tpup showed, I have the CPU and the game. If tpup doesn't test in CPU-demanding areas that need more than 8 cores, then obviously you won't see a difference.
#160
Vya Domus
fevgatos: So it runs slower than a 2.5 year old card. Splendid
Well, looks like you forgot this, a game where a 4070ti is slower than Nvidia's own 2.5 year old previous generation. What should I make of this, our resident fanboy? I suppose that's splendid as well.

#161
Vayra86
fevgatos: Okay, do you think for example w1z considers them gimmicks? Dodge the question, go ahead
Everyone is entitled to their opinion. Do you really think 'you have me' now or something? Get a life.

W1z has already confirmed that despite his general statements about the need for more VRAM (generally: core runs out as VRAM runs out), which I do agree with, the exceptions do make the rule, and we've already seen several examples appear in his own reviews where this had to be acknowledged. Similarly, W1z has also been seen saying the technologies in play here are progress, and that he likes to see it. But he has also been saying how abysmal the performance can get in certain games. It's a thing called nuance. You should try it someday.

And that's my general stance with regard to these new features too: the general movement is good. But paying through the nose for them today is just early adopting into stuff with an expiry date, and very little to show for it. Upscaling technologies are good. And they're much better if they are hardware agnostic.

Similarly, RT tech is good. And it's much better if it's hardware agnostic.
AMD is proving the latter in fact 'just works' too.

And that is why Nvidia's approach is indeed a gimmick, where fools and money get parted. History repeats.
#162
fevgatos
Vayra86: Everyone is entitled to their opinion. Do you really think 'you have me' now or something? Get a life.

W1z has already confirmed that despite his general statements about the need for more VRAM (generally: core runs out as VRAM runs out), which I do agree with, the exceptions do make the rule, and we've already seen several examples appear in his own reviews where this had to be acknowledged. Similarly, W1z has also been seen saying the technologies in play here are progress, and that he likes to see it. But he has also been saying how abysmal the performance can get in certain games. It's a thing called nuance. You should try it someday.

And that's my general stance with regard to these new features too: the general movement is good. But paying through the nose for them today is just early adopting into stuff with an expiry date, and very little to show for it. Upscaling technologies are good. And they're much better if they are hardware agnostic.

Similarly, RT tech is good. And it's much better if it's hardware agnostic.
AMD is proving the latter in fact 'just works' too.

And that is why Nvidia's approach is indeed a gimmick, where fools and money get parted. History repeats.
We were talking about dlss / fsr, that's at least what the post I quoted was talking about. Those are definitely not gimmicks. Rt, sure, you can call it that, especially in certain games.
#163
las
fevgatos: I have videos on my channel showing e-core usage on a 13900k

I'll show you when I'm home


So it runs slower than a 2.5 year old card. Splendid


I don't really care what tpup showed, I have the CPU and the game. If tpup doesn't test in CPU-demanding areas that need more than 8 cores, then obviously you won't see a difference.
I do, because I look at facts and proof. If you actually need more than 8 cores in a game, you are screwed anyway, because e-cores are slow and run at ~4 GHz on a dated architecture.

i7-13700K has the same game performance as i9-13900K.

Even i5-13600K only performs 1% behind i9-13900K. For half the price.

Efficiency cores give you exactly nothing. Performance cores are what matter for gaming performance.

Ryzen 7800X3D will smack i9-13900K for half the price in a few weeks. Oh, and half the watt usage.

You can enable DLSS 3 to make fake frames tho, that will remove cpu bottleneck :roll:
#164
fevgatos
Vya Domus: Well, looks like you forgot this, a game where a 4070ti is slower than Nvidia's own 2.5 year old previous generation. What should I make of this, our resident fanboy? I suppose that's splendid as well.

The 4070ti is slower than a 2 year old, much more expensive card. The XT is slower than a 2 year old, cheaper card.

But it doesn't matter, it wasn't me who made the claim. Someone said "are we talking about the spiderman that the 7900xt is faster than a 4070ti" as if that means something. It doesn't. It's a pointless statement that doesn't seem to bother you. You seem particularly bothered when a specific company is losing. Someone would even call you biased.
las: I do, because I look at facts and proof. If you actually need more than 8 cores in a game, you are screwed anyway, because e-cores are slow and run at ~4 GHz on a dated architecture.

i7-13700K has the same game performance as i9-13900K.

Even i5-13600K only performs 1% behind i9-13900K. For half the price.

Efficiency cores give you exactly nothing. Performance cores are what matter for gaming performance.

Ryzen 7800X3D will smack i9-13900K for half the price in a few weeks. Oh, and half the watt usage.

You can enable DLSS 3 to make fake frames tho, that will remove cpu bottleneck :roll:
Well, obviously you don't look at facts and proof, cause you don't have the actual CPU. I do, and I'm telling you that in CPU-demanding areas, e-cores boost performance by a lot.

I'll make some videos with e-cores off as well, since on my channel I only have ones with e-cores on, and you'll see that there is a difference.
#165
Vya Domus
fevgatos: Someone would even call you biased
Yeah sure because I am the one who says a card that runs 1 fps faster in RT and 40% slower in raster in a game "runs better". That's totally not a laughable statement and it doesn't sound like something someone who is biased would ever say.
#166
Vayra86
fevgatos: We were talking about dlss / fsr, that's at least what the post I quoted was talking about. Those are definitely not gimmicks. Rt, sure, you can call it that, especially in certain games.
Correct, and I mentioned them. But this topic is about 'the use of AI' for GPU acceleration in general, too - see title.

None of the technologies in play 'require AI' because Nvidia said so, and the point isn't proven either because Nvidia has a larger share of the market now. That just proves the marketing works - until a competitor shows a competitive product/a design win (like Zen!) and the world turns upside down. See, the truth isn't what the majority thinks it is. The truth is what reality dictates - a principle people seem to have forgotten in their online bubbles. And then they meet the real world, where real shit has real consequences. Such as the use of die space vs cost vs margins vs R&D budgets.
#167
Vya Domus
Vayra86: None of the technologies in play 'require AI' because Nvidia said so
I still remember the first implementation of DLSS in Control, where Remedy said it didn't actually use the tensor cores, proving they're completely redundant.
#168
Vayra86
Vya Domus: I still remember the first implementation of DLSS in Control, where Remedy said it didn't actually use the tensor cores, proving they're completely redundant.
Yeah, I never quite understood what's 'AI' about upscaling anyway. You just create a set of rules to implement upscaling; there is no way you're passing through every scene in a game to determine what's what. People play the game with a flexible viewport, they don't rerun a benchmark.

Nvidia is clearly charging ahead with their implementation and marketing because having to dial it back would:
A. destroy their dual-use strategy for datacenter and consumer GPUs
B. force them to revert to old technology sans special cores
C. require redesigning the CUDA core to actually do more per clock, or somehow improving throughput further while carrying their old featureset

They realistically can't go back, so strategically, AMD's bet is a perfect one - note that I said this exact thing when they talked about their proprietary RTX cores shortly after they were initially announced. Also, the fact that Wang is saying now what he said years ago at around the same time... Time might be on either company's side, really. It's going to be exciting to see how this works out. Still though, the fact AMD is still on the same trajectory is telling; it shows they have faith in the approach of doing more with less. Historically, doing more with less has always been the success formula for hardware - and it used to be the 'Nvidia thing'.
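To make the "set of rules" point concrete, here is a deliberately trivial sketch in C++ of a purely analytic spatial upscaler: a fixed bilinear filter with no trained model and no tensor hardware involved. This is a toy illustration, not code from FSR or any shipping upscaler; real analytic upscalers such as FSR 1.0 layer edge-adaptive reconstruction and sharpening on top of the same basic idea.

```cpp
// Toy rule-based ("non-AI") spatial upscaler: plain bilinear interpolation on a
// single-channel image. Illustrative only; not taken from any shipping upscaler.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct Image {
    int width = 0, height = 0;
    std::vector<float> pixels;  // row-major, one channel, values in 0..1
    float at(int x, int y) const { return pixels[(size_t)y * width + x]; }
};

Image upscaleBilinear(const Image& src, int dstW, int dstH)
{
    Image dst{dstW, dstH, std::vector<float>((size_t)dstW * dstH)};
    for (int y = 0; y < dstH; ++y) {
        for (int x = 0; x < dstW; ++x) {
            // Map the destination pixel back into source coordinates.
            float sx = (x + 0.5f) * src.width  / dstW - 0.5f;
            float sy = (y + 0.5f) * src.height / dstH - 0.5f;
            int x0 = std::clamp((int)std::floor(sx), 0, src.width  - 1);
            int y0 = std::clamp((int)std::floor(sy), 0, src.height - 1);
            int x1 = std::min(x0 + 1, src.width  - 1);
            int y1 = std::min(y0 + 1, src.height - 1);
            float fx = std::clamp(sx - x0, 0.0f, 1.0f);
            float fy = std::clamp(sy - y0, 0.0f, 1.0f);
            // Fixed rule: weighted average of the four nearest source texels.
            float top    = src.at(x0, y0) * (1 - fx) + src.at(x1, y0) * fx;
            float bottom = src.at(x0, y1) * (1 - fx) + src.at(x1, y1) * fx;
            dst.pixels[(size_t)y * dstW + x] = top * (1 - fy) + bottom * fy;
        }
    }
    return dst;
}
```

Everything here is ordinary shader-style arithmetic, which is exactly why this kind of approach runs on any GPU (or CPU) without dedicated matrix units.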
#169
fevgatos
Vayra86: Correct, and I mentioned them. But this topic is about 'the use of AI' for GPU acceleration in general, too - see title.

None of the technologies in play 'require AI' because Nvidia said so, and the point isn't proven either because Nvidia has a larger share of the market now. That just proves the marketing works - until a competitor shows a competitive product/a design win (like Zen!) and the world turns upside down. See, the truth isn't what the majority thinks it is. The truth is what reality dictates - a principle people seem to have forgotten in their online bubbles. And then they meet the real world, where real shit has real consequences. Such as the use of die space vs cost vs margins vs R&D budgets.
But you are not paying through the nose for them. That's just false. At launch prices, the 7900xt was 15% more expensive than the 4070 ti, while being only 12% faster in raster, much slower in RT, with worse upscaling and worse power draw. So how exactly are you paying through the nose for them? The 70ti in fact had better performance per dollar even on just raster. I'm sorry, but it seems to me you are paying through the nose for AMD.
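Taking those figures at face value, the 7900 XT's raster performance per dollar relative to the 4070 Ti works out to roughly 1.12 / 1.15 ≈ 0.97, i.e. about 3% worse at the launch prices quoted above.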
#170
Vayra86
fevgatos: But you are not paying through the nose for them. That's just false. At launch prices, the 7900xt was 15% more expensive than the 4070 ti, while being only 12% faster in raster, much slower in RT, with worse upscaling and worse power draw. So how exactly are you paying through the nose for them? The 70ti in fact had better performance per dollar even on just raster. I'm sorry, but it seems to me you are paying through the nose for AMD.
That's just false, you say, and yet, perf/dollar has barely moved forward since the first gen of RTX cards. You're living a nice alternate reality :)

Note my specs, and note how I'm not paying through the nose at any time ever - I still run a 1080 because every offer past it has been regression, not progress. You might not want to see it, but the fact is, the price to get an x80 GPU has more than doubled since then and you actually get less hardware for it. Such as lower VRAM relative to core.

I'm not even jumping on a 550-600 dollar RX 6800 (XT) because we're in 2023 now and this is the original MSRP of years back. That's paying too much for what it's going to do, even if it nearly doubles game performance relative to the old card.

There are a LOT of people on this dilemma right now. Every offer the market has currently is crappy in one way or another. If a deal is hard to swallow, it's a no deal in my world. Good deals feel like a win-win. There is no way any card in the new gen is a win-win right now.

Chasing the cutting edge has never been great, even when I did try doing so. I've learned I like my products & purchases solid and steady, so that I get what I pay for.

Hey, and don't take it from me, you don't have to:
www.techpowerup.com/forums/threads/graphics-card-prices-doubled-on-average-between-2020-and-2023-mindfactory-data.305018/
#171
fevgatos
Vayra86: And that is why Nvidia's approach is indeed a gimmick, where fools and money get parted. History repeats.
This is what you said that I disagreed with. Of course prices are up, but they are up for both AMD and Nvidia cards. In fact, as evidenced by the 4070ti launch prices compared to the 7900xt, you are not paying for those gimmicks, they come for free, since the 4070ti had the better value even for pure raster performance, completely excluding RT, DLSS and FG.

So who's the fool here? AMD buyers are paying more money for fewer features, higher power draw, worse RT and similar raster per dollar.
#172
mrnagant
I'd agree generally. AI is taking off, but very little is turnkey, through a simple exe. It's usually this whole environment you have to set up. Nvidia has had Tensor since 2018 in consumer chips, yet no AI in games. Then some stuff that may use AI cores, or could use AI cores, run just fine without them. Nvidia has had RTX Voice, which is awesome. But apps do a fine job without AI cores. I have voice.ai installed and it uses no more than 3% of my CPU. We have so much CPU overhead, and keep getting more and more cores that already go underutilized. For games, Nvidia has DLSS, but the competitors are still pretty dang good.

With RDNA3 we are seeing AI accelerators that will largely go unused, especially for gaming until FSR3 comes out. Zen5 will introduce AI accelerators and we already have that laptop Zen that has XDNA. On top of all the CPU cycles that go unused.

It's coming, but I think it's overrated in the consumer space atm. It's very niche to need those Tensor cores and a gaming GPU. On the business side, AMD has had CDNA with AI. What is really limiting is consumer software and strong AI environments on the AMD side. For gaming I'm more excited for raytracing and would rather that be the focus. RT is newer and needs that dedicated hardware. But generally, we are still lacking in how much hardware we are getting to accelerate that RT performance even from Nvidia. If for example Nvidia removed all that Tensor and replaced it with RT and just use FSR or similar, that would be mouth watering performance.

For AMD's argument, if they made up for it in rasterization and ray-tracing performance, that would make sense. But they can't even do that. Seems more like AMD just generally lacks resources.
#173
Dimitriman
evernessince: ...AMD announced a DLSS 3.0 competitor some time ago during the launch of RDNA3 GPUs. Technically you are correct, AMD won't have a DLSS 3.0 competitor with RDNA4, but that's because they will already have it under RDNA3.
OK, they did not claim anywhere that it includes frame generation; instead they used the term "fluid motion frames", but sure, let's assume so. But then, when? All they claimed is 2023, and it is still nowhere to be found, and we are approaching March. Maybe by the time this comes out and actually works, RDNA 4 will already be launched.
#174
nguyen
mrnagant: I'd agree generally. AI is taking off, but very little is turnkey, through a simple exe. It's usually this whole environment you have to set up. Nvidia has had Tensor since 2018 in consumer chips, yet no AI in games. Then some stuff that may use AI cores, or could use AI cores, run just fine without them. Nvidia has had RTX Voice, which is awesome. But apps do a fine job without AI cores. I have voice.ai installed and it uses no more than 3% of my CPU. We have so much CPU overhead, and keep getting more and more cores that already go underutilized. For games, Nvidia has DLSS, but the competitors are still pretty dang good.

With RDNA3 we are seeing AI accelerators that will largely go unused, especially for gaming until FSR3 comes out. Zen5 will introduce AI accelerators and we already have that laptop Zen that has XDNA. On top of all the CPU cycles that go unused.

It's coming, but I think it's overrated in the consumer space atm. It's very niche to need those Tensor cores and a gaming GPU. On the business side, AMD has had CDNA with AI. What is really limiting is consumer software and strong AI environments on the AMD side. For gaming I'm more excited for raytracing and would rather that be the focus. RT is newer and needs that dedicated hardware. But generally, we are still lacking in how much hardware we are getting to accelerate that RT performance even from Nvidia. If for example Nvidia removed all that Tensor and replaced it with RT and just use FSR or similar, that would be mouth watering performance.

For AMD's argument, if they made up for it in rasterization and ray-tracing performance, that would make sense. But they can't even do that. Seems more like AMD just generally lacks resources.
Yup, AMD still loses in rasterization even when they throw everything plus the kitchen sink at it; it's quite pathetic.

And it's not like AMD is doing more with less, they are doing less with more: the 7900XTX with a 384-bit bus + 24GB VRAM barely beats the 256-bit 4080 by a hair in raster and loses in everything else ;). The BOM on the 7900XTX is definitely higher than that of the 4080, and the only way for AIBs to earn any profit is selling the 7900XTX at ~1100usd, which makes it a worse choice than the 1200usd 4080.

Everyone and their mother should realize by now Nvidia is just letting RTG survive enough to keep the pseudo duopoly going.
#175
RH92
evernessince: For the love of god people read the short article. AMD is not giving up on AI
Nobody claimed AMD was giving up on AI; the claim was that they are already behind the competition on that front, and things aren't going to get any better for them since they seem to have dropped the ball on the idea of competing head-to-head with Nvidia.
evernessince: "Wang said that, with the company introducing AI acceleration hardware in its RDNA3 architecture, he hopes AI is leveraged to improve gameplay itself, such as procedural world generation, NPCs, and bot AI, adding the next level of complexity, rather than spending the hardware resources on image processing."

they're just focusing on what will have the biggest impact for gaming on their gaming GPUs.
Do you know why this speech sounds hollow? Because it's based on thin air!

AMD pulls the old switcheroo and claims they have implemented AI acceleration hardware in RDNA3 for what? Features that may or may not be a thing by the time RDNA3 goes EOL? When Nvidia implemented AI acceleration hardware in Turing, they also immediately put games that would leverage said hardware on the table; they didn't wait for it to happen.

Yet somehow you are falling for it... well, the majority of the market isn't.