Monday, February 20th 2023

AMD RDNA4 Architecture to Build on Features Relevant to Gaming Performance, Doesn't Want to be Baited into an AI Feature Competition with NVIDIA

AMD's next-generation RDNA4 graphics architecture will retain a design focus on gaming performance, without being drawn into an AI feature-set competition with rival NVIDIA. David Wang, SVP of the Radeon Technologies Group, and Rick Bergman, EVP of Computing and Graphics Business at AMD, gave an interview to the Japanese tech publication 4Gamer, in which they dropped the first hints about the direction the company's next-generation graphics architecture will take.

While acknowledging NVIDIA's momentum in the GPU-accelerated AI space, AMD said that it doesn't believe image processing and performance upscaling are the best use of the GPU's AI-compute resources, and that the client segment still hasn't found extensive use for GPU-accelerated AI (or, for that matter, even CPU-based AI acceleration). AMD's own upscaling tech, FSR, doesn't leverage AI acceleration. Wang said that with the company introducing AI acceleration hardware in its RDNA3 architecture, he hopes AI will be leveraged to improve gameplay itself, such as procedural world generation, NPCs, and bot AI, adding the next level of complexity, rather than spending those hardware resources on image processing.
AMD also stressed the need to make the GPU more independent of the CPU in graphics rendering. The company has taken several steps in this direction over the past several generations, the most recent being the multi-draw indirect accelerator (MDIA) component introduced with RDNA3. Using this, software can dispatch multiple instanced draw commands to the GPU in a single call, greatly reducing CPU-level overhead. RDNA3 is up to 2.3x more efficient at this than RDNA2. Expect more innovations along these lines with RDNA4.
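To put that in concrete terms, a multi-draw indirect call on the application side looks roughly like the sketch below (a minimal, hypothetical Vulkan example for illustration only, not AMD driver code; the record_multi_draw helper and buffer names are made up). The per-draw parameters live in a GPU buffer, so one recorded command replaces thousands of individual draw calls and the CPU-side cost stays roughly flat no matter how many draws are issued; MDIA is aimed at accelerating exactly this kind of workload on the GPU side.

#include <vulkan/vulkan.h>

/* Hypothetical helper: 'indirectBuffer' is assumed to already hold an array of
   VkDrawIndexedIndirectCommand records (index count, instance count, offsets per draw),
   written once by the CPU or generated by a GPU culling pass. */
void record_multi_draw(VkCommandBuffer cmd, VkBuffer indirectBuffer, uint32_t drawCount)
{
    /* One recorded command issues 'drawCount' instanced draws; the CPU never touches
       the individual draw parameters, which is where the overhead saving comes from. */
    vkCmdDrawIndexedIndirect(cmd, indirectBuffer,
                             0,                                      /* byte offset into the buffer */
                             drawCount,                              /* number of draws */
                             sizeof(VkDrawIndexedIndirectCommand));  /* stride between records */
}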

AMD understandably didn't reveal anything about the "when," "what," and "how" of RDNA4, as its latest RDNA3 architecture is only just getting off the ground and awaiting a product ramp through 2023 into various market segments, spanning iGPUs, mobile GPUs, and mainstream desktop GPUs. RDNA3 currently powers the Radeon RX 7900 series high-end graphics cards and the iGPUs of the company's latest 4 nm "Phoenix Point" Ryzen 7000-series mobile processors. You can catch the 4Gamer interview at the source link below.
Sources: 4Gamer.net, HotHardware

221 Comments on AMD RDNA4 Architecture to Build on Features Relevant to Gaming Performance, Doesn't Want to be Baited into an AI Feature Competition with NVIDIA

#201
Avro Arrow
Here's an idea for AMD...

Instead of talking about RDNA4, how about actually releasing a new RDNA3 card? The RX 7900 cards came out over two months ago and we haven't really gotten much more than a sniff of what is to come. Thus far, nVidia has released the RTX 4090, 4080, and 4070 Ti, is primed to release three versions of the 4070, and has been talking about the 4060 for a while now. We've heard a few things about the RX 7800 XT, but that was a long time ago. We haven't heard anything about when the 7800 cards will be available, let alone the 7700 and 7600 cards!

Maybe AMD should fix the RDNA3 situation before talking about what comes next. I'm sure people care more about what's happening now than what will happen in a few years' time.
Posted on Reply
#202
RH92
chrcolukThey power more gaming devices than Nvidia, dGPU is smaller than the console market.
Completely irrelevant, but since you want to go there, please compare dGPU margins to iGPU margins...
Posted on Reply
#203
Vayra86
AquinusI'm not a fan of nVidia, but the Nintendo Switch accounts for a very large chunk of gaming consoles in the wild and that's powered by a nVidia chip. However, when it comes to non-portable gaming consoles, AMD practically has a monopoly on custom designed SoCs for these devices. dGPU market share doesn't properly describe the diversity of AMD's portfolio to be completely honest.
This is a good point, but it's an ARM system that also shares barely any games, if any, with other platforms or the PC. You could say Nintendo carved out its own market.
Posted on Reply
#204
Aquinus
Resident Wat-man
Vayra86This is a good point, but it's an ARM system that also shares barely any games, if any, with other platforms or the PC. You could say Nintendo carved out its own market.
Definitely, but it's a huge slice of the market. It doesn't really matter that it's ARM or that almost everything on it is exclusive to Nintendo. The reality is that there are a lot of them out there in the wild, so while it might be a carved out market of their own, it's a pretty big slice of the pie.
Posted on Reply
#205
chrcoluk
RH92Completely irrelevant, but since you want to go there, please compare dGPU margins to iGPU margins...
It's relevant in terms of influence on game design and AI features, which is the subject of this thread.

Developers don't care about margins; that's something that's not relevant unless...

Is this an Nvidia vs AMD troll discussion, or a discussion about whether AMD is right not to invest in hardware AI chips?
Posted on Reply
#206
N3M3515
Avro ArrowWe've heard a few things about the RX 7800 XT
I'm pretty sure that one is going to be a big disappointment. The 7900 XT is $900. The gap in performance between it and the 6800 XT is small (33%), but the gap in price is huge (56%). My guess is it will be barely faster than the 6800 XT, by about 16%, splitting the difference between the 6800 XT and the 7900 XT. But at what price? $800, $750, $700? If it's $750, then it's 15% more expensive for 16% more perf, which means price/performance stagnation.
That $750 price would put it on par with the 6950 XT in performance. The headline will be "last gen $1100 performance for only $750!!! what a steal!!"
Posted on Reply
#207
Ahhzz
Stick to the topic. Trolling is not allowed. ;)
Posted on Reply
#208
mathohardo
Well, as someone with a high-end PC, I bought a PS5 to play the new God of War on my 75-inch TV, and overall the game is super, super pretty (using the 60 fps mode). That's a 10.3-teraflop GPU with 16 GB of total system and GPU memory. The 7900 XTX is 61 teraflops, and RDNA4 will be on a better process node than that, so even mid-range cards will obviously run 1440p native like melted butter. I still hope AMD is working on an AI mode, but the jump from FSR 1 to FSR 2.1 is kind of amazing. We're closing in on the point where, if you don't use ray tracing or 4K, your GPU is overkill, massively so. 60 fps was a hard target for over a decade, and now games (especially at 1080p and in esports) can break 600. If 1440p were by law the maximum resolution and ray tracing were banned, next-gen cards would be beyond the human eye's ability to care, at 300 fps for intensive games and 400+ for less taxing titles. So it kind of doesn't matter, other than whether companies are overcharging, and nVidia absolutely has, and only owns up to bad decisions after an outcry, like the recent GPU naming scandal.

Keep in mind nVidia released a 1030 with DDR4 years after the GDDR5 version, when the GDDR5 1030 was already bad, and the DDR4 one took a massive performance hit. It deceived the average tech user. Look it up: games ran 30-40% worse under the same name, yet 85% of people couldn't tell you the difference between GDDR5 and DDR4 if asked that as a straight question.
Posted on Reply
#209
Avro Arrow
N3M3515I'm pretty sure that one is going to be a big disappointment. The 7900 XT is $900. The gap in performance between it and the 6800 XT is small (33%), but the gap in price is huge (56%).
Well, that's similar to the relationship between the RX 6800 XT and RX 6900 XT. The RX 6900 XT was an extra $350 for a "whopping" 9% increase in performance. People only bought the RX 6900 XT because it was often all that they could find. AMD's going to learn this the hard way.
N3M3515My guess is it will be barely faster than the 6800 XT, by about 16%, splitting the difference between the 6800 XT and the 7900 XT. But at what price? $800, $750, $700? If it's $750, then it's 15% more expensive for 16% more perf, which means price/performance stagnation.
That $750 price would put it on par with the 6950 XT in performance. The headline will be "last gen $1100 performance for only $750!!! what a steal!!"
I'm actually expecting the RX 7800 XT to be about $550-$600. I think that this would be fair because it would be more performance than the RX 6800 XT for a lower MSRP. If AMD does that, it's game over for nVidia this generation as people will go nuts for it. What AMD needs to focus on is availability because no matter how good of a deal something is, if you don't have it, you can't sell it and it's worthless to you.
Posted on Reply
#210
720p low
Regardless of AI involvement or not, I don't like the concept of upscaling or frame-generation in games. If a game can't be run at a favorable frame-rate, especially with higher tier hardware, without resorting to these magic tricks, then it seems to me the game needs to be better tuned. Or, as many say, "optimized."

Major game projects are generally in development for several years, or so we've been told. If that is accurate, then what sort of hardware is in use at the developer's studio during that long development time? Is everyone using the x86 Cray in the basement? Tell me, how do you construct a game that depends on hardware and drivers which don't yet exist, but which the game will require to run well upon release? (Yes, that is a rhetorical question.) But it seems that frame-generation and upscaling are being employed as the "Get Out Of Jail Free" card to smooth over game development shortcomings.

As for RT, it won't be a thing for me personally until a $300 card made by IDon'tCare can run a heavily ray-traced, AAA game using Very High IQ settings at 1080p and never drop below 60 fps. And, without resorting to any sort of upscaling or frame-generation sleight of hand. Until then, I can't be bothered. I refuse to be Captain Ahab chasing the globally illuminated White Whale, Moby Dick, all over the Atlantic Ocean that is thousands of dollars deep. For what? To play Assassin's Creed AI 2077? Forget that.

I don't know precisely what AMD's plans are, but I hope they see the obvious place where they could move some serious product; namely, that lower mid-tier range where most PC gamers live. Something akin to the RX 6750 XT class, decently built, maybe with the new encoder and HDMI standard, and with enough AIB incentives to ensure being in the $329 ~ $349 USD range at retail. We don't need no stinkin' AI.
Posted on Reply
#211
Vayra86
720p lowRegardless of AI involvement or not, I don't like the concept of upscaling or frame-generation in games. If a game can't be run at a favorable frame-rate, especially with higher tier hardware, without resorting to these magic tricks, then it seems to me the game needs to be better tuned. Or, as many say, "optimized."

Major game projects are generally in development for several years, or so we've been told. If that is accurate, then what sort of hardware is in use at the developer's studio during that long development time? Is everyone using the x86 Cray in the basement? Tell me, how do you construct a game that depends on hardware and drivers which don't yet exist, but which the game will require to run well upon release? (Yes, that is a rhetorical question.) But it seems that frame-generation and upscaling are being employed as the "Get Out Of Jail Free" card to smooth over game development shortcomings.

As for RT, it won't be a thing for me personally until a $300 card made by IDon'tCare can run a heavily ray-traced, AAA game using Very High IQ settings at 1080p and never drop below 60 fps. And, without resorting to any sort of upscaling or frame-generation sleight of hand. Until then, I can't be bothered. I refuse to be Captain Ahab chasing the globally illuminated White Whale, Moby Dick, all over the Atlantic Ocean that is thousands of dollars deep. For what? To play Assassin's Creed AI 2077? Forget that.

I don't know precisely what AMD's plans are, but I hope they see the obvious place where they could move some serious product; namely, that lower mid-tier range where most PC gamers live. Something akin to the RX 6750 XT class, decently built, maybe with the new encoder and HDMI standard, and with enough AIB incentives to ensure being in the $329 ~ $349 USD range at retail. We don't need no stinkin' AI.
Someone gets it.

Except games are not developed to run on the hardware of tomorrow; more often than not, games are in fact tuned to the hardware of yesteryear. Because that's where the market is: like you say yourself, the midrange, ~300 EUR card is the point at which a performance level is truly 'enabled' in gaming. But the idea of tuning to old hardware conflicts with the approach of using shiny graphics to impress and attract an audience for a game. Upscaling technologies kinda solve that conflict.

RT is a funny one though, much like numerous heavy post-processing effects, and that's where the FSR/DLSS approaches truly shine. Anything with volumetric or dynamic lighting, whether raster or RT based, simply incurs a heavy performance hit because it's dynamic, in real time. Anything that is calculated in real time takes render time, continuously, and incurs an FPS hit, no matter how fast the GPU. Upscaling technologies claw back some of that lost time and, in a way, enable more effects rather than applying a lower-quality pass to make the game playable. Now you can run through Cyberpunk's areas without losing half your FPS because of a volumetric patch of fog; the dip is less pronounced, even though the rest of the game might run a solid 60 FPS.

As for AI... I think it's a dangerous development in many ways. It doesn't inspire creativity, but rather tunnel vision, much like typical algorithms do: in the end, whether the 'entities' are programmed or trained, human input is the basis for their decision-making ability, and as such they are limited by it. We've already seen some shining examples from ChatGPT. For gaming, I see a purpose in game development, but not in the consumer space, and there is the risk of an overall decline in quality titles as the barrier to entry gets lower.
Posted on Reply
#212
duladrop
My concern about this is whether game devs will adopt their platform. Just picture this: AMD's AI accelerators can work on a number of features, like RT, NPCs, bots, the game environment, and image processing. Here's a BIG "IF": if the game adopts it, it's fun for gamers and it can be more immersive, right? But what happens to NVIDIA, with their Tensor cores only used for image processing and RT? They are not optimized or designed for bots and NPCs.

If you say, yeah, they have RT, well, AMD's RDNA 3 can do RT as well; their AI accelerators can do that.

But what happens to the other things, like NPC and bot optimization? So we are looking at which platform is more appealing to game devs for making games more fun and immersive.
Posted on Reply
#213
lordmogul
If done wrong, it will go the same way as PhysX: a side note that nobody cares about, because not everyone can use it. We could've gotten super-realistic physics simulation (liquids, fabric, dust, etc.) a long time ago, but its GPU acceleration is nvidia-only, so nobody used it for anything meaningful.

Do the same with any of the AI processing, make it vendor specific, and you'll lose out. To sell your game well, you'll need to make sure as many people as possible can run it. And since games are mostly developed once and ported over that applies not just on PC but also on consoles.
So we're back to RDNA2 compatibility for the given time.
Any developer even remotely interested in using hardware to improve live AI in games would be smart to do it in a way that runs on all of them without hitting performance too much.

Real-time ray tracing has to be looked at the same way. If it doesn't run sufficiently well on the midrange cards of both (or now all three) vendors, it won't be more than an interesting gimmick.
In the current market that would be an RTX 3060, RX 6650 XT, or A750.
Make it run well there at high settings without upsampling and we're back where it used to be for many years.
The sub-$350 cards always used to be able to run the then-current games at then-common resolutions at high settings. And without upsampling, because there was no upsampling.

In short, no developer would bring out a game in 2023 that uses all the AI and RT stuff only to set the hardware requirements to "RDNA3/Ada Lovelace or higher".
Posted on Reply
#214
Avro Arrow
lordmogulIf done wrong, it will go the same way as PhysX: a side note that nobody cares about, because not everyone can use it. We could've gotten super-realistic physics simulation (liquids, fabric, dust, etc.) a long time ago, but its GPU acceleration is nvidia-only, so nobody used it for anything meaningful.
Yep. That's why Havok ended up winning that war. I think the same way when I think of DLSS and FSR. Hell, nVidia blocks its own cards from using those features and now they're saying that DLSS3 only works on the RTX 40-series. All I can do is shake my head at all the people who keep buying nVidia and getting screwed like this. When FSR works on ALL cards, even old nVidia cards that nVidia won't let use DLSS, you know that something's rotten in Santa Clara.
lordmogulDo the same with any of the AI processing, make it vendor specific, and you'll lose out. To sell your game well, you'll need to make sure as many people as possible can run it. And since games are mostly developed once and ported over that applies not just on PC but also on consoles.
So we're back to RDNA2 compatibility for the given time.
Any developer even remotely interested in using hardware to improve live AI in games would be smart to do it in a way that runs on all of them without hitting performance too much.

Real-time ray tracing has to be looked at the same way. If it doesn't run sufficiently well on the midrange cards of both (or now all three) vendors, it won't be more than an interesting gimmick.
In the current market that would be an RTX 3060, RX 6650 XT, or A750.
Yeah, none of those cards can run HW-accelerated RT properly. It honestly makes me wonder why people are paying extra for mid-to-low nVidia cards when they can't run RT at that level anyway.
lordmogulMake it run well there at high settings without upsampling and we're back where it used to be for many years.
Ohhh, they don't want that! Jensen wants RT to seem like a "pie-in-the-sky-holy-grail-must-pay-hundreds-for-it" kind of magic.
lordmogulThe sub-$350 cards always used to be able to run the then-current games at then-common resolutions at high settings. And without upsampling, because there was no upsampling.
Which is how it should be.
lordmogulIn short, no developer would bring out a game in 2023 that uses all the AI and RT stuff only to set the hardware requirements to "RDNA3/Ada Lovelace or higher".
I agree with you... well, unless Jensen finances them enough, like he did with Control, Minecraft RTX and Portal RTX. :laugh:
Posted on Reply
#215
medi01
RH92Completely irrelevant, but since you want to go there, please compare dGPU margins to iGPU margins...
Welp, and?

Last quarter:

greedia GPU business - 1.57 billion
amdia GPU + console business - 1.6 billion

how were the margins, cough?
Posted on Reply
#216
wolf
Performance Enthusiast
Avro Arrownone of those cards can run HW-accelerated RT properly. It honestly makes me wonder why people are paying extra for mid-to-low nVidia cards when they can't run RT at that level anyway.
I've been playing around with my new A2000 6GB (~3050 performance) in my 2nd 5600G rig, and I have played several games with high settings (or higher), RT+DLSS on, and what I call a good-to-great graphics experience at 1080p: Metro Exodus 60+ fps, Doom Eternal ~120 fps, Control 60+ fps, CP 2077 ~50 fps, Spiderman+MM 75+ fps. Naturally the rebuttal here is that DLSS is fake resolution, so it's not even real 1080p, but with the 2.5.1 DLL dropped in, all of these games look and play fantastic by my measure, with IQ highly comparable to native now, and state-of-the-art, sometimes transformative, visuals. Now the rebuttal may be that your 'measure' sets the bar higher than that, fake resolution gimmicks and so on. I can't argue against what you deem proper and acceptable, but yeah, for my taste, in my first-hand experience, even the lowest-end RTX-suite cards can absolutely play games with RT on while looking and playing great.

FWIW I paid 'extra' for the performance per form factor on this one, didn't want an ageing 1650 or anaemic RX6400.
Avro ArrowAll I can do is shake my head at all the people who keep buying nVidia and getting screwed like this.
Curious, from your unique perspective, is there a use case/buying case in your mind where it is acceptable to buy Nvidia without the head shake? Or does every buyer fit in the same bucket? I'm not after an argument, I'm genuinely curious what you think.
Posted on Reply
#217
Avro Arrow
wolfI've been playing around with my new A2000 6GB (~3050 performance) in my 2nd 5600G rig, and I have played several games with high settings (or higher), RT+DLSS on
I said "not potent enough to properly use RT" and properly using RT means not using DLSS. Jeez, I bet that Jensen just loves you because he's successfully lowered your expectations by making you think that using DLSS is the same as playing at native resolution (except that it's not).

"Sure, let's make it prettier with RT but uglier with DLSS or FSR!" <- Somehow that makes no sense.
Posted on Reply
#218
wolf
Performance Enthusiast
Avro ArrowI said "not potent enough to properly use RT" and properly using RT means not using DLSS
Sorry, I didn't realise you get to define what properly means.

I alluded to this exactly in my post lol, except you cut that all out, including my rationale, given I don't draw an arbitrary line where I think the tech is unacceptable despite the results and just conclude it's ugly, well done. Second question ignored too, nice.
Posted on Reply
#219
Ahhzz
Avro ArrowI said "not potent enough to properly use RT" and properly using RT means not using DLSS. Jeez, I bet that Jensen just loves you because he's successfully lowered your expectations by making you think that using DLSS is the same as playing at native resolution (except that it's not).

"Sure, let's make it prettier with RT but uglier with DLSS or FSR!" <- Somehow that makes no sense.
wolfSorry, I didn't realise you get to define what properly means.

I alluded to this exactly in my post lol, except you cut that all out, including my rationale, given I don't draw an arbitrary line where I think the tech is unacceptable despite the results and just conclude it's ugly, well done. Second question ignored too, nice.
I'm going to stop you both right here, for your own sakes. Find the topic, stick to it, act like adults.
Posted on Reply