Monday, February 20th 2023

AMD RDNA4 Architecture to Build on Features Relevant to Gaming Performance, Doesn't Want to be Baited into an AI Feature Competition with NVIDIA

AMD's next-generation RDNA4 graphics architecture will retain a design focus on gaming performance, without being drawn into an AI feature-set competition with rival NVIDIA. David Wang, SVP of the Radeon Technologies Group, and Rick Bergman, EVP of Computing and Graphics Business at AMD, gave an interview to Japanese tech publication 4Gamers, in which they dropped the first hints about the direction the company's next-generation graphics architecture will take.

While acknowledging NVIDIA's progress in the GPU-accelerated AI space, AMD said it doesn't believe image processing and performance upscaling are the best uses of the GPU's AI-compute resources, and that the client segment still hasn't found extensive use for GPU-accelerated AI (or, for that matter, even CPU-based AI acceleration). AMD's own image upscaling tech, FSR, doesn't leverage AI acceleration. Wang said that with the company introducing AI acceleration hardware in its RDNA3 architecture, he hopes AI is leveraged to improve gameplay, such as procedural world generation, NPCs, and bot AI, adding the next level of complexity, rather than spending the hardware resources on image processing.
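As a purely illustrative sketch of what spending AI compute on gameplay rather than image processing could look like, here is a toy fixed-weight policy network that picks an NPC action from game state. The state layout, weights, and action set are hypothetical and are not from AMD or any game engine; a shipped game would batch many such evaluations through the GPU's matrix/AI hardware rather than loop on the CPU.

```cpp
// Toy illustration of AI spent on gameplay rather than upscaling: a tiny
// fixed-weight policy "network" that picks an NPC action from game state.
// All weights and the state layout are hypothetical placeholders.
#include <algorithm>
#include <array>
#include <cstdio>

// state: [distance to player, own health, player health, ammo]  (normalized 0..1)
// actions: 0 = attack, 1 = take cover, 2 = retreat
std::array<float, 3> npc_policy(const std::array<float, 4>& s) {
    // One tiny dense layer with made-up weights, just to show the shape of the idea.
    const float w[3][4] = {{-1.2f,  0.8f, -0.9f,  1.1f},
                           { 0.7f, -0.6f,  0.9f, -0.4f},
                           { 1.0f, -1.3f,  0.5f, -1.0f}};
    const float b[3] = {0.1f, 0.0f, -0.2f};
    std::array<float, 3> scores{};
    for (int a = 0; a < 3; ++a) {
        scores[a] = b[a];
        for (int i = 0; i < 4; ++i) scores[a] += w[a][i] * s[i];
    }
    return scores;
}

int main() {
    std::array<float, 4> state = {0.3f, 0.9f, 0.4f, 0.8f};  // close, healthy, player hurt, ammo ok
    auto scores = npc_policy(state);
    int best = int(std::max_element(scores.begin(), scores.end()) - scores.begin());
    const char* names[] = {"attack", "take cover", "retreat"};
    std::printf("NPC chooses: %s\n", names[best]);
    return 0;
}
```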
AMD also stressed the need to make the GPU more independent of the CPU in graphics rendering. The company has taken several steps in this direction over the past several generations, the most recent being the multi-draw indirect accelerator (MDIA) component introduced with RDNA3. Using this, software can batch multiple instanced draw commands into a single call that the GPU processes on its own, greatly reducing CPU-level overhead. RDNA3 is up to 2.3x more efficient at this than RDNA2. Expect more innovations along these lines with RDNA4.
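For context, this is the general multi-draw-indirect pattern that such hardware accelerates: the application (or even a compute shader) packs many draw commands into a single buffer and issues one API call, instead of one call per object. The sketch below uses the indirect-command layout from OpenGL's glMultiDrawElementsIndirect with placeholder mesh sizes; it illustrates the pattern, not AMD's MDIA hardware path specifically.

```cpp
// Minimal sketch of the multi-draw-indirect pattern: the CPU packs many draw
// commands into one buffer, then issues a single API call; the GPU walks the
// buffer itself. Struct layout mirrors OpenGL's DrawElementsIndirectCommand.
#include <cstdint>
#include <cstdio>
#include <vector>

struct DrawElementsIndirectCommand {
    uint32_t count;          // number of indices for this draw
    uint32_t instanceCount;  // how many instances of the mesh
    uint32_t firstIndex;     // offset into the shared index buffer
    int32_t  baseVertex;     // offset added to each index
    uint32_t baseInstance;   // offset into per-instance attribute data
};

int main() {
    // Hypothetical scene: 1,000 meshes, each with its own index range.
    std::vector<DrawElementsIndirectCommand> cmds;
    uint32_t runningIndexOffset = 0;
    for (uint32_t mesh = 0; mesh < 1000; ++mesh) {
        uint32_t indexCount = 36 + (mesh % 64) * 3;  // placeholder sizes
        cmds.push_back({indexCount, 1, runningIndexOffset, 0, mesh});
        runningIndexOffset += indexCount;
    }

    // In a real renderer this vector would be uploaded to a GL_DRAW_INDIRECT_BUFFER
    // (or written by a compute shader with no CPU involvement at all) and consumed
    // by one call:
    //   glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, nullptr,
    //                               (GLsizei)cmds.size(), 0);
    // One CPU-side call replaces 1,000 individual draw calls.
    std::printf("packed %zu draws into %zu bytes\n",
                cmds.size(), cmds.size() * sizeof(DrawElementsIndirectCommand));
    return 0;
}
```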

AMD understandably didn't say anything about the "when," "what," and "how" of RDNA4, as its latest RDNA3 architecture is just off the ground and awaiting a product ramp through 2023 into market segments spanning iGPUs, mobile GPUs, and mainstream desktop GPUs. RDNA3 currently powers the Radeon RX 7900 series high-end graphics cards and the iGPUs of the company's latest 5 nm "Phoenix Point" Ryzen 7000-series mobile processors. You can catch the 4Gamer interview in the source link below.
Sources: 4Gamers.net, HotHardware

221 Comments on AMD RDNA4 Architecture to Build on Features Relevant to Gaming Performance, Doesn't Want to be Baited into an AI Feature Competition with NVIDIA

#26
Vya Domus
Who really cares, they're both right and wrong. Besides upscaling, the ML hardware accelerators really are worthless in the consumer space, and at the same time they won't be used for anything else any time soon.
R-T-BGamers aren't the core drivers of the market anymore.
OneMoargamers are an insignificant part of the market
You're both utterly wrong though; over 3 billion in revenue is not insignificant by any stretch of the imagination.



They've always made a huge chunk of their money from consumer products. Sadly for Nvidia, marketing and sponsorship deals don't work very well outside of the consumer market. You can't buy your way to success as easily and actually have to provide extremely competitive pricing, because ROI is critical to businesses as opposed to regular consumers, so you can't just price everything to infinity and provide shit value.
Posted on Reply
#27
Vayra86
CrackongAgreed

When things become more and more demanding,
Nvidia will have to allocate more and more die space for AI-dedicated processing units.
Soon it will reach a critical point, where there is too much 'dead weight' and it is no longer cost-effective to do it,

and Nvidia themselves will have to find a way to integrate AI processing back into the "normal" stream processors.
So the cycle begins again
This.

Nvidia is desperately looking to repurpose its fancy cores because all that die space is sitting there. They use it for DLSS, while AMD achieves near-similar results with FSR without using that die space. We can talk for ages about whether each pixel is arranged favorably on a per-game basis, but if you zoom out a little bit, that is, to an ergonomic seating position, you can't properly tell the difference.

Nvidia is pushing RT, but AMD is carrying it just fine, again without special cores. AMD's approach is clearly superior when you see the perf gap relatively shrink between RDNA2 and 3, and between AMD and Nvidia. We're building larger GPUs anyway, and it's a major step back to have parts of the die sit there not being used. We figured this out decades ago already... Nvidia's move since Turing was, is, and will continue to be a competitive regression - not progress. They're using die space for single-purpose nonsense that barely pays off. We're paying for that die space big time.

As long as AMD controls the console space, and as the only vendor capable of a custom APU with decent graphics that's a position they'll keep, they can dictate the progress of RT and gaming in general, because shit just has to run on their hardware. And they're making it happen as we speak. You cán use RT on AMD. You cán use FSR on any GPU. The technologies work and pay off just the same.

Nvidia is diving into a hole, and this is a long journey. Let's see where it ends. I don't think AMD is using a bad strategy here, nor a loser's strategy. Chasing the most popular soundbites isn't by definition the best way forward. And if you combine this with the general economic situation... wasting cores and valuable die space on special-purpose hardware makes even less sense.
Posted on Reply
#28
fevgatos
beedooCan't see the problem. Here in Australia, the 7900XT seems to be cheaper than the 4070ti, and as far as TPU's own reviews of the two cards are concerned, I can't see the 7900XT being destroyed by the 4070ti anywhere - in fact, the average gaming framerate chart at the end shows the 7900XT above the overclocked 4070ti at every resolution (raster)...

...unless you meant RT, or video compression, or DLSS, or something else - but you didn't say any of that.
Nowadays yes, they have similar prices; AMD dropped the price a couple of weeks ago. In the Hogwarts benchmark, latest patch from PCGH, the 7900 XT is losing to a 3080 while consuming 30% more power than the 4070 Ti.

Posted on Reply
#29
Dirt Chip
Thing is, NV can quickly and easily catch up with AMD on any non-visual AI implementation, but AMD will never get close to NV in the visual AI department.
AMD perpetuates the current state by choosing this route.
Posted on Reply
#30
Vayra86
OneMoarlet me put this to you

What if I were to combine AI art generation with ChatGPT's natural language interface and something like Unreal Engine 5? (We really are not far away from this at all; all the pieces exist, it just takes somebody to bring it all together.)

What if you could generate entire environments just by telling an AI to "show me the bridge of the Enterprise"?
If you can't see the potential and the way the winds are shifting, may our soon-to-exist AI god have mercy on your fleshy soul.
Then you would have a hype cycle, and then it would die down as people slowly figured out this is nothing other than what procedural generation has been doing in No Man's Sky. We're humans and we love to figure out patterns. The idea is that AI is too complex for us to see the pattern, so we'll consider it more 'sentient'. But then we ask 15 consecutive questions, the memory runs out, and it starts throwing out gibberish.
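Roughly speaking, the deterministic "pattern" is easy to show. The sketch below is a generic seeded value-noise generator (nothing from No Man's Sky or any specific engine): the same seed always produces the same map, which is the whole point.

```cpp
// Minimal sketch of seeded procedural generation: the same integer seed always
// yields the same terrain. The hash is a generic integer mixer, not taken from
// any particular game or engine.
#include <cstdint>
#include <cstdio>

// Small integer hash -> pseudo-random float in [0, 1)
float hash01(uint32_t x, uint32_t y, uint32_t seed) {
    uint32_t h = x * 374761393u + y * 668265263u + seed * 2246822519u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return (h ^ (h >> 16)) / 4294967296.0f;
}

int main() {
    const uint32_t seed = 42;  // change the seed, get a different (but repeatable) world
    for (uint32_t y = 0; y < 8; ++y) {
        for (uint32_t x = 0; x < 16; ++x) {
            float height = hash01(x, y, seed);
            std::putchar(height > 0.66f ? '^' : height > 0.33f ? '-' : '~');
        }
        std::putchar('\n');
    }
    return 0;
}
```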

Come on bro. Our 'AI' technology is input > output and lots of training. It's tunnel vision waiting to happen, and we've seen it all before. It's just a fancy, slightly less random RNG. These things are full of paradoxical stuff, and we're discovering that as we speak... err, chat.

In gaming, what is the big constant? Good design: talented developers make great games when given the tools. The tools are just tools. If you haven't got brilliance at the helm, you'll get dime-a-dozen crap, and we have a well fleshed-out history of said games and differences. AI won't change that at all. It'll only lower the bar for more bottom-feeding bullshit.
Posted on Reply
#31
Lionheart
OneMoarthe nvidia vs amd argument is irrelevant
the argument is AMD throwing in the towel when they can't hack it.
A company basically admitting "well, we aren't as good as our competitors, so we aren't going to even try."
Yeah, that's gonna go over really well.
What do you expect! Nvidia's mindshare is just too strong & AMD can't get away from their "bad driver" stigma no matter how much they improve them.
Posted on Reply
#32
fevgatos
LionheartWhat do you expect! Nvidia's mindshare is just too strong & AMD can't get away from their "bad driver" stigma no matter how much they improve them.
Definitely it's the stigma. Not like current top-end AMD cards losing to a 3080 in Hogwarts' latest patch is the issue.
Posted on Reply
#33
las
fevgatosOf course, the 7900xt was very reasonable, getting absolutely destroyed by the 4070ti. And their cpus? Oh those are insanely reasonable, they priced 6 cores at 300€ msrp when their competition asks 350 for 14cores that smack it around the room.
7900 XT beats 4070 Ti by 10% at 1440p and has 20GB with 800 GB/s - 4070 Ti has 12GB with 504 GB/s.
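For reference, those bandwidth figures fall straight out of the usual formula, assuming the commonly listed memory specs (320-bit bus at 20 Gbps for the 7900 XT, 192-bit at 21 Gbps for the 4070 Ti); the numbers below are just that textbook calculation, not measurements.

```cpp
// bandwidth (GB/s) = bus width (bits) / 8 * effective data rate (Gbps per pin)
#include <cstdio>

double bandwidth_gbs(int busWidthBits, double gbpsPerPin) {
    return busWidthBits / 8.0 * gbpsPerPin;
}

int main() {
    std::printf("7900 XT : %.0f GB/s\n", bandwidth_gbs(320, 20.0));  // 800 GB/s
    std::printf("4070 Ti : %.0f GB/s\n", bandwidth_gbs(192, 21.0));  // 504 GB/s
    return 0;
}
```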

And Intel uses big.LITTLE, meaning tons of efficiency cores, while AMD uses performance cores only, so you can stop comparing cores like that. The Ryzen 7800X3D will probably smack the i9-13900KS in gaming when it's released in a few weeks.
Posted on Reply
#34
btk2k2
fevgatosDefinitely it's the stigma. Not like current top end amd cards losing to a 3080 in hogwarts latest patch that is the issue

Until you get to Hogsmeade, and then...



Then at 1080p the 3080 chokes and can't even keep up with the 3060, which at least manages to hit 30 fps.

At 4K nothing manages to hit a nice 60 fps. Even the 4090 suffers a fair number of drops below 50, but at least that could work; everything else is just too slow unless you want to turn on DLSS/FSR or accept 30 fps.

Posted on Reply
#35
fevgatos
btk2k2Until you get to hogsmeade and then



Then at 1080p the 3080 chokes and can't even keep up with the 3060 that atleast manages to hit 30 fps.

At 4K nothing manages to hit a nice 60 fps. Even the 4090 suffers a fair number of drops below 50 but atleast that could work, everything else is just too slow unless you want to turn on DLSS/FSR or accept 30fps.

That's the old version before the patch. PCGH's numbers are from two days ago, with the latest patch.
Posted on Reply
#36
Vayra86
btk2k2Until you get to hogsmeade and then



Then at 1080p the 3080 chokes and can't even keep up with the 3060 that atleast manages to hit 30 fps.

At 4K nothing manages to hit a nice 60 fps. Even the 4090 suffers a fair number of drops below 50 but atleast that could work, everything else is just too slow unless you want to turn on DLSS/FSR or accept 30fps.

Conclusion: the game is an absolute joke, performing like that for the way it looks. We shouldn't care.

Or...we can feed the troll here just a bit more with random charts from random moments in time.
Posted on Reply
#37
fevgatos
las7900 XT beats 4070 Ti by 10% at 1440p and has 20GB with 800 GB/s - 4070 Ti has 12GB with 504 GB/s.

And Intel uses big.LITTLE, meaning tons of efficiency cores while AMD uses performance cores only, so you can stop compairing cores like that. Ryzen 7800X3D will probably smack i9-13900KS in gaming when released in a few weeks.
Beats it by 10% but until a few days ago it was 14% more expensive, while massively losing in rt and consuming 30% more power. Great deal my man
Posted on Reply
#38
btk2k2
fevgatosThat's the old version before the patch. Pcgh numbers are from 2 days ago with the latest patch
PCGH did not test in Hogsmeade village, so until that test is done with the new patch and new data is generated, there is no evidence that it is entirely fixed in that scenario.
Posted on Reply
#39
dyonoctis
evernessinceNo, gaming is still Nvidia's top earning segment. To go as far as the poster you are quoting and say that they are irrelevant is laughably incorrect. 50% of your revenue is not remotely insignificant.
The real question is how much of that is actually made up of pure gamers rather than professionals who don't need a Quadro (people like Beeple, Ash Thorp, and Vitaly Bulgarov made millions with their computers and worked on blockbusters without Quadros). I've seen people working in the creative industry say that stuff like a 3090/4090/4080 is so expensive because they are targeting people who can actually justify it as a work expense rather than leisure.
Posted on Reply
#40
fevgatos
btk2k2PCGH did not test in hogsmeade village so until that test is done with the new patch and new data is generated there is no evidence that it is entirely fixed in that scenario.
How about TPU's numbers? Let's ignore those as well, right?
Posted on Reply
#41
Dirt Chip
This one-game-example side quest has no end and no purpose in this thread...

AMD will need to come up with some kind of DLSS 3 answer, even if it's not frame-generation related, to make up for the performance gap.
I don't see "procedural world generation, NPCs, bot AI," as the writer says, helping to do so.
Posted on Reply
#42
Argyr
mamaThis thread is looking like troll country.
I agree, if you don't gobble up and fully support everything AMD does, then you're a troll
Posted on Reply
#43
btk2k2
fevgatosHow about tpups numbers? Let's ignore those as well right?
It clearly shows the game is not really ready to be a staple in a benchmark suite; different testing areas have such varied results that no singular correct conclusion can be drawn.

Trying to use it as part of an argument, given these obvious and observable shortcomings, suggests you have a disingenuous agenda.
Posted on Reply
#44
Vya Domus
fevgatosNowadays yes, they have similar price, amd dropped the price a couple of weeks ago. Hogwarts benchmark, latest patch from pcgh, the 7900xt is losing to a 3080 while consuming 30% more power than the 4070ti.
Here's a different game where the 4070 Ti gets annihilated even by Nvidia's own older-generation cards, let alone a 7900 XT, so what's your point? Though I am sure you're going to find some nonsensical explanation as to how this chart is less relevant than yours because, I don't know, fanboy reasons, I guess.

Posted on Reply
#45
fevgatos
Vya DomusHere's a different game where the 4070ti get annihilated even by Nvidia's own older generation cards let alone a 7900XT, so what's your point ? Though I am sure you're going to find some nonsensical explanation as to how this chart is less relevant than yours because I don't know, fanboy reasons, I guess.

The problem is, the 7900 XT should be beating the 4070 Ti across the board, since not only was it 15% more expensive, it also consumed a truckload more power. So that's normal. Losing to a 4070 Ti, on the other hand, isn't really normal, and it happens across the board in the majority of RT games.
Posted on Reply
#46
ratirt
fevgatosThe problem is, the 7900xt should be beating the 4070Ti across the board, since not only was it 15% more expensive, it also consumed a truckload more power. So that's normal. Losing to a 4070ti on the other hand isn't really normal, and it happens across the board in the majority of rt games
And it doesn't? You're calling on one game which is brand new, and it is hard to determine the overall performance of any given card since it jumps when you are in a different place in the game.
As far as I know, across the board, the 7900 XTX is way faster than a 4070 Ti.
About RT: heh, it is sad you bring that up, since not even the 4090 can pull RT off in certain games. Having 50 FPS or 20 FPS does not make a real difference. RT is nice to have, but relying solely on it is a fool's errand.
Just like pursuing some sort of AI scheme that is supposedly better for gaming. AI is not for gaming; it is making huge strides in other areas, but not gaming.
Posted on Reply
#47
fevgatos
ratirtAnd it doesn't? You call for one game which is brand new and it is hard to determine the overall performance for any given card since it jumps when you are in a different place in the game.
As far as I know, across the board, 7900xtx is way faster than a 4070Ti.
About RT. heh, it is sad you bring that up since not even 4090 can pull this RT off in certain games. having 50FPS or 20FPS does not make a difference really. Nice to have RT but rely solely on that is fools errand.
Just like pursuing some sort of AI scheme that it supposedly is better for gaming. AI is not for gaming but it does make huge strides in other areas but not gaming.
Well, it doesn't really matter whether you think it's worth it or not. Why is it okay for the 7900 XT not only to be 15% more expensive at launch, and not only consume 30% more power, but also get its ass handed to it in RT?
Posted on Reply
#48
TheoneandonlyMrK
fevgatosHow about tpups numbers? Let's ignore those as well right?
We could try not to turn every thread into an us vs. them brand debate, but you're not into that, eh?
Posted on Reply
#49
Vya Domus
fevgatosThe problem is, the 7900xt should be beating the 4070Ti across the board
It does, by roughly 10%; look up the reviews on TPU, and sometimes by a lot more at 4K, as shown in that example. So of course it's more expensive; it also has a lot more VRAM, and that stuff isn't given out for free.

"But muh RT," you will say. Sure, it's a bit faster in RT; that doesn't really matter, because in order to get playable framerates you'll need upscaling anyway.
fevgatosit also consumed a truckload more power.
fevgatosnot only consuming 30% more power
30 W isn't a "truckload," nor is it 30% more, you mathematical prodigy, though I am sure in your view even 1 W more would be a truckload, because you are obsessively trying to harp on AMD over any insignificant difference.



Now that I think about it, Nvidia somehow managed to make a chip that has 20 billion fewer transistors on a newer node pull almost as much power as Navi 31. Amazing stuff.
Posted on Reply
#50
fevgatos
Vya DomusIt does by roughly 10%, look up the reviews on TPU, sometimes by a lot more at 4K as shown in that example. So of course it's more expensive, it also has a lot more VRAM, that stuff isn't given out for free.

But muh RT you will say, sure it's a bit faster in RT, doesn't really matter because in order to get playable framerates you'll need upscaling anyway.



30W isn't a "truckload" nor is it 30% more you mathematical prodigy, though I am sure in your view even 1W more would be a truckload because you are obsessively trying to harp on AMD over any insignificant difference.



Now that I realize it, Nvidia somehow managed to make a chip that has 20 billion less transistors on a newer node pull almost as much power as Navi 31, amazing stuff.
The fact that you are misquoting numbers on power draw tells me all I need to know. Maximum power draw is completely useless. In games, as per TPU, the 7900 XT draws 50 more watts. In basic video playback it consumes 400% (lol) more power. 400 freaking percent. That number is insane.
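For what the arithmetic actually looks like (the wattages below are deliberately made-up placeholders, not TPU's measured figures; they only show how a difference in watts converts to a percentage):

```cpp
// Percent-difference arithmetic behind "X watts more" vs "Y% more".
// All wattages here are hypothetical placeholders for illustration.
#include <cstdio>

double percent_more(double a, double b) { return (a - b) / b * 100.0; }

int main() {
    double card_a_gaming = 335.0;  // hypothetical gaming draw, card A
    double card_b_gaming = 285.0;  // hypothetical gaming draw, card B
    std::printf("gaming: +%.0f W = +%.1f%%\n",
                card_a_gaming - card_b_gaming,
                percent_more(card_a_gaming, card_b_gaming));

    double card_a_video = 100.0;   // hypothetical video-playback draw, card A
    double card_b_video = 20.0;    // hypothetical video-playback draw, card B
    std::printf("video playback: +%.0f W = +%.0f%%\n",
                card_a_video - card_b_video,
                percent_more(card_a_video, card_b_video));
    return 0;
}
```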

Posted on Reply