Monday, February 20th 2023

AMD RDNA4 Architecture to Build on Features Relevant to Gaming Performance, Doesn't Want to be Baited into an AI Feature Competition with NVIDIA

AMD's next-generation RDNA4 graphics architecture will retain a design focus on gaming performance, without being drawn into an AI feature-set competition with rival NVIDIA. David Wang, SVP of the Radeon Technologies Group, and Rick Bergman, EVP of the Computing and Graphics Business at AMD, gave an interview to Japanese tech publication 4Gamer, in which they dropped the first hints on the direction the company's next-generation graphics architecture will take.

While acknowledging NVIDIA's movement in the GPU-accelerated AI space, AMD said that it doesn't believe image processing and performance upscaling are the best use of the GPU's AI-compute resources, and that the client segment still hasn't found extensive use for GPU-accelerated AI (or, for that matter, even CPU-based AI acceleration). AMD's own image-processing tech, FSR, doesn't leverage AI acceleration. Wang said that, with the company introducing AI acceleration hardware in its RDNA3 architecture, he hopes AI is leveraged to improve gameplay, such as procedural world generation, NPCs, and bot AI, adding the next level of complexity, rather than spending the hardware resources on image processing.
AMD also stressed the need to make the GPU more independent of the CPU in graphics rendering. The company has taken several steps in this direction over the past several generations, the most recent being the multi-draw indirect accelerator (MDIA) component introduced with RDNA3. Using this, software can batch multiple instanced draw commands into a single call that the GPU consumes directly, greatly reducing CPU-level overhead. RDNA3 is up to 2.3x more efficient at this than RDNA2. Expect more innovations along these lines with RDNA4.
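To make the mechanism concrete, here is a minimal sketch of multi-draw indirect using plain OpenGL; the struct layout and call below are standard GL 4.3, not AMD driver code, so treat it as a generic illustration of the idea the MDIA block accelerates. One call consumes an array of GPU-resident draw commands, so the per-draw CPU submission cost never occurs:

#include <GL/glew.h>  // any loader exposing OpenGL 4.3+ entry points works

// Command layout mandated by OpenGL for indirect indexed draws.
struct DrawElementsIndirectCommand {
    GLuint count;          // indices per draw
    GLuint instanceCount;  // instances per draw
    GLuint firstIndex;     // offset into the bound index buffer
    GLint  baseVertex;     // value added to each index
    GLuint baseInstance;   // first per-instance attribute slot
};

// Submit `drawCount` instanced draws with a single API call. The command
// structs live in a GPU buffer, so the CPU never submits them one by one --
// exactly the per-draw overhead that indirect-draw hardware removes.
void submitScene(GLuint indirectBuffer, GLsizei drawCount)
{
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
    glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                nullptr,     // commands start at offset 0
                                drawCount,
                                0);          // commands tightly packed
}

A compute shader can even write the command buffer itself, so the GPU decides what to draw with no CPU round-trip at all.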

AMD understandably didn't say anything about the "when," "what," and "how" of RDNA4, as its latest RDNA3 architecture is just off the ground and awaiting a product ramp through 2023 into various market segments, spanning iGPUs, mobile GPUs, and mainstream desktop GPUs. RDNA3 currently powers the Radeon RX 7900 series high-end graphics cards and the iGPUs of the company's latest 5 nm "Phoenix" Ryzen 7000-series mobile processors. You can catch the 4Gamer interview at the source link below.
Sources: 4Gamer.net, HotHardware

221 Comments on AMD RDNA4 Architecture to Build on Features Relevant to Gaming Performance, Doesn't Want to be Baited into an AI Feature Competition with NVIDIA

#1
RH92
This is why AMD sits at an all-time low of 10% dGPU market share: they fail to read the room!!!
#2
dir_d
I wouldn't chase Nvidia down the AI hole either; they are literally years ahead of AMD. I think the way AMD is going is the correct one; they just need to master the MCM GPU.
#3
JAB Creations
I'm sure glad the first comments are always by people who comprehend business and economics; I'm sure they're running multi-billion-dollar businesses that are responsible for employing thousands of people. :rolleyes:

I think it's a good strategy. Nvidia keeps making the wrong hard bets:
  • Lowering all cards below the 4090 by two tiers and pretending that crypto-hashing is still a thing in order to keep jacking up their prices.
  • Maxing out L2 cache from 6MB to 72MB to make their cards memory-dependent instead of more efficient.
  • Not going after modularization as aggressively as AMD, so they force higher prices for relatively close to the same performance.
  • Dedicating silicon to very specific kinds of AI work that doesn't inherently go towards gaming performance.
  • Making all of their implementations work only on their own cards.
  • But hey, even if the 4030 is mislabeled as a "4060," it's nice to see a 30-class card come out with 8GB of video RAM!
  • Making their customers think that their $400 card, which is going to be 30% slower than AMD's equivalent, is better because the $2,000+ card they can't afford is faster than AMD's fastest.
Yeah, a lot of people fall for the last one, but there are lots of people who monkey-see-monkey-do and make zero effort into doing research.

AMD on the other hand makes reasonable products that perform well, though they just absolutely suck at marketing. I've never once seen their marketing department make a point other than the cost/value ratio of their products, which covers none of the reasons I buy anything technology-related. That being said:
  • Their technologies work on both AMD and Nvidia cards (probably Intel's too), so game developers have a much better incentive to implement something that will work on more cards rather than fewer.
  • AI isn't supposed to be complex; it's supposed to be generic and numerous, just like the calculations themselves.
#4
ratirt
OneMoaryou may not like it but raster rendering has hit a wall in terms of performance per watt
it's not the way forward
That is so weird to say. It's like blaming the oil company for car pollution. It is not rasterization but the graphics card that determines performance per watt.
Raster rendering is alive and well, and it is evolving. I'm surprised you even mentioned it here considering all the new stuff that is constantly showing up.
OneMoarand ignoring AI
What about AI? It is great in the industry, but not for gaming. At this point it is just a marketing scheme, kinda like RT is now. It is there, but what's the point if performance takes such a big hit while giving literally not much in return?
#5
phanbuey
JAB CreationsI'm sure glad the first comments are always by people who comprehend business and economics; I'm sure they're running multi-billion-dollar businesses that are responsible for employing thousands of people. :rolleyes:

I think it's a good strategy. Nvidia keeps making the wrong hard bets...

AMD on the other hand makes reasonable products that perform well, though they just absolutely suck at marketing...
AMD's current features are open source because it's playing catch-up: G-Sync, DLSS, frame generation, etc. were all strong first movers, and in order to make the AMD versions appealing they HAVE to be open source, since they're already technically weaker AND late to market. And this being a duopoly, there's no risk of Nvidia using FreeSync, FSR, or FSR frame gen to gain an advantage, so there's no logical reason to make them proprietary. Making them proprietary would only guarantee their early death, so the only hope is a weaker open-source version.

When AMD was leading with its own features (Eyefinity, Radeon Image Sharpening, SenseMI, etc.), those weren't open source. And if they come out with something new and cool, you'd better believe it won't be open source.

AMD's pricing is also in the same boat -- usually AMD's products see the largest price drops, since they really like to gouge all of the early adopters and then drop prices massively. You can see this with every Zen and Threadripper generation. You can also see it with the 7900 XTX and 7900 XT pricing: they had the opportunity to really undercut Nvidia, but chose to play the market.

Nvidia is hard gouging its customers -- and AMD is right there at those price levels -- just ever so slightly under, due to inferior features, and with more RAM. So I agree that Nvidia is making very anti-consumer bets, but I disagree that AMD is "on the other hand" so to speak -- same hand, slaps just as hard.
#6
djuice
OneMoaryou may not like it but raster rendering has hit a wall in terms of performance per watt
it's not the way forward
This article is talking about AI acceleration hardware bonded to the GPU versus segmented onto its own card.

You can make AI-accelerated hardware that is bound to the GPU, but separating those two products into their own segments would benefit all of us.

Not every gamer does AI shit; we just want maximum performance per watt for gaming. Hardly any AI programmer is using their GPU to play games, and hardly any gamer is using theirs to accelerate compute.

I think AMD is heading in the right direction: separate the AI and GPU segments, and if they want to go after AI, they can focus on a separate product.
#7
OneMoar
There is Always Moar
gamers don't matter
ok
I can't really make that any clearer
gamers are an insignificant part of the market
#8
TheinsanegamerN
Good, the last thing we need is a race to make the most fake frames and other latency BS. DLSS 3 looks like trash and is no substitute for genuine GPU improvements.

That being said, if you're gonna focus on performance, you mind telling me WTF happened with RDNA3? Its performance improvements are perfectly in line with the Compute Unit increases, not counting the doubling of shader count that seemed to do F-all. If you want to be a "premium brand" and you are not going to do the AI thing, you need to hit it out of the park on raster performance, not play second fiddle.
OneMoargamers don't matter
ok
I can't really make that any clearer
gamers are an insignificant part of the market
GeForce and Quadro/A-series are comparable for Nvidia in terms of yearly sales.
ratirtThat is so weird to say. It's like blaming the oil company for car pollution. It is not rasterization but the graphics card that determines performance per watt.
Raster rendering is alive and well, and it is evolving. I'm surprised you even mentioned it here considering all the new stuff that is constantly showing up.

What about AI? It is great in the industry, but not for gaming. At this point it is just a marketing scheme, kinda like RT is now. It is there, but what's the point if performance takes such a big hit while giving literally not much in return?
Much like RT, AI at the consumer level will likely gain traction within a few years. The idea of being able to run something like Stable Diffusion on your home PC is exciting to many, and it's something where AMD will be playing second fiddle, much like with RT. From the sound of his statement, AMD is specifically looking at the likes of AI-accelerated DLSS 3, which is not as warmly received as 1 and 2, and rejecting that.
#9
phanbuey
djuiceThis article is talking about AI acceleration hardware bonded to the GPU versus segmented onto its own card.

You can make AI-accelerated hardware that is bound to the GPU, but separating those two products into their own segments would benefit all of us.

Not every gamer does AI shit; we just want maximum performance per watt for gaming. Hardly any AI programmer is using their GPU to play games, and hardly any gamer is using theirs to accelerate compute.

I think AMD is heading in the right direction: separate the AI and GPU segments, and if they want to go after AI, they can focus on a separate product.
It's a good point -- they're making a bet that seems silly, but I also thought Ford was stupid for not making sedans, and that ended up being one of their better decisions.

This bet is: AI accelerators won't be relevant to gamers. The reason this initially seems wrong is the supposed synergy between compute dedicated to AI and gaming: there are claims (by Nvidia, and by Intel with its XeSS and compute-tile push) that DLSS-style tech and RT rely on AI -- it will be interesting to see if that's actually true. If it isn't, and Radeon is right, then they will simplify the design and have a cheaper/better gaming product at much lower cost and complexity, which is always an easy and massive win.
TheinsanegamerNFrom the sound of his statement, AMD is specifically looking at the likes of AI-accelerated DLSS 3, which is not as warmly received as 1 and 2, and rejecting that.
I honestly think their bet is that Nvidia will have to continue gouging customers, that their chips will continue to be huge and expensive with AI processing included, and that, because DC demand will be strong for the same chips that power the GPUs, they will have a product segmentation issue.

They can't really compete with DLSS 3 so why even try -- make a way smaller and cheaper chip that rasters similarly to big daddy NV, does Lumen and open RTX methods just fine and has the benefit of not competing with the MI300... let's see.
#10
KrazyT
Rendezvous in 2 years; we'll see who's right or not :D :D
AMD can secretly work on AI apps after all...
#11
mama
This thread is looking like troll country.
#12
R-T-B
OneMoargamers don't matter
ok
I can't really make that any clearer
gamers are an insignificant part of the market
People forget this.

Gamers aren't the core drivers of the market anymore.

Anyone tying their boat to gamers exclusively will be relegated to mediocrity.

I like games too, and yes, I'm a gamer, but these are just the facts of today.
#13
clopezi
JAB CreationsI'm sure glad the first comments are always by people who comprehend business and economics; I'm sure they're running multi-billion-dollar businesses that are responsible for employing thousands of people. :rolleyes:
Probably not, but we all see AMD Radeon's numbers and Nvidia's numbers, and they're easy for everyone to understand.

Nvidia needs competition; unfortunately, AMD Radeon today isn't it. AMD cards are not cheap enough to justify the lack of RT performance, for example.
#14
ratirt
TheinsanegamerNMuch like RT, AI at the consumer level will likely gain traction within a few years. The idea of being able to run something like Stable Diffusion on your home PC is exciting to many, and it's something where AMD will be playing second fiddle, much like with RT. From the sound of his statement, AMD is specifically looking at the likes of AI-accelerated DLSS 3, which is not as warmly received as 1 and 2, and rejecting that.
In a few or several years, when it is mainstream, then sure, but for now, today, it isn't. No wonder there's a push for a DLSS 3-like feature, since RT can't really run at a level you'd appreciate without one. Does it change my mind about AI? Not really. The other problem here is DLSS 3 itself. Not that I don't like it, but it is for certain users only. I don't want to pay a huge money toll for something that is OK today, when tomorrow we will get something new with new cards. As I see it, it is not a standard and it is not broadly available across a variety of cards.
Will AMD have something different for everyone, like FSR? Maybe. That makes for a better perspective, but I'm still not convinced. We will have to wait longer for these features to matter more; for now, gaming is doing pretty well without AI.
#15
evernessince
It's amazing how few people in this thread actually read what is already a short article.

"Wang said that with the company introducing AI acceleration hardware with its RDNA3 architecture, he hopes that AI is leveraged in improving gameplay—such as procedural world generation, NPCs, bot AI, etc; to add the next level of complexity; rather than spending the hardware resources on image-processing."

So no, AMD is not giving up on AI. They are *gasp* going to focus their consumer gaming cards' AI capabilities on *gasp* AI that improves games. This is not a change in trajectory for AMD; they already split their architecture into two separate branches, RDNA and CDNA. This is more a commentary on how Nvidia has cards with AI capabilities that don't really benefit gamers.
R-T-BPeople forget this.

Gamers aren't the core drivers of the market anymore.

Anyone tying their boat to gamers exclusively will be relegated to mediocrity.

I like games too, and yes, I'm a gamer, but these are just the facts of today.
No, gaming is still Nvidia's top-earning segment. To go as far as the poster you are quoting and say that they are irrelevant is laughably incorrect. 50% of your revenue is not remotely insignificant.
#16
las
RH92This is why AMD sits at an all-time low of 10% dGPU market share: they fail to read the room!!!
I use Nvidia RTX and I don't give a f about ray tracing: it delivers pretty much nothing you will see when you actually play (instead of standing still), yet decreases performance by a ton. DLSS and DLDSR are the best features of RTX for me, and let's be honest, they could probably have been done without dedicated hardware...

RT is mostly good for screenshots, because without the absolute most expensive card people won't be using it anyway, unless they think 30 fps is great. Hell, in some games it even feels like there's additional processing lag when RT is enabled, even when fps is "decent". I think it's a gimmick, and I hope AMD will be very competitive in raster performance going forward. A LOT of people don't care about RT, and 1440p is still the sweet spot and will be for a long time; this is where AMD shines as well. The 7900 XTX already bites the 4090 in the butt in some games at 1440p raster-only. This is why I'm considering going AMD next time.

And FYI, AMD has 15-16% dGPU market share, and that is on Steam; it's probably more overall, plus 100% of the console market.
#17
fevgatos
JAB CreationsAMD on the other hand makes reasonable products that perform well, though they just absolutely suck at marketing. I've never once seen their marketing department make a point other than the cost/value ratio of their products, which covers none of the reasons I buy anything technology-related. That being said:
Of course, the 7900 XT was very reasonable, getting absolutely destroyed by the 4070 Ti. And their CPUs? Oh, those are insanely reasonable; they priced 6 cores at €300 MSRP when their competition asks €350 for 14 cores that smack it around the room.
#18
OneMoar
There is Always Moar
let me put this to you

What if I were to combine AI art generation with ChatGPT's natural-language interface and something like Unreal Engine 5? (We really are not far away from this at all; all the pieces exist, it just takes somebody to bring it all together.)

What if you could generate entire environments just by telling an AI to "show me the bridge of the Enterprise"?
If you can't see the potential and the way the winds are shifting, may our soon-to-exist AI god have mercy on your fleshy soul.
#19
tabascosauz
JAB Creations
  • Maxing out L2 cache from 6MB to 72MB to make their cards memory-dependent instead of more efficient.
Let me guess, when AMD introduced a 128MB stack of L3 ("Infinity Cache") to cushion the reduction in bus width and bandwidth, you hailed it as a technological breakthrough.
When Nvidia does the exact same thing with 48MB/64MB/72MB L2, you consider it "making the wrong bet". Okay.
JAB Creations
  • Not going after modularization as aggressively as AMD, so they force higher prices for relatively close to the same performance.
In case you haven't noticed, a 530mm^2 aggregation of chiplets and an expensive new interconnect didn't exactly pass along the savings to gamers any more than Nvidia's 295mm^2 monolithic product did.
#20
Kohl Baas
clopeziProbably not, but we all see AMD Radeon's numbers and Nvidia's numbers, and they're easy for everyone to understand.

Nvidia needs competition; unfortunately, AMD Radeon today isn't it. AMD cards are not cheap enough to justify the lack of RT performance, for example.
Why? Are Nvidia cards cheap enough to justify the lack of RT performance?
#21
Crackong
phanbueyI honestly think their bet is that Nvidia will have to continue gouging customers, that their chips will continue to be huge and expensive with AI processing included, and that, because DC demand will be strong for the same chips that power the GPUs, they will have a product segmentation issue.

They can't really compete with DLSS 3 so why even try -- make a way smaller and cheaper chip that rasters similarly to big daddy NV, does Lumen and open RTX methods just fine and has the benefit of not competing with the MI300... let's see.
Agreed

As things become more and more demanding,
Nvidia will have to allocate more and more die space to AI-dedicated processing units.
Soon it will reach a critical point where there is too much 'dead weight' and it is no longer cost-effective to do it,

and Nvidia themselves will have to find a way to integrate AI processing back into the "normal" stream processors.
So the cycle begins again.
#22
Dirt Chip
And with that, you can forget about viable competition, at least until Arc becomes an option.
Good luck, AMD.
#23
OneMoar
There is Always Moar
the Nvidia vs. AMD argument is irrelevant
the argument is AMD throwing in the towel when they can't hack it
a company basically admitting "well, we aren't as good as our competitors, so we aren't going to even try"
yeah, that's gonna go over really well
#24
Patriot
OneMoargamers don't matter
ok
I can't really make that any clearer
gamers are an insignificant part of the market
This article is clearly not for you then; it is about a gaming-focused GPU, not the Instinct line, but those aren't for you either, because you are obviously just a fanboi and not in the industry.
Also... RDNA3 already has tensor cores... AMD just calls them WMMA Matrix cores... They will continue to add feature sets to them... What Wang said was...

He thinks that FSR and DLSS are a waste of the matrix-math engines these GPUs have... when they could be used for smarter NPCs and game-enhancing features, not just as a means to fix poor game optimization. But evidently reading is hard.

Since you are unawares...
videocardz.com/newz/amd-adds-wmma-wave-matrix-multiply-accumulate-support-to-gfx11-rdna3-architecture-amds-tensor-core
AMD added matrix cores to CDNA 1.0 (MI100) and enhanced them for CDNA 2 (MI210/MI250X). RDNA3 got them too, and it's unclear if they are enhanced past CDNA 2, as CDNA 3 is already in testing.
AMD has also added a Xilinx FPGA core to the Zen 4 laptop line for AI inferencing... and it is fairly clear they will continue to add to and support an accelerated future. This article was not about a lack of AI support, but about using AI for... enhancement, not as a replacement for proper game design and optimization.
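For the curious, this is roughly what touching those WMMA cores looks like from HIP today. A minimal sketch, assuming a gfx11 (RDNA3) compile target and the __builtin_amdgcn_wmma_f32_16x16x16_f16_w32 builtin exposed by LLVM; the lane-to-element mapping is simplified from AMD's public examples, so treat the load/store layout as illustrative rather than authoritative:

#include <hip/hip_runtime.h>

// Vector types consumed by the gfx11 (RDNA3) WMMA builtins.
typedef _Float16 half16 __attribute__((ext_vector_type(16)));
typedef float    float8 __attribute__((ext_vector_type(8)));

// One wave32 (launch with blockDim.x == 32) multiplies a 16x16 FP16 tile
// pair and accumulates into FP32 with a single hardware matrix instruction.
// a and b are 16x16 row-major; both halves of the wave load the same
// input fragments.
__global__ void wmma_16x16x16(const _Float16* a, const _Float16* b, float* c)
{
    const int lane = threadIdx.x % 32;

    half16 a_frag, b_frag;
    float8 acc = {};  // zero-initialized FP32 accumulator fragment

    for (int i = 0; i < 16; ++i) {
        a_frag[i] = a[16 * (lane % 16) + i];  // row of A per lane
        b_frag[i] = b[16 * i + (lane % 16)];  // column of B per lane
    }

    // acc += A * B (16x16x16, FP16 inputs, FP32 accumulate).
    acc = __builtin_amdgcn_wmma_f32_16x16x16_f16_w32(a_frag, b_frag, acc);

    // Each lane owns 8 of the 256 results, interleaved across row pairs
    // (simplified mapping -- see the RDNA3 ISA guide for the exact layout).
    for (int i = 0; i < 8; ++i) {
        const int row = 2 * i + lane / 16;
        c[16 * row + (lane % 16)] = acc[i];
    }
}

One instruction per wave for a full 16x16x16 multiply-accumulate is exactly the class of hardware Nvidia brands as "tensor cores"; the open question in this thread is what games spend it on.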
#25
beedoo
fevgatosOf course, the 7900 XT was very reasonable, getting absolutely destroyed by the 4070 Ti...
Can't see the problem. Here in Australia, the 7900 XT seems to be cheaper than the 4070 Ti, and as far as TPU's own reviews of the two cards are concerned, I can't see the 7900 XT being destroyed by the 4070 Ti anywhere - in fact, the average gaming framerate chart at the end shows the 7900 XT above the overclocked 4070 Ti at every resolution (raster)...

...unless you meant RT, or video compression, or DLSS, or something else - but you didn't say any of that.