
AMD RDNA4 Architecture to Build on Features Relevant to Gaming Performance, Doesn't Want to be Baited into an AI Feature Competition with NVIDIA

btarunr

Editor & Senior Moderator
AMD's next-generation RDNA4 graphics architecture will retain a design focus on gaming performance, without being drawn into an AI feature-set competition with rival NVIDIA. David Wang, SVP of the Radeon Technologies Group, and Rick Bergman, EVP of Computing and Graphics Business at AMD, gave an interview to Japanese tech publication 4Gamer, in which they dropped the first hints on the direction the company's next-generation graphics architecture will take.

While acknowledging NVIDIA's movement in the GPU-accelerated AI space, AMD said that it doesn't believe image processing and performance upscaling are the best use of the GPU's AI-compute resources, and that the client segment still hasn't found extensive use for GPU-accelerated AI (or, for that matter, even CPU-based AI acceleration). AMD's own image-processing tech, FSR, doesn't leverage AI acceleration. Wang said that, with the company introducing AI acceleration hardware with its RDNA3 architecture, he hopes AI is leveraged to improve gameplay, such as procedural world generation, NPCs, and bot AI, to add the next level of complexity, rather than spending the hardware resources on image processing.



AMD also stressed the need to make the GPU more independent of the CPU in graphics rendering. The company has taken several steps in this direction over the past few generations, the most recent being the multi-draw indirect accelerator (MDIA) component introduced with RDNA3. Using it, software can batch multiple instanced draw commands into a single dispatch that is issued on the GPU, greatly reducing CPU-level overhead. RDNA3 is up to 2.3x more efficient at this than RDNA2. Expect more innovations along these lines with RDNA4.
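For context, multi-draw indirect is an existing graphics-API pattern in which per-draw parameters live in a GPU buffer and a single CPU call submits the whole batch; that is the kind of workload MDIA is built to accelerate. Below is a minimal, hypothetical OpenGL sketch of the general pattern, not AMD driver code: the submitScene helper and buffer contents are illustrative, and shader, vertex-array, and index-buffer setup are assumed to happen elsewhere.

```cpp
// Hypothetical illustration of the generic multi-draw-indirect pattern.
#include <GL/glew.h>
#include <vector>

// Per-draw parameters, laid out as OpenGL expects for indexed indirect draws.
struct DrawElementsIndirectCommand {
    GLuint count;          // number of indices in this draw
    GLuint instanceCount;  // number of instances
    GLuint firstIndex;     // starting offset in the index buffer
    GLuint baseVertex;     // value added to each index
    GLuint baseInstance;   // starting offset for per-instance attributes
};

void submitScene(const std::vector<DrawElementsIndirectCommand>& cmds)
{
    // Upload the whole list of draw parameters to a GPU-side buffer once.
    GLuint indirectBuffer = 0;
    glGenBuffers(1, &indirectBuffer);
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
    glBufferData(GL_DRAW_INDIRECT_BUFFER,
                 cmds.size() * sizeof(DrawElementsIndirectCommand),
                 cmds.data(), GL_STATIC_DRAW);

    // One CPU call issues every draw; the GPU reads the parameters itself,
    // which is where the CPU-overhead reduction comes from.
    glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                nullptr,
                                static_cast<GLsizei>(cmds.size()),
                                sizeof(DrawElementsIndirectCommand));
}
```

The broader appeal of this style of GPU-driven rendering is that work such as culling or LOD selection can also move onto the GPU, so the CPU never has to touch individual draws at all.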

AMD understandably didn't say anything about the "when," "what," and "how" of RDNA4, as its latest RDNA3 architecture is just off the ground and awaiting a product ramp through 2023 into various market segments, spanning iGPUs, mobile GPUs, and mainstream desktop GPUs. RDNA3 currently powers the Radeon RX 7900 series high-end graphics cards, as well as the iGPUs of the company's latest 5 nm "Phoenix Point" Ryzen 7000-series mobile processors. You can catch the 4Gamer interview at the source link below.

View at TechPowerUp Main Site | Source
 
This is why AMD sits at an all-time low of 10% dGPU market share; they fail to read the room!
 
I wouldn't chase Nvidia down the AI hole either; they are literally years ahead of AMD. I think the way AMD is going is the correct one; they just need to master the MCM GPU.
 
In short: we lack the talent to compete, so we aren't even going to try.
Why does this sound so familiar?

Oh wait, this is exactly the same bullshit they said with Bulldozer.

I swear idiots run this company.

Furthermore, AI is becoming the hottest thing since the sun, and AMD is like "mmm, no thanks, we are going to keep doing what we are doing,"
which is being a mediocre second fiddle to everybody else,
because that has worked so well before.
Arrrrg, this level of stupid, short-sighted quitter talk drives me batty.
 
I'm sure glad the first comments are always by people who comprehend business and economics; I'm sure they're running multi-billion businesses that are responsible for employing thousands of people. :rolleyes:

I think it's a good strategy. Nvidia keeps making the wrong hard bets:
  • Lower all cards below the 4090 by two tiers and pretend that crypto-hashing is still a thing, to continue to jack up their prices.
  • Max out the L2 cache from 6 MB to 72 MB to make their cards memory-dependent instead of more efficient.
  • Not bother to go after modularization as aggressively as AMD, so they force higher prices for relatively close to the same performance.
  • Dedicate silicon to very specific kinds of AI stuff that doesn't inherently go towards gaming performance.
  • Make all of their implementations work only on their cards.
  • But hey, even if the 4030 is mislabeled as a "4060," it's nice to see a 30-series card come out with 8 GB of video RAM!
  • Make their customers think that their $400 card, which is going to be 30% slower than AMD's equivalent, is better because the $2,000+ card they can't afford is faster than AMD's fastest.
Yeah, a lot of people fall for the last one, but there are lots of people who monkey-see, monkey-do and put zero effort into doing research.

AMD, on the other hand, makes reasonable products that perform well, though they just absolutely suck at marketing. I've never once seen their marketing department make a point about their products, other than cost/value ratio, that covers any of the reasons I buy anything technology related. That being said:
  • Their technologies work on both AMD and Nvidia cards (probably Intel's too), so game developers have a much better incentive to implement something that will work on more cards rather than fewer.
  • AI isn't supposed to be complex; it's supposed to be generic and numerous, just like the calculations themselves.
 
You may not like it, but raster rendering has hit a wall in terms of performance per watt.
It's not the way forward.
That is so weird to say. It's like blaming the oil company for car pollution; it is not rasterization but the graphics cards that determine the performance per watt.
Raster rendering is doing well and it is evolving. I'm surprised you even mentioned it here, considering all the new stuff that is constantly showing up.
and ignoring AI
What about AI? It is great in the industry but not for gaming. At this point it is just a marketing scheme, kinda like RT is now. It is there, but what's the point if the performance hit is so big while it gives literally not much in return?
 
I'm sure glad the first comments are always by people who comprehend business and economics; I'm sure they're running multi-billion businesses that are responsible for employing thousands of people. :rolleyes:

I think it's a good strategy. Nvidia keeps making the wrong hard bets:
  • Lower all cards below the 4090 by two tiers and pretend that crypto-hashing is still a thing, to continue to jack up their prices.
  • Max out the L2 cache from 6 MB to 72 MB to make their cards memory-dependent instead of more efficient.
  • Not bother to go after modularization as aggressively as AMD, so they force higher prices for relatively close to the same performance.
  • Dedicate silicon to very specific kinds of AI stuff that doesn't inherently go towards gaming performance.
  • Make all of their implementations work only on their cards.
  • But hey, even if the 4030 is mislabeled as a "4060," it's nice to see a 30-series card come out with 8 GB of video RAM!
  • Make their customers think that their $400 card, which is going to be 30% slower than AMD's equivalent, is better because the $2,000+ card they can't afford is faster than AMD's fastest.
Yeah, a lot of people fall for the last one, but there are lots of people who monkey-see, monkey-do and put zero effort into doing research.

AMD, on the other hand, makes reasonable products that perform well, though they just absolutely suck at marketing. I've never once seen their marketing department make a point about their products, other than cost/value ratio, that covers any of the reasons I buy anything technology related. That being said:
  • Their technologies work on both AMD and Nvidia cards (probably Intel's too), so game developers have a much better incentive to implement something that will work on more cards rather than fewer.
  • AI isn't supposed to be complex; it's supposed to be generic and numerous, just like the calculations themselves.

AMD's current features are open source because it's playing catch-up. G-Sync, DLSS, Frame Generation, etc. were all strong first movers, and in order to make the AMD versions appealing, they HAVE to be open source, since they're already technically weaker AND late to market. In a duopoly there's no risk of Nvidia adopting FreeSync, FSR, or FSR frame gen to gain an advantage, so there's no logical reason to make them proprietary. Making them proprietary would only guarantee their early death, so the only hope is a weaker open-source version.

When AMD was leading with its own features (Eyefinity, Radeon Image Sharpening, SenseMI, etc.), those weren't open source. And if they come out with something new and cool, you'd better believe it won't be open source.

AMD's pricing is in the same boat: AMD's products usually see the largest price drops, since they really like to gouge all of the early adopters and then drop prices massively. You can see this with every Zen and TR generation. You can also see it with the 7900 XTX and 7900 XT pricing; they had the opportunity to really undercut Nvidia, but chose to play the market.

Nvidia is hard-gouging its customers, and AMD is right there at those price levels, just ever so slightly under due to inferior features, and with more RAM. So I agree that Nvidia is making very anti-consumer bets, but I disagree that AMD is "on the other hand," so to speak. Same hand, slaps just as hard.
 
LOL, AMD is the most boring tech company ever, one word to describe AMD: "meh"
 
You may not like it, but raster rendering has hit a wall in terms of performance per watt.
It's not the way forward.

This article is talking about AI acceleration hardware bonded to the GPU, as opposed to a separately segmented card.

You can make AI acceleration hardware that isn't bound to the GPU, and separating those two products into their own segments would benefit all of us.

Not every gamer does AI shit; we just want maximum performance per watt for gaming. And AI programmers aren't using their GPUs to play games, or using gaming GPUs to accelerate computing.

I think AMD is heading in the right direction: separate the AI and GPU segments, and if they want to go AI, they can focus on a separate product.
 
gamers don't matter
ok
I can't really make that any clearer
gamers are an insignificant part of the market
 
Good, the last thing we need is a race to make the most fake frames and other latent BS. DLSS 3 looks like trash and is no substitute for genuine GPU improvements.

That being said, if you're gonna focus on performance, you mind telling me WTF happened with RDNA3? Its performance improvements are perfectly in line with the SM increases, not counting the doubling of shader count that seemed to do F-all. If you want to be a "premium brand" and you are not going to do the AI thing, you need to hit it out of the park on raster performance, not play second fiddle.

gamers don't matter
ok
I can't really make that any clearer
gamers are an insignificant part of the market
GeForce and the Quadro/A series are comparable in terms of yearly sales for Nvidia.

That is so weird to say. It's like blaming the oil company for car pollution; it is not rasterization but the graphics cards that determine the performance per watt.
Raster rendering is doing well and it is evolving. I'm surprised you even mentioned it here, considering all the new stuff that is constantly showing up.

What about AI? It is great in the industry but not for gaming. At this point it is just a marketing scheme, kinda like RT is now. It is there, but what's the point if the performance hit is so big while it gives literally not much in return?
Much like RT, AI at the consumer level will likely gain traction within a few years. The idea of being able to run something like Stable Diffusion on your home PC is exciting to many, and it's something at which AMD will be playing second fiddle, much like RT. From the sound of his statement, AMD is specifically looking at the likes of the AI-accelerated DLSS 3, which is not as warmly received as 1 and 2, and rejecting that.
 
This article is talking about AI acceleration hardware bonded to the GPU, as opposed to a separately segmented card.

You can make AI acceleration hardware that isn't bound to the GPU, and separating those two products into their own segments would benefit all of us.

Not every gamer does AI shit; we just want maximum performance per watt for gaming. And AI programmers aren't using their GPUs to play games, or using gaming GPUs to accelerate computing.

I think AMD is heading in the right direction: separate the AI and GPU segments, and if they want to go AI, they can focus on a separate product.

It's a good point. They're making a bet that seems silly, but I also thought Ford was stupid for not making sedans, and that ended up being one of their better decisions.

This bet is: AI accelerators won't be relevant to gamers. The reason this initially seems wrong is that there's synergy between compute dedicated to AI and graphics, and there are claims (from Nvidia, and from Intel with their XeSS and compute-tile push) that DLSS-style tech and RT rely on AI. It will be interesting to see whether that's actually true, though. If it isn't, and Radeon is right, then they will simplify the design and have a cheaper, better gaming product at a much lower cost and complexity, which is always an easy and massive win.


From the sound of his statement, AMD is specifically looking at the likes of the AI-accelerated DLSS 3, which is not as warmly received as 1 and 2, and rejecting that.

I honestly think their bet is that Nvidia will have to continue gouging customers, that their chips will continue to be huge and expensive with AI processing included, and that because DC demand will be strong for the same chips that power the GPUs, they will have a product segmentation issue.

They can't really compete with DLSS 3, so why even try? Make a way smaller and cheaper chip that rasters similarly to big daddy NV, does Lumen and open RT methods just fine, and has the benefit of not competing with the MI300. Let's see.
 
Rendezvous in two years; we'll see who's right or not :D :D
AMD can secretly work on AI apps after all...
 
This thread is looking like troll country.
 
gamers don't matter
ok
I can't really make that any clearer
gamers are an insignificant part of the market
People forget this.

Gamers aren't the core drivers of the market anymore.

Anyone tying their boat to gamers exclusively will be relegated to mediocrity.

I like games too, and yes, I'm a gamer, but these are just the facts of today.
 
I'm sure glad the first comments are always by people who comprehend business and economics; I'm sure they're running multi-billion businesses that are responsible for employing thousands of people. :rolleyes:
Probably not, but we all see the AMD Radeon numbers and the Nvidia numbers, and they're easy for everyone to understand.

Nvidia needs competition, and unfortunately, AMD Radeon today isn't it. AMD cards are not cheap enough to justify the lack of RT performance, for example.
 
Much like RT, AI at the consumer level will likely gain traction within a few years. The idea of being able to run something like Stable Diffusion on your home PC is exciting to many, and it's something at which AMD will be playing second fiddle, much like RT. From the sound of his statement, AMD is specifically looking at the likes of the AI-accelerated DLSS 3, which is not as warmly received as 1 and 2, and rejecting that.
In a few or several years, when it is mainstream, then sure, but for now, today, it isn't. No wonder it looks at DLSS3-like features, since nothing (even when RT is otherwise OK) works without them in a manner you would appreciate. Does that change my mind about AI? Not really. The other problem here is DLSS3. Not that I don't like it, but it is for certain users only. I don't want to pay a huge money toll for something that is OK today when tomorrow we will get something new with new cards. As I see it, it is not a standard and it is not broadly available for a variety of cards.
AMD will have something different for everyone, like FSR? Maybe. That makes for a better prospect, but I'm still not convinced. We will have to wait longer for those to matter more; for now, gaming is doing pretty well without AI.
 
It's amazing how few people in this thread actually read what is already a short article:

"Wang said that, with the company introducing AI acceleration hardware with its RDNA3 architecture, he hopes AI is leveraged to improve gameplay, such as procedural world generation, NPCs, and bot AI, to add the next level of complexity, rather than spending the hardware resources on image processing."

So no, AMD is not giving up on AI. They are *gasp* going to focus their consumer gaming cards' AI capabilities on *gasp* AI that improves games. This is not a change in trajectory for AMD; they had already split their architecture into two separate branches, RDNA and CDNA. This is more a commentary on how Nvidia has cards with AI capabilities that don't really benefit gamers.

People forget this.

Gamers aren't the core drivers of the market anymore.

Anyone tying their boat to gamers exclusively will be relegated to mediocrity.

I like games too, and yes, I'm a gamer, but these are just the facts of today.

No, gaming is still Nvidia's top-earning segment. To go as far as the poster you are quoting and say that they are irrelevant is laughably incorrect. 50% of your revenue is not remotely insignificant.
 
This is why AMD sits at an all-time low of 10% dGPU market share; they fail to read the room!
I use Nvidia RTX and I don't give a f about ray tracing, which gimps performance by a ton while delivering pretty much nothing you will notice when you actually play instead of standing still. DLSS and DLDSR are the best features of RTX for me, and let's be honest, they could probably have been done without dedicated hardware...

RT is mostly good for screenshots, because without the absolute most expensive card people won't be using it anyway, unless they think 30 fps is great; hell, in some games it even feels like there's additional processing lag when RT is enabled, even when fps is "decent." I think it's a gimmick and I hope AMD will be very competitive in raster perf going forward. A LOT of people don't care about RT, and 1440p is still the sweet spot and will be for a long time; this is where AMD shines as well. The 7900 XTX already bites the 4090 in the butt in some games with raster only at 1440p. This is why I'm considering going AMD next time.

And FYI, AMD has 15-16% dGPU market share, and that's on Steam; it's probably more, plus 100% of the console market.
 
AMD, on the other hand, makes reasonable products that perform well, though they just absolutely suck at marketing. I've never once seen their marketing department make a point about their products, other than cost/value ratio, that covers any of the reasons I buy anything technology related. That being said:
Of course, the 7900 XT was very reasonable, getting absolutely destroyed by the 4070 Ti. And their CPUs? Oh, those are insanely reasonable: they priced 6 cores at €300 MSRP when their competition asks 350 for 14 cores that smack it around the room.
 
Let me put this to you:

What if I were to combine AI art generation with ChatGPT's natural-language interface and something like Unreal Engine 5? (We really are not far away from this at all; all the pieces exist, it just takes somebody to bring it all together.)

What if you could generate entire environments just by telling an AI to "show me the bridge of the Enterprise"?
If you can't see the potential and the way the winds are shifting, may our soon-to-exist AI god have mercy on your fleshy soul.
 
  • Max out the L2 cache from 6 MB to 72 MB to make their cards memory-dependent instead of more efficient.

Let me guess: when AMD introduced a 128 MB stack of L3 ("Infinity Cache") to cushion the reduction in bus width and bandwidth, you hailed it as a technological breakthrough.
When Nvidia does the exact same thing with 48 MB/64 MB/72 MB of L2, you consider it "making the wrong bet." Okay.

  • Not bother to go after modularization as aggressively as AMD, so they force higher prices for relatively close to the same performance.

In case you haven't noticed, a 530 mm² aggregation of chiplets and an expensive new interconnect didn't exactly pass the savings along to gamers any more than Nvidia's 295 mm² monolithic product did.
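For what it's worth, both caches are doing the same arithmetic: effective bandwidth rises with the cache hit rate, which is how a narrower bus gets cushioned. Here's a rough sketch with assumed, round numbers, not the specs of any particular card:

```cpp
#include <cstdio>

// Toy model of why a big on-die cache cushions a narrow memory bus.
// All figures below are illustrative assumptions, not vendor specifications.
int main()
{
    const double vram_bw_gbs  = 576.0;   // assumed bandwidth of the external VRAM bus
    const double cache_bw_gbs = 2000.0;  // assumed bandwidth of the on-die L2/L3 cache

    const double hit_rates[] = {0.0, 0.3, 0.5, 0.7};
    for (double hit_rate : hit_rates) {
        // effective bandwidth = hit_rate * cache BW + (1 - hit_rate) * VRAM BW
        const double effective_gbs =
            hit_rate * cache_bw_gbs + (1.0 - hit_rate) * vram_bw_gbs;
        std::printf("hit rate %3.0f%% -> ~%4.0f GB/s effective\n",
                    hit_rate * 100.0, effective_gbs);
    }
    return 0;
}
```

Whether you call it L2 or L3 or "Infinity Cache," the trade-off is the same: die area spent on SRAM in exchange for fewer trips over the external bus.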
 
Probably not, but we all see the AMD Radeon numbers and the Nvidia numbers, and they're easy for everyone to understand.

Nvidia needs competition, and unfortunately, AMD Radeon today isn't it. AMD cards are not cheap enough to justify the lack of RT performance, for example.
Why? Are Nvidia cards cheap enough to justify the lack of RT performance?
 
I honestly think their bet is that Nvidia will have to continue gouging customers, that their chips will continue to be huge and expensive with AI processing included, and that because DC demand will be strong for the same chips that power the GPUs, they will have a product segmentation issue.

They can't really compete with DLSS 3, so why even try? Make a way smaller and cheaper chip that rasters similarly to big daddy NV, does Lumen and open RT methods just fine, and has the benefit of not competing with the MI300. Let's see.
Agreed.

As things become more and more demanding,
Nvidia will have to allocate more and more die space to dedicated AI processing units.
Soon it will reach a critical point where there is too much 'dead weight' and it is no longer cost-effective.

And Nvidia themselves will have to find a way to integrate AI processing back into the "normal" stream processors.
So the cycle begins again.
 
And with that, you can forget about viable competition, at least until Arc becomes an option.
Good luck, AMD.
 