Monday, February 20th 2023

AMD RDNA4 Architecture to Build on Features Relevant to Gaming Performance, Doesn't Want to be Baited into an AI Feature Competition with NVIDIA

AMD's next-generation RDNA4 graphics architecture will retain a design focus on gaming performance, without being drawn into an AI feature-set competition with rival NVIDIA. David Wang, SVP of the Radeon Technologies Group, and Rick Bergman, EVP of Computing and Graphics Business at AMD, gave an interview to Japanese tech publication 4Gamer, in which they dropped the first hints on the direction the company's next-generation graphics architecture will take.

While acknowledging NVIDIA's momentum in the GPU-accelerated AI space, AMD said that it doesn't believe image processing and performance upscaling are the best use of the GPU's AI-compute resources, and that the client segment still hasn't found extensive use for GPU-accelerated AI (or, for that matter, even CPU-based AI acceleration). AMD's own image upscaling tech, FSR, doesn't leverage AI acceleration. Wang said that with the company introducing AI acceleration hardware in its RDNA3 architecture, he hopes AI is leveraged to improve gameplay, such as procedural world generation, NPC behavior, and bot AI, to add the next level of complexity, rather than spending the hardware resources on image processing.
AMD also stressed the need to make the GPU more independent of the CPU in graphics rendering. The company has taken several steps in this direction over the past several generations, the most recent being the multi-draw indirect accelerator (MDIA) component introduced with RDNA3. Using this, software can batch multiple instanced draw commands into a single submission that the GPU issues itself, greatly reducing CPU-level overhead. RDNA3 is up to 2.3x more efficient at this than RDNA2. Expect more innovations along these lines with RDNA4.
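For readers unfamiliar with multi-draw indirect, the sketch below illustrates the general idea using OpenGL's glMultiDrawElementsIndirect, one long-standing API exposure of the feature (the interview doesn't name a specific API, so treat this as an illustrative assumption): the CPU writes many draw parameters into one buffer, and a single call lets the GPU walk through all of them itself.

```cpp
#include <glad/glad.h>  // or any OpenGL 4.3+ loader; a valid GL context is assumed
#include <vector>

// Per-draw parameter layout defined by OpenGL for indexed indirect draws.
struct DrawElementsIndirectCommand {
    GLuint count;          // indices in this draw
    GLuint instanceCount;  // instances in this draw
    GLuint firstIndex;
    GLuint baseVertex;
    GLuint baseInstance;
};

// Hypothetical helper: upload N draws and submit them with ONE CPU call,
// instead of N separate glDrawElementsInstanced calls and their driver overhead.
void submit_batched_draws(const std::vector<DrawElementsIndirectCommand>& cmds,
                          GLuint indirectBuffer)
{
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
    glBufferData(GL_DRAW_INDIRECT_BUFFER,
                 static_cast<GLsizeiptr>(cmds.size() * sizeof(DrawElementsIndirectCommand)),
                 cmds.data(), GL_DYNAMIC_DRAW);

    // One submission; the GPU reads the command buffer and issues every draw.
    glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                nullptr,  // offset into the bound indirect buffer
                                static_cast<GLsizei>(cmds.size()),
                                sizeof(DrawElementsIndirectCommand));
}
```

In principle a compute shader can also generate that command buffer on the GPU, which is the sort of CPU-independence the hardware accelerator is meant to speed up.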

AMD understandably didn't say anything about the "when," "what," and "how" of RDNA4, as its latest RDNA3 architecture is just off the ground and awaiting a product ramp through 2023 into market segments spanning iGPUs, mobile GPUs, and mainstream desktop GPUs. RDNA3 currently powers the Radeon RX 7900 series high-end graphics cards and the iGPUs of the company's latest 4 nm "Phoenix" Ryzen 7000-series mobile processors. You can catch the 4Gamer interview at the source link below.
Sources: 4Gamer.net, HotHardware

221 Comments on AMD RDNA4 Architecture to Build on Features Relevant to Gaming Performance, Doesn't Want to be Baited into an AI Feature Competition with NVIDIA

#176
AusWolf
ratirt: I think that depends on the screen quality and size of it. I would not suggest getting 32 inch if you are planning to play 1080p at some point in the future and you are not going to upgrade your GPU.
Agreed, although I'm not getting anything. I'm OK with 24" 1080p. My desk isn't that big anyway. :)
fevgatos: Of course it doesn't increase IQ in all games, but it doesn't really decrease it either. I cannot for the life of me tell the difference, not even with screenshots next to each other. Even Balanced looks great on static screens, but then it loses in motion where you can see some minor artifacts, but DLSS Q is amazing
Considering that you're talking about higher resolutions, I believe you. It's a different story with 1080p, that's all I'm saying.
Posted on Reply
#177
ratirt
AusWolf: Agreed, although I'm not getting anything. I'm OK with 24" 1080p. My desk isn't that big anyway. :)
Damn, I wish I was in your shoes now. I'd really want to go through the amazement of switching to 4K and getting my mind blown. Now it's the other way around: when I switch to 1080p, I still get my mind blown, but in a different way. It's your choice; I'm sure 1440p and 4K will knock on your door at some point. What I can tell you is, when you switch, you will not regret it. Maybe it's just not your time yet.
Posted on Reply
#178
evernessince
Dimitriman: Ok, they did not claim anywhere that it includes frame generation, instead they used the term "fluid motion frames", but sure, let's assume so. But then when? All they claimed is 2023 and it is still nowhere to be found, and we are approaching March. Maybe by the time this comes out and actually works, RDNA 4 will already be launched.
Remember that when you add frames to something, it makes it look more fluid. That's ultimately the goal of DLSS 3.0: making the game look fluid. Not sure how AMD is going to approach it, but frame interpolation (which is what Nvidia is doing) doesn't require AI to work.

IMO there's no huge rush to release it because, as tech outlets like HWUB have reported, DLSS 3.0 is only useful in very specific scenarios due to its drawbacks. The introduced latency will always be a problem for DLSS 3.0 unless Nvidia fundamentally changes the technology. Unless Nvidia starts extrapolating the next frame instead of inserting a frame between the current and last one, DLSS 3.0 frame insertion will always be niche. The end of 2023 is still a year before RDNA4's likely launch, mind you.
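To illustrate why that latency is inherent, here's a deliberately naive frame-interpolation sketch of my own (real frame generation, whether DLSS 3 or AMD's Fluid Motion Frames, uses motion vectors/optical flow rather than a plain blend, so this is only the conceptual skeleton): the in-between frame can only be produced once the newer of its two source frames already exists, so that newer frame is always presented late.

```cpp
#include <cstdint>
#include <vector>

// A frame as packed 8-bit RGBA pixels.
using Frame = std::vector<uint8_t>;

// Naive midpoint interpolation: average previous and current frame per channel.
// The key property is shared with real interpolation schemes: "mid" cannot be
// computed (let alone displayed) until "curr" has been fully rendered, so the
// display of "curr" is delayed by one generated-frame interval -> added latency.
Frame interpolate_midpoint(const Frame& prev, const Frame& curr)
{
    Frame mid(prev.size());
    for (size_t i = 0; i < prev.size(); ++i)
        mid[i] = static_cast<uint8_t>((prev[i] + curr[i]) / 2);  // int promotion avoids overflow
    return mid;
}
```

Extrapolating a future frame from past frames instead would avoid holding back the newest rendered frame, which is what the post above is getting at.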
Posted on Reply
#179
TheinsanegamerN
RH92: Nobody claimed AMD was giving up on AI, the claim was that they are already behind the competition on that front and things aren't going to get any better for them since they seem to have dropped the ball on the idea of competing head to head with Nvidia.



Do you know why this speech sounds hollow? Because it's based on thin air!

AMD pulls the old switcheroo and claims they have implemented AI acceleration hardware in RDNA3, for what? Features that may or may not be a thing by the time RDNA3 goes EOL? When Nvidia implemented AI acceleration hardware in Turing, they also immediately brought games that would leverage said hardware to the table; they didn't wait for it to happen.

Yet somehow you are falling for it... well, the majority of the market isn't.
It sure seems to rhyme with AMD dropping out of the high-end CPU market way back yonder.

"CEO Rory Read has made the comment that AMD will no longer compete head to head with Intel in the CPU market"
decryptedtech.com/amds-rory-read-says-there%E2%80%99s-enough-processing-power-on-every-laptop-on-the-planet-today

Ahh, dark days.
Posted on Reply
#180
AusWolf
ratirt: Damn, I wish I was in your shoes now. I'd really want to go through the amazement of switching to 4K and getting my mind blown. Now it's the other way around: when I switch to 1080p, I still get my mind blown, but in a different way. It's your choice; I'm sure 1440p and 4K will knock on your door at some point. What I can tell you is, when you switch, you will not regret it. Maybe it's just not your time yet.
My time will come when I can get such a monitor for under £200 and a graphics card to drive it without upscaling for under £400. Maybe. :) My current system really stretches the limit of how much I'm willing to spend on my parts.
Posted on Reply
#181
mrnagant
I just thought of this. Phoenix is Zen 4 with integrated RDNA3, which comes with 2 AI accelerators per CU, so up to 24. Then it also comes with the XDNA AI engine.

I wonder how that is going to be handled? Will they have similar or different functionality? Can they help each other out? Is the OS going to have to handle which one to use? Can end-users pick which one to use? Just wait, we are going to be blown up the rear end with AI cores. Zen 5 will have an XDNA AI engine too, its APUs will have AI accelerators in their RDNA graphics, and those with discrete cards will have AI cores there as well.
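On the "who picks which one" question, today that's mostly left to the application. Purely as a hypothetical illustration (not AMD's or Microsoft's actual scheduling policy), a Windows program could enumerate the adapters the OS exposes via DXGI and let the user choose which device a compute/ML runtime should target; whether the XDNA engine even appears in such a list depends on how its driver exposes it.

```cpp
// Hypothetical device picker: list DXGI adapters so a user or config file could
// choose where an ML workload runs. Assumes the accelerators of interest are
// visible as DXGI adapters; an NPU may instead require a vendor runtime to enumerate.
#include <windows.h>
#include <dxgi.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc{};
        adapter->GetDesc1(&desc);
        wprintf(L"Adapter %u: %s (dedicated video memory: %llu MB)\n",
                i, desc.Description,
                static_cast<unsigned long long>(desc.DedicatedVideoMemory >> 20));
    }
    return 0;
}
```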
Posted on Reply
#182
tfdsaf
I'd rather AMD invest all that die space into more shaders and clock speed and make GPUs faster. Ngreedia is only doing these things to check boxes on their advertising; these are all worthless "features" that wouldn't be needed if we actually had faster GPUs that could provide more FPS without the need to cheat!
Posted on Reply
#183
Vayra86
tfdsaf: I'd rather AMD invest all that die space into more shaders and clock speed and make GPUs faster. Ngreedia is only doing these things to check boxes on their advertising; these are all worthless "features" that wouldn't be needed if we actually had faster GPUs that could provide more FPS without the need to cheat!
Well honestly, raster graphics are simply done for the most part. We're now mulling over whether subpixels are the right color at the right time, go figure.

That's what they're using 'AI' for, after all. It's hilarious in all of its sadness; the low-hanging fruit is long gone and graphical improvements have hit diminishing returns big time. Expensive post-processing is expensive, always has been, and RT then sells us the idea that brute-forcing the whole scene's lighting is a great step forward. Again, it's hilariously stupid and sad if you think about it. It is desperate commerce looking for desperate measures to keep any semblance of progress in the GPU space, to keep selling products to us.

You can still put RT and non-RT scenes side by side and be challenged to spot a difference. The vast majority of the lighting is still rasterized/pre-cooked, and the moment it's not, the performance nosedives. If we RT a full scene, you're down to unplayable FPS on the fastest GPU on the planet right now (Portal), and again, struggling to see the point / what's gained in actual graphics fidelity.

Fact is, some shit's just done at some point.
Posted on Reply
#184
medi01
"AI" talk in terms of GPU is a very straightforward number cruncher. You have X mm2 of silicon, you make use of it.

Literally a comment by the Leather Man himself, when Frau Su rolled out some "AI" crap GPU that rivaled his.
RH92: This is why AMD sits at an all-time low of 10% dGPU market share
Yeah. I mean, latest quarter earnings:

$1.6 billion made by AMD's GPU + console business in one quarter.
$1.57 billion made by NV (a sharp drop from earlier years, mind you).

"But that marketing company told me so".
Vayra86: You can still put RT and non RT scenes side by side and be challenged to spot a difference.
I can spot the difference 100% of the time if the FPS counter is on, though. :D
Dimitriman: includes frame generation
Something my 5+ year old TV is doing.

Yes, it also adds lag, naturally.
Posted on Reply
#185
RH92
medi01: Yeah. I mean, latest quarter earnings:

1.6 billion made by AMD GPU + Console business in one quarter.
1.57 billion made by NV (a sharp drop from earlier years mind you).

"But that marketing company told me so".
Guy throws consoles into the mix when the topic is dGPU, might as well throw cellphones in while you are at it :roll::roll::roll:. According to your logic, Intel dominates the dGPU market :roll:.



Even in a declining market, Nvidia managed to gain more market share over AMD, which tells you how well AMD's plans are going... unless of course you are trying to tell me that AMD plans to abandon the dGPU market and focus only on making iGPUs and SoCs for consoles :roll:.
Posted on Reply
#186
Vayra86
RH92: Guy throws consoles into the mix when the topic is dGPU, might as well throw cellphones in while you are at it :roll::roll::roll:. According to your logic, Intel dominates the dGPU market :roll:.



Even in a declining market, Nvidia managed to gain more market share over AMD, which tells you how well AMD's plans are going... unless of course you are trying to tell me that AMD plans to abandon the dGPU market and focus only on making iGPUs and SoCs for consoles :roll:.
News flash: the majority of console games are also key drivers for the PC platform. It's only been like that for 20-odd years; I know it's hard to stay up to date.
Posted on Reply
#187
Dave65
JAB Creations: I'm sure glad the first comments are always by people who comprehend business and economics; I'm sure they're running multi-billion businesses that are responsible for employing thousands of people. :rolleyes:

I think it's a good strategy. Nvidia keeps making the wrong hard bets:
  • Lower all cards below the 4090 by two tiers and pretend that crypto-hashing is still a thing to continue to jack up their prices.
  • Max-out L2 cache from 6MB to 72MB to make their cards memory dependent instead of more efficient.
  • Not bother to go after modularization as aggressively as AMD, so they force higher prices for relatively close to the same performance.
  • Implement dedication to very-specific kinds of AI stuff that requires dedicated silicon that doesn't inherently go towards gaming performance.
  • Make all of their implementations only work on their cards.
  • But hey, even if the 4030 is mislabeled as a "4060" it's nice to see a 30-series card come out with 8GB of video RAM!
  • Make their customers think that their $400 card which is going to be 30% slower than AMD's equivalent is better because the $2,000+ card they can't afford is faster than AMD's fastest.
Yeah, a lot of people fall for the last one, but there are lots of people who monkey-see monkey-do and put zero effort into doing research.

AMD on the other hand makes reasonable products that perform well, though they just absolutely suck at marketing. I've never once seen their marketing department make a point other than the cost/value ratio of their products that covers any of the reasons I buy anything technology related. That being said:
  • Their technologies work on both AMD and Nvidia cards (probably Intel's too), so game developers have a much better incentive to implement something that will work on more cards rather than fewer.
  • AI isn't supposed to be complex, it's supposed to be generic and numerous just like the calculations themselves.
THIS!
Posted on Reply
#188
N3M3515
fevgatos: Well it doesn't really matter whether you think it's worth it or not. Why is it okay for the 7900 XT not only to be 15% more expensive at launch and consume 30% more power, but also get its ass handed to it in RT?
Why is it okay for the 4080 to be 71% more expensive than the 3080 while only being 50% faster?
Posted on Reply
#189
Xajel
I think AI, ML & some other HW acceleration are crucial for the consumer these days and even more so in the future, not just for gaming; consumers don't just game on their Radeon and GeForce. That's why most content creators use NVIDIA nowadays: the non-gaming features of the GPU are outstanding and influential there, and they keep adding more features, like RTX acceleration for prosumers (the OptiX engine for 3D rendering).

I don't mean an overpowered implementation, like 2x over NV, but something competitive and useful that doesn't affect the other businesses they have.

And if this is their idea about AI, why did they add AI to their Phoenix APUs?
Posted on Reply
#190
medi01
RH92: Guy throws consoles into the mix when the topic is dGPU
Yeah. Why could consoles matter, huh? :D

Oh, wait, AMD reported console APU + GPU sales combined, THAT IS WHY.

Oh. But hard to follow, isn't it?


7 million or so console APUs sold by AMD in the same period. Reported $1.6 billion revenue. (NV reported $1.57 billion from its GPU business.)

So the question, which apparently needs a rocket scientist judging by comments in this thread: how much of that $1.6 billion chunk is console APUs?

How much could AMD realistically charge for a bare APU chip, if consoles start at $399 with a controller, SSD and what not?
Say $100-150.

That gives us between
1600 − 7 × 100 = 900 million and
1600 − 7 × 150 = 550 million
for GPU revenue on AMD's side.


For "12% of the market share" to be true, AMD would need to make on GPUs less than 1.57 (NV revenue)*12/84 = 220 million on GPUs.
Then console APUs would have to cost, on average, (1600 − 220) / 7 ≈ $197.

No way in hell is AMD getting half of the PS5 console price for its chip alone.
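For anyone who wants to sanity-check that estimate, here is the same back-of-the-envelope calculation as a small snippet (the $100-150 per-APU figures and the 7 million unit count are the post's own assumptions, not reported numbers):

```cpp
#include <cstdio>

int main()
{
    // Assumptions from the post above, in millions of USD / millions of units.
    const double amd_gpu_plus_console_rev = 1600.0; // AMD quarterly revenue, GPU + console
    const double nv_gpu_rev               = 1570.0; // NVIDIA quarterly GPU revenue
    const double console_units            = 7.0;    // console APUs shipped (guess)

    // Implied AMD dGPU revenue if each console APU sells for $100 or $150.
    const double asps[] = {100.0, 150.0};
    for (double asp : asps)
        printf("APU at $%.0f -> AMD GPU revenue ~ $%.0f M\n",
               asp, amd_gpu_plus_console_rev - console_units * asp);

    // Most AMD could be making on GPUs if it really held 12% of a market
    // where NVIDIA's $1.57B corresponds to 84%.
    const double implied_amd_rev = nv_gpu_rev * 12.0 / 84.0;
    printf("12%% share would imply ~ $%.0f M, i.e. APUs at ~ $%.0f each\n",
           implied_amd_rev,
           (amd_gpu_plus_console_rev - implied_amd_rev) / console_units);
    return 0;
}
```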
fevgatos: getting its ass handed to it in RT?
This is how we refer to a 20% more expensive card being 16% faster at RT these days. :roll:

If you wonder what RT is: the most impactful aspect of this feature is "bring down my FPS by 40-50%".
It was promised that "new GPUs" wouldn't be affected, because they have "more RT thingies".
But for some reason that didn't happen, even in the 3rd generation of RT cards.
As if "RT thingies" were still largely utilizing good old raster thingies... :D
Posted on Reply
#191
las
nguyen: Yup, AMD still loses in rasterization even when they throw everything plus the kitchen sink at it, it's quite pathetic.

And it's not like AMD is doing more with less, they are doing less with more: the 7900 XTX with a 384-bit bus + 24GB VRAM barely beats the 256-bit 4080 by a hair in raster and loses in everything else ;). The BOM on the 7900 XTX is definitely higher than that of the 4080, and the only way for AIBs to earn any profit is selling the 7900 XTX at ~1100 USD, which makes it a worse choice than the 1200 USD 4080.

Everyone and their mother should realize by now that Nvidia is just letting RTG survive enough to keep the pseudo-duopoly going.
The 7900 XTX beats or performs on par with the 4090 in plenty of games, while the 4090 costs 60% more. The 4090 is terrible value.
The 7900 XTX even has higher minimum fps than the 4090 at Ultra settings 4K in Hogwarts Legacy: www.techspot.com/review/2627-hogwarts-legacy-benchmark/

AMD does less with more? Hahah. Nvidia needed a node advantage and GDDR6X. You compare bus width and memory size but don't talk about GDDR6 vs GDDR6X and the manufacturing process... Once again, clueless. No wonder Nvidia is already prepping the 4090 Ti and 4080 Ti :laugh: RDNA3 is getting faster and faster with every driver, and OC headroom on the 7900 XTX is like 10-15%, meanwhile you can get 2-5% tops on Nvidia, because they already maxed them out and don't allow proper OC.

Most gamers don't give a F about ray tracing. It's a gimmick. No Nvidia card will do heavy ray tracing at high res without using upscaling anyway. Maybe the 5000 series will be able to; the 2000, 3000 and 4000 series are too slow for proper RT unless you accept much lower fps, with huge dips. Even the 4090 is slow as hell with RT on high.
Posted on Reply
#192
nguyen
las: 7900 XTX beats or performs on par with 4090 in plenty of games for 60% less money. 4090 is terrible value.
7900 XTX even has higher minimum fps than 4090 at Ultra settings 4K in Hogwartz Legacy; www.techspot.com/review/2627-hogwarts-legacy-benchmark/

AMD does less with more? Hahah. Nvidia needed a node advantage and GDDR6X. You compare bus width and memory size but don't talk about GDDR6 vs GDDR6X and manufacturing proces ... Once again, Clueless. No wonder Nvidia already prepping 4090 Ti and 4080 Ti :laugh: RDNA3 is getting faster and faster for every driver and OC headroom on 7900XTX is like 10-15%, meanwhile you can get 2-5% tops on Nvidia, because they already maxed them out and don't allow proper OC.

Most gamers don't give a F about Ray Tracing. It's a gimmick. No Nvidia card will do heavy ray tracing at high res without using upscaling anyway. Maybe 5000 series will be able to, 2000, 3000 and 4000 series are too slow for proper RT unless you accept much lower fps, with huge dips. Even 4090 is slow as hell with RT on high.
Yes, Nvidia is prepping the 4090 Ti to beat RDNA4; what a dumpster fire RDNA3 is. The 4090 will be the next 1080 Ti that took AMD 3.5 years to beat ;)
Posted on Reply
#193
las
nguyen: Yes, Nvidia is prepping the 4090 Ti to beat RDNA4; what a dumpster fire RDNA3 is. The 4090 will be the next 1080 Ti that took AMD 3.5 years to beat ;)
Considering 7900XTX is already close overall, I doubt it

Btw the 7900 XTX uses 50 watts less than the 4090 and performs on par with or beats it in plenty of games in raster, which is the only thing that matters to most gamers :roll: The 4090 is 60-75% more expensive as well, what a steal :laugh:

Nvidia is prepping 4090 Ti because AMD is prepping 7950XTX :toast:
Posted on Reply
#194
nguyen
las: Considering 7900XTX is already close, I doubt it

Btw 7900XTX uses 50 less watts than 4090 and performs on par or beats it in plenty of games in raster which is the only thing that matters to most gamers :roll: 4090 is 60-75% more expensive as well, what a steal :laugh:
Plenty of gimmicky games that the majority of gamers don't care about anyways ;), but AMD is paying YTers to include those games in their benchmark suite, all right

Oh well if you can't afford 4090, don't feel too bad ;)

Oh and Nvidia is prepping Blackwell too, which could be twice as fast as the 4090 if Nvidia is serious; poor RDNA3/4 are so 1-2 gens behind :/
Posted on Reply
#195
las
nguyen: Plenty of gimmicky games that the majority of gamers don't care about anyways ;), but AMD is paying YTers to include those games in their benchmark suite, all right

Oh well if you can't afford 4090, don't feel too bad ;)

Oh and Nvidia is prepping for Blackwell too, which could be twice as fast as 4090 if Nvidia is serious, poor RDNA3/4 are so 1-2 gen behind :/
I can easily afford it, but I am not stupid :laugh: When will you replace that dated OLED? Can't afford better, or do you enjoy low nits and crappy HDR?

It's funny to see how butthurt you are about the 7900 XTX getting closer and closer. You probably cleared out the bank to buy that entry-level 4090 :laugh: Probably why you re-used an old HX850 :roll:

Glad I am not forced to use Windows 11 to make my CPU work right :laugh:

Yeah, a gimmick game that sells 12+ million copies in 2 weeks :laugh:
nguyen: Yes, Nvidia is prepping the 4090 Ti to beat RDNA4; what a dumpster fire RDNA3 is. The 4090 will be the next 1080 Ti that took AMD 3.5 years to beat ;)
Keep dreaming, the 1080 Ti was a $699 card, not $1,599 and up like the 4090. The 1080 Ti had proper connectivity; the 4090 doesn't even have DP 2.1 in 2023 :roll:
The 7900 XTX is already close to the 4090 in tons of games and even beats it in some, for 600 dollars less and with 50 watts less :laugh:

Nvidia also skimped on the VRAM on the 4090; the 4080 has higher-clocked memory.

4080 Ti and 4090 Ti = DP 2.1 + high-speed GDDR6X modules. Both will kill off the 4090 and make it EoL. Then a year after, the 5070 will come out and beat the 4090 :laugh: And by then, resale value will be sub-$400 :laugh:
Great buy :toast: But I guess you can afford it, that's why you bought the absolute cheapest 4090 and skimped on other parts :laugh: Remember to replace that dated PSU before it pops, you are using high-wattage parts after all :p
Posted on Reply
#196
nguyen
Poor AMD, trying their hardest every single day to fix RDNA3 (with a ton of copium), meanwhile Nvidia is just chilling with their full AD102 chip selling for over $7K :rolleyes:.
Maybe AMD should just rename RTG to CTG (Console Technology Group)
Posted on Reply
#197
las
nguyen: Poor AMD, trying their hardest every single day to fix RDNA3 (with a ton of copium), meanwhile Nvidia is just chilling with their full AD102 chip selling for over $7K :rolleyes:.
Maybe AMD should just rename RTG to CTG (Console Technology Group)
Truth hurts, I see :laugh: They milked you hard this time. Did your 4090 catch fire yet?
Posted on Reply
#198
ratirt
I'm trying to wrap my head around "make the GPU more independent of the CPU in graphics rendering." Why would that be an issue here? There are tasks the CPU still has to perform alongside the GPU while rendering. Is this to mitigate the bottlenecks that a CPU can cause when paired with a powerful GPU? There are so many variables here, and yet AMD is addressing this particular issue (if you can even call it that).
Posted on Reply
#199
chrcoluk
RH92: This is why AMD sits at an all-time low of 10% dGPU market share, they fail to read the room!!!
They power more gaming devices than Nvidia; the dGPU market is smaller than the console market.
Posted on Reply
#200
Aquinus
Resident Wat-man
chrcoluk: They power more gaming devices than Nvidia
I'm not a fan of nVidia, but the Nintendo Switch accounts for a very large chunk of gaming consoles in the wild, and that's powered by an nVidia chip. However, when it comes to non-portable gaming consoles, AMD practically has a monopoly on custom-designed SoCs for these devices. dGPU market share doesn't properly describe the diversity of AMD's portfolio, to be completely honest.
Posted on Reply