Sunday, August 6th 2023

AMD Retreating from Enthusiast Graphics Segment with RDNA4?
AMD is rumored to be withdrawing from the enthusiast graphics segment with its next RDNA4 graphics architecture. This means there won't be a successor to its "Navi 31" silicon that competes at the high end with NVIDIA, but rather one that competes in the performance segment and below. It's possible AMD can't justify the cost of developing high-end GPUs when they don't move enough volume over the product lifecycle. The company's "Navi 21" GPU benefited from the cryptocurrency-mining swell, but just as with NVIDIA, the company isn't able to push enough GPUs at the high end.
With RDNA4, the company will focus on specific segments of the market that sell the most, which would be the x700-series and below. This generation will be essentially similar to the RX 5000 series powered by RDNA1, which did enough to stir things up in NVIDIA's lineup, and trigger the introduction of the RTX 20 SUPER series. The next generation could see RDNA4 square off against NVIDIA's next-generation, and hopefully, Intel's Arc "Battlemage" family.
Source:
VideoCardz
363 Comments on AMD Retreating from Enthusiast Graphics Segment with RDNA4?
www.anandtech.com/print/7974/amd-beema-mullins-architecture-a10-micro-6700t-performance-preview
Obviously they have their calculations and projections, and spending that kind of money on GPUs isn't a priority. WE ALL AGREE ON THIS POINT. For a company whose stock price has doubled since 2020, and that has made multi-billion-dollar acquisitions since 2020, I just happen to think they should have invested more in securing a more advanced node for Navi 3X.
Believe your wild conclusions all you’d like, but until you run a multi-billion-dollar company with multiple departments providing you with the background on how and why these decisions are made, neither you nor I have any ground to stand on.
Seeing how well AMD as a whole has done since the change of CEO to Lisa Su, the initial design and subsequent improvement of Zen, and the multiplying of its net worth, it seems they’re doing well with the decisions they’ve made.
It's funny to be offended on behalf of a company which I did not attack (I'm a huge AMD fan, btw).
www.anandtech.com/show/15272/imagination-and-apple-sign-new-agreement
www.anandtech.com/show/11867/imagination-technologies-to-be-acquired-by-canyon-bridge-for-550m
Basically a one trick pony with Apple's graphics IP deal.
Everyone is moving towards chiplet design, AMD looks to have been playing the long game, and has arguably made the smart move.
That being said, latency is about 10% worse than with a monolithic RDNA 3 die, in this case the 7600: chipsandcheese.com/2023/06/14/latency-testing-is-hard-rdna-3-power-saving/
So, I don't see an upside in MCM aside from, again, saving wafers, and that is only really needed for AMD because they share the wafers with their CPUs - something which would not be the case if Radeon were still ATI.
Nvidia will go for CHIPLETS as soon as it makes SENSE for them, as in, better performance or efficiency - both things which AMD did not achieve this time.
That being said, my argument is MONOLITHIC vs MCM. MCM = only an upside in performance if you use multiple CORE CHIPLETS, not just 1x CORE CHIPLET + MCDs.
The INTER-DIE COMMUNICATION increases power (= decreases efficiency) because data has to travel farther, from the GCD out to the MCDs, which is not a thing in a monolithic design.
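As a rough back-of-envelope illustration of that point, the sketch below multiplies an assumed energy-per-bit figure for the package interconnect by an assumed sustained GCD-to-MCD bandwidth. Both numbers are placeholders for illustration only, not AMD-published specifications.

# Rough estimate of the extra power cost of GCD <-> MCD traffic on an MCM GPU.
# ENERGY_PER_BIT_PJ and SUSTAINED_BANDWIDTH_GBPS are assumed placeholder values,
# not measured or vendor-published figures.

ENERGY_PER_BIT_PJ = 0.5          # assumed energy to move one bit across the package interconnect
SUSTAINED_BANDWIDTH_GBPS = 3500  # assumed sustained GCD<->MCD traffic, in GB/s

bits_per_second = SUSTAINED_BANDWIDTH_GBPS * 1e9 * 8
interconnect_watts = bits_per_second * ENERGY_PER_BIT_PJ * 1e-12

print(f"Extra interconnect power: ~{interconnect_watts:.0f} W")
# With these assumptions: 3500 GB/s * 8 bits * 0.5 pJ/bit is roughly 14 W.
# On a monolithic die the same traffic stays on-chip at a lower energy per bit,
# which is the efficiency penalty being argued here.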
Does the 7900 XTX not outperform the 6900 XT?
Does the 7900 XTX have better efficiency, on average and across the board, than the 6900 XT?
I’ll spoil it for you: yes and yes.
You can even look at Zen 4 vs 13th-gen Intel: in terms of efficiency, Intel gets slaughtered. And then we can look at the effects of die shrinks and transistor density making chips ever more heat-dense and difficult to cool. There are way more benefits to MCM than cost savings, given the ever-encroaching limitations of silicon. To say otherwise is to be naive.
Leave the engineering to Intel, AMD and Nvidia; your armchair serves little purpose.
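To make the efficiency comparison concrete, here is a minimal sketch of how performance per watt is usually compared between two cards. The FPS and board-power numbers are hypothetical placeholders, not review measurements; substitute real averages to get the actual ratio.

# Minimal perf-per-watt comparison sketch. All numbers are illustrative
# placeholders, not actual review data.

def perf_per_watt(avg_fps, board_power_w):
    """Frames per second delivered per watt of board power."""
    return avg_fps / board_power_w

old_card = perf_per_watt(avg_fps=100.0, board_power_w=300.0)  # placeholder: a 6900 XT-class card
new_card = perf_per_watt(avg_fps=140.0, board_power_w=355.0)  # placeholder: a 7900 XTX-class card

print(f"Relative efficiency: {new_card / old_card:.2f}x")     # about 1.18x with these placeholder numbers

If the newer card delivers more frames per watt once measured numbers are plugged in, the "more efficient" claim holds regardless of whether the gain comes from the node, the architecture, or the packaging.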
Fact is, with a monolithic design it's realistic to expect that it would've also had a higher clock due to higher efficiency, plus better latency, which also means higher performance. This would've made it easily faster than the 4080, but with the downside that AMD would not have made big financial gains with it. It's then realistic to expect it would've been priced at maybe 1200 instead of 1000, due to being faster than the 4080, despite other shortcomings (no DLSS 3, no other NV features). It could've still been very profitable, but = fewer chips, fewer graphics cards = less money.
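For what it's worth, the "fewer chips per wafer" part of that trade-off is simple geometry. The sketch below uses the classic gross dies-per-wafer approximation with assumed die areas (a hypothetical large monolithic die versus a smaller GCD with the cache and memory PHYs moved off-die); it ignores yield, scribe lines, and packaging cost.

import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0):
    # Classic gross dies-per-wafer approximation; ignores yield and scribe lines.
    r = wafer_diameter_mm / 2.0
    return int(math.pi * r * r / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

monolithic_area = 530.0  # assumed area of a hypothetical monolithic high-end die on the leading node
gcd_area = 305.0         # assumed area of a smaller graphics die, with cache/PHYs moved to MCDs on an older node

print(dies_per_wafer(monolithic_area))  # ~104 candidate dies per 300 mm wafer
print(dies_per_wafer(gcd_area))         # ~193 candidate dies per 300 mm wafer

Smaller dies also tend to yield better, which compounds the effect; whether that saving outweighs the latency and interconnect-power costs discussed above is exactly the disagreement in this thread.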
You’re ignoring what I was replying to, and making a point of.
Viper stated as some self-proclaimed fact that the only advantage is “saving dies” (whatever that actually means), and that Navi 31 offers no energy-efficiency or performance improvements over Navi 21, which is objectively false. What those performance and efficiency improvements are directly attributable to, we cannot say for sure, not without some white-paper documentation dissecting the architectural improvements.
As it exists, the 7900 XTX is a faster, more efficient GPU; that is a fact. We have no proof of what a monolithic Navi 31 would have been in terms of hardware specs, or how scaling and die-size limitations may have benefited or harmed the design and performance.
Which all circles back to Viper just stating a bunch of BS assumptions without any way to ever prove them, let alone understand why AMD chose to approach the design of Navi 31 the way they did.
If I were to make an equivalent statement along the thought process of both Viper's and yours, I should be calling AMD incapable of making good decisions for not making a monolithic GPU with 12K shaders, 64 GB of RAM, on a 512-bit bus with 384 MB of Infinity Cache on a 2 nm process. Do you see how that’s ridiculous?
It’s one thing to discuss hypothetical technology, but something else entirely to call out a company for not doing something when you have no idea what engineering limitations or problems that design could entail.
Absolutely no one is saying that if it were monolithic it should have 12K cores, 64 GB of HBM3 RAM, etc. Show me where I even imply that. Quit with the straw-man stuff; THAT'S ridiculous.
Viper is saying MCM makes more sense when you, you know, actually have more than one (graphics) chiplet. How is this not sinking in? If you don't believe that, you are drowning in AMD Kool-Aid, brother.
Yes, AMD got a head start on MCM for GPUs, which I hope will help them in the future, since yes, chiplets are probably the way forward. But their GPU situation right now is as dire as it's ever been. Not saying they're doomed, but the optimism is down for me.
While you both continue to falsely claim you know exactly which minutiae of Navi 31's design account for an x% performance increase, I’m simply stating what Navi 31 is based on the information we have.
The crystal-ball armchair-engineer logic is strong with you two; we have no clue what a monolithic-die Navi 31 would be, for better or worse. I’m just not naive enough to believe I know better than the hundreds of engineers working for these companies.
Also, that in bold is not the truth; they have increased MSRP across most every tier. I am wrong about the 7900 XTX and 6900 XT MSRP.
Come again? Exact same MSRP, images taken from TPU reference reviews for their respective releases.
Claims I’m straw-manning and starts putting words in other people’s mouths.