Friday, March 6th 2020

AMD RDNA2 Graphics Architecture Detailed, Offers +50% Perf-per-Watt over RDNA

With its 7 nm RDNA architecture that debuted in July 2019, AMD achieved a nearly 50% gain in performance/Watt over the previous "Vega" architecture. At its 2020 Financial Analyst Day event, AMD made a big disclosure: that its upcoming RDNA2 architecture will offer a similar 50% performance/Watt jump over RDNA. The new RDNA2 graphics architecture is expected to leverage 7 nm+ (7 nm EUV), which offers up to 18% transistor-density increase over 7 nm DUV, among other process-level improvements. AMD could tap into this to increase price-performance by serving up more compute units at existing price-points, running at higher clock speeds.
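Taken at face value, the two claims compound. As a back-of-envelope illustration using AMD's own figures (not a number AMD has stated):

$$\frac{\text{perf/W}_{\text{RDNA2}}}{\text{perf/W}_{\text{Vega}}} \approx 1.5 \times 1.5 = 2.25$$

In other words, if both 50% claims hold, RDNA2 silicon would deliver roughly 2.25 times "Vega's" performance at the same board power.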

AMD has two key design goals with RDNA2 that help it close the feature-set gap with NVIDIA: real-time ray-tracing and variable-rate shading, both of which have been standardized by Microsoft under the DirectX 12 DXR and VRS APIs. AMD announced that RDNA2 will feature dedicated ray-tracing hardware on die. On the software side, the hardware will leverage the industry-standard DXR 1.1 API. The company is supplying RDNA2 to next-generation game-console manufacturers such as Sony and Microsoft, so it's highly likely that AMD's approach to standardized ray-tracing will have more takers than NVIDIA's RTX ecosystem, which tops up DXR with its own RTX feature-set.
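Because RDNA2 exposes its ray-tracing hardware through the standard DXR path rather than a proprietary API, engines can detect support with the ordinary Direct3D 12 capability query on any vendor's GPU. Below is a minimal sketch of that check; it assumes an already-created ID3D12Device and a Windows SDK recent enough to define the DXR 1.1 tier enum (10.0.19041+).

```cpp
#include <windows.h>
#include <d3d12.h>
#include <cstdio>

// Minimal sketch: report which DXR tier a D3D12 device exposes.
// Assumes `device` was created elsewhere (e.g. via D3D12CreateDevice).
void ReportRaytracingTier(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &opts5, sizeof(opts5))))
    {
        std::printf("OPTIONS5 query not supported on this runtime\n");
        return;
    }
    switch (opts5.RaytracingTier)
    {
    case D3D12_RAYTRACING_TIER_1_1:
        // Tier 1.1 adds inline raytracing (RayQuery usable in any shader stage).
        std::printf("DXR tier 1.1 supported\n");
        break;
    case D3D12_RAYTRACING_TIER_1_0:
        std::printf("DXR tier 1.0 supported\n");
        break;
    default:
        std::printf("No hardware DXR support\n");
        break;
    }
}
```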
[Slides: AMD GPU Architecture Roadmap (RDNA2, RDNA3); AMD RDNA2 Efficiency Roadmap; AMD RDNA2 Performance per Watt; AMD RDNA2 Raytracing]
Variable-rate shading is another key feature that has been missing on AMD GPUs. The feature allows a graphics application to apply different rates of shading detail to different areas of the 3D scene being rendered, to conserve system resources. NVIDIA and Intel already implement the Microsoft-standardized VRS tier 1, and NVIDIA "Turing" goes a step further by also supporting VRS tier 2. AMD didn't detail its VRS tier support.
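The VRS tier a GPU implements is reported through the same capability-query mechanism. A minimal sketch under the same assumptions (valid device, Windows SDK 10.0.18362+ for the OPTIONS6 feature struct):

```cpp
#include <windows.h>
#include <d3d12.h>
#include <cstdio>

// Minimal sketch: report the Variable Rate Shading tier of a D3D12 device.
void ReportVrsTier(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 opts6 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6,
                                           &opts6, sizeof(opts6))))
    {
        std::printf("OPTIONS6 query not supported on this runtime\n");
        return;
    }
    switch (opts6.VariableShadingRateTier)
    {
    case D3D12_VARIABLE_SHADING_RATE_TIER_2:
        // Tier 2 adds per-primitive rates and a screen-space shading-rate image.
        std::printf("VRS tier 2, shading-rate image tile size %u\n",
                    opts6.ShadingRateImageTileSize);
        break;
    case D3D12_VARIABLE_SHADING_RATE_TIER_1:
        // Tier 1 allows one shading rate per draw call.
        std::printf("VRS tier 1\n");
        break;
    default:
        std::printf("VRS not supported\n");
        break;
    }
}
```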

AMD hopes to deploy RDNA2 on everything from desktop discrete client graphics, to professional graphics for creators, to mobile (notebook/tablet) graphics, and lastly cloud graphics (for cloud-based gaming platforms such as Stadia). Its biggest takers, however, will be the next-generation Xbox and PlayStation game consoles, which will also shepherd game developers toward standardized ray-tracing and VRS implementations.

AMD also briefly touched upon the next-generation RDNA3 graphics architecture without revealing any features. All we know about RDNA3 for now is that it will leverage a process node more advanced than 7 nm (likely 6 nm or 5 nm; AMD won't say), and that it will come out some time between 2021 and 2022. RDNA2 will extensively power AMD client graphics products over the next 5-6 calendar quarters, at least.

306 Comments on AMD RDNA2 Graphics Architecture Detailed, Offers +50% Perf-per-Watt over RDNA

#127
EarthDog
Super XPGoing by YouTube analysis by various techies, yes I think so.
Will you elaborate on what these YTs said to make you feel this way?

....especially in light of the link I just provided?

If we know their Navi/RDNA/7nm is less efficient than Nvidia now... assuming both of those articles are true... why would they be worried about maintaining their efficiency over AMD GPUs?

Which is more realistic to you for the 50% increase? A new arch with a die shrink, or an updated arch on the same process? I think both will get there; however, Nvidia isn't worried about this.
#128
wolf
Performance Enthusiast
FluffmeisterIt's certainly interesting reading the two threads, one is haha never gonna happen leather jacket man, the other is... awesome take that leather jacket man.

Nice features though, welcome to 2018.
Of course, it's been like that for a while here at TPU; Nvidia is the company people love to hate, while AMD as the underdog gets off light. There are a fair few examples floating around where similar things happen or are claimed: Nvidia gets sh*t on, and AMD gets excitement and praise.

I realllllly want to see AMD pull the rabbit out of the hat on this one. I want the competition to be richer, and I am craving a meaningful upgrade to my GTX 1080 that has RTRT and VRS. I will buy the most compelling offering from either camp; it just has to be compelling. Really not in the mood for another hot, loud card with coil whine and driver issues. If I can buy a card with 2080 Ti performance or higher for ~$750 USD or less that ticks those boxes, happy days.

Truly AMD, I am rooting for you, do what you did with Zen!
#129
ratirt
rvalencia1. I was referring to Radeon VII
2. I was referring to perf/watt.
3. The power-consumption difference between GDDR6 (at 16 Gbps, ~2.5 W each × 8 chips) and HBM2 (e.g. ~20 W on Vega Frontier Edition 16 GB) is minor when compared to the GPUs involved.

16 GB HBM2 power consumption is lower when compared to 16-chip 16 GB GDDR6 in clamshell mode, which is irrelevant for the RX 5700 XT's 8 GDDR6-14000 chips.
Not so sure about that. HBM2 uses about half the power of GDDR6 at the same capacity. If in your eyes that is minor then fine, but it is still a difference which you haven't considered. I'm saying your comparison is not accurate. Also, you are not comparing chip vs. chip but card vs. card, and that is an entirely different thing.
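For rough scale, plugging in the figures quoted above (illustrative only; the per-chip wattage is approximate):

$$P_{\text{GDDR6, 8 chips}} \approx 8 \times 2.5\ \text{W} = 20\ \text{W}, \qquad P_{\text{GDDR6, 16-chip clamshell}} \approx 16 \times 2.5\ \text{W} = 40\ \text{W}, \qquad P_{\text{HBM2, 16 GB}} \approx 20\ \text{W}$$

So an 8-chip GDDR6 card lands in the same ballpark as a 16 GB HBM2 stack, while matching HBM2's 16 GB capacity in clamshell mode roughly doubles the memory power.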
#130
moproblems99
EarthDogWill you elaborate on what these YTs said to make you feel this way?
I think the words 'great' and '50%' were used in the same video.
#131
efikkan
The only thing that would worry Nvidia is if their next generation somehow gets delayed, but there are no indicators of that yet.
ValantarStill, up to 50% is damn impressive without a node change (remember what changed from 14nm to the tweaked "12nm"? Yeah, near nothing). Here's hoping the minimum increase (for common workloads) is well above 30%. 40% would still make for a very good ~275W card (especially if they use HBM), though obviously we all want as fast as possible :p
As I pointed out, it depends how you compare. If you selectively compare with a previous chip with higher clocks, then you can get numbers like this easily.
To achieve a 50% efficiency gain on average between Navi 1x and Navi 2x would be a huge achievement, and is fairly unlikely. It's hard to predict the gains from a refined node, but we have seen in the past that refinements can deliver good improvements, like Intel's 14nm+/14nm++, though still far from 50%.

And as always, any node advancements will be available to Nvidia as well.
#132
Valantar
efikkanAs I pointed out, it depends how you compare. If you selectively compare with a previous chip with higher clocks, then you can get numbers like this easily.
... which is why I said I hoped for relatively high minimum perf/W gains also, and not just peak.
efikkanTo achieve a 50% efficiency gain in average between Navi 1x and Navi 2x would be a huge achievement, and is fairly unlikely. It's hard to predict the gains from a refined node, but we have seen in the past that refinements can do good improvements, like Intel's 14nm+/14nm++, but still far away from reaching 50%.
Preaching to the choir here, man. Though there haven't been any real efficiency gains on Intel 14nm since Skylake, just clock-scaling improvements (and later node revisions actually sacrifice efficiency to achieve that). Still an achievement hitting those clocks, but the sacrifices involved have been many and large.
#133
Super XP
EarthDogWill you elaborate on what these YTs said to make you feel this way?

....especially in light of the link I just provided?

If we know their Navi/RDNA/7nm is less efficient than Nvidia now... assuming both of those articles are true... why would they be worried about maintaining their efficiency over AMD GPUs?

Which is more realistic to you for the 50% increase? A new arch with a die shrink, or an updated arch on the same process? I think both will get there; however, Nvidia isn't worried about this.
I'm not going to dig into all his videos to find the various quotes he mentions, but this is one YouTuber that claims this based on sources. Probably an exaggeration, but RDNA2 IS going to challenge Nvidia, which will affect its overall sales. So in that respect, I am sure they are curious about this Big Navi.
Moore's Law Is Dead
www.youtube.com/channel/UCRPdsCVuH53rcbTcEkuY4uQ
#134
EarthDog
Super XPI'm not going to dig into all his videos to find the various quotes he mentions,
Kind of a shame. You made some claims but put the effort on others to find them? I'll pass. :)
Super XPI am sure they are curious about this Big Navi.
Curious...sure. Always. You have to keep an eye on the competition. But that is quite a bit different than "worried". ;)
#135
sergionography
efikkanAnd expecting AMD to double and then triple the performance in two years wasn't a clue either? :p
Well, it wasn't a clue because I thought it was doable. Navi 1x is a 250 mm² chip, which is small considering you could probably go up to 750-800 mm² (unlikely though). But then 5 nm EUV should be around by that time.
#136
efikkan
Super XPI'm not going to dig into all his videos to find the various quotes he mentions, but this is one YouTuber that claims this based on sources. Probably an exaggeration, but RDNA2 IS going to challenge Nvidia, which will affect its overall sales. So in that respect, I am sure they are curious about this Big Navi.
Moore's Law Is Dead
www.youtube.com/channel/UCRPdsCVuH53rcbTcEkuY4uQ
I hope you're not basing your expectations of RDNA2 on this random nobody. This guy claimed last year that AMD was holding big Navi back because they didn't need to release it (facepalm), claimed that AMD was renaming chip codenames to excuse his mispredictions (which they would never do), and claimed that Navi 12 was coming in 2019 to crush the RTX 2080 Super; and that was just from a single one of his BS videos.

Don't get me wrong though, I hope RDNA2 is as good as possible. But please don't spread the nonsense these losers on YouTube are pulling out of their behinds. ;)
sergionographyWell, it wasn't a clue because I thought it was doable. Navi 1x is a 250 mm² chip, which is small considering you could probably go up to 750-800 mm² (unlikely though). But then 5 nm EUV should be around by that time.
It's also a 250mm² chip that draws ~225W ;)

Building big chips is not the problem; doing big chips with high clocks, though, would require a much more efficient architecture.
#137
Super XP
EarthDogKind of a shame. You made some claims but put the effort on others to find them? I'll pass. :)

Curious...sure. Always. You have to keep an eye on the competition. But that is quite a bit different than "worried". ;)
When I comment with such information, you should take it as fact. I have no reason to BS. I was watching YouTube on my big-screen TV after work one day and heard the individual say what I stated. I'm not going to take out a notepad and start writing down what I hear. Lol
Would you?
efikkanI hope you're not basing your expectations of RDNA2 on this random nobody. This guy claimed last year that AMD was holding big Navi back because they didn't need to release it (facepalm), claimed that AMD was renaming chip codenames to excuse his mispredictions (which they would never do), and claimed that Navi 12 was coming in 2019 to crush the RTX 2080 Super; and that was just from a single one of his BS videos.

Don't get me wrong though, I hope RDNA2 is as good as possible. But please don't spread the nonsense these losers on YouTube are pulling out of their behinds. ;)


It's also a 250mm² chip that draws ~225W ;)

Building big chips is not the problem; doing big chips with high clocks, though, would require a much more efficient architecture.
I've also heard the RedTagGaming and Gamer Meld YouTube channels seem quite excited about RDNA2 based on what their sources have hinted. I'm keeping my expectations conservative, though I have a strong gut feeling RDNA2 is the real deal and not just another Vega-like GPU.
#138
EarthDog
Super XPWhen I comment with such information, you should take it as fact.
:respect:o_O

I don't need to write anything down. Thanks for the info and bread crumb trail. :)
#139
efikkan
Super XPI've also heard the RedTagGaming and Gamer Meld YouTube channels seem quite excited about RDNA2 based on what their sources have hinted. I'm keeping my expectations conservative, though I have a strong gut feeling RDNA2 is the real deal and not just another Vega-like GPU.
Which are yet more channels that fall into the bucket of less "competent" "tech" YouTube channels. I would advise avoiding such channels unless you do it for amusement or are looking for sources of false rumors. These channels serve one of two purposes: serving people the "news" they want to hear (in the echo chambers), or shaping public opinion. If you listen to more than a few episodes you'll see they are all over the place, are inconsistent with themselves, and fail to master any deeper technical knowledge. Some of them provide their own "leaks", while others just recite pretty much everything they can scrape off the web.

Speculation is of course fine, and many of us enjoy discussing potential hardware, myself included, but speculation should be labeled as such, not labeled as "leaks" when it's not. Whenever we see leaks we should always check whether they pass some basic "smell tests":
  • Who is the source, and does it have a good track record? Always see where the leak originates; if it's from WCCFTech, VideoCardz, FudZilla or somewhere random, then it's almost certainly fake; random Twitter/forum posts are often fake, but can occasionally be true, etc. "Leaks" from official drivers, compilers, official papers etc. are pretty solid. Some sources are also known to have a certain bias, even though there can be elements of truth to their claims.
  • Is the nature of the "leak" something which can be known, or is likely to be known, outside a few core engineers? Example: clock speeds are never set in stone until the final stepping shortly ahead of a release, so when someone posts a table of clock speeds of CPUs/GPUs 6-12 months ahead, you can know it's BS.
  • Is the specificity of the leak something that is sensitive? If the details are only known to a few people under NDA, then those leaking them risk losing their jobs and potential lawsuits; how many are willing to do that to serve a random YouTube channel or webpage? What is their motivation?
  • Is the scope of the leak(s) likely at all? Some of these channels claim to have dozens of sources inside Intel/AMD/Nvidia; seriously, a random guy in his basement has such good sources? Some of them even claim to have single sources who provide sensitive NDA'd information from both Intel and AMD about products 1+ years away; there is virtually no chance this is true, and it is an immediate red flag to me.
Unfortunately, most "leaks" are either qualified guesses or pure BS, sometimes an accumulation of both (intentionally or not). Perhaps sometime you should look back after a product release and evaluate the accuracy and timeline of the leaks. The general trend is that early leaks are usually only true about "big" features, while early "specific" leaks (clocks, TDP, shader counts for GPUs) are usually fake. Then there is usually a spike in leaks around the time the first engineering samples arrive, with various leaked benchmarks etc., but clocks are still all over the place. Then there is another spike when board partners get their hands on it, after which the accuracy increases a lot, but there is still some variance. Then, usually a few weeks ahead of release, we get pretty much precise details.

Edit:
Rumors about Polaris, Vega, Vega 2x and Navi 1x have all pretty much started out the same way: very unrealistic initially, and then pessimistic close to the actual release. Let's hope Navi 2x delivers, but please don't drive the hype too high.
#140
Valantar
efikkanIt's also a 250mm² chip that draws ~225W ;)

Building big chips is not the problem; doing big chips with high clocks, though, would require a much more efficient architecture.
Not that difficult: there's not much reason to push a big chip that far up the efficiency curve, and seeing just how much power can be saved on Navi by downclocking just a little, it's not too big a stretch of the imagination to see a 500 mm² chip at, say, 200-300 MHz less stay below 300 W, especially if it uses HBM2. Of course, AMD did say that they would be increasing clocks with RDNA2 while still improving efficiency, which really makes me wonder what kind of obvious fixes they left for themselves when they designed RDNA (1). Even with a tweaked process node, that is a big ask.
#141
Xmpere
This Super XP guy is just an AMD fanboy. Anyone who is a fanboy/biased toward a company has their statements rendered invalid.
#142
Super XP
XmpereThis Super XP guy is just an AMD fanboy. Anyone who is a fanboy/biased toward a company has their statements rendered invalid.
You claiming I am a fanboy renders your statement invalid. Not to mention, I've been here since 2005. YOU?
efikkanWhich are yet more channels that fall into the bucket of less "competent" "tech" YouTube channels. I would advise avoiding such channels unless you do it for amusement or are looking for sources of false rumors. These channels serve one of two purposes: serving people the "news" they want to hear (in the echo chambers), or shaping public opinion. If you listen to more than a few episodes you'll see they are all over the place, are inconsistent with themselves, and fail to master any deeper technical knowledge. Some of them provide their own "leaks", while others just recite pretty much everything they can scrape off the web.

Speculation is of course fine, and many of us enjoy discussing potential hardware, myself included, but speculation should be labeled as such, not labeled as "leaks" when it's not. Whenever we see leaks we should always check whether they pass some basic "smell tests":
  • Who is the source, and does it have a good track record? Always see where the leak originates; if it's from WCCFTech, VideoCardz, FudZilla or somewhere random, then it's almost certainly fake; random Twitter/forum posts are often fake, but can occasionally be true, etc. "Leaks" from official drivers, compilers, official papers etc. are pretty solid. Some sources are also known to have a certain bias, even though there can be elements of truth to their claims.
  • Is the nature of the "leak" something which can be known, or is likely to be known, outside a few core engineers? Example: clock speeds are never set in stone until the final stepping shortly ahead of a release, so when someone posts a table of clock speeds of CPUs/GPUs 6-12 months ahead, you can know it's BS.
  • Is the specificity of the leak something that is sensitive? If the details are only known to a few people under NDA, then those leaking them risk losing their jobs and potential lawsuits; how many are willing to do that to serve a random YouTube channel or webpage? What is their motivation?
  • Is the scope of the leak(s) likely at all? Some of these channels claim to have dozens of sources inside Intel/AMD/Nvidia; seriously, a random guy in his basement has such good sources? Some of them even claim to have single sources who provide sensitive NDA'd information from both Intel and AMD about products 1+ years away; there is virtually no chance this is true, and it is an immediate red flag to me.
Unfortunately, most "leaks" are either qualified guesses or pure BS, sometimes an accumulation of both (intentionally or not). Perhaps sometime you should look back after a product release and evaluate the accuracy and timeline of the leaks. The general trend is that early leaks are usually only true about "big" features, while early "specific" leaks (clocks, TDP, shader counts for GPUs) are usually fake. Then there is usually a spike in leaks around the time the first engineering samples arrive, with various leaked benchmarks etc., but clocks are still all over the place. Then there is another spike when board partners get their hands on it, after which the accuracy increases a lot, but there is still some variance. Then, usually a few weeks ahead of release, we get pretty much precise details.

Edit:
Rumors about Polaris, Vega, Vega 2x and Navi 1x have all pretty much started out the same way: very unrealistic initially, and then pessimistic close to the actual release. Let's hope Navi 2x delivers, but please don't drive the hype too high.
Thanks for the information. Most of the so-called rumors from WCCFTech are regurgitated from VideoCardz, and most VideoCardz rumors come from Twitter.
As for Fudzilla, I would take them a lot more seriously than the two mentioned. Fudzilla used to be part of Mike Magee's group, which wrote for The Inquirer.net (no longer around). Charlie Demerjian of SemiAccurate was also part of Mike Magee's group. My point is that Mike had real industry sources and was well respected in the computer tech industry; I believe he's been retired for years now. So while Fudzilla & SemiAccurate may not get it right all the time, they get pretty close to the actual truth, because no rumor is ever 100% accurate. Companies always make last-minute changes to products.
ValantarNot that difficult: there's not much reason to push a big chip that far up the efficiency curve, and seeing just how much power can be saved on Navi by downclocking just a little, it's not too big a stretch of the imagination to see a 500 mm² chip at, say, 200-300 MHz less stay below 300 W, especially if it uses HBM2. Of course, AMD did say that they would be increasing clocks with RDNA2 while still improving efficiency, which really makes me wonder what kind of obvious fixes they left for themselves when they designed RDNA (1). Even with a tweaked process node, that is a big ask.
RDNA1 was just about getting a new 7 nm hybrid graphics chip that competes well out the door, testing the waters of the RDNA design. One example: in GCN, 1 instruction is issued every 4 cycles; with this RDNA hybrid, 1 instruction is issued every cycle, making it much more efficient.
RDNA2 is the real deal according to AMD. I believe they will release a 280 W max version, where they will still be able to achieve at least a 25%-40% performance improvement over the RTX 2080 Ti. RDNA2 is an Ampere competitor.
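A rough sketch of the GCN-vs-RDNA issue-rate point above, assuming the SIMD organizations AMD has described publicly (GCN executes wave64 on 16-lane SIMDs; RDNA executes wave32 on 32-lane SIMDs):

$$\text{GCN: } \frac{64\ \text{work-items}}{16\ \text{lanes}} = 4\ \text{cycles per instruction}, \qquad \text{RDNA: } \frac{32\ \text{work-items}}{32\ \text{lanes}} = 1\ \text{cycle per instruction}$$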
#143
EarthDog
Super XPYou claiming I am a fanboy renders your statement invalid. Not to mention, I've been here since 2005. YOU?


Thanks for the information. Most of the so-called rumors from WCCFTech are regurgitated from VideoCardz, and most VideoCardz rumors come from Twitter.
As for Fudzilla, I would take them a lot more seriously than the two mentioned. Fudzilla used to be part of Mike Magee's group, which wrote for The Inquirer.net (no longer around). Charlie Demerjian of SemiAccurate was also part of Mike Magee's group. My point is that Mike had real industry sources and was well respected in the computer tech industry; I believe he's been retired for years now. So while Fudzilla & SemiAccurate may not get it right all the time, they get pretty close to the actual truth, because no rumor is ever 100% accurate. Companies always make last-minute changes to products.


RDNA1 was just about getting a new 7 nm hybrid graphics chip that competes well out the door, testing the waters of the RDNA design. One example: in GCN, 1 instruction is issued every 4 cycles; with this RDNA hybrid, 1 instruction is issued every cycle, making it much more efficient.
RDNA2 is the real deal according to AMD. I believe they will release a 280 W max version, where they will still be able to achieve at least a 25%-40% performance improvement over the RTX 2080 Ti. RDNA2 is an Ampere competitor.
Sorry... what does when you signed up to this site have to do with anything? Seems similar to equating knowledge with post count.... :(

Anyway, just to get to 2080 Ti FE speeds from their current 5700 XT flagship is a 46% increase. To go another 25-40% faster would be a 71-86% increase. Have we ever seen that in the history of GPUs? A 71% increase from previous-gen flagship to current-gen flagship?

You've sure got a lot of faith in this architecture, with about the only thing going for it being AMD marketing...

If Ampere comes in like Turing did over Pascal (25%), that's the bottom end of your goal, with their new GPU performing 71% faster than their current flagship. That's a ton, period, not to mention on the same node.
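Worth noting that the two steps compound multiplicatively rather than additively, so the required jump is, if anything, larger than the quoted 71-86%:

$$1.46 \times 1.25 \approx 1.83\ (+83\%), \qquad 1.46 \times 1.40 \approx 2.04\ (+104\%)$$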
#144
Valantar
EarthDogSorry... what does when you signed up to this site have to do with anything? Seems similar to equating knowledge with post count.... :(

Anyway, just to get to 2080 Ti FE speeds from their current 5700 XT flagship is a 46% increase. To go another 25-40% faster would be a 71-86% increase. Have we ever seen that in the history of GPUs? A 71% increase from previous-gen flagship to current-gen flagship?

You've sure got a lot of faith in this architecture, with about the only thing going for it being AMD marketing...

If Ampere comes in like Turing did over Pascal (25%), that's the bottom end of your goal, with their new GPU performing 71% faster than their current flagship. That's a ton, period, not to mention on the same node.
The 5700 XT is a "flagship" GPU only in terms of being the fastest SKU made this generation. Otherwise it really isn't (and isn't meant to be) - not in die size, not in performance, not in power draw, and certainly not in price. The 5700 XT was designed to be an upper mid-range GPU, which is what it is. That they managed that with just 40 CUs and power headroom to spare tells us that they definitely have room to grow upwards unlike the previous generations (especially as RDNA is no longer architecturally limited to 64 CUs). So there's no reason to extrapolate AMD being unable to compete in higher tiers from the positioning of the 5700 XT - quite the opposite. They likely just wanted to make the first RDNA chips high volume sellers rather than expensive and low-volume flagship level SKUs (on a limited and expensive 7nm node). Now that the arch is further matured, Apple has moved on from 7nm freeing up capacity for AMD, and they have even more money to spend, there's definitely a proper flagship coming.
#145
Super XP
EarthDogSorry... what does when you signed up to this site have to do with anything? Seems similar to equating knowledge with post count.... :(

Anyway, just to get to 2080 Ti FE speeds from their current 5700 XT flagship is a 46% increase. To go another 25-40% faster would be a 71-86% increase. Have we ever seen that in the history of GPUs? A 71% increase from previous-gen flagship to current-gen flagship?

You've sure got a lot of faith in this architecture, with about the only thing going for it being AMD marketing...

If Ampere comes in like Turing did over Pascal (25%), that's the bottom end of your goal, with their new GPU performing 71% faster than their current flagship. That's a ton, period, not to mention on the same node.
He called me a fanboy, which has absolutely no relevance to the topic at hand. Or perhaps he never knew I have a high-end Intel & Nvidia gaming laptop, because AMD graphics didn't cut it at the time I purchased it in 2018.

With regards to the 3080 Ti and Big Navi performance numbers, it's all up-in-the-air speculation. Some think RDNA2 (Big Navi) is going to compete with the 2080 Ti, and others believe AMD is targeting the 3080 Ti. In order for AMD to target Nvidia's speculative 3080 Ti, they are probably going to compare Nvidia's performance improvements per generation to get an idea of how fast RDNA2 needs to be. I don't think AMD will push it to the limits; I think they focused more on power efficiency and performance efficiency when they designed RDNA2. I know this is marketing, but: Micro-Architecture Innovation = Improved Performance-per-Clock (IPC), Logic Enhancement = Reduced Complexity and Switching Power, and Physical Optimizations = Increased Clock Speed.

What do all these enhancements have in common? Gaming consoles.
ValantarThe 5700 XT is a "flagship" GPU only in terms of being the fastest SKU made this generation. Otherwise it really isn't (and isn't meant to be) - not in die size, not in performance, not in power draw, and certainly not in price. The 5700 XT was designed to be an upper mid-range GPU, which is what it is. That they managed that with just 40 CUs and power headroom to spare tells us that they definitely have room to grow upwards unlike the previous generations (especially as RDNA is no longer architecturally limited to 64 CUs). So there's no reason to extrapolate AMD being unable to compete in higher tiers from the positioning of the 5700 XT - quite the opposite. They likely just wanted to make the first RDNA chips high volume sellers rather than expensive and low-volume flagship level SKUs (on a limited and expensive 7nm node). Now that the arch is further matured, Apple has moved on from 7nm freeing up capacity for AMD, and they have even more money to spend, there's definitely a proper flagship coming.
Agreed.
I have a suspicion that what Zen 2 did to the market, RDNA2 will do as well. And it's a much-needed effect, as we need better competition to help drive reasonable GPU pricing once again.
#146
Flanker
Super XPI have a suspicion that what Zen 2 did to the market, RDNA2 will do as well. And it's a much-needed effect, as we need better competition to help drive reasonable GPU pricing once again.
If it does what the HD4870/50 did, that will be incredible
#147
ratirt
EarthDogSorry... what does when you signed up to this site have to do with anything? Seems similar to equating knowledge with post count.... :(

Anyway, just to get to 2080 Ti FE speeds from their current 5700 XT flagship is a 46% increase. To go another 25-40% faster would be a 71-86% increase. Have we ever seen that in the history of GPUs? A 71% increase from previous-gen flagship to current-gen flagship?

You've sure got a lot of faith in this architecture, with about the only thing going for it being AMD marketing...

If Ampere comes in like Turing did over Pascal (25%), that's the bottom end of your goal, with their new GPU performing 71% faster than their current flagship. That's a ton, period, not to mention on the same node.
Pack two 5700 XT chips into one die :) ~500 mm² and you should be OK. I know it may not work like that, but who knows? Besides, RDNA2 will offer a bit more horsepower due to some improvements, so it is possible. A 500 mm² chip is not as big as NV's 754 mm² 2080 Ti, though. I get what you are saying: the 5700 XT is AMD's flagship, the best released so far, but at 251 mm² it is fairly small, wouldn't you say? The flagship released and the capabilities of the architecture are two different things.
#148
EarthDog
ValantarThe 5700 XT is a "flagship" GPU only in terms of being the fastest SKU made this generation. Otherwise it really isn't (and isn't meant to be) - not in die size, not in performance, not in power draw, and certainly not in price. The 5700 XT was designed to be an upper mid-range GPU, which is what it is. That they managed that with just 40 CUs and power headroom to spare tells us that they definitely have room to grow upwards unlike the previous generations (especially as RDNA is no longer architecturally limited to 64 CUs). So there's no reason to extrapolate AMD being unable to compete in higher tiers from the positioning of the 5700 XT - quite the opposite. They likely just wanted to make the first RDNA chips high volume sellers rather than expensive and low-volume flagship level SKUs (on a limited and expensive 7nm node). Now that the arch is further matured, Apple has moved on from 7nm freeing up capacity for AMD, and they have even more money to spend, there's definitely a proper flagship coming.
Super XPHe called me a fanboy, which has absolutely no relevance to the topic at hand. Or perhaps he never knew I have a high-end Intel & Nvidia gaming laptop, because AMD graphics didn't cut it at the time I purchased it in 2018.

With regards to the 3080 Ti and Big Navi performance numbers, it's all up-in-the-air speculation. Some think RDNA2 (Big Navi) is going to compete with the 2080 Ti, and others believe AMD is targeting the 3080 Ti. In order for AMD to target Nvidia's speculative 3080 Ti, they are probably going to compare Nvidia's performance improvements per generation to get an idea of how fast RDNA2 needs to be. I don't think AMD will push it to the limits; I think they focused more on power efficiency and performance efficiency when they designed RDNA2. I know this is marketing, but: Micro-Architecture Innovation = Improved Performance-per-Clock (IPC), Logic Enhancement = Reduced Complexity and Switching Power, and Physical Optimizations = Increased Clock Speed.

What do all these enhancements have in common? Gaming consoles.


Agreed.
I have a suspicion that what Zen 2 did to the market, RDNA2 will do as well. And it's a much-needed effect, as we need better competition to help drive reasonable GPU pricing once again.
ratirtPack two 5700 XT chips into one die :) ~500 mm² and you should be OK. I know it may not work like that, but who knows? Besides, RDNA2 will offer a bit more horsepower due to some improvements, so it is possible. A 500 mm² chip is not as big as NV's 754 mm² 2080 Ti, though. I get what you are saying: the 5700 XT is AMD's flagship, the best released so far, but at 251 mm² it is fairly small, wouldn't you say? The flagship released and the capabilities of the architecture are two different things.
Semantics of a flagship aside, what I see is a 225 W 'flagship' 7 nm part that is 2% (1440p) faster than a 175 W 12 nm part (RTX 2070).

The improvement they need to make to match Ampere, both in raw performance and ppw (note that this is matching Ampere using last generation's paltry 25% gain; remember they added ray-tracing and tensor-core hardware), is 71%. That's a ton. Only time will tell, and I hope your glass-half-full attitude pans out to reality, but I'm not holding my breath. I think they will close the gap, but will fall well short of Ampere's consumer flagship. At best I see it splitting the difference between the 2080 Ti and Ampere; I think it will end up a lot closer to the 2080 Ti than to Ampere. They have a lot of work to do.

Remember, both AMD and Nvidia touted 50% ppw gains... if both are true, how can they catch up?
#149
Valantar
ratirtPack two 5700 XT chips into one die :) ~500 mm² and you should be OK. I know it may not work like that, but who knows? Besides, RDNA2 will offer a bit more horsepower due to some improvements, so it is possible. A 500 mm² chip is not as big as NV's 754 mm² 2080 Ti, though. I get what you are saying: the 5700 XT is AMD's flagship, the best released so far, but at 251 mm² it is fairly small, wouldn't you say? The flagship released and the capabilities of the architecture are two different things.
For that you'd also need a 512-bit memory bus, which ... well, is expensive, huge, and power-hungry. Not a good idea (as the 290(X)/390(X) showed us).
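Rough bandwidth arithmetic shows why (illustrative, assuming the 5700 XT's 14 Gbps GDDR6 on the hypothetical wider bus):

$$\frac{256\ \text{bit} \times 14\ \text{Gbps}}{8} = 448\ \text{GB/s}, \qquad \frac{512\ \text{bit} \times 14\ \text{Gbps}}{8} = 896\ \text{GB/s}$$

Doubling the CU count without roughly doubling bandwidth (a wider bus, faster GDDR6, or HBM2) would leave the extra shaders starved.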
EarthDogSemantics of a flagship aside, what I see is a 225 W 'flagship' 7 nm part that is 2% (1440p) faster than a 175 W 12 nm part.

The improvement they need to make to match Ampere, both in raw performance and ppw (note that this is matching Ampere using last generation's paltry 25% gain; remember they added ray-tracing and tensor-core hardware), is 71%. That's a ton. Only time will tell, and I hope your glass-half-full attitude pans out to reality, but I'm not holding my breath. I think they will close the gap, but will fall well short of Ampere's consumer flagship. At best I see it splitting the difference between the 2080 Ti and Ampere; I think it will end up a lot closer to the 2080 Ti than to Ampere. They have a lot of work to do.
What GPU are you comparing to? If we go by TPU's review, the average gaming power draw of the 5700 XT is 219 W, with the 2070 at 195 W and the 2060S at 184 W. I'm assuming you're pointing to the 2070, as it's 2% slower in the same review. Nice job slightly bumping up AMD's power draw and lowering Nvidia's by a full 10%, though. That's how you make a close race (219 W - 195 W = 24 W) look much worse (225 W - 175 W = 50 W).

Edit: ah, I see you edited in the 2070 as the comparison. Your power draw number is still a full 20W too low though.
#150
ratirt
ValantarFor that you'd also need a 512-bit memory bus, which ... well, is expensive, huge, and power-hungry. Not a good idea (as the 290(X)/390(X) showed us).
It would have been a big chip, so yes, you would need it, but in any case this 500 mm² chip would do the trick, tapping beyond the 2080 Ti's performance. You pack a lot of cores, you need to feed them, so either way you need to do something with the memory interface. Power-hungry, yes, but not all the way. You need to remember it all depends on the frequency used; if you balance it, it would be OK. There are possibilities to make it happen.