Friday, March 6th 2020

AMD RDNA2 Graphics Architecture Detailed, Offers +50% Perf-per-Watt over RDNA

With its 7 nm RDNA architecture that debuted in July 2019, AMD achieved a nearly 50% gain in performance/Watt over the previous "Vega" architecture. At its 2020 Financial Analyst Day event, AMD made a big disclosure: that its upcoming RDNA2 architecture will offer a similar 50% performance/Watt jump over RDNA. The new RDNA2 graphics architecture is expected to leverage 7 nm+ (7 nm EUV), which offers up to 18% transistor-density increase over 7 nm DUV, among other process-level improvements. AMD could tap into this to increase price-performance by serving up more compute units at existing price-points, running at higher clock speeds.
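As a back-of-envelope, compounding the two claimed 50% steps (note these are AMD's own marketing figures, not independent measurements):

```python
# Back-of-envelope: compounding AMD's claimed perf/Watt gains.
# The 50% figures are AMD marketing claims, not measured data.

vega = 1.0               # baseline: "Vega" perf/Watt
rdna1 = vega * 1.50      # RDNA: claimed +50% over Vega
rdna2 = rdna1 * 1.50     # RDNA2: claimed +50% over RDNA

print(f"RDNA2 vs Vega perf/Watt: {rdna2 / vega:.2f}x")  # 2.25x

# At equal power the claimed gain maps directly to performance;
# at equal performance, power would drop to ~67% per step.
print(f"Power at equal perf, per step: {1 / 1.5:.0%}")  # 67%
```

If both claims hold, a hypothetical RDNA2 part would deliver 2.25x the performance of a Vega part at the same board power.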

AMD has two key design goals with RDNA2 that help it close the feature-set gap with NVIDIA: real-time ray tracing and variable-rate shading, both of which have been standardized by Microsoft under the DirectX 12 DXR and VRS APIs. AMD announced that RDNA2 will feature dedicated ray-tracing hardware on die. On the software side, the hardware will leverage the industry-standard DXR 1.1 API. The company is supplying RDNA2 to next-generation game console manufacturers such as Sony and Microsoft, so it's highly likely that AMD's approach to standardized ray tracing will have more takers than NVIDIA's RTX ecosystem, which layers its proprietary RTX feature-set on top of DXR.
[Slides: AMD GPU architecture roadmap (RDNA2, RDNA3); RDNA2 efficiency roadmap; RDNA2 performance-per-Watt; RDNA2 ray tracing]
Variable-rate shading is another key feature that has been missing on AMD GPUs. It allows a graphics application to apply different rates of shading detail to different areas of the 3D scene being rendered, conserving system resources. NVIDIA and Intel already implement VRS tier 1 as standardized by Microsoft, and NVIDIA "Turing" goes a step further by also supporting VRS tier 2. AMD didn't detail its VRS tier support.

AMD hopes to deploy RDNA2 on everything from desktop discrete client graphics, to professional graphics for creators, to mobile (notebook/tablet) graphics, and lastly cloud graphics (for cloud-based gaming platforms such as Stadia). Its biggest takers, however, will be the next-generation Xbox and PlayStation game consoles, which will also shepherd game developers toward standardized ray-tracing and VRS implementations.

AMD also briefly touched upon the next-generation RDNA3 graphics architecture without revealing any features. All we know about RDNA3 for now is that it will leverage a process node more advanced than 7 nm (likely 6 nm or 5 nm; AMD won't say), and that it will come out sometime between 2021 and 2022. RDNA2 will extensively power AMD client graphics products over the next 5-6 calendar quarters, at least.

242 Comments on AMD RDNA2 Graphics Architecture Detailed, Offers +50% Perf-per-Watt over RDNA

#101
Vya Domus
medi01
Welp, what about Vega vs Navi? Same process, 330mm2 with faster mem barely beating 250mm2 chip from the next generation.
It should go without saying that the Navi part runs at higher clocks and does so more consistently. It's not magic, when you look more into this you realize performance is quite predictable and given mostly by a few metrics.
#102
Valantar
ARF
Look, Valantar , I am talking about simple thing competitiveness, you are talking about utopia and how the CTO is always right.
The same people who introduced R600, Bulldozer, Jaguar, Vega and now have two competing chips Polaris 30 and Navi 14 covering absolutely the same market segment.

Please, let's just agree to disagree with each other and stop the argument here and now.

Thanks.
Sorry, but no, I'll not agree to disagree when you aren't actually managing to formulate a coherent argument or even correctly read what I'm writing. Let's see. Did I say "the CTO is always right"? No, among other things I said
Valantar
Of course it's possible for these choices to turn out to be completely wrong (Hello, Bulldozer architecture!)
Which is a rather explicit acknowledgement that mistakes can and have and will be made, no? You, on the other hand, are saying "Mark Papermaster said they made 'the right' improvements, therefore this must be subjective and wrong!" with zero basis for saying so (at least that you are able to present here). Having made bad calls previously does not mean that all future calls will be poor. Besides, Papermaster wasn't the person responsible for a lot of what you're pointing out, so I don't quite understand why you're singling that specific executive out as fundamentally incapable of making sound technical decisions. Not to mention that no executive makes any sort of decision except based on the work of their team. If you want your opinion to be respected, at least show us others the respect of presenting it in a coherent and rational manner instead of just throwing out accusations and wild claims with no basis.

(And again, please don't read this as me somehow saying that "Mark Papermaster is a genius that can only make brilliant decisions" - I am not arguing for something, I am arguing against your brash and unfounded assertions that these decisions are necessarily wrong. They might be wrong, but given AMD's recent history they might also be right. And unless you can present some actual basis for your claims, this is just wild speculation and entirely useless anyhow.)

Polaris production is winding down, current "production" is likely just existing chip inventories being sold out (including that new China-only downclocked "RX 590" whatsitsname). They are only competing directly as far as previous-gen products are still in the channel, which is a situation that takes a while to resolve itself every generation. Remember, the RX 5500 launched less than three months ago. A couple more months and supply of new Polaris cards will be all but gone.

But beyond that, you aren't talking about competitiveness, in fact I would say you aren't presenting a coherent argument for anything specific at all. What does an imagined delay from an imagined previous (2019?) launch date of Navi 2X have to do with competitiveness as long as it launches reasonably close to Nvidia's next generation and performs competitively? What does the lack of RTRT in Navi 1X have to do with competitiveness when there are currently just a handful of RTRT titles? If you want to make an overarching point about something, please make sure what you're talking about actually relates to that point.

Also, I forgot this one:
ARF
How do consoles with poor compared to the top PC hardware run 4K then and why?
Why are 4K TVs mainstream now?
4K TVs are mainstream because TV manufacturers need to sell new products and have spent a fortune on marketing a barely perceptible (at TV sizes and viewing distances) increase in resolution as a revolutionary upgrade. TVs are also not even close to mainly used or sold for gaming, they are TVs. 4k TVs being mainstream has nothing to do with gaming whatsoever.

Consoles can run 4k games because they turn down the image quality settings dramatically, and (especially in the case of the PS4 Pro) use rendering tricks like checkerboard rendering. They also generally target 30fps, at least at 4k. Console games generally run quality settings comparable to medium-low settings in their own PC ports. Digital Foundry (part of Eurogamer) has done a lot of great analyses on this, comparing various parts of image quality across platforms for a bunch of games. Worth the read/watch! But the point is, if you set your games to equivalent quality settings and lower your FPS expectations you can match any console with a similarly specced PC GPU. Again, DF has tested this too, with comparison images and frame time plots to document everything.
medi01
Looking at TSMC process chart, I simply do not see where the perf/watt jump should come from.
7N => 7NP/7N+ could give 10%/15% power savings, but the rest...
So, 35-40% improvement would come from arch updates alone?
And that following major perf/watt jump Vega=>Navi?
That was what they said in the Financial Analyst Day presentation, yeah. This does make it seem like RDNA (1) was a bit of a "we need to get this new arch off the ground" effort with lots of low-hanging fruit left in terms of IPC improvements. I'm mildly skeptical - it seems too good to be true - but saying stuff you aren't sure of at a presentation targeting the financial sector is generally not what risk-averse corporations tend to do. PR is BS, but what you say to your (future) shareholders you might actually be held accountable for.

medi01
Welp, what about Vega vs Navi? Same process, 330mm2 with faster mem barely beating 250mm2 chip from the next generation.
Not to mention at ~70W more power draw.
#103
medi01
Vya Domus
It should go without saying that the Navi part runs at higher clocks and does so more consistently. It's not magic, when you look more into this you realize performance is quite predictable and given mostly by a few metrics.
Hm, but VII has 35% more TFLOPS, and the claimed "game" clock is the same as for the 5700 XT.

Also, if it is so straightforward, why does Intel struggle to roll out a competitive GPU?
#104
Vya Domus
medi01
Hm, but VII has 35% more TFLOPS, and the claimed "game" clock is the same as for the 5700 XT.
And VII is faster most of the time, nothing out of the ordinary. I also pointed out above how GCN is less efficient per clock cycle than RDNA. Shader count and clock speed are still the primary indicators of performance.

medi01
Also, if it is so straightforward, why does Intel struggle to roll out a competitive GPU?
Because the one GPU Intel has shown was a minuscule low-TDP chip on a not-so-great node; it's not like I'm implying it's easy and everyone can do it. It's not easy to make a large GPU with a lot of shaders and high clock speeds without a colossal TDP and transistor count.
#105
sergionography
efikkan
While it might be understandable that not everyone in this thread understood the Navi terminology, those who have been deeply engaged in the discussions for a while should have gotten that Navi 1x is Navi 10/12/14 and Navi 2x is Navi 21/22/23*; we have known this for about a year or so. Even more astounding, I noticed several of those so-called "experts" on YouTube that some of you like to cite for analysis and leaks, who can ramble on about Navi for hours, still managed to fail to know this basic information about Navi. It just goes to show how little these nobodies on YouTube actually know.

*) I only know about Navi 21/22/23 so far.


Which delay in particular are you thinking of?
Oh I already knew about Navi 20 etc, yet somehow I totally missed the naming reference. I think we got too optimistic with doubling performance perhaps so it was more wishful thinking
#106
moproblems99
medi01
2080Ti is about 46%/55% faster than 5700XT (ref vs ref) at 1440p/4k respectively in TPU benchmarks.
Yeah, but I believe this post spawned off the idea of two 5700s glued together. You would have to assume everything scaled perfectly in order to come out on top by any reasonable margin. I don't feel that will be the case. Or if it is the case, consider power draw and heat. Again, not likely.
#107
Super XP
Vya Domus
It should go without saying that the Navi part runs at higher clocks and does so more consistently. It's not magic, when you look more into this you realize performance is quite predictable and given mostly by a few metrics.
GCN vs. RDNA1? It's a lot more than just higher clocks, if that is what you are saying.
The main difference between GCN and RDNA1 is that GCN issues one instruction every 4 cycles, while RDNA1 issues one instruction every cycle. The wavefront size also differs: in GCN the wavefront is 64 threads (Wave64), while RDNA1 supports both 32-thread (Wave32) and 64-thread (Wave64) wavefronts. Even the multilevel cache has been greatly improved in RDNA1 over GCN.
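The issue-rate difference can be sketched with a toy model (deliberately simplified: it assumes a dependent chain of ALU instructions on a single SIMD and ignores dual-issue, memory latency, and occupancy effects):

```python
# Toy model of the GCN vs RDNA1 issue-rate difference described above.
# Assumption: a dependent chain of ALU instructions on one SIMD,
# ignoring memory latency, occupancy, and other real-world effects.

def chain_cycles(n_instructions, issue_interval):
    """Cycles to get through a dependent instruction chain."""
    return n_instructions * issue_interval

gcn  = chain_cycles(4, 4)  # GCN: one instruction every 4 cycles
rdna = chain_cycles(4, 1)  # RDNA1: one instruction every cycle
print(gcn, rdna)  # 16 4
```

Under this (very rough) model, latency-bound shader code sees up to a 4x latency difference even though peak throughput can be identical.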

UPDATE: I just read a few more of your posts. You already know what I posted. Ignore this :D .


medi01
Looking at TSMC process chart, I simply do not see where the perf/watt jump should come from.
It comes from a refined 7 nm process node compared to the one the 5700 XT uses.
It also comes from RDNA2 being a brand new architecture. Look at RDNA1 as a placeholder to test the GPU waters, which it did quite successfully.
RDNA2 is going to be a game changer IMO. :D
#108
Vayra86
AMD slides. Nuff said.

Product pls. The hype train crashed long ago.
#109
medi01
Vayra86
AMD slides. Nuff said.
What is this supposed to mean?
#110
r.h.p
ARF
Look, Valantar , I am talking about simple thing competitiveness, you are talking about utopia and how the CTO is always right.
The same people who introduced R600, Bulldozer, Jaguar, Vega and now have two competing chips Polaris 30 and Navi 14 covering absolutely the same market segment.

Please, let's just agree to disagree with each other and stop the argument here and now.

Thanks.
yes I must agree with https://www.techpowerup.com/forums/members/valantar.171585/ , ive had r9 290x , vega 64 ref , and now 5700xt strix and to be honest im not that impressed with all of them as high or mid high end GPU segments .
they all get way too hot and only give 1440p performance . THE VEGA 64 Was supposed to be a game changer , but no.... the Bulldozer was junk and was the first time I changed to intel in 10 years
for 1 series of CPU ( Haswell ) . The new Ryzen seems to be going ok , lucky for them
#111
Valantar
r.h.p
yes I must agree with https://www.techpowerup.com/forums/members/valantar.171585/ , ive had r9 290x , vega 64 ref , and now 5700xt strix and to be honest im not that impressed with all of them as high or mid high end GPU segments .
they all get way too hot and only give 1440p performance . THE VEGA 64 Was supposed to be a game changer , but no.... the Bulldozer was junk and was the first time I changed to intel in 10 years
for 1 series of CPU ( Haswell ) . The new Ryzen seems to be going ok , lucky for them
Yet another post that doesn't really relate to the topic of this thread. I don't see how you are agreeing with me either; none of what you say here aligns with what I've been saying. Also, a lot of what you're saying here is ... if not wrong, then at least very odd. At the time the 290X launched there was no such thing as 4k gaming, so saying it "only gives 1440p performance" is meaningless. There were barely 4k monitors available at all at that time. You're absolutely right the Vega 64 was overhyped and poorly marketed, and it ended up being way too much of a compute-focused architecture with advantages that translated poorly into gaming performance, causing it to underperform while consuming a lot of power compared to the competition. As for the 5700 strix running hot - that's a design flaw that Asus has admitted, and offers an RMA program for, with current revisions being fixed. Also, complaining that a $400 GPU only plays 1440p Ultra is ... weird. Do you expect 4k Ultra performance from a card 1/3rd the price of the competing flagship? 4k60 Ultra in AAA titles is still something that flagship GPUs struggle with (depending on the game). And sure, Bulldozer was terrible. AMD gambled hard on CPU performance branching off in a direction which it ultimately didn't, leaving them with an underperforming architecture and no money to make a new one for quite a few years. But Zen has now been out for ... three years now, and has performed quite well the whole time. As such I don't see how complaining about Bulldozer currently makes much sense. Should we then also be complaining about Netburst P4s? No, it's time to move on. 
AMD is fully back in the CPU game - arguably the technological leader now, if not actually the market leader - and are finally getting around to competing in the flagship GPU space again, which they haven't really touched since 2015 even if their marketing has made a series of overblown and stupid statements about their upper midrange/high end cards in previous generations. AMD's marketing department really deserves some flack for how they've handled things like Vega, and for the unrealistic claims they have made, but even with all that taken into account AMD has competed decently on value if not absolute performance. We'll see how the new cards perform (fingers crossed we'll see some actual competition bringing prices back down!), but at least they're now promising outright to return to the performance leadership fight, which is largely due to the technologies finally being in place for them to do so. Which is what this thread is actually supposed to be about.
#112
r.h.p
Valantar
Yet another post that doesn't really relate to the topic of this thread. I don't see how you are agreeing with me either; none of what you say here aligns with what I've been saying. Also, a lot of what you're saying here is ... if not wrong, then at least very odd. At the time the 290X launched there was no such thing as 4k gaming, so saying it "only gives 1440p performance" is meaningless. There were barely 4k monitors available at all at that time. You're absolutely right the Vega 64 was overhyped and poorly marketed, and it ended up being way too much of a compute-focused architecture with advantages that translated poorly into gaming performance, causing it to underperform while consuming a lot of power compared to the competition. As for the 5700 strix running hot - that's a design flaw that Asus has admitted, and offers an RMA program for, with current revisions being fixed. Also, complaining that a $400 GPU only plays 1440p Ultra is ... weird. Do you expect 4k Ultra performance from a card 1/3rd the price of the competing flagship? 4k60 Ultra in AAA titles is still something that flagship GPUs struggle with (depending on the game). And sure, Bulldozer was terrible. AMD gambled hard on CPU performance branching off in a direction which it ultimately didn't, leaving them with an underperforming architecture and no money to make a new one for quite a few years. But Zen has now been out for ... three years now, and has performed quite well the whole time. As such I don't see how complaining about Bulldozer currently makes much sense. Should we then also be complaining about Netburst P4s? No, it's time to move on. 
AMD is fully back in the CPU game - arguably the technological leader now, if not actually the market leader - and are finally getting around to competing in the flagship GPU space again, which they haven't really touched since 2015 even if their marketing has made a series of overblown and stupid statements about their upper midrange/high end cards in previous generations. AMD's marketing department really deserves some flack for how they've handled things like Vega, and for the unrealistic claims they have made, but even with all that taken into account AMD has competed decently on value if not absolute performance. We'll see how the new cards perform (fingers crossed we'll see some actual competition bringing prices back down!), but at least they're now promising outright to return to the performance leadership fight, which is largely due to the technologies finally being in place for them to do so. Which is what this thread is actually supposed to be about.
ok you have some points ,,,,,bulldozers was inferior to intel at the time , yet i doved in and bought one OH my AMD new multi core CPU .... fail slow . money talks pal . sold it for 40 bucks.
I not sure about ur gaming , yet i could play bf4 at 1440 p no probs . Also Civ V with my R9 290 x XFX . Frickin Civ V has the freesync turned off for anti flickering with all vega64 and 5700xt cards for my system lol , no drivers have helped and ive tried them all.....

The new Ryzen seems to be going ok , lucky for them like I said, being a AMD die hard since AXIA 1000mhz days pal when AMD were the first to reach 1000 MHz . Also in AUS my Vega 64 was $900
and my 5700xt strix was $ 860 AUS , these are not cheap GPUs pal , and on top of it VEGA WAS running AT 90C , FULL GAME LOAD. AMD better pull there finger out for there next release or im out of there GPU segment
#113
Valantar
r.h.p
ok you have some points ,,,,,bulldozers was inferior to intel at the time , yet i doved in and bought one OH my AMD new multi core CPU .... fail slow . money talks pal . sold it for 40 bucks.
I not sure about ur gaming , yet i could play bf4 at 1440 p no probs . Also Civ V with my R9 290 x XFX . Frickin Civ V has the freesync turned off for anti flickering with all vega64 and 5700xt cards for my system lol , no drivers have helped and ive tried them all.....
It seems like you're in the bad luck camp there - some people seem to have consistent issues with Navi, while others have none at all. I hope AMD figures this out soon.

r.h.p
The new Ryzen seems to be going ok , lucky for them like I said, being a AMD die hard since AXIA 1000mhz days pal when AMD were the first to reach 1000 MHz . Also in AUS my Vega 64 was $900
and my 5700xt strix was $ 860 AUS , these are not cheap GPUs pal , and on top of it VEGA WAS running AT 90C , FULL GAME LOAD. AMD better pull there finger out for there next release or im out of there GPU segment
That Strix price is pretty harsh, yeah - even accounting for the 10% Australian GST and AUD-to-USD conversion that's definitely on the high side. 860 AUD is 576 USD according to DuckDuckGo, so ~USD 524 without GST, while PCPartPicker lists it at USD 460-470 (though it's USD 590 on Amazon for some reason). That's at least 10% more than US prices, which is rather sucky. 900 AUD for the Vega 64 is actually below the original USD 699 MSRP with current exchange rates, though of course I don't know when you bought the card or what exchange rates were at that time.
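The currency arithmetic above, sketched out (the exchange rate here is just the one implied by the quoted DuckDuckGo conversion, not a live rate):

```python
# Rough price comparison from the figures quoted above.
# The AUD->USD rate is the one implied by the DuckDuckGo figure
# (860 AUD = 576 USD), not a live exchange rate.

aud_price = 860
aud_to_usd = 576 / 860           # implied rate, ~0.67
usd_inc_gst = aud_price * aud_to_usd
usd_ex_gst = usd_inc_gst / 1.10  # strip the 10% Australian GST

print(round(usd_ex_gst))  # 524
```

Against a USD 460-470 street price, that works out to the roughly 10%+ premium mentioned above.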

Still, I do hope the 50% perf/W number actually holds up, if so we should see both some seriously powerful big GPUs from AMD next go around, and likely some very attractive midrange options too.
#114
Super XP
I truly believe RDNA2 is the real deal and will set AMD's GPU department up for years.
I see RDNA2 as the ZEN2 or ZEN3 of GPUs.
#115
Valantar
Super XP
I truly believe RDNA2 is the real deal and will set AMDs GPU department up for years.
I see RDNA2 as the ZEN2 or ZEN3 of GPUs.
Fingers crossed! Though calling it the Zen 3 of GPUs is a bit odd considering we know absolutely nothing for sure about Zen 3 ;)
#116
Fluffmeister
At best, it will be good to see what Turing brought to the table back in 2018 make it into the two big consoles, then there can be no more excuses.
#117
Super XP
Valantar
Fingers crossed! Though calling it the Zen 3 of GPUs is a bit odd considering we know absolutely nothing for sure about Zen 3 ;)
That is why I called it the ZEN2 of GPUs. I added ZEN3 because ZEN3 is supposed to clobber ZEN2 in performance by a significant percentage clock for clock, something that mostly only happens with new microarchitectures. So who knows, RDNA2 might have that ZEN3 effect on the market.
#118
rvalencia
Vya Domus
There isn't really anything inherently faster about that if the workload is nontrivial, it's just a different way to schedule work. Over the span of 4 clock cycles both the GCN CU and the RDNA CU would go through the same number of threads. To be fair there is nothing SIMD-like anymore about either of these; TeraScale was the last architecture that used a real SIMD configuration, everything is now executed by scalar units in a SIMT fashion.

Instruction throughput is not indicative of performance because that's not how GPUs extract performance. Let's say you want to perform one FMA over 256 threads: with GCN5 you'd need 4 wavefronts that would take 4 clock cycles within one CU; with RDNA you'd need 8 wavefronts, which would also take the same 4 clock cycles within one CU. The same work got done within the same time; it wasn't faster in either case.

Thing is, it takes more silicon and power to schedule 8 wavefronts instead of 4, so that actually makes GCN more efficient space- and power-wise. If you've ever wondered why AMD would always be able to fit more shaders within the same space and TDP than Nvidia, that's how they did it. And that's also probably why Navi 10 wasn't as impressive power-wise as some expected and why it had such a high transistor count despite not having any RT or tensor hardware (Navi 10 and TU106 have practically the same transistor count).

But as always there's a trade-off: a larger wavefront means more idle threads when a hazard such as branching occurs. Very few workloads are hazard-free, especially a complex graphics shader, so in practice GCN actually ends up being a lot less efficient per clock cycle on average.
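The 256-thread FMA example above as a quick sanity check (toy model; the per-CU SIMD layouts are the publicly documented ones, and hazards are ignored as in the example):

```python
# Wavefront bookkeeping for one FMA over 256 threads (example above).
# GCN CU: 4x SIMD16, a Wave64 occupies a SIMD16 for 4 cycles.
# RDNA CU: 2x SIMD32, a Wave32 occupies a SIMD32 for 1 cycle.
# Hazard-free case only, as in the example.

threads = 256
gcn_wavefronts  = threads // 64   # Wave64 -> 4 wavefronts
rdna_wavefronts = threads // 32   # Wave32 -> 8 wavefronts

gcn_cycles  = 4                         # 4 waves on 4 SIMDs, 4 cycles each
rdna_cycles = rdna_wavefronts // 2 * 1  # 8 waves on 2 SIMDs, 1 cycle each

print(gcn_wavefronts, rdna_wavefronts, gcn_cycles, rdna_cycles)  # 4 8 4 4
```

Identical throughput in the hazard-free case, which is exactly why the differences only show up once branching and other hazards enter the picture.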
Some real clock cycle numbers

From Amd/comments/ctfbem
Figure 3 (bottom of page 5) shows 4 lines of shader instructions being executed in GCN, vs RDNA in Wave32 or “backwards compatible” Wave64.
Vega takes 12 cycles to complete the instruction on a GCN SIMD. Navi in Wave32 (optimized code) completes it in 7 cycles.
In backward-compatible (optimized for GCN Wave64) mode, Navi completes it in 8 cycles.
So even on code optimized for GCN, Navi is faster, but more performance can be extracted by optimizing for Navi.
Lower latency, and no wasted clock cycles.


For GCN Wave64 mode, RDNA has about 33 percent higher efficiency per clock when compared to Vega GCN, hence the 5700 XT's 9.66 TFLOPS average is roughly equivalent to 12.85 Vega TFLOPS (the real Vega SKU has 14 TFLOPS). In terms of gaming performance, the RX 5700 XT is very close to the RX Vega II.

According to techpowerup,
RX 5700 XT has 219 watts average gaming while RX Vega II has 268 watts average gaming.
RX 5700 XT has 227 watts peak gaming while RX Vega II has 313 watts peak gaming.

The perf/watt improvement between the RX 5700 XT and RX Vega II is about 27 percent. AMD's claimed 50 percent perf/watt improvement from GCN to RDNA v1 is BS.
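A quick check of that number using the wattage figures quoted above. The answer depends entirely on the assumed relative performance of the two cards (rel_perf below; 1.0 treats them as tied, which is roughly what the post assumes), which is why estimates in this thread vary:

```python
# Perf/Watt gain of the 5700 XT over the Radeon VII, from the TPU
# power figures quoted above. rel_perf is the assumed performance of
# the 5700 XT relative to the VII; 1.0 means a tie.

def perf_per_watt_gain(watts_old, watts_new, rel_perf=1.0):
    """Fractional perf/Watt gain of the new card over the old one."""
    return (rel_perf / watts_new) / (1.0 / watts_old) - 1.0

avg  = perf_per_watt_gain(268, 219)  # average-gaming power draw
peak = perf_per_watt_gain(313, 227)  # peak-gaming power draw

print(f"avg: {avg:.0%}, peak: {peak:.0%}")  # avg: 22%, peak: 38%
```

So the average-power and peak-power figures bracket the ~27 percent estimate; either way it lands well short of a blanket 50 percent.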

References
https://www.techpowerup.com/review/amd-radeon-rx-5700-xt/31.html
https://www.techpowerup.com/review/amd-radeon-vii/31.html
#119
ratirt
rvalencia
RX Vega II has
Which one is Vega II? Is that the Radeon VII?
You need to keep in mind that the RX 5700 XT is way smaller than the RVII, so not sure what you are measuring? If you go only for performance then ok, but if you put power consumption vs performance then for the 5700 XT it will be lower, but the performance as well due to the CUs used in the 5700 XT compared to the RVII: 2560 for the 5700 XT vs 3840 for the VII. That is quite a lot in my book, so it is not BS as you said.

EDIT: Not to mention you are comparing card vs card not chip vs chip. HBM2 vs GDDR6 have also different power usage which you haven't included in your calculations.
#120
efikkan
sergionography
Oh I already knew about Navi 20 etc, yet somehow I totally missed the naming reference. I think we got too optimistic with doubling performance perhaps so it was more wishful thinking
And expecting AMD to double and then triple the performance in two years wasn't a clue either? :P

rvalencia
According to techpowerup,
RX 5700 XT has 219 watts average gaming while RX Vega II has 268 watts average gaming.
RX 5700 XT has 227 watts peak gaming while RX Vega II has 313 watts peak gaming.

The perf/watt improvement between the RX 5700 XT and RX Vega II is about 27 percent. AMD's claimed 50 percent perf/watt improvement from GCN to RDNA v1 is BS.
As I mentioned earlier, claims like these are at best cherry-picked, to please investors.
It probably refers to the Navi model which has the largest gains over the previous model of similar performance or segment, whatever makes for the most impressive metric. AMD, Intel, Nvidia, Apple, etc. are all guilty of this marketing crap.

But it doesn't mean that the whole lineup is 50% more efficient. People need to keep this in mind when they estimate Navi 2x, which is supposed to bring yet another "50%" efficiency, or rather up to 50% more efficiency.
#121
Valantar
efikkan
And expecting AMD to double and then triple the performance in two years wasn't a clue either? :p


As I mentioned earlier, claims like these are at best cherry-picked, to please investors.
It probably refers to the Navi model which has the largest gains over the previous model of similar performance or segment, whatever makes for the most impressive metric. AMD, Intel, Nvidia, Apple, etc. are all guilty of this marketing crap.

But it doesn't mean that the whole lineup is 50% more efficient. People need to keep this in mind when they estimate Navi 2x, which is supposed to bring yet another "50%" efficiency, or rather up to 50% more efficiency.
All they need for it to be true (at least in a "not getting sued by the shareholders" way) is a single product, so yeah, up to is very likely the most correct reading. Still, up to 50% is damn impressive without a node change (remember what changed from 14nm to the tweaked "12nm"? Yeah, near nothing). Here's hoping the minimum increase (for common workloads) is well above 30%. 40% would still make for a very good ~275W card (especially if they use HBM), though obviously we all want as fast as possible :p
#122
rvalencia
ratirt
Which one is Vega II? Is that the Radeon VII?
You need to keep in mind that the RX 5700 XT is way smaller than the RVII, so not sure what you are measuring? If you go only for performance then ok, but if you put power consumption vs performance then for the 5700 XT it will be lower, but the performance as well due to the CUs used in the 5700 XT compared to the RVII: 2560 for the 5700 XT vs 3840 for the VII. That is quite a lot in my book, so it is not BS as you said.

EDIT: Not to mention you are comparing card vs card not chip vs chip. HBM2 vs GDDR6 have also different power usage which you haven't included in your calculations.
1. I was referring to Radeon VII
2. I was referring to perf/watt.
3. GDDR6 (16 Gbps, ~2.5 W each x 8 chips) and HBM2 (e.g. ~20 watts on Vega Frontier 16 GB) power consumption difference is minor when compared to the GPUs involved.

16 GB HBM2 power consumption is lower when compared to a 16-chip 16 GB GDDR6 clamshell-mode setup, which is irrelevant for the RX 5700 XT's 8-chip GDDR6-14000.
#123
Super XP
+50% efficiency is very impressive. I can see why Nvidia may be worried.
#125
Fluffmeister
It's certainly interesting reading the two threads, one is haha never gonna happen leather jacket man, the other is... awesome take that leather jacket man.

Nice features though, welcome to 2018.