Tuesday, October 20th 2020

AMD Radeon RX 6000 Series "Big Navi" GPU Features 320 W TGP, 16 Gbps GDDR6 Memory

AMD is preparing to launch its Radeon RX 6000 series of graphics cards, codenamed "Big Navi", and we are getting more and more leaks about the upcoming cards. Set for an October 28th launch, the Big Navi GPU is based on the Navi 21 silicon, which comes in two variants. Thanks to his sources, Igor Wallossek over at Igor's Lab has published a handful of information regarding the upcoming graphics card release. More specifically, there are details about the Total Graphics Power (TGP) of the cards and how it is distributed across the board (pun intended). To clarify, TDP (Thermal Design Power) is a measurement that applies only to the chip, or die, of the GPU and how much thermal headroom it has; it doesn't capture the power draw of the whole card, as there are more heat-producing components on board.

The breakdown for the Navi 21 XT graphics card goes as follows: 235 W for the GPU alone, 20 W for Samsung's 16 Gbps GDDR6 memory, 35 W for voltage regulation (MOSFETs, inductors, capacitors), 15 W for fans and other components, and 15 W lost in the PCB. This puts the combined TGP at 320 W, showing just how much power the non-GPU elements consume. For custom OC AIB cards, the TGP is boosted to 355 W, as the GPU alone draws 270 W. As for the Navi 21 XL variant, cards based on it run at a 290 W TGP, as the GPU budget is reduced to 203 W and the GDDR6 memory draws 17 W; the non-GPU components on the board use the same amount of power as on the XT.
As for the memory selection, AMD uses Samsung's 16 Gbps GDDR6 modules (K4ZAF325BM-HC16). The bundle AMD ships to its AIBs pairs 16 GB of this memory with the GPU core; however, AIBs are free to source different memory if they want to, as long as it is a 16 Gbps module. You can see the tables below for a breakdown of each card's TGP.
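
To make the arithmetic explicit, here is a minimal sketch in Python that reconstructs the reported TGP figures from the per-component budgets quoted above (the dictionary layout is just for illustration):

```python
# Reconstruct the reported Navi 21 XT TGP from the per-component
# budgets quoted above (all values in watts).
navi21_xt = {
    "GPU die": 235,
    "GDDR6 (16 Gbps, 16 GB)": 20,
    "VRM (MOSFETs, inductors, caps)": 35,
    "fans and other components": 15,
    "PCB losses": 15,
}
print(f"Navi 21 XT TGP: {sum(navi21_xt.values())} W")  # -> 320 W

# Custom OC AIB cards reportedly lift the GPU budget to 270 W;
# with the same non-GPU budgets that lands on the quoted 355 W.
navi21_xt_oc = {**navi21_xt, "GPU die": 270}
print(f"Navi 21 XT (AIB OC) TGP: {sum(navi21_xt_oc.values())} W")  # -> 355 W
```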
Sources: Igor's Lab, via VideoCardz

153 Comments on AMD Radeon RX 6000 Series "Big Navi" GPU Features 320 W TGP, 16 Gbps GDDR6 Memory

#126
Vayra86
Power is a non-issue.

Power does result in more heat, and more heat is always an issue inside any case.

Worth considering is that CPU TDPs have been all over the place as well. The net result is you'll be taking a lot more measures than before just to keep a nice temperature equilibrium: more fans, higher fan speeds, higher airflow requirements. Current-day case design is of no real help either, in that sense. In that way, power increases translate directly into a higher purchase price for the complete setup. And that is on top of the mild increase to a monthly (!) energy bill. 3.5 pounds per month... is another 42 pounds per year. Three years of a high-power GPU versus the same tier of the past gen... +126 pounds sterling. 700 just became 826. It's not nothing. It's a structural increase in TCO. And that's not even counting the power/money spent on the first 250 W we were always drawing.
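
(As a quick sanity check of that arithmetic, a minimal sketch; the £3.50/month delta is the assumption from the paragraph above:)

```python
# TCO sketch using the figures from the post above.
extra_per_month = 3.50   # GBP, assumed extra energy cost vs. a past-gen card
years = 3
extra_total = extra_per_month * 12 * years
print(f"Extra energy cost over {years} years: ~£{extra_total:.0f}")  # ~£126

card_price = 700         # GBP, example purchase price from the post
print(f"Effective cost of ownership: ~£{card_price + extra_total:.0f}")  # ~£826
```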

Also worth considering is the fact that people desire smaller cases. ITX builds are gaining in popularity. Laptops are a growth market, and a larger one than consumer desktops.

So... is power truly a non-issue... not entirely, then?
MakaveliUnless you have everything running fully loaded non-stop, you won't hit those maximum power numbers you are trying to use to make your argument. Current hardware is very good at quickly dropping into lower power states when needed. And pretty much everything out today is very good at idle power draw.
You can rest assured a common use case for a GPU is to run it at 100% utilization. Even if that doesn't always translate to 100% of the power budget... it's still going to be close.
Posted on Reply
#127
Chrispy_
CheeseballThe new 20.10.1 driver seems to address the HDMI audio issue with AV receivers. I have not tested this on the RX 5700 XT and Onkyo yet.
Goddamnit! I waited three months, and the 2060S has only been in there for four days before AMD fixed it. I'm using Yamaha, but I suspect if they say "AV receivers" it basically means any situation where there's an intermediate device extracting audio between the GPU and the final display.

I'll have to give this a try at the weekend. The 5700XT is faster AND significantly quieter than the 2060S.

Or rather, I should say that the 5700XT I have is quieter than the 2060S I have. I shouldn't make sweeping generalisations, since both cards have a wide variety of performance and acoustics depending on the exact model. Still, though, the 5700XT undervolts more gracefully than the 2060S; I guess that's 7 nm vs 12 nm for you...
Posted on Reply
#128
Cheeseball
Not a Potato
Chrispy_Goddamnit!
I waited three months, and the 2060S has only been in there for four days before AMD fixed it.
I'm on Yamaha, but I suspect if they say "AV receivers" it basically means any situation where there's an intermediate device between the GPU and the final display.
I'll have to give this a try at the weekend. The 5700XT is faster AND quieter than the 2060S, and my highest priority in an HTPC graphics card is silence and 4K60 performance, something I've recently realised the 2060S sucks at.
I would think that depends on which 2060S AIB card you purchased (in relation to the silence, since the cooling layout varies). No doubt the 2060S is a weaker card in terms of gaming performance, but it should be on par when it comes to using NVENC/NVDEC compared to VCE (except in real-time H.264 transcoding/streaming).
Posted on Reply
#129
Makaveli
Vayra86You can rest assured a common use case for a GPU is to run it at 100% utilization. Even if that doesn't always translate to 100% of the power budget... it's still going to be close.
Yes, for those that are using them for work, and for miners.

But for gamers, you are rarely sitting at 100% utilization.
Posted on Reply
#130
mtcn77
MakaveliUnless you have everything running fully loaded non-stop, you won't hit those maximum power numbers you are trying to use to make your argument. Current hardware is very good at quickly dropping into lower power states when needed. And pretty much everything out today is very good at idle power draw.
MakaveliYes, for those that are using them for work, and for miners.
I have to say, after all the miner craze I have encountered, it seems very plausible that those numbers are not just real, they are vital to the operating life of the GPU. People have been cancelling factory overclocks just to make the cards last a couple of months longer. All those 20% overclocks and available power budget limits are thrown out the window when you have a brick.
Posted on Reply
#131
Zach_01
MakaveliBut for gamers, you are rarely sitting at 100% utilization.
In the 2~3 games I've been playing lately, I see an average of 98~99% GPU usage when the GPU is unrestricted at full speed (no FPS cap), at 1920x1200 with max settings.
Posted on Reply
#132
EarthDog
MakaveliBut for gamers, you are rarely sitting at 100% utilization.
Sorry, what? Any modern-ish game that doesn't have any limitations (CPU or vsync) will run a GPU at that 98/99% threshold; this is normal behavior. I can't think of a game I own, outside of GemCraft, that doesn't show ~99% use...
Posted on Reply
#133
Mysteoa
Vya DomusI don't follow, the limited edition of the 5700XT wasn't a different product, it was still named 5700XT. "6900XTX" implies a different product.
Navi 10 XTX is the 5700 XT 50th Anniversary (Lisa Su) edition, so essentially Navi 21 XTX is a higher-binned Navi 21 XT. Maybe Navi 21 XTX is a watercooled 6900 XT edition.
Posted on Reply
#134
Makaveli
EarthDogSorry, what? Any modernish game that doesn't have any limitations (cpu or vsync) will run a gpu at that 98/99% threshold this is normal behavior. I cant think of a game I own outside of gemcraft that doesn't show ~99% use...
I mean it being pegged at 100% consistently. In most games, usage will vary with load screens, where you are on the map, how many enemies there are, etc.
Posted on Reply
#135
dragontamer5788
MakaveliI mean it being pegged at 100% consistently. In most games, usage will vary with load screens, where you are on the map, how many enemies there are, etc.
Even at 100% utilization, that doesn't mean the GPU is using 100% of its power budget. Utilization is usually measured at the OS level, which is to say how full the GPU command queues are. It's not actually about power usage at all.

Different games will mostly run at high utilization (because the command queues constantly have something in them), but if you watch the power usage, it will vary.
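
(One way to watch this yourself on an NVIDIA card is to sample utilization and board power side by side; a minimal sketch using the pynvml bindings, assuming they are installed - AMD users would reach for rocm-smi instead:)

```python
# Sample GPU utilization vs. actual board power once per second.
# Utilization reflects command-queue occupancy; power is what the
# board actually draws - the two often diverge, as described above.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
for _ in range(10):
    util = pynvml.nvmlDeviceGetUtilizationRates(handle).gpu  # percent
    watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # mW -> W
    print(f"utilization: {util:3d}%   power: {watts:6.1f} W")
    time.sleep(1)
pynvml.nvmlShutdown()
```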
Posted on Reply
#136
EarthDog
MakaveliI mean it being pegged at 100% consistently. In most games, usage will vary with load screens, where you are on the map, how many enemies there are, etc.
Load screens, sure... otherwise, it's a pretty consistent 98/99%... very consistent (again, unless vsync or a CPU bottleneck intervenes). As was said, though, power can vary.
Posted on Reply
#137
Th3pwn3r
TurmaniaWe used to have two-slot GPUs; as of last gen that went up to three slots, and now we are seeing four-slot GPUs, and it is all to cool the power-hungry beasts. But this trend surely has to stop. Yes, we can undervolt to bring power consumption down to our desired needs, and these cards will most certainly be more efficient than last gen. But is that what 99% of users would do? I think not only about the power bill, but the heat that is output, the spinning of fans, and consequently the faster deterioration of those fans and other components in the system. The noise output and the heat that comes from the case will be uncomfortable.
I suggest you shut down your computer or power off your phone, because you're just wasting electricity and generating heat for what reason? I'm not serious. There's a lot of goofiness going on in your post, but if you don't want noise and heat, then just put your PC in the next room over and use extension cables for everything, OR you could vent your PC somehow. A LONG time ago I vented my PC into my attic, and while many would say small, low-pressure PC fans won't push the air up and out, I can say for sure that it worked for me.
Posted on Reply
#138
Mussels
Freshwater Moderator
EarthDogI just don't get this undervolting crowd... You pay for a certain level of performance, but due to power you strip a fair amount of performance off of it to run lower. Why not pay less for a card down the stack with less power use and use it to its fullest potential? I don't buy a performance coupe and decide to chip it to lower the horsepower because I don't like the gas mileage... o_O :kookoo::wtf:
Freedom to run either way. Right now at 1440p 144 Hz, I doubt a 3080 will get maxed out very often - so I'd either run Vsync, a power limiter, or both to keep the heat down until I actually need the performance.

To put it another way, I don't want a 400 W screamer to run CSGO - I'd rather it become a 200 W card in that situation.

Edit: with my 1080, it's almost always at 100% load, except for the instances where I'm CPU limited. Even if it's not an issue NOW, it WILL be as the cards age.
Posted on Reply
#139
Th3pwn3r
MusselsFreedom to run either way. Right now at 1440p 144 Hz, I doubt a 3080 will get maxed out very often - so I'd either run Vsync, a power limiter, or both to keep the heat down until I actually need the performance.

To put it another way, I don't want a 400 W screamer to run CSGO - I'd rather it become a 200 W card in that situation.
Fair point, but this isn't the same scenario others describe. But what you've described is also wasteful if you're not going to use the full potential of the card you have installed. In your case, you probably shouldn't have upgraded at all. Seems like a hassle to me to water down a card and then boost it back up just because I'm going to play a different game.
Posted on Reply
#140
mtcn77
Th3pwn3rBut what you've described is also wasteful if you're not going to use the full potential of the card you have installed.
You have to consider that, whatever you do, that GPU is never going to run its workloads serially. There is an order-of-magnitude power difference between running the card at 99% and at 100%. Are you going to pursue that 1%? It's not about 99th percentiles either, just 99 fps versus 100, at the cost of disrupted case internals and the CPU and PSU overheating as a result. Not cool.
Posted on Reply
#141
Mussels
Freshwater Moderator
Th3pwn3rFair point, but this isn't the same scenario others describe. But what you've described is also wasteful if you're not going to use the full potential of the card you have installed. In your case, you probably shouldn't have upgraded at all. Seems like a hassle to me to water down a card and then boost it back up just because I'm going to play a different game.
You don't see the point in getting a GPU twice as fast as what I have, for future games? You may have odd views on this stuff.
Posted on Reply
#142
Camm
The discussion of boosting from Sony, and AMD continuing to separate Game clock from Boost clock, is much more interesting than the 'TDP' numbers IMO.

Much like with CPUs, TDPs will start becoming irrelevant, and I believe this is the first move in that direction, with boosting becoming much more deterministic and transitory.
Posted on Reply
#143
nguyen
Th3pwn3rFair point, but this isn't the same scenario others describe. But what you've described is also wasteful if you're not going to use the full potential of the card you have installed. In your case, you probably shouldn't have upgraded at all. Seems like a hassle to me to water down a card and then boost it back up just because I'm going to play a different game.
Have you ever seen those anime where the villains just keep on unleashing more power whenever the MC powers up :roll:
Posted on Reply
#144
EarthDog
MusselsFreedom to run either way. Right now at 1440p 144 Hz, I doubt a 3080 will get maxed out very often - so I'd either run Vsync, a power limiter, or both to keep the heat down until I actually need the performance.

To put it another way, I don't want a 400 W screamer to run CSGO - I'd rather it become a 200 W card in that situation.

Edit: with my 1080, it's almost always at 100% load, except for the instances where I'm CPU limited. Even if it's not an issue NOW, it WILL be as the cards age.
Fair point. But in cases like that, put a frame limiter of some sort on in-game... That way, when you play titles like CSGO that hit 300 fps, you cap it to your refresh rate; power use, noise, and temps all drop, while other games where the horsepower is needed aren't left lacking.
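
(Conceptually, a frame limiter is just frame pacing: the loop sleeps out the leftover time instead of racing ahead. A minimal sketch, with render() as a hypothetical stand-in for the real draw call:)

```python
# Minimal frame-limiter sketch: cap the loop at a target FPS so the
# GPU idles between frames instead of rendering 300+ fps flat out.
import time

TARGET_FPS = 144
FRAME_TIME = 1.0 / TARGET_FPS

def render():
    time.sleep(0.002)  # hypothetical stand-in for the real draw call

for _ in range(300):  # stand-in for the game loop
    start = time.perf_counter()
    render()
    leftover = FRAME_TIME - (time.perf_counter() - start)
    if leftover > 0:
        time.sleep(leftover)  # idle time -> lower power, noise, and heat
```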
Posted on Reply
#145
Th3pwn3r
mtcn77You have to consider that, whatever you do, that GPU is never going to run its workloads serially. There is an order-of-magnitude power difference between running the card at 99% and at 100%. Are you going to pursue that 1%? It's not about 99th percentiles either, just 99 fps versus 100, at the cost of disrupted case internals and the CPU and PSU overheating as a result. Not cool.
If your GPU is causing your CPU and PSU to overheat, then you have some serious build issues, and I suggest making the necessary modifications. Maybe your case is one of those full-glass, RGB pieces of junk with zero airflow.
MusselsYou dont see the point in getting a GPU twice as fast as what i have, for future games? You may have odd views on this stuff.
No, I don't see a point in paying a premium for a premium video card now to play future games later. You'd probably be better off buying a future card when the future games are out, BUT I'm talking about games a couple of years out or so. Personally, I don't think future-proofing is really a thing. I don't always upgrade out of necessity. However, I'm also not concerned about power consumption or heat. The smallest power supply I have is a 750 W, followed by 850s and 1200s (this laptop excluded).
Posted on Reply
#146
mtcn77
Th3pwn3rIf your GPU is causing your CPU and PSU to overheat, then you have some serious build issues, and I suggest making the necessary modifications. Maybe your case is one of those full-glass, RGB pieces of junk with zero airflow.
Yes, because everybody who isn't using a blower-type reference card is in this group together.
I don't want to fight, since I haven't sorted out which type of internet overlord you are, but it is wildly apparent that open-bench type cases do not constitute the bulk of PC users. I agree it doesn't matter in some cases, but those aren't the majority of cases.
Posted on Reply
#147
EarthDog
mtcn77Yes, because everybody who isn't using a blower-type reference card is in this group together.
I don't want to fight, since I haven't sorted out which type of internet overlord you are, but it is wildly apparent that open-bench type cases do not constitute the bulk of PC users. I agree it doesn't matter in some cases, but those aren't the majority of cases.
Nor is shoehorning a 320 W+ card into a shoebox and thinking it will be OK. That's a two-way street. ;)
Posted on Reply
#148
Vayra86
EarthDogNor is shoehorning a 320 W+ card into a shoebox and thinking it will be OK. That's a two-way street. ;)
Take off the top and cut out the DisplayPorts... external GPU enclosure for ultra cheap.
Posted on Reply
#149
Chrispy_
Oh man, you guys are still trying to get your heads around this, huh?
Here's a 5700XT of mine, graphed for various things, but at the lowest stable OCCT voltages for each clock:

[graph removed: clock/voltage/power sweep of the 5700XT]

You buy the product and can run it at any clock and power level you choose, as long as it's stable. You can see that, in an ideal world, the best performance/Watt for this card was at ~1375 MHz.
AMD sold it at 1850 MHz, with a much higher TDP and subsequently higher heat/noise levels than the 12 nm TU106 it competed against. That's taking the efficiency advantage of TSMC's 7 nm node and throwing it away, and then throwing away even more just to get fractionally higher benchmark scores.

You literally get a slider in the driver where you can undo this dumb decision. What you do with that slider is entirely up to you; it's not going to change how much you paid for the card, only how much you want to trade peace and quiet for performance. Clearly noise is a big problem, because quiet GPUs are a big selling point for all AIB vendors, all trying to compete with larger fans at lower RPMs, features like idle fan stop, etc. If you have a huge case with tons of low-noise airflow, you can afford to buy a gargantuan graphics card that'll dissipate 300 W quietly and let the case deal with that 300 W problem separately.

If you don't have a high-airflow case, or loads of room, such cards may not even physically fit, and the card's own fan noise is irrelevant, because it'll dump so much heat into a smaller case that all the other fans ramp up in their attempt to compensate for the additional 300 W burden of the graphics card.

I haven't even mentioned electricity cost or the unwanted effect of heating up the room. Those are also valid arguments, but not necessary ones: the noise created by higher power consumption is enough to justify undervolting (and minor underclocking) all by itself.
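
(Why the perf/Watt peak sits mid-curve rather than at the lowest clock: dynamic power scales roughly with frequency times voltage squared, while fixed overheads - memory, VRM losses, fans - amortize better at higher clocks. A toy model with entirely made-up numbers, not Chrispy_'s measured data:)

```python
# Toy perf/W model: total power = static overhead + k * f * V^2,
# performance ~ f. The V/F points below are invented for illustration.
STATIC_W = 30.0   # assumed fixed overhead (memory, VRM, fans)
K = 0.12          # arbitrary scale constant
vf_curve = [      # (clock MHz, hypothetical required voltage)
    (1100, 0.75), (1250, 0.75), (1375, 0.76),
    (1550, 0.85), (1700, 0.95), (1850, 1.10),
]

for mhz, volts in vf_curve:
    total_w = STATIC_W + K * mhz * volts**2
    print(f"{mhz} MHz @ {volts:.2f} V -> {total_w:5.1f} W, "
          f"perf/W {mhz / total_w:.2f} (arb. units)")
# With these numbers the perf/W peak lands near 1375 MHz: below it the
# static overhead dominates, above it the voltage climb eats the gains.
```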
Posted on Reply
#150
squallheart
RedelZaVednoPerformance per watt did go up on Ampere, but that's to be expected given that Nvidia moved from TSMC's 12 nm to Samsung's 8 nm 8LPP, a 10 nm extension node. What is not impressive is only a 10% performance-per-watt increase over Turing while being built on a 25% denser node. The RDNA2 arch, being on 7 nm+, looks to be even worse efficiency-wise given that the density of 7 nm+ is much higher, but let's wait for the actual benchmarks.
Did you literally just completely ignore the chart that was a few posts above you? 100/85 = 117.6%, so still a 17.6% improvement in performance per watt over the most efficient Turing GPU.
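
(For clarity, the arithmetic behind that figure, assuming the chart normalizes power draw at equal performance:)

```python
# If Turing draws 100% and Ampere 85% at equal performance,
# the perf/W ratio is the inverse of the power ratio.
turing_power, ampere_power = 100.0, 85.0
improvement = turing_power / ampere_power - 1.0
print(f"perf/W improvement: {improvement:.1%}")  # -> 17.6%
```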
Posted on Reply