Tuesday, October 20th 2020

AMD Radeon RX 6000 Series "Big Navi" GPU Features 320 W TGP, 16 Gbps GDDR6 Memory

AMD is preparing to launch its Radeon RX 6000 series of graphics cards, codenamed "Big Navi", and leaks about the upcoming cards keep coming. Set for an October 28th launch, the Big Navi GPU is based on the Navi 21 silicon, which comes in two variants. Thanks to sources at Igor's Lab, Igor Wallossek has published a handful of information regarding the upcoming graphics card release. More specifically, there are details about the Total Graphics Power (TGP) of the cards and how it is distributed across the board (pun intended). To clarify, TDP (Thermal Design Power) is a measurement that applies only to the chip, or die, of the GPU and how much thermal headroom it has; it doesn't cover the whole card's power, as there are more heat-producing components on board.

The breakdown of the Navi 21 XT graphics card goes as follows: 235 W for the GPU alone, 20 W for Samsung's 16 Gbps GDDR6 memory, 35 W for voltage regulation (MOSFETs, inductors, capacitors), 15 W for fans and other components, and 15 W lost in the PCB. This puts the combined TGP at 320 W, showing just how much power is used by the non-GPU elements. For custom OC AIB cards, the TGP rises to 355 W, as the GPU alone uses 270 W. As for the Navi 21 XL variant, cards based on it use 290 W of TGP: the GPU sees a reduction to 203 W and the GDDR6 memory uses 17 W, while the non-GPU components on the board draw the same amount of power.
When it comes to the selection of memory, AMD uses Samsung's 16 Gbps GDDR6 modules (K4ZAF325BM-HC16). The bundle AMD ships to its AIBs pairs 16 GB of this memory with the GPU core; however, AIBs are free to use different memory if they want to, as long as it is a 16 Gbps module. You can see the breakdown of each card's TGP in the tables below.
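The component figures above can be cross-checked with a few lines of Python. The numbers are taken from the article; the breakdown itself is Igor's Lab's estimate, not an official AMD specification:

```python
# TGP breakdown for the Navi 21 XT reference board, per Igor's Lab estimates (watts)
navi21_xt = {
    "GPU core": 235,
    "GDDR6 (16 Gbps)": 20,
    "VRM (MOSFETs, inductors, caps)": 35,
    "fans and misc": 15,
    "PCB losses": 15,
}

tgp = sum(navi21_xt.values())
print(f"Navi 21 XT TGP: {tgp} W")  # 320 W

# Custom OC AIB cards raise only the GPU-core budget, to 270 W
tgp_aib = tgp - navi21_xt["GPU core"] + 270
print(f"AIB OC TGP: {tgp_aib} W")  # 355 W
```

Summing the components reproduces the 320 W reference TGP and the 355 W figure for overclocked AIB designs.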
Sources: Igor's Lab, via VideoCardz

153 Comments on AMD Radeon RX 6000 Series "Big Navi" GPU Features 320 W TGP, 16 Gbps GDDR6 Memory

#51
RedelZaVedno
BoboOOZYou're assuming that based on Igor assumptions about his leak and you see no flaw in your reasoning? :)
It's all speculation at this point. But knowing that Sapphire rated Nitro+ 5700XT at 235W TBP and GPU at 170W while real life PPD actually measured 310W, I'd say it's pretty safe to assume +50W over estimated TBP IF leak of 230W GPU Power holds water.
Posted on Reply
#52
mtcn77
Chrispy_Underclockers and undervolters will likely be running their cards at 2GHz and sacrificing ~15% of the potential performance to get Sub-200W total board powers.
You can still overclock and throttle these cards. You just drop the power limit. It is crazy how much you can do.
Posted on Reply
#53
EarthDog
I just don't get this undervolting crowd... You pay for a certain level of performance, but due to power you strip a fair amount of performance off of it to run lower. Why not pay less for a card down the stack with less power use and use it to its fullest potential? I don't buy a performance coupe and decide to chip it to lower the horsepower because I don't like the gas mileage... o_O :kookoo::wtf:
Posted on Reply
#54
R0H1T
RedelZaVednoIt's all speculation at this point. But knowing that Sapphire rated Nitro+ 5700XT at 235W TBP and GPU at 170W while real life PPD actually measured 310W, I'd say it's pretty safe to assume +50W over estimated TBP IF leak of 230W GPU Power holds water.
GPUs draw more power at higher voltages & temps, that's called physics. Of course this varies from Si to Si so the cooling has to be over-engineered, not to mention AIBs can't possibly know the max power draw of the card in each & every scenario.
EarthDogI just don't get this undervolting crowd... You pay for a certain level of performance, but due to power you strip a fair amount of performance off of it to run lower. Why not pay less for a card down the stack with less power use and use it to its fullest potential? I don't buy a performance coupe and decide to chip it to lower the horsepower because I don't like the gas mileage... o_O :kookoo::wtf:
Yeah, imagine buying a 9590 & undervolting or underclocking it to 4 GHz :slap:
Posted on Reply
#55
Kaleid
EarthDogI just don't get this undervolting crowd... You pay for a certain level of performance, but due to power you strip a fair amount of performance off of it to run lower. Why not pay less for a card down the stack with less power use and use it to its fullest potential? I don't buy a performance coupe and decide to chip it to lower the horsepower because I don't like the gas mileage... o_O :kookoo::wtf:
Haven't noticed the card clocking its MHz down because of the undervolting. And I don't have much to gain by overclocking it either; it just adds another 50 MHz.
And of course some also want their cards to be quieter, which is nice.
Posted on Reply
#56
Chrispy_
EarthDogI just don't get this undervolting crowd... You pay for a certain level of performance, but due to power you strip a fair amount of performance off of it to run lower. Why not pay less for a card down the stack with less power use and use it to its fullest potential? I don't buy a performance coupe and decide to chip it to lower the horsepower because I don't like the gas mileage... o_O :kookoo::wtf:
We do it because we can afford to pay for higher tier cards and run them quieter. A 5700XT limited to 150W is much quieter and still faster than a 5600XT running close to its power and voltage limit. The higher-tier card usually has a better cooler and better quality of manufacture too, because the margins aren't as slim higher up the product stack.

If you're running on a tight budget then you buy a lower-tier card and overclock the snot out of it. AMD's cards have typically been very close to their overclock limits at factory stock, much like Nvidia's 3000-series are now.

I chose to run my 5700XT at 1750MHz and 120W (probably about 150W board power) and I could afford to leave the fans on minimum speed for silent 4K gaming. At default clocks it would initially boost to about 1950MHz, get hot, and then stabilise at about an 1850 game clock over a longer period. 1750MHz instead of 1850MHz is a small, basically negligible performance drop, but it was the difference between loud and silent, which is especially important for an HTPC in a quiet living room using an SFF case with relatively low airflow.
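The tradeoff described above can be sketched with the usual first-order approximation that dynamic power scales roughly with frequency times voltage squared. The voltages below are hypothetical example values chosen for illustration, not figures measured by anyone in the thread:

```python
# Rough sketch of why undervolting saves disproportionate power:
# dynamic power ~ f * V^2, while performance scales at most with f.
f_stock, v_stock = 1850, 1.15  # MHz, volts (assumed stock operating point)
f_uv,    v_uv    = 1750, 0.95  # MHz, volts (assumed undervolted point)

perf_loss = 1 - f_uv / f_stock                         # clock-bound upper bound
power_ratio = (f_uv * v_uv**2) / (f_stock * v_stock**2)

print(f"performance loss: ~{perf_loss:.1%}")              # ~5.4%
print(f"dynamic power: ~{power_ratio:.0%} of stock")      # ~65%
```

Under these assumed voltages, giving up about 5% of clock speed cuts dynamic power by roughly a third, which is the asymmetry undervolters are exploiting.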
Posted on Reply
#57
Vya Domus
R0H1TThe naming scheme really doesn't matter, functionally the 3900X & 3900XT are the same products as well. AMD could, in theory, do the same with "big" Navi.
Still, they have different clock speeds, even if the differences are minuscule.
Posted on Reply
#58
EarthDog
Chrispy_We do it because we can afford to pay for higher tier cards and run them quieter. A 5700XT limted to 150W is much quieter and still faster than a 5600XT running close to its power and voltage limit. The higher-tier card usually has a better cooler and better quality of manufacture too, because the margins aren't as slim higher up the product stack.

If you're running on a tight budget then you buy a lower-tier card and overclock the snot out of it. AMD's cards have always typically been very close to their overclock limits at factory stocks, much like Nvidia's 3000-series are now.
lol, it's your money... it screams a waste of cash to me.
KaleidHaven't noticed the card changing it's MHZ down because of the undervolting.
Depends on different factors. I don't practice this curious waste of money, I was going off the mention of 15% performance loss. That's more than an entire card tier....
Posted on Reply
#59
mtcn77
EarthDogI just don't get this undervolting crowd... You pay for a certain level of performance, but due to power you strip a fair amount of performance off of it to run lower. Why not pay less for a card down the stack with less power use and use it to its fullest potential? I don't buy a performance coupe and decide to chip it to lower the horsepower because I don't like the gas mileage... o_O :kookoo::wtf:
Because it does not scale with available resources; in fact, the more compute units you have, the more serial it gets to feed draw batches. Fury X, Vega 64, Navi, Radeon VII are all 16 CUs per rasterizer. They take twice as long to issue commands as a simple Bonaire or one of those smaller GPUs.
Posted on Reply
#60
EarthDog
mtcn77Because it does not scale with available resources, in fact the more compute units you have, the more serial it gets to feed them. Fury X, Vega, Navi VII are all 16:1 cu per rasterizer. They take twice longer to issue commands than a simple Bonaire or one of those smaller gpus.
Cool story... but this doesn't answer the question of why people pay for XX, lower volts and (generally) performance to WW... Get WW card, save money, be quieter. Or, if it bothers you that much (the noise) and money isn't an issue like the other dude asserts, buy an aftermarket heatsink...better performance for more money and quiet!
Posted on Reply
#61
mtcn77
EarthDogCool story... but this doesn't answer the question of why people pay for XX, lower volts and performance to WW... Get WW card, save money, be quiet. Or, if it bothers you that much (the noise) and money isn't an issue, buy an aftermarket heatsink...better performance for more money and quiet!
It is instruction scheduling limited. You can either wait until there is enough frontend decoding going on, or run them idle and kill any hope for overclocking gains. You aren't wasting, you are saving oc potential.
Look, people didn't question it when Nvidia started GPU Boost, or AMD started ULPS, but today nobody knows what the hell these cards are doing. Buildzoid undervolted Pascal to 0.6 V and it still kept going. I'm attributing it to voltage pumps: the GPU is buffering power to counteract vdroops.
Posted on Reply
#62
RedelZaVedno
I'm looking for a 250W GPU max. Best price/performance at this wattage gets my money. The 3070 looks promising, but I do expect RDNA2 to beat it in performance/watt and performance/dollar, given that it's on a superior node and AMD is the underdog in the GPU game. 52 CUs clocked at 2100MHz (around 14 TFLOPS) should match the 2080 Ti/3070 and have a favorable performance/watt ratio. It will all come down to pricing. I really hope AMD doesn't get greedy. All these 300-400W GPUs are a no-go in my eyes. I have no need for expensive room heaters.
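The ~14 TFLOPS figure quoted above follows from the standard RDNA arithmetic: 64 stream processors per compute unit, with each fused multiply-add counted as two FP32 operations per clock. A quick sanity check for this hypothetical 52-CU part:

```python
# FP32 throughput estimate for a hypothetical 52-CU RDNA2 part at 2.1 GHz
cus = 52
sp_per_cu = 64       # stream processors per CU (RDNA architecture)
ops_per_clock = 2    # one FMA = two FP32 operations
clock_ghz = 2.1

tflops = cus * sp_per_cu * ops_per_clock * clock_ghz / 1000
print(f"~{tflops:.2f} TFLOPS")  # ~13.98
```

That lands right at the "around 14 TFLOPS" estimate in the post; whether such a part exists is speculation on the poster's part.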
Posted on Reply
#63
EarthDog
mtcn77It is instruction scheduling limited. You can either wait until there is enough frontend decoding going on, or run them idle and kill any hope for overclocking gains. You aren't wasting, you are saving oc potential.
Look, people didn't question when Nvidia started gpuboost, or AMD started ulps, but today nobody knows what the hell these cards are doing. Buildzoid undervolted Pascal to 0.6v, it still kept going. I'm attributing it to voltage pumps - the gpu is buffering power to counteract vdroops.
The story doesn't lay in the minutia...at least I couldn't care less about it (thanks for the deets though :)). I get it.. good to know... but that's minutia. Look at it from a big picture perspective.

The end result is XXX power and people are reducing voltage and at times clocks and performance to do it. I don't get it (the losing performance part), especially if it's several percent/card tier.
Posted on Reply
#64
R0H1T
Undervolting + OCing is the way to go, now if AMD's really pushed the card to the max that may not be feasible. I recall the VII could be undervolted & not lose much if at all in terms of consistent performance, though the max boost clocks might have gone down a bit.
Posted on Reply
#65
Vayra86
EarthDogThe story doesn't lay in the minutia...at least I couldn't care less about it (thanks for the deets though :)). I get it.. good to know... but that's minutia. Look at it from a big picture perspective.

The end result is XXX power and people are reducing voltage and at times clocks and performance to do it. I don't get it (the losing performance part), especially if it's several percent/card tier.
AIB cards generally get clocked out of their efficiency curve, and in this case even the FE does that. It's the same deal as with Vega: you underclock it because the efficiency gain is bigger than the FPS loss. Sometimes you can even get lower volts and the exact same performance, or you get better consistency.

I even see it on my Pascal 1080. A 100% power target gets me just as far as 110%, but the card runs cooler, quieter, and maintains a stable boost clock better. At the same time, the FPS gain isn't very linear with the clock speed gain, especially if that clock fluctuates all the time. Boost is great for the extra few hundred MHz it gives, and you keep letting it do that, but the minor gains above that come at a high noise/power cost. This is technically not an undervolt of course, but it illustrates the point. The bar has been pushed further with the generations past Pascal, closer to the edge of what GPUs can do; note the 3080 2 GHz clock issue, and the general power draw increase across the board. Turing was already hungrier while boosting a bit lower.
Posted on Reply
#66
mechtech
SLKLooks like this gen of GPUs are all power-hungry. Efficiency is out of the window!
Well, they are huge chips with lots of memory. Any large engine will use a large amount of fuel.
Posted on Reply
#67
mtcn77
EarthDogThe end result is XXX power and people are reducing voltage and at times clocks and performance to do it.
That is the crazy part - we are only aware of it since it doesn't do it automatically!
I'm sure the next big thing they will try is full on shader-pipeline interlocking. Look at it this way: they were just decoding serially, then they introduced per-lane intrinsics and lane-switching, now it will instantiate when and where to power up and down.
Yes, I think it will come to that. Gpus are getting fully customizable and there is zero benefit to leaving it to the customer. I mean the workloads are the same, the pipeline is the same, they have to do something... what better way than to split the instruction pipeline from the shader pipelines and just power them when it covers their cost to run. Every watt saved from static losses is one more available to faster switching.

The VRMs don't even care how much you pull; they just work until temperature kills them.
Posted on Reply
#68
RedelZaVedno
One stupid question... Could AMD just modify the Xbox Series X 52-CU GPU inside the APU, clock it to, let's say, 2.23 GHz like in the PS5, and offer it as a discrete PC GPU?
That would probably be a cheap-to-produce yet powerful solution. It would be a 3070 killer if equipped with 10/12 GB of VRAM and priced at 350/400 bucks (like the 5700 series).
Is there any possibility of that happening?
Posted on Reply
#69
FinneousPJ
RedelZaVednoOne stupid question... Could AMD just modify XBoX X 52CU GPU inside APU, clock it to lets say 2.23Ghz like in PS5 and offer it as discrete PC GPU?
That would probably be cheap to produce yet powerful solution. It would be a 3070 killer if equipped with 10/12 GB of VRAM and priced at 350/400 bucks (like 5700 series).
Is there any possibility of that happening?
Isn't the console solution a single chip, i.e. there isn't a GPU there to copy?
Posted on Reply
#70
EarthDog
Vayra86AIB cards generally get clocked out of their efficiency curve, and in this case even the FE does that. Its the same deal as with Vega, you underclock it because the gain you get is bigger than the FPS loss. Sometimes you can even get lower volts and the exact same performance, or you get better consistency.

I even see it on my Pascal 1080. 100% power target gets me just as far as 110%, but it still runs cooler, quieter and maintains a stable boost clock better. At the same time, the FPS gain isn't very linear with the clockspeed gain especially if that clock fluctuates all the time. Boost is great for the extra few hundred Mhz it gives, and you keep letting it do that, but the minor gains above that come at a high noise/power cost. This is technically not an undervolt of course, but it illustrates the point. The bar has been pushed further with the generations past Pascal, closer to the edge of what GPUs can do - note the 3080 2Ghz clock issue, and the general power draw increase across the board. Turing was also more hungry already while boosting a bit lower.
It seems like Ampere and the 5700XT are clocked out of their efficiency curves, and judging by the rumors so far, the same goes for RDNA2... I don't think any of the AMD fanatics saw similar power envelopes coming (they are awfully quiet here... go figure), and here we are.
Posted on Reply
#71
RedelZaVedno
FinneousPJIsn't the console solution a single chip, i.e. there isn't a GPU there to copy.
Yes, it is an APU, but it still has a CPU and GPU inside it. I don't know how much modification one needs to do to make it a discrete GPU.
Posted on Reply
#72
mtcn77
FinneousPJIsn't the console solution a single chip, i.e. there isn't a GPU there to copy.
Under DX12, they are still discrete chips. I don't see that changing anyhow in the near future. They won't do it until they shift the GPU shaders into CPU FPUs.
Posted on Reply
#73
Chrispy_
EarthDoglol, it's your money... it screams a waste of cash to me.
The entire "quiet computing" industry is a waste of cash. It doesn't add any performance at all but people pay serious money for it.
The entire "RGBLED" industry is a waste of cash. It doesn't add any performance at all but it costs quite a bit more whilst adding additional software bloat and cable spaghetti.
As you can tell from the current retail market - both of those segments are so successful that they utterly dominate the market and leave almost nothing else available.

Underclocking and undervolting a graphics card is exactly what every laptop manufacturer has ever done. Nvidia went one step further with their Max-Q models and gave people the option to buy far more expensive GPUs than their laptop cooling is capable of, but dialled back to heavily-reduced clocks and TDPs. They sold in their millions, Max-Q was a huge success in the laptop world, despite the high cost.

I think we can agree to disagree because having options on the market is good and more consumer choice is always better than less. At least AMD's graphics driver is an excellent tuning tool for undervolting and underclocking.
Posted on Reply
#74
FinneousPJ
RedelZaVednoYes it is an APU, but still it has CPU and GPU inside it. I don't know how much modifying one needs to do to make it discrete GPU.
I'd guess more modifying than is worth doing if they aren't doing it...
Posted on Reply
#75
mtcn77
I think, if they split instruction pipelines from shader pipelines, they can do a frontend overclock until the pipelines are full, say the GPU works not just at 2.3 GHz but at 3.0 GHz when shaders are idle. How much it would help is relatable since they have pinpointed exactly where the bottlenecks are: 18% idle for 4 workgroups (just enough work for 1 shader of each 4096).
Posted on Reply