Tuesday, October 20th 2020

AMD Radeon RX 6000 Series "Big Navi" GPU Features 320 W TGP, 16 Gbps GDDR6 Memory

AMD is preparing to launch its Radeon RX 6000 series of graphics cards, codenamed "Big Navi", and more and more leaks about the upcoming cards keep appearing. Set for an October 28th launch, the Big Navi lineup is based on the Navi 21 silicon, which comes in two variants. Thanks to his sources, Igor Wallossek of Igor's Lab has published a handful of information regarding the upcoming graphics card release. More specifically, there are more details about the Total Graphics Power (TGP) of the cards and how it is distributed across the board (pun intended). To clarify, TDP (Thermal Design Power) is a measurement that applies only to the GPU chip, or die, and how much thermal headroom it has; it doesn't cover the power draw of the whole card, as there are other heat-producing components on it.

The breakdown of the Navi 21 XT graphics card goes as follows: 235 Watts for the GPU alone, 20 Watts for Samsung's 16 Gbps GDDR6 memory, 35 Watts for voltage regulation (MOSFETs, inductors, capacitors), 15 Watts for fans and other components, and 15 Watts lost in the PCB. This puts the combined TGP at 320 Watts, showing just how much power is used by the non-GPU elements. For custom OC AIB cards, the TGP is boosted to 355 Watts, as the GPU alone uses 270 Watts. When it comes to the Navi 21 XL variant, cards based on it use 290 Watts of TGP, as the GPU sees a reduction to 203 Watts and the GDDR6 memory uses 17 Watts, while the non-GPU components on the board use the same amount of power.
When it comes to the selection of memory, AMD uses Samsung's 16 Gbps GDDR6 modules (K4ZAF325BM-HC16). The bundle AMD ships to its AIBs contains 16 GB of this memory paired with the GPU core; however, AIBs are free to use different memory if they want to, as long as it is a 16 Gbps module. You can check the tables below for the breakdown of the TGP of each card.
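As a quick sanity check, the quoted figures can be tallied in a minimal Python sketch (using only the numbers above; note that the Navi 21 XL components as listed sum to 285 W, slightly short of the quoted 290 W TGP, so the full table presumably itemizes a few extra watts):

```python
# Tally of the leaked per-component power figures quoted above.
# "vrm" = voltage regulation (MOSFETs, inductors, caps); "pcb" = board losses.
BOARDS = {
    "Navi 21 XT (reference)": {"gpu": 235, "gddr6": 20, "vrm": 35, "fans": 15, "pcb": 15},
    "Navi 21 XT (AIB OC)":    {"gpu": 270, "gddr6": 20, "vrm": 35, "fans": 15, "pcb": 15},
    "Navi 21 XL":             {"gpu": 203, "gddr6": 17, "vrm": 35, "fans": 15, "pcb": 15},
}

for name, parts in BOARDS.items():
    tgp = sum(parts.values())
    print(f"{name}: ~{tgp} W TGP ({tgp - parts['gpu']} W of it outside the GPU die)")
```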
Sources: Igor's Lab, via VideoCardz

153 Comments on AMD Radeon RX 6000 Series "Big Navi" GPU Features 320 W TGP, 16 Gbps GDDR6 Memory

#26
repman244
Turmania: We used to have two-slot GPUs; last gen went up to 3 slots, and now we are seeing 4-slot GPUs, all to cool these power-hungry beasts. But this trend surely has to stop. Yes, we can undervolt to bring power consumption down to our desired level, and it will most certainly be more efficient than last gen. But is that what 99% of users would do? I think not only about the power bill, but also the heat that is output, the spinning of fans, and consequently the faster deterioration of those fans and other components in the system. The noise output and the heat that comes out of the case will be uncomfortable.
We still have a 2-slot card which can handle 4K gaming with "ease".
#27
renz496
RedelZaVedno: Performance per watt did go up on Ampere, but that's to be expected given that Nvidia moved from TSMC's 12nm to Samsung's 8nm 8LPP, a 10nm extension node. What is not impressive is only a 10% performance-per-watt increase over Turing while being built on a 25% denser node. The RDNA2 arch, being on 7nm+, looks to be even worse efficiency-wise given that the density of 7nm+ is much higher, but let's wait for the actual benchmarks.
the days of smaller/improved node = better power consumption are over.
#28
theGryphon
Turmania: At this rate, even a successor to the GTX 1650, which is a below-75W GPU, will consume around 125W.
If it's a successor to GTX 1650, it HAS TO be a 75W card :banghead:
And, if the performance/watt numbers for this generation hold, we should get a decent upgrade in performance in the same 75W envelope.
#29
RedelZaVedno
renz496: the days of smaller/improved node = better power consumption are over.
That's simply not true. Higher density = less power consumption or more transistors per mm². That's what node shrinkage is all about: a smaller node with the same transistor count (lower wattage), or the same node with a higher transistor count (same wattage, or a compromise between performance gain and wattage advantage).
#30
Unregistered
So let me get this straight.

Some people are grumpy with AMD for supposedly not competing with the 3080, while some thought 3070 - and now speculation that one of the higher end cards looks as though it's getting a bit more juice (for whatever competitive reason) - people seem grumpy with that too...
#31
theGryphon
Thinking about these developments, I think AMD is simply following in NVIDIA's footsteps on power draw. I mean NVIDIA opened the floodgates and AMD saw an opportunity to max out their performance within similar power ratings. I bet AMD has been working on tweaking their clock speeds in the last several weeks after NVIDIA launch.
#32
mtcn77
Turmania: is that what 99% of users would do?
Well, these GPUs are packed with unified shaders. They don't work all the time; they wait for instructions, and depending on what stage of the pipeline they are at, they can throttle according to the workload, and that is what any user should do, since that is what consoles do anyway. Every form of development effort is directed at the consoles, and I have to say, they have gone pretty wild with the intrinsics. Let's wait to see SM6.0. I'm sure after introducing per-lane operations to expand instruction fidelity to be 4 times higher, they will go into full clock control.
theGryphon: I mean NVIDIA opened the floodgates and AMD saw an opportunity to max out their performance within similar power ratings.
Tis wrong.
theGryphon: I bet AMD has been working on tweaking their clock speeds in the last several weeks after NVIDIA launch.
Tis right. The way AMD and Nvidia approach GPU clock monitoring is different: Nvidia uses real-time monitoring, AMD uses emulated monitoring. Nvidia can adapt better to real changes in the post-launch stage, but AMD can respond faster due to pre-launch approximated settings. If they simulated a scenario, the algorithm could emulate the power surge and whatnot.
#33
renz496
RedelZaVedno: That's simply not true. Higher density = less power consumption or more transistors per mm². That's what node shrinkage is all about: a smaller node with the same transistor count (lower wattage), or the same node with a higher transistor count (same wattage, or a compromise between performance gain and wattage advantage).
Yes, higher density means more transistors per mm². But less power? I don't think that is guaranteed. In fact, higher density leads to another problem: heat. How much power is being wasted as heat instead of increasing performance? In the end we are bound by the laws of physics; we cannot get improvements infinitely. Even at 20nm we already saw problems. Back then TSMC decided to ditch the high-performance node for 20nm because the power savings were not that much better than the enhanced 28nm process.
#34
mtcn77
Higher density indeed means lower clocks.
#35
Nkd
RedelZaVedno: If TGP is 320W then peak power draw must be north of 400W, just like the 3080 and 3090. That's really, really bad. Any single decent GPU should not peak over 300W; that's the datacenter rule of thumb, and it's being broken with Ampere and RDNA2. How long will an air-cooled 400W GPU last? I'm having a hard time believing that there will be many fully functioning air-cooled Big Navis/3080s/3090s around in 3-5 years' time. Maybe that's the intent; 1080 Tis are still killing new sales.
Someone didn't read the article. Seriously, come on now. The article literally explains how much wattage goes where.
#36
ThanatosPy
The problem with AMD is always gonna be the drivers. Man, how can they be so bad?
#37
Nkd
ThanatosPy: The problem with AMD is always gonna be the drivers. Man, how can they be so bad?
Only 3 known issues as of the last release. Time to move on.
#38
RedelZaVedno
Laws of physics always apply; it's just a matter of cost. As you go to smaller geometries it gets more and more expensive. One of the things driving Moore's Law was that the cost per transistor kept dropping. It hasn't been dropping noticeably recently (going below 5nm), and in some cases it's going flat. So yes, you can still get more transistors and lower wattage at smaller nodes, but the cost per die is going up significantly, so those two things balance out. I'd say 3nm is a sweet spot for compute hardware for now, because it is great for compute density and has relatively low leakage power. Below that we probably won't see retail GPU and CPU shrinks anytime soon, as the architectures become very, very complex, which means A LOT of R&D $$$ and abysmal die yields. But hey, we're still talking about Samsung's 8nm here (aka 10nm in reality) and 7nm with Ampere/RDNA2, not sub-3nm nodes, so there is still plenty of power efficiency to gain simply by moving to a smaller node. The problem Ampere has is that it was not built exclusively for gaming, and Samsung's 8nm node was never meant for big dies, and it shows. It's Nvidia's "GCN 5 Vega" moment: trying to sit on two chairs at the same time and cheap out on an inferior node. Luckily for them, AMD is so far behind that they can pull it off without having to worry about the competition too much. A 3080 on TSMC's 7nm EUV process would be a 250W TDP GPU; that's all NVidia had to do to obliterate RDNA2, but they've chosen profit margins over efficiency, and maybe, just maybe, that will bite them in the ass.
#39
EarthDog
beedoo: So let me get this straight.

Some people are grumpy with AMD for supposedly not competing with the 3080, while some thought 3070 - and now speculation that one of the higher end cards looks as though it's getting a bit more juice (for whatever competitive reason) - people seem grumpy with that too...
lol, the fickle outweigh the logical these days... especially in forums.
Nkd: Only 3 known issues as of the last release. Time to move on.
I think the worry is launch day and the annual Adrenalin drivers. It took them over a year to get rid of the black screen issue, for example. I'm glad they are pulling it together, but I do understand the valid concerns.
#40
Vayra86
beedoo: So let me get this straight.

Some people are grumpy with AMD for supposedly not competing with the 3080, while some thought 3070 - and now speculation that one of the higher end cards looks as though it's getting a bit more juice (for whatever competitive reason) - people seem grumpy with that too...
I think in general the fact that more performance is achieved with more power isn't exactly something to get all hyped up about.

We could practically do that already but never did, if you think about it - without major investments and price hikes. Just drag out 14nm a while longer and make it bigger?

The reality is, we're seeing the cost of RT and 4K added to the pipeline. Efficiency gains don't translate to lower res due to engine or CPU constraints. We're moving into a new era, in that sense. It doesn't look good right now because we're used to a very efficient era in resolutions. Hardly anyone plays at 4K yet, but their GPUs slaughter 1080p and sometimes 1440p. Basically these are new GPUs waiting to solve new problems we don't really have.
#41
AnarchoPrimitiv
Turmania: Does anybody care about electricity bills anymore, or do most not have the responsibility of paying the bills? Who would buy these cards?
First of all, the average price of electricity in America is $0.125/kWh, which is very cheap. This means, for example, that a 100-watt difference between video cards equates to roughly $36.50/year if the card is used 8 hours per day, 365 days per year... And that's a lot of gaming.
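A quick back-of-the-envelope check of that figure (a sketch assuming the quoted $0.125/kWh rate and the full 100 W difference for all 8 hours):

```python
# Back-of-the-envelope yearly cost of an extra 100 W of GPU power draw,
# assuming full load for 8 hours a day, 365 days a year, at $0.125/kWh.
extra_watts = 100
hours = 8 * 365
price_per_kwh = 0.125  # rough US average, as quoted

extra_kwh = extra_watts / 1000 * hours                 # ~292 kWh per year
print(f"~${extra_kwh * price_per_kwh:.2f} per year")   # ~$36.50
```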

Don't get me wrong, I think efficiency should be the paramount concern, considering the impending ecological collapse and all. But because Nvidia opened the door to a complete disregard for efficiency this time around, I think AMD is following suit and going all out with clocks, because they realize they don't have to care about efficiency.

That being said, I wouldn't be surprised if you downclock and undervolt RDNA2, it'll probably be extremely efficient, much more than Ampere could or can be.
#42
RedelZaVedno
Nkd: Someone didn't read the article. Seriously, come on now. The article literally explains how much wattage goes where.
What's wrong with my numbers? Igor writes 320W TBP for the FE Navi 21 XT and 355W for AIB variants ('Die 6800XT ist heiss, bis zu 355 Watt++', i.e. 'The 6800XT is hot, up to 355 Watts++'). That translates into 400W+ peak power draw.
#43
EarthDog
RedelZaVedno: What's wrong with my numbers? Igor writes 320W TBP for the FE Navi 21 XT and 355W for AIB variants ('bis zu 355 Watt++', i.e. 'up to 355 Watts++').... That translates into 400W+ peak power draw.
How? Isn't TBP Total BOARD Power, which encompasses everything? How are you seeing a TBP value of XXX and coming up with YYY (more)?
#44
RedelZaVedno
EarthDog: How? Isn't TBP Total BOARD Power, which encompasses everything? How are you seeing a TBP value of XXX and coming up with YYY (more)?
Actually it doesn't. Official TBP of 3080 is 320 Watts, yet it peaks at 370W (FE) and up to 470W (AIBs). I have no reason to assume RDNA2 will be any different. That's why Igor wrote 355W++.
#45
EarthDog
RedelZaVedno: Actually it doesn't. Official TBP of 3080 is 320 Watts, yet it peaks at 370W (FE) and up to 470W (AIBs). I have no reason to assume RDNA2 will be any different. That's why Igor wrote 355W++.
You should link some support.... As it stands, NV cards have a power limit where clocks and voltage are lowered to maintain that limit. In my experience, it doesn't go over that by much... not even close. It depends on the power limit of the card. If it is set to 320W max, that is all they get, generally. It's true there are BIOSes with higher limits, but out of the factory at stock (FE speeds) it's a 320W card.
#46
BoboOOZ
beedoo: So let me get this straight.

Some people are grumpy with AMD for supposedly not competing with the 3080, while some thought 3070 - and now speculation that one of the higher end cards looks as though it's getting a bit more juice (for whatever competitive reason) - people seem grumpy with that too...
In short, some people are always grumpy.

People unhappy with high-TBP GPUs? Buy a mid-tier one. Really interested in efficiency? Buy the biggest die, undervolt, underclock.

But how about waiting to see some actual numbers (performance, consumption, prices) before getting the pitchforks out?

Who am I kidding, those pitchforks are always out...
RedelZaVedno: Actually it doesn't. Official TBP of 3080 is 320 Watts, yet it peaks at 370W (FE) and up to 470W (AIBs). I have no reason to assume RDNA2 will be any different. That's why Igor wrote 355W++.
You're assuming that based on Igor's assumptions about his leak, and you see no flaw in your reasoning? :)
#47
mtcn77
AnarchoPrimitiv: And that's a lot of gaming.
This is not about gaming, it is about GPU behaviour. You cannot schedule work for 100% utilization.
AnarchoPrimitiv: That being said, I wouldn't be surprised if you downclock and undervolt RDNA2, it'll probably be extremely efficient,
I just read the Timothy Lottes guide, and by funny coincidence we have a local console developer who repeated his steps verbatim, so I can say on good grounds that this is a matter of scheduling and of how the GPU can 'see' the same workload progress that developers can see using the Radeon profiler. Work isn't parallel; in fact, most of the time it is serial. If you have 64 compute units, the instruction engine assigns them one by one. Even in the best circumstances* that is ~5% time lost to idle. You don't even need to keep the shaders working until they meet the work-to-idle requirement.
*PS: that is with 1 kernel running; when multiple kernels are running this increases linearly, e.g. 4 workgroups increase idle time to ~18%.
#48
Turmania
Of course we are going to complain when a new GPU comes with 400W+ consumption. How some of you can be so ignorant and dismissive towards others about it is beyond belief. I said the same thing about Nvidia when it released: a lovely card, a great performance job, and no cost increase from the previous gen, but all of that at the cost of power consumption, and the complications that brings with it make it a no-go for me.
#49
R0H1T
Vya Domus: I don't follow, the limited edition of the 5700XT wasn't a different product, it was still named 5700XT. "6900XTX" implies a different product.
The naming scheme really doesn't matter, functionally the 3900X & 3900XT are the same products as well. AMD could, in theory, do the same with "big" Navi.
#50
Chrispy_
At a rumoured 2.4GHz I'm expecting a lot of that GPU TDP to be caused by AMD's typical preference to ignore the performance/Watt sweet spot.

Underclockers and undervolters will likely be running their cards at 2GHz and sacrificing ~15% of the potential performance to get sub-200W total board power.
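A rough sketch of the math behind that, assuming dynamic power scales roughly with frequency times voltage squared (the voltages here are illustrative assumptions, not leaked figures, and the whole TGP is treated as scaling, which overstates the savings on the non-GPU components):

```python
# Rough estimate of undervolting/underclocking savings, assuming
# dynamic power ~ frequency * voltage^2. Clocks are the rumoured
# 2.4 GHz boost vs. a hypothetical 2.0 GHz target; voltages are assumed.
tgp_stock = 320              # W, rumoured Navi 21 XT TGP
f_stock, f_uv = 2.4, 2.0     # GHz
v_stock, v_uv = 1.15, 1.00   # V (illustrative)

scale = (f_uv / f_stock) * (v_uv / v_stock) ** 2
print(f"Estimated board power: ~{tgp_stock * scale:.0f} W")  # ~202 W
```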

I am looking forward to reviews, but looking forward even more to seeing what the undervolting potential is. Nothing says "quiet computing" more than an overengineered cooling system for 350+ Watts that barely breaks a sweat at 200W.