Monday, June 15th 2020

AMD "Navi 12" Silicon Powering the Radeon Pro 5600M Rendered

Out of the blue, AMD announced its Radeon Pro 5600M mobile discrete graphics solution, exclusive to Apple's 16-inch MacBook Pro. It turns out that the Pro 5600M is based on an all-new ASIC by AMD, codenamed "Navi 12." This is a multi-chip module, much like "Vega 20," featuring a 7 nm GPU die and two 4 GB HBM2 memory stacks sitting on an interposer. While the full specs of the "Navi 12" GPU die aren't known, on the Pro 5600M it is configured with 40 RDNA compute units amounting to 2,560 stream processors, 160 TMUs, and possibly 64 ROPs.

The engine clock of the Pro 5600M is set at up to 1035 MHz. The HBM2 memory is clocked at 1.54 Gbps, which, at the 2048-bit bus width, translates to 394 GB/s of memory bandwidth. There are two big takeaways from this expensive-looking ASIC design: a significantly smaller PCB footprint compared to a "Navi 10" ASIC with its eight GDDR6 memory chips, and a significantly lower power envelope. AMD rates the typical power at just 50 W. In the render below, the new ASIC is shown next to a "Navi 14" ASIC that powers RX/Pro 5500-series SKUs.
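As a quick sanity check, that bandwidth figure follows directly from the per-pin data rate and bus width quoted above; a back-of-the-envelope calculation in Python reproduces AMD's number:

# Memory bandwidth = per-pin data rate x bus width, converted from bits to bytes.
data_rate_gbps = 1.54    # HBM2 per-pin data rate, Gbit/s
bus_width_bits = 2048    # two HBM2 stacks at 1024 bits each

bandwidth_gbs = data_rate_gbps * bus_width_bits / 8   # Gbit/s -> GB/s
print(f"Memory bandwidth: {bandwidth_gbs:.1f} GB/s")  # ~394.2 GB/s, matching the quoted 394 GB/s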

42 Comments on AMD "Navi 12" Silicon Powering the Radeon Pro 5600M Rendered

#26
IceShroom
evernessinceOn top of that the GPU being discussed in the article is the Radeon Pro 5600M, which is a completely different piece of silicon. If Navi 12 is based on RDNA2, which is highly likely, it will be significantly more power efficient than existing chips. AMD is claiming a 50% increase in performance per watt with Navi 2. Of course, just having HBM instead of GDDR means power savings as well.
Navi 12 is RDNA1, not RDNA2. RDNA2 GPUs will use the Navi 2x naming scheme.
Posted on Reply
#27
M2B
evernessinceThe video you linked clearly shows the 2060 system consuming more power. TDP does not equal power consumption. I'm tired of repeating this.

In addition, you cherry-picked your 10% number. That's 10% at maximum detail settings, which in many games (this being a laptop GPU) isn't a good experience. At high settings the lead is closer to 6%, and at medium about 2%.

On top of that the GPU being discussed in the article is the Radeon Pro 5600M, which is a completely different piece of silicon. If Navi 12 is based on RDNA2, which is highly likely, it will be significantly more power efficient than existing chips. AMD is claiming a 50% increase in performance per watt with Navi 2. Of course, just having HBM instead of GDDR means power savings as well.
Total system power consumption is different because those are different laptops with many different parts. I can't believe I have to explain that.
And yes, TDP mostly equals power consumption.
I didn't say anything about this specific apple GPU, I just said AMD has no efficiency advantage.
Posted on Reply
#28
iO
What a shame it's Apple exclusive. This would make such a lovely ITX card...
Posted on Reply
#29
IceShroom
M2BTotal system power consumption is different because those are different laptops with many different parts. I can't believe I have to explain that.
And yes, TDP mostly equals power consumption.
I didn't say anything about this specific apple GPU, I just said AMD has no efficiency advantage.
The video shows the 2060 laptop consuming more power than the 5600M one. The CPU is the same, so how is the 2060 more efficient than the 5600M when the 2060 laptop is consuming more power?
Posted on Reply
#30
M2B
IceShroomThe video shows the 2060 laptop consuming more power than the 5600M one. The CPU is the same, so how is the 2060 more efficient than the 5600M when the 2060 laptop is consuming more power?
Because the tested RTX 2060 has a 90W POWER LIMIT and the 5600M consumes up to 100W while the 2060 performs better. That's how.
Posted on Reply
#31
IceShroom
M2BBecause the tested RTX 2060 has a 90W POWER LIMIT and the 5600M consumes up to 100W while the 2060 performs better. That's how.
How did he measure that? Currently you can't do that on a SmartShift laptop, as it reports combined power consumption for both the CPU and GPU.
And if the 2060 is consuming less power, then why does the ASUS one show higher power consumption?
Posted on Reply
#32
RH92
Mark LittleThe only thing I need to understand is what I have been taught after many years of leaving comments on the internet: not to engage with people like you.
Yeah right, why would you want to engage with people like me who provide reasoned criticism in order to correct your misconceptions... when it's much easier to stay in your denial bubble!
THANATOSYou forgot about HBM2, which is more power efficient than GDDR6.
Yep.
Posted on Reply
#33
evernessince
M2BTotal system power consumption is different because those are different laptops with many different parts. I can't believe I have to explain that.
And yes, TDP mostly equals power consumption.
I didn't say anything about this specific apple GPU, I just said AMD has no efficiency advantage.

You are straight up wrong.

As GamersNexus described it, it's a made-up number used to beat down forum users over which processor has the lower TDP, when in reality it isn't supposed to represent power consumption, let alone be accurate at what it is supposed to indicate (thermal power dissipation).
Posted on Reply
#34
Chrispy_
evernessince<TDP talk>

As GamersNexus described it, it's a made-up number used to beat down forum users over which processor has the lower TDP, when in reality it isn't supposed to represent power consumption, let alone be accurate at what it is supposed to indicate (thermal power dissipation).
Yep. AMD's current 65W models (so 3600, or 3700X) pull 85-90W at stock with PBO boosting enabled. Intel's current 10500 pulls about 130W when boosting.

Laptop 15W TDP parts actually range from 10W to about 45W depending on vendor and configuration.

GPU TDP is actually more closely constrained by AMD and Nvidia, simply because their entire product is a single board, so they have far more control over power delivery than AMD or Intel do with CPUs, which rely on third-party motherboard manufacturers to handle it.
Posted on Reply
#35
JB_Gamer
Not out of the Blue, out of the Red (team;)
Posted on Reply
#37
watzupken
M2BAMD has no efficiency advantage in the mobile space.
I've seen reviews of a ~100W 5600M losing to a 90W RTX 2060 by 10% or so.

Objectively, I agree that the TDP comparison is moot. Generally the chip will not stick to the stated TDP, whether it's a GPU or a CPU. At this point, CPUs are the biggest offenders.

In addition, it is very difficult to have an apples-to-apples performance comparison between laptops. This is because the specs, cooling solution and BIOS configuration differ widely. Even if we can find models with more or less identical specs, the cooling solution and laptop configuration will largely determine performance. Unlike on a desktop, where you can afford a huge cooling solution, the cooling solutions in laptops are really bare-minimum. So if a maker cuts costs and scrimps on the cooling solution, this may result in hotter and/or slower performance. In terms of laptop configuration, it depends on how aggressive the laptop maker wants to be in allowing a longer boost, higher power, higher temps, etc. All of these are preset in the BIOS, to which we have limited or no access in laptops.
Posted on Reply
#38
londiste
GPU TDP is the power limit for the card, and has been for a long while now.
CPU situation is different.
Posted on Reply
#39
Valantar
M2BAMD has no efficiency advantage in the mobile space.
I've seen reviews of a ~100W 5600M losing to a 90W RTX 2060 by 10% or so.

That isn't the same GPU ...

The GPU in question here is the Navi 12-based Radeon Pro 5600M (with HBM2).

The GPU in your video is the Navi 10-based Radeon RX 5600M (with GDDR6).

Yes, this naming is confusingly similar, but they are different product lines (RX is mainstream consumer/gaming, Pro is productivity/workstation), so the similar naming just indicates similar performance/product-stack positioning.
If this were used by anyone other than Apple, I would expect a higher-clocked version named RX 5700M or some such.
Chrispy_Are we really expecting the full 2560 (40CU) configuration in something that's wearing the 5600 moniker?
Yes. Considering that AMD just sent out a press release saying this in clear text, yes, that is exactly what is happening.


I have to say, I would love for them to make this into a premium SFF desktop GPU ... push it to 75W, stick it on a 2-slot HHHL card, wow, it would knock the socks off anything else available in that form factor. The price would obviously be high, but there are quite a few SFF enthusiasts out there willing to pay that premium.
Posted on Reply
#40
Chrispy_
ValantarYes. Considering that AMD just sent out a press release saying this in clear text, yes, that is exactly what is happening.

I have to say, I would love for them to make this into a premium SFF desktop GPU ... push it to 75W, stick it on a 2-slot HHHL card, wow, it would knock the socks off anything else available in that form factor. The price would obviously be high, but there are quite a few SFF enthusiasts out there willing to pay that premium.
Oh okay, I hadn't seen the press release when I asked that.

Agree with you on the premium SFF GPU. I intentionally paid extra for a 5700XT and, other than a brief exercise in seeing what it was capable of, have never run it at speeds that would even beat a stock 5700. My daily-driver undervolt barely spins the fans beyond their minimum rpm, and in benchmarks I'm giving up about 15% of the stock performance.

Yes, it's not great performance/$, but I'm happy to pay the premium for performance/Watt because although I don't care about electricity costs, I do care about it being damn-near silent.
Posted on Reply
#41
Valantar
Chrispy_Oh okay, I hadn't seen the press release when I asked that.

Agree with you on the premium SFF GPU. I intentionally paid extra for a 5700XT and, other than a brief exercise in seeing what it was capable of, have never run it at speeds that would even beat a stock 5700. My daily-driver undervolt barely spins the fans beyond their minimum rpm, and in benchmarks I'm giving up about 15% of the stock performance.

Yes, it's not great performance/$, but I'm happy to pay the premium for performance/Watt because although I don't care about electricity costs, I do care about it being damn-near silent.
If one has the money for that, that's a good approach, particularly with AMD cards, which for nearly the past decade have been pushed too far up their DVFS curves.

I'm just imagining a ... let's call it RX 5700 Nano - why not? It would sure be a worthy successor to the near-legendary R9 Nano - at 75W, two slot HHHL form factor, probably ~1300MHz or possibly a bit more (given that this does 1035 at 50W). That would absolutely destroy the current highest performance HHHL GPU, the GTX 1650. They could even make it a harvested die with a couple of CUs cut off, letting Apple run off with the best chips. It could still deliver ~1660 Ti performance unless the frequency scaling falls off a cliff at low clocks (considering how low the MBP version is clocked, there should be plenty of headroom without getting notably inefficient). It might not be the highest volume product ever - not by any means given the premium pricing something like this would demand - but it would be the darling of the SFF crowd for years to come.
Posted on Reply
#42
Chrispy_
It should be easy enough to test. Get a vanilla 5700, set the power limit to -50% (so it's 90W max), tune the voltage curve with a bit of quick graph-plotting, and then pick your 75W point on the curve. I reckon the HBM2 makes this considerably more efficient, so 75W with GDDR6 may actually be only around 1 GHz.
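For anyone who wants to automate that last step, here's a minimal sketch of the curve-fitting idea in Python, assuming you've already logged a few (power limit, sustained clock) pairs from such a test; the sample numbers below are purely illustrative, not real 5700 measurements:

import numpy as np

# Hypothetical (board power in W, sustained engine clock in MHz) points logged
# at a few power-limit settings -- illustrative values only, not measured data.
samples = [(90, 1250), (110, 1430), (130, 1560), (150, 1650), (180, 1750)]

power = np.array([p for p, _ in samples], dtype=float)
clock = np.array([c for _, c in samples], dtype=float)

# Fit a simple quadratic clock-vs-power model and extrapolate slightly down to 75 W.
coeffs = np.polyfit(power, clock, deg=2)
clock_at_75w = np.polyval(coeffs, 75.0)
print(f"Estimated sustained clock at 75 W: {clock_at_75w:.0f} MHz")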
Posted on Reply