
AMD Updates its Statement on Radeon RX 480 Power Draw Controversy

Not quite. There, only review samples had higher clocks. Here, they all have them higher...
You mean, all cards are advertised as 150W parts while drawing ~165W? And that's ok?
 
The PCI-E spec is designed to handle more than the 75 Watt reference spec.
:confused: So when the SIG revised the spec up from 25w to allow 75w total [12v + 3.3v], where does that allow the 12v to go over 75w, not counting the 3.3v that's already there?
From what I've seen this is the only card found that has had an equal draw from both slot and cable. [BTW I really like this card!]
So if it's been revised to allow 150w through the slot, why can't that be found?
 
So instead of frying your motherboard, now you can choose to fry your PSU instead.

HAHAHA, nice one, Mr. PSU Expert.
A 6pin gpu connector can provide up to 200W of power, so even if the 480 does not draw power from the PCI slot, it would still be enough to run the card without damaging your power plug or PSU.

The 12V rail can provide the power the card needs to run on that 6 pin connector alone.

I for one would try a different solution:
Why not design a custom PCB for this card and route all power to the chips through the 6 or 8 pin connector, excluding PCIe slot power?
Asus, MSI, Sapphire could do that, and instead of a 6 pin go with an 8 pin for better power delivery, since an 8 pin is rated at 150W per connector, and the max it can go is 300W?
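
For what it's worth, the spec-level budget such a custom board would be working with looks like this (a quick sketch using the connector ratings quoted above; a 300W ceiling would correspond to, e.g., slot + 6-pin + 8-pin):

Code:
# PCIe power budgets (watts) per the ratings quoted above.
SLOT_W = 75
PLUG_W = {"6-pin": 75, "8-pin": 150}

# Reference RX 480: slot + 6-pin plug.
print(SLOT_W + PLUG_W["6-pin"])   # 150 W budget, split across sources

# Hypothetical custom board fed from the 8-pin plug alone.
print(PLUG_W["8-pin"])            # 150 W, same budget, zero slot draw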
 
I think they've actually said they are doing two things (as well as the original draw being a non-issue as you mention):

1. Lowering the power draw from the PCIe slot (they didn't say by how much, or whether there's a performance penalty; benching will confirm), and this will be enabled as standard.

and separately:

2. Adding an option to reduce overall power by "some amount", at a cost to performance that should be offset by a claimed 3% improvement in driver performance, assuming your game of choice is one of the "uplifted" ones.

Number 1 will be the most interesting to see the results of: how much have they reduced the PCIe slot power use? Is the overall use now the same, just with some amount moved to the PCIe power connector on the card? And how has this affected performance? In theory, on the uplifted games, performance should be 3% better than the launch reviews.

Another question is if overall TDP/TBP has changed.

We should know more in a couple of days. ;)

For 1, this is the information we currently have: the current reference RX 480 PCB design is such that the IR3567B loop 1 (6 phases) controls the MOSFETs that supply GPU VDDC. The 6 phases are split 50/50 between the PCI-E slot and the plug; these are two independent power planes, and there is no "chip" on the PCB that can switch which source supplies the MOSFETs, but the IR3567B can do load balancing independently per phase. So basically The Stilt's fix lowers the loading ratio of the 3 phases supplied by the PCI-E slot and shifts it to the 3 phases supplied by the PCI-E plug. If there were any other method he could employ, I would think he would have used it, especially with his experience and knowledge of AMD products. This is basically what I think AMD will do to reduce current/power usage from the PCI-E slot. This does not affect how much power the GPU will draw or what voltage it will use, nor will its performance be limited; what it does mean is that 3 phases are being loaded more, so depending on GPU properties and how hard you push the card when OC'ing, you could hit the OCP limit.
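
As a rough illustration of that load-balancing idea (a toy sketch with made-up numbers, not The Stilt's actual IR3567B register values):

Code:
# Toy model of a 6-phase VRM where phases 0-2 are fed by the PCI-E slot
# and phases 3-5 by the 6-pin plug. Ratios below are illustrative only.

def rail_power(total_power, ratios):
    """Split total GPU power across phases by per-phase loading ratio,
    then sum what each source (slot vs plug) ends up supplying."""
    scale = total_power / sum(ratios)
    per_phase = [r * scale for r in ratios]
    return sum(per_phase[:3]), sum(per_phase[3:])  # (slot W, plug W)

total = 160.0  # watts of GPU power, illustrative

# Stock: all six phases loaded equally -> 50/50 slot/plug split.
print(rail_power(total, [1, 1, 1, 1, 1, 1]))              # (80.0, 80.0)

# Fix: lower the ratio on the slot-fed phases; the same total shifts
# toward the plug-fed phases, which now run hotter.
print(rail_power(total, [0.8, 0.8, 0.8, 1.2, 1.2, 1.2]))  # (64.0, 96.0)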

For 2 my opinion is this: PowerPlay in the ROM has a PowerTune table, which contains the PowerLimit. These values are TDP/TDC/MPDL.

TDP: "Change TDP limit based on customer's thermal solution"

TDC: "PowerTune limit for maximum thermally sustainable current by VDDC regulator that can be supplied"

Maximum Power Delivery Limit (MPDL): "This power limit is the total chip power that we need to stay within in order to not violate the PCIe rail/connector power delivery"

These values limit the GPU only, not any other board elements (RAM, etc). TDC does not do any load balancing or differentiate between phases on the VRM. MPDL does not differentiate between where the power is drawn from (i.e. slot/plugs). So AMD may be implementing a tweak of these settings in the OS once the driver loads, as this is what we do when we change the PL in OD/TriXX/MSI AB.
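
To make the distinction concrete, here is a minimal sketch of how three such limits could gate the GPU (my own simplification with invented numbers, not real RX 480 PowerPlay entries or AMD's actual algorithm):

Code:
# Simplified PowerTune-style limiting: throttle when any limit trips.
# Values are invented examples for illustration only.
LIMITS = {
    "TDP":  110.0,  # W, limit matched to the thermal solution
    "TDC":  90.0,   # A, max sustainable current from the VDDC regulator
    "MPDL": 150.0,  # W, total chip power to respect PCIe power delivery
}

def tripped_limit(gpu_power_w, vddc_current_a, chip_power_w):
    """Return which limit (if any) forces clocks down. Note that none of
    these inputs distinguish slot vs plug, or individual VRM phases."""
    if gpu_power_w > LIMITS["TDP"]:
        return "TDP"
    if vddc_current_a > LIMITS["TDC"]:
        return "TDC"
    if chip_power_w > LIMITS["MPDL"]:
        return "MPDL"
    return None

print(tripped_limit(105.0, 95.0, 140.0))  # TDC -> clocks come down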

They may implement a PowerTune algorithm tweak. Without delving too much into this, I'll present an example based on two programs, from my experience on Fiji. At some point they exposed a driver feature called "Power Efficiency". Prior to this setting being available I would adjust the PL in ROM/OS to max; 3DM FS would run at max GPU clock, but Heaven would show some clock dropping. With PE=Off there was no clock drop in Heaven. I was never reaching an A/W/temps issue that would make the GPU drop clocks in Heaven with PE=On, so it has led me to believe an aspect of the PowerTune algorithm is being tweaked when I set PE=Off.

Just like you said, I would agree it would come at a cost to performance, which they are giving back in "some" titles, for fix/point 2.

Now getting back to GPU properties, mainly leakage: a higher leakage ASIC will draw more power. So you will have some cards not benefiting as much from the shift of load between phases to reduce PCI-E slot power usage. Likewise, a higher leakage ASIC will reach the PowerTune limit sooner under load, so the owner may see more of a performance loss. IMO owners of lower leakage ASICs will benefit more from these fixes in the context of the "issue" and will draw somewhat less power as a whole at "stock".

When W1zzard tested The Stilt's tweak it drew ~10W less from the PCI-E slot, headroom that will quickly be consumed by some OC'ing.

IMO the fixes are not "ideal" considering this is a card which has a PCI-E plug, and by this I mean no disrespect to The Stilt, W1zzard and the others involved. The reference PCB needs a redesign to be in line with what recent past AMD cards drew from the PCI-E slot, or to substantially reduce PCI-E slot power usage.

My apologies to members for mega long post.
 
I guess math isn't your strong point. The card currently consumes more than the compliant 150W (~165W). If you limit the PCIe slot input to <75W, then you end up drawing >75W from the 6 pin connector, which is still outside the spec. So instead of frying your motherboard, now you can choose to fry your PSU instead.
The only sane thing to do is make the thing draw 150W as advertised.


The PSU cables can withstand the current. Take a look at your PSU's 6+2 pin cable: the extra 2 pins don't even have their own wires; they are just bridged from the existing ones, so when you use it for an 8 pin card only 6 wires are connected directly to the PSU.

The 6 pin connector can safely deliver an extra 16w without catching fire or anything.

Also, AMD didn't say that the card would draw 150w (total board power), just that the card's TDP was 150w (not the same thing as power consumption).

Anyway, if you're so concerned about going over the PCIe specification, you can turn on the toggle to limit the power draw to 150w.
 

the horror will never end...
they have a fix, people.. relax. or don't buy the damn card if you are so concerned. can we please just move along..
 
Is it really necessary for AMD to target its reference designs so close to the limit?

The 290X with a jet engine cooler that can't keep temperatures below 90 °C; the 480 with a 6-pin when it needs an 8-pin, going off spec.

Is the bad press they get from this stuff really worth the savings? :confused:
 
some people are saying that this will increase the load on the 6 pin VRM phases and cause damage to them. My gosh, the VRMs are overspecified: each phase can provide 40w (240w total), so even if the slot-fed phases are underused, the 6pin-fed phases can take the heat. Heck, AMD could disable one of the phases entirely and the card would still get more than enough power and stay within the VRM limits.
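
A quick back-of-the-envelope check of that claim, taking the quoted 40w-per-phase figure at face value (sketch only; the real per-phase limit depends on the actual MOSFETs and cooling):

Code:
# Headroom check using the 40 W/phase figure quoted above.
PHASE_W = 40.0
PHASES = 6

print(PHASES * PHASE_W)         # 240.0 W total VRM capability
print((PHASES - 1) * PHASE_W)   # 200.0 W even with one phase disabled
# Either figure exceeds the ~165 W the whole board draws, and GPU VDDC
# is only a share of that total.
print(165.0 <= (PHASES - 1) * PHASE_W)   # True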

AMD was very stupid for letting this happen, even if the fix was trivial. While I don't think it will impact RX 480 sales a lot, this will just give nVidia more fodder for when the GTX 1060 launches. They had a home run in their hands and let it slip through their fingers.

I've already bought one; it should be shipped on the 12th when amazon gets more stock. I guess I'll watch ebay and look for a used one for cheap, pretty sure some people will dump 480s after the 1060 becomes available.
 

Who can be so stupid as to sell his RX480 for a 1060? If he was a green fanboy he wouldn't have bought it in the first place. If he isn't a fanboy, he will keep it just because it's the best VFM GPU in many years. :)
 
I'm sure some will. Perception is reality, they say. Right now the perception is that AMD is selling hardware that could damage your system... of course, people like you and me know better but that's not the case for most people.

Anyway, I'm sure it won't affect AMD much, but why give ammunition to your critics in the first place?
 

you can always count on the stupidity of people.. ;)
 

Wonders never cease my friend.
 
omg you're annoying.

The toggle is off by default, so reviewers would have to re-run the card outside its default setting, which they never do. But no doubt they will this time because of the beat-up around this issue.

Like I said from the start, this whole issue is a beat-up. AMD is saying what I said: they are confident the power draw will not damage hardware.

Hardware specs are waaaay on the conservative side. That's why we can overclock the crap out of our computers and not do damage. The PCI-E spec is designed to handle more than the 75 Watt reference spec. Much more. Same with the 6 pin and 8 pin plugs; they can handle double the power of the spec.

If people think an extra 10% or 15% is going to destroy a motherboard, they have no idea how things work.

Get over it..
 
Glad for you Red Team owners, hopefully no more burned mobos!


Can you link me to a thread or video that shows someone who has had a burnt out mobo? I haven't seen any yet; people talk about it, but I still haven't seen any evidence to support it.
 
"Already they have begun to see cases of damaged motherboards"

Really it is not, so you have to grab it with tweezers, and in some cases of those who have caused so much uproar in the network, then we have seen (when reviewing pictures of the burnt motherboard) riding three graphics cards providing only 2 PCI Express slots with adapters extensors-because I was doing mining bitcoins 24 hours seven days a week, and when installing new graphics board was loaded. This case is one of the most publicized, but it was really out of spec to start was the user.

But of course that did not stop him putting together all possible disturbance in the network, although the logical thing would be that high demand for energy that should have started by buying a plate with three PCI Express slots for the three graphs mounted.

:/
 
Fiery (AIDA dev) posted a link in an earlier thread.
 

Maybe the cables can handle it, but the PCIe spec still says 75W from the slot and another 75W from the 6 pin connector, so the card would still be running outside spec. Of course, this is more of a PCI-SIG failure, because they have certified a card that's not actually compliant, but what do they care? They're not the ones selling video cards, so they're not the ones catching flak.
 

If the plug can handle 75 W at 12 V + 3.3 V, it can handle more at 12 V alone.

The reason would be this: electrical transmission losses follow P = I^2 * R, where
R is the resistance in the plug, which stays the same at 3.3 V or 12 V, and
I is the current. At the same power, the current at 3.3 V will be about 3.6 (= 12 V / 3.3 V) times higher than at 12 V, so the loss in one 3.3 V pin will be about 13 (= 3.6^2) times higher than in a 12 V pin.

The 3.3 V pins cannot be used as 12 V pins, but without the added heat from the 3.3 V pins the 12 V pins will have more power headroom, as the whole plug will be cooler.
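
Putting numbers on that (the absolute values are arbitrary; only the ratio matters):

Code:
# I^2 * R loss for the same power through one pin at 3.3 V vs 12 V,
# with equal contact resistance R.
P = 10.0   # watts through the pin, arbitrary
R = 0.01   # ohms of contact resistance, arbitrary but equal for both

i_33 = P / 3.3    # current needed at 3.3 V
i_12 = P / 12.0   # current needed at 12 V
print((i_33**2 * R) / (i_12**2 * R))   # ~13.2 = (12 / 3.3)^2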
 

The official limit for 3.3V is 9.9W, or up to 3A. The 3.3V supply has 4 pins at its disposal; thus, 0.75A per pin maximum.
The 12V limit is actually officially 66W, or 5.5A. Given five pins for power transmission, this is 1.1A per pin maximum.

Also, take a look at your formula again: it correctly says dissipated power = current squared times resistance.
The 12V line is allowed more current per pin than the 3.3V line is...
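
The per-pin arithmetic, spelled out:

Code:
# Per-pin current limits from the quoted slot figures.
print(9.9 / 3.3)   # 3.0  A total allowed on the 3.3 V supply
print(3.0 / 4)     # 0.75 A per pin across its 4 pins
print(66.0 / 12)   # 5.5  A total allowed on the 12 V supply
print(5.5 / 5)     # 1.1  A per pin across its 5 pins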
 
I've never had a PC full of dust, nor seen dust cause a PCIe slot to get burnt and become inoperable. Dust is not good, I grant you that, but it rarely causes any hardware failures, except when it fills up a heatsink and causes overheating.

That might be true and all, but you might want to ask yourself how this individual handles his PC components. I'm guessing with not much care at all, it seems...
I'm sorry, but when the inside of your computer looks like a cat lives in it, with the accompanying fur, piss and god knows what else deposited on your motherboard, then any sort of HW failure claims are nullified in my eyes.
 

The loss in one 3.3 V pin will therefore be 13 times higher than in a 12 V pin at the same power transferred.

I was mostly just thinking it out while typing, and what is there to look at?
Even with the 12 V pins being allowed more A than a 3.3 V pin, I still think the slightly-too-big power draw problem is blown out of proportion.
 
If the plug can handle 75 W at 12 V + 3.3 V, it can handle more at 12 V alone
Source? It's already been raised from 25 to the 75, so that says the plug is maxed out now..
Also, from reading through the spec, they only allowed that since there is better convection cooling in newer PCs.
 
Just read the link okidna posted. I guess we can put all of this behind us and enjoy our 480s :D

I plan to enable the compatibility mode, undervolt the core and overclock the memory as much as possible.
 