Wednesday, July 6th 2016

AMD Updates its Statement on Radeon RX 480 Power Draw Controversy

AMD today provided an update on how it is addressing the Radeon RX 480 power-draw controversy. The company stated that it has assembled a worldwide team of developers to put together a driver update that lowers power draw from the PCIe slot, with minimal performance impact. The driver will be labeled Radeon Software Crimson Edition 16.7.1, and will be released within the next two days (before the weekend). It lowers current drawn from the PCIe slot by default, and additionally offers a "Compatibility" toggle in the Global Settings of the Radeon Settings app, disabled by default, which reduces total power draw. AMD is thus giving users a fix while not making a section of users feel that the card has been gimped by a driver update. The driver will also improve game-specific performance by up to 3 percent.

The statement by AMD follows.

We promised an update today (July 5, 2016) following concerns around the Radeon RX 480 drawing excess current from the PCIe bus. Although we are confident that the levels of reported power draws by the Radeon RX 480 do not pose a risk of damage to motherboards or other PC components based on expected usage, we are serious about addressing this topic and allaying outstanding concerns. Towards that end, we assembled a worldwide team this past weekend to investigate and develop a driver update to improve the power draw. We're pleased to report that this driver - Radeon Software 16.7.1 - is now undergoing final testing and will be released to the public in the next 48 hours.

In this driver we've implemented a change to address power distribution on the Radeon RX 480 - this change will lower current drawn from the PCIe bus.

Separately, we've also included an option to reduce total power with minimal performance impact. Users will find this as the "compatibility" UI toggle in the Global Settings menu of Radeon Settings. This toggle is "off" by default.

Finally, we've implemented a collection of performance improvements for the Polaris architecture that yield performance uplifts in popular game titles of up to 3%. These optimizations are designed to improve the performance of the Radeon RX 480, and should substantially offset the performance impact for users who choose to activate the "compatibility" toggle.

AMD is committed to delivering high quality and high performance products, and we'll continue to provide users with more control over their product's performance and efficiency. We appreciate all the feedback so far, and we'll continue to bring further performance and performance/W optimizations to the Radeon RX 480.

77 Comments on AMD Updates its Statement on Radeon RX 480 Power Draw Controversy

#1
RejZoR
They say it will be OFF by default and you mention it'll be ON by default...
#2
Ferrum Master
btarunr, post: 3484508, member: 43587"
up to 3%1
Up to what? :/
#3
the54thvoid
The statement says the driver will lower the current drawn from the PCIe slot, and the toggle option is for a lower overall power draw. So does this mean they are automatically reducing the draw through the PCIe slot (to meet spec) and then giving the option of an overall lower power draw as well?

Bit confused.

But..... what AMD says about
we'll continue to provide users with more control over their product's performance
could maybe mean, if we're really lucky, that their next few designs (Vega) might have a bit of legroom for enthusiasts to play with.... Fingers crossed.
#4
btarunr
Editor & Senior Moderator
Ferrum Master, post: 3484512, member: 90058"
Up to what? :/
3 percent.

RejZoR, post: 3484511, member: 1515"
They say it will be OFF by default and you mention it'll be ON by default...
Fixed, thanks.
#5
RejZoR
What they said was:
- PCIe power draw really isn't an issue even if it's this high as found on RX480
- we are giving users control to restrict it to 75W with tiny performance penalty that is offset by a 3% performance boost via optimized driver
#6
cryohellinc
Glad for you Red Team owners, hopefully no more burned mobo's!
#7
Chaitanya
Like always, I am going to wait 4-6 months before buying any new card. Hopefully by then all the driver issues will have been straightened out and retail availability will have improved.
#8
ZoneDymo
I don't even understand this problem to begin with; why would a motherboard even be allowed to push so much current through a PCIe slot that it could destroy itself?
Why is there not a hard limit on that?

Secondly, great for those who bought an RX 480, but I think the best bet is still to wait for a 3rd-party card with an 8-pin power connector and a custom (better) cooling solution.
#9
nemesis.ie
RejZoR, post: 3484518, member: 1515"
What they said was:
- PCIe power draw really isn't an issue even if it's this high as found on RX480
- we are giving users control to restrict it to 75W with tiny performance penalty that is offset by a 3% performance boost via optimized driver
I think they've actually said they are doing two things (as well as the original draw being a non-issue as you mention):

1. Lowering the power draw from the PCIe slot (they didn't say by how much or if there was a penalty for that, benching will confirm) and this will be enabled as standard.

and separately:

2. Adding an option to reduce overall power by "some amount" at a cost to performance that should be off-set by a claimed 3% improvement in driver performance, assuming your game of choice is one of the "uplifted" ones.

Number 1 will be the most interesting to see the results of: how much have they reduced the PCIe slot power use? Is the overall use now the same, just with some amount moved to the card's PCIe power connector? And how has this affected performance? In theory, on the uplifted games, performance should be 3% better than in the launch reviews.

Another question is if overall TDP/TBP has changed.

We should know more in a couple of days. ;)
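As a toy sketch of the redistribution question in point 1 above (a fixed total draw, with some of it simply shifted onto the power connector), the check below uses the nominal 75 W limits for the slot and for a 6-pin connector; the draw figures are made-up illustrations, not RX 480 measurements:

```python
# Toy budget check: with a fixed total board power, lowering the slot
# draw just shifts the difference onto the 6-pin connector.
# Limits are the nominal PCIe figures; draws below are invented examples.

SLOT_LIMIT_W = 75.0
SIX_PIN_LIMIT_W = 75.0

def redistribute(total_w, slot_w):
    """Given a fixed total, return (slot, connector) draws and spec status."""
    connector_w = total_w - slot_w
    ok = slot_w <= SLOT_LIMIT_W and connector_w <= SIX_PIN_LIMIT_W
    return slot_w, connector_w, ok

print(redistribute(total_w=160.0, slot_w=85.0))  # slot over its limit
print(redistribute(total_w=160.0, slot_w=72.0))  # now the connector is over
```

In other words, if the total really were unchanged and above 150 W, staying within both nominal limits at once would be impossible, which is exactly why the re-benched totals will be worth checking.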
#10
the54thvoid
ZoneDymo, post: 3484529, member: 66089"
I don't even understand this problem to begin with; why would a motherboard even be allowed to push so much current through a PCIe slot that it could destroy itself?
Why is there not a hard limit on that?

Secondly, great for those who bought an RX 480, but I think the best bet is still to wait for a 3rd-party card with an 8-pin power connector and a custom (better) cooling solution.
3rd-party versions should be great. AMD and Nvidia worry too much (as far as us gamers are concerned) about power restrictions because OEMs and the like want that stuff, but the partners can go hell bent for leather. I should imagine that with custom PCBs the 480 will handle the 1060 easily (given how bad Nvidia has been about locking down the Pascal range so far).
#11
bug
ZoneDymo, post: 3484529, member: 66089"
I don't even understand this problem to begin with; why would a motherboard even be allowed to push so much current through a PCIe slot that it could destroy itself?
I guess that's just the issue: if the mobo decides it will not output that much power, the system may become unstable. And if it does, no one knows what the long term effects may be.

That aside, there will be a performance impact associated with this "fix". I wonder who will bother to review these cards again, because right now it seems AMD just did probably the worst cheating in the history of video cards by allowing all review samples to run outside PCIe specs.
#12
nem..
OK, good to know that eventually everything will be sorted out and the RX 480 will finally perform as it should: replacing the 380 and surpassing the 960 (its rival in price, with the same silicon size) by 45% in DX11 titles, and sitting at 980 level in DX12 titles. It will also be great to see the AIB RX 480 models, as well as the battle against the GTX 1060, the latter practically smoking at 1.7 GHz. :D
#13
ZoneDymo
bug, post: 3484534, member: 157434"
I guess that's just the issue: if the mobo decides it will not output that much power, the system may become unstable. And if it does, no one knows what the long term effects may be.

That aside, there will be a performance impact associated with this "fix". I wonder who will bother to review these cards again, because right now it seems AMD just did probably the worst cheating in the history of video cards by allowing all review samples to run outside PCIe specs.
Well, they are acknowledging that, hence it's a toggleable option; you don't have to turn it on if you don't have issues. And secondly:
"Finally, we've implemented a collection of performance improvements for the Polaris architecture that yield performance uplifts in popular game titles of up to 3%. These optimizations are designed to improve the performance of the Radeon RX 480, and should substantially offset the performance impact for users who choose to activate the "compatibility" toggle."
#14
bug
ZoneDymo, post: 3484537, member: 66089"
Well, they are acknowledging that, hence it's a toggleable option; you don't have to turn it on if you don't have issues. And secondly:
"Finally, we've implemented a collection of performance improvements for the Polaris architecture that yield performance uplifts in popular game titles of up to 3%. These optimizations are designed to improve the performance of the Radeon RX 480, and should substantially offset the performance impact for users who choose to activate the "compatibility" toggle."
Read that statement as you want, but it neither says it addresses ALL titles nor that it makes up for ALL the performance impact; it doesn't exactly define "popular game titles" either. Hence a re-review would be in order.
#15
nemesis.ie
ZoneDymo, post: 3484537, member: 66089"
Well, they are acknowledging that, hence it's a toggleable option; you don't have to turn it on if you don't have issues. And secondly:
"Finally, we've implemented a collection of performance improvements for the Polaris architecture that yield performance uplifts in popular game titles of up to 3%. These optimizations are designed to improve the performance of the Radeon RX 480, and should substantially offset the performance impact for users who choose to activate the "compatibility" toggle."
No, as I wrote above, the base fix is not toggleable. The option is in addition to that.

It remains to be seen what they have done in the "base fix". My guess is they have managed to move at least enough of the power over to the PCIe connector to get it back within, or close to, spec, but the TBP is still likely "a bit higher" than the 150W claimed (although it does seem within margin of error/spec tolerance (10%?) "on average when gaming"), so the optional part will likely reduce that side of the equation. That's just speculation though, and we won't know until it is tested.

I think serious sites like TPU/PCPer will indeed re-bench with the new drivers with the option on and off and show what has changed to both power use/distribution and performance, especially those that have highlighted the issue.
#16
nemesis.ie
cryohellinc, post: 3484524, member: 165485"
Glad for you Red Team owners, hopefully no more burned mobo's!
We should ignore the trolling; but can you show even 1 case where there has been a burned motherboard from this?
#17
ixi
Chaitanya, post: 3484527, member: 93474"
Like always, I am going to wait 4-6 months before buying any new card. Hopefully by then all the driver issues will have been straightened out and retail availability will have improved.
Yeah, same goes for me. I'll wait till prices go down; people with shaking legs and arms can buy the first products, while I wait for bigger stocks and lower prices :).
#18
Fiery
FinalWire / AIDA64 Developer
nemesis.ie, post: 3484552, member: 22637"
We should ignore the trolling; but can you show even 1 case where there has been a burned motherboard from this?
https://community.amd.com/thread/202410
#19
ArdWar
ZoneDymo, post: 3484529, member: 66089"
I don't even understand this problem to begin with; why would a motherboard even be allowed to push so much current through a PCIe slot that it could destroy itself?
Why is there not a hard limit on that?

Secondly, great for those who bought an RX 480, but I think the best bet is still to wait for a 3rd-party card with an 8-pin power connector and a custom (better) cooling solution.
Sigh...

The power pins are connected directly to the motherboard's power and ground planes; there's nothing to limit them.

It's analogous to overloading a power cord: there's nothing that prevents you from loading a 24 AWG wire with 100 amperes of current.

There are fuses, breakers, etc., but the point of the PCIe specification in the first place is to ensure that no one exceeds the limits, so there's no need for the system engineer to add otherwise-avoidable components. That reduces cost, simplifies design and compliance certification, and fewer components means higher reliability (as long as everything behaves as intended).
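To put rough numbers on the power-cord analogy: Joule heating in a wire grows with the square of the current, P = I²R. The sketch below assumes the commonly published resistance of roughly 0.0842 Ω per metre for 24 AWG copper; it is an illustration, not a safety calculation.

```python
# Back-of-the-envelope: heat dissipated per metre of wire, P = I^2 * R.
# 0.0842 ohm/m is the commonly published value for 24 AWG copper (assumption).

def heat_per_metre(current_a, ohms_per_m=0.0842):
    """Joule heating per metre of wire, in watts."""
    return current_a ** 2 * ohms_per_m

print(f"{heat_per_metre(3.5):.2f} W/m")   # around a typical 24 AWG rating: ~1 W/m
print(f"{heat_per_metre(100):.0f} W/m")   # the 100 A overload: ~842 W/m
```

That jump of nearly three orders of magnitude is why the specification, rather than per-pin fuses, is what keeps connectors safe.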
#20
Shamalamadingdong
It's quite clear. They can change power distribution through software. That means no loss in performance, as the excess power draw will move to the PCIe connector (which, despite officially being limited to 75W, can actually do 150W easily; some have even suggested it could do 200W).

The optional fix is to lower the performance to keep both power sources within spec. The performance impact will be small because power and frequency don't scale linearly, so reducing the frequency a little can give a significant reduction in power consumption.

By default it will just change the power distribution, which will be just fine because even a 6-pin connector is over-engineered to the point where it can handle massive amounts of power.
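The nonlinear relationship mentioned above can be sketched with the usual CMOS dynamic-power relation, P ≈ C·f·V²: since lowering the clock usually permits a lower voltage too, a small frequency cut yields an outsized power saving. The clock and voltage figures below are illustrative assumptions, not RX 480 specifications.

```python
# Sketch of why a small clock reduction saves disproportionate power:
# CMOS dynamic power scales roughly as P = C * f * V^2, and a lower
# clock usually allows a lower voltage too. Numbers are illustrative.

def dynamic_power(freq_mhz, volts):
    """Relative dynamic power, P ~ f * V^2 (arbitrary units)."""
    return freq_mhz * volts ** 2

base = dynamic_power(1266, 1.15)                    # hypothetical operating point
lowered = dynamic_power(1266 * 0.95, 1.15 * 0.95)   # clock and voltage down 5%

print(f"performance loss: {1 - 0.95:.1%}")          # roughly tracks the clock drop
print(f"power saving: {1 - lowered / base:.1%}")    # (1 - 0.95^3), about 14.3%
```

So a roughly 5% performance cost buys close to triple that in power savings, which is why the "compatibility" toggle can stay within spec without a dramatic frame-rate hit.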
#21
sutyi
Fiery, post: 3484559, member: 65547"
https://community.amd.com/thread/202410
I almost believed the guy's story, right up to the point where he posted a picture of his rig. The bottom of my desk-side paper bin is cleaner than his motherboard. It has dust clumps, cat hair, and drip marks. The case looks like somebody had a go at it with shears.
#22
Fiery
FinalWire / AIDA64 Developer
sutyi, post: 3484570, member: 112688"
I almost believed the guy's story, right up to the point where he posted a picture of his rig. The bottom of my desk-side paper bin is cleaner than his motherboard. It has dust clumps, cat hair, and drip marks. The case looks like somebody had a go at it with shears.
I've never had a PC so full of dust that the dust caused a PCIe slot to get burnt and become inoperative. Dust is not good, I grant you that, but it rarely causes any hardware failures, except when it fills up a heatsink and causes overheating.
#23
ArdWar
Fiery, post: 3484572, member: 65547"
I've never had a PC so full of dust that the dust caused a PCIe slot to get burnt and become inoperative. Dust is not good, I grant you that, but it rarely causes any hardware failures, except when it fills up a heatsink and causes overheating.
Oxidation is a bigger problem with slots, connectors and pins.
#24
D007
I read twice in that statement that this will affect performance... ATI users will not be happy about that.
#25
ZoneDymo
ArdWar, post: 3484563, member: 156586"
Sigh...

The power pins are connected directly to the motherboard's power and ground planes; there's nothing to limit them.

It's analogous to overloading a power cord: there's nothing that prevents you from loading a 24 AWG wire with 100 amperes of current.

There are fuses, breakers, etc., but the point of the PCIe specification in the first place is to ensure that no one exceeds the limits, so there's no need for the system engineer to add otherwise-avoidable components. That reduces cost, simplifies design and compliance certification, and fewer components means higher reliability (as long as everything behaves as intended).
Ah thanks for the information.
I still find it an odd choice on the motherboard's part, though; I really doubt an extra fuse would make much of a difference, and there are plenty of safeguards in other areas that I would then consider equally unneeded. But sure.
Thanks again.