Friday, July 8th 2016

AMD Releases PCI-Express Power Draw Fix, We Tested, Confirmed, Works

Earlier today, AMD posted a new Radeon Crimson Edition Beta driver, 16.7.1, which includes two fixes for the reported PCI-Express overcurrent issue that has kept the Internet busy over the last few days.

The driver changelog mentions the following: "Radeon RX 480's power distribution has been improved for AMD reference boards, lowering the current drawn from the PCIe bus." There is also a second item: "A new 'compatibility mode' UI toggle has been made available in the Global Settings menu of Radeon Settings. This option is designed to reduce total power with minimal performance impact if end users experience any further issues."

To adjust the power distribution between PCI-Express slot power and power drawn from the PCI-Express 6-pin power connector, AMD uses a feature of the IR3567 voltage controller used on all reference-design cards.
This feature lets you adjust the power-phase balance by changing the controller's configuration via I2C (a way to talk to the voltage controller directly, something GPU-Z also uses, to monitor VRM temperature, for example). By default, power draw is split 50/50 between slot and 6-pin; this can be adjusted per phase with a value between 0 and 15. AMD has chosen a setting of 13 for phases 1, 2, and 3, which effectively shifts some power draw away from the slot onto the 6-pin connector. I'm unsure why they did not pick a setting of 15, which I've tested to shift even more power.
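To make the mechanism concrete, here is a minimal sketch of how such a per-phase rebalance could be expressed as I2C register writes. The bus address and register offsets below are hypothetical placeholders, not the IR3567's real register map; only the 0-15 value range and the choice of 13 on phases 1-3 come from the text above.

```python
# Illustrative sketch only: IR3567_I2C_ADDR and PHASE_BALANCE_BASE are
# made-up values, not the controller's documented register map.
IR3567_I2C_ADDR = 0x08      # hypothetical I2C bus address
PHASE_BALANCE_BASE = 0x30   # hypothetical register offset for phase 1

def phase_balance_writes(phase_values):
    """Build the (register, value) I2C writes for a per-phase balance map.

    phase_values maps a 1-based phase number to a balance value 0-15,
    where a higher value shifts that phase's draw toward the 6-pin input.
    """
    writes = []
    for phase, value in sorted(phase_values.items()):
        if not 0 <= value <= 15:
            raise ValueError(f"balance value {value} out of 0-15 range")
        writes.append((PHASE_BALANCE_BASE + (phase - 1), value))
    return writes

# AMD's 16.7.1 choice, per the text: a setting of 13 on phases 1, 2 and 3
writes = phase_balance_writes({1: 13, 2: 13, 3: 13})
```

Applying such writes on real hardware would go through an SMBus/I2C driver and the card's actual register map; the point here is just the shape of the adjustment.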

The second adjustment is an option inside Radeon Settings called "Compatibility Mode". The name is kind of vague, and the tooltip doesn't reveal much more either. Out of the box, the setting defaults to off and should only be used by people who still run into trouble even with the adjusted power distribution from the first change, which is always active and has no performance impact. When Compatibility Mode is enabled, it slightly limits the performance of the GPU, which reduces overall power draw.

We tested these options; below you will find our results using Metro: Last Light (with the card warmed up before the test run). First we measured power consumption using the previous 16.6.2 driver, then we installed 16.7.1 (while keeping Compatibility Mode off), and finally we turned Compatibility Mode on.

As you can see, the power shift alone, while it works, is not completely sufficient to bring slot power below 75 W; we measured 76 W. As the name suggests, the changed power distribution results in increased power draw from the 6-pin connector, which can easily handle the slightly higher load, though.
With Compatibility Mode enabled, slot power drops to 71 W, which is perfectly safe, but costs performance.

AMD has also promised improved overall performance with 16.7.1, so we took a look at performance, again using Metro.
Here you can see that the new driver adds about 2.3% performance, which is a pretty decent improvement. Once you enable Compatibility Mode, though, performance drops slightly below the original result (0.8% lower), which means Compatibility Mode costs you around 3%, should you really want to use it. I do not recommend using Compatibility Mode; personally, I don't think anyone with a somewhat modern computer would have run into any issues from the increased power draw in the first place, and neither does AMD. It is good to see that AMD still chose to address the problem, and that they solved it fully, properly, and quickly.
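As a quick sanity check on the arithmetic, the "around 3%" figure follows directly from the two measurements; the 1.023 and 0.992 factors below are the +2.3% and -0.8% results relative to the 16.6.2 baseline:

```python
# Performance relative to the 16.6.2 baseline, from the Metro runs
new_driver = 1.023   # 16.7.1, Compatibility Mode off: +2.3%
compat_mode = 0.992  # 16.7.1, Compatibility Mode on: -0.8%

# Cost of enabling Compatibility Mode, relative to the new driver's result
cost = (new_driver - compat_mode) / new_driver
print(f"{cost:.1%}")  # prints "3.0%"
```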

147 Comments on AMD Releases PCI-Express Power Draw Fix, We Tested, Confirmed, Works

#51
cadaveca
My name is Dave
the54thvoid wrote (post 3486244):
Is the rumour not that the higher SKU, RX490, is a dual card?
Sarcasm. RX485, not RX490.

I don't think it actually had anything to do with that. I originally thought it made their binning process easier to do it the way they did.
Posted on Reply
#52
the54thvoid
$ReaPeR$ wrote (post 3486278):
isn't the 490 supposed to have a Vega GPU?
edit: damn! well this sounds stupid!! if it's a dual-GPU solution I don't think it will be much of an opponent to the 1080.. Crossfire never scales 100%, and sometimes it doesn't work at all.
I got it through a google now feed...

http://wccftech.com/amd-rx-490-dual-gpu/

Good old WCCFTech.....

Perhaps Vega will be called "Faster and Fury-ous"?
Posted on Reply
#53
RejZoR
Well, if it is a dual-GPU card and they are committed to delivering drivers 1 day before game releases, then I'd actually be fine with it. About GTX 1080 performance for roughly 200 € less, hell yeah.
Posted on Reply
#54
Tatty_One
Senior Moder@tor
I see the children are misbehaving in the nursery yet again. Reply bans for this thread will be issued if it continues, followed by free holiday passes. It's been a quiet day, feel free to liven it up for me...... thank you as always.
Posted on Reply
#55
PP Mguire
$ReaPeR$ wrote (post 3486278):
isn't the 490 supposed to have a Vega GPU?
edit: damn! well this sounds stupid!! if it's a dual-GPU solution I don't think it will be much of an opponent to the 1080.. Crossfire never scales 100%, and sometimes it doesn't work at all.

well, I'm sorry that I'm not as godlike as you and perfect in every way. your problem was with my stupid post and not with the fanboys who blew this way out of proportion. and, in order not to be misunderstood, I'm not talking about the people who saw this as it is, a stupid mistake that could easily have been avoided; I'm talking about the people who were posting with the "this card is shit, AMD is worthless, I told you so" kind of mentality. and you judge me to be hypocritical.. yeah sure mate.
Edited due to above post not being seen.
Posted on Reply
#56
xvi
I think it's a bit funny that at first everyone complains about the PCIe bus power issue and now that it's fixed, they're talking about how beefy the VRMs are and how much power they think we can push through them.
W1zzard wrote (post 3486238):
They could have just connected one more phase to the 6-pin, instead of 50/50; very easy to do during the board design stage.
Fair point. Think there's any chance that non-reference cards will make this modification?
Posted on Reply
#57
$ReaPeR$
the54thvoid wrote (post 3486285):
I got it through a google now feed...

http://wccftech.com/amd-rx-490-dual-gpu/

Good old WCCFTech.....

Perhaps Vega will be called "Faster and Fury-ous"?
LOL, that would be funny!! I don't mind the 2-tier approach, though it may be confusing for the average user..

RejZoR wrote (post 3486286):
Well, if it is a dual-GPU card and they are committed to delivering drivers 1 day before game releases, then I'd actually be fine with it. About GTX 1080 performance for roughly 200 € less, hell yeah.
I think you are a bit optimistic, mate (it's not a bad thing, I also hope that's the case). I don't know if AMD can pull that off; if it worked, that would be great though. Doesn't DX12 play a role in this situation?
Posted on Reply
#58
GhostRyder
cadaveca wrote (post 3486245):
Sarcasm. RX485, not RX490.

I don't think it actually had anything to do with that. I originally thought it made their binning process easier to do it the way they did.
I am going to slap my head if they do that with the naming scheme alone. Though I would hope that if they do an RX 485/RX 480X, it's just a full version of this chip (if there is such a thing, unless the RX 480 already has the full chip).

the54thvoid wrote (post 3486285):
I got it through a google now feed...

http://wccftech.com/amd-rx-490-dual-gpu/

Good old WCCFTech.....

Perhaps Vega will be called "Faster and Fury-ous"?
the54thvoid wrote (post 3486244):
Is the rumour not that the higher SKU, RX490, is a dual card?
If they do that, then they might as well write off their high-end department and call it a day. But then again, it is WCCFTech...
Posted on Reply
#59
$ReaPeR$
GhostRyder wrote (post 3486322):
I am going to slap my head if they do that with the naming scheme alone. Though I would hope that if they do an RX 485/RX 480X, it's just a full version of this chip (if there is such a thing, unless the RX 480 already has the full chip).

If they do that, then they might as well write off their high-end department and call it a day. But then again, it is WCCFTech...
I think the 480 has the full chip; I haven't seen anything that would suggest otherwise.
As for the 490's "duality" :D I don't think they have any other choice until Vega is ready. Which is sad, because it means that prices will remain high in the Nvidia camp due to lack of competition. Neither SLI nor CF has been better than a single-card solution so far.
Posted on Reply
#60
PP Mguire
xvi wrote (post 3486302):
I think it's a bit funny that at first everyone complains about the PCIe bus power issue and now that it's fixed, they're talking about how beefy the VRMs are and how much power they think we can push through them.

Fair point. Think there's any chance that non-reference cards will make this modification?
I found this a bit hilarious as well.
Posted on Reply
#61
Jism
PP Mguire wrote (post 3486341):
I found this a bit hilarious as well.
I think half of the members in this thread should apply at AMD for a VGA engineering job.

Geezus. You know that some motherboards offer Crossfire or SLI with up to 4 slots? And those slots DON'T even have an extra Molex connector? How do you think those boards hold up the moment you put 4 cards in without any PCI-Express booster? That's 4x 75 W pulled from the PCI-E bus alone, over 2 wires on that poor 24-pin ATX connector.

Geezus, people, many ancient motherboards even offer an onboard Molex connector; it probably sits near the PCI-Express slot and feeds in additional current.

And if it's not there, you'd buy a PCI-Express booster:

This way you are able to pull more current than the standard 75 W.
Posted on Reply
#62
RejZoR
PCIe is specified to deliver 75 W per slot. If a board can't deliver that power, why the F does it even exist then?
Posted on Reply
#63
Jism
RejZoR wrote (post 3486353):
PCIe is specified to deliver 75 W per slot. If a board can't deliver that power, why the F does it even exist then?
It's not about delivery, it's about the traces through the motherboard; ancient & cheap ones are designed to handle a current of up to 75 W. If a card exceeds that 75 W, the problem is that the traces might eventually burn or simply 'degrade' from running too hot for too long. That, in theory, is what is going on.

The 12 V power comes straight from the 24-pin ATX plug. Some motherboards even share the 8/4 pins from the CPU with the PCI-Express slots.
Posted on Reply
#64
ensabrenoir
Jism wrote (post 3486351):
I think half of the members in this thread should apply at AMD for a VGA engineering job.

Geezus. You know that some motherboards offer Crossfire or SLI with up to 4 slots? And those slots DON'T even have an extra Molex connector? How do you think those boards hold up the moment you put 4 cards in without any PCI-Express booster? That's 4x 75 W pulled from the PCI-E bus alone, over 2 wires on that poor 24-pin ATX connector.

Geezus, people, many ancient motherboards even offer an onboard Molex connector; it probably sits near the PCI-Express slot and feeds in additional current.

And if it's not there, you'd buy a PCI-Express booster:

This way you are able to pull more current than the standard 75 W.
Cool!!!!!! It would be like Twitch Plays Pokemon!!!!!! Thousands of people designing a GPU over the internet.
Posted on Reply
#65
xvi
ensabrenoir wrote (post 3486360):
Cool!!!!!! It would be like Twitch Plays Pokemon!!!!!! Thousands of people designing a GPU over the internet.
Praise Helix.
Posted on Reply
#66
GhostRyder
$ReaPeR$ wrote (post 3486327):
I think the 480 has the full chip; I haven't seen anything that would suggest otherwise.
As for the 490's "duality" :D I don't think they have any other choice until Vega is ready. Which is sad, because it means that prices will remain high in the Nvidia camp due to lack of competition. Neither SLI nor CF has been better than a single-card solution so far.
Yeah, I am granting that it may be the case, unfortunately. I was actually hoping it was like the R9 285/380 and they were just stocking up/waiting to release the full chip.

As for the dual card, it would be very foolish if they do that. Unless it's priced super aggressively (like 1.5-1.75x the price of the RX 480), it's not going to be worth it. Plus it's still a dual card, which carries lots of problems relating to scaling in proper games. I don't think they would do that, personally, because dual GPUs have been on the downslide a lot lately in sales.
Posted on Reply
#68
Fluffmeister
Oh nem, I love your attempts at damage control.
Posted on Reply
#69
CounterZeus
nem.. wrote (post 3486403):
ASUS GTX 950 2 GB (no power connector), 79 W TDP

https://www.techpowerup.com/reviews/ASUS/GTX_950/21.HTML
79 W was the peak during gaming, the highest value (it might well be a little power spike), while Furmark only hit 76 W. Pretty well within the limits, I would say.

Btw, TDP means thermal design power; it's designed to still be coolable up to 75 W.
Posted on Reply
#70
AsRock
TPU addict
I was thinking the same, that it was odd that gaming was higher, so it was just a spike. Not that any of it really matters anyway; AMD fixed the shit and people should be happy and leave it at that.

Fluffmeister wrote (post 3486404):
Oh nem, I love your attempts at damage control.
Thing is, there is no damage to control, and tbh I have no idea why he would post that anyway, as it all looks good.
Posted on Reply
#71
GhostRyder
nem.. wrote (post 3486403):
ASUS GTX 950 2 GB (no power connector), 79 W TDP

https://www.techpowerup.com/reviews/ASUS/GTX_950/21.HTML
First off, TDP is not actual power usage. Second, so what? Not much on the subject, and we already deduced how little of a problem this is anyway.
AsRock wrote (post 3486408):
I was thinking the same, that it was odd that gaming was higher, so it was just a spike. Not that any of it really matters anyway; AMD fixed the shit and people should be happy and leave it at that.

Thing is, there is no damage to control, and tbh I have no idea why he would post that anyway, as it all looks good.
Bingo
Posted on Reply
#72
nem..
GhostRyder wrote (post 3486409):
First off, TDP is not actual power usage. Second, so what? Not much on the subject, and we already deduced how little of a problem this is anyway.

Bingo
Even without OC, the power consumption has been more than 75 W from the PCIe port.

TDP (thermal design power) means the average power dissipated, as I understand it, but in this case there is no doubt the real TDP is 75 W or even more, as in the OC case where consumption rises without touching the voltage.


CounterZeus wrote (post 3486405):
79 W was the peak during gaming, the highest value (it might well be a little power spike), while Furmark only hit 76 W. Pretty well within the limits, I would say.

Btw, TDP means thermal design power; it's designed to still be coolable up to 75 W.
The point is that the 75 W limit has been exceeded many times; these are not just spikes in power consumption, with the Asus Strix marking up to 250 W.


Posted on Reply
#73
Fluffmeister
AsRock wrote (post 3486408):
I was thinking the same, that it was odd that gaming was higher, so it was just a spike. Not that any of it really matters anyway; AMD fixed the shit and people should be happy and leave it at that.

Thing is, there is no damage to control, and tbh I have no idea why he would post that anyway, as it all looks good.
Thing is, nem is still posting random nonsense; he stinks of damage control.
Posted on Reply
#74
Fluffmeister
nem.. wrote (post 3486421):
the only one here posting nonsense that I can see is you. :D
Honestly, I hope you aren't banned again, you're my fave.

Glad AMD addressed the juicy-ness of the 480.
Posted on Reply
#75
xvi
*hovers over unsub button*

It's almost always a mistake subbing to any GPU-related thread. :shadedshu:
Posted on Reply