Saturday, July 2nd 2016

Official Statement from AMD on the PCI-Express Overcurrent Issue

AMD sent us this statement in response to growing concern among our readers that the Radeon RX 480 graphics card violates the PCI-Express power specification by overdrawing power from its single 6-pin PCIe power connector and the PCI-Express slot. Combined, the total power budget of the card should be 150 W; however, it was found to draw well over that limit.

AMD has had out-of-spec power designs in the past, the Radeon R9 295X2 for example, but that card was targeted at buyers with reasonably good PSUs. The RX 480's target audience could face trouble powering the card. Below is AMD's statement on the matter. The company stated that it's working on a driver update that could cap the power at 150 W. It will be interesting to see how that power limit affects performance.
"As you know, we continuously tune our GPUs in order to maximize their performance within their given power envelopes and the speed of the memory interface, which in this case is an unprecedented 8 Gbps for GDDR5. Recently, we identified select scenarios where the tuning of some RX 480 boards was not optimal. Fortunately, we can adjust the GPU's tuning via software in order to resolve this issue. We are already testing a driver that implements a fix, and we will provide an update to the community on our progress on Tuesday (July 5, 2016)."
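A driver-side cap of this kind is effectively a software power governor: the driver steps clocks (and voltage) down until board power falls back inside the envelope. Below is a minimal sketch of that feedback loop in Python, with entirely hypothetical power figures and a deliberately simplistic linear model; real driver tuning is far more involved.

```python
# Hypothetical sketch of a software power governor: step the core
# clock down until a (toy) power model fits the 150 W board budget.

POWER_LIMIT_W = 150.0
BASE_CLOCK_MHZ = 1266.0   # RX 480 boost clock (public spec)
BASE_POWER_W = 160.0      # assumed out-of-spec draw, for illustration

def modeled_power(clock_mhz):
    # Assumption: power scales roughly linearly with clock here,
    # which is a simplification of real DVFS behavior.
    return BASE_POWER_W * (clock_mhz / BASE_CLOCK_MHZ)

def govern(clock_mhz, step_mhz=13.0):
    """Lower the clock until the modeled power is within the limit."""
    while modeled_power(clock_mhz) > POWER_LIMIT_W and clock_mhz > 0:
        clock_mhz -= step_mhz
    return clock_mhz
```

With these toy numbers, `govern(1266.0)` settles at a lower clock whose modeled draw is back under 150 W, which is why the question of how the cap affects performance is a fair one.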
Add your own comment

358 Comments on Official Statement from AMD on the PCI-Express Overcurrent Issue

#101
GhostRyder
newtekie1
Even the $30 430w eVGA has an 8-pin.

And the ASUS gtx950 with no power connector pulls a maximum of 76w from the slot. So they managed to keep it right at the limit.


Yea, even the Corsair CX 430 has an 8-pin. In short, putting only a 6-pin on it was a foolish decision. The whole sad issue is that this seems to be a detriment for no logical reason. I mean, we're already hearing (supposedly) of other vendors hitting 1500 MHz+ on cards with different power systems. If they wanted to make one low end, then limit it with the 4 GB version, and with the 8 GB give us the power. I am so lost by the reasoning behind this...

I am just curious if you are able to go a little past it with overclocking. Have not tried yet...Excuse me, while I see what I can do with it lol.
Posted on Reply
#102
$ReaPeR$
sith'ari
-Buddy, my system remains ancient for a certain reason. If I wanted to, I could replace it within a heartbeat! It's not a question of money!
-P.S. I won't hide my feelings for AMD. I clearly remember the period before the Fury X's release. They had been brainwashing us for months about the tremendous capabilities of HBM memory, making us believe that they would release some kind of rocket of a GPU that would destroy all competition!! And when they finally released this "rocket," it struggled to surpass a reference 980 Ti!! EDIT: That was the LAST time I took them seriously!!
It's not AMD's fault if people cannot think critically and buy into the hype. If you cannot grasp the simple fact that the VRAM is secondary to the GPU concerning total performance, then I think you are in the wrong forum, buddy.

R-T-B
If you have a junky motherboard, maybe, but I doubt any mainstream brands don't build in A LITTLE reserve.
I know, mate... but people... well, people are people.
Posted on Reply
#104
HTC
GhostRyder
Yea, even the Corsair CX 430 has an 8 pin. In short putting a 6 pin only was a foolish decision. The whole sad issue is that this seems to only be a detriment for no logical reason. I mean were already hearing (Supposedly) other vendors hitting 1500mhz+ on cards with different power system. If they wanted to make one low end, then limit it with the 4gb, and with the 8gb give use the power. I am so lost by the reasoning behind this...

I am just curious if you are able to go a little past it with overclocking. Have not tried yet...Excuse me, while I see what I can do with it lol.
By watching the video I posted just before this reply, I've learned that the problem isn't the card using over 150 W: the problem is the PCI-e slot consistently supplying over 75 W. Had the over-150 W draw all come from the 6-pin, it wouldn't be such an issue, but when some of it comes from the PCI-e slot, that can be dangerous.
Posted on Reply
#105
GhostRyder
HTC
By watching the video I posted just before this reply, I've learned that the problem isn't the card using over 150 W: the problem is the PCI-e slot consistently supplying over 75 W. Had the over-150 W draw all come from the 6-pin, it wouldn't be such an issue, but when some of it comes from the PCI-e slot, that can be dangerous.
Yea, that part is also a problem, but what I am saying is it makes no logical sense not to just put an 8-pin on it and call it a day, or hard limit it (since they have two versions, do one of each).
Posted on Reply
#106
HTC
IMHO, AMD has once again shot itself in the foot.

They could have made this card a bit slower so that it used around 130 W or so of total power. Its performance would be worse, obviously, but it shouldn't be too much of a hit to reach that wattage. Ofc, for this to work, the wattage coming from the PCI-e slot should not exceed 70 W, even when the card is overclocked.

Had they taken this approach, the card would still be a good performer and none of this power-consumption fiasco would have occurred ... but noooo ... they just HAD to shoot themselves AGAIN ...
Posted on Reply
#107
GC_PaNzerFIN
Putting an 8-pin power connector on it would have allowed them to feed the GPU VREG solely from that connector, leaving the PCI-e slot for less power-hungry items like memory and meeting all PCI-E SIG specs. Really, there is no technical or even cost reason why they couldn't have done this when they realized what the TDP was going to be.
The only reason that comes to mind is that the marketing/PR department had long since said there would be a 6-pin, and the engineering team facepalmed while doing the only thing they could: using the PCI-E bus to deliver power to the GPU as well.
Posted on Reply
#108
$ReaPeR$
R-T-B
I hear that. I mean somehow, China sells shit like this to this very day:

https://www.techpowerup.com/forums/threads/x58-unknown-motherboard.223785/

:)
Holy shit!! But I've seen worse... much worse, lack-of-solid-state-caps-for-the-CPU worse. What can you do, people are ignorant and there are so many assholes that will exploit that ignorance...
I was wondering something, though: could AMD use the drivers to divert power draw from the PCIe slot to the 6-pin? Is that even possible?
Posted on Reply
#109
theoneandonlymrk
Mine's clocked to a meager 1300 stable at the minute, and it's not as good as my 390 yet, but I think driver updates will help given a few months, and two of these make more sense than 2x 390. Folding power is up a bit too. Having already sold my 390, it's definitely a sidegrade all in, though a water block will help, I'm sure. As for power over PCIe, I have a feeling it will be fine.
Posted on Reply
#110
arbiter
HTC
By watching the video I posted just before this reply, I've learned that the problem isn't the card using over 150 W: the problem is the PCI-e slot consistently supplying over 75 W. Had the over-150 W draw all come from the 6-pin, it wouldn't be such an issue, but when some of it comes from the PCI-e slot, that can be dangerous.
The 6-pin/8-pin can deliver well over what the spec says; case in point, the reference 295X2: AMD had that card pulling anywhere from 240 to almost 300 watts per plug. That card had a TDP of 500 W but was noted to draw as much as 600. Traces in a motherboard are much smaller than the wires of a PCI-e power cable, so cheap and even mid-range boards could be affected. The reason I say mid-range boards is that they're a little cheaper, but not by much, so the extra power draw over, say, six months or a year could eventually cause a failure.
Posted on Reply
#111
jabbadap
HTC
IMHO, AMD has once again shot itself in the foot.

They could have made this card a bit slower so that it used around 130 W or so of total power. Its performance would be worse, obviously, but it shouldn't be too much of a hit to reach that wattage. Ofc, for this to work, the wattage coming from the PCI-e slot should not exceed 70 W, even when the card is overclocked.

Had they taken this approach, the card would still be a good performer and none of this power-consumption fiasco would have occurred ... but noooo ... they just HAD to shoot themselves AGAIN ...
The sad part is, they shouldn't even have to make it worse. They could have restricted PCIe slot power to below the spec and taken the extra power from the 6-pin connector (best practice; this way you don't overdraw the PCIe slot even while overclocking), or slapped that damn 8-pin connector on it.
Posted on Reply
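jabbadap's suggestion (bias the power split so the slot stays under spec and the 6-pin carries the overage) can be sketched as a simple budget allocator. All numbers and names here are illustrative, not AMD's actual VRM phase assignment:

```python
# Illustrative power split: cap the PCIe slot's contribution below its
# 75 W spec and route the rest of the board draw to the 6-pin connector.

SLOT_SPEC_W = 75.0
SLOT_CAP_W = 70.0   # assumed safety margin below the slot spec

def split_power(total_w):
    """Return (slot_watts, six_pin_watts) with the slot draw capped."""
    slot = min(total_w / 2, SLOT_CAP_W)   # nominal 50/50, then capped
    return slot, total_w - slot
```

With a 165 W total, this yields 70 W from the slot and 95 W from the 6-pin; as arbiter notes above, connector wiring tolerates overdraw far better than motherboard traces do.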
#112
$ReaPeR$
GC_PaNzerFIN
Putting a 8-pin power connector would have allowed them to have it solely input GPU VREG leaving pci-e slot for less power hungry items like memory, meeting all PCI-E SIG specs. Really, there is no technical or even cost reason why they couldn't have done this when they realized what the TDP was going to be.
Only reason that comes to my mind is that marketing PR department had long before said there will be a 6-pin, and engineering team facepalmed while doing only thing they could, using PCI-E bus to deliver power to GPU as well.
That is probably the reason. Marketing people shouldn't even exist; the only product they make is bullshit, and we already have cows for that.
Posted on Reply
#113
$ReaPeR$
jabbadap
The sad part is, they shouldn't even have to make it worse. They could have restricted PCIe slot power to below the spec and taken the extra power from the 6-pin connector (best practice; this way you don't overdraw the PCIe slot even while overclocking), or slapped that damn 8-pin connector on it.
Could they do that from the driver?
Posted on Reply
#114
HTC
arbiter
The 6pin/8pin can do well over what spec says of them, case in point the reference 295x2, AMD had that card pulling anywhere 240 to almost 300watts per plug. That card had a TDP of 500watts but it was noted to draw as much as 600. Traces in a MB are much smaller then that wire of PCI-e power cable. so if board is cheap and even mid range boards could be effected. Reason i say midrange boards is they will be little cheaper but not as much so extra power draw over say 6 months or a year could eventually fail.
Drawing high amounts of power isn't necessarily bad, UNLESS that extra power comes from the PCI-e slot on a consistent basis: if it has high spikes but keeps a within-tolerance average, it should be OK, depending on how high those spikes actually are.
Posted on Reply
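HTC's distinction between brief spikes and a sustained out-of-spec average can be checked numerically with a rolling mean over power samples. The figures below are hypothetical; real slot-power measurement needs oscilloscope-grade equipment like Tom's Hardware used:

```python
# Distinguish transient spikes from a sustained out-of-spec draw by
# checking every rolling average of slot-power samples against spec.

SLOT_SPEC_W = 75.0

def sustained_over_spec(samples_w, window=4):
    """True if any `window`-sample rolling average exceeds the spec."""
    for i in range(len(samples_w) - window + 1):
        chunk = samples_w[i:i + window]
        if sum(chunk) / window > SLOT_SPEC_W:
            return True
    return False

spiky = [60, 95, 62, 61, 70, 63]       # short spikes, average stays OK
sustained = [82, 84, 83, 85, 82, 84]   # consistently over 75 W
```

`sustained_over_spec(spiky)` is False while `sustained_over_spec(sustained)` is True, matching the intuition that constant overdraw, not the occasional peak, is what stresses the slot.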
#115
john_
cadaveca
Perhaps AMD gave the card a BIOS that allowed it to exceed PCIe spec because it wanted it to be reviewed in the best light, and knew reviewers sometimes do not investigate OC? Given how their clocks are "managed" compared to NVidia's Turbo, this actually seems like the most reasonable explanation for what happened. Not every site has the capability to accurately measure power consumption for PCIe devices, so many sites wouldn't even be able to test such an issue.
AMD should know by now that even if one site finds something that looks like a problem, all hell will break loose. If they were thinking like that, then maybe AMD's engineers live under a rock and have no contact with the internet and the real world. Like Nvidia's engineers, who were also living under a rock and never noticed that one of their company's products was selling for over 6 months with wrong specs. Or someone at AMD is a moron. Probably he also insists on giving exclusive interviews to unfriendly sites.
Posted on Reply
#116
arbiter
john_
AMD should know by now that even if one site finds something that looks like a problem, all hell will break loose. If they were thinking like that, then maybe AMD's engineers live under a rock and have no contact with the internet and the real world. Like Nvidia's engineers, who were also living under a rock and never noticed that one of their company's products was selling for over 6 months with wrong specs. Or someone at AMD is a moron. Probably he also insists on giving exclusive interviews to unfriendly sites.
A theory that was floated is that the 480 was clocked lower, but the launch of the Pascal 1070/1080 changed what clocks the 480 was set to run at, which changed the power draw. It's a plausible idea, and AMD didn't have time to test it, which they should have.

Technically, the 970 wasn't selling with the wrong specs. It HAS 4 GB of memory no matter how much people say it's only 3.5; there are 4 GB there, so the specs were correct.
Posted on Reply
#117
john_
newtekie1
And the ASUS gtx950 with no power connector pulls a maximum of 76w from the slot. So they managed to keep it right at the limit.
Overclock it and you'd probably go to 85-90 W.

Custom RX 480 cards will come and this matter will be forgotten fast. It will be only one of the things that fanboys will be remembering in fanboy wars. "RX 480 was a fire hazard". "The same can be said for GTX 570(I think)" etc.

What I think we should learn here is that when overclocking a card that is at its TDP limit from the factory (GTX 950 with no power connector, R9 270X with only one 6-pin), we are not just stressing the card, we are probably stressing the motherboard. I was in total darkness until now. Was I the only one? I wonder how many people out there get a GTX 950 without a power connector and overclock it because
"It doesn't need a power connector, so it must have some really top quality GPU in there that probably overclocks better than those used in cards that need an extra power connector".


arbiter
A theory that was floated is that the 480 was clocked lower, but the launch of the Pascal 1070/1080 changed what clocks the 480 was set to run at, which changed the power draw. It's a plausible idea, and AMD didn't have time to test it, which they should have.

Technically, the 970 wasn't selling with the wrong specs. It HAS 4 GB of memory no matter how much people say it's only 3.5; there are 4 GB there, so the specs were correct.
The real competition for AMD, the bar they had to pass, was the GTX 970, not the 1070 or 1080. They needed a card faster than the GTX 970 while staying at the 150 W TDP limit. Lowering the clocks would probably have been enough to lose some benchmarks, so they decided to go over the TDP limits, probably figuring that if there were any incidents, they would be few. Wrong thinking.

Technically, if my EVO 840 were a 100 GB SSD plus a 20 GB HDD, I wouldn't say "That's OK, I always have 20 GB free", or "That's OK, 100 GB + 20 GB is 120 GB".

I don't make any excuses for AMD, and no one should. The world will be a better place, and products will be of better quality, if people also stop making excuses for Nvidia's mistakes/lies.

PS Also less ROPs, less cache, less bandwidth. You keep forgetting those.
Posted on Reply
#118
arbiter
HTC
I found this video to be quite informative:


If the people who claimed the "960 uses 200 watts" or that the 750 Ti does the same would watch the first 10 minutes of this video, which explains in plain and easy terms what you see in Tom's Hardware's graph, they would understand things a bit more.
Posted on Reply
#119
newtekie1
Semi-Retired Folder
john_
Overclock it and you'd probably go to 85-90 W.

Custom RX 480 cards will come and this matter will be forgotten fast. It will be only one of the things that fanboys will be remembering in fanboy wars. "RX 480 was a fire hazard". "The same can be said for GTX 570(I think)" etc.

What I think we should learn here is that when overclocking a card that is at its TDP limit from the factory (GTX 950 with no power connector, R9 270X with only one 6-pin), we are not just stressing the card, we are probably stressing the motherboard. I was in total darkness until now. Was I the only one? I wonder how many people out there get a GTX 950 without a power connector and overclock it because
"It doesn't need a power connector, so it must have some really top quality GPU in there that probably overclocks better than those used in cards that need an extra power connector".
Except that isn't how it works on modern cards anymore. They have power limits in place to make sure they don't go over their power target. The power limit on the GTX 950 with no power connector was 75 W. You can up the clocks all you want, but GPU Boost will drop them back down to stay within the 75 W power limit. That is why tests like FurMark don't give stupidly high power numbers anymore. So overclocking without raising the power limit would still give ~75 W.

john_
Technically if my EVO 840 was 100GB SSD and 20GB HDD, I wouldn't say "That's OK, I always have 20GBs free", or "That's OK, 100GBs+20GBs it's 120GBs".
Interesting analogy, and somewhat fitting. In fact, there are MLC SSDs that perform crazy well, almost at SLC levels, until you fill them up a certain amount, and then the performance starts to drop off. The reason is that they run all the MLC flash in SLC mode until the extra space is needed, then switch to MLC mode. But the benchmarks in the review sure look good.
Posted on Reply
#121
INSTG8R
My Custom Title
GC_PaNzerFIN
https://bitcointalk.org/index.php?topic=1433925.msg15438988#msg15438988


Hmm, has anyone run any GPGPU compute power measurements on the RX 480? See above, claiming 3x underclocked RX 480s + coin mining killed it.
MoBo still has an IDE port... Seems legit. Should be in spec.
Posted on Reply
#122
qubit
Overclocked quantum bit
HTC
I found this video to be quite informative:


Just watched the whole video and now I'm even happier that I haven't bought an AMD card since 2008. Significant driver and performance glitches are one thing (and bad enough), but potentially killing the mobo with excess current is a new low. There's no way they couldn't have known about this at the design and testing phase. No, they tried to palm off a substandard product and hoped they wouldn't get caught. It amounts to a kind of fraud FFS. :rolleyes:

IMO these cards should be pulled from the market until the fix has been applied and tested to be effective.

The way this company is going I'm unlikely to ever buy one of their graphics cards again. No wonder NVIDIA can charge what they like for their cards. At least they work beautifully most of the time.
Posted on Reply
#123
GC_PaNzerFIN
INSTG8R
MoBo still has an IDE port... Seems legit. Should be in spec.
It is actually not THAT old a motherboard, the generation previous to Sandy Bridge. I spent a while investigating this, and I have a much more valid question: how did they manage to connect 3 cards to that board, even with ribbon cables?
Posted on Reply
#124
ppn
The ATX 24-pin is the weak point. It is the equivalent of 2/3 of a 6-pin; it can't compensate for the lack of 3x 6-pin.

Actually, you can connect as many cards as this motherboard has PCIe slots, which is 4: x16, x4, x1, x1.
Posted on Reply
#125
GC_PaNzerFIN
ppn
The ATX 24-pin is the weak point. It is the equivalent of 2/3 of a 6-pin; it can't compensate for the lack of 3x 6-pin.

Actually, you can connect as many cards as this motherboard has PCIe slots, which is 4: x16, x4, x1, x1.
Based on the connector scheme of the ASRock H81 BTC boards, it does seem people usually use x1 ribbon cables for mining rigs. Indeed possible, then.
Posted on Reply
Add your own comment