Wednesday, August 17th 2016

High PCIe Slot Power Draw Costs RX 480 PCI-SIG Integrator Listing

AMD's design of the Radeon RX 480 graphics card, which draws over 75W of power from the PCI-Express x16 slot, has cost it a product listing on the PCI-SIG Integrators List. The list is compiled for hardware devices that implement the various PCI-Express specifications to the letter. The RX 480 is off-spec in that it overdraws power from the slot: the card needs more power than the slot and the 6-pin PCIe power connector can together provide within spec. According to the specification, the slot can provide up to 75W of power, and the 6-pin connector another 75W. The RX 480 was tested to draw more than this 150W power budget.

What this means for AMD is that it cannot display the PCI-Express certification logo on the product or in its marketing materials. This, however, may not affect AMD's add-in board (AIB) partners, which are PCI-SIG members in their own right and make graphics cards under their own sub-vendor IDs, provided their power-supply designs comply with PCIe specs. Custom-design cards with an 8-pin PCIe power connector instead of a 6-pin one may qualify, as the combination of the 8-pin connector and the slot yields a power budget of 225W. AMD addressed the slot overdraw issue with a software fix in Radeon Software Crimson Edition 16.7.1 Beta.
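The power-budget arithmetic above can be sketched in a few lines of Python. The wattage figures come from the article; the function name is illustrative, not part of any spec:

```python
# Power limits (watts) as cited in the article (PCIe CEM figures).
SLOT_W = 75                                # x16 slot
CONNECTOR_W = {"6-pin": 75, "8-pin": 150}  # auxiliary PCIe power connectors

def board_budget(connectors):
    """Total in-spec board power: slot plus each auxiliary connector."""
    return SLOT_W + sum(CONNECTOR_W[c] for c in connectors)

print(board_budget(["6-pin"]))  # reference RX 480 design: 150 W
print(board_budget(["8-pin"]))  # custom 8-pin design: 225 W
```

A card drawing more than `board_budget(...)` watts total, or more than a single source's share, is what puts it off-spec.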
Source: Heise.de
Add your own comment

63 Comments on High PCIe Slot Power Draw Costs RX 480 PCI-SIG Integrator Listing

#51
kn00tcn
REALLY... are people such stupid douchebags that they bring up 970!?

gee i didn't know JEDEC makes a compliance list, or that segmented memory speed is anything close to electricity & failures or even fires

obviously we don't care about the sticker, but what about someone with a lower-end mobo? what about corporate IT support? what about an OEM? how about thinking of others for once

which reminds me, never called out the hysterical mob in the call of duty article months ago, sick of such people infesting what should be an intelligent empathetic enthusiast forum
Posted on Reply
#52
Pewzor
kn00tcn said:
REALLY... are people such stupid douchebags that they bring up 970!?

gee i didn't know JEDEC makes a compliance list, or that segmented memory speed is anything close to electricity & failures or even fires

obviously we don't care about the sticker, but what about someone with a lower-end mobo? what about corporate IT support? what about an OEM? how about thinking of others for once

which reminds me, never called out the hysterical mob in the call of duty article months ago, sick of such people infesting what should be an intelligent empathetic enthusiast forum
Calm your tits, just a prank bro. XDDD.

Nice reply tho I score you 3.5/4
Posted on Reply
#53
kn00tcn
Pewzor said:
just a prank bro
that did make me giggle

but i'm cereal, dough
Posted on Reply
#54
xorbe
Hmm. Your ideas are intriguing to me, and I wish to subscribe to your newsletter.
Posted on Reply
#55
Fluffmeister
Ahh the 970, aka the GM204, still giving 14nm Polaris a hard time... what a GPU.
Posted on Reply
#56
Pewzor
Fluffmeister said:
Ahh the 970, aka the GM204, still giving 14nm Polaris a hard time... what a GPU.
Yep the 3.5gb/4gb is working out just fine.
Posted on Reply
#57
xorbe
Still waiting for my ISO9001 approved video card ...
Posted on Reply
#58
Fluffmeister
Pewzor said:
Yep the 3.5gb/4gb is working out just fine.
That's just it, it is! The card outsold everything for good reason too.
Posted on Reply
#59
semitope
Fluffmeister said:
That's just it, it is! The card outsold everything for good reason too.
yes, ignorance is widespread. enjoy that 380x level performance in dx12
Posted on Reply
#60
Fluffmeister
semitope said:
yes, ignorance is widespread. enjoy that 380x level performance in dx12
You're right, all those superb DX12 games, gee I feel I'm missing out already.
Posted on Reply
#61
ty_ger
$ReaPeR$ said:
couldn't find anything confirming what you said. furthermore, i find this whole story a bit shady. i mean, first they have the rating, then they don't... it's a load of bs if you ask me.
RMS is the industry standard. Every multimeter on the planet tries its hardest to indicate true RMS power. Only the crappiest multimeters measure RMS slightly wrong, and only the most advanced and expensive multimeters give you the option to disable RMS and view the peaks or lows, if someone ever needs that functionality for serious research purposes. But, like I said, every power meter and multimeter on the planet indicates RMS power by default, and that is the standard which every electrical standard on the planet observes.

It is only armchair YouTube viewers who think that some site's video about their method of hooking up a random and unscientific amount of filtering indicates anything meaningful. If anything, they should have taken Toms' graph of realtime data and applied RMS to it to arrive at pretty close to the correct values, instead of throwing some random coils into their experiment and skewing the data by an unknown and arbitrary amount, without paying any attention to the lowpass/highpass filtering implications and the inaccuracies introduced by an input signal of unknown frequency and randomness, and then trying to explain it in a way which sounded enlightened. Neither site got it quite right. But that is beside the point.

The point is that PCI-SIG uses the industry-standard RMS power when it develops and publishes its specifications. Thus they do indeed level off the peaks into a truer indication of average power consumption and heat production over extended use and load, and do indeed place less importance on any individual power spike. It is similar (in a way) to what PCPer tried to do, but in a much more scientific and reproducible way which does not change drastically depending on your power supply's output ripple frequency or your game's/application's load and computation distribution.
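A toy sketch of why RMS averaging discounts brief spikes (the current values below are made up for illustration, not measured RX 480 data):

```python
import math

def rms(samples):
    """Root-mean-square of a sequence of samples."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

# Hypothetical 12 V slot-rail current traces (amps), equal sample spacing:
spiky = [6.0] * 98 + [17.0] * 2   # ~72 W baseline with two 200 W-class spikes
steady = [7.1] * 100              # continuous ~85 W overdraw

print(round(12 * rms(spiky), 1))   # 76.9 -- brief spikes barely move the RMS figure
print(round(12 * rms(steady), 1))  # 85.2 -- sustained draw dominates
```

The short 200 W-class spikes nudge the RMS power to ~77 W, while a modest continuous overdraw lands at 85 W, which is why a spec built on RMS cares more about the latter.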
Posted on Reply
#62
$ReaPeR$
ty_ger said:
RMS is the industry standard. Every multimeter on the planet tries its hardest to indicate true RMS power. Only the crappiest multimeters measure RMS slightly wrong, and only the most advanced and expensive multimeters give you the option to disable RMS and view the peaks or lows, if someone ever needs that functionality for serious research purposes. But, like I said, every power meter and multimeter on the planet indicates RMS power by default, and that is the standard which every electrical standard on the planet observes.

It is only armchair YouTube viewers who think that some site's video about their method of hooking up a random and unscientific amount of filtering indicates anything meaningful. If anything, they should have taken Toms' graph of realtime data and applied RMS to it to arrive at pretty close to the correct values, instead of throwing some random coils into their experiment and skewing the data by an unknown and arbitrary amount, without paying any attention to the lowpass/highpass filtering implications and the inaccuracies introduced by an input signal of unknown frequency and randomness, and then trying to explain it in a way which sounded enlightened. Neither site got it quite right. But that is beside the point.

The point is that PCI-SIG uses the industry-standard RMS power when it develops and publishes its specifications. Thus they do indeed level off the peaks into a truer indication of average power consumption and heat production over extended use and load, and do indeed place less importance on any individual power spike. It is similar (in a way) to what PCPer tried to do, but in a much more scientific and reproducible way which does not change drastically depending on your power supply's output ripple frequency or your game's/application's load and computation distribution.
i got that. that wasn't my point; i just couldn't find the standard's specs on the net. my point was, why grant the compliance listing in the first place and then take it away? that looks shady and totally unprofessional to me. also, how can spikes to 200+ watts be OK for the spec, but 10 watts of continuous overdraw be harmful? i'm not an electrical engineer, but i find the lack of clear info on the subject not to my liking.
Posted on Reply
#63
RealNeil
ZeppMan217 said:
I think the funny part is how the reference RX480 got the certification in the first place, since it was not compliant with the requirements.
I won't be buying a card that could possibly overdraw power from my system.
This stuff costs far too much for me to ignore ~any~ possible risks in its power delivery system.
Posted on Reply
Add your own comment